Chemical physics
https://en.wikipedia.org/wiki/Chemical%20physics

Chemical physics is a branch of physics that studies chemical processes from a physical point of view. It focuses on understanding the physical properties and behavior of chemical systems, using principles from both physics and chemistry. This field investigates physicochemical phenomena using techniques from atomic and molecular physics and condensed matter physics.
The United States Department of Education defines chemical physics as "A program that focuses on the scientific study of structural phenomena combining the disciplines of physical chemistry and atomic/molecular physics. Includes instruction in heterogeneous structures, alignment and surface phenomena, quantum theory, mathematical physics, statistical and classical mechanics, chemical kinetics, and laser physics."
Distinction between Chemical Physics and Physical Chemistry
While at the interface of physics and chemistry, chemical physics is distinct from physical chemistry: it focuses on using physical theories, such as quantum mechanics, statistical mechanics, and molecular dynamics, to understand and explain chemical phenomena at the microscopic level. Physical chemistry, by contrast, deals with the physical properties and behavior of matter in chemical reactions, covering a broader range of topics such as thermodynamics, kinetics, and spectroscopy, and often links macroscopic and microscopic chemical behavior. The distinction between the two fields remains blurred, as both share common ground. Scientists often practice in both fields during their research, as there is significant overlap in the topics and techniques used. Journals like PCCP (Physical Chemistry Chemical Physics) cover research in both areas, highlighting this overlap.
History
The term "chemical physics" in its modern sense was first used by the German scientist A. Eucken, who published "A Course in Chemical Physics" in 1930. Prior to this, in 1927, the publication "Electronic Chemistry" by V. N. Kondrat'ev, N. N. Semenov, and Iu. B. Khariton hinted at the meaning of "chemical physics" through its title. The Institute of Chemical Physics of the Academy of Sciences of the USSR was established in 1931. In the United States, "The Journal of Chemical Physics" has been published since 1933.
In 1964, the General Electric Foundation established the Irving Langmuir Award in Chemical Physics to honor outstanding achievements in the field of chemical physics. Named after the Nobel Laureate Irving Langmuir, the award recognizes significant contributions to understanding chemical phenomena through physics principles, impacting areas such as surface chemistry and quantum mechanics.
What chemical physicists do
Chemical physicists investigate the structure and dynamics of ions, free radicals, polymers, clusters, and molecules. Their research includes studying the quantum mechanical aspects of chemical reactions, solvation processes, and the energy flow within and between molecules, and nanomaterials such as quantum dots. Experiments in chemical physics typically involve using spectroscopic methods to understand hydrogen bonding, electron transfer, the formation and dissolution of chemical bonds, chemical reactions, and the formation of nanoparticles.
The research objectives in the theoretical aspect of chemical physics are to understand how chemical structures and reactions work at the quantum mechanical level. This field also aims to clarify how ions and radicals behave and react in the gas phase and to develop precise approximations that simplify the computation of the physics of chemical phenomena.
Chemical physicists are looking for answers to such questions as:
Can we experimentally test quantum mechanical predictions of the vibrations and rotations of simple molecules? Or even those of complex molecules (such as proteins)?
Can we develop more accurate methods for calculating the electronic structure and properties of molecules?
Can we understand chemical reactions from first principles?
Why do quantum dots start blinking (in a pattern suggesting fractal kinetics) after absorbing photons?
How do chemical reactions really take place?
What is the step-by-step process that occurs when an isolated molecule becomes solvated? Or when a whole ensemble of molecules becomes solvated?
Can we use the properties of negative ions to determine molecular structures, understand the dynamics of chemical reactions, or explain photodissociation?
Why does a stream of soft x-rays knock enough electrons out of the atoms in a xenon cluster to cause the cluster to explode?
Journals
The Journal of Chemical Physics
Journal of Physical Chemistry Letters
Journal of Physical Chemistry A
Journal of Physical Chemistry B
Journal of Physical Chemistry C
Physical Chemistry Chemical Physics
Chemical Physics Letters
Chemical Physics
ChemPhysChem
Molecular Physics (journal)
Molecular modelling
https://en.wikipedia.org/wiki/Molecular%20modelling

Molecular modelling encompasses all methods, theoretical and computational, used to model or mimic the behaviour of molecules. The methods are used in the fields of computational chemistry, drug design, computational biology and materials science to study molecular systems ranging from small chemical systems to large biological molecules and material assemblies. The simplest calculations can be performed by hand, but inevitably computers are required to perform molecular modelling of any reasonably sized system. The common feature of molecular modelling methods is the atomistic level description of the molecular systems. This may include treating atoms as the smallest individual units (a molecular mechanics approach), or explicitly modelling the electrons of each atom quantum mechanically (a quantum chemistry approach).
Molecular mechanics
Molecular mechanics is one aspect of molecular modelling, as it involves the use of classical mechanics (Newtonian mechanics) to describe the physical basis behind the models. Molecular models typically describe atoms (nucleus and electrons collectively) as point charges with an associated mass. The interactions between neighbouring atoms are described by spring-like interactions (representing chemical bonds) and Van der Waals forces. The Lennard-Jones potential is commonly used to describe the latter. The electrostatic interactions are computed based on Coulomb's law. Atoms are assigned coordinates in Cartesian space or in internal coordinates, and can also be assigned velocities in dynamical simulations. The atomic velocities are related to the temperature of the system, a macroscopic quantity. The collective mathematical expression is termed a potential function and is related to the system internal energy (U), a thermodynamic quantity equal to the sum of potential and kinetic energies. Methods which minimize the potential energy are termed energy minimization methods (e.g., steepest descent and conjugate gradient), while methods that model the behaviour of the system with propagation of time are termed molecular dynamics.
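The non-bonded terms described above can be sketched in a few lines of code. The following is a minimal illustration of the Lennard-Jones and Coulomb contributions for a single atom pair; the parameter values are illustrative argon-like numbers, not taken from any published force field:

```python
def lj_energy(r, epsilon=0.238, sigma=3.4):
    """Lennard-Jones 12-6 potential for one atom pair.

    epsilon (kcal/mol) and sigma (angstrom) are illustrative,
    argon-like values, not from any specific force field.
    """
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

def coulomb_energy(r, qi, qj, k=332.06371):
    """Coulomb interaction; k converts e^2/angstrom to kcal/mol."""
    return k * qi * qj / r

# Total non-bonded energy for two partial charges 4 angstroms apart
r = 4.0
total = lj_energy(r) + coulomb_energy(r, 0.1, -0.1)
```

A full molecular mechanics potential would sum such pair terms over all non-bonded pairs and add the bonded (bond, angle, torsion) terms.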
This function, referred to as a potential function, computes the molecular potential energy as a sum of energy terms that describe the deviation of bond lengths, bond angles and torsion angles away from equilibrium values, plus terms for non-bonded pairs of atoms describing van der Waals and electrostatic interactions. The set of parameters consisting of equilibrium bond lengths, bond angles, partial charge values, force constants and van der Waals parameters are collectively termed a force field. Different implementations of molecular mechanics use different mathematical expressions and different parameters for the potential function. The common force fields in use today have been developed by using chemical theory, experimental reference data, and high-level quantum calculations. The method, termed energy minimization, is used to find positions of zero gradient for all atoms, in other words, a local energy minimum. Lower energy states are more stable and are commonly investigated because of their role in chemical and biological processes. A molecular dynamics simulation, on the other hand, computes the behaviour of a system as a function of time. It involves solving Newton's laws of motion, principally the second law, F = ma. Integration of Newton's laws of motion, using different integration algorithms, leads to atomic trajectories in space and time. The force on an atom is defined as the negative gradient of the potential energy function. The energy minimization method is useful to obtain a static picture for comparing between states of similar systems, while molecular dynamics provides information about the dynamic processes with the intrinsic inclusion of temperature effects.
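The integration step can be illustrated with the velocity Verlet algorithm, one of the common integrators. This is a minimal one-dimensional sketch using a harmonic "bond" as the potential; the force constant, mass, and time step are arbitrary illustrative values:

```python
def velocity_verlet(x, v, force, mass, dt, n_steps):
    """Propagate one particle with the velocity Verlet integrator.

    force(x) must return the force at position x, i.e. the
    negative gradient of the potential energy.
    """
    f = force(x)
    trajectory = [x]
    for _ in range(n_steps):
        x = x + v * dt + 0.5 * (f / mass) * dt ** 2   # position update
        f_new = force(x)                              # force at new position
        v = v + 0.5 * (f + f_new) / mass * dt         # velocity update
        f = f_new
        trajectory.append(x)
    return trajectory

# Harmonic potential U(x) = 0.5 * k * x**2, so F(x) = -k * x
k, m = 1.0, 1.0
traj = velocity_verlet(x=1.0, v=0.0, force=lambda x: -k * x,
                       mass=m, dt=0.01, n_steps=1000)
```

For this harmonic case the trajectory oscillates with bounded amplitude, reflecting the good long-time energy behaviour that makes Verlet-type integrators standard in molecular dynamics.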
Variables
Molecules can be modelled either in vacuum, or in the presence of a solvent such as water. Simulations of systems in vacuum are referred to as gas-phase simulations, while those that include the presence of solvent molecules are referred to as explicit solvent simulations. In another type of simulation, the effect of solvent is estimated using an empirical mathematical expression; these are termed implicit solvation simulations.
Coordinate representations
Most force fields are distance-dependent, making Cartesian coordinates the most convenient representation for evaluating them. Yet the comparatively rigid nature of the bonds between specific atoms, which in essence defines what is meant by the designation molecule, makes an internal coordinate system the most logical representation. In some fields the IC representation (bond length, angle between bonds, and twist angle of the bond as shown in the figure) is termed the Z-matrix or torsion angle representation. Unfortunately, continuous motions in Cartesian space often require discontinuous angular branches in internal coordinates, making it relatively hard to work with force fields in the internal coordinate representation; conversely, a simple displacement of an atom in Cartesian space may not be a straight line trajectory due to the prohibitions of the interconnected bonds. Thus, it is very common for computational optimizing programs to flip back and forth between representations during their iterations. This can dominate the calculation time of the potential itself and in long chain molecules introduce cumulative numerical inaccuracy. While all conversion algorithms produce mathematically identical results, they differ in speed and numerical accuracy. Currently, the fastest and most accurate torsion-to-Cartesian conversion is the Natural Extension Reference Frame (NERF) method.
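The core step of a torsion-to-Cartesian conversion is placing one new atom D from three already-placed reference atoms A, B, C plus a bond length, bond angle, and torsion angle. The sketch below shows that placement step with generic vector algebra; it follows the same local-frame idea as NERF-style algorithms, but is not the optimized published routine:

```python
import math

def place_atom(a, b, c, bond, angle, torsion):
    """Place atom D given positions of A, B, C (3-tuples), the C-D bond
    length, the B-C-D bond angle, and the A-B-C-D torsion (radians)."""
    def sub(u, v):
        return tuple(ui - vi for ui, vi in zip(u, v))
    def cross(u, v):
        return (u[1]*v[2] - u[2]*v[1],
                u[2]*v[0] - u[0]*v[2],
                u[0]*v[1] - u[1]*v[0])
    def norm(u):
        n = math.sqrt(sum(x * x for x in u))
        return tuple(x / n for x in u)

    bc = norm(sub(c, b))              # unit vector along B -> C
    n = norm(cross(sub(b, a), bc))    # normal of the A-B-C plane
    m = cross(n, bc)                  # completes the local frame (bc, m, n)
    # Coordinates of D in the local frame
    d_local = (-bond * math.cos(angle),
               bond * math.sin(angle) * math.cos(torsion),
               bond * math.sin(angle) * math.sin(torsion))
    # Rotate into the lab frame and translate to C
    return tuple(c[i] + d_local[0]*bc[i] + d_local[1]*m[i] + d_local[2]*n[i]
                 for i in range(3))
```

Building a whole chain this way, atom by atom along the backbone, is what makes conversion cost and cumulative numerical error a practical concern for long-chain molecules.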
Applications
Molecular modelling methods are used routinely to investigate the structure, dynamics, surface properties, and thermodynamics of inorganic, biological, and polymeric systems. A large number of molecular models of force field are today readily available in databases. The types of biological activity that have been investigated using molecular modelling include protein folding, enzyme catalysis, protein stability, conformational changes associated with biomolecular function, and molecular recognition of proteins, DNA, and membrane complexes.
Maternity den
https://en.wikipedia.org/wiki/Maternity%20den

In the animal kingdom, a maternity den is a lair where a mother gives birth and nurtures her young when they are in a vulnerable life stage. While dens are typically subterranean, they may also be snow caves or simply beneath rock ledges. Characteristically there is an entrance, and optionally an exit corridor, in addition to a principal chamber.
Examples
Polar bear
The polar bear (Ursus maritimus) creates a maternity den either in an earthen subterranean den or in a snow cave. On the Hudson Bay Plain in Manitoba, Canada, many of these subterranean dens are situated in Wapusk National Park, from which bears migrate to Hudson Bay when the ice pack forms. The maternity den is the bear's shelter for most of the winter.
Wild dogs
Pack members may guard the maternity den used by the alpha female; such is the case with the African wild dog, Lycaon pictus.
Brown hyena
The brown hyena (Parahyaena brunnea) makes use of maternity dens as a means of nurturing and protecting its cubs. These dens are located in coastal or inland regions, most of them being caverns with narrow entrances. The brown hyena also collects bones and stores them within or around the entrance of these dens.
Red fox
The red fox (Vulpes vulpes) also creates maternity dens. After mating, foxes make a maternity den for raising their offspring. Most often, the mother and father will find and enlarge an old woodchuck burrow. Sometimes, a hollow log, streambank, rock pile, cave, or dense shrub will serve as a den. The den is usually placed on raised ground so the red foxes can see all around. The main entrance will be approximately three feet wide, and the den will have one or two escape holes. The den is lined with grass and dry leaves.
Environmental policy
https://en.wikipedia.org/wiki/Environmental%20policy

Environmental policy is the commitment of an organization or government to the laws, regulations, and other policy mechanisms concerning environmental issues. These issues generally include air and water pollution, waste management, ecosystem management, maintenance of biodiversity, the management of natural resources, wildlife and endangered species.
For example, an eco-energy-oriented policy implemented at the global level could address the issue of climate change.
Policies concerning energy or regulation of toxic substances including pesticides and many types of industrial waste are part of the topic of environmental policy. This policy can be deliberately taken to influence human activities and thereby prevent undesirable effects on the biophysical environment and natural resources, as well as to make sure that changes in the environment do not have unacceptable effects on humans.
Definition
One way to describe environmental policy is that it comprises two major terms: environment and policy. Environment refers to physical ecosystems, but can also take into consideration the social dimension (quality of life, health) and an economic dimension (resource management, biodiversity). Policy can be defined as a "course of action or principle adopted or proposed by a government, party, business or individual". Thus, environmental policy tends to focus on problems arising from human impact on the environment, which matters to human society because of its (negative) impact on human values. Such human values are often labeled as good health or the 'clean and green' environment. In practice, policy analysts provide a wide variety of types of information to the public decision-making process.
The concept of environmental policy was first used in the 1960s to recognise that all environmental problems, like the environment itself, are interconnected. Addressing environmental problems effectively (such as air, water, and soil pollution) requires looking at their connections and underlying and common sources, and how policies addressing particular problems can have spill-over effects on other problems and policies. "The environment" thus became a focus for public policy and environmental policy the term to refer to the way environmental issues were addressed more or less comprehensively.
Environmental issues typically addressed by environmental policy include (but are not limited to) air and water pollution, waste management, ecosystem management, biodiversity protection, the protection of natural resources, wildlife and endangered species, and the management of these natural resources for future generations. Relatively recently, environmental policy has also attended to the communication of environmental issues. Environmental policies often address issues in one of three dimensions of the environment: ecological (for instance, policies aimed at protecting a particular species or natural areas), resource (for instance, related to energy, land, water), and the human environment (the environment modified or shaped by humans, for instance, urban planning, pollution). Environmental policy-making is often highly fragmented, although environmental policy analysts have long pointed out the need for the development of more comprehensive and integrated environmental policies.
In contrast to environmental policy, ecological policy addresses issues that focus on achieving benefits (both monetary and non-monetary) from the non-human ecological world. Broadly included in ecological policy is natural resource management (fisheries, forestry, wildlife, range, biodiversity, and at-risk species). This specialized area of policy possesses its own distinctive features.
History
As documented by environmental historians, human societies have always impacted their environment, often with adverse consequences for themselves and the rest of nature. Their failure to recognise and address these problems in a timely manner has been a contributing factor to their decline and collapse.
Concerns about pollution and its threat to humans as well as nature have provided a major stimulus for the development of environmental policies. In 1863, in the United Kingdom, health problems arising from the release of harmful chemicals led to the adoption of the Alkali Act and the creation of the Alkali Inspectorate. In 1956, the Clean Air Act 1956 was adopted in the wake of London's Great Smog of 1952, which is believed to have killed 12,000 people. Concerns about the effects of pollution, fuelled notably by the publication, in 1962, of Rachel Carson's Silent Spring, sparked the beginning of the modern environmental movement. It also marked the start of "the environment" becoming a concern of public policy, as pointed out by Caldwell in 1963. These growing concerns, as well as the growing publicity about environmental problems and accidents, forced governments to introduce or strengthen laws and policies aimed at enhancing environmental protection.
Earth Day founder Gaylord Nelson, then a U.S. Senator from Wisconsin, after witnessing the ravages of the 1969 massive oil spill in Santa Barbara, California, became famous for his environmental work. Administrator Ruckelshaus was confirmed by the Senate on December 2, 1970, which is the traditional date used as the birth of the United States Environmental Protection Agency (EPA). Five months earlier, in July 1970, President Nixon had signed Reorganization Plan No. 3 calling for the establishment of EPA. At the time, environmental policy was a bipartisan issue and the efforts of the United States of America made it an early environmental leader. During this period, legislation was passed to regulate pollutants that go into the air, water tables, and solid waste disposal. President Nixon signed the Clean Air Act in 1970.
In many countries, governments created environment ministries, departments or agencies, and appointed ministers of or for the environment. The world's first minister of the environment was the British Politician Peter Walker from the Conservative Party in 1970.
In the European Union, the very first Environmental Action Programme was adopted by national government representatives in July 1973 during the first meeting of the Council of Environmental Ministers. Since then an increasingly dense network of legislation has developed, which now extends to all areas of environmental protection including air pollution control, water protection and waste policy but also nature conservation and the control of chemicals, biotechnology and other industrial risks. EU environmental policy has thus become a core area of European politics.
Despite commonalities between countries in the development of environmental policies and institutions, they have also adopted different approaches in this area. In the 1970s, the field of Comparative Environmental Politics and Policy emerged to compare the environmental policies and institutions of countries aimed at explaining differences and similarities.
Although particular environmental problems like soil erosion, growing resource scarcity, air and water pollution increasingly became the subject of concern and government regulation in the 19th century, these were seen and addressed as separate issues. The shortcomings of this reactive and fragmented approach received growing recognition during the 1960s and early 1970s, the first wave of environmentalism. This was reflected in the creation, in many countries, of environmental agencies, policies and legislation with the aim of taking a more comprehensive and integrated approach to environmental issues. In 1972, the need for this was also recognised at the international level at the United Nations Conference on the Human Environment, which led to the creation of the United Nations Environment Programme.
Rationale
Growing environmental awareness and concern provided the main rationale for the adoption of environmental policies and institutions by governments. Environmental protection became a focus of public policy.
This rationale for environmental policy is broader than that provided by some interpretations based on economic theories. The rationale for governmental involvement in the environment is often attributed to market failure in the form of forces beyond the control of one person, including the free rider problem and the tragedy of the commons. An example of an externality is when a factory produces waste pollution which may be discharged into a river, ultimately contaminating water. The cost of such action is paid by society at large when they must clean the water before drinking it and is external to the costs of the polluter. The free rider problem occurs when the private marginal cost of taking action to protect the environment is greater than the private marginal benefit, but the social marginal cost is less than the social marginal benefit. The tragedy of the commons is the condition that, because no one person owns the commons, each individual has an incentive to utilize common resources as much as possible. Without governmental involvement, the commons is overused. Examples of tragedies of the commons are overfishing and overgrazing.
The "market failure" rationale for environmental policy has been criticised for its implicit assumptions about the drivers of human behaviour, which are considered to be rooted in the idea that societies are nothing but collections of self-interested "utility-maximising" individuals. As Elinor Ostrom has demonstrated, this is not supported by evidence on how societies actually make resource decisions. The market-failure theory also assumes that "markets" have, or should have precedence over governments in collective decision-making, which is an ideological position that was challenged by Karl Polanyi whose historical analysis shows how the idea of a self-regulating market was politically created. He added that "Such an institution could not exist for any length of time without annihilating the human and natural substance of society."
By contrast, ecological economists argue that economic policies should be developed within a theoretical framework that recognises the biophysical reality. The economic system is a sub-system of the biophysical environmental system on which humans and other species depend for their well-being and survival. The need for grounding environmental policy on ecological principles has also been recognised by many environmental policy analysts, sometimes under the label of ecological rationality and/or environmental integration. From this perspective, political, economic, and other systems, as well as policies, need to be "greened" to make them ecologically rational.
Policy approaches
Instruments
In practice, governments have adopted a wide range of approaches to the development and implementation of environmental policies. To a large extent, differences in approaches have been influenced and shaped by the particular political, economic and social context of a country or polity (like the European Union or the United Nations). The differences in approaches, the reasons behind them, and their results have been the subject of research in the fields of comparative environmental politics and policy. But the study of problems and issues associated with environmental policy development has also been influenced by general public policy theories and analyses. Contributions on this front have been influenced by different academic disciplines, notably economics, public policy, and environmental studies, but also by political-ideological views, politics, and economic interests, among others through "think tanks". Thus, the design of environmental policy and the choice of policy instruments is always political and not just a matter determined by technical and efficiency considerations advanced by scientists, economists or other experts. As Majone has argued: "Policy instruments are seldom ideologically neutral" and "cannot be neatly separated from goals." The choice of policy instruments always occurs in a political context. Differences in ideological preferences of governments and political actors, and in national policy styles, have been argued to strongly influence a government's approach to policy design, including the choice of instruments.
Although many different policy instruments can be identified, and many ways of classifying them have been put forward, very broadly, a minimalist approach distinguishes three kinds or categories of policy instruments: regulation, economic instruments, and normative or "hortatory" approaches. These have also been referred to as "sticks, carrots and sermons". Vedung, based on Majone's classification of power, argues that the main difference underlying these categories is the degree of coercion (authoritative force) involved.
Regulation
Regulation has been a traditional and predominant approach to policymaking in many policy areas and countries. It relies foremost on adopting rules (often backed up by legislation), to prohibit, impose or circumscribe human behaviour and practices. In the environmental policy area, this includes, for instance, the imposition of limits or standards for air and water pollution, car emissions, the regulation or banning of the use of hazardous substances, the phasing out of ozone-depleting substances, waste disposal, and laws to protect endangered species and natural areas.
Regulation is often derogatorily referred to by detractors as a top-down, "command and control" approach as it leaves target groups with little if any control over the way(s) environmental activities or goals must be pursued. Since the 1980s, with the rise of neoliberalism in many countries and the associated redefinition of the role of the state (centred on the notion of governance rather than government), regulation has been touted as ineffective and inefficient, sparking a move toward deregulation and the adoption by many governments of "new" policy instruments, notably market instruments and voluntary agreements, also in the realm of environmental policy.
Economic instruments
Economic instruments involve the imposition or use of economic incentives, including (environmental) taxes, tax exemptions, fees, subsidies, and the creation of markets and rights for trading in substances, pollutants, resources, or activities, such as for SO2, CO2 (carbon or greenhouse gas emissions), water, and tradeable fisheries quota. They are based on the assumption that behaviour and practices are foremost driven by rationality, self-interest and economic considerations and that these motivations can be harnessed for environmental purposes. Decision-making studies cast doubt on these premises. Often, decisions are reached based on irrational influences, unconscious biases, illogical assumptions, and the desire to avoid or create ambiguity and uncertainty.
Market-based policy instruments also have their supporters and detractors. Among the detractors, for example, some environmentalists contend that a more radical, overarching approach is needed than a set of specific initiatives, to deal with climate change. For example, energy efficiency measures may actually increase energy consumption in the absence of a cap on fossil fuel use, as people might drive more fuel-efficient cars. To combat this result, Aubrey Meyer calls for a 'framework-based market' of Contraction and Convergence. The Cap and Share and the Sky Trust are proposals based on the idea. In the case of corporations, it is assumed that such tools make it financially rewarding to engage in efficient environmental management that also improves business and organizational performance. They also encourage businesses to become more transparent about their environmental performance by publishing data and reporting.
For economic instruments to function, some form(s) of regulation are needed that involve policy design, for instance, related to the choice and level of taxation, who pays, who qualifies for rights or permits, and the rules on which trading, and a "market" depend for their functioning. For example, the implementation of greener public purchasing programs relies on a combination of regulation and economic incentives.
Normative ("hortatory") instruments
Normative ("hortatory") instruments ("sermons") rely on persuasion and information. They include, among others, campaigns aimed at raising public awareness and enhancing knowledge of environmental problems, calls upon people to change their behaviour and practices (like taking up recycling, reducing waste, the use of water and energy, and using public transport), and voluntary agreements between governments and businesses. They share the aim of encouraging people to do "the right thing", to change their behaviour and practices, and to accept individual or group responsibility for addressing issues. Agreements between the government and private firms and commitments made by firms independent of government requirements are examples of voluntary environmental measures.
Environmental Impact Assessment is a tool that relies foremost on the gathering of knowledge and information about (potential) environmental effects. It originated in the United States but has been adopted in many countries to analyse and assess the potential impacts of projects. Usually undertaken by experts, it is based on the assumption that an objective assessment of effects is possible, and that the knowledge generated will persuade decision-makers to make changes to proposals to mitigate or prevent adverse environmental effects. How EIA rules and processes are designed and implemented depends on regulation and is influenced by the political context. Eccleston and March argue that although policymakers normally have access to reasonably accurate environmental information, political and economic factors are important and often lead to policy decisions that rank environmental priorities of secondary importance.[Reference needed]
The effectiveness of hortatory instruments has also been under debate. Policies relying foremost on such instruments may amount to little more than symbolic policies, implying that governments have little or no intention to effectively address an issue while creating the impression of taking it seriously. Such policies rely more on rhetoric than action. In the environmental realm, sustainable development policies or strategies are often used for this purpose if these are not translated into clear and specific objectives, timeframes and measures. Yet, hortatory policy instruments are often preferred by governments and other actors as they are seen as a way of recognising and sharing collective responsibility, possibly avoiding the need for regulation and/or economic instruments. They are thus often used as a first step towards addressing environmental problems. However, these tools are often combined with some form of legislation and regulation, for instance, in the case of labelling of consumer products (product information), waste disposal and recycling.
Comparison of instruments
There has been much debate about the relative merits of the various kinds of policy instruments. Market instruments are often held up and used as a more efficient and cost-effective alternative to regulation. Yet, many analysts have pointed out that regulation, economic incentives, "market" instruments, and environmental taxation and subsidies can achieve the same results. For instance, as Kemp and Pontoglio argue, policy instruments cannot be usefully ranked with regard to their effects on eco-innovation: "the often expressed view that market-based approaches such as pollution taxes and emission trading systems are better for promoting eco-innovation is not brought out by the case study literature or by survey analysis", and there is actually more evidence that regulations stimulate radical innovation more than market-based instruments do. It has also been argued that if the government can anticipate new technology or is able to react to it optimally, regulatory policies that administer prices (taxes) and policies that set quantities (issuing tradable permits) are (almost) equivalent. More generally, the performance of economic instruments in dealing with environmental problems has been a mixed bag, referred to by Hahn as "not very impressive", and has led Tietenberg to conclude that they are "no panacea".
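The claimed (near-)equivalence of price-based and quantity-based instruments under certainty can be sketched with a toy model. This is an illustration only, not drawn from the studies cited above: the single-firm setting, the linear marginal abatement cost function, and all numbers are hypothetical.

```python
# Toy sketch: under full certainty, a pollution tax and a tradable-permit cap
# of equal stringency induce the same abatement at the same price signal.
# Assumes one firm with linear marginal abatement cost MAC(q) = slope * q
# (both parameters are invented for illustration).

def abatement_under_tax(tax, slope):
    """The firm abates until its marginal abatement cost equals the tax."""
    return tax / slope

def permit_price_under_cap(cap_abatement, slope):
    """The permit price settles at the marginal abatement cost of the cap."""
    return slope * cap_abatement

slope = 2.0   # hypothetical MAC slope
tax = 10.0    # hypothetical tax rate per unit of emissions

q_tax = abatement_under_tax(tax, slope)        # abatement chosen under the tax
price = permit_price_under_cap(q_tax, slope)   # price under an equally strict cap

assert abs(price - tax) < 1e-9  # same price signal, hence same abatement
```

Under uncertainty about abatement costs, this equivalence breaks down, which is one reason the debate summarised above persists.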
Different instruments are sometimes combined in a policy mix to address a particular environmental problem. Since environmental issues have many aspects, several policy instruments may be required to adequately address each one. Ideally, government policies are carefully formulated so that the individual measures do not undermine one another or create a rigid and cost-ineffective framework. Overlapping policies result in unnecessary administrative costs, increasing the cost of implementation. To help governments realize their policy goals, the OECD Environment Directorate, for example, collects data on the efficiency and consequences of environmental policies implemented by national governments. Its website provides a database detailing countries' experiences with their environmental policies. The United Nations Economic Commission for Europe (UNECE) and the OECD, through their Environmental Performance Reviews, evaluate the progress made by their member countries in improving their environmental policies.
However, although regulation, taxation and market instruments can be equally (in-) effective, they may differ significantly in the allocation and distribution of (potential) costs and benefits, with the allocation of tradeable ("property") rights potentially generating significant profits for those who receive such rights. Market instruments are, therefore, generally much preferred by affected resource users and industries, which explains their popularity since the rise of neoliberalism. This has led analysts to point out that there are many other important aspects to the choice of policy instruments than their efficiency and cost-effectiveness, such as distributional, ethical and political aspects, and their appropriateness for addressing environmental problems.
Policy analysis
How environmental policies are made, how effective they are, and how they can or should be improved have become the subject of considerable research and debate. In the academic realm, these questions are commonly addressed under the label of environmental policy analysis.
Environmental policy analysis is a broad field comprising different approaches to explaining and developing environmental policy. The first type has been referred to in the policy literature as the analysis of policy and the second as the analysis for policy. Many approaches are derived from the broader field of public policy analysis, which emerged as a scientific enterprise after WWII. While policy analysis as a decision-making tool continued to be applied in the business sector, the study of public policy, defined broadly as "what governments do, why they do it, and what difference it makes", became an important strand in political science. This variety, which has been classified into analycentric, policy process, and meta-policy categories, has also manifested itself in the area of environmental policy analysis, which has developed since the 1960s.
The analycentric or rational approach
The analycentric approach to environmental policy analysis, which focuses on particular issues and uses mostly quantitative methods to identify "optimal" (cost-effective or efficient) solutions, has been the prevalent way to address environmental problems, both by governments and businesses. It is also often depicted as the rational or scientific approach to and for policy development. While scientific analyses and (preferably) quantitative data provide knowledge of the more immediate sources or causes of environmental problems, such as forms of pollution and climate change, policy prescriptions are based on setting goals, objectives and targets and the identification of the most cost-effective and efficient means by assessing alternative options. Technological innovation, more efficient management, and economic instruments such as cost-benefit analysis, environmental taxes, and tradeable permit schemes (market creation) have been among the preferred means in this approach.
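As a hedged illustration of the cost-benefit logic this approach relies on, the sketch below ranks two invented policy options by the net present value (NPV) of their yearly net benefits. The option names, all figures, and the 5% discount rate are assumptions made for the example, not data from the source.

```python
# Minimal sketch of cost-benefit comparison via net present value.
# All cashflows and option names below are hypothetical.

def npv(cashflows, discount_rate):
    """Discount a list of yearly net benefits (year 0 first) to present value."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows))

options = {
    # year-by-year net benefits (benefits minus costs), arbitrary units
    "regulation":    [-100, 30, 30, 30, 30],
    "pollution_tax": [-40, 20, 20, 20, 20],
}

rate = 0.05
ranked = sorted(options, key=lambda name: npv(options[name], rate), reverse=True)
best = ranked[0]  # the option with the highest NPV at this discount rate
```

Note that the ranking can flip with the discount rate chosen, which is one reason cost-benefit analysis is itself contested, as the critiques in the next paragraph suggest.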
The analycentric or rational approach has been critiqued on various grounds. First, it assumes that there is adequate knowledge and agreement on the causes of problems and the goals to be achieved. Second, the approach (for policy) ignores the way policies are developed in (political) practice. Third, the preferred means are often based on questionable assumptions, notably about human behaviour. Many of the limitations of the rational approach were already acknowledged by an early proponent, Herbert Simon, who argued that "bounded rationality" provided a more realistic basis for decision-making. This view has also been expressed by advocates of more comprehensive and integrated environmental policy development, who argued that looking at problems in isolation (on a one-by-one basis) ignores the linkages between environmental problems and their causes. In the late 1980s, "green planning" and the adoption of sustainable development strategies, in particular, received support in academic circles and among many governments as rational, goal-based policy approaches aimed at overcoming the limitations of the fragmented analycentric approach.
The policy process approach
The policy process approach emphasises the role and importance of politics and power in policy development. It aims foremost at better understanding how policies are made and put into practice. It commonly involves identifying a variable number of steps, including problem definition and agenda setting, the formulation and selection of policy options, implementation, and evaluation. These are conceived as being parts of a policy cycle, as existing policies are reviewed and changed for political reasons and/or because they are deemed to be unsatisfactory. The various stages have become the focus of much research, generating insights into why and how policies have been developed and implemented, with variable outcomes and effectiveness. These studies show that policy development is more about the role of and interplay between conflicting interests than the result of rational analysis and finding and adopting (optimal) solutions to problems. One of the main schools of thought on this front is that of incrementalism, which argues that policy change often occurs in small steps that accommodate conflicting interests.
Policy process analysis has also been applied to environmental policy in its different stages. It has been used, for instance, to clarify why environmental issues have had difficulty reaching or staying on the public and political agendas. More recently, research has revealed the role and power of businesses, notably the oil industry, in downplaying the risks associated with climate change or "climate change denial." "Think tanks" and the media have been used to sow scepticism about the science behind environmental and other problems, to redefine issues, and to avert policies that threaten the interests of businesses.
Policy process analyses also include studies of the variety of actors and their influence on government decision-making. Although pluralism, the idea that no one group dominates all decision-making in modern societies, has long been the prevailing school of thought in political science, it has been contested by elite theories that assign predominant power to elites in different areas or sectors of decision-making. To what extent environmental groups have had influence on government decisions and policies continues to be a subject of debate. Some argue that non-governmental organizations (NGOs) have the greatest influence on environmental policies. These days, many countries are facing the huge environmental, social, and economic impacts of rapid population growth, development, and natural resource constraints. As NGOs try to help countries tackle these issues more successfully, a lack of understanding about their role in civil society, and the public perception that the government alone is responsible for the well-being of its citizens and residents, make NGOs' tasks more difficult to achieve. NGOs such as Greenpeace and the World Wildlife Fund can help tackle issues by conducting research to facilitate policy development, building institutional capacity, and facilitating independent dialogue with civil society to help people live more sustainable lifestyles. The need for a legal framework that recognizes NGOs and enables them to access more diverse funding sources, for high-level support and endorsement from local figureheads, and for engaging NGOs in policy development and implementation grows as environmental issues continue to increase.
It has been argued that notwithstanding Reagan's efforts to undo environmental regulation in the US, the effects have been limited as environmental interests were already strongly entrenched. Under President Trump, again, many environmental regulations have been dismantled or were scheduled to be rolled back. Other research suggests that many environmental policies adopted by governments are designed to be weak and largely ineffective as business interests use their power to influence or even shape these policies, also at the international level.
International organizations have also had a significant impact on environmental policies by creating programmes such as the United Nations Environment Programme (UNEP) and hosting conferences such as the United Nations Earth Summit to address environmental issues. UNEP is the leading global environmental authority tasked with providing policy guidance for environmental programmes. UNEP monitors environmental aspects such as waste management, energy use, greenhouse gas inventories, and water use to promote environmental sustainability and address environmental issues.
The role of science and scientists in environmental policy development has been another focus of research. Scientists have been instrumental in discovering many environmental problems, from the damaging effects of the use of pesticides to the depletion of the ozone layer, the greenhouse effect, and all kinds of pollution. In this respect, they have often provided legitimacy and support to the raising of concerns by the environmental movement, although they have often been reluctant to get involved in environmental activism for fear of compromising their scientific credibility. Nonetheless, scientists have played a significant role in pushing environmental issues onto the international agenda, together with international ENGOs, in what have been referred to as "epistemic communities". However, to what extent science can be "value-free" has been a subject of debate. Science and scientists always operate in a political-economic context that circumscribes their role, their research and its effects. This raises the question of scientific integrity, especially when scientists are paid to serve commercial and political interests.
The meta-policy approach
Meta-policy research focuses on the ways policy development is influenced or shaped by contextual factors, including political institutions and systems, socio-cultural patterns, economic systems, knowledge frameworks, discourses, and the changes therein. The latter may involve deliberate changes to the formal and non-formal institutions through which policy analysis, development, decision-making, and implementation occur, such as the introduction of rules for cost-benefit analysis, risk analysis, consultation and accountability requirements, and organisational change.
How environmental problems are interpreted and defined directly affects the development of environmental policies at all stages of the policy cycle, from problem recognition and the formulation of policy options to decision-making, implementation and policy evaluation. Hence, much (meta-policy) research has been undertaken on what influences or shapes these views and interpretations. For instance, there is a large body of research that looks at whether societies have moved or are moving towards "post-materialist" values, or to a New Environmental Paradigm. More broadly, the link between dominant worldviews and the way the environment is treated has been a focus of much debate. The rise of and growing support for the environmental movement is often seen as a driver towards "greener" societies. If such socio-cultural trends hold, they can be expected to lead governments to adopt stronger environmental policies.
Other meta-policy research focuses on the different "environmental discourses" and how they compete for dominance in societies and worldwide. The power to influence or shape people's view of the world has been referred to as "cognitive power". The role of intellectuals, opinion leaders, and the media in shaping and advancing the dominant views and ideologies in societies has been an important focus of Marxist and critical theory that has also influenced the analysis of environmental policy formation. Ownership and control of the media play an important role in the formation of public opinion on environmental issues.
Other meta-policy research relevant to the development of environmental policy focuses on institutional and systemic factors. For instance, the role of environmental institutions, and their capacity and power within the broader systems of government, has been found to be an important factor in advancing or constraining environmental policy. More broadly, whether or not capitalism is compatible with long-term environmental protection has been a subject of debate. As capitalism became the globally dominant system after the collapse of the Soviet Union and the introduction of capitalism in China, this question has become even more important to the future development of environmental policy at the national and international levels. As many analysts of global environmental politics have pointed out, the institutions for developing effective environmental policy at that level are weak and rather ineffective, as demonstrated by accounts of continuing environmental deterioration.
Evaluation
Ultimately, the environmental effectiveness of policies is measured by the extent to which they reduce or resolve environmental problems (ecological destruction and degradation, resource degradation and depletion, and adverse effects on humans by environmental modification, including by urban development and pollution). Whether environmental policies have addressed environmental problems more or less effectively remains a topic of debate. On the one hand, some take a very positive and optimistic view, arguing that, on many fronts, the environmental situation, especially as it affects humans, has improved. On the other hand, many scientists and scientific reports paint a bleak picture of where the world is going, based on deteriorating environmental indicators linked to global heating, declining biodiversity, pollution trends (including of new forms of pollution such as the spread of plastic nanoparticles), and ongoing resource degradation and decline (such as water and agricultural land).
Difficulties
Differences in approaches to environmental policy development and design, including the selection of policy instruments, are linked to different historical, political-economic and socio-cultural contexts. Together with the inevitable role and influence of different cognitive and ideological frameworks in the analysis and design of policies, these differences make evaluating environmental policies a complex and controversial matter.
As many policy analysts have pointed out, judging the merits of policies goes beyond an assessment of the efficiency and cost-effectiveness of the policy instruments used. In the realm of public policy, policy evaluation is seen as a much more encompassing and complex topic. Apart from efficiency and cost-effectiveness, many other important aspects of policy, and criteria for evaluating them, have been identified and discussed, including their knowledge (science) basis, their goals and objectives, ethical issues, distributional effects, and process and legitimacy. Although efforts have been made to put evaluation on its own (trans-)disciplinary footing as a systematic and independent stage in the policy process, either before the adoption of policies (ex-ante evaluation) or after their implementation (ex-post evaluation), this remains fraught with problems. In practice, systematic evaluation remains a largely neglected aspect or stage of policymaking, in large part because of the political nature and sensitivity of evaluating governments' policies.
The difficulties of policy evaluation also apply to environmental policies. There, too, policy evaluation is often approached in simple terms based on the extent to which the stated goals of a policy have been achieved or not ("success or failure"). However, as many environmental policy analysts have pointed out, many other aspects of environmental policy are important. These include the goals and objectives of the policies (which may be deemed too vague, inadequate, or poorly or wrongly targeted), their distributional effects (whether they contribute to or reduce environmental and social injustice), the kind of instruments used (for instance, their ethical and political dimensions), the processes by which policies have been developed (public participation and deliberation), and the extent to which they are institutionally supported.
Policy integration
Policy integration has been discussed since the 1980s in many different forms and terminology. Often used terms include policy mainstreaming, policy coordination, holistic governance and – in the environmental field – environmental policy integration. The core idea of policy integration is that policies in one domain should take into account potential side-effects in other domains, so that policies coming from different domains or organizations do not negate each other. Policy integration can take place on multiple "objects", including policy inputs, outputs, procedures, instruments, and goals.
Many environmental thinkers and policy analysts have pointed out that addressing environmental problems effectively requires an integrated approach. As the environment is an integrated whole or system, environmental policies need to take account of the interactions within that system and the effects of human actions and interventions not just on a problem in isolation, but also their (potential) effects on other problems. More often than not, fragmented policies and "solutions", for instance, to combat pollution, lead to the displacement of environmental problems or the generation of new ones. The interconnectedness of the environmental challenge, it has been said, requires an approach that is "ecologically rational" and environmentally effective.
Environmental integration, in broad terms, is "the integration of environmental considerations into all areas of human thinking, behaviour and practices that (potentially) affect the environment." This involves, among others, the development and adoption of an overarching view of the environment, an overarching policy to guide the "greening" of policies, and an institutional framework that gives "teeth" to environmental integration.
In academic and government circles (notably the EU), much of the focus has been on environmental policy integration (EPI), the process of integrating environmental objectives into non-environmental policy areas, such as energy, agriculture and transport, rather than leaving them to be pursued solely through "purely" environmental policies. This is often particularly challenging because of the need to reconcile global objectives and international rules with domestic needs and laws. EPI is widely recognised as one of the key elements of sustainable development, and it was adopted as a formal requirement by the EU. More recently, the notion of "climate policy integration", also denoted as "mainstreaming", has been applied to indicate the integration of climate considerations (both mitigation and adaptation) into the broader (often economically focused) activities of government.
Although, in the late 1980s and early 1990s, many governments began to adopt a more comprehensive approach to environmental issues, notably in the form of National Sustainable Development Strategies and "Green Planning", these efforts were largely abandoned during the 1990s due to the rise to prominence of neoliberal thinking, policies and reforms. This development led to the return of the fragmented and reactive approach to environmental problems with an emphasis on climate change and the use of "market-based" instruments.
Comparative environment policy and politics
The field of comparative environmental policy and politics aims to explain differences in performance related to, among others, differences in political systems, institutions, policy styles and cultures. However, the environmental performance of governments is still commonly assessed on the basis of achievements on a range of environmental problems and policy outputs, as measured by separate indicators like CO2 emissions, different forms of air pollution, water quality indicators, and biological diversity (individual species). These assessments are often used as a basis for ranking the environmental performance of countries, with some characterised as leaders and others as laggards. However, such rankings have been treated with scepticism, not only on methodological grounds but especially because they mean little in terms of the extent to which governments take environmental integration seriously. While it has been noted that, at different stages, some countries have been leaders in some areas of environmental integration, these efforts have not been sustained over time.
Possible improvements
Reflecting the diversity of approaches to environmental policy development, influenced by contextual factors, policy perspectives, and political-ideological views, among others, there are also different views on how environmental policy could or should be improved. The three most common standpoints have been referred to as incrementalism ("tinkering"), democratisation, and systemic change.
Incrementalism has been deemed to be the most common (standard) way governments change their policies with the stated aim of improving them. Charles Lindblom, who propagated it based on his view of American political reality, argued that changing policies in small steps is not only the most common way policies are developed, but also the best way, as it avoids making the big errors that could result from a "rational-comprehensive" approach. Also, over time, a series of small changes may add up and bring about significant change. Although incrementalism has been critiqued for its underlying assumptions and conservative implications ("tinkering"), and also for its failure to come to grips with environmental problems, it is a very recognisable approach to policy "improvement" in many countries.
As incrementalism does not question the political-economic status quo, its suggestions for policy improvement are foremost of a managerial or technological kind. Tinkering with policy and management tools, and technological innovation, are seen as the main and most desirable ("win-win") ways to address environmental (and other) problems. This "technocentric" approach, which is seen as politically neutral, has been a preferred and dominant approach to "solving" environmental problems from the beginning of the environmental era, advocated by governments, businesses, and many environmentalists. The managerial approach also involves training "environmental practitioners" and policy analysts. Given the growing need for trained environmental practitioners, graduate schools throughout the world offer specialized professional degrees in environmental policy studies. While there is no standard curriculum, students typically take classes in policy analysis, environmental science, environmental law and politics, ecology, energy, and natural resource management. Graduates of these programs are employed by governments, international organizations, the private sector, think tanks, advocacy organizations, and universities.
Much of the research and innovation sponsored by governments, businesses and international organisations under the heading of "transition management" is aimed at the gradual (incremental) development of new "transformative" technologies, for instance, in areas like energy, transport and agriculture. An example is European environmental research and innovation policy, which aims at defining and implementing a transformative agenda to green the economy and society as a whole so as to achieve "truly" sustainable development. The EU strategies, actions and programmes promote more and better research and innovation for building a resource-efficient, climate-resilient society and a thriving economy, which are meant to be in sync with the natural environment. Research and innovation in Europe are financially supported by the programme Horizon 2020, which is also open to participation worldwide. Yet, the "transition management" approach to sustainability has been critiqued for its apolitical, technocratic and elitist nature. Also, Bucchi argues that the traditional technocentric approach no longer suffices as science has increasingly been commercialised and politicised and has lost much of the image of neutrality that it enjoyed with the public at large.
In line with the policy process perspective, many environmental advocates and analysts support improving the opportunities for public participation and input in the policy process, as well as increasing transparency. The policy design literature aims to pull together insights gained from studies of the various stages of the policy cycle to design more effective policies: to better consider the tools, rules and assumptions on which they are based, the groups at which they are targeted, contextual factors, as well as the nature (complexity) of the problem. Enhancing public input and participation is argued to have the potential to improve all stages of the policy cycle, including problem definition, decision-making, policy implementation, and evaluation. UNFCCC research shows that climate-related projects and policies that involve women are more effective. Policies, projects and investments without meaningful participation by women are less effective and often increase existing gender inequalities. Climate solutions founded by women that cross political or ethnic boundaries have been particularly important in regions where entire ecosystems are under threat, e.g. small island states, the Arctic and the Amazon, and in areas where people's livelihoods depend on natural resources, e.g. fishing, farming and forestry. However, the degree and kind of opportunities provided for public input and deliberation are seen as a key factor, both for improving the effectiveness of policies and for enhancing their support basis and legitimacy. Enhancing democracy, for instance, by adopting forms of "discursive designs" and other forms of "reflexive" deliberative democracy, aims to create a level playing field on which citizens' representatives have a more equal chance to partake in shaping policy. Relatively recently, "citizens' assemblies" have been used in a range of countries to address controversial topics, including climate change policy.
However, as these are temporary and advisory bodies, governments are not bound by their recommendations.
Over time, many governments have introduced laws to provide public access to government-held information, for instance, through the adoption of Freedom of Information legislation. Although a growing number of governments have adopted such legislation, a report by Privacy International notes that in many countries much work remains to be done on the implementation front and on the creation of a culture of openness, "leaving access largely unfulfilled."
A third approach to improving environmental policy is based on the view that meaningful progress on resolving environmental problems requires fundamental or systemic change, in particular of the prevailing socio-cultural, political and economic systems. Three categories of factors are commonly identified: cognitive factors (the way(s) environmental problems have been interpreted, linked to dominant belief and value systems); political factors (the nature of the prevailing political systems); and economic factors (the nature of the prevailing economic systems). These three types of factors are not mutually exclusive, and analysts often combine them to provide more comprehensive explanations.
That the way environmental problems are predominantly interpreted is a fundamental obstacle to addressing the environmental challenge effectively has been pointed out since the earliest stages of the rise of environmental awareness and thinking. Many early environmental thinkers argued that environmental problems are interrelated, finding their roots in the interconnectedness of the environment itself and the failure of human societies to recognise that reality and to heed it in their behaviour and practices. These thinkers pointed out the need to take a "holistic", ecosystems or integrated approach to the management of the environment and the use of resources. Often, it is argued that such an approach was common to indigenous societies, but that it was pushed aside and lost with the rise of "modernity" and rational-analytic (scientific) thinking. In modern societies, nature has come to be seen, analysed and manipulated as a machine in the service of human ends.
But as the way the environmental challenge is interpreted is closely linked to the dominant socio-cultural (value) system, the latter is also said to need fundamental change. There is a large body of literature on the role and importance of the dominant values in societies and the (possible) changes therein, among others linked to economic development, urbanisation and globalisation. On the one hand, analysts have identified the rise of individualism, materialism and consumerism, and the decline of community values, in modern societies and cultures. On the other hand, some analysts, notably building on Ronald Inglehart's work, argue that with rising standards of living comes a shift in societies, facilitated by generational change, from material to "post-material" values, including self-actualisation, belonging, and aesthetics. However, it is debatable to what extent this shift represents a move towards environmental values becoming dominant, and whether the level of support for the environment depends on a high standard of living. Others, notably inspired by Riley Dunlap's research, more directly explore whether the presently dominant paradigm is being replaced by what is referred to as the "New Environmental Paradigm". As yet, however, the findings of this research are inconclusive, although there is evidence that environmental concern and support have grown globally.
Whether and how the dominant value systems and views on the environment can be purposefully changed by concerted social action aimed at assigning greater priority remains a matter of debate and uncertainty. On the one hand, the environmental movement has been touted as a "vanguard" in shifting the dominant paradigm. On the other hand, the effectiveness of the environmental movement in bringing about fundamental value change can and has been drawn into doubt. One reason is that the environmental movement itself is very diverse in views on the kind of value change(s) required, ranging from technocentric to deep ecological stances. To what extent green parties have been effective in changing dominant value patterns or are themselves subject to being co-opted by dominant values and interests is also subject to debate. To a large extent, as many analysts have pointed out, the ability to shape the dominant values and public views on the environment depends on the relative (cognitive) power held and exercised by groups, notably through control over the media and other institutions such as education, universities, think tanks, and the social media.
The importance of the nature of political systems for the development of environmental (and other) policies has been the subject of much research, including in the field of Comparative Environmental Policy. Analysts have pointed out a broad range of factors that stand in the way of environmental issues being adequately recognised and/or assigned political priority, including the role, privileged access, power and influence, and even dominance of (non-environmental) interest groups, bureaucratic thinking and interests, the lack of openness and transparency, (very) limited opportunities for public input and participation, and the short political horizon linked to electoral cycles. Many of these factors are not confined to liberal-democratic political systems but also play a role, perhaps even more so, in authoritarian political systems.
These political obstacles have generally led to a relative weakness in the power of government institutions (organisations and rules) advocating for environmental interests compared to non-environmental institutions and the circumscription of the power, role and influence of societal environmental groups, including green parties, if not their co-optation by the dominant powers and vested interests. This also affects the "environmental capacity" of political systems, severely limiting efforts to develop more comprehensive and integrated approaches to the environmental challenge.
Other analysts emphasise the importance of economic systems, notably capitalism, as a fundamental obstacle to developing and adopting effective environmental policies. Some take the view that capitalism is fundamentally incompatible with long-term environmental protection, notably because of its inherent growth imperative. Others recognise this imperative as a problem but argue that it is possible to reform capitalism in a way that does not require growth, or that enables "green growth" based on the recognition of environmental limits. Many have pointed out that socialist economic systems have had even worse environmental records than capitalist systems, implying that socialism is no better alternative for the environment even apart from other considerations. However, this view is contested by those who argue that socialism as an economic system does not necessarily require an authoritarian system and that there is scope for creating democratic socialist systems that assign greater priority to collective interests, including environmental protection.
These cognitive, social, political and economic factors are often referred to as systemic, meaning that overcoming these obstacles requires systemic, fundamental or transformative change, notably of the systems that are the sources and drivers of environmental pressures and problems, including the political and economic systems, and sectors like agriculture, energy, and transport. Increasingly, the tweaking of environmental and other policies is seen as inadequate, and there is growing recognition of the need for "transformative change". However, the interrelatedness of these systems raises questions about whether and how such transformative change can be achieved. This has led a growing number of environmental analysts, including scientists, to serious doubt and pessimism, although others argue that such change remains within societies' reach.
Credible interval

In Bayesian statistics, a credible interval is an interval used to characterize a probability distribution. It is defined such that an unobserved parameter value has a particular probability of falling within it. For example, in an experiment that determines the distribution of possible values of the parameter X, if the probability that X lies between 35 and 45 is 0.95, then 35 ≤ X ≤ 45 is a 95% credible interval.
Credible intervals are typically used to characterize posterior probability distributions or predictive probability distributions. Their generalization to disconnected or multivariate sets is called credible region.
Credible intervals are a Bayesian analog to confidence intervals in frequentist statistics. The two concepts arise from different philosophies: Bayesian intervals treat their bounds as fixed and the estimated parameter as a random variable, whereas frequentist confidence intervals treat their bounds as random variables and the parameter as a fixed value. Also, Bayesian credible intervals use (and indeed, require) knowledge of the situation-specific prior distribution, while the frequentist confidence intervals do not.
Definitions
Credible regions are not unique, as any given probability distribution has an infinite number of γ-credible regions, i.e. regions of probability γ. For example, in the univariate case, there are multiple definitions for a suitable interval or region:
The smallest credible interval (SCI), sometimes also called the highest density interval. This interval necessarily includes the median whenever γ ≥ 0.5. Moreover, when the distribution is unimodal, this interval includes the mode.
The smallest credible region (SCR), sometimes also called the highest density region. For a multimodal distribution, this is not necessarily an interval as it can be disconnected. This region will always include the mode.
A quantile-based credible interval, which is computed by taking the inter-quantile interval [q_p, q_{p+γ}] for some predefined p ∈ [0, 1 − γ]. For instance, the median credible interval (MCI) of probability γ is the interval where the probability of being below the interval is as likely as being above it, that is to say the interval [q_{(1−γ)/2}, q_{(1+γ)/2}]. It is sometimes also called the equal-tailed interval, and it will always include the median. Many other QBIs can be defined, such as the lowest credible interval (LCI), which is [q_0, q_γ], or the highest credible interval (HCI), which is [q_{1−γ}, q_1]. These intervals may be better suited for bounded variables.
One may also define an interval for which the mean is the central point, assuming that the mean exists.
γ-smallest credible regions (γ-SCR) can easily be generalized to the multivariate case, and are bounded by probability density contour lines. They will always contain the mode, but not necessarily the mean, the coordinate-wise median, or the geometric median.
Credible intervals can also be estimated through the use of simulation techniques such as Markov chain Monte Carlo.
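With a Monte Carlo sample from the posterior, both the equal-tailed interval and the smallest credible interval can be read off directly. A minimal sketch (the Gamma posterior is an arbitrary illustrative choice, not from the source):

```python
# Credible intervals from posterior samples: a minimal sketch.
# The posterior (a Gamma distribution here) is an arbitrary illustrative choice.
import numpy as np

rng = np.random.default_rng(0)
samples = np.sort(rng.gamma(shape=3.0, scale=2.0, size=100_000))
gamma_lvl = 0.95  # credibility level

# Equal-tailed (median) credible interval: cut (1 - gamma)/2 from each tail.
lo, hi = np.quantile(samples, [(1 - gamma_lvl) / 2, (1 + gamma_lvl) / 2])

# Smallest credible interval: among all windows containing a gamma fraction
# of the samples, take the narrowest one.
n = len(samples)
k = int(np.ceil(gamma_lvl * n))
widths = samples[k - 1:] - samples[: n - k + 1]
i = int(np.argmin(widths))
sci_lo, sci_hi = samples[i], samples[i + k - 1]

print(f"equal-tailed 95% CI: [{lo:.2f}, {hi:.2f}]")
print(f"smallest     95% CI: [{sci_lo:.2f}, {sci_hi:.2f}]")
```

For this right-skewed posterior the smallest interval sits lower than the equal-tailed one, and by construction it is never wider.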
Contrasts with confidence interval
A frequentist 95% confidence interval means that with a large number of repeated samples, 95% of such calculated confidence intervals would include the true value of the parameter. In frequentist terms, the parameter is fixed (cannot be considered to have a distribution of possible values) and the confidence interval is random (as it depends on the random sample).
Bayesian credible intervals differ from frequentist confidence intervals by two major aspects:
credible intervals are intervals whose values have a (posterior) probability density, representing the plausibility that the parameter takes those values, whereas confidence intervals regard the population parameter as fixed and therefore not the object of probability. Within confidence intervals, "confidence" refers to the randomness of the confidence interval itself under repeated trials, whereas credible intervals quantify the uncertainty in the target parameter given the data at hand.
credible intervals and confidence intervals treat nuisance parameters in radically different ways.
For the case of a single parameter and data that can be summarised in a single sufficient statistic, it can be shown that the credible interval and the confidence interval coincide if the unknown parameter is a location parameter (i.e. the forward probability function has the form Pr(x|μ) = f(x − μ)), with a prior that is a uniform flat distribution; and also if the unknown parameter is a scale parameter (i.e. the forward probability function has the form Pr(x|s) = f(x/s)/s), with a Jeffreys' prior — the latter following because taking the logarithm of such a scale parameter turns it into a location parameter with a uniform distribution.
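The scale-to-location reduction can be made explicit with a standard change of variables (a sketch, not taken from the source):

```latex
% Scale family: g is an arbitrary base density, x, s > 0.
f(x \mid s) = \frac{1}{s}\, g\!\left(\frac{x}{s}\right)
% Substitute y = \log x, \mu = \log s; with dx/dy = e^y,
f_Y(y \mid \mu) = f\!\left(e^{y} \mid e^{\mu}\right) e^{y}
               = g\!\left(e^{\,y-\mu}\right) e^{\,y-\mu}
% which depends on (y, \mu) only through y - \mu: a location family.
% A uniform (flat) prior on \mu = \log s corresponds to the Jeffreys prior
% p(s) \propto 1/s on the original scale.
```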
But these are distinctly special (albeit important) cases; in general no such equivalence can be made.
Microscopic reversibility

The principle of microscopic reversibility in physics and chemistry is twofold:
First, it states that the microscopic detailed dynamics of particles and fields is time-reversible because the microscopic equations of motion are symmetric with respect to inversion in time (T-symmetry);
Second, it relates to the statistical description of the kinetics of macroscopic or mesoscopic systems as an ensemble of elementary processes: collisions, elementary transitions or reactions. For these processes, the consequence of the microscopic T-symmetry is: Corresponding to every individual process there is a reverse process, and in a state of equilibrium the average rate of every process is equal to the average rate of its reverse process.
History of microscopic reversibility
The idea of microscopic reversibility was born together with physical kinetics. In 1872, Ludwig Boltzmann represented the kinetics of gases as a statistical ensemble of elementary collisions. Because the equations of mechanics are reversible in time, the reverse collisions obey the same laws. This reversibility of collisions is the first example of microreversibility. According to Boltzmann, microreversibility implies the principle of detailed balance for collisions: in the equilibrium ensemble, each collision is balanced by its reverse collision. These ideas of Boltzmann were analyzed in detail and generalized by Richard C. Tolman.
In chemistry, J. H. van't Hoff (1884) came up with the idea that equilibrium has a dynamical nature and is the result of a balance between the forward and backward reaction rates. He did not study reaction mechanisms with many elementary reactions and could not formulate the principle of detailed balance for complex reactions. In 1901, Rudolf Wegscheider introduced the principle of detailed balance for complex chemical reactions. He found that for a complex reaction the principle of detailed balance implies important and non-trivial relations between the rate constants of the different reactions. In particular, he demonstrated that irreversible reaction cycles are impossible, and that for reversible cycles the product of the rate constants of the forward reactions (in the "clockwise" direction) is equal to the product of the rate constants of the reverse reactions (in the "anticlockwise" direction). Lars Onsager (1931) used these relations in his well-known work, without direct citation but with the following remark:
"Here, however, the chemists are accustomed to impose a very interesting additional restriction, namely: when the equilibrium is reached each individual reaction must balance itself. They require that the transition must take place just as frequently as the reverse transition etc."
The quantum theory of emission and absorption developed by Albert Einstein (1916, 1917) gives an example of application of the microreversibility and detailed balance to development of a new branch of kinetic theory.
Sometimes, the principle of detailed balance is formulated in a narrow sense, for chemical reactions only, but in the history of physics it has had broader use: it was invented for collisions, and used for the emission and absorption of quanta, for transport processes, and for many other phenomena.
In its modern form, the principle of microreversibility was published by Lewis (1925). The classical textbooks present the full theory and many examples of its application.
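Wegscheider's cycle condition can be illustrated numerically. A minimal sketch with hypothetical rate constants (the numbers are arbitrary, not from any real reaction):

```python
# Wegscheider's relation for a reversible cycle A <-> B <-> C <-> A
# under mass-action kinetics: a minimal numeric sketch with made-up values.
import math

c_eq = {"A": 1.0, "B": 2.0, "C": 0.5}      # chosen equilibrium concentrations
k_fwd = {"AB": 3.0, "BC": 1.5, "CA": 0.8}  # chosen forward rate constants

# Detailed balance fixes each reverse constant:  k+ * c_X = k- * c_Y.
k_rev = {
    "AB": k_fwd["AB"] * c_eq["A"] / c_eq["B"],
    "BC": k_fwd["BC"] * c_eq["B"] / c_eq["C"],
    "CA": k_fwd["CA"] * c_eq["C"] / c_eq["A"],
}

prod_fwd = k_fwd["AB"] * k_fwd["BC"] * k_fwd["CA"]
prod_rev = k_rev["AB"] * k_rev["BC"] * k_rev["CA"]

# The equilibrium concentrations cancel around the cycle, so the two
# products must be equal -- Wegscheider's relation.
assert math.isclose(prod_fwd, prod_rev)
print(prod_fwd, prod_rev)
```

Whatever equilibrium concentrations are chosen, they cancel around the closed cycle, which is exactly why the relation constrains the rate constants alone.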
Time-reversibility of dynamics
The Newton and the Schrödinger equations in the absence of the macroscopic magnetic fields and in the inertial frame of reference are T-invariant: if X(t) is a solution then X(-t) is also a solution (here X is the vector of all dynamic variables, including all the coordinates of particles for the Newton equations and the wave function in the configuration space for the Schrödinger equation).
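This T-invariance can be demonstrated numerically with a time-reversible integrator. A minimal sketch (the harmonic oscillator, step size, and step count are arbitrary choices): integrating forward, flipping the velocity, and integrating again recovers the initial state.

```python
# Time-reversibility of Newtonian dynamics: a minimal numeric sketch.
# Velocity Verlet is a time-reversible integrator, so running it forward,
# reversing the velocity, and running it again returns to the start
# (up to floating-point round-off).
def verlet(x, v, dt, steps, force=lambda x: -x):
    """Integrate x'' = force(x) with the velocity Verlet scheme."""
    for _ in range(steps):
        a = force(x)
        x = x + v * dt + 0.5 * a * dt * dt
        v = v + 0.5 * (a + force(x)) * dt
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, dt=0.01, steps=1000)   # forward in time
x2, v2 = verlet(x1, -v1, dt=0.01, steps=1000)  # velocity reversed
print(x2, -v2)  # back at the initial state (x0, v0)
```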
There are two sources of the violation of this rule:
First, if dynamics depend on a pseudovector like the magnetic field or the rotation angular speed in the rotating frame then the T-symmetry does not hold.
Second, in microphysics of weak interaction the T-symmetry may be violated and only the combined CPT symmetry holds.
Macroscopic consequences of the time-reversibility of dynamics
In physics and chemistry, there are two main macroscopic consequences of the time-reversibility of microscopic dynamics: the principle of detailed balance and the Onsager reciprocal relations.
The statistical description of a macroscopic process as an ensemble of elementary indivisible events (collisions) was invented by L. Boltzmann and formalised in the Boltzmann equation. He discovered that the time-reversibility of Newtonian dynamics leads to detailed balance for collisions: in equilibrium, collisions are equilibrated by their reverse collisions. This principle allowed Boltzmann to deduce a simple and elegant formula for entropy production and prove his famous H-theorem. In this way, microscopic reversibility was used to prove macroscopic irreversibility and the convergence of ensembles of molecules to their thermodynamic equilibria.
Another macroscopic consequence of microscopic reversibility is the symmetry of kinetic coefficients, the so-called reciprocal relations. The reciprocal relations were discovered in the 19th century by Thomson and Helmholtz for some phenomena, but the general theory was proposed by Lars Onsager in 1931. He also found the connection between the reciprocal relations and detailed balance. For the equations of the law of mass action, the reciprocal relations appear in the linear approximation near equilibrium as a consequence of the detailed balance conditions. According to the reciprocal relations, damped oscillations in homogeneous closed systems near thermodynamic equilibrium are impossible because the spectrum of symmetric operators is real. Therefore, the relaxation to equilibrium in such a system is monotone if it is sufficiently close to the equilibrium.
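The connection between symmetric kinetic coefficients, real spectra, and monotone relaxation can be illustrated numerically. A minimal sketch (the symmetric positive-definite matrix below is an arbitrary example, not from the source):

```python
# Near-equilibrium relaxation with symmetric kinetic coefficients: a sketch.
import numpy as np

L = np.array([[2.0, 0.5],
              [0.5, 1.0]])          # arbitrary symmetric kinetic-coefficient matrix
assert np.allclose(L, L.T)

eigvals = np.linalg.eigvalsh(L)     # real because L is symmetric
assert np.all(eigvals > 0)          # positive definite: pure exponential decay

# Deviations from equilibrium obey d(delta)/dt = -L @ delta. Every mode
# decays as exp(-lambda * t) with real lambda > 0; damped oscillations
# would require complex eigenvalues, which a symmetric matrix cannot have.
w, V = np.linalg.eigh(L)
delta0 = np.array([1.0, -0.5])
coef = V.T @ delta0                 # expand delta0 in the eigenbasis
t = np.linspace(0.0, 5.0, 6)
traj = np.array([V @ (coef * np.exp(-w * ti)) for ti in t])

# The distance to equilibrium decreases monotonically.
norms = np.linalg.norm(traj, axis=1)
print(norms)
```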
Malaria vaccine

Malaria vaccines are vaccines that prevent malaria, a mosquito-borne infectious disease which affected an estimated 249 million people globally in 85 malaria-endemic countries and areas and caused 608,000 deaths in 2022. The first approved vaccine for malaria is RTS,S, known by the brand name Mosquirix. The vaccine has been given to 1.5 million children living in areas with moderate-to-high malaria transmission. It requires at least three doses in infants by age 2, and a fourth dose extends the protection for another 1–2 years. The vaccine reduces hospital admissions from severe malaria by around 30%.
Research continues with other malaria vaccines. The most effective malaria vaccine is the R21/Matrix-M, with a 77% efficacy rate shown in initial trials and significantly higher antibody levels than with the RTS,S vaccine. It is the first vaccine that meets the World Health Organization's (WHO) goal of a malaria vaccine with at least 75% efficacy, and only the second malaria vaccine to be recommended by the WHO. In April 2023, Ghana's Food and Drugs Authority approved the use of the R21 vaccine for use in children aged between five months and three years old. Following Ghana's decision, Nigeria provisionally approved the R21 vaccine.
Approved vaccines
RTS,S
RTS,S/AS01 (brand name Mosquirix) is the first malaria vaccine approved for public use. It requires at least three doses in infants by age 2, with a fourth dose extending the protection for another 1–2 years. The vaccine reduces hospital admissions from severe malaria by around 30%.
RTS,S was developed by PATH Malaria Vaccine Initiative (MVI) and GlaxoSmithKline (GSK) with support from the Bill and Melinda Gates Foundation. It is a recombinant vaccine, consisting of the Plasmodium falciparum circumsporozoite protein (CSP) from the pre-erythrocytic stage. The CSP antigen causes the production of antibodies capable of preventing the invasion of hepatocytes and also elicits a cellular response enabling the destruction of infected hepatocytes. The CSP vaccine presented problems in the trial stage due to its poor immunogenicity. RTS,S attempted to avoid these by fusing the protein with a surface antigen from hepatitis B virus, creating a more potent and immunogenic vaccine. When tested in trials as an emulsion of oil in water and with the added adjuvants of monophosphoryl A and QS21 (SBAS2), the vaccine gave protective immunity to 7 out of 8 volunteers when challenged with P. falciparum.
RTS,S was engineered using genes from the outer protein of P. falciparum malaria parasite and a portion of a hepatitis B virus plus a chemical adjuvant to boost the immune response. Infection is prevented by inducing high antibody titers that block the parasite from infecting the liver. In November 2012, a Phase III trial of RTS,S found that it provided modest protection against both clinical and severe malaria in young infants.
In October 2013, preliminary results of a phase III clinical trial indicated that RTS,S/AS01 reduced the number of cases among young children by almost 50 percent and among infants by around 25 percent. The study ended in 2014. The effects of a booster dose were positive, even though overall efficacy seems to wane with time. After four years, reductions were 36 percent for children who received three shots and a booster dose. Missing the booster dose reduced the efficacy against severe malaria to a negligible effect. The vaccine was shown to be less effective for infants. Three doses of vaccine plus a booster reduced the risk of clinical episodes by 26 percent over three years but offered no significant protection against severe malaria.
In a bid to accommodate a larger group and guarantee sustained availability for the general public, GSK applied for a marketing license with the European Medicines Agency (EMA) in July 2014. GSK treated the project as a non-profit initiative, with most funding coming from the Gates Foundation, a major contributor to malaria eradication.
In July 2015, Mosquirix received a positive scientific opinion from the European Medicines Agency (EMA) on the proposal for the vaccine to be used to vaccinate children aged 6 weeks to 17 months outside the European Union. A pilot project for vaccination was launched on 23 April 2019 in Malawi, on 30 April 2019 in Ghana, and on 13 September 2019 in Kenya.
In October 2021, the vaccine was endorsed by the World Health Organization for "broad use" in children, making it the first malaria vaccine to receive this recommendation.
The vaccine was prequalified by the WHO in July 2022. In August 2022, UNICEF awarded a contract to GSK to supply 18 million doses of the RTS,S vaccine over three years. More than 30 countries have areas with moderate to high malaria transmission where the vaccine is expected to be useful.
So far, 1.5 million children in Ghana, Kenya, and Malawi have received at least one injection of the vaccine, with more than 4.5 million doses administered through the countries' routine immunization programs. The next nine countries to receive the vaccine over the next two years are Benin, Burkina Faso, Burundi, Cameroon, the Democratic Republic of the Congo, Liberia, Niger, Sierra Leone, and Uganda.
R21/Matrix-M
The most effective malaria vaccine is R21/Matrix-M, with 77% efficacy shown in initial trials. It is the first vaccine that meets the World Health Organization's goal of a malaria vaccine with at least 75% efficacy. It was developed through a collaboration involving the Jenner Institute at the University of Oxford, the Kenya Medical Research Institute, the London School of Hygiene and Tropical Medicine, Novavax, and the Serum Institute of India. The trials took place at the Institut de Recherche en Sciences de la Santé in Nanoro, Burkina Faso with Halidou Tinto as the principal investigator. The R21 vaccine uses a circumsporozoite protein (CSP) antigen, at a higher proportion than the RTS,S vaccine. It uses the same HBsAg-linked recombinant structure but contains no excess HBsAg. It includes the Matrix-M adjuvant that is also utilized in the Novavax COVID-19 vaccine.
A phase II trial was reported in April 2021, with a vaccine efficacy of 77% and antibody levels significantly higher than with the RTS,S vaccine. A booster shot of R21/Matrix-M given 12 months after the primary three-dose regimen maintains high efficacy against malaria, providing high protection against symptomatic malaria for at least two years. A phase III trial with 4,800 children across four African countries was reported in November 2022, demonstrating vaccine efficacy of 74% against a severe malaria episode. Further data from multiple studies are being collected. Data from the phase III study had not been formally published at the time, but late-stage data from the study were shared with regulatory authorities.
Ghana's Food and Drugs Authority approved the use of the R21 vaccine in April 2023, for use in children aged between five months and three years. The Serum Institute of India is preparing to produce between 100 and 200 million doses of the vaccine per year, and is constructing a vaccine factory in Accra, Ghana. Following Ghana's decision, Nigeria provisionally approved the R21 vaccine.
In October 2023, the WHO endorsed the R21 vaccine against malaria, and at the end of December 2023 it was added to the WHO list of prequalified vaccines.
Development has also progressed on a vaccine that targets the erythrocytic stage of the malaria parasite, provisionally named RH5.1/Matrix-M, which it is hoped can be combined with the R21/Matrix-M pre-erythrocytic vaccine to create an even more efficacious second-generation malaria vaccine.
Agents under development
A completely effective vaccine is not yet available for malaria, although several vaccines are under development. Multiple vaccine candidates targeting the blood stage of the parasite's lifecycle have been insufficient on their own. Several potential vaccines targeting the pre-erythrocytic stage are being developed, with RTS,S and R21/Matrix-M the two approved options so far.
Nanoparticle enhancement of RTS,S
In 2015, researchers used a repetitive antigen display technology to engineer a nanoparticle that displayed malaria-specific B cell and T cell epitopes. The particle exhibited icosahedral symmetry and carried on its surface up to 60 copies of the RTS,S protein. The researchers claimed that the density of the protein was much higher than the 14% of the GSK vaccine.
PfSPZ vaccine
The PfSPZ vaccine is a candidate malaria vaccine developed by Sanaria using radiation-attenuated sporozoites to elicit an immune response. Clinical trials have been promising, with trials in Africa, Europe, and the US protecting over 80% of volunteers. It has been subject to some criticism regarding the ultimate feasibility of large-scale production and delivery in Africa since it must be stored in liquid nitrogen.
The PfSPZ vaccine candidate was granted fast track designation by the U.S. Food and Drug Administration in September 2016.
In April 2019, a phase III trial in Bioko was announced, scheduled to start in early 2020.
Other developments
SPf66 is a synthetic peptide-based vaccine developed by the Manuel Elkin Patarroyo team in Colombia and was tested extensively in endemic areas in the 1990s. Clinical trials showed it to be insufficiently effective, with 28% efficacy in South America and minimal or no efficacy in Africa. This vaccine had no protective effect in the largest placebo-controlled randomized trial in South East Asia and was abandoned.
The CSP (circumsporozoite protein) vaccine was another candidate that initially appeared promising enough to undergo trials. It is also based on the circumsporozoite protein, but additionally has the recombinant (Asn-Ala-Pro15Asn-Val-Asp-Pro)2-Leu-Arg(R32LR) protein covalently bound to a purified Pseudomonas aeruginosa toxin (A9). However, at an early stage a complete lack of protective immunity was demonstrated in those inoculated: the study group in Kenya had an 82% incidence of parasitaemia, while the control group had an 89% incidence. The vaccine was intended to elicit an increased T-lymphocyte response in those exposed, but this was not observed either.
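For context, vaccine efficacy in such trials is conventionally computed as one minus the relative risk. A minimal sketch using the incidences quoted above (the helper function is illustrative, not from the source):

```python
# Vaccine efficacy as 1 - relative risk: a minimal sketch using the
# parasitaemia incidences quoted above for the CSP trial in Kenya.
def vaccine_efficacy(risk_vaccinated: float, risk_control: float) -> float:
    """Efficacy = 1 - (risk in vaccinated / risk in controls)."""
    return 1.0 - risk_vaccinated / risk_control

ve = vaccine_efficacy(risk_vaccinated=0.82, risk_control=0.89)
print(f"{ve:.1%}")  # prints "7.9%"
```

An efficacy of roughly 8% is consistent with the trial's conclusion that the vaccine conferred essentially no protective immunity.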
The NYVAC-Pf7 multi-stage vaccine attempted to use different technology, incorporating seven P. falciparum antigenic genes drawn from a variety of lifecycle stages. CSP and sporozoite surface protein 2 (called PfSSP2) were derived from the sporozoite phase. The liver stage antigen 1 (LSA1), three antigens from the erythrocytic stage (merozoite surface protein 1, serine repeat antigen, and AMA-1), and one sexual-stage antigen (the 25-kDa Pfs25) were also included. This was first investigated using rhesus monkeys and produced encouraging results: 4 out of the 7 antigens produced specific antibody responses (CSP, PfSSP2, MSP1 and Pfs25). Later trials in humans, despite demonstrating cellular immune responses in over 90% of the subjects, produced very poor antibody responses. Nevertheless, following administration of the vaccine some candidates had complete protection when challenged with P. falciparum, a result that warranted ongoing trials.
In 1995 a field trial involving [NANP]19-5.1 proved to be very successful. Out of 194 children vaccinated, none developed symptomatic malaria in the 12-week follow-up period, and only 8 failed to have higher levels of antibody present. The vaccine consists of the schizont export protein (5.1) and 19 repeats of the sporozoite surface protein [NANP]. Limitations of the technology exist as it contains only 20% peptide and has low levels of immunogenicity. It also does not contain any immunodominant T-cell epitopes.
Two chemical compounds undergoing trials for the treatment of tuberculosis and cancer, the JmjC inhibitor ML324 and the antitubercular clinical candidate SQ109, are potentially a new line of drugs to treat malaria and kill the parasite in its infectious stage. More tests still need to be carried out before the compounds can be approved as a viable treatment.
Considerations
The task of developing a preventive vaccine for malaria is a complex process. There are a number of considerations to be made concerning what strategy a potential vaccine should adopt.
Parasite diversity
P. falciparum has demonstrated the capability, through the development of multiple drug-resistant parasites, for evolutionary change. The Plasmodium species has a very high rate of replication, much higher than that needed to ensure transmission in the parasite's lifecycle. This enables pharmaceutical treatments that are effective at reducing the reproduction rate, but not halting it, to exert a high selection pressure, thus favoring the development of resistance. The process of evolutionary change is one of the key considerations necessary when considering potential vaccine candidates. The development of resistance could cause a significant reduction in the efficacy of any potential vaccine thus rendering useless a carefully developed and effective treatment.
Choosing to address the symptom or the source
The parasite induces two main response types from the human immune system. These are anti-parasitic immunity and anti-toxic immunity.
"Anti-parasitic immunity" addresses the source; it consists of an antibody response (humoral immunity) and a cell-mediated immune response. Ideally, a vaccine would enable the development of anti-plasmodial antibodies in addition to generating an elevated cell-mediated response. Potential antigens against which a vaccine could be targeted will be discussed in greater depth later. Antibodies are part of the specific immune response. They exert their effect by activating the complement cascade, stimulating phagocytic cells into endocytosis through adhesion to an external surface of the antigenic substances, thus 'marking' it as offensive. Humoral or cell-mediated immunity consists of many interlinking mechanisms that essentially aim to prevent infection from entering the body (through external barriers or hostile internal environments) and then kill any microorganisms or foreign particles that succeed in penetration. The cell-mediated component consists of many white blood cells (such as monocytes, neutrophils, macrophages, lymphocytes, basophils, mast cells, natural killer cells, and eosinophils) that target foreign bodies by a variety of different mechanisms. In the case of malaria, both systems would be targeted to attempt to increase the potential response generated, thus ensuring the maximum chance of preventing disease.
"Anti-toxic immunity" addresses the symptoms; it refers to the suppression of the immune response associated with the production of factors that either induce symptoms or reduce the effect that any toxic by-products (of micro-organism presence) have on the development of disease. For example, it has been shown that tumor necrosis factor-alpha has a central role in generating the symptoms experienced in severe P. falciparum malaria. Thus a therapeutic vaccine could target the production of TNF-a, preventing respiratory distress and cerebral symptoms. This approach has serious limitations as it would not reduce the parasitic load; rather, it only reduces the associated pathology. As a result, there are substantial difficulties in evaluating efficacy in human trials.
Taking this information into consideration an ideal vaccine candidate would attempt to generate a more substantial cell-mediated and antibody response on parasite presentation. This would have the benefit of increasing the rate of parasite clearance, thus reducing the experienced symptoms and providing a level of consistent future immunity against the parasite.
Potential targets
By their very nature, protozoa are more complex organisms than bacteria and viruses, with more complicated structures and lifecycles. This presents problems in vaccine development but also increases the number of potential targets for a vaccine. These have been summarised into the lifecycle stage and the antibodies that could potentially elicit an immune response.
The epidemiology of malaria varies enormously across the globe and has led to the belief that it may be necessary to adopt very different vaccine development strategies to target different populations. A Type 1 vaccine is suggested for those exposed mostly to P. falciparum malaria in sub-Saharan Africa, with the primary objective to reduce the number of severe malaria cases and deaths in infants and children exposed to high transmission rates. The Type 2 vaccine could be thought of as a 'travelers' vaccine,' aiming to prevent all clinical symptoms in individuals with no previous exposure. This is another major public health problem, with malaria presenting as one of the most substantial threats to travelers' health. Problems with the available pharmaceutical therapies include costs, availability, adverse effects, contraindications, inconvenience, and compliance, many of which would be reduced or eliminated if an effective (greater than 85–90%) vaccine was developed.
The lifecycle of the malaria parasite is particularly complex, presenting initial developmental problems. Despite the huge number of vaccines available, none target parasitic infections. The distinct developmental stages involved in the lifecycle present numerous opportunities for targeting antigens, thus potentially eliciting an immune response. Theoretically, each developmental stage could have a vaccine developed specifically to target the parasite. Moreover, any vaccine produced would ideally have the ability to be of therapeutic value as well as preventing further transmission and is likely to consist of a combination of antigens from different phases of the parasite's development. More than 30 of these antigens are being researched by teams all over the world in the hope of identifying a combination that can elicit immunity in the inoculated individual. Some of the approaches involve surface expression of the antigen, inhibitory effects of specific antibodies on the lifecycle, and the protective effects through immunization or passive transfer of antibodies between an immune and a non-immune host. The majority of research into malarial vaccines has focused on the Plasmodium falciparum strain due to the high mortality caused by the parasite and the ease of carrying out in vitro/in vivo studies. The earliest vaccines attempted to use the parasitic circumsporozoite protein (CSP). This is the most dominant surface antigen of the initial pre-erythrocytic phase. However, problems were encountered due to low efficacy, reactogenicity and low immunogenicity.
The initial stage in the lifecycle, following inoculation, is a relatively short "pre-erythrocytic" or "hepatic" phase. A vaccine at this stage must have the ability to protect against sporozoites invading and possibly inhibiting the development of parasites in the hepatocytes (through inducing cytotoxic T-lymphocytes that can destroy the infected liver cells). However, if any sporozoites evaded the immune system they would then have the potential to be symptomatic and cause the clinical disease.
The second phase of the lifecycle is the "erythrocytic" or blood phase. A vaccine here could prevent merozoite multiplication or the invasion of red blood cells. This approach is complicated by the lack of MHC molecule expression on the surface of erythrocytes. Instead, malarial antigens are expressed, and it is this towards which the antibodies could potentially be directed. Another approach would be to attempt to block the process of erythrocyte adherence to blood vessel walls. It is thought that this process is accountable for much of the clinical syndrome associated with malarial infection; therefore, a vaccine given during this stage would be therapeutic and hence administered during clinical episodes to prevent further deterioration.
The last phase of the lifecycle that has the potential to be targeted by a vaccine is the "sexual stage". This would not give any protective benefits to the individual inoculated but would prevent further transmission of the parasite by preventing the gametocytes from producing multiple sporozoites in the gut wall of the mosquito. It therefore would be used as part of a policy directed at eliminating the parasite from areas of low prevalence or to prevent the development and spread of vaccine-resistant parasites. This type of transmission-blocking vaccine is potentially very important. The evolution of resistance in the malaria parasite occurs very quickly, potentially making any vaccine redundant within a few generations. This approach to the prevention of spread is therefore essential.
Another approach is to target the protein kinases, which are present during the entire lifecycle of the malaria parasite. Research is underway on this, yet production of an actual vaccine targeting these protein kinases may still take a long time.
A report of a vaccine candidate capable of neutralizing all tested strains of Plasmodium falciparum, the most deadly form of the parasite causing malaria, was published in Nature Communications by a team of scientists from the University of Oxford in 2011. The viral vector vaccine, targeting full-length P. falciparum reticulocyte-binding protein homologue 5 (PfRH5), was found to induce an antibody response in an animal model. The results of this new vaccine confirmed the utility of a key discovery reported by scientists at the Wellcome Trust Sanger Institute, published in Nature. The earlier publication reported that P. falciparum relies on a red blood cell surface receptor, known as 'basigin', to invade the cells by binding the protein PfRH5 to the receptor. Unlike other antigens of the malaria parasite, which are often genetically diverse, the PfRH5 antigen appears to have little genetic diversity, and it was found to induce a very low antibody response in people naturally exposed to the parasite. The high susceptibility of PfRH5 to cross-strain neutralizing vaccine-induced antibody demonstrated significant promise on the long and often difficult road of vaccine development. According to Professor Adrian Hill, a Wellcome Trust Senior Investigator at the University of Oxford, the next step would be safety tests of this vaccine. At the time (2011) it was projected that if these proved successful, clinical trials in patients could begin within two to three years.
PfEMP1, one of the proteins known as variant surface antigens (VSAs) produced by Plasmodium falciparum, was found to be a key target of the immune system's response against the parasite. Studies of blood samples from 296 mostly Kenyan children by researchers of Burnet Institute and their cooperators showed that antibodies against PfEMP1 provide protective immunity, while antibodies developed against other surface antigens do not. Their results demonstrated that PfEMP1 could be a target for developing an effective vaccine that will reduce the risk of developing malaria.
Plasmodium vivax is the most common malaria species found in India, Southeast Asia, and South America. It can stay dormant in the liver and reemerge years later to elicit new infections. Two key proteins involved in the invasion of the red blood cells (RBC) by P. vivax are potential targets for drug or vaccine development. When the Duffy binding protein (DBP) of P. vivax binds the Duffy antigen (DARC) on the surface of the RBC, the process for the parasite to enter the RBC is initiated. Structures of the core region of DARC and the receptor binding pocket of DBP have been mapped by scientists at Washington University in St. Louis. The researchers found that the binding is a two-step process that involves two copies of the parasite protein acting together like a pair of tongs that "clamp" two copies of DARC. Antibodies that interfere with the binding by targeting the key region of either the DARC or the DBP will prevent the infection.
Antibodies against the Schizont Egress Antigen-1 (PfSEA-1) were found to disable the parasite's ability to rupture from the infected red blood cells (RBCs), thus preventing it from continuing with its lifecycle. Researchers from Rhode Island Hospital identified Plasmodium falciparum PfSEA-1, a 244 kDa malaria antigen expressed in schizont-infected RBCs. Mice vaccinated with the recombinant PfSEA-1 produced antibodies that interrupted the schizont rupture from the RBCs and decreased the parasite replication. The vaccine protected the mice from a lethal challenge of the parasite. Tanzanian and Kenyan children who have antibodies to PfSEA-1 were found to have fewer parasites in their bloodstream and a milder case of malaria. By blocking schizont egress, the PfSEA-1 vaccine may work synergistically with vaccines targeting other stages of the malaria lifecycle such as hepatocyte and RBC invasion.
Mix of antigenic components
Increasing the potential immunity generated against Plasmodia can be achieved by attempting to target multiple phases in the lifecycle. This is additionally beneficial in reducing the possibility of resistant parasites developing. The use of multiple-parasite antigens can therefore have a synergistic or additive effect.
One of the most successful vaccine candidates in clinical trials consists of recombinant antigenic proteins to the circumsporozoite protein.
History
Individuals who are exposed to the parasite in endemic countries develop acquired immunity against disease and death. Such immunity does not, however, prevent malarial infection; immune individuals often harbour asymptomatic parasites in their blood. This does, however, imply that it is possible to create an immune response that protects against the harmful effects of the parasite.
Research shows that if immunoglobulin is taken from immune adults, purified, and then given to individuals who have no protective immunity, some protection can be gained.
Irradiated mosquitoes
In 1967, it was reported that a level of immunity to the Plasmodium berghei parasite could be given to mice by exposing them to sporozoites that had been irradiated by x-rays. Subsequent human studies in the 1970s showed that humans could be immunized against Plasmodium vivax and Plasmodium falciparum by exposing them to the bites of significant numbers of irradiated mosquitoes.
From 1989 to 1999, eleven volunteers recruited from the United States Public Health Service, United States Army, and United States Navy were immunized against Plasmodium falciparum by the bites of 1001–2927 mosquitoes that had been irradiated with 15,000 rads of gamma rays from a Co-60 or Cs-137 source. This level of radiation is sufficient to attenuate the malaria parasites so that, while they can still enter hepatic cells, they cannot develop into schizonts nor infect red blood cells. Over 42 weeks, 24 of 26 tests on the volunteers showed that they were protected from malaria.
Sea anemone

Sea anemones are a group of predatory marine invertebrates constituting the order Actiniaria. Because of their colourful appearance, they are named after the Anemone, a terrestrial flowering plant. Sea anemones are classified in the phylum Cnidaria, class Anthozoa, subclass Hexacorallia. As cnidarians, sea anemones are related to corals, jellyfish, tube-dwelling anemones, and Hydra. Unlike jellyfish, sea anemones do not have a medusa stage in their life cycle.
A typical sea anemone is a single polyp attached to a hard surface by its base, but some species live in soft sediment, and a few float near the surface of the water. The polyp has a columnar trunk topped by an oral disc with a ring of tentacles and a central mouth. The tentacles can be retracted inside the body cavity or expanded to catch passing prey. They are armed with cnidocytes (stinging cells). In many species, additional nourishment comes from a symbiotic relationship with single-celled dinoflagellates (zooxanthellae) or with green algae (zoochlorellae) that live within their cells. Some species of sea anemone live in association with clownfish, hermit crabs, small fish, or other animals to their mutual benefit.
Sea anemones breed by liberating sperm and eggs through the mouth into the sea. The resulting fertilized eggs develop into planula larvae which, after being planktonic for a while, settle on the seabed and develop directly into juvenile polyps. Sea anemones also breed asexually, by breaking in half or into smaller pieces which regenerate into polyps. Sea anemones are sometimes kept in reef aquariums; the global trade in marine ornamentals for this purpose is expanding and threatens sea anemone populations in some localities, as the trade depends on collection from the wild.
Anatomy
A typical sea anemone is a sessile polyp attached at the base to the surface beneath it by an adhesive foot, called a basal or pedal disc, with a column-shaped body topped by an oral disc. Most are from in diameter and in length, but they are inflatable and vary greatly in dimensions. Some are very large; Urticina columbiana and Stichodactyla mertensii can both exceed in diameter and Metridium farcimen a metre in length. Some species burrow in soft sediment and lack a basal disc, having instead a bulbous lower end, the physa, which anchors them in place.
The column or trunk is generally more or less cylindrical and may be plain and smooth or may bear specialised structures; these include solid papillae (fleshy protuberances), adhesive papillae, cinclides (slits), and small protruding vesicles. In some species the part immediately below the oral disc is constricted and is known as the capitulum. When the animal contracts, the oral disc, tentacles and capitulum fold inside the pharynx and are held in place by a strong sphincter muscle part way up the column. There may be a fold in the body wall, known as a parapet, at this point, and this parapet covers and protects the anemone when it is retracted.
The oral disc has a central mouth, usually slit-shaped, surrounded by one or more whorls of tentacles. The ends of the slit lead to grooves in the wall of the pharynx known as siphonoglyphs; there are usually two of these grooves, but some groups have a single one. The tentacles are generally tapered and often tipped by a pore, but in some species they are branched, club-tipped, or reduced to low knobs. The tentacles are armed with many cnidocytes, cells that are both defensive and used to capture prey. Cnidocytes contain stinging nematocysts, capsule-like organelles capable of everting suddenly, giving the phylum Cnidaria its name. Each nematocyst contains a small venom vesicle filled with actinotoxins, an inner filament, and an external sensory hair. A touch to the hair mechanically triggers a cell explosion, which launches a harpoon-like structure that attaches to the organism that triggered it, and injects a dose of venom in the flesh of the aggressor or prey.
At the base of the tentacles in some species, primarily aggregating anemones, lie acrorhagi, elongated inflatable tentacle-like organs armed with cnidocytes, that can flail around and fend off other encroaching anemones; one or both anemones can be driven off or suffer injury in such battles.
Many sea anemones also have acontia, thin filaments covered in cnidae that can be ejected and retracted for defence.
The venom is a mix of toxins, including neurotoxins, that paralyzes the prey so the anemone can move it to the mouth for digestion inside the gastrovascular cavity. Actinotoxins are highly toxic to prey species of fish and crustaceans. However, Amphiprioninae (clownfish), small banded fish in various colours, are not affected by their host anemone's sting and shelter themselves from predators among its tentacles. Several other species have similar adaptations and are also unaffected (see Mutualistic relationships). Most sea anemones are harmless to humans, but a few highly toxic species (notably Actinodendron arboreum, Phyllodiscus semoni and Stichodactyla spp.) have caused severe injuries and are potentially lethal.
Digestive system
Sea anemones have what can be described as an incomplete gut: the gastrovascular cavity functions as a stomach and possesses a single opening to the outside, which operates as both a mouth and anus. Waste and undigested matter is excreted through this opening. The mouth is typically slit-like in shape, and bears a groove at one or both ends. The groove, termed a siphonoglyph, is ciliated, and helps to move food particles inwards and circulate water through the gastrovascular cavity.
The mouth opens into a flattened pharynx. This consists of an in-folding of the body wall, and is therefore lined by the animal's epidermis. The pharynx typically runs for about one third the length of the body before opening into the gastrovascular cavity that occupies the remainder of the body.
The gastrovascular cavity itself is divided into a number of chambers by mesenteries radiating inwards from the body wall. Some of the mesenteries form complete partitions with a free edge at the base of the pharynx, where they connect, but others reach only partway across. The mesenteries are usually found in multiples of twelve, and are symmetrically arranged around the central lumen. They have stomach lining on both sides, separated by a thin layer of mesoglea, and include filaments of tissue specialised for secreting digestive enzymes. In some species, these filaments extend below the lower margin of the mesentery, hanging free in the gastrovascular cavity as thread-like acontial filaments. These acontia are armed with nematocysts and can be extruded through cinclides, blister-like holes in the wall of the column, for use in defence.
Musculature and nervous system
A primitive nervous system, without centralization, coordinates the processes involved in maintaining homeostasis, as well as biochemical and physical responses to various stimuli. There are two nerve nets, one in the epidermis and one in the gastrodermis; these unite at the pharynx, the junctions of the septa with the oral disc and the pedal disc, and across the mesogloea. No specialized sense organs are present, but sensory cells include nematocytes and chemoreceptors.
The muscles and nerves are much simpler than those of most other animals, although more specialised than in other cnidarians, such as corals. Cells in the outer layer (epidermis) and the inner layer (gastrodermis) have microfilaments that group into contractile fibers. These fibers are not true muscles because they are not freely suspended in the body cavity as they are in more developed animals. Longitudinal fibres are found in the tentacles and oral disc, and also within the mesenteries, where they can contract the whole length of the body. Circular fibers are found in the body wall and, in some species, around the oral disc, allowing the animal to retract its tentacles into a protective sphincter.
Since the anemone lacks a rigid skeleton, the contractile cells pull against the fluid in the gastrovascular cavity, forming a hydrostatic skeleton. The anemone stabilizes itself by flattening its pharynx, which acts as a valve, keeping the gastrovascular cavity at a constant volume and making it rigid. When the longitudinal muscles relax, the pharynx opens and the cilia lining the siphonoglyphs beat, wafting water inwards and refilling the gastrovascular cavity. In general, the sea anemone inflates its body to extend its tentacles and feed, and deflates it when resting or disturbed. The inflated body is also used to anchor the animal inside a crevice, burrow or tube.
Life cycle
Unlike other cnidarians, anemones (and other anthozoans) entirely lack the free-swimming medusal stage of their life cycle; the polyp produces eggs and sperm, and the fertilized egg develops into a planula larva, which develops directly into another polyp. Both sexual and asexual reproduction can occur.
The sexes in sea anemones are separate in some species, while other species are sequential hermaphrodites, changing sex at some stage in their life. The gonads are strips of tissue within the mesenteries. In sexual reproduction, males may release sperm to stimulate females to release eggs, and fertilization occurs, either internally in the gastrovascular cavity or in the water column. The eggs and sperm, or the larvae, usually emerge through the mouth, but in some species, such as Metridium dianthus, may be swept out from the body cavity through the cinclides. In many species the eggs and sperm rise to the surface where fertilisation occurs. The fertilized egg develops into a planula larva, which drifts for a while before sinking to the seabed and undergoing metamorphosis into a juvenile sea anemone. Some larvae preferentially settle onto certain suitable substrates; the mottled anemone (Urticina crassicornis) for example, settles onto green algae, perhaps attracted by a biofilm on the surface.
The brooding anemone (Epiactis prolifera) is gynodioecious, starting life as a female and later becoming hermaphroditic, so that populations consist of females and hermaphrodites. As a female, the eggs can develop parthenogenetically into female offspring without fertilisation, and as a hermaphrodite, the eggs are routinely self-fertilised. The larvae emerge from the anemone's mouth and tumble down the column, lodging in a fold near the pedal disc. Here they develop and grow, remaining for about three months before crawling off to start independent lives.
Sea anemones have great powers of regeneration and can reproduce asexually, by budding, fragmentation, or longitudinal or transverse binary fission. Some species such as certain Anthopleura divide longitudinally, pulling themselves apart, resulting in groups of individuals with identical colouring and markings. Transverse fission is less common, but occurs in Anthopleura stellula and Gonactinia prolifera, with a rudimentary band of tentacles appearing halfway up the column before it splits horizontally. Some species can also reproduce by pedal laceration. In this process, a ring of material may break off from the pedal disc at the base of the column, which then fragments, the pieces regenerating into new clonal individuals. Alternatively, fragments detach separately as the animal creeps across a surface. In Metridium dianthus, fragmentation rates were higher in individuals living among live mussels than among dead shells, and all the new individuals had tentacles within three weeks.
The sea anemone Aiptasia diaphana displays sexual plasticity. Thus asexually produced clones derived from a single founder individual can contain both male and female individuals (ramets). When eggs and sperm (gametes) are formed, they can produce zygotes derived from "selfing" (within the founding clone) or out-crossing, which then develop into swimming planula larvae. Anemones tend to grow and reproduce relatively slowly. The magnificent sea anemone (Heteractis magnifica), for example, may live for decades, with one individual surviving in captivity for eighty years.
Behaviour and ecology
Movement
A sea anemone is capable of changing its shape dramatically. The column and tentacles have longitudinal, transverse and diagonal sheets of muscle and can lengthen and contract, as well as bend and twist. The gullet and mesenteries can evert (turn inside out), or the oral disc and tentacles can retract inside the gullet, with the sphincter closing the aperture; during this process, the gullet folds transversely and water is discharged through the mouth.
Locomotion
Although some species of sea anemone burrow in soft sediment, the majority are mainly sessile, attaching to a hard surface with their pedal disc, and tend to stay in the same spot for weeks or months at a time. They can move, however, being able to creep around on their bases; this gliding can be seen with time-lapse photography but the motion is so slow as to be almost imperceptible to the naked eye. The process resembles the locomotion of a gastropod mollusc, a wave of contraction moving from the functionally posterior portion of the foot towards the front edge, which detaches and moves forwards. Sea anemones can also cast themselves loose from the substrate and drift to a new location. Gonactinia prolifera is unusual in that it can both walk and swim; walking is by making a series of short, looping steps, rather like a caterpillar, attaching its tentacles to the substrate and drawing its base closer; swimming is done by rapid movements of the tentacles beating synchronously like oar strokes. Stomphia coccinea can swim by flexing its column, and the sea onion anemone inflates and casts itself loose, adopting a spherical shape and allowing itself to be rolled about by the waves and currents. There are no truly pelagic sea anemones, but some stages in the life cycle post-metamorphosis are able, in response to certain environmental factors, to cast themselves off and have a free-living stage that aids in their dispersal.
The sea onion Paranthus rapiformis lives on subtidal mud flats and burrows into the sediment, holding itself in place by expanding its basal disc to form an anchor. If it gets washed out of its burrow by strong currents, it contracts into a pearly glistening ball which rolls about. Tube-dwelling anemones, which live in parchment-like tubes, are in the anthozoan subclass Ceriantharia, and are only distantly related to sea anemones.
Feeding and diet
Sea anemones are typically predators, ensnaring prey of suitable size that comes within reach of their tentacles and immobilizing it with the aid of their nematocysts. The prey is then transported to the mouth and thrust into the pharynx. The lips can stretch to aid in prey capture and can accommodate larger items such as crabs, dislodged molluscs and even small fish. Stichodactyla helianthus is reported to trap sea urchins by enfolding them in its carpet-like oral disc. A few species are parasitic on other marine organisms. One of these is Peachia quinquecapitata, the larvae of which develop inside the medusae of jellyfish, feeding on their gonads and other tissues, before being liberated into the sea as free-living juvenile anemones.
Mutualistic relationships
Although not plants and therefore incapable of photosynthesis themselves, many sea anemones form an important facultative mutualistic relationship with certain single-celled algae species that reside in the animals' gastrodermal cells, especially in the tentacles and oral disc. These algae may be either zooxanthellae, zoochlorellae or both. The sea anemone benefits from the products of the algae's photosynthesis, namely oxygen and food in the form of glycerol, glucose and alanine; the algae in turn are assured a reliable exposure to sunlight and protection from micro-feeders, which the sea anemones actively maintain. The algae also benefit by being protected by the sea anemone's stinging cells, reducing the likelihood of being eaten by herbivores. In the aggregating anemone (Anthopleura elegantissima), the colour of the anemone is largely dependent on the proportions and identities of the zooxanthellae and zoochlorellae present. The hidden anemone (Lebrunia coralligens) has a whorl of seaweed-like pseudotentacles, rich in zooxanthellae, and an inner whorl of tentacles. A daily rhythm sees the pseudotentacles spread widely in the daytime for photosynthesis, but they are retracted at night, at which time the tentacles expand to search for prey.
Several species of fish and invertebrates live in symbiotic or mutualistic relationships with sea anemones, most famously the clownfish. The symbiont receives the protection from predators provided by the anemone's stinging cells, and the anemone utilises the nutrients present in its faeces. Other animals that associate with sea anemones include cardinalfish (such as Banggai cardinalfish), juvenile threespot dascyllus, incognito (or anemone) goby, juvenile painted greenling, various crabs (such as Inachus phalangium, Mithraculus cinctimanus and Neopetrolisthes), shrimp (such as certain Alpheus, Lebbeus, Periclimenes and Thor), opossum shrimp (such as Heteromysis and Leptomysis), and various marine snails.
Two of the more unusual relationships are those between certain anemones (such as Adamsia, Calliactis and Neoaiptasia) and hermit crabs or snails, and Bundeopsis or Triactis anemones and Lybia boxing crabs. In the former, the anemones live on the shell of the hermit crab or snail. In the latter, the small anemones are carried in the claws of the boxing crab.
Habitats
Sea anemones are found in both deep oceans and shallow coastal waters worldwide. The greatest diversity is in the tropics, although there are many species adapted to relatively cold waters. The majority of species cling to rocks, shells or submerged timber, often hiding in cracks or under seaweed, but some burrow into sand and mud, and a few are pelagic. Deep-sea mining companies are pressuring governments to let them mine the ocean floor, and by 2024 several companies could begin mining projects in the deep sea. The ecological damage to the habitat of sea anemones and other organisms could be enormous and irreversible.
Relationship with humans
Sea anemones and their attendant anemone fish can make attractive aquarium exhibits, and both are often harvested from the wild as adults or juveniles. These fishing activities significantly impact the populations of anemones and anemone fish by drastically reducing the densities of each in exploited areas. Besides their collection from the wild for use in reef aquaria, sea anemones are also threatened by alterations to their environment. Those living in shallow-water coastal locations are affected directly by pollution and siltation, and indirectly by the effect these have on their photosynthetic symbionts and the prey on which they feed.
In southwestern Spain and Sardinia, the snakelocks anemone (Anemonia viridis) is consumed as a delicacy. The whole animal is marinated in vinegar, then coated in a batter similar to that used to make calamari, and deep-fried in olive oil. Anemones are also a source of food for fishing communities on the east coast of Sabah, Borneo, as well as in the Thousand Islands in Southeast Asia (as rambu-rambu) and in Taizhou, Zhejiang (as shasuan).
Fossil record
Most Actiniaria do not form hard parts that can be recognized as fossils, but a few fossils of sea anemones do exist; Mackenzia, from the Middle Cambrian Burgess Shale of Canada, is the oldest fossil identified as a sea anemone.
Taxonomy
Sea anemones, order Actiniaria, are classified in the phylum Cnidaria, class Anthozoa, subclass Hexacorallia. Rodriguez et al. proposed a new classification for the Actiniaria based on extensive DNA results.
Suborders and superfamilies included in Actiniaria are:
Suborder Anenthemonae
Superfamily Edwardsioidea
Superfamily Actinernoidea
Suborder Enthemonae
Superfamily Actinostoloidea
Superfamily Actinioidea
Superfamily Metridioidea
Phylogeny
External relationships
Anthozoa contains three subclasses: Hexacorallia, which contains the Actiniaria; Octocorallia; and Ceriantharia. These are monophyletic, but the relationships within the subclasses remain unresolved.
†= extinct
Internal relationships
The relationships of higher-level taxa in Carlgren's classification are re-interpreted as follows:
Planter (farm implement)

A planter is a farm implement, usually towed behind a tractor, that sows (plants) seeds in rows throughout a field. It is connected to the tractor with a drawbar or a three-point hitch. Planters lay the seeds down in a precise manner along rows. Planters vary greatly in size, from 1 row to 54, with the biggest in the world being the 48-row John Deere DB120. Such larger and newer planters comprise multiple modules called row units. The row units are spaced evenly along the planter at intervals that vary widely by crop and locale. The most common row spacing in the United States today is 30 inches.
Design
Various machines meter out seeds for sowing in rows. The ones that handle larger seeds tend to be called planters, whereas the ones that handle smaller seeds tend to be called seed drills, grain drills, and seeders (including precision seeders). They all share a set of similar concepts in the ways that they work, but there is established usage in which the machines for sowing some crops including maize (corn), beans, and peas are mostly called planters, whereas those that sow cereals are drills.
On smaller and older planters, a marker extends out to the side half the width of the planter and creates a line in the field where the tractor should be centered for the next pass. The marker is usually a single disc-harrow disc on a rod on each side of the planter. On larger and more modern planters, GPS navigation and auto-steer systems for the tractor are often used, eliminating the need for the marker. Some precision farming equipment, such as Case IH AFS, uses GPS/RTK guidance and a computer-controlled planter to place seeds at positions accurate to within 2 cm. In an irregularly shaped field, the precision farming equipment will automatically hold the seed release over areas already sown when the tractor has to run an overlapping pass to avoid obstacles such as trees.
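The hold-over-sown-ground behaviour described above is a form of section control. A minimal sketch, assuming a simple grid model of the field (the grid resolution, coordinate frame, and function names are illustrative assumptions, not how any particular vendor implements it):

```python
# Sketch of GPS section control: record sown ground on a coarse grid
# and hold seed release when a row unit re-enters a sown cell.
# CELL_M and the flat x/y coordinates are illustrative assumptions.

CELL_M = 0.5  # grid resolution in metres

def cell(x_m, y_m):
    """Map a GPS-derived field position to a grid cell index."""
    return (int(x_m // CELL_M), int(y_m // CELL_M))

sown = set()  # cells where seed has already been released

def should_release(x_m, y_m):
    """Return False (hold seed) over already-sown ground; otherwise
    mark the cell as sown and release."""
    c = cell(x_m, y_m)
    if c in sown:
        return False
    sown.add(c)
    return True

first = should_release(1.2, 3.4)    # fresh ground: release
overlap = should_release(1.3, 3.4)  # same 0.5 m cell on a second pass: hold
```

A real controller would also account for GPS antenna offset to each row unit and for the delay between shutting off the meter and the last seed reaching the trench.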
Older planters commonly have a seed bin for each row and a fertilizer bin for two or more rows. In each seed bin, plates are installed with a certain number of teeth and tooth spacing according to the type of seed to be sown and the rate at which the seeds are to be sown. The tooth size (actually the size of the space between the teeth) is just big enough to allow one seed in at a time but not big enough for two. Modern planters often have a large central bin, known as a central commodity system, from which seed is distributed to each row.
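The plate metering described above is simple arithmetic: each space between teeth drops one seed, so the seeding rate follows from the tooth count, the drive ratio to the ground wheel, and the row spacing. A small illustrative calculation, with made-up example values:

```python
import math

# Illustrative arithmetic for a ground-driven plate planter. All figures
# (tooth count, gear ratio, wheel size) are invented example values, not
# specifications of any real machine.

def population_per_hectare(plate_teeth, plate_revs_per_wheel_rev,
                           wheel_diameter_m, row_spacing_m):
    """Seeds planted per hectare for one row unit of a plate planter.

    Each plate cell drops one seed, so seeds dropped per ground-wheel
    revolution = plate_teeth * plate_revs_per_wheel_rev.
    """
    wheel_circumference_m = math.pi * wheel_diameter_m
    seeds_per_metre = plate_teeth * plate_revs_per_wheel_rev / wheel_circumference_m
    row_metres_per_hectare = 10_000 / row_spacing_m  # 1 ha = 10,000 m^2
    return seeds_per_metre * row_metres_per_hectare

# Example: 16-cell plate, 0.5 plate revolutions per wheel revolution,
# 0.6 m ground wheel, 30-inch (0.762 m) rows
pop = population_per_hectare(16, 0.5, 0.6, 0.762)
print(round(pop))  # roughly 56,000 seeds per hectare
```

Changing the plate (tooth count) or the drive gears changes the rate, which is exactly why older machines needed a stock of plates and gear sets.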
A class of planters that dig down farther than others are called listers. They are not used much anymore, as they belonged to a set of high-till methods that low-till and no-till methods have largely replaced. Corn listers were common on the Great Plains from the 1920s through the 1950s.
Drive systems
There are several types of planter drive, the main distinction being mechanical versus hydraulic or electric drive. In a mechanical drive system, a small suspended tire is turned by another tire that is in contact with the ground. As the operator lowers the planter, the two tires make contact and the drive is engaged. The ground-driven wheel then turns a series of gears that determine the seed population, and the operator can change the gears to change the planting population.
Hydraulic drive systems came about to correct the shortfalls of the ground-driven system. They allow the operator to change the population on the go, and allow a computer controller to follow a prepared prescription for an individual field. They also allow an effectively unlimited choice of populations, whereas mechanical gear systems are limited to the set of population settings and gears available from manufacturers.
In 2014, John Deere introduced the ExactEmerge row unit, which introduced high-speed planting; Precision Planting followed suit and released the vDrive system. These systems were notable not because they were electric (other manufacturers had already developed electric planters) but because they allowed an operator to double their planting speed. Traditionally, an operator would plant at about 4.5-5.5 mph for optimal performance. With these systems, electric motors match the speed of the tractor and "dead-drop" the seed in the trench using either a belt or a brush-belt, so that the forward momentum of the planter is offset by the rearward momentum of the seed. Older systems instead dropped the seed through a tube after the meter rather than placing it in the seed trench directly.
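The advantage of hydraulic or electric drive can be seen in the arithmetic: instead of a fixed gear ratio, the controller computes the meter speed from ground speed and the prescribed population, so the rate can change on the go and scales directly with speed. A hypothetical sketch, with illustrative parameter names and values:

```python
# Hedged sketch of variable-rate drive control: the seed-meter speed is
# computed from ground speed and the prescribed population rather than
# being fixed by gears. Names and numbers are illustrative, not any
# manufacturer's actual control API.

def meter_rpm(target_seeds_per_ha, ground_speed_kph, row_spacing_m,
              seeds_per_meter_rev):
    """Required seed-meter RPM so the drop rate matches ground speed."""
    speed_m_per_min = ground_speed_kph * 1000 / 60
    # Area covered by one row unit per minute (m^2), converted to hectares
    hectares_per_min = speed_m_per_min * row_spacing_m / 10_000
    seeds_per_min = target_seeds_per_ha * hectares_per_min
    return seeds_per_min / seeds_per_meter_rev

# Doubling ground speed simply doubles the commanded meter RPM:
rpm_slow = meter_rpm(80_000, 8.0, 0.762, 30)    # ~5 mph
rpm_fast = meter_rpm(80_000, 16.0, 0.762, 30)   # ~10 mph high-speed planting
print(rpm_slow, rpm_fast)
```

Because the commanded RPM is a continuous function of speed and target population, any population in the prescription map is reachable, which is the "infinite settings" advantage the text describes.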
| Technology | Farm and garden machinery | null |
19337310 | https://en.wikipedia.org/wiki/Rodent | Rodent | Rodents (from Latin rodere, 'to gnaw') are mammals of the order Rodentia ( ), which are characterized by a single pair of continuously growing incisors in each of the upper and lower jaws. About 40% of all mammal species are rodents. They are native to all major land masses except for Antarctica, and several oceanic islands, though they have subsequently been introduced to most of these land masses by human activity.
Rodents are extremely diverse in their ecology and lifestyles and can be found in almost every terrestrial habitat, including human-made environments. Species can be arboreal, fossorial (burrowing), saltatorial/ricochetal (leaping on their hind legs), or semiaquatic. However, all rodents share several morphological features, including having only a single upper and lower pair of ever-growing incisors. Well-known rodents include mice, rats, squirrels, prairie dogs, porcupines, beavers, guinea pigs, and hamsters. Rabbits, hares, and pikas, whose incisors also grow continuously (but who have two pairs of upper incisors instead of one), were once included with the rodents but are now placed in a separate order, the Lagomorpha. Nonetheless, Rodentia and Lagomorpha are sister groups, sharing a single common ancestor and forming the clade Glires.
Most rodents are small animals with robust bodies, short limbs, and long tails. They use their sharp incisors to gnaw food, excavate burrows, and defend themselves. Most eat seeds or other plant material, but some have more varied diets. They tend to be social animals and many species live in societies with complex ways of communicating with each other. Mating among rodents can vary from monogamy, to polygyny, to promiscuity. Many have litters of underdeveloped, altricial young, while others are precocial (relatively well developed) at birth.
The rodent fossil record dates back to the Paleocene on the supercontinent of Laurasia. Rodents greatly diversified in the Eocene, as they spread across continents, sometimes even crossing oceans. Rodents reached both South America and Madagascar from Africa and, until the arrival of Homo sapiens, were the only terrestrial placental mammals to reach and colonize Australia.
Rodents have been used as food, for clothing, as pets, and as laboratory animals in research. Some species, in particular the brown rat, the black rat, and the house mouse, are serious pests, eating and spoiling food stored by humans and spreading diseases. Accidentally introduced rodents are often considered invasive and have caused the extinction of numerous species previously isolated from land-based predators, such as island birds, the dodo being an example.
Characteristics
The distinguishing feature of the rodents is their pairs of continuously growing, razor-sharp, open-rooted incisors. These incisors have thick layers of enamel on the front and little enamel on the back. Because they do not stop growing, the animal must continue to wear them down so that they do not reach and pierce the skull. As the incisors grind against each other, the softer dentine on the rear of the teeth wears away, leaving the sharp enamel edge shaped like the blade of a chisel. Most species have up to 22 teeth with no canines or anterior premolars. A gap, or diastema, occurs between the incisors and the cheek teeth in most species. This allows rodents to suck in their cheeks or lips to shield their mouth and throat from wood shavings and other inedible material, discarding this waste from the sides of their mouths. Chinchillas and guinea pigs have a high-fiber diet; their molars have no roots and grow continuously like their incisors.
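The "up to 22 teeth" figure can be checked against the maximal rodent dental formula, that of squirrels (per side: upper jaw 1 incisor, 0 canines, 2 premolars, 3 molars; lower jaw 1.0.1.3). A small illustrative calculation:

```python
# Checking the "up to 22 teeth" figure from the standard dental-formula
# convention: counts are given per side of each jaw, then doubled for the
# left and right sides. The formulas below are the usual textbook values.

def total_teeth(upper_per_side, lower_per_side):
    """Total tooth count from per-side (incisors, canines, premolars, molars)."""
    return 2 * (sum(upper_per_side) + sum(lower_per_side))

print(total_teeth((1, 0, 2, 3), (1, 0, 1, 3)))  # squirrel maximum: 22
print(total_teeth((1, 0, 0, 3), (1, 0, 0, 3)))  # house mouse: 16
```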
In many species, the molars are relatively large, intricately structured, and highly cusped or ridged, and are well equipped to grind food into small particles. The jaw musculature is strong. The lower jaw is thrust forward while gnawing and is pulled backwards during chewing. Gnawing uses the incisors and chewing uses the molars, but due to the cranial anatomy of rodents these two feeding methods cannot be used at the same time and are considered mutually exclusive. Among rodents, the masseter muscle plays a key role in chewing, making up 60-80% of the total mass of the masticatory muscles, reflecting rodents' largely herbivorous diet. Rodent groups differ in the arrangement of the jaw muscles and associated skull structures, both from other mammals and amongst themselves.
The Sciuromorpha, such as the eastern grey squirrel, have a large deep masseter, making them efficient at biting with the incisors. The Myomorpha, such as the brown rat, have enlarged temporalis and masseter muscles, making them able to chew powerfully with their molars. In rodents, the masseter muscles insert behind the eyes and contribute to the "eye boggling" seen during gnawing, in which quick contraction and relaxation of the muscle causes the eyeballs to move up and down. The Hystricomorpha, such as the guinea pig, have larger superficial masseter muscles and smaller deep masseter muscles than rats or squirrels, possibly making them less efficient at biting with the incisors, but their enlarged internal pterygoid muscles may allow them to move the jaw further sideways when chewing. The cheek pouch is a morphological feature used for storing food and is found in particular subgroups of rodents, such as kangaroo rats, hamsters, chipmunks, and gophers, which have two pouches that may extend from the mouth to the front of the shoulders. True mice and rats lack this structure, but their cheeks are elastic due to a high degree of musculature and innervation in the region.
While the largest species, the capybara, can weigh as much as 66 kg (146 lb), most rodents weigh less than 100 g (3.5 oz). Rodents have wide-ranging morphologies, but typically have squat bodies and short limbs. The fore limbs usually have five digits, including an opposable thumb, while the hind limbs have three to five digits. The elbow gives the forearms great flexibility. The majority of species are plantigrade, walking on both the palms and soles of their feet, and have claw-like nails. The nails of burrowing species tend to be long and strong, while arboreal rodents have shorter, sharper nails. Rodent species use a wide variety of methods of locomotion including quadrupedal walking, running, burrowing, climbing, bipedal hopping (kangaroo rats and hopping mice), swimming and even gliding.
Scaly-tailed squirrels and flying squirrels, although not closely related, can both glide from tree to tree using parachute-like membranes that stretch from the fore to the hind limbs. The agouti is fleet-footed and antelope-like, being digitigrade and having hoof-like nails. The majority of rodents have tails, which can be of many shapes and sizes. Some tails are prehensile, as in the Eurasian harvest mouse, and the fur on the tails can vary from bushy to completely bald. The tail is sometimes used for communication, as when beavers slap their tails on the water surface or house mice rattle their tails to indicate alarm. Some species have vestigial tails or no tails at all. In some species, the tail is capable of regeneration if a part is broken off.
Rodents generally have well-developed senses of smell, hearing, and vision. Nocturnal species often have enlarged eyes and some are sensitive to ultraviolet light. Many species have long, sensitive whiskers, or vibrissae, used for touch or "whisking". Whisker action is mostly driven by the brain stem, which is in turn driven by the cortex; however, Legg et al. (1989) found an alternative circuit linking the cortex and whiskers through the cerebellum, and Hemelt & Keller (2008) another through the superior colliculus. Some rodents have cheek pouches, which may be lined with fur and can be turned inside out for cleaning. In many species, the tongue cannot reach past the incisors. Rodents have efficient digestive systems, absorbing nearly 80% of ingested energy. When eating cellulose, the food is softened in the stomach and passed to the cecum, where bacteria reduce it to its carbohydrate elements. The rodent then practices coprophagy, eating its own fecal pellets so the nutrients can be absorbed by the gut; the pellets produced after this second passage are hard and dry. Horn et al. (2013) found that rodents entirely lack the ability to vomit. In many species, the penis contains a bone, the baculum; the testes can be located either abdominally or at the groin.
Sexual dimorphism occurs in many rodent species. In some rodents, males are larger than females, while in others the reverse is true. Male-bias sexual dimorphism is typical for ground squirrels, kangaroo rats, solitary mole rats and pocket gophers; it likely developed due to sexual selection and greater male–male combat. Female-bias sexual dimorphism exists among chipmunks and jumping mice. It is not understood why this pattern occurs, but in the case of yellow-pine chipmunks, males may have selected larger females due to their greater reproductive success. In some species, such as voles, sexual dimorphism can vary from population to population. In bank voles, females are typically larger than males, but male-bias sexual dimorphism occurs in alpine populations, possibly because of the lack of predators and greater competition between males.
Distribution and habitat
One of the most widespread groups of mammals, rodents can be found on every continent except Antarctica. They are the only terrestrial placental mammals to have colonized Australia and New Guinea without human intervention. Humans have also allowed the animals to spread to many remote oceanic islands (e.g., the Polynesian rat). Rodents have adapted to almost every terrestrial habitat, from cold tundra (where they can live under snow) to hot deserts.
Some species such as tree squirrels and New World porcupines are arboreal, while some, such as gophers, tuco-tucos, and mole rats, live almost completely underground, where they build complex burrow systems. Others dwell on the surface of the ground, but may have a burrow into which they can retreat. Beavers and muskrats are known for being semiaquatic, but the rodent best adapted for aquatic life is probably the earless water rat from New Guinea. Rodents have also thrived in human-created environments such as agricultural and urban areas.
Though some species are common pests for humans, rodents also play important ecological roles. Some rodents are considered keystone species and ecosystem engineers in their respective habitats. In the Great Plains of North America, the burrowing activities of prairie dogs play important roles in soil aeration and nutrient redistribution, raising the organic content of the soil and increasing the absorption of water. They maintain these grassland habitats, and some large herbivores such as bison and pronghorn prefer to graze near prairie dog colonies due to the increased nutritional quality of forage.
Extirpation of prairie dogs can also contribute to regional and local biodiversity loss, increased seed depredation, and the establishment and spread of invasive shrubs. Burrowing rodents may eat the fruiting bodies of fungi and spread spores through their feces, thereby allowing the fungi to disperse and form symbiotic relationships with the roots of plants (which usually cannot thrive without them). As such, these rodents may play a role in maintaining healthy forests.
In many temperate regions, beavers play an essential hydrological role. When building their dams and lodges, beavers alter the paths of streams and rivers and allow for the creation of extensive wetland habitats. One study found that engineering by beavers leads to a 33 percent increase in the number of herbaceous plant species in riparian areas. Another study found that beavers increase wild salmon populations. At the same time, the wide distribution of many rodents brings them into frequent conflict with humans, and some species are regarded as pests.
Behavior and life history
Feeding
Most rodents are herbivorous, feeding exclusively on plant material such as seeds, stems, leaves, flowers, and roots. Some are omnivorous and a few are predators. The field vole is a typical herbivorous rodent and feeds on grasses, herbs, root tubers, moss, and other vegetation, and gnaws on bark during the winter. It occasionally eats invertebrates such as insect larvae. The plains pocket gopher eats plant material found underground during tunneling, and also collects grasses, roots, and tubers in its cheek pouches and caches them in underground larder chambers.
The Texas pocket gopher avoids emerging onto the surface to feed by seizing the roots of plants with its jaws and pulling them downwards into its burrow. It also practices coprophagy. The African pouched rat forages on the surface, gathering anything that might be edible into its capacious cheek pouches until its face bulges out sideways. It then returns to its burrow to sort through the material it has gathered and eats the nutritious items.
Agouti species are one of the few animal groups that can break open the large capsules of the Brazil nut fruit. Too many seeds are inside to be consumed in one meal, so the agouti carries some off and caches them. This helps dispersal of the seeds as any that the agouti fails to retrieve are distant from the parent tree when they germinate. Other nut-bearing trees tend to bear a glut of fruits in the autumn. These are too numerous to be eaten in one meal and squirrels gather and store the surplus in crevices and hollow trees. In desert regions, seeds are often available only for short periods. The kangaroo rat collects all it can find and stores them in larder chambers in its burrow.
A strategy for dealing with seasonal plenty is to eat as much as possible and store the surplus nutrients as fat. Marmots do this, and may be 50% heavier in the autumn than in the spring. They rely on their fat reserves during their long winter hibernation. Beavers feed on the leaves, buds, and inner bark of growing trees, as well as aquatic plants. They store food for winter use by felling small trees and leafy branches in the autumn and immersing them in their pond, sticking the ends into the mud to anchor them. Here, they can access their food supply underwater even when their pond is frozen over.
Although rodents have been regarded traditionally as herbivores, most small rodents opportunistically include insects, worms, fungi, fish, or meat in their diets and a few have become specialized to rely on a diet of animal matter. A functional-morphological study of the rodent tooth system supports the idea that primitive rodents were omnivores rather than herbivores. Studies of the literature show that numerous members of the Sciuromorpha and Myomorpha, and a few members of the Hystricomorpha, have either included animal matter in their diets or been prepared to eat such food when offered it in captivity. Examination of the stomach contents of the North American white-footed mouse, normally considered to be herbivorous, showed 34% animal matter.
More specialized carnivores include the shrewlike rats of the Philippines, which feed on insects and soft-bodied invertebrates, and the rakali or Australian water-rat, which devours aquatic insects, fish, crustaceans, mussels, snails, frogs, birds' eggs, and water birds. The grasshopper mouse from dry regions of North America feeds on insects, scorpions, and other small mice, and only a small part of its diet is plant material. It has a chunky body with short legs and tail, but is agile and can easily overpower prey as large as itself.
Social behavior
Rodents exhibit a wide range of types of social behavior ranging from the mammalian caste system of the naked mole-rat, the extensive "town" of the colonial prairie dog, through family groups to the independent, solitary life of the edible dormouse. Adult dormice may have overlapping feeding ranges, but they live in individual nests and feed separately, coming together briefly in the breeding season to mate. The pocket gopher is also a solitary animal outside the breeding season, each individual digging a complex tunnel system and maintaining a territory.
Larger rodents tend to live in family units where parents and their offspring live together until the young disperse. Beavers live in extended family units typically with a pair of adults, this year's kits, the previous year's offspring, and sometimes older young. Brown rats usually live in small colonies with up to six females sharing a burrow and one male defending a territory around the burrow. At high population densities, this system breaks down and males show a hierarchical system of dominance with overlapping ranges. Female offspring remain in the colony while male young disperse. The prairie vole is monogamous and forms a lifelong pair bond. Outside the breeding season, prairie voles live with others in small colonies. A male is not aggressive towards other males until he has mated, after which time he defends a territory, a female, and a nest against other males. The pair huddles together, grooms one another, and shares nesting and pup-raising responsibilities.
Among the most social of rodents are the ground squirrels, which typically form colonies based on female kinship, with males dispersing after weaning and becoming nomadic as adults. Cooperation in ground squirrels varies between species and typically includes making alarm calls, defending territories, sharing food, protecting nesting areas, and preventing infanticide. The black-tailed prairie dog forms large towns that may cover many hectares. The burrows do not interconnect, but are excavated and occupied by territorial family groups known as coteries. A coterie often consists of an adult male, three or four adult females, several nonbreeding yearlings, and the current year's offspring. Individuals within coteries are friendly with each other, but hostile towards outsiders.
Perhaps the most extreme examples of colonial behavior in rodents are the eusocial naked mole-rat and Damaraland mole-rat. The naked mole-rat lives completely underground and can form colonies of up to 80 individuals. Only one female and up to three males in the colony reproduce, while the rest of the members are smaller and sterile, and function as workers. Some individuals are of intermediate size. They help with the rearing of the young and can take the place of a reproductive if one dies. The Damaraland mole-rat is characterized by having a single reproductively active male and female in a colony where the remaining animals are not truly sterile, but become fertile only if they establish a colony of their own. The naked mole-rat has a particularly long life-span for a small rodent, about 30 years, and the basis for this longevity has been investigated. Naked mole-rats express DNA repair genes, including core genes in several DNA repair pathways, at a higher level than shorter-lived mice, and thus it was suggested that DNA repair acts as a longevity assurance system.
Communication
Olfactory
Rodents use scent marking in many social contexts including inter- and intra-species communication, the marking of trails and the establishment of territories. Their urine provides genetic information about individuals including the species, the sex and individual identity, and metabolic information on dominance, reproductive status and health. Compounds derived from the major histocompatibility complex (MHC) are bound to several urinary proteins. The odor of a predator depresses scent-marking behavior.
Rodents are able to recognize close relatives by smell and this allows them to show nepotism (preferential behavior toward their kin) and also avoid inbreeding. This kin recognition is by olfactory cues from urine, feces and glandular secretions. The main assessment may involve the MHC, where the degree of relatedness of two individuals is correlated to the MHC genes they have in common. In non-kin communication, where more permanent odor markers are required, as at territorial borders, then non-volatile major urinary proteins (MUPs), which function as pheromone transporters, may also be used. MUPs may also signal individual identity, with each male house mouse (Mus musculus) excreting urine containing about a dozen genetically encoded MUPs.
House mice deposit urine, which contains pheromones, for territorial marking, individual and group recognition, and social organization. Territorial beavers and red squirrels investigate and become familiar with the scents of their neighbors and respond less aggressively to intrusions by them than to those made by non-territorial "floaters" or strangers. This is known as the "dear enemy effect".
Auditory
Many rodent species, particularly those that are diurnal and social, have a wide range of alarm calls that are emitted when they perceive threats. There are both direct and indirect benefits of doing this. A potential predator may stop when it knows it has been detected, or an alarm call can allow conspecifics or related individuals to take evasive action. Several species, for example prairie dogs, have complex anti-predator alarm call systems. These species may have different calls for different predators (e.g. aerial predators or ground-based predators) and each call contains information about the nature of the precise threat. The urgency of the threat is also conveyed by the acoustic properties of the call.
Social rodents have a wider range of vocalizations than do solitary species. Fifteen different call-types have been recognized in adult Kataba mole rats and four in juveniles. Similarly, the common degu, another social, burrowing rodent, exhibits a wide array of communication methods and has an elaborate vocal repertoire comprising fifteen different categories of sound. Ultrasonic calls play a part in social communication between dormice and are used when the individuals are out of sight of each other.
House mice use both audible and ultrasonic calls in a variety of contexts. Audible vocalizations can often be heard during agonistic or aggressive encounters, whereas ultrasound is used in sexual communication and also by pups when they have fallen out of the nest.
Laboratory rats (which are brown rats, Rattus norvegicus) emit short, high frequency, ultrasonic vocalizations during purportedly pleasurable experiences such as rough-and-tumble play, when anticipating routine doses of morphine, during mating, and when tickled. The vocalization, described as a distinct "chirping", has been likened to laughter, and is interpreted as an expectation of something rewarding. In clinical studies, the chirping is associated with positive emotional feelings, and social bonding occurs with the tickler, resulting in the rats becoming conditioned to seek the tickling. However, as the rats age, the tendency to chirp declines. Like most rat vocalizations, the chirping is at frequencies too high for humans to hear without special equipment, so bat detectors have been used for this purpose.
Visual
Rodents, like all placental mammals except primates, have just two types of light receptive cones in their retina, a short wavelength "blue-UV" type and a middle wavelength "green" type. They are therefore classified as dichromats; however, they are visually sensitive into the ultraviolet (UV) spectrum and therefore can see light that humans cannot. The functions of this UV sensitivity are not always clear. In degus, for example, the belly reflects more UV light than the back. Therefore, when a degu stands up on its hind legs, which it does when alarmed, it exposes its belly to other degus and ultraviolet vision may serve a purpose in communicating the alarm. When it stands on all fours, its low UV-reflectance back could help make the degu less visible to predators. Ultraviolet light is abundant during the day but not at night. There is a large increase in the ratio of ultraviolet to visible light in the morning and evening twilight hours. Many rodents are active during twilight hours (crepuscular activity), and UV-sensitivity would be advantageous at these times. Ultraviolet reflectivity is of dubious value for nocturnal rodents.
The urine of many rodents (e.g. voles, degus, mice, rats) strongly reflects UV light and this may be used in communication by leaving visible as well as olfactory markings. However, the amount of UV that is reflected decreases with time, which in some circumstances can be disadvantageous; the common kestrel can distinguish between old and fresh rodent trails and has greater success hunting over more recently marked routes.
Tactile
Vibrations can provide cues to conspecifics about specific behaviors being performed, predator warning and avoidance, herd or group maintenance, and courtship. The Middle East blind mole rat was the first mammal for which seismic communication was documented. These fossorial rodents bang their head against the walls of their tunnels. This behavior was initially interpreted as part of their tunnel building behavior, but it was eventually realized that they generate temporally patterned seismic signals for long-distance communication with neighboring mole rats.
Footdrumming is used widely as a predator warning or defensive action. It is used primarily by fossorial or semi-fossorial rodents. The banner-tailed kangaroo rat produces several complex footdrumming patterns in a number of different contexts, one of which is when it encounters a snake. The footdrumming may alert nearby offspring but most likely conveys that the rat is too alert for a successful attack, thus preventing the snake's predatory pursuit. Several studies have indicated intentional use of ground vibrations as a means of intra-specific communication during courtship among the Cape mole rat. Footdrumming has been reported to be involved in male-male competition; the dominant male indicates its resource holding potential by drumming, thus minimizing physical contact with potential rivals.
Mating strategies
Some species of rodent are monogamous, with an adult male and female forming a lasting pair bond. Monogamy can come in two forms; obligate and facultative. In obligate monogamy, both parents care for the offspring and play an important part in their survival. This occurs in species such as California mice, oldfield mice, Malagasy giant rats and beavers. In these species, males usually mate only with their partners. In addition to increased care for young, obligate monogamy can also be beneficial to the adult male as it decreases the chances of never finding a mate or mating with an infertile female. In facultative monogamy, the males do not provide direct parental care and stay with one female because they cannot access others due to being spatially dispersed. Prairie voles appear to be an example of this form of monogamy, with males guarding and defending females within their vicinity.
In polygynous species, males will try to monopolize and mate with multiple females. As with monogamy, polygyny in rodents can come in two forms; defense and non-defense. Defense polygyny involves males controlling territories that contain resources that attract females. This occurs in ground squirrels like yellow-bellied marmots, California ground squirrels, Columbian ground squirrels and Richardson's ground squirrels. Males with territories are known as "resident" males and the females that live within the territories are known as "resident" females. In the case of marmots, resident males do not appear to ever lose their territories and always win encounters with invading males. Some species are also known to directly defend their resident females and the ensuing fights can lead to severe wounding. In species with non-defense polygyny, males are not territorial and wander widely in search of females to monopolize. These males establish dominance hierarchies, with the high-ranking males having access to the most females. This occurs in species like Belding's ground squirrels and some tree squirrel species.
Promiscuity, in which both males and females mate with multiple partners, also occurs in rodents. In species such as the white-footed mouse, females give birth to litters with multiple paternities. Promiscuity leads to increased sperm competition and males tend to have larger testicles. In the Cape ground squirrel, the male's testes can be 20 percent of its head-body length. Several rodent species have flexible mating systems that can vary between monogamy, polygyny and promiscuity.
Female rodents play an active role in choosing their mates. Factors that contribute to female preference may include the size, dominance and spatial ability of the male. In the eusocial naked mole rats, a single female monopolizes mating from at least three males. Reproductively active female naked mole-rats tend to associate with unfamiliar males (generally non-kin), whereas females that are reproductively inactive do not tend to discriminate. The preference of reproductively active females for unfamiliar males is thought to be an adaptation for inbreeding avoidance, since inbreeding ordinarily leads to the expression of recessive deleterious alleles.
In most rodent species, such as brown rats and house mice, ovulation occurs on a regular cycle while in others, such as voles, it is induced by mating. During copulation, males of some rodent species deposit a mating plug in the female's genital opening, both to prevent sperm leakage and to protect against other males inseminating the female. Females can remove the plug and may do so either immediately or after several hours.
Metabolism of thyroid hormones and iodine in the mediobasal hypothalamus changes in response to photoperiod, and thyroid hormones in turn induce reproductive changes. This has been demonstrated in Siberian hamsters (Watanabe et al. 2004, 2007; Barrett et al. 2007; Freeman et al. 2007; Herwig et al. 2009), Syrian hamsters (Revel et al. 2006; Yasuo et al. 2007), rats (Yasuo et al. 2007; Ross et al. 2011) and mice (Ono et al. 2008).
Birth and parenting
Rodents may be born either altricial (blind, hairless and relatively underdeveloped) or precocial (mostly furred, eyes open and fairly developed) depending on the species. The altricial state is typical for squirrels and mice, while the precocial state usually occurs in species like guinea pigs and porcupines. Females with altricial young typically build elaborate nests before they give birth and maintain them until their offspring are weaned. The female gives birth sitting or lying down and the young emerge in the direction she is facing. The newborns first venture out of the nest a few days after they have opened their eyes and initially keep returning regularly. As they get older and more developed, they visit the nest less often and leave permanently when weaned.
In precocial species, the mothers invest little in nest building and some do not build nests at all. The female gives birth standing and the young emerge behind her. Mothers of these species maintain contact with their highly mobile young with maternal contact calls. Though relatively independent and weaned within days, precocial young may continue to nurse and be groomed by their mothers. Rodent litter sizes also vary and females with smaller litters spend more time in the nest than those with larger litters.
Mother rodents provide both direct parental care, such as nursing, grooming, retrieving and huddling, and indirect care, such as food caching, nest building and protection, to their offspring. In many social species, young may be cared for by individuals other than their parents, a practice known as alloparenting or cooperative breeding. This is known to occur in black-tailed prairie dogs and Belding's ground squirrels, where mothers have communal nests and nurse unrelated young along with their own. There is some question as to whether these mothers can distinguish which young are theirs. In the Patagonian mara, young are also placed in communal warrens, but mothers do not permit youngsters other than their own to nurse.
Infanticide exists in numerous rodent species and may be practiced by adult conspecifics of either sex. Several reasons have been proposed for this behavior, including nutritional stress, resource competition, avoiding misdirecting parental care and, in the case of males, attempting to make the mother sexually receptive. The latter reason is well supported in primates and lions but less so in rodents. Infanticide appears to be widespread in black-tailed prairie dogs, including infanticide from invading males and immigrant females, as well as occasional cannibalism of an individual's own offspring. To protect against infanticide from other adults, female rodents may employ avoidance or direct aggression against potential perpetrators, multiple mating, territoriality or early termination of pregnancy. Feticide can also occur among rodents; in alpine marmots, dominant females tend to suppress the reproduction of subordinates by being antagonistic towards them while they are pregnant. The resulting stress causes the fetuses to abort.
Intelligence
Rodents have advanced cognitive abilities. They can quickly learn to avoid poisoned baits, which makes them difficult pests to deal with. Guinea pigs can learn and remember complex pathways to food. Squirrels and kangaroo rats are able to locate caches of food by spatial memory, rather than just by smell.
Because laboratory mice (house mice) and rats (brown rats) are widely used as scientific models to further our understanding of biology, a great deal has come to be known about their cognitive capacities. Brown rats exhibit cognitive bias, where information processing is biased by whether they are in a positive or negative affective state. For example, laboratory rats trained to respond to a specific tone by pressing a lever to receive a reward, and to press another lever in response to a different tone so as to avoid receiving an electric shock, are more likely to respond to an intermediate tone by choosing the reward lever if they have just been tickled (something they enjoy), indicating "a link between the directly measured positive affective state and decision making under uncertainty in an animal model."
Laboratory (brown) rats may have the capacity for metacognition—to consider their own learning and then make decisions based on what they know, or do not know, as indicated by choices they make apparently trading off difficulty of tasks and expected rewards, making them the first animals other than primates known to have this capacity, but these findings are disputed, since the rats may have been following simple operant conditioning principles, or a behavioral economic model. Brown rats use social learning in a wide range of situations, but perhaps especially so in acquiring food preferences.
Classification and evolution
Evolutionary history
Dentition is the key feature by which fossil rodents are recognized and the earliest record of such mammals comes from the Paleocene, shortly after the extinction of the non-avian dinosaurs some 66 million years ago. These fossils are found in Laurasia, the supercontinent composed of modern-day North America, Europe, and Asia. The divergence of Glires, a clade consisting of rodents and lagomorphs (rabbits, hares and pikas), from other placental mammals occurred within a few million years after the Cretaceous-Paleogene boundary; rodents and lagomorphs then radiated during the Cenozoic. Some molecular clock data suggest modern rodents (members of the order Rodentia) had appeared by the late Cretaceous, although other molecular divergence estimations are in agreement with the fossil record.
Rodents are thought to have evolved in Asia, where local multituberculate faunas were severely affected by the Cretaceous–Paleogene extinction event and never fully recovered, unlike their North American and European relatives. In the resulting ecological vacuum, rodents and other Glires were able to evolve and diversify, taking the niches left by extinct multituberculates. The correlation between the spread of rodents and the demise of multituberculates is a controversial topic, not fully resolved. American and European multituberculate assemblages do decline in diversity in correlation with the introduction of rodents in these areas, but the remaining Asian multituberculates co-existed with rodents with no observable replacement taking place, and ultimately both clades co-existed for at least 15 million years.
The history of the colonization of the world's continents by rodents is complex. The movements of the large superfamily Muroidea (including hamsters, gerbils, true mice and rats) may have involved up to seven colonizations of Africa, five of North America, four of Southeast Asia, two of South America and up to ten of Eurasia.
During the Eocene, rodents began to diversify. Beavers appeared in Eurasia in the late Eocene before spreading to North America in the late Miocene. Late in the Eocene, hystricognaths invaded Africa, most probably having originated in Asia at least 39.5 million years ago. From Africa, fossil evidence shows that some hystricognaths (caviomorphs) colonized South America, which was an isolated continent at the time, evidently making use of ocean currents to cross the Atlantic on floating debris. Caviomorphs had arrived in South America by 41 million years ago (implying a date at least as early as this for hystricognaths in Africa), and had reached the Greater Antilles by the early Oligocene, suggesting that they must have dispersed rapidly across South America.
Nesomyid rodents are thought to have rafted from Africa to Madagascar 20–24 million years ago. All 27 species of native Malagasy rodents appear to be descendants of a single colonization event.
By 20 million years ago, fossils recognizably belonging to current families such as Muridae had emerged. By the Miocene, when Africa had collided with Asia, African rodents such as the porcupine began to spread into Eurasia. Some fossil species were very large in comparison to modern rodents; they included the giant beaver, Castoroides ohioensis. The largest known rodent was Josephoartigasia monesi, a pacarana with an estimated body length of 3 m (10 ft).
The first rodents arrived in Australia via Indonesia around 5 million years ago. Although marsupials are the most prominent mammals in Australia, many rodents, all belonging to the subfamily Murinae, are among the continent's mammal species. There are about fifty species of 'old endemics', the first wave of rodents to colonize the country in the Miocene and early Pliocene, and eight true rat (Rattus) species of 'new endemics', arriving in a subsequent wave in the late Pliocene or early Pleistocene. The earliest fossil rodents in Australia have a maximum age of 4.5 million years, and molecular data is consistent with the colonization of New Guinea from the west during the late Miocene or early Pliocene followed by rapid diversification. A further wave of adaptive radiation occurred after one or more colonizations of Australia some 2 to 3 million years later.
Rodents participated in the Great American Interchange that resulted from the joining of the Americas by formation of the Isthmus of Panama, around 3 million years ago in the Piacenzian age. In this exchange, a small number of species such as the New World porcupines (Erethizontidae) headed north. However, the main southward invasion of sigmodontines preceded formation of the land bridge by at least several million years, probably occurring via rafting. Sigmodontines diversified explosively once in South America, although some degree of diversification may have already occurred in Central America before the colonization.
Standard classification
The use of the order name "Rodentia" is attributed to the English traveler and naturalist Thomas Edward Bowdich (1821). The Modern Latin word is derived from rodens, present participle of rodere – "to gnaw", "eat away". The hares, rabbits and pikas (order Lagomorpha) have continuously growing incisors, as do rodents, and were at one time included in the order. However, they have an additional pair of incisors in the upper jaw and the two orders have quite separate evolutionary histories. The phylogeny of the rodents places them in the clades Glires, Euarchontoglires and Boreoeutheria. The cladogram below shows the inner and outer relations of Rodentia based on a 2012 attempt by Wu et al. to align the molecular clock with paleontological data:
The living rodent families listed below are based on the study by Fabre et al. (2012).
The order Rodentia may be divided into suborders, infraorders, superfamilies and families. There is a great deal of parallelism and convergence among rodents caused by the fact that they have tended to evolve to fill largely similar niches. This parallel evolution includes not only the structure of the teeth, but also the infraorbital region of the skull (below the eye socket) and makes classification difficult as similar traits may not be due to common ancestry. Brandt (1855) was the first to propose dividing Rodentia into three suborders, Sciuromorpha, Hystricomorpha and Myomorpha, based on the development of certain muscles in the jaw and this system was widely accepted. Schlosser (1884) performed a comprehensive review of rodent fossils, mainly using the cheek teeth, and found that they fitted into the classical system, but Tullborg (1899) proposed just two sub-orders, Sciurognathi and Hystricognathi. These were based on the degree of inflection of the lower jaw and were to be further subdivided into Sciuromorpha, Myomorpha, Hystricomorpha and Bathyergomorpha. Matthew (1910) created a phylogenetic tree of New World rodents but did not include the more problematic Old World species. Further attempts at classification continued without agreement, with some authors adopting the classical three suborder system and others Tullborg's two suborders.
These disagreements remain unresolved, and molecular studies have not fully resolved the situation, though they have confirmed the monophyly of the group and that the clade has descended from a common Paleocene ancestor. Carleton and Musser (2005) in Mammal Species of the World have provisionally adopted a five-suborder system: Sciuromorpha, Castorimorpha, Myomorpha, Anomaluromorpha, and Hystricomorpha. As of 2021 the American Society of Mammalogists recognizes 34 recent families containing more than 481 genera and 2277 species.
Order Rodentia (from Latin, rodere, to gnaw)
Suborder Anomaluromorpha
Family Anomaluridae: scaly-tailed squirrels
Family Pedetidae: springhares
Family Zenkerellidae: Cameroon scaly-tail
Suborder Castorimorpha
Superfamily Castoroidea
Family Castoridae: beavers
Superfamily Geomyoidea
Family Geomyidae: pocket gophers (true gophers)
Family Heteromyidae: kangaroo rats, kangaroo mice
Suborder Hystricomorpha
Infraorder Ctenodactylomorphi
Family Ctenodactylidae: gundis
Family Diatomyidae: Laotian rock rat
Infraorder Hystricognathi
Parvorder Phiomorpha
Family Bathyergidae: African mole rats
Family Heterocephalidae: naked mole-rat
Family Hystricidae: Old World porcupines
Family Petromuridae: dassie rat
Family Thryonomyidae: cane rats
Parvorder Caviomorpha
Superfamily Erethizontoidea
Family Erethizontidae: New World porcupines
Superfamily Chinchilloidea
Family Chinchillidae: chinchillas, viscachas
Family Dinomyidae: pacaranas
Superfamily Cavioidea
Family Caviidae: cavies, including guinea pigs and the capybara
Family Dasyproctidae: agoutis
Family Cuniculidae: pacas
Superfamily Octodontoidea
Family Abrocomidae: chinchilla rats
Family Ctenomyidae: tuco-tucos
Family Echimyidae: spiny rats, hutias, and nutria
Family Octodontidae: octodonts
Suborder Myomorpha
Superfamily Dipodoidea
Family Dipodidae: jerboas
Family Sminthidae: birch mice
Family Zapodidae: jumping mice
Superfamily Muroidea
Family Calomyscidae: mouse-like hamsters
Family Cricetidae: hamsters, New World rats and mice, muskrats, voles, lemmings
Family Muridae: true mice and rats, gerbils, spiny mice, crested rat
Family Nesomyidae: climbing mice, rock mice, white-tailed rat, Malagasy rats and mice
Family Platacanthomyidae: spiny dormice
Family Spalacidae: mole rats, bamboo rats, zokors
Suborder Sciuromorpha
Family Aplodontiidae: mountain beaver
Family Gliridae (also Myoxidae, Muscardinidae): dormice
Family Sciuridae: squirrels, including chipmunks, prairie dogs, marmots
Interaction with humans
Conservation
While rodents are not the most seriously threatened order of mammals, there are 168 species in 126 genera that are said to warrant conservation attention in the face of limited appreciation by the public. Since 76 percent of rodent genera contain only one species, much phylogenetic diversity could be lost with a comparatively small number of extinctions. In the absence of more detailed knowledge of species at risk and accurate taxonomy, conservation must be based mainly on higher taxa (such as families rather than species) and geographical hot spots. Several species of rice rat have become extinct since the 19th century, probably through habitat loss and the introduction of alien species. In Colombia, the brown hairy dwarf porcupine was recorded from only two mountain localities in the 1920s, while the red crested soft-furred spiny rat is known only from its type locality on the Caribbean coast, so these species are considered vulnerable. The IUCN Species Survival Commission writes "We can safely conclude that many South American rodents are seriously threatened, mainly by environmental disturbance and intensive hunting".
The "three now cosmopolitan commensal rodent pest species" (the brown rat, the black rat and the house mouse) have been dispersed in association with humans, partly on sailing ships in the Age of Exploration; together with a fourth species in the Pacific, the Polynesian rat (Rattus exulans), they have severely damaged island biotas around the world. For example, when the black rat reached Lord Howe Island in 1918, over 40 percent of the terrestrial bird species of the island, including the Lord Howe fantail, became extinct within ten years. Similar destruction has been seen on Midway Island (1943) and Big South Cape Island (1962). Conservation projects can, with careful planning, completely eradicate these pest rodents from islands using an anticoagulant rodenticide such as brodifacoum. This approach has been successful on the island of Lundy in the United Kingdom, where the eradication of an estimated 40,000 brown rats is giving populations of Manx shearwater and Atlantic puffin a chance to recover from near-extinction.
Rodents have also been susceptible to climate change, especially species living on low-lying islands. The Bramble Cay melomys, which lived in the northernmost point of land of Australia, was the first mammal species to be declared extinct as a consequence of human-caused climate change.
Exploitation
Fur
Humanity has long used animal skins for clothing, as the leather is durable and the fur provides extra insulation. The native people of North America made much use of beaver pelts, tanning and sewing them together to make robes. Europeans appreciated the quality of these and the North American fur trade developed and became of prime importance to early settlers. In Europe, the soft underfur known as "beaver wool" was found to be ideal for felting and was made into beaver hats and trimming for clothing. Later, the coypu took over as a cheaper source of fur for felting and was farmed extensively in America and Europe; however, fashions changed, new materials became available and this area of the animal fur industry declined. The chinchilla has a soft and silky coat and the demand for its fur was so high that it was nearly wiped out in the wild before farming took over as the main source of pelts. The quills and guardhairs of porcupines are used for traditional decorative clothing. For example, their guardhairs are used in the creation of the Native American "porky roach" headdress. The main quills may be dyed, and then applied in combination with thread to embellish leather accessories such as knife sheaths and leather bags. Lakota women would harvest the quills for quillwork by throwing a blanket over a porcupine and retrieving the quills it left stuck in the blanket.
Consumption
At least 89 species of rodent, mostly Hystricomorpha such as guinea pigs, agoutis and capybaras, are eaten by humans; in 1985, there were at least 42 different societies in which people eat rats. Guinea pigs were first raised for food around 2500 B.C. and by 1500 B.C. had become the main source of meat for the Inca Empire. Dormice were raised by the Romans in special pots called "gliraria", or in large outdoor enclosures, where they were fattened on walnuts, chestnuts, and acorns. The dormice were also caught from the wild in autumn when they were fattest, and either roasted and dipped into honey or baked while stuffed with a mixture of pork, pine nuts, and other flavorings. Researchers found that in Amazonia, where large mammals were scarce, pacas and common agoutis accounted for around 40 percent of the annual game taken by the indigenous people, but in forested areas where larger mammals were abundant, these rodents constituted only about 3 percent of the take.
Guinea pigs are used in the cuisine of Cuzco, Peru, in dishes such as cuy al horno, baked guinea pig. The traditional Andean stove, known as a qoncha or a fogón, is made from mud and clay reinforced with straw and hair from animals such as guinea pigs. In Peru, there are at any time 20 million domestic guinea pigs, which annually produce 64 million edible carcasses. This animal is an excellent food source since the flesh is 19% protein. In the United States, mostly squirrels, but also muskrats, porcupines, and groundhogs are eaten by humans. The Navajo people ate prairie dog baked in mud, while the Paiute ate gophers, squirrels, and rats.
Animal testing
Rodents are used widely as model organisms in animal testing. Albino mutant rats were first used for research in 1828 and later became the first animal domesticated for purely scientific purposes. Nowadays, the house mouse is the most commonly used laboratory rodent, and in 1979 it was estimated that fifty million were used annually worldwide. They are favored because of their small size, fertility, short gestation period and ease of handling and because they are susceptible to many of the conditions and infections that afflict humans. They are used in research into genetics, developmental biology, cell biology, oncology and immunology. Guinea pigs were popular laboratory animals until the late 20th century; about 2.5 million guinea pigs were used annually in the United States for research in the 1960s, but that total decreased to about 375,000 by the mid-1990s. In 2007, they constituted about 2% of all laboratory animals. Guinea pigs played a major role in the establishment of germ theory in the late 19th century, through the experiments of Louis Pasteur, Émile Roux, and Robert Koch. They have been launched into orbital space flight several times—first by the USSR on the Sputnik 9 biosatellite of 9 March 1961, with a successful recovery. The naked mole rat is the only known mammal that is poikilothermic; it is used in studies on thermoregulation. It is also unusual in not producing the neurotransmitter substance P, a fact which researchers find useful in studies on pain.
Rodents have sensitive olfactory abilities, which have been used by humans to detect odors or chemicals of interest. The Gambian pouched rat is able to detect tuberculosis bacilli with a sensitivity of up to 86.6%, and specificity (detecting the absence of the bacilli) of over 93%; the same species has been trained to detect land mines. Rats have been studied for possible use in hazardous situations such as in disaster zones. They can be trained to respond to commands, which may be given remotely, and even persuaded to venture into brightly lit areas, which rats usually avoid.
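The quoted sensitivity and specificity figures only translate into a probability that a positive result is a true detection once a prevalence is chosen. The short Bayes' rule calculation below is illustrative only; the 1% prevalence is an assumed figure, not one from the source.

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """Bayes' rule: P(condition present | positive test result)."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Gambian pouched rat TB-screening figures from the text (86.6% sensitivity,
# 93% specificity), with an assumed 1% prevalence for illustration.
ppv = positive_predictive_value(0.866, 0.93, 0.01)
print(round(ppv, 3))  # 0.111
```

Even with high sensitivity and specificity, a low assumed prevalence means only about one positive in nine would be a true detection, which is why such screening is usually followed by confirmatory testing.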
As pets
Rodents including guinea pigs, mice, rats, hamsters, gerbils, chinchillas, degus and chipmunks make convenient pets able to live in small spaces, each species with its own qualities. Most are normally kept in cages of suitable sizes and have varied requirements for space and social interaction. If handled from a young age, they are usually docile and do not bite. Guinea pigs have a long lifespan and need a large cage. Rats also need plenty of space and can become very tame, can learn tricks and seem to enjoy human companionship. Mice are short-lived but take up very little space. Hamsters are solitary but tend to be nocturnal. They have interesting behaviors, but unless handled regularly they may be defensive. Gerbils are not usually aggressive, rarely bite and are sociable animals that enjoy the company of humans and their own kind.
As pests and disease vectors
Some rodent species are serious agricultural pests, eating large quantities of food stored by humans. For example, in 2003, the amount of rice lost to mice and rats in Asia was estimated to be enough to feed 200 million people. Most of the damage worldwide is caused by a relatively small number of species, chiefly rats and mice. In Indonesia and Tanzania, rodents reduce crop yields by around fifteen percent, while in some instances in South America losses have reached ninety percent. Across Africa, rodents including Mastomys and Arvicanthis damage cereals, groundnuts, vegetables and cacao. In Asia, rats, mice and species such as Microtus brandti, Meriones unguiculatus and Eospalax baileyi damage crops of rice, sorghum, tubers, vegetables and nuts. In Europe, as well as rats and mice, species of Apodemus, Microtus and in occasional outbreaks Arvicola terrestris cause damage to orchards, vegetables and pasture as well as cereals. In South America, a wider range of rodent species, such as Holochilus, Akodon, Calomys, Oligoryzomys, Phyllotis, Sigmodon and Zygodontomys, damage many crops including sugar cane, fruits, vegetables, and tubers.
Rodents are also significant vectors of disease. The black rat, with the fleas that it carries, plays a primary role in spreading the bacterium Yersinia pestis responsible for bubonic plague, and carries the organisms responsible for typhus, Weil's disease, toxoplasmosis and trichinosis. A number of rodents carry hantaviruses, including the Puumala, Dobrava and Saaremaa viruses, which can infect humans. Rodents also help to transmit diseases including babesiosis, cutaneous leishmaniasis, human granulocytic anaplasmosis, Lyme disease, Omsk hemorrhagic fever, Powassan virus, rickettsialpox, relapsing fever, Rocky Mountain spotted fever, and West Nile virus.
Because rodents are a nuisance and endanger public health, human societies often attempt to control them. Traditionally, this involved poisoning and trapping, methods that were not always safe or effective. More recently, integrated pest management attempts to improve control with a combination of surveys to determine the size and distribution of the pest population, the establishment of tolerance limits (levels of pest activity at which to intervene), interventions, and evaluation of effectiveness based on repeated surveys. Interventions may include education, making and applying laws and regulations, modifying the habitat, changing farming practices, and biological control using pathogens or predators, as well as poisoning and trapping. The use of pathogens such as Salmonella has the drawback that they can infect man and domestic animals, and rodents often become resistant. The use of predators including ferrets, mongooses and monitor lizards has been found unsatisfactory. Domestic and feral cats are able to control rodents effectively, provided the rodent population is not too large. In the UK, two species in particular, the house mouse and the brown rat, are actively controlled to limit damage in growing crops, loss and contamination of stored crops and structural damage to facilities, as well as to comply with the law.
Riemann hypothesis
In mathematics, the Riemann hypothesis is the conjecture that the Riemann zeta function has its zeros only at the negative even integers and complex numbers with real part 1/2. Many consider it to be the most important unsolved problem in pure mathematics. It is of great interest in number theory because it implies results about the distribution of prime numbers. It was proposed by Bernhard Riemann (1859), after whom it is named.
The Riemann hypothesis and some of its generalizations, along with Goldbach's conjecture and the twin prime conjecture, make up Hilbert's eighth problem in David Hilbert's list of twenty-three unsolved problems; it is also one of the Millennium Prize Problems of the Clay Mathematics Institute, which offers US$1 million for a solution to any of them. The name is also used for some closely related analogues, such as the Riemann hypothesis for curves over finite fields.
The Riemann zeta function ζ(s) is a function whose argument s may be any complex number other than 1, and whose values are also complex. It has zeros at the negative even integers; that is, when s is one of −2, −4, −6, .... These are called its trivial zeros. The zeta function is also zero for other values of s, which are called nontrivial zeros. The Riemann hypothesis is concerned with the locations of these nontrivial zeros, and states that:

The real part of every nontrivial zero of the Riemann zeta function is 1/2.
Thus, if the hypothesis is correct, all the nontrivial zeros lie on the critical line consisting of the complex numbers 1/2 + it, where t is a real number and i is the imaginary unit.
Riemann zeta function
The Riemann zeta function is defined for complex s with real part greater than 1 by the absolutely convergent infinite series

ζ(s) = Σ_{n=1}^{∞} 1/n^s = 1/1^s + 1/2^s + 1/3^s + ⋯
Leonhard Euler considered this series in the 1730s for real values of s, in conjunction with his solution to the Basel problem. He also proved that it equals the Euler product

ζ(s) = Π_p (1 − p^{−s})^{−1},
where the infinite product extends over all prime numbers p.
The Riemann hypothesis discusses zeros outside the region of convergence of this series and Euler product. To make sense of the hypothesis, it is necessary to analytically continue the function to obtain a form that is valid for all complex s. Because the zeta function is meromorphic, all choices of how to perform this analytic continuation will lead to the same result, by the identity theorem. A first step in this continuation observes that the series for the zeta function and the Dirichlet eta function satisfy the relation

(1 − 2^{1−s}) ζ(s) = η(s) = Σ_{n=1}^{∞} (−1)^{n+1}/n^s,
within the region of convergence for both series. But the eta function series on the right converges not just when the real part of s is greater than one, but more generally whenever s has positive real part. Thus the zeta function can be redefined as η(s)/(1 − 2^{1−s}), extending it from Re(s) > 1 to the larger domain Re(s) > 0, except for the points where 1 − 2^{1−s} is zero. These are the points s = 1 + 2πin/log 2, where n can be any nonzero integer; the zeta function can be extended to these values too by taking limits, giving a finite value for all values of s with positive real part except the simple pole at s = 1.
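The eta-function continuation described above can be checked numerically. The sketch below is a minimal Python illustration; it assumes P. Borwein's accelerated form of the alternating series (the plain eta series converges far too slowly to be practical near the critical line), evaluates ζ(s) for Re(s) > 0, s ≠ 1, and can be used to confirm that the first nontrivial zero lies on the critical line.

```python
import math

def zeta(s, n=50):
    """Riemann zeta for Re(s) > 0, s != 1, via the eta function:
    zeta(s) = eta(s) / (1 - 2**(1 - s)), with eta evaluated by
    P. Borwein's accelerated alternating series (n terms)."""
    # Chebyshev-derived coefficients d_k = n * sum_{i<=k} (n+i-1)! 4^i / ((n-i)! (2i)!)
    d, acc = [], 0.0
    for i in range(n + 1):
        acc += math.factorial(n + i - 1) * 4**i / (
            math.factorial(n - i) * math.factorial(2 * i))
        d.append(n * acc)
    # Accelerated partial sum of the eta series
    total = 0j
    for k in range(n):
        total += (-1) ** k * (d[k] - d[n]) / (k + 1) ** s
    eta = -total / d[n]
    return eta / (1 - 2 ** (1 - s))

# Known value from the Basel problem: zeta(2) = pi^2 / 6
print(abs(zeta(2) - math.pi**2 / 6))  # close to machine precision
# The first nontrivial zero is at approximately s = 1/2 + 14.1347...i
print(abs(zeta(complex(0.5, 14.134725141734693))))  # very small
```

With n = 50 terms this reproduces known values such as ζ(2) = π²/6 essentially to machine precision; it is a sketch for exploration, not a substitute for rigorous zero verification.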
In the strip 0 < Re(s) < 1 this extension of the zeta function satisfies the functional equation

ζ(s) = 2^s π^{s−1} sin(πs/2) Γ(1 − s) ζ(1 − s).
One may then define ζ(s) for all remaining nonzero complex numbers s (Re(s) ≤ 0 and s ≠ 0) by applying this equation outside the strip, and letting ζ(s) equal the right side of the equation whenever s has non-positive real part (and s ≠ 0).
If s is a negative even integer, then ζ(s) = 0, because the factor sin(πs/2) vanishes; these are the zeta function's trivial zeros. (If s is a positive even integer this argument does not apply because the zeros of the sine function are canceled by the poles of the gamma function as it takes negative integer arguments.)
The value ζ(0) = −1/2 is not determined by the functional equation, but is the limiting value of ζ(s) as s approaches zero. The functional equation also implies that the zeta function has no zeros with negative real part other than the trivial zeros, so all nontrivial zeros lie in the critical strip where s has real part between 0 and 1.
Origin
Riemann's original motivation for studying the zeta function and its zeros was their occurrence in his explicit formula for the number of primes π(x) less than or equal to a given number x, which he published in his 1859 paper "On the Number of Primes Less Than a Given Magnitude". His formula was given in terms of the related function

Π(x) = Σ_{p^n ≤ x} 1/n,
which counts the primes and prime powers up to x, counting a prime power p^n as 1/n. The number of primes can be recovered from this function by using the Möbius inversion formula,

π(x) = Σ_{n=1}^{∞} (μ(n)/n) Π(x^{1/n}),
where μ is the Möbius function. Riemann's formula is then

Π0(x) = li(x) − Σ_ρ li(x^ρ) − log 2 + ∫_x^∞ dt/(t(t^2 − 1) log t),
where the sum is over the nontrivial zeros ρ of the zeta function and where Π0 is a slightly modified version of Π that replaces its value at its points of discontinuity by the average of its upper and lower limits:

Π0(x) = lim_{ε→0⁺} [Π(x − ε) + Π(x + ε)]/2.
The summation in Riemann's formula is not absolutely convergent, but may be evaluated by taking the zeros ρ in order of the absolute value of their imaginary part. The function li occurring in the first term is the (unoffset) logarithmic integral function given by the Cauchy principal value of the divergent integral

li(x) = ∫_0^x dt/log t.
The terms li(x^ρ) involving the zeros of the zeta function need some care in their definition, as li has branch points at 0 and 1; they are defined (for x > 1) by analytic continuation in the complex variable ρ in the region Re(ρ) > 0, i.e. they should be considered as Ei(ρ log x). The other terms also correspond to zeros: the dominant term li(x) comes from the pole at s = 1, considered as a zero of multiplicity −1, and the remaining small terms come from the trivial zeros. For some graphs of the sums of the first few terms of this series, see Riesel & Göhl (1970) or Zagier (1977).
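The counting function Π(x) and the Möbius inversion that recovers π(x) from it can both be verified exactly with a short stdlib-only Python sketch (illustrative, with helper names chosen here, not taken from the article). Exact rational arithmetic makes the inversion identity come out precisely.

```python
import math
from bisect import bisect_right
from fractions import Fraction

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

PRIMES = primes_up_to(100_000)

def prime_pi(x):
    """pi(x): number of primes <= x."""
    return bisect_right(PRIMES, int(x))

def big_pi(x):
    """Riemann's Pi(x) = sum over n >= 1 of pi(x^(1/n)) / n,
    i.e. each prime power p^n <= x counted with weight 1/n."""
    total, n = Fraction(0), 1
    while 2 ** n <= x:
        total += Fraction(prime_pi(round(x ** (1 / n), 9)), n)
        n += 1
    return total

def mobius_up_to(n):
    """Table of the Moebius function mu(1..n)."""
    mu = [1] * (n + 1)
    for p in primes_up_to(n):
        for k in range(p, n + 1, p):
            mu[k] = -mu[k]
        for k in range(p * p, n + 1, p * p):
            mu[k] = 0
    return mu

def pi_from_big_pi(x):
    """Recover pi(x) via Moebius inversion:
    pi(x) = sum_{n>=1} mu(n)/n * Pi(x^(1/n))."""
    mu = mobius_up_to(int(math.log2(x)) + 1)
    total, n = Fraction(0), 1
    while 2 ** n <= x:
        total += Fraction(mu[n], n) * big_pi(round(x ** (1 / n), 9))
        n += 1
    return total

print(big_pi(100))          # 428/15: the 25 primes plus prime-power weights
print(pi_from_big_pi(100))  # exactly 25
```

The rounding of x^(1/n) guards against floating-point roots like 100^(1/2) landing a hair below an integer; everything else is exact.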
This formula says that the zeros of the Riemann zeta function control the oscillations of primes around their "expected" positions. Riemann knew that the non-trivial zeros of the zeta function were symmetrically distributed about the line Re(s) = 1/2, and he knew that all of its non-trivial zeros must lie in the range 0 ≤ Re(s) ≤ 1. He checked that a few of the zeros lay on the critical line with real part 1/2 and suggested that they all do; this is the Riemann hypothesis.
Consequences
The practical uses of the Riemann hypothesis include many propositions known to be true under the Riemann hypothesis, and some that can be shown to be equivalent to the Riemann hypothesis.
Distribution of prime numbers
Riemann's explicit formula for the number of primes less than a given number states that, in terms of a sum over the zeros of the Riemann zeta function, the magnitude of the oscillations of primes around their expected position is controlled by the real parts of the zeros of the zeta function. In particular, the error term in the prime number theorem is closely related to the position of the zeros. For example, if β is the upper bound of the real parts of the zeros, then

π(x) − li(x) = O(x^β log x),

where π(x) is the prime-counting function, li(x) is the logarithmic integral function, log x is the natural logarithm of x, and big O notation is used here. It is already known that 1/2 ≤ β ≤ 1.
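The size of this error term can be checked empirically with a short stdlib-only Python sketch (an illustration, not part of the article): sieve the primes up to 10^5, approximate the offset logarithmic integral by numerical quadrature, and compare the difference with √x log x.

```python
import math

def prime_count(n):
    """pi(n) by sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

def li_offset(x, steps=100000):
    """Offset logarithmic integral Li(x) = integral from 2 to x of dt/log t,
    approximated by the trapezoid rule."""
    h = (x - 2.0) / steps
    total = 0.5 / math.log(2.0) + 0.5 / math.log(x)
    for k in range(1, steps):
        total += 1.0 / math.log(2.0 + k * h)
    return total * h

x = 100_000
err = prime_count(x) - li_offset(x)
print(prime_count(x), round(err, 1))  # the error is a few dozen, far below sqrt(x)*log(x)
```

At x = 10^5 the actual error is roughly −37, comfortably inside the square-root-sized window that the Riemann hypothesis would guarantee.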
Von Koch (1901) proved that the Riemann hypothesis implies the "best possible" bound for the error of the prime number theorem. A precise version of von Koch's result, due to Schoenfeld (1976), says that the Riemann hypothesis implies

|π(x) − li(x)| < (1/(8π)) √x log x for all x ≥ 2657.

Schoenfeld (1976) also showed that the Riemann hypothesis implies

|ψ(x) − x| < (1/(8π)) √x (log x)^2 for all x ≥ 73.2,

where ψ(x) is Chebyshev's second function.
Dudek (2014) proved that the Riemann hypothesis implies that for all x ≥ 2 there is a prime p satisfying

x − (4/π) √x log x < p ≤ x.

The constant 4/π may be reduced to (1 + ε) provided that x is taken to be sufficiently large. This is an explicit version of a theorem of Cramér.
Growth of arithmetic functions
The Riemann hypothesis implies strong bounds on the growth of many other arithmetic functions, in addition to the prime-counting function above.
One example involves the Möbius function μ. The statement that the equation

1/ζ(s) = Σ_{n=1}^∞ μ(n)/n^s

is valid for every s with real part greater than 1/2, with the sum on the right-hand side converging, is equivalent to the Riemann hypothesis. From this we can also conclude that if the Mertens function is defined by

M(x) = Σ_{n ≤ x} μ(n),

then the claim that

M(x) = O(x^(1/2 + ε))

for every positive ε is equivalent to the Riemann hypothesis (J. E. Littlewood, 1912; see for instance paragraph 14.25 in Titchmarsh (1986)). The determinant of the order n Redheffer matrix is equal to M(n), so the Riemann hypothesis can also be stated as a condition on the growth of these determinants. Littlewood's result has been improved several times since then, by Edmund Landau, Edward Charles Titchmarsh, Helmut Maier and Hugh Montgomery, and Kannan Soundararajan. Soundararajan's result is that, conditional on the Riemann hypothesis,

M(x) = O(√x exp((log x)^(1/2) (log log x)^14)).
The Riemann hypothesis puts a rather tight bound on the growth of M, since Odlyzko & te Riele (1985) disproved the slightly stronger Mertens conjecture |M(x)| ≤ √x.
Another closely related result is due to , that the Riemann hypothesis is equivalent to the statement that the Euler characteristic of the simplicial complex determined by the lattice of integers under divisibility is for all (see incidence algebra).
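The identity between M(n) and the Redheffer determinant mentioned above is easy to confirm for small n. The sketch below (stdlib-only Python, illustrative) builds the n × n Redheffer matrix — entry (i, j) is 1 when j = 1 or i divides j — and computes its determinant with exact rational Gaussian elimination.

```python
from fractions import Fraction

def mobius(n):
    """Moebius function by trial division."""
    mu, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor
            mu = -mu
        d += 1
    return -mu if n > 1 else mu

def mertens(n):
    return sum(mobius(k) for k in range(1, n + 1))

def redheffer_det(n):
    """Determinant of the n x n Redheffer matrix
    (A[i][j] = 1 if j == 1 or i divides j), by exact elimination."""
    m = [[Fraction(1 if (j == 0 or (j + 1) % (i + 1) == 0) else 0)
          for j in range(n)] for i in range(n)]
    sign = 1
    for c in range(n):
        piv = next((r for r in range(c, n) if m[r][c]), None)
        if piv is None:
            return Fraction(0)
        if piv != c:
            m[c], m[piv] = m[piv], m[c]
            sign = -sign
        for r in range(c + 1, n):
            f = m[r][c] / m[c][c]
            if f:
                for k in range(c, n):
                    m[r][k] -= f * m[c][k]
    det = Fraction(sign)
    for i in range(n):
        det *= m[i][i]
    return det

print([int(redheffer_det(n)) for n in range(1, 13)])
print([mertens(n) for n in range(1, 13)])  # the two lists agree
```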
The Riemann hypothesis is equivalent to many other conjectures about the rate of growth of other arithmetic functions aside from μ(n). A typical example is Robin's theorem, which states that if σ(n) is the sigma function, given by

σ(n) = Σ_{d | n} d,

then

σ(n) < e^γ n log log n

for all n > 5040 if and only if the Riemann hypothesis is true, where γ is the Euler–Mascheroni constant.
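Robin's inequality can be explored directly: it is known to fail for a finite list of small n, the largest being 5040. The stdlib-only Python sketch below (an empirical check, of course not a test of the hypothesis itself) lists the exceptions below 5041 and confirms the inequality holds for a range above 5040.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def sigma(n):
    """Sum-of-divisors function sigma(n)."""
    s = 0
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            s += d
            if d != n // d:
                s += n // d
    return s

def robin_holds(n):
    """Robin's inequality sigma(n) < e^gamma * n * log log n."""
    return sigma(n) < math.exp(EULER_GAMMA) * n * math.log(math.log(n))

violations = [n for n in range(3, 5041) if not robin_holds(n)]
print(violations[-1])                                 # 5040, the largest exception
print(all(robin_holds(n) for n in range(5041, 30000)))  # True in this range
```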
A related bound was given by Jeffrey Lagarias in 2002, who proved that the Riemann hypothesis is equivalent to the statement that

σ(n) ≤ H_n + exp(H_n) log(H_n)

for every natural number n, where H_n is the nth harmonic number.
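Unlike Robin's criterion, the Lagarias inequality has no exceptional n at all (equality holds at n = 1), which makes it easy to spot-check. A stdlib-only Python sketch (again only an empirical check over a finite range):

```python
import math

def sigma(n):
    """Sum-of-divisors function sigma(n)."""
    s = 0
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            s += d
            if d != n // d:
                s += n // d
    return s

h = 0.0
lagarias_ok = True
for n in range(1, 20001):
    h += 1.0 / n                                # H_n, the nth harmonic number
    bound = h + math.exp(h) * math.log(h)
    if sigma(n) > bound:
        lagarias_ok = False
print(lagarias_ok)  # no counterexample below 20001
```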
The Riemann hypothesis is also true if and only if the inequality
is true for all , where φ(n) is Euler's totient function and 120569# is the product of the first 120569 primes.
Another example was found by Jérôme Franel, and extended by Landau (see ). The Riemann hypothesis is equivalent to several statements showing that the terms of the Farey sequence are fairly regular. One such equivalence is as follows: if Fn is the Farey sequence of order n, beginning with 1/n and up to 1/1, then the claim that for all
is equivalent to the Riemann hypothesis. Here
is the number of terms in the Farey sequence of order n.
For an example from group theory, if g(n) is Landau's function given by the maximal order of elements of the symmetric group Sn of degree n, then showed that the Riemann hypothesis is equivalent to the bound
for all sufficiently large n.
Lindelöf hypothesis and growth of the zeta function
The Riemann hypothesis has various weaker consequences as well; one is the Lindelöf hypothesis on the rate of growth of the zeta function on the critical line, which says that, for any ε > 0,

ζ(1/2 + it) = O(t^ε)

as t → ∞.
The Riemann hypothesis also implies quite sharp bounds for the growth rate of the zeta function in other regions of the critical strip. For example, it implies that
so the growth rate of ζ(1 + it) and its inverse would be known up to a factor of 2.
Large prime gap conjecture
The prime number theorem implies that on average, the gap between the prime p and its successor is log p. However, some gaps between primes may be much larger than the average. Cramér proved that, assuming the Riemann hypothesis, every gap is O(√p log p). This is a case in which even the best bound that can be proved using the Riemann hypothesis is far weaker than what seems true: Cramér's conjecture implies that every gap is O((log p)^2), which, while larger than the average gap, is far smaller than the bound implied by the Riemann hypothesis. Numerical evidence supports Cramér's conjecture.
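The numerical evidence is easy to reproduce on a small scale. The stdlib-only Python sketch below (illustrative; a finite computation, not evidence about asymptotics) sieves the primes below 10^6 and measures each gap against (log p)^2, the quantity Cramér's conjecture says should control it; the smallest primes are skipped since the ratio is only meaningful once log p is not tiny.

```python
import math

def primes_up_to(n):
    """Sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(1_000_000)

# largest ratio gap / (log p)^2, starting from p = 11
worst = max((primes[i + 1] - primes[i]) / math.log(primes[i]) ** 2
            for i in range(4, len(primes) - 1))
print(round(worst, 3))  # stays well below 1 in this range
```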
Analytic criteria equivalent to the Riemann hypothesis
Many statements equivalent to the Riemann hypothesis have been found, though so far none of them have led to much progress in proving (or disproving) it. Some typical examples are as follows. (Others involve the divisor function σ(n).)
The Riesz criterion was given by , to the effect that the bound
holds for all ε > 0 if and only if the Riemann hypothesis holds. | Mathematics | Other | null |
19344418 | https://en.wikipedia.org/wiki/Bovine%20spongiform%20encephalopathy | Bovine spongiform encephalopathy | Bovine spongiform encephalopathy (BSE), commonly known as mad cow disease, is an incurable and invariably fatal neurodegenerative disease of cattle. Symptoms include abnormal behavior, trouble walking, and weight loss. Later in the course of the disease, the cow becomes unable to function normally. There is conflicting information about the time between infection and onset of symptoms. In 2002, the World Health Organization suggested it to be approximately four to five years. Time from onset of symptoms to death is generally weeks to months. Spread to humans is believed to result in variant Creutzfeldt–Jakob disease (vCJD). As of 2018, a total of 231 cases of vCJD had been reported globally.
BSE is thought to be due to an infection by a misfolded protein, known as a prion. Cattle are believed to have been infected by being fed meat-and-bone meal that contained either the remains of cattle who spontaneously developed the disease or scrapie-infected sheep products. The United Kingdom was afflicted with an outbreak of BSE and vCJD in the 1980s and 1990s. The outbreak increased throughout the UK due to the practice of feeding meat-and-bone meal to young calves of dairy cows. Cases are suspected based on symptoms and confirmed by examination of the brain. Cases are classified as classic or atypical, with the latter divided into H- and L types. It is a type of transmissible spongiform encephalopathy.
Efforts to prevent the disease in the UK include not allowing any animal older than 30 months to enter either the human food or animal feed supply. In continental Europe, cattle over 30 months must be tested if they are intended for human food. In North America, tissue of concern, known as specified risk material, may not be added to animal feed or pet food. About four million cows were killed during the eradication programme in the UK.
Four cases were reported globally in 2017, and the condition is considered to be nearly eradicated. In the United Kingdom, more than 184,000 cattle were diagnosed from 1986 to 2015, with the peak of new cases occurring in 1993. A few thousand additional cases have been reported in other regions of the world. In addition, it is believed that several million cattle with the condition likely entered the food supply during the outbreak.
Signs
Signs are not seen immediately in cattle due to the disease's extremely long incubation period. Some cattle have been observed to have an abnormal gait, changes in behavior, tremors and hyper-responsiveness to certain stimuli. Hindlimb ataxia affects the animal's gait and occurs when muscle control is lost, resulting in poor balance and coordination. Behavioural changes may include aggression, anxiety relating to certain situations, nervousness, frenzy and an overall change in temperament. Some rare but previously observed signs include persistent pacing, rubbing and licking. Nonspecific signs have also been observed, including weight loss, decreased milk production, lameness, ear infections and teeth grinding due to pain. Some animals may show a combination of these signs, while others may show only one of the many reported. Once clinical signs arise, they typically worsen over the subsequent weeks and months, eventually leading to recumbency, coma and death.
Cause
BSE is an infectious disease believed to be due to a misfolded protein, known as a prion. Cattle are believed to have been infected from being fed meat and bone meal that contained the remains of other cattle who spontaneously developed the disease or scrapie-infected sheep products. The outbreak increased throughout the United Kingdom due to the practice of feeding meat-and-bone meal to young calves of dairy cows.
BSE prions are misfolded forms of the particular brain protein called prion protein. When this protein is misfolded, the normal alpha-helical structure is converted into a beta sheet. The prion induces normally-folded proteins to take on the misfolded phenotype in an exponential cascade. These sheets form small chains which aggregate and cause cell death. Massive cell death forms lesions in the brain which lead to degeneration of physical and mental abilities and ultimately death. The prion is not destroyed even if the beef or material containing it is cooked or heat-treated under normal conditions and pressures. Transmission can occur when healthy animals come in contact with tainted tissues from others with the disease, generally when their food source contains tainted meat.
The British Government enquiry took the view that the cause was not scrapie, as had originally been postulated, but was some event in the 1970s that could not be identified.
Spread to humans
Spread to humans is believed to result in variant Creutzfeldt–Jakob disease (vCJD). The agent can be transmitted to humans by eating food contaminated with it. Though any tissue may be involved, the highest risk to humans is believed to be from eating food contaminated with the brain, spinal cord, or digestive tract. Despite the lack of knowledge on potential factors triggering the misfolded protein forms, idiopathic prion disorders are the most prevalent, accounting for 85–90% of human cases.
Pathogenesis
The pathogenesis of BSE is not as well understood or documented as that of other diseases of this nature. Even though BSE is a disease that results in neurological defects, its pathogenesis occurs in areas that reside outside of the nervous system. There was a strong deposition of PrPSc initially located in the ileal Peyer's patches of the small intestine. The lymphatic system has been identified in the pathogenesis of scrapie. It has not, however, been determined to be an essential part of the pathogenesis of BSE. The ileal Peyer's patches have been the only organ from this system that has been found to play a major role in the pathogenesis. Infectivity of the ileal Peyer's patches has been observed as early as four months after inoculation. PrPSc accumulation was found to occur mostly in tingible body macrophages of the ileal Peyer's patches. Tingible body macrophages involved in PrPSc clearance are thought to play a role in PrPSc accumulation in the Peyer's patches. Accumulation of PrPSc was also found in follicular dendritic cells, though to a lesser degree. Six months after inoculation, there was no infectivity in any tissue other than the ileum. This led researchers to believe that the disease agent replicates here. In naturally confirmed cases, there have been no reports of infectivity in the ileal Peyer's patches. Generally, in clinical experiments, high doses of the disease agent are administered. In natural cases, it was hypothesized that low doses of the agent were present, and therefore infectivity could not be observed.
Diagnosis
Diagnosis of BSE continues to be a practical problem. It has an incubation period of months to years, during which no signs are noticed, though the pathway of converting the normal brain prion protein (PrP) into the toxic, disease-related PrPSc form has started. At present, no way is known to detect PrPSc reliably except by examining post mortem brain tissue using neuropathological and immunohistochemical methods. Accumulation of the abnormally folded PrPSc form of PrP is a characteristic of the disease, but it is present at very low levels in easily accessible body fluids such as blood or urine. Researchers have tried to develop methods to measure PrPSc, but no methods for use in materials such as blood have been accepted fully.
The traditional method of diagnosis relies on histopathological examination of the medulla oblongata of the brain, and other tissues, post mortem. Immunohistochemistry can be used to demonstrate prion protein accumulation.
In 2010, a team from New York described detection of PrPSc even when initially present at only one part in a hundred billion (10−11) in brain tissue. The method combines amplification with a novel technology called surround optical fiber immunoassay and some specific antibodies against PrPSc. After amplifying and then concentrating any PrPSc, the samples are labelled with a fluorescent dye using an antibody for specificity and then finally loaded into a microcapillary tube. This tube is placed in a specially constructed apparatus so it is totally surrounded by optical fibres to capture all light emitted once the dye is excited using a laser. The technique allowed detection of PrPSc after many fewer cycles of conversion than others have achieved, substantially reducing the possibility of artifacts, as well as speeding up the assay. The researchers also tested their method on blood samples from apparently healthy sheep that went on to develop scrapie. The animals' brains were analysed once any signs became apparent. The researchers could, therefore, compare results from brain tissue and blood taken once the animals exhibited signs of the diseases, with blood obtained earlier in the animals' lives, and from uninfected animals. The results showed very clearly that PrPSc could be detected in the blood of animals long before the signs appeared. After further development and testing, this method could be of great value in surveillance as a blood- or urine-based screening test for BSE.
Classification
BSE is a transmissible disease that primarily affects the central nervous system; it is a form of transmissible spongiform encephalopathy, like Creutzfeldt–Jakob disease and kuru in humans, scrapie in sheep, and chronic wasting disease in deer.
Prevention
A ban on feeding meat and bone meal to cattle has resulted in a strong reduction in cases in countries where the disease has been present. In disease-free countries, control relies on import control, feeding regulations, and surveillance measures.
In UK and US slaughterhouses, the brain, spinal cord, trigeminal ganglia, intestines, eyes, and tonsils from cattle are classified as specified risk materials, and must be disposed of appropriately.
An enhanced BSE-related feed ban was enacted in both the United States (2009) and Canada (2007) to help improve prevention and elimination of BSE.
Epidemiology
The tests used for detecting BSE vary considerably, as do the regulations in various jurisdictions for when, and which cattle, must be tested. For instance in the EU, the cattle tested are older (30 months or older), while many cattle are slaughtered younger than that. At the opposite end of the scale, Japan tests all cattle at the time of slaughter. Tests are also difficult, as the altered prion protein has very low levels in blood or urine, and no other signal has been found. Newer tests are faster, more sensitive, and cheaper, so future figures possibly may be more comprehensive. Even so, currently the only reliable test is examination of tissues during a necropsy.
As for vCJD in humans, autopsy tests are not always done, so those figures, too, are likely to be too low, but probably by a lesser fraction. In the United Kingdom, anyone with possible vCJD symptoms must be reported to the Creutzfeldt–Jakob Disease Surveillance Unit. In the United States, the CDC has refused to impose a national requirement that physicians and hospitals report cases of the disease. Instead, the agency relies on other methods, including death certificates and urging physicians to send suspicious cases to the National Prion Disease Pathology Surveillance Center at Case Western Reserve University in Cleveland, which is funded by the CDC.
To control potential transmission of vCJD within the United States, the FDA had established strict restrictions on individuals' eligibility to donate blood. Individuals who had spent a cumulative time of three months or more in the United Kingdom between 1980 and 1996, or a cumulative time of five years or more from 1980 to 2020 in any combination of countries in Europe, were prohibited from donating blood. Due to blood shortages associated with the 2020 COVID-19 outbreak these restrictions were temporarily rescinded in 2020. This recommendation was removed in 2022.
Similar rules also apply in Germany and formerly Australia. Anyone who lived in the UK between 1980 and 1996 for longer than six months is prohibited from giving blood. There are also prohibitions on donating breast milk and tissue. However, there are no restrictions on organ donation. Blood donation organisations first considered relaxing the rules after the COVID-19 pandemic and some natural disasters that depleted the blood supply.
North America
The first reported case in North America was in December 1993 from Alberta, Canada. Another Canadian case was reported in May 2003. The first known US occurrence came in December of the same year; it was later confirmed to be a cow of Canadian origin imported to the US. The cow was slaughtered on a farm near Yakima, Washington. The cow was included in the United States Department of Agriculture's surveillance program specifically targeting cattle with BSE. Canada announced two additional cases of BSE from Alberta in early 2005.
In June 2005, John R. Clifford, chief veterinary officer for the United States Department of Agriculture Animal and Plant Health Inspection Service, confirmed a fully domestic case of BSE in Texas.
United States
The use of animal by-product feeds was never as common in the United States as it was in Europe. Soybean meal is cheap and plentiful in the United States, and cottonseed meal (1.5 million tons of which are produced in the US every year, none of which is suitable for humans or any other simple-stomached animals) is even cheaper than soybean meal. Historically, meat and bone meal, blood meal, and meat scraps have almost always commanded a higher price as a feed additive than oilseed meals in the US, so little incentive existed to use animal products to feed ruminants. However, US regulations only partially prohibited the use of animal by-products in feed. In 1997, regulations prohibited the feeding of mammalian by-products to ruminants such as cattle and goats. However, the by-products of ruminants can still be legally fed to pets or other livestock, including pigs and poultry. In addition, it is legal for ruminants to be fed by-products from some of these animals. Because of this, some authors have suggested that under certain conditions, it is still possible for BSE incidence to increase in US cattle.
The US Department of Agriculture (USDA) announced on 19 May 2023 an atypical case of the disease in an older beef cow at a slaughter plant in South Carolina. The USDA said the animal never entered slaughter channels and the agency did not expect any trade impacts as a result. It was the seventh detection of BSE in the United States since 2003, all but one of which have been atypical.
US meat producer Creekstone Farms alleged in a lawsuit that the USDA was preventing the company from testing its slaughtered cattle for BSE.
The USDA has issued recalls of beef supplies that involved introduction of downer cows into the food supply. Hallmark/Westland Meat Packing Company was found to have used electric shocks to prod downer cows into the slaughtering system in 2007. Possibly due to pressure from large agribusiness, the United States has drastically cut back on the number of cows inspected for BSE.
Effect on the US beef industry
Japan was the top importer of US beef, buying $1.7 billion worth in 2003. After the discovery of the first case of BSE in the US on 23 December 2003, Japan halted US beef imports. In December 2005, Japan once again allowed imports of US beef, but reinstated its ban in January 2006 after a violation of the US-Japan beef import agreement: a vertebral column, which should have been removed prior to shipment, was included in a shipment of veal.
Tokyo yielded to US pressure to resume imports, ignoring consumer worries about the safety of US beef, said Japanese consumer groups. Michiko Kamiyama from Food Safety Citizen Watch and Yoko Tomiyama from Consumers Union of Japan said about this: "The government has put priority on the political schedule between the two countries, not on food safety or human health."
Sixty-five nations implemented full or partial restrictions on importing US beef products because of concerns that US testing lacked sufficient rigor. As a result, exports of US beef declined from 1,300,000 tonnes (t) in 2003 (before the first mad cow was detected in the US) to 322,000 t in 2004. This has increased since then to 771,000 t in 2007 and to 1,300,000 t by 2017.
On 31 December 2006, Hematech Inc, a biotechnology company based in Sioux Falls, South Dakota, announced it had used genetic engineering and cloning technology to produce cattle that lacked the PrPC form of the major prion protein (PrP) necessary gene for prion production – thus theoretically making them immune to BSE.
In April 2012, some South Korean retailers ceased importing beef from the United States after a case of BSE was reported. Indonesia also suspended imports of beef from the US after a dairy cow with mad cow disease was discovered in California.
Japan
With 36 confirmed cases, Japan experienced one of the largest numbers of cases of BSE outside Europe. It was the only country outside Europe and the Americas to report non-imported cases. Reform of food safety regulation in light of the BSE cases resulted in the establishment of a governmental Food Safety Commission in 2003.
Europe
Cattle are naturally herbivores, eating grasses. In modern industrial cattle-farming, though, various commercial feeds are used, which may contain ingredients including antibiotics, hormones, pesticides, fertilizers, and protein supplements. The use of meat and bone meal, produced from the ground and cooked leftovers of the slaughtering process, as well as from the carcasses of sick and injured animals, such as cattle or sheep, as a protein supplement in cattle feed was widespread in Europe prior to about 1987. Worldwide, soybean meal is the primary plant-based protein supplement fed to cattle. However, soybeans do not grow well in Europe, so cattle raisers throughout Europe turned to the cheaper animal by-product feeds as an alternative. The British inquiry dismissed suggestions that changes to processing might have increased the infectious agents in cattle feed, saying, "changes in process could not have been solely responsible for the emergence of BSE, and changes in regulation were not a factor at all" (the prion causing BSE is not destroyed by food heat treatment).
The first confirmed instance in which an animal fell ill with the disease occurred in 1986 in the United Kingdom, and lab tests the following year indicated the presence of BSE; by November 1987, the British Ministry of Agriculture accepted it had a new disease on its hands. Subsequently, 177 people (as of June 2014) contracted and died of a disease with similar neurological symptoms subsequently called (new) variant Creutzfeldt–Jakob disease (vCJD). This is a separate disease from 'classical' Creutzfeldt–Jakob disease, which is not related to BSE and has been known about since the early 1900s. Three cases of vCJD occurred in people who had lived in or visited the UK – one each in the Republic of Ireland, Canada, and the US. Also, some concern existed about those who work with (and therefore inhale) cattle meat and bone meal, such as horticulturists, who use it as fertilizer. Up-to-date statistics on all types of CJD are published by the National Creutzfeldt–Jakob Disease Surveillance Unit in Edinburgh, Scotland.
For many of the vCJD patients, direct evidence exists that they had consumed tainted beef, and this is assumed to be the mechanism by which all affected individuals contracted it. Disease incidence also appears to correlate with slaughtering practices that led to the mixture of nervous system tissue with ground meat (mince) and other beef. An estimated 400,000 cattle infected with BSE entered the human food chain in the 1980s. Although the BSE epizootic was eventually brought under control by culling all suspect cattle populations, people are still being diagnosed with vCJD each year (though the number of new cases currently has dropped to fewer than five per year). This is attributed to the long incubation period for prion diseases, which is typically measured in years or decades. As a result, the full extent of the human vCJD outbreak is still not known.
The scientific consensus is that infectious BSE prion material is not destroyed through cooking procedures, meaning that even contaminated beef foodstuffs prepared "well done" may remain infectious.
Alan Colchester, a professor of neurology at the University of Kent, and Nancy Colchester, writing in the 3 September 2005 issue of the medical journal The Lancet, proposed a theory that the most likely initial origin of BSE in the United Kingdom was the importation from the Indian Subcontinent of bone meal which contained CJD-infected human remains. The government of India vehemently responded to the research, calling it "misleading, highly mischievous; a figment of imagination; absurd", further adding that India maintained constant surveillance and had not had a single case of either BSE or vCJD. The authors responded in the 22 January 2006 issue of The Lancet that their theory is unprovable only in the same sense as all other BSE origin theories are and that the theory warrants further investigation.
During the course of the investigation into the BSE epizootic, an enquiry was also made into the activities of the Department of Health Medicines Control Agency (MCA). On 7 May 1999, David Osborne Hagger, a retired civil servant who worked in the Medicines Division of the Department of Health between 1984 and 1994, produced a written statement to the BSE Inquiry in which he gave an account of his professional experience of BSE.
In February 1989, the MCA had been asked to "identify relevant manufacturers and obtain information about the bovine material contained in children's vaccines, the stocks of these vaccines and how long it would take to switch to other products". In July, "[the] use of bovine insulin in a small group of mainly elderly patients was noted and it was recognised that alternative products for this group were not considered satisfactory". In September, the BSE Working Party of the Committee on the Safety of Medicines (CSM) recommended that "no licensing action is required at present in regard to products produced from bovine material or using prepared bovine brain in nutrient media and sourced from outside the United Kingdom, the Channel Isles and the Republic of Ireland provided that the country of origin is known to be free of BSE, has competent veterinary advisers and is known to practise good animal husbandry".
In 1990, the British Diabetic Association became concerned regarding the safety of bovine insulin. The CSM assured them "[that] there was no insulin sourced from cattle in the UK or Ireland and that the situation in other countries was being monitored".
In 1991, the European Commission "[expressed] concerns about the possible transmission of the BSE/scrapie agent to man through use of certain cosmetic treatments".
In 1992, sources in France reported to the MCA "that BSE had now been reported in France and there were some licensed surgical sutures derived from French bovine material". Concerns were also raised at a CSM meeting "regarding a possible risk of transmission of the BSE agent in gelatin products".
For this failure, France was heavily criticised internationally. Thillier himself queried why there had never been a ban on French beef or basic safety precautions to stop the food chain becoming contaminated, suggesting "Perhaps because the French government forgot its role in guaranteeing the safety of food products, and this neglect cost the lives of nine people". The Sydney Morning Herald added, "while blustering French politicians blamed Britain for the emergence of the disease – and tried to quarantine the country by banning imports of British beef – they failed to adopt measures to prevent a hidden epidemic at home".
In 2016 France confirmed a further case of BSE.
In October 2015 a case of BSE was confirmed at a farm in Carmarthenshire in Wales. In October 2018, a case of BSE was confirmed at a farm in Aberdeenshire, Scotland, the first such case in Scotland in a decade. The case was believed to be an isolated one, but four other animals from the same herd were being culled for precautionary reasons. Scottish officials confirmed that the case had been identified as part of routine testing and that the diseased cow had not entered the human food chain.
A number of other countries had isolated outbreaks of BSE confirmed, including Spain, Portugal, Belgium and Germany.
The ban on British beef
The BSE crisis led to the European Union (EU) banning exports of British beef with effect from March 1996; despite attempts by British prime minister John Major between May and September 1996 to get the ban lifted, it lasted for 10 years before finally being lifted on 1 May 2006. The ban led to trade disputes between the UK and other EU states, dubbed the "beef war" by media. Restrictions remained for beef containing "vertebral material" and for beef sold on the bone. France continued to impose a ban on British beef illegally long after the European Court of Justice had ordered it to lift its blockade, although it never paid any fine for doing so.
Russia proceeded to lift its own ban sometime after November 2012, 16 years after imposing it; the announcement was made during a visit by the UK's chief veterinary officer, Nigel Gibbens.
An exception was agreed for beef from Wales bound for the Dutch market, previously an important market for Northern Irish beef. Of two approved export establishments in the United Kingdom in 1999, one was in Scotland – an establishment to which live cattle were supplied from Northern Ireland. As the incidence of BSE was very low in Northern Ireland – only six cases of BSE in 1999 – partly due to the early adoption of an advanced herd tagging and computerization system in the region, calls were made to remove the EU ban on exports with regard to Northern Irish beef.
Wildcat bans on beef from countries known to have BSE were imposed in various European countries, although these were mostly ruled illegal subsequently. The Economist noted, "Unfortunately, much of the crisis in Europe can be blamed on politicians and bureaucrats. Even while some European countries were clamouring for bans on British beef, they were ignoring warnings from the European Commission about how to avoid the spread of BSE in their own herds."
History
Different hypotheses exist for the origin of BSE in cattle. One hypothesis suggests it may have jumped species from the scrapie disease in sheep, and another hypothesis suggests that it evolved from a rare spontaneous form of "mad cow disease" that has been seen occasionally in cattle for many centuries. In the fifth century BC, Hippocrates described a similar illness in cattle and sheep, which he believed also occurred in humans. Publius Flavius Vegetius Renatus recorded cases of a disease with similar characteristics in the fourth and fifth centuries AD.
In more recent UK history, the official BSE inquiry (published 2000) suggested that the outbreak there "probably arose from a single point source in the southwest of England in the 1970s".
The most recent case in Britain was in May 2024, on a farm in Ayrshire, Scotland.
Glyptodon
{{Automatic taxobox
| fossil_range = Pliocene?-Pleistocene (Montehermosan?–Lujanian)~
| image = Glyptodon-1.jpg
| image_caption = Skeleton of G. clavipes at the Naturhistorisches Museum, Vienna
| taxon = Glyptodon
| authority = Owen, 1839
| type_species = Glyptodon clavipes
| type_species_authority = Owen, 1839
| subdivision_ranks = Other Species
| subdivision = * G. elongatus? Burmeister, 1866
* G. jatunkhirkhi Cuadrelli et al., 2020
* G. munizi Ameghino, 1881
* G. reticulatus Owen, 1845
| range_map = Glyptodon and Glyptotherium Distribution Map2.jpg
| range_map_caption = Distribution of Glyptodon (green) compared to Glyptotherium (orange).
| synonyms =
}}
'''Glyptodon''' (; ) is a genus of glyptodont, an extinct group of large, herbivorous armadillos, that lived from the Pliocene, around 3.2 million years ago, to the early Holocene, around 11,000 years ago, in South America. It is one of the best known genera of glyptodont, if not the best known. Glyptodon has a long and storied past, being the first named extinct cingulate and the type genus of the subfamily Glyptodontinae. Fossils of Glyptodon have been recorded as early as 1814 from Pleistocene aged deposits in Uruguay, though many were incorrectly referred to the ground sloth Megatherium by early paleontologists.
The type species, G. clavipes, was described in 1839 by the notable British paleontologist Sir Richard Owen. Later in the 19th century, dozens of complete skeletons were unearthed from various localities and described by paleontologists such as Florentino Ameghino and Hermann Burmeister. During this era, many species of Glyptodon were dubbed, some of them based on fragmentary or isolated remains. Fossils from North America were also assigned to Glyptodon, but all of them have since been placed in the closely related genus Glyptotherium. It was not until the late 1900s and the 21st century that a full review of the genus came about, restricting Glyptodon to just five species.
Glyptodonts were typically large, quadrupedal (four-legged), herbivorous armadillos with armored carapaces (top shells) made of hundreds of interconnected osteoderms (structures in the dermis composed of bone). Other pieces of armor covered the tail and skull roof, the skull being tall with hypsodont (high-crowned) teeth. As for the postcranial anatomy, glyptodontines had pelves fused to the carapace, a fused vertebral column, short limbs, and small digits. Glyptodon reached up to 2 meters (6.56 feet) long and 400 kilograms (880 pounds) in weight, making it one of the largest glyptodontines, though not as large as its close relative Glyptotherium or Doedicurus, the largest known glyptodont. Glyptodon is morphologically and phylogenetically most similar to Glyptotherium; however, the two differ in several ways. Glyptodon is larger on average, with an elongated carapace, a relatively shorter tail, and a robust zygoma, or cheek bone.
Glyptodonts existed for millions of years, and Glyptodon itself was one of their last surviving members. Glyptodon was one of many South American megafauna, with many native groups such as notoungulates and ground sloths reaching immense sizes. Glyptodon had a mixed diet of grasses and other plants, living at the edges of forests and grasslands where the shrubbery was lower. Glyptodon had a wide muzzle, an adaptation for bulk feeding. The armor could have protected the animal from predators, many of which coexisted with Glyptodon, including the "saber-tooth cat" Smilodon, the large canid Protocyon, and the giant bear Arctotherium.

Glyptodon, along with all other glyptodonts, became extinct at the end of the Late Pleistocene, around 12,000 years ago, as part of the Late Pleistocene extinctions, along with most large mammals in the Americas. Evidence of hunting of glyptodonts by recently arrived Paleoindians suggests that humans may have been a causal factor in the extinctions.
History
Confusion with Megatherium
The history and taxonomy of Glyptodon is storied and convoluted, as it involved confusion with other genera and dubious species, as well as a lack of detailed data. The first recorded discovery of Glyptodon was as early as 1814 when Uruguayan priest, scientist, soldier, and later politician Dámaso Antonio Larrañaga (1771–1848) wrote about the discovery of several unusual fossils in his Diario de Historia Natural, which included his descriptions of many new species of ants, birds, mammals, and even one of the first figures of the extinct Megatherium, a genus of giant ground sloth that was named in 1796 by French scientist Georges Cuvier (1769–1832). This was the first recorded discovery of a glyptodontine or fossil cingulate. The unusual fossils consisted of a femur, carapace fragments, and a caudal tube (an armored tail covering found in glyptodontines) that he collected from the Pleistocene aged (ca. 2.5-0.011 mya) strata on the banks of the Solís Grande Creek, Uruguay. Larrañaga identified the fossils as those of Dasypus (Megatherium), believing that Megatherium was a subgenus of Dasypus based on the incorrect referral of glyptodontine osteoderms to Megatherium years earlier by Spanish scientist Juan Bautista Bru de Ramón, which misled other scientists to believe that glyptodontine fossils were actually those of armored megatheres.
Larrañaga wrote to French scientist Auguste Saint Hilaire about the discovery, and the letter was reproduced by Cuvier in 1823 in the second volume of his landmark book Recherches sur les ossemens fossiles. Larrañaga also noted that similar fossils had been found in "analogous strata near Lake Merrim, on the frontier of the Portuguese colonies (southern Brazil)." These fossils were also likely those of glyptodontines, possibly the closely related Hoplophorus. The armored Megatherium hypothesis was further supported in 1827 when portions of a Glyptodon carapace, as well as a partial femur and some caudal armor, were found by a Prussian traveler to Montevideo, Uruguay named Mr. Sellow, who sent the carapace to Berlin, where it was described by Professor von Weiss, who referred it to Megatherium. The femur and caudal armor were recovered from the Quegnay in northern Uruguay, while the carapace had been found in the Arapey River. Weiss and other paleontologists noted that the osteoderms closely resembled those of armadillos like Dasypus, but Cuvier's hypothesis was popularized based on the incorrect referral of glyptodontine osteoderms to Megatherium.

Another work on the armored Megatherium hypothesis was published in 1833 by Berlin scientist E. D'Alton, who described more of the material sent by Sellow, including portions of the limbs, manus, and shoulder girdle. D'Alton recognized the great similarities of the fossils to Dasypus and speculated that it was a giant armadillo, contrary to the notion that they were from Megatherium. Despite this, D'Alton did not erect a new name for the fossils and instead wrote that additional material was necessary to distinguish it from other armadillos. D'Alton did not mention Megatherium or its osteoderms in the paper, but he implied that all of the "Megatherium armor" was instead from his armadillo.
This hypothesis was supported by Laurillard in 1836, who mentioned that a plaster cast of a large armadillo carapace represented a distinct taxon from Megatherium and that the armor referred to the sloth was instead from an armadillo.
1837 saw the naming of the first glyptodontine, Hoplophorus euphractus, when Danish paleontologist Peter Wilhelm Lund published a series of memoirs on the fossils of Lagoa Santa in Brazil, dating to the Pleistocene. The fossils included osteoderms comparable to those described earlier by Larrañaga, as well as teeth, skull fragments, limb bones, and other elements. After 1837, several new genera and species of glyptodontines were named in quick succession by European paleontologists: Chlamydotherium based on Sellow's carapace and Orycterotherium based on Sellow's femur were named by German scientist H. G. Bronn in 1838; Pachypus by Eduard D'Alton in 1839 based on Sellow's 1833 material; Neothoracophorus (originally Thoracophorus, but that name was preoccupied by a beetle) in 1889 by Argentine paleontologist Florentino Ameghino based on isolated osteoderms now identified as those of a juvenile Glyptodon from Patagonia; and Lepitherium in 1839 by Geoffroy Saint-Hilaire based on Sellow's osteoderms. Saint-Hilaire, E. G. (1831). Recherches sur de grands sauriens: trouvés à l'état fossile vers les confins maritimes de la basse normandie, attribués d'abord au crocodile, puis déterminés sous les noms de téléosaurus et sténéosaurus. Firmin Didot. Saint-Hilaire considered the osteoderms found by Sellow to not even be mammalian, but instead those of a relative of Teleosaurus, a crocodile-like reptile known from Jurassic deposits in France.
Richard Owen and referred species
In 1838, British diplomat Sir Woodbine Parish (1796–1882) was sent an isolated molariform and a letter about the discovery of several large fossils from the Matanza River in Buenos Aires, Argentina that dated to the Pleistocene. Parish later collected several more fossils from localities in Las Averias and Villanueva; the latter preserved the most complete skeleton, which included a mandible fragment, partial limbs, and unguals of a single individual. They were deposited in Parish's collection at the Royal College of Surgeons in the United Kingdom that year. Some of these fossils were cast at the Natural History Museum, London, but the original fossils were destroyed after German aerial bombing raids hit the college during World War II from 1940 to 1941. Glyptodon was named by Richard Owen (1804–1892), one of the most influential British naturalists of the Victorian Era, who wrote a chapter about the animal and published a reconstruction of its skeleton in the book Buenos Ayres, and the provinces of the Rio de La Plata: their present state, trade, and debt in 1839. Owen, R. (1839). Note on the Glyptodon. Buenos Aires and the Provinces of the Rio de La Plata, 1-178. Within this book, Owen, erroneously believing the fossils were all from the same specimen, based the name Glyptodon ("grooved tooth") on the anatomy of the molariform. A later study found the molariform to actually be from another glyptodontine, Panochthus, and the Villanueva individual was designated the lectotype by Robert Hoffstetter in 1955. The Las Averias individual consists of a carapace that was only mentioned in Owen's description, but was used in later reconstructions of the animal and has since been lost. An issue with the lectotype of G. clavipes is that the material is undiagnostic and indistinguishable from other Glyptodon species and even Glyptotherium, making it dubious.
Cuadrelli et al. (2018) designated the species a species inquirenda due to this issue and commented that more analyses are necessary. In 1860, Signor Maximo Terrero collected a partial skeleton, including a skull and carapace, of G. clavipes from the River Salado in southern Buenos Aires, dated to the Pleistocene. These fossils were also sent to the Royal College of Surgeons, where they were described in detail by British paleontologist Thomas Henry Huxley (1825–1895) in 1865 during a comprehensive review of the taxon. This skeleton was also destroyed during WWII, but Huxley published several illustrations that presented great amounts of new information on the taxon.
In 1845, many more fossils found by Parish in Pleistocene layers in Argentina were named as new species of Glyptodon by Owen: G. ornatus, G. reticulatus, and G. tuberculatus, followed by G. clavicaudatus in 1847. Of these additional species, only G. reticulatus is still considered a valid species of Glyptodon, as G. ornatus was reassigned to the genus Neosclerocalyptus, G. tuberculatus to Panochthus, and G. clavicaudatus to Doedicurus. G. reticulatus was named on the basis of several carapace fragments that had also been recovered from the Matanza River, but they lack detailed locality information, and these fossils too were destroyed during WWII. The fragments were cast by the NHMUK as well, the casts being used to diagnose the species.
Other paleontologists also started erecting names for Glyptodon species after the 1840s, but many of them are now seen as dubious, species inquirendae, or synonymous with previously named species. L. Nodot described a new genus and species of glyptodontine in 1857, Schistopleurum typus, on the basis of a caudal tube found in the Pampas of Argentina, but it has since been synonymized with G. reticulatus. Another species now seen as valid, G. munizi, was described in 1881 by Argentine paleontologist Florentino Ameghino (1853–1911) on the basis of several osteoderms found in the Ensenadan of Arroyo del Medio, San Nicolás, Argentina. Ameghino, F. 1882. Catálogo de las colecciones de Antropología prehistórica y paleontología de Florentino Ameghino, Partido de Mercedes. En: Catálogo de la Sección de la Provincia de Buenos Aires (República Argentina). Exposición Continental Sudamericana. Anexo A: 35-42. For many years the taxon was only known from the fragmentary holotype, but skull and complete carapace material of the species was described in detail in 2006, cementing its validity. German zoologist Hermann Burmeister described several Glyptodon fossils in the 1860s, naming many of them as new species of Glyptodon itself or of the synonym Schistopleurum, all of which are now synonyms of Glyptodon and its species. In 1908, Florentino Ameghino named another species of Glyptodon, G. chapalmalensis, based on a carapace fragment that he had collected from the Atlantic Coast of Buenos Aires Province that dated to the Chapadmalalan. In 1932, A. Castellanos erected a new genus for G. chapalmalensis, Paraglyptodon, which later included another species, P. uquiensis, based on more complete specimens that had been collected from Uquía, Argentina between 1909 and 1912. Castellanos, A. (1953). Descripción de restos de "Paraglyptodon uquiensis" n. sp. de Uquía (Senador Pérez) de Jujuy (No. 32). la Provincia.
The former species is dubious, but likely not Glyptodon based on its age. P. uquiensis has been synonymized with Glyptodon and is possibly a valid species, though further analysis is necessary to settle its status.
Reassessment and Glyptotherium
In the 1950s, Argentine paleontologist Alfredo Castellanos (1893–1975) erected new generic names for several species of Glyptodon, the first being Glyptocoileus and the second being Glyptopedius in 1953, made for the species G. elongatus, which had been named by Hermann Burmeister in 1866 on the basis of a single carapace, though its validity is disputed. Castellanos also referred the species G. reticulatus to the genus, but this is unsupported. Yet another genus and species, Heteroglyptodon genuarioi, was erected in 1976 by F. L. Roselli based on an incomplete skeleton that had been collected from the Pleistocene aged Libertad Formation in Nueva Palmira, Uruguay, but it has since been found to be an indeterminate specimen of Glyptodon. Several Glyptodon fossils from Pleistocene deposits in Colombia were described in 2012, greatly extending the known range of the genus to the north.
Another Glyptodon species, G. jatunkhirkhi, was described in 2020 by several authors led by Argentine zoologist Francisco Cuadrelli on the basis of an individual preserving a nearly complete carapace, several caudal rings, and a pelvis that had been collected from Yamparaez, southeast of the Bolivian city of Sucre. The strata they were found in were made up of fluvial, sandy sediments that dated to the Late Pleistocene, from elevations as high as above sea level. Several additional paratypes were referred to the species from other Late Pleistocene sites in the Eastern Cordillera, Bolivia, including a nearly complete skull and several osteoderms. In a phylogenetic analysis conducted by Cuadrelli et al., 2020, G. jatunkhirkhi was recovered as the most basal Glyptodon species, despite being the same age as the more derived species G. clavipes. Reassessment of Glyptodon species began in the late 20th and early 21st centuries, with various hypotheses developing on the number of valid species. Numbers varied, with some authors considering up to 4 species valid, while phylogenetic analyses in 2018 and 2020 only found the species G. reticulatus, G. munizi, and G. jatunkhirkhi definitively valid, with G. clavipes and G. uquiensis treated as species inquirendae. However, a 2016 review determined that G. uquiensis was actually based on a juvenile specimen of Glyptodon, though the species could not be determined.
Fossils from North America were also described and referred to Glyptodon starting in 1875, when civil engineers J. N. Cuatáparo and Santiago Ramírez collected a skull, nearly complete carapace, and associated postcranial skeleton of a glyptodontine from a drainage canal near Tequixquiac, Mexico, the fossils coming from the Rancholabrean stage of the Pleistocene. These fossils were the first found of glyptodontines in North America and were named as a new species of Glyptodon, G. mexicanum, but the fossils have since been lost and the species was synonymized with Glyptotherium cylindricum. Several other North American glyptodontine species were named throughout the late 19th-early 20th century, typically based on fragmentary osteoderms. All North American and Central American fossils of glyptodontines have since been referred to the closely related genus Glyptotherium, which was named in 1903 by American paleontologist Henry Fairfield Osborn.
Taxonomy
Glyptodon is the type genus of Glyptodontinae, an extinct subfamily of large, heavily armored armadillos that first evolved in the Late Eocene (ca. 33.5 mya) and went extinct in the Early Holocene during the Late Pleistocene extinctions (ca. 7,000 years ago). Owen recognized that Glyptodon was an edentate, but did not recognize it as being a part of a new subfamily as there were no other recognized glyptodontines in 1839. The family Glyptodontidae was not named until 1869 by John Edward Gray, who included the genera Glyptodon, Panochthus, and Hoplophorus within the group and believed that it was diagnosed by an immovable carapace that was fused to the pelvis. However, Hermann Burmeister proposed the name Biloricata for the family, believing that glyptodontines possessed a ventral plastron (bottom shell) and could pull their heads inside their carapaces like turtles. This name lost all use and his theory has not been supported. The internal phylogenetics of Glyptodontidae was analyzed in greater detail by Florentino Ameghino during his descriptions of earlier members of the clade, which proposed that Glyptodon was descended from earlier genera.
Glyptodontinae was classified in its own family or even superfamily until 2016, when ancient DNA was extracted from the carapace of a 12,000 year old Doedicurus specimen and a nearly complete mitochondrial genome was reconstructed (76x coverage). Comparisons with those of modern armadillos revealed that glyptodonts diverged from tolypeutine and chlamyphorine armadillos approximately 34 million years ago in the late Eocene. This prompted moving them from their own family, Glyptodontidae, to the subfamily Glyptodontinae within the extant Chlamyphoridae. Based on this and the fossil record, glyptodonts would have evolved their characteristic shape and large size (gigantism) quite rapidly, possibly in response to the cooling, drying climate and the expansion of open savannas. Chlamyphoridae is a group in the order Cingulata, which includes all extant armadillos in addition to other fossil groups like Pachyarmatheriidae and Pampatheriidae. Cingulata is itself within the basal mammal group Xenarthra, which also includes American mammal groups like Vermilingua (anteaters) and Folivora (sloths and ground sloths) in the order Pilosa. The following phylogenetic analysis was conducted by Frédéric Delsuc and colleagues in 2016 and represents the phylogeny of Cingulata, using ancient DNA from Doedicurus to determine the position of it and other glyptodonts:
The internal phylogeny of Glyptodontinae is convoluted and in flux, with many species and families erected based on fragmentary or undiagnostic material that lacks comprehensive review. Glyptodontinae's tribes were long considered subfamilies before the 2016 analysis. One tribe, Glyptodontini (typically labeled Glyptodontinae), is a group of younger, larger glyptodontines that evolved in the Middle Miocene (ca. 13 mya) with Boreostemma but split into two genera, Glyptodon in the south and Glyptotherium in the north, though Glyptotherium also lived in some areas of South America like Venezuela and eastern Brazil. Glyptotherium lived during the same intervals as Glyptodon and is nearly identical to it in many aspects, so much so that the first fossils of Glyptotherium to be described were misidentified as those of Glyptodon. Cope, E. D. (1889). The Edentata of North America. The American Naturalist, 23(272), 657-664. Glyptodontini is distinguishable from other groups, for example, in that it has large, conical tubercular osteoderms absent or only present on the caudal (tailward) notch on the posterior end of the carapace, and different ornamentation of the armor on the carapace than on the tail. Glyptodontini is often recovered as more basal than most other glyptodontines like Doedicurus, Hoplophorus, and Panochthus.

Below is the phylogenetic analysis conducted by Cuadrelli et al., 2020 of Glyptodontinae, with Glyptodontidae as a family instead of subfamily, that focuses on advanced glyptodonts:
Description
Like the extant armadillos and all other glyptodontines, Glyptodon had a large, bony carapace that covered much of its torso, as well as smaller cephalic armor covering the roof of its head, akin to that in turtles. The carapace was composed of hundreds of small, hexagonal osteoderms (armored structures made of bone), with Glyptodon carapaces preserving a total of 1,800 osteoderms each. The anatomy of different Glyptodon species varies greatly, particularly in the species G. jatunkhirkhi, which is more similar to Glyptotherium in certain aspects.
In the axial skeleton, glyptodontines had strongly fused vertebrae and pelves completely connected to the carapace, traits convergently evolved in turtles. The large tails of glyptodontines likely served as a counterbalance to the rest of the body and Glyptodon's caudal armor ended in a blunt tube that was composed of two concentric tubes fused together, in contrast to those of mace-tailed glyptodontines like Neosclerocalyptus and Doedicurus. Glyptodon had graviportal (weight-bearing), short limbs that are very similar to those in other glyptodontines, being indistinguishable from those of some other taxa. The digits of Glyptotherium are very stout and adapted for weight-bearing, though some preserve large claw sheaths that had an intermediate morphology between claws and hooves.
During the Pleistocene, the diversity of glyptodontines diminished but body size increased, with the largest known glyptodont, Doedicurus, evolving in the Pleistocene. Glyptodon sizes vary between species and individuals. G. clavipes, the type species, was estimated to weigh , G. reticulatus weighed a mere to , and G. munizi weighed . A partial skeleton of G. clavipes measured with a carapace length of , while the carapaces of other species like G. munizi and G. reticulatus measured and long respectively.
Skull, mandible, and dentition
Glyptodont dentition consists entirely of hypsodont molariforms, representing one of the most extreme examples of hypsodonty known from terrestrial mammals. The dentition is typical of other armadillos, but is fluted on each side by deep grooves. The anterior teeth were compressed, while the posterior teeth were cylindrical. Glyptodont skulls have several unique features; the maxilla and palatine are enlarged vertically to make space for the molariforms, while the braincase is brachycephalic, short and flat. In Glyptodon and many other glyptodontines, the roof of the skull was covered by a robust cephalic shield composed of polygonal, irregular osteoderms that were variable in size and ankylosed together, with a smoothly convex exterior surface without ornamentation. Each osteoderm has a rugose and slightly convex dorsal surface, with an ornamentation pattern defined by a central figure, slightly elevated and surrounded by an area without peripheral figures or foramina. Sutures separating osteoderms are well marked, as in Panochthus. Other Pleistocene glyptodontines are known from complete or sub-complete skulls, allowing for comparisons to Glyptodon. Glyptotherium's zygoma are narrow, slender, almost parallel, and close to the sagittal plane in frontal view; in Glyptodon, this structure is broader, robust, divergent rather than parallel, and more laterally placed.
The nasal passage was reduced, with heavy muscle attachments for some unknown purpose. Some have speculated that the muscle attachments were for a proboscis, or trunk, much like that of a tapir or elephant. The lower jaws were very deep and helped support massive chewing muscles for chewing coarse, fibrous plants. Some paleontologists have proposed that Glyptodon and some other glyptodontines also had a proboscis or large snout similar to those in proboscideans and tapirs, but few have accepted this hypothesis. Another suggestion, made by A.E. Zurita and colleagues, is that the large nasal sinuses could be correlated with the cold, arid climate of Pleistocene South America. A distinctive bar of bone projects downwards on the cheek, extending over the lower jaw, perhaps providing an anchor for powerful snout muscles. In turn, the infraorbital foramina are narrow and not visible in anterior view in Glyptotherium, but in Glyptodon they are broad and clearly visible in anterior view. In lateral view, the dorso-ventral height between the skull roof and the palatal plane in Glyptodon decreases anteriorly, contrary to Glyptotherium; the nasal tip is in a lower plane with respect to the zygomatic arch in Glyptodon, but in Glyptotherium it is higher than the zygomatic arch plane. The 1st molariform (molariform is abbreviated as mf#) of Glyptodon is distinctly trilobate (three-lobed) both lingually and labially, nearly as trilobate as the mf2; on the contrary, Glyptotherium shows a very low trilobation of mf1, which is elliptical in cross-section, while the mf2 is weakly trilobate and the mf3 is trilobate. In both genera, the mf4 to mf8 are fully trilobate and serially identical. These traits separate the two genera. Within the genus Glyptodon this trait varies as well, with G. reticulatus having trilobation to a greater degree than G. munizi.

The mandibles of Glyptotherium and Glyptodon are very similar, but Glyptotherium's mandible is smaller by about 10% in total size. The angle between the occlusal plane (part of the jaw where upper and lower teeth contact) and the anterior margin of the ascending ramus is approximately 60° in Glyptotherium, while it is 65° in Glyptodon. The ventral margin of the horizontal ramus is more concave in Glyptodon than in Glyptotherium. The symphysis area is greatly extended antero-posteriorly in Glyptotherium compared to Glyptodon. The mf1 is ellipsoidal in Glyptotherium and the mf2 is "submolariform", while in Glyptodon both teeth are trilobate.
Vertebrae and pelvis
Glyptodon had 7 cervical vertebrae, of which the first 3 were fused together, while the rest were free except for the 7th. The 7th cervical and the first 2 dorsal vertebrae were fused together into a trivertebral bone, a broad, flat element with very small spinous processes (projections from a vertebra) and large articular surfaces that held ribs. All of the other 13 vertebrae in the dorsal column were fused into one long continuous tunnel not seen in mammals outside of glyptodontines; some of these vertebrae were so tightly fused that their segments cannot be discerned. The centra of these vertebrae were curved, thin bony plates that created a cylinder to support the carapace and the shape of the animal. Spinous processes in these vertebrae are also heavily reduced, with some being only a thin blade of bone ankylosed with other vertebrae. The sacral vertebrae in Glyptodon, 13 in number, are also fused and preserve very unusual oval-shaped, thin, and slightly concave ends on the centra. The pelves are also unusual, as they preserve giant ilia and are fused to the rest of the skeleton.
Carapace and osteoderms
thumb|Glyptodon carapace in the Hungarian Natural History Museum|alt=A Glyptodon carapace.

Glyptodon's osteoderms were attached by synostoses (bony connections) and were found in double or triple rows on the front and sides of the carapace's edges, as well as in the tail armor and cephalic shield. The carapace's osteoderms were conical with a rounded point, while the ones on the tail were simply conical. The sulci between these raised structures were deep and wide, with parallel lines. The carapace of Glyptodon was strongly elongated compared to those of Boreostemma and Glyptotherium, being relatively 65% longer than the former and 14% longer than the latter. In Glyptodon, the top-bottom height of the carapace represents 60% of its total length, whereas in Glyptotherium it is taller at circa 70%. The antero-posterior dorsal profile of the carapace was convex, and its posterior half was higher than the anterior. The apex of the carapace was slightly displaced posteriorly in most Glyptodon species, while in Glyptotherium and Glyptodon jatunkhirkhi it was at the center of the midline. The carapace of most species of Glyptodon is arched subtly, while Glyptotherium and Glyptodon jatunkhirkhi have a very arched back with a convex pre-iliac and concave post-iliac region, giving them a saddle-like overhang over the tail. Glyptodon osteoderms in the antero-lateral regions of the carapace are strongly ankylosed, giving them little flexibility, while in Glyptotherium they are less ankylosed and more flexible. The osteoderms of the caudal aperture (large conical osteoderms that protect the base of the tail) are more conical in Glyptodon and more rounded in Glyptotherium, though in the latter the anatomy of the caudal aperture osteoderms varies by sex, while in Glyptodon it varies by age. The caudal aperture is more vertically oriented in Glyptodon, while in Glyptotherium it is angled posteriorly.
Although frequently used to differentiate the two taxa, Glyptodon and Glyptotherium have similar osteoderm morphologies that differ only in several areas. Both genera have tall, thick osteoderms compared to those of many other glyptodontines such as Hoplophorus and Neosclerocalyptus. Glyptodon sometimes preserves a "rosette" pattern, where the osteoderm's central figure is surrounded by a row of peripheral figures, while other specimens lack rosettes completely. G. reticulatus varies from a complete rosette pattern to a reticular surface, which has convex central and peripheral figures. Glyptotherium, however, always preserves rosettes. The central and radial sulci are deeper and broader in Glyptodon (ca. 4–6 mm) than in Glyptotherium (ca. 1–2.4 mm). The osteoderms in Glyptodon and Glyptotherium have 5-11 peripheral figures, rugose exposed surfaces, and heights up to .
Osteoderms on the ventral side of the body were first mentioned by paleontologist Hermann Burmeister in 1866, who postulated that there was a ventral plastron like in turtles based on evidence of small armor in the dermis. This hypothesis has since been disproven, but in the early 2000s, the presence of osteoderms on Glyptodon's face, hind legs, and underside was confirmed in several species. The fossils with these characteristics were from the Pleistocene, the trait evolving in younger species like G. reticulatus. These small to medium-sized ossicles were embedded in the dermis and did not connect in a pattern.
Tail

Glyptodon had very primitive tail anatomy for a glyptodont, possessing eight or nine mobile caudal rings of fused, large, conical osteoderms. These enclosed the base of the tail, which terminated in a short caudal tube composed of two fused caudal rings. Caudal rings were composed of two or three rows of pentagonal osteoderms that transitioned from flat or slightly convex in the anterior rings to conical tubercles by the third caudal ring. The more posterior the rings, the larger they were, with the exception of the 2nd ring, which was both the largest and the 1st complete ring in the series, creating a cone-shaped tail. The distal scutes are larger, and their free margins are rounded, producing a fan-like shape. Most of the osteoderms of the distal row (some individuals preserving up to 12) bear prominent conical outlines, in stark contrast to more advanced glyptodontines like Doedicurus and Panochthus, which had completely fused tails that formed an inflexible mace or club. The caudal tube at the distalmost end of the tail is cylinder-shaped with smaller conical osteoderms and is proportionally stubbier in Glyptodon. In Glyptotherium, this caudal tube represents ca. 20% of the total length of the caudal armor, whereas in Glyptodon it represents 13% of the total length. In Glyptodon, the caudal armor length represents circa 30-40% of the carapace's total length, in contrast to Glyptotherium, where this value is greater at around 50%. For example, in specimen MCA 2015 of Glyptodon reticulatus, the terminal tube measured only long, in comparison to Glyptotherium texanum specimen UMMP 34 826's long tube.
Paleobiology
Digging abilities
Many armadillo species have digging capabilities, with large claws adapted for scraping dirt in order to make burrows or forage for food underground.Carter, T. S., & Encarnaçao, C. D. (1983). Characteristics and use of burrows by four species of armadillos in Brazil. Journal of Mammalogy, 64(1), 103-108. Much of the armadillo diet consists of insects and other invertebrates that live underground, in contrast to the herbivorous diets of Glyptodon and related genera. As a giant relative of armadillos, Glyptodon has had its fossorial capabilities researched on several occasions. Owen (1841) opposed the idea that the genus could dig, though pushback came from Nodot (1856) and Sénéchal (1865), who believed digging was possible. However, the evolution of a rigid carapace, as opposed to the flexible one of extant armadillos, as well as a weakly developed deltoid crest on the humerus (upper arm bone), provide evidence against fossorial hypotheses. The elbow had a great range of movement, as in digging cingulates, but this is more likely due to size adaptations.
Endocranial anatomy
Several complete skulls of Glyptodon enable the endocranial anatomy to be analyzed and compared to other well-preserved taxa like Doedicurus and Panochthus. The larger glyptodontines Glyptodon, Doedicurus, and Panochthus had a braincase volume of . The encephalization quotients of these taxa range from 0.12 to 0.4, lower than those of most modern armadillos (0.44-1.06) and corresponding to those of pampatheres. The brains of these glyptodontines had an extensive olfactory bulb that took up between 4.8 and 9.7% of the entire brain, while around two thirds of the brain was occupied by the cerebrum and the rest by the cerebellum. Overall, this is akin to the condition in other armadillos, though in armadillos the cerebrum is smaller relative to the cerebellum and the braincase's total volume. Deviating from the armadillos with their wide olfactory bulbs, glyptodontines and pampatheres have elongated and triangular olfactory systems. Several other neuroanatomical characteristics differ between glyptodontines and armadillos, such as the presence of a pronounced sulcus praesylvianus.
In general, living cingulates have smaller brains than anteaters and sloths for reasons unknown. Several theories have been proposed as to why, such as a shorter rearing phase of offspring, dedication of resources to the development of the carapace, and other biological and functional handicaps. Members of Cingulata also tend to have extremely low metabolisms, directing less energy toward the development of the brain's neurons. The pattern of large, well-protected bodies paired with reduced brain size is found in several other groups such as ankylosaurs and stegosaurs, two types of armored dinosaur. The carapace itself is considered a restrictive functional component, as it prohibited much neck movement and forced a reduced brain size. This reduction in turn resulted in weight loss in the skull, which had a great effect on the skulls of large-headed glyptodontines like Glyptodon.
Feeding and diet
Two main groups of glyptodontines can be distinguished by their feeding habits: narrow-muzzled Miocene propalaehoplophorids and wide-muzzled post-Miocene glyptodontines. The propalaehoplophorids were selective feeders, while the post-Miocene glyptodontines were bulk feeders (obtaining nutrients by consuming entire plants). Because of their body form and the fusion of their cervical vertebrae, glyptodontines would have needed to forage near the ground. Their craniomandibular joint limited their jaw to side-to-side movement. Glyptodon's jaws had large ridges of osteodentine that could effectively grind food particles before shearing and pushing them via the constant motion of the mandible. They had well-developed snout musculature, along with a mobile neck region that helped them secure food. The hyoid shows a robust design suggesting that Glyptodon had a large and robust tongue, which may have aided in food intake and processing.Zamorano, M., Scillato-Yané, G. J., Soibelzon, E., Soibelzon, L. H., Bonini, R., & Rodriguez, S. (2018). Hyoid apparatus of Panochthus sp. (Xenarthra; Glyptodontidae) from the Late Pleistocene of the Pampean region (Argentina). Comparative description and muscle reconstruction. Neues Jahrbuch für Geologie und Paläontologie Abhandlungen, 288, 205-219.
Like most other xenarthrans, glyptodontines had lower energy requirements than contemporary mammal groups. The stomach anatomy of glyptodontids is uncertain: being entirely herbivorous, they may have differed from modern, omnivorous armadillos, which have simple stomachs rather than the chambered ones of sloths. This, in conjunction with the proposed idea of aquatic grazing, may explain the isotopic signatures strongly associated with herbivory observed in Glyptodon fossils. However, aquatic grazing in Glyptodon has little support, though more backing for this hypothesis has been found in the related Glyptotherium. A carbon isotopic analysis of Glyptodon bones by França et al. (2015) found that it consumed a variety of both C3 plants and C4 grasses at lower latitudes while it ate exclusively C3 grasses at higher ones, implying an ecological shift based on climate. A 2012 analysis of isotopes supports this, but the isotopic results are not backed by morphological evidence. The isotopic conclusion would place Glyptodon as a mixed browser in most environments, similar to some other glyptodontines. The 2012 paper also noted that Glyptodon may have had a more flexible diet than previously imagined, occupying a mix of slightly wooded and slightly open habitats as implied by the consumption of C3 and C4 material. The C4 plants include groups like Poaceae, Cyperaceae, Asteraceae, and Amaranthaceae based on palynological evidence, meaning that Glyptodon likely ate C4 flowering plants in addition to C3 grasses. A mesowear analysis partially supported this conclusion, finding that mixed feeding caused blunt wear suggestive of a more abrasion-dominated diet. This is similar to the wear of Neosclerocalyptus, but in contrast to that of Hoplophorus, which had sharper wear. Neosclerocalyptus nevertheless favored more open environments, as found by isotopic studies. The mesowear angles of Glyptodon were noted to possess a bimodal distribution, implying a difference in diet between populations, sexes, or species.
Intraspecific combat
Glyptodonts are believed to have taken part in intraspecific fighting. It has been presumed that, since the tail of Glyptodon was very flexible and had rings of bony plates, it was used as a weapon in fights. Although its tail could be used for defense against predators, evidence suggests that the tail of Glyptodon was primarily for attacks on its own kind. A G. reticulatus fossil displays damage to the surface of its carapace. A study based on this specimen calculated that Glyptodon tails would have been able to generate enough force to break the carapace of another Glyptodon. This suggests that they likely fought each other to settle territorial or mating disputes through the use of their tails, much like male-to-male fighting among deer using their antlers.
Ontogeny
In 2009, a partial skeleton of a prenatal individual of Glyptodon was described, having been found inside the pelvic region of the carapace of an adult. The skeleton had been collected from Pleistocene-aged deposits in the Tarija Valley of Bolivia and included a partial skull, a partial mandible, and fragments of the scapulae and femora. It is the only known prenatal specimen of a glyptodontine and one of the most complete specimens of an immature Glyptodon, though dozens of isolated osteoderms from juveniles are known. The preserved skull measures only 51 mm long, but still bears many characteristics of Glyptodon, such as a subtriangular naris, a lateral margin on the naris that forms an acute 30-degree angle, oval infraorbital foramina, and several other traits. However, the mandible differs in that the ascending ramus is at a 90-degree angle, in contrast to the 60-70 degree angles preserved in adults. Interestingly, this mandibular morphology is similar to that of some specimens of Glyptotherium cylindricum.

In the osteoderms of juvenile Glyptodon reticulatus, the central figures are larger than the peripheral figures. These central figures are planar, sometimes even concave, and elevated compared to the peripherals. The peripherals in younger individuals are also less distinct and bear weakly marked or absent furrows (grooves that separate osteoderms). In adults, on the other hand, peripherals and central figures are similarly sized, distinct, and of similar heights.
Posture
Several interpretations of glyptodontine posture have been made, starting with those by Richard Owen in 1841 using comparative anatomy. Owen theorized that the phalanges were weight-bearing due to their short and broad morphology, in addition to evidence provided by the postcranial skeleton. It was also proposed that an upright posture was possible for Glyptodon, first by Sénéchal (1865), who stated that the tail could act as a counterbalance for the front half of the body as well as a means of supporting the legs. Linear measurements later taken to test this hypothesis found that bipedalism would have been possible. The patellar articulation with the femur suggests that rotation of the lower leg during knee extension, and potentially even knee-locking, were feasible.
Sexual dimorphism and group behavior
No evidence of sexual dimorphism in Glyptodon has been described, but it has been observed in the close relative Glyptotherium based on fossils found in Pliocene deposits in Arizona. In that genus, the caudal apertures of males and females differ in that the marginal osteoderms of males are much more conical and convex than those of females. Even in the carapaces of newborn Glyptotherium, the marginal osteoderms are either conical or flat, which enables their sex to be determined. No direct evidence of glyptodontine group behavior has been described, though some localities preserve juveniles, subadults, and adults of Glyptotherium together. Living armadillos are solitary and only come together during the mating season, with the number of offspring varying from one to as many as twelve depending on the species.
Distribution and paleoecology
Glyptodon is one of the most common Pleistocene glyptodontines, with a large range from the lowland Pampas to the Andes of Peru and Bolivia, some fossils being found at elevations reaching over above sea level. Only G. munizi is found in the early-middle Pleistocene, whereas other species are younger. G. reticulatus is specifically noted to be known from 60 ka to possibly as recently as 7 ka, though confirmed records only extend to 11 ka. The genus had a generalist diet, which allowed it to fill niches in areas inaccessible to grazing genera, with G. reticulatus representing up to 90% of the glyptodontine fossils in the Tarija Valley of Bolivia. In regions such as the Pampas, Mesopotamia, and Uruguay, however, an array of glyptodontines are known. Further evidence of Glyptodon's adaptability is found in the Pampas, which were semihumid and temperate from 30,000 to 11,000 years ago, alternating between rainy and dry seasons, over a large area consisting mostly of grasslands dotted with forests and mixed shrubbery. Temperatures in this region were lower than at present, with an estimated mean annual temperature in the Pampas compared to in Buenos Aires today. The Pampas specifically were a mix of semi-arid Patagonian and tropical Brazilian climates during the middle Pleistocene, before the expansion of the drier climates. This is in stark contrast to the Bermejo Formation of Formosa Province, Argentina, where the climate and fauna suggest a more arid environment with fewer grasslands.Zurita, A. E., M. Taglioretti, M. De los Reyes, C. Oliva, and F. Scaglia. 2014. First Neogene skulls of Doedicurinae (Xenarthra, Glyptodontidae): morphology and phylogenetic implications. Historical Biology 28:423–432. G. jatunkhirkhi specifically is known only from the Andean climate of the Eastern Cordillera in Bolivia, where it evolved a smaller size than the lowland species due to less support for larger masses. G. jatunkhirkhi is not the only example of this in Xenarthra, with species of Panochthus and Pleurolestodon also evolving smaller sizes in mountainous regions.
During the Ensenadan and Marplatan, Glyptodon coexisted with a variety of mammals unique to the period, such as the notoungulate Mesotherium, the canid Theriodictis, and a species of the giant bear Arctotherium. In areas such as Uruguay, fossils of Glyptodon have been unearthed alongside the contemporary glyptodontines Doedicurus, Neuryurus, and Panochthus; the armadillos Chaetophractus, Propraopus, and Eutatus; and the herbivorous pampathere Pampatherium. As for their distant relatives the ground sloths, the giant Megatherium is known, in addition to two species of the scelidothere Catonyx and the mylodontid genera Mylodon and Glossotherium. Some other groups are known, including the unusual litopterns Macrauchenia and Neolicaphrium, the notoungulate Toxodon, the massive proboscidean Notiomastodon, and the equids Equus neogeus and Hippidion. Various artiodactyls have been recorded, including the peccaries Catagonus and Tayassu pecari, the extinct deer Morenelaphus and Antifer, and the two llama genera Hemiauchenia and Lama. A variety of carnivorans have been recorded, such as the "saber-toothed" Smilodon, the bear Arctotherium bonariense, and the wolf-like canids Protocyon and Dusicyon. Rodents too have been found, such as Holochilus, Hydrochoerus (capybara), Cavia, and Microcavia. Notably, some of the youngest "terror bird" fossils, belonging to the genus Psilopterus, have been unearthed in the area.
Material previously assigned to Glyptodon in northeast Brazil has been reassigned to Glyptotherium, restricting the Brazilian distribution of Glyptodon to the southern provinces. However, two osteoderms with characteristics similar to those of Glyptodon have been found in Sergipe state in the northeast, suggesting that both genera occurred in this region during the Pleistocene. Glyptodon's northernmost locality comes from Pleistocene deposits in central Colombia, though many specimens formerly attributed to the genus come from the bordering country of Venezuela.
Predation and relationship with humans

Glyptodon coexisted with a variety of large predators, including the cat Smilodon, jaguars, and the canid Protocyon.Montalvo, C. I., Zárate, M. A., Bargo, M. S., & Mehl, A. (2013). Registro faunístico y paleoambientes del Cuaternario tardío, provincia de La Pampa, Argentina. Ameghiniana, 50(6), 554-570. The belief that these predators attacked Glyptodon is furthered by the discovery of fractured dorsal armor, which implies that Glyptodon had been in physical conflict with other animals. However, isotope analyses of collagen from Glyptodon and other mammals of the Pampas region by Bocherens et al. (2015) found little evidence of predators feeding on Glyptodon. Instead, it was found that Glyptodon, like other herbivorous mammals living in denser forests, made up a smaller portion of carnivore diets, whereas open grazers such as Lestodon and Macrauchenia were consumed more often. Furthermore, the appearance of secondary armor in the dermis of Glyptodon coincides with the arrival of North American predators in South America during the Great American Interchange. For this reason, it was hypothesized that the osteoderms developed as a defensive/offensive mechanism to combat the new arrivals.

Smilodon may have occasionally preyed upon glyptodontines, based on a skull of Glyptotherium texanum that bears distinctive elliptical puncture marks best matching those of the machairodont cat, indicating that the predator successfully bit into the skull through the armored cephalic shield. The Glyptotherium in question was a juvenile with a still-developing head shield, making it far more vulnerable to the cat's attack. Although George Brandes theorized in 1900 that it was possible, Smilodon canines could not have pierced the thick carapace osteoderms of glyptodontines. Brandes imagined that the evolution of thick glyptodontine armor and long machairodont canines was an example of coevolution, but Birger Bohlin argued in 1940 that the teeth were far too fragile to damage glyptodontine armor.
The coexistence of early hunter-gatherer humans and glyptodontines in South America was first hypothesized in 1881 based on fossil discoveries from the Pampas, and many fossil discoveries from the Late Pleistocene to Early Holocene have since been unearthed that exhibit human predation on glyptodontines. No fossils of Glyptodon preserving direct interactions have been unearthed, but it did inhabit this region alongside humans. At Pay Paso 1, an archaeological site in northwestern Uruguay, human-made spear points and other signs of culture were found associated with fossils of Glyptodon and the horse Equus. These were radiocarbon dated using collagen, supposedly to around 9,000 to 9,500 BP, but these dates cannot be verified. During this period, a wide array of xenarthrans inhabiting the Pampas were hunted by humans, with evidence demonstrating that the small () glyptodontine Neosclerocalyptus, the armadillo Eutatus, and the gigantic (2 ton) glyptodontine Doedicurus, the largest glyptodontine known, were hunted. The only other records of human predation from outside the Pampas are a partial carapace, which was eviscerated by humans, and several skulls preserving signs that the animals were dispatched with human tools; all were found in Venezuela. The discoveries there represent the first signs of human hunting on the skulls of glyptodontines. Hunters may have used the shells of dead animals as shelters in inclement weather.Politis, Gustavo G. and Gutierrez, Maria A. (1998) "Gliptodontes y Cazadores-Recolectores de la Region Pampeana (Argentina)" ("Glyptodonts and Hunter-Gatherers in the Pampas Region (Argentina)") Latin American Antiquity 9(2): pp. 111–134 (in Spanish)
Extinction

Glyptodon, along with all other glyptodonts, became extinct around the end of the Late Pleistocene as part of a wave of extinctions of most large mammals across the Americas.
Some evidence suggests that humans drove glyptodontines to extinction. Evidence from the Campo Laborde and La Moderna archaeological sites in the Argentine Pampas suggests that Glyptodon's relatives Doedicurus and Panochthus survived until the Early Holocene, coexisting with humans for a minimum of 4,000 years. This overlap supports models in which the South American Pleistocene extinctions resulted from a combination of climatic change and anthropogenic causes. These sites have been interpreted as places used for butchering megafauna (Megatherium and Doedicurus); however, some of the chronology has been problematic and controversial, due to poor preservation of the collagen used for dating. The extinction rates in South America during the late Pleistocene were the highest of any continent, with all endemic animals weighing over going extinct by the middle Holocene. This supports the idea of human hunting as a driver of the extinction of Glyptodon, as the arrival of humans around 16,000 years BP to such a formerly isolated continent may have caused extinction rates to rise.
The extinction of Glyptodon notably coincides with the end of the Antarctic Cold Reversal, a period of about 1,700 years during which temperatures dropped before spiking after its end at 12.7 ka. Many climatic fluctuations occurred during the late Pleistocene between humid and dry cycles, with Glyptodon preferring drier climates. Following the Antarctic Cold Reversal, temperatures rose and the climate became more consistently humid, which led C3 grasses to become increasingly replaced by C4 grasses and southern beech trees. These changes drove vulnerable, grazing-specialized forms like glyptodontines, toxodonts, and some ground sloths to extinction. Around 11.5 ka, temperatures peaked before again dropping, resulting in the extinction of several different genera of mammals, including some megafauna. Glyptodon, along with genera such as Glossotherium and Morenelaphus, was wiped out, though several other groups lived on for several thousand years after.
Breastfeeding

Breastfeeding, also known as nursing, is the process where breast milk is fed to a child. Breast milk may be fed directly from the breast, or may be pumped and fed to the infant. The World Health Organization (WHO) recommends that breastfeeding begin within the first hour of a baby's birth and continue as often as the baby wants. Health organizations, including the WHO, recommend breastfeeding exclusively for six months, meaning that no other foods or drinks, other than vitamin D, are typically given. The WHO recommends exclusive breastfeeding for the first 6 months of life, followed by continued breastfeeding with appropriate complementary foods for up to 2 years and beyond. Of the 135 million babies born every year, only 42% are breastfed within the first hour of life, only 38% of mothers practice exclusive breastfeeding during the first six months, and 58% of mothers continue breastfeeding up to the age of two years and beyond.
Breastfeeding has a number of benefits to both mother and baby that infant formula lacks. Increased breastfeeding to near-universal levels in low and medium income countries could prevent approximately 820,000 deaths of children under the age of five annually. Breastfeeding decreases the risk of respiratory tract infections, ear infections, sudden infant death syndrome (SIDS), and diarrhea for the baby, both in developing and developed countries. Other benefits have been proposed to include lower risks of asthma, food allergies, and diabetes. Breastfeeding may also improve cognitive development and decrease the risk of obesity in adulthood.
Benefits for the mother include less blood loss following delivery, better contraction of the uterus, and a decreased risk of postpartum depression. Breastfeeding delays the return of menstruation and, in very specific circumstances, fertility, a phenomenon known as lactational amenorrhea. Long-term benefits for the mother include decreased risk of breast cancer, cardiovascular disease, diabetes, metabolic syndrome, and rheumatoid arthritis. Breastfeeding is less expensive than infant formula, but its impact on mothers' ability to earn an income is not usually factored into calculations comparing the two feeding methods. It is also common for women to experience generally manageable symptoms such as vaginal dryness, De Quervain syndrome, cramping, mastitis, moderate to severe nipple pain, and a general lack of bodily autonomy. These symptoms generally peak at the start of breastfeeding but disappear or become considerably more manageable after the first few weeks.
Feedings may last as long as 30–60 minutes each as milk supply develops and the infant learns the suck-swallow-breathe pattern. However, as milk supply increases and the infant becomes more efficient at feeding, the duration of feeds may shorten. Older children may feed less often. When direct breastfeeding is not possible, expressing or pumping to empty the breasts can help mothers avoid plugged milk ducts and breast infection, maintain their milk supply, resolve engorgement, and provide milk to be fed to their infant at a later time. Medical conditions that do not allow breastfeeding are rare. Mothers who take certain recreational drugs should not breastfeed; however, most medications are compatible with breastfeeding. Current evidence indicates that it is unlikely that COVID-19 can be transmitted through breast milk.
Smoking tobacco and consuming limited amounts of alcohol and/or coffee are not reasons to avoid breastfeeding.
Breastfeeding physiology
Breast development starts in puberty with the growth of ducts, fat cells, and connective tissue. The ultimate size of the breasts is determined by the number of fat cells; breast size is not related to a mother's breastfeeding capability or the volume of milk she can produce. The process of milk production, termed lactogenesis, occurs in 3 stages. The first stage takes place during pregnancy, allowing for the development of the breast and the production of colostrum, the thick, early form of milk that is low in volume but rich in nutrition. The delivery of the baby and the placenta triggers the onset of the second stage of milk production, causing the milk to come in over the next several days. The third stage of milk production occurs gradually over several weeks and is characterized by a full milk supply that is regulated locally (at the breast), predominantly by the infant's demand for food. This differs from the second stage of lactogenesis, which is regulated centrally (in the brain) by hormone feedback loops that naturally occur after the placenta is delivered.
Although traditionally, lactation occurs following pregnancy, lactation may also be induced with hormone therapy and nipple stimulation in the absence of pregnancy.
Lactogenesis I and other changes in pregnancy
Changes in pregnancy, starting around 16 weeks gestational age, prepare the breast for lactation. These changes, collectively known as Lactogenesis I, are directed by hormones produced by the placenta and the brain, namely estrogen, progesterone, and prolactin, which gradually increase throughout the pregnancy and result in the structural development of the alveolar (milk-producing) tissue and the production of colostrum. While prolactin is the predominant hormone in milk production, progesterone, which is at high levels during pregnancy, blocks the prolactin receptors in the breast, thus inhibiting milk from "coming in" during pregnancy.
Many other physiologic changes occur under the control of progesterone and estrogen. These changes include, but are not limited to, dilation of blood vessels, increased blood flow to the uterus, increased availability of glucose (which subsequently is passed through the placenta to the fetus), and increased skin pigmentation, which results in darkening of the nipples and areola, formation of the linea nigra, and onset of melasma of pregnancy.
From about the 16th week of pregnancy, the breasts are able to begin producing milk. It is not unusual for small amounts of a straw-coloured fluid called colostrum to leak from the nipples during this relatively early stage.
Breast development throughout pregnancy may result in significant areola and areolar gland enlargement, erectile nipples, and/or nipple sensitivity.
Lactogenesis II
The third stage of labor describes the period between the birth of the baby and the delivery of the placenta, which normally lasts less than 30 minutes. The delivery of the placenta causes an abrupt drop-off of placental hormones. This drop, specifically in progesterone, allows prolactin to work effectively at its receptors in the breast, leading to an array of changes over the next several days that allow the milk to "come in"; these changes are known collectively as Lactogenesis II. Colostrum continues to be produced during these next few days as Lactogenesis II occurs. Milk may "come in" as late as five days after delivery; however, this process may be delayed due to a number of factors as described in the "Delay in milk 'coming in'" subsection below. Oxytocin, which signals the smooth muscle of the uterus to contract during pregnancy, labor, birth, and following delivery, is also involved in the process of breastfeeding. Oxytocin contracts the band-like layer of smooth muscle cells surrounding the milk ducts and alveoli to squeeze the newly produced milk through the duct system and out through the nipple. This process is known as the milk ejection reflex, or let-down. Because of oxytocin's dual activity at the breast and the uterus, breastfeeding mothers may also experience uterine cramping while breastfeeding for the first several days to weeks.
Lactogenesis III
Prolactin and oxytocin are vital for establishing milk supply initially; however, once the milk supply is well established, the volume and content of the milk produced are controlled locally. Although prolactin levels are higher on average among breastfeeding mothers, prolactin levels themselves do not correlate with milk volume. At this stage, production of milk is triggered by milk drainage from the breasts. The only way to maintain milk supply is to drain the breasts frequently. Infrequent or incomplete drainage of the breasts decreases blood flow to the alveoli and signals the milk-producing cells to produce less milk. Breast pumps are often used to drain the breasts when the infant is not feeding.
A condition called mastitis sometimes occurs in this stage, resulting from incomplete milk drainage. The Academy of Breastfeeding Medicine recommends against trying to "empty" the breasts, whether through pushing the baby to feed more or through overuse of a breast pump, to prevent causing milk oversupply.
Breast milk
The content of breast milk falls into two separate categories: the nutritional content and the bioactive content, that is, the enzymes, proteins, antibodies, and signaling molecules that assist the infant in ways beyond nutrition.
Nutritional content
The pattern of intended nutrient content in breast milk is relatively consistent. Breastmilk is made from nutrients in the mother's bloodstream and bodily stores. It has an optimal balance of fat, sugar, water, and protein that is needed for a baby's age appropriate growth and development. That being said, a variety of factors can influence the nutritional makeup of breastmilk, including gestational age, age of infant, maternal age, maternal smoking, and nutritional needs of the infant.
The first type of milk produced is called colostrum. The volume of colostrum produced during each feeding is appropriate for the size of the newborn stomach and is sufficient, calorically, for feeding a newborn during the first few days of life. Produced during pregnancy and the first days after childbirth, colostrum is rich in protein and Vitamins A, B12, and K, which support infants' growth, brain development, vision, immune systems, red blood cells, and clotting cascade. Breast milk also contains long-chain polyunsaturated fatty acids, which help with normal retinal and neural development.
The caloric content of colostrum is about 54 Calories/100 mL. The second type of milk is transitional milk, which is produced during the transition from colostrum to mature breast milk. As the breast milk matures over the course of several weeks, the protein content of the milk decreases on average. The caloric content of breastmilk reflects the caloric requirements of the infant, increasing steadily after 12 months. The caloric content of breastmilk in the first 12 months of breastfeeding is approximately 58–72 Calories/100 mL; comparatively, the caloric content after 48 months is approximately 83–129 Calories/100 mL.
When a mother has her full milk supply and is feeding her infant, the first milk to be expressed is called the foremilk. Foremilk is typically thinner and less rich in calories. The hindmilk that follows is rich in calories and fat.
If the mother is not deficient in vitamins, breast milk normally supplies the baby's needs, with the exception of Vitamin D. The CDC, National Health Service (UK), Canadian Paediatric Society, American Academy of Pediatrics, and American Academy of Family Physicians all agree that breast milk alone does not provide infants with an adequate amount of Vitamin D, and they therefore advise parents to supplement breastfed infants with 400 IU of Vitamin D daily. Providing this quantity of Vitamin D to breastfeeding infants has been shown to reduce rates of Vitamin D insufficiency (defined as 25-OH vitamin D < 50 nmol/L). However, there was insufficient evidence in the most recent Cochrane Review to determine whether this quantity reduced rates of Vitamin D deficiency (defined as 25-OH vitamin D < 30 nmol/L) or rickets. Term infants typically do not need iron supplementation. Delaying clamping of the cord at birth for at least one minute improves the infant's iron status for the first year. When complementary (solid) foods are introduced at about 6 months of age, parents should choose iron-rich foods to help maintain their children's iron stores.
Bioactive content
In addition to the nutritional benefits of breastmilk, breast milk also provides enzymes, antibodies, and other substances that support the infant's growth and development. The bioactive makeup of breastmilk also changes based on the needs of the infant; for example, when an infant is recovering from an upper respiratory infection, local signaling allows for increased passage of immune cells and proteins to aid the infant's immune system.
Produced during pregnancy and the first days after childbirth, colostrum is easy to digest and has laxative properties that help the infant pass early stools. This aids in the excretion of excess bilirubin, which helps to prevent jaundice. Colostrum also helps to seal the infant's gastrointestinal tract against foreign substances and germs, which may otherwise sensitize the baby to foods that the mother has eaten, and decreases the risk of diarrheal illness. Although the baby has received some antibodies (IgG) through the placenta, colostrum contains a substance that is new to the newborn: secretory immunoglobulin A (IgA). IgA attacks germs in the mucous membranes of the throat, lungs, and intestines, the areas most likely to come under attack. Additionally, colostrum and mature breast milk contain many antioxidant and anti-inflammatory enzymes and proteins that decrease the risk of gastrointestinal allergies to food, respiratory allergies to airborne particles like pollen, and other atopic diseases, such as asthma and eczema.
Process
Commencement
It is recommended that mothers initiate breastfeeding within the first hour after birth. Uninterrupted skin-to-skin contact and breastfeeding can begin immediately after birth and should continue for at least one hour. This period of infant–mother interaction during the immediate postpartum period, known generally as kangaroo care or the "golden hour", assists in mother–child bonding and is thought to encourage instinctual breastfeeding behavior in the infant. Newborns who are immediately placed on their mother's skin have a natural instinct to latch on to the breast and start nursing, typically within one hour of birth. Success with breastfeeding in this "golden hour" increases the likelihood of successful breastfeeding at discharge.
Skin-to-skin mother–baby contact should still occur even if the baby is born by Cesarean surgery. The baby is placed on the mother in the operating room or the recovery area. If the mother is unable to hold the baby immediately, a family member can provide skin-to-skin care until the mother is able.
Breast crawl
According to studies cited by UNICEF, babies naturally follow a process that leads to a first breastfeed. Shortly after birth, the infant relaxes and makes small movements of the arms, shoulders, and head. If placed on the mother's abdomen, the baby gradually inches towards the breast, a process called the breast crawl, and begins to feed. After feeding, it is normal for a baby to remain latched to the breast while resting; this is sometimes mistaken for lack of appetite. Absent interruptions, all babies follow this process. Rushing, such as picking up and moving the infant to the breast, or interrupting the process, such as removing the baby to be weighed, may complicate subsequent feeding. Activities such as weighing, measuring, bathing, needle-sticks, and eye prophylaxis should wait until after the first feeding.
Preterm or low-tone infants
Children who are born preterm (before 37 weeks), children born in the early term period (37 weeks to 38 weeks and 6 days), and children born with low muscle tone, such as those with chromosomal abnormalities like Down syndrome or neurological conditions like cerebral palsy, may have difficulty initiating breastfeeds immediately after birth. Late preterm (34 weeks to 36 weeks and 6 days) and early term infants are at increased risk for both breastfeeding cessation and complications of insufficient milk intake (e.g., dehydration, hypoglycemia, jaundice, and excessive weight loss). They are often expected to feed like term babies, but they have less strength and stamina to feed adequately.
By convention, such children are often fed on expressed breast milk or other supplementary feeds through tubes, supplemental nursing systems, bottles, spoons or cups until they develop satisfactory ability to suck and swallow breast milk. Regardless of feeding method chosen, human milk feedings, whether from the mother or a donor, are important in the brain development of premature infants, and the NICU having a standardized protocol for feeding is protective against dangerous gastrointestinal infections (necrotizing enterocolitis) in these infants. Frequent breastfeeding and/or small amounts of supplementation may be needed for successful outcomes; breast pumping and/or hand expression is often helpful in providing adequate stimulation to the mother's breasts.
Starting to breastfeed may be challenging for mothers of preterm infants, especially those born before 34 weeks, because their breasts may still be developing (in Lactogenesis I, see Breastfeeding Physiology). Additionally, mother–infant separation and the stressful environment of the NICU are also barriers to breastfeeding. Availability of a lactation specialist in the NICU can be helpful for mothers trying to establish their milk supply. Additionally, skin-to-skin (Kangaroo Care) has been shown to be safe and beneficial to both mother and baby. Kangaroo Care stabilizes newborn premature infants' vital signs, such as their heart rate, providing a naturally warm environment that helps them regulate their temperature. It is also beneficial to the mother, as it may improve the development of milk supply and be beneficial for mental health.
Timing
Newborn babies usually breastfeed 8 to 12 times every 24 hours, and they typically express hunger cues every one to three hours for the first two to four weeks of their lives.
A newborn has a small stomach capacity, approximately 20 ml. The amount of breast milk that is produced is timed to meet the infant's needs in that the first milk, colostrum, is concentrated but produced in only very small amounts, gradually increasing in volume to meet the expanding size of the infant's stomach capacity.
Many newborns will typically feed for 10 to 15 minutes on each breast, however feeds may last up to 45 minutes depending on infant wakefulness and efficiency.
It is important for parents to recognize the difference between nutritive and non-nutritive sucking. Nutritive sucking follows a slow, rhythmic pattern, with 1–2 sucks per swallow. Non-nutritive sucking is a faster-paced sucking pattern with few swallows, often observed at the beginning and/or the end of a feed. At the beginning of the feed, this pattern triggers milk let-down, while at the end of the feed, it may be a signal that the infant is tired or becoming relaxed as milk flow slows.
Duration and exclusivity
Numerous health organizations, including, but not limited to, the CDC, WHO, National Health Service, Canadian Pediatric Society, American Academy of Pediatrics, and American Academy of Family Physicians, recommend breastfeeding exclusively for six months following birth, unless medically contraindicated. Exclusive breastfeeding is defined as "an infant's consumption of human milk with no supplementation of any type (no water, no juice, no nonhuman milk and no foods) except for vitamins, minerals and medications." Supplementation with human donor breastmilk may be indicated in some specific cases, as discussed below.
After solids are introduced at around six months of age, continued breastfeeding is recommended. The American Academy of Pediatrics recommends that babies be breastfed at least until 12 months, or longer if both the mother and child wish. The World Health Organization's guidelines recommend "continue[d] frequent, on-demand breastfeeding until two years of age or beyond."
Extended breastfeeding means breastfeeding after the age of 12 or 24 months, depending on the source. In Western countries such as the United States, Canada, and Great Britain, extended breastfeeding is relatively uncommon and can provoke criticism.
In the United States, 22.4% of babies are breastfed for 12 months, the minimum amount of time advised by the American Academy of Pediatrics. In India, mothers commonly breastfeed for 2 to 3 years.
Supplementation
Supplementation is defined as the use of additional milk or fluid products to feed an infant, in addition to breastmilk, during the first 6 months of life. The Academy of Breastfeeding Medicine recommends only supplementing when medically indicated, as opposed to mixing use of formula and breastmilk for reasons that are not necessarily medical indications. Some medical indications for supplementation include low blood sugar, dehydration, excessive weight loss or poor gain, and jaundice in the infant; true low milk supply; severe nipple pain unrelieved by interventions; and medical contraindications to breastfeeding, as described below. Supplements can be delivered at the breast through a supplemental nursing system in order to stimulate the production of the mother's own milk and to preserve the breastfeeding relationship. Some parents may desire to supplement proactively if early signs of insufficient intake, such as decreased urination, dry mucous membranes, or persistent signs of hunger, are noticed. If these signs are noticed, it is important to have the mother-infant dyad evaluated by a breastfeeding specialist or pediatrician to determine the true cause of the symptoms and determine the need for supplementation. Often, these symptoms are caused by poor milk transfer at the breast, and can be solved with adjustments to the latch, but occasionally they may be caused by other processes, unrelated to breastfeeding, so evaluation is necessary. Supplementation with formula is associated with decreased rates of exclusive breastfeeding at 6 months, and overall decreased length of breastfeeding.
In terms of what to supplement with, the first choice is always the mother's own breastmilk, barring any medical contraindications to its use. The second-best option for supplementation is pasteurized human donor milk. Finally, specific formulas may be used for supplementation if neither maternal nor donor breastmilk is an option; one situation where this may be the case is infant metabolic disease, such as galactosemia. The Academy of Breastfeeding Medicine recommends that supplementation only be used when medically indicated and when overseen by a medical professional, such as a pediatrician or family physician, and after consultation with an IBCLC. Without sufficient breast stimulation, supplementation can reduce the mother's milk production, so pumping would be indicated in these cases if continued breastfeeding is desired.
Indications for use of donor breastmilk are very closely outlined by the American Academy of Pediatrics (AAP). Due to low availability and high cost of donor breastmilk, the AAP recommends prioritizing the use of the milk for infants born with a weight of less than 1500g (approximately 3lb 5oz), as it is helpful in decreasing rates of the severe intestinal infection, necrotizing enterocolitis, in this population.
Position
Effective positioning and technique for latching on are necessary to prevent nipple soreness and allow the baby to obtain enough milk.
Babies can successfully latch on to the breast from multiple positions. Each baby may prefer a particular position. The "football" hold places the baby's legs next to the mother's side with the baby facing the mother. Using the "cradle" or "cross-body" hold, the mother supports the baby's head in the crook of her arm. The "cross-over" hold is similar to the cradle hold, except that the mother supports the baby's head with the opposite hand. The mother may choose a reclining position on her back or side with the baby lying next to her.
No matter the position the parent-infant dyad finds most comfortable, there are a few components of every position which will help facilitate a successful latch. One key component is maternal comfort. The mother should be comfortable while breastfeeding, and should have her back, feet, and arms supported with pillows as necessary. Additionally, when starting the latch process, the infant should be aligned with their abdomen facing their mother, which can be remembered as "tummy-to-mummy," and with their hips, shoulders and head aligned. This alignment helps to facilitate proper, efficient swallowing mechanics.
Latching
Latching refers to how the baby fastens onto the breast while feeding.
Making use of anatomy and reflexes
Sebaceous glands called glands of Montgomery, located in the areola, secrete an oily fluid that lubricates and protects the nipple during latching. The visible portions of the glands can be seen on the skin's surface as small round bumps.
The rooting reflex is the baby's natural tendency to turn towards the breast with the mouth open wide. When preparing to latch, mothers should make use of this reflex by gently stroking the baby's philtrum, the area between the upper lip and the nose, with their nipple to induce the baby to open their mouth with a wide gape. One way to help the infant achieve a deep latch is to compress the breast tissue into a "U" or "hamburger shape," so that the infant can fit the breast tissue into their mouth. This is done by the mother placing her thumb and fingers in line with the infant's nose and mouth respectively and using this grip to compress the breast tissue.
Bringing the infant in to latch
If the newborn seems to need help in latching on, the mother should focus on helping the infant by bringing their chin to the breast first. This facilitates a deep, asymmetric latch, and also helps the infant extend their neck and tilt their forehead back to maintain this deep latch and ease the swallowing process.
Signs of a good, deep latch
In a good latch, a large amount of the areola, in addition to the nipple, is in the baby's mouth. The amount of areola visible on either side of the infant's mouth should be asymmetric, meaning most of the "bottom" of the areola should be in the infant's mouth while much more of the "top" of the areola remains visible. This position helps point the nipple toward the roof of the infant's mouth, helping the infant recruit more milk. The baby's lips should be flanged out. The neck should be extended to facilitate swallowing, and as such, the chin will be close to the breast while the forehead and nose are far from it. Another sign of a good latch is the contour of the infant's cheeks; the cheeks should be rounded all the way to the edge of the mouth, rather than dimpled or creased at the edge of the mouth. This is a good indicator of effective suck mechanics. Additionally, in order to achieve a deep latch, the infant's mouth must be open wide, preferably wider than 140 degrees.
Signs of a poor, shallow latch
In a poor, shallow latch, the infant may latch close to, or at, the nipple, which can cause the mother pain. While the infant is at the breast, the first indicators of a shallow latch are a largely visible areola outside the infant's mouth and a narrow infant mouth angle. Additional signs result from poor positioning as the infant comes toward the breast to latch. If the infant leads with their brow or forehead, they are likely to flex their neck; this mechanism of latching causes the nipple to point down and then hit the hard palate during sucking. From an external view, this manifests as the nose and forehead being close to the breast and the chin far from the breast. This neck flexion also obstructs the normal swallowing mechanism, preventing the infant from drinking efficiently. In addition to impairing swallowing, a shallow latch prevents the infant from adequately compressing the glandular tissue behind the nipple and stimulating milk flow; the infant may then apply more suction, which manifests externally as cheek dimpling, or sucking the cheeks in.
Let-down reflex
When the baby suckles, muscles in the breast squeeze milk towards the nipples. This is called the let-down reflex. Some women report no sensation, while others report a tingling feeling that is sometimes described as quite strong.
The baby may be seen to respond to the beginning of the flow of milk by changing from quick sucks to deep rhythmic swallows. Sometimes the let-down is so strong that the baby splutters and coughs and the mother may need to remove the baby from her breast for a short time until the flow becomes less forceful. Milk may also let-down unexpectedly when a mother hears her baby cry or even only thinks about the baby. Nursing pads may be made or purchased to absorb unexpected milk flows.
Problems with breastfeeding
Inverted nipples
Infants of mothers with inverted nipples can still achieve a good latch with perhaps a little extra effort. For some women, the nipple may easily become erect when stimulated. Other women may require modified breastfeeding techniques, and some may need extra devices, such as nipple shells, modified syringes, or breast pumps to expose the nipple. La Leche League and Toronto Public Health offer several techniques to use during pregnancy or even in the early days following birth that may help to bring a flat or inverted nipple out.
Use of pacifiers
The World Health Organization's Ten Steps to Successful Breastfeeding recommends total avoidance of pacifiers for breastfeeding infants. In 2016, a large review of studies reported that the use of a pacifier, whether beginning at birth or after lactation was established, did not significantly affect the duration of exclusive and partial breastfeeding up to four months of age. The CDC, however, currently (2022) reports that early use of pacifiers can have a negative effect on the success of breastfeeding, and suggests that pacifier use be delayed until breastfeeding is firmly established.
Ankyloglossia
Ankyloglossia, also called "tongue-tie", may cause shallow latch, poor milk transfer, and other problems with breastfeeding. There are two types of tongue-tie: an anterior tongue-tie occurs when a band of tissue, known as the frenulum, attaches the tongue to the base of the mouth, restricting the tongue's vertical movement and preventing the infant from pressing the breast and nipple into the soft palate. A posterior tongue-tie is a band of tissue that can only be felt on exam, and tends to impact breastfeeding less severely than its anterior counterpart. If it is determined that the inability to latch on properly is related to ankyloglossia, a simple surgical procedure to clip the frenulum can correct the condition. The Academy of Breastfeeding Medicine and the Australian Dental Association have raised concern over the growing trend of oral tie surgeries, due to evidence for benefit being low-quality, inconsistent, or unsupported.
Engorgement
Engorgement is the swelling and stretching of the breast tissue due to accumulation of fluid in the tissue surrounding and supporting the milk-producing cells and ducts. Engorgement most frequently occurs as milk "comes in" and during the weaning process. As milk is coming in, several processes occur. At the end of pregnancy, the blood vessels that supply the breast dilate, allowing fluid to leak into the tissue, or interstitial space. Additionally, the birth of an infant is followed by massive fluid shifts, both to offload excess fluid that had supplied oxygen and nutrients to the fetus through the placenta, which is no longer needed, and to supply additional fluid to the breasts to start the process of making milk. These fluid shifts often result in some of this excess fluid leaking into the breast tissue. Finally, milk "coming in" can create an uncomfortably full feeling, which, combined with the aforementioned fluid accumulation in the breast tissue, can cause severe pain. If breastfeeding is suddenly stopped, the breasts are likely to become engorged. Pumping small amounts to relieve discomfort helps to gradually train the breasts to produce less milk. There is presently no safe medication to prevent engorgement, but cold compresses and ibuprofen may help to relieve pain and swelling. Pain should go away with emptying of the breasts. If symptoms continue and comfort measures are not helpful, a blocked milk duct or infection may be present, and medical intervention should be sought.
Nipple pain
Although very common, nipple pain and nipple trauma (cracking, open sores) should not be normalized, as these are often signs of a shallow latch or other underlying problem that can be evaluated and fixed. In addition to shallow latch, other causes of nipple pain include, but are not limited to, skin infection or inflammation, blood vessel spasm or the equivalent of Raynaud Syndrome in the breast, mastitis, plugged ducts, and nipple blebs. Pain caused by a problem deep in the breast may also present with nipple pain due to the paths of nerves in the breast. In addition to the serious nature of many of these causes, nipple pain is a common reason for a mother stopping breastfeeding, so it is important that mothers experiencing nipple pain be evaluated.
Delay in milk "coming in"
While milk normally "comes in" by 3 days after birth, there are several reasons this may be delayed. Risk factors for this delay include maternal diabetes, stressful delivery, retained placenta, prolonged labor and birth by C-section. Mothers experiencing a delay in their milk coming in should consult with a lactation specialist and their pediatrician, as they may need to supplement with donor milk or formula to help the infant gain weight and pump to encourage milk to come in sooner and in greater volume.
Low milk supply
Breast milk supply increases in response to the baby's demand for milk, and decreases when milk is allowed to remain in the breasts. When considering a possibly low milk supply, it is important to distinguish between "perceived low milk supply" and "true low milk supply". Perceived low milk supply occurs when mothers, for a variety of reasons, believe that they are not making enough milk to feed their infant. These reasons may include fussiness, colic, preference for the bottle over the breast, long nursing duration, decreased sensation of breast fullness, and even decreased frequency of infant stools. In these cases, it is important to reassure the parent that appropriate infant weight gain is proof of adequate milk intake. Thus, if the infant is breastfeeding exclusively and gaining weight appropriately, the parent can be reassured that they are producing enough milk.
True low milk supply can be either primary (caused by medical conditions or anatomical issues in the mother), secondary (caused by not thoroughly and regularly removing milk from the breasts) or both. Primary causes may manifest prior to or during pregnancy, during labor, and even after birth. Secondary causes are far more common than primary ones. One study found that 15% of healthy first-time mothers had low milk supply 2–3 weeks after birth, with secondary causes accounting for at least two-thirds of those cases.
Poor milk intake is signaled by poor infant weight gain, signs of dehydration, and hypoglycemia. Poor milk intake can be caused by poor milk transfer by the infant or by true low milk supply by the mother. When the milk "comes in" appropriately, but is followed by decreased milk supply, this is most often caused by allowing milk to remain in the breasts for long periods of time, or insufficiently draining the breasts during feeds. If the baby is latching and swallowing well (signs of good milk transfer), but is not gaining weight as expected or is showing signs of dehydration, low milk supply in the mother can be suspected, and a lactation specialist should be consulted.
Newborn jaundice
More than 80% of newborns develop jaundice within several days of birth. Jaundice, or yellowing of the skin and eyes, occurs when bilirubin, a byproduct of the breakdown and recycling of red blood cells, builds up in the newborn's bloodstream faster than the liver can break it down and excrete it through the baby's urine and stool. By continuing to breastfeed frequently (at least 8–12 times per day), the infant's body can usually rid itself of the excess bilirubin by producing more urine and stool. However, in some cases, the infant may need additional treatments, such as UV light therapy or additional feedings (see Supplementation), to keep the condition from progressing into more severe problems.
There are two types of newborn jaundice related to breastfeeding.
Breastfeeding jaundice is quite common and may occur in the first week of life in conjunction with ongoing weight loss. The cause is thought to be low caloric intake. Formula-fed infants tend to lose less weight after birth compared to breastfed infants, supporting the hypothesis that breastfeeding jaundice is related to caloric intake rather than volume intake. Individual risk factors, such as breastfeeding, are not predictive of developing severe jaundice: Breastfeeding is a risk factor for severely high levels of bilirubin, but the risk factor is very common, and the risk of severely high bilirubin remains small.
Breast milk jaundice is jaundice that persists despite appropriate weight gain. This type of jaundice may start as breastfeeding jaundice and persist, or may not appear until after the baby has begun to gain weight, typically around 4–5 days old. It often persists beyond the second and third weeks of life. There is no single cause of breast milk jaundice; rather, the causes are multifactorial and frequently debated in the literature. The causes of breast milk jaundice include variations in bilirubin metabolism, genetic variations, and variations in breastmilk, including the harmless and helpful germs found naturally on the surface of the skin and in the breastmilk. Breast milk jaundice is usually not a reason to stop nursing. It is important to consult with a physician to determine when it may be necessary to test for other causes of jaundice that may require additional treatment, such as enzyme deficiencies or problems with the red blood cells (i.e., elliptocytosis, spherocytosis, hemolysis, glucose-6-phosphate dehydrogenase deficiency).
Weaning
Weaning is the process of replacing breast milk with other foods; the infant is fully weaned after the replacement is complete. Psychological factors affect the weaning process for both mother and infant, as issues of closeness and separation are very prominent. Unless a medical emergency necessitates abruptly stopping breastfeeding, it is best to gradually increase the period between feedings and/or eliminate feedings, to allow the breasts to adjust to the decreased demands without becoming engorged. Studies show that a large number of women discontinue breastfeeding early due to lack of workplace support for breastfeeding mothers. La Leche League advises parents to shift their children's focus at bedtime away from breastfeeding, as it is often the most difficult feeding for them to let go of.
If weaning is begun at 12 months or later it is not necessary to switch to infant formula or "toddler formula" as is sold commercially. At 12 months it is recommended that the baby be switched to whole cow's milk. Reduced-fat or skim milk generally is not appropriate before age 2 because it does not have enough fat or calories to promote early brain development.
If the mother was experiencing lactational amenorrhea, periods will begin to return with weaning, along with restored fertility.
Extended breastfeeding
Extended breastfeeding usually means breastfeeding beyond the age of 12 to 24 months, depending on the culture. The American Academy of Family Physicians states that "health outcomes for mothers and babies are best when breastfeeding continues for at least two years." The American Academy of Pediatrics recommends that mothers nurse for the first 12 months and "thereafter for as long as mother and baby desire." The World Health Organization recommends breastfeeding up to age 2 "or beyond."
Breast milk is known to contain lactoferrin (Lf), which protects the infant from infection caused by a wide range of pathogens. The amount of Lf in breast milk is related to the stage of lactation. One study examined Lf concentration in prolonged lactation from the first to the 48th month postpartum. Lf was at its highest level in colostrum, dropped to its lowest level during months 1–12 of lactation, and then increased significantly during months 13–24, approaching the concentration found in colostrum. Beyond 24 months the level dropped, though not significantly.
Professional breastfeeding support
Lactation consultants are trained to assist mothers in preventing and solving breastfeeding difficulties such as sore nipples and low milk supply. They commonly work in hospitals, physician or midwife practices, public health programs, and private practice. Lactation consultants earn their credential, International Board Certified Lactation Consultant (IBCLC), through the International Board of Lactation Consultant Examiners.
Breastfeeding support from a lactation consultant is associated with higher rates of any breastfeeding at 6 months but not at 1 month or 3 months post pregnancy based on a meta-analysis of studies conducted in the US and Canada. Peer support for breastfeeding has been found to be associated with higher rates of any breastfeeding at 1 month and 3 to 6 months and of exclusive breastfeeding at 1 month, but it is unrelated to breastfeeding outcomes past 6 months post pregnancy.
Contraindications to breastfeeding
Maternal contraindications
Medical conditions that prevent breastfeeding are fairly rare. Otherwise-healthy infants uniformly benefit from breastfeeding; however, extra precautions should be taken, or breastfeeding avoided, in certain circumstances, including some infectious diseases and medical conditions.
Maternal infections
HIV
A breastfeeding child can become infected with HIV. Factors such as the mother's viral load complicate breastfeeding recommendations for HIV-positive mothers. The World Health Organization highlights the possibility of breastfeeding by mothers who are on antiviral therapy and have undetectable viral loads, especially in areas where access to clean water is poor and where death from infectious diseases is common, citing low transmission rates when the mother is on antiviral therapy. It also recommends that national authorities in each country decide which infant feeding practice should be promoted by their maternal and child health services to best avoid HIV transmission from mother to child. However, the CDC continues to recommend against breastfeeding by HIV-positive mothers in the United States. Infant formula should be given only if it can be prepared and used safely.
Human T-lymphotropic virus (types I and II)
Human T-lymphotropic virus (HTLV) can be passed through breastmilk from mother to child. The worldwide rate of transmission through breastmilk is estimated to be 3.9–27%, and this risk is increased by a high maternal viral load and prolonged periods of breastfeeding. Current data suggest that while breastfeeding for less than six months does not independently increase the risk of HTLV-1 transmission, avoiding breastfeeding during that time does decrease the risk. As such, the CDC recommends against breastfeeding when mothers have HTLV type I or II. Recognizing the importance of breastfeeding in more resource-poor areas of the world, the World Health Organization recommends shortening the duration of breastfeeding, or avoiding breastfeeding where possible.
Hemorrhagic viral disease (Marburg virus, Ebola)
Mothers with Marburg virus or Ebola should not breastfeed their infants or feed them with expressed breastmilk.
Tuberculosis
Infants whose mothers have suspected, untreated tuberculosis (TB) should be isolated from their mothers to reduce the risk of transmission. These infants should not be breastfed until the mother has been treated appropriately for 2 weeks and is no longer contagious; however, they may be fed expressed breastmilk from their mother. Transmission of TB through breastmilk, in the absence of an isolated breast infection caused by the tuberculosis bacterium (Mycobacterium tuberculosis), has never been documented in the scientific literature. Mothers who do have an isolated breast infection caused by Mycobacterium tuberculosis, termed tuberculous mastitis, should not feed their infants their own breastmilk, even by bottle.
Herpes simplex
Herpes simplex virus (HSV) is the virus that causes genital herpes and oral cold sores, and it can be very dangerous to infants. The CDC advises continuing breastfeeding if there are no open or active lesions on the breast and any other lesions are covered.
Herpes zoster (chickenpox and shingles)
Varicella zoster is the virus responsible for chickenpox and shingles (also known as herpes zoster). The CDC advises that breastfeeding is safe to continue as long as the breasts are clear of lesions, also emphasizing that if pumping or hand expressing milk, proper hand-hygiene should be used to minimize transfer.
COVID-19 (no contraindication)
In May 2020, WHO and UNICEF stressed that the ongoing COVID-19 pandemic was not a reason to discontinue breastfeeding. They recommend that women should continue to breastfeed during the pandemic even if they have confirmed or suspected COVID-19 because evidence indicates that it is not likely that COVID-19 can be transmitted through breast milk. A study published in 2021 found that, while SARS-CoV-2 RNA may be found in some samples of breastmilk from recently infected mothers, the breastmilk does not contain infectious virus and is not considered a transmission risk factor. Mothers who have suspected or confirmed diagnoses of COVID-19 should thoroughly wash their hands and wear a well-fitting mask prior to breastfeeding their infant, or express breastmilk and feed the infant by bottle.
Substance use
Alcohol
Moderate alcohol consumption by breastfeeding mothers can significantly affect infants. Even one or two drinks, including beer, may reduce milk intake by 20 to 23%, leading to increased agitation and poor sleep patterns. Regular heavy drinking (more than two drinks daily) can shorten breastfeeding duration and cause issues in infants, such as excessive sedation, fluid retention, and hormonal imbalances. Additionally, higher alcohol consumption may negatively impact children's academic achievement.
When breastfeeding, alcohol may be consumed in moderation and does not require "Pumping-and-Dumping" (pumping and discarding breastmilk). Alcohol crosses from the blood to the breastmilk by diffusion. Thus, the concentration of alcohol in the breastmilk is approximately equal to the concentration in the maternal bloodstream at any given time. As the mother's liver processes the alcohol, more and more alcohol is pulled out of the breastmilk and back into the bloodstream. Thus, it is suggested to wait 2 hours after drinking before nursing or pumping. In the case of infrequent binge drinking, it has been shown that infants consume through breastmilk only a fraction of the alcohol their mothers have ingested. While a minute, clinically insignificant amount of alcohol may be absorbed into the infant's bloodstream, it is unlikely that this amount would cause any noticeable cognitive or neuromotor effects.
Marijuana
Data on marijuana use during breastfeeding are limited; in part because of this uncertainty, the CDC recommends against using marijuana or marijuana-containing products, including CBD, while breastfeeding. The main active ingredient in marijuana, tetrahydrocannabinol (THC), can be found in breastmilk anywhere from six days to more than six weeks after marijuana use. There is limited data on the long-term effects of this exposure on the infant; however, some studies have raised concern about delayed motor development in infants exposed to THC.
Tobacco
Mothers who smoke or use other tobacco products can breastfeed their infants, according to La Leche League, the CDC, and the Royal Women's Hospital (Australia). However, it is important to note that maternal tobacco use may decrease milk supply. Additionally, tobacco smoking, regardless of feeding method, increases the risk of SIDS and respiratory illnesses. Thus, decreasing or ceasing tobacco use helps minimize infants' tobacco exposure and maximize the benefits of breastfeeding. Tobacco-cessation products, such as nicotine patches, and other medications, like bupropion, can be used by breastfeeding mothers.
Other recreational drugs
Mothers who use recreational drugs, such as cocaine, methamphetamine, PCP, or heroin, should not breastfeed.
Medication use
Most medications are compatible with continued breastfeeding. Many medicines pass into breastmilk in small amounts; however, very few reach the infant in quantities large enough to have an effect. Several characteristics of a medication, including the size and pH of its molecule and how well it is absorbed in the GI tract, influence how much of it may reach, and ultimately be absorbed by, the infant. In addition to the effects on the infant, many medications are known to significantly suppress milk production, including pseudoephedrine, diuretics, and contraceptives that contain estrogen.
There are several resources to assist medical professionals in determining which medications are safe for pregnancy and breastfeeding. While patients are able to use these resources as well, they are targeted toward medical professionals. Patients should be encouraged to consult a lactation specialist or a medical provider trained in breastfeeding medicine if any concerns arise. Two helpful resources are listed below.
LactMed @ NIH (Drugs and Lactation Database (LactMed))
InfantRisk App (InfantRisk Center at Texas Tech University Health Sciences Center)
Pumping-and-dumping
"Pumping-and-dumping" is the concept of expressing breastmilk and discarding it due to a medication or substance "tainting" the breastmilk. It was once believed that drinking alcohol or taking any medications, even medicines like ibuprofen, required pumping-and-dumping. However, this is no longer the case. Pumping-and-dumping, or stopping breastfeeding altogether, is only required in very rare circumstances, such as with radioactive medications or chemotherapy.
If a parent is concerned with a possible milk contaminant, they can express and save the breastmilk until they are able to consult with a lactation specialist or another medical professional trained in breastfeeding medicine.
Infantile contraindications
Galactosemia
Galactosemia is a metabolic disorder that prevents the infant from breaking down galactose, which is one of the two components of lactose, a type of sugar found in milk. Lactose is also found in breastmilk, so infants with galactosemia should not breastfeed.
Methods
Expressed milk
A mother may express milk (remove milk from breasts) for storage and later use. Expression may occur manually with hand expression, or by using a breast pump.
Mothers express milk for multiple reasons. Expressing breast milk can maintain a mother's milk supply when mother and child are apart. A sick baby who is unable to nurse can take expressed milk through a nasogastric tube. Some babies are unable or unwilling to nurse. Maternal breastmilk is the food of choice for premature babies; these infants may be fed maternal milk through tubes, supplemental nursing systems, bottles, spoons or cups until they develop a satisfactory ability to suck and swallow breast milk. Some women donate expressed breast milk (EBM) to others, either directly or through a milk bank. This allows mothers who cannot breastfeed to give their baby the benefits of breast milk. While informally shared breastmilk does carry the nutritional benefits of breastmilk, it is most often not pasteurized or screened, and thus carries the risk of transmitting diseases or medications that are unsafe for infants. Parents considering directed or informal milk sharing should discuss this option with their doctor, and they should be familiar with the donor's medical history and milk-handling practices. Use of informally shared (unscreened, unpasteurized) milk from an anonymous donor is discouraged by the Academy of Breastfeeding Medicine.
Babies feed differently with artificial nipples than from a breast. With the breast, the infant's tongue massages the milk out rather than sucking, and the nipple does not go as far into the mouth. Drinking from a bottle takes less effort and the milk may come more rapidly, potentially causing the baby to lose desire for the breast. This is often referred to as nipple confusion or nipple preference. While some infants do experience this preference for the bottle, many infants do not and will be able to alternate between bottle and breast without issue.
"Exclusively expressing" and "exclusively pumping" are terms for a mother who exclusively feeds a baby expressed milk. Exclusively pumping is poorly studied in the literature, especially in recent years. However, from available evidence, it appears to be fairly uncommon, with only approximately 7% of study participants reporting exclusive pumping.
Storage of expressed breastmilk
Breastmilk may be stored for various amounts of time depending on storage temperature and conditions. The content and quality of expressed milk changes over time as it is stored, particularly when frozen. For example, there is a decrease in the ability of breastmilk to kill bacteria when it is stored in the refrigerator for more than 48 hours. Additionally, the quantity of fat, protein, and calories in breastmilk decreases when the milk is frozen for more than 3 months. While several components of breastmilk change over time, inflammatory factors (cytokines), maternal antibodies, and growth factors are thought to be stable for at least 6 months when the breastmilk is frozen. Storage guidelines are published by the CDC, La Leche League International, and the Academy of Breastfeeding Medicine.
Breastmilk storage containers
Expressed breastmilk can be stored in freezer storage bags, containers made specifically for breastmilk, a supplemental nursing system, or a bottle ready for use. Parents should avoid using storage containers that contain bisphenol A (BPA). Additionally, use of polyethylene containers has been shown to decrease the immune benefits of breastmilk, including its ability to kill bacteria and the maternal antibodies it contains, by up to 60%.
Shared nursing
It is not only a mother who may breastfeed a child. Parents may hire another person to do so (a wet nurse), or may share childcare with another mother (cross-nursing). Both of these were common throughout history. It remains popular in some developing nations, including those in Africa, for more than one woman to breastfeed a child. Shared breastfeeding is a risk factor for HIV infection in infants. Shared nursing can sometimes provoke negative social reactions in the English-speaking world.
Tandem nursing
It is possible for a mother to continue breastfeeding an older sibling while also breastfeeding a new baby; this is called tandem nursing. During the late stages of pregnancy, the milk changes to colostrum. While some children continue to breastfeed even with this change, others may wean. Most mothers can produce enough milk for tandem nursing, but the new baby should be nursed first for at least the first few days after delivery to ensure that it receives enough colostrum.
Breastfeeding triplets or more is a challenge given babies' varying appetites. Breasts can respond to the demand and produce larger milk quantities; mothers have breastfed triplets successfully.
Re-lactation and induced lactation
Re-lactation is the process of restarting breastfeeding. In developing countries, mothers may restart breastfeeding after weaning as part of an oral rehydration treatment for diarrhea. In developed countries, re-lactation is common after early medical problems are resolved, or because a mother changes her mind about breastfeeding.
Re-lactation is most easily accomplished with a newborn or with a baby that was previously breastfeeding; if the baby was initially bottle-fed, the baby may refuse to suckle. If the mother has recently stopped breastfeeding, chances are higher that the milk supply will return and be adequate. Although some mothers successfully re-lactate after months-long interruptions, success is higher for shorter interruptions.
Techniques to promote lactation include frequent attempts to breastfeed, extensive skin-to-skin contact with the baby, and frequent, long pumping sessions. Suckling may be encouraged with a tube filled with infant formula, so that the baby associates suckling at the breast with food. A dropper or syringe without the needle may be used to place milk onto the breast while the baby suckles. The mother should allow the infant to suckle at least ten times during 24 hours, and more often if the baby is interested: every two hours, whenever the baby seems interested, longer at each breast, and when the baby is sleepy, since a sleepy baby may suckle more readily. In keeping with increasing contact between mother and child, including increasing skin-to-skin contact, grandmothers should pull back and help in other ways. Later on, grandmothers can again provide more direct care for the infant.
These techniques require the mother's commitment over a period of weeks or months. However, even when lactation is established, the supply may not be large enough to breastfeed exclusively. A supportive social environment improves the likelihood of success. As the mother's milk production increases, other feeding can decrease. Parents and other family members should watch the baby's weight gain and urine output to assess nutritional adequacy.
A WHO manual for physicians and senior health workers citing a 1992 source states: "If a baby has been breastfeeding sometimes, the breastmilk supply increases in a few days. If a baby has stopped breastfeeding, it may take 1–2 weeks or more before much breastmilk comes."
Induced lactation, also called adoptive lactation, is the process of starting breastfeeding in a woman who did not give birth. This usually requires the adoptive mother to take hormones and other drugs to stimulate breast development and promote milk production. In some cultures, breastfeeding an adoptive child creates milk kinship that builds community bonds across class and other hierarchies.
Health effects
Support for breastfeeding is universal among major health and children's organizations. WHO states, "Breast milk is the ideal food for the healthy growth and development of infants; breastfeeding is also an integral part of the reproductive process with important implications for the health of mothers."
Breastfeeding is associated with a lowered risk of a number of diseases in both mothers and babies. The American Academy of Pediatrics reported that, compared with never-breastfed infants, infants exclusively breastfed for at least 3 months averaged about $400 in health care cost savings during the first year of life.
Baby
Early breastfeeding is associated with fewer nighttime feeding problems. Early skin-to-skin contact between mother and baby improves breastfeeding outcomes and increases cardio-respiratory stability. Some studies show that breastfeeding aids general health, growth and development in the infant. Infants who are not breastfed are at mildly increased risk of developing acute and chronic diseases, including lower respiratory infection, ear infections, bacteremia, bacterial meningitis, botulism, urinary tract infection and necrotizing enterocolitis. Breastfeeding may protect against sudden infant death syndrome, insulin-dependent diabetes mellitus, Crohn's disease, ulcerative colitis, childhood lymphoma, allergic diseases, digestive diseases, obesity, and childhood leukemia later in life, and may enhance cognitive development. The CDC reports that infants who are breastfed have reduced risks of ear infections, obesity, type 1 diabetes, asthma, SIDS, and lower respiratory and gastrointestinal infections.
It is hard, however, to distinguish the effect of breastfeeding per se from that of correlated socioeconomic factors (breastfeeding is more frequent in richer families with more education). Comparing breastfed and non-breastfed siblings within the same family drastically decreases the association between breastfeeding and long-term child well-being.
Growth
The average breastfed baby doubles its birth weight in 5–6 months. By one year, a typical breastfed baby weighs about times its birth weight. At one year, breastfed babies tend to be leaner than formula-fed babies, which improves long-run health.
The Davis Area Research on Lactation, Infant Nutrition and Growth (DARLING) study reported that breastfed and formula-fed groups had similar weight gain during the first 3 months, but the breastfed babies began to drop below the median beginning at 6 to 8 months and were significantly lower weight than the formula-fed group between 6 and 18 months. Length gain and head circumference values were similar between groups, suggesting that the breastfed babies were leaner.
Infections
Breast milk contains several anti-infective factors such as bile salt stimulated lipase (protecting against amoebic infections) and lactoferrin (which binds to iron and inhibits the growth of intestinal bacteria).
Exclusive breastfeeding until six months of age helps to protect an infant from gastrointestinal infections in both developing and industrialized countries. The risk of death due to diarrhea and other infections increases when babies are either partially breastfed or not breastfed at all. Infants who are exclusively breastfed for the first six months are less likely to die of gastrointestinal infections than infants who switched from exclusive to partial breastfeeding at three to four months.
During breastfeeding, approximately 0.25–0.5 grams per day of secretory IgA antibodies pass to the baby via milk. This is one of the important features of colostrum. The main target for these antibodies are probably microorganisms in the baby's intestine. The rest of the body displays some uptake of IgA, but this amount is relatively small.
Maternal vaccination while breastfeeding is safe for almost all vaccines. Additionally, the mother's immunity obtained by vaccination against tetanus, diphtheria, whooping cough and influenza can protect the baby from these diseases, and breastfeeding can reduce the rate of fever after infant immunization. However, smallpox and yellow fever vaccines increase the risk of infants developing vaccinia and encephalitis.
Several studies have suggested that breast milk can pass antibodies to the infant for as long as the child continues to nurse. The antibodies may be in the mother's system as a result of being ill, or they may be acquired by drinking milk from a mother who has recently been vaccinated for a particular disease. One small study of nursing mothers who had received the COVID-19 vaccine found that breastmilk continued to contain antibodies for as long as 80 days after vaccination.
Mortality
The World Health Organization reports that babies who receive no breast milk are almost six times more likely to die by the age of one month than those who are partially or fully breastfed. Access to healthcare is also a critical determinant of infant survival.
Childhood obesity
The protective effect of breastfeeding against obesity is consistent, though small, across many studies. A 2013 longitudinal study reported less obesity at ages two and four years among infants who were breastfed for at least four months.
Allergic diseases
In children who are at risk for developing allergic diseases (defined as at least one parent or sibling having atopy), atopic syndrome can be prevented or delayed through 4-month exclusive breastfeeding, though these benefits may not persist.
Backwash effect
The backwash effect in breastfeeding refers to the process where an infant's saliva flows back into the mother's breast during nursing. This backward flow may introduce the baby's saliva into the mammary gland, potentially prompting the mother's body to produce tailored immune responses in her breast milk to meet the infant's specific needs.
Other health effects
Breastfeeding may reduce the risk of necrotizing enterocolitis (NEC) in premature babies.
Breastfeeding or introduction of gluten while breastfeeding does not protect against celiac disease among at-risk children. Breast milk of healthy human mothers who eat gluten-containing foods presents high levels of non-degraded gliadin (the main gluten protein). Early introduction of traces of gluten in babies to potentially induce tolerance does not reduce the risk of developing celiac disease. Delaying the introduction of gluten does not prevent, but is associated with a delayed onset of the disease.
About 14 to 19 percent of leukemia cases may be prevented by breastfeeding for six months or longer. However, breastfeeding is also the primary route of mother-to-child transmission of HTLV-1, the virus that causes adult T-cell leukemia/lymphoma.
Breastfeeding is associated with a lower chance of developing diabetes mellitus type 1 in the offspring. Breastfed babies also appear to have a lower likelihood of developing diabetes mellitus type 2 later in life.
Breastfeeding may decrease the risk of cardiovascular disease in later life, as indicated by lower cholesterol and C-reactive protein levels in breastfed adult women. Breastfed infants have somewhat lower blood pressure later in life, but it is unclear how much practical benefit this provides.
A 1998 study suggested that breastfed babies have a better chance of good dental health than formula-fed infants because of the developmental effects of breastfeeding on the oral cavity and airway. It was thought that with fewer malocclusions, breastfed children may have a reduced need for orthodontic intervention. The report suggested that children with a well rounded, "U-shaped" dental arch, which is found more commonly in breastfed children, may have fewer problems with snoring and sleep apnea in later life. A 2016 review found that breastfeeding protected against malocclusions.
Breastfeeding duration has been correlated with child maltreatment outcomes, including neglect and sexual abuse.
Intelligence
It is unclear whether breastfeeding improves intelligence later in life. Several studies found no relationship after controlling for confounding factors like maternal intelligence (smarter mothers were more likely to breastfeed their babies). However, other studies concluded that breastfeeding was associated with increased cognitive development in childhood, although the cause may be increased mother–child interaction rather than nutrition.
Mother
Maternal bond
Oxytocin, a hormone released during breastfeeding, may play a role in maternal-infant attachment and bonding, potentially via decreased anxiety and stress.
Fertility
Exclusive breastfeeding usually delays the return of fertility through lactational amenorrhea: by suppressing ovulation, breastfeeding may prevent ovulation and regular periods for the duration of lactation, though the length of the non-ovulating period varies by individual. While it does not provide reliable birth control in general, this effect has been used as natural contraception, with greater than 98% effectiveness during the first six months after birth if specific nursing behaviors are followed.
Postpartum bleeding
During the third stage of labor, the time between the delivery of the baby and the passage of the placenta, and the fourth, the final stage of birth, excessive blood loss can endanger the life of the mother. When the newborn nurses, the mother secretes oxytocin, which causes the uterus to cramp and reduces blood loss. Nursing also causes the uterus to cramp for a number of days postpartum, helping it to return to its pre-pregnancy size. Some women, especially those who have given birth several times, report moderate to severe pain during nursing sessions for the first few days following delivery.
Weight retention
It is unclear whether breastfeeding causes mothers to lose weight after giving birth. The National Institutes of Health states that it may help with weight loss.
Chronic conditions
Breastfeeding is also associated with a lower risk of type 2 diabetes among mothers who practice it. Longer duration of breastfeeding is associated with reduced risk of hypertension.
For breastfeeding women, long-term health benefits include reduced risk of breast cancer, ovarian cancer, and endometrial cancer. According to the American Heart Association, breastfeeding also reduces the risk of maternal heart disease and stroke.
A 2011 review found it unclear whether breastfeeding affects the risk of postpartum depression. Later reviews have found tentative evidence of a lower risk among mothers who successfully breastfeed, though it is unknown whether breastfeeding decreases depression, or whether depression decreases breastfeeding.
Dysphoric milk ejection reflex
Dysphoric milk ejection reflex (D-MER) is a condition in which breastfeeding women develop negative emotions that begin just before the milk letdown reflex and last no more than a few minutes.
It may recur with every letdown, including unexpected letdowns when the baby is not feeding. It presents as an emotional reaction but may also produce physical feelings such as nausea. It is different from postpartum depression and other known psychological conditions. A 2019 study reported a prevalence rate of 9.1%. As of 2021, very little research had been done, and many health care providers and lactation practitioners remain barely able to recognize the syndrome. An October 2021 review of the literature published to that date suggested that the lack of up-to-date information "makes it necessary to educate mothers because educated mothers are usually better at handling postpartum situations if they are prepared in advance." There is as yet no medication to treat the symptoms, although women have reported finding it beneficial to learn that they are not alone and that the symptoms are not "just in their head".
Social factors
The majority of mothers intend to breastfeed at birth. Many factors can disrupt this intent. Research done in the US shows that information about breastfeeding is rarely provided by a woman's obstetrician during prenatal visits, and some health professionals incorrectly believe that commercially prepared formula is nutritionally equivalent to breast milk. Many hospitals have instituted practices that encourage breastfeeding; however, a 2012 survey in the US found that 24% of maternity services were still providing supplements of commercial infant formula as a general practice in the first 48 hours after birth. The Surgeon General's Call to Action to Support Breastfeeding attempts to educate practitioners.
Social support
A review found that when effective forms of support are offered to women, exclusive breastfeeding and duration of breastfeeding are increased. Characteristics of effective support include ongoing, face-to-face support tailored to women's needs. It may be offered by lay/peer supporters, professional supporters, or a combination of both. This review contrasts with another large review that looked at education programs alone, which found no conclusive evidence of an effect on the initiation of breastfeeding or on the proportion of women breastfeeding either exclusively or partially at 3 months and 6 months.
Positive social support in the essential relationships of new mothers plays a central role in the promotion of breastfeeding outside of the confines of medical centers. Social support can come in many forms, including tangible, affectionate, social interaction, and emotional and informational support. An increase in these forms of support has been shown to positively affect breastfeeding rates, especially among women with education below a high school level.
Some mothers that have used lactation rooms have taken to leaving sticky notes to not only thank the businesses that have provided them but to support, encourage, and praise the nursing mothers who use them.
In the social circles surrounding the mother, support is most crucial from the male partner, the mother's mother, and her family and friends. Research has shown that the closest relationships to the mother have the strongest impact on breastfeeding rates, while negative perspectives on breastfeeding from close relatives hinder its prevalence.
Mother – Adolescence is a risk factor for low breastfeeding rates, although classes, books and personal counseling (professional or lay) can help compensate. Some women fear that breastfeeding will negatively impact the look of their breasts. However, a 2008 study found that breastfeeding had no effect on the breasts; other factors did contribute to "drooping" of the breasts, such as advanced age, number of pregnancies and smoking behavior.
Partner – Partners may lack knowledge of breastfeeding and their role in the practice.
Wet nursing – Social and cultural attitudes towards breastfeeding in the African-American community are also influenced by the legacy of forced wet-nursing during slavery.
Maternity leave
Work is the most commonly cited reason for not breastfeeding. In 2012, Save the Children examined maternity leave laws, ranking 36 industrialized countries according to their support for breastfeeding. Norway ranked first, while the United States came in last. Maternity leave in the US varies widely, including by state. The United States does not mandate paid maternity leave for any employee; however, the Family and Medical Leave Act (FMLA) guarantees qualifying mothers up to 12 weeks of unpaid leave, although the majority of US mothers resume work earlier. A large 2011 study found that women who returned to work at or after 13 weeks after childbirth were more likely to predominantly breastfeed beyond three months.
Healthcare
Caesarean section
Women are less likely to start breastfeeding after caesarean delivery compared with vaginal delivery.
Breast surgery
Breastfeeding can generally be attempted after breast augmentation or reduction surgery; however, prior breast surgery is a risk factor for low milk supply.
A 2014 review found that women who have breast implant surgery were less likely to exclusively breastfeed; however, it was based on only three small studies and the reasons for the correlation were not clear.
A large follow-up study done in 2014 found a reduced rate of breastfeeding in women who had undergone breast augmentation surgery, however again the reasons were unclear. The authors suggested that women contemplating augmentation should be provided with information related to the rates of successful breastfeeding as part of informed decision making when contemplating surgery.
Prior breast reduction surgery is strongly associated with an increased probability of low milk supply due to disruption to tissues and nerves. Some surgical techniques for breast reduction appear to be more successful than others in preserving the tissues that generate and channel milk to the nipple. A 2017 review found that women were more likely to have success with breastfeeding with these techniques.
Medications
Breastfeeding mothers should inform their healthcare provider about all of the medications they are taking, including herbal products. Nursing mothers may be immunized and may take most over-the-counter drugs and prescription drugs without risk to the baby but certain drugs, including some painkillers and some psychiatric drugs, may pose a risk.
The US National Library of Medicine publishes "LactMed," an up-to-date online database of information on drugs and lactation. Geared to both healthcare practitioners and nursing mothers, LactMed contains over 450 drug records with information such as potential drug effects and alternative drugs to consider.
Some substances in the mother's food and drink are passed to the baby through breast milk, including mercury (found in some carnivorous fish), caffeine, and bisphenol A.
Medical conditions
Undiagnosed maternal celiac disease may shorten the duration of breastfeeding. Treatment with a gluten-free diet can increase its duration and restore it to the average for healthy women.
Mothers with all types of diabetes mellitus normally use insulin to control their blood sugar, as the safety of other antidiabetic drugs while breastfeeding is unknown.
Women with polycystic ovary syndrome, which is associated with some hormonal differences and obesity, may have greater difficulty with producing a sufficient supply to support exclusive breastfeeding, especially during the first weeks.
Ethnicity and socioeconomic status
The rates of breastfeeding in the African-American community remain much lower than in any other race, for a variety of proposed reasons. These include the legacy of wet nursing during slavery, higher rates of poor perinatal health, higher stress levels, less access to support, and less flexibility in the workplace. While breastfeeding rates in other groups rise as socio-economic class rises, rates in the African-American community remain consistently low regardless of socio-economic class.
There are also racial disparities in access to maternity care practices that support breastfeeding. In the US, primarily African-American neighborhoods are more likely to have facilities (such as hospitals and female healthcare clinics) that do not support breastfeeding, contributing to the low rate of breastfeeding in the African-American community. Comparing facilities in primarily African-American neighborhoods with those in primarily European-American neighborhoods, rates of practices that support breastfeeding were lower: limited use of supplements (13.1% compared with 25.8%) and rooming-in (27.7% compared with 39.4%).
Low-income mothers are more likely to have unintended pregnancies. Mothers whose pregnancies are unintended are less likely to breastfeed.
The combination of powdered formula with unclean water can be especially harmful to the health of babies. In the late 1970s, there was a boycott against Nestlé due to the great number of baby deaths attributed to formula. Dr. Michele Barry explains that breastfeeding is most imperative in impoverished environments due to the lack of access to clean water for preparing formula. A Lancet study in 2016 found that universal breastfeeding would prevent the deaths of 800,000 children as well as save $300 billion.
Social acceptance
Some women feel discomfort when breastfeeding in public. Public breastfeeding may be forbidden in some places, not addressed by law in others, and a legal right in others. Even given a legal right, some mothers are reluctant to breastfeed, while others may object to the practice.
It is estimated that around 63% of mothers across the world have breastfed in public. The media have reported a number of incidents in which workers or members of the public have objected to or forbidden women breastfeeding. Some mothers avoid the negative attention and choose to move to another location, but others have protested their treatment and have taken legal action or engaged in protests. Protests have included a public boycott of the offender's business and organizing a "nurse-in" or breastfeeding flash mob, in which groups of nursing mothers gather at the location where the complaint originated and nurse their babies at the same time. In response, some companies have apologized and instituted reforms.
The use of infant formula was thought to be a way for western culture to adapt to negative perceptions of breastfeeding. The breast pump offered a way for mothers to supply breast milk with most of formula feeding's convenience and without enduring possible disapproval of nursing. Some may object to breastfeeding because of the implicit association between infant feeding and sex. These negative cultural connotations may reduce breastfeeding duration.
Maternal guilt and shame are often affected by how a mother feeds an infant. These emotions occur in both bottle-feeding and breastfeeding mothers, although for different reasons. Bottle-feeding mothers may feel that they should be breastfeeding. Conversely, breastfeeding mothers may feel forced to feed in uncomfortable circumstances. Some may see breastfeeding as "indecent, disgusting, animalistic, sexual, and even possibly a perverse act." Advocates (known by the neologism "lactivists") use "nurse-ins" to show support for breastfeeding in public. One study that approached the subject from a feminist viewpoint suggested that both nursing and non-nursing mothers often feel maternal guilt and shame, with formula-feeding mothers feeling that they are not living up to the ideals of motherhood and nursing mothers concerned that they are transgressing "cultural expectations regarding feminine modesty." The authors advocate that women be provided with education on breastfeeding's benefits as well as problem-solving skills; however, there is no conclusive evidence that breastfeeding education alone improves initiation of breastfeeding or the proportion of women breastfeeding either exclusively or partially at 3 months and 6 months.
Location
In the United States, all fifty states, along with the District of Columbia, Puerto Rico and the U.S. Virgin Islands, have laws that allow a mother to breastfeed a baby in any public or private location. In that country, the Friendly Airports for Mothers (FAM) Act was signed into law in 2019, and the requirements went into effect in 2021. This law requires that all large and medium hub airports in the U.S. provide a private, non-bathroom lactation space in each terminal building.
Some commercial establishments in the U.S. provide breastfeeding rooms, although laws generally specify that mothers may breastfeed anywhere without requiring a special area. Despite these laws, many women in the United States continue to be publicly shamed or asked to refrain from breastfeeding in public. In the United Kingdom, the Equality Act 2010 makes the prevention of breastfeeding in any public place discrimination under the law. In Scotland, it is a criminal offense to attempt to prevent another person from feeding a child under 24 months in public.
While laws in the U.S. were passed in 2010 which required that nursing mothers who had returned to work be given a non-bathroom space to express milk and a reasonable break time to do so, as of 2016 the majority of American women still did not have access to both accommodations.
In 2014, newly elected Pope Francis drew worldwide commentary when he encouraged mothers to breastfeed babies in church. During a papal baptism, he said that mothers "should not stand on ceremony" if their children were hungry. "If they are hungry, mothers, feed them, without thinking twice," he said, smiling. "Because they are the most important people here."
Prevalence
Globally, about 38% of babies are exclusively breastfed during their first six months of life. In the United States, the rate of women beginning to breastfeed was 76% in 2009, increasing to 83% in 2015, with 58% still breastfeeding at 6 months, although only 25% were still breastfeeding exclusively. African-American women have persistently low rates of breastfeeding compared to White and Hispanic American women. In 2014, 58.1% of African-American women breastfed in the early postpartum period, compared to 77.7% of White women and 80.6% of Hispanic women. In 2019, 84.1% of U.S. women giving birth initiated breastfeeding, with 87.4%, 85.5%, 73.6%, 90.3% and 83.1% of Hispanic, White, African-American, Asian and Multiracial mothers initiating, respectively. Rates of initiation among African-American mothers varied widely by state, with lows under 53% and highs over 90%.
Breastfeeding rates in different parts of China vary considerably.
Rates in the United Kingdom were the lowest in the world in 2015 with only 0.5% of mothers still breastfeeding at a year, while in Germany 23% are doing so, 56% in Brazil and 99% in Senegal.
In Australia, for children born in 2004, more than 90% were initially breastfed. In Canada for children born in 2005–06, more than 50% were only breastfed and more than 15% received both breastmilk and other liquids, by the age of 3 months.
History
In the Egyptian, Greek and Roman Empires, women usually fed only their own children. However, breastfeeding began to be seen as something too common to be done by royalty, and wet nurses were employed to breastfeed the children of the royal families. This extended over time, particularly in western Europe, where noble women often made use of wet nurses. Lower-class women breastfed their infants and used a wet nurse only if they were unable to feed their own infant. Attempts were made in 15th-century Europe to use cow or goat milk, but these attempts were not successful. In the 18th century, flour or cereal mixed with broth were introduced as substitutes for breastfeeding, but this provided inadequate nutrition. The appearance of improved infant formulas in the mid-19th century and their increased use caused a decrease in breastfeeding rates, which accelerated after World War II, and for some in the US, Canada, and UK, breastfeeding was seen as uncultured. From the 1960s onwards, breastfeeding experienced a revival which continued into the 2000s, though negative attitudes towards the practice were still entrenched in some countries up to the 1990s.
Society and culture
Financial considerations
Breastfeeding is less costly than alternatives, but the mother generally must eat more food than would otherwise be necessary. In the US, the extra money spent on food (about US $ each week) is usually about half as much money as the cost of infant formula. According to the CDC, breastfeeding mothers need an extra 450 to 500 calories per day compared to their pre-pregnancy caloric intake.
Breastfeeding reduces health care costs and the cost of caring for sick babies. Parents of breastfed babies are less likely to miss work and lose income because their babies are sick. Looking at three of the most common infant illnesses, lower respiratory tract illnesses, otitis media, and gastrointestinal illness, one study compared infants that had been exclusively breastfed for at least three months to those who had not. It found that in the first year of life there were 2033 excess office visits, 212 excess days of hospitalization, and 609 excess prescriptions for these three illnesses per 1000 never-breastfed infants compared with 1000 infants exclusively breastfed for at least 3 months. However, in a study of over 140,000 newborns in the first month of life, exclusively breastfed newborns had higher hospital readmission rates than those exclusively formula fed, and those exclusively breastfed also had more neonatal outpatient visits compared to those exclusively formula fed.
Criticism of breastfeeding advocacy
There are controversies and ethical considerations surrounding the means used by public campaigns that attempt to increase breastfeeding rates, relating to the pressure put on women, the potential feelings of guilt and shame in women who do not breastfeed, and social condemnation of women who use formula. In addition, there is the moral question of to what degree the state or medical community can interfere with a person's self-determination: for example, in the United Arab Emirates the law requires women to breastfeed babies for at least two years and allows husbands to sue them if they do not.
It is widely assumed that if women's healthcare providers encourage them to breastfeed, those who choose not to will experience more guilt. Evidence does not support this assumption. On the contrary, a study on the effects of prenatal breastfeeding counselling found that those who had received such counselling and chosen to formula-feed denied experiencing feelings of guilt. Women were equally comfortable with their subsequent choices for feeding their infant regardless of whether they had received encouragement to breastfeed.
Preventing a situation where women are denied agency and/or stigmatized for formula use is also seen as important. In 2018, in the UK, a policy statement from the Royal College of Midwives said that women should be supported and not stigmatized, if after being given advice and information, they choose to formula feed.
Social marketing
Social marketing is a marketing approach intended to change people's behavior to benefit both individuals and society. When applied to breastfeeding promotion, social marketing works to provide positive messages and images of breastfeeding to increase visibility. Social marketing in the context of breastfeeding has shown efficacy in media campaigns.
Some oppose the marketing of infant formula, especially in developing countries. They are concerned that mothers who use formula will stop breastfeeding and become dependent upon substitutes that are unaffordable or less safe. Through efforts including the Nestlé boycott, they have advocated for bans on free samples of infant formula and for the adoption of pro-breastfeeding codes such as the International Code of Marketing of Breast-milk Substitutes by the World Health Assembly in 1981 and the Innocenti Declaration by WHO and UNICEF policy-makers in August 1990. Additionally, formula companies have spent millions internationally on campaigns to promote the use of formula as an alternative to mother's milk. Giving out gift bags that contain infant formula to women as they leave the hospital is also a marketing strategy. The U.S. Government Accountability Office has reported that women who receive formula samples at hospital discharge have lower breastfeeding rates than those who do not receive gift bags.
Baby Friendly Hospital Initiative
The Baby Friendly Hospital Initiative (BFHI) is a program launched by the World Health Organization (WHO) in conjunction with UNICEF in order to promote infant feeding and maternal bonding through certified hospitals and birthing centers. BFHI was developed as a response to the influence held by formula companies in private and public maternal health care. The initiative has two core tenets: the Ten Steps to Successful Breastfeeding and the International Code of Marketing of Breast-milk Substitutes. The BFHI has especially targeted hospitals and birthing centers in the developing world, as these facilities are most at risk from the detrimental effects of reduced breastfeeding rates. As of 2018, 530 hospitals across all 50 US states hold the "Baby-Friendly" title, and there are more than 20,000 "Baby-Friendly" hospitals worldwide in over 150 countries.
Representation on television
The first depiction of breastfeeding on television was in the children's program Sesame Street, in 1977. With few exceptions since that time, breastfeeding on television has either been portrayed as strange, disgusting, or a source of comedy, or it has been omitted entirely in favor of bottle feeding.
Religion
In some cultures, people who have been breastfed by the same woman are milk-siblings who are equal in legal and social standing to a consanguineous sibling. Islam has a complex system of rules regarding this, known as Rada (fiqh). Like the Christian practice of godparenting, milk kinship established a second family that could take responsibility for a child whose biological parents came to harm. "Milk kinship in Islam thus appears to be a culturally distinctive, but by no means unique, institutional form of adoptive kinship."
In Western countries, differences in breastfeeding practices have also been observed according to the affiliation or practice of Christian religions; unaffiliated and Protestant women exhibit higher rates of breastfeeding.
Workplace
Many mothers have to return to work a short time after their babies are born. In the U.S., about 70% of mothers with children younger than three years old work full-time, with one-third of mothers returning to work within 3 months and two-thirds returning within 6 months. Working outside of the home and full-time work are significantly associated with lower rates of breastfeeding and breastfeeding for a shorter duration.
According to the Centers for Disease Control and Prevention, support for breastfeeding in the workplace includes several types of employee benefits and services, including writing corporate policies to support breastfeeding women; teaching employees about breastfeeding; providing designated private space for breastfeeding or expressing milk; allowing flextime to support milk expression during work; giving mothers options for returning to work, such as remote work, part-time jobs, and extended maternity leave; providing on-site or near-site child care; providing high-quality breast pumps; and offering professional lactation consultants.
Programs to promote and assist nursing mothers have been found to help maintain breastfeeding. In the United States the CDC reports on a study that "examined the effect of corporate lactation programs on breastfeeding behavior among employed women in California [which] included prenatal classes, perinatal counseling, and lactation management after the return to work". They found that "about 75% of mothers in the lactation programs continued breastfeeding at least 6 months, although nationally only 10% of mothers employed full-time who initiated breastfeeding were still breastfeeding at 6 months."
Section 4207 of the United States' Patient Protection and Affordable Care Act (2010) amended the Fair Labor Standards Act and required employers to provide a reasonable break time for an hourly employee to breastfeed a child if the child is less than one year old. The employee must be allowed to breastfeed in a private place, other than a bathroom. The employer is not required to pay the employee during the break time. Employers with fewer than 50 employees are not required to comply with the law if doing so would impose an undue hardship to the employer based on its size, finances, nature, or structure of its business.
A 2016 study found: "1) federal law does not address lactation space functionality and accessibility, 2) federal law only protects a subset of employees, and 3) enforcement of the federal law requires women to file a complaint with the United States Department of Labor. To address each of these issues, we recommend the following modifications to current law: 1) additional requirements surrounding lactation space and functionality, 2) mandated coverage of exempt employees, and 3) requirement that employers develop company-specific lactation policies." As of 2019 the majority of women still did not have access to both accommodations. Working mother advocate and Entrepreneur writer Christine Michel Carter documented her experience pumping in a bathroom while working for an employer violating the Fair Labor Standards Act.
In 2022 the Providing Urgent Maternal Protections for Nursing Mothers Act, also known as the PUMP Act, became law in the United States. It mandates that salaried workers who are breastfeeding must have break time and a private space that is not a bathroom. However, employers who do not have 50 or more employees do not have to follow this law if following it would create an undue hardship because of expense or difficulty.
In Canada, the provincial human rights codes of British Columbia and Ontario protect against workplace discrimination due to breastfeeding. In British Columbia, employers are required to provide accommodation to employees who breastfeed or express breast milk. Although no specific requirements are mandated, accommodations suggested under the Human Rights Code include paid breaks (not including meal breaks), private facilities with clean running water, comfortable seating areas, and refrigeration equipment, as well as flexibility in terms of work-related conflicts. In Ontario, employers are encouraged to accommodate breastfeeding employees by providing additional breaks without fear of discrimination. Unlike in British Columbia, the Ontario Code does not include specific recommendations, and therefore leaves significant flexibility to employers.
Research
Breastfeeding research continues to assess prevalence, HIV transmission, pharmacology, costs, benefits, immunology, contraindications, and comparisons to synthetic breast milk substitutes. Factors related to the mental health of the nursing mother in the perinatal period have been studied. While cognitive behavior therapy may be the treatment of choice, medications are sometimes used. The use of therapy rather than medication reduces the infant's exposure to medication that may be transmitted through the milk. In coordination with institutional bodies, researchers are also studying the social impact of breastfeeding throughout history. Accordingly, strategies have been developed to foster an increase in breastfeeding rates in different countries.
| Biology and health sciences | Health and fitness | null |
12598694 | https://en.wikipedia.org/wiki/Callorhinchus | Callorhinchus | Callorhinchus, the plough-nosed chimaeras or elephantfish, are the only living genus in the family Callorhinchidae (sometimes spelled Callorhynchidae). A few extinct genera only known from fossil remains are recognized. Callorhinchus spp. are similar in form and habits to other chimaeras, but are distinguished by the presence of an elongated, flexible, fleshy snout, with a vague resemblance to a ploughshare. They are only found in the oceans of the Southern Hemisphere along the ocean bottom on muddy and sandy substrates. They filter feed, with small shellfish making up the bulk of their diet. The plough-nosed chimaera lays eggs on the ocean floor that hatch at around 8 months. They are currently not a target of conservation efforts; however, they may be susceptible to overfishing and trawling.
Plough-nose chimaeras are the only extant chimaeras that still inhabit relatively shallow neritic habitats, which are thought to have been the ancestral habitats for chimaeriforms up until the beginning of the Cenozoic. All other chimaera groups have since shifted their habitats into deeper waters.
Morphology
Plough-nose chimaeras range from about in total length. Their usual color is black or brown, often a mixture of the two. While the club-like snout makes elephantfish easy to recognize, they have several other distinctive features. They possess large pectoral fins, believed to aid in moving swiftly through the water. They also have two dorsal fins spaced widely apart, which help identify the species in the open ocean. In front of each pectoral fin is a single gill opening. Between the two dorsal fins is a spine, and the second dorsal fin is significantly smaller than the more anterior one. The caudal fin is divided into two lobes, the top one being larger. The eyes, set high on the head, are often green in color.
The snout is used to probe the sea bottom in search of the invertebrates and small fishes on which it preys. The remainder of the body is flat and compressed, often described as elongated. The mouth is just under the snout, and the eyes are located high on top of the head. They have broad, flat teeth adapted to this eating habit: two pairs in the upper jaw and one pair in the lower jaw.
In addition to its use for feeding, the "trunks" of the Callorhinchus fish can sense movement and electric fields, allowing them to locate their prey.
Phylogeny
Phylogenetically, they are the oldest group of living jawed chondrichthyans. They possess the same cartilaginous skeleton seen in sharks, but are considered holocephalans to distinguish them from the elasmobranchs, which contain the sharks and rays. Because of this, they provide a useful research organism for studying the early development of jaws. Among the chondrichthyans, Callorhinchus has the smallest genome, and it has therefore been proposed for whole-genome sequencing to represent the cartilaginous fishes. Their name comes from the fact that they share traits of both sharks and rays. They can be distinguished from sharks because they possess an operculum over their gill slits. Additionally, their skin is smooth, not covered in the tough scales characteristic of sharks. While the shark's jaw is loosely attached to the skull, the jaws of the family Callorhinchidae are fused to their skulls.
Distribution
Members of this genus are all found in subtropical and temperate waters in the Southern Hemisphere:
Callorhinchus callorynchus resides off southern South American waters, ranging from Tierra del Fuego north to Peru (in the Pacific) and southern Brazil (in the Atlantic). It is fished for year-round in the waters off of Brazil and Argentina.
Callorhinchus capensis is found in the oceans off southern Africa, including Namibia and South Africa.
Callorhinchus milii is found in the southwestern Pacific Ocean near the coasts of Australia and New Zealand in warmer, more temperate waters. Still, in these temperate waters, the elephantfish reside in the cooler continental shelf. During the spring and summer, C. milii migrates to estuaries and inshore bays to mate.
Physiology
The encephalization quotient is 1.1, compared to 6 in humans. Compared to humans, it has a larger cerebellum than forebrain. Its vision is very poor, and the electrical sensing capabilities of the snout are predominantly used to find food. Both its circulatory and endocrine systems are similar to those of other vertebrates, likely due to the early homologous structures the Callorhinchidae share with the other Chondrichthyes.
Diet
The Callorhinchidae are predominantly filter feeders, feeding on the sandy sediment of the ocean bottoms or continental shelves. The large protrusion of the snout aids in this task. Their diet consists of molluscs, more specifically, clams. Besides this, the Callorhinchidae have been shown to also feed on invertebrates such as jellyfish or small octopuses. They are considered to be incapable of eating bony fish, in that they cannot keep up with the teleosts' speed.
Reproduction
The Callorhinchidae are oviparous. Mating and spawning happen during the spring and early summer. Males possess the characteristic claspers near the pelvic fin that are seen in sharks, and these are used to transport the gametes. They migrate to more shallow waters to spawn. Also, a club-like protrusion from the head is used to hold onto the female during mating. The keratinous eggs are released onto the muddy sediment of the ocean bottom, usually in shallower water. At first, the egg is a golden yellow color, but this transforms into brown, and finally, black right before hatching. The average time in the egg is 8 months, and the embryo uses the yolk for all nourishment. Once hatched, the young instinctively move to deeper water. The egg cases are long and flat, and resemble pieces of seaweed.
Species
The family contains three extant species, all in the same genus:
Callorhinchus callorynchus Linnaeus, 1758 (Ploughnose chimaera, American elephantfish, or cockfish)
Callorhinchus capensis A. H. A. Duméril, 1865 (Cape elephantfish)
Callorhinchus milii Bory de Saint-Vincent, 1823 (Australian ghostshark)
A number of fossil species are also known, extending back into the mid-Cretaceous (Albian).
Fishery and conservation effort
Currently, no effort is being made to conserve the family Callorhinchidae, and the species are heavily fished for food in South America, making them susceptible to overfishing. The greatest risk to these species is trawl and net fishing, which catches large numbers quickly. Once caught, the fish are sold as whitefish or silver trumpeter fillets; the most common export destination is Australia. Under the IUCN, the three extant species of Callorhinchidae are all listed as least concern, as they remain common. Fishing quotas are in place in Australia and New Zealand, but these quotas are the extent of current conservation efforts. Rarely, individuals are caught for aquaria, but this is much less common than fishing for food.
| Biology and health sciences | Chimaeriformes | Animals |
12605522 | https://en.wikipedia.org/wiki/Condenser%20%28laboratory%29 | Condenser (laboratory) | In chemistry, a condenser is a laboratory apparatus used to condense vapors (that is, turn them into liquids) by cooling them down.
Condensers are routinely used in laboratory operations such as distillation, reflux, and extraction. In distillation, a mixture is heated until the more volatile components boil off, the vapors are condensed, and collected in a separate container. In reflux, a reaction involving volatile liquids is carried out at their boiling point, to speed it up; and the vapors that inevitably come off are condensed and returned to the reaction vessel. In Soxhlet extraction, a hot solvent is infused onto some powdered material, such as ground seeds, to leach out some poorly soluble component; the solvent is then automatically distilled out of the resulting solution, condensed, and infused again.
Many different types of condensers have been developed for different applications and processing volumes. The simplest and oldest condenser is just a long tube through which the vapors are directed, with the outside air providing the cooling. More commonly, a condenser has a separate tube or outer chamber through which water (or some other fluid) is circulated, to provide a more effective cooling.
Laboratory condensers are usually made of glass for chemical resistance, for ease of cleaning, and to allow visual monitoring of the operation; specifically, borosilicate glass to resist thermal shock and uneven heating by the condensing vapor. Some condensers for dedicated operations (like water distillation) may be made of metal. In professional laboratories, condensers usually have ground glass joints for airtight connection to the vapor source and the liquid receptacle; however, flexible tubing of an appropriate material is often used instead. The condenser may also be fused to a boiling flask as a single glassware item, as in the old retort and in devices for microscale distillation.
History
The water-cooled condenser, which was popularized by Justus von Liebig, was invented by Weigel, Poisonnier, and Gadolin, and perfected by Göttling, all in the late 18th century. Several designs that are still in common use were developed and became popular in the 19th century, when chemistry became a widely practiced scientific discipline.
General principles
Designing and maintaining systems that use condensers requires that the heat carried by the entering vapor never overwhelm the chosen condenser and cooling mechanism. The thermal gradients and material flows that are established are also critical; as processes scale from laboratory to pilot plant and beyond, condenser design becomes a precise engineering science.
Temperature
In order for a substance to condense from a pure vapor, the pressure of the latter must be higher than the vapor pressure of the adjacent liquid; that is, the liquid must be below its boiling point at that pressure. In most designs, the liquid is only a thin film on the inner surface of the condenser, so its temperature is essentially the same as that of the surface. Therefore, the primary consideration in the design or choice of a condenser is to ensure that its inner surface is below the liquid's boiling point.
Heat flow
As the vapor condenses, it releases the corresponding heat of vaporization, which tends to raise the temperature of the condenser's inner surface. Therefore, a condenser must be able to remove that heat quickly enough to keep the temperature low, at the maximum rate of condensation that is expected to occur. This concern can be addressed by increasing the area of the condensation surface, by making the wall thinner, and/or by providing a sufficiently effective heat sink (such as circulating water) on the other side of it.
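The heat balance above can be sketched numerically. In the example below, the ethanol heat of vaporization and water heat capacity are textbook approximations (not values from this article), and the condensation rate and allowed coolant temperature rise are illustrative assumptions.

```python
# Hedged sketch: sizing coolant flow from a heat balance.
# H_VAP_ETHANOL and CP_WATER are textbook approximations, not from this text.

H_VAP_ETHANOL = 846e3   # J/kg, near the normal boiling point (approx.)
CP_WATER = 4180         # J/(kg*K)

def coolant_flow(condensation_rate_kg_s, coolant_rise_K):
    """Water mass flow needed to absorb the released heat of condensation."""
    heat_load = condensation_rate_kg_s * H_VAP_ETHANOL   # W
    return heat_load / (CP_WATER * coolant_rise_K)       # kg/s

# Condensing 1 g/s of ethanol with a 10 K allowed coolant temperature rise:
flow = coolant_flow(1e-3, 10.0)
print(f"{flow * 1000:.1f} g/s of cooling water")  # about 20 g/s
```

The calculation shows why modest water flows suffice for laboratory-scale condensation: the coolant only has to carry away the heat load divided by its heat capacity and allowed temperature rise.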
Material flow
The condenser must also be dimensioned so that the condensed liquid can flow out at the maximum rate (mass over time) at which the vapor is expected to enter it. Care must also be taken to prevent the boiling liquid from entering the condenser as splatter from explosive boiling, or as droplets created when bubbles pop.
Carrier gases
Additional considerations apply if the gas inside the condenser is not pure vapor of the desired liquid, but a mixture with gases that have a much lower boiling point (as may occur in dry distillation, for example). Then the partial pressure of its vapor must be considered when obtaining its condensation temperature. For example, if the gas entering the condenser is a mixture of 25% ethanol vapor and 75% carbon dioxide (by moles) at 100 kPa (typical atmospheric pressure), the condensation surface must be kept below 48 °C, the boiling point of ethanol at 25 kPa.
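The ethanol figure quoted above can be reproduced from the Antoine vapor-pressure equation. The Antoine constants below are standard literature values for ethanol (log10 of pressure in mmHg, temperature in degrees Celsius, valid roughly from -57 to 80 C) and are an assumption on my part, not taken from this article.

```python
import math

# Hedged sketch: condensation temperature of ethanol at a given partial
# pressure, via the Antoine equation. Constants are literature values.
A, B, C = 8.20417, 1642.89, 230.300  # ethanol, P in mmHg, T in Celsius

def condensation_temp_C(partial_pressure_kPa):
    """Temperature at which ethanol's vapor pressure equals the given pressure."""
    p_mmHg = partial_pressure_kPa * 760.0 / 101.325
    return B / (A - math.log10(p_mmHg)) - C

# 25% ethanol by moles at 100 kPa total -> 25 kPa partial pressure:
t = condensation_temp_C(0.25 * 100.0)
print(f"{t:.1f} C")  # close to the ~48 C quoted in the text
```

The sketch illustrates the key point of this section: it is the *partial* pressure of the vapor, not the total pressure, that sets the required condensation temperature.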
Moreover, if the gas is not pure vapor, condensation will create a layer of gas with an even lower vapor content right next to the condensing surface, further lowering the condensation temperature. Therefore, the condenser's design must ensure that the gas is well mixed and/or that all of it is forced to pass very close to the condensation surface.
Liquid mixtures
Finally, if the input to the condenser is a mixture of two or more miscible liquids (as is the case in fractional distillation), one must consider the vapor pressure and the percentage of the gas for each component, which depends on the composition of the liquid as well as its temperature; and all these parameters typically vary along the condenser.
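For an ideal binary mixture, the composition dependence described above follows Raoult's law: each component contributes a partial pressure proportional to its mole fraction in the liquid. The pure-component vapor pressures in this sketch are illustrative placeholders, not data from the article.

```python
# Hedged sketch: Raoult's law for an ideal binary liquid mixture.
# The vapor above the liquid is enriched in the more volatile component.

def vapor_composition(x1, p1_pure, p2_pure):
    """Mole fraction of component 1 in the vapor over an ideal liquid."""
    p1 = x1 * p1_pure            # partial pressure of component 1
    p2 = (1.0 - x1) * p2_pure    # partial pressure of component 2
    return p1 / (p1 + p2)

# A 50/50 liquid where component 1 is twice as volatile as component 2:
y1 = vapor_composition(0.5, 20.0, 10.0)
print(round(y1, 3))  # 0.667 -- the vapor is richer in component 1
```

This enrichment of the vapor in the more volatile component is exactly what fractional distillation exploits, and why both the liquid composition and the temperature vary along the condenser.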
Coolant flow direction
Most condensers can be divided into two broad classes.
The concurrent condensers receive the vapor through one port and deliver the liquid through another port, as required in simple distillation. They are usually mounted vertically or tilted, with the vapor input at the top and the liquid output at the bottom.
The countercurrent condensers are intended to return the liquid toward the source of the vapor, as required in reflux and fractional distillation. They are usually mounted vertically, above the source of the vapor, which enters them from the bottom. In both cases, the condensed liquid flows back to the source under its own weight.
The classification is not exclusive, since several types can be used in both modes.
Historical condensers
Straight tube
The simplest type of condenser is a straight tube, cooled only by the surrounding air. The tube is held in a vertical or oblique position, and the vapor is fed through the upper end. The heat of condensation is carried away by convection.
The neck of the retort is a classical example of a straight tube condenser. However, this kind of condenser may also be a separate piece of equipment. Straight tube condensers are no longer widely used in research laboratories, but may be used in special applications and simple school demonstrations.
Still head
The still head is another ancient type of air-cooled condenser. It consists of a roughly globular vessel with an opening at the bottom, through which the vapor is introduced. The vapor condenses on the inner wall of the vessel, and drips along it, collecting at the bottom of the head and then draining through a tube to a collecting vessel below. A raised lip around the input opening prevents the liquid from spilling through it. As in the tube condenser, the heat of condensation is carried away by natural convection. Any vapor that does not condense in the head may still condense in the neck.
Still head type condensers are now rarely used in laboratories, and are usually topped by some other type of reflux condenser where most of the condensation takes place.
Modern condensers
Liebig
The Liebig condenser is the simplest design with circulating coolant, easy to build and inexpensive. It is named after Justus von Liebig, who perfected an earlier design by Weigel and Göttling and popularized it. It consists of two concentric straight glass tubes, the inner one being longer and protruding at both ends. The ends of the outer tube are sealed (usually by a blown glass ring seal), forming a water jacket, which is fitted with side ports near each end for coolant inflow and outflow. The ends of the inner tube, which carries the vapor and condensed liquid, are open.
Compared to the simple air-cooled tube, the Liebig condenser is more efficient at removing the heat of condensation and at maintaining the inner surface a stable low temperature.
West
The West condenser is a variant of the Liebig type, with a more slender design and with cone and socket joints. The fused-on, narrower coolant jacket may provide more efficient cooling relative to coolant consumption.
Allihn
The Allihn condenser or bulb condenser is named after Felix Richard Allihn (1854–1915). It consists of a long glass tube with a water jacket; a series of bulbs on the tube increases the surface area upon which the vapor constituents may condense. It is ideally suited to laboratory-scale refluxing; indeed, the term reflux condenser often refers to this type specifically.
Davies
A Davies condenser, also known as a double surface condenser, is similar to the Liebig condenser, but with three concentric glass tubes instead of two. The coolant circulates in both the outer jacket and the central tube. This increases the cooling surface, so that the condenser can be shorter than an equivalent Liebig condenser. According to Alan Gall, archivist of the Institute of Science and Technology, Sheffield, England, the 1981 catalog of Adolf Gallenkamp & Co. of London (makers of scientific apparatus) states that the Davies condenser was invented by James Davies, a director of the Gallenkamp company. In 1904, Gallenkamp was offering "Davies' Condensers" for sale. In 1920, Gallenkamp listed "J. Davies" as a director of the company.
Graham
A Graham or Graham's condenser has a coolant-jacketed spiral coil running the length of the condenser, serving as the vapor–condensate path. (It is not to be confused with the coil condenser.) The coiled tube provides a larger surface area for cooling, which makes this design attractive; its drawback is that rising vapor can push condensate back up the coil, where it re-evaporates, which can lead to flooding of the condenser.
It may also be called an Inland Revenue condenser, after the application for which it was developed.
Coil
A coil condenser is essentially a Graham condenser with an inverted coolant–vapor configuration. It has a spiral coil running the length of the condenser through which coolant flows, and this coolant coil is jacketed by the vapor–condensate path.
Dimroth
A Dimroth condenser, also known as a spiral condenser, named after Otto Dimroth, is somewhat similar to the coil condenser; it has an internal double spiral through which coolant flows such that the coolant inlet and outlet are both at the top. The vapors travel through the jacket from bottom to top. Dimroth condensers are more effective than conventional coil condensers. They are often found in rotary evaporators which may use a more elaborate arrangement with several spirals.
There also exists a version of Dimroth condenser with an external jacket, like in a Davies condenser, to further increase the cooling surface.
Cold finger
A cold finger is a cooling device in the form of a vertical tube that is cooled from the inside and is immersed in the vapor while supported only at its upper end. It may be flow-cooled, with both coolant ports at the top, or open-topped, with liquid or solid coolant simply placed inside. The vapor condenses on the finger and drips down from the free end, eventually reaching the collecting vessel. A cold finger may be a separate piece of equipment, or only part of a condenser of another type. Cold fingers are also used to condense vapors produced by sublimation, in which case the result is a solid that adheres to the finger and must be scraped off, or as a cold trap, where the liquid or solid condensate is not intended to return to the source of the vapor (often used to protect vacuum pumps and/or prevent venting of harmful gases).
Friedrichs
The Friedrichs condenser (sometimes incorrectly spelled Friedrich's) was invented by Fritz Walter Paul Friedrichs, who published a design for this type of condenser in 1912. It consists of a large water-cooled finger tightly fitted inside a wide cylindrical housing. The finger has a helical ridge along its length, so as to leave a narrow helical path for the vapor. This arrangement forces the vapor to spend a long time in contact with the finger.
Refluxing and fractional distillation columns
Vigreux
The Vigreux column, named after the French glass blower (1869–1951) who invented it in 1904, consists of a wide glass tube with multiple internal glass "fingers" that point downwards. Each "finger" is created by melting a small section of the wall and pushing the soft glass inwards. The vapor that enters from the lower opening condenses on the fingers and drips down from them. It is usually air-cooled, but may have an outer glass jacket for forced fluid cooling.
Snyder
The Snyder column is a wide glass tube divided into sections (usually 3 to 6) by horizontal glass partitions or constrictions. Each partition has a hole, into which seats a hollow glass bead with an inverted "teardrop" shape. Vigreux-like glass "fingers" limit the vertical motion of each bead. These floating glass stoppers act as check valves, closing and opening with vapor flow, and enhancing vapor-condensate mixing. A Snyder column can be used with a Kuderna-Danish concentrator to efficiently separate a low boiling extraction solvent such as methylene chloride from volatile but higher boiling extract components (e.g., after the extraction of organic contaminants in soil).
Widmer
The Widmer column was developed as a doctoral research project by student Gustav Widmer at ETH Zurich in the early 1920s, combining a Golodetz-type arrangement of concentric tubes and the Dufton-type rod-with-spiral core. It consists of four concentric glass tubes and a central glass rod, with a thinner glass rod coiled around it to increase the surface area. The two outer tubes (#3 and #4) form an insulating dead air chamber (shaded). Vapor rises from a boiling flask into space (1), proceeds up through the space between tubes #2 and #3, then down the space between tubes #1 and #2, and finally up between tube #1 and the central rod. Arriving at space (3), vapor is then directed via a distillation head (glass branching adapter) to cooling and collection.
A so-called modified Widmer column design was reported as being in wide use, but undocumented, by L. P. Kyrides in 1940.
Packed
A packed column is a condenser used in fractional distillation. Its main component is a tube filled with small objects to increase the surface area and the number of theoretical plates. The tube can be the inner conduit of some other type, such as the Liebig or Allihn. These columns can achieve theoretical plate counts of 1–2 per 5 cm of packed length.
A large variety of packing materials and shapes has been used, including beads, rings, or helices (such as Fenske helices, Raschig rings, or Lessing rings) of glass, porcelain, aluminum, copper, nickel, or stainless steel; nichrome and Inconel wire (akin to Podbielniak columns); stainless steel gauze (Dixon rings); etc. Specific combinations are known as Hempel, Todd, and Stedman columns.
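The number of theoretical plates needed for a given separation can be estimated with the Fenske equation (minimum plates at total reflux). The relative volatility and purity targets below are illustrative assumptions, not figures from this article.

```python
import math

# Hedged sketch: the Fenske equation for the minimum number of
# theoretical plates needed at total reflux for a binary separation.

def fenske_min_plates(x_dist, x_bot, alpha):
    """Minimum plates for distillate purity x_dist and bottoms purity x_bot,
    given relative volatility alpha of the light to the heavy component."""
    ratio = (x_dist / (1 - x_dist)) * ((1 - x_bot) / x_bot)
    return math.log(ratio) / math.log(alpha)

# 95% overhead purity, 5% light component left in the bottoms, alpha = 2.5:
n = fenske_min_plates(0.95, 0.05, 2.5)
print(f"{n:.1f} plates")
```

At the 1–2 plates per 5 cm quoted above, a separation like this example would need on the order of 20–35 cm of packed length, which is why packed columns of modest height can perform useful fractionations.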
Other
Spinning band distillation uses a spinning helical band (spun by a motor) inside a straight tube to increase the mixing of the upgoing vapor and downcoming reflux liquid.
Oldershaw columns have the same theory of operation as industrial plate columns. They are highly efficient for fractionating but they have significant holdup (the amount of liquid in the column during use) and their complexity makes them one of the more expensive types of glass column.
Straight tube, or air condensers, which are just a straight tube, can be used as a crude reflux column.
Pear columns consist of multiple bulbous segments shaped like inverted pears.
Alternative coolants
Condensers with forced-circulation cooling usually employ water as the cooling fluid. The flow may be open, from a tap to a sink, and driven only by the water pressure in the tap. Alternatively, a closed system may be used, in which the water is drawn by a pump from a tank, possibly refrigerated, and returned to it. Water-cooled condensers are suitable for liquids with boiling points well above 0 °C, and can easily condense vapours with boiling points much higher than that of the water.
Other cooling fluids may be used instead of water. Air with forced circulation can be effective enough for situations with high boiling point and low condensation rate. Conversely, low-temperature coolants, such as acetone cooled by dry ice or chilled water with antifreeze additives, can be used for liquids with low boiling point (like dimethyl ether, −23.6 °C). Open-topped cold fingers can use a wider variety of coolants since they allow solids to be inserted, and can be used with water ice, dry ice, and liquid nitrogen.
| Physical sciences | Phase separations | Chemistry |
2488868 | https://en.wikipedia.org/wiki/Tarbela%20Dam | Tarbela Dam | Tarbela Dam (, ) is an earth-filled dam along the Indus River in Pakistan's Khyber Pakhtunkhwa province. It is mainly located in Haripur Tehsil. It is about from the city of Swabi KPK, northwest of Islamabad, and east of Peshawar. It is the largest earth-filled dam in the world. The dam is high above the riverbed and its reservoir, Tarbela Lake, has a surface area of approximately .
The Tarbela Dam is located on the Indus River near the village of Tarbela in Bara, approximately 30 kilometers from the town of Attock. Positioned where the Indus River emerges from the foothills of the Himalayas and enters the Potwar Plateau, the dam features a reservoir for water storage. The average annual flow available is 101 billion cubic meters (3221 m3/sec). It stands 143 meters tall and covers an area of 243 square kilometers. It has a storage capacity of 11.9 billion cubic meters of water and has nine gates to control the outflow of water. The dam was completed in 1976 and was designed to utilize water from the Indus River for irrigation, flood control, and the generation of hydroelectric power by storing flows during the monsoon period while subsequently releasing stored water during the low flow period in winter. The installed capacity of the 4,888 MW Tarbela hydroelectric power stations will increase to 6,418 MW after completion of the planned fifth extension financed by Asian Infrastructure Investment Bank and the World Bank. Then, it will be the 12th largest hydroelectric dam in the world, for electricity production capacity.
Project description
The dam is at a narrow spot in the Indus River valley, named after the town of Tarbela in the Haripur District of the Hazara Division within the Khyber Pakhtunkhwa province of Pakistan.
The main dam wall, built of earth and rock fill, stretches from the island to river right, standing high. A pair of concrete auxiliary dams spans the river from the island to river left. The dam's two spillways are on the auxiliary dams rather than the main dam. The main spillway has a discharge capacity of and the auxiliary spillway, . Annually, over 70% of water discharged at Tarbela passes over the spillways and is not used for hydropower generation.
Five large tunnels were constructed as part of the outlet works. Hydroelectricity is generated from turbines in tunnels 1 through 3, while tunnels 4 and 5 were designed for irrigation use. Both of the irrigation tunnels are to be converted to hydropower tunnels to increase Tarbela's electricity-generating capacity. These tunnels were originally used to divert the Indus River while the dam was being constructed.
A hydroelectric power plant on the right side of the main dam houses 14 generators fed with water from outlet tunnels 1, 2, and 3. There are four 175 MW generators on tunnel 1, six 175 MW generators on tunnel 2, and four 432 MW generators on tunnel 3, for a total generating capacity of 3,478 MW.
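The per-tunnel figures above can be checked by summing the unit ratings:

```python
# Quick check that the per-tunnel generator counts add up to the
# 3,478 MW total stated for the original three power tunnels.
units = [(4, 175), (6, 175), (4, 432)]  # (count, MW each) for tunnels 1-3
total_mw = sum(count * mw for count, mw in units)
print(total_mw)  # 3478
```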
Tarbela Reservoir is long, with a surface area of . The reservoir initially stored of water, with a live storage of , though this figure has been reduced over the subsequent 35 years of operation to due to silting. The maximum elevation of the reservoir is above MSL and the minimum operating elevation is above MSL. The catchment area upriver of the Tarbela Dam is spread over of land largely supplemented by snow and glacier melt from the southern slopes of the Himalayas. There are two main Indus River tributaries upstream of the Tarbela Dam. These are the Shyok River, joining near Skardu, and the Siran River near Tarbela.
Background
Tarbela Dam was constructed as part of the Indus Basin Project after signing of the 1960 Indus Waters Treaty between India and Pakistan. The purpose was to compensate for the loss of water supplies of the eastern rivers (Ravi, Sutlej and Beas) that were designated for exclusive use by India per terms of the treaty. By the mid-1970s, power generation capacity was added in three subsequent hydro-electrical project extensions which were completed in 1992, installing a total of 3,478 MW generating capacity.
Construction
Construction of Tarbela Dam was carried out in three stages to meet the diversion requirements of the river. Construction was undertaken by the Italian firm Salini Impregilo.
Stage 1
In the first stage, the Indus River was allowed to flow in its natural channel, while construction works commenced on the right bank where a long and wide diversion channel was being excavated along with a high buttress dam that was also being constructed. Stage 1 construction lasted approximately 2½ years.
Stage 2
The main embankment dam and the upstream blanket were constructed across the main valley of the river Indus as part of the second stage of construction. During this time, water from the Indus river remained diverted through the diversion channel. By the end of construction works in stage 2, tunnels had been built for diversion purposes. Stage 2 construction took 3 years to complete.
Stage 3
Under the third stage of construction, works were carried out on the closure of the diversion channel and construction of the dam in that portion while the river was made to flow through diversion tunnels. The remaining portion of upstream blanket and the main dam at higher levels was also completed as part of stage 3 works, which were concluded in 1976.
Re-settlement of people affected by Tarbela Dam
An area of about 260 square kilometers and about of land was acquired for construction. The large reservoir of the dam submerged 135 villages, which resulted in the displacement of a population of about 96,000 people, many of whom were relocated to townships surrounding the Tarbela Reservoir or in adjacent higher valleys.
For the land and built-up property acquired under the Land Acquisition Act of 1894, a cash compensation of Rs 469.65 million was paid to those affected. In the absence of a national policy, the resettlement concerns of the people displaced by the Tarbela Dam were addressed on an ad hoc basis. As of 2011, many such people had still not been resettled or given land in compensation for their losses by the government of Pakistan, in accordance with its contractual obligations with the World Bank.
Lifespan
Because the source of the Indus River is glacial meltwater from the Himalayas, the river carries huge amounts of sediment, with an annual suspended sediment load of 200 million tons. Live storage capacity of Tarbela reservoir had declined more than 33.5 per cent to 6.434 million acre feet (MAF) against its original capacity of 9.679 MAF because of sedimentation over the past 38 years. The useful life of the dam and reservoir was estimated to be approximately 50 years. However, sedimentation has been much lower than predicted, and it is now estimated that the useful lifespan of the dam will be 85 years, to about 2060.
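The quoted storage loss can be restated as simple arithmetic. The linear average below is only a rough illustration; real sedimentation rates vary from year to year.

```python
# Hedged sketch: the storage-loss figures quoted above as arithmetic.
original_maf = 9.679   # original live storage, million acre feet
current_maf = 6.434    # live storage after 38 years of operation
years = 38

loss_fraction = (original_maf - current_maf) / original_maf
avg_rate = (original_maf - current_maf) / years  # MAF/year, crude average

print(f"{loss_fraction:.1%} of live storage lost")   # ~33.5%, as stated
print(f"{avg_rate:.3f} MAF/year average loss")
```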
Pakistan plans to construct several large dams upstream of Tarbela, including the Diamer-Bhasha Dam. Upon completion of the Diamer-Bhasha dam, sediment loads into Tarbela will be decreased by 69%.
Project benefits
In addition to fulfilling the primary purpose of the dam, i.e., supplying water for irrigation, Tarbela Power Station has generated 341.139 billion kWh of hydro-electric energy since commissioning. A record annual generation of 16.463 billion kWh was recorded during 1998–99. Annual generation during 2007–08 was 14.959 billion kWh while the station shared peak load of 3702 MW during the year, which was 23.057% of total WAPDA system peak.
Tarbela-IV Extension Project
Tarbela dam extension-IV was planned in June 2012, and PC-1 was developed for the project. US ambassador Richard Olson offered aid for construction of this project during his visit to Pakistan in March 2013. In September 2013, Pakistan's Water and Power Development Authority signed a Rs. 26.053 billion contract with Chinese firm Sinohydro and Germany's Voith Hydro for executing civil works on the 1,410 MW Tarbela-IV Extension Project. Construction commenced in February 2014, and was completed in February 2018.
This project was constructed on Tunnel No. 4 of Tarbela Dam. It consists of three turbine-generator units, each with a capacity of 470 MW. The project is expected to provide an average of 3.84 billion units of electricity annually to the National Grid. It is intended to help supplement electricity supply during the high-demand summer months.
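The installed capacity and expected annual output quoted above imply a capacity factor of roughly 31%, consistent with a plant run hardest during the high-demand summer months. This sketch simply restates those two figures as a ratio.

```python
# Hedged sketch: implied capacity factor of the Tarbela-IV extension,
# from the stated 1,410 MW capacity and ~3.84 billion kWh/year output.
capacity_mw = 1410
annual_gwh = 3840          # 3.84 billion kWh = 3,840 GWh
hours_per_year = 8760

capacity_factor = (annual_gwh * 1000) / (capacity_mw * hours_per_year)
print(f"{capacity_factor:.0%}")  # roughly 31%
```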
Annual benefits of the project were estimated at Rs. 30.7 billion. On an annual basis, over 70% of water passing through Tarbela is discharged over spillways, while only a portion of the remaining 30% is used for hydropower generation.
The Water and Power Development Authority in Pakistan says the third and last unit at its 1,410-MW Tarbela 4th Extension Hydropower Project has been synchronized with the National Grid. With this extension, the installed capacity of the Tarbela Hydel Power Station has increased to 4,888 MW.
Financing
The project's cost was initially estimated to be $928 million, but the cost was revised downwards to $651 million. The World Bank had agreed to provide an $840 million loan for the project in June 2013.
The loan had two components: the first is a $400 million International Development Association loan, which will be lent as a concessional loan at low interest rates. The second is a $440 million loan from the World Bank's International Bank for Reconstruction and Development. Pakistan's Water and Power Development Authority was to provide the remaining $74 million required for construction, before the project's cost was downwardly revised by $277 million. Interest costs for the loans are estimated at $83.5 million.
Because costs were revised downward from $928 million to $651 million, the World Bank permitted Pakistani officials to expedite completion of the project by 8 months at a cost of an additional $51 million. Pakistani officials were also permitted to divert $126 million towards the Tarbela-V Extension Project.
Tarbela-V Extension Project
The Tarbela Dam was built with five original tunnels, with the first three dedicated to hydropower generation, and the remaining two slated for irrigation use. The fourth phase extension project uses the first of the two irrigation tunnels, while the fifth phase extension will use the second irrigation tunnel. Pakistan's Water and Power Development Authority sought expressions of interest for the Tarbela-V Extension Project in August 2014, and was given final consent for construction in September 2015.
The hydropower project of tunnel 5 has two major components: power generation facilities and power evacuation facilities. The major works included under the project are modifications to tunnel 5 and building a new power house and its ancillaries to generate about 1,800GWh of power annually, a new 50 km of 500kV double-circuit transmission line from Tarbela to the Islamabad West Grid Station for power evacuation, and a new 500kV Islamabad West Grid Station.
Construction commenced in August 2021 and will require an estimated 3.5 years for completion. The project will require the installation of three turbines with a capacity of 510 MW each in Tarbela's fifth tunnel which was previously dedicated to agricultural use. Upon completion, the total power generating capacity of Tarbela Dam will increase to 6,418 MW.
Financing
In November 2015, the World Bank affirmed that it would finance at least $326 million of the project's estimated $796 million cost which includes $126 million of funding that was diverted from the $840 million fourth phase extension project after costs for that project were revised downwards. In September 2016, the World Bank approved an additional financing of $390 million for the fifth extension hydropower project of Tarbela dam that will support the scaling up of the power generation capacity by adding 1,530 megawatts to the existing tunnel 5.
The project will be financed by the International Bank for Reconstruction and Development (IBRD), with a variable spread and 20-year maturity, including a six-year grace period. This will be the first World Bank-supported project in South Asia to be jointly financed with the Asian Infrastructure Investment Bank (AIIB) which will be providing $300m and the Government of Pakistan $133.5m. The total cost of the project is $823.5m.
| Technology | Dams | null |
2490859 | https://en.wikipedia.org/wiki/Flight%20management%20system | Flight management system | A flight management system (FMS) is a fundamental component of a modern airliner's avionics. An FMS is a specialized computer system that automates a wide variety of in-flight tasks, reducing the workload on the flight crew to the point that modern civilian aircraft no longer carry flight engineers or navigators. A primary function is in-flight management of the flight plan. Using various sensors (such as GPS and INS often backed up by radio navigation) to determine the aircraft's position, the FMS can guide the aircraft along the flight plan. From the cockpit, the FMS is normally controlled through a Control Display Unit (CDU) which incorporates a small screen and keyboard or touchscreen. The FMS sends the flight plan for display to the Electronic Flight Instrument System (EFIS), Navigation Display (ND), or Multifunction Display (MFD). The FMS can be summarised as being a dual system consisting of the Flight Management Computer (FMC), CDU and a cross talk bus.
The modern FMS was introduced on the Boeing 767, though earlier navigation computers did exist. Now, systems similar to FMS exist on aircraft as small as the Cessna 182. In its evolution an FMS has had many different sizes, capabilities and controls. However certain characteristics are common to all FMSs.
Navigation database
All FMSs contain a navigation database. The navigation database contains the elements from which the flight plan is constructed. These are defined via the ARINC 424 standard. The navigation database (NDB) is normally updated every 28 days, in order to ensure that its contents are current. Each FMS contains only a subset of the ARINC / AIRAC data, relevant to the capabilities of the FMS.
The NDB contains all of the information required for building a flight plan, consisting of:
Waypoints/Intersection
Airways
Radio navigation aids including distance measuring equipment (DME), VHF omnidirectional range (VOR), non-directional beacons (NDBs) and instrument landing systems (ILSs).
Airports
Runways
Standard instrument departure (SID)
Standard terminal arrival (STAR)
Holding patterns (normally only as part of IAPs, although they can be entered at the command of ATC or at the pilot's discretion)
Instrument approach procedure (IAP)
Waypoints can also be defined by the pilot(s) along the route, or created by reference to existing fixes (e.g. a VOR, NDB, ILS, airport or waypoint/intersection).
Flight plan
The flight plan is generally determined on the ground before departure, either by the pilot for smaller aircraft or by a professional dispatcher for airliners. It is entered into the FMS either by typing it in, selecting it from a saved library of common routes (Company Routes), or via an ACARS datalink with the airline dispatch center.
During preflight, other information relevant to managing the flight plan is entered. This can include performance information such as gross weight, fuel weight and center of gravity. It will include altitudes including the initial cruise altitude. For aircraft that do not have a GPS, the initial position is also required.
The pilot uses the FMS to modify the flight plan in flight for a variety of reasons. Significant engineering effort goes into minimizing the keystrokes required, in order to reduce pilot workload in flight and to avoid presenting confusing or hazardously misleading information.
The FMS also sends the flight plan information for display on the Navigation Display (ND) of the flight deck instruments Electronic Flight Instrument System (EFIS). The flight plan generally appears as a magenta line, with other airports, radio aids and waypoints displayed.
Some FMSs can calculate special flight plans, often for tactical requirements, such as search patterns, rendezvous, in-flight refueling tanker orbits, and calculated air release points (CARP) for accurate parachute jumps.
Position determination
Once in flight, a principal task of the FMS is obtaining a position fix, i.e., determining the aircraft's position and the accuracy of that position. Simple FMSs use a single sensor, generally GPS, to determine position, while modern FMSs use as many sensors as they can, such as VORs, in order to determine and validate their exact position. Some FMSs use a Kalman filter to integrate the positions from the various sensors into a single position. Common sensors include:
Airline-quality GPS receivers act as the primary sensor as they have the highest accuracy and integrity.
Radio aids designed for aircraft navigation act as the second highest quality sensors. These include:
Scanning DME (distance measuring equipment) that check the distances from five different DME stations simultaneously in order to determine one position every 10 seconds.
VORs (VHF omnidirectional radio range) that supply a bearing. With two VOR stations the aircraft position can be determined, but the accuracy is limited.
Inertial reference systems (IRS) use ring laser gyros and accelerometers in order to calculate the aircraft position. They are highly accurate and independent of outside sources. Airliners use the weighted average of three independent IRS to determine the “triple mixed IRS” position.
The FMS constantly crosschecks the various sensors and determines a single aircraft position and accuracy. The accuracy is described as the Actual Navigation Performance (ANP): a circle, measured as a diameter in nautical miles, within which the aircraft can be anywhere.
Modern airspace has a set required navigation performance (RNP). The aircraft must have its ANP less than its RNP in order to operate in certain high-level airspace.
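The multi-sensor blending and the ANP-versus-RNP check described above can be sketched with a much-simplified inverse-variance weighted average standing in for a real Kalman filter; the sensor accuracies and the ANP formula here are illustrative assumptions, not a real FMS algorithm:

```python
def blend_positions(estimates):
    """Blend (lat, lon, sigma_nm) fixes by inverse-variance weighting.

    A crude stand-in for the Kalman filtering used by real FMSs: each
    sensor's fix is weighted by 1/sigma^2, so accurate sensors (GPS)
    dominate less accurate ones (VOR/DME).
    """
    weights = [1.0 / s**2 for _, _, s in estimates]
    total = sum(weights)
    lat = sum(w * la for w, (la, _, _) in zip(weights, estimates)) / total
    lon = sum(w * lo for w, (_, lo, _) in zip(weights, estimates)) / total
    anp_nm = 2.0 * (1.0 / total) ** 0.5  # diameter in nautical miles (simplified)
    return lat, lon, anp_nm

def meets_rnp(anp_nm, rnp_nm):
    """The aircraft may operate in the airspace only if ANP is below RNP."""
    return anp_nm < rnp_nm
```

With a tight GPS fix and a loose VOR fix, the blended position sits almost on the GPS fix and the resulting ANP easily satisfies a typical en-route RNP.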
Guidance
Given the flight plan and the aircraft's position, the FMS calculates the course to follow. The pilot can follow this course manually (much like following a VOR radial), or the autopilot can be set to follow the course.
The FMS mode is normally called LNAV or Lateral Navigation for the lateral flight plan and VNAV or vertical navigation for the vertical flight plan. VNAV provides speed and pitch or altitude targets and LNAV provides roll steering command to the autopilot.
VNAV
Sophisticated aircraft, generally airliners such as the Airbus A320 or Boeing 737 and other turbofan powered aircraft, have full performance Vertical Navigation (VNAV). The purpose of VNAV is to predict and optimize the vertical path. Guidance includes control of the pitch axis and control of the throttle.
The FMS needs to have a comprehensive flight and engine model in order to have the data required to do this. The function can create a forecast vertical path along the lateral flight plan using this information. The aircraft manufacturer is usually the only source of this comprehensive flight model.
The vertical profile is constructed by the FMS during pre-flight. Together with the lateral flight plan, it makes use of the aircraft's starting empty weight, fuel weight, center of gravity, and cruising altitude. The first leg of the vertical profile is the climb to cruise altitude. Some SID waypoints carry vertical restrictions such as "At or ABOVE 8,000". Reduced-thrust ("FLEX") climbs may be used during the ascent to spare the engines. Each of these must be taken into account when predicting the vertical profile.
Implementation of an accurate VNAV is difficult and expensive, but it pays off in fuel savings primarily in cruise and descent. In cruise, where most of the fuel is burned, there are multiple methods for fuel savings.
As an aircraft burns fuel it gets lighter and can cruise higher where there is less drag. Step climbs or cruise climbs facilitate this. VNAV can determine where the step or cruise climbs (in which the aircraft climbs continuously) should occur to minimize fuel consumption.
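The step-climb logic can be illustrated with a toy model in which the optimum altitude rises linearly as fuel burns off; the constants, weights, and burn rates below are purely illustrative assumptions, not real aircraft performance data:

```python
def optimum_altitude_ft(weight_kg, a=68000.0, b=0.12):
    """Toy model: optimum cruise altitude rises as the aircraft gets lighter.

    a and b are illustrative constants only; a real FMS interpolates a
    manufacturer-supplied performance model instead.
    """
    return a - b * weight_kg

def step_climb_schedule(start_weight_kg, burn_rate_kg_per_hr, hours,
                        step_ft=4000, initial_alt_ft=32000):
    """Return (hour, new_altitude_ft) pairs where a step climb becomes optimal."""
    schedule = []
    alt = initial_alt_ft
    for h in range(1, hours + 1):
        weight = start_weight_kg - burn_rate_kg_per_hr * h
        # Climb one step whenever the optimum altitude reaches the next level.
        while optimum_altitude_ft(weight) >= alt + step_ft:
            alt += step_ft
            schedule.append((h, alt))
    return schedule
```

Under these made-up numbers, a 300-tonne aircraft burning 7,000 kg/hr would step from FL320 to FL360 around hour 5 and to FL400 around hour 10.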
Performance optimization allows the FMS to determine the best or most economical speed to fly in level flight. This is often called the ECON speed. This is based on the cost index, which is entered to give a weighting between speed and fuel efficiency. The cost index is calculated by dividing the per-hour cost of operating the plane by the cost of fuel. Generally a cost index of 999 gives ECON speeds as fast as possible without consideration of fuel and a cost index of zero gives maximum fuel economy while disregarding other hourly costs such as maintenance and crew expenses. ECON mode is the VNAV speed used by most airliners in cruise.
RTA or required time of arrival allows the VNAV system to target arrival at a particular waypoint at a defined time. This is often useful for airport arrival slot scheduling. In this case, VNAV regulates the cruise speed or cost index to ensure the RTA is met.
The first thing the VNAV calculates for the descent is the top of descent point (TOD). This is the point where an efficient and comfortable descent begins. Normally this will involve an idle descent, but for some aircraft an idle descent is too steep and uncomfortable. The FMS calculates the TOD by “flying” the descent backwards from touchdown through the approach and up to cruise. It does this using the flight plan, the aircraft flight model and descent winds. For an airline FMS, this is a very sophisticated and accurate prediction; for a simple FMS (on smaller aircraft) it can be determined by a “rule of thumb” such as a 3-degree descent path.
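The “rule of thumb” version of the TOD calculation can be sketched geometrically, assuming a fixed descent angle in place of the full performance model:

```python
import math

FT_PER_NM = 6076.12  # feet per nautical mile

def top_of_descent_nm(cruise_alt_ft, target_alt_ft, path_deg=3.0):
    """Distance before the target fix at which to begin a fixed-angle descent.

    A geometric simplification: altitude to lose divided by the height
    lost per nautical mile along a constant descent path.
    """
    alt_to_lose_ft = cruise_alt_ft - target_alt_ft
    return alt_to_lose_ft / (math.tan(math.radians(path_deg)) * FT_PER_NM)
```

From FL350 down to 2,000 ft this gives roughly 104 NM, close to the familiar pilot shortcut of three times the altitude to lose in thousands of feet.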
From the TOD, the VNAV determines a four-dimensional predicted path. As the VNAV commands the throttles to idle, the aircraft begins its descent along the VNAV path. If either the predicted path is incorrect or the downpath winds differ from the predictions, then the aircraft will not perfectly follow the path. The aircraft varies the pitch in order to maintain the path. Since the throttles are at idle, this will modulate the speed. Normally the FMS allows the speed to vary within a small band. After this, either the throttles advance (if the aircraft is below path) or the FMS requests speed brakes with a message, often "DRAG REQUIRED" (if the aircraft is above path). On Airbus aircraft, this message also appears on the PFD and, if the aircraft is extremely high on path, "MORE DRAG" will be displayed. On Boeing aircraft, if the aircraft gets too far off the prescribed path, it will switch from VNAV PTH (which follows the calculated path) to VNAV SPD (which descends as fast as possible while maintaining a selected speed), similar to OP DES (open descent) on Airbus aircraft.
An ideal idle descent, also known as a “green descent”, uses the minimum fuel, minimizes pollution (both at high altitude and local to the airport) and minimizes local noise. While most modern FMSs of large airliners are capable of idle descents, most air traffic control systems cannot currently handle multiple aircraft each using its own optimum descent path to the airport. Thus the use of idle descents is limited by air traffic control.
| Technology | Aircraft components | null |
432961 | https://en.wikipedia.org/wiki/Intertropical%20Convergence%20Zone | Intertropical Convergence Zone | The Intertropical Convergence Zone (ITCZ , or ICZ), known by sailors as the doldrums or the calms because of its monotonous windless weather, is the area where the northeast and the southeast trade winds converge. It encircles Earth near the thermal equator though its specific position varies seasonally. When it lies near the geographic equator, it is called the near-equatorial trough. Where the ITCZ is drawn into and merges with a monsoonal circulation, it is sometimes referred to as a monsoon trough (a usage that is more common in Australia and parts of Asia).
Meteorology
The ITCZ was originally identified from the 1920s to the 1940s as the Intertropical Front (ITF), but after the recognition in the 1940s and the 1950s of the significance of wind field convergence in tropical weather production, the term Intertropical Convergence Zone (ITCZ) was then applied.
The ITCZ appears as a band of clouds, usually thunderstorms, that encircle the globe near the Equator. In the Northern Hemisphere, the trade winds move in a southwestward direction from the northeast, while in the Southern Hemisphere, they move northwestward from the southeast. When the ITCZ is positioned north or south of the Equator, these directions change according to the Coriolis effect imparted by Earth's rotation. For instance, when the ITCZ is situated north of the Equator, the southeast trade wind changes to a southwest wind as it crosses the Equator. The ITCZ is formed by vertical motion largely appearing as convective activity of thunderstorms driven by solar heating, which effectively draw air in; these are the trade winds. The ITCZ is effectively a tracer of the ascending branch of the Hadley cell and is wet. The dry descending branch is the horse latitudes.
The location of the ITCZ gradually varies with the seasons, roughly corresponding with the location of the thermal equator. As the heat capacity of the oceans is greater than that of land, the migration is more prominent over land. Over the oceans, where the convergence zone is better defined, the seasonal cycle is more subtle, as the convection is constrained by the distribution of ocean temperatures. Sometimes, a double ITCZ forms, with one located north and another south of the Equator, one of which is usually stronger than the other. When this occurs, a narrow ridge of high pressure forms between the two convergence zones.
ITCZ over oceans vs. land
The ITCZ is commonly defined as an equatorial zone where the trade winds converge. Rainfall seasonality is traditionally attributed to the north–south migration of the ITCZ, which follows the sun. Although this is largely valid over the equatorial oceans, the ITCZ and the region of maximum rainfall can be decoupled over the continents. The equatorial precipitation over land is not simply a response to just the surface convergence. Rather, it is modulated by a number of regional features such as local atmospheric jets and waves, proximity to the oceans, terrain-induced convective systems, moisture recycling, and spatiotemporal variability of land cover and albedo.
South Pacific convergence zone
The South Pacific convergence zone (SPCZ) is a reverse-oriented, or west-northwest to east-southeast aligned, trough extending from the west Pacific warm pool southeastwards towards French Polynesia. It lies just south of the equator during the Southern Hemisphere warm season, but can be more extratropical in nature, especially east of the International Date Line. It is considered the largest and most important piece of the ITCZ, and has less dependence upon heating from a nearby land mass during summer than any other portion of the monsoon trough. The southern ITCZ in the southeast Pacific and southern Atlantic, known as the SITCZ, occurs during the Southern Hemisphere fall between 3° and 10° south of the equator east of the 140th meridian west longitude during cool or neutral El Niño–Southern Oscillation (ENSO) patterns. When ENSO reaches its warm phase, otherwise known as El Niño, the tongue of lowered sea surface temperatures due to upwelling off the South American continent disappears, which causes this convergence zone to vanish as well.
Effects on weather
Variation in the location of the intertropical convergence zone drastically affects rainfall in many equatorial nations, resulting in the wet and dry seasons of the tropics rather than the cold and warm seasons of higher latitudes. Longer term changes in the intertropical convergence zone can result in severe droughts or flooding in nearby areas.
In some cases, the ITCZ may become narrow, especially when it moves away from the equator; the ITCZ can then be interpreted as a front along the leading edge of the equatorial air. There appears to be a 15 to 25-day cycle in thunderstorm activity along the ITCZ, which is roughly half the wavelength of the Madden–Julian oscillation (MJO).
Within the ITCZ the average winds are slight, unlike the zones north and south of the equator where the trade winds feed. As trans-equator sea voyages became more common, sailors in the eighteenth century named this belt of calm the doldrums because of the calm, stagnant, or inactive winds.
Role in tropical cyclone formation
Tropical cyclogenesis depends upon low-level vorticity as one of its six requirements, and the ITCZ fills this role as it is a zone of wind change and speed, otherwise known as horizontal wind shear. As the ITCZ migrates to tropical and subtropical latitudes and even beyond during the respective hemisphere's summer season, increasing Coriolis force makes the formation of tropical cyclones within this zone more possible. Surges of higher pressure from high latitudes can enhance tropical disturbances along its axis. In the north Atlantic and the northeastern Pacific oceans, tropical waves move along the axis of the ITCZ causing an increase in thunderstorm activity, and clusters of thunderstorms can develop under weak vertical wind shear.
Hazards
In the Age of Sail, to find oneself becalmed in this region in a hot and muggy climate could mean death when wind was the only effective way to propel ships across the ocean. Calm periods within the doldrums could strand ships for days or weeks. Even today, leisure and competitive sailors attempt to cross the zone as quickly as possible as the erratic weather and wind patterns may cause unexpected delays.
In 2009, thunderstorms along the Intertropical Convergence Zone played a role in the loss of Air France Flight 447, which crashed while flying from Rio de Janeiro–Galeão International Airport to Charles de Gaulle Airport near Paris. The aircraft crashed with no survivors while flying through a series of large ITCZ thunderstorms, and ice forming rapidly on airspeed sensors was the precipitating cause for a cascade of human errors which ultimately doomed the flight. Most aircraft flying these routes are able to avoid the larger convective cells without incident.
Effects of climate change
Based on paleoclimate proxies, the position and intensity of the ITCZ varied in prehistoric times along with changes in global climate. During Heinrich events within the last 100 ka, a southward shift of the ITCZ coincided with the intensification of the Northern Hemisphere Hadley cell coincident with weakening of the Southern Hemisphere Hadley cell. The ITCZ shifted north during the mid-Holocene but migrated south following changes in insolation during the late-Holocene towards its current position. The ITCZ has also undergone periods of contraction and expansion within the last millennium. A southward shift of the ITCZ commencing after the 1950s and continuing into the 1980s may have been associated with cooling induced by aerosols in the Northern Hemisphere based on results from climate models; a northward rebound began subsequently following forced changes in the gradient in temperature between the Northern and Southern hemispheres. These fluctuations in ITCZ positioning had robust effects on climate; for instance, displacement of the ITCZ may have led to drought in the Sahel in the 1980s.
Atmospheric convection may become stronger and more concentrated at the center of the ITCZ in response to a globally warming climate, resulting in sharpened contrasts in precipitation between the ITCZ core (where precipitation would be amplified) and its edges (where precipitation would be suppressed). Atmospheric reanalyses suggest that the ITCZ over the Pacific has narrowed and intensified since at least 1979, in agreement with data collected by satellites and in-situ precipitation measurements. The drier ITCZ fringes are also associated with an increase in outgoing longwave radiation outward of those areas, particularly over land within the mid-latitudes and the subtropics. This change in the ITCZ is also reflected by increasing salinity within the Atlantic and Pacific underlying the ITCZ fringes and decreasing salinity underlying central belt of the ITCZ. The IPCC Sixth Assessment Report indicated "medium agreement" from studies regarding the strengthening and tightening of the ITCZ due to anthropogenic climate change.
Less certain are the regional and global shifts in ITCZ position as a result of climate change, with paleoclimate data and model simulations highlighting contrasts stemming from asymmetries in forcing from aerosols, volcanic activity, and orbital variations, as well as uncertainties associated with changes in monsoons and the Atlantic meridional overturning circulation. The climate simulations run as part of Coupled Model Intercomparison Project Phase 5 (CMIP5) did not show a consistent global displacement of the ITCZ under anthropogenic climate change. In contrast, most of the same simulations show narrowing and intensification under the same prescribed conditions. However, simulations in Coupled Model Intercomparison Project Phase 6 (CMIP6) have shown greater agreement over some regional shifts of the ITCZ in response to anthropogenic climate change, including a northward displacement over the Indian Ocean and eastern Africa and a southward displacement over the eastern Pacific and Atlantic oceans.
In literature
The doldrums are notably described in Samuel Taylor Coleridge's poem The Rime of the Ancient Mariner (1798) and also provide a metaphor for the initial state of boredom and indifference of Milo, the child hero of Norton Juster's classic 1961 children's novel The Phantom Tollbooth. It is also cited in the 1939 book Wind, Sand and Stars.
| Physical sciences | Winds | Earth science |
432986 | https://en.wikipedia.org/wiki/Physical%20fitness | Physical fitness | Physical fitness is a state of health and well-being and, more specifically, the ability to perform aspects of sports, occupations, and daily activities. Physical fitness is generally achieved through proper nutrition, moderate-vigorous physical exercise, and sufficient rest along with a formal recovery plan.
Before the Industrial Revolution, fitness was defined as the capacity to carry out the day's activities without undue fatigue or lethargy. However, with automation and changes in lifestyles, physical fitness is now considered a measure of the body's ability to function efficiently and effectively in work and leisure activities, to be healthy, to resist hypokinetic diseases, to improve immune system function, and to meet emergency situations.
Overview
Fitness is defined as the quality or state of being fit and healthy. Around 1950, perhaps consistent with the Industrial Revolution and the aftermath of World War II, the term "fitness" increased in western vernacular by a factor of ten. The modern definition of fitness describes either a person or machine's ability to perform a specific function or a holistic definition of human adaptability to cope with various situations. This has led to an interrelation of human fitness and physical attractiveness that has mobilized global fitness and fitness equipment industries. Regarding specific function, fitness is attributed to persons who possess significant aerobic or anaerobic ability (i.e., endurance or strength). A well-rounded fitness program improves a person in all aspects of fitness compared to practicing only one, such as only cardio/respiratory or only weight training.
A comprehensive fitness program tailored to an individual typically focuses on one or more specific skills, and on age- or health-related needs such as bone health. Many sources also cite mental, social and emotional health as an important part of overall fitness. This is often presented in textbooks as a triangle made up of three points, which represent physical, emotional, and mental fitness. Physical fitness has been shown to have benefits in preventing ill health and assisting recovery from injury or illness. Along with the physical health benefits of fitness, it has also been shown to have a positive impact on mental health as well by assisting in treating anxiety and depression.
Physical fitness can also prevent or treat many other chronic health conditions brought on by unhealthy lifestyle or aging as well and has been listed frequently as one of the most popular and advantageous self-care therapies. Working out can also help some people sleep better by building up sleeping pressure and possibly alleviate some mood disorders in certain individuals.
Developing research has demonstrated that many of the benefits of exercise are mediated through the role of skeletal muscle as an endocrine organ. That is, contracting muscles release multiple substances known as myokines, which promote the growth of new tissue, tissue repair, and various anti-inflammatory functions, which in turn reduce the risk of developing various inflammatory diseases.
Activity guidelines
The 2018 Physical Activity Guidelines for Americans were released by the U.S. Department of Health and Human Services to provide science-based guidance for people ages 3 years and older to improve their health by participating in regular physical activity. These guidelines recommend that all adults should move more and sit less throughout the day to improve health-related quality of life including mental, emotional, and physical health. For substantial health benefits, adults should perform at least 150 to 300 minutes of moderate-intensity, or 75 to 150 minutes per week of vigorous-intensity aerobic physical activity, or an equivalent combination of both spread throughout the week. The recommendation for physical activity to occur in bouts of at least 10 minutes has been eliminated, as new research suggests that bouts of any length contribute to the health benefits linked to the accumulated volume of physical activity. Additional health benefits may be achieved by engaging in more than 300 minutes (5 hours) of moderate-intensity physical activity per week. Adults should also do muscle-strengthening activities that are of moderate or greater intensity and involve all major muscle groups on two or more days a week, as these activities provide additional health benefits.
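The moderate/vigorous equivalence in these guidelines (75 minutes of vigorous activity being roughly equivalent to 150 minutes of moderate activity) can be sketched as a simple weekly checker; the category labels are illustrative:

```python
def weekly_activity_status(moderate_min, vigorous_min):
    """Classify weekly aerobic activity against the 2018 US guidelines.

    Vigorous minutes count double, reflecting the 75-150 vigorous vs.
    150-300 moderate equivalence stated in the guidelines.
    """
    equivalent_moderate = moderate_min + 2 * vigorous_min
    if equivalent_moderate < 150:
        return "below guideline"
    if equivalent_moderate <= 300:
        return "meets guideline"
    return "additional benefits range"
```

Any mix counts: 100 moderate minutes plus 30 vigorous minutes yields 160 equivalent minutes and meets the guideline.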
Guidelines in the United Kingdom released in July 2011 include the following points:
The intensity at which a person exercises is key, and light activity such as strolling and house work is unlikely to have much positive impact on the health of most people. For aerobic exercise to be beneficial, it must raise the heart rate and cause perspiration. A person should do a minimum of 150 minutes a week of moderate-intensity aerobic exercise. There are more health benefits gained if a person exercises beyond 150 minutes.
Sedentary time (time spent not standing, such as when on a chair or in bed) is bad for a person's health, and no amount of exercise can negate the effects of sitting for too long.
These guidelines are now much more in line with those used in the U.S., which also includes recommendations for muscle-building and bone-strengthening activities such as lifting weights and yoga.
Exercise
Aerobic exercise
Cardiorespiratory fitness can be measured using VO2 max, a measure of the amount of oxygen the body can uptake and utilize. Aerobic exercise, which improves cardiorespiratory fitness and increases stamina, involves movement that increases the heart rate to improve the body's oxygen consumption. This form of exercise is an important part of all training regimens, whether for professional athletes or for the everyday person.
Prominent examples of aerobic exercises include:
Jogging – Running at a steady and gentle pace. This form of exercise is great for maintaining weight and building a cardiovascular base to later perform more intense exercises.
Working on elliptical trainer – This is a stationary exercise machine used to perform walking, or running without causing excessive stress on the joints. This form of exercise is perfect for people with achy hips, knees, and ankles.
Walking – Moving at a fairly regular pace for a short, medium or long distance.
Treadmill training – Many treadmills have preset programs offering a variety of workout plans. One effective cardiovascular activity is to alternate between walking and running: typically, warm up by walking, then alternate between walking for three minutes and running for three minutes.
Swimming – Using the arms and legs to keep oneself afloat in water and moving either forwards or backward. This is a good full-body exercise for those who are looking to strengthen their core while improving cardiovascular endurance.
Cycling – Riding a bicycle typically involves longer distances than walking or jogging. This is another low-impact exercise on the joints and is great for improving leg strength.
Anaerobic exercise
Anaerobic exercise features high-intensity movements performed in a short period of time. It is a fast, high-intensity exercise that does not require the body to utilize oxygen to produce energy. It helps to promote strength, endurance, speed, and power; and is used by bodybuilders to build workout intensity. Anaerobic exercises are thought to increase the metabolic rate, thereby allowing one to burn additional calories as the body recovers from exercise due to an increase in body temperature and excess post-exercise oxygen consumption (EPOC) after the exercise ended.
Prominent examples of anaerobic exercises include:
Weight training – A common type of strength training for developing the strength and size of skeletal muscles.
Isometric exercise – Helps to maintain strength. A muscle action in which no visible movement occurs and the resistance matches the muscular tension.
Sprinting – Running short distances as fast as possible, training for muscle explosiveness.
Interval training – Alternating short bursts (lasting around 30 seconds) of intense activity with longer intervals (three to four minutes) of less intense activity. This type of activity also builds speed and endurance.
Training
Specific or task-oriented fitness is a person's ability to perform in a specific activity, such as sports or military service, with a reasonable efficiency. Specific training prepares athletes to perform well in their sport. These include, among others:
100 m sprint: In a sprint, the athlete must be trained to work anaerobically throughout the race, an example of how to do this would be interval training.
Century ride: Cyclists must be prepared aerobically for a bike ride of 100 miles or more.
Middle distance running: Athletes require both speed and endurance to benefit from this training, as the hard-working muscles must sustain near-peak effort for an extended period.
Marathon: In this case, the athlete must be trained to work aerobically, and their endurance must be built-up to a maximum.
Many firefighters and police officers undergo regular fitness testing to determine if they are capable of the physically demanding tasks required of the job.
Members of armed forces are often required to pass a formal fitness test. For example, soldiers of the U.S. Army must be able to pass the Army Physical Fitness Test (APFT).
Hill sprints: Require a high level of fitness to begin with; the exercise is particularly good for the leg muscles. Armies often train in mountain climbing and racing.
Plyometric and isometric exercises: An excellent way to build strength and increase muscular endurance.
Sand running creates less strain on leg muscles than running on grass or concrete. This is because sand collapses beneath the foot, which softens the landing. Sand training is an effective way to lose weight and become fit, as more effort is needed (one and a half times more) to run on the soft sand than on a hard surface.
Aquajogging is a form of exercise that decreases strain on joints and bones. The water's buoyancy minimizes impact on muscles and bones, which is good for those recovering from injury. Furthermore, the resistance of the water as one jogs through it provides an enhanced training effect (the deeper one is, the greater the force needed to pull the leg through).
Swimming: Squatting exercise helps in enhancing a swimmer's start.
For physical fitness activity to benefit an individual, the exertion must trigger a sufficient amount of stimuli. Exercise with the correct amount of intensity, duration, and frequency can produce a significant amount of improvement. The person may feel better overall, but the physical effects on the human body take weeks or months to notice, and possibly years for full development. For training purposes, exercise must provide a stress or demand on either a function or tissue. To continue improving, this demand must gradually increase over an extended period of time. This sort of exercise training has three basic principles: overload, specificity, and progression. These principles are related to health but also to the enhancement of physical working capacity.
High intensity interval training
High-intensity interval training (HIIT) consists of repeated, short bursts of exercise, completed at a high level of intensity. These sets of intense activity are followed by a predetermined time of rest or low-intensity activity. Studies have shown that exercising at a higher intensity can have the effect of increasing cardiac benefits for humans when compared with exercising at a low or moderate level. When one's workout consists of a HIIT session, their body has to work harder to replace the oxygen it lost. Research into the benefits of HIIT has shown that it can be very successful for reducing fat, especially around the abdominal region. Furthermore, when compared to continuous moderate exercise, HIIT proves to burn more calories and increase the amount of fat burned post-HIIT session. Lack of time is one of the main reasons stated for not exercising; HIIT is a great alternative for those people because the duration of a HIIT session can be as short as 10 minutes, making it much quicker than conventional workouts.
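The work/rest alternation of a HIIT session can be sketched as a schedule builder; the 30-second bursts and 3-minute recoveries mirror the interval-training description earlier in the article, and the default durations are illustrative:

```python
def hiit_session(work_s=30, rest_s=180, total_s=600):
    """Build a (phase, seconds) schedule alternating work and rest.

    Defaults: 30 s high-intensity bursts, 3 min recoveries, and a
    10-minute session overall; the final phase is truncated to fit.
    """
    schedule, elapsed, working = [], 0, True
    while elapsed < total_s:
        duration = min(work_s if working else rest_s, total_s - elapsed)
        schedule.append(("work" if working else "rest", duration))
        elapsed += duration
        working = not working
    return schedule
```

The default call produces three work bursts separated by recoveries, with the last recovery shortened so the session ends at exactly ten minutes.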
Effects
Controlling blood pressure
Physical fitness has been shown to help regulate the body's blood pressure. Staying active and exercising regularly builds a stronger heart. The heart is the main organ in charge of systolic blood pressure and diastolic blood pressure. Engaging in a physical activity raises blood pressure; once the subject stops the activity, the blood pressure returns to normal. The more physical activity, the easier this process becomes, resulting in a fitter cardiovascular profile. With regular physical fitness, the heart handles this rise in blood pressure with less effort, which lowers the force on the arteries and lowers the overall resting blood pressure.
Cancer prevention
The Centers for Disease Control and Prevention provide lifestyle guidelines for maintaining a balanced diet and engaging in physical activity to reduce the risk of disease. The World Cancer Research Fund (WCRF) and the American Institute for Cancer Research (AICR) published a list of recommendations that reflect the dietary and exercise behaviors proven to reduce the incidence of cancer.
The WCRF/AICR recommendations include the following:
Be as lean as possible without becoming underweight.
Each week, adults should engage in at least 150 minutes of moderate-intensity physical activity or 75 minutes of vigorous-intensity physical activity.
Children should engage in at least one hour of moderate or vigorous physical activity each day.
Be physically active for at least thirty minutes every day.
Avoid sugar, and limit the consumption of energy-packed foods.
Balance one's diet with a variety of vegetables, grains, fruits, legumes, etc.
Limit sodium intake and the consumption of red meats and processed meats.
Limit alcoholic drinks to two for men and one for women a day.
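The adult activity target above (150 minutes of moderate or 75 minutes of vigorous activity per week) can be expressed as a simple check. Treating one vigorous minute as equivalent to two moderate minutes is a common equivalence, assumed here for illustration:

```python
# Check weekly activity against the adult guideline:
# >= 150 moderate-intensity minutes, or >= 75 vigorous-intensity minutes,
# counting each vigorous minute as two moderate-equivalent minutes (assumed).
def meets_guideline(moderate_min, vigorous_min):
    equivalent = moderate_min + 2 * vigorous_min
    return equivalent >= 150

print(meets_guideline(150, 0))   # True: meets via moderate activity alone
print(meets_guideline(0, 75))    # True: meets via vigorous activity alone
print(meets_guideline(60, 30))   # False: 60 + 2*30 = 120 equivalent minutes
```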
These recommendations are also widely supported by the American Cancer Society. The guidelines have been evaluated, and individuals with higher guideline adherence scores have a substantially reduced cancer risk as well as improved outcomes for a multitude of chronic health problems. Regular physical activity helps reduce an individual's blood pressure and improves cholesterol levels, two key components that correlate with heart disease and type 2 diabetes. The American Cancer Society encourages the public to "adopt a physically active lifestyle" by meeting the criteria through a variety of physical activities such as hiking, swimming, circuit training, resistance training, lifting, etc. Cancer is not a disease that can be cured by physical fitness alone; however, because it is a multifactorial disease, physical fitness is a controllable preventive measure. The strong associations between physical fitness and reduced cancer risk are enough to support a strategy of preventive interventions.
The American Cancer Society asserts different levels of activity ranging from moderate to vigorous to clarify the recommended time spent on a physical activity. These classifications of physical activity consider intentional exercise and basic activities performed on a daily basis and give the public a greater understanding of what fitness levels suffice as future disease prevention.
Inflammation
Studies have shown an association between increased physical activity and reduced inflammation. It produces both a short-term inflammatory response and a long-term anti-inflammatory effect. Physical activity reduces inflammation in conjunction with or independent of changes in body weight. However, the mechanisms linking physical activity to inflammation are unknown.
Immune system
Physical activity boosts the immune system. This is dependent on the concentration of endogenous factors (such as sex hormones, metabolic hormones and growth hormones), body temperature, blood flow, hydration status and body position. Physical activity has been shown to increase the levels of natural killer (NK) cells, NK T cells, macrophages, neutrophils and eosinophils, complements, cytokines, antibodies and T cytotoxic cells. However, the mechanism linking physical activity to the immune system is not fully understood.
Weight control
Achieving resilience through physical fitness promotes a vast and complex range of health-related benefits. Individuals who keep up physical fitness levels generally regulate their distribution of body fat and prevent obesity. Studies show that running burns calories drawn from the macronutrients eaten daily. Abdominal fat, specifically visceral fat, is most directly affected by engaging in aerobic exercise. Strength training has been known to increase the amount of muscle in the body; however, it can also reduce body fat. Sex steroid hormones, insulin, and appropriate immune responses are factors that mediate metabolism in relation to abdominal fat. Therefore, physical fitness provides weight control through regulation of these bodily functions.
Menopause and physical fitness
Menopause is often said to have occurred when a woman has had no vaginal bleeding for over a year since her last menstrual cycle. There are a number of symptoms connected to menopause, most of which can affect the quality of life of a woman involved in this stage of her life. One way to reduce the severity of the symptoms is to exercise and keep a healthy level of fitness. Prior to and during menopause, as the female body changes, there can be physical, physiological or internal changes to the body. These changes can be reduced or even prevented with regular exercise. These changes include:
Preventing weight gain: around menopause women tend to experience a reduction in muscle mass and an increase in fat levels. Increasing the amount of physical exercise undertaken can help to prevent these changes.
Reducing the risk of breast cancer: weight loss from regular exercise may offer protection from breast cancer.
Strengthening bones: physical activity can slow the bone loss associated with menopause, reducing the chance of bone fractures and osteoporosis.
Reducing the risk of disease: excess weight can increase the risk of heart disease and type 2 diabetes, and regular physical activity can counter these effects.
Boosting mood: being involved in regular activities can improve psychological health, an effect that can be seen at any age and not just during or after menopause.
The Melbourne Women's Midlife Health Project followed 438 women over an eight-year period. Although physical activity was not associated with vasomotor symptoms (more commonly known as hot flushes) in this cohort at the outset, women who reported being physically active every day at baseline were 49% less likely to report bothersome hot flushes. By contrast, women whose level of activity decreased were more likely to experience them.
Mental health
Studies have shown that physical activity can improve mental health and well-being. This improvement is due to an increase in blood flow to the brain, which allows for the release of hormones and a decrease in stress hormone levels (e.g., cortisol, adrenaline) while also stimulating the body's mood boosters and natural painkillers. Not only does exercise release these feel-good hormones, it can also help relieve stress and build confidence. Just as exercise can support a healthier life overall, it can also improve sleep quality; studies suggest that even 10 minutes of exercise per day can help with insomnia. These trends improve as physical activity is performed on a consistent basis, which makes exercise effective in relieving symptoms of depression and anxiety, positively impacting mental health and bringing about several other benefits. For example:
Physical activity has been linked to the alleviation of depression and anxiety symptoms.
In patients with schizophrenia, physical fitness has been shown to improve their quality of life and decrease the effects of schizophrenia.
Being fit can improve one's self-esteem.
Working out can improve one's mental alertness and it can reduce fatigue.
Studies have shown a reduction in stress levels.
Increased opportunity for social interaction, allowing for improved social skills
To achieve some of these benefits, the Centers for Disease Control and Prevention suggests at least 30–60 minutes of exercise 3–5 times a week.
Different forms of exercise have been proven to improve mental health and reduce the risk of depression, anxiety, and suicide.
Benefits of exercise on mental health include improved sleep, stress relief, improvement in mood, increased energy and stamina, and reduced tiredness that can increase mental alertness. Exercise thus has beneficial effects for mental health as well as physical health.
History
In the 1940s, Hans Kraus, an émigré physician from Austria, began testing children in the U.S. and Europe for what he termed "muscular fitness" (in other words, muscular functionality). Through his testing, he found children in the U.S. to be far less physically capable than European children. Kraus published some alarming papers in various journals and got the attention of some powerful people, including a senator from Pennsylvania who took the findings to President Dwight D. Eisenhower. President Eisenhower was "shocked." He set up a series of conferences and committees; then in July 1956, Eisenhower established the President's Council on Youth Fitness.
In ancient Greece, physical fitness was considered to be an essential component of a healthy life, and it was the norm for men to frequent a gymnasium. Physical fitness regimes were also considered to be of paramount importance in a nation's ability to train soldiers for an effective military force. Partly for these reasons, organized fitness regimes have been in existence throughout known history, and evidence of them can be found in many countries.
Gymnasiums which would seem familiar today began to become increasingly common in the 19th century. The industrial revolution had led to a more sedentary lifestyle for many people and there was an increased awareness that this had the potential to be harmful to health. This was a key motivating factor for the forming of a physical culture movement, especially in Europe and the USA. This movement advocated increased levels of physical fitness for men, women, and children and sought to do so through various forms of indoor and outdoor activity, and education. In many ways, it laid the foundations for modern fitness culture.
Education
The following is a list of some institutions that educate people about physical fitness:
American Council on Exercise (ACE)
National Academy of Sports Medicine (NASM)
International Sports Science Association (ISSA)
| Biology and health sciences | Health and fitness | null |
433085 | https://en.wikipedia.org/wiki/Atmospheric%20circulation | Atmospheric circulation | Atmospheric circulation is the large-scale movement of air and together with ocean circulation is the means by which thermal energy is redistributed on the surface of the Earth. The Earth's atmospheric circulation varies from year to year, but the large-scale structure of its circulation remains fairly constant. The smaller-scale weather systems – mid-latitude depressions, or tropical convective cells – occur chaotically, and long-range weather predictions of those cannot be made beyond ten days in practice, or a month in theory (see chaos theory and the butterfly effect).
The Earth's weather is a consequence of its illumination by the Sun and the laws of thermodynamics. The atmospheric circulation can be viewed as a heat engine driven by the Sun's energy and whose energy sink, ultimately, is the blackness of space. The work produced by that engine causes the motion of the masses of air, and in that process it redistributes the energy absorbed by the Earth's surface near the tropics to the latitudes nearer the poles, and thence to space.
The large-scale atmospheric circulation "cells" shift polewards in warmer periods (for example, interglacials compared to glacials), but remain largely constant as they are, fundamentally, a property of the Earth's size, rotation rate, heating and atmospheric depth, all of which change little. Over very long time periods (hundreds of millions of years), a tectonic uplift can significantly alter their major elements, such as the jet stream, and plate tectonics may shift ocean currents. During the extremely hot climates of the Mesozoic, a third desert belt may have existed at the Equator.
Latitudinal circulation features
The wind belts girdling the planet are organised into three cells in each hemisphere—the Hadley cell, the Ferrel cell, and the polar cell. Those cells exist in both the northern and southern hemispheres. The vast bulk of the atmospheric motion occurs in the Hadley cell. The high-pressure systems acting on the Earth's surface are balanced by low-pressure systems elsewhere, so the forces acting on the surface remain in balance.
The horse latitudes are an area of high pressure at about 30° to 35° latitude (north or south) where winds diverge into the adjacent zones of Hadley or Ferrel cells, and which typically have light winds, sunny skies, and little precipitation.
Hadley cell
The atmospheric circulation pattern that George Hadley described was an attempt to explain the trade winds. The Hadley cell is a closed circulation loop which begins at the equator. There, moist air is warmed by the Earth's surface, decreases in density and rises. A similar air mass rising on the other side of the equator forces those rising air masses to move poleward. The rising air creates a low pressure zone near the equator. As the air moves poleward, it cools, becomes denser, and descends at about the 30th parallel, creating a high-pressure area. The descended air then travels toward the equator along the surface, replacing the air that rose from the equatorial zone, closing the loop of the Hadley cell. The poleward movement of the air in the upper part of the troposphere deviates toward the east, caused by the Coriolis acceleration. At the ground level, however, the movement of the air toward the equator in the lower troposphere deviates toward the west, producing a wind from the east. The winds that flow to the west (from the east, easterly wind) at the ground level in the Hadley cell are called the trade winds.
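The strength of this deflection is captured by the Coriolis parameter f = 2Ω·sin(φ), where Ω is Earth's rotation rate and φ is latitude. A quick sketch, using the standard rotation-rate constant and a few illustrative latitudes:

```python
import math

OMEGA = 7.2921e-5  # Earth's rotation rate, rad/s (standard value)

def coriolis_parameter(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude), in 1/s."""
    return 2 * OMEGA * math.sin(math.radians(lat_deg))

# Zero at the equator and growing toward the poles: this is the deflection
# that turns upper-level poleward flow eastward and surface equatorward
# flow westward, producing the trade winds.
print(coriolis_parameter(0))    # 0.0 at the equator
print(coriolis_parameter(30))   # ~7.29e-5 (sin 30 deg = 0.5)
print(coriolis_parameter(90))   # maximum at the pole
```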
Though the Hadley cell is described as located at the equator, it shifts northerly (to higher latitudes) in June and July and southerly (toward lower latitudes) in December and January, as a result of the Sun's heating of the surface. The zone where the greatest heating takes place is called the "thermal equator". As the southern hemisphere's summer is in December to March, the movement of the thermal equator to higher southern latitudes takes place then.
The Hadley system provides an example of a thermally direct circulation. The power of the Hadley system, considered as a heat engine, is estimated at 200 terawatts.
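Treating the Hadley system as an idealized Carnot heat engine gives a feel for these numbers. The temperatures below are rough illustrative choices (warm tropical surface versus cold upper-troposphere outflow), not measured values, and Carnot efficiency is only an upper bound; the real atmosphere is far less efficient:

```python
# Idealized Carnot view of the Hadley "heat engine".
# T_HOT ~ tropical surface, T_COLD ~ upper-troposphere outflow (assumed).
T_HOT = 300.0   # K, illustrative
T_COLD = 200.0  # K, illustrative

eta = 1 - T_COLD / T_HOT  # Carnot efficiency upper bound, ~1/3

# If the engine's mechanical output is ~200 TW (the estimate in the text),
# the implied minimum heat throughput at this idealized efficiency is:
work_tw = 200.0
heat_in_tw = work_tw / eta

print(round(eta, 3))      # ~0.333
print(round(heat_in_tw))  # ~600 TW of heat input
```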
Ferrel cell
Part of the air rising at 60° latitude diverges at high altitude toward the poles and creates the polar cell. The rest moves toward the equator where it collides at 30° latitude with the high-level air of the Hadley cell. There it subsides and strengthens the high pressure ridges beneath. A large part of the energy that drives the Ferrel cell is provided by the polar and Hadley cells circulating on either side, which drag the air of the Ferrel cell with them.
The Ferrel cell, theorized by William Ferrel (1817–1891), is, therefore, a secondary circulation feature, whose existence depends upon the Hadley and polar cells on either side of it. It might be thought of as an eddy created by the Hadley and polar cells.
The air of the Ferrel cell that descends at 30° latitude returns poleward at the ground level, and as it does so it deviates toward the east. In the upper atmosphere of the Ferrel cell, the air moving toward the equator deviates toward the west. Both of those deviations, as in the case of the Hadley and polar cells, are driven by conservation of angular momentum. As a result, just as the easterly Trade Winds are found below the Hadley cell, the Westerlies are found beneath the Ferrel cell.
The Ferrel cell is weak, because it has neither a strong source of heat nor a strong sink, so the airflow and temperatures within it are variable. For this reason, the mid-latitudes are sometimes known as the "zone of mixing." The Hadley and polar cells are truly closed loops, the Ferrel cell is not, and the telling point is in the Westerlies, which are more formally known as "the Prevailing Westerlies." The easterly Trade Winds and the polar easterlies have nothing over which to prevail, as their parent circulation cells are strong enough and face few obstacles either in the form of massive terrain features or high pressure zones. The weaker Westerlies of the Ferrel cell, however, can be disrupted: the local passage of a cold front may change the wind direction in a matter of minutes, and frequently does. As a result, at the surface, winds can vary abruptly in direction. But the winds above the surface, where they are less disrupted by terrain, are essentially westerly. A low pressure zone at 60° latitude that moves toward the equator, or a high pressure zone at 30° latitude that moves poleward, will accelerate the Westerlies of the Ferrel cell. A strong high moving polewards may bring westerly winds for days.
The Ferrel system acts as a heat pump with a coefficient of performance of 12.1, consuming kinetic energy from the Hadley and polar systems at an approximate rate of 275 terawatts.
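The coefficient of performance (COP) of a heat pump relates heat moved to work consumed, COP = Q/W. Applying the figures quoted above (COP 12.1, 275 TW of kinetic energy consumed) gives the implied heat transport:

```python
# Heat pump relation: heat moved Q = COP * work W
cop = 12.1       # coefficient of performance quoted in the text
work_tw = 275.0  # kinetic energy consumed by the Ferrel system, TW

heat_moved_tw = cop * work_tw

# ~3300 TW of heat shuffled by the Ferrel system for 275 TW of work,
# far more heat transported than mechanical energy consumed.
print(heat_moved_tw)
```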
Polar cell
The polar cell is a simple system with strong convection drivers. Though cool and dry relative to equatorial air, the air masses at the 60th parallel are still sufficiently warm and moist to undergo convection and drive a thermal loop. At the 60th parallel, the air rises to the tropopause (about 8 km at this latitude) and moves poleward. As it does so, the upper-level air mass deviates toward the east. When the air reaches the polar areas, it has cooled by radiation to space and is considerably denser than the underlying air. It descends, creating a cold, dry high-pressure area. At the polar surface level, the mass of air is driven away from the pole toward the 60th parallel, replacing the air that rose there, and the polar circulation cell is complete. As the air at the surface moves toward the equator, it deviates westwards, again as a result of the Coriolis effect. The air flows at the surface are called the polar easterlies, flowing from northeast to southwest near the north pole and from southeast to northwest near the south pole.
The outflow of air mass from the cell creates harmonic waves in the atmosphere known as Rossby waves. These ultra-long waves determine the path of the polar jet stream, which travels within the transitional zone between the tropopause and the Ferrel cell. By acting as a heat sink, the polar cell moves the abundant heat from the equator toward the polar regions.
The polar cell, terrain, and katabatic winds in Antarctica can create very cold conditions at the surface, for instance the lowest temperature recorded on Earth: −89.2 °C at Vostok Station in Antarctica, measured in 1983.
Contrast between cells
The Hadley cell and the polar cell are similar in that they are thermally direct; in other words, they exist as a direct consequence of surface temperatures. Their thermal characteristics drive the weather in their domain. The sheer volume of energy that the Hadley cell transports, and the depth of the heat sink contained within the polar cell, ensures that transient weather phenomena not only have negligible effect on the systems as a whole, but — except under unusual circumstances — they do not form. The endless chain of passing highs and lows which is part of everyday life for mid-latitude dwellers, under the Ferrel cell at latitudes between 30 and 60° latitude, is unknown above the 60th and below the 30th parallels. There are some notable exceptions to this rule; over Europe, unstable weather extends to at least the 70th parallel north.
Longitudinal circulation features
While the Hadley, Ferrel, and polar cells (whose axes are oriented along parallels or latitudes) are the major features of global heat transport, they do not act alone. Temperature differences also drive a set of circulation cells, whose axes of circulation are longitudinally oriented. This atmospheric motion is known as zonal overturning circulation.
Latitudinal circulation is a result of the highest solar radiation per unit area (solar intensity) falling on the tropics. The solar intensity decreases as the latitude increases, reaching essentially zero at the poles. Longitudinal circulation, however, is a result of the heat capacity of water, its absorptivity, and its mixing. Water absorbs more heat than land does, but its temperature does not rise as much. As a result, temperature variations on land are greater than on water.
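The different thermal responses follow from specific heat capacity: for the same energy input, temperature rise is ΔT = Q / (m·c), so the material with the larger c warms less. The values below are typical textbook figures, used purely for illustration:

```python
# Temperature rise dT = Q / (m * c) for equal masses and equal heat input.
# Specific heats are typical textbook values in J/(kg*K), assumed here.
C_WATER = 4186.0  # liquid water
C_SOIL = 800.0    # dry soil/rock, roughly

def temp_rise(q_joules, mass_kg, specific_heat):
    return q_joules / (mass_kg * specific_heat)

q = 1.0e6   # 1 MJ of absorbed solar energy (illustrative)
m = 100.0   # kg of material

dt_water = temp_rise(q, m, C_WATER)
dt_land = temp_rise(q, m, C_SOIL)

# Land warms roughly 5x more than water for the same energy input, which is
# why continental temperature swings exceed oceanic ones and drive sea breezes.
print(dt_water, dt_land)
```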
The Hadley, Ferrel, and polar cells operate at the largest scale of thousands of kilometers (synoptic scale). The latitudinal circulation can also act on this scale of oceans and continents, and this effect is seasonal or even decadal. Warm air rises over the equatorial, continental, and western Pacific Ocean regions. When it reaches the tropopause, it cools and subsides in a region of relatively cooler water mass.
The Pacific Ocean cell plays a particularly important role in Earth's weather. This entirely ocean-based cell comes about as the result of a marked difference in the surface temperatures of the western and eastern Pacific. Under ordinary circumstances, the western Pacific waters are warm, and the eastern waters are cool. The process begins when strong convective activity over equatorial East Asia and subsiding cool air off South America's west coast create a wind pattern which pushes Pacific water westward and piles it up in the western Pacific. (Water levels in the western Pacific are about 60 cm higher than in the eastern Pacific.)
The daily (diurnal) longitudinal effects are at the mesoscale (a horizontal range of 5 to several hundred kilometres). During the day, air warmed by the relatively hotter land rises, and as it does so it draws a cool breeze from the sea that replaces the risen air. At night, the relatively warmer water and cooler land reverses the process, and a breeze from the land, of air cooled by the land, is carried offshore by night.
Walker circulation
The Pacific cell is of such importance that it has been named the Walker circulation after Sir Gilbert Walker, an early-20th-century director of British observatories in India, who sought a means of predicting when the monsoon winds of India would fail. While he was never successful in doing so, his work led him to the discovery of a link between the periodic pressure variations in the Indian Ocean, and those between the eastern and western Pacific, which he termed the "Southern Oscillation".
The movement of air in the Walker circulation affects the loops on either side. Under normal circumstances, the weather behaves as expected. But every few years, the winters become unusually warm or unusually cold, or the frequency of hurricanes increases or decreases, and the pattern sets in for an indeterminate period.
The Walker Cell plays a key role in this and in the El Niño phenomenon. If convective activity slows in the Western Pacific for some reason (this reason is not currently known), the climates of areas adjacent to the Western Pacific are affected. First, the upper-level westerly winds fail. This cuts off the source of returning, cool air that would normally subside at about 30° south latitude, and therefore the air returning as surface easterlies ceases. There are two consequences. Warm water ceases to surge into the eastern Pacific from the west (it was "piled" by past easterly winds) since there is no longer a surface wind to push it into the area of the east Pacific. This and the corresponding effects of the Southern Oscillation result in long-term unseasonable temperatures and precipitation patterns in North and South America, Australia, and Southeast Africa, and the disruption of ocean currents.
Meanwhile, in the Atlantic, fast-blowing upper level Westerlies of the Hadley cell form, which would ordinarily be blocked by the Walker circulation and unable to reach such intensities. These winds disrupt the tops of nascent hurricanes and greatly diminish the number which are able to reach full strength.
El Niño – Southern Oscillation
El Niño and La Niña are opposite surface temperature anomalies of the Southern Pacific, which heavily influence the weather on a large scale. In the case of El Niño, warm surface water approaches the coasts of South America which results in blocking the upwelling of nutrient-rich deep water. This has serious impacts on the fish populations.
In the La Niña case, the convective cell over the western Pacific strengthens inordinately, resulting in colder than normal winters in North America and a more robust cyclone season in South-East Asia and Eastern Australia. There is also an increased upwelling of deep cold ocean waters and more intense uprising of surface air near South America, resulting in increasing numbers of drought occurrences, although fishermen reap benefits from the more nutrient-filled eastern Pacific waters.
| Physical sciences | Atmospheric circulation | null |
433118 | https://en.wikipedia.org/wiki/Xenon%20hexafluoroplatinate | Xenon hexafluoroplatinate | Xenon hexafluoroplatinate is the product of the reaction of platinum hexafluoride with xenon, in an experiment that proved the chemical reactivity of the noble gases. This experiment was performed by Neil Bartlett at the University of British Columbia, who formulated the product as "Xe+[PtF6]−", although subsequent work suggests that Bartlett's product was probably a salt mixture and did not in fact contain this specific salt.
Preparation
"Xenon hexafluoroplatinate" is prepared from xenon and platinum hexafluoride (PtF6) as gaseous solutions in SF6. The reactants are combined at 77 K and slowly warmed to allow for a controlled reaction.
Structure
The material described originally as "xenon hexafluoroplatinate" is probably not Xe+[PtF6]−. The main problem with this formulation is "Xe+", which would be a radical and would dimerize or abstract a fluorine atom to give XeF+. Thus, Bartlett discovered that Xe undergoes chemical reactions, but the nature and purity of his initial mustard yellow product remains uncertain. Further work indicates that Bartlett's product probably contained [XeF]+[PtF5]−, [XeF]+[Pt2F11]−, and [Xe2F3]+[PtF6]−. The title "compound" is a salt, consisting of an octahedral anionic fluoride complex of platinum and various xenon cations.
It has been proposed that the platinum fluoride forms a negatively charged polymeric network with xenon or xenon fluoride cations held in its interstices. A preparation of "XePtF6" in HF solution results in a solid which has been characterized as a polymeric network associated with XeF+. This result is evidence for such a polymeric structure of xenon hexafluoroplatinate.
History
In 1962, Neil Bartlett discovered that a mixture of platinum hexafluoride gas and oxygen formed a red solid. The red solid turned out to be dioxygenyl hexafluoroplatinate. Bartlett noticed that the ionization energy of O2 (1175 kJ mol−1) was very close to the ionization energy of Xe (1170 kJ mol−1). He then asked his colleagues to give him some xenon "so that he could try out some reactions", whereupon he established that xenon indeed reacts with PtF6. Although, as discussed above, the product was probably a mixture of several compounds, Bartlett's work was the first proof that compounds could be prepared from a noble gas. Since Bartlett's observation, many well-defined compounds of xenon have been reported, including XeF2, XeF4, and XeF6.
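The near-match of those two ionization energies is easy to verify by converting to electron-volts (1 eV ≈ 96.485 kJ/mol, a standard conversion factor):

```python
# Convert the quoted molar ionization energies to eV and compare them.
KJ_PER_MOL_PER_EV = 96.485  # standard conversion factor

ie_o2 = 1175.0  # kJ/mol, O2 -> O2+ (value quoted in the text)
ie_xe = 1170.0  # kJ/mol, Xe -> Xe+ (value quoted in the text)

print(round(ie_o2 / KJ_PER_MOL_PER_EV, 2))  # ~12.18 eV
print(round(ie_xe / KJ_PER_MOL_PER_EV, 2))  # ~12.13 eV

# The two differ by well under 1%: since PtF6 could oxidize O2,
# Bartlett reasoned it should be able to oxidize Xe as well.
print(abs(ie_o2 - ie_xe) / ie_o2 < 0.01)
```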
| Physical sciences | Noble gas compounds | Chemistry |
433892 | https://en.wikipedia.org/wiki/Salvia%20officinalis | Salvia officinalis | Salvia officinalis, the common sage or sage, is a perennial, evergreen subshrub, with woody stems, grayish leaves, and blue to purplish flowers. It is a member of the mint family Lamiaceae and native to the Mediterranean region, though it has been naturalized in many places throughout the world. It has a long history of medicinal and culinary use, and in modern times it has been used as an ornamental garden plant. The common name "sage" is also used for closely related species and cultivars.
Description
Cultivars are quite variable in size, leaf and flower color, and foliage pattern, with many variegated leaf types. The Old World type grows to approximately tall and wide, with lavender flowers most common, though they can also be white, pink, or purple. The plant flowers in late spring or summer. The leaves are oblong, ranging in size up to long by wide. Leaves are grey-green, rugose on the upper side, and nearly white underneath due to the many short soft hairs. Modern cultivars include leaves with purple, rose, cream, and yellow in many variegated combinations. The common sage gives its name to the grayish-green color sage, due to the distinctive color of its leaves.
Taxonomy
Salvia officinalis was described by Carl Linnaeus in 1753. It has been grown for centuries in the Old World for its food and healing properties, and was often described in old herbals for the many miraculous properties attributed to it. The specific epithet, officinalis, refers to the plant's medicinal use—the officina was the traditional storeroom of a monastery where herbs and medicines were stored. S. officinalis has been classified under many other scientific names over the years, including six different names since 1940 alone. It is the type species for the genus Salvia.
Etymology
The specific epithet officinalis refers to plants with a well-established medicinal or culinary value.
Salvia officinalis has numerous common names. Some of the best-known are sage, common sage, garden sage, golden sage, kitchen sage, true sage, culinary sage, Dalmatian sage, and broadleaf sage. Cultivated forms include purple sage and red sage.
Distribution and habitat
Native to the Mediterranean region, it has been naturalized in many places throughout the world.
Cultivation
In favourable conditions in the garden, S. officinalis can grow to a substantial size (1 square metre or more), but a number of cultivars are more compact. As such they are valued as small ornamental flowering shrubs, rather than for their herbal properties. Some provide low ground cover, especially in sunny dry environments. Like many herbs they can be killed by a cold wet winter, especially if the soil is not well drained. But they are easily propagated from summer cuttings, and some cultivars are produced from seeds.
Named cultivars include:
'Alba', a white-flowered cultivar
'Aurea', golden sage
'Berggarten', a cultivar with large leaves, which rarely blooms, extending the useful life of the leaves
'Extrakta', has leaves with higher oil concentrations
'Icterina', a cultivar with yellow-green variegated leaves
'Lavandulaefolia', a small leaved cultivar
'Purpurascens' ('Purpurea'), a purple-leafed cultivar
'Tricolor', a cultivar with white, purple and green variegated leaves
'Icterina' and 'Purpurascens' have gained the Royal Horticultural Society's Award of Garden Merit.
Uses
Historical uses
Salvia officinalis has been used since ancient times for treating snakebites, increasing women's fertility, and more. The Romans referred to sage as the "holy herb," and employed it in their religious rituals. Theophrastus wrote about two different sages, a wild undershrub he called sphakos, and a similar cultivated plant he called elelisphakos. Pliny the Elder said the latter plant was called salvia by the Romans, and used as a diuretic, a local anesthetic for the skin, a styptic, and for other uses. Charlemagne recommended the plant for cultivation in the early Middle Ages, and during the Carolingian Empire, it was cultivated in monastery gardens. Walafrid Strabo described it in his poem Hortulus as having a sweet scent and being useful for many human ailments—he went back to the Greek root for the name and called it lelifagus.
The plant had a high reputation throughout the Middle Ages, with many sayings referring to its healing properties and value. It was sometimes called S. salvatrix (sage the savior). Dioscorides, Pliny, and Galen all recommended sage as a diuretic, hemostatic, emmenagogue, and tonic. Le Menagier de Paris, in addition to recommending cold sage soup and sage sauce for poultry, recommends infusion of sage for washing hands at table. John Gerard's Herball (1597) states that sage "is singularly good for the head and brain, it quickeneth the senses and memory, strengtheneth the sinews, restoreth health to those that have the palsy, and taketh away shakey trembling of the members." Gervase Markham's The English Huswife (1615) gives a recipe for a tooth-powder of sage and salt. It appears in recipes for Four Thieves Vinegar, a blend of herbs which was supposed to ward off the plague. In past centuries, it was also used for hair care, insect bites and wasp stings, nervous conditions, mental conditions, oral preparations for inflammation of the mouth, tongue and throat, and also to reduce fevers.
Culinary
In Britain, sage has for generations been listed as one of the essential herbs, along with parsley, rosemary, and thyme (as in the folk song "Scarborough Fair"). It has a savory, slightly peppery flavor. Sage appears in the 14th and 15th centuries in a "Cold Sage Sauce", known in French, English and Lombard cuisine, probably traceable to its appearance in Le Viandier de Taillevent. It appears in many European cuisines, notably Italian, Balkan and Middle Eastern cookery. In Italian cuisine, it is an essential condiment for saltimbocca and other dishes, favored with fish. In British and American cooking, it is traditionally served as sage and onion stuffing, an accompaniment to roast turkey or chicken at Christmas or Thanksgiving Day, and for Sunday roast dinners. Other dishes include pork casserole, Sage Derby cheese and Lincolnshire sausages. Despite the common use of traditional and available herbs in French cuisine, sage never found favor there.
Essential oil
Common sage is grown in parts of Europe for distillation of an essential oil, although other species such as Salvia fruticosa may also be harvested and distilled with it.
Research
In 2014 and 2017, extracts of Salvia officinalis and S. lavandulaefolia were under preliminary research for their potential effects on human brain function. The thujone present in Salvia extracts may be neurotoxic.
| Biology and health sciences | Herbs and spices | Plants |
434188 | https://en.wikipedia.org/wiki/Bioremediation | Bioremediation | Bioremediation broadly refers to any process wherein a biological system (typically bacteria, microalgae, fungi in mycoremediation, and plants in phytoremediation), living or dead, is employed for removing environmental pollutants from air, water, soil, flue gasses, industrial effluents etc., in natural or artificial settings. The natural ability of organisms to adsorb, accumulate, and degrade common and emerging pollutants has attracted the use of biological resources in treatment of contaminated environment. In comparison to conventional physicochemical treatment methods bioremediation may offer advantages as it aims to be sustainable, eco-friendly, cheap, and scalable.
Most bioremediation is inadvertent, involving native organisms. Research on bioremediation is heavily focused on stimulating the process by inoculating a polluted site with organisms or supplying nutrients to promote their growth. Bioremediation is one approach within the broader field of environmental remediation.
While organic pollutants are susceptible to biodegradation, heavy metals cannot be degraded; they can only be oxidized or reduced. Typical bioremediation involves oxidation. Oxidation enhances the water-solubility of organic compounds and their susceptibility to further degradation by subsequent oxidation and hydrolysis. Ultimately, biodegradation converts hydrocarbons to carbon dioxide and water. For heavy metals, bioremediation offers few solutions: metal-containing pollutants can be removed, at least partially, with varying bioremediation techniques. The main challenge to bioremediation is rate: the processes are slow.
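As an illustration of complete mineralization, the balanced equation for the aerobic oxidation of a representative hydrocarbon (octane is our choice of example, not the source's) can be written as:

```latex
% Complete aerobic mineralization of octane to carbon dioxide and water
2\,\mathrm{C_8H_{18}} + 25\,\mathrm{O_2} \longrightarrow 16\,\mathrm{CO_2} + 18\,\mathrm{H_2O}
```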
Bioremediation techniques can be classified as (i) in situ techniques, which treat polluted sites directly, and (ii) ex situ techniques, which are applied to excavated materials. In both approaches, additional nutrients, vitamins, minerals, and pH buffers are added to enhance the growth and metabolism of the microorganisms. In some cases, specialized microbial cultures are added (bioaugmentation). Some examples of bioremediation-related technologies are phytoremediation, bioventing, bioattenuation, biosparging, composting (biopiles and windrows), and landfarming. Other remediation techniques include thermal desorption, vitrification, air stripping, bioleaching, rhizofiltration, and soil washing. Biological treatment is a similar approach used to treat wastes including wastewater, industrial waste and solid waste. The end goal of bioremediation is to remove harmful compounds to improve soil and water quality.
In situ techniques
Bioventing
Bioventing is a process that increases the oxygen or air flow into the unsaturated zone of the soil, which in turn increases the rate of natural in situ degradation of the targeted hydrocarbon contaminant. Bioventing, an aerobic bioremediation, is the most common form of oxidative bioremediation process, in which oxygen is provided as the electron acceptor for oxidation of petroleum, polyaromatic hydrocarbons (PAHs), phenols, and other reduced pollutants. Oxygen is generally the preferred electron acceptor because of its higher energy yield and because oxygen is required for some enzyme systems to initiate the degradation process. Microorganisms can degrade a wide variety of hydrocarbons, including components of gasoline, kerosene, diesel, and jet fuel. Under ideal aerobic conditions, the biodegradation rates of low- to moderate-weight aliphatic, alicyclic, and aromatic compounds can be very high. As the molecular weight of a compound increases, so does its resistance to biodegradation; heavier compounds therefore persist longer and are more difficult to remove from the environment.
Most bioremediation processes involve oxidation-reduction reactions where either an electron acceptor (commonly oxygen) is added to stimulate oxidation of a reduced pollutant (e.g. hydrocarbons) or an electron donor (commonly an organic substrate) is added to reduce oxidized pollutants (nitrate, perchlorate, oxidized metals, chlorinated solvents, explosives and propellants). In both these approaches, additional nutrients, vitamins, minerals, and pH buffers may be added to optimize conditions for the microorganisms. In some cases, specialized microbial cultures are added (bioaugmentation) to further enhance biodegradation.
Approaches for oxygen addition below the water table include recirculating aerated water through the treatment zone, addition of pure oxygen or peroxides, and air sparging. Recirculation systems typically consist of a combination of injection wells or galleries and one or more recovery wells where the extracted groundwater is treated, oxygenated, amended with nutrients and re-injected. However, the amount of oxygen that can be provided by this method is limited by the low solubility of oxygen in water (8 to 10 mg/L for water in equilibrium with air at typical temperatures). Greater amounts of oxygen can be provided by contacting the water with pure oxygen or addition of hydrogen peroxide (H2O2) to the water. In some cases, slurries of solid calcium or magnesium peroxide are injected under pressure through soil borings. These solid peroxides react with water releasing H2O2 which then decomposes releasing oxygen. Air sparging involves the injection of air under pressure below the water table. The air injection pressure must be great enough to overcome the hydrostatic pressure of the water and resistance to air flow through the soil.
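The solid-peroxide oxygen release described above can be sketched as two standard reactions (calcium peroxide is shown; magnesium peroxide behaves analogously):

```latex
\mathrm{CaO_2} + 2\,\mathrm{H_2O} \longrightarrow \mathrm{Ca(OH)_2} + \mathrm{H_2O_2}
2\,\mathrm{H_2O_2} \longrightarrow 2\,\mathrm{H_2O} + \mathrm{O_2}
```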
Biostimulation
Bioremediation can be carried out by bacteria that are naturally present. In biostimulation, the population of these helpful bacteria can be increased by adding nutrients.
Bacteria can in principle be used to degrade hydrocarbons. Specific to marine oil spills, nitrogen and phosphorus have been key nutrients in biodegradation. The bioremediation of hydrocarbons suffers from low rates.
Bioremediation can involve the action of a microbial consortium, in which the product of one species serves as the substrate for another.
Anaerobic bioremediation can in principle be employed to treat a range of oxidized contaminants including chlorinated ethylenes (PCE, TCE, DCE, VC), chlorinated ethanes (TCA, DCA), chloromethanes (CT, CF), chlorinated cyclic hydrocarbons, various energetics (e.g., perchlorate, RDX, TNT), and nitrate. This process involves the addition of an electron donor to: 1) deplete background electron acceptors including oxygen, nitrate, oxidized iron and manganese and sulfate; and 2) stimulate the biological and/or chemical reduction of the oxidized pollutants. The choice of substrate and the method of injection depend on the contaminant type and distribution in the aquifer, hydrogeology, and remediation objectives. Substrate can be added using conventional well installations, by direct-push technology, or by excavation and backfill such as permeable reactive barriers (PRB) or biowalls. Slow-release products composed of edible oils or solid substrates tend to stay in place for an extended treatment period. Soluble substrates or soluble fermentation products of slow-release substrates can potentially migrate via advection and diffusion, providing broader but shorter-lived treatment zones. The added organic substrates are first fermented to hydrogen (H2) and volatile fatty acids (VFAs). The VFAs, including acetate, lactate, propionate and butyrate, provide carbon and energy for bacterial metabolism.
Bioattenuation
During bioattenuation, biodegradation occurs naturally, without the addition of nutrients or bacteria. The indigenous microbes present determine the metabolic activity and provide natural attenuation. While there is no anthropogenic involvement in bioattenuation, the contaminated site must still be monitored.
Biosparging
Biosparging is a groundwater remediation process in which oxygen, and possibly nutrients, are injected. The injected oxygen stimulates indigenous bacteria, increasing the rate of degradation. Biosparging focuses on saturated contaminated zones, specifically in groundwater remediation.
UNICEF, power producers, bulk water suppliers, and local governments are early adopters of low cost bioremediation, such as aerobic bacteria tablets which are simply dropped into water.
Ex situ techniques
Biopiles
Biopiles, similar to bioventing, are used to remove petroleum pollutants by stimulating aerobic microbial degradation of hydrocarbons in contaminated soils. Unlike bioventing, the soil is excavated and piled over an aeration system. This aeration system enhances microbial activity by supplying oxygen under positive pressure or drawing air through under negative pressure.
Windrows
Windrow systems are similar to compost techniques where soil is periodically turned in order to enhance aeration. This periodic turning also allows contaminants present in the soil to be uniformly distributed which accelerates the process of bioremediation.
Landfarming
Landfarming, or land treatment, is a method commonly used for sludge spills. The contaminated soil is spread out and aerated by being cyclically turned. It is an above-ground application, and the contaminated soil must be shallow for microbial activity to be stimulated; if the contamination is deeper than 5 feet, the soil must be excavated. While landfarming is classed as an ex situ technique, it can also be considered in situ when performed at the site of contamination.
In situ vs. Ex situ
Ex situ techniques are often more expensive because of excavation and transportation costs to the treatment facility, while in situ techniques are performed at the site of contamination and incur only installation costs. The lower cost, however, comes with less ability to determine the scale and spread of the pollutant. The nature of the pollutant ultimately determines which bioremediation method to use; its depth and spread are other important factors.
Heavy metals
Heavy metals are introduced into the environment by both anthropogenic activities and natural factors. Anthropogenic activities include industrial emissions, electronic waste, and mining. Natural factors include mineral weathering, soil erosion, and forest fires. Heavy metals including cadmium, chromium, lead and uranium are unlike organic compounds in that they cannot be biodegraded. However, bioremediation processes can potentially be used to minimize the mobility of these materials in the subsurface, lowering the potential for human and environmental exposure. Heavy metals from these sources are predominantly present in water due to runoff, where they are taken up by aquatic fauna and flora.
Hexavalent chromium (Cr[VI]) and uranium (U[VI]) can be reduced to less mobile and/or less toxic forms (e.g., Cr[III], U[IV]). Similarly, reduction of sulfate to sulfide (sulfidogenesis) can be used to immobilize certain metals (e.g., zinc, cadmium).
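The chromium reduction mentioned above corresponds, in acidic solution, to the textbook dichromate half-reaction (a standard form, not taken from the source):

```latex
\mathrm{Cr_2O_7^{2-}} + 14\,\mathrm{H^+} + 6\,e^- \longrightarrow 2\,\mathrm{Cr^{3+}} + 7\,\mathrm{H_2O}
```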
The mobility of certain metals, including chromium (Cr) and uranium (U), varies depending on the oxidation state of the material. Microorganisms can be used to lower the toxicity and mobility of chromium by reducing hexavalent chromium, Cr(VI), to trivalent Cr(III). Reduction of the more mobile U(VI) species affords the less mobile U(IV) derivatives. Microorganisms are used in this process because the reduction rate of these metals is often slow in the absence of microbial interactions. Research is also underway to develop methods to remove metals from water by enhancing the sorption of the metal to cell walls. This approach has been evaluated for treatment of cadmium, chromium, and lead. Genetically modified bacteria have also been explored for use in sequestering arsenic. Phytoextraction processes concentrate contaminants in the biomass for subsequent removal.
Metal extractions can in principle be performed in situ or ex situ; in situ is preferred since it avoids the expense of excavating the substrate.
Bioremediation is not specific to metals. In 2010, a massive oil spill occurred in the Gulf of Mexico, and populations of bacteria and archaea were used to rejuvenate the coast. These microorganisms have evolved metabolic networks that can use hydrocarbons such as oil and petroleum as sources of carbon and energy. Microbial bioremediation is a very effective modern technique for restoring natural systems by removing toxins from the environment.
Pesticides
Of the many ways to deal with pesticide contamination, bioremediation promises to be among the most effective. Many sites around the world are contaminated with agrichemicals, which often resist biodegradation by design and harm many forms of life, causing long-term health problems such as cancer, rashes, blindness, paralysis, and mental illness. An example is lindane, an insecticide commonly used in the 20th century. Long-term exposure poses a serious threat to humans and the surrounding ecosystem. Lindane reduces the activity of beneficial soil bacteria such as nitrogen-fixing cyanobacteria, and causes central nervous system problems in small mammals, including seizures, dizziness, and even death. What makes it so harmful to these organisms is how quickly it is distributed to the brain and fatty tissues. While lindane has been mostly limited to specific uses, it is still produced and used around the world.
Actinobacteria are promising candidates for in situ removal of pesticides. When certain strains of Actinobacteria are grouped together, their efficiency in degrading pesticides is enhanced. The technique is also reusable: by limiting the migration space of the cells, specific areas can be targeted without fully exhausting the cells' degradative capacity. Despite encouraging results, Actinobacteria have only been used in controlled laboratory settings, and further development is needed to establish the cost-effectiveness and scalability of the approach.
Limitations of bioremediation
Bioremediation can be used to mineralize organic pollutants, to partially transform the pollutants, or alter their mobility. Heavy metals and radionuclides generally cannot be biodegraded, but can be bio-transformed to less mobile forms. In some cases, microbes do not fully mineralize the pollutant, potentially producing a more toxic compound. For example, under anaerobic conditions, the reductive dehalogenation of TCE may produce dichloroethylene (DCE) and vinyl chloride (VC), which are suspected or known carcinogens. However, the microorganism Dehalococcoides can further reduce DCE and VC to the non-toxic product ethene. The molecular pathways for bioremediation are of considerable interest. In addition, knowing these pathways will help develop new technologies that can deal with sites that have uneven distributions of a mixture of contaminants.
Biodegradation requires a microbial population with the metabolic capacity to degrade the pollutant. The biological processes used by these microbes are highly specific, so many environmental factors must be taken into account and regulated as well. It can be difficult to extrapolate the results from small-scale test studies to large field operations. In many cases, bioremediation takes more time than alternatives such as landfilling and incineration. Bioventing, for example, is an inexpensive way to bioremediate contaminated sites, but the process is extensive and can take a few years to decontaminate a site.
Another major drawback is finding the right species to perform bioremediation. To prevent the introduction and spread of an invasive species into the ecosystem, an indigenous species is needed, and it must be plentiful enough to clean the whole site without exhausting its population. Finally, the species should be resilient enough to withstand the environmental conditions. These specific criteria can make it difficult to perform bioremediation at a contaminated site.
In agricultural industries, pesticide use is a leading cause of direct soil contamination and of runoff water contamination. A key limitation in remediating pesticides is their low bioavailability. Altering the pH and temperature of the contaminated soil can increase bioavailability and, in turn, the degradation of harmful compounds.
The compound acrylonitrile is commonly produced in industrial settings and adversely contaminates soils. Microorganisms containing nitrile hydratases (NHase) can degrade harmful acrylonitrile into non-polluting substances.
Since experience with harmful contaminants is limited, laboratory studies are required to evaluate effectiveness, design treatments, and estimate treatment times. Bioremediation processes may take several months to several years, depending on the size of the contaminated area.
Genetic engineering
The use of genetic engineering to create organisms specifically designed for bioremediation is under preliminary research. Two categories of genes can be inserted into the organism: degradative genes, which encode proteins required for the degradation of pollutants, and reporter genes, which encode proteins able to monitor pollution levels. Numerous members of Pseudomonas have been modified with the lux gene for the detection of the polyaromatic hydrocarbon naphthalene. A field test for the release of the modified organism has been successful on a moderately large scale.
There are concerns surrounding the release and containment of genetically modified organisms into the environment due to the potential for horizontal gene transfer. Genetically modified organisms are classified and controlled under the Toxic Substances Control Act of 1976, administered by the United States Environmental Protection Agency. Measures have been created to address these concerns. Organisms can be modified such that they can only survive and grow under specific sets of environmental conditions. In addition, the tracking of modified organisms can be made easier with the insertion of bioluminescence genes for visual identification.
Genetically modified organisms have been created to treat oil spills and break down certain plastics (PET).
Additive manufacturing
Additive manufacturing technologies such as bioprinting offer distinctive benefits that can be leveraged in bioremediation to develop structures with characteristics tailored to biological systems and environmental cleanup needs. Although the adoption of this technology in bioremediation is in its early stages, the area is growing rapidly.
| Technology | Environmental remediation | null |
434221 | https://en.wikipedia.org/wiki/James%20Webb%20Space%20Telescope | James Webb Space Telescope | The James Webb Space Telescope (JWST) is a space telescope designed to conduct infrared astronomy. As the largest telescope in space, it is equipped with high-resolution and high-sensitivity instruments, allowing it to view objects too old, distant, or faint for the Hubble Space Telescope. This enables investigations across many fields of astronomy and cosmology, such as observation of the first stars and the formation of the first galaxies, and detailed atmospheric characterization of potentially habitable exoplanets.
Although Webb's mirror diameter is 2.7 times larger than that of the Hubble Space Telescope, it produces images of comparable sharpness because it observes at longer, infrared wavelengths. The longer the wavelength observed, the larger the light-gathering surface required (mirror area in the infrared, or antenna area in the millimetre and radio ranges) to achieve clarity comparable to Hubble's in the visible spectrum.
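The wavelength-versus-aperture trade-off can be made concrete with the Rayleigh criterion. This is a minimal sketch; the mirror diameters and representative wavelengths below are commonly quoted figures, not taken from this text:

```python
import math

ARCSEC_PER_RAD = 206265.0

def rayleigh_limit_arcsec(wavelength_m, diameter_m):
    """Rayleigh diffraction limit, theta ~ 1.22 * lambda / D, in arcseconds."""
    return 1.22 * wavelength_m / diameter_m * ARCSEC_PER_RAD

# Hubble at visible wavelengths (~0.5 um, 2.4 m mirror) vs. Webb in the
# near infrared (~1.4 um, 6.5 m mirror): the ~2.7x larger aperture roughly
# offsets the ~2.7x longer wavelength, giving comparable sharpness.
hubble = rayleigh_limit_arcsec(0.5e-6, 2.4)
webb = rayleigh_limit_arcsec(1.4e-6, 6.5)
print(f'Hubble ~ {hubble:.3f} arcsec   Webb ~ {webb:.3f} arcsec')
```

Both limits come out near 0.05 arcseconds, which is the sense in which the sharpness is "comparable" despite the very different wavelengths.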
The Webb was launched on 25 December 2021 on an Ariane 5 rocket from Kourou, French Guiana. In January 2022 it arrived at its destination, a solar orbit near the Sun–Earth L2 Lagrange point, about from Earth. The telescope's first image was released to the public on 11 July 2022.
The U.S. National Aeronautics and Space Administration (NASA) led Webb's design and development and partnered with two main agencies: the European Space Agency (ESA) and the Canadian Space Agency (CSA). The NASA Goddard Space Flight Center in Maryland managed telescope development, while the Space Telescope Science Institute in Baltimore on the Homewood Campus of Johns Hopkins University operates Webb. The primary contractor for the project was Northrop Grumman.
The telescope is named after James E. Webb, who was the administrator of NASA from 1961 to 1968 during the Mercury, Gemini, and Apollo programs.
Webb's primary mirror consists of 18 hexagonal mirror segments made of gold-plated beryllium, which together create a mirror, compared with Hubble's . This gives Webb a light-collecting area of about , about six times that of Hubble. Unlike Hubble, which observes in the near ultraviolet and visible (0.1 to 0.8 μm), and near infrared (0.8–2.5 μm) spectra, Webb observes a lower frequency range, from long-wavelength visible light (red) through mid-infrared (0.6–28.5 μm). The telescope must be kept extremely cold, below , so that the infrared light emitted by the telescope itself does not interfere with the collected light. Its five-layer sunshield protects it from warming by the Sun, Earth, and Moon.
Initial designs for the telescope, then named the Next Generation Space Telescope, began in 1996. Two concept studies were commissioned in 1999, for a potential launch in 2007 and a US$1 billion budget. The program was plagued with enormous cost overruns and delays. A major redesign was accomplished in 2005, with construction completed in 2016, followed by years of exhaustive testing, at a total cost of US$10 billion.
Features
The mass of the James Webb Space Telescope (JWST) is about half that of the Hubble Space Telescope. Webb has a gold-coated beryllium primary mirror made up of 18 separate hexagonal mirrors. The mirror has a polished area of , of which is obscured by the secondary support struts, giving a total collecting area of . This is over six times larger than the collecting area of Hubble's diameter mirror, which has a collecting area of . The mirror has a gold coating to provide infrared reflectivity and this is covered by a thin layer of glass for durability.
Webb is designed primarily for near-infrared astronomy, but can also see orange and red visible light, as well as the mid-infrared region, depending on the instrument being used. It can detect objects up to 100 times fainter than Hubble can, and objects much earlier in the history of the universe, back to redshift z≈20 (about 180 million years cosmic time after the Big Bang). For comparison, the earliest stars are thought to have formed between z≈30 and z≈20 (100–180 million years cosmic time), and the first galaxies may have formed around redshift z≈15 (about 270 million years cosmic time). Hubble is unable to see further back than very early reionization at about z≈11.1 (galaxy GN-z11, 400 million years cosmic time).
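The correspondence between redshift and cosmic time quoted above can be checked with a short numerical integration. This is a sketch under assumed, Planck-like cosmological parameters (the parameter values are our illustration, not from the text):

```python
import math

# Cosmic time at redshift z for a flat Lambda-CDM universe:
#   t(a) = integral from 0 to a of da' / (a' * H(a')),  with a = 1/(1+z)
# and H(a) = H0 * sqrt(Omega_m * a^-3 + Omega_L).
H0_KM_S_MPC = 67.7                  # assumed Hubble constant
OMEGA_M, OMEGA_L = 0.31, 0.69       # assumed density parameters
MPC_IN_KM = 3.0857e19
H0 = H0_KM_S_MPC / MPC_IN_KM        # Hubble constant in 1/s
SEC_PER_MYR = 3.156e13

def age_at_redshift(z, steps=100_000):
    """Trapezoidal integration; the integrand vanishes at a = 0."""
    a_end = 1.0 / (1.0 + z)
    da = a_end / steps
    total = 0.0
    for i in range(steps + 1):
        a = i * da
        # da'/(a' H(a')) simplifies to 1 / (H0 * sqrt(Om/a + OL*a^2))
        f = 0.0 if a == 0.0 else 1.0 / (H0 * math.sqrt(OMEGA_M / a + OMEGA_L * a * a))
        total += f * (0.5 if i in (0, steps) else 1.0)
    return total * da               # seconds since the Big Bang

print(f'{age_at_redshift(20) / SEC_PER_MYR:.0f} Myr')
```

With these parameters the script prints roughly 180 Myr for z = 20, consistent with the figure in the text.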
The design emphasizes the near to mid-infrared for several reasons:
high-redshift (very early and distant) objects have their visible emissions shifted into the infrared, and therefore their light can be observed only via infrared astronomy;
infrared light passes more easily through dust clouds than visible light;
colder objects such as debris disks and planets emit most strongly in the infrared;
these infrared bands are difficult to study from the ground or by existing space telescopes such as Hubble.
Ground-based telescopes must look through Earth's atmosphere, which is opaque in many infrared bands (see figure at right). Even where the atmosphere is transparent, many of the target chemical compounds, such as water, carbon dioxide, and methane, also exist in the Earth's atmosphere, vastly complicating analysis. Existing space telescopes such as Hubble cannot study these bands since their mirrors are insufficiently cool (the Hubble mirror is maintained at about ) which means that the telescope itself radiates strongly in the relevant infrared bands.
Webb can also observe objects in the Solar System at an angle of more than 85° from the Sun and having an apparent angular rate of motion less than 0.03 arc seconds per second. This includes Mars, Jupiter, Saturn, Uranus, Neptune, Pluto, their satellites, and comets, asteroids and minor planets at or beyond the orbit of Mars. Webb has the near-IR and mid-IR sensitivity to be able to observe virtually all known Kuiper Belt Objects. In addition, it can observe opportunistic and unplanned targets within 48 hours of a decision to do so, such as supernovae and gamma ray bursts.
Location and orbit
Webb operates in a halo orbit, circling around a point in space known as the Sun–Earth L2 Lagrange point, approximately beyond Earth's orbit around the Sun. Its actual position varies between about from L2 as it orbits, keeping it out of both Earth and Moon's shadow. By way of comparison, Hubble orbits above Earth's surface, and the Moon is roughly from Earth. Objects near this Sun–Earth point can orbit the Sun in synchrony with the Earth, allowing the telescope to remain at a roughly constant distance with continuous orientation of its sunshield and equipment bus toward the Sun, Earth and Moon. Combined with its wide shadow-avoiding orbit, the telescope can simultaneously block incoming heat and light from all three of these bodies and avoid even the smallest changes of temperature from Earth and Moon shadows that would affect the structure, yet still maintain uninterrupted solar power and Earth communications on its sun-facing side. This arrangement keeps the temperature of the spacecraft constant and below the necessary for faint infrared observations.
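The Earth-to-L2 distance can be estimated from the standard restricted three-body approximation. This is a back-of-the-envelope sketch; the mass ratio and astronomical unit are standard values, not taken from the text:

```python
# Approximate Earth-to-L2 distance: r ~ a * (m / (3M))^(1/3),
# where a is the Sun-Earth distance, m the Earth(+Moon) mass, M the Sun's.
AU_KM = 1.495979e8                  # astronomical unit in km
M_EARTH_OVER_M_SUN = 3.003e-6       # Earth-Moon system to Sun mass ratio

r_l2 = AU_KM * (M_EARTH_OVER_M_SUN / 3.0) ** (1.0 / 3.0)
print(f'L2 distance ~ {r_l2:.2e} km')   # roughly 1.5 million km
```

The result, about 1.5 million km, is roughly one percent of the Sun-Earth distance, which is why L2 shares Earth's orbital period.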
Sunshield protection
To make observations in the infrared spectrum, Webb must be kept under ; otherwise, infrared radiation from the telescope itself would overwhelm its instruments. Its large sunshield blocks light and heat from the Sun, Earth, and Moon, and its position near the Sun–Earth L2 point keeps all three bodies on the same side of the spacecraft at all times. Its halo orbit around the L2 point avoids the shadow of the Earth and Moon, maintaining a constant environment for the sunshield and solar arrays. The resulting stable temperature for the structures on the dark side is critical to maintaining precise alignment of the primary mirror segments.
The sunshield consists of five layers, each approximately as thin as a human hair. Each layer is made of Kapton E film, coated with aluminum on both sides. The two outermost layers have an additional coating of doped silicon on the Sun-facing sides, to better reflect the Sun's heat back into space. Accidental tears of the delicate film structure during deployment testing in 2018 led to further delays to the telescope deployment.
The sunshield was designed to be folded twelve times so that it would fit within the Ariane 5 rocket's payload fairing, which is in diameter, and long. The shield's fully deployed dimensions were planned as .
Keeping within the shadow of the sunshield limits the field of regard of Webb at any given time. The telescope can see 40 percent of the sky from any one position, but can see all of the sky over a period of six months.
Optics
Webb's primary mirror is a -diameter gold-coated beryllium reflector with a collecting area of . If it had been designed as a single, large mirror, it would have been too large for existing launch vehicles. The mirror is therefore composed of 18 hexagonal segments (a technique pioneered by Guido Horn d'Arturo), which unfolded after the telescope was launched. Image plane wavefront sensing through phase retrieval is used to position the mirror segments in the correct location using precise actuators. Subsequent to this initial configuration, they only need occasional updates every few days to retain optimal focus. This is unlike terrestrial telescopes, for example the Keck telescopes, which continually adjust their mirror segments using active optics to overcome the effects of gravitational and wind loading. The Webb telescope uses 132 small actuation motors to position and adjust the optics. The actuators can position the mirror with 10 nanometer accuracy.
Webb's optical design is a three-mirror anastigmat, which makes use of curved secondary and tertiary mirrors to deliver images that are free from optical aberrations over a wide field. The secondary mirror is in diameter. In addition, there is a fine steering mirror which can adjust its position many times per second to provide image stabilization. Point light sources in images taken by Webb have six diffraction spikes plus two fainter ones, due to the hexagonal shape of the primary mirror segments.
Scientific instruments
The Integrated Science Instrument Module (ISIM) is a framework that provides electrical power, computing resources, cooling capability as well as structural stability to the Webb telescope. It is made with bonded graphite-epoxy composite attached to the underside of Webb's telescope structure. The ISIM holds the four science instruments and a guide camera.
NIRCam (Near Infrared Camera) is an infrared imager which has spectral coverage ranging from the edge of the visible (0.6 μm) through to the near infrared (5 μm). There are 10 sensors each of 4 megapixels. NIRCam serves as the observatory's wavefront sensor, which is required for wavefront sensing and control activities, used to align and focus the main mirror segments. NIRCam was built by a team led by the University of Arizona, with principal investigator Marcia J. Rieke.
NIRSpec (Near Infrared Spectrograph) performs spectroscopy over the same wavelength range. It was built by the European Space Agency (ESA) at ESTEC in Noordwijk, Netherlands. The leading development team includes members from Airbus Defence and Space, Ottobrunn and Friedrichshafen, Germany, and the Goddard Space Flight Center; with Pierre Ferruit (École normale supérieure de Lyon) as NIRSpec project scientist. The NIRSpec design provides three observing modes: a low-resolution mode using a prism, an R~1000 multi-object mode, and an R~2700 integral field unit or long-slit spectroscopy mode. Switching of the modes is done by operating a wavelength preselection mechanism called the Filter Wheel Assembly, and selecting a corresponding dispersive element (prism or grating) using the Grating Wheel Assembly mechanism. Both mechanisms are based on the successful ISOPHOT wheel mechanisms of the Infrared Space Observatory. The multi-object mode relies on a complex micro-shutter mechanism to allow for simultaneous observations of hundreds of individual objects anywhere in NIRSpec's field of view. There are two sensors, each of 4 megapixels.
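The quoted R~1000 and R~2700 figures refer to the spectral resolving power R = λ/Δλ, the ratio of a wavelength to the smallest wavelength difference the spectrograph can separate. A small illustrative calculation (the helper names are mine, not part of any mission tool):

```python
C_KM_S = 299792.458  # speed of light, km/s

def delta_lambda_um(lam_um, R):
    """Smallest resolvable wavelength difference (same units as lam_um)
    at resolving power R = lam / dlam."""
    return lam_um / R

def velocity_resolution_km_s(R):
    """Equivalent Doppler-velocity resolution, c / R."""
    return C_KM_S / R

# In an R~2700 mode at 2 um, features about 0.74 nm apart
# (roughly 111 km/s in velocity space) can be separated.
dl = delta_lambda_um(2.0, 2700)
dv = velocity_resolution_km_s(2700)
```

Higher R thus trades wavelength coverage per detector pixel for finer spectral detail, which is why the prism, grating, and integral-field modes serve different science cases.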
MIRI (Mid-Infrared Instrument) measures the mid-to-long-infrared wavelength range from 5 to 27 μm. It contains both a mid-infrared camera and an imaging spectrometer. MIRI was developed as a collaboration between NASA and a consortium of European countries, and is led by George Rieke (University of Arizona) and Gillian Wright (UK Astronomy Technology Centre, Edinburgh, Scotland). The temperature of MIRI must not exceed 6 K; a helium-gas mechanical cooler sited on the warm side of the environmental shield provides this cooling.
FGS/NIRISS (Fine Guidance Sensor and Near Infrared Imager and Slitless Spectrograph), led by the Canadian Space Agency (CSA) under project scientist John Hutchings (Herzberg Astronomy and Astrophysics Research Centre), is used to stabilize the line-of-sight of the observatory during science observations. Measurements by the FGS are used both to control the overall orientation of the spacecraft and to drive the fine steering mirror for image stabilization. The CSA also provided a Near Infrared Imager and Slitless Spectrograph (NIRISS) module for astronomical imaging and spectroscopy in the 0.8 to 5 μm wavelength range, led by principal investigator René Doyon at the Université de Montréal. Although they are often referred to together as a unit, NIRISS and the FGS serve entirely different purposes: one is a scientific instrument and the other is part of the observatory's support infrastructure.
NIRCam and MIRI feature starlight-blocking coronagraphs for observation of faint targets such as extrasolar planets and circumstellar disks very close to bright stars.
Spacecraft bus
The spacecraft bus is the primary support component of the JWST, hosting a multitude of computing, communication, electric power, propulsion, and structural parts. Along with the sunshield, it forms the spacecraft element of the space telescope. The spacecraft bus is on the Sun-facing "warm" side of the sunshield and operates at a temperature of about 300 K.
The structure of the spacecraft bus has a mass of 350 kg and must support the 6,200 kg space telescope. It is made primarily of graphite composite material. The assembly was completed in California in 2015, and it was integrated with the rest of the space telescope in the lead-up to the 2021 launch. The spacecraft bus can rotate the telescope with a pointing precision of one arcsecond and isolates vibration down to two milliarcseconds.
Webb has two pairs of rocket engines (one pair for redundancy) to make course corrections on the way to L2 and for station keeping – maintaining the correct position in the halo orbit. Eight smaller thrusters are used for attitude control – keeping the spacecraft correctly pointed. The engines use hydrazine as fuel and dinitrogen tetroxide as oxidizer.
Servicing
Webb is not intended to be serviced in space. A crewed mission to repair or upgrade the observatory, as was done for Hubble, would not be possible, and according to NASA Associate Administrator Thomas Zurbuchen, despite best efforts, an uncrewed remote mission was found to be beyond available technology at the time Webb was designed. During the long Webb testing period, NASA officials referred to the idea of a servicing mission, but no plans were announced. Since the successful launch, NASA has stated that limited accommodations were nevertheless made to facilitate future servicing missions. These accommodations include precise guidance markers in the form of crosses on the surface of Webb for use by remote servicing missions, as well as refillable fuel tanks, removable heat protectors, and accessible attachment points.
Software
Ilana Dashevsky and Vicki Balzano write that Webb uses a modified version of JavaScript, called Nombas ScriptEase 5.00e, for its operations; it follows the ECMAScript standard and "allows for a modular design flow, where on-board scripts call lower-level scripts that are defined as functions". "The JWST science operations will be driven by ASCII (instead of binary command blocks) on-board scripts, written in a customized version of JavaScript. The script interpreter is run by the flight software, which is written in the programming language C++. The flight software operates the spacecraft and the science instruments."
Comparison with other telescopes
The desire for a large infrared space telescope traces back decades. In the United States, the Space Infrared Telescope Facility (later called the Spitzer Space Telescope) was planned while the Space Shuttle was in development, and the potential for infrared astronomy was acknowledged at that time. Unlike ground telescopes, space observatories are free from atmospheric absorption of infrared light. Space observatories opened a "new sky" for astronomers.
However, infrared telescope design poses a challenge: the telescope needs to stay extremely cold, and the longer the wavelength of infrared being observed, the colder it needs to be. Otherwise, the background heat of the device itself overwhelms the detectors, making the telescope effectively blind. This can be overcome by careful design. One method is to put the key instruments in a dewar filled with an extremely cold substance, such as liquid helium; the coolant slowly vaporizes, limiting the instrument's lifetime to anywhere from a few months to a few years at most.
It is also possible to maintain a low temperature by designing the spacecraft to enable near-infrared observations without a supply of coolant, as with the extended missions of the Spitzer Space Telescope and the Wide-field Infrared Survey Explorer, which operated at reduced capacity after coolant depletion. Another example is Hubble's Near Infrared Camera and Multi-Object Spectrometer (NICMOS) instrument, which started out using a block of nitrogen ice that depleted after a couple of years, but was then replaced during the STS-109 servicing mission with a cryocooler that worked continuously. The Webb Space Telescope is designed to cool itself without a dewar, using a combination of sunshields and radiators, with the mid-infrared instrument using an additional cryocooler.
Webb's delays and cost increases have been compared to those of its predecessor, the Hubble Space Telescope. When Hubble formally started in 1972, it had an estimated development cost of US$300 million, but by the time it was sent into orbit in 1990, the cost was about four times that. In addition, new instruments and servicing missions increased the cost to at least US$9 billion by 2006.
Development history
Background (development to 2003)
Discussions of a Hubble follow-on started in the 1980s, but serious planning began in the early 1990s. The Hi-Z telescope concept, a fully baffled 4-meter-aperture infrared telescope that would recede to an orbit at 3 astronomical units (AU), was developed between 1989 and 1994. This distant orbit would have benefited from reduced light noise from zodiacal dust. Other early plans called for a NEXUS precursor telescope mission.
Correcting the flawed optics of the Hubble Space Telescope (HST) in its first years played a significant role in the birth of Webb. In 1993, NASA conducted STS-61, the Space Shuttle mission that replaced HST's camera and installed a retrofit for its imaging spectrograph to compensate for the spherical aberration in its primary mirror.
The HST & Beyond Committee was formed in 1994 "to study possible missions and programs for optical-ultraviolet astronomy in space for the first decades of the 21st century." Emboldened by HST's success, its 1996 report explored the concept of a larger and much colder, infrared-sensitive telescope that could reach back in cosmic time to the birth of the first galaxies. This high-priority science goal was beyond the HST's capability because, as a warm telescope, it is blinded by infrared emission from its own optical system. In addition to recommendations to extend the HST mission to 2005 and to develop technologies for finding planets around other stars, NASA embraced the chief recommendation of HST & Beyond for a large, cold space telescope (radiatively cooled far below 0 °C), and began the planning process for the future Webb telescope.
Preparation for the 2000 Astronomy and Astrophysics Decadal Survey (a literature review produced by the United States National Research Council that identifies research priorities and makes recommendations for the upcoming decade) included further development of the scientific program for what became known as the Next Generation Space Telescope, as well as advances in relevant technologies by NASA. As the concept matured, the prime goals identified as "Origins" by HST & Beyond – studying the birth of galaxies in the young universe and searching for planets around other stars – became prominent.
As hoped, the NGST received the highest ranking in the 2000 Decadal Survey.
NASA administrator Dan Goldin coined the phrase "faster, better, cheaper" and opted for the next big paradigm shift in astronomy: breaking the barrier of a single mirror. That meant going from "eliminate moving parts" to "learn to live with moving parts" (i.e., segmented optics). With the goal of reducing mass density tenfold, silicon carbide with a very thin layer of glass on top was considered first, but beryllium was selected in the end.
The mid-1990s era of "faster, better, cheaper" produced the NGST concept, with an 8-meter aperture to be flown to L2, roughly estimated to cost US$500 million. In 1997, NASA worked with the Goddard Space Flight Center, Ball Aerospace & Technologies, and TRW to conduct technical requirement and cost studies of the three different concepts, and in 1999 selected Lockheed Martin and TRW for preliminary concept studies. Launch was at that time planned for 2007, but the launch date was subsequently pushed back many times (see table further down).
In 2002, the project was renamed after NASA's second administrator (1961–1968), James E. Webb (1906–1992). Webb led the agency during the Apollo program and established scientific research as a core NASA activity.
In 2003, NASA awarded TRW the US$824.8 million prime contract for Webb. The design called for a de-scoped primary mirror and a launch date of 2010. Later that year, TRW was acquired by Northrop Grumman in a hostile bid and became Northrop Grumman Space Technology.
Early development and replanning (2003–2007)
Development was managed by NASA's Goddard Space Flight Center in Greenbelt, Maryland, with John C. Mather as its project scientist. The primary contractor was Northrop Grumman Aerospace Systems, responsible for developing and building the spacecraft element, which included the satellite bus, the sunshield, the Deployable Tower Assembly (DTA) connecting the Optical Telescope Element (OTE) to the spacecraft bus, and the Mid Boom Assembly (MBA) that helps deploy the large sunshield on orbit. Ball Aerospace & Technologies was subcontracted to develop and build the OTE itself, as well as the Integrated Science Instrument Module (ISIM).
Cost growth revealed in spring 2005 led to an August 2005 re-planning. The primary technical outcomes of the re-planning were significant changes in the integration and test plans, a 22-month launch delay (from 2011 to 2013), and elimination of system-level testing for observatory modes at wavelengths shorter than 1.7 μm. Other major features of the observatory were unchanged. Following the re-planning, the project was independently reviewed in April 2006.
In the 2005 re-plan, the life-cycle cost of the project was estimated at US$4.5 billion. This comprised approximately US$3.5 billion for design, development, launch, and commissioning, and approximately US$1.0 billion for ten years of operations. ESA agreed in 2004 to contribute about €300 million, including the launch. The CSA pledged CA$39 million in 2007 and in 2012 delivered its contributions in equipment to point the telescope and detect atmospheric conditions on distant planets.
Detailed design and construction (2007–2021)
In January 2007, nine of the ten technology development items in the project successfully passed a Non-Advocate Review. These technologies were deemed sufficiently mature to retire significant risks in the project. The remaining technology development item (the MIRI cryocooler) completed its technology maturation milestone in April 2007. This technology review represented the beginning step in the process that ultimately moved the project into its detailed design phase (Phase C). By May 2007, costs were still on target. In March 2008, the project successfully completed its Preliminary Design Review (PDR). In April 2008, the project passed the Non-Advocate Review. Other passed reviews include the Integrated Science Instrument Module review in March 2009, the Optical Telescope Element review completed in October 2009, and the Sunshield review completed in January 2010.
In April 2010, the telescope passed the technical portion of its Mission Critical Design Review (MCDR). Passing the MCDR signified that the integrated observatory could meet all science and engineering requirements for its mission. The MCDR encompassed all previous design reviews. The project schedule underwent review during the months following the MCDR, in a process called the Independent Comprehensive Review Panel, which led to a re-plan of the mission aiming for a 2015 launch, but possibly as late as 2018. By 2010, cost over-runs were impacting other projects, though Webb itself remained on schedule.
By 2011, the Webb project was in the final design and fabrication phase (Phase C).
Assembly of the hexagonal segments of the primary mirror, which was done via robotic arm, began in November 2015 and was completed on 3 February 2016. The secondary mirror was installed on 3 March 2016. Final construction of the Webb telescope was completed in November 2016, after which extensive testing procedures began.
In March 2018, NASA delayed Webb's launch an additional two years to May 2020 after the telescope's sunshield ripped during a practice deployment and the sunshield's cables did not sufficiently tighten. In June 2018, NASA delayed the launch by an additional 10 months to March 2021, based on the assessment of the independent review board convened after the failed March 2018 test deployment. The review identified that Webb's launch and deployment had 344 potential single-point failures – tasks that had no alternative or means of recovery if unsuccessful, and therefore had to succeed for the telescope to work. In August 2019, the mechanical integration of the telescope was completed, a milestone originally scheduled for 2007, 12 years earlier.
After construction was completed, Webb underwent final tests at Northrop Grumman's historic Space Park in Redondo Beach, California. A ship carrying the telescope left California on 26 September 2021, passed through the Panama Canal, and arrived in French Guiana on 12 October 2021.
Cost and schedule issues
NASA's lifetime cost for the project is expected to be US$9.7 billion, of which US$8.8 billion was spent on spacecraft design and development and US$861 million is planned to support five years of mission operations. Representatives from ESA and CSA stated their project contributions amount to approximately €700 million and CA$200 million, respectively.
A study in 1984 by the Space Science Board estimated that to build a next-generation infrared observatory in orbit would cost US$4 billion (US$7B in 2006 dollars, or $10B in 2020 dollars). While this came close to the final cost of Webb, the first NASA design considered in the late 1990s was more modest, aiming for a $1 billion price tag over 10 years of construction. Over time this design expanded, added funding for contingencies, and had scheduling delays.
By 2008, when the project entered preliminary design review and was formally confirmed for construction, over US$1 billion had already been spent on developing the telescope, and the total budget was estimated at about US$5 billion. In summer 2010, the mission passed its Critical Design Review (CDR) with excellent grades on all technical matters, but schedule and cost slips at that time prompted Maryland U.S. Senator Barbara Mikulski to call for an external review of the project. The Independent Comprehensive Review Panel (ICRP), chaired by J. Casani (JPL), found that the earliest possible launch date was in late 2015 at an extra cost of US$1.5 billion (for a total of US$6.5 billion). They also pointed out that this would have required extra funding in FY2011 and FY2012 and that any later launch date would lead to a higher total cost.
On 6 July 2011, the United States House of Representatives' appropriations committee on Commerce, Justice, and Science moved to cancel the James Webb project by proposing an FY2012 budget that removed US$1.9 billion from NASA's overall budget, of which roughly one quarter was for Webb. US$3 billion had been spent and 75% of its hardware was in production. This budget proposal was approved by subcommittee vote the following day. The committee charged that the project was "billions of dollars over budget and plagued by poor management". In response, the American Astronomical Society issued a statement in support of Webb, as did Senator Mikulski. A number of editorials supporting Webb appeared in the international press during 2011 as well. In November 2011, Congress reversed plans to cancel Webb and instead capped additional funding to complete the project at US$8 billion.
While similar issues had affected other major NASA projects such as the Hubble telescope, some scientists expressed concerns about growing costs and schedule delays for the Webb telescope, worrying that its budget might be competing with those of other space science programs. A 2010 Nature article described Webb as "the telescope that ate astronomy". NASA continued to defend the budget and timeline of the program to Congress.
In 2018, Gregory L. Robinson was appointed as the new director of the Webb program. Robinson was credited with raising the program's schedule efficiency (how many measures were completed on time) from 50% to 95%. For his role in improving the performance of the Webb program, Robinson's supervisor, Thomas Zurbuchen, called him "the most effective leader of a mission I have ever seen in the history of NASA." In July 2022, after Webb's commissioning process was complete and it began transmitting its first data, Robinson retired following a 33-year career at NASA.
On 27 March 2018, NASA pushed back the launch to May 2020 or later, with a final cost estimate to come after a new launch window was determined with the ESA. In 2019, its mission cost cap was increased by US$800 million. After launch windows were paused in 2020 due to the COVID-19 pandemic, Webb was launched at the end of 2021, with a total cost of just under US$10 billion.
No single area drove the cost. For future large telescopes, there are five major areas critical to controlling overall cost:
System complexity
Critical path and overhead
Verification challenges
Programmatic constraints
Early integration and test considerations
Partnership
NASA, ESA and CSA have collaborated on the telescope since 1996. ESA's participation in construction and launch was approved by its members in 2003 and an agreement was signed between ESA and NASA in 2007. In exchange for full partnership, representation and access to the observatory for its astronomers, ESA is providing the NIRSpec instrument, the Optical Bench Assembly of the MIRI instrument, an Ariane 5 ECA launcher, and manpower to support operations. The CSA provided the Fine Guidance Sensor and the Near-Infrared Imager Slitless Spectrograph and manpower to support operations.
Several thousand scientists, engineers, and technicians from 15 countries have contributed to the build, test, and integration of Webb. A total of 258 companies, government agencies, and academic institutions participated in the pre-launch project: 142 from the United States, 104 from 12 European countries (including 21 from the U.K., 16 from France, 12 from Germany, and 7 international), and 12 from Canada. Other countries, such as Australia, were involved as NASA partners in post-launch operation.
Naming concerns
In 2002, NASA administrator (2001–2004) Sean O'Keefe made the decision to name the telescope after James E. Webb, the administrator of NASA from 1961 to 1968 during the Mercury, Gemini, and much of the Apollo programs.
In 2015, concerns were raised around Webb's possible role in the lavender scare, the mid-20th-century persecution by the U.S. government targeting homosexuals in federal employment. In 2022, NASA released a report of an investigation, based on an examination of more than 50,000 documents. The report found "no available evidence directly links Webb to any actions or follow-up related to the firing of individuals for their sexual orientation", either in his time in the State Department or at NASA.
Mission goals
The James Webb Space Telescope has four key goals:
to search for light from the first stars and galaxies that formed in the universe after the Big Bang
to study galaxy formation and evolution
to understand star formation and planet formation
to study planetary systems and the origins of life
These goals can be accomplished more effectively by observation in near-infrared light rather than light in the visible part of the spectrum. For this reason, Webb's instruments will not measure visible or ultraviolet light like the Hubble Space Telescope, but will have a much greater capacity to perform infrared astronomy. Webb will be sensitive to a range of wavelengths from 0.6 to 28 μm (corresponding respectively to orange light and deep infrared radiation emitted by objects at about 100 K).
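The correspondence between wavelength and temperature follows Wien's displacement law, λ_peak = b/T, which gives the wavelength at which a blackbody of temperature T emits most strongly. A quick illustrative check (the helper name is mine):

```python
WIEN_B = 2.897771955e-3  # Wien displacement constant, m*K

def blackbody_peak_temperature_K(lam_um):
    """Temperature whose blackbody emission peaks at the given wavelength (um)."""
    return WIEN_B / (lam_um * 1e-6)

# Webb's long-wavelength cutoff of 28 um corresponds to objects near:
t_cold = blackbody_peak_temperature_K(28)    # ~103 K
# and its 0.6 um short-wavelength cutoff to surfaces near:
t_hot = blackbody_peak_temperature_K(0.6)    # ~4800 K, an orange-ish star
```

This is why the 0.6–28 μm band spans everything from cool stellar photospheres down to frigid dust, planets, and distant redshifted galaxies.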
Webb may be used to gather information on the dimming light of the star KIC 8462852, whose abnormal light-curve properties were discovered in 2015.
Additionally, it will be able to tell if an exoplanet has methane in its atmosphere, allowing astronomers to determine whether or not the methane is a biosignature.
Orbit design
Webb orbits the Sun near the second Lagrange point (L2) of the Sun–Earth system, about 1.5 million kilometers farther from the Sun than Earth's orbit and about four times farther from Earth than the Moon. Normally an object circling the Sun farther out than Earth would take longer than one year to complete its orbit. But near the L2 point, the combined gravitational pull of the Earth and the Sun allows a spacecraft to orbit the Sun in the same time that it takes the Earth. Staying close to Earth allows data rates to be much faster for a given size of antenna.
The telescope circles about the Sun–Earth L2 point in a halo orbit, which is inclined with respect to the ecliptic, has a radius varying between about 250,000 km and 832,000 km, and takes about half a year to complete. Since L2 is just an equilibrium point with no gravitational pull, a halo orbit is not an orbit in the usual sense: the spacecraft is actually in orbit around the Sun, and the halo orbit can be thought of as controlled drifting to remain in the vicinity of the L2 point. This requires some station-keeping: around 2.5 m/s per year from the total ∆v budget of 93 m/s. Two sets of thrusters constitute the observatory's propulsion system. Because the thrusters are located solely on the Sun-facing side of the observatory, all station-keeping operations are designed to slightly undershoot the required amount of thrust in order to avoid pushing Webb beyond the semi-stable L2 point, a situation which would be unrecoverable. Randy Kimble, the Integration and Test Project Scientist for the JWST, compared the precise station-keeping of Webb to "Sisyphus [...] rolling this rock up the gentle slope near the top of the hill – we never want it to roll over the crest and get away from him".
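The location of the L2 point itself can be found numerically: it is the distance beyond Earth at which the combined solar and terrestrial gravity supplies exactly the centripetal acceleration needed to co-rotate with Earth around the Sun. The following sketch solves that condition by bisection, using standard gravitational-parameter values in an idealized circular-orbit model (my own illustration, not flight-dynamics code):

```python
def l2_distance_m():
    """Distance from Earth to the Sun-Earth L2 point, in meters."""
    GM_sun = 1.32712440018e20   # heliocentric gravitational parameter, m^3/s^2
    GM_earth = 3.986004418e14   # geocentric gravitational parameter, m^3/s^2
    R = 1.495978707e11          # mean Sun-Earth distance, m

    omega2 = (GM_sun + GM_earth) / R**3  # square of Earth's mean motion

    def imbalance(r):
        # Net inward pull minus required centripetal acceleration at
        # distance r beyond Earth, on the Sun-Earth line; zero at L2.
        return GM_sun / (R + r)**2 + GM_earth / r**2 - omega2 * (R + r)

    lo, hi = 1e8, 1e10          # bracketing guesses, m
    for _ in range(100):        # bisection: imbalance is positive at lo
        mid = 0.5 * (lo + hi)
        if imbalance(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Running this yields roughly 1.5 million kilometers, consistent with the distance quoted for Webb's orbit in the section above.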
Infrared astronomy
Webb is the formal successor to the Hubble Space Telescope (HST), and since its primary emphasis is on infrared astronomy, it is also a successor to the Spitzer Space Telescope. Webb will far surpass both those telescopes, being able to see many more and much older stars and galaxies. Observing in the infrared spectrum is a key technique for achieving this, because of cosmological redshift, and because it better penetrates obscuring dust and gas. This allows observation of dimmer, cooler objects. Since water vapor and carbon dioxide in the Earth's atmosphere strongly absorb most infrared light, ground-based infrared astronomy is limited to narrow wavelength ranges where the atmosphere absorbs less strongly. Additionally, the atmosphere itself radiates in the infrared spectrum, often overwhelming light from the object being observed. This makes a space telescope preferable for infrared observation.
The more distant an object is, the younger it appears; its light has taken longer to reach human observers. Because the universe is expanding, as the light travels it becomes red-shifted, and objects at extreme distances are therefore easier to see if viewed in the infrared. Webb's infrared capabilities are expected to let it see back in time to the first galaxies forming just a few hundred million years after the Big Bang.
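The effect of redshift on observed wavelength is simple to quantify: λ_observed = λ_rest × (1 + z). The helper below is a hypothetical illustration, not mission software:

```python
def observed_wavelength_um(rest_um, z):
    """Cosmological redshift stretches an emitted wavelength by (1 + z)."""
    return rest_um * (1.0 + z)

# Hydrogen's Lyman-alpha line (0.1216 um, far-ultraviolet) emitted by a
# galaxy at redshift z = 10 arrives stretched into Webb's near-infrared band:
lam_obs = observed_wavelength_um(0.1216, 10)   # ~1.34 um
```

Light that left the earliest galaxies as ultraviolet or visible radiation thus arrives today squarely in the band Webb's instruments are designed to detect.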
Infrared radiation can pass more freely through regions of cosmic dust that scatter visible light. Observations in infrared allow the study of objects and regions of space which would be obscured by gas and dust in the visible spectrum, such as the molecular clouds where stars are born, the circumstellar disks that give rise to planets, and the cores of active galaxies.
Relatively cool objects (temperatures less than several thousand degrees) emit their radiation primarily in the infrared, as described by Planck's law. As a result, most objects that are cooler than stars are better studied in the infrared. This includes the clouds of the interstellar medium, brown dwarfs, planets both in our own and other solar systems, comets, and Kuiper belt objects that will be observed with the Mid-Infrared Instrument (MIRI).
Among the infrared astronomy missions that influenced Webb's development were Spitzer and the Wilkinson Microwave Anisotropy Probe (WMAP). Spitzer showed the importance of the mid-infrared, which is helpful for tasks such as observing dust disks around stars. The WMAP probe also showed the universe was "lit up" at redshift 17, further underscoring the importance of the mid-infrared. Both missions were launched in the early 2000s, in time to influence Webb's development.
Ground support and operations
The Space Telescope Science Institute (STScI), in Baltimore, Maryland, on the Homewood Campus of Johns Hopkins University, was selected in 2003 as the Science and Operations Center (S&OC) for Webb with an initial budget of US$162.2 million intended to support operations through the first year after launch. In this capacity, STScI was to be responsible for the scientific operation of the telescope and delivery of data products to the astronomical community. Data was to be transmitted from Webb to the ground via the NASA Deep Space Network, processed and calibrated at STScI, and then distributed online to astronomers worldwide. Similar to how Hubble is operated, anyone, anywhere in the world, will be allowed to submit proposals for observations. Each year several committees of astronomers will peer review the submitted proposals to select the projects to observe in the coming year. The authors of the chosen proposals will typically have one year of private access to the new observations, after which the data will become publicly available for download by anyone from the online archive at STScI.
The bandwidth and digital throughput of the satellite is designed to operate at 458 gigabits of data per day for the length of the mission (equivalent to a sustained rate of 5.42 Mbps). Most of the data processing on the telescope is done by conventional single-board computers. The digitization of the analog data from the instruments is performed by the custom SIDECAR ASIC (System for Image Digitization, Enhancement, Control And Retrieval Application Specific Integrated Circuit). NASA stated that the SIDECAR ASIC includes all the functions of a 9.1 kg instrument box in a 3 cm package and consumes only 11 milliwatts of power. Since this conversion must be done close to the detectors, on the cold side of the telescope, the low power dissipation is crucial for maintaining the low temperature required for optimal operation of Webb.
The telescope is equipped with a solid-state drive (SSD) with a capacity of 68 GB, used as temporary storage for data collected from its scientific instruments. By the end of the 10-year mission, the usable capacity of the drive is expected to decrease to 60 GB due to the effects of radiation and read/write operations.
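Taken together, the stated daily throughput and recorder capacity imply how long Webb can observe between downlinks. A back-of-the-envelope check (my arithmetic on the figures above, not an official NASA number):

```python
DAILY_BITS = 458e9   # design data volume, bits per day (stated above)
SSD_GB = 68          # onboard solid-state recorder capacity, GB (stated above)

daily_gb = DAILY_BITS / 8 / 1e9        # 57.25 GB recorded per day
buffer_hours = SSD_GB / daily_gb * 24  # ~28.5 hours before the recorder fills
```

At the full design rate the 68 GB recorder therefore buffers a bit more than one day of science data, which is why regular Deep Space Network contacts are part of routine operations.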
Micrometeoroid strike
Between 23 and 25 May 2022, the C3 mirror segment suffered a micrometeoroid strike from a large dust-grain-sized particle, the fifth and largest strike since launch, reported on 8 June 2022; engineers compensated for the strike using a mirror actuator. Despite the impact, a NASA characterization report stated that "all JWST observing modes have been reviewed and confirmed to be ready for science use" as of 10 July 2022.
From launch through commissioning
Launch
The launch (designated Ariane flight VA256) took place as scheduled at 12:20 UTC on 25 December 2021 on an Ariane 5 rocket that lifted off from the Guiana Space Centre in French Guiana. The telescope was confirmed to be receiving power, starting a two-week deployment phase of its parts and traveling to its target destination. The telescope was released from the upper stage 27 minutes 7 seconds after launch, beginning a 30-day adjustment to place the telescope in a Lissajous orbit around the L2 Lagrange point.
The telescope was launched with slightly less speed than needed to reach its final orbit, and slowed down as it travelled away from Earth, in order to reach L2 with only the velocity needed to enter its orbit there. The telescope reached L2 on 24 January 2022. The flight included three planned course corrections to adjust its speed and direction. This is because the observatory could recover from underthrust (going too slowly), but could not recover from overthrust (going too fast) – to protect highly temperature-sensitive instruments, the sunshield must remain between telescope and Sun, so the spacecraft could not turn around or use its thrusters to slow down.
An L2 orbit is unstable, so JWST needs to use propellant to maintain its halo orbit around L2 (known as station-keeping) and prevent the telescope from drifting away from its orbital position. It was designed to carry enough propellant for 10 years, but the precision of the Ariane 5 launch and of the first midcourse correction was credited with saving enough onboard fuel that JWST may be able to maintain its orbit for around 20 years instead. Space.com called the launch "flawless".
Transit and structural deployment
Webb was released from the rocket's upper stage 27 minutes after a flawless launch. Starting 31 minutes after launch, and continuing for about 13 days, Webb deployed its solar array, antenna, sunshield, and mirrors. Nearly all deployment actions were commanded by the Space Telescope Science Institute in Baltimore, Maryland, except for two early automatic steps: solar panel unfolding and communication antenna deployment. The mission was designed to give ground controllers flexibility to change or modify the deployment sequence in case of problems.
At 7:50 p.m. EST on 25 December 2021, about 12 hours after launch, the telescope's pair of primary rockets began firing for 65 minutes to make the first of three planned mid-course corrections. On day two, the high-gain communication antenna deployed automatically.
On 27 December 2021, at 60 hours after launch, Webb's rockets fired for nine minutes and 27 seconds to make the second of three mid-course corrections for the telescope to arrive at its L2 destination. On 28 December 2021, three days after launch, mission controllers began the multi-day deployment of Webb's all-important sunshield. On 30 December 2021, controllers successfully completed two more steps in unpacking the observatory. First, commands deployed the aft "momentum flap", a device that provides balance against solar pressure on the sunshield, saving fuel by reducing the need for thruster firing to maintain Webb's orientation.
On 31 December 2021, the ground team extended the two telescoping "mid booms" from the left and right sides of the observatory. The left side deployed in 3 hours and 19 minutes; the right side took 3 hours and 42 minutes. Commands to separate and tension the membranes followed between 3 and 4 January and were successful. On 5 January 2022, mission control successfully deployed the telescope's secondary mirror, which locked itself into place to a tolerance of about one and a half millimeters.
The last step of structural deployment was to unfold the wings of the primary mirror. Each wing consists of three primary mirror segments and had to be folded so the space telescope would fit within the fairing of the Ariane rocket for launch. On 7 January 2022, NASA deployed and locked in place the port-side wing, and on 8 January, the starboard-side wing. This successfully completed the structural deployment of the observatory.
On 24 January 2022, at 2:00 p.m. Eastern Standard Time, nearly a month after launch, a third and final course correction took place, inserting Webb into its planned halo orbit around the Sun–Earth L2 point.
The MIRI instrument has four observing modes – imaging, low-resolution spectroscopy, medium-resolution spectroscopy and coronagraphic imaging. "On Aug. 24, a mechanism that supports medium-resolution spectroscopy (MRS), exhibited what appears to be increased friction during setup for a science observation. This mechanism is a grating wheel that allows scientists to select between short, medium, and longer wavelengths when making observations using the MRS mode," said NASA in a press statement.
Commissioning and testing
On 12 January 2022, while still in transit, mirror alignment began. The primary mirror segments and secondary mirror were moved away from their protective launch positions. This took about 10 days, because the 132 actuator motors are designed to fine-tune the mirror positions at microscopic accuracy (10 nanometer increments) and must each move over 1.2 million increments (12.5 mm) during initial alignment.
Mirror alignment requires each of the 18 mirror segments, and the secondary mirror, to be positioned to within 50 nanometers. NASA compares the required accuracy by analogy: "If the Webb primary mirror were the size of the United States, each [mirror] segment would be the size of Texas, and the team would need to line the height of those Texas-sized segments up with each other to an accuracy of about 1.5 inches".
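As a sanity check, the actuator travel quoted above and NASA's scale analogy can be verified with simple arithmetic. The 6.5 m primary-mirror diameter and the ~4,500 km width used for the contiguous United States are assumed round figures, not stated in the text:

```python
# Actuator travel: 12.5 mm covered in 10-nanometre increments
# (both figures from the text above).
increments = 12.5e-3 / 10e-9
print(f"increments per actuator: {increments:,.0f}")  # 1,250,000

# NASA's analogy: scale the 6.5 m mirror (assumed diameter) up to the
# width of the contiguous US (~4,500 km, an assumed round figure) and
# see what the 50 nm per-segment tolerance becomes.
scale = 4.5e6 / 6.5                      # ~692,000x enlargement
tolerance_inches = 50e-9 * scale / 0.0254
print(f"50 nm at US scale: {tolerance_inches:.1f} inches")  # ~1.4 inches
```

Both numbers land close to the quoted "over 1.2 million increments" and "about 1.5 inches", so the analogy holds up.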
Mirror alignment was a complex operation split into seven phases that had been repeatedly rehearsed using a 1:6 scale model of the telescope. Once the mirrors reached their operating temperature, NIRCam targeted the 6th-magnitude star HD 84406 in Ursa Major. To do this, NIRCam took 1,560 images of the sky and used these wide-ranging images to determine where in the sky each segment of the main mirror initially pointed. At first, the individual primary mirror segments were greatly misaligned, so the image contained 18 separate, blurry images of the star field, each containing an image of the target star. The 18 images of HD 84406 were matched to their respective mirror segments, and the 18 segments were brought into approximate alignment centered on the star ("Segment Image Identification"). Each segment was then individually corrected for its major focusing errors using a technique called phase retrieval, resulting in 18 separate good-quality images from the 18 mirror segments ("Segment Alignment"). The 18 images were then moved so they precisely overlapped to create a single image ("Image Stacking").
With the mirrors positioned for almost correct images, they had to be fine-tuned to their operational accuracy of 50 nanometers, less than one wavelength of the light to be detected. A technique called dispersed fringe sensing was used to compare images from 20 pairings of mirrors, allowing most of the errors to be corrected ("Coarse Phasing"). Light defocus was then introduced to each segment's image, allowing detection and correction of almost all remaining errors ("Fine Phasing"). These two processes were repeated three times, and Fine Phasing is routinely checked throughout the telescope's operation. After three rounds of Coarse and Fine Phasing, the telescope was well aligned at one place in the NIRCam field of view. Measurements were then made at various points in the captured image, across all instruments, and corrections calculated from the detected variations in intensity, giving a well-aligned outcome across all instruments ("Telescope Alignment Over Instrument Fields of View"). Finally, a last round of Fine Phasing and checks of image quality on all instruments was performed to ensure that any small residual errors from the previous steps were corrected ("Iterate Alignment for Final Correction"). The telescope's mirror segments were then aligned and able to capture precisely focused images.
In preparation for alignment, NASA announced at 19:28 UTC on 3 February 2022 that NIRCam had detected the telescope's first photons (although not yet complete images). On 11 February 2022, NASA announced the telescope had almost completed phase 1 of alignment, with every segment of its primary mirror having located and imaged the target star HD 84406, and all segments brought into approximate alignment. Phase 1 alignment was completed on 18 February 2022, and a week later, phases 2 and 3 were also completed. This meant the 18 segments were working in unison; however, until all seven phases were complete, the segments still acted as 18 smaller telescopes rather than as one larger one. At the same time as the primary mirror was being commissioned, hundreds of other instrument commissioning and calibration tasks were also ongoing.
Allocation of observation time
Webb observing time is allocated through a General Observers (GO) program, a Guaranteed Time Observations (GTO) program, and a Director's Discretionary Early Release Science (DD-ERS) program. The GTO program provides guaranteed observing time for scientists who developed hardware and software components for the observatory. The GO program provides all astronomers the opportunity to apply for observing time and will represent the bulk of the observing time. GO programs are selected through peer review by a Time Allocation Committee (TAC), similar to the proposal review process used for the Hubble Space Telescope.
Early Release Science program
In November 2017, the Space Telescope Science Institute announced the selection of 13 Director's Discretionary Early Release Science (DD-ERS) programs, chosen through a competitive proposal process. The observations for these programs – Early Release Observations (ERO) – were to be obtained during the first five months of Webb science operations after the end of the commissioning period. A total of 460 hours of observing time was awarded to these 13 programs, which span science topics including the Solar System, exoplanets, stars and star formation, nearby and distant galaxies, gravitational lenses, and quasars. These 13 ERS programs were to use a total of 242.8 hours of observing time on the telescope (not including Webb observing overheads and slew time).
General Observer Program
For GO Cycle 1 there were 6,000 hours of observation time available to allocate, and 1,173 proposals were submitted requesting a total of 24,500 hours of observation time. Selection of Cycle 1 GO programs was announced on 30 March 2021, with 266 programs approved. These included 13 large programs and treasury programs producing data for public access. The Cycle 2 GO program was announced on 10 May 2023. Webb science observations are nominally scheduled in weekly increments, and the observation plan for each week is published on Mondays by the Space Telescope Science Institute. In Cycle 4 the telescope showed its continued popularity in the astronomy community by garnering 2,377 proposals for 78,000 hours of observing time, nine times the available amount.
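The figures quoted above translate into simple oversubscription ratios; note that the Cycle 4 availability is inferred here from the "nine times" statement rather than quoted directly:

```python
# Cycle 1: 24,500 hours requested vs 6,000 hours available (from the text).
cycle1 = 24_500 / 6_000
print(f"Cycle 1 oversubscription: {cycle1:.1f}x")  # ~4.1x

# Cycle 4: 78,000 hours requested, stated as nine times the available
# time, implying roughly 78,000 / 9 hours on offer (inferred, not a
# figure given in the text).
cycle4_available = 78_000 / 9
print(f"Implied Cycle 4 availability: {cycle4_available:,.0f} hours")
```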
Scientific results
The JWST completed its commissioning and was ready to begin full scientific operations on 11 July 2022. With some exceptions, experiment data are kept private for one year for the exclusive use of the scientists running that particular experiment, after which the raw data are released to the public. JWST observations have substantially advanced understanding of exoplanets, the first billion years of the universe, and many other astrophysical and cosmological phenomena.
First full-color images
The first full-color images and spectroscopic data were released on 12 July 2022, which also marked the official beginning of Webb's general science operations. U.S. President Joe Biden revealed the first image, Webb's First Deep Field, on 11 July 2022. Additional releases around this time include:
Carina Nebula: a young star-forming region called NGC 3324, about 8,500 light-years from Earth, described by NASA as the "Cosmic Cliffs".
WASP-96b: an analysis of the atmosphere of a gas giant planet orbiting a distant star 1,120 light-years from Earth, with evidence of water.
Southern Ring Nebula: clouds of gas and dust expelled by a dying star 2,500 light-years from Earth.
Stephan's Quintet: a visual display of five galaxies, with colliding gas and dust clouds creating new stars; the four central galaxies are 290 million light-years from Earth.
SMACS J0723.3-7327: a galaxy cluster at redshift 0.39, with distant background galaxies whose images are distorted and magnified by gravitational lensing from the cluster. This image has been called Webb's First Deep Field. It was later discovered that in this picture the JWST had also revealed three ancient galaxies that existed shortly after the Big Bang; its images of these distant galaxies are views of the universe 13.1 billion years ago.
On 14 July 2022, NASA presented images of Jupiter and related areas by the JWST, including infrared views.
In a preprint released around the same time, NASA, ESA and CSA scientists stated that "almost across the board, the science performance of JWST is better than expected". The document described a series of observations during the commissioning, when the instruments captured spectra of transiting exoplanets with a precision better than 1000 ppm per data point, and tracked moving objects with speeds up to 67 milliarcseconds/second, more than twice as fast as the requirement. It also obtained the spectra of hundreds of stars simultaneously in a dense field towards the Milky Way's Galactic Center. Other targets included:
Moving targets: Jupiter's rings and moons (particularly Europa, Thebe and Metis), and the asteroids 2516 Roman, 118 Peitho, 6481 Tenzing, 1773 Rumpelstilz, 216 Kleopatra, 2035 Stearns, and 4015 Wilson-Harrington
NIRCam grism time-series, NIRISS SOSS and NIRSpec BOTS mode: the Jupiter-sized planet HAT-P-14b
NIRISS aperture masking interferometry (AMI): a clear detection of the very low-mass companion star AB Doradus C, separated from the primary by only 0.3 arcseconds. This observation was the first demonstration of AMI in space.
MIRI low-resolution spectroscopy (LRS): a hot super-Earth planet L 168-9 b (TOI-134) around a bright M-dwarf star (red dwarf star)
Bright early galaxies
Within two weeks of the first Webb images, several preprint papers described a wide range of high-redshift and very luminous (presumably large) galaxies believed to date from 235 million years (z=16.7) to 280 million years after the Big Bang, far earlier than previously known. On 17 August 2022, NASA released a large mosaic image, assembled from 690 individual NIRCam frames, of numerous very early galaxies. Some early galaxies observed by Webb, such as CEERS-93316 with its estimated redshift of approximately z=16.7 (corresponding to 235.8 million years after the Big Bang), remain high-redshift galaxy candidates pending spectroscopic confirmation. In September 2022, primordial black holes were proposed as an explanation for these unexpectedly large and early galaxies. In May 2024, the JWST identified the most distant known galaxy, JADES-GS-z14-0, seen just 290 million years after the Big Bang, corresponding to a redshift of 14.32. Part of the JWST Advanced Deep Extragalactic Survey (JADES), this discovery highlights a galaxy significantly more luminous and massive than expected for such an early period. Detailed analysis using JWST's NIRSpec and MIRI instruments revealed the galaxy's remarkable properties, including its significant size and dust content, challenging current models of early galaxy formation.
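The correspondence between the redshifts and the "million years after the Big Bang" figures above can be reproduced by numerically integrating the Friedmann equation for a flat ΛCDM universe. The Planck-like parameter values below (H0 = 67.4 km/s/Mpc, Ωm = 0.315) are assumptions for illustration, not figures from the text, so the results agree with the quoted ages only to within a few million years:

```python
import math

def age_at_redshift_gyr(z, h0=67.4, omega_m=0.315):
    """Age of a flat LCDM universe at redshift z, in Gyr.

    Integrates t = (1/H0) * integral_0^a da' / (a' * E(a')) with a
    midpoint rule; radiation is neglected (a <1% effect even at z ~ 17).
    Parameter values are assumed Planck-like defaults.
    """
    omega_l = 1.0 - omega_m
    hubble_time_gyr = 977.8 / h0      # 1/H0 in Gyr for H0 in km/s/Mpc
    a_end = 1.0 / (1.0 + z)           # scale factor at redshift z
    n = 200_000
    da = a_end / n
    total = 0.0
    for i in range(n):
        a = (i + 0.5) * da
        e = math.sqrt(omega_m / a**3 + omega_l)
        total += da / (a * e)
    return hubble_time_gyr * total

# CEERS-93316's candidate redshift, and JADES-GS-z14-0:
print(f"z=16.7  -> {age_at_redshift_gyr(16.7) * 1000:.0f} Myr after the Big Bang")
print(f"z=14.32 -> {age_at_redshift_gyr(14.32) * 1000:.0f} Myr after the Big Bang")
```

With these parameters the two ages come out near 230 Myr and 290 Myr, consistent with the 235.8 and 290 million-year figures quoted above.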
Subsequent noteworthy observations and interpretations
In June 2023, the detection of organic molecules 12 billion light-years away, in a galaxy called SPT0418-47, was announced on the basis of Webb telescope observations.
On 12 July 2023, NASA celebrated the first year of operations with the release of Webb's image of a small star-forming region in the Rho Ophiuchi cloud complex, 390 light years away.
In September 2023, two astrophysicists questioned the accepted Standard Model of Cosmology, based on the latest JWST studies.
In December 2023, NASA released Christmas holiday-related images by JWST, including the Christmas Tree Galaxy Cluster and others.
In May 2024, the JWST detected the farthest known black hole merger. Occurring within the galaxy system ZS7, 740 million years after the Big Bang, this discovery suggests a fast growth rate for black holes through mergers, even in the young Universe.
Gallery
Plant nursery

A nursery is a place where plants are propagated and grown to a desired size. Mostly the plants concerned are for gardening, forestry, or conservation biology, rather than agriculture. They include retail nurseries, which sell to the general public; wholesale nurseries, which sell only to businesses such as other nurseries and commercial gardeners; and private nurseries, which supply the needs of institutions or private estates. Some also work in plant breeding.
A "nurseryman" is a person who owns or works in a nursery.
Some nurseries specialize in certain areas, which may include: propagation and the selling of small or bare root plants to other nurseries; growing out plant materials to a saleable size, or retail sales. Nurseries may also specialize in one type of plant, e.g., groundcovers, shade plants, or rock garden plants. Some produce bulk stock, whether seedlings or grafted trees, of particular varieties for purposes such as fruit trees for orchards or timber trees for forestry. Some producers produce stock seasonally, ready in the spring for export to colder regions where propagation could not have been started so early or to regions where seasonal pests prevent profitable growing early in the season.
Nurseries
There are a number of different types of nurseries, broadly grouped as wholesale or retail nurseries, with some overlap depending on the specific operation. Wholesale nurseries produce plants in large quantities which are sold to retail nurseries.
Wholesale nurseries may be small operations that produce a specific type of plant using a small area of land, or very large operations covering many acres. They propagate plant material or buy plants from other nurseries, which may include rooted or unrooted cuttings, small rooted plants called plugs, or field-grown bare-root plants, which are planted and grown to a desired size. Some wholesale nurseries produce plants on contract for customers who place an order for a specific number and size of plants, while others produce a wide range of plants offered to other nurseries and landscapers on a first-come, first-served basis. Retail nurseries sell plants ready to be placed in the landscape or used in homes and businesses.
Methods
Propagation nurseries produce new plants from seeds, cuttings, tissue culture, grafting, or division. The plants are then grown to a salable size and either sold to other nurseries, which may continue to grow them in larger containers or field-grow them to the desired size, or, if large enough for retail sale, sold directly to retail nurseries or garden centers (which rarely propagate their own plants).
Nurseries may produce plants for reforestation, zoos, parks, and cities. Tree nurseries in the U.S. produce around 1.3 billion seedlings per year for reforestation projects.
Nurseries grow plants in open fields, on container fields, and in tunnels or greenhouses. In open fields, nurseries grow decorative trees, shrubs, and herbaceous perennials. On container fields, nurseries grow small trees, shrubs, and herbaceous plants, usually destined for sale in garden centers. Tunnels and greenhouses provide controlled ventilation and sunlight. Plants may be grown from seed, but the most common method is planting cuttings, which can be taken from shoot tips or roots.
Conditioning
With the objective of producing planting stock better able to withstand stresses after outplanting, various nursery treatments have been attempted, developed, and applied to nursery stock. Buse and Day (1989), for instance, studied the effect of conditioning of white spruce and black spruce transplants on their morphology, physiology, and subsequent performance after outplanting. The treatments applied were root pruning, wrenching, and fertilization with potassium at 375 kg/ha. Root pruning and wrenching modified stock in the nursery by decreasing height, root collar diameter, shoot:root ratio, and bud size, but did not improve survival or growth after planting. Fertilization reduced root growth in black spruce but not in white spruce.
Important factors for nursery production
For a nursery to produce healthy crops, they must manage many factors, a few of them being irrigation, landscape topography, and soil conditions of the site.
Irrigation
Plants need water to grow, and water needs vary with plant species, weather, and soil. In Ontario, for example, irrigation water is used most in late spring and summer, when plants need it most; based on Ontario's climate patterns, this is also when rainfall is lowest. Some nurseries create water sources by building a dam, diverting a watercourse, or constructing man-made ponds. The water source and pumps should be close to the fields. Water from such sources needs testing for pH and for chemicals to ensure acceptable quality. Two common types of irrigation systems are drip irrigation and overhead irrigation.
Landscape Topography
A good slope for a plant nursery is 1–2 degrees; anything more than 5 degrees makes the nursery susceptible to soil erosion. Nursery stock should be planted in rows running across the slope. Where sections of the site are prone to erosion, the nursery needs a countermeasure such as erosion-prevention structures like riprap. Topography affects nursery design and layout, including the direction in which rows are planted and where windbreaks should go. A flat, open site will need a windbreak.
Soil Conditions
For a nursery to produce healthy crops, it needs healthy soil with good drainage and nutrient-holding capacity. Soil testing tells a nursery its pH and nutrient levels. One method of testing soil drainage is to dig a hole 18 inches deep and at least 4 inches in diameter, fill it with water, and leave it for an hour so the soil saturates. Then refill the hole with water to within 2 inches of the top, wait an hour, and measure with a ruler how far the water level has dropped. If the level drops 1/2 inch or less per hour, the soil drains poorly; between 1/2 inch and 1 inch, it drains at a medium rate; more than 1 inch, it drains quickly.
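The drainage thresholds from the hole test above can be captured in a small helper. The function name and category labels are illustrative, not from any published standard:

```python
def drainage_class(drop_inches_per_hour):
    """Classify soil drainage from the hole-test result described above.

    Thresholds follow the text: 1/2 inch or less per hour is poor
    drainage, between 1/2 and 1 inch is medium, and more than 1 inch
    drains quickly.
    """
    if drop_inches_per_hour <= 0.5:
        return "poor"
    if drop_inches_per_hour <= 1.0:
        return "medium"
    return "fast"

print(drainage_class(0.25))  # poor
print(drainage_class(0.75))  # medium
print(drainage_class(1.5))   # fast
```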
Hardening off, frost hardiness
Seedlings vary in their susceptibility to injury from frost, and damage can be catastrophic if "unhardened" seedlings are exposed to frost. Frost hardiness may be defined as the minimum temperature at which a certain percentage of a random seedling population will survive or will sustain a given level of damage (Siminovitch 1963, Timmis and Worrall 1975). The term LT50 (lethal temperature for 50% of a population) is commonly used. Determination of frost hardiness in Ontario is based on electrolyte leakage from mainstem terminal tips 2 cm to 3 cm long in weekly samplings (Colombo and Hickie 1987). The tips are frozen, then thawed and immersed in distilled water, whose electrical conductivity depends on the degree to which cell membranes have been ruptured by freezing, releasing electrolytes. A frost hardiness level of −15 °C has been used to determine the readiness of container stock to be moved outside from the greenhouse, and −40 °C has been the level determining readiness for frozen storage (Colombo 1997).
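The LT50 definition above lends itself to a simple sketch: given survival percentages from freeze tests at several temperatures, the temperature at which 50% of the population survives can be estimated by linear interpolation. The function and data below are hypothetical illustrations of the definition, not the Ontario electrolyte-leakage protocol:

```python
def lt50(survival_by_temp):
    """Estimate LT50 (temperature at 50% survival) by linear
    interpolation between test temperatures.

    `survival_by_temp` maps freeze-test temperature (deg C) to percent
    survival. Illustrative sketch only.
    """
    pts = sorted(survival_by_temp.items(), reverse=True)  # warm -> cold
    for (t_hi, s_hi), (t_lo, s_lo) in zip(pts, pts[1:]):
        if s_hi >= 50 >= s_lo:
            # Interpolate between the two temperatures bracketing 50%.
            frac = (s_hi - 50) / (s_hi - s_lo)
            return t_hi + frac * (t_lo - t_hi)
    raise ValueError("50% survival not bracketed by the test data")

# Hypothetical freeze-test data: survival falls as temperature drops.
data = {-5: 95, -10: 80, -15: 55, -20: 30, -25: 5}
print(f"estimated LT50: {lt50(data):.1f} deg C")  # -16.0 deg C
```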
In an earlier technique, potted seedlings were placed in a freezer chest and cooled to some level for some specific duration; a few days after removal, seedlings were assessed for damage using various criteria, including odour, general visual appearance, and examination of cambial tissue (Ritchie 1982).
Stock for fall planting must be properly hardened-off. Conifer seedlings are considered to be hardened off when the terminal buds have formed and the stem and root tissues have ceased growth. Other characteristics that in some species indicate dormancy are color and stiffness of the needles, but these are not apparent in white spruce.
Forest tree nurseries
Whether in the forest or in the nursery, seedling growth is fundamentally influenced by soil fertility, but nursery soil fertility is readily amenable to amelioration, much more so than is forest soil.
Nitrogen, phosphorus, and potassium are regularly supplied as fertilizers, and calcium and magnesium are supplied occasionally. Applications of fertilizer nitrogen do not build up in the soil to develop any appreciable storehouse of available nitrogen for future crops. Phosphorus and potassium, however, can be accumulated as a storehouse available for extended periods.
Fertilization permits seedling growth to continue longer through the growing season than unfertilized stock; fertilized white spruce attained twice the height of unfertilized. High fertility in the rooting medium favours shoot growth over root growth, and can produce top-heavy seedlings ill-suited to the rigors of the outplant site. Nutrients in oversupply can reduce growth or the uptake of other nutrients. As well, an excess of nutrient ions can prolong or weaken growth to interfere with the necessary development of dormancy and hardening of tissues in time to withstand winter weather.
Stock types, sizes and lots
Nursery stock size typically follows the normal curve when lifted for planting stock. The runts at the lower end of the scale are usually culled to an arbitrary limit, but, especially among bareroot stock, the range in size is commonly considerable. Dobbs (1976) and McMinn (1985a) examined how the performance of 2+0 bareroot white spruce related to differences in initial size of planting stock. The stock was regraded into large, medium, and small fractions according to fresh weight. The small fraction (20% of the original stock) had barely one-quarter of the dry matter mass of the large fraction at the time of outplanting. Ten years later, in the blade-scarified site, seedlings of the large fraction had almost 50% greater stem volume than had seedlings of the small fraction. Without site preparation, large stock were more than twice the size of small stock after 10 years.
Similar results were obtained with regraded 2+1 transplants sampled to determine root growth capacity. The large stock had higher RGC as well as greater mass than the small stock fraction.
The value of large size at the time of planting is especially apparent when outplants face strong competition from other vegetation, although high initial mass does not guarantee success. That the growth potential of planting stock depends on much more than size seems clear from the indifferent success of the transplanting of small 2+0 seedlings for use as 2+1 "reclaim" transplants. The size of bareroot white spruce seedlings and transplants also had a major influence on field performance.
The field performance among various stock types in Ontario plantations was examined by Paterson and Hutchison (1989): the white spruce stock types were 2+0, 1.5+0.5, 1.5+1.5, and 3+0. The nursery stock was grown at Midhurst Forest Tree Nursery, and carefully handled through lifting on 3 lift dates, packing, and hot-planting into cultivated weed-free loam. After 7 years, overall survival was 97%, with no significant differences in survival among stock types. The 1.5+1.5 stock with a mean height of 234 cm was significantly taller by 18% to 25% than the other stock types. The 1.5+1.5 stock also had significantly greater dbh than the other stock types by 30–43%. The best stock type was 57 cm taller and 1 cm greater in dbh than the poorest. Lifting date had no significant effect on growth or survival.
High elevation sites in British Columbia's southern mountains are characterized by a short growing season, low air and soil temperatures, severe winters, and deep snow. The survival and growth of Engelmann spruce and subalpine fir outplanted in 3 silvicultural trials on such sites in gaps of various sizes were compared by Lajzerowicz et al. (2006). Survival after 5 or 6 years decreased with smaller gaps. Height and diameter also decreased with decreasing size of gap; mean heights were 50 cm to 78 cm after 6 years, in line with height expectations for Engelmann spruce in a high-elevation planting study in southeastern British Columbia. In the larger gaps (≥1.0 ha), height increment by year 6 ranged from 10 cm to 20 cm. Lajzerowicz et al. concluded that plantings of conifers in clearcuts at high elevations in the southern mountains of British Columbia are likely to be successful, even close to timberline, and that group-selection silvicultural systems based on gaps of 0.1 ha or larger are also likely to succeed. Gaps smaller than 0.1 ha do not provide suitable conditions for adequate survival or growth of outplanted conifers.
Planting stock
Planting stock, "seedlings, transplants, cuttings, and occasionally wildings, for use in planting out," is nursery stock that has been made ready for outplanting. The amount of seed used in white spruce seedling production and direct seeding varies with method.
A working definition of planting stock quality was accepted at the 1979 IUFRO Workshop on Techniques for Evaluating Planting Stock Quality in New Zealand: "The quality of planting stock is the degree to which that stock realizes the objectives of management (to the end of the rotation or achievement of specified sought benefits) at minimum cost. Quality is fitness for purpose." Clear expression of objectives is therefore prerequisite to any determination of planting stock quality. Not only does performance have to be determined, but performance has to be rated against the objectives of management. Planting stock is produced in order to give effect to the forest policy of the organization.
A distinction needs to be made between "planting stock quality" and "planting stock performance potential" (PSPP). The actual performance of any given batch of outplanted planting stock is determined only in part by the kind and condition, i.e., the intrinsic PSPP, of the planting stock.
The PSPP is impossible to estimate reliably by eye, because outward appearance, especially of stock withdrawn from refrigerated storage, can deceive even experienced foresters, who would be offended if their ability to recognize good planting stock on sight were questioned. Prior to Wakeley's (1954) demonstration of the importance of the physiological state of planting stock in determining its ability to perform after outplanting, and to a considerable extent even afterwards, morphological appearance generally served as the basis for estimating planting stock quality. Gradually, however, a realization developed that more was involved. Tucker et al. (1968), for instance, after assessing 10-year survival data from several experimental white spruce plantations in Manitoba, noted that "Perhaps the most important point revealed here is that certain lots of transplants performed better than others", even though all transplants were handled and planted with care. The intuitive "stock that looks good must be good" is a persuasive but potentially dangerous maxim. That greatest of teachers, Bitter Experience, has demonstrated often enough the fallibility of such assessment, even though the corollary "stock that looks bad must be bad" is likely to be well founded. The physiological qualities of planting stock are hidden from the eye and must be revealed by testing. The potential for survival and growth of a batch of planting stock may be estimated from various features, morphological and physiological, of the stock or a sample thereof.
The size and shape and general appearance of a seedling can nevertheless give useful indications of PSPP. In low-stress outplanting situations, and with a minimized handling and lifting-planting cycle, a system based on specification for nursery stock and minimum morphological standards for acceptable seedlings works tolerably well. In certain circumstances, benefits often accrue from the use of large planting stock of highly ranked morphological grades. Length of leading shoot, diameter of stem, volume of root system, shoot:root ratios, and height:diameter ratios have been correlated with performance under specific site and planting conditions. However, the concept that larger is better negates the underlying complexities. Schmidt-Vogt (1980), for instance, found that whereas mortality among large outplants is greater than among small in the year of planting, mortality in subsequent growing seasons is higher among small outplants than among large. Much of the literature on comparative seedling performance is clouded by uncertainty as to whether the stocks being compared share the same physiological condition; differences invalidate such comparisons.
Height and root-collar diameter are generally accepted as the most useful morphological criteria and are often the only ones used in specifying standards. Quantification of root system morphology is difficult but can be done, e.g. by using the photometric rhizometer to determine intercept area, or volume by displacement or gravimetric methods.
Planting stock is always subject to a variety of conditions that are never optimal in toto. The effect of sub-optimal conditions is to induce stress in the plants. The nursery manager aims to avoid, and is normally able to avoid, stresses greater than moderate, i.e., to restrict stresses to levels that the plants can tolerate without incurring serious damage. The practice of managing stress levels in the nursery to "condition" planting stock, equipping it with characteristics that confer increased tolerance of post-planting environmental stresses, has become widespread, particularly with containerized stock.
Outplanted stock that is unable to tolerate high temperatures occurring at soil surfaces will fail to establish on many forest sites, even in the far north. Factors affecting heat tolerance were investigated by Colombo et al. (1995); the production and roles of heat shock proteins (HSPs) are important in this regard. HSPs, present constitutively in black spruce and many other, perhaps most, higher plants, are important both for normal cell functioning and in a stress response mechanism following exposure to high, non-lethal temperature. In black spruce at least, there is an association between HSPs and increased levels of heat tolerance. Investigation of the diurnal variability in heat tolerance of roots and shoots in black spruce seedlings 14 to 16 weeks old found in all 4 trials that shoot heat tolerance was significantly greater in the afternoon than in the morning. The trend in root heat tolerance was similar to that in the shoots; root systems exposed to 47 °C for 15 minutes in the afternoon averaged 75 new roots after a 2-week growth period, whereas only 28 new roots developed in root systems similarly exposed in the morning. HSP73 was detected in black spruce nuclear, mitochondrial, microsomal, and soluble protein fractions, while HSP72 was observed only in the soluble protein fraction. Seedlings exhibited constitutive synthesis of HSP73 at 26 °C in all except the nuclear membrane fraction in the morning; HSP levels at 26 °C in the afternoon were higher than in the morning in the mitochondrial and microsomal protein fractions. Heat shock affected the abundance of HSPs depending on protein fraction and time of day. Without heat shock, nuclear membrane-bound HSP73 was absent from plants in the morning and only weakly present in the afternoon; heat shock increased the abundance of nuclear membrane-bound HSP73 in the afternoon and caused it to appear in the morning.
In the mitochondrial and microsomal protein fractions, an afternoon heat shock reduced HSP73, whereas a morning heat shock increased HSP73 in the mitochondrial but decreased it in the microsomal fraction. Heat shock increased soluble HSP72/73 levels in both the morning and afternoon. In all instances, shoot and root heat tolerances were significantly greater in the afternoon than in the morning.
Planting stock continues to respire during storage even if frozen. Temperature is the major factor controlling the rate, and care must be taken to avoid overheating. Navratil (1982) found that closed containers in cold storage averaged internal temperatures 1.5 °C to 2.0 °C above the nominal storage temperature. Depletion of reserves can be estimated from the decrease in dry weight. Cold-stored 3+0 white spruce nursery stock in northern Ontario had lost 9% to 16% of dry weight after 40 days of storage. Carbohydrates can also be determined directly.
The propensity of a root system to develop new roots or extend existing ones cannot be determined by eye, yet it is the factor that makes or breaks the outcome of an outplanting operation. The post-planting development of roots or root systems of coniferous planting stock is determined by many factors, some physiological, some environmental. Unsatisfactory rates of post-planting survival unrelated to the morphology of the stock led to attempts to test the physiological condition of planting stock, particularly to quantify the propensity to produce new root growth, termed root growth capacity (RGC). New root growth can be assumed to be necessary for successful establishment of stock after planting, but although the thesis that RGC is positively related to field performance would seem reasonable, supporting evidence has been meager.
The physiological condition of seedlings is reflected by changes in root activity, which helps in determining the readiness of stock for lifting and storing, and for outplanting after storage. Navratil (1982) reported a virtually perfect (R² = 0.99) linear decrease with time in the frequency of 3+0 white spruce white root tips longer than 10 mm in the fall at Pine Ridge Forest Nursery, Alberta, falling to zero over a 3-week period ending October 13, 1982. Root regeneration research with white spruce in Canada (Hambly 1973, Day and MacGillivray 1975, Day and Breunig 1997) followed lines similar to Stone's (1955) pioneering work in California.
Simpson and Ritchie (1997) debated the proposition that root growth potential of planting stock predicts field performance; they concluded that root growth potential, as a surrogate for seedling vigor, can predict field performance, but only insofar as site conditions permit. Survival after planting is only partly a function of an outplant's ability to initiate roots under test conditions; root growth capacity is not the sole predictor of plantation performance.
Some major problems militate against greater use of RGC in forestry, including: unstandardized techniques; unstandardized quantification; uncertain correlation between quantified RGC and field performance; variability within given, nominally identical, kinds of planting stock; and the irrelevance of RGC test values determined on a sub-sample of a parent population that subsequently, before it is planted, undergoes any substantive physiological or physical change. In its present form, RGC testing is silviculturally useful chiefly as a means of detecting planting stock that, while visually unimpaired, is moribund.
Seedling moisture content can be increased or decreased in storage, depending on various factors including especially the type of container and the kind and amount of moisture-retaining material present. When seedlings exceed 20 bars PMS in storage, survival after outplanting becomes problematical. The Relative Moisture Content of stock lifted during dry conditions can be increased gradually when stored in appropriate conditions. White spruce (3+0) packed in Kraft bags in northern Ontario increased RMC by 20% to 36% within 40 days.
Bareroot 1.5+1.5 white spruce were taken from cold storage and planted in early May on a clear-felled boreal forest site in northeastern Ontario. Similar plants were potted and kept in a greenhouse. In outplanted trees, maximum stomatal conductance (g) was initially low (<0.01 cm/s), and initial base xylem pressure potential (PSIb) was -2.0 MPa. During the growing season, g increased to about 0.20 cm/s and PSIb to -1.0 MPa. Minimum xylem pressure potential (PSIm) was initially -2.5 MPa, increasing to -2.0 MPa on day 40 and to about -1.6 MPa by day 110. During the first half of the growing season, PSIm was below the turgor loss point. The osmotic potential at the turgor loss point decreased after planting, to -2.3 MPa 28 days later. In the greenhouse, minimum values of PSIT were -2.5 MPa in the first day after planting. The maximum bulk modulus of elasticity was greater in white spruce than in similarly treated jack pine and showed greater seasonal changes. Relative water content (RWC) at turgor loss was 80–87%. Available turgor (TA), defined as the integral of turgor over the range of RWC between PSIb and the xylem pressure potential at the turgor loss point, was 4.0% for white spruce at the beginning of the season, compared with 7.9% for jack pine; for the rest of the season, however, TA for jack pine was only 2% to 3% of that for white spruce. Diurnal turgor (Td), the integral of turgor over the range of RWC between PSIb and PSIm, expressed as a percentage of TA, was higher in field-planted white spruce than in jack pine until the end of the season.
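The integral definitions of available and diurnal turgor can be written compactly. As a sketch in notation of my own choosing (with \(\Psi_p\) for turgor pressure and \(R\) for relative water content; these are not the original authors' symbols):

```latex
T_A = \int_{R_{\mathrm{tlp}}}^{R_{b}} \Psi_p(R)\,\mathrm{d}R ,
\qquad
T_d = \int_{R_{m}}^{R_{b}} \Psi_p(R)\,\mathrm{d}R ,
```

where \(R_b\), \(R_m\), and \(R_{\mathrm{tlp}}\) are the relative water contents at PSIb, at PSIm, and at the turgor loss point, respectively; the ratio \(T_d/T_A\) is the "diurnal turgor as a percentage of TA" referred to in the text.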
The stomata of both white and black spruce were more sensitive to atmospheric evaporative demand and plant moisture stress during the first growing season after outplanting on 2 boreal sites in northern Ontario than were those of jack pine, physiological differences that favoured growth and establishment more in jack pine than in the spruces.
With black spruce and jack pine, but not with white spruce, Grossnickle and Blake's (1987) findings warrant mention in relation to the bareroot-containerized debate. During the first growing season after outplanting, containerized seedlings of both species had greater needle conductance than bareroot seedlings over a range of absolute humidity deficits. Needle conductance of containerized seedlings of both species remained high during periods of high absolute humidity deficits and increasing plant moisture stress. Bareroot outplants of both species had a greater early season resistance to water-flow through the soil–plant–atmosphere continuum (SPAC) than had containerized outplants. Resistance to water flow through the SPAC decreased in bareroot stock of both species as the season progressed, and was comparable to containerized seedlings 9 to 14 weeks after planting. Bareroot black spruce had greater new-root development than containerized stock throughout the growing season.
The greater water-use efficiency of newly transplanted 3-year-old white spruce seedlings under low absolute humidity differences, observed in water-stressed plants immediately after planting, helps explain the commonly observed favourable response of young outplants to the nursing effect of a partial canopy. Silvicultural treatments that promote higher humidity at the planting microsite should improve white spruce seedling photosynthesis immediately after planting.
Stock types (Seedling nomenclature)
Planting stock is grown under many diverse nursery culture regimes, in facilities ranging from sophisticated computerized greenhouses to open compounds. Types of stock include bareroot seedlings and transplants, and various kinds of containerized stock. For simplicity, both container-grown and bareroot stock are generally referred to as seedlings; transplants are nursery stock that has been lifted and transplanted into another nursery bed, usually at wider spacing. The size and physiological character of stock vary with the length of the growing period and with growing conditions. Until the technology of raising containerized nursery stock burgeoned in the second half of the twentieth century, bareroot planting stock classified by its age in years was the norm.
Classification by age
The number of years spent in the nursery seedbed by any particular lot of planting stock is indicated by the 1st of a series of numbers. The 2nd number indicates the years subsequently spent in the transplant line; a zero is shown if there has been no transplanting. A 3rd number, if any, indicates the years spent after a second lifting and transplanting. The numbers are sometimes separated by dashes, but separation by plus sign is more logical inasmuch as the sum of the individual numbers gives the age of the planting stock. Thus 2+0 is 2-year-old seedling planting stock that has not been transplanted, and Candy's (1929) white spruce 2+2+3 stock had spent 2 years in the seedbed, 2 years in transplant lines, and another 3 years in transplant lines after a second transplanting. Variations have included self-explanatory combinations such as 1½+1½, etc.
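The age-code notation is purely additive, so the total age of the stock is just the sum of the '+'-separated fields. A minimal illustrative sketch (the function name and structure are my own, not a forestry standard):

```python
def stock_age(code: str) -> float:
    """Return total age in years for a planting-stock code such as '2+0' or '2+2+3'.

    Each '+'-separated field is the years spent in a successive seedbed or
    transplant line, so the total age is simply the sum of the fields.
    """
    # float() handles fractional classes such as '1.5+1.5' (written 1½+1½ in older texts)
    return sum(float(part) for part in code.split("+"))

print(stock_age("2+0"))    # a 2-year seedling, never transplanted
print(stock_age("2+2+3"))  # Candy's (1929) twice-transplanted white spruce
```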
The class of planting stock to use on a particular site is generally selected on the basis of historical record of survival, growth, and total cost of surviving trees. In the Lake States, Kittredge concluded that good stock of 2+1 white spruce was the smallest size likely to succeed and was better than larger and more expensive stock when judged by final cost of surviving trees.
Classification by seedling description code
Because age alone is an inadequate descriptor of planting stock, various codes have been developed to describe such components of stock characteristics as height, stem diameter, and shoot:root ratio. A description code may include an indication of the intended planting season.
Physiological characteristics
Neither age classification nor the seedling description code indicates the physiological condition of planting stock, though rigid adherence to a given cultural regime, together with observation of performance over a number of years of planting, can produce stock suitable for performing on a "same again" basis.
Classification by Production System
Nursery plant material is sold using a variety of systems. The most common systems for woody plants are bare root, containers, and ball & burlap. There are manuals specifically for the production of bare root and containerized crops. In North America, the American Standard for Nursery Stock (ANSI Z60.1) and the Canadian nursery stock standard set specifications that determine the size category to which nursery plant material belongs. The categories relate to the size of the plant, the plant calliper-to-height ratio, and the size of the root ball.
If plant stock is grown in a pot of any size or material, it is considered container-grown plant stock. The benefits of container-grown plant stock include the convenience of maintaining and transporting it. However, container-grown plant stock can develop poor root structure when the roots hit the side of the container and begin to circle; when the roots circle the pot, the plant is considered root bound. Container-grown plant stock may be grown to size in the field and transplanted into a container, or grown in a container until marketable size. If grown in a container rather than a field, continual upsizing of pots is important for preventing the plant from becoming root bound. One way to prevent a crop from becoming root bound is to use air-pruning containers, which have spaces around the pot that expose growing media and roots to air; the air stops the root tip from growing and circling the pot. Nurseries also mechanically prune roots with "U"-shaped or linear blades connected to tractors. Container production can be used for any plant species.
If a nursery plant is sold as "bare root", soil has been removed from the roots and the product being sold is just the plant. Bare root plants, which include herbaceous and woody perennials, are marketed in the winter for sale to customers in spring. They are grown in the field during the growing season until they become a harvestable bare root crop. During dormancy, bare root plants are dug up, bundled, and stored in a cool warehouse with their roots in a moist medium; they are then sold, transplanted back into the field in spring, or disposed of if there is not enough space in the field. Becoming root bound is not an issue for bare root plants, since there is no container for the roots to circle; bare root nursery stock is expected to be free of root deformities and free of pests.
If a plant is balled and burlapped, the nursery dug around the plant and its soil while in the field and wrapped the root ball in burlap tied down with rope. Nurseries may also use wire baskets to support ball and burlap trees if needed. Ball and burlap trees lose close to 90% of their root systems when dug. The size of the root ball depends on the calliper and species of the tree. Root balls must be large enough to keep most of the plant's root system, and deep enough to keep the root ball intact while the plant is being moved or planted.
Several terms identify the stage nursery plants are at. Liners are young plants, one or two years old, which may be sold bare root or in containers. A whip is a tree with just a trunk and little to no branching. Whips can be grown from hardwood cuttings or seedlings, or propagated by budding, a grafting method in which a single bud of a desired cultivar is grafted onto a rootstock plant. In the case of budding, the rootstock will be older than the crown.
Noble gas compound

In chemistry, noble gas compounds are chemical compounds that include an element from the noble gases, group 18 of the periodic table. Although the noble gases are generally unreactive elements, many such compounds have been observed, particularly involving the element xenon.
From the standpoint of chemistry, the noble gases may be divided into two groups: the relatively reactive krypton (ionisation energy 14.0 eV), xenon (12.1 eV), and radon (10.7 eV) on one side, and the very unreactive argon (15.8 eV), neon (21.6 eV), and helium (24.6 eV) on the other. Consistent with this classification, Kr, Xe, and Rn form compounds that can be isolated in bulk at or near standard temperature and pressure, whereas He, Ne, and Ar have been observed to form true chemical bonds using spectroscopic techniques, but only when frozen into a noble gas matrix at very low temperatures, in supersonic jets of noble gas, or under extremely high pressures with metals.
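This two-way classification can be expressed as a simple partition of the quoted ionisation energies. A minimal sketch (the 15 eV cutoff is an illustrative choice, not a physical constant):

```python
# Ionisation energies (eV) quoted in the text.
ionisation_energy = {
    "He": 24.6, "Ne": 21.6, "Ar": 15.8,
    "Kr": 14.0, "Xe": 12.1, "Rn": 10.7,
}

# An illustrative cutoff separating the gases that form isolable bulk
# compounds (Kr, Xe, Rn) from those observed to bond only under extreme
# conditions (He, Ne, Ar).
CUTOFF_EV = 15.0
reactive = sorted(g for g, ie in ionisation_energy.items() if ie < CUTOFF_EV)
unreactive = sorted(g for g, ie in ionisation_energy.items() if ie >= CUTOFF_EV)

print(reactive)    # ['Kr', 'Rn', 'Xe']
print(unreactive)  # ['Ar', 'He', 'Ne']
```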
The heavier noble gases have more electron shells than the lighter ones. Hence, the outermost electrons are subject to a shielding effect from the inner electrons that makes them more easily ionized, since they are less strongly attracted to the positively-charged nucleus. This results in an ionization energy low enough to form stable compounds with the most electronegative elements, fluorine and oxygen, and even with less electronegative elements such as nitrogen and carbon under certain circumstances.
History and background
When the family of noble gases was first identified at the end of the nineteenth century, none of them were observed to form any compounds, and so it was initially believed that they were all inert gases (as they were then known) which could not form compounds. With the development of atomic theory in the early twentieth century, their inertness was ascribed to a full valence shell of electrons, which renders them very chemically stable and nonreactive. All noble gases have full s and p outer electron shells (except helium, which has no p sublevel), and so do not form chemical compounds easily. Their high ionization energy and almost zero electron affinity explain their non-reactivity.
In 1933, Linus Pauling predicted that the heavier noble gases would be able to form compounds with fluorine and oxygen. Specifically, he predicted the existence of krypton hexafluoride () and xenon hexafluoride (), speculated that might exist as an unstable compound, and suggested that xenic acid would form perxenate salts. These predictions proved quite accurate, although subsequent predictions for indicated that it would be not only thermodynamically unstable but also kinetically unstable. As of 2022, has not been made, although the octafluoroxenate(VI) anion () has been observed.
By 1960, no compound with a covalently bound noble gas atom had yet been synthesized. The first published report, in June 1962, of a noble gas compound was by Neil Bartlett, who noticed that the highly oxidising compound platinum hexafluoride ionised to . As the ionisation energy of to (1165 kJ mol−1) is nearly equal to the ionisation energy of Xe to (1170 kJ mol−1), he tried the reaction of Xe with . This yielded a crystalline product, xenon hexafluoroplatinate, whose formula was proposed to be .
It was later shown that the compound is actually more complex, containing both and . Nonetheless, this was the first real compound of any noble gas.
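Bartlett's reasoning rested on the near-equality of the two ionisation energies quoted above; the arithmetic is easily checked (a sketch; variable names are my own):

```python
ie_o2 = 1165.0  # kJ/mol, O2 -> O2+ (value quoted in the text)
ie_xe = 1170.0  # kJ/mol, Xe -> Xe+ (value quoted in the text)

# The two values differ by well under half a percent, which is why Bartlett
# expected platinum hexafluoride, known to oxidise O2, to oxidise Xe as well.
relative_difference = abs(ie_xe - ie_o2) / ie_o2
print(f"{relative_difference:.2%}")  # about 0.43%
```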
The first binary noble gas compounds were reported later in 1962. A team at Argonne National Laboratory synthesized xenon tetrafluoride () by subjecting a mixture of xenon and fluorine to high temperature. Rudolf Hoppe, among other groups, synthesized xenon difluoride () by the reaction of the elements.
Following the first successful synthesis of xenon compounds, synthesis of krypton difluoride () was reported in 1963.
True noble gas compounds
In this section, the non-radioactive noble gases are considered in decreasing order of atomic weight, which generally reflects the priority of their discovery, and the breadth of available information for these compounds. The radioactive elements radon and oganesson are harder to study and are considered at the end of the section.
Xenon compounds
After the initial 1962 studies on and , xenon compounds that have been synthesized include other fluorides (), oxyfluorides (, , , , ) and oxides (, and ). Xenon fluorides react with several other fluorides to form fluoroxenates, such as sodium octafluoroxenate(VI) (), and fluoroxenonium salts, such as trifluoroxenonium hexafluoroantimonate ().
In terms of other halide reactivity, short-lived excimers of noble gas halides such as or XeCl are prepared in situ, and are used in excimer lasers.
Recently, xenon has been shown to produce a wide variety of compounds of the type where n is 1, 2 or 3 and X is any electronegative group, such as , , , , , , etc.; the range of compounds is impressive, similar to that seen with the neighbouring element iodine, running into the thousands and involving bonds between xenon and oxygen, nitrogen, carbon, boron and even gold, as well as perxenic acid, several halides, and complex ions.
The compound contains a Xe–Xe bond, which is the longest element-element bond known (308.71 pm = 3.0871 Å). Short-lived excimers of are reported to exist as part of the operation of excimer lasers.
Krypton compounds
Krypton gas reacts with fluorine gas under extreme forcing conditions, forming krypton difluoride according to the following equation:

Kr + F2 → KrF2
reacts with strong Lewis acids to form salts of the and cations. The preparation of reported by Grosse in 1963, using the Claasen method, was subsequently shown to be a mistaken identification.
Krypton compounds with other than Kr–F bonds (compounds with atoms other than fluorine) have also been described. reacts with to produce the unstable compound, , with a krypton-oxygen bond. A krypton-nitrogen bond is found in the cation , produced by the reaction of with below −50 °C.
Argon compounds
The discovery of HArF was announced in 2000. The compound can exist in low temperature argon matrices for experimental studies, and it has also been studied computationally. Argon hydride ion was obtained in the 1970s.
This molecular ion has also been identified in the Crab nebula, based on the frequency of its light emissions.
There is a possibility that a solid salt of could be prepared with or anions.
Neon and helium compounds
The ions, , , , and are known from optical and mass spectrometric studies. Neon also forms an unstable hydrate. There is some empirical and theoretical evidence for a few metastable helium compounds which may exist at very low temperatures or extreme pressures. The stable cation was reported in 1925, but was not considered a true compound since it is not neutral and cannot be isolated. In 2016 scientists created the helium compound disodium helide () which was the first helium compound discovered.
Radon and oganesson compounds
Radon is not chemically inert, but its short half-life (3.8 days for 222Rn) and the high energy of its radioactivity make it difficult to investigate its only fluoride (), its reported oxide (), and their reaction products.
All known oganesson isotopes have even shorter half-lives in the millisecond range and no compounds are known yet, although some have been predicted theoretically. It is expected to be even more reactive than radon, more like a normal element than a noble gas in its chemistry.
Reports prior to xenon hexafluoroplatinate and xenon tetrafluoride
Clathrates
Prior to 1962, the only isolated compounds of noble gases were clathrates (including clathrate hydrates); other compounds such as coordination compounds were observed only by spectroscopic means. Clathrates (also known as cage compounds) are compounds of noble gases in which they are trapped within cavities of crystal lattices of certain organic and inorganic substances. Ar, Kr, Xe and Ne can form clathrates with crystalline hydroquinone. Kr and Xe can appear as guests in crystals of melanophlogite.
Helium-nitrogen () crystals have been grown at room temperature at pressures ca. 10 GPa in a diamond anvil cell. Solid argon-hydrogen clathrate () has the same crystal structure as the Laves phase. It forms at pressures between 4.3 and 220 GPa, though Raman measurements suggest that the molecules in dissociate above 175 GPa. A similar solid forms at pressures above 5 GPa. It has a face-centered cubic structure where krypton octahedra are surrounded by randomly oriented hydrogen molecules. Meanwhile, in solid xenon atoms form dimers inside solid hydrogen.
Coordination compounds
Coordination compounds such as have been postulated to exist at low temperatures, but have never been confirmed.
Xenon is known to function as a metal ligand. In addition to the charged [AuXe4]2+, xenon, krypton, and argon all reversibly bind to gaseous M(CO)5, where M=Cr, Mo, or W. P-block metals also bind noble gases: XeBeO has been observed spectroscopically and both XeBeS and FXeBO are predicted stable.
Also, compounds such as and were reported to have been formed by electron bombardment, but recent research has shown that these are probably the result of He being adsorbed on the surface of the metal; therefore, these compounds cannot truly be considered chemical compounds.
Hydrates
Hydrates are formed by compressing noble gases in water, where it is believed that the water molecule, a strong dipole, induces a weak dipole in the noble gas atoms, resulting in dipole-dipole interaction. Heavier atoms are more influenced than smaller ones, hence was reported to have been the most stable hydrate; it has a melting point of 24 °C. The deuterated version of this hydrate has also been produced.
Fullerene adducts
Noble gases can also form endohedral fullerene compounds where the noble gas atom is trapped inside a fullerene molecule. In 1993, it was discovered that when is exposed to a pressure of around 3 bar of He or Ne, the complexes and are formed. Under these conditions, only about one out of every 650,000 cages was doped with a helium atom; with higher pressures (3000 bar), it is possible to achieve a yield of up to 0.1%. Endohedral complexes with argon, krypton and xenon have also been obtained, as well as numerous adducts of .
Applications
Most applications of noble gas compounds are either as oxidising agents or as a means to store noble gases in a dense form. Xenic acid is a valuable oxidising agent because it has no potential for introducing impurities—xenon is simply liberated as a gas—and so is rivalled only by ozone in this regard. The perxenates are even more powerful oxidizing agents. Xenon-based oxidants have also been used for synthesizing carbocations stable at room temperature, in solution.
Stable salts of xenon containing very high proportions of fluorine by weight (such as tetrafluoroammonium heptafluoroxenate(VI), , and the related tetrafluoroammonium octafluoroxenate(VI) ), have been developed as highly energetic oxidisers for use as propellants in rocketry.
Xenon fluorides are good fluorinating agents.
Clathrates have been used for separation of He and Ne from Ar, Kr, and Xe, and also for the transportation of Ar, Kr, and Xe. (For instance, radioactive isotopes of krypton and xenon are difficult to store and dispose of, and compounds of these elements may be more easily handled than the gaseous forms.) In addition, clathrates of radioisotopes may provide suitable formulations for experiments requiring sources of particular types of radiation; hence, 85Kr clathrate provides a safe source of beta particles, while 133Xe clathrate provides a useful source of gamma rays.
Plant nutrition

Plant nutrition is the study of the chemical elements and compounds necessary for plant growth and reproduction, plant metabolism, and their external supply. An element is considered essential if, in its absence, the plant is unable to complete a normal life cycle, or if the element is part of some essential plant constituent or metabolite. This is in accordance with Justus von Liebig's law of the minimum. The total essential plant nutrients include seventeen different elements: carbon, oxygen and hydrogen are absorbed from the air, whereas other nutrients, including nitrogen, are typically obtained from the soil (exceptions include some parasitic or carnivorous plants).
Plants must obtain the following mineral nutrients from their growing medium:
The macronutrients: nitrogen (N), phosphorus (P), potassium (K), calcium (Ca), sulfur (S), magnesium (Mg), carbon (C), hydrogen (H), oxygen (O)
The micronutrients (or trace minerals): iron (Fe), boron (B), chlorine (Cl), manganese (Mn), zinc (Zn), copper (Cu), molybdenum (Mo), nickel (Ni)
These elements remain in the soil as salts, so plants absorb them as ions. The macronutrients are taken up in larger quantities; hydrogen, oxygen, nitrogen and carbon contribute over 95% of a plant's entire biomass on a dry-matter weight basis. Micronutrients are present in plant tissue in quantities measured in parts per million, ranging from 0.1 to 200 ppm, or less than 0.02% dry weight.
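The ppm and percent figures above are related by a factor of 10,000. A minimal sketch of the conversion (function name my own):

```python
def ppm_to_percent(ppm: float) -> float:
    """Convert a tissue concentration in parts per million to percent dry weight.

    1% = 10,000 ppm, so the micronutrient ceiling of 200 ppm quoted in the
    text corresponds to 0.02% dry weight.
    """
    return ppm / 10_000

print(ppm_to_percent(200))  # 0.02
print(ppm_to_percent(0.1))  # 1e-05
```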
Most soil conditions across the world can provide plants adapted to that climate and soil with sufficient nutrition for a complete life cycle, without the addition of nutrients as fertilizer. However, if the soil is cropped it is necessary to artificially modify soil fertility through the addition of fertilizer to promote vigorous growth and increase or sustain yield. This is done because, even with adequate water and light, nutrient deficiency can limit growth and crop yield.
History
Carbon, hydrogen and oxygen are the basic nutrients plants receive from air and water. Justus von Liebig proved in 1840 that plants needed nitrogen, potassium and phosphorus. Liebig's law of the minimum states that a plant's growth is limited by nutrient deficiency. Plant cultivation in media other than soil was used by Arnon and Stout in 1939 to show that molybdenum was essential to tomato growth.
Processes
Plants take up essential elements from the soil through their roots and from the air through their leaves. Nutrient uptake in the soil is achieved by cation exchange, wherein root hairs pump hydrogen ions (H+) into the soil through proton pumps. These hydrogen ions displace cations attached to negatively charged soil particles so that the cations are available for uptake by the root. In the leaves, stomata open to take in carbon dioxide and expel oxygen. The carbon dioxide molecules are used as the carbon source in photosynthesis.
The root, especially the root hair, a unique cell, is the essential organ for the uptake of nutrients. The structure and architecture of the root can alter the rate of nutrient uptake. Nutrient ions are transported to the center of the root, the stele, in order for the nutrients to reach the conducting tissues, xylem and phloem. The Casparian strip, a cell wall outside the stele but within the root, prevents passive flow of water and nutrients, helping to regulate the uptake of nutrients and water. Xylem moves water and mineral ions within the plant, while phloem accounts for organic molecule transport. Water potential plays a key role in a plant's nutrient uptake. If the water potential is more negative in the plant than in the surrounding soil, water, carrying dissolved nutrients, moves from the region of higher water potential in the soil to the region of lower water potential in the plant.
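The direction-of-flow rule in the paragraph above can be sketched as a simple comparison of water potentials (the function name and example values are illustrative, not from the source):

```python
def water_flows_into_plant(psi_soil_mpa: float, psi_plant_mpa: float) -> bool:
    """Water (and the nutrients dissolved in it) moves toward the more
    negative water potential.

    Returns True when the plant's water potential is more negative than the
    soil's, i.e. when uptake by the root is favoured.
    """
    return psi_plant_mpa < psi_soil_mpa

print(water_flows_into_plant(-0.3, -1.2))  # True: moist soil vs. transpiring plant
print(water_flows_into_plant(-2.0, -1.2))  # False: very dry soil reverses the gradient
```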
There are three fundamental ways plants uptake nutrients through the root:
Simple diffusion occurs when a nonpolar molecule, such as O2, CO2, or NH3, follows a concentration gradient, moving passively through the lipid bilayer of the cell membrane without the use of transport proteins.
Facilitated diffusion is the rapid movement of solutes or ions following a concentration gradient, facilitated by transport proteins.
Active transport is the uptake by cells of ions or molecules against a concentration gradient; this requires an energy source, usually ATP, to power molecular pumps that move the ions or molecules through the membrane.
Nutrients can be moved in plants to where they are most needed. For example, a plant will try to supply more nutrients to its younger leaves than to its older ones. When nutrients are mobile in the plant, symptoms of any deficiency become apparent first on the older leaves. However, not all nutrients are equally mobile. Nitrogen, phosphorus, and potassium are mobile nutrients while the others have varying degrees of mobility. When a less-mobile nutrient is deficient, the younger leaves suffer because the nutrient does not move up to them but stays in the older leaves. This phenomenon is helpful in determining which nutrients a plant may be lacking.
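The diagnostic rule in the paragraph above — mobile-nutrient deficiencies appear first on older leaves, immobile-nutrient deficiencies on younger leaves — can be written as a tiny lookup. This is an illustrative sketch, not an agronomic tool; the nutrient groupings are simplified examples drawn from this article's mobility section.

```python
# Simplified example groupings (per this article: N, P, K, Mg are mobile;
# Ca, B, S are phloem-immobile; Fe deficiency shows on new growth).
MOBILE = {"N", "P", "K", "Mg"}
IMMOBILE = {"Ca", "B", "S", "Fe"}

def likely_deficient(symptom_location):
    """Map where symptoms appear first to the candidate nutrient group."""
    if symptom_location == "older leaves":
        return sorted(MOBILE)
    if symptom_location == "younger leaves":
        return sorted(IMMOBILE)
    return []

print(likely_deficient("older leaves"))    # ['K', 'Mg', 'N', 'P']
print(likely_deficient("younger leaves"))  # ['B', 'Ca', 'Fe', 'S']
```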
Many plants engage in symbiosis with microorganisms. Two important types of these relationships are
with bacteria such as rhizobia, that carry out biological nitrogen fixation, in which atmospheric nitrogen (N2) is converted into ammonium (NH4+); and
with mycorrhizal fungi, which through their association with the plant roots help to create a larger effective root surface area. Both of these mutualistic relationships enhance nutrient uptake.
The Earth's atmosphere contains over 78 percent nitrogen. Plants called legumes, including the agricultural crops alfalfa and soybeans, widely grown by farmers, harbour nitrogen-fixing bacteria that can convert atmospheric nitrogen into nitrogen the plant can use. Plants not classified as legumes such as wheat, corn and rice rely on nitrogen compounds present in the soil to support their growth. These can be supplied by mineralization of soil organic matter or added plant residues, nitrogen fixing bacteria, animal waste, through the breaking of triple bonded N2 molecules by lightning strikes or through the application of fertilizers.
Functions of nutrients
At least 17 elements are known to be essential nutrients for plants. In relatively large amounts, the soil supplies nitrogen, phosphorus, potassium, calcium, magnesium, and sulfur; these are often called the macronutrients. In relatively small amounts, the soil supplies iron, manganese, boron, molybdenum, copper, zinc, chlorine, and cobalt, the so-called micronutrients. Nutrients must be available not only in sufficient amounts but also in appropriate ratios.
Plant nutrition is a difficult subject to understand completely, partially because of the variation between different plants, and even between different species or individuals of a given clone. Elements present at low levels may cause deficiency symptoms, while levels that are too high may cause toxicity. Furthermore, deficiency of one element may present as symptoms of toxicity from another element, and vice versa. An abundance of one nutrient may cause a deficiency of another nutrient. For example, K+ uptake can be influenced by the amount of NH4+ available.
Nitrogen is plentiful in the Earth's atmosphere, and a number of commercially important agricultural plants engage in nitrogen fixation (conversion of atmospheric nitrogen to a biologically useful form). These include soybeans, edible beans and peas, as well as the clovers and alfalfa used primarily for feeding livestock. However, plants mostly receive their nitrogen through the soil, where it is already present in biologically useful forms; the triple bond of atmospheric N2 makes the molecule nearly inert, and converting it into usable forms requires a large input of energy. Plants such as the commercially important corn, wheat, oats, barley and rice require nitrogen compounds to be present in the soil in which they grow.
Carbon and oxygen are absorbed from the air while other nutrients are absorbed from the soil. Green plants ordinarily obtain their carbohydrate supply from the carbon dioxide in the air by the process of photosynthesis. Each of these nutrients is used for a different essential function.
Basic nutrients
The basic nutrients are derived from air and water.
Carbon
Carbon forms the backbone of most plant biomolecules, including proteins, starches and cellulose. Carbon is fixed through photosynthesis; this converts carbon dioxide from the air into carbohydrates which are used to store and transport energy within the plant.
Hydrogen
Hydrogen is necessary for building sugars, and from them the rest of the plant. It is obtained almost entirely from water. Hydrogen ions are imperative for establishing the proton gradient that helps drive the electron transport chain in photosynthesis and in respiration.
Oxygen
Oxygen is a component of many organic and inorganic molecules within the plant, and is acquired in many forms. These include: O2 and CO2 (mainly from the air via leaves) and H2O, NO3−, H2PO4− and SO42− (mainly from the soil water via roots). Plants produce oxygen gas (O2) along with glucose during photosynthesis but then require O2 to undergo aerobic cellular respiration and break down this glucose to produce ATP.
Macronutrients (primary)
Nitrogen
Nitrogen is a major constituent of several of the most important plant substances. For example, nitrogen compounds comprise 40% to 50% of the dry matter of protoplasm, and it is a constituent of amino acids, the building blocks of proteins. It is also an essential constituent of chlorophyll. In many agricultural settings, nitrogen is the limiting nutrient for rapid growth.
Phosphorus
Like nitrogen, phosphorus is involved with many vital plant processes. Within a plant, it is present mainly as a structural component of the nucleic acids: deoxyribonucleic acid (DNA) and ribonucleic acid (RNA), as well as a constituent of phospholipids, which are important in membrane development and function. It is present in both organic and inorganic forms, both of which are readily translocated within the plant. All energy transfers in the cell are critically dependent on phosphorus. As with all living things, phosphorus is part of adenosine triphosphate (ATP), which is of immediate use in all processes that require energy within the cells. Phosphorus can also be used to modify the activity of various enzymes by phosphorylation, and is used for cell signaling. Phosphorus is concentrated at the most actively growing points of a plant and stored within seeds in anticipation of their germination.
Potassium
Unlike other major elements, potassium does not enter into the composition of any of the important plant constituents involved in metabolism, but it does occur in all parts of plants in substantial amounts. It is essential for enzyme activity, including enzymes involved in primary metabolism. It plays a role in turgor regulation, affecting the functioning of the stomata and cell volume growth.
It seems to be of particular importance in leaves and at growing points. Potassium is outstanding among the nutrient elements for its mobility and solubility within plant tissues.
Processes involving potassium include the formation of carbohydrates and proteins, the regulation of internal plant moisture, as a catalyst and condensing agent of complex substances, as an accelerator of enzyme action, and as a contributor to photosynthesis, especially under low light intensity. Potassium regulates the opening and closing of the stomata by a potassium ion pump. Since stomata are important in water regulation, potassium regulates water loss from the leaves and increases drought tolerance. Potassium serves as an activator of enzymes used in photosynthesis and respiration. Potassium is used to build cellulose and aids in photosynthesis by the formation of a chlorophyll precursor. The potassium ion (K+) is highly mobile and can aid in balancing the anion (negative) charges within the plant. A relationship between potassium nutrition and cold resistance has been found in several tree species, including two species of spruce. Potassium helps in fruit coloration and shape, and also increases its Brix (soluble sugar content). Hence, quality fruits are produced in potassium-rich soils.
Research has linked K+ transport with auxin homeostasis, cell signaling, cell expansion, membrane trafficking and phloem transport.
Macronutrients (secondary and tertiary)
Sulfur
Sulfur is a structural component of some amino acids (including cysteine and methionine) and vitamins, and is essential for chloroplast growth and function; it is found in the iron-sulfur complexes of the electron transport chains in photosynthesis. It is needed for N2 fixation by legumes, and the conversion of nitrate into amino acids and then into protein.
Calcium
Calcium in plants occurs chiefly in the leaves, with lower concentrations in seeds, fruits, and roots. A major function is as a constituent of cell walls. When coupled with certain acidic compounds of the jelly-like pectins of the middle lamella, calcium forms an insoluble salt. It is also intimately involved in meristems, and is particularly important in root development, with roles in cell division, cell elongation, and the detoxification of hydrogen ions. Other functions attributed to calcium are: the neutralization of organic acids; inhibition of some potassium-activated ions; and a role in nitrogen absorption. A notable feature of calcium-deficient plants is a defective root system. Roots are usually affected before above-ground parts. Blossom end rot is also a result of inadequate calcium.
Calcium regulates transport of other nutrients into the plant and is also involved in the activation of certain plant enzymes. Calcium deficiency results in stunting. This nutrient is involved in photosynthesis and plant structure. It is needed as a balancing cation for anions in the vacuole and as an intracellular messenger in the cytosol.
Magnesium
The outstanding role of magnesium in plant nutrition is as a constituent of the chlorophyll molecule. As a carrier, it is also involved in numerous enzyme reactions as an effective activator, in which it is closely associated with energy-supplying phosphorus compounds.
Micro-nutrients
Plants are able to accumulate most trace elements sufficiently. Some plants are sensitive indicators of the chemical environment in which they grow (Dunn 1991), and some plants have barrier mechanisms that exclude or limit the uptake of a particular element or ion species, e.g., alder twigs commonly accumulate molybdenum but not arsenic, whereas the reverse is true of spruce bark (Dunn 1991). Otherwise, a plant can integrate the geochemical signature of the soil mass permeated by its root system together with the contained groundwaters. Sampling is facilitated by the tendency of many elements to accumulate in tissues at the plant's extremities. Some micronutrients can be applied as seed coatings.
Iron
Iron is necessary for photosynthesis and is present as an enzyme cofactor in plants. Iron deficiency can result in interveinal chlorosis and necrosis.
Iron is not a structural part of chlorophyll but is essential for its synthesis; it also participates in electron transport within the plant. Copper deficiency can be responsible for promoting an iron deficiency.
As with other biological processes, the main useful form of iron is iron(II), due to its higher solubility at neutral pH. However, plants are also capable of using iron(III) via citric acid, through the photo-reduction of ferric citrate. In the field, as with many other transition metals, iron fertilizer is supplied as a chelate.
Molybdenum
Molybdenum is a cofactor to enzymes important in building amino acids and is involved in nitrogen metabolism. Molybdenum is part of the nitrate reductase enzyme (needed for the reduction of nitrate) and the nitrogenase enzyme (required for biological nitrogen fixation). Reduced productivity as a result of molybdenum deficiency is usually associated with the reduced activity of one or more of these enzymes.
Boron
Boron has many functions in a plant: it affects flowering and fruiting, pollen germination, cell division, and active salt absorption. The metabolism of amino acids and proteins, carbohydrates, calcium, and water are strongly affected by boron. Many of those listed functions may be embodied by its function in moving the highly polar sugars through cell membranes by reducing their polarity and hence the energy needed to pass the sugar. If sugar cannot pass to the fastest growing parts rapidly enough, those parts die.
Copper
Copper is important for photosynthesis. Symptoms for copper deficiency include chlorosis. It is involved in many enzyme processes; necessary for proper photosynthesis; involved in the manufacture of lignin (cell walls) and involved in grain production. It is difficult to find in some soil conditions.
Manganese
Manganese is necessary for photosynthesis, including the building of chloroplasts. Manganese deficiency may result in coloration abnormalities, such as discolored spots on the foliage.
Sodium
Sodium is involved in the regeneration of phosphoenolpyruvate in CAM and C4 plants. Sodium can potentially replace potassium's regulation of stomatal opening and closing.
Essentiality of sodium:
Sodium is essential for C4 plants, but not for C3 plants.
Substitution of K by Na: plants can be classified into four groups:
Group A—a high proportion of K can be replaced by Na, and the substitution stimulates growth in a way that cannot be achieved by applying K alone
Group B—specific growth responses to Na are observed, but they are much less distinct
Group C—only minor substitution is possible, and Na has no effect
Group D—no substitution occurs
Sodium can stimulate growth by increasing leaf area and stomata, and it improves the water balance.
Functions of Na in metabolism:
In C4 metabolism, sodium deficiency impairs the conversion of pyruvate to phosphoenolpyruvate, reduces photosystem II activity, and produces ultrastructural changes in the mesophyll chloroplasts.
Sodium can replace several functions of K, including:
serving as an internal osmoticum
stomatal function
photosynthesis
counteraction of ions in long-distance transport
enzyme activation
Sodium also improves crop quality, e.g., improving the taste of carrots by increasing sucrose.
Zinc
Zinc is required in a large number of enzymes and plays an essential role in DNA transcription. A typical symptom of zinc deficiency is the stunted growth of leaves, commonly known as "little leaf", caused by the oxidative degradation of the growth hormone auxin.
Nickel
In vascular plants, nickel is absorbed by plants in the form of Ni2+ ion. Nickel is essential for activation of urease, an enzyme involved with nitrogen metabolism that is required to process urea. Without nickel, toxic levels of urea accumulate, leading to the formation of necrotic lesions. In non-vascular plants, nickel activates several enzymes involved in a variety of processes, and can substitute for zinc and iron as a cofactor in some enzymes.
Chlorine
Chlorine, as compounded chloride, is necessary for osmosis and ionic balance; it also plays a role in photosynthesis.
Cobalt
Cobalt has proven to be beneficial to at least some plants although it does not appear to be essential for most species. It has, however, been shown to be essential for nitrogen fixation by the nitrogen-fixing bacteria associated with legumes and other plants.
Silicon
Silicon is not considered an essential element for plant growth and development, since it is abundant in the environment and hence available whenever needed. It is found in the structures of plants and improves the health of plants.
In plants, silicon has been shown in experiments to strengthen cell walls, improve plant strength, health, and productivity. There have been studies showing evidence of silicon improving drought and frost resistance, decreasing lodging potential and boosting the plant's natural pest and disease fighting systems. Silicon has also been shown to improve plant vigor and physiology by improving root mass and density, and increasing above ground plant biomass and crop yields. Silicon is currently under consideration by the Association of American Plant Food Control Officials (AAPFCO) for elevation to the status of a "plant beneficial substance".
Vanadium
Vanadium may be required by some plants, but at very low concentrations. It may also substitute for molybdenum.
Selenium
Selenium is probably not essential for flowering plants, but it can be beneficial; it can stimulate plant growth, improve tolerance of oxidative stress, and increase resistance to pathogens and herbivory.
Mobility
Mobile
Nitrogen is transported via the xylem from the roots to the leaf canopy as nitrate ions, or in an organic form, such as amino acids or amides. Nitrogen can also be transported in the phloem sap as amides, amino acids and ureides; it is therefore mobile within the plant, and the older leaves exhibit chlorosis and necrosis earlier than the younger leaves. Because phosphorus is a mobile nutrient, older leaves will show the first signs of deficiency. Magnesium is very mobile in plants, and, like potassium, when deficient is translocated from older to younger tissues, so that signs of deficiency appear first on the oldest tissues and then spread progressively to younger tissues.
Immobile
Because calcium is phloem-immobile, calcium deficiency appears in new growth. When developing tissues are forced to rely on the xylem, calcium is supplied by the transpiration stream only.
Boron is not relocatable in the plant via the phloem. It must be supplied to the growing parts via the xylem. Foliar sprays affect only those parts sprayed, which may be insufficient for the fastest growing parts, and is very temporary.
In plants, sulfur cannot be mobilized from older leaves for new growth, so deficiency symptoms are seen in the youngest tissues first. Symptoms of deficiency include yellowing of leaves and stunted growth.
Nutrient deficiency
Symptoms
The effect of a nutrient deficiency can vary from a subtle depression of growth rate to obvious stunting, deformity, discoloration, distress, and even death. Visual symptoms distinctive enough to be useful in identifying a deficiency are rare. Most deficiencies are multiple and moderate. However, while a deficiency is seldom that of a single nutrient, nitrogen is commonly the nutrient in shortest supply.
Chlorosis of foliage is not always due to mineral nutrient deficiency. Solarization can produce superficially similar effects, though mineral deficiency tends to cause premature defoliation, whereas solarization does not, nor does solarization depress nitrogen concentration.
Macronutrients
Nitrogen deficiency most often results in stunted growth, slow growth, and chlorosis. Nitrogen deficient plants will also exhibit a purple appearance on the stems, petioles and underside of leaves from an accumulation of anthocyanin pigments.
Phosphorus deficiency can produce symptoms similar to those of nitrogen deficiency, characterized by an intense green coloration or reddening in leaves due to lack of chlorophyll. If the plant is experiencing a severe phosphorus deficiency, the leaves may become denatured and show signs of death. Occasionally the leaves may appear purple from an accumulation of anthocyanin. As noted by Russell: "Phosphate deficiency differs from nitrogen deficiency in being extremely difficult to diagnose, and crops can be suffering from extreme starvation without there being any obvious signs that lack of phosphate is the cause". Russell's observation applies to at least some coniferous seedlings, but Benzian found that although response to phosphorus in very acid forest tree nurseries in England was consistently high, no species (including Sitka spruce) showed any visible symptom of deficiency other than a slight lack of lustre. Phosphorus levels have to be exceedingly low before visible symptoms appear in such seedlings. In sand culture at 0 ppm phosphorus, white spruce seedlings were very small and tinted deep purple; at 0.62 ppm, only the smallest seedlings were deep purple; at 6.2 ppm, the seedlings were of good size and color.
The root system is less effective without a continuous supply of calcium to newly developing cells. Even short-term disruptions in calcium supply can disrupt biological functions and root function. A common symptom of calcium deficiency in leaves is the curling of the leaf towards the veins or center of the leaf; often the affected tissue also blackens. The tips of the leaves may appear burned, and cracking may occur in some calcium-deficient crops if they experience a sudden increase in humidity. Calcium deficiency may arise in tissues that are fed by the phloem, causing blossom end rot in watermelons, peppers and tomatoes, empty peanut pods and bitter pits in apples. In enclosed tissues, calcium deficiency can cause celery black heart and "brown heart" in greens like escarole.
Researchers found that partial deficiencies of K or P did not change the fatty acid composition of phosphatidylcholine in Brassica napus L. plants. Calcium deficiency, on the other hand, led to a marked decline of polyunsaturated compounds, which would be expected to have negative impacts on the integrity of the plant membrane, affecting properties such as its permeability and the ion-uptake activity of the root membranes.
Potassium deficiency may cause necrosis or interveinal chlorosis. Deficiency may result in higher risk of pathogens, wilting, chlorosis, brown spotting, and higher chances of damage from frost and heat. When potassium is moderately deficient, the effects first appear in the older tissues, and from there progress towards the growing points. Acute deficiency severely affects growing points, and die-back commonly occurs. Symptoms of potassium deficiency in white spruce include: browning and death of needles (chlorosis); reduced growth in height and diameter; impaired retention of needles; and reduced needle length.
Micronutrients
Mo deficiency is usually found on older growth. Fe, Mn and Cu deficiencies affect new growth, causing green or yellow veins; Zn deficiency can affect both old and new leaves; and B deficiency will be seen on terminal buds. A plant with zinc deficiency may have leaves stacked on top of each other due to reduced internodal expansion.
Zinc is the most widely deficient micronutrient for industrial crop cultivation, followed by boron. Acidifying N fertilizers create micro-sites around the granule that keep micronutrient cations soluble for longer in alkaline soils, but high concentrations of P or C may negate these effects.
Boron deficiencies affecting seed yields and pollen fertility are common in laterite soils. Boron is essential for the proper forming and strengthening of cell walls. Lack of boron results in short thick cells producing stunted fruiting bodies and roots. Deficiency results in the death of the terminal growing points and stunted growth. Inadequate amounts of boron affect many agricultural crops, legume forage crops most strongly. Boron deficiencies can be detected by analysis of plant material in order to apply a correction before the obvious symptoms appear, after which it is too late to prevent crop loss. Strawberries deficient in boron will produce lumpy fruit; apricots will not blossom or, if they do, will not fruit or will drop their fruit depending on the level of boron deficit. Broadcast of boron supplements is effective and long term; a foliar spray is immediate but must be repeated.
Toxicity
Boron concentration in soil water solution higher than one ppm is toxic to most plants. Toxic concentrations within plants are 10 to 50 ppm for small grains and 200 ppm in boron-tolerant crops such as sugar beets, rutabaga, cucumbers, and conifers. Toxic soil conditions are generally limited to arid regions or can be caused by underground borax deposits in contact with water or volcanic gases dissolved in percolating water.
Availability and uptake
Nitrogen fixation
There is an abundant supply of nitrogen in the earth's atmosphere—N2 gas comprises nearly 79% of air. However, N2 is unavailable for use by most organisms because there is a triple bond between the two nitrogen atoms in the molecule, making it almost inert. In order for nitrogen to be used for growth it must be "fixed" (combined) in the form of ammonium (NH4+) or nitrate (NO3−) ions. The weathering of rocks releases these ions so slowly that it has a negligible effect on the availability of fixed nitrogen. Therefore, nitrogen is often the limiting factor for growth and biomass production in all environments where there is a suitable climate and availability of water to support life.
Microorganisms have a central role in almost all aspects of nitrogen availability, and therefore for life support on earth. Some bacteria can convert N2 into ammonia by the process termed nitrogen fixation; these bacteria are either free-living or form symbiotic associations with plants or other organisms (e.g., termites, protozoa), while other bacteria bring about transformations of ammonia to nitrate, and of nitrate to N2 or other nitrogen gases. Many bacteria and fungi degrade organic matter, releasing fixed nitrogen for reuse by other organisms. All these processes contribute to the nitrogen cycle.
Nitrogen enters the plant largely through the roots. A "pool" of soluble nitrogen accumulates. Its composition within a species varies widely depending on several factors, including day length, time of day, night temperatures, nutrient deficiencies, and nutrient imbalance. Short day length promotes asparagine formation, whereas glutamine is produced under long day regimes. Darkness favors protein breakdown accompanied by high asparagine accumulation. Night temperature modifies the effects due to night length, and soluble nitrogen tends to accumulate owing to retarded synthesis and breakdown of proteins. Low night temperature conserves glutamine; high night temperature increases accumulation of asparagine because of breakdown. Deficiency of K accentuates differences between long- and short-day plants. The pool of soluble nitrogen is much smaller than in well-nourished plants when N and P are deficient since uptake of nitrate and further reduction and conversion of N to organic forms is restricted more than is protein synthesis. Deficiencies of Ca, K, and S affect the conversion of organic N to protein more than uptake and reduction. The size of the pool of soluble N is no guide per se to growth rate, but the size of the pool in relation to total N might be a useful ratio in this regard. Nitrogen availability in the rooting medium also affects the size and structure of tracheids formed in the long lateral roots of white spruce (Krasowski and Owens 1999).
Root environment
Mycorrhiza
Phosphorus is most commonly found in the soil in the form of polyprotic phosphoric acid (H3PO4), but is taken up most readily in the form of H2PO4−. Phosphorus is available to plants in limited quantities in most soils because it is released very slowly from insoluble phosphates and is rapidly fixed once again. Under most environmental conditions it is the element that limits growth because of this constriction and due to its high demand by plants and microorganisms. Plants can increase phosphorus uptake by a mutualism with mycorrhiza. On some soils, the phosphorus nutrition of some conifers, including the spruces, depends on the ability of mycorrhizae to take up, and make available to the tree, soil phosphorus otherwise unobtainable to the non-mycorrhizal root. White spruce seedlings, greenhouse-grown in sand that tested negative for phosphorus, were very small and purple for many months until spontaneous mycorrhizal inoculation, whose effect was manifested by a greening of the foliage and the development of vigorous shoot growth.
Root temperature
When soil-potassium levels are high, plants take up more potassium than needed for healthy growth. The term luxury consumption has been applied to this. Potassium intake increases with root temperature and depresses calcium uptake. Calcium to boron ratio must be maintained in a narrow range for normal plant growth. Lack of boron causes failure of calcium metabolism which produces hollow heart in beets and peanuts.
Nutrient interactions
Calcium and magnesium inhibit the uptake of trace metals. Copper and zinc mutually reduce uptake of each other. Zinc also affects iron levels in plants. These interactions are dependent on species and growing conditions. For example, for clover, lettuce and red beet plants nearing toxic levels of zinc, copper and nickel, these three elements increased the toxicity of the others in a positive relationship. In barley, positive interaction was observed between copper and zinc, while in French beans the positive interaction occurred between nickel and zinc. Other researchers have studied the synergistic and antagonistic effects of soil conditions on lead, zinc, cadmium and copper in radish plants to develop predictive indicators for uptake, such as soil pH.
Calcium absorption is increased by water-soluble phosphate fertilizers, whereas potassium and potash fertilizers decrease the uptake of phosphorus, magnesium and calcium. For these reasons, imbalanced application of potassium fertilizers can markedly decrease crop yields.
Solubility and soil pH
Boron is available to plants over a range of pH, from 5.0 to 7.5. Boron is absorbed by plants in the form of the borate anion, BO33−. It is available to plants in moderately soluble mineral forms of Ca, Mg and Na borates and the highly soluble form of organic compounds. It is mobile in the soil, hence, it is prone to leaching. Leaching removes substantial amounts of boron in sandy soil, but little in fine silt or clay soil. Boron's fixation to those minerals at high pH can render boron unavailable, while low pH frees the fixed boron, leaving it prone to leaching in wet climates. It precipitates with other minerals in the form of borax, in which form it was first used over 400 years ago as a soil supplement. Decomposition of organic material causes boron to be deposited in the topmost soil layer. When soil dries it can cause a precipitous drop in the availability of boron to plants, as the plants cannot draw nutrients from that desiccated layer. Hence, boron deficiency diseases appear in dry weather.
Most of the nitrogen taken up by plants is from the soil in the form of nitrate (NO3−), although in acid environments such as boreal forests, where nitrification is less likely to occur, ammonium (NH4+) is more likely to be the dominant source of nitrogen. Amino acids and proteins can only be built from NH4+, so NO3− must first be reduced.
Fe and Mn become oxidized and are highly unavailable in neutral and alkaline soils.
Measurements
Nutrient status (mineral nutrient and trace element composition, also called ionome and nutrient profile) of plants is commonly portrayed by tissue elemental analysis. Interpretation of the results of such studies, however, has been controversial. During recent decades the nearly two-century-old "law of minimum" or "Liebig's law" (which states that plant growth is controlled not by the total amount of resources available, but by the scarcest resource) has been replaced by several mathematical approaches that use different models in order to take the interactions between the individual nutrients into account.
Later developments in this field were based on the fact that the nutrient elements (and compounds) do not act independently from each other (Baxter, 2015), because there may be direct chemical interactions between them, or they may influence each other's uptake, translocation, and biological action via a number of mechanisms, as exemplified in the case of ammonia.
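Liebig's law of the minimum, as described above, can be sketched numerically: relative growth tracks the scarcest nutrient relative to its requirement, not the total supply. The function name and all the numbers below are invented for illustration.

```python
def liebig_growth(supply, requirement):
    """Relative growth under Liebig's law of the minimum.

    Growth is limited by the nutrient with the smallest
    supply-to-requirement ratio, capped at 1.0 (full growth).
    Hypothetical sketch, not a calibrated model.
    """
    return min(1.0, min(supply[n] / requirement[n] for n in requirement))

supply = {"N": 80.0, "P": 10.0, "K": 60.0}        # illustrative amounts available
requirement = {"N": 100.0, "P": 40.0, "K": 50.0}  # amounts needed for full growth
print(liebig_growth(supply, requirement))  # 0.25 — limited by P, despite ample K
```

Note that adding more N or K changes nothing here; only raising P supply raises the result, which is exactly the point of the law.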
Plant nutrition in agricultural systems
Fertilizers
Boron is highly soluble in the form of borax or boric acid and is too easily leached from soil, making these forms unsuitable for use as a fertilizer. Calcium borate is less soluble and can be made from sodium tetraborate. Boron is often applied to fields as a contaminant in other soil amendments, but is not generally adequate to make up the rate of loss by cropping. The rates of application of borate to produce an adequate alfalfa crop range from 15 pounds per acre for a sandy-silt, acidic soil of low organic matter, to 60 pounds per acre for a soil with high organic matter, high cation exchange capacity and high pH. Application rates should be limited to a few pounds per acre in a test plot to determine if boron is needed generally. Otherwise, testing for boron levels in plant material is required to determine remedies. Excess boron can be removed by irrigation, assisted by application of elemental sulfur to lower the pH and increase boron solubility. Foliar sprays are used on fruit crop trees in soils of high alkalinity.
Selenium is, however, an essential mineral element for animal (including human) nutrition and selenium deficiencies are known to occur when food or animal feed is grown on selenium-deficient soils. The use of inorganic selenium fertilizers can increase selenium concentrations in edible crops and animal diets thereby improving animal health.
It is useful to apply a high phosphorus content fertilizer, such as bone meal, to perennials to help with successful root formation.
Hydroponics
Hydroponics is a method for growing plants in a water-nutrient solution without using nutrient-rich soil or substrates. Researchers and home gardeners can grow their plants in a controlled environment. The most common artificial nutrient solution is the Hoagland solution, developed by D. R. Hoagland and W. C. Snyder in 1933. The solution (known as A-Z solution) consists of all the essential macro- and micronutrients in the correct proportions necessary for most plant growth. An aerator is used to prevent an anoxic event or hypoxia. Hypoxia can affect the nutrient uptake of a plant because, without oxygen present, respiration becomes inhibited within the root cells. The nutrient film technique is a hydroponic technique in which the roots are not fully submerged. Incomplete submergence allows for adequate aeration of the roots, while a thin "film" of nutrient-rich water is pumped through the system to provide nutrients and water to the plant.
435341 | https://en.wikipedia.org/wiki/Pinus%20taeda | Pinus taeda | Pinus taeda, commonly known as loblolly pine, is one of several pines native to the Southeastern United States, from East Texas to Florida, and north to southern New Jersey. The wood industry classifies the species as a southern yellow pine. U.S. Forest Service surveys found that loblolly pine is the second-most common species of tree in the United States, after red maple. For its timber, the pine species is regarded as the most commercially important tree in the Southeastern U.S. The common name loblolly is given because the pine species is found mostly in lowlands and swampy areas.
Loblolly pine is the first among over 100 species of Pinus to have its complete genome sequenced. As of March 2014, it was the organism with the largest sequenced genome. Its genome, with 22 billion base pairs, is about seven times larger than the human genome. As of 2018, assembly of the axolotl genome (32 Gb) displaced loblolly pine as the largest assembled genome. The loblolly pine was selected as the official state tree of Arkansas in 1939.
Description
Loblolly pine can reach a height of with a diameter of . Exceptional specimens may reach tall, the largest of the southern pines. Its needles are in bundles (fascicles) of three, sometimes twisted, and measure long, an intermediate length for southern pines, shorter than those of the longleaf pine or slash pine, but longer than those of the shortleaf pine and spruce pine. The needles usually last up to two years before they fall, which gives the species its evergreen character. Needles are yellowish-green to grayish green.
Although some needles fall throughout the year due to severe weather, insect damage, and drought, most needles fall during the autumn and winter of their second year. The seed cones are green, ripening pale buff-brown, in length, broad when closed, opening to wide, each scale bearing a sharp spine long.
Bark is reddish brown and deeply fissured into irregular, broad, scaly plates on older trees. Branches are reddish-brown to dark yellowish brown.
Loblolly pines are among the fastest-growing pines, which makes the species valuable to the lumber industry. The lumber is marketed as yellow pine lumber and has uses similar to those of other southern pines, such as the stronger longleaf and shortleaf pines. Loblolly pines are also used as pulpwood. The tree grows at an average of 2 feet per year. The tallest loblolly pine currently known, which is tall, and the largest, which measures in volume, are in Congaree National Park.
Etymology and taxonomy
The word "loblolly" is a combination of "lob", referring to thick, heavy bubbling of cooking porridge, and "lolly", an old British dialect word for "broth, soup, or any other food boiled in a pot". In the southern United States, the word is used to mean "a mudhole; a mire," a sense derived from an allusion to the consistency of porridge. Hence, the pine is named as it is generally found in lowlands and swampy areas. Loblolly pines grow well in acidic clay soil, which is common throughout the South, thus are often found in large stands in rural places.
Other old names, now rarely used, include oldfield pine due to its status as an early colonizer of abandoned fields; bull pine due to its size (several other yellow pines are also often so named, especially large isolated specimens); rosemary pine due to loblolly's distinctive fragrance compared to the other southern pines; and North Carolina pine.
For the scientific name, Pinus is the Latin name for the pines and taeda refers to the resinous wood.
Ecology
With the advent of wildfire suppression, loblolly pine has become prevalent in some parts of the Deep South that were once dominated by longleaf pine and, especially in northern Florida, slash pine.
Its rate of growth is rapid, even among the generally fast-growing southern pines. The yellowish, resinous wood is prized for lumber, but is also used for wood pulp. This tree is commercially grown in extensive plantations.
Loblolly pine is the pine of the Lost Pines Forest around Bastrop, Texas, and in McKinney Roughs Nature Park along the Texas Colorado River. These are isolated populations on areas of acidic sandy soil, surrounded by alkaline clays that are poor for pine growth.
A study using loblolly pines showed that higher atmospheric carbon dioxide levels may help the trees to endure ice storms better.
Notable trees
The famous "Eisenhower Tree" on the 17th hole of Augusta National Golf Club was a loblolly pine. U.S. President Dwight D. Eisenhower, an Augusta National member, hit the tree so many times that at a 1956 club meeting, he proposed that it be cut down. Not wanting to offend the President, the club's chairman, Clifford Roberts, immediately adjourned the meeting rather than reject the request outright. In February 2014, an ice storm severely damaged the Eisenhower Tree. The opinion of arborists was that the tree could not be saved and should be removed, which it subsequently was.
The "Morris Pine" is located in southeastern Arkansas; it is over 300 years old with a diameter of and a height of .
Loblolly pine seeds were carried aboard the Apollo 14 flight. On its return, the seeds were planted in several locations in the US, including the grounds of the White House. , a number of these moon trees remain alive.
Genome
Pines are the most common conifers and the genus Pinus consists of more than 100 species. Sequencing of their genomes remained a huge challenge because of the high complexity and size. Loblolly pine became the first species with its complete genome sequenced. This was the largest genome assembled until 2018, when the axolotl genome (32Gb) was assembled.
The loblolly pine genome is made up of 22.18 billion base pairs, which is more than seven times that of humans. Conifer genomes are known to be full of repetitive DNA, which make up 82% of the genome in loblolly pine (compared to only 50% in humans). The number of genes is estimated at 50,172, of which 15,653 are already confirmed. Most of the genes are duplicates. Some genes have the longest introns observed among fully sequenced plant genomes.
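The figures quoted above can be sanity-checked with a couple of lines of arithmetic. This is only an illustrative sketch; the ~3.1 Gb human genome size used for comparison is an assumption supplied here, not a figure from the text:

```python
loblolly_bp = 22.18e9     # loblolly pine genome size, base pairs (from the text)
human_bp = 3.1e9          # approximate human genome size, base pairs (assumed)
repeat_fraction = 0.82    # repetitive DNA fraction in loblolly pine (from the text)

# Ratio of genome sizes -- consistent with "more than seven times that of humans"
print(f"{loblolly_bp / human_bp:.1f}x the human genome")            # 7.2x the human genome

# Absolute amount of repetitive DNA implied by the 82% figure
print(f"{repeat_fraction * loblolly_bp / 1e9:.1f} Gb repetitive")   # 18.2 Gb repetitive
```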
Inbreeding depression
Gymnosperms are predominantly outcrossing, but lack genetic self-incompatibility. Loblolly pine, like most gymnosperms, exhibits high levels of inbreeding depression, especially in the embryonic stage. The loblolly pine harbors an average load of at least eight lethal equivalents. A lethal equivalent is the number of deleterious genes per haploid genome whose cumulative effect is the equivalent of one lethal gene. The presence of at least eight lethal equivalents implies substantial inbreeding depression upon self-fertilization.
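The quantitative meaning of "at least eight lethal equivalents" can be sketched with the standard Morton–Crow–Muller survival model, S(F) = S(0)·e^(−B·F), where B is the inbreeding load in lethal equivalents and F is the inbreeding coefficient (F = 0.5 for self-fertilization). The snippet below is an illustrative sketch under the assumption that the eight lethal equivalents act as the load B in this model:

```python
import math

def relative_survival(b_lethal_equivalents: float, f: float) -> float:
    """Survival of inbred relative to outbred progeny under the
    Morton-Crow-Muller model: S(F)/S(0) = exp(-B * F)."""
    return math.exp(-b_lethal_equivalents * f)

# Self-fertilization (F = 0.5) with the ~8 lethal equivalents reported
# for loblolly pine: roughly 2% relative embryonic survival.
print(round(relative_survival(8, 0.5), 3))  # 0.018
```

A result on the order of 2% relative survival is what "substantial inbreeding depression upon self-fertilization" amounts to under these assumptions.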
435420 | https://en.wikipedia.org/wiki/Anaerobic%20respiration | Anaerobic respiration | Anaerobic respiration is respiration using electron acceptors other than molecular oxygen (O2). Although oxygen is not the final electron acceptor, the process still uses a respiratory electron transport chain.
In aerobic organisms undergoing respiration, electrons are shuttled to an electron transport chain, and the final electron acceptor is oxygen. Molecular oxygen is an excellent electron acceptor. Anaerobes instead use less-oxidizing substances such as nitrate (), fumarate (), sulfate (), or elemental sulfur (S). These terminal electron acceptors have smaller reduction potentials than O2, so less energy is released per oxidized molecule, and anaerobic respiration is therefore less efficient than aerobic respiration.
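The efficiency comparison can be made explicit with the standard relation between the free-energy change and the redox-potential span of the electron transport chain. The potentials below are common textbook values for the NADH donor couple and are given for illustration only (n = 2 electrons, F ≈ 96485 C/mol):

```latex
\begin{aligned}
\Delta G^{\circ\prime} &= -nF\,\Delta E^{\circ\prime},
  \qquad \Delta E^{\circ\prime} = E^{\circ\prime}_{\text{acceptor}} - E^{\circ\prime}_{\text{donor}} \\
\text{O}_2 \text{ as acceptor:}\quad
  \Delta E^{\circ\prime} &= 0.82 - (-0.32) = 1.14\ \text{V}
  \;\Rightarrow\; \Delta G^{\circ\prime} \approx -220\ \text{kJ/mol} \\
\text{NO}_3^- \text{ as acceptor:}\quad
  \Delta E^{\circ\prime} &= 0.42 - (-0.32) = 0.74\ \text{V}
  \;\Rightarrow\; \Delta G^{\circ\prime} \approx -143\ \text{kJ/mol}
\end{aligned}
```

The smaller potential span for nitrate yields correspondingly less free energy per electron pair, which is the quantitative sense in which anaerobic respiration is less efficient.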
As compared with fermentation
Anaerobic cellular respiration and fermentation generate ATP in very different ways, and the terms should not be treated as synonyms. Cellular respiration (both aerobic and anaerobic) uses highly reduced chemical compounds such as NADH and FADH2 (for example produced during glycolysis and the citric acid cycle) to establish an electrochemical gradient (often a proton gradient) across a membrane. This results in an electrical potential or ion concentration difference across the membrane. The reduced chemical compounds are oxidized by a series of respiratory integral membrane proteins with sequentially increasing reduction potentials, with the final electron acceptor being oxygen (in aerobic respiration) or another chemical substance (in anaerobic respiration). A proton motive force drives protons down the gradient (across the membrane) through the proton channel of ATP synthase. The resulting current drives ATP synthesis from ADP and inorganic phosphate.
Fermentation, in contrast, does not use an electrochemical gradient but instead uses only substrate-level phosphorylation to produce ATP. The electron acceptor NAD+ is regenerated from NADH formed in oxidative steps of the fermentation pathway by the reduction of oxidized compounds. These oxidized compounds are often formed during the fermentation pathway itself, but may also be external. For example, in homofermentative lactic acid bacteria, NADH formed during the oxidation of glyceraldehyde-3-phosphate is oxidized back to NAD+ by the reduction of pyruvate to lactic acid at a later stage in the pathway. In yeast, acetaldehyde is reduced to ethanol to regenerate NAD+.
There are two important anaerobic microbial methane formation pathways, through carbon dioxide / bicarbonate () reduction (respiration) or acetate fermentation.
Ecological importance
Anaerobic respiration is a critical component of the global nitrogen, iron, sulfur, and carbon cycles through the reduction of the oxyanions of nitrogen, sulfur, and carbon to more-reduced compounds. The biogeochemical cycling of these compounds, which depends upon anaerobic respiration, significantly impacts the carbon cycle and global warming. Anaerobic respiration occurs in many environments, including freshwater and marine sediments, soil, subsurface aquifers, deep subsurface environments, and biofilms. Even environments that contain oxygen, such as soil, have micro-environments that lack oxygen due to the slow diffusion characteristics of oxygen gas.
An example of the ecological importance of anaerobic respiration is the use of nitrate as a terminal electron acceptor, or dissimilatory denitrification, which is the main route by which fixed nitrogen is returned to the atmosphere as molecular nitrogen gas. The denitrification process is also very important in host-microbe interactions. Like mitochondria in oxygen-respiring microorganisms, some single-cellular anaerobic ciliates use denitrifying endosymbionts to gain energy. Another example is methanogenesis, a form of carbon-dioxide respiration, that is used to produce methane gas by anaerobic digestion. Biogenic methane can be a sustainable alternative to fossil fuels. However, uncontrolled methanogenesis in landfill sites releases large amounts of methane into the atmosphere, acting as a potent greenhouse gas. Sulfate respiration produces hydrogen sulfide, which is responsible for the characteristic 'rotten egg' smell of coastal wetlands and has the capacity to precipitate heavy metal ions from solution, leading to the deposition of sulfidic metal ores.
Economic relevance
Dissimilatory denitrification is widely used in the removal of nitrate and nitrite from municipal wastewater. An excess of nitrate can lead to eutrophication of waterways into which treated water is released. Elevated nitrite levels in drinking water can lead to problems due to its toxicity. Denitrification converts both compounds into harmless nitrogen gas.
Specific types of anaerobic respiration are also critical in bioremediation, which uses microorganisms to convert toxic chemicals into less-harmful molecules to clean up contaminated beaches, aquifers, lakes, and oceans. For example, toxic arsenate or selenate can be reduced to less toxic compounds by various anaerobic bacteria via anaerobic respiration. The reduction of chlorinated chemical pollutants, such as vinyl chloride and carbon tetrachloride, also occurs through anaerobic respiration.
Anaerobic respiration is useful in generating electricity in microbial fuel cells, which employ bacteria that respire solid electron acceptors (such as oxidized iron) to transfer electrons from reduced compounds to an electrode. This process can simultaneously degrade organic carbon waste and generate electricity.
Examples of electron acceptors in respiration
18340510 | https://en.wikipedia.org/wiki/Recurve%20bow | Recurve bow | In archery, a recurve bow is one of the main shapes a bow can take, with limbs that curve away from the archer when unstrung. A recurve bow stores more energy and delivers energy more efficiently than the equivalent straight-limbed bow, giving a greater amount of energy and speed to the arrow. A recurve will permit a shorter bow than the simple straight limb bow for given arrow energy, and this form was often preferred by archers in environments where long weapons could be cumbersome, such as in brush and forest terrain, or while on horseback.
Recurved limbs also put greater stress on the materials used to make the bow, and they may make more noise with the shot. Extreme recurves make the bow unstable when being strung. An unstrung recurve bow can have a confusing shape and many Native American weapons, when separated from their original owners and cultures, were incorrectly strung backwards and destroyed when attempts were made to shoot them. A test performed by Hepworth and Smith in 2002 of a preparation manufactured from bovine tendon and pearl glue and used in traditional Asiatic recurve bows showed that the composite "was found to absorb 18 MJ/m3 of energy to failure, comparable to carbon fibre composites, spring steel and butyl rubber."
Historical use
Recurve bows made out of composite materials were used by, among other groups, the Persians, Parthians, Sarmatians, Scythians, Alans, Dacians, Cumans, Hyksos, Magyars, Huns, Bulgars, Greeks, Turks, Mongols, Koreans and Chinese.
The recurve bow spread to Egypt and much of Asia in the second millennium BC.
Perhaps the most ancient written record of the use of recurved bows is found in Psalm 78:57 ("They were turned aside like a deceitful bow" KJV), which is dated by most scholars to the eighth century BC.
19th century Bible scholar Adam Clarke pointed out that "If a person, who is unskillful or weak, attempt to recurve and string one of these bows, if he take not great heed, it will spring back, and regain its quiescent position; and, perhaps, break his arm. And sometimes I have known it, when bent, to start aside, - regain its quiescent position, to my no small danger... this is precisely the kind of bow mentioned by Homer, Odyssey xxi, which none of Penelope's suitors could bend, called καμπυλα τοξα [] in the state of rest; but τοξον παλιντονον [], the recurved bow when prepared for use."
The standard weapon of Roman imperial archers was a composite recurve, and the stiffening laths (also called siyah in Arabic/Asian bows and szarv (horns) in Hungarian bows) used to form the actual recurved ends have been found on Roman sites throughout the Empire, as far north as Bar Hill Fort on the Antonine Wall in Scotland.
The Turkish archer used recurve bows, which were manufactured from laminates of wood glued with animal tissue like horn and sinew, to great destructive effect during the reign of the Ottomans.
Its use by the Mongol armies allowed massed individuals on horseback to raid from the Pacific to central Europe, thanks to the relatively short length of recurve bows, with which archers could maneuver while seated on their mount. The rise of the Mongols can be partially attributed to the good range and power of the bows of Genghis Khan's armies. These bows were made of a bamboo core, with horn on the belly (facing towards the archer) and sinew on the back, bound together with animal glue.
During the Middle Ages composite recurve bows were used in the drier European countries because the laminate glue would not moisten and thereby lose its adhesive power; the all-wooden straight longbow was the normal form in wetter areas. Recurve bows depicted in the British Isles (see illustrations in "The Great War Bow") may have been composite weapons, or wooden bows with ends recurved by heat and force, or simply artistic licence.
The bows of many Indigenous North American peoples were recurved, especially West Coast Indian bows.
Recurve bows went out of widespread use in warfare with the greater availability of effective firearms in various nations at the end of the 19th century.
In ancient China, the recurve bow had a long history of use in battle. The use of recurve bows was widely recorded during the wars between the Song dynasty and the Liao and Jin states. During the Ming dynasty, the design was further modified into what became known as the Ming-style recurve bow.
Modern use
Self bows, composite bows, and laminated bows using the recurve form are still made and used by bowyers, amateurs, and professional archers.
The unqualified phrase "recurve bow" or just "a recurve" in modern archery circles usually refers to a typical modern recurve bow, as used by archers in the Olympics and many other competitive events. It employs advanced technologies and materials. The limbs are usually made from multiple layers of fibreglass, carbon and/or wood on a core of carbon foam or wood. The riser (the centre section of the bow) is generally separate and is constructed from wood, carbon, aluminium alloy or magnesium alloy. The term 'riser' is used because, in a one-piece bow, the centre section rises from the limbs in a taper to spread the stress. Several manufacturers produce risers made of carbon fibre (with metal fittings) or aluminium with carbon fibre. Risers for beginners are usually made of wood or plastic. The synthetic materials allow economic, predictable manufacture for consistent performance. The greater mass of a modern bow is in itself an aid to stability, and therefore to accuracy. However, accuracy is also related to a bow's draw weight, as well as how well an archer handles it. It is therefore imperative for an archer, particularly a beginner, never to overestimate their capabilities, and to choose a draw weight that is appropriate for their body build and level of experience.
The modern recurve is the only form of bow permitted in the Olympics (though the compound bow is permitted in some categories at the Paralympic Games) and is the most widely used by European and Asian sporting archers.
There is a movement to have future Olympic Games include the compound bow in competition, as its technology is now widely available, which would make competitive stat-tracking and testing easier.
The modern Olympic-style recurve is a development of the American flatbow, with rectangular-section limbs that taper towards the limb tips. Most recurves today are "take-down" bows; that is, the limbs can be detached from the riser, for ease of transportation and storage as well as interchangeability. Older recurves and some modern hunting recurves are one-piece bows. Hunters often prefer one-piece bows over take-down bows, because the limb pockets on take-down bows can make unwanted noise while drawing.
Barebow is another type of modern recurve bow. It usually uses the same riser and limbs as a recurve, but lacks a sight, stabilizers, and clicker. While they may still look similar, it is tuned differently with a negative tiller and a different weight distribution. This is due to the archer's anchor point being on the corner of the mouth instead of below the chin.
Terminology
Arrow rest Where the arrow rests during draw. These may be simple fixed rests or may be spring-loaded or magnetic flip rests.
Back The face of the bow on the opposite side to the string
Belly The face of the bow on the same side as the string
Bow sight An aiming aid attached to the riser
Brace height The distance between the deepest part of the grip and the string; fistmele is the traditional term, referring to the equivalent length of a closed fist with the thumb extended, indicating the proper traditional distance used between the deepest part of the grip and the string.
Grip The part of the bow held by the bow hand
Limbs The upper and lower working parts of the bow, which come in a variety of different poundages
Nocking point The place on the bowstring where the arrow nock is fitted
Riser The rigid centre section of a bow to which the limbs are attached
String The cord that attaches to both limb tips and transforms stored energy from the limbs into kinetic energy in the arrow
Sling A strap or cord attached to the bow handle, wrist or fingers to prevent the bow from falling from the hand
Finger tab or thumb ring A protection for the fingers that draw the string. Can also provide a better release performance. Usually made of leather.
Tiller The difference between the limb-string distances measured where the limbs are attached to the riser. Usually the upper distance is slightly more than the bottom one, resulting in a positive tiller. Reflects the power-balance between both limbs.
Other equipment
Archers often have many other pieces of equipment attached to their recurve bows, such as:
Clicker a blade or wire device fitted to the riser, positioned to drop off the arrow when the archer has reached optimum draw length. Used correctly, this ensures the same cast-force each time. Many archers train themselves to shoot automatically when the clicker 'clicks' off the arrow.
Kisser a button or nodule attached to the bowstring. The archer touches the kisser to the same spot on the face each time (usually the lips, hence the name) to give a consistent vertical reference.
Plunger button a fine-tuning device consisting of a spring-cushioned tip inside a housing. The plunger button screws through the riser so that the tip emerges above the rest. The side of the arrow is in contact with the tip when the arrow is on the rest. The spring is tuned so that it allows a certain amount of movement of the arrow towards the riser on release, bringing the arrow to the ideal "centre shot" location. The plunger button is used to compensate for the arrow's flex since the arrow flexes as the string pushes onto it with a very high acceleration. The device is also known as a cushion plunger, pressure button, or Berger button.
Stabilizers weight-bearing rods attached to a recurve bow to balance the bow to the archer's liking, and to dampen the effect of torque and dissipate vibration.
1188676 | https://en.wikipedia.org/wiki/Kelp%20forest | Kelp forest | Kelp forests are underwater areas with a high density of kelp, which covers a large part of the world's coastlines. Smaller areas of anchored kelp are called kelp beds. They are recognized as one of the most productive and dynamic ecosystems on Earth. Although algal kelp forest combined with coral reefs only cover 0.1% of Earth's total surface, they account for 0.9% of global primary productivity. Kelp forests occur worldwide throughout temperate and polar coastal oceans. In 2007, kelp forests were also discovered in tropical waters near Ecuador.
Physically formed by brown macroalgae, kelp forests provide a unique habitat for marine organisms and are a source for understanding many ecological processes. Over the last century, they have been the focus of extensive research, particularly in trophic ecology, and continue to provoke important ideas that are relevant beyond this unique ecosystem. For example, kelp forests can influence coastal oceanographic patterns and provide many ecosystem services.
However, the influence of humans has often contributed to kelp forest degradation. Of particular concern are the effects of overfishing nearshore ecosystems, which can release herbivores from their normal population regulation and result in the overgrazing of kelp and other algae. This can rapidly result in transitions to barren landscapes where relatively few species persist. Already due to the combined effects of overfishing and climate change, kelp forests have all but disappeared in many especially vulnerable places, such as Tasmania's east coast and the coast of Northern California. The implementation of marine protected areas is one management strategy useful for addressing such issues, since it may limit the impacts of fishing and buffer the ecosystem from additive effects of other environmental stressors.
Kelp
The term kelp refers to marine algae belonging to the order Laminariales (phylum: Ochrophyta). Though not considered a taxonomically diverse order, kelps are highly diverse structurally and functionally. The most widely recognized species are the giant kelps (Macrocystis spp.), although numerous other genera such as Laminaria, Ecklonia, Lessonia, Nereocystis, Alaria, and Eisenia are described.
A wide range of sea life uses kelp forests for protection or food, including fish (in North Pacific kelp forests, particularly rockfish) and many invertebrates, such as amphipods, shrimp, marine snails, bristle worms, and brittle stars. Many marine mammals and birds are also found, including seals, sea lions, whales, sea otters, gulls, terns, snowy egrets, great blue herons, and cormorants, as well as some shore birds.
Frequently considered an ecosystem engineer, kelp provides a physical substrate and habitat for kelp forest communities. In algae (kingdom Protista), the body of an individual organism is known as a thallus rather than as a plant (kingdom Plantae). The morphological structure of a kelp thallus is defined by three basic structural units:
The holdfast is a root-like mass that anchors the thallus to the sea floor, though unlike true roots it is not responsible for absorbing and delivering nutrients to the rest of the thallus.
The stipe is analogous to a plant stalk, extending vertically from the holdfast and providing a support framework for other morphological features.
The fronds are leaf- or blade-like attachments extending from the stipe, sometimes along its full length, and are the sites of nutrient uptake and photosynthetic activity.
In addition, many kelp species have pneumatocysts, or gas-filled bladders, usually located at the base of fronds near the stipe. These structures provide the necessary buoyancy for kelp to maintain an upright position in the water column.
The environmental factors necessary for kelp to survive include hard substrate (usually rock or sand), high nutrients (e.g., nitrogen, phosphorus), and light (minimum annual irradiance dose > 50 E m−2). Especially productive kelp forests tend to be associated with areas of significant oceanographic upwelling, a process that delivers cool, nutrient-rich water from depth to the ocean's mixed surface layer. Water flow and turbulence facilitate nutrient assimilation across kelp fronds throughout the water column. Water clarity affects the depth to which sufficient light can be transmitted. In ideal conditions, giant kelp (Macrocystis spp.) can grow as much as 30–60 cm vertically per day. Some species, such as Nereocystis, are annuals, while others such as Eisenia are perennials, living for more than 20 years. In perennial kelp forests, maximum growth rates occur during upwelling months (typically spring and summer) and die-backs correspond to reduced nutrient availability, shorter photoperiods, and increased storm frequency.
Kelps are primarily associated with temperate and arctic waters worldwide. Of the more dominant genera, Laminaria is mainly associated with both sides of the Atlantic Ocean and the coasts of China and Japan; Ecklonia is found in Australia, New Zealand, and South Africa; and Macrocystis occurs throughout the northeastern and southeastern Pacific Ocean, Southern Ocean archipelagos, and in patches around Australia, New Zealand, and South Africa. The region with the greatest diversity of kelps (>20 species) is the northeastern Pacific, from north of San Francisco, California, to the Aleutian Islands, Alaska.
Although kelp forests are unknown in tropical surface waters, a few species of Laminaria have been known to occur exclusively in tropical deep waters. This general absence of kelp from the tropics is believed to be mostly due to insufficient nutrient levels associated with warm, oligotrophic waters. One recent study spatially overlaid the requisite physical parameters for kelp with mean oceanographic conditions and produced a model predicting the existence of subsurface kelps throughout the tropics worldwide to depths of . For a hotspot in the Galapagos Islands, the local model was improved with fine-scale data and tested; the research team found thriving kelp forests in all eight of their sampled sites, all of which had been predicted by the model, thus validating their approach. This suggests that their global model might actually be fairly accurate, and if so, kelp forests would be prolific in tropical subsurface waters worldwide. The importance of this contribution has been rapidly acknowledged within the scientific community and has prompted an entirely new trajectory of kelp forest research, highlighting the potential for kelp forests to provide marine organisms spatial refuge under climate change and providing possible explanations for evolutionary patterns of kelps worldwide.
Ecosystem architecture
The architecture of a kelp forest ecosystem is based on its physical structure, which influences the associated species that define its community structure. Structurally, the ecosystem includes three guilds of kelp and two guilds occupied by other algae:
Canopy kelps include the largest species and often constitute floating canopies that extend to the ocean surface (e.g., Macrocystis and Alaria).
Stipitate kelps generally extend a few meters above the sea floor and can grow in dense aggregations (e.g., Eisenia and Ecklonia).
Prostrate kelps lie near and along the sea floor (e.g., Laminaria).
The benthic assemblage is composed of other algal species (e.g., filamentous and foliose functional groups, articulated corallines) and sessile organisms along the ocean bottom.
Encrusting coralline algae directly and often extensively cover geologic substrate.
Multiple kelp species often co-exist within a forest; the term understory canopy refers to the stipitate and prostrate kelps. For example, a Macrocystis canopy may extend many meters above the seafloor towards the ocean surface, while an understory of the kelps Eisenia and Pterygophora reaches upward only a few meters. Beneath these kelps, a benthic assemblage of foliose red algae may occur. The dense vertical infrastructure with overlying canopy forms a system of microenvironments similar to those observed in a terrestrial forest, with a sunny canopy region, a partially shaded middle, and darkened seafloor. Each guild has associated organisms, which vary in their levels of dependence on the habitat, and the assemblage of these organisms can vary with kelp morphologies. For example, in California Macrocystis pyrifera forests, the nudibranch Melibe leonina and the skeleton shrimp Caprella californica are closely associated with surface canopies; the kelp perch Brachyistius frenatus, rockfish Sebastes spp., and many other fishes are found within the stipitate understory; brittle stars and turban snails Tegula spp. are closely associated with the kelp holdfast, while various herbivores, such as sea urchins and abalone, live under the prostrate canopy; many seastars, hydroids, and benthic fishes live among the benthic assemblages; solitary corals, various gastropods, and echinoderms live over the encrusting coralline algae. In addition, pelagic fishes and marine mammals are loosely associated with kelp forests, usually interacting near the edges as they visit to feed on resident organisms.
Trophic ecology
Classic studies in kelp forest ecology have largely focused on trophic interactions (the relationships between organisms and their food webs), particularly the understanding of bottom-up and top-down trophic processes. Bottom-up processes are generally driven by the abiotic conditions required for primary producers to grow, such as availability of light and nutrients, and the subsequent transfer of energy to consumers at higher trophic levels. For example, the occurrence of kelp is frequently correlated with oceanographic upwelling zones, which provide unusually high concentrations of nutrients to the local environment. This allows kelp to grow and subsequently support herbivores, which in turn support consumers at higher trophic levels. By contrast, in top-down processes, predators limit the biomass of species at lower trophic levels through consumption. In the absence of predation, these lower-level species flourish because resources that support their energetic requirements are not limiting. In a well-studied example from Alaskan kelp forests, sea otters (Enhydra lutris) control populations of herbivorous sea urchins through predation. When sea otters are removed from the ecosystem (for example, by human exploitation), urchin populations are released from predatory control and grow dramatically. This leads to increased herbivore pressure on local kelp stands. Deterioration of the kelp itself results in the loss of physical ecosystem structure and subsequently, the loss of other species associated with this habitat. In Alaskan kelp forest ecosystems, sea otters are the keystone species that mediates this trophic cascade. In Southern California, kelp forests persist without sea otters and the control of herbivorous urchins is instead mediated by a suite of predators including lobsters and large fishes, such as the California sheephead.
The effect of removing one predatory species in this system differs from Alaska because redundancy exists in the trophic levels and other predatory species can continue to regulate urchins. However, the removal of multiple predators can effectively release urchins from predator pressure and allow the system to follow trajectories towards kelp forest degradation. Similar examples exist in Nova Scotia, South Africa, Australia, and Chile. The relative importance of top-down versus bottom-up control in kelp forest ecosystems and the strengths of trophic interactions continue to be the subject of considerable scientific investigation.
The transition from macroalgal (i.e. kelp forest) to denuded landscapes dominated by sea urchins (or ‘urchin barrens’) is a widespread phenomenon, often resulting from trophic cascades like those described above; the two phases are regarded as alternative stable states of the ecosystem. The recovery of kelp forests from barren states has been documented following dramatic perturbations, such as urchin disease or large shifts in thermal conditions. Recovery from intermediate states of deterioration is less predictable and depends on a combination of abiotic factors and biotic interactions in each case.
Though urchins are usually the dominant herbivores, others with significant interaction strengths include seastars, isopods, kelp crabs, and herbivorous fishes. In many cases, these organisms feed on kelp that has been dislodged from substrate and drifts near the ocean floor rather than expend energy searching for intact thalli on which to feed. When sufficient drift kelp is available, herbivorous grazers do not exert pressure on attached thalli; when drift subsidies are unavailable, grazers directly impact the physical structure of the ecosystem. Many studies in Southern California have demonstrated that the availability of drift kelp specifically influences the foraging behavior of sea urchins. Drift kelp and kelp-derived particulate matter have also been important in subsidizing adjacent habitats, such as sandy beaches and the rocky intertidal.
Patch dynamics
Another major area of kelp forest research has been directed at understanding the spatial-temporal patterns of kelp patches. Not only do such dynamics affect the physical landscape, but they also affect species that associate with kelp for refuge or foraging activities. Large-scale environmental disturbances have offered important insights concerning mechanisms and ecosystem resilience. Examples of environmental disturbances include:
Acute and chronic pollution events have been shown to impact southern California kelp forests, though the intensity of the impact seems to depend on both the nature of the contaminants and duration of exposure. Pollution can include sediment deposition and eutrophication from sewage, industrial byproducts and contaminants like PCBs and heavy metals (for example, copper, zinc), runoff of organophosphates from agricultural areas, anti-fouling chemicals used in harbors and marinas (for example, TBT and creosote) and land-based pathogens like fecal coliform bacteria.
Catastrophic storms can remove surface kelp canopies through wave activity, but usually leave understory kelps intact; they can also remove urchins when little spatial refuge is available. Interspersed canopy clearings create a seascape mosaic where sunlight penetrates deeper into the kelp forest and species that are normally light-limited in the understory can flourish. Similarly, substrate cleared of kelp holdfasts can provide space for other sessile species to establish themselves and occupy the seafloor, sometimes directly competing with juvenile kelp and even inhibiting their settlement.
El Niño-Southern Oscillation (ENSO) events involve the depression of oceanographic thermoclines, severe reductions of nutrient input, and changes in storm patterns. Stress due to warm water and nutrient depletion can increase the susceptibility of kelp to storm damage and herbivorous grazing, sometimes even prompting phase shifts to urchin-dominated landscapes. In general, oceanographic conditions (that is, water temperature, currents) influence the recruitment success of kelp and its competitors, which clearly affect subsequent species interactions and kelp forest dynamics.
Overfishing higher trophic levels that naturally regulate herbivore populations is also recognized as an important stressor in kelp forests. As described in the previous section, the drivers and outcomes of trophic cascades are important for understanding spatial-temporal patterns of kelp forests.
In addition to ecological monitoring of kelp forests before, during, and after such disturbances, scientists try to tease apart the intricacies of kelp forest dynamics using experimental manipulations. By working on smaller spatial-temporal scales, they can control for the presence or absence of specific biotic and abiotic factors to discover the operative mechanisms. For example, in southern Australia, manipulations of kelp canopy types demonstrated that the relative amount of Ecklonia radiata in a canopy could be used to predict understory species assemblages; consequently, the proportion of E. radiata can be used as an indicator of other species occurring in the environment.
Human use
Kelp forests have been important to human existence for thousands of years. Indeed, many now theorise that the first colonisation of the Americas was due to fishing communities following the Pacific kelp forests during the last ice age. One theory contends that the kelp forests that would have stretched from northeast Asia to the American Pacific coast would have provided many benefits to ancient boaters. The kelp forests would have provided many sustenance opportunities, as well as acting as a type of buffer from rough water. Besides these benefits, researchers believe that the kelp forests might have helped early boaters navigate, acting as a type of "kelp highway". Theorists also suggest that the kelp forests would have helped these ancient colonists by providing a stable way of life and preventing them from having to adapt to new ecosystems and develop new survival methods even as they traveled thousands of miles.
Modern economies are based on fisheries of kelp-associated species such as lobster and rockfish. Humans can also harvest kelp directly to feed aquaculture species such as abalone and to extract the compound alginic acid, which is used in products like toothpaste and antacids. Kelp forests are valued for recreational activities such as SCUBA diving and kayaking; the industries that support these sports represent one benefit related to the ecosystem and the enjoyment derived from these activities represents another. All of these are examples of ecosystem services provided specifically by kelp forests. The Monterey Bay Aquarium was the first aquarium to exhibit a living kelp forest.
As carbon sequesters
Kelp forests grow on rocky shores that are constantly eroding, carrying material out to the deep sea. Dislodged kelp then sinks to the ocean floor and stores carbon where it is unlikely to be disturbed by human activity. Researchers from the University of Western Australia estimated that kelp forests around Australia sequester 1.3–2.8 teragrams of carbon per year, which is 27–34% of the total annual blue carbon sequestered on the Australian continent by tidal marshes, mangrove forests, and seagrass beds. Every year about 200 million tons of carbon dioxide are sequestered by macroalgae such as kelp.
Threats and management
Given the complexity of kelp forests – their variable structure, geography, and interactions – they pose a considerable challenge to environmental managers. Extrapolating even well-studied trends to the future is difficult because interactions within the ecosystem will change under variable conditions, not all relationships in the ecosystem are understood, and the nonlinear thresholds to transitions are not yet recognized.
Major issues of concern include marine pollution and water quality, kelp harvesting and fisheries, invasive species, and climate change. The most pressing threat to kelp forest preservation may be the overfishing of coastal ecosystems, which by removing higher trophic levels facilitates their shift to depauperate urchin barrens. The maintenance of biodiversity is recognized as a way of generally stabilizing ecosystems and their services through mechanisms such as functional compensation and reduced susceptibility to foreign species invasions. More recently, the 2022 IPCC report states that kelp and other seaweeds in most regions are undergoing mass mortalities from high temperature extremes and range shifts from warming, as they are stationary and cannot adapt quickly enough to the rapidly rising temperature of the Earth and thus of the ocean.
In many places, managers have opted to regulate the harvest of kelp and/or the taking of kelp forest species by fisheries. While these may be effective in one sense, they do not necessarily protect the entirety of the ecosystem. Marine protected areas (MPAs) offer a unique solution that encompasses not only target species for harvesting, but also the interactions surrounding them and the local environment as a whole. Direct benefits of MPAs to fisheries (for example, spillover effects) have been well documented around the world. Indirect benefits have also been shown for several cases among species such as abalone and fishes in Central California. Most importantly, MPAs can be effective at protecting existing kelp forest ecosystems and may also allow for the regeneration of those that have been affected.
Kelp forest restoration in California
In the 2010s, Northern California lost 95% of its kelp ecosystems due to marine heatwaves.
Kelp bed recovery efforts in California are primarily focusing on sea urchin removal, both by scuba divers, and by sea otters, which are natural predators.
A brown alga, Sargassum horneri, an invasive species first spotted in 2003, has also been a concern.
The sunflower sea star is an important keystone species that helps control sea urchin abundance, but an outbreak of sea star wasting disease and vulnerability to climate change have led to its critical endangerment.
Researchers at the Bodega Marine Laboratory of UC Davis are developing replanting strategies, and volunteers of the Orange County Coastkeeper group are replanting giant kelp. Humboldt State University began cultivating bull kelp in its research farm in 2021.
Research efforts at the state level to prevent kelp forest collapse in California were announced in July 2020.
At the federal level, H.R. 4458, the Keeping Ecosystems Living and Productive (KELP) Act, introduced July 29, 2021, seeks to establish a new grant program within NOAA for kelp forest restoration.
Ocean Rainforest, a Faroe Islands-based company, secured $4.5 million in U.S. government funding to grow giant kelp on an 86-acre farm off the coast of Santa Barbara, California.
Global conservation efforts
The United Nations Environment Programme Norwegian Blue Forests Network 2023 report titled 'Into the Blue: Securing a Sustainable Future for Kelp Forests' documents a global decline in kelp forests, with an annual reduction rate of 1.8%. Over the past 50 years, 40–60% of these ecosystems have degraded due to factors such as climate change, poor water quality, and overfishing. The report underscores the urgency of implementing global conservation efforts and emphasizes the need for international cooperation to adopt area-based management strategies. These strategies aim to mitigate the aforementioned impacts and enhance the resilience and sustainability of kelp forests.
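The report's two figures are mutually consistent: compounding a 1.8% annual decline over a 50-year window lands at the upper end of the 40–60% cumulative loss quoted above. A quick check (illustrative only, not from the report):

```python
# Compound a 1.8% annual decline over 50 years to see what
# cumulative loss in kelp forest extent it implies.
annual_decline = 0.018
years = 50

remaining = (1 - annual_decline) ** years
cumulative_loss = 1 - remaining

print(f"remaining fraction after {years} years: {remaining:.2f}")  # ~0.40
print(f"cumulative loss: {cumulative_loss:.0%}")                   # ~60%
```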
Kelp forest restoration, practiced in 16 countries over 300 years, has gained momentum, particularly from 2009 to 2019, involving diverse societal sectors such as academia, governments, and businesses. Large-scale restoration successes demonstrate its feasibility, with the best outcomes often near existing kelp forests, emphasizing the importance of preventing their decline. However, challenges persist, including the need for cost-effective methods, funding mechanisms, and adaptations to climate change. This restoration work not only supports ecological recovery but also offers significant social and economic benefits, aligning with the United Nations Sustainable Development Goals (SDGs), and underscores the importance of multi-sector collaboration.
Seabed
The seabed (also known as the seafloor, sea floor, ocean floor, and ocean bottom) is the bottom of the ocean. All floors of the ocean are known as 'seabeds'.
The structure of the seabed of the global ocean is governed by plate tectonics. Most of the ocean is very deep, where the seabed is known as the abyssal plain. Seafloor spreading creates mid-ocean ridges along the center line of major ocean basins, where the seabed is slightly shallower than the surrounding abyssal plain. From the abyssal plain, the seabed slopes upward toward the continents and becomes, in order from deep to shallow, the continental rise, slope, and shelf. The depth within the seabed itself, such as the depth down through a sediment core, is known as the "depth below seafloor". The ecological environment of the seabed and the deepest waters are collectively known, as a habitat for creatures, as the "benthos".
Most of the seabed throughout the world's oceans is covered in layers of marine sediments. Categorized by origin or composition, these sediments are classified as terrigenous (from land), biogenous (from biological organisms), hydrogenous (from chemical reactions), or cosmogenous (from space). Categorized by size, these sediments range from very small particles called clays and silts, known as mud, to larger particles from sand to boulders.
Features of the seabed are governed by the physics of sediment transport and by the biology of the creatures living in the seabed and in the ocean waters above. Physically, seabed sediments often come from the erosion of material on land and from other rarer sources, such as volcanic ash. Sea currents transport sediments, especially in shallow waters where tidal energy and wave energy cause resuspension of seabed sediments. Biologically, microorganisms living within the seabed sediments change seabed chemistry. Marine organisms create sediments, both within the seabed and in the water above. For example, phytoplankton with silicate or calcium carbonate shells grow in abundance in the upper ocean, and when they die, their shells sink to the seafloor to become seabed sediments.
Human impacts on the seabed are diverse. Examples of human effects on the seabed include exploration, plastic pollution, and exploitation by mining and dredging operations. To map the seabed, ships use acoustic technology to map water depths throughout the world. Submersible vehicles help researchers study unique seabed ecosystems such as hydrothermal vents. Plastic pollution is a global phenomenon, and because the ocean is the ultimate destination for global waterways, much of the world's plastic ends up in the ocean and some sinks to the seabed. Exploitation of the seabed involves extracting valuable minerals from sulfide deposits via deep sea mining, as well as dredging sand from shallow environments for construction and beach nourishment.
Structure
Most of the oceans have a common structure, created by common physical phenomena, mainly tectonic movement and sediment from various sources. Starting from the continents, the seabed usually begins with a continental shelf, continues down the continental slope – a steep descent into the ocean – and reaches the abyssal plain, a topographic plain that forms the main area of the seabed. The border between the continental slope and the abyssal plain usually has a more gradual descent and is called the continental rise, which is caused by sediment cascading down the continental slope.
The mid-ocean ridge, as its name implies, is a mountainous rise through the middle of all the oceans, between the continents. Typically a rift runs along the edge of this ridge. Along tectonic plate edges there are typically oceanic trenches – deep valleys created by the mantle circulation that moves from the mid-ocean ridge toward the oceanic trench.
Hotspot volcanic island ridges are created by volcanic activity, erupting periodically as the tectonic plates pass over a hotspot. In areas with volcanic activity and in the oceanic trenches there are hydrothermal vents – releasing extremely hot, high-pressure water and chemicals into the typically near-freezing water around them.
Deep ocean water is divided into layers or zones, each with typical features of salinity, pressure, temperature and marine life, according to their depth. Lying along the top of the abyssal plain is the abyssal zone, whose lower boundary lies at about 6,000 m (20,000 ft). The hadal zone – which includes the oceanic trenches, lies between 6,000 and 11,000 metres (20,000–36,000 ft) and is the deepest oceanic zone.
Depth below seafloor
Depth below seafloor is a vertical coordinate used in geology, paleontology, oceanography, and petrology (see ocean drilling).
The acronym "mbsf" (meaning "meters below the seafloor") is a common convention used for depths below the seafloor.
Sediments
Sediments in the seabed vary in origin: eroded land materials carried into the ocean by rivers or wind, the waste and remains of sea creatures, chemicals precipitating within the sea water itself, and some material from outer space. There are four basic types of sediment of the sea floor:
Terrigenous (also lithogenous) describes the sediment from continents eroded by rain, rivers, and glaciers, as well as sediment blown into the ocean by the wind, such as dust and volcanic ash.
Biogenous material is the sediment made up of the hard parts of sea creatures, mainly phytoplankton, that accumulate on the bottom of the ocean.
Hydrogenous sediment is material that precipitates in the ocean when oceanic conditions change, or material created in hydrothermal vent systems.
Cosmogenous sediment comes from extraterrestrial sources.
Terrigenous and biogenous
Terrigenous sediment is the most abundant sediment found on the seafloor. Terrigenous sediments come from the continents. These materials are eroded from continents and transported by wind and water to the ocean. Fluvial sediments are transported from land by rivers and glaciers, such as clay, silt, mud, and glacial flour. Aeolian sediments are transported by wind, such as dust and volcanic ash.
Biogenous sediment is the next most abundant material on the seafloor. Biogenous sediments are biologically produced by living creatures. Sediments made up of at least 30% biogenous material are called "oozes." There are two types of oozes: calcareous oozes and siliceous oozes. Plankton grow in ocean waters and create the materials that become oozes on the seabed. Calcareous oozes are predominantly composed of calcium shells found in phytoplankton such as coccolithophores and zooplankton like the foraminiferans. These calcareous oozes are never found deeper than about 4,000 to 5,000 meters because at greater depths the calcium carbonate dissolves. Similarly, siliceous oozes are dominated by the siliceous shells of phytoplankton like diatoms and zooplankton such as radiolarians. Depending on the productivity of these planktonic organisms, the shell material that collects when these organisms die may build up at a rate anywhere from 1 mm to 1 cm every 1000 years.
Hydrogenous and cosmogenous
Hydrogenous sediments are uncommon. They only occur with changes in oceanic conditions such as temperature and pressure. Rarer still are cosmogenous sediments. Hydrogenous sediments form from dissolved chemicals that precipitate out of the ocean water; along the mid-ocean ridges, they can also form when metallic elements bind onto rocks around which water hotter than 300 °C circulates. When these elements mix with the cold sea water, they precipitate from the cooling water. One example is manganese nodules, which are composed of layers of different metals such as manganese, iron, nickel, cobalt, and copper, and are always found on the surface of the ocean floor.
Cosmogenous sediments are the remains of space debris such as comets and asteroids, made up of silicates and various metals that have impacted the Earth.
Size classification
Another way that sediments are described is through their descriptive classification. These sediments vary in size, anywhere from 1/4096 of a mm to greater than 256 mm. The different types are: boulder, cobble, pebble, granule, sand, silt, and clay, each type becoming finer in grain. The grain size indicates the type of sediment and the environment in which it was created. Larger grains sink faster and can only be pushed by rapid flowing water (high energy environment) whereas small grains sink very slowly and can be suspended by slight water movement, accumulating in conditions where water is not moving so quickly. This means that larger grains of sediment may come together in higher energy conditions and smaller grains in lower energy conditions.
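The size classification described above can be sketched as a simple lookup from grain diameter to class name. The cutoff values below are the conventional Wentworth-style boundaries in millimetres, which are an assumption here rather than taken from the text:

```python
# Classify a sediment grain by diameter (in mm) into the size classes
# named above, from finest (clay) to coarsest (boulder).
GRAIN_CLASSES = [
    (1 / 256, "clay"),   # below ~0.0039 mm
    (1 / 16, "silt"),    # clay + silt together are "mud"
    (2.0, "sand"),
    (4.0, "granule"),
    (64.0, "pebble"),
    (256.0, "cobble"),
]

def classify_grain(diameter_mm: float) -> str:
    """Return the size-class name for a grain of the given diameter."""
    for upper_bound, name in GRAIN_CLASSES:
        if diameter_mm < upper_bound:
            return name
    return "boulder"  # anything 256 mm and larger

print(classify_grain(0.001))  # clay
print(classify_grain(0.5))    # sand
print(classify_grain(300.0))  # boulder
```

Finer grains map to lower-energy depositional environments, as the paragraph above explains: a grain's class is a rough proxy for how fast the water that deposited it was moving.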
Benthos
Topography
Seabed topography (ocean topography or marine topography) refers to the shape of the land (topography) when it interfaces with the ocean. These shapes are obvious along coastlines, but they occur also in significant ways underwater. The effectiveness of marine habitats is partially defined by these shapes, including the way they interact with and shape ocean currents, and the way sunlight diminishes when these landforms occupy increasing depths. Tidal networks depend on the balance between sedimentary processes and hydrodynamics; however, anthropogenic influences can impact the natural system more than any physical driver.
Marine topographies include coastal and oceanic landforms ranging from coastal estuaries and shorelines to continental shelves and coral reefs. Further out in the open ocean, they include underwater and deep sea features such as ocean rises and seamounts. The submerged surface has mountainous features, including a globe-spanning mid-ocean ridge system, as well as undersea volcanoes, oceanic trenches, submarine canyons, oceanic plateaus and abyssal plains.
The mass of the oceans is approximately 1.35×10¹⁸ metric tons, or about 1/4400 of the total mass of the Earth. The oceans cover an area of 3.618×10⁸ km² with a mean depth of 3,682 m, resulting in an estimated volume of 1.332×10⁹ km³.
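These figures are internally consistent: multiplying the ocean's surface area (3.618×10⁸ km², the standard value) by its mean depth (3,682 m) reproduces the quoted volume of about 1.332×10⁹ km³. A quick check:

```python
# Verify that ocean surface area times mean depth gives the quoted volume.
area_km2 = 3.618e8     # ocean surface area in km^2
mean_depth_km = 3.682  # mean depth: 3,682 m expressed in km

volume_km3 = area_km2 * mean_depth_km
print(f"{volume_km3:.3e} km^3")  # ~1.332e+09 km^3
```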
Features
Each region of the seabed has typical features such as common sediment composition, typical topography, salinity of water layers above it, marine life, magnetic direction of rocks, and sedimentation. Some features of the seabed include flat abyssal plains, mid-ocean ridges, deep trenches, and hydrothermal vents.
Seabed topography is flat where layers of sediments cover the tectonic features. For example, the abyssal plain regions of the ocean are relatively flat and covered in many layers of sediments. Sediments in these flat areas come from various sources, including land-erosion sediments carried by rivers, chemically precipitated sediments from hydrothermal vents, microorganism activity, sea currents eroding the seabed and transporting sediments to the deeper ocean, and phytoplankton shell materials.
Where the seafloor is actively spreading and sedimentation is relatively light, such as in the northern and eastern Atlantic Ocean, the original tectonic activity can be clearly seen as straight line "cracks" or "vents" thousands of kilometers long. These underwater mountain ranges are known as mid-ocean ridges.
Other seabed environments include hydrothermal vents, cold seeps, and shallow areas. Marine life is abundant in the deep sea around hydrothermal vents. Large deep sea communities of marine life have been discovered around black and white smokers – vents emitting chemicals toxic to humans and most vertebrates. This marine life receives its energy both from the extreme temperature difference (typically a drop of 150 degrees) and from chemosynthesis by bacteria. Brine pools are another seabed feature, usually connected to cold seeps. In shallow areas, the seabed can host sediments created by marine life such as corals, fish, algae, crabs, marine plants and other organisms.
Human impact
Exploration
The seabed has been explored by submersibles such as Alvin and, to some extent, scuba divers with special equipment. Hydrothermal vents were discovered in 1977 by researchers using an underwater camera platform. In recent years satellite measurements of ocean surface topography show very clear maps of the seabed, and these satellite-derived maps are used extensively in the study and exploration of the ocean floor.
Plastic pollution
In 2020 scientists created what may be the first scientific estimate of how much microplastic currently resides in Earth's seafloor, after investigating six areas of ~3 km depth ~300 km off the Australian coast. They found the highly variable microplastic counts to be proportionate to plastic on the surface and the angle of the seafloor slope. By averaging the microplastic mass per cm3, they estimated that Earth's seafloor contains ~14 million tons of microplastic – about double the amount they estimated based on data from earlier studies – despite calling both estimates "conservative" as coastal areas are known to contain much more microplastic pollution. These estimates are about one to two times the amount of plastic thought – per Jambeck et al., 2015 – to currently enter the oceans annually.
Exploitation
In art and culture
Some children's play songs include elements such as "There's a hole at the bottom of the sea", or "A sailor went to sea... but all that he could see was the bottom of the deep blue sea".
On and under the seabed are archaeological sites of historic interest, such as shipwrecks and sunken towns. This underwater cultural heritage is protected by the UNESCO Convention on the Protection of the Underwater Cultural Heritage. The convention aims at preventing looting and the destruction or loss of historic and cultural information by providing an international legal framework.
Horseshoe orbit
In celestial mechanics, a horseshoe orbit is a type of co-orbital motion of a small orbiting body relative to a larger orbiting body. The osculating (instantaneous) orbital period of the smaller body remains very near that of the larger body, and if its orbit is a little more eccentric than that of the larger body, during every period it appears to trace an ellipse around a point on the larger object's orbit.
However, the loop is not closed but drifts forward or backward so that the point it circles will appear to move smoothly along the larger body's orbit over a long period of time. When the object approaches the larger body closely at either end of its trajectory, its apparent direction changes. Over an entire cycle the center traces the outline of a horseshoe, with the larger body between the 'horns'.
Asteroids in horseshoe orbits with respect to Earth include 54509 YORP, among others. A broader definition includes 3753 Cruithne, which can be said to be in a compound and/or transition orbit. By 2016, 12 horseshoe librators of Earth had been discovered.
Saturn's moons Epimetheus and Janus occupy horseshoe orbits with respect to each other (in their case, there is no repeated looping: each one traces a full horseshoe with respect to the other).
Explanation of horseshoe orbital cycle
Background
The following explanation relates to an asteroid which is in such an orbit around the Sun, and is also affected by the Earth.
The asteroid is in almost the same solar orbit as Earth. Both take approximately one year to orbit the Sun.
It is also necessary to grasp two rules of orbit dynamics:
A body closer to the Sun completes an orbit more quickly than a body farther away.
If a body accelerates along its orbit, its orbit moves outwards from the Sun. If it decelerates, the orbital radius decreases.
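Both rules are consequences of Kepler's third law: the orbital period grows as the 3/2 power of the orbital radius, so a smaller orbit is completed faster, and raising an orbit slows the body down. A minimal sketch for circular solar orbits (the constants are standard values, not taken from the text):

```python
import math

# Kepler's third law for a circular orbit around the Sun:
# T = 2*pi*sqrt(a^3 / GM), so a smaller radius a means a shorter period.
GM_SUN = 1.327e20  # m^3/s^2, Sun's standard gravitational parameter
AU = 1.496e11      # meters per astronomical unit

def period_years(a_au: float) -> float:
    """Orbital period in years for a circular orbit of radius a_au (in AU)."""
    a = a_au * AU
    t_seconds = 2 * math.pi * math.sqrt(a ** 3 / GM_SUN)
    return t_seconds / (365.25 * 24 * 3600)

print(f"{period_years(0.99):.3f} yr")  # slightly inside Earth's orbit: faster
print(f"{period_years(1.00):.3f} yr")  # Earth's orbit: ~1 year
print(f"{period_years(1.01):.3f} yr")  # slightly outside: slower
```

A body only 1% closer to the Sun gains on Earth by about 1.5% of a lap per year, which is why the horseshoe cycle described below plays out over decades to centuries.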
The horseshoe orbit arises because the gravitational attraction of the Earth changes the shape of the elliptical orbit of the asteroid. The shape changes are very small but result in significant changes relative to the Earth.
The horseshoe becomes apparent only when mapping the movement of the asteroid relative to both the Sun and the Earth. The asteroid always orbits the Sun in the same direction. However, it goes through a cycle of catching up with the Earth and falling behind, so that its movement relative to both the Sun and the Earth traces a shape like the outline of a horseshoe.
Stages of the orbit
Starting at point A on the inner ring, the satellite is orbiting faster than the Earth and is on its way toward passing between the Earth and the Sun. But Earth's gravity, pulling from ahead, accelerates the satellite along its orbital path, raising it into a higher orbit which (per Kepler's third law) decreases its angular speed.
When the satellite gets to point B, it is traveling at the same speed as Earth. Earth's gravity is still accelerating the satellite along the orbital path, and continues to pull the satellite into a higher orbit. Eventually, at Point C, the satellite reaches a high and slow enough orbit such that it starts to lag behind Earth. It then spends the next century or more appearing to drift 'backwards' around the orbit when viewed relative to the Earth. Its orbit around the Sun still takes only slightly more than one Earth year. Given enough time, the Earth and the satellite will be on opposite sides of the Sun.
Eventually the satellite comes around to point D where Earth's gravity is now reducing the satellite's orbital velocity. This causes it to fall into a lower orbit, which actually increases the angular speed of the satellite around the Sun. This continues until point E where the satellite's orbit is now lower and faster than Earth's orbit, and it begins moving out ahead of Earth. Over the next few centuries it completes its journey back to point A.
Over the longer term, asteroids can transfer between horseshoe orbits and quasi-satellite orbits. Quasi-satellites aren't gravitationally bound to their planet, but appear to circle it in a retrograde direction as they circle the Sun with the same orbital period as the planet. By 2016, orbital calculations showed that four of Earth's horseshoe librators and all five of its then known quasi-satellites repeatedly transfer between horseshoe and quasi-satellite orbits.
Energy viewpoint
A somewhat different, but equivalent, view of the situation may be noted by considering conservation of energy. It is a theorem of classical mechanics that a body moving in a time-independent potential field will have its total energy, E = T + V, conserved, where E is total energy, T is kinetic energy (always non-negative) and V is potential energy, which is negative. It is apparent then, since V = -GM/R near a gravitating body of mass M and orbital radius R, that seen from a stationary frame, V will be increasing for the region behind M, and decreasing for the region in front of it. However, orbits with lower total energy have shorter periods, and so a body moving slowly on the forward side of a planet will lose energy, fall into a shorter-period orbit, and thus slowly move away, or be "repelled" from it. Bodies moving slowly on the trailing side of the planet will gain energy, rise to a higher, slower, orbit, and thereby fall behind, similarly repelled. Thus a small body can move back and forth between a leading and a trailing position, never approaching too close to the planet that dominates the region.
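The energy argument above can be checked numerically: for a body dominated by the Sun's gravity, the specific orbital energy is E = −GM/(2a) and Kepler's third law gives T = 2π√(a³/GM), so lower (more negative) energy means a smaller semi-major axis and a shorter period. A minimal sketch in Python (the GM and AU values are standard reference constants, not taken from this article):

```python
import math

# Sun's standard gravitational parameter GM (m^3/s^2) and the astronomical unit (m)
GM = 1.32712440018e20
AU = 1.495978707e11

def period_years(a):
    """Orbital period from Kepler's third law, in Julian years."""
    return 2 * math.pi * math.sqrt(a**3 / GM) / (365.25 * 86400)

def specific_energy(a):
    """Total energy per unit mass, E = -GM/(2a); more negative for smaller orbits."""
    return -GM / (2 * a)

# A body slightly inside Earth's orbit has lower energy and a shorter period,
# so it pulls ahead; one slightly outside has higher energy and falls behind.
inner, outer = 0.995 * AU, 1.005 * AU
assert specific_energy(inner) < specific_energy(outer)
assert period_years(inner) < 1.0 < period_years(outer)
```

This is the quantitative content of "orbits with lower total energy have shorter periods" in the paragraph above.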
Tadpole orbit
| Physical sciences | Orbital mechanics | Astronomy |
1189582 | https://en.wikipedia.org/wiki/Shivering | Shivering | Shivering (also called shuddering) is a bodily function in response to cold and extreme fear in warm-blooded animals. When the core body temperature drops, the shivering reflex is triggered to maintain homeostasis. Skeletal muscles begin to shake in small movements, creating warmth by expending energy. Shivering can also be a response to fever, as a person may feel cold. During fever, the hypothalamic set point for temperature is raised. The increased set point causes the body temperature to rise (pyrexia), but also makes the patient feel cold until the new set point is reached. Severe chills with violent shivering are called rigors. Rigors occur because the patient's body is shivering in a physiological attempt to increase body temperature to the new set point.
Biological basis
Located in the posterior hypothalamus near the wall of the third ventricle is an area called the primary motor center for shivering. This area is normally inhibited by signals from the heat center in the anterior hypothalamic-preoptic area but is excited by cold signals from the skin and spinal cord. Therefore, this center becomes activated when the body temperature falls even a fraction of a degree below a critical temperature level.
Increased muscular activity results in the generation of heat as a byproduct. Most often, when the purpose of the muscle activity is to produce motion, the heat is wasted energy. In shivering, the heat is the main intended product and is utilized for warmth.
Newborn babies, infants, and young children experience a greater (net) heat loss than adults because of greater surface-area-to-volume ratio. As they cannot shiver to maintain body heat, they rely on non-shivering thermogenesis. Children have an increased amount of brown adipose tissue (increased vascular supply, and high mitochondrial density), and, when cold-stressed, will have greater oxygen consumption and will release norepinephrine. Norepinephrine will react with lipases in brown fat to break down fat into triglycerides. Triglycerides are then metabolized to glycerol and non-esterified fatty acids. These are then further degraded in the needed heat-generating process to form CO2 and water. Chemically, in mitochondria, the proton gradient producing the proton electromotive force that is ordinarily used to synthesize ATP is instead bypassed to produce heat directly.
Shivering can also appear after surgery. This is known as postanesthetic shivering.
In humans, shivering can also be caused by cognition. This is known as psychogenic shivering.
Shivering and the elderly
The functional capacity of the thermoregulatory system alters with aging, reducing the resistance of elderly people to extreme external temperatures. The shiver response may be greatly diminished or even absent in the elderly, resulting in a significant drop in mean deep body temperature upon exposure to cold. Standard tests of thermoregulatory function show a markedly different rate of decline of thermoregulatory processes in different individuals with ageing.
| Biology and health sciences | Symptoms and signs | Health |
1190123 | https://en.wikipedia.org/wiki/Adamantane | Adamantane | Adamantane is an organic compound with formula C10H16 or, more descriptively, (CH)4(CH2)6. Adamantane molecules can be described as the fusion of three cyclohexane rings. The molecule is both rigid and virtually stress-free. Adamantane is the most stable isomer of C10H16. The spatial arrangement of carbon atoms in the adamantane molecule is the same as in the diamond crystal. This similarity led to the name adamantane, which is derived from the Greek adamantinos (relating to steel or diamond). It is a white solid with a camphor-like odor. It is the simplest diamondoid.
The discovery of adamantane in petroleum in 1933 launched a new field of chemistry dedicated to the synthesis and properties of polyhedral organic compounds. Adamantane derivatives have found practical application as drugs, polymeric materials, and thermally stable lubricants.
History and synthesis
In 1924, H. Decker suggested the existence of adamantane, which he called decaterpene.
The first attempted laboratory synthesis was made in 1924 by German chemist Hans Meerwein using the reaction of formaldehyde with diethyl malonate in the presence of piperidine. Instead of adamantane, Meerwein obtained 1,3,5,7-tetracarbomethoxybicyclo[3.3.1]nonane-2,6-dione: this compound, later named Meerwein's ester, was used in the synthesis of adamantane and its derivatives. D. Bottger tried to obtain adamantane using Meerwein's ester as precursor. The product was not adamantane itself, but a derivative of the tricyclo[3.3.1.1³,⁷]decane skeleton.
Other researchers attempted to synthesize adamantane using phloroglucinol and derivatives of cyclohexanone, but also failed.
Adamantane was first synthesized by Vladimir Prelog in 1941 from Meerwein's ester. With a yield of 0.16%, the five-stage process was impractical (simplified in the image below). The method is used to synthesize certain derivatives of adamantane.
Prelog's method was refined in 1956. The decarboxylation yield was increased by the addition of the Hunsdiecker pathway (11%) and the Hoffman reaction (24%) that raised the total yield to 6.5%. The process was still too complex, and a more convenient method was found in 1957 by Paul von Ragué Schleyer: dicyclopentadiene was first hydrogenated in the presence of a catalyst (e.g. platinum dioxide) to give tricyclodecane and then transformed into adamantane using a Lewis acid (e.g. aluminium chloride) as another catalyst. This method increased the yield to 30–40% and provided an affordable source of adamantane; it therefore stimulated characterization of adamantane and is still used in laboratory practice. The adamantane synthesis yield was later increased to 60% and 98% by ultrasound and superacid catalysis. Today, adamantane is an affordable chemical compound with a cost of one or two USD per gram.
All the above methods yield adamantane as a polycrystalline powder. Using this powder, single crystals can be grown from the melt, solution, or vapor phase (e.g. with the Bridgman–Stockbarger technique). Melt growth results in the worst crystalline quality with a mosaic spread in the X-ray reflection of about 1°. The best crystals are obtained from the liquid phase, but the growth is impracticably slow – several months for a 5–10 mm crystal. Growth from the vapor phase is a reasonable compromise in terms of speed and quality. Adamantane is sublimed in a quartz tube placed in a furnace, which is equipped with several heaters maintaining a certain temperature gradient (about 10 °C/cm for adamantane) along the tube. Crystallization starts at one end of the tube, which is kept near the freezing point of adamantane. Slow cooling of the tube, while maintaining the temperature gradient, gradually shifts the melting zone (rate ~2 mm/hour), producing a single-crystal boule.
Natural occurrence
Adamantane was first isolated from petroleum by the Czech chemists S. Landa, V. Machacek, and M. Mzourek. They used fractional distillation of petroleum. They could produce only a few milligrams of adamantane, but noticed its high boiling and melting points. Because of the (assumed) similarity of its structure to that of diamond, the new compound was named adamantane.
Petroleum remains a source of adamantane; the content varies from between 0.0001% and 0.03% depending on the oil field and is too low for commercial production.
Petroleum contains more than thirty derivatives of adamantane. Their isolation from a complex mixture of hydrocarbons is possible due to their high melting point and the ability to distill with water vapor and form stable adducts with thiourea.
Physical properties
Pure adamantane is a colorless, crystalline solid with a characteristic camphor smell. It is practically insoluble in water, but readily soluble in nonpolar organic solvents. Adamantane has an unusually high melting point for a hydrocarbon. At 270 °C, its melting point is much higher than other hydrocarbons with the same molecular weight, such as camphene (45 °C), limonene (−74 °C), ocimene (50 °C), terpinene (60 °C) or twistane (164 °C), or than a linear C10H22 hydrocarbon decane (−28 °C). However, adamantane slowly sublimes even at room temperature. Adamantane can be distilled with water vapor.
Structure
As deduced by electron diffraction and X-ray crystallography, the molecule has Td symmetry. The carbon–carbon bond lengths are 1.54 Å, almost identical to that of diamond. The carbon–hydrogen distances are 1.112 Å.
At ambient conditions, adamantane crystallizes in a face-centered cubic structure (space group Fm3m, a = 9.426 ± 0.008 Å, four molecules in the unit cell) containing orientationally disordered adamantane molecules. This structure transforms into an ordered, primitive, tetragonal phase (a = 6.641 Å, c = 8.875 Å) with two molecules per cell, either upon cooling to 208 K or pressurizing to above 0.5 GPa.
This phase transition is of the first order; it is accompanied by an anomaly in the heat capacity, elastic, and other properties. In particular, whereas adamantane molecules freely rotate in the cubic phase, they are frozen in the tetragonal one; the density increases stepwise from 1.08 to 1.18 g/cm3, and the entropy changes by a significant amount of 1594 J/(mol·K).
Hardness
Elastic constants of adamantane were measured using large (centimeter-sized) single crystals and the ultrasonic echo technique. The principal value of the elasticity tensor, C11, was deduced as 7.52, 8.20, and 6.17 GPa for the <110>, <111>, and <100> crystalline directions. For comparison, the corresponding values for crystalline diamond are 1161, 1174, and 1123 GPa. The arrangement of carbon atoms is the same in adamantane and diamond; however, in the adamantane solid, molecules do not form a covalent lattice as in diamond, but interact through weak van der Waals forces. As a result, adamantane crystals are very soft and plastic.
Spectroscopy
The nuclear magnetic resonance (NMR) spectrum of adamantane consists of two poorly resolved signals, which correspond to sites 1 and 2 (see picture below). The 1H chemical shifts are respectively 1.873 and 1.756 ppm, and the 13C shifts are 28.46 and 37.85 ppm. The simplicity of these spectra is consistent with high molecular symmetry.
Mass spectra of adamantane and its derivatives are rather characteristic. The main peak at m/z = 136 corresponds to the molecular ion [C10H16]+. Its fragmentation results in weaker signals at m/z = 93, 80, 79, 67, 41 and 39.
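The position of the main peak can be verified by simple mass arithmetic for the C10H16 formula; a small sketch (the integer isotope masses are standard nominal values, not from the article, and the fragment assignment is only a consistency check):

```python
# Nominal mass of the adamantane molecular ion, C10H16,
# using the nominal masses of 12C (12) and 1H (1).
nominal_mass = 10 * 12 + 16 * 1
assert nominal_mass == 136  # matches the main peak at m/z = 136

# The fragment at m/z = 93 corresponds to a loss of 43 mass units,
# consistent with loss of a C3H7 unit (3*12 + 7*1 = 43).
assert 136 - 93 == 43
```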
The infrared absorption spectrum of adamantane is relatively simple because of the high symmetry of the molecule. The main absorption bands and their assignment are given in the table:
* Legends correspond to types of oscillations: δ – deformation, ν – stretching, ρ and ω – out of plane deformation vibrations of CH2 groups.
Optical activity
Adamantane derivatives with four different substituents at the nodal carbon sites are chiral. Such optical activity was described in adamantane in 1969 with the four different substituents being hydrogen, bromine, methyl, and carboxyl. The values of specific rotation are small and are usually within 1°.
Nomenclature
Using the rules of systematic nomenclature, adamantane is called tricyclo[3.3.1.1³,⁷]decane. However, IUPAC recommends using the name "adamantane".
The adamantane molecule is composed of only carbon and hydrogen and has Td symmetry. Therefore, its 16 hydrogen and 10 carbon atoms can be described by only two sites, which are labeled in the figure as 1 (4 equivalent sites) and 2 (6 equivalent sites).
Structural relatives of adamantane are noradamantane and homoadamantane, which respectively contain one less and one more CH2 link than the adamantane.
The functional group derived from adamantane is adamantyl, formally named as 1-adamantyl or 2-adamantyl depending on which site is connected to the parent molecule. Adamantyl groups are a bulky pendant group used to improve the thermal and mechanical properties of polymers.
Chemical properties
Adamantane cations
The adamantane cation can be produced by treating 1-fluoroadamantane with SbF5. Its stability is relatively high.
The dication of 1,3-didehydroadamantane was obtained in solutions of superacids. It also has elevated stability due to the phenomenon called "three-dimensional aromaticity" or homoaromaticity. This four-center two-electron bond involves one pair of electrons delocalized among the four bridgehead atoms.
Reactions
Most reactions of adamantane occur via the 3-coordinated carbon sites. They are involved in the reaction of adamantane with concentrated sulfuric acid which produces adamantanone.
The carbonyl group of adamantanone allows further reactions via the bridging site. For example, adamantanone is the starting compound for obtaining such derivatives of adamantane as 2-adamantanecarbonitrile and 2-methyl-adamantane.
Bromination
Adamantane readily reacts with various brominating agents, including molecular bromine. The composition and the ratio of the reaction products depend on the reaction conditions and especially the presence and type of catalysts.
Boiling of adamantane with bromine results in a monosubstituted adamantane, 1-bromoadamantane. Multiple substitution with bromine is achieved by adding a Lewis acid catalyst.
The rate of bromination is accelerated upon addition of Lewis acids and is unchanged by irradiation or addition of free radicals. This indicates that the reaction occurs via an ionic mechanism.
Fluorination
The first fluorinations of adamantane were conducted using 1-hydroxyadamantane and 1-aminoadamantane as initial compounds. Later, fluorination was achieved starting from adamantane itself. In all these cases, reaction proceeded via formation of the adamantane cation which then interacted with fluorinated nucleophiles. Fluorination of adamantane with gaseous fluorine has also been reported.
Carboxylation
Carboxylation of adamantane with formic acid gives 1-adamantanecarboxylic acid.
Oxidation
1-Hydroxyadamantane is readily formed by hydrolysis of 1-bromoadamantane in aqueous solution of acetone. It can also be produced by ozonation of adamantane. Oxidation of the alcohol gives adamantanone.
Others
Adamantane interacts with benzene in the presence of Lewis acids, resulting in a Friedel–Crafts reaction. Aryl-substituted adamantane derivatives can be easily obtained starting from 1-hydroxyadamantane. In particular, the reaction with anisole proceeds under normal conditions and does not require a catalyst.
Nitration of adamantane is a difficult reaction characterized by moderate yields. A nitrogen-substituted drug amantadine can be prepared by reacting adamantane with bromine or nitric acid to give the bromide or nitroester at the 1-position. Reaction of either compound with acetonitrile affords the acetamide, which is hydrolyzed to give 1-adamantylamine:
Uses
Adamantane itself enjoys few applications since it is merely an unfunctionalized hydrocarbon. It is used in some dry etching masks and polymer formulations.
In solid-state NMR spectroscopy, adamantane is a common standard for chemical shift referencing.
In dye lasers, adamantane may be used to extend the life of the gain medium; it cannot be photoionized under atmosphere because its absorption bands lie in the vacuum-ultraviolet region of the spectrum. Photoionization energies have been determined for adamantane as well as for several bigger diamondoids.
In medicine
All medical applications known so far involve not pure adamantane, but its derivatives. The first adamantane derivative used as a drug was amantadine – first (1967) as an antiviral drug against various strains of influenza and then to treat Parkinson's disease. Other drugs among adamantane derivatives include adapalene, adapromine, bromantane (bromantan), carmantadine, chlodantane (chlodantan), dopamantine, gludantan (gludantane), hemantane (hymantane), idramantone (kemantane), memantine, nitromemantine, rimantadine, saxagliptin, somantadine, tromantadine, and vildagliptin. Polymers of adamantane have been patented as antiviral agents against HIV.
Influenza virus strains have developed drug resistance to amantadine and rimantadine, which are not effective against prevalent strains as of 2016.
In designer drugs
Adamantane was recently identified as a key structural subunit in several synthetic cannabinoid designer drugs, namely AB-001 and SDB-001.
Spacecraft propellant
Adamantane is an attractive candidate for propellant in Hall-effect thrusters because it ionizes easily, can be stored in solid form rather than a heavy pressure tank, and is relatively nontoxic.
Potential technological applications
Some alkyl derivatives of adamantane have been used as a working fluid in hydraulic systems. Adamantane-based polymers might find application for coatings of touchscreens, and there are prospects for using adamantane and its homologues in nanotechnology. For example, the soft cage-like structure of adamantane solid allows incorporation of guest molecules, which can be released inside the human body upon breaking the matrix. Adamantane could be used as molecular building blocks for self-assembly of molecular crystals.
Adamantane analogues
Many molecules and ions adopt adamantane-like cage structures. Those include phosphorus trioxide P4O6, arsenic trioxide As4O6, phosphorus pentoxide P4O10 = (PO)4O6, phosphorus pentasulfide P4S10 = (PS)4S6, and hexamethylenetetramine C6N4H12 = N4(CH2)6. Particularly notorious is tetramethylenedisulfotetramine, often shortened to "tetramine", a rodenticide banned in most countries for extreme toxicity to humans. The silicon analogue of adamantane, sila-adamantane, was synthesized in 2005. Arsenicin A is a naturally occurring organoarsenic chemical isolated from the New Caledonian sea sponge Echinochalina bargibanti and is the first known heterocycle to contain multiple arsenic atoms.
Conjoining adamantane cages produces higher diamondoids, such as diamantane (C14H20 – two fused adamantane cages), triamantane (C18H24), tetramantane (C22H28), pentamantane (C26H32), hexamantane (C26H30), etc. Their synthesis is similar to that of adamantane and like adamantane, they can also be extracted from petroleum, though at even much smaller yields.
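The formulas of the linear diamondoids listed above grow by C4H4 per fused cage (note that the hexamantane given, C26H30, is a cyclic member that departs from this linear-series pattern). A quick consistency check of that progression:

```python
# The linear diamondoid series from the text: adamantane C10H16,
# diamantane C14H20, triamantane C18H24, tetramantane C22H28,
# pentamantane C26H32 -- each fused cage adds C4H4.
def diamondoid_formula(n):
    """(carbon, hydrogen) counts for the n-th linear member (n=1 is adamantane)."""
    return (4 * n + 6, 4 * n + 12)

expected = [(10, 16), (14, 20), (18, 24), (22, 28), (26, 32)]
assert [diamondoid_formula(n) for n in range(1, 6)] == expected
```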
| Physical sciences | Aliphatic hydrocarbons | Chemistry |
1190754 | https://en.wikipedia.org/wiki/Silicate%20mineral | Silicate mineral | Silicate minerals are rock-forming minerals made up of silicate groups. They are the largest and most important class of minerals and make up approximately 90 percent of Earth's crust.
In mineralogy, silica (silicon dioxide, ) is usually considered a silicate mineral rather than an oxide mineral. Silica is found in nature as the mineral quartz, and its polymorphs.
On Earth, a wide variety of silicate minerals occur in an even wider range of combinations as a result of the processes that have been forming and re-working the crust for billions of years. These processes include partial melting, crystallization, fractionation, metamorphism, weathering, and diagenesis.
Living organisms also contribute to this geologic cycle. For example, a type of plankton known as diatoms construct their exoskeletons ("frustules") from silica extracted from seawater. The frustules of dead diatoms are a major constituent of deep ocean sediment, and of diatomaceous earth.
General structure
A silicate mineral is generally an inorganic compound consisting of subunits with the formula [SiO2+n]2n−. Although depicted as such, the description of silicates as anions is a simplification. Balancing the charges of the silicate anions are metal cations, Mx+. Typical cations are Mg2+, Fe2+, and Na+. The Si-O-M linkages between the silicates and the metals are strong, polar-covalent bonds. Silicate anions ([SiO2+n]2n−) are invariably colorless, or when crushed to a fine powder, white. The colors of silicate minerals arise from the metal component, commonly iron.
In most silicate minerals, silicon is tetrahedral, being surrounded by four oxides. The coordination number of the oxides is variable except when it bridges two silicon centers, in which case the oxide has a coordination number of two.
Some silicon centers may be replaced by atoms of other elements, still bound to the four corner oxygen atoms. If the substituted atom is not normally tetravalent, it usually contributes extra charge to the anion, which then requires extra cations. For example, in the mineral orthoclase (KAlSi3O8), the anion is a tridimensional network of tetrahedra in which all oxygen corners are shared. If all tetrahedra had silicon centers, the anion would be just neutral silica (SiO2). Replacement of one in every four silicon atoms by an aluminum atom results in the anion (AlSi3O8)−, whose charge is neutralized by the potassium cations (K+).
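The charge accounting in the orthoclase example can be sketched with formal ionic charges (Si4+, Al3+, O2−); this bookkeeping is an illustration under that assumption, not a statement about actual bonding:

```python
# Formal-charge bookkeeping for Al-for-Si substitution in a fully
# corner-sharing tetrahedral framework: n_si + n_al tetrahedral centers,
# two oxygens (O2-) per center.
def framework_charge(n_si, n_al):
    """Net formal charge of an (Si,Al)nO2n framework with full corner sharing."""
    n_tet = n_si + n_al
    return 4 * n_si + 3 * n_al - 2 * (2 * n_tet)

assert framework_charge(4, 0) == 0    # pure silica Si4O8: neutral
assert framework_charge(3, 1) == -1   # AlSi3O8: needs one K+, as in orthoclase
```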
Main groups
In mineralogy, silicate minerals are classified into seven major groups according to the structure of their silicate anion:
Tectosilicates can only have additional cations if some of the silicon is replaced by an atom of lower valence such as aluminum. Al for Si substitution is common.
Nesosilicates or orthosilicates
Nesosilicates (from Greek 'island'), or orthosilicates, have the orthosilicate ion (SiO4)4−, present as isolated (insular) tetrahedra connected only by interstitial cations. The Nickel–Strunz classification is 09.A – examples include:
Phenakite group
Phenakite –
Willemite –
Olivine group
Forsterite –
Fayalite –
Tephroite –
Garnet group
Pyrope –
Almandine –
Spessartine –
Grossular –
Andradite –
Uvarovite –
Hydrogrossular –
Zircon group
Zircon –
Thorite –
Hafnon –
Al2SiO5 group
Andalusite –
Kyanite –
Sillimanite –
Dumortierite –
Topaz –
Staurolite –
Humite group –
Norbergite –
Chondrodite –
Humite –
Clinohumite –
Datolite –
Titanite –
Chloritoid –
Mullite (aka Porcelainite) –
Sorosilicates
Sorosilicates (from Greek 'heap, mound') have isolated pyrosilicate anions (Si2O7)6−, consisting of double tetrahedra with a shared oxygen vertex, giving a silicon:oxygen ratio of 2:7. The Nickel–Strunz classification is 09.B. Examples include:
Thortveitite –
Hemimorphite (calamine) –
Lawsonite –
Axinite –
Ilvaite –
Epidote group (has both (SiO4)4− and (Si2O7)6− groups)
Epidote –
Zoisite –
Tanzanite –
Clinozoisite –
Allanite –
Dollaseite-(Ce) –
Vesuvianite (idocrase) –
Cyclosilicates
Cyclosilicates (from Greek 'circle'), or ring silicates, have three or more tetrahedra linked in a ring. The general formula is (SixO3x)2x−, where one or more silicon atoms can be replaced by other 4-coordinated atom(s). The silicon:oxygen ratio is 1:3. Double rings have the formula (Si2xO5x)2x− or a 2:5 ratio. The Nickel–Strunz classification is 09.C. Possible ring sizes include:
Some example minerals are:
3-member single ring
Benitoite –
4-member single ring
Papagoite – .
6-member single ring
Beryl –
Bazzite –
Sugilite –
Tourmaline –
Pezzottaite –
Osumilite –
Cordierite –
Sekaninaite –
9-member single ring
Eudialyte –
6-member double ring
Milarite –
The ring in axinite contains two B and four Si tetrahedra and is highly distorted compared to the other 6-member ring cyclosilicates.
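The general cyclosilicate formula (SixO3x)2x− quoted above follows from simple formal-charge bookkeeping: each Si4+ carries three O2− per silicon, leaving a net −2 per silicon. A short consistency check under that assumption:

```python
# Formal charge of a single cyclosilicate ring (SixO3x):
# x silicons at +4 and 3x oxygens at -2.
def ring_charge(x):
    """Net formal charge of a ring of x SiO4 tetrahedra sharing two corners each."""
    return 4 * x - 2 * (3 * x)

assert ring_charge(6) == -12          # e.g. the Si6O18 ring in beryl
assert all(ring_charge(x) == -2 * x for x in (3, 4, 6, 9))
```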
Inosilicates
Inosilicates (from Greek 'fibre'), or chain silicates, have interlocking chains of silicate tetrahedra with either (SiO3)2−, a 1:3 silicon:oxygen ratio, for single chains or (Si4O11)6−, a 4:11 ratio, for double chains. The Nickel–Strunz classification is 09.D – examples include:
Single chain inosilicates
Pyroxene group
Enstatite – orthoferrosilite series
Enstatite –
Ferrosilite –
Pigeonite –
Diopside – hedenbergite series
Diopside –
Hedenbergite –
Augite –
Sodium pyroxene series
Jadeite –
Aegirine (or acmite) –
Spodumene –
Pyroxferroite -
Pyroxenoid group
Wollastonite –
Rhodonite –
Pectolite –
Double chain inosilicates
Amphibole group
Anthophyllite –
Cummingtonite series
Cummingtonite –
Grunerite –
Tremolite series
Tremolite –
Actinolite –
Hornblende –
Sodium amphibole group
Glaucophane –
Riebeckite (asbestos) –
Arfvedsonite –
Phyllosilicates
Phyllosilicates (from Greek 'leaf'), or sheet silicates, form parallel sheets of silicate tetrahedra with (Si2O5)2−, a 2:5 silicon:oxygen ratio. The Nickel–Strunz classification is 09.E. All phyllosilicate minerals are hydrated, with either water or hydroxyl groups attached.
Examples include:
Serpentine subgroup
Antigorite –
Chrysotile –
Lizardite –
Clay minerals group
1:1 clay minerals (TO)
Halloysite –
Kaolinite –
2:1 clay minerals (TOT)
Pyrophyllite –
Talc –
Illite –
Montmorillonite (smectite) –
Chlorite –
Vermiculite –
Other clay minerals
Sepiolite –
Palygorskite (or attapulgite) –
Mica group
Biotite –
Fuchsite –
Muscovite –
Phlogopite –
Lepidolite –
Margarite –
Glauconite –
Tectosilicates
Tectosilicates, or "framework silicates," have a three-dimensional framework of silicate tetrahedra with SiO2, a 1:2 silicon:oxygen ratio. This group comprises nearly 75% of the crust of the Earth. Tectosilicates, with the exception of the quartz group, are aluminosilicates. The Nickel–Strunz classifications are 09.F and 09.G, 04.DA (quartz/silica family). Examples include:
3D-Silicates, quartz family
Quartz –
Tridymite –
Cristobalite –
Coesite –
Stishovite –
Moganite –
Chalcedony –
Tectosilicates, feldspar group
Alkali feldspars (potassium feldspars)
Microcline –
Orthoclase –
Anorthoclase –
Sanidine –
Plagioclase feldspars
Albite –
Oligoclase – (Na:Ca 4:1)
Andesine – (Na:Ca 3:2)
Labradorite – (Na:Ca 2:3)
Bytownite – (Na:Ca 1:4)
Anorthite –
Tectosilicates, feldspathoid family
Nosean –
Cancrinite –
Leucite –
Nepheline –
Sodalite –
Hauyne –
Lazurite –
Tectosilicates, scapolite group
Marialite –
Meionite –
Tectosilicates, zeolite family
Natrolite –
Erionite –
Chabazite –
Heulandite –
Stilbite –
Scolecite –
Mordenite –
Analcime –
| Physical sciences | Silicate minerals | Earth science |
1190913 | https://en.wikipedia.org/wiki/Air%20brake%20%28aeronautics%29 | Air brake (aeronautics) | In aeronautics, air brakes or speed brakes are a type of flight control surface used on an aircraft to increase the drag on the aircraft. When extended into the airstream, air brakes cause an increase in the drag on the aircraft. When not in use, they conform to the local streamlined profile of the aircraft in order to help minimize drag.
Air brakes differ from spoilers in that air brakes are designed to increase drag while making little change to lift, whereas spoilers reduce the lift-to-drag ratio and require a higher angle of attack to maintain lift, resulting in a higher stall speed.
History
In the early decades of powered flight, air brakes were flaps mounted on the wings. They were manually controlled by a lever in the cockpit acting through mechanical linkages to the air brake.
An early type of air brake, developed in 1931, was fitted to the aircraft wing support struts.
In 1936, Hans Jacobs, who headed Nazi Germany's Deutsche Forschungsanstalt für Segelflug (DFS) glider research organization before World War II, developed blade-style self-operating dive brakes, on the upper and lower surface of each wing, for gliders. Most early gliders were equipped with spoilers on the wings in order to adjust their angle of descent during approach to landing. More modern gliders use air brakes that may spoil lift as well as increase drag, dependent on where they are positioned.
A British report written in 1942 discusses the need for dive brakes to enable dive bombers, torpedo bombers and fighter aircraft to meet their respective combat performance requirements and, more generally, glide-path control. It discusses different types of air brakes and their requirements, in particular that they should have no appreciable effect on lift or trim and how this may be achieved with split trailing edge flaps on the wings, for example. There was also a requirement to vent the brake surfaces using numerous perforations or slots to reduce airframe buffeting.
A US report written in 1949 describes numerous air brake configurations, and their performance, on wings and fuselage for propeller and jet aircraft.
Air brake configurations
Often, characteristics of both spoilers and air brakes are desirable and are combined - most modern airliner jets feature combined spoiler and air brake controls. On landing, the deployment of these spoilers ("lift dumpers") causes a significant reduction in wing lift, so the weight of the aircraft is transferred from the wings to the undercarriage. The increased weight increases the available friction force for braking. In addition, the form drag created by the spoilers directly assists the braking effect. Reverse thrust is also used to help slow the aircraft after landing.
Virtually all jet-powered aircraft have an air brake or, in the case of most airliners, lift spoilers that also act as air brakes. Propeller-driven aircraft benefit from the natural braking effect of the propeller when engine power is reduced to idle, but jet engines have no similar braking effect, so jet-powered aircraft must use air brakes to control speed and descent angle during landing approach. Many early jets used parachutes as air brakes on approach (Arado Ar 234, Boeing B-47) or after landing (English Electric Lightning).
Split-tailcone air brakes have been used on the Blackburn Buccaneer naval strike aircraft designed in the 1950s and Fokker F28 Fellowship and British Aerospace 146 airliners. The Buccaneer air brake, when opened, reduced the length of the aircraft in the confined space on an aircraft carrier.
The F-15 Eagle, Sukhoi Su-27, F-18 Hornet and other fighters have an air brake located just behind the cockpit.
Split control surfaces
The deceleron is an aileron that functions normally in flight but can split in half such that the top half goes up as the bottom half goes down to brake. This technique was first used on the F-89 Scorpion and has since been used by Northrop on several aircraft, including the B-2 Spirit.
The Space Shuttle used a similar system. The vertically split rudder opened in "clamshell" fashion on landing to act as a speed brake.
| Technology | Aircraft components | null |
1191101 | https://en.wikipedia.org/wiki/Electric%20displacement%20field | Electric displacement field | In physics, the electric displacement field (denoted by D), also called electric flux density, is a vector field that appears in Maxwell's equations. It accounts for the electromagnetic effects of polarization and that of an electric field, combining the two in an auxiliary field. It plays a major role in the physics of phenomena such as the capacitance of a material, the response of dielectrics to an electric field, how shapes can change due to electric fields in piezoelectricity or flexoelectricity as well as the creation of voltages and charge transfer due to elastic strains.
In any material, if there is an inversion center then the charge at, for instance, a position $+\mathbf{r}$ and its mirror position $-\mathbf{r}$ is the same. This means that there is no dipole. If an electric field is applied to an insulator, then (for instance) the negative charges can move slightly towards the positive side of the field, and the positive charges in the other direction. This leads to an induced dipole, which is described as a polarization. There can be slightly different movements of the negative electrons and positive nuclei in molecules, or different displacements of the atoms in an ionic compound. Materials which do not have an inversion center display piezoelectricity and always have a polarization; in others, spatially varying strains can break the inversion symmetry and lead to polarization, the flexoelectric effect. Other stimuli such as magnetic fields can lead to polarization in some materials, this being called the magnetoelectric effect.
Definition
The electric displacement field "D" is defined as $\mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P},$ where $\varepsilon_0$ is the vacuum permittivity (also called permittivity of free space), E is the electric field, and P is the (macroscopic) density of the permanent and induced electric dipole moments in the material, called the polarization density.
The displacement field satisfies Gauss's law in a dielectric: $\nabla \cdot \mathbf{D} = \rho_f.$
In this equation, $\rho_f$ is the number of free charges per unit volume. These charges are the ones that have made the volume non-neutral, and they are sometimes referred to as the space charge. This equation says, in effect, that the flux lines of D must begin and end on the free charges. In contrast, $\rho_b = -\nabla \cdot \mathbf{P}$, which is called the bound charge, is an effective density of the charges that are part of a dipole. In the example of an insulating dielectric between metal capacitor plates, the only free charges are on the metal plates and the dielectric contains only dipoles. The net, unbalanced bound charge at the metal/dielectric interface balances the charge on the metal plate. If the dielectric is replaced by a doped semiconductor or an ionised gas, etc., then electrons move relative to the ions, and if the system is finite they both contribute to $\rho_f$ at the edges.
D is not determined exclusively by the free charge. As E has a curl of zero in electrostatic situations, it follows that $\nabla \times \mathbf{D} = \nabla \times \mathbf{P}.$
The effect of this equation can be seen in the case of an object with a "frozen in" polarization like a bar electret, the electric analogue to a bar magnet. There is no free charge in such a material, but the inherent polarization gives rise to an electric field, demonstrating that the D field is not determined entirely by the free charge. The electric field is determined by using the above relation along with other boundary conditions on the polarization density to yield the bound charges, which will, in turn, yield the electric field.
In a linear, homogeneous, isotropic dielectric with instantaneous response to changes in the electric field, P depends linearly on the electric field, $\mathbf{P} = \varepsilon_0 \chi \mathbf{E},$
where the constant of proportionality $\chi$ is called the electric susceptibility of the material. Thus $\mathbf{D} = \varepsilon_0 (1 + \chi) \mathbf{E} = \varepsilon \mathbf{E},$
where $\varepsilon_r = 1 + \chi$ is the relative permittivity of the material, and $\varepsilon = \varepsilon_r \varepsilon_0$ is the permittivity.
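As a minimal numerical sketch of the linear-dielectric relations above (the field strength and relative permittivity below are illustrative assumptions, not values from the article):

```python
# Sketch of D = eps0*E + P with P = eps0*chi*E for a linear, homogeneous,
# isotropic dielectric; E and eps_r are assumed example values.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def displacement_field(E, eps_r):
    """Return (D, P) in C/m^2 for field magnitude E (V/m) and relative permittivity eps_r."""
    chi = eps_r - 1.0     # electric susceptibility
    P = EPS0 * chi * E    # polarization density
    D = EPS0 * E + P      # displacement field; equals eps_r * EPS0 * E
    return D, P

D, P = displacement_field(E=1.0e5, eps_r=4.0)  # e.g. a glass-like dielectric
assert abs(D - 4.0 * EPS0 * 1.0e5) < 1e-18
```

In vacuum the susceptibility vanishes, so the same function reduces to D = ε0E with P = 0.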
In linear, homogeneous, isotropic media, ε is a constant. However, in linear anisotropic media it is a tensor, and in nonhomogeneous media it is a function of position inside the medium. It may also depend upon the electric field (nonlinear materials) and have a time-dependent response. Explicit time dependence can arise if the materials are physically moving or changing in time (e.g. reflections off a moving interface give rise to Doppler shifts). A different form of time dependence can arise in a time-invariant medium, as there can be a time delay between the imposition of the electric field and the resulting polarization of the material. In this case, P is a convolution of the impulse response susceptibility χ and the electric field E. Such a convolution takes on a simpler form in the frequency domain: by Fourier transforming the relationship and applying the convolution theorem, one obtains the following relation for a linear time-invariant medium: $\mathbf{P}(\omega) = \varepsilon_0 \chi(\omega) \mathbf{E}(\omega),$
where $\omega$ is the frequency of the applied field. The constraint of causality leads to the Kramers–Kronig relations, which place limitations upon the form of the frequency dependence. The phenomenon of a frequency-dependent permittivity is an example of material dispersion. In fact, all physical materials have some material dispersion because they cannot respond instantaneously to applied fields, but for many problems (those concerned with a narrow enough bandwidth) the frequency dependence of ε can be neglected.
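The convolution theorem invoked here can be checked numerically. The sketch below uses toy sample values (not from the article) and a small discrete Fourier transform, with the overall factor of ε0 omitted:

```python
import cmath

# Toy check of the convolution theorem: a circular convolution of the
# impulse-response susceptibility chi with field samples E equals the
# inverse DFT of the pointwise product of their DFTs.
def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

chi = [0.5, 0.3, 0.1, 0.0]   # assumed impulse-response susceptibility samples
E   = [1.0, -1.0, 2.0, 0.5]  # assumed field samples

# Circular convolution computed directly in the time domain ...
P_direct = [sum(chi[k] * E[(t - k) % 4] for k in range(4)) for t in range(4)]
# ... equals the inverse DFT of the pointwise product in the frequency domain.
P_freq = [p.real for p in idft([a * b for a, b in zip(dft(chi), dft(E))])]
assert all(abs(a - b) < 1e-9 for a, b in zip(P_direct, P_freq))
```

The multiplication in the frequency domain is exactly the relation P(ω) = ε0χ(ω)E(ω) restated for sampled data.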
At a boundary, $(\mathbf{D}_1 - \mathbf{D}_2) \cdot \hat{\mathbf{n}} = \sigma_f$, where $\sigma_f$ is the free charge density and the unit normal $\hat{\mathbf{n}}$ points in the direction from medium 2 to medium 1.
History
The earliest known use of the term is from the year 1864, in James Clerk Maxwell's paper A Dynamical Theory of the Electromagnetic Field. Maxwell introduced the term D, specific capacity of electric induction, in a form different from the modern and familiar notations.
It was Oliver Heaviside who reformulated the complicated Maxwell's equations into their modern form. It wasn't until 1884 that Heaviside, concurrently with Willard Gibbs and Heinrich Hertz, grouped the equations together into a distinct set. This group of four equations was known variously as the Hertz–Heaviside equations and the Maxwell–Hertz equations, and is sometimes still known as the Maxwell–Heaviside equations; hence, it was probably Heaviside who gave D its present significance.
Example: Displacement field in a capacitor
Consider an infinite parallel plate capacitor where the space between the plates is empty or contains a neutral, insulating medium. In both cases, the free charges are only on the metal capacitor plates. Since the flux lines of D end on free charges, and there are the same number of uniformly distributed charges of opposite sign on both plates, the flux lines must all simply traverse the capacitor from one side to the other. In SI units, the charge density on the plates is equal to the value of the D field between the plates. This follows directly from Gauss's law, by integrating over a small rectangular box straddling one plate of the capacitor: $\oint_A \mathbf{D} \cdot d\mathbf{A} = Q_\text{free}.$
On the sides of the box, dA is perpendicular to the field, so the integral over this section is zero, as is the integral on the face that is outside the capacitor where D is zero. The only surface that contributes to the integral is therefore the surface of the box inside the capacitor, and hence $|\mathbf{D}| = \frac{Q_\text{free}}{A} = \sigma_f,$
where A is the surface area of the top face of the box and $\sigma_f$ is the free surface charge density on the positive plate. If the space between the capacitor plates is filled with a linear homogeneous isotropic dielectric with permittivity $\varepsilon$, then there is a polarization induced in the medium, and so the voltage difference between the plates is $V = \frac{|\mathbf{D}|\, d}{\varepsilon} = \frac{\sigma_f d}{\varepsilon},$
where d is their separation.
Introducing the dielectric increases ε by a factor $\varepsilon_r$, and either the voltage difference between the plates will be smaller by this factor, or the charge must be higher. The partial cancellation of fields in the dielectric allows a larger amount of free charge to dwell on the two plates of the capacitor per unit of potential drop than would be possible if the plates were separated by vacuum.
If the distance d between the plates of a finite parallel plate capacitor is much smaller than its lateral dimensions, we can approximate it using the infinite case and obtain its capacitance as $C = \frac{Q_\text{free}}{V} \approx \frac{\varepsilon A}{d}.$
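A short sketch of this thin-gap result; the plate area, gap, and relative permittivity below are assumed illustration values:

```python
# Parallel-plate capacitance in the infinite-plate approximation, C = eps*A/d,
# showing the eps_r scaling discussed above; all numbers are illustrative.
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def plate_capacitance(area, gap, eps_r=1.0):
    """Capacitance (F) for plate area (m^2) and separation (m), valid for gap << sqrt(area)."""
    return eps_r * EPS0 * area / gap

C_vacuum = plate_capacitance(area=1e-2, gap=1e-4)            # 1 cm^2 plates, 0.1 mm gap
C_filled = plate_capacitance(area=1e-2, gap=1e-4, eps_r=3.0) # same geometry, dielectric
assert abs(C_filled / C_vacuum - 3.0) < 1e-12  # dielectric multiplies C by eps_r
```

Filling the gap with a dielectric multiplies the capacitance by ε_r, which is exactly the "more free charge per unit of potential drop" statement above.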
| Physical sciences | Electrostatics | Physics |
1191951 | https://en.wikipedia.org/wiki/Glycated%20hemoglobin | Glycated hemoglobin | Glycated hemoglobin, also called glycohemoglobin, is a form of hemoglobin (Hb) that is chemically linked to a sugar. Most monosaccharides, including glucose, galactose, and fructose, spontaneously (that is, non-enzymatically) bond with hemoglobin when they are present in the bloodstream. However, glucose is only 21% as likely to do so as galactose and 13% as likely to do so as fructose, which may explain why glucose is used as the primary metabolic fuel in humans.
The formation of excess sugar-hemoglobin linkages indicates the presence of excessive sugar in the bloodstream and, at high concentrations, is an indicator of diabetes or other hormone diseases. A1c is of particular interest because it is easy to detect. The process by which sugars attach to hemoglobin is called glycation, and the reference system is based on HbA1c, defined as beta-N-1-deoxy fructosyl hemoglobin.
There are several ways to measure glycated hemoglobin, of which HbA1c (or simply A1c) is a standard single test. HbA1c is measured primarily to determine the three-month average blood sugar level and is used as a standard diagnostic test for evaluating the risk of complications of diabetes and as an assessment of glycemic control. The test is considered a three-month average because the average lifespan of a red blood cell is three to four months. Normal levels of glucose produce a normal amount of glycated hemoglobin. As the average amount of plasma glucose increases, the fraction of glycated hemoglobin increases in a predictable way. In diabetes, higher amounts of glycated hemoglobin, indicating higher blood glucose levels, have been associated with cardiovascular disease, nephropathy, neuropathy, and retinopathy.
Terminology
Glycated hemoglobin is preferred over glycosylated hemoglobin to reflect the correct (non-enzymatic) process. Early literature often used glycosylated as it was unclear which process was involved until further research was performed. The terms are still sometimes used interchangeably in English-language literature.
The naming of HbA1c derives from hemoglobin type A being separated on cation exchange chromatography. The first fraction to separate, probably considered to be pure hemoglobin A, was designated HbA0, and the following fractions were designated HbA1a, HbA1b, and HbA1c, in their order of elution. Improved separation techniques have subsequently led to the isolation of more subfractions.
History
Hemoglobin A1c was first separated from other forms of hemoglobin by Huisman and Meyering in 1958 using a chromatographic column. It was first characterized as a glycoprotein by Bookchin and Gallop in 1968. Its increase in diabetes was first described in 1969 by Samuel Rahbar et al. The reactions leading to its formation were characterized by Bunn and his coworkers in 1975.
The use of hemoglobin A1c for monitoring the degree of control of glucose metabolism in diabetic patients was proposed in 1976 by Anthony Cerami, Ronald Koenig, and coworkers.
Damage mechanisms
Glycated hemoglobin causes an increase of highly reactive free radicals inside blood cells, altering the properties of their cell membranes. This leads to blood cell aggregation and increased blood viscosity, which results in impaired blood flow.
Another way glycated hemoglobin causes damage is via inflammation, which results in atherosclerotic plaque (atheroma) formation. Free-radical build-up promotes the oxidation of Fe2+-hemoglobin into abnormal ferryl hemoglobin (Fe4+-Hb). Fe4+ is unstable and reacts with specific amino acids in hemoglobin to regain its Fe3+ oxidation state. Hemoglobin molecules clump together via cross-linking reactions, and these hemoglobin clumps (multimers) promote cell damage and the release of Fe4+-hemoglobin into the matrix of the innermost layers (subendothelium) of arteries and veins. This results in increased permeability of the interior surface (endothelium) of blood vessels and production of pro-inflammatory monocyte adhesion proteins, which promote macrophage accumulation in blood vessel surfaces, ultimately leading to harmful plaques in these vessels.
Highly glycated Hb-AGEs pass through the vascular smooth muscle layer and inactivate acetylcholine-induced endothelium-dependent relaxation, possibly through binding to nitric oxide (NO), preventing its normal function. NO is a potent vasodilator and also inhibits the formation of the plaque-promoting oxidized form of LDL (sometimes called "bad cholesterol").
This overall degradation of blood cells also releases heme from them. Loose heme can cause oxidation of endothelial and LDL proteins, which results in plaques.
Principle in medical diagnostics
Glycation of proteins is a frequent occurrence, but in the case of hemoglobin, a nonenzymatic condensation reaction occurs between glucose and the N-terminus of the beta chain. This reaction produces a Schiff base (R−N=CHR′, R = beta chain, CHR′ = glucose-derived), which is itself converted to 1-deoxyfructose. This second conversion is an example of an Amadori rearrangement.
When blood glucose levels are high, glucose molecules attach to the hemoglobin in red blood cells. The longer hyperglycemia occurs in blood, the more glucose binds to hemoglobin in the red blood cells and the higher the glycated hemoglobin.
Once a hemoglobin molecule is glycated, it remains that way. A buildup of glycated hemoglobin within the red cell, therefore, reflects the average level of glucose to which the cell has been exposed during its life-cycle. Measuring glycated hemoglobin assesses the effectiveness of therapy by monitoring long-term serum glucose regulation.
A1c is a weighted average of blood glucose levels during the life of the red blood cells (117 days for men and 106 days for women). Therefore, glucose levels on days nearer to the test contribute substantially more to the level of A1c than the levels on days further from the test.
This is also supported by data from clinical practice showing that HbA1c levels improved significantly after 20 days from start or intensification of glucose-lowering treatment.
Measurement
Several techniques are used to measure hemoglobin A1c. Laboratories may use high-performance liquid chromatography, immunoassay, enzymatic assay, capillary electrophoresis, or boronate affinity chromatography. Point-of-care (e.g., doctor's office) devices use immunoassay or boronate affinity chromatography.
In the United States, HbA1c testing laboratories are certified by the National Glycohemoglobin Standardization Program to standardize them against the results of the 1993 Diabetes Control and Complications Trial (DCCT). Additional percentage scales have also been used: Mono S previously in Sweden and KO500 in Japan.
Switch to IFCC units
The American Diabetes Association, European Association for the Study of Diabetes, and International Diabetes Federation have agreed that, in the future, HbA1c is to be reported in the International Federation of Clinical Chemistry and Laboratory Medicine (IFCC) units. IFCC reporting was introduced in Europe except for the UK in 2003; the UK carried out dual reporting from 1 June 2009 until 1 October 2011.
Conversion between DCCT and IFCC units is by the following equation: IFCC-HbA1c (mmol/mol) = [DCCT-HbA1c (%) − 2.15] × 10.929.
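As a sketch, the standard NGSP/IFCC master equation, IFCC (mmol/mol) = (DCCT % − 2.15) × 10.929, reproduces the paired cut-offs quoted elsewhere in this article; the coefficients are the published standardization values, not derived here:

```python
# DCCT/NGSP (%) <-> IFCC (mmol/mol) conversion via the master equation.
def dcct_to_ifcc(dcct_percent):
    return (dcct_percent - 2.15) * 10.929

def ifcc_to_dcct(ifcc_mmol_mol):
    return ifcc_mmol_mol / 10.929 + 2.15

assert round(dcct_to_ifcc(6.5)) == 48  # diagnostic threshold: 6.5 % = 48 mmol/mol
assert round(dcct_to_ifcc(7.0)) == 53  # ADA target: 7.0 % = 53 mmol/mol
```

Both checks match the paired DCCT/IFCC values cited in the interpretation section.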
Interpretation of results
Laboratory results may differ depending on the analytical technique, the age of the subject, and biological variation among individuals. Higher levels of HbA1c are found in people with persistently elevated blood sugar, as in diabetes mellitus. While diabetic patient treatment goals vary, many include a target range of HbA1c values. A diabetic person with good glucose control has an HbA1c level that is close to or within the reference range.
The International Diabetes Federation and the American College of Endocrinology recommend HbA1c values below 48 mmol/mol (6.5 DCCT %), while the American Diabetes Association recommends HbA1c be below 53 mmol/mol (7.0 DCCT %) for most patients. Results from large trials suggested that a target below 53 mmol/mol (7.0 DCCT %) for older adults with type 2 diabetes may be excessive: below 53 mmol/mol, the health benefits of reduced A1c become smaller, and the intensive glycemic control required to reach this level leads to an increased rate of dangerous hypoglycemic episodes.
A retrospective study of 47,970 type 2 diabetes patients, aged 50 years and older, found that patients with an HbA1c more than 48 mmol/mol (6.5 DCCT %) had an increased mortality rate, but a later international study contradicted these findings.
A review of the UKPDS, Action to Control Cardiovascular Risk in Diabetes (ACCORD), Advance and Veterans Affairs Diabetes Trials (VADT) estimated that the risks of the main complications of diabetes (diabetic retinopathy, diabetic nephropathy, diabetic neuropathy, and macrovascular disease) decreased by about 3% for every 1 mmol/mol decrease in HbA1c.
However, a trial by ACCORD designed specifically to determine whether reducing HbA1c below 42 mmol/mol (6.0 DCCT %) using increased amounts of medication would reduce the rate of cardiovascular events found higher mortality with this intensive therapy, so much so that the trial was terminated 17 months early.
Practitioners must consider patients' health, their risk of hypoglycemia, and their specific health risks when setting a target HbA1c level. Because patients are responsible for averting or responding to their own hypoglycemic episodes, their input and the doctors' assessments of the patients' self-care skills are also important.
Persistent elevations in blood sugar (and, therefore, HbA1c) increase the risk of long-term vascular complications of diabetes, such as coronary disease, heart attack, stroke, heart failure, kidney failure, blindness, erectile dysfunction, neuropathy (loss of sensation, especially in the feet), gangrene, and gastroparesis (slowed emptying of the stomach). Poor blood glucose control also increases the risk of short-term complications of surgery such as poor wound healing.
All-cause mortality is higher above 64 mmol/mol (8.0 DCCT%) HbA1c as well as below 42 mmol/mol (6.0 DCCT %) in diabetic patients, and above 42 mmol/mol (6.0 DCCT %) as well as below 31 mmol/mol (5.0 DCCT %) in non-diabetic persons, indicating the risks of hyperglycemia and hypoglycemia, respectively. Similar risk results are seen for cardiovascular disease.
The 2022 ADA guidelines reaffirmed the recommendation that HbA1c should be maintained below 7.0% for most patients. Higher target values are appropriate for children and adolescents, patients with extensive co-morbid illness and those with a history of severe hypoglycemia. More stringent targets (<6.0%) are preferred for pregnant patients if this can be achieved without significant hypoglycemia.
Factors other than glucose that affect A1c
Lower-than-expected levels of HbA1c can be seen in people with shortened red blood cell lifespans, such as with glucose-6-phosphate dehydrogenase deficiency, sickle-cell disease, or any other condition causing premature red blood cell death. For these patients, alternate assessment with fructosamine or glycated albumin is recommended; these methods reflect glycemic control over the preceding 2-3 weeks. Blood donation will result in rapid replacement of lost RBCs with newly formed red blood cells. Since these new RBCs will have only existed for a short period of time, their presence will lead HbA1c to underestimate the actual average levels. There may also be distortions resulting from blood donation during the preceding two months, due to an abnormal synchronization of the age of the RBCs, resulting in an older than normal average blood cell life (resulting in an overestimate of actual average blood glucose levels). Conversely, higher-than-expected levels can be seen in people with a longer red blood cell lifespan, such as with iron deficiency.
Results can be unreliable in many circumstances, for example after blood loss, after surgery, blood transfusions, anemia, or high erythrocyte turnover; in the presence of chronic renal or liver disease; after administration of high-dose vitamin C; or erythropoietin treatment. Hypothyroidism can artificially raise the A1c. In general, the reference range (that found in healthy young persons) is about 30–33 mmol/mol (4.9–5.2 DCCT %). The mean HbA1c for type 1 diabetics in Sweden in 2014 was 63 mmol/mol (7.9 DCCT %) and for type 2, 61 mmol/mol (7.7 DCCT %). HbA1c levels show a small, but statistically significant, progressive uptick with age; the clinical importance of this increase is unclear.
Mapping from A1c to estimated average glucose
The approximate mapping between HbA1c values given in DCCT percentage (%) and eAG (estimated average glucose) measurements is given by the following equation:
eAG (mg/dL) = 28.7 × A1c − 46.7
eAG (mmol/L) = 1.59 × A1c − 2.59
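The mapping can be applied directly; for instance, an HbA1c of 7.0 DCCT % corresponds to an eAG of about 154 mg/dL (8.5 mmol/L):

```python
# Estimated average glucose (eAG) from HbA1c (DCCT %), using the
# regression equations given above.
def eag_mg_dl(a1c_percent):
    return 28.7 * a1c_percent - 46.7

def eag_mmol_l(a1c_percent):
    return 1.59 * a1c_percent - 2.59

assert abs(eag_mg_dl(7.0) - 154.2) < 1e-9   # 28.7*7.0 - 46.7
assert abs(eag_mmol_l(7.0) - 8.54) < 1e-9   # 1.59*7.0 - 2.59
```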
Normal, prediabetic, and diabetic ranges
The 2010 American Diabetes Association Standards of Medical Care in Diabetes added the HbA1c ≥ 48 mmol/mol (≥6.5 DCCT %) as another criterion for the diagnosis of diabetes.
Indications and uses
Glycated hemoglobin testing is recommended for both checking the blood sugar control in people who might be prediabetic and monitoring blood sugar control in patients with more elevated levels, termed diabetes mellitus. For a single blood sample, it provides far more revealing information on glycemic behavior than a fasting blood sugar value. However, fasting blood sugar tests are crucial in making treatment decisions. The American Diabetes Association guidelines are similar to others in advising that the glycated hemoglobin test be performed at least twice a year in patients with diabetes who are meeting treatment goals (and who have stable glycemic control) and quarterly in patients with diabetes whose therapy has changed or who are not meeting glycemic goals.
Glycated hemoglobin measurement is not appropriate where a change in diet or treatment has been made within six weeks. Likewise, the test assumes a normal red blood cell aging process and mix of hemoglobin subtypes (predominantly HbA in normal adults). Hence, people with recent blood loss, hemolytic anemia, or genetic differences in the hemoglobin molecule (hemoglobinopathy) such as sickle-cell disease and other conditions, as well as those who have donated blood recently, are not suitable for this test.
Due to glycated hemoglobin's variability, additional measures should be checked in patients at or near recommended goals. People with HbA1c values at 64 mmol/mol or less should be provided additional testing to determine whether the HbA1c values are due to averaging out high blood glucose (hyperglycemia) with low blood glucose (hypoglycemia) or the HbA1c is more reflective of an elevated blood glucose that does not vary much throughout the day. Devices such as continuous blood glucose monitoring allow people with diabetes to determine their blood glucose levels on a continuous basis, testing every few minutes. Continuous use of blood glucose monitors is becoming more common, and the devices are covered by many health insurance plans, including Medicare in the United States. The supplies tend to be expensive, since the sensors must be changed at least every 2 weeks. Another useful test in determining if HbA1c values are due to wide variations of blood glucose throughout the day is 1,5-anhydroglucitol, also known as GlycoMark. GlycoMark reflects only the times that the person experiences hyperglycemia above 180 mg/dL over a two-week period.
Concentrations of hemoglobin A1 (HbA1) are increased, both in diabetic patients and in patients with kidney failure, when measured by ion-exchange chromatography. The thiobarbituric acid method (a chemical method specific for the detection of glycation) shows that patients with kidney failure have values for glycated hemoglobin similar to those observed in normal subjects, suggesting that the high values in these patients are a result of binding of something other than glucose to hemoglobin.
In autoimmune hemolytic anemia, concentrations of HbA1 are undetectable. Administration of prednisolone will allow the HbA1 to be detected. The alternative fructosamine test may be used in these circumstances; it also reflects an average of blood glucose levels over the preceding 2 to 3 weeks.
The International Expert Committee, drawn from the International Diabetes Federation, the European Association for the Study of Diabetes, and the American Diabetes Association, suggests an HbA1c level of 48 mmol/mol (6.5 DCCT %) as a diagnostic level. The Committee's report further states that, when HbA1c testing cannot be done, the fasting and glucose-tolerance tests should be done. Screening for diabetes during pregnancy continues to require fasting and glucose-tolerance measurements for gestational diabetes at 24 to 28 weeks of gestation, although glycated hemoglobin may be used for screening at the first prenatal visit.
Modification by diet
Meta-analysis has shown probiotics to cause a statistically significant reduction in glycated hemoglobin in type-2 diabetics. Trials with multiple strains of probiotics had statistically significant reductions in glycated hemoglobin, whereas trials with single strains did not.
Standardization and traceability
Most clinical studies recommend the use of HbA1c assays that are traceable to the DCCT assay. The National Glycohemoglobin Standardization Program (NGSP) and IFCC have improved assay standardization. For initial diagnosis of diabetes, only HbA1c methods that are NGSP-certified should be used, not point-of-care testing devices. Analytical performance has been a problem with earlier point-of-care devices for HbA1c testing, specifically large standard deviations and negative bias.
Veterinary medicine
HbA1c testing has not been found useful in the monitoring during the treatment of cats and dogs with diabetes, and is not generally used; monitoring of fructosamine levels is favoured instead.
| Biology and health sciences | Proteins | Biology |
105200 | https://en.wikipedia.org/wiki/Cancer%20%28constellation%29 | Cancer (constellation) | Cancer is one of the twelve constellations of the zodiac and is located in the Northern celestial hemisphere. Its name is Latin for crab and it is commonly represented as one. Cancer is a medium-size constellation with an area of 506 square degrees and its stars are rather faint, its brightest star Beta Cancri having an apparent magnitude of 3.5. It contains ten stars with known planets, including 55 Cancri, which has five: one super-Earth and four gas giants, one of which is in the habitable zone and, as such, has expected temperatures similar to Earth's. Near the center of the constellation is Praesepe (Messier 44), one of the closest open clusters to Earth and a popular target for amateur astronomers.
Characteristics
Cancer is a medium-sized constellation that is bordered by Gemini to the west, Lynx to the north, Leo Minor to the northeast, Leo to the east, Hydra to the south, and Canis Minor to the southwest. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Cnc".
The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 10 sides (illustrated in the infobox). In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between and . Covering 506 square degrees or about 1.2% of the sky, it ranks 31st of the 88 constellations in size. It can be seen at latitudes between +90° and −60° and is best visible at 9 p.m. during the month of March. Cancer borders the bright constellations of Leo, Gemini and Canis Minor. Under city skies, Cancer is invisible to the naked eye.
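The quoted area can be turned into a sky fraction directly; the total area of the celestial sphere, about 41,253 square degrees, follows from the 4π-steradian solid angle of a sphere:

```python
import math

# Fraction of the celestial sphere covered by Cancer's 506 square degrees.
whole_sky_sq_deg = 4 * math.pi * (180 / math.pi) ** 2  # ~41252.96 deg^2
fraction = 506 / whole_sky_sq_deg
assert abs(fraction - 0.0123) < 2e-4  # about 1.2% of the sky
```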
Features
Stars
Cancer is the dimmest of the zodiacal constellations, having only two stars above the fourth magnitude. The German cartographer Johann Bayer used the Greek letters Alpha through Omega to label the most prominent stars in the constellation, followed by the letter A, then lowercase b, c and d. Within the constellation's borders, there are 104 stars brighter than or equal to apparent magnitude 6.5.
Also known as Altarf or Tarf, Beta Cancri is the brightest star in Cancer at apparent magnitude 3.5. Located 290 ± 30 light-years from Earth, it is a binary star system, its main component an orange giant of spectral type K4III that varies slightly from a baseline magnitude of 3.53, dipping by 0.005 magnitude over a period of 6 days. An aging star, it has expanded to around 50 times the Sun's diameter and shines with 660 times its luminosity. It has a faint magnitude 14 red dwarf companion located 29 arcseconds away that takes 76,000 years to complete an orbit. Altarf represents a part of Cancer's body.
At magnitude 3.9 is Delta Cancri, also known as Asellus Australis. Located 131±1 light-years from Earth, it is an orange-hued star that has swollen and cooled off the main sequence to become a giant with a radius 11 times and luminosity 53 times that of the Sun. Its common name means "southern donkey". The star also holds the record for the longest name, "Arkushanangarushashutu", derived from the ancient Babylonian language, which translates to "the southeast star in the Crab". Delta Cancri also makes it easy to find X Cancri, the reddest star in the sky. Known as Asellus Borealis ("northern donkey"), Gamma Cancri is a white-hued A-type subgiant of spectral type A1IV and magnitude 4.67, 35 times as luminous as the Sun. It is located 181 ± 2 light-years from Earth.
Iota Cancri is a wide double star. The primary is a yellow-hued G-type bright giant star of magnitude 4.0, located 330 ± 20 light-years from Earth. It spent much of its stellar life as a B-type main sequence star before expanding and cooling to its current state as it exhausted its core hydrogen. The secondary is a white main sequence star of spectral type A3V and magnitude 6.57. Despite having different distances when measured by the Hipparcos satellite, the two stars share a common proper motion and appear to be a natural binary system.
Located 181 ± 2 light-years from Earth, Alpha Cancri (Acubens) is a multiple star whose primary component is an apparent white main sequence star of spectral type A5 and magnitude 4.26. The secondary is of magnitude 12.0 and is visible in small amateur telescopes. Its common name means "the claw". The primary is actually two very similar white main sequence stars that are 5.3 AU distant from each other, and the secondary is two small main sequence stars, most likely red dwarfs, that are 600 AU from the main pair. Hence the system is a quadruple one.
Zeta Cancri or Tegmine ("the shell") is a multiple star system that contains at least four stars located 82 light-years from Earth. The two brightest components are a binary star with an orbital period of 1100 years; the brighter component is a yellow-hued binary pair and the dimmer component is a yellow-hued star of magnitude 6.2. The brighter component is itself a binary star with a period of 59.6 years; its primary is of magnitude 5.6 and its secondary is of magnitude 6.0. This pair is at its greatest separation around 2019.
Ten star systems have been found to have planets. Rho1 Cancri or 55 Cancri (or Copernicus) is a binary star approximately 40.9 light-years distant from Earth. 55 Cancri consists of a yellow dwarf and a smaller red dwarf, with five planets orbiting the primary star; one low-mass planet that may be either a hot, water-rich world or a carbon planet and four gas giants. 55 Cancri A, classified as a rare "super metal-rich" star, is one of the top 100 target stars for NASA's Terrestrial Planet Finder mission, ranked 63rd on the list. The red dwarf 55 Cancri B, a suspected binary, appears to be gravitationally bound to the primary star, as the two share common proper motion.
YBP 1194 is a sunlike star in the open cluster M67 that has been found to have three planets.
Deep-sky objects
Cancer is best known among stargazers as the home of Praesepe (Messier 44), an open cluster also called the Beehive Cluster, located right in the centre of the constellation. Located about 590 light-years from Earth, it is one of the nearest open clusters to our Solar System. M 44 contains about 50 stars, the brightest of which are of the sixth magnitude. Epsilon Cancri is the brightest member at magnitude 6.3. Praesepe is also one of the larger open clusters visible; it has an area of 1.5 square degrees, or three times the size of the full Moon. It is most easily observed when Cancer is high in the sky. North of the Equator, this period stretches from February to May.
Ptolemy described the Beehive Cluster as "the nebulous mass in the breast of Cancer." It was one of the first objects Galileo observed with his telescope in 1609, spotting 40 stars in the cluster. Today, there are about 1010 high-probability members, most of them (68 percent) red dwarfs. The Greeks and Romans identified the nebulous object as a manger from which two donkeys, represented by the neighbouring stars Asellus Borealis and Asellus Australis, were eating. The stars represent the donkeys that the god Dionysus and his tutor Silenus rode in the war against the Titans. The ancient Chinese interpreted the object as a ghost or demon riding in a carriage, calling it a "cloud of pollen blown from under willow catkins."
The smaller, denser open cluster Messier 67 can also be found in Cancer, 2600 light-years from Earth. It has an area of approximately 0.5 square degrees, the size of the full Moon. It contains approximately 200 stars, the brightest of which are of the tenth magnitude.
QSO J0842+1835 is a quasar used to measure the speed of gravity in a VLBI experiment conducted by Edward Fomalont and Sergei Kopeikin in September 2002.
OJ 287 is a BL Lacertae object located 3.5 billion light years away that has produced quasi-periodic optical outbursts going back approximately 120 years, as first apparent on photographic plates from 1891. It was first detected at radio wavelengths during the course of the Ohio Sky Survey. Its central supermassive black hole is among the largest known, with a mass of 18 billion solar masses, more than six times the value calculated for the previous largest object.
History and mythology
Cancer was first recorded by Claudius Ptolemy in The Mathematical Syntaxis (also known as the Almagest), under the Greek name Καρκίνος (Karkinos).
In the late 1890s, R.H. Allen asserted the following, with no supporting citation:
"Cancer is said to have been the place for the Akkadian Sun of the South, perhaps from its position at the winter solstice in very remote antiquity; but afterwards it was associated with the fourth month Duzu, our June–July, and was known as the Northern Gate of Sun ..."
Very few of Cancer's stars are visible to the naked eye, and its brightest stars are only 4th magnitude. Cancer was often considered the "Dark Sign", quaintly described as "black and without eyes". Dante alluded to its faintness in Paradiso, and mentioned it being visible for the whole night when it culminated at midnight in a Northern Hemisphere winter month:
Then a light among them brightened,
so that, if Cancer one such crystal had,
winter would have a month of only a day.
Cancer was the backdrop to the Sun's most northerly position in the sky (the summer solstice) in ancient times. This is the moment when the Northern Hemisphere is maximally tilted towards the Sun, a date kept within a few days of June 21 in the Gregorian calendar. Equivalently, it is the date when the Sun is directly overhead as far north as 23.437° N. The northernmost parallel where the Sun can be directly overhead is still called the Tropic of Cancer, even though the corresponding position in the sky now occurs in Taurus, due to the precession of the equinoxes.
The close conjunction of Jupiter and Saturn in 1563 – which was observed by Tycho Brahe and led him to note the inaccuracy of existing ephemerides and to begin his own program of astronomical measurements – occurred in Cancer not far from Praesepe.
In Greek mythology, Cancer is identified with the crab that appeared while Heracles fought the many-headed Lernaean Hydra. Heracles slew the crab after it bit him in the foot. Afterwards, the goddess Hera, an enemy of Heracles, placed the crab among the stars.
Illustrations
The modern symbol for Cancer represents the pincers of a crab, but Cancer has been represented as many types of creatures, usually those living in the water, and always those with an exoskeleton.
In the Egyptian records of about 2000 BC it was described as Scarabaeus (Scarab), the sacred emblem of immortality. In Babylonia the constellation was known as MUL.AL.LUL, a name which can refer to both a crab and a snapping turtle. On boundary stones, the image of a turtle or tortoise appears quite regularly and it is believed that this represents Cancer since a conventional crab has not so far been discovered on any of these monuments.
There also appears to be a strong connection between the Babylonian constellation and ideas of death and a passage to the underworld, which may be the origin of these ideas in later Greek myths associated with Hercules and the Hydra.
In the 12th century, an illustrated astronomical manuscript shows it as a water beetle. Albumasar writes of this sign in Flowers of Abu Ma'shar. A 1488 Latin translation depicts Cancer as a large crayfish, which also is the constellation's name in most Germanic languages. Jakob Bartsch and Stanislaus Lubienitzki, in the 17th century, described it as a lobster.
Names
R.H. Allen, in Star Names: Their lore and meanings, lists names for the constellation as follows:
In Ancient Greece, Aratus called the crab Karkinos, which was followed by Hipparchus and Ptolemy. The Alfonsine tables called it Carcinus, a Latinized form of the Greek word. Eratosthenes extended this as Karkinos, Onoi, kai Fatne: the Crab, Asses, and Crib. In Ancient Rome, Manilius and Ovid called the constellation Litoreus (shore-inhabiting). Astacus and Cammarus appear in various classic writers, while it is called Nepa in Cicero's De Finibus and the works of Columella, Plautus, and Varro; all of these words signify a crab, lobster, or scorpion.
Athanasius Kircher said that in Coptic Egypt it was Klaria, the Bestia seu Statio Typhonis (the Power of Darkness). Jérôme Lalande identified this with Anubis, one of the Egyptian divinities commonly associated with Sirius.
The Indian language Sanskrit shares a common ancestor with Greek, and the Sanskrit names of Cancer are Karka and Karkata. In Telugu it is "Karkatakam", in Kannada "Karkataka" or "Kataka", and in Tamil Kadagam. The later Hindus knew it as Kulira, from the Greek Kolouros, the term originated by Proclus.
Astrology
The Sun appears in the constellation Cancer from July 20 to August 9. In tropical astrology, the Sun is considered to be in the sign of Cancer from June 22 to July 22, and in sidereal astrology, from July 16 to August 16. The symbol of the astrological sign (which, due to precession, now roughly covers the constellation Gemini) is ♋︎.
Equivalents
In Chinese astronomy, the stars of Cancer lie within the Vermilion Bird of the South (南方朱雀, Nán Fāng Zhū Què).
Cancer

Cancer is a group of diseases involving abnormal cell growth with the potential to invade or spread to other parts of the body. These contrast with benign tumors, which do not spread. Possible signs and symptoms include a lump, abnormal bleeding, prolonged cough, unexplained weight loss, and a change in bowel movements. While these symptoms may indicate cancer, they can also have other causes. Over 100 types of cancers affect humans.
Tobacco use is the cause of about 22% of cancer deaths. Another 10% are due to obesity, poor diet, lack of physical activity or excessive alcohol consumption. Other factors include certain infections, exposure to ionizing radiation, and environmental pollutants. Infection with specific viruses, bacteria and parasites is an environmental factor causing approximately 16–18% of cancers worldwide. These infectious agents include Helicobacter pylori, hepatitis B, hepatitis C, human papillomavirus infection, Epstein–Barr virus, Human T-lymphotropic virus 1, Kaposi's sarcoma-associated herpesvirus and Merkel cell polyomavirus. Human immunodeficiency virus (HIV) does not directly cause cancer, but it causes immune deficiency that can magnify the risk due to other infections, sometimes up to several thousandfold (in the case of Kaposi's sarcoma). Importantly, vaccination against hepatitis B and human papillomavirus has been shown to nearly eliminate the risk of cancers caused by these viruses in persons successfully vaccinated prior to infection.
These environmental factors act, at least partly, by changing the genes of a cell. Typically, many genetic changes are required before cancer develops. Approximately 5–10% of cancers are due to inherited genetic defects. Cancer can be detected by certain signs and symptoms or screening tests. It is then typically further investigated by medical imaging and confirmed by biopsy.
The risk of developing certain cancers can be reduced by not smoking, maintaining a healthy weight, limiting alcohol intake, eating plenty of vegetables, fruits, and whole grains, vaccination against certain infectious diseases, limiting consumption of processed meat and red meat, and limiting exposure to direct sunlight. Early detection through screening is useful for cervical and colorectal cancer. The benefits of screening for breast cancer are controversial. Cancer is often treated with some combination of radiation therapy, surgery, chemotherapy and targeted therapy. Pain and symptom management are an important part of care. Palliative care is particularly important in people with advanced disease. The chance of survival depends on the type of cancer and extent of disease at the start of treatment. In children under 15 at diagnosis, the five-year survival rate in the developed world is on average 80%. For cancer in the United States, the average five-year survival rate is 66% for all ages.
In 2015, about 90.5 million people worldwide had cancer. In 2019, annual cancer cases grew to 23.6 million, and there were 10 million deaths worldwide, representing increases of 26% and 21%, respectively, over the previous decade.
The most common types of cancer in males are lung cancer, prostate cancer, colorectal cancer, and stomach cancer. In females, the most common types are breast cancer, colorectal cancer, lung cancer, and cervical cancer. If skin cancer other than melanoma were included in total new cancer cases each year, it would account for around 40% of cases. In children, acute lymphoblastic leukemia and brain tumors are most common, except in Africa, where non-Hodgkin lymphoma occurs more often. In 2012, about 165,000 children under 15 years of age were diagnosed with cancer. The risk of cancer increases significantly with age, and many cancers occur more commonly in developed countries. Rates are increasing as more people live to an old age and as lifestyle changes occur in the developing world. The global total economic cost of cancer was estimated at US$1.16 trillion per year.
Etymology and definitions
The word comes from the ancient Greek καρκίνος, meaning 'crab' and 'tumor'. Greek physicians Hippocrates and Galen, among others, noted the similarity of crabs to some tumors with swollen veins. The word was introduced in English in the modern medical sense around 1600.
Cancers comprise a large family of diseases that involve abnormal cell growth with the potential to invade or spread to other parts of the body. They form a subset of neoplasms. A neoplasm or tumor is a group of cells that have undergone unregulated growth and will often form a mass or lump, but may be distributed diffusely.
All tumor cells show the six hallmarks of cancer. These characteristics are required to produce a malignant tumor. They include:
Cell growth and division absent the proper signals
Continuous growth and division even given contrary signals
Avoidance of programmed cell death
Limitless number of cell divisions
Promoting blood vessel construction
Invasion of tissue and formation of metastases
The progression from normal cells to cells that can form a detectable mass to cancer involves multiple steps known as malignant progression.
Signs and symptoms
When cancer begins, it produces no symptoms. Signs and symptoms appear as the mass grows or ulcerates. The findings that result depend on cancer's type and location. Few symptoms are specific. Many frequently occur in individuals who have other conditions. Cancer can be difficult to diagnose and can be considered a "great imitator".
People may become anxious or depressed post-diagnosis. The risk of suicide in people with cancer is approximately double that of the general population.
Local symptoms
Local symptoms may occur due to the mass of the tumor or its ulceration. For example, mass effects from lung cancer can block the bronchus resulting in cough or pneumonia; esophageal cancer can cause narrowing of the esophagus, making it difficult or painful to swallow; and colorectal cancer may lead to narrowing or blockages in the bowel, affecting bowel habits. Masses in breasts or testicles may produce observable lumps. Ulceration can cause bleeding that can lead to symptoms such as coughing up blood (lung cancer), anemia or rectal bleeding (colon cancer), blood in the urine (bladder cancer), or abnormal vaginal bleeding (endometrial or cervical cancer). Although localized pain may occur in advanced cancer, the initial tumor is usually painless. Some cancers can cause a buildup of fluid within the chest or abdomen.
Systemic symptoms
Systemic symptoms may occur due to the body's response to the cancer. This may include fatigue, unintentional weight loss, or skin changes. Some cancers can cause a systemic inflammatory state that leads to ongoing muscle loss and weakness, known as cachexia.
Some cancers, such as Hodgkin's disease, leukemias, and liver or kidney cancers, can cause a persistent fever.
Shortness of breath, called dyspnea, is a common symptom of cancer and its treatment. The causes of cancer-related dyspnea can include tumors in or around the lung, blocked airways, fluid in the lungs, pneumonia, or treatment reactions including an allergic response. Treatment for dyspnea in patients with advanced cancer can include fans, bilevel ventilation, acupressure/reflexology and multicomponent nonpharmacological interventions.
Some systemic symptoms of cancer are caused by hormones or other molecules produced by the tumor, known as paraneoplastic syndromes. Common paraneoplastic syndromes include hypercalcemia, which can cause altered mental state, constipation and dehydration, or hyponatremia, which can also cause altered mental status, vomiting, headaches, or seizures.
Metastasis
Metastasis is the spread of cancer to other locations in the body. The dispersed tumors are called metastatic tumors, while the original is called the primary tumor. Almost all cancers can metastasize. Most cancer deaths are due to cancer that has metastasized.
Metastasis is common in the late stages of cancer and it can occur via the blood or the lymphatic system or both. The typical steps in metastasis are:
Local invasion
Intravasation into the blood or lymph
Circulation through the body
Extravasation into the new tissue
Proliferation
Angiogenesis
Different types of cancers tend to metastasize to particular organs. Overall, the most common places for metastases to occur are the lungs, liver, brain, and the bones.
While some cancers can be cured if detected early, metastatic cancer is more difficult to treat and control. Nevertheless, some recent treatments are demonstrating encouraging results.
Causes
The majority of cancers, some 90–95% of cases, are due to genetic mutations from environmental and lifestyle factors. The remaining 5–10% are due to inherited genetics. Environmental refers to any cause that is not inherited, such as lifestyle, economic, and behavioral factors and not merely pollution. Common environmental factors that contribute to cancer death include tobacco use (25–30%), diet and obesity (30–35%), infections (15–20%), radiation (both ionizing and non-ionizing, up to 10%), lack of physical activity, and pollution. Psychological stress does not appear to be a risk factor for the onset of cancer, though it may worsen outcomes in those who already have cancer.
Environmental or lifestyle factors that caused cancer to develop in an individual can be identified by analyzing mutational signatures from genomic sequencing of tumor DNA. For example, this can reveal if lung cancer was caused by tobacco smoke, if skin cancer was caused by UV radiation, or if secondary cancers were caused by previous chemotherapy treatment.
Cancer is generally not a transmissible disease. Exceptions include rare transmissions that occur with pregnancies and occasional organ donors. However, transmissible infectious diseases such as hepatitis B, Epstein–Barr virus, human papillomavirus and HIV can contribute to the development of cancer.
Chemicals
Exposure to particular substances has been linked to specific types of cancer. These substances are called carcinogens.
Tobacco smoke, for example, causes 90% of lung cancer. Tobacco use can cause cancer throughout the body including in the mouth and throat, larynx, esophagus, stomach, bladder, kidney, cervix, colon/rectum, liver and pancreas. Tobacco smoke contains over fifty known carcinogens, including nitrosamines and polycyclic aromatic hydrocarbons.
Tobacco is responsible for about one in five cancer deaths worldwide and about one in three in the developed world.
Lung cancer death rates in the United States have mirrored smoking patterns, with increases in smoking followed by dramatic increases in lung cancer death rates and, more recently, decreases in smoking rates since the 1950s followed by decreases in lung cancer death rates in men since 1990.
Alcohol increases the risk of cancer of the breast (in women), throat, liver, esophagus, mouth, larynx, and colon.
In Western Europe, 10% of cancers in males and 3% of cancers in females are attributed to alcohol exposure, especially liver and digestive tract cancers. Cancer from work-related substance exposures may cause between 2 and 20% of cases, causing at least 200,000 deaths. Cancers such as lung cancer and mesothelioma can come from inhaling tobacco smoke or asbestos fibers, or leukemia from exposure to benzene.
Exposure to perfluorooctanoic acid (PFOA), which is predominantly used in the production of Teflon, is known to cause two kinds of cancer: kidney cancer and testicular cancer.
Chemotherapy drugs such as platinum-based compounds are carcinogens that increase the risk of secondary cancers.
Azathioprine, an immunosuppressive medication, is a carcinogen that can cause primary tumors to develop.
Diet and exercise
Diet, physical inactivity, and obesity are related to up to 30–35% of cancer deaths. In the United States, excess body weight is associated with the development of many types of cancer and is a factor in 14–20% of cancer deaths. A UK study including data on over 5 million people showed higher body mass index to be related to at least 10 types of cancer and responsible for around 12,000 cases each year in that country. Physical inactivity is believed to contribute to cancer risk, not only through its effect on body weight but also through negative effects on the immune system and endocrine system. More than half of the effect from the diet is due to overnutrition (eating too much), rather than from eating too few vegetables or other healthful foods.
Some specific foods are linked to specific cancers. A high-salt diet is linked to gastric cancer. Aflatoxin B1, a frequent food contaminant, causes liver cancer. Betel nut chewing can cause oral cancer. National differences in dietary practices may partly explain differences in cancer incidence. For example, gastric cancer is more common in Japan due to its high-salt diet while colon cancer is more common in the United States. Immigrant cancer profiles mirror those of their new country, often within one generation.
Infection
Worldwide, approximately 18% of cancer deaths are related to infectious diseases. This proportion ranges from a high of 25% in Africa to less than 10% in the developed world. Viruses are the usual infectious agents that cause cancer but bacteria and parasites may also play a role. Oncoviruses (viruses that can cause human cancer) include:
Human papillomavirus (cervical cancer),
Epstein–Barr virus (B-cell lymphoproliferative disease and nasopharyngeal carcinoma),
Kaposi's sarcoma herpesvirus (Kaposi's sarcoma and primary effusion lymphomas),
Hepatitis B and hepatitis C viruses (hepatocellular carcinoma)
Human T-cell leukemia virus-1 (T-cell leukemias).
Merkel cell polyomavirus (Merkel cell carcinoma)
Bacterial infections may also increase the risk of cancer, as seen in:
Helicobacter pylori-induced gastric carcinoma.
Colibactin, a genotoxin associated with Escherichia coli infection (colorectal cancer)
Parasitic infections associated with cancer include:
Schistosoma haematobium (squamous cell carcinoma of the bladder)
The liver flukes Opisthorchis viverrini and Clonorchis sinensis (cholangiocarcinoma)
Radiation
Radiation exposure such as ultraviolet radiation and radioactive material is a risk factor for cancer. Many non-melanoma skin cancers are due to ultraviolet radiation, mostly from sunlight. Sources of ionizing radiation include medical imaging and radon gas.
Ionizing radiation is not a particularly strong mutagen. Residential exposure to radon gas, for example, has similar cancer risks as passive smoking. Radiation is a more potent source of cancer when combined with other cancer-causing agents, such as radon plus tobacco smoke. Radiation can cause cancer in most parts of the body, in all animals and at any age. Children are twice as likely to develop radiation-induced leukemia as adults; radiation exposure before birth has ten times the effect.
Medical use of ionizing radiation is a small but growing source of radiation-induced cancers. Ionizing radiation may be used to treat other cancers, but this may, in some cases, induce a second form of cancer. It is also used in some kinds of medical imaging.
Prolonged exposure to ultraviolet radiation from the sun can lead to melanoma and other skin malignancies. Clear evidence establishes ultraviolet radiation, especially the non-ionizing medium wave UVB, as the cause of most non-melanoma skin cancers, which are the most common forms of cancer in the world.
Non-ionizing radio frequency radiation from mobile phones, electric power transmission and other similar sources has been described as a possible carcinogen by the World Health Organization's International Agency for Research on Cancer. Evidence, however, has not supported a concern. This includes that studies have not found a consistent link between mobile phone radiation and cancer risk.
Heredity
The vast majority of cancers are non-hereditary (sporadic). Hereditary cancers are primarily caused by an inherited genetic defect. Less than 0.3% of the population are carriers of a genetic mutation that has a large effect on cancer risk, and these mutations cause less than 3–10% of cancers. Some of these syndromes include: certain inherited mutations in the genes BRCA1 and BRCA2, which confer a more than 75% risk of breast cancer and ovarian cancer, and hereditary nonpolyposis colorectal cancer (HNPCC or Lynch syndrome), which is present in about 3% of people with colorectal cancer, among others.
Statistically for cancers causing most mortality, the relative risk of developing colorectal cancer when a first-degree relative (parent, sibling or child) has been diagnosed with it is about 2. The corresponding relative risk is 1.5 for lung cancer, and 1.9 for prostate cancer. For breast cancer, the relative risk is 1.8 with a first-degree relative having developed it at 50 years of age or older, and 3.3 when the relative developed it when being younger than 50 years of age.
Taller people have an increased risk of cancer because they have more cells than shorter people. Since height is genetically determined to a large extent, taller people have a heritable increase of cancer risk.
Physical agents
Some substances cause cancer primarily through their physical, rather than chemical, effects. A prominent example of this is prolonged exposure to asbestos, naturally occurring mineral fibers that are a major cause of mesothelioma (cancer of the serous membrane), usually the serous membrane surrounding the lungs. Other substances in this category, including both naturally occurring and synthetic asbestos-like fibers such as wollastonite, attapulgite, glass wool and rock wool, are believed to have similar effects. Non-fibrous particulate materials that cause cancer include powdered metallic cobalt and nickel and crystalline silica (quartz, cristobalite and tridymite). Usually, physical carcinogens must get inside the body (such as through inhalation) and require years of exposure to produce cancer.
Physical trauma resulting in cancer is relatively rare. Claims that breaking bones resulted in bone cancer, for example, have not been proven. Similarly, physical trauma is not accepted as a cause for cervical cancer, breast cancer or brain cancer. One accepted source is frequent, long-term application of hot objects to the body. It is possible that repeated burns on the same part of the body, such as those produced by kanger and kairo heaters (charcoal hand warmers), may produce skin cancer, especially if carcinogenic chemicals are also present. Frequent consumption of scalding hot tea may produce esophageal cancer. Generally, it is believed that cancer arises, or a pre-existing cancer is encouraged, during the process of healing, rather than directly by the trauma. However, repeated injuries to the same tissues might promote excessive cell proliferation, which could then increase the odds of a cancerous mutation.
Chronic inflammation has been hypothesized to directly cause mutation. Inflammation can contribute to proliferation, survival, angiogenesis and migration of cancer cells by influencing the tumor microenvironment. Oncogenes build up an inflammatory pro-tumorigenic microenvironment.
Hormones
Hormones also play a role in the development of cancer by promoting cell proliferation. Insulin-like growth factors and their binding proteins play a key role in cancer cell proliferation, differentiation and apoptosis, suggesting possible involvement in carcinogenesis.
Hormones are important agents in sex-related cancers, such as cancer of the breast, endometrium, prostate, ovary and testis and also of thyroid cancer and bone cancer. For example, the daughters of women who have breast cancer have significantly higher levels of estrogen and progesterone than the daughters of women without breast cancer. These higher hormone levels may explain their higher risk of breast cancer, even in the absence of a breast-cancer gene. Similarly, men of African ancestry have significantly higher levels of testosterone than men of European ancestry and have a correspondingly higher level of prostate cancer. Men of Asian ancestry, with the lowest levels of testosterone-activating androstanediol glucuronide, have the lowest levels of prostate cancer.
Other factors are relevant: obese people have higher levels of some hormones associated with cancer and a higher rate of those cancers. Women who take hormone replacement therapy have a higher risk of developing cancers associated with those hormones. On the other hand, people who exercise far more than average have lower levels of these hormones and lower risk of cancer. Osteosarcoma may be promoted by growth hormones. Some treatments and prevention approaches leverage this cause by artificially reducing hormone levels and thus discouraging hormone-sensitive cancers.
Autoimmune diseases
There is an association between celiac disease and an increased risk of all cancers. People with untreated celiac disease have a higher risk, but this risk decreases with time after diagnosis and strict treatment. This may be due to the adoption of a gluten-free diet, which seems to have a protective role against development of malignancy in people with celiac disease. However, the delay in diagnosis and initiation of a gluten-free diet seems to increase the risk of malignancies. Rates of gastrointestinal cancers are increased in people with Crohn's disease and ulcerative colitis, due to chronic inflammation. Immunomodulators and biologic agents used to treat these diseases may promote developing extra-intestinal malignancies.
Pathophysiology
Genetics
Cancer is fundamentally a disease of tissue growth regulation. For a normal cell to transform into a cancer cell, the genes that regulate cell growth and differentiation must be altered.
The affected genes are divided into two broad categories. Oncogenes are genes that promote cell growth and reproduction. Tumor suppressor genes are genes that inhibit cell division and survival. Malignant transformation can occur through the formation of novel oncogenes, the inappropriate over-expression of normal oncogenes, or by the under-expression or disabling of tumor suppressor genes. Typically, changes in multiple genes are required to transform a normal cell into a cancer cell.
Genetic changes can occur at different levels and by different mechanisms. The gain or loss of an entire chromosome can occur through errors in mitosis. More common are mutations, which are changes in the nucleotide sequence of genomic DNA.
Large-scale mutations involve the deletion or gain of a portion of a chromosome. Genomic amplification occurs when a cell gains copies (often 20 or more) of a small chromosomal locus, usually containing one or more oncogenes and adjacent genetic material. Translocation occurs when two separate chromosomal regions become abnormally fused, often at a characteristic location. A well-known example of this is the Philadelphia chromosome, or translocation of chromosomes 9 and 22, which occurs in chronic myelogenous leukemia and results in production of the BCR-abl fusion protein, an oncogenic tyrosine kinase.
Small-scale mutations include point mutations, deletions, and insertions, which may occur in the promoter region of a gene and affect its expression, or may occur in the gene's coding sequence and alter the function or stability of its protein product. Disruption of a single gene may also result from integration of genomic material from a DNA virus or retrovirus, leading to the expression of viral oncogenes in the affected cell and its descendants.
Replication of the data contained within the DNA of living cells will probabilistically result in some errors (mutations). Complex error correction and prevention are built into the process and safeguard the cell against cancer. If a significant error occurs, the damaged cell can self-destruct through programmed cell death, termed apoptosis. If the error control processes fail, then the mutations will survive and be passed along to daughter cells.
Some environments make errors more likely to arise and propagate. Such environments can include the presence of disruptive substances called carcinogens, repeated physical injury, heat, ionising radiation, or hypoxia.
The errors that cause cancer are self-amplifying and compounding, for example:
A mutation in the error-correcting machinery of a cell might cause that cell and its children to accumulate errors more rapidly.
A further mutation in an oncogene might cause the cell to reproduce more rapidly and more frequently than its normal counterparts.
A further mutation may cause loss of a tumor suppressor gene, disrupting the apoptosis signaling pathway and immortalizing the cell.
A further mutation in the signaling machinery of the cell might send error-causing signals to nearby cells.
The transformation of a normal cell into cancer is akin to a chain reaction caused by initial errors, which compound into more severe errors, each progressively allowing the cell to escape more controls that limit normal tissue growth. This rebellion-like scenario is an undesirable survival of the fittest, where the driving forces of evolution work against the body's design and enforcement of order. Once cancer has begun to develop, this ongoing process, termed clonal evolution, drives progression towards more invasive stages. Clonal evolution leads to intra-tumour heterogeneity (cancer cells with heterogeneous mutations) that complicates designing effective treatment strategies and requires an evolutionary approach to designing treatment.
Characteristic abilities developed by cancers are divided into categories, specifically evasion of apoptosis, self-sufficiency in growth signals, insensitivity to anti-growth signals, sustained angiogenesis, limitless replicative potential, metastasis, reprogramming of energy metabolism and evasion of immune destruction.
Epigenetics
The classical view of cancer is a set of diseases driven by progressive genetic abnormalities, including mutations in tumor-suppressor genes and oncogenes and chromosomal abnormalities. A role for epigenetic alterations was identified in the early 21st century.
Epigenetic alterations are functionally relevant modifications to the genome that do not change the nucleotide sequence. Examples of such modifications are changes in DNA methylation (hypermethylation and hypomethylation), histone modification and changes in chromosomal architecture (caused by inappropriate expression of proteins such as HMGA2 or HMGA1). Each of these alterations regulates gene expression without altering the underlying DNA sequence. These changes may remain through cell divisions, endure for multiple generations, and can be considered as equivalent to mutations.
Epigenetic alterations occur frequently in cancers. As an example, one study listed protein coding genes that were frequently altered in their methylation in association with colon cancer. These included 147 hypermethylated and 27 hypomethylated genes. Of the hypermethylated genes, 10 were hypermethylated in 100% of colon cancers and many others were hypermethylated in more than 50% of colon cancers.
While epigenetic alterations are found in cancers, the epigenetic alterations in DNA repair genes, causing reduced expression of DNA repair proteins, may be of particular importance. Such alterations may occur early in progression to cancer and are a possible cause of the genetic instability characteristic of cancers.
Reduced expression of DNA repair genes disrupts DNA repair. This is shown in the figure at the 4th level from the top. (In the figure, red wording indicates the central role of DNA damage and defects in DNA repair in progression to cancer.) When DNA repair is deficient, DNA damage remains in cells at a higher than usual level (5th level) and causes increased frequencies of mutation and/or epimutation (6th level). Mutation rates increase substantially in cells defective in DNA mismatch repair or in homologous recombinational repair (HRR). Chromosomal rearrangements and aneuploidy also increase in HRR-defective cells.
Higher levels of DNA damage cause increased mutation (right side of figure) and increased epimutation. During repair of DNA double strand breaks, or repair of other DNA damage, incompletely cleared repair sites can cause epigenetic gene silencing.
Deficient expression of DNA repair proteins due to an inherited mutation can increase cancer risks. Individuals with an inherited impairment in any of 34 DNA repair genes (see article DNA repair-deficiency disorder) have increased cancer risk, with some defects conferring up to a 100% lifetime chance of cancer (e.g. p53 mutations). Germline DNA repair mutations are noted on the figure's left side. However, such germline mutations (which cause highly penetrant cancer syndromes) are the cause of only about 1 percent of cancers.
In sporadic cancers, deficiencies in DNA repair are occasionally caused by a mutation in a DNA repair gene but are much more frequently caused by epigenetic alterations that reduce or silence expression of DNA repair genes. This is indicated in the figure at the 3rd level. Many studies of heavy metal-induced carcinogenesis show that such heavy metals cause a reduction in expression of DNA repair enzymes, some through epigenetic mechanisms. DNA repair inhibition is proposed to be a predominant mechanism in heavy metal-induced carcinogenicity. In addition, frequent epigenetic alterations occur in the DNA sequences that code for small RNAs called microRNAs (or miRNAs). miRNAs do not code for proteins, but can "target" protein-coding genes and reduce their expression.
Cancers usually arise from an assemblage of mutations and epimutations that confer a selective advantage leading to clonal expansion (see Field defects in progression to cancer). Mutations, however, may not be as frequent in cancers as epigenetic alterations. An average cancer of the breast or colon can have about 60 to 70 protein-altering mutations, of which about three or four may be "driver" mutations and the remaining ones may be "passenger" mutations.
Metastasis
Metastasis is the spread of cancer to other locations in the body. The dispersed tumors are called metastatic tumors, while the original is called the primary tumor. Almost all cancers can metastasize. Most cancer deaths are due to cancer that has metastasized.
Metastasis is common in the late stages of cancer and it can occur via the blood or the lymphatic system or both. The typical steps in metastasis are local invasion, intravasation into the blood or lymph, circulation through the body, extravasation into the new tissue, proliferation and angiogenesis. Different types of cancers tend to metastasize to particular organs, but overall the most common places for metastases to occur are the lungs, liver, brain and the bones.
Metabolism
Normal cells typically generate only about 30% of their energy from glycolysis, whereas most cancers rely on glycolysis for energy production (the Warburg effect). However, a minority of cancer types, including lymphoma, leukemia, and endometrial cancer, rely on oxidative phosphorylation as the primary energy source. Even in these cases, the use of glycolysis as an energy source rarely exceeds 60%. A few cancers use glutamine as the major energy source, partly because it provides the nitrogen required for nucleotide (DNA, RNA) synthesis. Cancer stem cells often use oxidative phosphorylation or glutamine as a primary energy source.
Diagnosis
Most cancers are initially recognized either because of the appearance of signs or symptoms or through screening. Neither of these leads to a definitive diagnosis, which requires the examination of a tissue sample by a pathologist. People with suspected cancer are investigated with medical tests. These commonly include blood tests, X-rays, (contrast) CT scans and endoscopy.
The tissue diagnosis from the biopsy indicates the type of cell that is proliferating, its histological grade, genetic abnormalities and other features. Together, this information is useful to evaluate the prognosis and to choose the best treatment.
Cytogenetics and immunohistochemistry are other types of tissue tests. These tests provide information about molecular changes (such as mutations, fusion genes and numerical chromosome changes) and may thus also indicate the prognosis and best treatment.
Cancer diagnosis can cause psychological distress and psychosocial interventions, such as talking therapy, may help people with this. Some people choose to disclose the diagnosis widely; others prefer to keep the information private, especially shortly after the diagnosis, or to disclose it only partially or to selected people.
Classification
Cancers are classified by the type of cell that the tumor cells resemble, which is therefore presumed to be the origin of the tumor. These types include:
Carcinoma: Cancers derived from epithelial cells. This group includes many of the most common cancers, including nearly all those of the breast, prostate, lung, pancreas and colon. Most of these are of the adenocarcinoma type, which means that the cancer has gland-like differentiation.
Sarcoma: Cancers arising from connective tissue (e.g. bone, cartilage, fat, nerve), each of which develops from cells originating in mesenchymal cells outside the bone marrow.
Lymphoma and leukemia: These two classes arise from hematopoietic (blood-forming) cells that leave the marrow and tend to mature in the lymph nodes and blood, respectively.
Germ cell tumor: Cancers derived from pluripotent cells, most often presenting in the testicle or the ovary (seminoma and dysgerminoma, respectively).
Blastoma: Cancers derived from immature "precursor" cells or embryonic tissue.
Cancers are usually named using -carcinoma, -sarcoma or -blastoma as a suffix, with the Latin or Greek word for the organ or tissue of origin as the root. For example, a cancer of the liver parenchyma arising from malignant epithelial cells is called hepatocarcinoma, while a malignancy arising from primitive liver precursor cells is called a hepatoblastoma and a cancer arising from fat cells is called a liposarcoma. For some common cancers, the English organ name is used. For example, the most common type of breast cancer is called ductal carcinoma of the breast. Here, the adjective ductal refers to the appearance of the cancer under the microscope, which suggests that it has originated in the milk ducts.
Benign tumors (which are not cancers) are named using -oma as a suffix with the organ name as the root. For example, a benign tumor of smooth muscle cells is called a leiomyoma (the common name of this frequently occurring benign tumor in the uterus is fibroid). Confusingly, some types of cancer use the -noma suffix, examples including melanoma and seminoma.
Some types of cancer are named for the size and shape of the cells under a microscope, such as giant cell carcinoma, spindle cell carcinoma and small-cell carcinoma.
Prevention
Cancer prevention is defined as active measures to decrease cancer risk. The vast majority of cancer cases are due to environmental risk factors. Many of these environmental factors are controllable lifestyle choices. Thus, cancer is generally preventable. Between 70% and 90% of common cancers are due to environmental factors and therefore potentially preventable.
More than 30% of cancer deaths could be prevented by avoiding risk factors including tobacco, excess weight/obesity, poor diet, physical inactivity, alcohol, sexually transmitted infections and air pollution. Poverty can also be considered an indirect risk factor in human cancers. Not all causes are controllable: naturally occurring background radiation cannot be avoided, and cancers caused by hereditary genetic disorders are not preventable through personal behavior.
In 2019, approximately 44% of all cancer deaths (about 4.5 million deaths, or about 105 million lost disability-adjusted life years) were due to known, clearly preventable risk factors, led by smoking, alcohol use and high body mass index, according to a Global Burden of Disease systematic analysis.
Dietary
While many dietary recommendations have been proposed to reduce cancer risks, the evidence to support them is not definitive. The primary dietary factors that increase risk are obesity and alcohol consumption. Diets low in fruits and vegetables and high in red meat have been implicated but reviews and meta-analyses do not come to a consistent conclusion. A 2014 meta-analysis found no relationship between fruits and vegetables and cancer. Coffee is associated with a reduced risk of liver cancer. Studies have linked excessive consumption of red or processed meat to an increased risk of breast cancer, colon cancer and pancreatic cancer, a phenomenon that could be due to the presence of carcinogens in meats cooked at high temperatures. In 2015 the IARC reported that eating processed meat (e.g., bacon, ham, hot dogs, sausages) and, to a lesser degree, red meat was linked to some cancers.
Dietary recommendations for cancer prevention typically include an emphasis on vegetables, fruit, whole grains and fish and an avoidance of processed and red meat (beef, pork, lamb), animal fats, pickled foods and refined carbohydrates.
Medication
Medications can be used to prevent cancer in a few circumstances. In the general population, NSAIDs reduce the risk of colorectal cancer; however, due to cardiovascular and gastrointestinal side effects, they cause overall harm when used for prevention. Aspirin has been found to reduce the risk of death from cancer by about 7%. COX-2 inhibitors may decrease the rate of polyp formation in people with familial adenomatous polyposis, but they are associated with the same adverse effects as NSAIDs. Daily use of tamoxifen or raloxifene reduces the risk of breast cancer in high-risk women. The balance of benefit versus harm for 5-alpha-reductase inhibitors such as finasteride is not clear.
Vitamin supplementation does not appear to be effective at preventing cancer. While low blood levels of vitamin D are correlated with increased cancer risk, it has not been determined whether this relationship is causal or whether vitamin D supplementation is protective. One 2014 review found that supplements had no significant effect on cancer risk. Another 2014 review concluded that vitamin D3 may decrease the risk of death from cancer (one fewer death in 150 people treated over 5 years), but concerns with the quality of the data were noted.
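The "one fewer death in 150 people treated over 5 years" figure above is a number needed to treat (NNT). As a minimal sketch of the arithmetic behind such figures (the helper functions are illustrative, not from any cited study):

```python
# Number needed to treat (NNT) arithmetic for figures like the one above.
# NNT = 1 / absolute risk reduction (ARR): an NNT of 150 over 5 years means
# the absolute risk of cancer death falls by 1/150 (about 0.67 percentage
# points) among treated people over that period.

def absolute_risk_reduction(nnt: float) -> float:
    """Absolute risk reduction implied by a given NNT."""
    return 1.0 / nnt

def deaths_averted(nnt: float, people_treated: int) -> float:
    """Expected deaths averted when `people_treated` people are treated."""
    return people_treated / nnt

arr = absolute_risk_reduction(150)
print(f"ARR: {arr:.4f}")            # prints "ARR: 0.0067"
print(deaths_averted(150, 3000))    # prints 20.0 (per 3,000 people treated)
```

This is only the point-estimate arithmetic; the review's stated concerns about data quality mean the true NNT carries substantial uncertainty.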
Beta-carotene supplementation increases lung cancer rates in those who are at high risk. Folic acid supplementation is not effective in preventing colon cancer and may increase colon polyps. Selenium supplementation has not been shown to reduce the risk of cancer.
Vaccination
Vaccines have been developed that prevent infection by some carcinogenic viruses. Human papillomavirus vaccines (Gardasil and Cervarix) decrease the risk of developing cervical cancer. The hepatitis B vaccine prevents infection with hepatitis B virus and thus decreases the risk of liver cancer. The administration of human papillomavirus and hepatitis B vaccinations is recommended where resources allow.
Screening
Unlike diagnostic efforts prompted by symptoms and medical signs, cancer screening involves efforts to detect cancer after it has formed, but before any noticeable symptoms appear. This may involve physical examination, blood or urine tests or medical imaging.
Cancer screening is not available for many types of cancers. Even when tests are available, they may not be recommended for everyone. Universal screening or mass screening involves screening everyone. Selective screening identifies people who are at higher risk, such as people with a family history. Several factors are considered to determine whether the benefits of screening outweigh the risks and the costs of screening. These factors include:
Possible harms from the screening test: for example, X-ray images involve exposure to potentially harmful ionizing radiation
The likelihood of the test correctly identifying cancer
The likelihood that cancer is present: Screening is not normally useful for rare cancers.
Possible harms from follow-up procedures
Whether suitable treatment is available
Whether early detection improves treatment outcomes
Whether cancer will ever need treatment
Whether the test is acceptable to the people: If a screening test is too burdensome (for example, extremely painful), then people will refuse to participate.
Cost
Recommendations
U.S. Preventive Services Task Force
The U.S. Preventive Services Task Force (USPSTF) issues recommendations for various cancers:
Strongly recommends cervical cancer screening in women who are sexually active and have a cervix at least until the age of 65.
Recommends that Americans be screened for colorectal cancer via fecal occult blood testing, sigmoidoscopy, or colonoscopy starting at age 50 until age 75.
Evidence is insufficient to recommend for or against screening for skin cancer, oral cancer, lung cancer, or prostate cancer in men under 75.
Routine screening is not recommended for bladder cancer, testicular cancer, ovarian cancer, pancreatic cancer, or prostate cancer.
Recommends mammography for breast cancer screening every two years from ages 50–74, but does not recommend either breast self-examination or clinical breast examination. A 2013 Cochrane review concluded that breast cancer screening by mammography had no effect in reducing mortality because of overdiagnosis and overtreatment.
Japan
Japan screens for gastric cancer using photofluorography due to the high incidence there.
Genetic testing
Genetic testing for individuals at high risk of certain cancers is recommended by unofficial groups. Carriers of these mutations may then undergo enhanced surveillance, chemoprevention, or preventative surgery to reduce their subsequent risk.
Management
Many treatment options for cancer exist. The primary ones include surgery, chemotherapy, radiation therapy, hormonal therapy, targeted therapy and palliative care. Which treatments are used depends on the type, location and grade of the cancer as well as the patient's health and preferences. The treatment intent may or may not be curative.
Chemotherapy
Chemotherapy is the treatment of cancer with one or more cytotoxic anti-neoplastic drugs (chemotherapeutic agents) as part of a standardized regimen. The term encompasses a variety of drugs, which are divided into broad categories such as alkylating agents and antimetabolites. Traditional chemotherapeutic agents act by killing cells that divide rapidly, a critical property of most cancer cells.
Providing combined cytotoxic drugs has been found to be better than a single drug, an approach called combination therapy, which can improve survival, tumor response and disease progression. A Cochrane review concluded that combination therapy was more effective for treating metastasized breast cancer. However, it is generally not certain whether combination chemotherapy leads to better health outcomes when both survival and toxicity are considered.
Targeted therapy is a form of chemotherapy that targets specific molecular differences between cancer and normal cells. The first targeted therapies blocked the estrogen receptor molecule, inhibiting the growth of breast cancer. Another common example is the class of Bcr-Abl inhibitors, which are used to treat chronic myelogenous leukemia (CML). Currently, targeted therapies exist for many of the most common cancer types, including bladder cancer, breast cancer, colorectal cancer, kidney cancer, leukemia, liver cancer, lung cancer, lymphoma, pancreatic cancer, prostate cancer, skin cancer, and thyroid cancer as well as other cancer types.
The efficacy of chemotherapy depends on the type of cancer and the stage. In combination with surgery, chemotherapy has proven useful in cancer types including breast cancer, colorectal cancer, pancreatic cancer, osteogenic sarcoma, testicular cancer, ovarian cancer and certain lung cancers. Chemotherapy is curative for some cancers, such as some leukemias, ineffective in some brain tumors, and needless in others, such as most non-melanoma skin cancers. The effectiveness of chemotherapy is often limited by its toxicity to other tissues in the body. Even when chemotherapy does not provide a permanent cure, it may be useful to reduce symptoms such as pain or to reduce the size of an inoperable tumor in the hope that surgery will become possible in the future.
Radiation
Radiation therapy involves the use of ionizing radiation in an attempt to either cure or improve symptoms. It works by damaging the DNA of cancerous tissue, causing mitotic catastrophe resulting in the death of the cancer cells. To spare normal tissues (such as skin or organs, which radiation must pass through to treat the tumor), shaped radiation beams are aimed from multiple exposure angles to intersect at the tumor, providing a much larger dose there than in the surrounding, healthy tissue. As with chemotherapy, cancers vary in their response to radiation therapy.
Radiation therapy is used in about half of cases. The radiation can be either from internal sources (brachytherapy) or external sources. The radiation is most commonly low-energy X-rays for treating skin cancers, while higher-energy X-rays are used for cancers within the body. Radiation is typically used in addition to surgery and/or chemotherapy. For certain types of cancer, such as early head and neck cancer, it may be used alone. Radiation therapy after surgery for brain metastases has been shown not to improve overall survival compared to surgery alone. For painful bone metastasis, radiation therapy has been found to be effective in about 70% of patients.
Surgery
Surgery is the primary method of treatment for most isolated, solid cancers and may play a role in palliation and prolongation of survival. It is typically an important part of definitive diagnosis and staging of tumors, as biopsies are usually required. In localized cancer, surgery typically attempts to remove the entire mass along with, in certain cases, the lymph nodes in the area. For some types of cancer this is sufficient to eliminate the cancer.
Palliative care
Palliative care is treatment that attempts to help the patient feel better and may be combined with an attempt to treat the cancer. Palliative care includes action to reduce physical, emotional, spiritual and psycho-social distress. Unlike treatment that is aimed at directly killing cancer cells, the primary goal of palliative care is to improve quality of life.
People at all stages of cancer treatment typically receive some kind of palliative care. In some cases, medical specialty professional organizations recommend that patients and physicians respond to cancer only with palliative care. This applies to patients who:
Display low performance status, implying limited ability to care for themselves
Received no benefit from prior evidence-based treatments
Are not eligible to participate in any appropriate clinical trial
Have no strong evidence that further treatment would be effective
Palliative care may be confused with hospice care and therefore thought to be indicated only when people approach the end of life. Like hospice care, palliative care attempts to help the patient cope with their immediate needs and to increase comfort. Unlike hospice care, palliative care does not require people to stop treatment aimed at the cancer.
Multiple national medical guidelines recommend early palliative care for patients whose cancer has produced distressing symptoms or who need help coping with their illness. In patients first diagnosed with metastatic disease, palliative care may be immediately indicated. Palliative care is indicated for patients with a prognosis of less than 12 months of life even given aggressive treatment.
Immunotherapy
A variety of immunotherapies, which stimulate or help the immune system to fight cancer, have come into use since 1997. Approaches include:
Monoclonal antibody therapy
Checkpoint therapy (therapy that targets the immune checkpoints or regulators of the immune system)
Adoptive cell transfer
Laser therapy
Laser therapy uses high-intensity light to treat cancer by shrinking or destroying tumors or precancerous growths. Lasers are most commonly used to treat superficial cancers on the surface of the body or the lining of internal organs. They are used to treat basal cell skin cancer and the very early stages of other cancers, such as cervical, penile, vaginal, vulvar, and non-small cell lung cancer. Laser therapy is often combined with other treatments, such as surgery, chemotherapy, or radiation therapy. Laser-induced interstitial thermotherapy (LITT), or interstitial laser photocoagulation, uses lasers to treat some cancers via hyperthermia, which uses heat to shrink tumors by damaging or killing cancer cells. Lasers are more precise than surgery and cause less damage, pain, bleeding, swelling, and scarring. A disadvantage is that surgeons must have specialized training, and laser therapy may be more expensive than other treatments.
Alternative medicine
Complementary and alternative cancer treatments are a diverse group of therapies, practices and products that are not part of conventional medicine. "Complementary medicine" refers to methods and substances used along with conventional medicine, while "alternative medicine" refers to compounds used instead of conventional medicine. Most complementary and alternative medicines for cancer have not been studied or tested using conventional techniques such as clinical trials. Some alternative treatments have been investigated and shown to be ineffective but still continue to be marketed and promoted. Cancer researcher Andrew J. Vickers stated, "The label 'unproven' is inappropriate for such therapies; it is time to assert that many alternative cancer therapies have been 'disproven'."
Prognosis
Survival rates vary by cancer type and by the stage at which it is diagnosed, ranging from majority survival to complete mortality five years after diagnosis. Once a cancer has metastasized, prognosis normally becomes much worse. About half of patients receiving treatment for invasive cancer (excluding carcinoma in situ and non-melanoma skin cancers) die from that cancer or its treatment. A majority of cancer deaths are due to metastases of the primary tumor.
Survival is worse in the developing world, partly because the types of cancer that are most common there are harder to treat than those associated with developed countries.
Those who survive cancer develop a second primary cancer at about twice the rate of those never diagnosed. The increased risk is believed to be due to the random chance of developing any cancer, the likelihood of surviving the first cancer, the same risk factors that produced the first cancer, unwanted side effects of treating the first cancer (particularly radiation therapy), and better compliance with screening.
Predicting short- or long-term survival depends on many factors. The most important are the cancer type and the patient's age and overall health. Those who are frail with other health problems have lower survival rates than otherwise healthy people. Centenarians are unlikely to survive for five years even if treatment is successful. People who report a higher quality of life tend to survive longer. People with lower quality of life may be affected by depression and other complications and/or disease progression that both impairs quality and quantity of life. Additionally, patients with worse prognoses may be depressed or report poorer quality of life because they perceive that their condition is likely to be fatal.
People with cancer have an increased risk of blood clots in their veins, which can be life-threatening. The use of blood thinners such as heparin decreases the risk of blood clots but has not been shown to increase survival in people with cancer. People who take blood thinners also have an increased risk of bleeding.
Although extremely rare, some forms of cancer, even from an advanced stage, can heal spontaneously. This phenomenon is known as spontaneous remission.
Epidemiology
Estimates are that in 2018, 18.1 million new cases of cancer and 9.6 million deaths occurred globally. About 20% of males and 17% of females will get cancer at some point, while 13% of males and 9% of females will die from it.
In 2008, approximately 12.7 million cancers were diagnosed (excluding non-melanoma skin cancers and other non-invasive cancers) and in 2010 nearly 7.98 million people died. Cancers account for approximately 16% of deaths. The most common are lung cancer (1.76 million deaths), colorectal cancer (860,000), stomach cancer (780,000), liver cancer (780,000), and breast cancer (620,000). This makes invasive cancer the leading cause of death in the developed world and the second leading cause in the developing world. Over half of cases occur in the developing world.
Deaths from cancer were 5.8 million in 1990. Deaths have been increasing primarily due to longer lifespans and lifestyle changes in the developing world. The most significant risk factor for developing cancer is age. Although it is possible for cancer to strike at any age, most patients with invasive cancer are over 65. According to cancer researcher Robert A. Weinberg, "If we lived long enough, sooner or later we all would get cancer." Some of the association between aging and cancer is attributed to immunosenescence, errors accumulated in DNA over a lifetime and age-related changes in the endocrine system. Aging's effect on cancer is complicated by factors such as DNA damage and inflammation promoting it and factors such as vascular aging and endocrine changes inhibiting it.
Some slow-growing cancers are particularly common, but often are not fatal. Autopsy studies in Europe and Asia showed that up to 36% of people have undiagnosed and apparently harmless thyroid cancer at the time of their deaths and that 80% of men develop prostate cancer by age 80. As these cancers do not cause the patient's death, identifying them would have represented overdiagnosis rather than useful medical care.
The three most common childhood cancers are leukemia (34%), brain tumors (23%) and lymphomas (12%). In the United States cancer affects about 1 in 285 children. Rates of childhood cancer increased by 0.6% per year between 1975 and 2002 in the United States and by 1.1% per year between 1978 and 1997 in Europe. Death from childhood cancer decreased by half between 1975 and 2010 in the United States.
History
Cancer has existed for all of human history. The earliest written record regarding cancer appears in the Egyptian Edwin Smith Papyrus and describes breast cancer. Hippocrates described several kinds of cancer, referring to them with the Greek word καρκίνος karkinos (crab or crayfish). This name comes from the appearance of the cut surface of a solid malignant tumor, with "the veins stretched on all sides as the animal the crab has its feet, whence it derives its name". Galen stated that "cancer of the breast is so called because of the fancied resemblance to a crab given by the lateral prolongations of the tumor and the adjacent distended veins". Celsus ( – 50 AD) translated karkinos into the Latin cancer, also meaning crab, and recommended surgery as treatment. Galen (2nd century AD) disagreed with the use of surgery and recommended purgatives instead. These recommendations largely stood for 1000 years.
In the 15th, 16th and 17th centuries, it became acceptable for doctors to dissect bodies to discover the cause of death. The German professor Wilhelm Fabry believed that breast cancer was caused by a milk clot in a mammary duct. The Dutch professor Francois de la Boe Sylvius, a follower of Descartes, believed that all disease was the outcome of chemical processes and that acidic lymph fluid was the cause of cancer. His contemporary Nicolaes Tulp believed that cancer was a poison that slowly spreads and concluded that it was contagious.
The physician John Hill described tobacco sniffing as the cause of nose cancer in 1761. This was followed by the report in 1775 by British surgeon Percivall Pott that chimney sweeps' carcinoma, a cancer of the scrotum, was a common disease among chimney sweeps. With the widespread use of the microscope in the 18th century, it was discovered that the 'cancer poison' spread from the primary tumor through the lymph nodes to other sites ("metastasis"). This view of the disease was first formulated by the English surgeon Campbell De Morgan between 1871 and 1874.
Society and culture
Although many diseases (such as heart failure) may have a worse prognosis than most cases of cancer, cancer is the subject of widespread fear and taboos. The euphemism of "a long illness" to describe cancers leading to death is still commonly used in obituaries, rather than naming the disease explicitly, reflecting an apparent stigma. Cancer is also euphemized as "the C-word"; Macmillan Cancer Support uses the term to try to lessen the fear around the disease. In Nigeria, one local name for cancer translates into English as "the disease that cannot be cured". This deep belief that cancer is necessarily a difficult and usually deadly disease is reflected in the systems chosen by society to compile cancer statistics: the most common forms of cancer (non-melanoma skin cancers, accounting for about one-third of cancer cases worldwide but very few deaths) are excluded from cancer statistics specifically because they are easily treated and almost always cured, often in a single, short, outpatient procedure.
Western conceptions of patients' rights for people with cancer include a duty to fully disclose the medical situation to the person, and the right to engage in shared decision-making in a way that respects the person's own values. In other cultures, other rights and values are preferred. For example, most African cultures value whole families rather than individualism. In parts of Africa, a diagnosis is commonly made so late that cure is not possible, and treatment, if available at all, would quickly bankrupt the family. As a result of these factors, African healthcare providers tend to let family members decide whether, when and how to disclose the diagnosis, and they tend to do so slowly and circuitously, as the person shows interest and an ability to cope with the grim news. People from Asian and South American countries also tend to prefer a slower, less candid approach to disclosure than is idealized in the United States and Western Europe, and they believe that sometimes it would be preferable not to be told about a cancer diagnosis. In general, disclosure of the diagnosis is more common than it was in the 20th century, but full disclosure of the prognosis is not offered to many patients around the world.
In the United States and some other cultures, cancer is regarded as a disease that must be "fought" to end the "civil insurrection"; a War on Cancer was declared in the US. Military metaphors are particularly common in descriptions of cancer's human effects, and they emphasize both the state of the patient's health and the need to take immediate, decisive action oneself rather than to delay, ignore, or rely entirely on others. The military metaphors also help rationalize radical, destructive treatments.
In the 1970s, a relatively popular alternative cancer treatment in the US was a specialized form of talk therapy, based on the idea that cancer was caused by a bad attitude. People with a "cancer personality"—depressed, repressed, self-loathing and afraid to express their emotions—were believed to have manifested cancer through subconscious desire. Some psychotherapists claimed that treatment to change the patient's outlook on life would cure the cancer. Among other effects, this belief allowed society to blame the victim for having caused the cancer (by "wanting" it) or having prevented its cure (by not becoming a sufficiently happy, fearless and loving person). It also increased patients' anxiety, as they incorrectly believed that natural emotions of sadness, anger or fear shorten their lives. The idea was ridiculed by Susan Sontag, who published Illness as Metaphor while recovering from treatment for breast cancer in 1978. Although the original idea is now generally regarded as nonsense, the idea partly persists in a reduced form with a widespread, but incorrect, belief that deliberately cultivating a habit of positive thinking will increase survival. This notion is particularly strong in breast cancer culture.
One idea about why people with cancer are blamed or stigmatized, called the just-world fallacy, is that blaming cancer on the patient's actions or attitudes allows the blamers to regain a sense of control. This is based upon the blamers' belief that the world is fundamentally just and so any dangerous illness, like cancer, must be a type of punishment for bad choices, because in a just world, bad things would not happen to good people.
Economic effect
The total health care expenditure on cancer in the US was estimated to be $80.2 billion in 2015. Even though cancer-related health care expenditures have increased in absolute terms during recent decades, the share of health expenditure devoted to cancer treatment has remained close to 5% between the 1960s and 2004. A similar pattern has been observed in Europe, where about 6% of all health care expenditure is spent on cancer treatment. In addition to health care expenditure and financial toxicity, cancer causes indirect costs in the form of productivity losses due to sick days, permanent incapacity and disability, as well as premature death during working age. Cancer also generates costs for informal care. Indirect costs and informal care costs are typically estimated to exceed or equal the health care costs of cancer.
Workplace
In the United States, cancer is included as a protected condition by the Equal Employment Opportunity Commission (EEOC), mainly due to the potential for cancer having discriminating effects on workers. Discrimination in the workplace could occur if an employer holds a false belief that a person with cancer is not capable of doing a job properly or will need more sick leave than other employees. Employers may also make hiring or firing decisions based on misconceptions about cancer disabilities, if present. The EEOC provides interview guidelines for employers, as well as lists of possible solutions for assessing and accommodating employees with cancer.
Effect on divorce
A study found women were around six times more likely to be divorced soon after a diagnosis of cancer compared to men. In one study, the rate of separation among cancer survivors correlated with race, age, income, and comorbidities. A review found a somewhat decreased divorce rate for most cancer types, and noted study heterogeneity and methodological weaknesses in many studies on the effects of cancer on divorce.
Research
Because cancer is a class of diseases, it is unlikely that there will ever be a single "cure for cancer" any more than there will be a single treatment for all infectious diseases. Angiogenesis inhibitors were once incorrectly thought to have potential as a "silver bullet" treatment applicable to many types of cancer. Angiogenesis inhibitors and other cancer therapeutics are used in combination to reduce cancer morbidity and mortality.
Experimental cancer treatments are studied in clinical trials to compare the proposed treatment to the best existing treatment. Treatments that succeeded in one cancer type can be tested against other types. Diagnostic tests are under development to better target the right therapies to the right patients, based on their individual biology.
Cancer research focuses on the following issues:
Agents (e.g. viruses) and events (e.g. mutations) that cause or facilitate genetic changes in cells destined to become cancer.
The precise nature of the genetic damage and the genes that are affected by it.
The consequences of those genetic changes on the biology of the cell, both in generating the defining properties of a cancer cell and in facilitating additional genetic events that lead to further progression of the cancer.
The improved understanding of molecular biology and cellular biology due to cancer research has led to new treatments for cancer since US President Richard Nixon declared the "War on Cancer" in 1971. Since then, the country has spent over $200 billion on cancer research, including resources from public and private sectors. The cancer death rate (adjusting for size and age of the population) declined by five percent between 1950 and 2005.
Competition for financial resources appears to have suppressed the creativity, cooperation, risk-taking and original thinking required to make fundamental discoveries, unduly favoring low-risk research into small incremental advancements over riskier, more innovative research. Other consequences of competition appear to be many studies with dramatic claims whose results cannot be replicated and perverse incentives that encourage grantee institutions to grow without making sufficient investments in their own faculty and facilities.
Virotherapy, which uses oncolytic viruses, is being studied.
In the wake of the COVID-19 pandemic, there has been concern that cancer research and treatment are slowing down.
On 2 December 2023, Nano Today published research on "NK cell-engaging nanodrones" for targeted cancer treatment. These nanodrones combine nanotechnology and immunotherapy to target and eliminate cancer cells with high precision. They are designed to harness natural killer (NK) cells, which play a crucial role in the body's immune response against tumors. By directing NK cells specifically to the sites of tumors, the nanodrones can concentrate the immune system's attack on the cancer cells, potentially leading to better outcomes for patients.
The key innovation here lies in the use of protein cage nanoparticle-based systems. These systems are engineered to carry signals that attract NK cells directly to the tumor, overcoming one of the major challenges in cancer immunotherapy: ensuring that the immune cells find and attack only the cancer cells without harming healthy tissue. This targeted approach not only increases the efficacy of the treatment but also minimizes side effects, a common concern with broader-acting cancer therapies.
Pregnancy
Cancer affects approximately 1 in 1,000 pregnant women. The most common cancers found during pregnancy are the same as the most common cancers found in non-pregnant women during childbearing ages: breast cancer, cervical cancer, leukemia, lymphoma, melanoma, ovarian cancer and colorectal cancer.
Diagnosing a new cancer in a pregnant woman is difficult, in part because any symptoms are commonly assumed to be a normal discomfort associated with pregnancy. As a result, cancer is typically discovered at a somewhat later stage than average. Some imaging procedures, such as MRIs (magnetic resonance imaging), CT scans, ultrasounds and mammograms with fetal shielding are considered safe during pregnancy; some others, such as PET scans, are not.
Treatment is generally the same as for non-pregnant women. However, radiation and radioactive drugs are normally avoided during pregnancy, especially if the fetal dose might exceed 100 cGy. In some cases, some or all treatments are postponed until after birth if the cancer is diagnosed late in the pregnancy. Early deliveries are often used to advance the start of treatment. Surgery is generally safe, but pelvic surgeries during the first trimester may cause miscarriage. Some treatments, especially certain chemotherapy drugs given during the first trimester, increase the risk of birth defects and pregnancy loss (spontaneous abortions and stillbirths).
Elective abortions are not required and, for the most common forms and stages of cancer, do not improve the mother's survival. In a few instances, such as advanced uterine cancer, the pregnancy cannot be continued and in others, the patient may end the pregnancy so that she can begin aggressive chemotherapy.
Some treatments can interfere with the mother's ability to give birth vaginally or to breastfeed. Cervical cancer may require birth by Caesarean section. Radiation to the breast reduces the ability of that breast to produce milk and increases the risk of mastitis. Also, when chemotherapy is given after birth, many of the drugs appear in breast milk, which could harm the baby.
Other animals
Veterinary oncology, concentrating mainly on cats and dogs, is a growing specialty in wealthy countries and the major forms of human treatment such as surgery and radiotherapy may be offered. The most common types of cancer differ, but the cancer burden seems at least as high in pets as in humans. Animals, typically rodents, are often used in cancer research and studies of natural cancers in larger animals may benefit research into human cancer.
Data on cancer in wild animals remain limited. Nonetheless, a study published in 2022 explored cancer risk in (non-domesticated) zoo mammals belonging to 191 species (110,148 individuals) and demonstrated that cancer is a ubiquitous disease of mammals that can emerge anywhere along the mammalian phylogeny. The research also highlighted that cancer risk is not uniformly distributed across mammals. For instance, species in the order Carnivora are particularly prone to cancer (e.g. over 25% of clouded leopards, bat-eared foxes and red wolves die of cancer), while ungulates (especially even-toed ungulates) appear to face consistently low cancer risks.
In non-humans, a few types of transmissible cancer have also been described, wherein the cancer spreads between animals by transmission of the tumor cells themselves. This phenomenon is seen in dogs with Sticker's sarcoma (also known as canine transmissible venereal tumor), and in Tasmanian devils with devil facial tumour disease (DFTD).
Strange quark

The strange quark or s quark (from its symbol, s) is the third lightest of all quarks, a type of elementary particle. Strange quarks are found in subatomic particles called hadrons. Examples of hadrons containing strange quarks include kaons (K), strange D mesons (Ds), Sigma baryons (Σ), and other strange particles.
According to the IUPAP, the symbol s is the official name, while "strange" is to be considered only as a mnemonic. The name sideways has also been used because the s quark (like the other quarks apart from up and down) has an I3 value of 0, while the u ("up") and d ("down") quarks have values of +1/2 and −1/2 respectively.
Along with the charm quark, it is part of the second generation of matter. It has an electric charge of −1/3 e and a bare mass of about 95 MeV/c². Like all quarks, the strange quark is an elementary fermion with spin 1/2, and experiences all four fundamental interactions: gravitation, electromagnetism, weak interactions, and strong interactions. The antiparticle of the strange quark is the strange antiquark (sometimes called antistrange quark or simply antistrange), which differs from it only in that some of its properties have equal magnitude but opposite sign.
The first strange particle (a particle containing a strange quark) was discovered by George Rochester and Clifford Butler at the Department of Physics and Astronomy, University of Manchester in 1947 (kaons), with the existence of the strange quark itself (and that of the up and down quarks) postulated in 1964 by Murray Gell-Mann and George Zweig to explain the eightfold way classification scheme of hadrons. The first evidence for the existence of quarks came in 1968, in deep inelastic scattering experiments at the Stanford Linear Accelerator Center. These experiments confirmed the existence of up and down quarks, and by extension, strange quarks, as they were required to explain the eightfold way.
History
In the beginnings of particle physics (first half of the 20th century), hadrons such as protons, neutrons and pions were thought to be elementary particles. However, new hadrons were discovered and the "particle zoo" grew from a few particles in the early 1930s and 1940s to several dozen of them in the 1950s. Some particles were much longer lived than others; most particles decayed through the strong interaction and had lifetimes of around 10⁻²³ seconds. When they decayed through the weak interactions, they had lifetimes of around 10⁻¹⁰ seconds. While studying these decays, Murray Gell-Mann (in 1953) and Kazuhiko Nishijima (in 1955) developed the concept of strangeness (which Nishijima called eta-charge, after the eta meson (η)) to explain the "strangeness" of the longer-lived particles. The Gell-Mann–Nishijima formula is the result of these efforts to understand strange decays.
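The bookkeeping behind the Gell-Mann–Nishijima formula can be sketched in a few lines: the relation Q = I3 + (B + S)/2 ties electric charge Q (in units of e) to isospin projection I3, baryon number B, and strangeness S. The sketch below is only an illustrative check using standard textbook quantum numbers, not material from the article.

```python
# Illustrative check of the Gell-Mann–Nishijima formula,
#   Q = I3 + (B + S) / 2,
# relating electric charge Q (in units of e) to isospin projection I3,
# baryon number B and strangeness S. Quantum numbers below are the
# standard textbook values for each particle.
from fractions import Fraction as F

def charge(i3, baryon_number, strangeness):
    return i3 + F(baryon_number + strangeness, 2)

print(charge(F(1, 2), 1, 0))   # proton (uud):      I3=+1/2, B=1, S=0  -> 1
print(charge(F(1, 2), 0, 1))   # kaon K+ (u s-bar): I3=+1/2, B=0, S=+1 -> 1
print(charge(-1, 1, -1))       # Sigma- (dds):      I3=-1,   B=1, S=-1 -> -1
```

The formula reproduces the observed charges precisely because strangeness is conserved in strong decays, which is what made the long-lived "strange" particles stand out.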
Despite their work, the relationships between each particle and the physical basis behind the strangeness property remained unclear. In 1961, Gell-Mann and Yuval Ne'eman independently proposed a hadron classification scheme called the eightfold way, also known as SU(3) flavor symmetry. This ordered hadrons into isospin multiplets. The physical basis behind both isospin and strangeness was only explained in 1964, when Gell-Mann and George Zweig independently proposed the quark model, which at that time consisted only of the up, down, and strange quarks. Up and down quarks were the carriers of isospin, while the strange quark carried strangeness. While the quark model explained the eightfold way, no direct evidence of the existence of quarks was found until 1968 at the Stanford Linear Accelerator Center. Deep inelastic scattering experiments indicated that protons had substructure, and that protons made of three more-fundamental particles explained the data (thus confirming the quark model).
At first, people were reluctant to identify these three bodies as quarks, instead preferring Richard Feynman's parton description, but over time the quark theory became accepted (see November Revolution).
Biomechanics

Biomechanics is the study of the structure, function and motion of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles, using the methods of mechanics. Biomechanics is a branch of biophysics.
Etymology
The word "biomechanics" (1899) and the related "biomechanical" (1856) come from the Ancient Greek βίος bios "life" and μηχανική, mēchanikē "mechanics", to refer to the study of the mechanical principles of living organisms, particularly their movement and structure.
Subfields
Biofluid mechanics
Biological fluid mechanics, or biofluid mechanics, is the study of both gas and liquid fluid flows in or around biological organisms. An often studied liquid biofluid problem is that of blood flow in the human cardiovascular system. Under certain mathematical circumstances, blood flow can be modeled by the Navier–Stokes equations. In vivo whole blood is assumed to be an incompressible Newtonian fluid. However, this assumption fails when considering forward flow within arterioles. At the microscopic scale, the effects of individual red blood cells become significant, and whole blood can no longer be modeled as a continuum. When the diameter of the blood vessel is just slightly larger than the diameter of a red blood cell, the Fåhræus–Lindqvist effect occurs and there is a decrease in wall shear stress. However, as the diameter of the blood vessel decreases further, the red blood cells have to squeeze through the vessel and often can only pass in single file. In this case, the inverse Fåhræus–Lindqvist effect occurs and the wall shear stress increases.
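The change of flow regime between large arteries and arterioles can be given a rough quantitative feel with a Reynolds-number estimate. The density, viscosity, diameter, and velocity figures below are typical textbook values chosen only for illustration, not measurements from the article.

```python
# Rough Reynolds-number estimate, Re = rho * v * d / mu, for blood flow.
# All numerical values are typical textbook figures used for illustration.

def reynolds(density, velocity, diameter, viscosity):
    return density * velocity * diameter / viscosity

rho = 1060.0   # kg/m^3, whole blood density
mu = 3.5e-3    # Pa*s, apparent viscosity of whole blood

# aorta: d ~ 2.5 cm, mean velocity ~ 0.2 m/s
re_aorta = reynolds(rho, 0.2, 0.025, mu)
# arteriole: d ~ 30 micrometres, velocity ~ 5 mm/s
re_arteriole = reynolds(rho, 5e-3, 30e-6, mu)

print(f"aorta:     Re ~ {re_aorta:.0f}")     # order 10^3: inertia matters
print(f"arteriole: Re ~ {re_arteriole:.2e}") # far below 1: viscous-dominated
```

At arteriole scales the Reynolds number drops far below one, which is also where the continuum Newtonian picture itself starts to break down and cell-scale effects such as the Fåhræus–Lindqvist effect take over.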
An example of a gaseous biofluids problem is that of human respiration. Respiratory systems in insects have been studied for bioinspiration for designing improved microfluidic devices.
Biotribology
Biotribology is the study of friction, wear and lubrication of biological systems, especially human joints such as hips and knees. In general, these processes are studied in the context of contact mechanics and tribology.
Additional aspects of biotribology include analysis of subsurface damage resulting from two surfaces coming in contact during motion, i.e. rubbing against each other, such as in the evaluation of tissue-engineered cartilage.
Comparative biomechanics
Comparative biomechanics is the application of biomechanics to non-human organisms, whether used to gain greater insights into humans (as in physical anthropology) or into the functions, ecology and adaptations of the organisms themselves. Common areas of investigation are animal locomotion and feeding, as these have strong connections to the organism's fitness and impose high mechanical demands. Animal locomotion has many manifestations, including running, jumping and flying. Locomotion requires energy to overcome friction, drag, inertia, and gravity, though which factor predominates varies with environment.
Comparative biomechanics overlaps strongly with many other fields, including ecology, neurobiology, developmental biology, ethology, and paleontology, to the extent of commonly publishing papers in the journals of these other fields. Comparative biomechanics is often applied in medicine (with regards to common model organisms such as mice and rats) as well as in biomimetics, which looks to nature for solutions to engineering problems.
Computational biomechanics
Computational biomechanics is the application of engineering computational tools, such as the finite element method, to study the mechanics of biological systems. Computational models and simulations are used to predict the relationship between parameters that are otherwise challenging to test experimentally, or to design more relevant experiments, reducing the time and costs of experiments. Mechanical modeling using finite element analysis has been used to interpret the experimental observation of plant cell growth to understand how they differentiate, for instance. In medicine, over the past decade, the finite element method has become an established alternative to in vivo surgical assessment. One of the main advantages of computational biomechanics lies in its ability to determine the endo-anatomical response of an anatomy without being subject to ethical restrictions. This has led FE modeling (or other discretization techniques) to the point of becoming ubiquitous in several fields of biomechanics, while several projects have even adopted an open source philosophy (e.g., BioSpine).
Computational biomechanics is an essential ingredient in surgical simulation, which is used for surgical planning, assistance, and training. In this case, numerical (discretization) methods are used to compute, as fast as possible, a system's response to boundary conditions such as forces, heat and mass transfer, and electrical and magnetic stimuli.
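As a minimal illustration of the finite element method mentioned above, the sketch below assembles and solves a one-dimensional axially loaded elastic bar. The geometry and material values are arbitrary, and real biomechanical FE models are of course far richer (three-dimensional, nonlinear, patient-specific); this only shows the assemble-and-solve pattern.

```python
# Minimal 1-D finite element sketch: an axial bar fixed at x = 0 with a
# tip load P. Linear two-node elements are exact for constant E*A, so the
# tip displacement should match the analytic result u = P*L/(E*A).

def bar_tip_displacement(E, A, L, P, n_elements):
    n_nodes = n_elements + 1
    k = E * A / (L / n_elements)        # element stiffness
    # assemble the global stiffness matrix
    K = [[0.0] * n_nodes for _ in range(n_nodes)]
    for e in range(n_elements):
        K[e][e] += k
        K[e][e + 1] -= k
        K[e + 1][e] -= k
        K[e + 1][e + 1] += k
    # load vector: point load at the free end
    f = [0.0] * n_nodes
    f[-1] = P
    # apply the fixed boundary condition at node 0 by dropping its row
    # and column, then solve K u = f by Gaussian elimination
    n = n_nodes - 1
    M = [[K[i + 1][j + 1] for j in range(n)] + [f[i + 1]] for i in range(n)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            factor = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= factor * M[col][c]
    u = [0.0] * n
    for r in range(n - 1, -1, -1):
        u[r] = (M[r][n] - sum(M[r][c] * u[c] for c in range(r + 1, n))) / M[r][r]
    return u[-1]

# arbitrary values: E = 200 GPa, A = 1 cm^2, L = 1 m, P = 1 kN
tip = bar_tip_displacement(200e9, 1e-4, 1.0, 1e3, n_elements=8)
print(tip)  # should approximate P*L/(E*A) = 5e-5 m
```

The same assemble-boundary-solve loop underlies surgical-simulation solvers, except that there the stiffness matrices come from 3D tissue meshes and must be solved fast enough for interactive use.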
Continuum biomechanics
The mechanical analysis of biomaterials and biofluids is usually carried forth with the concepts of continuum mechanics. This assumption breaks down when the length scales of interest approach the order of the microstructural details of the material. One of the most remarkable characteristics of biomaterials is their hierarchical structure. In other words, the mechanical characteristics of these materials rely on physical phenomena occurring in multiple levels, from the molecular all the way up to the tissue and organ levels.
Biomaterials are classified into two groups: hard and soft tissues. Mechanical deformation of hard tissues (like wood, shell and bone) may be analysed with the theory of linear elasticity. On the other hand, soft tissues (like skin, tendon, muscle, and cartilage) usually undergo large deformations, and thus, their analysis relies on the finite strain theory and computer simulations. The interest in continuum biomechanics is spurred by the need for realism in the development of medical simulation.
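The hard-tissue case can be illustrated with uniaxial Hooke's law. The modulus and loading figures below are typical order-of-magnitude values assumed for this sketch, not data from the article.

```python
# Small-strain linear elasticity for a "hard tissue" such as cortical
# bone: uniaxial Hooke's law, sigma = E * epsilon. The modulus is a
# typical literature order of magnitude, used only for illustration.

E_BONE = 17e9  # Pa, Young's modulus of cortical bone (typical order)

def uniaxial(force_N, area_m2, youngs_modulus=E_BONE):
    stress = force_N / area_m2          # Pa
    strain = stress / youngs_modulus    # dimensionless, valid while small
    return stress, strain

# femur midshaft cross-section ~ 4 cm^2 carrying ~ 700 N of body weight
stress, strain = uniaxial(700.0, 4e-4)
print(f"stress ~ {stress / 1e6:.2f} MPa, strain ~ {strain:.2e}")
```

The resulting strain is on the order of 10⁻⁴, comfortably inside the small-strain regime, so the linear theory is self-consistent for bone; soft tissues routinely reach strains of 10 to 100 percent, which is why their analysis needs finite strain theory instead.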
Neuromechanics
Neuromechanics uses a biomechanical approach to better understand how the brain and nervous system interact to control the body. During motor tasks, motor units activate a set of muscles to perform a specific movement, which can be modified via motor adaptation and learning. In recent years, neuromechanical experiments have been enabled by combining motion capture tools with neural recordings.
Plant biomechanics
The application of biomechanical principles to plants, plant organs and cells has developed into the subfield of plant biomechanics. Application of biomechanics for plants ranges from studying the resilience of crops to environmental stress to development and morphogenesis at cell and tissue scale, overlapping with mechanobiology.
Sports biomechanics
In sports biomechanics, the laws of mechanics are applied to human movement in order to gain a greater understanding of athletic performance and to reduce sport injuries. It focuses on the application of the scientific principles of mechanical physics to understand the movements of human bodies and of sports implements such as cricket bats, hockey sticks and javelins. Elements of mechanical engineering (e.g., strain gauges), electrical engineering (e.g., digital filtering), computer science (e.g., numerical methods), gait analysis (e.g., force platforms), and clinical neurophysiology (e.g., surface EMG) are common methods used in sports biomechanics.
Biomechanics in sports can be stated as the body's muscular, joint, and skeletal actions while executing a given task, skill, or technique. Understanding the biomechanics of sports skills has the greatest implications for sports performance, rehabilitation and injury prevention, and sports mastery. As noted by Doctor Michael Yessis, one could say that the best athlete is the one that executes his or her skill the best.
Vascular biomechanics
The main topic of vascular biomechanics is the description of the mechanical behaviour of vascular tissues.
It is well known that cardiovascular disease is the leading cause of death worldwide. The vascular system in the human body is the main component that maintains pressure and allows for blood flow and chemical exchanges. Studying the mechanical properties of these complex tissues improves the possibility of better understanding cardiovascular diseases and drastically improves personalized medicine.
Vascular tissues are inhomogeneous, with strongly nonlinear behaviour. Generally this study involves complex geometry with intricate load conditions and material properties. The correct description of these mechanisms is based on the study of physiology and biological interaction. It is therefore necessary to study wall mechanics and hemodynamics together with their interaction.
The vascular wall is, moreover, a dynamic structure in continuous evolution, driven by the chemical and mechanical environment in which the tissues are immersed, such as wall shear stress or biochemical signaling.
Immunomechanics
The emerging field of immunomechanics focuses on characterising the mechanical properties of immune cells and their functional relevance. The mechanics of immune cells can be characterised using various force spectroscopy approaches, such as acoustic force spectroscopy and optical tweezers, and these measurements can be performed under physiological conditions (e.g. temperature). Furthermore, one can study the link between immune cell mechanics and immunometabolism and immune signalling. The term "immunomechanics" is sometimes used interchangeably with immune cell mechanobiology or cell mechanoimmunology.
Other applied subfields of biomechanics include
Allometry
Animal locomotion and Gait analysis
Biotribology
Biofluid mechanics
Cardiovascular biomechanics
Comparative biomechanics
Computational biomechanics
Ergonomy
Forensic Biomechanics
Human factors engineering and occupational biomechanics
Injury biomechanics
Implant (medicine), Orthotics and Prosthesis
Kinaesthetics
Kinesiology (kinetics + physiology)
Musculoskeletal and orthopedic biomechanics
Rehabilitation
Soft body dynamics
Sports biomechanics
History
Antiquity
Aristotle, a student of Plato, can be considered the first biomechanic because of his work with animal anatomy. Aristotle wrote the first book on the motion of animals, De Motu Animalium, or On the Movement of Animals. He saw animals' bodies as mechanical systems, and pursued questions such as the physiological difference between imagining performing an action and actually performing it. In another work, On the Parts of Animals, he provided an accurate description of how the ureter uses peristalsis to carry urine from the kidneys to the bladder.
With the rise of the Roman Empire, technology became more popular than philosophy and the next bio-mechanic arose. Galen (129 AD-210 AD), physician to Marcus Aurelius, wrote his famous work, On the Function of the Parts (about the human body). This would be the world's standard medical book for the next 1,400 years.
Renaissance
The next major biomechanic would not appear until the 1490s, with Leonardo da Vinci's studies of human anatomy and biomechanics. He had a great understanding of science and mechanics and studied anatomy in a mechanical context: he analyzed muscle forces as acting along lines connecting origins and insertions, and studied joint function. Da Vinci is also known for mimicking some animal features in his machines. For example, he studied the flight of birds to find means by which humans could fly; and because horses were the principal source of mechanical power in that time, he studied their muscular systems to design machines that would better benefit from the forces applied by this animal.
In 1543, Galen's work On the Function of the Parts was challenged by Andreas Vesalius, then aged 29, who published his own work, On the Structure of the Human Body. In this work, Vesalius corrected many errors made by Galen, corrections that would not be globally accepted for many centuries. In the same year, Nicolaus Copernicus, on his deathbed, published On the Revolutions of the Heavenly Spheres, and his death was followed by a new desire to understand and learn about the world and how it works. This work not only revolutionized science and physics, but also fed the development of mechanics and, later, biomechanics.
Galileo Galilei, the father of mechanics and a part-time biomechanic, was born 21 years after the death of Copernicus. Over his years of science, Galileo made many biomechanical observations. For example, he discovered that "animals' masses increase disproportionately to their size, and their bones must consequently also disproportionately increase in girth, adapting to loadbearing rather than mere size. The bending strength of a tubular structure such as a bone is increased relative to its weight by making it hollow and increasing its diameter. Marine animals can be larger than terrestrial animals because the water's buoyancy relieves their tissues of weight."
Galileo was particularly interested in the strength of bones, and suggested that bones are hollow because this affords maximum strength with minimum weight. Mason suggests that this insight was one of the first grasps of the principles of biological optimization.
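Galileo's point about hollow tubular bones can be checked numerically: for equal cross-sectional area, and hence equal weight per unit length, a hollow tube has a larger second moment of area, and so a larger bending stiffness, than a solid rod. The radii below are arbitrary illustrative values.

```python
# Why hollow bones: for equal cross-sectional area (equal weight per unit
# length), a hollow tube has a larger second moment of area I, and thus
# larger bending stiffness E*I, than a solid rod. Illustrative sketch.
import math

def solid_I(r):
    return math.pi * r**4 / 4

def hollow_I(r_outer, r_inner):
    return math.pi * (r_outer**4 - r_inner**4) / 4

r_solid = 0.01                         # 1 cm solid rod
area = math.pi * r_solid**2

# hollow tube with twice the outer radius but the same material area
r_out = 0.02
r_in = math.sqrt(r_out**2 - area / math.pi)

ratio = hollow_I(r_out, r_in) / solid_I(r_solid)
print(f"bending stiffness ratio (hollow/solid): {ratio:.1f}")
```

With the same amount of material, doubling the outer radius gives a sevenfold gain in bending stiffness here, which is exactly the "increased relative to its weight by making it hollow" effect Galileo described.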
In the 17th century, Descartes suggested a philosophic system whereby all living systems, including the human body (but not the soul), are simply machines ruled by the same mechanical laws, an idea that did much to promote and sustain biomechanical study.
Industrial era
The next major bio-mechanic, Giovanni Alfonso Borelli, embraced Descartes' mechanical philosophy and studied walking, running, jumping, the flight of birds, the swimming of fish, and even the piston action of the heart within a mechanical framework. He could determine the position of the human center of gravity, calculate and measure inspired and expired air volumes, and he showed that inspiration is muscle-driven and expiration is due to tissue elasticity.
Borelli was the first to understand that "the levers of the musculature system magnify motion rather than force, so that muscles must produce much larger forces than those resisting the motion". Influenced by the work of Galileo, whom he personally knew, he had an intuitive understanding of static equilibrium in various joints of the human body well before Newton published the laws of motion. His work is often considered the most important in the history of biomechanics because he made so many new discoveries that opened the way for future generations to continue his work and studies.
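Borelli's observation that muscles must produce much larger forces than those resisting motion follows from a simple static moment balance. The anatomical lengths below are rough illustrative figures, not values from the article.

```python
# Borelli's insight as a static moment balance: holding a weight in the
# hand, the biceps must pull far harder than the weight itself because
# its moment arm about the elbow is much shorter. Lengths are rough
# anatomical figures used only for illustration.

def muscle_force(load_N, load_arm_m, muscle_arm_m):
    # sum of moments about the elbow = 0  ->  F_m * d_m = W * d_w
    return load_N * load_arm_m / muscle_arm_m

W = 20.0          # N, roughly a 2 kg weight held in the hand
d_load = 0.35     # m, elbow to hand
d_biceps = 0.04   # m, biceps insertion to elbow axis

F = muscle_force(W, d_load, d_biceps)
print(f"biceps force ~ {F:.0f} N ({F / W:.1f}x the held weight)")
```

The short muscle moment arm trades force for speed: the hand moves much faster and farther than the muscle shortens, which is the "magnify motion rather than force" half of Borelli's statement.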
It was many years after Borelli before the field of biomechanics made any major leaps. After that time, more and more scientists took to learning about the human body and its functions. There are not many notable scientists from the 19th or 20th century in biomechanics because the field is now far too vast to attribute any one thing to one person. However, the field continues to grow every year and to make advances in discovering more about the human body. Because the field became so popular, many institutions and labs have opened over the last century and people continue doing research. With the creation of the American Society of Biomechanics in 1977, the field continues to grow and make many new discoveries.
In the 19th century Étienne-Jules Marey used cinematography to scientifically investigate locomotion. He opened the field of modern 'motion analysis' by being the first to correlate ground reaction forces with movement. In Germany, the brothers Ernst Heinrich Weber and Wilhelm Eduard Weber hypothesized a great deal about human gait, but it was Christian Wilhelm Braune who significantly advanced the science using recent advances in engineering mechanics. During the same period, the engineering mechanics of materials began to flourish in France and Germany under the demands of the Industrial Revolution. This led to the rebirth of bone biomechanics when the railroad engineer Karl Culmann and the anatomist Hermann von Meyer compared the stress patterns in a human femur with those in a similarly shaped crane. Inspired by this finding Julius Wolff proposed the famous Wolff's law of bone remodeling.
Applications
The study of biomechanics ranges from the inner workings of a cell to the movement and development of limbs, to the mechanical properties of soft tissue, and bones. Some simple examples of biomechanics research include the investigation of the forces that act on limbs, the aerodynamics of bird and insect flight, the hydrodynamics of swimming in fish, and locomotion in general across all forms of life, from individual cells to whole organisms. With growing understanding of the physiological behavior of living tissues, researchers are able to advance the field of tissue engineering, as well as develop improved treatments for a wide array of pathologies including cancer.
Biomechanics is also applied to studying human musculoskeletal systems. Such research utilizes force platforms to study human ground reaction forces and infrared videography to capture the trajectories of markers attached to the human body to study human 3D motion. Research also applies electromyography to study muscle activation, investigating muscle responses to external forces and perturbations.
Biomechanics is widely used in the orthopedic industry to design orthopedic implants for human joints, dental parts, external fixations and other medical purposes. Biotribology, the study of the performance and function of biomaterials used for orthopedic implants, is an important part of this work: it plays a vital role in improving designs and producing successful biomaterials for medical and clinical purposes. One such example is tissue-engineered cartilage. The dynamic loading of joints considered as impact is discussed in detail by Emanuel Willert.
It is also tied to the field of engineering, because it often uses traditional engineering sciences to analyze biological systems. Some simple applications of Newtonian mechanics and/or materials sciences can supply correct approximations to the mechanics of many biological systems. Applied mechanics, most notably mechanical engineering disciplines such as continuum mechanics, mechanism analysis, structural analysis, kinematics and dynamics play prominent roles in the study of biomechanics.
Usually biological systems are much more complex than man-built systems. Numerical methods are hence applied in almost every biomechanical study. Research is done in an iterative process of hypothesis and verification, including several steps of modeling, computer simulation and experimental measurements.
| Biology and health sciences | Basics | null |
105375 | https://en.wikipedia.org/wiki/Student%27s%20t-distribution | Student's t-distribution | In probability theory and statistics, Student's t distribution (or simply the t distribution) is
a continuous probability distribution that generalizes the standard normal distribution. Like the latter, it is symmetric around zero and bell-shaped.
However, the t distribution has heavier tails, and the amount of probability mass in the tails is controlled by the parameter ν. For ν = 1 the Student's t distribution becomes the standard Cauchy distribution, which has very "fat" tails; whereas for ν → ∞ it becomes the standard normal distribution, which has very "thin" tails.
The Student's t distribution plays a role in a number of widely used statistical analyses, including Student's t test for assessing the statistical significance of the difference between two sample means, the construction of confidence intervals for the difference between two population means, and in linear regression analysis.
In the form of the location-scale t distribution it generalizes the normal distribution and also arises in the Bayesian analysis of data from a normal family as a compound distribution when marginalizing over the variance parameter.
Definitions
Probability density function
Student's t distribution has the probability density function (PDF) given by

 f(t) = Γ((ν+1)/2) / (√(νπ) Γ(ν/2)) · (1 + t²/ν)^(−(ν+1)/2)

where ν is the number of degrees of freedom and Γ is the gamma function. This may also be written as

 f(t) = 1 / (√ν B(1/2, ν/2)) · (1 + t²/ν)^(−(ν+1)/2)

where B is the beta function. In particular, for integer-valued degrees of freedom ν we have:

For ν > 1 and even,
 Γ((ν+1)/2) / (√(νπ) Γ(ν/2)) = ((ν−1)(ν−3)⋯5·3) / (2√ν (ν−2)(ν−4)⋯4·2).

For ν > 1 and odd,
 Γ((ν+1)/2) / (√(νπ) Γ(ν/2)) = ((ν−1)(ν−3)⋯4·2) / (π√ν (ν−2)(ν−4)⋯5·3).
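As a small sketch using only the standard library, the density above can be evaluated directly with `math.gamma`; for ν = 1 it should coincide with the standard Cauchy density 1/(π(1 + t²)):

```python
import math

def t_pdf(x, nu):
    """Density of Student's t distribution with nu > 0 degrees of freedom."""
    coef = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return coef * (1 + x * x / nu) ** (-(nu + 1) / 2)

def cauchy_pdf(x):
    """Standard Cauchy density, which the t density equals when nu = 1."""
    return 1 / (math.pi * (1 + x * x))
```

For example, t_pdf(0, 1) returns 1/π ≈ 0.3183, matching cauchy_pdf(0).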
The probability density function is symmetric, and its overall shape resembles the bell shape of a normally distributed variable with mean 0 and variance 1, except that it is a bit lower and wider. As the number of degrees of freedom ν grows, the t distribution approaches the normal distribution with mean 0 and variance 1. For this reason ν is also known as the normality parameter.
The following images show the density of the t distribution for increasing values of ν. The normal distribution is shown as a blue line for comparison. Note that the t distribution (red line) becomes closer to the normal distribution as ν increases.
Cumulative distribution function
The cumulative distribution function (CDF) can be written in terms of I, the regularized incomplete beta function. For t > 0,

 F(t) = 1 − ½ I_x(t)(ν/2, 1/2),

where

 x(t) = ν / (t² + ν).

Other values would be obtained by symmetry. An alternative formula, valid for t² < ν, is

 F(t) = ½ + t · Γ((ν+1)/2) · ₂F₁(1/2, (ν+1)/2; 3/2; −t²/ν) / (√(πν) Γ(ν/2)),

where ₂F₁ is a particular instance of the hypergeometric function.

For information on its inverse cumulative distribution function, see the quantile function.
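A minimal numerical check of the CDF, assuming nothing beyond the standard library: the density is integrated by a crude midpoint rule and compared against the elementary closed forms for ν = 1 (Cauchy) and ν = 2. The function names and the truncation point −200 are illustrative choices, not any library's API:

```python
import math

def t_pdf(x, nu):
    coef = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    return coef * (1 + x * x / nu) ** (-(nu + 1) / 2)

def t_cdf_numeric(t, nu, lo=-200.0, n=200000):
    """Crude midpoint-rule integral of the density from `lo` up to t.
    Truncation at `lo` slightly underestimates the heavy-tailed cases."""
    h = (t - lo) / n
    return sum(t_pdf(lo + (i + 0.5) * h, nu) for i in range(n)) * h

def t_cdf_nu1(t):
    """Closed form for nu = 1: the standard Cauchy CDF."""
    return 0.5 + math.atan(t) / math.pi

def t_cdf_nu2(t):
    """Closed form for nu = 2."""
    return 0.5 + t / (2 * math.sqrt(2 + t * t))
```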
Special cases
Certain values of ν give a simple form for Student's t-distribution.
Properties
Moments
For ν > 1, the raw moments of the t distribution are

 E[T^k] = 0 for k odd, 0 < k < ν,
 E[T^k] = Γ((k+1)/2) Γ((ν−k)/2) ν^(k/2) / (√π Γ(ν/2)) for k even, 0 < k < ν.

Moments of order ν or higher do not exist.

The term for k even may be simplified using the properties of the gamma function to

 E[T^k] = ν^(k/2) ∏_{i=1}^{k/2} (2i−1)/(ν−2i), k even, 0 < k < ν.

For a t distribution with ν degrees of freedom, the expected value is 0 if ν > 1, and its variance is ν/(ν−2) if ν > 2. The skewness is 0 if ν > 3 and the excess kurtosis is 6/(ν−4) if ν > 4.
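The stated moments can be verified numerically with a crude midpoint rule (an illustrative sketch, not a library routine); for ν = 10 the second raw moment should be ν/(ν−2) = 1.25 and the fourth 3ν²/((ν−2)(ν−4)) = 6.25:

```python
import math

def t_raw_moment(k, nu, half=100.0, n=200000):
    """Midpoint-rule estimate of E[T^k] for the t distribution with nu
    degrees of freedom; only meaningful when k < nu (higher moments diverge)."""
    coef = math.gamma((nu + 1) / 2) / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
    h = 2 * half / n
    total = 0.0
    for i in range(n):
        x = -half + (i + 0.5) * h
        total += (x ** k) * (1 + x * x / nu) ** (-(nu + 1) / 2)
    return coef * total * h
```

Odd moments vanish by symmetry, which the symmetric integration grid reproduces almost exactly.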
How the distribution arises (characterization)
As the distribution of a test statistic
Student's t-distribution with ν degrees of freedom can be defined as the distribution of the random variable T with

 T = Z / √(V/ν),

where
Z is a standard normal with expected value 0 and variance 1;
V has a chi-squared distribution (χ²ν) with ν degrees of freedom;
Z and V are independent.
A different distribution is defined as that of the random variable defined, for a given constant μ, by

 (Z + μ) / √(V/ν).

This random variable has a noncentral t-distribution with noncentrality parameter μ. This distribution is important in studies of the power of Student's t-test.
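The central characterization T = Z/√(V/ν) suggests a direct (if inefficient) sampler for integer ν, sketched here with the standard library's `random.gauss`; the empirical mean and variance of the draws should approach 0 and ν/(ν−2):

```python
import math
import random

def t_sample(nu, rng):
    """One draw of T = Z / sqrt(V / nu), with Z standard normal and V
    chi-squared with (integer) nu degrees of freedom, Z and V independent."""
    z = rng.gauss(0.0, 1.0)
    v = sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(nu))
    return z / math.sqrt(v / nu)

rng = random.Random(42)       # fixed seed for reproducibility
draws = [t_sample(8, rng) for _ in range(50000)]
```

With ν = 8 the sample variance should settle near 8/6 ≈ 1.33.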
Derivation
Suppose X1, ..., Xn are independent realizations of the normally distributed random variable X, which has an expected value μ and variance σ². Let
 X̄ₙ = (X1 + ⋯ + Xn)/n
be the sample mean, and
 s² = (1/(n − 1)) Σᵢ (Xᵢ − X̄ₙ)²
be an unbiased estimate of the variance from the sample. It can be shown that the random variable
 V = (n − 1) s²/σ²
has a chi-squared distribution with ν = n − 1 degrees of freedom (by Cochran's theorem). It is readily shown that the quantity
 Z = (X̄ₙ − μ) √n / σ
is normally distributed with mean 0 and variance 1, since the sample mean X̄ₙ is normally distributed with mean μ and variance σ²/n. Moreover, it is possible to show that these two random variables (the normally distributed one Z and the chi-squared-distributed one V) are independent. Consequently the pivotal quantity
 T ≡ Z / √(V/ν) = (X̄ₙ − μ) / (s/√n),
which differs from Z in that the exact standard deviation σ is replaced by the sample standard error s, has a Student's t-distribution as defined above. Notice that the unknown population variance σ² does not appear in T, since it was in both the numerator and the denominator, so it canceled. Gosset intuitively obtained the probability density function stated above, with ν equal to n − 1, and Fisher proved it in 1925.
The distribution of the test statistic T depends on ν, but not μ or σ; the lack of dependence on μ and σ is what makes the t-distribution important in both theory and practice.
Sampling distribution of t-statistic
The t distribution arises as the sampling distribution
of the t statistic. Below the one-sample t statistic is discussed; for the corresponding two-sample t statistic see Student's t-test.
Unbiased variance estimate
Let x₁, …, xₙ be independent and identically distributed samples from a normal distribution with mean μ and variance σ². The sample mean and unbiased sample variance are given by:
 x̄ = (x₁ + ⋯ + xₙ)/n,  s² = (1/(n − 1)) Σᵢ (xᵢ − x̄)².
The resulting (one sample) t statistic is given by
 t = (x̄ − μ) / √(s²/n),
and is distributed according to a Student's t distribution with n − 1 degrees of freedom.
Thus for inference purposes the t statistic is a useful "pivotal quantity" in the case when the mean μ and variance σ² are unknown population parameters, in the sense that the t statistic has then a probability distribution that depends on neither μ nor σ².
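The one-sample t statistic is straightforward to compute; a small sketch using the standard library's `statistics.mean` and `statistics.stdev` (which returns the unbiased sample standard deviation):

```python
import math
from statistics import mean, stdev

def one_sample_t(xs, mu0):
    """One-sample t statistic (xbar - mu0) / (s / sqrt(n)), where s is the
    unbiased sample standard deviation; it has n - 1 degrees of freedom."""
    n = len(xs)
    return (mean(xs) - mu0) / (stdev(xs) / math.sqrt(n))
```

For [1, 2, 3, 4, 5] against μ₀ = 2 the sample mean is 3 and s² = 2.5, so t = 1/√(2.5/5) = √2.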
ML variance estimate
Instead of the unbiased estimate s² we may also use the maximum likelihood estimate
 s²_ML = (1/n) Σᵢ (xᵢ − x̄)²
yielding the statistic
 t_ML = (x̄ − μ) / √(s²_ML/n) = √(n/(n−1)) · t.
This is distributed according to the location-scale t distribution:
 t_ML ∼ lst(0, τ² = n/(n−1), n − 1).
Compound distribution of normal with inverse gamma distribution
The location-scale t distribution results from compounding a Gaussian distribution (normal distribution) with mean μ and unknown variance, with an inverse gamma distribution placed over the variance with parameters a = ν/2 and b = ν τ²/2. In other words, the random variable X is assumed to have a Gaussian distribution with an unknown variance distributed as inverse gamma, and then the variance is marginalized out (integrated out).
Equivalently, this distribution results from compounding a Gaussian distribution with a scaled-inverse-chi-squared distribution with parameters ν and τ². The scaled-inverse-chi-squared distribution is exactly the same distribution as the inverse gamma distribution, but with a different parameterization, i.e. ν = 2a, τ² = b/a.
The reason for the usefulness of this characterization is that in Bayesian statistics the inverse gamma distribution is the conjugate prior distribution of the variance of a Gaussian distribution. As a result, the location-scale t distribution arises naturally in many Bayesian inference problems.
Maximum entropy distribution
Student's t distribution is the maximum entropy probability distribution for a random variate X for which E[ln(ν + X²)] is fixed.
Integral of Student's probability density function and p-value
The function A(t | ν) is the integral of Student's probability density function, f(t), between −t and t, for t ≥ 0. It thus gives the probability that a value of t less than that calculated from observed data would occur by chance. Therefore, the function A(t | ν) can be used when testing whether the difference between the means of two sets of data is statistically significant, by calculating the corresponding value of t and the probability of its occurrence if the two sets of data were drawn from the same population. This is used in a variety of situations, particularly in t tests. For the statistic t, with ν degrees of freedom, A(t | ν) is the probability that t would be less than the observed value if the two means were the same (provided that the smaller mean is subtracted from the larger, so that t ≥ 0). It can be easily calculated from the cumulative distribution function Fν(t) of the t distribution:
 A(t | ν) = Fν(t) − Fν(−t) = 1 − I_{ν/(ν+t²)}(ν/2, 1/2),
where I_x(a, b) is the regularized incomplete beta function.
For statistical hypothesis testing this function is used to construct the p-value.
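For ν = 1 (the Cauchy case), the probability that |T| lies below t has the elementary closed form (2/π) arctan t, so the two-sided p-value can be written down directly; this is a special-case sketch, not a general p-value routine:

```python
import math

def p_two_sided_nu1(t):
    """Two-sided p-value P(|T| >= t) for T ~ t with nu = 1 (standard
    Cauchy), using the closed form A(t | 1) = (2/pi) * arctan(t)."""
    return 1 - 2 * math.atan(abs(t)) / math.pi
```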
Related distributions
In general
The noncentral t distribution generalizes the t distribution to include a noncentrality parameter. Unlike the nonstandardized t distributions, the noncentral t distributions are not symmetric (the median is not the same as the mode).
The discrete Student's t distribution is defined by its probability mass function at r being proportional to ∏ⱼ₌₁ᵏ [(r + j + a)² + b²]⁻¹. Here a, b, and k are parameters. This distribution arises from the construction of a system of discrete distributions similar to that of the Pearson distributions for continuous distributions.
One can generate Student t samples by taking the ratio of a variable from the normal distribution and the square-root of a χ² variable divided by its degrees of freedom. If we use instead of the normal distribution, e.g., the Irwin–Hall distribution, we obtain overall a symmetric 4-parameter distribution, which includes the normal, the uniform, the triangular, the Student t, and the Cauchy distribution. This is also more flexible than some other symmetric generalizations of the normal distribution.
The t distribution is an instance of ratio distributions.
The square of a random variable distributed t(ν) is distributed F(1, ν).
Location-scale distribution
Location-scale transformation
Student's t distribution generalizes to the three parameter location-scale t distribution by introducing a location parameter μ and a scale parameter τ. With
 T ∼ tν
and location-scale family transformation
 X = μ + τ T
we get
 X ∼ lst(μ, τ², ν).
The resulting distribution is also called the non-standardized Student's t distribution.
Density and first two moments
The location-scale t distribution has a density defined by:
 p(x | ν, μ, τ) = Γ((ν+1)/2) / (Γ(ν/2) √(πν) τ) · (1 + (1/ν)((x − μ)/τ)²)^(−(ν+1)/2).
Equivalently, the density can be written in terms of τ²:
 p(x | ν, μ, τ²) = Γ((ν+1)/2) / (Γ(ν/2) √(πν τ²)) · (1 + (x − μ)²/(ν τ²))^(−(ν+1)/2).
Other properties of this version of the distribution are:
 E[X] = μ for ν > 1,
 var(X) = τ² ν/(ν−2) for ν > 2,
 mode(X) = μ.
Special cases
If X follows a location-scale t distribution, X ∼ lst(μ, τ², ν), then for ν → ∞, X is normally distributed with mean μ and variance τ².
The location-scale t distribution with degree of freedom ν = 1 is equivalent to the Cauchy distribution Cau(μ, τ).
The location-scale t distribution with μ = 0 and τ² = 1 reduces to the Student's t distribution tν.
Occurrence and applications
In frequentist statistical inference
Student's t distribution arises in a variety of statistical estimation problems where the goal is to estimate an unknown parameter, such as a mean value, in a setting where the data are observed with additive errors. If (as in nearly all practical statistical work) the population standard deviation of these errors is unknown and has to be estimated from the data, the t distribution is often used to account for the extra uncertainty that results from this estimation. In most such problems, if the standard deviation of the errors were known, a normal distribution would be used instead of the t distribution.
Confidence intervals and hypothesis tests are two statistical procedures in which the quantiles of the sampling distribution of a particular statistic (e.g. the standard score) are required. In any situation where this statistic is a linear function of the data, divided by the usual estimate of the standard deviation, the resulting quantity can be rescaled and centered to follow Student's t distribution. Statistical analyses involving means, weighted means, and regression coefficients all lead to statistics having this form.
Quite often, textbook problems will treat the population standard deviation as if it were known and thereby avoid the need to use the Student's t distribution. These problems are generally of two kinds: (1) those in which the sample size is so large that one may treat a data-based estimate of the variance as if it were certain, and (2) those that illustrate mathematical reasoning, in which the problem of estimating the standard deviation is temporarily ignored because that is not the point that the author or instructor is then explaining.
Hypothesis testing
A number of statistics can be shown to have t distributions for samples of moderate size under null hypotheses that are of interest, so that the t distribution forms the basis for significance tests. For example, the distribution of Spearman's rank correlation coefficient ρ, in the null case (zero correlation), is well approximated by the t distribution for sample sizes above about 20.
Confidence intervals
Suppose the number A is so chosen that
 Pr(−A < T < A) = 0.9,
when T has a t distribution with n − 1 degrees of freedom. By symmetry, this is the same as saying that A satisfies
 Pr(T < A) = 0.95,
so A is the "95th percentile" of this probability distribution, or A = t(0.05, n−1). Then
 Pr(−A < (X̄ₙ − μ)√n/s < A) = 0.9,
where s is the sample standard deviation of the observed values. This is equivalent to
 Pr(X̄ₙ − A s/√n < μ < X̄ₙ + A s/√n) = 0.9.
Therefore, the interval whose endpoints are
 X̄ₙ ± A s/√n
is a 90% confidence interval for μ. Therefore, if we find the mean of a set of observations that we can reasonably expect to have a normal distribution, we can use the t distribution to examine whether the confidence limits on that mean include some theoretically predicted value – such as the value predicted on a null hypothesis.
It is this result that is used in the Student's t tests: since the difference between the means of samples from two normal distributions is itself distributed normally, the t distribution can be used to examine whether that difference can reasonably be supposed to be zero.
If the data are normally distributed, the one-sided (1 − α) upper confidence limit (UCL) of the mean can be calculated using the following equation:
 UCL(1−α) = X̄ₙ + t(α, n−1) s/√n.
The resulting UCL will be the greatest average value that will occur for a given confidence interval and population size. In other words, X̄ₙ being the mean of the set of observations, the probability that the mean of the distribution is inferior to UCL(1−α) is equal to the confidence level 1 − α.
Prediction intervals
The distribution can be used to construct a prediction interval for an unobserved sample from a normal distribution with unknown mean and variance.
In Bayesian statistics
The Student's t distribution, especially in its three-parameter (location-scale) version, arises frequently in Bayesian statistics as a result of its connection with the normal distribution. Whenever the variance of a normally distributed random variable is unknown and a conjugate prior placed over it follows an inverse gamma distribution, the resulting marginal distribution of the variable will follow a Student's t distribution. Equivalent constructions with the same results involve a conjugate scaled-inverse-chi-squared distribution over the variance, or a conjugate gamma distribution over the precision. If an improper prior proportional to 1/σ² is placed over the variance, the t distribution also arises. This is the case regardless of whether the mean of the normally distributed variable is known, is unknown distributed according to a conjugate normally distributed prior, or is unknown distributed according to an improper constant prior.
Related situations that also produce a t distribution are:
The marginal posterior distribution of the unknown mean of a normally distributed variable, with unknown prior mean and variance following the above model.
The prior predictive distribution and posterior predictive distribution of a new normally distributed data point when a series of independent identically distributed normally distributed data points have been observed, with prior mean and variance as in the above model.
Robust parametric modeling
The t distribution is often used as an alternative to the normal distribution as a model for data, which often has heavier tails than the normal distribution allows for; see e.g. Lange et al. The classical approach was to identify outliers (e.g., using Grubbs's test) and exclude or downweight them in some way. However, it is not always easy to identify outliers (especially in high dimensions), and the t distribution is a natural choice of model for such data and provides a parametric approach to robust statistics.
A Bayesian account can be found in Gelman et al. The degrees of freedom parameter controls the kurtosis of the distribution and is correlated with the scale parameter. The likelihood can have multiple local maxima and, as such, it is often necessary to fix the degrees of freedom at a fairly low value and estimate the other parameters taking this as given. Some authors report that values between 3 and 9 are often good choices. Venables and Ripley suggest that a value of 5 is often a good choice.
Student's process
For practical regression and prediction needs, Student's t processes were introduced, which are generalisations of the Student t distributions for functions. A Student's t process is constructed from Student t distributions as a Gaussian process is constructed from Gaussian distributions. For a Gaussian process, all sets of values have a multidimensional Gaussian distribution. Analogously, X(t) is a Student t process on an interval [a, b] if the corresponding values of the process (X(t₁), …, X(tₙ)) have a joint multivariate Student t distribution. These processes are used for regression, prediction, Bayesian optimization and related problems. For multivariate regression and multi-output prediction, the multivariate Student t processes are introduced and used.
Table of selected values
The following table lists values for t distributions with ν degrees of freedom for a range of one-sided or two-sided critical regions. The first column is ν, the percentages along the top are confidence levels, and the numbers in the body of the table are the t(α, ν) factors described in the section on confidence intervals.
The last row, with ν infinite, gives critical points for a normal distribution, since a t distribution with infinitely many degrees of freedom is a normal distribution. (See Related distributions above.)
Calculating the confidence interval
Let's say we have a sample with size 11, sample mean 10, and sample variance 2. For 90% confidence with 10 degrees of freedom, the one-sided t value from the table is 1.372. Then with confidence interval calculated from
 X̄ₙ ± t(α, ν) s/√n,
we determine that with 90% confidence we have a true mean lying below
 10 + 1.372 √(2/11) ≈ 10.585.
In other words, 90% of the times that an upper threshold is calculated by this method from particular samples, this upper threshold exceeds the true mean.
And with 90% confidence we have a true mean lying above
 10 − 1.372 √(2/11) ≈ 9.415.
In other words, 90% of the times that a lower threshold is calculated by this method from particular samples, this lower threshold lies below the true mean.
So that at 80% confidence (calculated from 100% − 2 × (1 − 90%) = 80%), we have a true mean lying within the interval
 (9.415, 10.585).
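The arithmetic of this worked example can be reproduced directly (the numbers 11, 10, 2 and 1.372 are the ones given above):

```python
import math

# Numbers from the example: sample size 11, sample mean 10, sample
# variance 2, and the one-sided t value 1.372 for 90% confidence with
# 10 degrees of freedom.
n, xbar, s2, t_val = 11, 10.0, 2.0, 1.372

half_width = t_val * math.sqrt(s2 / n)
upper = xbar + half_width   # 90% one-sided upper limit on the true mean
lower = xbar - half_width   # 90% one-sided lower limit on the true mean
# (lower, upper) is then the 80% two-sided confidence interval.
```

This reproduces upper ≈ 10.585 and lower ≈ 9.415.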
Saying that 80% of the times that upper and lower thresholds are calculated by this method from a given sample, the true mean is both below the upper threshold and above the lower threshold is not the same as saying that there is an 80% probability that the true mean lies between a particular pair of upper and lower thresholds that have been calculated by this method; see confidence interval and prosecutor's fallacy.
Nowadays, statistical software, such as the R programming language, and functions available in many spreadsheet programs compute values of the t distribution and its inverse without tables.
Computational methods
Monte Carlo sampling
There are various approaches to constructing random samples from the Student's t distribution. The matter depends on whether the samples are required on a stand-alone basis, or are to be constructed by application of a quantile function to uniform samples, e.g. in multi-dimensional applications of copula dependency. In the case of stand-alone sampling, an extension of the Box–Muller method and its polar form is easily deployed. It has the merit that it applies equally well to all real positive degrees of freedom, ν, while many other candidate methods fail if ν is close to zero.
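One such stand-alone sampler is Bailey's polar method, which extends the polar (Marsaglia) form of Box–Muller to the t distribution; the sketch below assumes that method's rejection step and works for any real ν > 0:

```python
import math
import random

def t_polar_sample(nu, rng):
    """Bailey's polar method: an extension of the polar Box-Muller
    algorithm that yields Student's t variates for any real nu > 0."""
    while True:
        u = rng.uniform(-1.0, 1.0)
        v = rng.uniform(-1.0, 1.0)
        w = u * u + v * v
        if 0.0 < w < 1.0:   # accept points inside the unit disk
            return u * math.sqrt(nu * (w ** (-2.0 / nu) - 1.0) / w)

rng = random.Random(7)      # fixed seed for reproducibility
draws = [t_polar_sample(3.0, rng) for _ in range(40000)]
```

As ν → ∞ the factor ν(w^(−2/ν) − 1) tends to −2 ln w, recovering the ordinary polar Box–Muller normal generator.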
History
In statistics, the t distribution was first derived as a posterior distribution in 1876 by Helmert and Lüroth. As such, Student's t-distribution is an example of Stigler's law of eponymy. The t distribution also appeared in a more general form as Pearson type IV distribution in Karl Pearson's 1895 paper.
In the English-language literature, the distribution takes its name from William Sealy Gosset's 1908 paper in Biometrika under the pseudonym "Student". One version of the origin of the pseudonym is that Gosset's employer preferred staff to use pen names when publishing scientific papers instead of their real names, so he used the name "Student" to hide his identity. Another version is that Guinness did not want their competitors to know that they were using the t-test to determine the quality of raw material.
Gosset worked at the Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples – for example, the chemical properties of barley where sample sizes might be as few as 3. Gosset's paper refers to the distribution as the "frequency distribution of standard deviations of samples drawn from a normal population". It became well known through the work of Ronald Fisher, who called the distribution "Student's distribution" and represented the test value with the letter t.
| Mathematics | Probability | null |
105659 | https://en.wikipedia.org/wiki/Upwelling | Upwelling | Upwelling is an oceanographic phenomenon that involves wind-driven motion of dense, cooler, and usually nutrient-rich water from deep water towards the ocean surface. It replaces the warmer and usually nutrient-depleted surface water. The nutrient-rich upwelled water stimulates the growth and reproduction of primary producers such as phytoplankton. The biomass of phytoplankton and the presence of cool water in those regions allow upwelling zones to be identified by cool sea surface temperatures (SST) and high concentrations of chlorophyll a.
The increased availability of nutrients in upwelling regions results in high levels of primary production and thus fishery production. Approximately 25% of the total global marine fish catches come from five upwellings, which occupy only 5% of the total ocean area. Upwellings that are driven by coastal currents or diverging open ocean have the greatest impact on nutrient-enriched waters and global fishery yields.
Mechanisms
The three main drivers that work together to cause upwelling are wind, the Coriolis effect, and Ekman transport. They operate differently for different types of upwelling, but the general effects are the same. In the overall process of upwelling, winds blow across the sea surface in a particular direction, which causes a wind-water interaction. As a result of the wind, the water is transported at a net angle of 90 degrees from the direction of the wind due to Coriolis forces and Ekman transport. Ekman transport causes the surface layer of water to move at about a 45-degree angle from the direction of the wind, and the friction between that layer and the layer beneath it causes the successive layers to move in the same direction. This results in a spiral of water moving down the water column. It is then the Coriolis forces that dictate which way the water will move: in the Northern Hemisphere, the water is transported to the right of the direction of the wind; in the Southern Hemisphere, to the left of the wind. If this net movement of water is divergent, then upwelling of deep water occurs to replace the water that was lost.
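The directional rule described above (net transport 90° to the right of the wind in the Northern Hemisphere, 90° to the left in the Southern) can be stated as a tiny helper; the compass-bearing convention and hemisphere flag are illustrative choices, not standard oceanographic code:

```python
def net_transport_bearing(wind_bearing_deg, hemisphere):
    """Net Ekman transport direction for a given wind direction.

    Bearings are compass degrees (0 = north, 90 = east); hemisphere is
    "N" or "S". Net transport is 90 degrees to the right of the wind in
    the Northern Hemisphere and 90 degrees to the left in the Southern."""
    offset = 90 if hemisphere == "N" else -90
    return (wind_bearing_deg + offset) % 360
```

A northward wind (bearing 0) in the Northern Hemisphere thus drives net transport due east (bearing 90).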
Types
The major upwellings in the ocean are associated with the divergence of currents that bring deeper, colder, nutrient rich waters to the surface. There are at least five types of upwelling: coastal upwelling, large-scale wind-driven upwelling in the ocean interior, upwelling associated with eddies, topographically-associated upwelling, and broad-diffusive upwelling in the ocean interior.
Coastal
Coastal upwelling is the best known type of upwelling, and the most closely related to human activities as it supports some of the most productive fisheries in the world. Coastal upwelling will occur if the wind direction is parallel to the coastline and generates wind-driven currents. The wind-driven currents are diverted to the right of the winds in the Northern Hemisphere and to the left in the Southern Hemisphere due to the Coriolis effect. The result is a net movement of surface water at right angles to the direction of the wind, known as the Ekman transport ( | Physical sciences | Oceanography | Earth science |
105709 | https://en.wikipedia.org/wiki/Aerial%20tramway | Aerial tramway | An aerial tramway, aerial tram, sky tram, aerial cablecar, aerial cableway, telepherique, or seilbahn is a type of aerial lift which uses one or two stationary cables for support, with a third moving cable providing propulsion. With this form of lift, the grip of an aerial tramway cabin is fixed onto the propulsion cable and cannot be decoupled from it during operation. Aerial tramways usually provide lower line capacities and longer wait times than gondola lifts.
Terminology
Cable car is the usual term in British English, where tramway generally refers to a railed street tramway. In American English, cable car may additionally refer to a cable-pulled street tramway with detachable vehicles (e.g., San Francisco's cable cars). Consequently, careful phrasing is necessary to prevent confusion.
It is also sometimes called a ropeway or even incorrectly referred to as a gondola lift. A gondola lift has cabins suspended from a continuously circulating cable whereas aerial trams simply shuttle back and forth on cables. In Japan, the two are considered as the same category of vehicle and called ropeway, while the term cable car refers to both ground-level cable cars and funiculars. An aerial railway where the vehicles are suspended from a fixed track as opposed to a cable is known as a suspension railway.
Overview
An aerial tramway consists of one or two fixed cables (called track cables), one loop of cable (called a haulage rope), and one or two passenger or cargo cabins. The fixed cables provide support for the cabins while the haulage rope, by means of a grip, is solidly connected to the truck (the wheel set that rolls on the track cables). An electric motor drives the haulage rope which provides propulsion. Aerial tramways are constructed as reversible systems; vehicles shuttling back and forth between two end terminals and propelled by a cable loop which stops and reverses direction when the cabins arrive at the end stations. Aerial tramways differ from gondola lifts in that gondola lifts are considered continuous systems (cabins attached onto a circulating haul cable that moves continuously).
Two-car tramways use a jig-back system: a large electric motor is located at the bottom of the tramway so that it effectively pulls one cabin down, using that cabin's weight to help pull the other cabin up. A similar system of cables is used in a funicular railway. The two passenger or cargo cabins, which carry from 4 to over 150 people, are situated at opposite ends of the loops of cable. Thus, while one is coming up, the other is going down the mountain, and they pass each other midway on the cable span.
Some aerial trams have only one cabin, which lends itself better to systems with small elevation changes along the cable run.
History
The first design of an aerial lift was by Croatian polymath Fausto Veranzio, and the first operational aerial tram was built in 1644 by Adam Wybe in Gdańsk, Poland. It was moved by horses and used to move soil over the river to build defences. It is called the first known cable lift in European history and precedes the invention of steel cables. It is not known how long this lift was used. Germany installed the second cable lift 230 years later, now using iron wire cable.
In mining
Aerial tramways are sometimes used in mountainous regions to carry ore from a mine located high on the mountain to an ore mill located at a lower elevation. Ore tramways were common in the early 20th century at the mines in North and South America. One can still be seen in the San Juan Mountains of the US state of Colorado. Another famous use of aerial tramways was at the Kennecott Copper mine in Wrangell-St. Elias National Park, Alaska.
Other firms entered the mining tramway business, including Otto, Leschen, Breco Ropeways Ltd., Ceretti and Tanfani, and Riblet. A major British contributor was Bullivant, which became a constituent of British Ropes in 1924.
Moving people
In the beginning of the 20th century, the rise of the middle class and the leisure industry allowed for investment in sight-seeing transport. Prior to 1893, a combined goods and passenger carrying cableway was installed at Gibraltar. Initially, its passengers were military personnel. An 1893 industry publication said of a two-mile system in Hong Kong that it "is the only wire tramway which has been erected exclusively for the carriage of individuals" (albeit workmen). After the pioneer cable car suitable for public transport on Mount Ulia in 1907 (San Sebastián, Spain) by Leonardo Torres Quevedo and the Wetterhorn Elevator (Grindelwald, Switzerland) in 1908, others to the top of high peaks in the Alps of Austria, Germany and Switzerland resulted. They were much less expensive to build than the earlier rack railway.
One of the first aerial trams was at Chamonix; others in Switzerland and at Garmisch soon followed. From this, it was a natural transposition to build ski lifts and chairlifts. The first cable car in North America was at Cannon Mountain in Franconia, New Hampshire in 1938.
Many aerial tramways were built by Von Roll Ltd. of Switzerland, later acquired by Austrian lift manufacturer Doppelmayr. Other German, Swiss, and Austrian firms played an important role in the cable car business: Bleichert, Heckel, Pohlig, PHB (Pohlig-Heckel-Bleichert), Garaventa and Waagner-Biró. Now there are three groups dominating the world market: Doppelmayr Garaventa Group, Leitner Group, and Poma, the last two being owned by one person.
Some aerial tramways have their own propulsion, such as the Lasso Mule or the Josef Mountain Aerial Tramway near Merano, Italy.
Urban transport
While typically used for ski resorts, aerial tramways have come into use in the urban environment. The 1976 Roosevelt Island Tramway in New York City, the 2022 Rakavlit cable car in Haifa, Israel and the 2006 Portland Aerial Tram are examples where this technology has been successfully adapted for public transport.
Telpherage
The telpherage concept was first publicised in 1883 and several experimental lines were constructed. It was designed to compete not with railways, but with horses and carts.
The first commercial telpherage line was in Glynde, which is in Sussex, England. It was built to connect a newly opened clay pit to the local railway station and opened in 1885.
Double deckers
There are aerial tramways with double deck cabins. The Vanoise Express cable car carries 200 people in each cabin at a height of over the Ponturin gorge in France. The Shinhotaka Ropeway carries 121 people in each cabin at Mount Hotaka in Japan. The CabriO cable car to the summit of the Stanserhorn in Switzerland carries 60 persons, with the upper floor accommodating 30 people in the open air.
Records
First – Adam Wybe's construction in Gdańsk (1644). It was the first rope railway with many supports and the biggest built until the end of the 19th century.
Longest (at time of building) and years operated:
1906–1927 Chilecito – Mina La Mejicana, Argentina ( and branch).
1925–1950 Dúrcal – Motril, Spain ( and branch).
1937–1941 Asmara – Massawa, Eritrea ( and branch), technically a Funifor.
1943–1987 Kristineberg-Boliden, Sweden. A section is still working as the Norsjö ropeway.
Second longest:
1959–1986 Moanda – Mbinda, Gabon – Republic of Congo.
Longest over water:
1906 Thio, New Caledonia. Ship loading.
1941–2006 Forsby-Köping limestone cableway, Sweden; crossing of Hjälmaren strait. 42 km system.
2007 Nha Trang City – Vinpearl Land, Hon Tre Island, Vietnam. Total length 3.3 km.
Longest currently operational:
Norsjö aerial tramway Mensträsk-Bjurfors in Norsjö, Sweden. Passenger tramway, a section of the former 96-km Kristineberg-Boliden industrial ropeway.
12.5 km (7.8 mi) Mérida cable car Mérida, Venezuela.
Grindelwald–Männlichen gondola cableway, Switzerland
Wings of Tatev, Armenia, the world's longest reversible cable car line of one section.
Medeu-Shimbulak tramway near Almaty, Kazakhstan.
Sandia Peak Tramway, reversible tramway in Albuquerque, New Mexico.
Highest lift:
from at Chilecito – Mina La Mejicana, Argentina (drops back to at upper terminal).
Highest lift currently operational:
3188 m (10,459 ft) from 1,577 MSL to 4,765 MSL (5,174 FAMSL to 15,633 FAMSL) Mérida cable car, Venezuela.
Highest station:
Greater than 1935-19?? Aucanquilcha, Chile.
Lowest station:
below sea level Masada cableway, Israel.
Tallest support tower:
Cat Hai – Phu Long cable car, Vietnam.
As mass transit:
The Roosevelt Island Tramway in New York City was the first aerial tramway in North America used by commuters as a mode of mass transit (See Transportation in New York City). Passengers pay with the same farecard used for the New York City Subway.
The Portland Aerial Tram in Portland, Oregon, was opened in January 2007 and became the second public transportation aerial tramway in North America.
In Medellin, Colombia, both the Metro and the recent Metrocable aerial tramway addition can be used while paying a single fare.
Largest rotating cars:
Palm Springs Aerial Tramway in Palm Springs, California.
List of accidents
Despite the introduction of various safety measures (back-up power generators, evacuation plans, etc.) there have been several serious incidents on aerial tramways, some fatal.
August 29, 1961: A military plane split the hauling cable of the Vallée Blanche Aerial Tramway on the Aiguille du Midi in the Mont Blanc massif: six people killed.
July 9, 1974: Ulriksbanen, an aerial tramway in Bergen, Norway, is hauled by a tow rope and runs on a carrying rope. As the carriage reached its destination at the top station, just as the carriage operator was about to open the doors, the tow rope broke. The operator was thrown into the back of the vehicle, preventing him from reaching the emergency brake. The carriage sped back down the still-intact carrying rope, gathering speed quickly and approaching the first vertical mast about 70 meters away. Because the tow rope was broken, it was no longer taut at the point where it crossed over the mast; as the carriage crossed the mast, the slack tow rope jammed and caused the carriage to jump off the carrying rope and free-fall to the ground 15 meters below. The carriage crashed onto a downslope and careened a further 30 meters down the mountainside before being crushed against some boulders, finally coming to a stop. Four of the eight occupants were killed.
March 9, 1976: In the Italian Dolomites at Cavalese, a cab fell after a rope broke, killing 43. (See 1976 Cavalese cable car crash)
April 15, 1978: In a storm, two carrying ropes of the Squaw Valley Aerial Tramway in California fell from the aerial tramway support tower. One of the ropes partly destroyed the cabin. Four were killed, 32 injured.
June 1, 1990: Nineteen were killed and fifteen injured after a hauling rope broke in the 1990 Tbilisi Cable car accident
February 3, 1998: U.S. Marine Corps EA-6B Prowler jets severed the cable of an aerial ropeway in Cavalese, Italy, killing 20 people. (See Cavalese cable car disaster (1998))
July 1, 1999: Saint-Étienne-en-Dévoluy, France. An aerial tramway car detached from the cable it was traveling on and fell to the valley floor, killing all 20 occupants. The majority were employees and contractors of an international astronomical observatory run by the Institut de Radioastronomie Millimétrique. (See Saint-Étienne-en-Dévoluy cable car disaster)
October 19, 2003: Four were killed and 11 injured when three cars slipped off the cable of the Darjeeling Ropeway.
April 2, 2004: In Yerevan, Armenia on an urban cable car one of the two cabins derailed from the steel track cable and fell to the ground killing five, including two Iranians, and injuring 11 others. The second cabin slammed onto the lower station injuring three people.
October 9, 2004: Crash of a cabin of the Grünberg aerial tramway in Gmunden, Austria. Many injuries.
December 31, 2012: The Alyeska Resort Aerial Tramway was blown sideways while operating in high winds and was impaled on the tower guide, severely damaging the contacting cabin. Only minor injuries were incurred.
December 4, 2018: An exterior panel of the Portland Aerial Tram dropped at least 100 feet (30 m) and struck a pedestrian walking below.
May 23, 2021: 14 people were killed when a cable failed 300 m from the top of the Mottarone mountain.
October 21, 2021: One person died after a cable car cabin became detached from its cable at the Ještěd mountain in Liberec, Czech Republic.
April 12, 2024: One person died and seven people were injured after a cable car cabin hit a pole and burst open in Antalya, Turkey.
Gallery
Cableways in fiction
"Ascension"
Blind Fury
Get Carter – coal spoil conveyor Blackhall Beach near Blackhall Colliery
Electric City (web series)
The Haunting of Tram Car 015 (P. Djèlí Clark)
Hoodwinked!
Kongfrontation
Moonraker (film)
Nighthawks (1981 film)
Night Train to Munich
Nitrome's Skywire games
On Her Majesty's Secret Service (film)
Where Eagles Dare
Zootopia
Kiff (TV series)
Remote control
In electronics, a remote control (also known as a remote or clicker) is an electronic device used to operate another device from a distance, usually wirelessly. In consumer electronics, a remote control can be used to operate devices such as a television set, DVD player or other digital home media appliance. A remote control can allow operation of devices that are out of convenient reach for direct operation of controls. They function best when used from a short distance. This is primarily a convenience feature for the user. In some cases, remote controls allow a person to operate a device that they otherwise would not be able to reach, as when a garage door opener is triggered from outside.
Early television remote controls (1956–1977) used ultrasonic tones. Present-day remote controls are commonly consumer infrared devices which send digitally-coded pulses of infrared radiation. They control functions such as power, volume, channels, playback, track change, energy, fan speed, and various other features. Remote controls for these devices are usually small wireless handheld objects with an array of buttons. They are used to adjust various settings such as television channel, track number, and volume. The remote control code, and thus the required remote control device, is usually specific to a product line. However, there are universal remotes, which emulate the remote control made for most major brand devices.
Remote controls in the 2000s include Bluetooth or Wi-Fi connectivity, motion sensor-enabled capabilities and voice control. Remote controls for 2010s onward Smart TVs may feature a standalone keyboard on the rear side to facilitate typing, and be usable as a pointing device.
History
Wired and wireless remote control was developed in the latter half of the 19th century to meet the need to control unmanned vehicles (for the most part military torpedoes). These included a wired version by German engineer Werner von Siemens in 1870, radio-controlled ones by British engineers Ernest Wilson and C. J. Evans (1897), and a prototype that inventor Nikola Tesla demonstrated in New York in 1898. In 1903 Spanish engineer Leonardo Torres Quevedo introduced a radio-based control system called the "Telekino" at the Paris Academy of Sciences, which he hoped to use to control a dirigible airship of his own design. Unlike previous "on/off" techniques, the Telekino was able to execute a finite but not limited set of different mechanical actions using a single communication channel. From 1904 to 1906 Torres conducted Telekino tests, first with a three-wheeled land vehicle with an effective range of 20 to 30 meters, and then by guiding a manned electrically powered boat, which demonstrated a standoff range of 2 kilometers. The first remote-controlled model airplane flew in 1932, and the use of remote control technology for military purposes was worked on intensively during the Second World War, one result of this being the German Wasserfall missile.
By the late 1930s, several radio manufacturers offered remote controls for some of their higher-end models. Most of these were connected to the set being controlled by wires, but the Philco Mystery Control (1939) was a battery-operated low-frequency radio transmitter, thus making it the first wireless remote control for a consumer electronics device. Using pulse-count modulation, this also was the first digital wireless remote control.
Television remote controls
One of the first remotes intended to control a television was developed by Zenith Radio Corporation in 1950. The remote, called Lazy Bones, was connected to the television by a wire. A wireless remote control, the Flash-Matic, was developed in 1955 by Eugene Polley. It worked by shining a beam of light onto one of four photoelectric cells, but the cell did not distinguish between light from the remote and light from other sources. The Flash-Matic also had to be pointed very precisely at one of the sensors in order to work.
In 1956, Robert Adler developed Zenith Space Command, a wireless remote. It was mechanical and used ultrasound to change the channel and volume. When the user pushed a button on the remote control, it struck a bar and clicked, hence they were commonly called "clickers", and the mechanics were similar to a pluck. Each of the four bars emitted a different fundamental frequency with ultrasonic harmonics, and circuits in the television detected these sounds and interpreted them as channel-up, channel-down, sound-on/off, and power-on/off.
Later, the rapid decrease in price of transistors made possible cheaper electronic remotes that contained a piezoelectric crystal that was fed by an oscillating electric current at a frequency near or above the upper threshold of human hearing, though still audible to dogs. The receiver contained a microphone attached to a circuit that was tuned to the same frequency. Some problems with this method were that the receiver could be triggered accidentally by naturally occurring noises or deliberately by metal against glass, for example, and some people could hear the lower ultrasonic harmonics.
In 1970, RCA introduced an all-electronic remote control that uses digital signals and metal–oxide–semiconductor field-effect transistor (MOSFET) memory. This was widely adopted for color television, replacing motor-driven tuning controls.
The impetus for a more complex type of television remote control came in 1973, with the development of the Ceefax teletext service by the BBC. Most commercial remote controls at that time had a limited number of functions, sometimes as few as three: next channel, previous channel, and volume/off. This type of control did not meet the needs of Teletext sets, where pages were identified with three-digit numbers. A remote control that selects Teletext pages would need buttons for each numeral from zero to nine, as well as other control functions, such as switching from text to picture, and the normal television controls of volume, channel, brightness, color intensity, etc. Early Teletext sets used wired remote controls to select pages, but the continuous use of the remote control required for Teletext quickly indicated the need for a wireless device. So BBC engineers began talks with one or two television manufacturers, which led to early prototypes in around 1977–1978 that could control many more functions. ITT was one of the companies and later gave its name to the ITT protocol of infrared communication.
In 1980, the most popular remote control was the Starcom Cable TV Converter (from Jerrold Electronics, a division of General Instrument) which used 40-kHz sound to change channels. Then, a Canadian company, Viewstar, Inc., was formed by engineer Paul Hrivnak and started producing a cable TV converter with an infrared remote control. The product was sold through Philips for approximately $190 CAD. The Viewstar converter was an immediate success, the millionth converter being sold on March 21, 1985, with 1.6 million sold by 1989.
Other remote controls
The Blab-off was a wired remote control created in 1952 that turned a television's sound on or off so that viewers could avoid hearing commercials. In the 1980s Steve Wozniak of Apple started a company named CL 9. The purpose of this company was to create a remote control that could operate multiple electronic devices. The CORE unit (Controller Of Remote Equipment) was introduced in the fall of 1987. The advantage of this remote controller was that it could "learn" remote signals from different devices. It had the ability to perform specific or multiple functions at various times with its built-in clock. It was the first remote control that could be linked to a computer and loaded with updated software code as needed. The CORE unit never made a huge impact on the market. It was much too cumbersome for the average user to program, but it received rave reviews from those who could. These obstacles eventually led to the demise of CL 9, but two of its employees continued the business under the name Celadon. This was one of the first computer-controlled learning remote controls on the market.
In the 1990s, cars were increasingly sold with electronic remote control door locks. These remotes transmit a signal to the car which locks or unlocks the doors or opens the trunk. An aftermarket device sold in some countries is the remote starter. This enables a car owner to remotely start their car. This feature is most associated with countries with winter climates, where users may wish to run the car for several minutes before they intend to use it, so that the car heater and defrost systems can remove ice and snow from the windows.
Proliferation
By the early 2000s, the number of consumer electronic devices in most homes greatly increased, along with the number of remotes to control those devices. According to the Consumer Electronics Association, an average US home has four remotes. To operate a home theater as many as five or six remotes may be required, including one for cable or satellite receiver, VCR or digital video recorder (DVR/PVR), DVD player, TV and audio amplifier. Several of these remotes may need to be used sequentially for some programs or services to work properly. However, as there are no accepted interface guidelines, the process is increasingly cumbersome. One solution used to reduce the number of remotes that have to be used is the universal remote, a remote control that is programmed with the operation codes for most major brands of TVs, DVD players, etc. In the early 2010s, many smartphone manufacturers began incorporating infrared emitters into their devices, thereby enabling their use as universal remotes via an included or downloadable app.
Technique
The main technology used in home remote controls is infrared (IR) light. The signal between a remote control handset and the device it controls consists of pulses of infrared light, which is invisible to the human eye but can be seen through a digital camera, video camera or phone camera. The transmitter in the remote control handset sends out a stream of pulses of infrared light when the user presses a button on the handset. A transmitter is often a light-emitting diode (LED) which is built into the pointing end of the remote control handset. The infrared light pulses form a pattern unique to that button. The receiver in the device recognizes the pattern and causes the device to respond accordingly.
Opto components and circuits
Most remote controls for electronic appliances use a near infrared diode to emit a beam of light that reaches the device. A 940 nm wavelength LED is typical. This infrared light is not visible to the human eye but picked up by sensors on the receiving device. Video cameras see the diode as if it produces visible purple light. With a single channel (single-function, one-button) remote control the presence of a carrier signal can be used to trigger a function. For multi-channel (normal multi-function) remote controls more sophisticated procedures are necessary: one consists of modulating the carrier with signals of different frequencies. After the receiver demodulates the received signal, it applies the appropriate frequency filters to separate the respective signals. One can often hear the signals being modulated on the infrared carrier by operating a remote control in very close proximity to an AM radio not tuned to a station. Today, IR remote controls almost always use a pulse width modulated code, encoded and decoded by a digital computer: a command from a remote control consists of a short train of pulses of carrier-present and carrier-not-present of varying widths.
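The pulse-train idea can be sketched in a few lines. The example below is loosely modeled on NEC-style pulse-distance coding, where a bit's value is carried by the length of the carrier-off gap after each short carrier burst; the exact durations used here are illustrative assumptions, not a definitive implementation of any one protocol.

```python
# Sketch of pulse-distance IR encoding. Each entry is (mark, space) in
# microseconds: carrier-present ("mark") followed by carrier-absent ("space").
# Timings are NEC-like but chosen here for illustration only.

HEADER = (9000, 4500)   # long leader mark/space announcing a frame
BIT_0 = (560, 560)      # short mark, short space -> logical 0
BIT_1 = (560, 1690)     # short mark, long space  -> logical 1

def encode(command: int, bits: int = 8) -> list[tuple[int, int]]:
    """Turn a command byte into a list of (mark, space) pulse pairs."""
    pulses = [HEADER]
    for i in range(bits):
        bit = (command >> i) & 1          # least significant bit first
        pulses.append(BIT_1 if bit else BIT_0)
    pulses.append((560, 0))               # trailing stop mark
    return pulses

def decode(pulses: list[tuple[int, int]], bits: int = 8) -> int:
    """Recover the command by classifying each space as short or long."""
    command = 0
    for i, (_, space) in enumerate(pulses[1:1 + bits]):  # skip the header
        if space > 1000:                  # long space -> 1
            command |= 1 << i
    return command

frame = encode(0x5A)
assert decode(frame) == 0x5A
```

A real receiver additionally tolerates timing jitter (e.g. classifying spaces within a ±25% band) and rejects frames whose header does not match.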
Consumer electronics infrared protocols
Different manufacturers of infrared remote controls use different protocols to transmit the infrared commands. The RC-5 protocol, which has its origins within Philips, uses, for instance, a total of 14 bits for each button press. The bit pattern is modulated onto a carrier frequency that, again, can differ between manufacturers and standards; in the case of RC-5, the carrier is 36 kHz. Other consumer infrared protocols include the various versions of SIRCS used by Sony, the RC-6 from Philips, the Ruwido R-Step, and the NEC TC101 protocol.
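The 14-bit RC-5 frame mentioned above can be sketched as follows. This is a simplified illustration of the classic bi-phase (Manchester) envelope — two start bits, a toggle bit, 5 address bits and 6 command bits — ignoring the 36 kHz carrier itself and extended-RC-5 variants.

```python
# Sketch of RC-5-style bi-phase (Manchester) encoding of a 14-bit word.
# Each bit occupies a fixed slot split into two halves: a logical 1 is
# carrier-off then carrier-on, a logical 0 is the reverse. Only the on/off
# envelope is modeled; the 36 kHz carrier is not.

def rc5_word(toggle: int, address: int, command: int) -> int:
    """Pack a 14-bit frame: 2 start bits, toggle, 5 address, 6 command bits."""
    return (0b11 << 12) | ((toggle & 1) << 11) | ((address & 0x1F) << 6) | (command & 0x3F)

def manchester(word: int, bits: int = 14) -> list[int]:
    """Expand each bit into two half-slots of the envelope (1 = carrier on)."""
    envelope = []
    for i in range(bits - 1, -1, -1):     # most significant bit first
        bit = (word >> i) & 1
        envelope += [0, 1] if bit else [1, 0]
    return envelope

env = manchester(rc5_word(toggle=0, address=0, command=13))
assert len(env) == 28   # 14 bits, two half-slots each
```

The toggle bit flips on each new key press, which lets the receiver distinguish a held-down button (repeated identical frames) from two separate presses.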
Infrared, line of sight and operating angle
Since infrared (IR) remote controls use light, they require line of sight to operate the destination device. The signal can, however, be reflected by mirrors, just like any other light source. If operation is required where no line of sight is possible, for instance when controlling equipment in another room or installed in a cabinet, many brands of IR extenders are available for this on the market. Most of these have an IR receiver, picking up the IR signal and relaying it via radio waves to the remote part, which has an IR transmitter mimicking the original IR control. Infrared receivers also tend to have a more or less limited operating angle, which mainly depends on the optical characteristics of the phototransistor. However, it is easy to increase the operating angle using a matte transparent object in front of the receiver.
Radio remote control systems
Radio remote control (RF remote control) is used to control distant objects using a variety of radio signals transmitted by the remote control device. As a complementary method to infrared remote controls, the radio remote control is used with electric garage door or gate openers, automatic barrier systems, burglar alarms and industrial automation systems. Standards used for RF remotes are: Bluetooth AVRCP, Zigbee (RF4CE), Z-Wave. Most remote controls use their own coding, transmitting from 8 to 100 or more pulses with a fixed or rolling code, using OOK or FSK modulation. Also, transmitters or receivers can be universal, meaning they are able to work with many different codings. In this case, the transmitter is normally called a universal remote control duplicator because it is able to copy existing remote controls, while the receiver is called a universal receiver because it works with almost any remote control on the market.
A radio remote control system commonly has two parts: transmit and receive. The transmitter part is divided into two parts, the RF remote control and the transmitter module. This allows the transmitter module to be used as a component in a larger application. The transmitter module is small, but users must have detailed knowledge to use it; combined with the RF remote control it is much simpler to use.
The receiver is generally one of two types: a super-regenerative receiver or a superheterodyne. The super-regenerative receiver works like an intermittent oscillation detection circuit. The superheterodyne works like the one in a radio receiver. The superheterodyne receiver is used because of its stability, high sensitivity, relatively good anti-interference ability, small package and lower price.
Usage
Industry
A remote control is used for controlling substations, pumped-storage power stations and HVDC plants. These systems often use PLC systems operating in the longwave range.
Power line remote control
Power line remote control is a subset of power-line communication that sends remote control signals over energized AC power lines. It was used to remotely control home automation before the advent of Wi-Fi-connected smart switches.
Garage and gate
Garage and gate remote controls are very common, especially in some countries such as the US, Australia, and the UK, where garage doors, gates and barriers are widely used. Such a remote is very simple by design, usually with only one button, though some have more buttons to control several gates from one control. Such remotes can be divided into two categories by the encoder type used: fixed code and rolling code. Remotes with DIP switches are likely to use fixed code, an older technology that was once widely used. However, fixed codes have been criticized for their lack of security, so rolling code has been more and more widely used in later installations.
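The security difference can be illustrated with a rough sketch of rolling-code validation. This is a hypothetical scheme (an HMAC over a shared counter), not KeeLoq or any real product's algorithm; it only shows why replaying a captured code fails, while a look-ahead window tolerates button presses made while out of range.

```python
# Minimal sketch of rolling-code validation (hypothetical scheme).
# Transmitter and receiver share a secret key and a counter; the receiver
# accepts any code within a small look-ahead window and then resynchronizes.
import hashlib
import hmac

KEY = b"shared-secret"   # illustrative; a real device uses a per-unit key
WINDOW = 16              # how many missed presses the receiver tolerates

def code_for(counter: int) -> bytes:
    """The code transmitted for a given counter value."""
    return hmac.new(KEY, counter.to_bytes(4, "big"), hashlib.sha256).digest()[:4]

class Receiver:
    def __init__(self) -> None:
        self.counter = 0

    def accept(self, code: bytes) -> bool:
        for step in range(1, WINDOW + 1):
            if hmac.compare_digest(code, code_for(self.counter + step)):
                self.counter += step   # resynchronize; older codes now invalid
                return True
        return False

rx = Receiver()
assert rx.accept(code_for(1))       # next code in sequence is accepted
assert not rx.accept(code_for(1))   # replaying the same code is rejected
assert rx.accept(code_for(5))       # presses made out of range are tolerated
```

A fixed-code remote, by contrast, sends the same bit pattern every time, so anyone who records one transmission can replay it indefinitely.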
Military
Remotely operated weapons were demonstrated in the late 19th century in the form of several types of remotely controlled torpedoes. The early 1870s saw remotely controlled torpedoes by John Ericsson (pneumatic), John Louis Lay (electric wire guided), and Victor von Scheliha (electric wire guided).
The Brennan torpedo, invented by Louis Brennan in 1877, was powered by two contra-rotating propellers that were spun by rapidly pulling out wires from drums wound inside the torpedo. Differential speed on the wires connected to the shore station allowed the torpedo to be guided to its target, making it "the world's first practical guided missile". In 1898 Nikola Tesla publicly demonstrated a "wireless" radio-controlled torpedo that he hoped to sell to the U.S. Navy.
Archibald Low was known as the "father of radio guidance systems" for his pioneering work on guided rockets and planes during the First World War. In 1917, he demonstrated a remote-controlled aircraft to the Royal Flying Corps and in the same year built the first wire-guided rocket. As head of the secret RFC experimental works at Feltham, A. M. Low was the first person to use radio control successfully on an aircraft, an "Aerial Target". It was "piloted" from the ground by future world aerial speed record holder Henry Segrave. Low's systems encoded the command transmissions as a countermeasure to prevent enemy intervention. By 1918 the secret D.C.B. Section of the Royal Navy's Signals School, Portsmouth, under the command of Eric Robinson V.C., used a variant of the Aerial Target's radio control system to control different types of naval vessels, including a submarine, from "mother" aircraft.
The military also developed several early remote control vehicles. In World War I, the Imperial German Navy employed FL-boats (Fernlenkboote) against coastal shipping. These were driven by internal combustion engines and controlled remotely from a shore station through several miles of wire wound on a spool on the boat. An aircraft was used to signal directions to the shore station. EMBs carried a high explosive charge in the bow and traveled at speeds of thirty knots. The Soviet Red Army used remotely controlled teletanks during the 1930s in the Winter War against Finland and the early stages of World War II. A teletank is controlled by radio from a control tank at a distance of 500 to 1,500 meters, the two constituting a telemechanical group. The Red Army fielded at least two teletank battalions at the beginning of the Great Patriotic War. There were also remotely controlled cutters and experimental remotely controlled planes in the Red Army.
Remote controls in military usage employ jamming and countermeasures against jamming. Jammers are used to disable or sabotage the enemy's use of remote controls. The distances for military remote controls also tend to be much longer, up to intercontinental distance satellite-linked remote controls used by the U.S. for their unmanned airplanes (drones) in Afghanistan, Iraq, and Pakistan. Remote controls are used by insurgents in Iraq and Afghanistan to attack coalition and government troops with roadside improvised explosive devices, and terrorists in Iraq are reported in the media to use modified TV remote controls to detonate bombs.
Space
In the winter of 1971, the Soviet Union explored the surface of the Moon with the lunar vehicle Lunokhod 1, the first roving remote-controlled robot to land on another celestial body. Remote control technology is also used in space travel; for instance, the Soviet Lunokhod vehicles were remote-controlled from the ground. Many space exploration rovers can be remotely controlled, though the vast distance to a vehicle results in a long time delay between transmission and receipt of a command.
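The delay itself is simple arithmetic: one-way delay is distance divided by the speed of light. The distances below are approximate round figures used for illustration.

```python
# One-way signal delay at the speed of light for representative distances.
C = 299_792_458  # speed of light in vacuum, m/s

def delay_s(distance_m: float) -> float:
    """One-way propagation delay in seconds."""
    return distance_m / C

moon = delay_s(384_400_000)            # mean Earth-Moon distance
print(f"Moon: {moon:.2f} s one-way")   # about 1.28 s

mars_close = delay_s(54_600_000_000)   # Mars at a close approach
print(f"Mars (close): {mars_close / 60:.1f} min one-way")  # roughly 3 min
```

This is why a lunar rover like Lunokhod could be driven almost interactively, while rovers on Mars must rely on queued command sequences and on-board autonomy rather than real-time joystick control.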
PC control
Existing infrared remote controls can be used to control PC applications. Any application that supports shortcut keys can be controlled via infrared remote controls from other home devices (TV, VCR, AC). This is widely used with multimedia applications for PC-based home theater systems. For this to work, one needs a device that decodes IR remote control data signals and a PC application that communicates with this device. A connection can be made via a serial port, USB port or motherboard IrDA connector. Such devices are commercially available but can be homemade using low-cost microcontrollers. LIRC (Linux IR Remote Control) and WinLIRC (for Windows) are software packages developed for the purpose of controlling a PC using a TV remote; they can also be used with homebrew remotes with minor modification.
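Once a decoder such as LIRC has turned an IR pulse train into a button name, the remaining step is a lookup from button names to shortcut actions. The sketch below is a hypothetical dispatch layer — the button names, mapping, and `send_keys` callback are made-up illustrations of the idea, not LIRC's actual configuration format or API.

```python
# Hedged sketch of the dispatch step between an IR decoder and applications:
# decoded button names are mapped to application shortcut actions.
# All names here are hypothetical.

KEYMAP = {
    "KEY_PLAY": "space",      # toggle playback
    "KEY_VOLUP": "ctrl+up",   # raise volume
    "KEY_MUTE": "m",          # mute audio
}

def dispatch(button: str, send_keys) -> bool:
    """Look up the decoded button and fire the matching shortcut, if any."""
    shortcut = KEYMAP.get(button)
    if shortcut is None:
        return False          # unmapped button: ignore silently
    send_keys(shortcut)       # hand the shortcut to the key-injection backend
    return True

pressed = []
assert dispatch("KEY_PLAY", pressed.append)
assert pressed == ["space"]
assert not dispatch("KEY_UNKNOWN", pressed.append)
```

In a real setup the `send_keys` callback would be backed by an OS-level key-injection facility, and the mapping would live in a per-application configuration file.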
Photography
Remote controls are used in photography, in particular to take long-exposure shots. Many action cameras such as the GoPros as well as standard DSLRs including Sony's Alpha series incorporate Wi-Fi based remote control systems. These can often be accessed and even controlled via cell-phones and other mobile devices.
Video games
Video game consoles had not used wireless controllers until recently, mainly because of the difficulty involved in playing the game while keeping the infrared transmitter pointed at the console. Early wireless controllers were cumbersome and, when powered by alkaline batteries, lasted only a few hours before they needed replacement. Some wireless controllers were produced by third parties, in most cases using a radio link instead of infrared. Even these were very inconsistent, and in some cases had transmission delays, making them virtually useless. Some examples include the Double Player for the NES, the Master System Remote Control System and the Wireless Dual Shot for the PlayStation.
The first official wireless game controller made by a first-party manufacturer was the CX-42 for the Atari 2600. The Philips CD-i 400 series also came with a remote control, and the WaveBird was later produced for the GameCube. In the seventh generation of gaming consoles, wireless controllers became standard. Some wireless controllers, such as those of the PlayStation 3 and Wii, use Bluetooth. Others, like the Xbox 360, use proprietary wireless protocols.
Standby power
To be turned on by a wireless remote, the controlled appliance must always be partly on, consuming standby power.
Alternatives
Hand-gesture recognition has been researched as an alternative to remote controls for television sets.
Infectious mononucleosis
Infectious mononucleosis (IM, mono), also known as glandular fever, is an infection usually caused by the Epstein–Barr virus (EBV). Most people are infected by the virus as children, when the disease produces few or no symptoms. In young adults, the disease often results in fever, sore throat, enlarged lymph nodes in the neck, and fatigue. Most people recover in two to four weeks; however, feeling tired may last for months. The liver or spleen may also become swollen, and in less than one percent of cases splenic rupture may occur.
While usually caused by the Epstein–Barr virus, also known as human herpesvirus 4, which is a member of the herpesvirus family, a few other viruses and the protozoon Toxoplasma gondii may also cause the disease. It is primarily spread through saliva but can rarely be spread through semen or blood. Spread may occur by objects such as drinking glasses or toothbrushes or through a cough or sneeze. Those who are infected can spread the disease weeks before symptoms develop. Mono is primarily diagnosed based on the symptoms and can be confirmed with blood tests for specific antibodies. Another typical finding is increased blood lymphocytes of which more than 10% are reactive. The monospot test is not recommended for general use due to poor accuracy.
There is no vaccine for EBV; however, there is ongoing research. Infection can be prevented by not sharing personal items or saliva with an infected person. Mono generally improves without any specific treatment. Symptoms may be reduced by drinking enough fluids, getting sufficient rest, and taking pain medications such as paracetamol (acetaminophen) and ibuprofen.
Mononucleosis most commonly affects those between the ages of 15 and 24 years in the developed world. In the developing world, people are more often infected in early childhood when there are fewer symptoms. In those between 16 and 20 it is the cause of about 8% of sore throats. About 45 out of 100,000 people develop infectious mono each year in the United States. Nearly 95% of people have had an EBV infection by the time they are adults. The disease occurs equally at all times of the year. Mononucleosis was first described in the 1920s and is colloquially known as "the kissing disease".
Signs and symptoms
The signs and symptoms of infectious mononucleosis vary with age.
Children
Before puberty, the disease typically only produces flu-like symptoms, if any at all. When found, symptoms tend to be similar to those of common throat infections (mild pharyngitis, with or without tonsillitis).
Denis Burkitt estimated that Burkitt's lymphoma, a cancer associated with the Epstein–Barr virus, affects about 18 per 100,000 children each year. It usually presents as an aggressive tumor in children.
Adolescents and young adults
In adolescence and young adulthood, the disease presents with a characteristic triad:
Fever – usually lasting 14 days; often mild.
Sore throat – usually severe for 3–5 days, before resolving in the next 7–10 days.
Swollen glands – mobile; usually located around the back of the neck (posterior cervical lymph nodes) and sometimes throughout the body.
Another major symptom is feeling tired. Headaches are common, and abdominal pains with nausea or vomiting sometimes also occur. Symptoms most often disappear after about 2–4 weeks. However, fatigue and a general feeling of being unwell (malaise) may sometimes last for months. Fatigue lasts more than one month in an estimated 28% of cases. Mild fever, swollen neck glands and body aches may also persist beyond 4 weeks. Most people are able to resume their usual activities within 2–3 months.
The most prominent sign of the disease is often pharyngitis, which is frequently accompanied by enlarged tonsils with pus—an exudate similar to that seen in cases of strep throat. In about 50% of cases, small reddish-purple spots called petechiae can be seen on the roof of the mouth. Palatal enanthem can also occur, but is relatively uncommon.
A small minority of people spontaneously present a rash, usually on the arms or trunk, which can be macular (morbilliform) or papular. Almost all people given amoxicillin or ampicillin eventually develop a generalized, itchy maculopapular rash, which however does not imply that the person will have adverse reactions to penicillins again in the future. Occasional cases of erythema nodosum and erythema multiforme have been reported. Seizures may also occasionally occur.
Complications
Spleen enlargement is common in the second and third weeks, although this may not be apparent on physical examination. Rarely the spleen may rupture. There may also be some enlargement of the liver. Jaundice occurs only occasionally.
It generally gets better on its own in people who are otherwise healthy. When caused by EBV, infectious mononucleosis is classified as one of the Epstein–Barr virus–associated lymphoproliferative diseases. Occasionally the disease may persist and result in a chronic infection. This may develop into systemic EBV-positive T cell lymphoma.
Older adults
Infectious mononucleosis mainly affects younger adults. When older adults do catch the disease, they less often have characteristic signs and symptoms such as the sore throat and lymphadenopathy. Instead, they may primarily experience prolonged fever, fatigue, malaise and body pains. They are more likely to have liver enlargement and jaundice. People over 40 years of age are more likely to develop serious illness.
Incubation period
The exact length of time between infection and symptoms is unclear. A review of the literature made an estimate of 33–49 days. In adolescents and young adults, symptoms are thought to appear around 4–6 weeks after initial infection. Onset is often gradual, though it can be abrupt. The main symptoms may be preceded by 1–2 weeks of fatigue, feeling unwell and body aches.
Cause
Epstein–Barr virus
About 90% of cases of infectious mononucleosis are caused by the Epstein–Barr virus, a member of the Herpesviridae family of DNA viruses. It is one of the most commonly found viruses throughout the world. Contrary to common belief, the Epstein–Barr virus is not highly contagious. It can only be contracted through direct contact with an infected person's saliva, such as through kissing or sharing toothbrushes. About 95% of the population has been exposed to this virus by the age of 40, but only 15–20% of teenagers and about 40% of exposed adults actually develop infectious mononucleosis.
Cytomegalovirus
About 5–7% of cases of infectious mononucleosis are caused by human cytomegalovirus (CMV), another type of herpes virus. This virus is found in body fluids including saliva, urine, blood, tears, breast milk and genital secretions. A person becomes infected with this virus by direct contact with infected body fluids. Cytomegalovirus is most commonly transmitted through kissing and sexual intercourse. It can also be transferred from an infected mother to her unborn child. This virus is often "silent" because the signs and symptoms cannot be felt by the person infected. However, it can cause life-threatening illness in infants, people with HIV, transplant recipients, and those with weak immune systems. For those with weak immune systems, cytomegalovirus can cause more serious illnesses such as pneumonia and inflammations of the retina, esophagus, liver, large intestine, and brain. Approximately 90% of the human population has been infected with cytomegalovirus by the time they reach adulthood, but most are unaware of the infection. Once a person becomes infected with cytomegalovirus, the virus stays in their body throughout the person's lifetime. During this latent phase, the virus can be detected only in monocytes.
Other causes
Toxoplasma gondii, a parasitic protozoon, is responsible for less than 1% of the infectious mononucleosis cases. Viral hepatitis, adenovirus, rubella, and herpes simplex viruses have also been reported as rare causes of infectious mononucleosis.
Transmission
Epstein–Barr virus infection is spread via saliva, and has an incubation period of four to seven weeks. The length of time that an individual remains contagious is unclear, but the chances of passing the illness to someone else may be the highest during the first six weeks following infection. Some studies indicate that a person can spread the infection for many months, possibly up to a year and a half.
Pathophysiology
The virus replicates first within epithelial cells in the pharynx (which causes pharyngitis, or sore throat), and later primarily within B cells (which are invaded via their CD21). The host immune response involves cytotoxic (CD8-positive) T cells against infected B lymphocytes, resulting in enlarged, reactive lymphocytes (Downey cells).
When the infection is acute (recent onset, instead of chronic), heterophile antibodies are produced.
Cytomegalovirus, adenovirus and Toxoplasma gondii (toxoplasmosis) infections can cause symptoms similar to infectious mononucleosis, but a heterophile antibody test will test negative and differentiate those infections from infectious mononucleosis.
Mononucleosis is sometimes accompanied by secondary cold agglutinin disease, an autoimmune disease in which abnormal circulating antibodies directed against red blood cells can lead to a form of autoimmune hemolytic anemia. The cold agglutinin detected is of anti-i specificity.
Diagnosis
The disease is diagnosed based on:
Physical examination
The presence of an enlarged spleen, and swollen posterior cervical, axillary, and inguinal lymph nodes are the most useful to suspect a diagnosis of infectious mononucleosis. On the other hand, the absence of swollen cervical lymph nodes and fatigue are the most useful to dismiss the idea of infectious mononucleosis as the correct diagnosis. The insensitivity of the physical examination in detecting an enlarged spleen means it should not be used as evidence against infectious mononucleosis. A physical examination may also show petechiae in the palate.
Heterophile antibody test
The heterophile antibody test, or monospot test, works by agglutination of red blood cells from guinea pigs, sheep and horses. This test is specific but not particularly sensitive (with a false-negative rate of as high as 25% in the first week, 5–10% in the second, and 5% in the third). About 90% of diagnosed people have heterophile antibodies by week 3, disappearing in under a year. The antibodies involved in the test do not interact with the Epstein–Barr virus or any of its antigens.
The monospot test is not recommended for general use by the CDC due to its poor accuracy.
Serology
Serologic tests detect antibodies directed against the Epstein–Barr virus. Immunoglobulin G (IgG), when positive, mainly reflects a past infection, whereas immunoglobulin M (IgM) mainly reflects a current infection. EBV-targeting antibodies can also be classified according to which part of the virus they bind to:
Viral capsid antigen (VCA):
Anti-VCA IgM appears early after infection and usually disappears within 4 to 6 weeks.
Anti-VCA IgG appears in the acute phase of EBV infection, reaches a maximum at 2 to 4 weeks after onset of symptoms and thereafter declines slightly and persists for the rest of a person’s life.
Early antigen (EA)
Anti-EA IgG appears in the acute phase of illness and disappears after 3 to 6 months. It is associated with having an active infection. Yet, 20% of people may have antibodies against EA for years despite having no other sign of infection.
EBV nuclear antigen (EBNA)
Antibody to EBNA slowly appears 2 to 4 months after the onset of symptoms and persists for the rest of a person’s life.
When negative, these tests are more accurate than the heterophile antibody test in ruling out infectious mononucleosis. When positive, they feature similar specificity to the heterophile antibody test. Therefore, these tests are useful for diagnosing infectious mononucleosis in people with highly suggestive symptoms and a negative heterophile antibody test.
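As a toy illustration (not a clinical tool), the interpretation rules above can be sketched as a small function. The function name and the simplified three-marker panel (anti-VCA IgM, anti-VCA IgG, anti-EBNA IgG) are assumptions for the example; real panels also weigh anti-EA titers and clinical context:

```python
def interpret_ebv_serology(vca_igm: bool, vca_igg: bool, ebna_igg: bool) -> str:
    """Classify EBV infection stage from a simplified antibody panel.

    Illustrative only: follows the rough pattern described above, where
    anti-VCA IgM marks early infection, anti-EBNA antibodies appear only
    months after onset, and anti-VCA IgG persists for life.
    """
    if not (vca_igm or vca_igg or ebna_igg):
        # No EBV antibodies at all: no evidence of prior infection.
        return "susceptible (no prior infection)"
    if vca_igm and not ebna_igg:
        # IgM present but EBNA not yet developed: consistent with acute infection.
        return "acute infection"
    if vca_igg and ebna_igg and not vca_igm:
        # EBNA and persistent IgG without IgM: consistent with past infection.
        return "past infection"
    return "indeterminate (repeat testing or clinical correlation)"
```

For example, a panel that is positive only for anti-VCA IgG and anti-EBNA IgG would be classified as a past infection under this sketch.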
Other tests
Elevated hepatic transaminase levels are highly suggestive of infectious mononucleosis, occurring in up to 50% of people.
By blood film, one diagnostic criterion for infectious mononucleosis is the presence of 50% lymphocytes with at least 10% reactive lymphocytes (large, irregular nuclei), while the person also has fever, pharyngitis, and swollen lymph nodes. The reactive lymphocytes resembled monocytes when they were first discovered, thus the term "mononucleosis" was coined.
A fibrin ring granuloma may be present in the liver or bone marrow.
Differential diagnosis
About 10% of people who present a clinical picture of infectious mononucleosis do not have an acute Epstein–Barr-virus infection. A differential diagnosis of acute infectious mononucleosis needs to take into consideration acute cytomegalovirus infection and Toxoplasma gondii infections. Because their management is much the same, it is not always helpful, or possible, to distinguish between Epstein–Barr-virus mononucleosis and cytomegalovirus infection. However, in pregnant women, differentiation of mononucleosis from toxoplasmosis is important, since it is associated with significant consequences for the fetus.
Acute HIV infection can produce signs similar to those of infectious mononucleosis, and tests should be performed in pregnant women for the same reason as toxoplasmosis.
People with infectious mononucleosis are sometimes misdiagnosed with a streptococcal pharyngitis (because of the symptoms of fever, pharyngitis and adenopathy) and are given antibiotics such as ampicillin or amoxicillin as treatment.
Other conditions from which to distinguish infectious mononucleosis include leukemia, tonsillitis, diphtheria, common cold and influenza (flu).
Treatment
Infectious mononucleosis is generally self-limiting, so only symptomatic or supportive treatments are used. The need for rest and return to usual activities after the acute phase of the infection may reasonably be based on the person's general energy levels. Nevertheless, in an effort to decrease the risk of splenic rupture, experts advise avoidance of contact sports and other heavy physical activity, especially when involving increased abdominal pressure or the Valsalva maneuver (as in rowing or weight training), for at least the first 3–4 weeks of illness or until enlargement of the spleen has resolved, as determined by a treating physician.
Medications
Paracetamol (acetaminophen) and NSAIDs, such as ibuprofen, may be used to reduce fever and pain. Prednisone, a corticosteroid, while used to try to reduce throat pain or enlarged tonsils, remains controversial due to the lack of evidence that it is effective and the potential for side effects. Intravenous corticosteroids, usually hydrocortisone or dexamethasone, are not recommended for routine use but may be useful if there is a risk of airway obstruction, a very low platelet count, or hemolytic anemia.
Antiviral agents act by inhibiting viral DNA replication. There is little evidence to support the use of antivirals such as aciclovir and valacyclovir although they may reduce initial viral shedding. Antivirals are expensive, risk causing resistance to antiviral agents, and (in 1% to 10% of cases) can cause unpleasant side effects. Although antivirals are not recommended for people with simple infectious mononucleosis, they may be useful (in conjunction with steroids) in the management of severe EBV manifestations, such as EBV meningitis, peripheral neuritis, hepatitis, or hematologic complications.
Although antibiotics exert no antiviral action they may be indicated to treat bacterial secondary infections of the throat, such as with streptococcus (strep throat). However, ampicillin and amoxicillin are not recommended during acute Epstein–Barr virus infection as a diffuse rash may develop.
Observation
Splenomegaly is a common symptom of infectious mononucleosis and health care providers may consider using abdominal ultrasonography to get insight into the enlargement of a person's spleen. However, because spleen size varies greatly, ultrasonography is not a valid technique for assessing spleen enlargement and should not be used in typical circumstances or to make routine decisions about fitness for playing sports.
Prognosis
Serious complications are uncommon, occurring in less than 5% of cases:
CNS complications include meningitis, encephalitis, hemiplegia, Guillain–Barré syndrome, and transverse myelitis. Prior infectious mononucleosis has been linked to the development of multiple sclerosis.
Hematologic: Hemolytic anemia (direct Coombs test is positive) and various cytopenias, and bleeding (caused by thrombocytopenia) can occur.
Mild jaundice
Hepatitis with the Epstein–Barr virus is rare.
Upper airway obstruction from tonsillar hypertrophy is rare.
A fulminant disease course in immunocompromised people is rare.
Splenic rupture is rare.
Myocarditis and pericarditis are rare.
Postural orthostatic tachycardia syndrome
Myalgic encephalomyelitis/chronic fatigue syndrome
Cancers associated with the Epstein–Barr virus include Burkitt's lymphoma, Hodgkin's lymphoma and lymphomas in general as well as nasopharyngeal and gastric carcinoma.
Hemophagocytic lymphohistiocytosis
Once the acute symptoms of an initial infection disappear, they often do not return. But once infected, the person carries the virus for the rest of their life. The virus typically lives dormant in B lymphocytes. Independent infections of mononucleosis may be contracted multiple times, regardless of whether the person is already carrying the virus dormant. Periodically, the virus can reactivate, during which time the person is again infectious, but usually without any symptoms of illness. Usually, a person with IM has few, if any, further symptoms or problems from the latent B lymphocyte infection. However, in susceptible hosts under the appropriate environmental stressors, the virus can reactivate and cause vague physical symptoms (or may be subclinical), and during this phase, the virus can spread to others.
History
The characteristic symptomatology of infectious mononucleosis does not appear to have been reported until the late nineteenth century. In 1885, the renowned Russian pediatrician Nil Filatov reported an infectious process he called "idiopathic adenitis" exhibiting symptoms that correspond to infectious mononucleosis, and in 1889 a German balneologist and pediatrician, Emil Pfeiffer, independently reported similar cases (some of lesser severity) that tended to cluster in families, for which he coined the term Drüsenfieber ("glandular fever").
The word mononucleosis has several senses, but today it usually is used in the sense of infectious mononucleosis, which is caused by EBV.
Before the 1920s, infectious mononucleosis was poorly characterized, and no laboratory tests existed to determine whether people were infected. Only a few outbreaks had been reported before then, including one in 1896 that devastated a community in Ohio. Epidemics continued to appear sporadically, including an outbreak in the Falcon Islands in which 87 people were infected, as well as outbreaks in nurseries and boarding schools and at the U.S. Naval Base in Coronado, California, where hundreds were infected by the virus.
The term "infectious mononucleosis" was coined in 1920 by Thomas Peck Sprunt and Frank Alexander Evans in a classic clinical description of the disease published in the Bulletin of the Johns Hopkins Hospital, entitled "Mononuclear leukocytosis in reaction to acute infection (infectious mononucleosis)". A lab test for infectious mononucleosis was developed in 1931 by Yale School of Public Health Professor John Rodman Paul and Walls Willard Bunnell based on their discovery of heterophile antibodies in the sera of persons with the disease. The Paul-Bunnell Test or PBT was later replaced by the heterophile antibody test.
The Epstein–Barr virus was first identified in Burkitt's lymphoma cells by Michael Anthony Epstein and Yvonne Barr at the University of Bristol in 1964. The link with infectious mononucleosis was uncovered in 1967 by Werner and Gertrude Henle at the Children's Hospital of Philadelphia, after a laboratory technician handling the virus contracted the disease: comparison of serum samples collected from the technician before and after the onset revealed development of antibodies to the virus.
Yale School of Public Health epidemiologist Alfred E. Evans confirmed through testing that mononucleosis was transmitted mainly through kissing, leading to it being referred to colloquially as "the kissing disease".
Gibbon
Gibbons are apes in the family Hylobatidae. The family historically contained one genus, but now is split into four extant genera and 20 species. Gibbons live in subtropical and tropical forests from eastern Bangladesh to Northeast India to southern China and Indonesia (including the islands of Sumatra, Borneo and Java).
Also called the lesser apes, gibbons differ from the great apes (chimpanzees, gorillas, orangutans and humans) in being smaller, exhibiting low sexual dimorphism, and not making nests. Like all of the apes, gibbons are tailless. Unlike most of the great apes, gibbons frequently form long-term pair bonds. Their primary mode of locomotion, brachiation, involves swinging from branch to branch for distances up to , at speeds as fast as . They can also make leaps up to , and walk bipedally with their arms raised for balance. They are the fastest of all tree-dwelling, nonflying mammals.
Depending on the species and sex, gibbons' fur coloration varies from dark- to light-brown shades, and any shade between black and white, though a completely "white" gibbon is rare.
Etymology
The English word "gibbon" is a reborrowing from French and may originally derive from an Orang Asli word.
Evolutionary history
Whole genome molecular dating analyses indicate that the gibbon lineage diverged from that of great apes around 16.8 million years ago (Mya) (95% confidence interval: 15.9–17.6 Mya; given a divergence of 29 Mya from Old World monkeys). Adaptive divergence associated with chromosomal rearrangements led to rapid radiation of the four genera 5–7 Mya. Each genus comprises a distinct, well-delineated lineage, but the sequence and timing of divergences among these genera has been hard to resolve, even with whole genome data, due to radiative speciations and extensive incomplete lineage sorting. An analysis based on morphology suggests that the four genera are ordered as (Symphalangus, (Nomascus, (Hoolock, Hylobates))).
A coalescent-based species tree analysis of genome-scale datasets suggests a phylogeny for the four genera ordered as (Hylobates, (Nomascus, (Hoolock, Symphalangus))).
At the species level, estimates from mitochondrial DNA genome analyses suggest that Hylobates pileatus diverged from H. lar and H. agilis around 3.9 Mya, and H. lar and H. agilis separated around 3.3 Mya. Whole genome analysis suggests divergence of H. pileatus from H. moloch 1.5–3.0 Mya. The extinct Bunopithecus sericus is a gibbon or gibbon-like ape, which until recently, was thought to be closely related to the hoolock gibbons.
Taxonomy
The family is divided into four genera based on their diploid chromosome number: Hylobates (44), Hoolock (38), Nomascus (52), and Symphalangus (50). Also, three extinct genera currently are recognised: Bunopithecus, Junzi, and Yuanmoupithecus.
Family Hylobatidae: gibbons
Genus Hoolock
Western hoolock gibbon, H. hoolock
Eastern hoolock gibbon, H. leuconedys
Skywalker hoolock gibbon, H. tianxing
Genus Hylobates: dwarf gibbons
Lar gibbon or white-handed gibbon, H. lar
Bornean white-bearded gibbon, H. albibarbis
Agile gibbon or black-handed gibbon, H. agilis
Western grey gibbon or Abbott's grey gibbon, H. abbotti
Eastern grey gibbon or northern grey gibbon, H. funereus
Müller's gibbon or southern grey gibbon, H. muelleri
Silvery gibbon, H. moloch
Pileated gibbon or capped gibbon, H. pileatus
Kloss's gibbon, Mentawai gibbon or bilou, H. klossii
Genus Symphalangus
Siamang, S. syndactylus
Genus Nomascus: crested gibbons
Northern buffed-cheeked gibbon, N. annamensis
Concolor or black crested gibbon, N. concolor
Eastern black crested gibbon or Cao Vit black crested gibbon, N. nasutus
Hainan black crested gibbon, N. hainanus
Northern white-cheeked gibbon, N. leucogenys
Southern white-cheeked gibbon, N. siki
Yellow-cheeked gibbon, N. gabriellae
Extinct genera
Genus Bunopithecus
Bunopithecus sericus
Genus Junzi
Junzi imperialis
Genus Yuanmoupithecus
Yuanmoupithecus xiaoyuan
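Because the four extant genera are keyed on diploid chromosome number, that classification can be sketched as a simple lookup. The function name and the "unknown" fallback are illustrative assumptions, not established terminology:

```python
# Diploid chromosome numbers per extant gibbon genus, as given above.
GENUS_BY_DIPLOID_NUMBER = {
    44: "Hylobates",
    38: "Hoolock",
    52: "Nomascus",
    50: "Symphalangus",
}

def genus_for_karyotype(count: int) -> str:
    """Return the gibbon genus matching a diploid chromosome count."""
    return GENUS_BY_DIPLOID_NUMBER.get(count, "unknown")
```

For instance, a karyotype of 50 chromosomes points to Symphalangus, the genus of the siamang.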
Hybrids
Many gibbons are hard to identify based on fur coloration, so are identified either by song or genetics. These morphological ambiguities have led to hybrids in zoos. Zoos often receive gibbons of unknown origin, so they rely on morphological variation or labels that are impossible to verify to assign species and subspecies names, so separate species of gibbons commonly are misidentified and housed together. Interspecific hybrids, within a genus, are also suspected to occur in wild gibbons where their ranges overlap. No records exist, however, of fertile hybrids between different gibbon genera, either in the wild or in captivity.
Description
One unique aspect of a gibbon's anatomy is the wrist, which functions something like a ball-and-socket joint, allowing for biaxial movement. This greatly reduces the amount of energy needed in the upper arm and torso, while also reducing stress on the shoulder joint. Gibbons also have long hands and feet, with a deep cleft between the first and second digits of their hands. Their fur is usually black, gray, or brownish, often with white markings on hands, feet and face. Some species such as the siamang have an enlarged throat sac, which inflates and serves as a resonating chamber when the animals call. This structure can become quite large in some species, sometimes equaling the size of the animal's head. Their voices are much more powerful than that of any human singer, although they are at best half a human's height.
Gibbon skulls and teeth resemble those of the great apes, and their noses are similar to those of all catarrhine primates. The dental formula is 2.1.2.3/2.1.2.3. The siamang, the largest of the gibbons, is distinguished by having two fingers on each foot stuck together, hence the generic and species names Symphalangus and syndactylus.
Behavior
Like all primates, gibbons are social animals. They are strongly territorial, and defend their boundaries with vigorous visual and vocal displays. The vocal element, which can often be heard for distances up to , consists of a duet between a mated pair, with their young sometimes joining in. In most species, males and some females sing solos to attract mates, as well as advertise their territories. The song can be used to identify not only which species of gibbon is singing, but also the area from which it comes.
Gibbons often retain the same mate for life, although they do not always remain sexually monogamous. In addition to extra-pair copulations, pair-bonded gibbons occasionally "divorce". About 10% of gibbon groups studied in the wild contained more than two adults. In these cases, the limitation of food availability on group size may be relaxed, allowing more adults to congregate together without a significant increase in competition.
Gibbons are among nature's best brachiators. Their ball-and-socket wrist joints allow them unmatched speed and accuracy when swinging through trees. Nonetheless, their mode of transportation can lead to hazards when a branch breaks or a hand slips, and researchers estimate that the majority of gibbons suffer bone fractures one or more times during their lifetimes. They are the fastest of all tree-dwelling, nonflying mammals. On the ground, gibbons tend to walk bipedally, and their Achilles tendon morphology is more similar to that of humans than that of any other ape.
Diet
Gibbons' diets are about 60% fruit-based, but they also consume twigs, leaves, insects, flowers, and occasionally birds' eggs. Levels of frugivory vary between populations and species of gibbons and are best predicted by local fruit availability. The most folivorous gibbon species come from the genus Nomascus, whose higher reliance on leaves is thought to be because they live in high altitude seasonal habitats that lack year-round abundant fruits.
Genetics
Gibbons were the first apes to diverge from the common ancestor of humans and apes about 16.8 Mya. With a genome that has a 96% similarity to humans, the gibbon has a role as a bridge between Old World monkeys, such as macaques, and the great apes. According to a study that mapped synteny (genes occurring on the same chromosome) disruptions in the gibbon and human genome, humans and great apes are part of the same superfamily (Hominoidea) with gibbons. The karyotype of gibbons, however, diverged in a much more rapid fashion from the common hominoid ancestor than other apes.
The common ancestor of hominoids is shown to have a minimum of 24 major chromosomal rearrangements from the presumed gibbon ancestor's karyotype. Reaching the common gibbon ancestor's karyotype from today's various living species of gibbons will require up to 28 additional rearrangements. Adding up, this implies that at least 52 major chromosomal rearrangements are needed to compare the common hominoid ancestor to today's gibbons. No common specific sequence element in the independent rearrangements was found, while 46% of the gibbon-human synteny breakpoints occur in segmental duplication regions. This is an indication that these major differences in humans and gibbons could have had a common source of plasticity or change. Researchers view this unusually high rate of chromosomal rearrangement that is specific in small apes such as gibbons could potentially be due to factors that increase the rate of chromosomal breakage or factors that allow derivative chromosomes to be fixed in a homozygous state while mostly lost in other mammals.
The whole genome of the gibbons in Southeast Asia was first sequenced in 2014 by the German Primate Center, including Christian Roos, Markus Brameier, and Lutz Walter, along with other international researchers. One of the gibbons that had its genome sequenced is a white-cheeked gibbon (Nomascus leucogenys, NLE) named Asia. The team found that a jumping DNA element named LAVA transposon (also called gibbon-specific retrotransposon) is unique to the gibbon genome apart from humans and the great apes. The LAVA transposon increases mutation rate, thus is supposed to have contributed to the rapid and greater change in gibbons in comparison to their close relatives, which is critical for evolutionary development. The very high rate of chromosomal disorder and rearrangements (such as duplications, deletions or inversions of large stretches of DNA) due to the moving of this large DNA segment is one of the key features that are unique to the gibbon genome.
A special feature of the LAVA transposon is that it positioned itself precisely between genes that are involved in chromosome segregation and distribution during cell division, which results in a premature termination state leading to an alteration in transcription. This incorporation of the jumping gene near genes involved in chromosome replication is thought to make the rearrangement in the genome even more likely, leading to a greater diversity within the gibbon genera.
In addition, some characteristic genes in the gibbon genome had gone through a positive selection and are suggested to give rise to specific anatomical features for gibbons to adapt to their new environment. One of them is TBX5, which is a gene that is required for the development of the front extremities or forelimbs such as long arms. The other is COL1A1, which is responsible for the development of collagen, a protein that is directly involved with the forming of connective tissues, bone, and cartilage. This gene is thought to have a role in gibbons' stronger muscles.
Researchers have found a coincidence between major environmental changes in Southeast Asia about 5 Mya that caused a cyclical dynamic of expansions and contractions of their forest habitat, an instance of radiation experienced by the gibbon genera. This may have led to the development of a suite of physical characteristics, distinct from their great ape relatives, to adapt to their habitat of dense, canopy forest.
These crucial findings in genetics have contributed to the use of gibbons as a genetic model for chromosome breakage and fusion, which is a type of translocation mutation. The unusually high number of structural changes in the DNA and chromosomal rearrangements could lead to problematic consequences in some species. Gibbons, however, not only seemed to be free from problems but let the change help them effectively adapt to their environment. Thus, gibbons are organisms on which genetics research could be focused to broaden the implications to human diseases related to chromosomal changes, such as cancer, including chronic myeloid leukemia.
Conservation status
Most species are either endangered or critically endangered (the sole exception being H. leuconedys, which is vulnerable), primarily due to degradation or loss of their forest habitats. On the island of Phuket in Thailand, a volunteer-based Gibbon Rehabilitation Center rescues gibbons that were kept in captivity and releases them back into the wild. The Kalaweit Project also runs gibbon rehabilitation centers on Borneo and Sumatra.
The IUCN Species Survival Commission Primate Specialist Group announced 2015 to be the Year of the Gibbon and initiated events to be held around the world in zoos to promote awareness of the status of gibbons.
In traditional Chinese culture
Sinologist Robert van Gulik concluded that gibbons were widespread in central and southern China until at least the Song dynasty. Based on an analysis of references to primates in Chinese poetry and other literature, and of their portrayal in Chinese paintings, he argued that the Chinese word yuán (猿) referred specifically to gibbons until they were extirpated throughout most of the country by habitat destruction around the 14th century. In modern usage, however, yuán is a generic word for ape. Early Chinese writers viewed the "noble" gibbons, gracefully moving high in the treetops, as the "gentlemen" (jūnzǐ, 君子) of the forest, in contrast to the greedy macaques, attracted by human food. The Taoists ascribed occult properties to gibbons, believing them to be able to live for several hundred years and to turn into humans.
Gibbon figurines dating from as early as the fourth to third centuries BCE (the Zhou dynasty) have been found in China. Later, gibbons became a popular subject for Chinese painters, especially during the Song dynasty and early Yuan dynasty, when Yì Yuánjí and Mùqī Fǎcháng excelled in painting these apes. Through Chinese cultural influence, the Zen motif of the "gibbon grasping at the reflection of the moon in the water" became popular in Japanese art as well, though gibbons have never occurred naturally in Japan.
| Biology and health sciences | Primates | null |
105849 | https://en.wikipedia.org/wiki/Inter-city%20rail | Inter-city rail | Inter-city rail services are express passenger train services that connect cities over longer distances than commuter or regional trains. They are neither short-distance commuter trains serving one city area nor slow regional trains stopping at all stations to cover local journeys. An inter-city train is typically an express train with limited stops and comfortable carriages, designed to serve long-distance travel.
Inter-city rail sometimes provides international services. This is most prevalent in Europe, where some 50 countries lie within a 10,180,000-square-kilometre (3,930,000-square-mile) area. Eurostar and EuroCity are examples. In many European countries, the word InterCity or Inter-City is an official brand name for a network of regular-interval, relatively long-distance train services that meet certain criteria of speed and comfort. This use of the term appeared in the United Kingdom in the 1960s and has been widely imitated.
Speed
The speeds of inter-city rail lines are quite diverse, ranging from in mountainous areas or on undeveloped tracks to on newly constructed or improved tracks. As a result, inter-city rail may or may not fall into the category of higher-speed rail or high-speed rail. Ideally, the average speed of an inter-city rail service would be faster than in order to be competitive with cars, buses, and other modes of transport.
Distance of inter-city rail
50–100 km
The distance of an inter-city rail journey is usually at least , although in many large metropolitan areas commuter and regional services cover equal or longer distances. Examples of countries with relatively short intercity rail distances with service patterns comparable to regional rail include Belgium, Israel, The Netherlands, and Switzerland.
100–500 km
A distance of is a common journey distance for inter-city rail in many countries. In many cases, railway travel is most competitive at about two to three hours journey time. Inter-city rail can often compete with highways and short-haul air travel for journeys of this distance. Most major intercity railway routes in Europe, such as London to Birmingham, Paris to Lyon, and Lisbon to Porto cover this range of distances.
500–1,000 km
For journeys of , the role of inter-city rail is often taken by faster air travel. The development of high-speed rail in some countries has increased rail's share of such longer-distance journeys. The Paris–Marseille TGV ( in 3 hours) and the Tokyo–Aomori Shinkansen ( in 2 hours 59 minutes) are examples. On conventional, non-high-speed railways, overnight trains are common for this distance.
1,000 km or more
In some countries with a dense rail network, a large territory, or less air and car transport, such as China, India, and Russia, overnight long-distance train services are provided and widely used.
In many other countries, such long-distance rail journeys have largely been replaced by air travel, except where tourism or hobbyist interest, luxury train travel, or a significant cost advantage sustains them. Amtrak's long-distance services in the United States, Via Rail's Canadian service in Canada, and the Indian Pacific in Australia are examples.
Faster high-speed rail of at least per hour, such as the Beijing–Shanghai High-Speed Railway in China ( in 5 hours) and Tokyo-Sapporo in the proposed Hokkaido Shinkansen in Japan ( in 4 hours), may play a significant role in long-distance travel in the future.
Inter-city rail by country
Africa
Railways in Africa are still developing or not practically used for passenger purposes in many countries, but the following countries have inter-city services between major cities:
Algeria: SNTF
Egypt: Egyptian National Railways
Kenya: Mombasa-Nairobi Standard Gauge Railway
Morocco: ONCF (in French - Office National des Chemins de Fer du Maroc, National Office for Railways of Morocco)
South Africa: Shosholoza Meyl
Tunisia: Tunisian Railways (SNCFT)
Asia
East Asia
China
Trains run by China Railway link almost every town and city in the People's Republic of China, including Beijing, Guangzhou, Shanghai, Shenzhen, and Xi'an, and onwards from Shenzhen across the border to Kowloon, in Hong Kong. New high-speed lines capable of operation are being constructed, and many conventional lines have also been upgraded to operation. Currently there are seven high-speed inter-city lines in China, with up to 21 planned; they are operated independently of the often parallel high-speed rail lines.
Japan
Japan has six main regional passenger railway companies, known collectively as Japan Railways Group or simply as JR. Five JR companies operate the "bullet trains" on very fast and frequent Shinkansen lines that link all the larger cities, including Tokyo, Yokohama, Nagoya, Kyoto, Osaka, Hiroshima, Fukuoka and many more.
Many other cities are covered by a network of JR's limited-express inter-city trains on narrow-gauge lines. Major cities are linked by convenient services running hourly or more frequently. In addition to the JR Group, Japan has major private rail operators, such as Kintetsu, Meitetsu, Tobu Railway, and Odakyu Electric Railway, that operate "limited express" inter-city services.
Hong Kong
Inter-city railway services crossing the Hong Kong-China border (often known as through trains) are jointly operated by Hong Kong's MTR Corporation Limited and the Ministry of Railways of the People's Republic of China. Currently, Hung Hom station is the only station in the territory where passengers can catch these cross-border trains. Passengers are required to go through immigration and customs inspections of Hong Kong before boarding a cross-border train or alighting from such a train. There are currently three cross-border train services on the conventional line:
Between Hong Kong and Beijing (Beijing–Kowloon through train)
Between Hong Kong and Shanghai (Shanghai–Kowloon through train)
Between Hong Kong and Guangzhou (Guangzhou–Kowloon through train)
A new border-crossing service, the Guangzhou–Shenzhen–Hong Kong Express Rail Link, was approved and granted HKD 6.6 billion in funding by the Legislative Council's Finance Committee. The line opened in 2018 with a new station, West Kowloon Terminus, in the city centre.
Taiwan
Taiwan's coastline is connected by frequent inter-city train services run by the Taiwan Railway Administration. Taiwan High Speed Rail, opened in 2007, covers the most populated west-coast corridor.
Reserved-seat express services (Chinese: 對號列車) include the Chu-kuang Express (莒光號) and the Tze-chiang Limited Express (自強號).
South Korea
Almost every major town and city in South Korea is linked by railway, run by Korail. ITX-Saemaeul trains operate on most main lines, comparable to a Japanese limited express or German InterCity. The Mugunghwa-ho is the most common and most popular type of inter-city rail service, comparable to a German Regional-Express. In addition, Seoul and Busan are linked by a high-speed train line known as KTX, which was built using French TGV technology.
South Asia
Bangladesh
India
India's inter-city trains are run by Indian Railways. With of rail routes and 7,308 stations, the railway network in India is the third-largest in the world (after Russia and China) and the largest in the world in terms of passenger kilometres. The Vande Bharat Express, Gatimaan Express, Tejas Express, Tejas-Rajdhani Express, Rajdhani Express, Shatabdi Express, Jan Shatabdi Express and Duronto Express are the fastest inter-city services in India; of these, the Vande Bharat is the fastest one. All long-distance journeys generally require a reservation, although unreserved travel is allowed in some trains.
Pakistan
Sri Lanka
Southeast Asia
Burma
Cambodia
There is only one train service in Cambodia, from Phnom Penh to Sihanoukville, stopping at Doun Kaev (Takeo) and Kampot.
Indonesia
In Indonesia, PT Kereta Api operates inter-city services between some of the country's major cities, such as Jakarta, Bandung, Semarang, Yogyakarta, Surakarta, Surabaya, Medan, Padang, and Palembang. In the Jakarta metropolitan area (Jabodetabek), KRL Jabotabek operates inter-city and commuter services. Indonesia also operates Southeast Asia's first high-speed rail line, from Jakarta to Bandung.
Laos
In recent years construction has started on a China-funded higher-speed railway link, the Boten–Vientiane railway, commonly referred to as the China-Laos Railway. A fully electrified higher-speed railway line, it is part of a long-term goal of connecting China with the rest of Southeast Asia. The line runs from Boten near the China-Laos border to Vientiane, the capital of Laos, using CRRC high/higher-speed EMU trains.
Malaysia
Keretapi Tanah Melayu (Malayan Railways) operates loco-hauled express trains called KTM Intercity along Peninsular Malaysia and into Singapore. At the Malaysia–Thailand border, connections to State Railway of Thailand trains are available. KTM Intercity trains are diesel-powered and run on a single-track system. The rail track is gradually being duplicated and electrified. On the completed Central to Northern section (border), KTM runs the higher-speed Electric Train Service (ETS).
Philippines
As of February 2020, the Philippine National Railways does not have a regular inter-city rail service, although the agency is planning to rebuild its railway lines. Prior to the 1970s, the main island of Luzon had a relatively expansive narrow-gauge railway network, but government prioritization of highway construction and the effects of multiple natural disasters gradually led to the decline and abandonment of most inter-city rail services. Until the 2000s, PNR had two inter-city rail services: the Bicol Express and the Mayon Limited. The Bicol Express leaves Manila and passes through Pasay and Muntinlupa and the provinces of Laguna, Quezon, and Camarines Sur before arriving at Naga; the trip takes 10 hours. The Mayon Limited connects Manila and Ligao in hours. The Philippine government is planning to revive inter-city rail with projects such as the PNR South Long Haul, which aims to reconstruct the railway in Southern Luzon.
Thailand
Thailand has a sizable meter-gauge intercity rail network radiating outwards from Bangkok, transporting around 60 million passengers every year. Construction is underway to connect Bangkok with Nakhon Ratchasima using a dedicated high speed rail line.
Vietnam
Trains in Vietnam, run by Vietnam Railways, link Hanoi, Hué, Da Nang, Nha Trang, and Ho Chi Minh City.
Southwest Asia
Iran
Israel
Israel Railways operates inter-city services between all four major metropolitan areas of Israel: Tel Aviv, Jerusalem, Be'er Sheva, and Haifa. However, because Israel is geographically small, most railway services follow a more suburban pattern, with many short stops at stations between the major city centres.
Europe
Western and Central Europe
In Europe, many long-distance inter-city trains are operated under the InterCity (often simply IC) brand. InterCity (or, initially, "Inter-City" with a hyphen) was first conceived as a brand name by British Rail for the launch of its electrification of the major part of the West Coast Main Line in 1966, which brought new express services between London and the major cities of Manchester, Birmingham and Liverpool. It later became the name of one of British Rail's new business sectors in the 1980s and was used to describe the whole network of main-line passenger routes in Great Britain, but it went out of official use following privatisation. The introduction of the British Rail Class 43 (HST) helped InterCity become a familiar brand in the 1970s.
The principal network of international express trains in continental Europe is called EuroCity, even though some InterCity trains also cross borders.
High-speed railways have relatively few stops. The German high-speed train service was named InterCityExpress, indicating its evolution from older InterCity trains. Other high-speed lines include the TGV (France), AVE (Spain), Treno Alta Velocità (Italy), Eurostar (United Kingdom–France and Belgium), Thalys (Netherlands–Belgium–Germany and France), Lyria (France-Switzerland), and Railjet (Germany-Austria–Czechia/Hungary).
Great Britain
In Great Britain, the inter-city rail links are now operated by a number of private companies as well as subsidiaries of continental state-owned railways; operators include Avanti West Coast, LNER, EMR, CrossCountry, TransPennine Express, Greater Anglia and GWR. Ireland's inter-city rail network is run by Iarnród Éireann, and Northern Ireland's by Northern Ireland Railways.
Italy
With the introduction of high-speed trains, intercity trains are now limited to a few services per day on mainline and regional tracks.
The daytime services (InterCity IC), while not frequent and limited to one or two trains per route, are essential in providing access to cities and towns off the railway's mainline network. The main routes are Trieste to Rome (stopping at Venice, Bologna, Prato, Florence and Arezzo), Milan to Rome (stopping at Genoa, La Spezia, Pisa and Livorno / stopping at Parma, Modena, Bologna, Prato, Florence and Arezzo), Bologna to Lecce (stopping at Rimini, Ancona, Pescara, Bari and Brindisi) and Rome to Reggio di Calabria (stopping at Latina and Naples). In addition, the Intercity trains provide a more economical means of long-distance rail travel within Italy.
The night trains (Intercity Notte, ICN) have sleeper compartments and washrooms, but no showers on board. Main routes are Rome to Bolzano/Bozen (calling at Florence, Bologna, Verona, Rovereto and Trento), Milan to Lecce (calling at Piacenza, Parma, Reggio Emilia, Modena, Bologna, Faenza, Forlì, Cesena, Rimini, Ancona, Pescara, Bari and Brindisi), Turin to Lecce (calling at Alessandria, Voghera, Piacenza, Parma, Bologna, Rimini, Pescara, Termoli, San Severo, Foggia, Barletta, Bisceglie, Molfetta, Bari, Monopoli, Fasano, Ostuni and Brindisi) and Reggio di Calabria to Turin (calling at Naples, Rome, Livorno, La Spezia and Genoa). Most portions of these ICN services run during the night; since most take 10 to 15 hours for a one-way journey, their daytime portions provide extra connections that complement the InterCity services.
Central and Eastern Europe
Poland
The Polish State Railways (PKP), a state-owned corporate group, is the main provider of railway services. The PKP group holds an almost unrivaled monopoly over rail services in Poland since it is both supported and partly funded by the national government.
As of 2018, foreign services operate on the Polish rail network. These include EuroCity and EuroNight trains between Western and Eastern European destinations, such as the EN 440/441 from Berlin via Warsaw to Moscow, operated with a Talgo trainset by Russian Railways.
In 2019, a new Nightjet train from Vienna to Berlin via Ostrava (Czech Republic) and Wrocław (Poland) entered service.
Russia
Russia has a dense network of long-distance railways all over its vast territory, the longest and most famous being the Trans-Siberian Railway from Moscow to Vladivostok. Long-distance train routes of more than are common, with many trips taking two or three days. Speed is relatively low: trains average .
North America
Canada
Canada's inter-city trains are mostly run by Via Rail, a Canadian crown corporation mandated to operate inter-city passenger rail service in Canada. The majority of its services connect major cities in the most populous part of the country, the Quebec City–Windsor Corridor, straddling the provinces of Ontario and Quebec. It also operates long-distance trains to western Canada and the Maritimes on the Canadian and Ocean lines, and smaller trains to more remote areas of Canada. Much like the United States, Canada had a larger inter-city rail network prior to the 1970s; certain major cities such as Calgary and Regina lack connections to the extant Via Rail network, and passenger rail usage outside the Quebec City–Windsor Corridor is infrequent and geared towards the tourism market.
International trains, run jointly by Amtrak and Via Rail, connect New York City with Toronto. Amtrak also operates the Adirondack between New York City and Montreal, and the Amtrak Cascades service linking Vancouver and Seattle. In addition, the White Pass and Yukon Route links Skagway and Whitehorse on an isolated northern route.
Other inter-city passenger rail operators include the Ontario Northland Railway, which operates passenger services between Cochrane and Moosonee in rural northern Ontario and luxury train operators such as the Royal Canadian Pacific and Rocky Mountaineer, which operate rail tours in Western Canada.
Mexico
In Mexico, the federal government discontinued almost all scheduled inter-city passenger trains in June 2001. Ferromex operates trains on three routes: Chihuahua City to Los Mochis, Torreón to Felipe Pescador, and Guadalajara to Amatitán. Mexican President Enrique Peña Nieto has proposed intercity trains, including from Mexico City to Toluca (construction began July 7, 2014), the Peninsular train from Yucatán to Riviera Maya, and the Mexico-Querétaro high-speed train from Puebla to Tlaxcala and Mexico City with future expansion to Guadalajara. In recent years, passenger trains have seen a revival, with the construction of the tourist-oriented Tren Maya route traversing the Yucatan Peninsula.
United States
There was a dense system of inter-city railways in the United States in the late 19th and early 20th centuries. After the decline of passenger railroads in North America in the 1960s, the inter-city lines decreased greatly and today the national system is far less dense. The most heavily used routes with the greatest ridership and schedule frequencies are in the Northeastern United States on Amtrak's Northeast Corridor. About one in every three users of mass transit in the United States and two-thirds of the nation's rail riders live in New York City. The two busiest passenger rail stations in the United States are Penn Station and Grand Central Terminal, both in Manhattan, New York City. Passenger rail outside the Northeast, Northwest, California, and the Chicago metropolitan area is infrequent and rarely used relative to networks in Europe and Japan.
Passenger lines in most of the United States are operated by the quasi-public corporation Amtrak. The separate Alaska Railroad, which is also government-owned, runs passenger trains in Alaska, and the privately owned Brightline rail service operates in Florida. The California High-Speed Rail system began construction in 2015 and aims to connect major job centers in California.
Multiple new rail corridors have been identified for private development throughout the country. These include the Brightline West corridor from Las Vegas to Los Angeles, California, the Texas Central Railway between Dallas and Houston in Texas, and others.
Oceania
Australia
In Australia, the national interstate network operated by Journey Beyond connects all mainland Australian capital cities except Canberra. However, it is catered towards the luxury tourism market. NSW TrainLink operates interstate services from Sydney to Canberra, Melbourne and Brisbane. Intrastate inter-city trains that traverse shorter distances are operated by V/Line, NSW TrainLink, Sydney Trains, Queensland Rail and Transwa. The fastest intercity trains in regular service have a top service speed of 160 km/h.
In Australia, electrified interurban commuter railway systems are used to connect urban areas separated by long distances and use heavy-rail equipment:
In New South Wales, Sydney Trains operates an extensive interurban network of four main routes from Sydney. These run to Newcastle and the Central Coast, the Blue Mountains, the Southern Highlands, and the South Coast. NSW TrainLink brands its interurban commuter services as "Intercity".
In Brisbane, QR's City network operates a smaller interurban commuter network of three lines which connect Brisbane to the Gold Coast in the south, Caboolture and the Sunshine Coast in the north and Rosewood in the west.
In Perth, an electric interurban rail line running down the middle of the Kwinana Freeway to serve Mandurah opened on December 23, 2007.
On these systems, services either run as limited-stop expresses in the suburban area or as shuttles terminating where the suburban lines end.
A large-scale non-electric project of four regional lines known as the Regional Fast Rail is operational in Victoria. Current interurban and intercity journeys outside the suburban area are often locomotive-hauled, particularly for longer distance services, due to Victoria's lack of electrification outside of Melbourne.
New Zealand
In New Zealand, there are currently three long-distance passenger services classed as inter-city: the Coastal Pacific, the Northern Explorer, and the TranzAlpine. Their slow average speed is limited by the rugged country traversed, particularly in the middle of the North Island, where the North Island Main Trunk has many sharp curves and steep gradients. Given these speeds, as well as the prioritization of rail transport in New Zealand towards freight, these passenger services primarily cater to the tourist market, similar to long-distance routes in Australia.
Other current commuter passenger services include the Capital Connection, Te Huia, and the Wairarapa Connection. The network of regional and long-distance passenger services that operated until the mid-twentieth century has largely been replaced by air and bus services.
South America
A few countries of South America were once interconnected by international train services, but today they are almost non-existent, with the notable exceptions of Argentina and Chile. Most governments in the continent have favored roads and automobile transportation since the mid-20th century.
Argentina
Argentina has inter-city services on a number of routes, run by Operadora Ferroviaria Sociedad del Estado. Trains in Argentina are experiencing a revival, since the government intends to re-establish long-distance passenger trains between major cities.
Bolivia
Inter-city train services in Bolivia are operated by two train companies, one on the eastern network and one on the western. The western network runs daily trains from Oruro to Tupiza, with both Expreso (fast) and Wara Wara (slow) trains. The eastern rail hub is Santa Cruz de la Sierra, with connections to Puerto Suárez and Villamontes, and international lines to Brazil and Argentina.
Brazil
Brazil's inter-city services operate on two routes: one from Vitória to Belo Horizonte (the Vitória–Minas Railway) and another from Parauapebas to São Luís. A third service, from São Paulo to Americana, was proposed by the São Paulo state government.
Chile
Chile has inter-city services connecting Santiago to Chillán and occasionally to Temuco, run by Empresa de los Ferrocarriles del Estado. The fastest in Chile (and South America) is TerraSur, reaching around .
| Technology | Rail and cable transport | null |
105908 | https://en.wikipedia.org/wiki/Ford%20Mustang | Ford Mustang | The Ford Mustang is a series of American automobiles manufactured by Ford. In continuous production since 1964, the Mustang is the longest-produced Ford car nameplate. Currently in its seventh generation, it is the fifth-best-selling Ford car nameplate. The namesake of the "pony car" automobile segment, the Mustang was developed as a highly styled line of sporty coupes and convertibles derived from existing model lines, initially distinguished by "long hood, short deck" proportions.
Originally predicted to sell 100,000 vehicles yearly, the 1965 Mustang became the most successful vehicle launch since the 1927 Model A. Introduced on April 17, 1964 (16 days after the Plymouth Barracuda), over 400,000 units were sold in its first year; the one-millionth Mustang was sold within two years of its launch. In August 2018, Ford produced the 10-millionth Mustang; matching the first 1965 Mustang, the vehicle was a 2019 Wimbledon White convertible with a V8 engine.
The success of the Mustang launch led to multiple competitors from other American manufacturers, including the Chevrolet Camaro and Pontiac Firebird (1967), AMC Javelin (1968), and Dodge Challenger (1970). It also competed with the Plymouth Barracuda, which was launched around the same time. The Mustang also had an effect on designs of coupes worldwide, leading to the marketing of the Toyota Celica and Ford Capri in the United States (the latter, by Lincoln-Mercury). The Mercury Cougar was launched in 1967 as a unique-bodied higher-trim alternative to the Mustang; during the 1970s, it included more features and was marketed as a personal luxury car.
From 1965 until 2004, the Mustang shared chassis commonality with other Ford model lines, staying rear-wheel-drive throughout its production. From 1965 to 1973, the Mustang was derived from the 1960 Ford Falcon compact. From 1974 until 1978, the Mustang (denoted Mustang II) was a longer-wheelbase version of the Ford Pinto. From 1979 until 2004, the Mustang shared its Fox platform chassis with 14 other Ford vehicles (becoming the final one to use the Fox architecture). Since 2005, Ford has produced two generations of the Mustang, each using a distinct platform unique to the model line.
Through its production, multiple nameplates have been associated with the Ford Mustang series, including GT, Mach 1, Boss 302/429, Cobra (separate from Shelby Cobra), and Bullitt, along with "5.0" fender badging (denoting 4.9 L OHV or 5.0 L DOHC V8 engines).
Name
Executive stylist John Najjar, who was a fan of the World War II P-51 Mustang fighter plane, is credited by Ford with suggesting the name. Najjar co-designed the first prototype of the Ford Mustang known as the "Ford Mustang I" in 1961, working jointly with fellow Ford stylist Philip T. Clark. The Mustang I made its formal debut at the United States Grand Prix in Watkins Glen, New York, on October 7, 1962, where test driver and contemporary Formula One race driver Dan Gurney lapped the track in a demonstration using the second "race" prototype.
An alternative view was that Robert J. Eggert, Ford Division market research manager, first suggested the Mustang name. Eggert, a breeder of quarterhorses, received a birthday present from his wife of the book, The Mustangs by J. Frank Dobie in 1960. Later, the book's title gave him the idea of adding the "Mustang" name for Ford's new concept car. The designer preferred Cougar (early styling bucks can be seen wearing a Cougar grille emblem) or Torino (an advertising campaign using the Torino name was actually prepared), while Henry Ford II wanted T-bird II. As the person responsible for Ford's research on potential names, Eggert added "Mustang" to the list to be tested by focus groups; "Mustang", by a wide margin, came out on top under the heading: "Suitability as Name for the Special Car". The name could not be used in Germany, however, because it was owned by Krupp, which had manufactured trucks between 1951 and 1964 with the name "Mustang". Ford refused to buy the name for about from Krupp at the time. Kreidler, a manufacturer of mopeds, also used the name, so Mustangs were sold in Germany as "T-5s" until December 1978.
First generation (1965)
Lee Iacocca's assistant general manager and chief engineer, Donald N. Frey, was the head engineer for the T-5 project, supervising the overall development of the car in a record 18 months, while Iacocca himself championed the project as Ford Division general manager. The T-5 prototype was a two-seat, mid-engined roadster powered by the German Ford Taunus V4 engine.
The original 1962 Ford Mustang I two-seater concept car had evolved into the 1963 Mustang II four-seater concept car which Ford used to pretest how the public would take interest in the first production Mustang. The 1963 Mustang II concept car was designed with a variation of the production model's front and rear ends with a roof that was lower. It was originally based on the platform of the second-generation North American Ford Falcon, a compact car. Gale Halderman's side view design is the basis for the first clay model.
Non-traditional (1964½) introduction
The Ford Mustang began production five months before the normal start of the 1965 production year. The early production versions are often referred to as "1964½ models", but all Mustangs were advertised, VIN-coded, and titled by Ford as 1965 models, though minor design updates in August 1964, at the formal start of the 1965 production year, contribute to tracking 1964 production data separately from 1965 data (see data below). Production began in Dearborn, Michigan, on March 9, 1964, and the new car was first sold to the public on April 14, 1964, at a Ford dealership in St. John's, Newfoundland, Canada, before its official introduction on April 17, 1964, at the New York World's Fair. Body styles available included a two-door hardtop and a convertible, with a "2+2" fastback added to the line in September 1964. A Wimbledon White (paint code P) convertible with red interior appeared as product placement in the James Bond film Goldfinger, released September 17, 1964, at its London premiere, in which Bond girl Tilly Masterson engages in a spirited chase with Bond, who is driving an Aston Martin DB5 in the Swiss Alps. A Tropical Turquoise (paint code O) coupe appeared in the next film, Thunderball, which premiered in Tokyo on December 9, 1965, in which Bond girl Fiona Volpe drives Bond at very high speed across the Bahamas to meet the villain Emilio Largo at his compound.
Favorable publicity articles appeared in 2,600 newspapers the next morning, the day the car was "officially" revealed. A four-seat car with full space for the front bucket seats and a rear bench seat was standard. A "fastback 2+2", first manufactured on August 17, 1964, enclosed the trunk space under a sweeping exterior line similar to the second series Corvette Sting Ray and European sports cars such as the Jaguar E-Type coupe.
Price and record-breaking sales
To achieve an advertised list price of , the Mustang relied heavily on familiar, simple components, many of which were already in production for other Ford models. Many, if not most, of the interior, chassis, suspension, and drivetrain components were derived from those used on Ford's Falcon and Fairlane. This use of common components shortened the learning curve for assembly and repair workers, while allowing dealers to pick up the Mustang without having to invest in additional spare-parts inventory to support the new car line. Original sales forecasts projected fewer than 100,000 units for the first year; that mark was surpassed within three months of rollout. Another 318,000 were sold during the model year (a record), and in its first eighteen months more than one million Mustangs were built.
Upgrades
Several changes were made at the traditional opening of the new model year (beginning August 1964), including the addition of back-up lights on some models, the introduction of alternators to replace generators, and upgrades to both the six-cylinder and V8 engines that increased displacement and output. The rush into production included some unusual quirks, such as the horn ring bearing the 'Ford Falcon' logo covered by a trim ring with a 'Ford Mustang' logo. These differences were enough to warrant designating the 121,538 early cars as "1964½" Mustangs, a distinction that has endured with purists.
Ford's designers began drawing up larger versions even as the original was achieving sales success, and while "Iacocca later complained about the Mustang's growth, he did oversee the 1967 redesign." From 1967 until 1973, the Mustang grew bigger but not necessarily more powerful. A facelift gave the car a more massive look overall and allowed a big-block engine to be offered for the first time. Front and rear end styling was more pronounced, and the "twin cove" instrument panel offered a thicker crash pad and larger gauges. Hardtop, fastback, and convertible body styles continued as before. Around this time, the Mustang was paired with a Mercury variant, the Cougar, which used its own styling cues, such as a "prowling cat" logo and hidden quad headlamps. New safety regulations from the U.S. National Highway Traffic Safety Administration (NHTSA) for 1967 required an energy-absorbing steering column and wheel, four-way emergency flashers, a dual-circuit hydraulic braking system, and softer interior knobs. The 1968 models received revised side scoops, steering wheel, and gasoline caps. Side marker lights were also added that year, and cars built after January 1, 1968, included shoulder belts for both front seats on coupes. The 1968 models also introduced a new V8 engine, designed with federal emissions regulations in mind.
The 1969 restyle "added more heft to the body as width and length again increased. Weight went up markedly too." Due to the larger body and revised front-end styling, the 1969 models (and, to a lesser extent, the 1970 models) had a notably aggressive stance. The 1969 models featured "quad headlamps", which disappeared in 1970 to make way for a wider grille and a return to standard headlamps. The switch back was an attempt to tame the aggressive styling of the 1969 model, which some felt was too extreme and hurt sales, though 1969 production exceeded the 1970 total.
Models
Starting in 1969, to aid sales and continue the Mustang's winning formula, a variety of new performance and decorative options became available, including functional (and non-functional) air scoops, cable-and-pin hood tie-downs, and both wing and chin spoilers. Additionally, a variety of performance packages were introduced, including the Mach 1, the Boss 302, and the Boss 429; the two Boss models existed to homologate their engines for racing. 1969 was the last year for the GT option (although it returned on the third-generation Mustang for the 1982 model year). A fourth model available only as a hardtop, the Grandé, found success starting in 1969 with its soft ride, "luxurious" trim, extra sound deadening, and simulated wood trim.
Sales fluctuation
Developed under the watch of S. "Bunkie" Knudsen, the Mustang evolved "from speed and power" toward the growing consumer demand for bigger and heavier "luxury" type designs. "The result was the styling misadventures of 1971–73 ...the Mustang grew fat and lazy," and "Ford was out of the go-fast business almost entirely by 1971." "This was the last major restyling of the first-generation Mustang." "The cars grew in every dimension except height, and they gained about ." "The restyling also sought to create the illusion that the cars were even larger." The 1971 Mustang was nearly wider than the 1970 model, its front and rear track were also widened by , and its size was most evident in the SportsRoof models, with their nearly flat rear roofline and cramped interior with poor visibility for the driver. Performance decreased, and sales continued to fall as consumers switched to the smaller Pinto and Maverick. A displeased Iacocca summed up later: "The Mustang market never left us, we left it."
Second generation (1974)
Iacocca, who had been one of the forces behind the original Mustang, became president of Ford Motor Company in 1970, and ordered a smaller, more fuel-efficient Mustang for 1974. Initially, it was to be based on the Ford Maverick, but ultimately was based on the Ford Pinto subcompact.
The new model, called the "Mustang II", was introduced on September 21, 1973, two months before the onset of the 1973 oil crisis. Its reduced size allowed it to compete against successful imported sports coupes such as the Japanese Datsun 240Z and Toyota Celica and the European Ford Capri (then built by Ford in Germany and Britain and sold in the U.S. by Mercury as a captive import). The Mustang II later also competed against the Chevrolet Monza, Pontiac Sunbird, Oldsmobile Starfire, and Buick Skyhawk. First-year sales were 385,993 cars, compared with the original Mustang's twelve-month sales record of 418,812. Ultimately, the Mustang II was an early example of the downsizing that would take place among Detroit's Big Three during the "malaise era".
Iacocca wanted the new car, which returned the Mustang to its 1965 model year predecessor in size, shape, and overall styling, to be finished to a high standard, saying it should be "a little jewel". Not only was it smaller than the original car, but it was also heavier, owing to the addition of equipment needed to meet new U.S. emission and safety regulations. Performance was reduced, and despite the car's new handling and engineering features the galloping mustang emblem "became a less muscular steed that seemed to be cantering".
Engines for the 1974 models included the venerable 2.3 L I4 from the Pinto and the 2.8 L Cologne V6 from the Mercury Capri. The 1975 model year reintroduced the Windsor V8 that was only available with the C-4 automatic transmission, power brakes, and power steering. This continued through production's end in 1978. Other transmissions were the RAD four-speed with unique gearing for all three engines, and the C-3 automatic behind the 2.3 L and 2.8 L. The "5.0 L" marketing designation was not applied until the 1978 King Cobra model. All -equipped Mustang IIs, except the King Cobras, received updated versions of the classic Ford "V8" emblem on each front fender.
The car was available in coupe and hatchback versions, including a "luxury" Ghia model designed by Ford's recently acquired Ghia of Italy. The coupe was marketed as a "hardtop" but actually had a thin "B" pillar and rear quarter windows that did not roll down. All Mustangs in this generation did, however, feature frameless door glass. The "Ghia" featured a thickly padded vinyl roof and, starting with the 1975 models, smaller rear quarter windows, giving a more formal look. The 1974 models were the hardtop, hatchback, Mach 1, and Ghia. Changes for 1975 included an "MPG" model with a different rear-axle ratio for better fuel economy, and 1976 added the "Stallion" trim package. The Mach 1 remained throughout the 1974–1978 life cycle. Other changes in appearance and performance came with a "Cobra II" version in 1976–1978 and a "King Cobra" in 1978, of which around 4,972 were built. The 1977–1978 hatchback models in all trim levels were available with a T-top roof option, which included a leatherette storage bag that clipped to the top of the spare-tire hump.
Third generation (1979)
The 1979 Mustang was based on the larger Fox platform, initially developed for the 1978 Ford Fairmont and Mercury Zephyr. The four-passenger body used a longer wheelbase, which yielded increased room in the passenger cabin, trunk, and engine bay.
Body styles included a coupe (or notchback), hatchback, and convertible, the latter added for model year 1983. Available trim levels included an unnamed base model (1979–1981), Ghia (1979–1981), Cobra (1979–1981, 1993), L (1982–1984), GL (1982–1983), GLX (1982–1983), GT (1982–1993), Turbo GT (1983–1984), LX (1984–1993), GT-350 20th anniversary edition (1984), SVO (1984–1986) and Cobra R (1993).
Engines and drivetrains carried over from the Mustang II, including the 2.3 L I4, 2.8 L V6, and 4.9 L V8. A troublesome 2.3 L turbocharged I4 was available during initial production startup, then reappeared after improvements for the mid-year introduction of the 1983 Turbo GT. The 2.8 L V6, in short supply, was replaced with a 3.3 L I6 during the 1979 model year; that engine was ultimately replaced with a new 3.8 L V6 for 1983. The 4.9 L V8 was suspended after 1979 and replaced with a smaller 4.2 L V8, which was dropped in favor of the high-output V8 for 1982.
From 1979 to 1986, the Capri was domestically produced as a badge engineered variant of the Mustang, using a few of its own styling cues.
The third-generation Mustang had two different front-end styles. From 1979 to 1986, the front end was angled back using four rectangular headlights. The front end was restyled for 1987 to 1993 model years providing a rounded-off "aero" style with flush-composite headlamps and a smooth grille-less nose.
When the Mustang was selected as the 1979 Official Indianapolis 500 Pace Car, Ford also marketed replica models, and its special body-appearance parts were adapted by the Cobra package for 1980–81.
1982 marked the return of the Mustang GT (replacing the Cobra) which used a specially-modified high-output engine.
In 1983, Ford again offered a convertible Mustang, after a nine-year absence. The front fascias of all Mustangs were restyled, featuring new grilles, sporting "blue oval" Ford emblems for the first time.
1984 introduced the high-performance Mustang SVO, which featured a 2.3 L turbocharged and intercooled four-cylinder engine and unique bodywork.
The Mustang celebrated its 20th anniversary with a special GT350 model in white with red interior and red lower-bodyside rocker stripes. 1985 Mustangs received another front-fascia restyle.
In response to poor sales and escalating fuel prices during the early 1980s, a new Mustang was in development. It was to be a variant of the Mazda MX-6 assembled at AutoAlliance International in Flat Rock, Michigan. Enthusiasts wrote to Ford objecting to the proposed change to a front-wheel drive, Japanese-designed Mustang without a V8 option. The result was the continuation of the existing Mustang while the Mazda MX-6 variant had a last-minute name change from Mustang to Probe and was released as a 1989 model.
The Mustang received a major restyling for 1987, including the interior, which carried it through the end of the 1993 model year.
Under the newly established Ford SVT division, the 1993 Ford Mustang SVT Cobra and Cobra R were added as special, high-performance models.
Fourth generation (SN95; 1994)
In November 1993, the Mustang debuted its first major redesign in fifteen years. Code-named "SN95" by the automaker, it was based on an updated version of the rear-wheel-drive Fox platform called "Fox-4". The new styling by Patrick Schiavone incorporated several cues from earlier Mustangs. For the first time since its introduction in 1964, a notchback coupe model was not available. The door windows on the coupe were once again frameless; however, the car had a fixed "B" pillar and rear windows.
The base model came with a 3.8 L OHV V6 engine rated at in 1994 and 1995, or (1996–1998), mated to a standard 5-speed manual transmission or optional 4-speed automatic. After using it in the 1994 and 1995 Mustang GTS, GT, and Cobra, Ford retired the 302 cid pushrod small-block V8 after nearly 30 years, replacing it with the newer SOHC Modular V8 in the 1996 Mustang GT. The 4.6 L V8 was initially rated at for 1996–1997, but was increased to in 1998.
For 1999, the Mustang was reskinned with Ford's New Edge styling theme, with sharper contours, larger wheel arches, and creases in its bodywork, but its basic proportions, interior design, and chassis remained the same as the previous model. The Mustang's powertrains were carried over for 1999, but benefited from new improvements: the standard 3.8 L V6 had a new split-port induction system and was rated at for 1999–2000, while the Mustang GT's 4.6 L V8 saw an increase in output to (1999–2004) due to a new head design and other enhancements. In 2001, the 3.8 L's output was increased to 193 bhp. In 2004, a 3.9 L variant of the Essex engine replaced the standard 3.8 L mid-year, with an increase of of torque as well as NVH improvements. Three alternate models were also offered in this generation: the 2001 Bullitt; the 2003 and 2004 Mach 1; and the 1999, 2001, 2003, and 2004 Cobra.
Ford Australia
This generation was sold in Australia between 2001 and 2002 to compete against the Holden Monaro (which eventually became the basis for the reborn Pontiac GTO). Because the Mustang was never designed for right-hand drive, Ford Australia contracted Tickford Vehicle Engineering to convert 250 Mustangs per year to meet Australian Design Rules. The development cost for redesigning the components and setting up the production process was . Sales did not meet expectations, due in part to a high selling price; in total, just 377 Mustangs were sold in Australia between 2001 and 2003. For promotional purposes, Ford Racing Australia also built a Mustang V10 convertible, powered by a Ford Modular 6.8 L V10 engine from the American F-Series trucks fitted with an Australian-made Sprintex supercharger.
Fifth generation (S197; 2005)
Ford introduced a re-designed 2005 model year Mustang at the 2004 North American International Auto Show, codenamed "S197", that was based on the new D2C platform. Developed under the direction of chief engineer Hau Thai-Tang, a veteran engineer for Ford's IndyCar program under Mario Andretti, and exterior styling designer Sid Ramnarace, the fifth-generation Mustang's styling echoes the fastback Mustang models of the late-1960s. Ford's senior vice president of design, J Mays, called it "retro-futurism". The fifth-generation Mustang was manufactured at the Flat Rock Assembly Plant in Flat Rock, Michigan.
For the 2005 to 2010 production years, the base model was powered by a cast-iron-block 4.0 L SOHC V6, while the GT used an aluminum-block 4.6 L SOHC three-valve Modular V8 with variable camshaft timing (VCT) that produced . Base models had a Tremec T5 five-speed manual transmission, with Ford's 5R55S five-speed automatic optional. GTs with the automatic also used the 5R55S, while manual GTs had the Tremec TR-3650 five-speed.
For 2007, Ford's SVT launched the Shelby GT500, a successor to the 2003/2004 Mustang SVT Cobra. The supercharged and intercooled Ford Modular DOHC 4 valves per cylinder V8 engine with an iron block and aluminum heads was rated at at 6,000 rpm and of torque at 4,500 rpm.
The 2010 model year Mustang was released in the spring of 2009 with a redesigned exterior, including sequential LED taillights, and a drag coefficient reduced by 4% on base models and 7% on GT models. The engine for base Mustangs remained unchanged, while the GT's 4.6 L V8 was revised, resulting in at 6,000 rpm and of torque at 4,255 rpm. Other mechanical features included new spring rates and dampers, traction and stability control standard on all models, and new wheel sizes.
Engines were revised for 2011, and transmission options included the Getrag-Ford MT82 six-speed manual or the 6R80 six-speed automatic, based on the ZF 6HP26 transmission and licensed for production by Ford. Electric power steering replaced the conventional hydraulic version. A new aluminum-block V6 engine weighed less than the previous version; with 24 valves and twin independent variable cam timing (TiVCT), it produced and of torque, and the 3.7 L engine came with a new dual exhaust. GT models included a 32-valve 5.0 L V8 (also referred to as the "Coyote") producing 412 hp and 390 lb-ft of torque. Brembo brakes were optional, along with 19-inch wheels and performance tires.
For 2012, a new Mustang Boss 302 version was introduced. The engine had and of torque. A "Laguna Seca" edition was also available, which offered additional body bracing, the replacement of the rear seat with a steel "X-brace" for stiffening, and other powertrain and handling enhancements.
In the second quarter of 2012, Ford launched an update to the Mustang line as an early 2013 model. The Shelby GT500 received a new 5.8 L supercharged V8 producing . The Shelby and Boss engines came with a six-speed manual transmission. The GT and V6 models' revised styling incorporated the grille and air intakes from the 2010–2012 GT500. The decklid received a black cosmetic panel on all trim levels, and the GT's 5.0 L V8 gained eight horsepower.
Sixth generation (S550; 2015)
The sixth generation Mustang was unveiled on December 5, 2013, in Dearborn, Michigan; New York, New York; Los Angeles, California; Barcelona, Spain; Shanghai, China; and Sydney, Australia. The internal project code name is S550.
Changes included a body widened by 1.5 inches and lowered by 1.4 inches, a trapezoidal grille, a 2.75-inch-lower decklid, and new colors. Passenger volume increased to 84.5 cubic feet, the wheelbase remained 107.1 in., and three engine options were available: a newly developed 2.3 L EcoBoost 310 hp four-cylinder, introduced to reach high-tariff global markets such as China; a 3.7 L 300 hp V6; or a 5.0 L Coyote 435 hp V8, with either a Getrag six-speed manual or a six-speed automatic transmission with paddle shifters.
A new independent rear suspension (IRS) system was developed specifically for the new model. It also became the first Mustang factory-designed as a right-hand-drive export model, sold overseas through Ford new car dealerships in right-hand-drive markets.
In February 2015, the Mustang earned a five-star rating from the National Highway Traffic Safety Administration (NHTSA) for front, side, and rollover crash protection.
In May 2015, Ford issued a recall involving 19,486 of the 2015 Ford Mustang with the 2.3 L EcoBoost turbocharged four-cylinder engine with a production date between February 14, 2014, and February 10, 2015, that were built at the Flat Rock Assembly Plant. As of June 2015, 1 million Mustangs (between 2005 and 2011) and GTs (between 2005 and 2006) were affected by a recall of airbags made by Takata Corporation. This was after Takata announced that it was recalling 33.8 million vehicles in the U.S. for airbags that could explode and send metal pieces flying at drivers and passengers.
Euro NCAP crash-tested the left-hand-drive (LHD) European version of the 2017 Mustang, which received only two stars due to the lack of safety features such as lane assist and automatic braking. Euro NCAP also pointed to insufficient airbag pressure, resulting in the driver's head hitting the steering wheel, and in the full-width test the rear passenger slipped under the seatbelt.
The 2018 model year Mustang was released in the third quarter of 2017 in North America and globally by 2018, featuring a minor exterior redesign. The engine lineup was revised: the 3.7 L V6 was dropped, and the 2.3 L EcoBoost (direct-injection turbocharged) I4 became the base power plant, producing and of torque on 93-octane fuel. The 5.0 L V8 received a power increase to and of torque. The automatic transmission for both engines is now the ten-speed Ford 10R80. In January 2018, Ford displayed a prototype of the special-edition 2018 Bullitt model, released that summer to commemorate the 50th anniversary of the film Bullitt, which had helped attract interest in the marque.
For the 2019 model year, Ford revised many components on the 2019 Shelby GT350 including stickier Michelin Pilot Sport Cup 2 tires along with steering and suspension components.
The 2020 model year saw the re-introduction of the GT500. The 2020 GT500 includes a hand-built 5.2-liter "Predator" aluminum-alloy V8 engine with a 2.65-liter roots-type supercharger. The Shelby GT500 produces and of torque. The GT350 was discontinued at the end of the 2020 model year.
For the 2021 model year, Ford re-introduced the Mach 1 after a 17-year hiatus. The 2021 Mach 1 utilizes the current Coyote 5.0 L engine with GT350 parts, including the intake manifold, increasing performance to at 7,000 rpm and at 4,600 rpm in addition to utilizing the GT350's lightweight Tremec six-speed manual transmission, oil-filter adapter, engine oil cooler, and front and rear subframe. The Mach 1 also utilizes parts from the GT500, including the rear axle cooling system, rear toe link, and rear diffuser.
Seventh generation (S650; 2024)
Ford previewed the seventh-generation Mustang at the 2022 Detroit Auto Show on September 14, in a special event called "The Stampede". As part of its introduction, multiple track-only models were showcased, including a NASCAR Cup Series body, a V8 Supercar version, and multiple GT racing versions. Also announced was the addition of the "Dark Horse" series; bridging the gap between the Mach 1 and the discontinued GT350, the Dark Horse fills much the same role as the 2012–2013 Boss 302 Mustangs: a street-legal car with enhanced performance on road courses. The seventh-generation Mustang is assembled at Ford's Flat Rock Assembly Plant, where production began on May 1, 2023. It was initially available with either the redesigned 2.3 L EcoBoost turbocharged 4-cylinder with , or the revised fourth-generation Coyote V8 with in the GT and in the Dark Horse. At launch, three transmissions were offered: a Getrag 6-speed manual (GT only), a Tremec 6-speed manual (Dark Horse only), or a 10-speed automatic (available on all trims).
Mustang Mach-E
On November 17, 2019, Ford announced the Ford Mustang Mach-E. Unrelated to any of the pony car Mustang versions, it is an electric crossover with rear-wheel or all-wheel drive, depending on trim level. It has of range and an updated Ford Sync system with a 15.5-inch display. The Mustang Mach-E comes in several trims, including First Edition, Select, Premium, California Route 1, and GT. The Mach-E also offers two battery options, and Ford is expected to introduce a third option in the future. Although it shares the Mustang name and badge, this vehicle is not counted among the Mustang's seven generations: it is a separate model built on a separate vehicle platform and produced alongside the existing two-door Mustang rather than a chronological successor to it.
Racing
The Mustang made its first public appearance on a racetrack as pace car for the 1964 Indianapolis 500.
The same year, Mustangs won first and second in class at the Tour de France international rally.
In 1969, modified versions of the 428 Mach 1, Boss 429 and Boss 302 took 295 United States Auto Club-certified records at Bonneville Salt Flats. The outing included a 24-hour run on a course at an average speed of . Drivers were Mickey Thompson, Danny Ongais, Ray Brock, and Bob Ottum.
Drag racing
The car's American competition debut, also in 1964, was in drag racing, where private individuals and dealer-sponsored teams campaigned Mustangs powered by V8s.
In late 1964, Ford contracted Holman & Moody to prepare ten 427-powered Mustangs to contest the National Hot Rod Association's (NHRA) A/Factory Experimental class in the 1965 drag racing season. Five of these special Mustangs made their competition debut at the 1965 NHRA Winternationals, where they qualified in the factory stock eliminator class. The car driven by Bill Lawton won the class.
A decade later Bob Glidden won the Mustang's first NHRA pro stock title.
Rickie Smith's Motorcraft Mustang won the International Hot Rod Association pro stock world championship.
In 2002 John Force broke his own NHRA drag racing record by winning his 12th national championship in his Ford Mustang funny car; Force beat that record again in 2006, becoming the first-ever 14-time champion, driving a Mustang.
Circuit racing
Early Mustangs also proved successful in road racing. The GT 350 R, the race version of the Shelby GT 350, won five of the Sports Car Club of America's (SCCA) six divisions in 1965. Drivers were Jerry Titus, Bob Johnson and Mark Donohue, and Titus won the (SCCA) B-Production national championship. The GT 350s won the B-Production title again in 1966 and 1967. They also won the 1966 manufacturers' championship in the inaugural SCCA Trans-Am series, and repeated the win the following year.
In 1970, Mustang won the SCCA series manufacturers' championship again, with Parnelli Jones and George Follmer driving for car owner/builder Bud Moore and crew chief Lanky Foushee. Jones won the "unofficial" drivers' title.
In 1975 Ron Smaldone's Mustang became the first-ever American car to win the Showroom Stock national championship in SCCA road racing.
Mustangs competed in the IMSA GTO class, with wins in 1984 and 1985. In 1985, John Jones won the GTO drivers' championship; Wally Dallenbach Jr., John Jones, and Doc Bundy won the GTO class at the Daytona 24 Hours; and Ford won its first manufacturers' championship in road racing since 1970. Three class wins went to Lyn St. James, the first woman to win in the series.
1986 brought eight more GTO wins and another manufacturers' title. Scott Pruett won the drivers' championship. The GT Endurance Championship also went to Ford.
In 1987 Saleen Autosport Mustangs driven by Steve Saleen and Rick Titus won the SCCA Escort Endurance SSGT championship, and in International Motor Sports Association (IMSA) racing a Mustang again won the GTO class in the Daytona 24 Hours. In 1989, the Mustang won Ford its first Trans-Am manufacturers' title since 1970, with Dorsey Schroeder winning the drivers' championship.
In 1997, Tommy Kendall's Roush-prepared Mustang won a record 11 consecutive races in Trans-Am to secure his third straight driver's championship.
Mustangs compete in the SCCA World Challenge, with Brandon Davis winning the 2009 GT driver's championship. Mustangs competed in the now-defunct Grand-Am Road Racing Ford Racing Mustang Challenge for the Miller Cup series.
Ford won championships in the Grand-Am Road Racing Continental Tire Sports Car Challenge for the 2005, 2008, and 2009 seasons with the Mustang FR500C and GT models. In 2004, Ford Racing retained Multimatic Motorsports to design, engineer, build and race the Mustang FR500C turn-key race car. In 2005, Scott Maxwell and David Empringham took the driver's title. In 2010, the next-generation Mustang race car was known as the Boss 302R. It took its maiden victory at Barber Motorsports Park in early 2011, with drivers Scott Maxwell and Joe Foster.
In 2012, Jack Roush Jr and Billy Johnson won the Continental Tire Sports Car Challenge race at the Daytona International Speedway opening race of the 50th Anniversary Rolex 24 At Daytona weekend in a Mustang Boss 302R.
In 2016, Multimatic Motorsports won the IMSA CTSCC drivers' and manufacturers' titles with the S550-based Shelby GT350R-C, driven by Scott Maxwell and Billy Johnson.
On July 27, 2023, Ford announced that the 7th Generation Mustang would have its own spec-racing series called Mustang Challenge, sanctioned by the IMSA.
Stock car racing
Dick Trickle won 67 short-track oval feature races in 1972, a US national record for wins in a single season.
In 2010 the Ford Mustang became Ford's Car of Tomorrow for the NASCAR Nationwide Series with full-time racing of the Mustang beginning in 2011. This opened a new chapter in both the Mustang's history and Ford's history. NASCAR insiders expected to see Mustang racing in NASCAR Sprint Cup by 2014 (the model's 50th anniversary). The NASCAR vehicles are not based on production models but are a silhouette racing car with decals that give them a superficial resemblance to road cars. Carl Edwards won the first-ever race with a NASCAR-prepped Mustang on April 8, 2011, at the Texas Motor Speedway.
Ford Mustangs have also raced in the NASCAR Xfinity Series since 2010.
Ford Mustangs are driven in the NASCAR Whelen Euro Series also.
Ford Mustangs have been track-raced in the NASCAR Cup Series since 2019, replacing the discontinued Ford Fusion.
Drifting
Mustangs have competed at the Formula Drift and D1 Grand Prix series, most notably by American driver Vaughn Gittin Jr.
Brazilian Driver Diego Higa won the Netflix Hyperdrive Series in 2019 in a 2006 Ford Mustang V8.
Europe
Ford Mustangs compete in the FIA GT3 European Championship, the GT4 European Cup, and other sports car races such as the 24 Hours of Spa. The Marc VDS Racing Team had been developing the GT3-spec Mustang since 2010.
Australia
The Ford Mustang was announced as the replacement for the Ford Falcon FG X in the 2019 Supercars Championship, contested in Australia and New Zealand. The Mustang won the first race of the year, with Scott McLaughlin taking victory for DJR Team Penske.
Awards
The 1965 Mustang won the Tiffany Gold Medal for excellence in American design, the first automobile ever to do so.
The Mustang was on the Car and Driver Ten Best list in 1983, 1987, 1988, 2005, 2006, 2011, and 2016. It won the Motor Trend Car of the Year award in 1974 and 1994.
Sales
Mustang Owner's Museum
In May 2016, the Mustang Owner's Museum was announced, with an official opening in Concord, North Carolina on April 17, 2019; the fifty-fifth anniversary. The decision to locate somewhere in Concord was a result of the success of the 2014 Mustang 50th-anniversary celebration at Charlotte Motor Speedway in Concord, with over 4,000 Mustangs registered and an estimated economic impact of .
In popular culture
The Ford Mustang has been featured in numerous media. Effective product placement allowed the car to reach "celebrity status in the 1960s". In particular, "movie glamour" assisted in establishing a positive association with the Mustang. The following are a few notable cases where embedded marketing influenced the sales or other tangible aspect of the vehicle:
The 1964 movie The Troops of St. Tropez was the Ford Mustang's first appearance in a movie. "Contrary to popular belief, the Ford Mustang did not make its cinematic debut in the classic James Bond film Goldfinger. On September 9, 1964, Nicole Cruchot cruised around in a Poppy Red 1964.5 Mustang convertible in the French comedy Le Gendarme de Saint-Tropez. Known to American audiences as The Troops of St. Tropez, Cruchot's character, Geneviève Grad, holds the distinct honor of being the first person to drive a Ford Mustang on the silver screen."
The 1964 movie Goldfinger was the Ford Mustang's second appearance in a feature film, timed with the car's introduction in the US marketplace.
The song "Mustang Sally", first recorded by Wilson Pickett in 1966 and covered by many other artists since, is about a man who buys a Mustang for his girlfriend, Sally, who ends up preferring the car over him. It has been described by one cultural historian as "free advertising for the Ford Motor Company."
The TV series The F.B.I. was sponsored by Ford Motor Company. Efrem Zimbalist Jr. drove new Mustang convertibles during the first four seasons (1965–1969), and viewers can see how the Mustang evolved into a muscle car.
Using real cars, Steve McQueen drove a debadged Highland Green 1968 Mustang GT fastback with a 390-cubic-inch engine and four-speed transmission in a chase scene, alongside a black 1968 Dodge Charger, in the 1968 film Bullitt. Ford has released several special editions of the Mustang paying homage to the movie car.
A 1971 Mustang (modified to look like a 1973 model), nicknamed "Eleanor", was the feature car in the 1974 car heist film Gone in 60 Seconds. "Eleanor" returned, as a 1967 Mustang Shelby GT500, in the movie's remake in 2000. The remake version of Eleanor featured a custom body kit designed by Chip Foose that has inspired numerous restomods since.
The racing video game Ford Mustang: The Legend Lives, released in 2005, features only Mustangs.
The 2008 TV movie Knight Rider featured a black 2008 Ford Mustang Shelby GT500KR as KITT (replacing the 1982 Pontiac Firebird from the original series), voiced by Val Kilmer.
The David Gelb directed documentary A Faster Horse covers the creation of the 2015 Mustang.
The 2014 film Need for Speed features, along with a Shelby Mustang integral to the plot, a 2015 Mustang that briefly appears at the end. Like with Goldfinger, the scene was shot before the car was revealed to the public. A prototype was used and kept secret, with only the actors and film crew allowed to see the car.
Charles de Gaulle Airport
Paris Charles de Gaulle Airport, also known as Roissy Airport, is the primary international airport serving Paris, the capital city of France. The airport opened in 1974 and is located in Roissy-en-France, northeast of Paris. It is named for World War II leader and French President Charles de Gaulle (1890–1970), whose initials form its IATA airport code.
Charles de Gaulle Airport serves as the principal hub for Air France and a destination for other legacy carriers (from Star Alliance, Oneworld and SkyTeam), as well as an operating base for easyJet and Norse Atlantic Airways. It is operated by Groupe ADP (Aéroports de Paris) under the brand Paris Aéroport.
In 2024, the airport handled 70,290,260 passengers and 460,916 aircraft movements, thus making it the world's ninth busiest airport and Europe's third busiest airport (after Istanbul and Heathrow) in terms of passenger numbers. Charles de Gaulle is also the busiest airport within the European Union. In terms of cargo traffic, the airport is the eleventh busiest in the world and the busiest in Europe, handling of cargo in 2019. It is also the airport that is served by the greatest number of airlines, with more than 105 airlines operating at the airport.
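As a quick sanity check on the 2024 figures above, dividing passengers by aircraft movements gives the average number of passengers per movement (illustrative arithmetic only; movements also include cargo and other non-passenger flights):

```python
# 2024 traffic figures quoted above.
passengers = 70_290_260
movements = 460_916

avg_per_movement = passengers / movements
print(f"about {avg_per_movement:.0f} passengers per aircraft movement")
```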
, the airport offered direct flights to the most countries and hosts the most airlines in the world. Marc Houalla has been the director of the airport since 12 February 2018.
Location
Paris Charles de Gaulle Airport covers of land. The airport area, including terminals and runways, spans over three départements and six communes:
Seine-et-Marne département: Le Mesnil-Amelot (Terminal 2E, Satellites S3 and S4, and Terminal 2F), Mauregard (Terminals 1, 3), and Mitry-Mory (Terminal 2G) communes;
Seine-Saint-Denis département: Tremblay-en-France (Terminals 2A, 2B, 2C, 2D and Roissypôle) commune;
Val-d'Oise département: Roissy-en-France and Épiais-lès-Louvres communes.
The decision to build an international aviation hub outside central Paris was driven by the limited need for relocations or expropriations at the site and by the possibility of expanding the airport further in the future.
Management of the airport lies solely with Groupe ADP, which also manages Orly (south of Paris), Le Bourget (to the immediate southwest of Charles de Gaulle Airport, now used for general aviation and Paris Air Shows), several smaller airfields in the suburbs of Paris, and other airports directly or indirectly worldwide.
History
Development
The planning and construction phase of what was then known as Aéroport de Paris Nord (Paris North Airport) began in 1966. On 8 March 1974 the airport, renamed Charles de Gaulle Airport, opened. Terminal 1 was built to an avant-garde design: a ten-storey circular building surrounded by seven satellite buildings, each with six gates, with apertures allowing sunlight to enter. The main architect was Paul Andreu, who was also in charge of the extensions during the following decades.
Terminal 2 opened in 1981, with an official inauguration in the presence of the then President, François Mitterrand, in March 1982. Unlike Terminal 1, Terminal 2 was designed with a traditional linear layout, but it has evolved over time into a series of distinct terminals, designated 2A through 2G.
Following the introduction of the brand Paris Aéroport to all its Parisian airports, Groupe ADP also announced major changes for the Charles de Gaulle Airport: Terminals of the Satellite 1 were to be merged, as well as terminals 2B and 2D. A new luggage automated sorting system and conveyor under Terminal 2E Hall L was installed to speed luggage delivery time. The CDG Express, the direct express rail link from Paris to Charles de Gaulle Airport, is scheduled to open in early 2027.
Corporate identity
The Frutiger typeface was commissioned for use in the airport and implemented on signs throughout the building in 1975. Initially called Roissy, it was renamed after its designer Adrian Frutiger.
Until 2005, every PA announcement made at Terminal 1 was preceded by a distinctive chime, nicknamed "Indicatif Roissy" and composed by Bernard Parmegiani in 1971. The chime can be heard in the Roman Polanski film Frantic. The chime was officially replaced by the "Indicatif ADP" chime.
On 14 April 2016, the Groupe ADP rolled out the Connect 2020 corporate strategy and the commercial brand Paris Aéroport was applied to all Parisian airports, including Le Bourget airport.
Terminals
Charles de Gaulle Airport has three terminals. Terminal 1, the oldest, is situated opposite Terminal 3; Terminal 2, on another side of the airport, comprises seven sub-terminal buildings (2A to 2G). Terminal 2 was originally built exclusively for Air France; it has since been expanded significantly and now houses other airlines. Terminals 2A to 2F are interconnected by elevated walkways and situated next to each other. Terminal 2G is a satellite building connected by shuttle bus.
Terminal 3 (formerly known as "Terminal 9") hosts charter and low-cost airlines. The CDGVAL light-rail shuttle connects Terminal 2 to Terminals 1 and 3 and their parking lots.
Before the COVID-19 pandemic, Charles de Gaulle Airport had assigned all Star Alliance members to use Terminal 1, Oneworld members to use Terminal 2A and SkyTeam members to use Terminals 2C, 2E (intercontinental), 2D, 2F and 2G (European routes). The assignments changed several times due to the pandemic.
Today, the airport assigns Star Alliance airlines to Terminal 1; Oneworld airlines to Terminal 1 for routes to the Middle East and Asia and to Terminal 2B for flights to the Americas, Africa, and Europe (due to the closure of Terminal 2A); and SkyTeam airlines to Terminal 2E for international routes and Terminal 2F for Schengen routes.
Terminal 1
The first terminal, designed by Paul Andreu, was built in the image of an octopus. It consists of a circular terminal building which houses key functions such as check-in counters and baggage claim conveyors. Seven satellites with boarding gates are connected to the central building by underground walkways.
The central building, with a large skylight in its centre, dedicates each floor to a single function. The first floor is reserved for technical operations and not accessible to the public. The second floor contains shops and restaurants, the CDGVAL inter-terminal shuttle train platforms (for Terminal 2 and trains to central Paris) and check-in counters from a recent renovation. The majority of check-in counters, however, are located on the third floor, which also has access to taxi stands, bus stops and special pick-up vehicles. Departing passengers with valid boarding passes can reach the fourth floor, which houses duty-free stores and border control posts, for the boarding gates. The fifth floor contains baggage claim conveyors for arriving passengers. All four upper floors have assigned areas for parking and airline offices.
Passages between the third, fourth and fifth floors are provided by a tangle of escalators arranged through the centre of the building. These escalators are suspended over the central court. Each escalator is covered with a transparent tube to shelter from all weather conditions. These escalators were often used in film shootings (e.g., The Last Gang of Ariel Zeitoun). The Alan Parsons Project album I Robot features these escalators on its cover.
Terminal 1 closed in March 2020 in response to the drop in traffic during the COVID-19 pandemic. ADP used this time for a €250 million refurbishment. Completed in 2023, the refurbishment included the creation of a new junction building linking satellites 1, 2 and 3, and modernisation of the central body of the terminal. Various design details in the refurbished terminal pay homage to the circular shape of the original Andreu design. The upgraded terminal also features a new departure lounge designed by French designers Maxime Liautard and Hugo Toro, which reflects the ambiance of a Parisian bistro.
Star Alliance airlines use Terminal 1 except Air Canada and Ethiopian Airlines. Other carriers using Terminal 1 include Oneworld carriers Cathay Pacific, Qatar Airways and SriLankan Airlines, and non-aligned carriers Aer Lingus, Emirates, Etihad Airways, Eurowings, Icelandair, Kuwait Airways and Oman Air.
Terminal 2
Terminal 2 is spread across seven sub-terminals: 2A to 2G. Terminals 2A to 2F are connected by inter-terminal walkways, but Terminal 2G is a satellite building away. Terminal 2G can only be accessed by shuttle bus from Terminals 1, 2A to 2F and 3. The CDGVAL inter-terminal shuttle train and the Paris RER regional-express and high-speed TGV rail station, Aéroport Charles de Gaulle 2 TGV, are located within the Terminal 2 complex, between 2C and 2E on one side and 2D and 2F on the other.
Terminal 2F was used for the filming of the music video for the U2 song "Beautiful Day". The band also had their picture taken inside Terminal 2F for the album artwork of their 2000 album All That You Can't Leave Behind.
Terminals 2B and 2D are used by most Oneworld airlines (except Oneworld's long-haul carriers to Asia and the Middle East), by the French overseas airlines Air Austral and Air Tahiti Nui, and by other non-SkyTeam short-haul and medium-haul airlines that do not operate from Terminal 1. SkyTeam carrier Czech Airlines also uses these terminals.
Terminals 2E and 2F are dedicated to Air France and its SkyTeam partners, except Czech Airlines (Terminal 2D) and Saudia (Terminal 1). Several other carriers also use Terminal 2E: Oneworld carrier Japan Airlines and non-aligned carriers Air Mauritius, China Southern Airlines, Gulf Air, LATAM Chile, and WestJet.
Collapse of Terminal 2E
On 23 May 2004, shortly after the inauguration of terminal 2E, a portion of it collapsed near Gate E50, killing four people. Two of the dead were reported to be Chinese citizens, one Czech and the other Lebanese. Three other people were injured in the collapse. Terminal 2E had been inaugurated in 2003 after some delays in construction and was designed by Paul Andreu. Administrative and judicial enquiries were started.
Before this accident, ADP had been planning for an initial public offering in 2005 with the new terminal as a major attraction for investors. The partial collapse and indefinite closing of the terminal just before the beginning of summer seriously hurt the airport's business plan.
In February 2005, the results of the administrative inquiry were published. The experts pointed out that there was no single fault, but rather a number of causes for the collapse, in a design that had little margin for safety. The inquiry found that the concrete vaulted roof was not resilient enough: it had been pierced by metal pillars, and openings weakened the structure. Sources close to the inquiry also disclosed that the whole building chain had worked as close to the limits as possible so as to reduce costs. Paul Andreu blamed the building companies for not having correctly prepared the reinforced concrete.
On 17 March 2005, ADP decided to tear down and rebuild the whole part of Terminal 2E (the "jetty") of which a section had collapsed, at a cost of approximately €100 million. The reconstruction replaced the innovative concrete tube style of the jetty with a more traditional steel and glass structure. During reconstruction, two temporary departure lounges were constructed in the vicinity of the terminal that replicated the capacity of 2E before the collapse. The terminal reopened completely on 30 March 2008.
Terminal 2G
Terminal 2G, dedicated to regional Air France and HOP! flights and its affiliates, opened in 2008. The terminal lies to the east of all the others and can only be reached by shuttle bus. Terminal 2G serves passengers flying within the Schengen Area (and thus has no passport control), handling Air France regional and European traffic. It gives small-capacity planes (up to 150 passengers) a faster turnaround than was previously possible by letting them park close to the terminal building, with passengers boarding primarily by bus or on foot. A bus line called "navette orange" connects Terminal 2G, inside the security check area, with Terminals 2E and 2F. Passengers transferring to other terminals continue their trip on other shuttle buses within the security check area if they do not need to collect their bags.
Terminal 2E Hall L (Satellite 3)
The completion of long Satellite 3 (or S3) to the immediate east of Terminals 2E and 2F provides further jetways for large-capacity airliners, specifically the Airbus A380. Check-in and baggage handling are provided by the existing infrastructure in Terminals 2E and 2F. Satellite 3 was opened in part on 27 June 2007 and fully operational in September 2007. It corresponds now to gates L of terminal 2E.
Terminal 2E Hall M (Satellite 4)
The satellite S4, adjacent to the S3 and part of terminal 2E, officially opened on 28 June 2012. It corresponds now to gates M of terminal 2E. Dedicated to long-haul flights, it has the ability to handle 16 aircraft at the same time, with an expected capacity of 7.8 million passengers per year. Its opening has led to the relocation of all SkyTeam airlines to terminals 2E (for international carriers), 2F (for Schengen European carriers) and 2G.
Recent terminal reassignments
Air France has moved all of its operations previously located at 2C to 2E. In October 2012, 2F closed its international operations and became completely Schengen, allowing for all Air France flights previously operating in 2D to relocate to 2F.
Further, in April 2013, Terminal 2B closed for a complete renovation (with all airlines relocating to 2D) and received upgrades including the addition of a second floor completely dedicated to arrivals. Terminal 2B reopened on 2 June 2021. Airlines including the Lufthansa group, Aegean Airlines, easyJet, Icelandair, LOT Polish Airlines, Norwegian Air Shuttle, Play, Royal Air Maroc, and Scandinavian Airlines began operations at Terminal 2B until 2 December 2022, when the airlines except easyJet and Royal Air Maroc moved back to Terminal 1. Low-cost carrier easyJet has shown interest in being the sole carrier at 2B. To facilitate connections, a new boarding area between 2A and 2C was opened in March 2012. It allows for all security and passport control to be handled in a single area, allows for many new shopping opportunities as well as new airline lounges, and eases transfer restrictions between 2A and 2C. Terminal 2D was closed during the pandemic and received the same upgrade including an additional floor. Terminal 2D reopened on 18 April 2023 and some airlines have moved operations to the terminal.
Terminals 2A and 2C are closed for 18 months for renovation of the baggage system, with all airlines relocating to Terminal 1 or 2B.
Terminal 3
Terminal 3 is located 1 km (0.62 mi) from Terminal 1. It consists of a single building for arrivals and departures. The walking distance between Terminals 1 and 3 is ; however, the rail station (named "CDG Airport Terminal 1") for RER and CDGVAL trains is only at a distance of . Terminal 3 has no boarding gates; all passengers are ferried by airport buses to the aircraft stands.
Terminal 3 was voted the world's best low-cost airline terminal of 2024 by Skytrax.
Terminal usage during COVID-19 pandemic
The airport's services during the pandemic were sharply reduced. On 30 March 2020, the airport announced it would temporarily close Terminals 1 and 3, moving all remaining flights to Terminal 2. Terminal 2D was also closed during the pandemic, and only Terminals 2A, 2C, 2E, 2F and 2G remained open. At the beginning of the pandemic, airlines were grouped by alliance: Star Alliance airlines operated at Terminal 2A, where Air Canada and Ethiopian Airlines had operated before the pandemic; Oneworld airlines shifted their operations to Terminal 2C; and SkyTeam airlines operated at Terminals 2E and 2F. Between December 2020 and June 2021, only Terminals 2E and 2F were open, with non-Schengen flights operating at Terminal 2E and Schengen flights at Terminal 2F. Terminal 2B reopened on 2 June 2021 and some airlines were shifted to that concourse. Terminals 2A, 2C and 2D were then reopened for more space. Between June 2021 and December 2022, Star Alliance airlines operated at Terminals 2A (non-Schengen) and 2B (Schengen), Oneworld airlines at Terminals 2C (non-Schengen) and 2D (Schengen), and SkyTeam airlines at Terminals 2E (non-Schengen), 2F and 2G (both Schengen). However, Star Alliance flights to Asia, except those of Singapore Airlines (which remained at Terminal 2A), operated from Terminal 2E due to capacity restrictions at Terminal 2A. Terminal 3 reopened on 3 May 2022 for the use of all charter and low-cost airlines. Terminal 1 remained closed for renovation at that time; it reopened on 1 December 2022 to reduce traffic at Terminal 2.
Cancelled project for Terminal 4
Plans for a new terminal, Terminal 4, were first announced in 2014. With an estimated cost of €9bn, the new terminal was to be built around 2025, when Charles de Gaulle Airport's maximum capacity of 80 million would have been reached. When constructed, the new terminal would have been able to accommodate 30–40 million passengers per year and would have likely been built north of Terminal 2E. However, the Terminal 4 proposal was cancelled in 2021 due to reduced traffic resulting from the COVID-19 pandemic and new environmental regulations making the project unfeasible. Environmentalist groups hailed the cancellation of the project as a "great victory."
Roissypôle
Roissypôle is a complex consisting of office buildings, shopping areas, hotels, and a bus coach and RER B station within Charles de Gaulle Airport. The complex includes the head office of Air France, Continental Square, the Hilton Paris Charles de Gaulle Airport, and le Dôme building. Le Dôme includes the head office of Air France Consulting, an Air France subsidiary. Continental Square has the head office of Air France subsidiary Servair and the Air France Vaccinations Centre.
Airlines and destinations
Passenger
Cargo
Ground transportation
CDGVAL
The airport's terminals are served by a free automated shuttle rail system, consisting of two lines (CDGVAL and LISA).
CDGVAL (Charles de Gaulle Véhicule Automatique Léger, English: Charles de Gaulle light automatic vehicle) links Terminal 1, parking lot PR, Aéroport Charles de Gaulle 1 RER station (located inside Roissypôle and next to Terminal 3), Parking lot PX, and the Aéroport Charles de Gaulle 2 TGV and RER station located between Terminals 2C, 2D, 2E, and 2F
LISA (Liaison Interne Satellite Aérogare, English: Connection internal satellite terminal) links Terminal 2E to the Satellite S3 (L Gates) and Satellite S4 (M Gates).
RER
Charles de Gaulle Airport is connected to central Paris by the RER B, a hybrid suburban commuter and rapid transit line. The service has two stations on the airport grounds:
Aéroport Charles de Gaulle 1 station, located inside Roissypôle and next to Terminal 3. The station provides the fastest access to Terminal 1 via a connection on CDGVAL.
Aéroport Charles de Gaulle 2 TGV station, located between Terminals 2C, 2D, 2E, and 2F.
During most times, there are two types of services that operate on the RER B between Charles de Gaulle airport and Paris:
4 trains per hour making all stops between Charles de Gaulle airport and Saint-Rémy-lès-Chevreuse
4 trains per hour that offer non-stop express service between Charles de Gaulle airport, Aulnay-sous-Bois, and Gare du Nord, and then all stops to Massy–Palaiseau
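Assuming even spacing within each of the two patterns above, a back-of-the-envelope sketch of the combined frequency between the airport and Paris:

```python
# RER B service toward Paris, from the frequencies described above.
all_stops_tph = 4   # all-stops trains per hour
express_tph = 4     # non-stop express trains per hour

combined_tph = all_stops_tph + express_tph   # 8 trains per hour in total
avg_headway_min = 60 / combined_tph          # one departure every 7.5 minutes on average
print(combined_tph, avg_headway_min)
```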
The RER B has historically suffered from slowness and overcrowding, so French authorities are building CDG Express, a train service that will operate non-stop from Charles de Gaulle Airport to Paris Gare de l'Est railway station (next to Gare du Nord) starting in 2027. It will share some of the same tracks, and is expected to offer a 20-minute non-stop ride every half hour from 5am to midnight. The new line is expected to take airline customers off RER B, making room for local passengers, and divert to rail 15% of automobile trips to the airport.
TGV
Terminal 2 includes a TGV station on the LGV Interconnexion Est line. TGV inOui, Ouigo and Thalys high-speed services operate from the station offering services to stations across France and into Belgium and the Netherlands.
Bus
Roissybus offers non-stop express service between Opéra station of the Paris Métro and Charles de Gaulle airport, making stops at all terminals (except 2G).
"Magical Shuttle" offers non-stop express service between Disneyland Paris and Charles de Gaulle airport, making stops at Terminal 1 and Terminal 2E/2F.
RATP bus 350 offers local (all-stops) service between Gare de l'Est/Gare du Nord in Paris and Charles de Gaulle airport, all terminals (except 2G) and other areas of the airport.
RATP bus 351 offers local service between Nation station in Paris, Gallieni station, all terminals (except 2G) and other areas of the airport.
Noctilien routes N140 and N143 offer local service during the overnight hours between Gare de l'Est/Gare du Nord in Paris and Charles de Gaulle airport, all terminals (except 2G) and other areas of the airport.
Long-distance bus
BlaBlaBus and Flixbus both offer services to international and domestic destinations from the bus station outside the Aéroport Charles de Gaulle 1 RER station.
Car
Charles de Gaulle Airport is directly connected to Autoroute A1 which connects Paris and Lille.
Alternative airports
The two other airports serving Paris are Orly Airport (south of Paris, the other major airport in Paris) and Paris-Le Bourget Airport (north-northeast of Paris, for general aviation and private jets).
Several low-cost airlines also advertise Beauvais–Tillé Airport and Châlons Vatry Airport, respectively and from Paris proper, as serving "Paris" with Paris–Beauvais and Paris–Vatry. Beauvais airport has no railway connections, but there is a shuttle bus to central Paris 15 times daily.
Accidents and incidents
On 6 January 1993, Lufthansa Flight 5634 from Bremen to Paris, which was carried out under the Lufthansa CityLine brand using a Contact Air Dash 8–300 (registered D-BEAT), hit the ground short of the runway of Charles de Gaulle Airport, resulting in the death of four out of the 23 passengers on board. The four crew members survived. The accident occurred after the pilot had to abort the final approach to the airport because the runway had been closed: the aircraft immediately ahead, a Korean Air Boeing 747, had suffered a blown tire upon landing.
On 25 July 2000, a Concorde, Air France Flight 4590 from Charles de Gaulle to John F. Kennedy International Airport in New York, crashed into Les Relais Bleus Hotel in Gonesse, killing everyone on the aircraft and four people on the ground. Investigations concluded that a tire burst during take-off roll, after running over a metal strip on the runway that had detached from a McDonnell Douglas DC-10-30 operating as Continental Airlines Flight 55, which departed shortly before, leading to a ruptured fuel tank and resulting in engine failure and other damage. Concorde was conducting a charter flight for a German tour company.
On 25 May 2001, a freight-carrying Short SH36 (operated as Streamline flight 200), departing to Luton, England, collided on the runway with departing Air Liberté flight 8807, an MD-83 jet. The first officer of the SH36 was killed when the wing tip of the MD-83 tore through his side of the flight deck. The captain was slightly injured and all others aboard survived.
Statistics
The following table shows total passenger numbers.
Crab louse
The crab louse or pubic louse (Pthirus pubis) is an insect that is an obligate ectoparasite of humans, feeding exclusively on blood. The crab louse is usually found in the person's pubic hair. Although the louse cannot jump, it can also live in other areas of the body that are covered with coarse hair, such as the perianal area, the entire body (in men), and the eyelashes (in children).
Humans are the only known hosts of the crab louse, although a closely related species, Pthirus gorillae, infects gorillas. The human parasite is thought to have diverged from Pthirus gorillae approximately 3.3 million years ago. It is more distantly related to the genus Pediculus, which contains the human head and body lice and lice that affect chimpanzees and bonobos.
Description
An adult crab louse is about 1.3–2 mm long (slightly smaller than the body louse and head louse), and can be distinguished from those other species by its almost round body. Another distinguishing feature is that the second and third pairs of legs of a crab louse are much thicker than the front legs and have large claws.
Life cycle
The eggs of the crab louse are laid usually on the coarse hairs of the genital and perianal regions of the human body. The female lays about three eggs a day. The eggs take 6–8 days to hatch, and there are three nymphal stages which together take 10–17 days before the adult develops, making a total life cycle from egg to adult of 16–25 days. Adults live for up to 30 days. Crab lice feed exclusively on blood, and take a blood meal 4–5 times daily. Outside the host they can survive for 24–48 hours. Crab lice are transmitted from person to person most commonly via sexual contact, although fomites (bedding, clothing) may play a minor role in their transmission.
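The stage durations above add up to the quoted egg-to-adult range; a trivial arithmetic check:

```python
# Crab louse development times (days), from the figures above.
egg_min, egg_max = 6, 8          # incubation of the egg
nymph_min, nymph_max = 10, 17    # three nymphal stages combined

total_min = egg_min + nymph_min  # 16 days
total_max = egg_max + nymph_max  # 25 days
print(f"egg to adult: {total_min}-{total_max} days")
```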
Infestation of humans
Infestation of the eyelashes is referred to as pediculosis ciliaris or phthiriasis palpebrarum.
The main symptom of infestation with crab lice is itching, usually in the pubic-hair area, resulting from hypersensitivity to louse saliva, which can become stronger over two or more weeks following initial infestation. In some infestations, a characteristic grey-blue or slate coloration appears (maculae caeruleae) at the feeding site, which may last for several days.
The prevalence varies between 0.3% and 4.6%, with an estimated average of 2%, and increases during wars, disasters and overcrowding. Crab louse infestations are not considered a reportable condition by many health authorities, and many cases are self-treated or treated discreetly by physicians.
It has been suggested that an increasing percentage of humans removing their pubic hair, especially in women, has led to reduced crab louse populations in some parts of the world.
While crab lice are not known to transmit disease, the possibility has been raised that they may be a vector for Bartonella spp. and Acinetobacter spp., which may require further study. In infested individuals an average of a dozen lice can be found. Although they are typically found attached to hair in the pubic area, sometimes they are also found on coarse hair elsewhere on the body (for example, eyebrows, eyelashes, beard, moustache, chest, armpits, etc.). They do not generally occur on the finer hair of the scalp. Crab lice attach to pubic hair, which is thicker than other body hair, because their claws are adapted to its specific diameter and that of other thick hairs of the body. Crab louse infestations (pthiriasis) are usually spread through sexual contact and are most common in adults. The crab louse can travel up to on the body. Crab louse infestation is found worldwide, occurring in all races, ethnic groups and socio-economic levels. Occasionally they may also be transmitted by close personal contact or contact with articles such as clothing, bed linen, and towels that have been used by an infested person.
Crab lice found on the head or eyelashes of children may be an indication of sexual exposure or abuse. Symptoms of crab louse infestation in the pubic area include itching, redness and inflammation. Crab lice are not known to transmit disease; however, secondary bacterial infection can occur from scratching of the skin.
Crab louse infestation can be diagnosed by identifying the presence of active stages of the louse, as well as of eggs (nits) on the pubic hair and other hairs of the body. When infestation is diagnosed, other family members and contact persons should also be examined. A magnifying glass or dermoscope could be used for better identification.
Soil pH
Soil pH is a measure of the acidity or basicity (alkalinity) of a soil. Soil pH is a key characteristic that can be used to make informative qualitative and quantitative analyses of soil characteristics. pH is defined as the negative logarithm (base 10) of the activity of hydronium ions ( or, more precisely, ) in a solution. In soils, it is measured in a slurry of soil mixed with water (or a salt solution, such as ), and normally falls between 3 and 10, with 7 being neutral. Acid soils have a pH below 7 and alkaline soils have a pH above 7. Ultra-acidic soils (pH < 3.5) and very strongly alkaline soils (pH > 9) are rare.
Soil pH is considered a master variable in soils as it affects many chemical processes. It specifically affects plant nutrient availability by controlling the chemical forms of the different nutrients and influencing the chemical reactions they undergo. The optimum pH range for most plants is between 5.5 and 7.5; however, many plants have adapted to thrive at pH values outside this range.
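The logarithmic definition above translates directly into a one-line computation; a minimal sketch (the function name is illustrative):

```python
import math

def ph_from_activity(h_activity: float) -> float:
    """pH = negative base-10 logarithm of the hydronium-ion activity."""
    return -math.log10(h_activity)

# A neutral solution has a hydronium activity of about 1e-7 mol/L.
print(ph_from_activity(1e-7))  # 7.0, neutral
print(ph_from_activity(1e-5))  # 5.0, within the optimum range for most plants
```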
Classification of soil pH ranges
The United States Department of Agriculture Natural Resources Conservation Service classifies soil pH ranges as follows:
0 to 6 = acidic
7 = neutral
8 and above = alkaline
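The classification above maps directly to a lookup function; a minimal sketch (the function name is illustrative, and values falling between the listed bands are left unclassified, as in the table):

```python
def classify_soil_ph(ph: float) -> str:
    """USDA NRCS soil pH bands as listed above."""
    if 0 <= ph <= 6:
        return "acidic"
    if ph == 7:
        return "neutral"
    if ph >= 8:
        return "alkaline"
    return "unclassified"  # the table leaves 6-7 and 7-8 unassigned

print(classify_soil_ph(5.5))  # acidic
```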
Determining pH
Methods of determining pH include:
Observation of soil profile: certain profile characteristics can be indicators of either acid, saline, or sodic conditions. Examples are:
Poor incorporation of the organic surface layer with the underlying mineral layer – this can indicate strongly acidic soils;
The classic podzol horizon sequence, since podzols are strongly acidic: in these soils, a pale eluvial (E) horizon lies under the organic surface layer and overlies a dark B horizon;
Presence of a caliche layer indicates the presence of calcium carbonates, which are present in alkaline conditions;
Columnar structure can be an indicator of sodic conditions.
Observation of predominant flora. Calcifuge plants (those that prefer an acidic soil) include Erica, Rhododendron and nearly all other Ericaceae species, many birch (Betula), foxglove (Digitalis), gorse (Ulex spp.), and Scots Pine (Pinus sylvestris). Calcicole (lime loving) plants include ash trees (Fraxinus spp.), honeysuckle (Lonicera), Buddleja, dogwoods (Cornus spp.), lilac (Syringa) and Clematis species.
Use of an inexpensive pH testing kit, wherein a small sample of soil is mixed with an indicator solution that changes colour according to the acidity.
Use of litmus paper. A small sample of soil is mixed with distilled water, into which a strip of litmus paper is inserted. If the soil is acidic the paper turns red, if basic, blue.
Certain other fruit and vegetable pigments also change color in response to changing pH. Blueberry juice turns more reddish if acid is added, and becomes indigo if titrated with sufficient base to yield a high pH. Red cabbage is similarly affected.
Use of a commercially available electronic pH meter, in which a glass or solid-state electrode is inserted into moistened soil or a mixture (suspension) of soil and water; the pH is usually read on a digital display screen.
In the 2010s, spectrophotometric methods were developed to measure soil pH involving addition of an indicator dye to the soil extract. These compare well to glass electrode measurements but offer substantial advantages such as lack of drift, liquid junction and suspension effects.
Precise, repeatable measures of soil pH are required for scientific research and monitoring. This generally entails laboratory analysis using a standard protocol; an example of such a protocol is that in the USDA Soil Survey Field and Laboratory Methods Manual. In this document the three-page protocol for soil pH measurement includes the following sections: Application; Summary of Method; Interferences; Safety; Equipment; Reagents; and Procedure.
Factors affecting soil pH
The pH of a natural soil depends on the mineral composition of the parent material of the soil, and the weathering reactions undergone by that parent material. In warm, humid environments, soil acidification occurs over time as the products of weathering are leached by water moving laterally or downwards through the soil. In dry climates, however, soil weathering and leaching are less intense and soil pH is often neutral or alkaline.
Sources of acidity
Many processes contribute to soil acidification. These include:
Rainfall: Average rainfall has a pH of 5.6 and is moderately acidic due to dissolved atmospheric carbon dioxide (CO2) that combines with water to form carbonic acid (H2CO3). When this water flows through the soil it results in the leaching of basic cations as bicarbonates; this increases the percentage of Al3+ and H+ relative to other cations.
Root respiration and decomposition of organic matter by microorganisms release CO2, which increases the carbonic acid (H2CO3) concentration and subsequent leaching.
Plant growth: Plants take up nutrients in the form of ions, and they often take up more cations than anions. However, plants must maintain a neutral charge in their roots. In order to compensate for the extra positive charge, they will release H+ ions from the root. Some plants also exude organic acids into the soil to acidify the zone around their roots to help solubilize metal nutrients that are insoluble at neutral pH, such as iron (Fe).
Fertilizer use: Ammonium (NH4+) fertilizers react in the soil by the process of nitrification to form nitrate (NO3-), and in the process release H+ ions.
Acid rain: The burning of fossil fuels releases oxides of sulfur and nitrogen into the atmosphere. These react with water in the atmosphere to form sulfuric and nitric acid in rain.
Oxidative weathering: Oxidation of some primary minerals, especially sulfides and those containing Fe2+, generates acidity. This process is often accelerated by human activity:
Mine spoil: Severely acidic conditions can form in soils near some mine spoils due to the oxidation of pyrite.
Acid sulfate soils formed naturally in waterlogged coastal and estuarine environments can become highly acidic when drained or excavated.
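The rainfall figure quoted above (pH of about 5.6) can be reproduced with a back-of-the-envelope equilibrium calculation. The sketch below uses typical textbook values for atmospheric CO2 pressure, CO2 solubility, and the first dissociation constant of carbonic acid; all three constants are my assumptions, not taken from this article:

```python
import math

p_co2 = 4.2e-4   # atm, roughly current atmospheric CO2 (~420 ppm) -- assumed
k_h   = 3.4e-2   # mol L^-1 atm^-1, Henry's-law solubility of CO2 at 25 C -- assumed
k_a1  = 4.45e-7  # first dissociation constant of carbonic acid at 25 C -- assumed

c_acid = k_h * p_co2               # dissolved CO2 / carbonic acid, mol/L
h_ion = math.sqrt(k_a1 * c_acid)   # weak-acid approximation for [H+]
rain_ph = -math.log10(h_ion)
print(round(rain_ph, 2))           # comes out near 5.6
```

The weak-acid square-root approximation is adequate here because carbonic acid is only slightly dissociated at these concentrations.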
Sources of alkalinity
Total soil alkalinity increases with:
Weathering of silicate, aluminosilicate and carbonate minerals containing base cations such as Na+, K+, Ca2+ and Mg2+;
Addition of silicate, aluminosilicate and carbonate minerals to soils; this may happen by deposition of material eroded elsewhere by wind or water, or by mixing of the soil with less weathered material (such as the addition of limestone to acid soils);
Addition of water containing dissolved bicarbonates (as occurs when irrigating with high-bicarbonate waters).
The accumulation of alkalinity in a soil (as carbonates and bicarbonates of Na, K, Ca and Mg) occurs when there is insufficient water flowing through the soils to leach soluble salts. This may be due to arid conditions, or poor internal soil drainage; in these situations most of the water that enters the soil is transpired (taken up by plants) or evaporates, rather than flowing through the soil.
The soil pH usually increases when the total alkalinity increases, but the balance of the added cations also has a marked effect on the soil pH. For example, increasing the amount of sodium in an alkaline soil tends to induce dissolution of calcium carbonate, which increases the pH. Calcareous soils may vary in pH from 7.0 to 9.5, depending on the degree to which Na+ or Ca2+ dominates the soluble cations.
Effect of soil pH on plant growth
Acid soils
High levels of aluminium occur near mining sites; small amounts of aluminium are released to the environment by coal-fired power plants and incinerators. Aluminium in the air is washed out by rain or normally settles out, but small particles of aluminium remain in the air for a long time.
Acidic precipitation is the main natural factor mobilizing aluminium from natural sources and the main reason for the environmental effects of aluminium; however, the main source of aluminium in salt water and freshwater is the industrial processes that also release aluminium into the air. Plants grown in acid soils can experience a variety of stresses including aluminium (Al), hydrogen (H), and/or manganese (Mn) toxicity, as well as nutrient deficiencies of calcium (Ca) and magnesium (Mg).
Aluminium toxicity is the most widespread problem in acid soils. Aluminium is present in all soils to varying degrees, but dissolved Al3+ is toxic to plants; Al3+ is most soluble at low pH; above pH 5.0, there is little Al in soluble form in most soils. Aluminium is not a plant nutrient, and as such, is not actively taken up by the plants, but enters plant roots passively through osmosis. Aluminium tolerance studies have been conducted in different plant species to determine tolerance thresholds, the concentrations at which toxicity appears, and plant function upon exposure; these studies show that aluminium limits growth in various parts of the world. Aluminium inhibits root growth; lateral roots and root tips become thickened and roots lack fine branching; root tips may turn brown. In the root, the initial effect of Al3+ is the inhibition of the expansion of the cells of the rhizodermis, leading to their rupture; thereafter it is known to interfere with many physiological processes including the uptake and transport of calcium and other essential nutrients, cell division, cell wall formation, and enzyme activity.
Proton (H+ ion) stress can also limit plant growth. The proton pump, H+-ATPase, of the plasmalemma of root cells works to maintain the near-neutral pH of their cytoplasm. A high proton activity (pH within the range 3.0–4.0 for most plant species) in the external growth medium overcomes the capacity of the cell to maintain the cytoplasmic pH and growth shuts down.
In soils with a high content of manganese-containing minerals, Mn toxicity can become a problem at pH 5.6 and lower, because manganese, like aluminium, becomes increasingly soluble as pH drops. Manganese is an essential plant nutrient, so plants transport Mn into leaves. Classic symptoms of Mn toxicity are crinkling or cupping of leaves.
Nutrient availability in relation to soil pH
Soil pH affects the availability of some plant nutrients:
As discussed above, aluminium toxicity has direct effects on plant growth; however, by limiting root growth, it also reduces the availability of plant nutrients. Because roots are damaged, nutrient uptake is reduced, and deficiencies of the macronutrients (nitrogen, phosphorus, potassium, calcium and magnesium) are frequently encountered in very strongly acidic to ultra-acidic soils (pH<5.0). As soil pH decreases, soluble aluminium levels increase; aluminium-damaged roots take up less water, limiting photosynthesis, and affected trees may die. The trees can also develop a yellowish colour on their leaves and veins.
Molybdenum availability is increased at higher pH; this is because the molybdate ion is more strongly sorbed by clay particles at lower pH.
Zinc, iron, copper and manganese show decreased availability at higher pH (increased sorption at higher pH).
The effect of pH on phosphorus availability varies considerably, depending on soil conditions and the crop in question. The prevailing view in the 1940s and 1950s was that P availability was maximized near neutrality (soil pH 6.5–7.5), and decreased at higher and lower pH. Interactions of phosphorus with pH in the moderately to slightly acidic range (pH 5.5–6.5) are, however, far more complex than is suggested by this view. Laboratory tests, glasshouse trials and field trials have indicated that increases in pH within this range may increase, decrease, or have no effect on P availability to plants.
Water availability in relation to soil pH
Strongly alkaline soils are sodic and dispersive, with slow infiltration, low hydraulic conductivity and poor available water capacity. Plant growth is severely restricted because aeration is poor when the soil is wet; while in dry conditions, plant-available water is rapidly depleted and the soils become hard and cloddy (high soil strength). In such soils, the higher the pH, the less water is available to the plants and organisms that depend on it; restricted water uptake in turn limits photosynthesis.
Many strongly acidic soils, on the other hand, have strong aggregation, good internal drainage, and good water-holding characteristics. However, for many plant species, aluminium toxicity severely limits root growth, and moisture stress can occur even when the soil is relatively moist.
Plant pH preferences
In general terms, different plant species are adapted to soils of different pH ranges. For many species, the suitable soil pH range is fairly well known. Online databases of plant characteristics, such as USDA PLANTS and Plants for a Future can be used to look up the suitable soil pH range of a wide range of plants. Documents like Ellenberg's indicator values for British plants can also be consulted.
However, a plant may be intolerant of a particular pH in some soils as a result of a particular mechanism, and that mechanism may not apply in other soils. For example, a soil low in molybdenum may not be suitable for soybean plants at pH 5.5, but soils with sufficient molybdenum allow optimal growth at that pH. Similarly, some calcifuges (plants intolerant of high-pH soils) can tolerate calcareous soils if sufficient phosphorus is supplied. Another confounding factor is that different varieties of the same species often have different suitable soil pH ranges. Plant breeders can use this to breed varieties that can tolerate conditions that are otherwise considered unsuitable for that species – examples are projects to breed aluminium-tolerant and manganese-tolerant varieties of cereal crops for food production in strongly acidic soils.
The table below gives suitable soil pH ranges for some widely cultivated plants as found in the USDA PLANTS Database. Some species (like Pinus radiata and Opuntia ficus-indica) tolerate only a narrow range in soil pH, whereas others (such as Vetiveria zizanioides) tolerate a very wide pH range.
In natural or near-natural plant communities, the various pH preferences of plant species (or ecotypes) at least partly determine the composition and biodiversity of vegetation. While both very low and very high pH values are detrimental to plant growth, there is an increasing trend of plant biodiversity along the range from extremely acidic (pH 3.5) to strongly alkaline (pH 9) soils, i.e. there are more calcicole than calcifuge species, at least in terrestrial environments. Although widely reported and supported by experimental results, the observed increase of plant species richness with pH is still in need of a clearcut explanation. Competitive exclusion between plant species with overlapping pH ranges most probably contributes to the observed shifts of vegetation composition along pH gradients.
pH effects on soil biota
Soil biota (soil microflora, soil animals) are sensitive to soil pH, either directly upon contact or after soil ingestion, or indirectly through the various soil properties to which pH contributes (e.g. nutrient status, metal toxicity, humus form). According to the various physiological and behavioural adaptations of soil biota, the species composition of soil microbial and animal communities varies with soil pH. Along altitudinal gradients, changes in the species distribution of soil animal and microbial communities can be at least partly ascribed to variation in soil pH. The shift from toxic to non-toxic forms of aluminium around pH 5 marks the passage from acid-tolerance to acid-intolerance, with few changes in the species composition of soil communities above this threshold, even in calcareous soils. Soil animals exhibit distinct pH preferences when allowed to exert a choice along a range of pH values, suggesting that the various field distributions of soil organisms, motile microbes included, could at least partly result from active movement along pH gradients. As with plants, competition between acido-tolerant and acido-intolerant soil-dwelling organisms is suspected to play a role in the shifts in species composition observed along pH ranges.
The opposition between acido-tolerance and acido-intolerance is commonly observed at species level within a genus or at genus level within a family, but it also occurs at much higher taxonomic ranks, such as between soil fungi and bacteria, here too with a strong involvement of competition.
It has been suggested that soil organisms more tolerant of soil acidity, and thus living mainly in soils at pH less than 5, were more primitive than those intolerant of soil acidity. A cladistic analysis on the collembolan genus Willemia showed that tolerance to soil acidity was correlated with tolerance of other stress factors and that stress tolerance was an ancestral character in this genus. However the generality of these findings remains to be established.
At low pH, the oxidative stress induced by aluminium (Al3+) affects soil animals whose bodies are not protected by a thick chitinous exoskeleton, as in arthropods, and which are thus in more direct contact with the soil solution, e.g. protists, nematodes, rotifers (microfauna), enchytraeids (mesofauna) and earthworms (macrofauna).
Effects of pH on soil biota can be mediated by the various functional interactions of soil foodwebs. It has been shown experimentally that the collembolan Heteromurus nitidus, commonly living in soils at pH higher than 5, could be cultured in more acid soils provided that predators were absent. Its attraction to earthworm excreta (mucus, urine, faeces), mediated by ammonia emission, provides food and shelter within earthworm burrows in mull humus forms associated with less acid soils.
Effects of soil biota on soil pH
Soil biota affect soil pH directly through excretion, and indirectly by acting on the physical environment. Many soil fungi, although not all of them, acidify the soil by excreting oxalic acid, a product of their respiratory metabolism. Oxalic acid precipitates calcium, forming insoluble crystals of calcium oxalate and thus depriving the soil solution of this necessary element. Conversely, earthworms exert a buffering effect on soil pH through their excretion of mucus, which is endowed with amphoteric properties.
By mixing organic matter with mineral matter, in particular clay particles, and, for some of them, by adding mucus as a glue, burrowing soil animals, e.g. fossorial rodents, moles, earthworms, termites, some millipedes and fly larvae, contribute to decreasing the natural acidity of raw organic matter, as observed in mull humus forms.
Changing soil pH
Increasing pH of acidic soil
Finely ground agricultural lime is often applied to acid soils to increase soil pH (liming). The amount of limestone or chalk needed to change pH is determined by the mesh size of the lime (how finely it is ground) and the buffering capacity of the soil. A high mesh size (60 mesh = 0.25 mm; 100 mesh = 0.149 mm) indicates a finely ground lime that will react quickly with soil acidity. The buffering capacity of a soil depends on the clay content of the soil, the type of clay, and the amount of organic matter present, and may be related to the soil cation exchange capacity. Soils with high clay content will have a higher buffering capacity than soils with little clay, and soils with high organic matter will have a higher buffering capacity than those with low organic matter. Soils with higher buffering capacity require a greater amount of lime to achieve an equivalent change in pH. The buffering of soil pH is often directly related to the quantity of aluminium in soil solution and taking up exchange sites as part of the cation exchange capacity. This aluminium can be measured in a soil test in which it is extracted from the soil with a salt solution, and then is quantified with a laboratory analysis. Then, using the initial soil pH and the aluminium content, the amount of lime needed to raise the pH to a desired level can be calculated.
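The last step described above — turning measured exchangeable acidity into a lime recommendation — can be sketched numerically. The function below is an illustrative back-of-the-envelope calculation only (the function name, default incorporation depth, and bulk density are my assumptions); real lime recommendations are based on soil-test calibrations and also account for lime purity, fineness, and the target pH.

```python
def lime_requirement_t_per_ha(exch_acidity_cmolc_per_kg: float,
                              depth_m: float = 0.2,
                              bulk_density_t_per_m3: float = 1.3) -> float:
    """Tonnes of pure CaCO3 per hectare needed to neutralize the
    exchangeable acidity measured in the given soil layer."""
    soil_mass_kg = depth_m * bulk_density_t_per_m3 * 10_000 * 1_000  # kg soil per ha
    charge_mol = exch_acidity_cmolc_per_kg * 0.01 * soil_mass_kg     # mol of charge
    grams_caco3 = charge_mol * 50.0  # CaCO3 neutralizes 1 mol charge per 50 g
    return grams_caco3 / 1e6         # grams -> tonnes

# 1 cmolc/kg of exchangeable acidity over a 20 cm layer -> about 1.3 t CaCO3/ha
print(round(lime_requirement_t_per_ha(1.0), 2))
```

The calculation shows why high-buffering soils need more lime: a soil holding twice the exchangeable acidity on its exchange sites needs twice the CaCO3 for the same pH target.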
Amendments other than agricultural lime that can be used to increase the pH of soil include wood ash, industrial calcium oxide (burnt lime), magnesium oxide, basic slag (calcium silicate), and oyster shells. These products increase the pH of soils through various acid–base reactions. Calcium silicate neutralizes active acidity in the soil by reacting with H+ ions to form monosilicic acid (H4SiO4), a neutral solute.
Decreasing the pH of alkaline soil
The pH of an alkaline soil can be reduced by adding acidifying agents or acidic organic materials. Elemental sulfur (90–99% S) has been used for this purpose; it slowly oxidises in the soil to form sulfuric acid. Acidifying fertilizers, such as ammonium sulfate, ammonium nitrate and urea, can help to reduce the pH of soil because ammonium oxidises to form nitric acid. Acidifying organic materials include peat or sphagnum peat moss.
However, in high-pH soils with a high calcium carbonate content (more than 2%), attempting to reduce the pH with acids can be very costly and ineffective. In such cases, it is often more efficient to add phosphorus, iron, manganese, copper, or zinc instead because deficiencies of these nutrients are the most common reasons for poor plant growth in calcareous soils.
| Physical sciences | Soil science | Earth science |
106235 | https://en.wikipedia.org/wiki/Pipette | Pipette | A pipette (sometimes spelled as pipet) is a type of laboratory tool commonly used in chemistry and biology to transport a measured volume of liquid, often as a media dispenser. Pipettes come in several designs for various purposes with differing levels of accuracy and precision, from single piece glass pipettes to more complex adjustable or electronic pipettes. Many pipette types work by creating a partial vacuum above the liquid-holding chamber and selectively releasing this vacuum to draw up and dispense liquid. Measurement accuracy varies greatly depending on the instrument.
History
The first simple pipettes were made of glass, such as Pasteur pipettes. Large pipettes continue to be made of glass; others are made of squeezable plastic for situations where an exact volume is not required.
The first micropipette was patented in 1957 by Dr Heinrich Schnitger (Marburg, Germany). The founder of the company Eppendorf, Dr. Heinrich Netheler, inherited the rights and started the commercial production of micropipettes in 1961.
The adjustable micropipette is a Wisconsin invention developed through interactions among several people, primarily inventor Warren Gilson and Henry Lardy, a professor of biochemistry at the University of Wisconsin–Madison.
Nomenclature
Although specific names exist for each type of pipette, in practice, any type can be referred to as a "pipette". Pipettes that dispense less than 1000 μL are sometimes distinguished as micropipettes.
The terms "pipette" and "pipet" are used interchangeably despite minor historical differences in their usage.
Common pipettes
Air displacement micropipettes
Air displacement micropipettes are a type of adjustable micropipette that deliver a measured volume of liquid, typically between about 0.1 μL and 1,000 μL (1 mL) depending on size. These pipettes require disposable tips that come in contact with the fluid.
These pipettes operate by piston-driven air displacement. A vacuum is generated by the vertical travel of a metal or ceramic piston within an airtight sleeve. As the piston moves upward, driven by the depression of the plunger, a vacuum is created in the space left vacant by the piston. The liquid around the tip moves into this vacuum (along with the air in the tip) and can then be transported and released as necessary. These pipettes are capable of being very precise and accurate. However, since they rely on air displacement, they are subject to inaccuracies caused by the changing environment, particularly temperature and user technique. For these reasons, this equipment must be carefully maintained and calibrated, and users must be trained to exercise correct and consistent technique.
The micropipette was invented and patented in 1960 by Dr. Heinrich Schnitger in Marburg, Germany. Afterwards, the co-founder of the biotechnology company Eppendorf, Dr. Heinrich Netheler, inherited the rights and initiated the global and general use of micropipettes in labs. In 1972, the adjustable micropipette was invented at the University of Wisconsin-Madison by several people, primarily Warren Gilson and Henry Lardy.
Types of air displacement pipettes include:
adjustable or fixed
volume handled
Single-channel, multi-channel or repeater
conical tips or cylindrical tips
standard or locking
manual or electronic
manufacturer
Irrespective of the brand or expense of the pipette, every micropipette manufacturer recommends checking the calibration at least every six months if the pipette is used regularly. Companies in the drug or food industries are required to calibrate their pipettes quarterly (every three months). Schools conducting chemistry classes may calibrate annually, while forensics and research laboratories, where a great deal of testing is commonplace, perform monthly calibrations.
Electronic pipette
To minimize the possible development of musculoskeletal disorders due to repetitive pipetting, electronic pipettes commonly replace the mechanical version.
Positive displacement pipette
These are similar to air displacement pipettes, but are less commonly used; they serve to avoid contamination and to handle volatile or viscous substances at small volumes, such as DNA. The major difference is that the disposable tip is a plastic microsyringe, composed of a capillary and a piston (movable inner part) which directly displaces the liquid.
Volumetric pipettes
Volumetric pipettes, or bulb pipettes, allow the user to measure a volume of solution extremely precisely (precision of four significant figures). These pipettes have a large bulb with a long narrow portion above, bearing a single graduation mark, as each is calibrated for a single volume (like a volumetric flask). Typical volumes are 20, 50, and 100 mL. Volumetric pipettes are commonly used to make laboratory solutions from a base stock as well as to prepare solutions for titration.
Graduated pipettes
Graduated pipettes are a type of macropipette consisting of a long tube with a series of graduations, as on a graduated cylinder or burette, to indicate different calibrated volumes. They also require a source of vacuum; in the early days of chemistry and biology, the mouth was used. The safety regulations included the statement: "Never pipette by mouth KCN, NH3, strong acids, bases and mercury salts". Some pipettes were manufactured with two safety bulbs between the mouthpiece and the solution level line, to protect the chemist from accidentally swallowing the solution.
Pasteur pipette
Pasteur pipettes are plastic or glass pipettes used to transfer small amounts of liquids, but are not graduated or calibrated for any particular volume. The bulb is separate from the pipette body. Pasteur pipettes are also called teat pipettes, droppers, eye droppers and chemical droppers.
Transfer pipettes
Transfer pipettes, also known as Beral pipettes, are similar to Pasteur pipettes but are made from a single piece of plastic and their bulb can serve as the liquid-holding chamber.
Specialized pipettes
Pipetting syringe
Pipetting syringes are hand-held devices that combine the functions of volumetric (bulb) pipettes, graduated pipettes, and burettes. They are calibrated to ISO volumetric A grade standards. A glass or plastic pipette tube is used with a thumb-operated piston and PTFE seal which slides within the pipette in a positive displacement operation. Such a device can be used on a wide variety of fluids (aqueous, viscous, and volatile fluids; hydrocarbons; essential oils; and mixtures) in volumes between 0.5 mL and 25 mL. This arrangement provides improvements in precision, handling safety, reliability, economy, and versatility. No disposable tips or pipetting aids are needed with the pipetting syringe.
Van Slyke pipette
The Van Slyke pipette, invented by Donald Dexter Van Slyke, is a graduated pipette commonly used in medical technology with serologic pipettes for volumetric analysis.
Ostwald–Folin pipette
The Ostwald–Folin pipette, developed by Wilhelm Ostwald and refined by Otto Folin, is a type of volumetric pipette used to measure viscous fluids such as whole blood or serum.
Winkler–Dennis gas combustion pipette
The Winkler–Dennis gas combustion pipette, developed by Clemens Winkler and refined by Louis Munroe Dennis, is an apparatus for the controlled reaction of liquids under a mild electric current and a supply of oxygen.
Glass micropipette
Glass micropipettes are fabricated in a micropipette puller and are typically used in a micromanipulator. These are used to physically interact with microscopic samples, such as in the procedures of microinjection and patch clamping. Most micropipettes are made of borosilicate, aluminosilicate or quartz with many types and sizes of glass tubing being available. Each of these compositions has unique properties which will determine suitable applications.
Microfluidic pipette
A recent introduction into the micropipette field integrates the versatility of microfluidics into a freely positionable pipette platform. At the tip of the device, a localized flow zone is created, which allows for constant control of the nanolitre environment directly in front of the pipette. The pipettes are made from polydimethylsiloxane (PDMS), which is formed using reactive injection molding. Interfacing these pipettes using pneumatics enables multiple solutions to be loaded and switched on demand, with solution exchange times of 100 ms. This type of pipette was invented by Alar Ainla and is currently situated in the Biophysical Technology Lab at Chalmers University of Technology in Sweden.
Extremely low volume pipettes
A zeptolitre pipette has been developed at Brookhaven National Laboratory. The pipette is made of a carbon shell, within which is an alloy of gold-germanium. The pipette was used to learn about how crystallization takes place.
Pipette aids
A variety of devices have been developed for safer, easier, and more efficient pipetting. For example, a motorized pipette controller can aid liquid aspiration or dispensing using volumetric pipettes or graduated pipettes; a tablet can interact in real-time with the pipette and guide a user through a protocol; and a pipette station can help to control the pipette tip immersion depth and improve ergonomics.
Robots
Pipette robots are capable of manipulating pipettes just as humans would do.
Calibration
Pipette recalibration is an important consideration in laboratories using these devices. It is the act of determining the accuracy of a measuring device by comparison with NIST-traceable reference standards. Pipette calibration is essential to ensure that the instrument is working according to expectations and as per the defined regimes or work protocols. Pipette calibration is considered a complex affair because it involves many procedural elements, several calibration protocol options, and a range of pipette makes and models to consider.
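A common way to perform such a check is gravimetric: repeated deliveries of distilled water are weighed, and mass is converted to volume with a temperature-dependent Z factor, as tabulated in pipette-calibration standards. The sketch below is illustrative only — the 1.0032 µL/mg factor corresponds to roughly 21.5 °C, and the function names and sample weighings are my own assumptions, not data from this article.

```python
import statistics

Z_UL_PER_MG = 1.0032  # water mass-to-volume correction at ~21.5 C -- assumed value

def gravimetric_check(masses_mg, nominal_ul):
    """Convert weighed water masses to volumes; report the mean delivered
    volume, the systematic error (%) and the coefficient of variation (%)."""
    volumes = [m * Z_UL_PER_MG for m in masses_mg]
    mean_v = statistics.mean(volumes)
    systematic_pct = 100 * (mean_v - nominal_ul) / nominal_ul
    cv_pct = 100 * statistics.stdev(volumes) / mean_v
    return mean_v, systematic_pct, cv_pct

# Five weighings of a pipette set to 100 uL (hypothetical data)
mean_v, sys_err, cv = gravimetric_check([99.2, 99.6, 99.4, 99.5, 99.3], 100.0)
```

The systematic error measures accuracy (how far the mean is from the set volume) while the coefficient of variation measures precision (how repeatable the deliveries are); a pipette can fail on either count independently.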
Posture and injuries
Proper pipetting posture is the most important element in establishing good ergonomic work practices. During repetitive tasks such as pipetting, maintaining body positions that provide a maximum of strength with the least amount of muscular stress is important to minimize the risk of injury. A number of common pipetting techniques have been identified as potentially hazardous due to biomechanical stress factors. Recommendations for corrective pipetting actions, made by various US governmental agencies and ergonomics experts, are presented below.
Winged elbow pipetting
Technique: elevated, “winged elbow”. The average human arm weighs approximately 6% of the total body weight. Holding a pipette with the elbow extended (winged elbow) in a static position places the weight of the arm onto the neck and shoulder muscles and reduces blood flow, thereby causing stress and fatigue. Muscle strength is also substantially reduced as arm flexion is increased.
Corrective action: Position elbows as close to the body as possible, with arms and wrists extended in straight, neutral positions (handshake posture). Keep work items within easy reach to limit extension and elevation of arm. Arm/hand elevation should not exceed 12” from the worksurface.
Over rotated arm pipetting
Technique: Over-rotated forearm and wrist. Rotation of the forearm in a supinated position (palm up) and/or wrist flexion increases the fluid pressure in the carpal tunnel. This increased pressure can result in compression of soft tissues like nerves, tendons and blood vessels, causing numbness in the thumb and fingers.
Corrective action: Forearm rotation angle near 45° pronation (palm down) should be maintained to minimize carpal tunnel pressure during repetitive activity.
Clenched fist pipetting
Technique: Tight grip (clenched fist). Hand fatigue results from continuous contact between a hard object and sensitive tissues. This occurs when a firm grip is needed to hold a pipette, such as when jamming on a tip, and results in diminished hand strength.
Corrective action: Use pipettes with hooks or other attributes that allow a relaxed grip and/or alleviate need to constantly grip the pipette. This will reduce tension in the arm, wrist and hand.
Thumb plunger pipetting
Technique: Concentrated area of force (contact stress between a hard object and sensitive tissues). Some devices have plungers and buttons with limited surface areas, requiring a great deal of force to be expended by the thumb or other finger in a concentrated area.
Corrective action: Use pipettes with large contoured or rounded plungers and buttons. This will disperse the pressure used to operate the pipette across the entire surface of the thumb or finger, reducing contact pressure to acceptable levels.
Incorrect posture can have a strong impact on available strength.
Arm strength pipetting
Technique: elevated arm. Muscle strength is substantially reduced when arm flexion is increased.
Corrective action: Keep work items within easy reach to limit extension and elevation of arm. Arm/hand elevation should also not exceed 12” from the worksurface.
Elbow strength pipetting
Technique: Elbow flexion or abduction. Arm strength diminishes as elbow posture is deviated from a 90° position.
Corrective action: Keep forearm and hand elevation within 12” of the worksurface, which will allow the elbow to remain near a 90° position.
Unlike traditional axial pipettes, ergonomic pipettes can improve posture and help prevent common pipetting injuries such as carpal tunnel syndrome, tendinitis and other musculoskeletal disorders. To be "ergonomically correct", significant changes to traditional pipetting postures are essential: minimizing forearm and wrist rotations, keeping a low arm and elbow height, and relaxing the shoulders and upper arms.
Pipette stand
Typically, pipettes are stored vertically on holders called pipette stands. In the case of electronic pipettes, such stands can recharge their batteries. The most advanced pipette stands can directly control electronic pipettes.
Alternatives
An alternative technology, especially for transferring small volumes (in the microlitre and nanolitre range), is acoustic droplet ejection.
Aqueous solution
An aqueous solution is a solution in which the solvent is water. It is usually shown in chemical equations by appending (aq) to the relevant chemical formula. For example, a solution of table salt, also known as sodium chloride (NaCl), in water would be represented as NaCl(aq). The word aqueous (which comes from aqua) means pertaining to, related to, similar to, or dissolved in, water. As water is an excellent solvent and is also naturally abundant, it is a ubiquitous solvent in chemistry. Since water is frequently used as the solvent in experiments, the word solution refers to an aqueous solution, unless the solvent is specified.
A non-aqueous solution is a solution in which the solvent is a liquid, but is not water.
Characteristics
Substances that are hydrophobic ('water-fearing') do not dissolve well in water, whereas those that are hydrophilic ('water-friendly') do. An example of a hydrophilic substance is sodium chloride. In an aqueous solution the hydrogen ions (H+) and hydroxide ions (OH−) are in Arrhenius balance ([H+][OH−] = Kw = 1 × 10−14 at 298 K).
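The ion-product relation above lends itself to a quick numeric check. The sketch below (illustrative, not from the source; the function names are my own) computes the hydroxide concentration implied by a given hydrogen-ion concentration via Kw, along with the pH:

```python
# Sketch of the water ion-product relation [H+][OH-] = Kw = 1e-14 at 298 K.
# Function names are illustrative, not standard API.
import math

KW = 1e-14  # ion product of water at 298 K (mol^2 / L^2)

def hydroxide_from_hydrogen(h_conc):
    """Return the [OH-] implied by a given [H+] via Kw."""
    return KW / h_conc

def ph(h_conc):
    """pH = -log10([H+])."""
    return -math.log10(h_conc)

# Neutral water: [H+] = [OH-] = 1e-7 M
print(hydroxide_from_hydrogen(1e-7))  # ≈ 1e-7
print(ph(1e-7))                       # ≈ 7.0
```

For an acidic solution with [H+] = 1e-3 M, the same relation gives [OH−] = 1e-11 M, illustrating how the two concentrations trade off at fixed Kw.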
Acids and bases are aqueous solutions, as part of their Arrhenius definitions. An example of an Arrhenius acid is hydrogen chloride (HCl), because it dissociates to release a hydrogen ion when dissolved in water. Sodium hydroxide (NaOH) is an Arrhenius base because it dissociates to release a hydroxide ion when dissolved in water.
Aqueous solutions may contain, especially in the alkaline zone or subjected to radiolysis, hydrated atomic hydrogen and hydrated electrons.
Electrolytes
Aqueous solutions that conduct electric current efficiently contain strong electrolytes, while ones that conduct poorly are considered to have weak electrolytes. Those strong electrolytes are substances that are completely ionized in water, whereas the weak electrolytes exhibit only a small degree of ionization in water. The ability for ions to move freely through the solvent is a characteristic of an aqueous strong electrolyte solution. The solutes in a weak electrolyte solution are present as ions, but only in a small amount.
Nonelectrolytes are substances that dissolve in water yet maintain their molecular integrity (do not dissociate into ions). Examples include sugar, urea, glycerol, and methylsulfonylmethane (MSM).
Reactions
Reactions in aqueous solutions are usually metathesis reactions. Metathesis reaction is another term for double displacement: two dissolved ionic compounds exchange partners, so that each cation ends up bonded to the anion of the other compound.
A common metathesis reaction in aqueous solutions is a precipitation reaction. This reaction occurs when two aqueous strong electrolyte solutions mix and produce an insoluble solid, also known as a precipitate. The ability of a substance to dissolve in water is determined by whether the substance can match or exceed the strong attractive forces that water molecules generate between themselves. If the substance lacks the ability to dissolve in water, the molecules form a precipitate.
When writing the equations of precipitation reactions, it is essential to determine the precipitate. To determine the precipitate, one must consult a chart of solubility. Soluble compounds are aqueous, while insoluble compounds are the precipitate. There may not always be a precipitate. Complete ionic equations and net ionic equations are used to show dissociated ions in metathesis reactions. When performing calculations regarding the reacting of one or more aqueous solutions, in general one must know the concentration, or molarity, of the aqueous solutions.
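Since such calculations hinge on molarity (moles of solute per litre of solution), a minimal sketch may help; the function names and the NaCl dilution figures below are illustrative assumptions, not from the source.

```python
# Sketch: molarity bookkeeping for aqueous solutions (illustrative values).
def moles(molarity, volume_l):
    """n = M * V: moles of solute in a solution."""
    return molarity * volume_l

def molarity_after_mixing(m1, v1, m2, v2):
    """Molarity of a solute when two solutions of the same solute are combined,
    assuming volumes are additive."""
    return (moles(m1, v1) + moles(m2, v2)) / (v1 + v2)

# 0.5 L of 1.0 M NaCl mixed with 0.5 L of pure water (0 M):
print(molarity_after_mixing(1.0, 0.5, 0.0, 0.5))  # 0.5
```

The same moles = M × V bookkeeping underlies stoichiometry for precipitation reactions: the limiting reagent is whichever solution supplies fewer moles of its reacting ion.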
Test tube
A test tube, also known as a culture tube or sample tube, is a common piece of laboratory glassware consisting of a finger-like length of glass or clear plastic tubing, open at the top and closed at the bottom.
Test tubes are usually placed in special-purpose racks.
Types and usage
Chemistry
Test tubes intended for general chemical work are usually made of glass, for its relative resistance to heat. Tubes made from expansion-resistant glasses, mostly borosilicate glass or fused quartz, can withstand high temperatures up to several hundred degrees Celsius.
Chemistry tubes are available in a multitude of lengths and widths, typically from 10 to 20 mm wide and 50 to 200 mm long. The top often features a flared lip to aid pouring out the contents.
A chemistry test tube typically has a flat bottom, a round bottom, or a conical bottom. Some test tubes are made to accept a ground glass stopper or a screw cap. They are often provided with a small ground glass or white glaze area near the top for labelling with a pencil.
Test tubes are widely used by chemists to handle chemicals, especially for qualitative experiments and assays. Their rounded bottom and vertical sides reduce mass loss when pouring, make them easier to wash out, and allow convenient monitoring of the contents. The long, narrow neck of a test tube slows the spreading of gases into the environment.
Test tubes are convenient containers for heating small amounts of liquids or solids with a Bunsen burner or alcohol burner. The tube is usually held by its neck with a clamp or tongs. By tilting the tube, the bottom can be heated to hundreds of degrees in the flame, while the neck remains relatively cool, possibly allowing vapours to condense on its walls. A boiling tube is a large test tube intended specifically for boiling liquids.
A test tube filled with water and upturned into a water-filled beaker is often used to capture gases, e.g. in electrolysis demonstrations.
A test tube with a stopper is often used for temporary storage of chemical or biological samples.
Biosciences
Culture tubes are test tubes used in biology and related sciences for handling and culturing all kinds of live organisms, such as molds, bacteria, seedlings, and plant cuttings. Some racks for culture tubes are designed to hold the tubes in a nearly horizontal position, so as to maximize the surface of the culture medium inside.
Culture tubes for biology are usually made of clear plastic (such as polystyrene or polypropylene) by injection molding and are often discarded after use. Plastic test tubes with a screwtop cap are often called "Falcon tubes" after a line manufactured by Becton Dickinson.
Some sources consider that the presence of a lip is what distinguishes a test tube from a culture tube.
Clinical medicine
In clinical medicine, sterile test tubes with air removed, called vacutainers, are used to collect and hold samples of physiological fluids such as blood, urine, pus, and synovial fluid. These tubes are commonly sealed with a rubber stopper and often have a specific additive placed in the tube with the stopper color indicating the additive. For example, a blue-top tube is a 5 ml test tube containing sodium citrate as an anticoagulant, used to collect blood for coagulation and glucose-6-phosphate dehydrogenase testing. Small vials used in medicine may have a snap-top (also called a hinge cap) molded as part of the vial.
Other uses
Test tubes are sometimes put to casual uses outside of lab environments, e.g. as flower vases, glassware for certain weak shots, or containers for spices. They can also be used for raising queen ants during their first months of development.
Variants
Boiling tube
A boiling tube is a small cylindrical vessel used to strongly heat substances in the flame of a Bunsen burner. A boiling tube is essentially a scaled-up test tube, being about 50% larger.
They are designed to be wide enough to allow substances to boil violently as opposed to a test tube, which is too narrow; a boiling liquid can explode out of the end of test tubes when they are heated, as there is no room for bubbles of gas to escape independently of the surrounding liquid. This phenomenon is called bumping.
Ignition tube
An ignition tube is used in much the same way as a boiling tube, except it is not as large and thick-walled. It is primarily used to hold small quantities of substances which are undergoing direct heating by a Bunsen burner or other heat source. This type of tube is used in the sodium fusion test.
Ignition tubes are often difficult to clean due to the small bore. When used to heat substances strongly, some char may stick to the walls as well. They are usually disposable.
Centrifuge
A centrifuge is a device that uses centrifugal force to subject a specimen to a specified constant force, for example, to separate various components of a fluid. This is achieved by spinning the fluid at high speed within a container, thereby separating fluids of different densities (e.g. cream from milk) or liquids from solids. It works by causing denser substances and particles to move outward in the radial direction. At the same time, objects that are less dense are displaced and moved to the centre. In a laboratory centrifuge that uses sample tubes, the radial acceleration causes denser particles to settle to the bottom of the tube, while low-density substances rise to the top. A centrifuge can be a very effective filter that separates contaminants from the main body of fluid.
Industrial scale centrifuges are commonly used in manufacturing and waste processing to sediment suspended solids, or to separate immiscible liquids. An example is the cream separator found in dairies. Very high speed centrifuges and ultracentrifuges able to provide very high accelerations can separate fine particles down to the nano-scale, and molecules of different masses. Large centrifuges are used to simulate high gravity or acceleration environments (for example, high-G training for test pilots). Medium-sized centrifuges are used in washing machines and at some swimming pools to draw water out of fabrics. Gas centrifuges are used for isotope separation, such as to enrich nuclear fuel for fissile isotopes.
History
English military engineer Benjamin Robins (1707–1751) invented a whirling arm apparatus to determine drag. In 1864, Antonin Prandtl proposed the idea of a dairy centrifuge to separate cream from milk. The idea was subsequently put into practice by his brother, Alexander Prandtl, who made improvements to his brother's design, and exhibited a working butterfat extraction machine in 1875.
Types
A centrifuge machine can be described as a machine with a rapidly rotating container that applies centrifugal force to its contents. There are multiple types of centrifuge, which can be classified by intended use or by rotor design:
Types by rotor design:
Fixed-angle centrifuges are designed to hold the sample containers at a constant angle relative to the central axis.
Swinging head (or swinging bucket) centrifuges, in contrast to fixed-angle centrifuges, have a hinge where the sample containers are attached to the central rotor. This allows all of the samples to swing outwards as the centrifuge is spun.
Continuous tubular centrifuges do not have individual sample vessels and are used for high volume applications.
Types by intended use:
Laboratory centrifuges are general-purpose instruments of several types with distinct, but overlapping, capabilities. These include clinical centrifuges, superspeed centrifuges and preparative ultracentrifuges.
Analytical ultracentrifuges are designed to perform sedimentation analysis of macromolecules using the principles devised by Theodor Svedberg.
Haematocrit centrifuges are used to measure the volume percentage of red blood cells in whole blood.
Gas centrifuges, including Zippe-type centrifuges, for isotopic separations in the gas phase.
Industrial centrifuges may otherwise be classified according to the type of separation of the high density fraction from the low density one.
Generally, there are two types of centrifuges: filtration and sedimentation centrifuges. In a filtration or so-called screen centrifuge, the drum is perforated and lined with a filter, for example a filter cloth, wire mesh or lot screen. The suspension flows through the filter and the perforated drum wall from the inside to the outside; the solid material is retained and can be removed. The method of removal depends on the type of centrifuge; it may, for example, be manual or periodic. Common types are:
Centrifugal oil filters
Screen/scroll centrifuges (Screen centrifuges, where the centrifugal acceleration allows the liquid to pass through a screen of some sort, through which the solids cannot go (due to granulometry larger than the screen gap or due to agglomeration))
Pusher centrifuges
Peeler centrifuges
Inverting filter centrifuges
Sliding discharge centrifuges
Pendulum centrifuges
Sedimentation centrifuges
In sedimentation centrifuges, the drum has a solid (non-perforated) wall. This type of centrifuge is used for the purification of a suspension. To accelerate the natural deposition process of the suspension, these centrifuges use centrifugal force. In so-called overflow centrifuges, the clarified liquid is drained off while suspension is added constantly. Common types are:
Separator centrifuges (Continuous liquid); common types are:
Self-cleaning Centrifuges
Solid bowl centrifuges
Conical plate centrifuges
Tubular centrifuges
Decanter centrifuges, in which there is no physical separation between the solid and liquid phase, rather an accelerated settling due to centrifugal acceleration.
Though most modern centrifuges are electrically powered, a hand-powered variant inspired by the whirligig has been developed for medical applications in developing countries.
Many designs have been shared for free and open-source centrifuges that can be digitally manufactured. The open-source hardware designs for a hand-powered centrifuge for larger volumes of fluids, with a rotational speed of over 1750 rpm and over 50 N of relative centrifugal force, can be completely 3-D printed for about $25. Other open hardware designs use custom 3-D printed fixtures with inexpensive electric motors to make low-cost centrifuges (e.g. the Dremelfuge that uses a Dremel power tool) or the CNC cut-out OpenFuge.
Uses
Laboratory separations
A wide variety of laboratory-scale centrifuges are used in chemistry, biology, biochemistry and clinical medicine for isolating and separating suspensions and immiscible liquids. They vary widely in speed, capacity, temperature control, and other characteristics. Laboratory centrifuges often can accept a range of different fixed-angle and swinging bucket rotors able to carry different numbers of centrifuge tubes and rated for specific maximum speeds. Controls vary from simple electrical timers to programmable models able to control acceleration and deceleration rates, running speeds, and temperature regimes. Ultracentrifuges spin the rotors under vacuum, eliminating air resistance and enabling exact temperature control. Zonal rotors and continuous flow systems are capable of handling bulk and larger sample volumes, respectively, in a laboratory-scale instrument.
An application in laboratories is blood separation. Blood separates into cells and proteins (RBC, WBC, platelets, etc.) and serum. DNA preparation is another common application for pharmacogenetics and clinical diagnosis. DNA samples are purified and the DNA is prepped for separation by adding buffers and then centrifuging it for a certain amount of time. The blood waste is then removed and another buffer is added and the sample spun inside the centrifuge again. Once the blood waste is removed and another buffer is added, the pellet can be resuspended and cooled. Proteins can then be removed, the mixture centrifuged again, and the DNA isolated completely. Specialized cytocentrifuges are used in medical and biological laboratories to concentrate cells for microscopic examination.
Isotope separation
Other centrifuges, the first being the Zippe-type centrifuge, separate isotopes, and these kinds of centrifuges are in use in nuclear power and nuclear weapon programs.
Aeronautics and astronautics
Human centrifuges are exceptionally large centrifuges that test the reactions and tolerance of pilots and astronauts to accelerations above those experienced in the Earth's gravity.
The first centrifuges used for human research were used by Erasmus Darwin, the grandfather of Charles Darwin. The first large-scale human centrifuge designed for aeronautical training was created in Germany in 1933.
The US Air Force at Brooks City Base, Texas, operates a human centrifuge while awaiting completion of the new human centrifuge in construction at Wright-Patterson AFB, Ohio. The centrifuge at Brooks City Base is operated by the United States Air Force School of Aerospace Medicine for the purpose of training and evaluating prospective fighter pilots for high-g flight in Air Force fighter aircraft.
The use of large centrifuges to simulate a feeling of gravity has been proposed for future long-duration space missions. Exposure to this simulated gravity would prevent or reduce the bone decalcification and muscle atrophy that affect individuals exposed to long periods of freefall.
Non-human centrifuge
At the European Space Agency (ESA) technology center ESTEC (in Noordwijk, the Netherlands), a large-diameter centrifuge is used to expose samples in the life sciences as well as the physical sciences. This Large Diameter Centrifuge (LDC) began operation in 2007. Samples can be exposed to a maximum of 20 times Earth's gravity. With its four arms and six freely swinging gondolas, it is possible to expose samples to different g-levels at the same time. Gondolas can be fixed at eight different positions; depending on their locations, one could, for example, run an experiment at 5 g and 10 g in the same run. Each gondola can hold an experiment up to a specified maximum mass. Experiments performed in this facility have ranged from zebrafish, metal alloys, plasma, cells, liquids and Planaria to Drosophila and plants.
Industrial centrifugal separator
An industrial centrifugal separator is a coolant filtration system for separating particles from liquids such as grinding and machining coolant. It is usually used to separate non-ferrous particles such as silicon, glass, ceramic, and graphite. The filtering process does not require consumable parts such as filter bags, which reduces waste.
Geotechnical centrifuge modeling
Geotechnical centrifuge modeling is used for physical testing of models involving soils. Centrifuge acceleration is applied to scale models to scale the gravitational acceleration and enable prototype-scale stresses to be obtained in scale models. Applications include building and bridge foundations, earth dams, tunnels, and slope stability, including effects such as blast loading and earthquake shaking.
Synthesis of materials
High-gravity conditions generated by centrifuges are applied in the chemical industry, casting, and material synthesis. Convection and mass transfer are greatly affected by the gravitational condition. Researchers have reported that high-gravity levels can effectively affect the phase composition and morphology of the products.
Commercial applications
Standalone centrifuges for drying (hand-washed) clothes – usually with a water outlet.
Washing machines are designed to act as centrifuges to get rid of excess water in laundry loads.
Centrifuges are used in the attraction Mission: SPACE, located at Epcot in Walt Disney World, which propels riders using a combination of a centrifuge and a motion simulator to simulate the feeling of going into space.
In soil mechanics, centrifuges utilize centrifugal acceleration to match soil stresses in a scale model to those found in reality.
Large industrial centrifuges are commonly used in water and wastewater treatment to dry sludges. The resulting dry product is often termed cake, and the water leaving a centrifuge after most of the solids have been removed is called centrate.
Large industrial centrifuges are also used in the oil industry to remove solids from the drilling fluid.
Disc-stack centrifuges used by some companies in the oil sands industry to separate small amounts of water and solids from bitumen
Centrifuges are used to separate cream (remove fat) from milk; see Separator (milk).
Mathematical description
Protocols for centrifugation typically specify the amount of acceleration to be applied to the sample, rather than specifying a rotational speed such as revolutions per minute. This distinction is important because two rotors with different diameters running at the same rotational speed will subject samples to different accelerations. During circular motion the acceleration is the product of the radius and the square of the angular velocity ω, and the acceleration relative to "g" is traditionally named "relative centrifugal force" (RCF). The acceleration is measured in multiples of "g" (or × "g"), the standard acceleration due to gravity at the Earth's surface, a dimensionless quantity given by the expression:

RCF = r ω² / g

where
g is Earth's gravitational acceleration,
r is the rotational radius, and
ω is the angular velocity in radians per unit time.

Taking the radius in millimeters and the speed in revolutions per minute, this relationship may be written as

RCF = 1.118 × 10⁻⁶ × r × N²

or

N = 945.8 × √(RCF / r)

where
r is the rotational radius measured in millimeters (mm), and
N is the rotational speed measured in revolutions per minute (RPM).
To avoid having to perform a mathematical calculation every time, one can find nomograms for converting RCF to rpm for a rotor of a given radius. A ruler or other straight edge lined up with the radius on one scale, and the desired RCF on another scale, will point at the correct rpm on the third scale. Based on automatic rotor recognition, modern centrifuges have a button for automatic conversion from RCF to rpm and vice versa.
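The RCF/rpm conversion that nomograms and rotor-recognition buttons automate can be sketched directly from the defining relation RCF = rω²/g. The following is a minimal illustration; the function names are my own, and the 100 mm / 3,000 rpm figures are purely illustrative.

```python
# Sketch: converting between RCF ("x g") and rotor speed (rpm),
# using RCF = r * omega^2 / g with the radius converted to metres.
import math

G = 9.80665  # standard gravity, m/s^2

def rcf(radius_mm, rpm):
    """Relative centrifugal force for a rotor of given radius at a given speed."""
    omega = 2 * math.pi * rpm / 60.0            # angular velocity in rad/s
    return (radius_mm / 1000.0) * omega ** 2 / G

def rpm_for_rcf(radius_mm, target_rcf):
    """Invert the relation: speed (rpm) needed to reach a target RCF."""
    omega = math.sqrt(target_rcf * G / (radius_mm / 1000.0))
    return omega * 60.0 / (2 * math.pi)

# A rotor with 100 mm radius spinning at 3,000 rpm:
print(round(rcf(100, 3000)))  # ≈ 1006 (x g)
```

Note that the two functions are exact inverses of each other, which is the property a nomogram exploits geometrically.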
Ultracentrifuge
An ultracentrifuge is a centrifuge optimized for spinning a rotor at very high speeds, capable of generating exceptionally high acceleration. There are two kinds of ultracentrifuges, the preparative and the analytical ultracentrifuge. Both classes of instruments find important uses in molecular biology, biochemistry, and polymer science.
History
In 1924 Theodor Svedberg built a centrifuge capable of generating 7,000 g (at 12,000 rpm), and called it the ultracentrifuge, to juxtapose it with the Ultramicroscope that had been developed previously. In 1925-1926 Svedberg constructed a new ultracentrifuge that permitted fields up to 100,000 g (42,000 rpm). Modern ultracentrifuges are typically classified as allowing greater than 100,000 g. Svedberg won the Nobel Prize in Chemistry in 1926 for his research on colloids and proteins using the ultracentrifuge.
In the early 1930s, Émile Henriot found that suitably placed jets of compressed air can spin a bearingless top to very high speeds and developed an ultracentrifuge on that principle. Jesse Beams from the Physics Department at the University of Virginia first adapted that principle to a high-speed camera, and then started improving Henriot's ultracentrifuge, but his rotors consistently overheated.
Beams's student Edward Greydon Pickels solved the problem in 1935 by enclosing the system in a vacuum, which reduced the friction generated at high speeds. Vacuum systems also enabled the maintenance of constant temperature across the sample, eliminating convection currents that interfered with the interpretation of sedimentation results.
In 1946, Pickels cofounded Spinco (Specialized Instruments Corp.) to market analytical and preparative ultracentrifuges based on his design. Pickels considered his design to be too complicated for commercial use and developed a more easily operated, “foolproof” version. But even with the enhanced design, sales of analytical centrifuges remained low, and Spinco almost went bankrupt. The company survived by concentrating on sales of preparative ultracentrifuge models, which were becoming popular as workhorses in biomedical laboratories. In 1949, Spinco introduced the Model L, the first preparative ultracentrifuge to reach a maximum speed of 40,000 rpm. In 1954, Beckman Instruments (later Beckman Coulter) purchased the company, forming the basis of its Spinco centrifuge division.
Instrumentation
Ultracentrifuges are available with a wide variety of rotors suitable for a great range of experiments. Most rotors are designed to hold tubes that contain the samples. Swinging bucket rotors allow the tubes to hang on hinges, so the tubes reorient to the horizontal as the rotor initially accelerates. Fixed-angle rotors are made of a single block of material and hold the tubes in cavities bored at a predetermined angle. Zonal rotors are designed to contain a large volume of sample in a single central cavity rather than in tubes. Some zonal rotors are capable of dynamic loading and unloading of samples while the rotor is spinning at high speed.
Preparative rotors are used in biology for pelleting of fine particulate fractions, such as cellular organelles (mitochondria, microsomes, ribosomes) and viruses. They can also be used for gradient separations, in which the tubes are filled from top to bottom with an increasing concentration of a dense substance in solution. Sucrose gradients are typically used for separation of cellular organelles. Gradients of caesium salts are used for separation of nucleic acids. After the sample has spun at high speed for sufficient time to produce the separation, the rotor is allowed to come to a smooth stop and the gradient is gently pumped out of each tube to isolate the separated components.
Hazards
The tremendous rotational kinetic energy of the rotor in an operating ultracentrifuge makes the catastrophic failure of a spinning rotor a serious concern, as it can explode spectacularly. Rotors conventionally have been made from high strength-to-weight metals such as aluminum or titanium. The stresses of routine use and harsh chemical solutions eventually cause rotors to deteriorate. Proper use of the instrument and rotors within recommended limits and careful maintenance of rotors to prevent corrosion and to detect deterioration is necessary to mitigate this risk.
More recently some rotors have been made of lightweight carbon fiber composite material, which are up to 60% lighter, resulting in faster acceleration/deceleration rates. Carbon fiber composite rotors also are corrosion-resistant, eliminating a major cause of rotor failure.
Catecholamine
A catecholamine (abbreviated CA) is a monoamine neurotransmitter, an organic compound that has a catechol (benzene with two hydroxyl side groups next to each other) and a side-chain amine.
Catechol can be either a free molecule or a substituent of a larger molecule, where it represents a 1,2-dihydroxybenzene group.
Catecholamines are derived from the amino acid tyrosine, which is derived from dietary sources as well as synthesis from phenylalanine. Catecholamines are water-soluble and are 50% bound to plasma proteins in circulation.
Included among catecholamines are epinephrine (adrenaline), norepinephrine (noradrenaline), and dopamine. Release of the hormones epinephrine and norepinephrine from the adrenal medulla of the adrenal glands is part of the fight-or-flight response.
Tyrosine is created from phenylalanine by hydroxylation by the enzyme phenylalanine hydroxylase. Tyrosine is also ingested directly from dietary protein. Catecholamine-secreting cells use several reactions to convert tyrosine serially to L-DOPA and then to dopamine. Depending on the cell type, dopamine may be further converted to norepinephrine or even further converted to epinephrine.
Various stimulant drugs (such as a number of substituted amphetamines) are catecholamine analogues.
Structure
Catecholamines have the distinct structure of a benzene ring with two hydroxyl groups, an intermediate ethyl chain, and a terminal amine group. Phenylethanolamines such as norepinephrine have a hydroxyl group on the ethyl chain.
Production and degradation
Location
Catecholamines are produced mainly by the chromaffin cells of the adrenal medulla and the postganglionic fibers of the sympathetic nervous system. Dopamine, which acts as a neurotransmitter in the central nervous system, is largely produced in neuronal cell bodies in two areas of the brainstem: the ventral tegmental area and the substantia nigra, the latter of which contains neuromelanin-pigmented neurons. The similarly neuromelanin-pigmented cell bodies of the locus coeruleus produce norepinephrine. Epinephrine is produced in small groups of neurons in the human brain which express its synthesizing enzyme, phenylethanolamine N-methyltransferase; these neurons project from a nucleus that is adjacent (ventrolateral) to the area postrema and from a nucleus in the dorsal region of the solitary tract.
Biosynthesis
Dopamine is the first catecholamine synthesized, from DOPA. In turn, norepinephrine and epinephrine are derived from further metabolic modification of dopamine. The enzyme dopamine β-hydroxylase requires copper as a cofactor, and DOPA decarboxylase requires PLP. The rate-limiting step in catecholamine biosynthesis through the predominant metabolic pathway is the hydroxylation of L-tyrosine to L-DOPA.
Catecholamine synthesis is inhibited by alpha-methyl-p-tyrosine (AMPT), which inhibits tyrosine hydroxylase.
The amino acids phenylalanine and tyrosine are precursors for catecholamines. Both amino acids are found in high concentrations in blood plasma and the brain. In mammals, tyrosine can be formed from dietary phenylalanine by the enzyme phenylalanine hydroxylase, found in large amounts in the liver. Insufficient amounts of phenylalanine hydroxylase result in phenylketonuria, a metabolic disorder that leads to intellectual deficits unless treated by dietary manipulation. Catecholamine synthesis is usually considered to begin with tyrosine. The enzyme tyrosine hydroxylase (TH) converts the amino acid L-tyrosine into 3,4-dihydroxyphenylalanine (L-DOPA). The hydroxylation of L-tyrosine by TH results in the formation of the DA precursor L-DOPA, which is metabolized by aromatic L-amino acid decarboxylase (AADC; see Cooper et al., 2002) to the transmitter dopamine. This step occurs so rapidly that it is difficult to measure L-DOPA in the brain without first inhibiting AADC. In neurons that use DA as the transmitter, the decarboxylation of L-DOPA to dopamine is the final step in formation of the transmitter; however, in those neurons using norepinephrine (noradrenaline) or epinephrine (adrenaline) as transmitters, the enzyme dopamine β-hydroxylase (DBH), which converts dopamine to yield norepinephrine, is also present. In still other neurons in which epinephrine is the transmitter, a third enzyme phenylethanolamine N-methyltransferase (PNMT) converts norepinephrine into epinephrine. Thus, a cell that uses epinephrine as its transmitter contains four enzymes (TH, AADC, DBH, and PNMT), whereas norepinephrine neurons contain only three enzymes (lacking PNMT) and dopamine cells only two (TH and AADC).
Degradation
Catecholamines have a half-life of a few minutes when circulating in the blood. They can be degraded either by methylation by catechol-O-methyltransferase (COMT) or by deamination by monoamine oxidase (MAO).
MAOIs bind to MAO, thereby preventing it from breaking down catecholamines and other monoamines.
Catabolism of catecholamines is mediated by two main enzymes: catechol-O-methyltransferase (COMT), which is present in the synaptic cleft and the cytosol of the cell, and monoamine oxidase (MAO), which is located in the mitochondrial membrane. Both enzymes require cofactors: COMT uses Mg2+ while MAO uses FAD. The first step of the catabolic process is mediated by either MAO or COMT, depending on the tissue and the location of the catecholamines (for example, degradation in the synaptic cleft is mediated by COMT, because MAO is a mitochondrial enzyme). The next catabolic steps in the pathway involve alcohol dehydrogenase, aldehyde dehydrogenase and aldehyde reductase. The end product of epinephrine and norepinephrine catabolism is vanillylmandelic acid (VMA), which is excreted in the urine. Dopamine catabolism leads to the production of homovanillic acid (HVA).
Function
Modality
Two catecholamines, norepinephrine and dopamine, act as neuromodulators in the central nervous system and as hormones in the blood circulation. The catecholamine norepinephrine is a neuromodulator of the peripheral sympathetic nervous system but is also present in the blood (mostly through "spillover" from the synapses of the sympathetic system).
High catecholamine levels in blood are associated with stress, which can be induced by psychological reactions or environmental stressors such as elevated sound levels, intense light, or low blood sugar levels.
Extremely high levels of catecholamines (also known as catecholamine toxicity) can occur in central nervous system trauma due to stimulation or damage of nuclei in the brainstem, in particular, those nuclei affecting the sympathetic nervous system. In emergency medicine, this occurrence is widely known as a "catecholamine dump".
Extremely high levels of catecholamine can also be caused by neuroendocrine tumors in the adrenal medulla, a treatable condition known as pheochromocytoma.
High levels of catecholamines can also be caused by monoamine oxidase A (MAO-A) deficiency, known as Brunner syndrome. As MAO-A is one of the enzymes responsible for degradation of these neurotransmitters, its deficiency increases the bioavailability of these neurotransmitters considerably. It occurs in the absence of pheochromocytoma, neuroendocrine tumors, and carcinoid syndrome, but it looks similar to carcinoid syndrome with symptoms such as facial flushing and aggression.
Acute porphyria can cause elevated catecholamines.
Effects
Catecholamines cause general physiological changes that prepare the body for physical activity (the fight-or-flight response). Some typical effects are increases in heart rate, blood pressure, blood glucose levels, and a general reaction of the sympathetic nervous system. Some drugs, like tolcapone (a central COMT-inhibitor), raise the levels of all the catecholamines. Increased catecholamines may also cause an increased respiratory rate (tachypnoea) in patients.
Catecholamines are secreted into urine after being broken down, and their level of secretion can be measured for the diagnosis of illnesses associated with catecholamine levels in the body. Urine testing for catecholamine is used to detect pheochromocytoma.
Testing for catecholamines
Catecholamines are secreted by cells in tissues of different systems of the human body, mostly by the nervous and the endocrine systems. The adrenal glands secrete certain catecholamines into the blood when the person is physically or mentally stressed, and this is usually a healthy physiological response. However, acute or chronic excess of circulating catecholamines can potentially increase blood pressure and heart rate to very high levels and eventually provoke dangerous effects. Tests for fractionated plasma free metanephrines or urine metanephrines are used to confirm or exclude certain diseases when the doctor identifies signs of hypertension and tachycardia that do not respond adequately to treatment. The tests measure the amounts of the adrenaline and noradrenaline metabolites, respectively called metanephrine and normetanephrine.
Blood tests are also done to analyze the amount of catecholamines present in the body.
Catecholamine tests are done to identify rare tumors at the adrenal gland or in the nervous system. Catecholamine tests provide information relative to tumors such as: pheochromocytoma, paraganglioma, and neuroblastoma.
Algebraic structure
In mathematics, an algebraic structure or algebraic system consists of a nonempty set A (called the underlying set, carrier set or domain), a collection of operations on A (typically binary operations such as addition and multiplication), and a finite set of identities (known as axioms) that these operations must satisfy.
An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called scalar multiplication between elements of the field (called scalars), and elements of the vector space (called vectors).
Abstract algebra is the name that is commonly given to the study of algebraic structures. The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that includes also other mathematical structures and functions between structures of the same type (homomorphisms).
In universal algebra, an algebraic structure is called an algebra; this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring.
The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and homomorphisms between them form a concrete category.
Introduction
Addition and multiplication are prototypical examples of operations that combine two elements of a set to produce a third element of the same set. These operations obey several algebraic laws. For example, (a + b) + c = a + (b + c) and (ab)c = a(bc) are associative laws, and a + b = b + a and ab = ba are commutative laws. Many systems studied by mathematicians have operations that obey some, but not necessarily all, of the laws of ordinary arithmetic. For example, the possible moves of an object in three-dimensional space can be combined by performing a first move of the object, and then a second move from its new position. Such moves, formally called rigid motions, obey the associative law, but fail to satisfy the commutative law.
Sets with one or more operations that obey specific laws are called algebraic structures. When a new problem involves the same laws as such an algebraic structure, all the results that have been proved using only the laws of the structure can be directly applied to the new problem.
In full generality, algebraic structures may involve an arbitrary collection of operations, including operations that combine more than two elements (higher arity operations) and operations that take only one argument (unary operations) or even zero arguments (nullary operations). The examples listed below are by no means a complete list, but include the most common structures taught in undergraduate courses.
Common axioms
Equational axioms
An axiom of an algebraic structure often has the form of an identity, that is, an equation such that the two sides of the equals sign are expressions that involve operations of the algebraic structure and variables. If the variables in the identity are replaced by arbitrary elements of the algebraic structure, the equality must remain true. Here are some common examples.
Commutativity An operation ∗ is commutative if x ∗ y = y ∗ x for every x and y in the algebraic structure.
Associativity An operation ∗ is associative if (x ∗ y) ∗ z = x ∗ (y ∗ z) for every x, y and z in the algebraic structure.
Left distributivity An operation ∗ is left distributive with respect to another operation + if x ∗ (y + z) = (x ∗ y) + (x ∗ z) for every x, y and z in the algebraic structure (the second operation is denoted here as +, because the second operation is addition in many common examples).
Right distributivity An operation ∗ is right distributive with respect to another operation + if (y + z) ∗ x = (y ∗ x) + (z ∗ x) for every x, y and z in the algebraic structure.
Distributivity An operation is distributive with respect to another operation if it is both left distributive and right distributive. If the operation is commutative, left and right distributivity are both equivalent to distributivity.
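On a finite structure, equational axioms like these can be verified mechanically by checking every assignment of the variables. A minimal Python sketch (the names `add` and `mul` are chosen for this illustration), using the integers modulo 4 with their usual addition and multiplication:

```python
from itertools import product

n = 4
elements = range(n)

def add(x, y):          # the operation written + in the axioms
    return (x + y) % n

def mul(x, y):          # the operation written * in the axioms
    return (x * y) % n

# Commutativity and associativity of *
assert all(mul(x, y) == mul(y, x) for x, y in product(elements, repeat=2))
assert all(mul(mul(x, y), z) == mul(x, mul(y, z))
           for x, y, z in product(elements, repeat=3))

# Left and right distributivity of * over +
assert all(mul(x, add(y, z)) == add(mul(x, y), mul(x, z))
           for x, y, z in product(elements, repeat=3))
assert all(mul(add(y, z), x) == add(mul(y, x), mul(z, x))
           for x, y, z in product(elements, repeat=3))

print("Z/4Z satisfies commutativity, associativity and distributivity")
```

The same brute-force pattern works for any finite structure, at a cost exponential in the number of variables appearing in the identity.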
Existential axioms
Some common axioms contain an existential clause. In general, such a clause can be avoided by introducing further operations, and replacing the existential clause by an identity involving the new operation. More precisely, let us consider an axiom of the form "for all X there is y such that f(X, y) = g(X, y)", where X is an n-tuple of variables. Choosing a specific value of y for each value of X defines a function φ : X ↦ y, which can be viewed as an operation of arity n, and the axiom becomes the identity f(X, φ(X)) = g(X, φ(X)).
The introduction of such an auxiliary operation slightly complicates the statement of an axiom, but has some advantages. Given a specific algebraic structure, the proof that an existential axiom is satisfied consists generally of the definition of the auxiliary function, completed with straightforward verifications. Also, when computing in an algebraic structure, one generally uses the auxiliary operations explicitly. For example, in the case of numbers, the additive inverse is provided by the unary minus operation x ↦ −x.
Also, in universal algebra, a variety is a class of algebraic structures that share the same operations, and the same axioms, with the condition that all axioms are identities. What precedes shows that existential axioms of the above form are accepted in the definition of a variety.
Here are some of the most common existential axioms.
Identity element
A binary operation ∗ has an identity element if there is an element e such that x ∗ e = e ∗ x = x for all x in the structure. Here, the auxiliary operation is the operation of arity zero that has e as its result.
Inverse element
Given a binary operation ∗ that has an identity element e, an element x is invertible if it has an inverse element, that is, if there exists an element inv(x) such that inv(x) ∗ x = x ∗ inv(x) = e. For example, a group is an algebraic structure with a binary operation that is associative, has an identity element, and for which all elements are invertible.
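For a finite structure, these group axioms can be checked exhaustively once the identity element is treated as a nullary operation and the inverse as a unary operation. An illustrative Python sketch (names invented for this example) for the integers modulo 5 under addition:

```python
from itertools import product

n = 5
elements = range(n)

def op(x, y):        # the binary operation
    return (x + y) % n

e = 0                # nullary auxiliary operation: the identity element

def inv(x):          # unary auxiliary operation: the inverse
    return (-x) % n

# Associativity
assert all(op(op(x, y), z) == op(x, op(y, z))
           for x, y, z in product(elements, repeat=3))
# Identity axiom, stated with the nullary operation e
assert all(op(x, e) == x and op(e, x) == x for x in elements)
# Invertibility, stated as an identity using the unary operation inv
assert all(op(x, inv(x)) == e and op(inv(x), x) == e for x in elements)

print("(Z/5Z, +) is a group")
```

Note how the existential clause "every element has an inverse" has become the identity op(x, inv(x)) = e, exactly the transformation described in the section on existential axioms.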
Non-equational axioms
The axioms of an algebraic structure can be any first-order formula, that is, a formula involving logical connectives (such as "and", "or" and "not"), and logical quantifiers (∀, ∃) that apply to elements (not to subsets) of the structure.
A typical such axiom is inversion in fields. This axiom cannot be reduced to axioms of the preceding types (it follows that fields do not form a variety in the sense of universal algebra). It can be stated: "Every nonzero element of a field is invertible"; or, equivalently: the structure has a unary operation inv such that x · inv(x) = 1 for every nonzero element x.
The operation inv can be viewed either as a partial operation that is not defined for x = 0, or as an ordinary function whose value at 0 is arbitrary and must not be used.
Common algebraic structures
One set with operations
Simple structures: no binary operation:
Set: a degenerate algebraic structure S having no operations.
Group-like structures: one binary operation. The binary operation can be indicated by any symbol, or with no symbol (juxtaposition) as is done for ordinary multiplication of real numbers.
Monoid: a set with an associative binary operation and an identity element.
Group: a monoid with a unary operation (inverse), giving rise to inverse elements.
Abelian group: a group whose binary operation is commutative.
Ring-like structures or Ringoids: two binary operations, often called addition and multiplication, with multiplication distributing over addition.
Semiring: a ringoid that is a monoid under each of the two operations, in which addition is commutative and the additive identity 0 satisfies 0x = x0 = 0.
Ring: a semiring whose additive monoid is an abelian group.
Division ring: a nontrivial ring in which division by nonzero elements is defined.
Commutative ring: a ring in which the multiplication operation is commutative.
Field: a commutative division ring (i.e. a commutative ring which contains a multiplicative inverse for every nonzero element).
Lattice structures: two or more binary operations, including operations called meet and join, connected by the absorption law.
Complete lattice: a lattice in which arbitrary meets and joins exist.
Bounded lattice: a lattice with a greatest element and least element.
Distributive lattice: a lattice in which each of meet and join distributes over the other. A power set under union and intersection forms a distributive lattice.
Boolean algebra: a complemented distributive lattice. Either of meet or join can be defined in terms of the other and complementation.
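As a concrete check of the power-set example, the following sketch verifies the absorption, distributivity and complement laws on the power set of a three-element set (the names `join`, `meet` and `comp` are chosen for this illustration):

```python
from itertools import combinations, product

# The power set of {0, 1, 2}, with union as join, intersection as meet,
# and complementation relative to the universe.
universe = frozenset({0, 1, 2})
subsets = [frozenset(c) for r in range(len(universe) + 1)
           for c in combinations(sorted(universe), r)]

join = frozenset.union
meet = frozenset.intersection

def comp(a):
    return universe - a

# Absorption laws connecting meet and join (the lattice axioms)
assert all(join(a, meet(a, b)) == a and meet(a, join(a, b)) == a
           for a, b in product(subsets, repeat=2))
# Each of meet and join distributes over the other
assert all(meet(a, join(b, c)) == join(meet(a, b), meet(a, c))
           for a, b, c in product(subsets, repeat=3))
assert all(join(a, meet(b, c)) == meet(join(a, b), join(a, c))
           for a, b, c in product(subsets, repeat=3))
# Every element has a complement
assert all(join(a, comp(a)) == universe and meet(a, comp(a)) == frozenset()
           for a in subsets)

print("the power set of", set(universe), "is a Boolean algebra")
```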
Two sets with operations
Module: an abelian group M and a ring R acting as operators on M. The members of R are sometimes called scalars, and the binary operation of scalar multiplication is a function R × M → M, which satisfies several axioms. Counting the ring operations these systems have at least three operations.
Vector space: a module where the ring R is a field or, in some contexts, a division ring.
Algebra over a field: a module over a field, which also carries a multiplication operation that is compatible with the module structure. This includes distributivity over addition and linearity with respect to multiplication.
Inner product space: a field F and a vector space V with a definite bilinear form V × V → F.
Hybrid structures
Algebraic structures can also coexist with added structure of non-algebraic nature, such as partial order or a topology. The added structure must be compatible, in some sense, with the algebraic structure.
Topological group: a group with a topology compatible with the group operation.
Lie group: a topological group with a compatible smooth manifold structure.
Ordered groups, ordered rings and ordered fields: each type of structure with a compatible partial order.
Archimedean group: a linearly ordered group for which the Archimedean property holds.
Topological vector space: a vector space whose underlying set carries a compatible topology.
Normed vector space: a vector space with a compatible norm. If such a space is complete (as a metric space) then it is called a Banach space.
Hilbert space: an inner product space over the real or complex numbers whose inner product gives rise to a Banach space structure.
Vertex operator algebra
Von Neumann algebra: a *-algebra of operators on a Hilbert space equipped with the weak operator topology.
Universal algebra
Algebraic structures are defined through different configurations of axioms. Universal algebra abstractly studies such objects. One major dichotomy is between structures that are axiomatized entirely by identities and structures that are not. If all axioms defining a class of algebras are identities, then this class is a variety (not to be confused with algebraic varieties of algebraic geometry).
Identities are equations formulated using only the operations the structure allows, and variables that are tacitly universally quantified over the relevant universe. Identities contain no connectives, existentially quantified variables, or relations of any kind other than the allowed operations. The study of varieties is an important part of universal algebra. An algebraic structure in a variety may be understood as the quotient algebra of term algebra (also called "absolutely free algebra") divided by the equivalence relations generated by a set of identities. So, a collection of functions with given signatures generate a free algebra, the term algebra T. Given a set of equational identities (the axioms), one may consider their symmetric, transitive closure E. The quotient algebra T/E is then the algebraic structure or variety.
Thus, for example, groups have a signature containing two operators: the multiplication operator m, taking two arguments, and the inverse operator i, taking one argument, and the identity element e, a constant, which may be considered an operator that takes zero arguments. Given a (countable) set of variables x, y, z, etc. the term algebra is the collection of all possible terms involving m, i, e and the variables; so for example, m(i(x), m(x, m(y,e))) would be an element of the term algebra. One of the axioms defining a group is the identity m(x, i(x)) = e; another is m(x,e) = x. The axioms can be represented as trees. These equations induce equivalence classes on the free algebra; the quotient algebra then has the algebraic structure of a group.
Some structures do not form varieties, because either:
It is necessary that 0 ≠ 1, 0 being the additive identity element and 1 being a multiplicative identity element, but this is a nonidentity;
Structures such as fields have some axioms that hold only for nonzero members of S. For an algebraic structure to be a variety, its operations must be defined for all members of S; there can be no partial operations.
Structures whose axioms unavoidably include nonidentities are among the most important ones in mathematics, e.g., fields and division rings. Structures with nonidentities present challenges varieties do not. For example, the direct product of two fields is not a field, because (1, 0) · (0, 1) = (0, 0), but fields do not have zero divisors.
Category theory
Category theory is another tool for studying algebraic structures (see, for example, Mac Lane 1998). A category is a collection of objects with associated morphisms. Every algebraic structure has its own notion of homomorphism, namely any function compatible with the operation(s) defining the structure. In this way, every algebraic structure gives rise to a category. For example, the category of groups has all groups as objects and all group homomorphisms as morphisms. This concrete category may be seen as a category of sets with added category-theoretic structure. Likewise, the category of topological groups (whose morphisms are the continuous group homomorphisms) is a category of topological spaces with extra structure. A forgetful functor between categories of algebraic structures "forgets" a part of a structure.
There are various concepts in category theory that try to capture the algebraic character of a context, for instance
algebraic category
essentially algebraic category
presentable category
locally presentable category
monadic functors and categories
universal property.
Different meanings of "structure"
In a slight abuse of notation, the word "structure" can also refer to just the operations on a structure, instead of the underlying set itself. For example, the sentence, "We have defined a ring structure on the set A", means that we have defined ring operations on the set A. For another example, the group (ℤ, +) can be seen as the set ℤ equipped with an algebraic structure, namely the operation +.
Computational physics
Computational physics is the study and implementation of numerical analysis to solve problems in physics. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. It is sometimes regarded as a subdiscipline (or offshoot) of theoretical physics, but others consider it an intermediate branch between theoretical and experimental physics — an area of study which supplements both theory and experiment.
Overview
In physics, different theories based on mathematical models provide very precise predictions on how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression, or is too complicated. In such cases, numerical approximations are required. Computational physics is the subject that deals with these numerical approximations: the approximation of the solution is written as a finite (and typically large) number of simple mathematical operations (algorithm), and a computer is used to perform these operations and compute an approximated solution and respective error.
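As a simple illustration of replacing an exact solution by a finite number of operations, the sketch below (the function name is invented for this example) approximates a definite integral with a known closed-form value using the trapezoid rule, and reports the error of each approximation:

```python
import math

def trapezoid(f, a, b, n):
    """Approximate the integral of f over [a, b] with n trapezoids."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

exact = 2.0  # the integral of sin over [0, pi] in closed form
for n in (4, 16, 64):
    approx = trapezoid(math.sin, 0.0, math.pi, n)
    print(f"n={n:3d}  approx={approx:.6f}  error={abs(approx - exact):.2e}")
```

Increasing the number of operations shrinks the error in a predictable way (here roughly quadratically in the step size), which is exactly the trade-off between cost and accuracy that computational physics studies.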
Status in physics
There is a debate about the status of computation within the scientific method. Sometimes it is regarded as more akin to theoretical physics; others regard computer simulation as "computer experiments"; still others consider it an intermediate or different branch between theoretical and experimental physics, a third way that supplements theory and experiment. While computers can be used in experiments for the measurement and recording (and storage) of data, this clearly does not constitute a computational approach.
Challenges in computational physics
Computational physics problems are in general very difficult to solve exactly. This is due to several (mathematical) reasons: lack of algebraic and/or analytic solvability, complexity, and chaos. For example, even apparently simple problems, such as calculating the wavefunction of an electron orbiting an atom in a strong electric field (Stark effect), may require great effort to formulate a practical algorithm (if one can be found); other cruder or brute-force techniques, such as graphical methods or root finding, may be required. On the more advanced side, mathematical perturbation theory is also sometimes used. In addition, the computational cost and computational complexity for many-body problems (and their classical counterparts) tend to grow quickly. A macroscopic system typically has a size of the order of 10^23 constituent particles, so this is somewhat of a problem. Solving quantum mechanical problems is generally of exponential order in the size of the system, and for the classical N-body problem it is of order N-squared. Finally, many physical systems are inherently nonlinear at best, and at worst chaotic: this means it can be difficult to ensure any numerical errors do not grow to the point of rendering the 'solution' useless.
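The quadratic cost of the classical N-body case comes from the N(N − 1)/2 particle pairs. A toy sketch (random positions, arbitrary units, not a production code) that counts the pair interactions in a direct summation of a 1/r potential:

```python
from itertools import combinations
import random

random.seed(0)
N = 100
positions = [(random.random(), random.random(), random.random())
             for _ in range(N)]

def potential_energy(positions):
    """Direct O(N^2) summation of pairwise 1/r terms (arbitrary units)."""
    total, pairs = 0.0, 0
    for (x1, y1, z1), (x2, y2, z2) in combinations(positions, 2):
        r = ((x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2) ** 0.5
        total += 1.0 / r
        pairs += 1
    return total, pairs

energy, pairs = potential_energy(positions)
print(f"{pairs} pair interactions for N={N}")  # N*(N-1)/2 = 4950
```

Doubling N quadruples the number of pair interactions, which is why direct summation is hopeless at macroscopic particle counts.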
Methods and algorithms
Because computational physics covers a broad class of problems, it is generally divided amongst the different mathematical problems it numerically solves, or the methods it applies. Among them, one can consider:
root finding (using e.g. Newton-Raphson method)
system of linear equations (using e.g. LU decomposition)
ordinary differential equations (using e.g. Runge–Kutta methods)
integration (using e.g. Romberg method and Monte Carlo integration)
partial differential equations (using e.g. finite difference method and relaxation method)
matrix eigenvalue problem (using e.g. Jacobi eigenvalue algorithm and power iteration)
All these methods (and several others) are used to calculate physical properties of the modeled systems.
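As a concrete instance of one of the listed methods, the sketch below (illustrative, with invented function names) applies the classical fourth-order Runge–Kutta method to the simple harmonic oscillator x'' = −x, whose exact solution with x(0) = 1 and x'(0) = 0 returns to x = 1 after one period 2π:

```python
import math

def rk4_step(f, t, y, h):
    """One classical fourth-order Runge-Kutta step for the system y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + h / 2, [yi + h / 2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
    return [yi + h / 6 * (a + 2 * b + 2 * c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def oscillator(t, y):
    # x'' = -x rewritten as a first-order system, y = [x, v]
    return [y[1], -y[0]]

n = 1000
h = 2 * math.pi / n          # integrate over exactly one period
t, y = 0.0, [1.0, 0.0]       # x(0) = 1, v(0) = 0
for _ in range(n):
    y = rk4_step(oscillator, t, y, h)
    t += h

print(f"x after one period: {y[0]:.10f}")  # the exact value is 1.0
```

With a fourth-order method the accumulated error after one period is tiny even at this modest step size, which is why Runge–Kutta methods are a workhorse for ordinary differential equations in physics.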
Computational physics also borrows a number of ideas from computational chemistry - for example, the density functional theory used by computational solid state physicists to calculate properties of solids is basically the same as that used by chemists to calculate the properties of molecules.
Furthermore, computational physics encompasses the tuning of the software/hardware structure to solve the problems (as the problems usually can be very large, in processing power need or in memory requests).
Divisions
It is possible to find a corresponding computational branch for every major field in physics:
Computational mechanics consists of computational fluid dynamics (CFD), computational solid mechanics and computational contact mechanics.
Computational electrodynamics is the process of modeling the interaction of electromagnetic fields with physical objects and the environment. One subfield at the confluence between CFD and electromagnetic modelling is computational magnetohydrodynamics.
Computational chemistry is a rapidly growing field that was developed due to the quantum many-body problem.
Computational solid state physics is a very important division of computational physics dealing directly with material science.
Computational statistical mechanics is a field related to computational condensed matter which deals with the simulation of models and theories (such as percolation and spin models) that are difficult to solve otherwise.
Computational statistical physics makes heavy use of Monte Carlo-like methods. More broadly, (particularly through the use of agent based modeling and cellular automata) it also concerns itself with (and finds application in, through the use of its techniques) in the social sciences, network theory, and mathematical models for the propagation of disease (most notably, the SIR Model) and the spread of forest fires.
Numerical relativity is a (relatively) new field interested in finding numerical solutions to the field equations of both special relativity and general relativity.
Computational particle physics deals with problems motivated by particle physics.
Computational astrophysics is the application of these techniques and methods to astrophysical problems and phenomena.
Computational biophysics is a branch of biophysics and computational biology itself, applying methods of computer science and physics to large complex biological problems.
Applications
Due to the broad class of problems computational physics deals with, it is an essential component of modern research in different areas of physics, namely: accelerator physics, astrophysics, general theory of relativity (through numerical relativity), fluid mechanics (computational fluid dynamics), lattice field theory/lattice gauge theory (especially lattice quantum chromodynamics), plasma physics (see plasma modeling), simulating physical systems (using e.g. molecular dynamics), nuclear engineering computer codes, protein structure prediction, weather prediction, solid state physics, soft condensed matter physics, hypervelocity impact physics etc.
Computational solid state physics, for example, uses density functional theory to calculate properties of solids, a method similar to that used by chemists to study molecules. Other quantities of interest in solid state physics, such as the electronic band structure, magnetic properties and charge densities can be calculated by this and several methods, including the Luttinger-Kohn/k.p method and ab-initio methods.
On top of advanced physics software, there are also a myriad of tools of analytics available for beginning students of physics such as the PASCO Capstone software.
Library (computing)
In computer science, a library is a collection of resources that is leveraged during software development to implement a computer program.
Historically, a library consisted of subroutines (generally called functions today). The concept now includes other forms of executable code including classes and non-executable data including images and text. It can also refer to a collection of source code.
For example, a program could use a library to indirectly make system calls instead of making those system calls directly in the program.
Characteristics
General
A library can be used by multiple, independent consumers (programs and other libraries). This differs from resources defined in a program which can usually only be used by that program.
When a consumer uses a library resource, it gains the value of the library without having to implement it itself. Libraries encourage code reuse in a modular fashion.
When writing code that uses a library, a programmer only needs to know high-level information, such as which items the library contains and how to use them, rather than all of the library's internal details.
Libraries can use other libraries resulting in a hierarchy of libraries in a program.
Executable
A library of executable code has a well-defined interface by which the functionality is invoked.
For example, in C, a library function is invoked via C's normal function call capability. The linker generates code to call a function via the library mechanism if the function is available from a library instead of from the program itself.
The functions of a library can be connected to the invoking program at different program lifecycle phases. If the code of the library is accessed during the build of the invoking program, then the library is called a static library. An alternative is to build the program executable to be separate from the library file. The library functions are connected after the executable is started, either at load-time or runtime. In this case, the library is called a dynamic library.
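The runtime case can be illustrated from Python, which can bind to a shared (dynamic) library after the process has already started. The sketch below loads the C math library; the library's name and availability are platform-dependent assumptions (e.g. "libm.so.6" on Linux, "libm.dylib" on macOS):

```python
import ctypes
import ctypes.util

# Resolve the platform-specific file name of the C math library at
# runtime; if it cannot be located by name, fall back to the symbols
# already loaded into this process.
name = ctypes.util.find_library("m")
libm = ctypes.CDLL(name) if name else ctypes.CDLL(None)

# The library exposes a well-defined interface; declare sqrt's
# signature before invoking it through that interface.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(2.0))
```

Here the executable (the Python interpreter) was built separately from the library file, and the connection is made only when `CDLL` runs, which is the dynamic-library case described above.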
Most compiled languages have a standard library, although programmers can also create their own custom libraries. Most modern software systems provide libraries that implement the majority of the system services. Such libraries have organized the services which a modern application requires. As such, most code used by modern applications is provided in these system libraries.
History
The idea of a computer library dates back to the first computers created by Charles Babbage. An 1888 paper on his Analytical Engine suggested that computer operations could be punched on separate cards from numerical input. If these operation punch cards were saved for reuse then "by degrees the engine would have a library of its own."
In 1947 Goldstine and von Neumann speculated that it would be useful to create a "library" of subroutines for their work on the IAS machine, an early computer that was not yet operational at that time. They envisioned a physical library of magnetic wire recordings, with each wire storing reusable computer code.
Inspired by von Neumann, Wilkes and his team constructed EDSAC. A filing cabinet of punched tape held the subroutine library for this computer. Programs for EDSAC consisted of a main program and a sequence of subroutines copied from the subroutine library. In 1951 the team published the first textbook on programming, The Preparation of Programs for an Electronic Digital Computer, which detailed the creation and the purpose of the library.
COBOL included "primitive capabilities for a library system" in 1959, but Jean Sammet described them as "inadequate library facilities" in retrospect.
JOVIAL has a Communication Pool (COMPOOL), roughly a library of header files.
Another major contributor to the modern library concept came in the form of the subprogram innovation of FORTRAN. FORTRAN subprograms can be compiled independently of each other, but the compiler lacked a linker. So prior to the introduction of modules in Fortran-90, type checking between FORTRAN subprograms was impossible.
By the mid 1960s, copy and macro libraries for assemblers were common. Starting with the popularity of the IBM System/360, libraries containing other types of text elements, e.g., system parameters, also became common.
In IBM's OS/360 and its successors this is called a partitioned data set.
The first object-oriented programming language, Simula, developed in 1965, supported adding classes to libraries via its compiler.
Linking
Libraries are important in the program linking or binding process, which resolves references known as links or symbols to library modules. The linking process is usually automatically done by a linker or binder program that searches a set of libraries and other modules in a given order. Usually it is not considered an error if a link target can be found multiple times in a given set of libraries. Linking may be done when an executable file is created (static linking), or whenever the program is used at runtime (dynamic linking).
The references being resolved may be addresses for jumps and other routine calls. They may be in the main program, or in one module depending upon another. They are resolved into fixed or relocatable addresses (from a common base) by allocating runtime memory for the memory segments of each module referenced.
Some programming languages use a feature called smart linking whereby the linker is aware of or integrated with the compiler, such that the linker knows how external references are used, and code in a library that is never actually used, even though internally referenced, can be discarded from the compiled application. For example, a program that only uses integers for arithmetic, or does no arithmetic operations at all, can exclude floating-point library routines. This smart-linking feature can lead to smaller application file sizes and reduced memory usage.
Relocation
Some references in a program or library module are stored in a relative or symbolic form which cannot be resolved until all code and libraries are assigned final static addresses. Relocation is the process of adjusting these references, and is done either by the linker or the loader. In general, relocation cannot be done to individual libraries themselves because the addresses in memory may vary depending on the program using them and other libraries they are combined with. Position-independent code avoids references to absolute addresses and therefore does not require relocation.
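The adjustment step can be sketched in a few lines: module-relative references become absolute addresses only once the final load base is known. This is a minimal model, not a real loader:

```python
def relocate(references: list, load_base: int) -> list:
    """Adjust module-relative offsets to absolute addresses once the
    final load address (load_base) is known -- the essence of relocation."""
    return [load_base + offset for offset in references]

# The same module loaded at two different bases yields different absolute
# addresses -- which is why relocation cannot be done once and for all
# inside the library itself.
refs = [0x10, 0x24]
assert relocate(refs, 0x400000) == [0x400010, 0x400024]
assert relocate(refs, 0x500000) == [0x500010, 0x500024]
```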
Static libraries
When linking is performed during the creation of an executable or another object file, it is known as static linking or early binding. In this case, the linking is usually done by a linker, but may also be done by the compiler. A static library, also known as an archive, is one intended to be statically linked. Originally, only static libraries existed. Static linking must be performed when any modules are recompiled.
All of the modules required by a program are sometimes statically linked and copied into the executable file. This process, and the resulting stand-alone file, is known as a static build of the program. A static build may not need any further relocation if virtual memory is used and no address space layout randomization is desired.
Shared libraries
A shared library or shared object is a file that is intended to be shared by executable files and further shared object files. Modules used by a program are loaded from individual shared objects into memory at load time or runtime, rather than being copied by a linker when it creates a single monolithic executable file for the program.
Shared libraries can be statically linked during compile-time, meaning that references to the library modules are resolved and the modules are allocated memory when the executable file is created. But often linking of shared libraries is postponed until they are loaded.
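The runtime side of dynamic linking can be observed with Python's ctypes module, which loads a shared object and resolves its symbols while the program is already running. The library name is platform-dependent; this sketch assumes a Unix-like system where the C math library is available:

```python
import ctypes
import ctypes.util

# Locate the shared math library by its conventional name ("m" -> libm.so / libm.dylib).
path = ctypes.util.find_library("m")
# Fall back to the running process's own symbols if lookup fails.
libm = ctypes.CDLL(path) if path else ctypes.CDLL(None)

# Declare the C signature of cos() so ctypes marshals arguments correctly.
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

print(libm.cos(0.0))  # cosine of 0 is 1.0
```

Nothing about `cos` was copied into the Python executable; the symbol is found in the shared object at load time, which is exactly the postponed linking described above.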
Object libraries
Although pioneered in the 1960s, dynamic linking did not reach the most commonly used operating systems until the late 1980s. It was generally available in some form in most operating systems by the early 1990s. During this same period, object-oriented programming (OOP) was becoming a significant part of the programming landscape. OOP with runtime binding requires additional information that traditional libraries do not supply. In addition to the names and entry points of the code located within, they also require a list of the objects they depend on. This is a side-effect of one of OOP's core concepts, inheritance, which means that parts of the complete definition of any method may be in different places. This is more than simply listing that one library requires the services of another: in a true OOP system, the libraries themselves may not be known at compile time, and vary from system to system.
At the same time many developers worked on the idea of multi-tier programs, in which a "display" running on a desktop computer would use the services of a mainframe or minicomputer for data storage or processing. For instance, a program on a GUI-based computer would send messages to a minicomputer to return small samples of a huge dataset for display. Remote procedure calls (RPC) already handled these tasks, but there was no standard RPC system.
Soon the majority of the minicomputer and mainframe vendors instigated projects to combine the two, producing an OOP library format that could be used anywhere. Such systems were known as object libraries, or distributed objects, if they supported remote access (not all did). Microsoft's COM is an example of such a system for local use. DCOM, a modified version of COM, supports remote access.
For some time object libraries held the status of the "next big thing" in the programming world. There were a number of efforts to create systems that would run across platforms, and companies competed to try to get developers locked into their own system. Examples include IBM's System Object Model (SOM/DSOM), Sun Microsystems' Distributed Objects Everywhere (DOE), NeXT's Portable Distributed Objects (PDO), Digital's ObjectBroker, Microsoft's Component Object Model (COM/DCOM), and any number of CORBA-based systems.
Class libraries
Class libraries are the rough OOP equivalent of older types of code libraries. They contain classes, which describe characteristics and define actions (methods) that involve objects. Class libraries are used to create instances, or objects with their characteristics set to specific values. In some OOP languages, like Java, the distinction is clear, with the classes often contained in library files (like Java's JAR file format) and the instantiated objects residing only in memory (although potentially able to be made persistent in separate files). In others, like Smalltalk, the class libraries are merely the starting point for a system image that includes the entire state of the environment, classes and all instantiated objects.
Today most class libraries are stored in a package repository (such as Maven Central for Java). Client code explicitly declares its dependencies on external libraries in build configuration files (such as a Maven POM in Java).
Remote libraries
Another library technique uses completely separate executables (often in some lightweight form) and calls them using a remote procedure call (RPC) over a network to another computer. This maximizes operating system re-use: the code needed to support the library is the same code being used to provide application support and security for every other program. Additionally, such systems do not require the library to exist on the same machine, but can forward the requests over the network.
However, such an approach means that every library call requires a considerable amount of overhead. RPC calls are much more expensive than calling a shared library that has already been loaded on the same machine. This approach is commonly used in a distributed architecture that makes heavy use of such remote calls, notably client-server systems and application servers such as Enterprise JavaBeans.
Code generation libraries
Code generation libraries are high-level APIs that can generate or transform byte code for Java. They are used by aspect-oriented programming, some data access frameworks, and for testing to generate dynamic proxy objects. They also are used to intercept field access.
File naming
Most modern Unix-like systems
The system stores libfoo.a and libfoo.so files in directories such as /lib, /usr/lib or /usr/local/lib. The filenames always start with lib and end with a suffix of .a (archive, static library) or .so (shared object, dynamically linked library). Some systems may have multiple names for a dynamically linked library; these names typically share the same prefix and have different suffixes indicating the version number. Most of the names are symbolic links to the latest version. For example, on some systems libfoo.so.2 would be the filename for the second major interface revision of the dynamically linked library libfoo. The .la files sometimes found in the library directories are libtool archives, not usable by the system as such.
macOS
The system inherits static library conventions from BSD, with the library stored in a .a file, and can use .so-style dynamically linked libraries (with the .dylib suffix instead). Most libraries in macOS, however, consist of "frameworks", placed inside special directories called "bundles" which wrap the library's required files and metadata. For example, a framework called MyFramework would be implemented in a bundle called MyFramework.framework, with MyFramework.framework/MyFramework being either the dynamically linked library file or being a symlink to the dynamically linked library file in MyFramework.framework/Versions/Current/MyFramework.
Microsoft Windows
Dynamic-link libraries usually have the suffix *.DLL, although other file name extensions may identify specific-purpose dynamically linked libraries, e.g. *.OCX for OLE libraries. Interface revisions are either encoded in the file names or abstracted away using COM-object interfaces. Depending on how they are compiled, *.LIB files can be either static libraries or representations of dynamically linkable libraries needed only during compilation, known as "import libraries". Unlike in the UNIX world, which uses different file extensions, when linking against a .LIB file in Windows one must first know whether it is a regular static library or an import library. In the latter case, a .DLL file must be present at runtime.
Sign (mathematics)

In mathematics, the sign of a real number is its property of being either positive, negative, or 0. Depending on local conventions, zero may be considered as having its own unique sign, having no sign, or having both positive and negative sign. In some contexts, it makes sense to distinguish between a positive and a negative zero.
In mathematics and physics, the phrase "change of sign" is associated with exchanging an object for its additive inverse (multiplication with −1, negation), an operation which is not restricted to real numbers. It applies among other objects to vectors, matrices, and complex numbers, which are not prescribed to be only either positive, negative, or zero.
The word "sign" is also often used to indicate binary aspects of mathematical or scientific objects, such as odd and even (sign of a permutation), sense of orientation or rotation (cw/ccw), one sided limits, and other concepts described in below.
Sign of a number
Numbers from various number systems, like integers, rationals, complex numbers, quaternions, octonions, ... may have multiple attributes, that fix certain properties of a number. A number system that bears the structure of an ordered ring contains a unique number that, when added to any number, leaves the latter unchanged. This unique number is known as the system's additive identity element. For example, the integers have the structure of an ordered ring. This number is generally denoted as 0. Because of the total order in this ring, there are numbers greater than zero, called the positive numbers. Another property required for a ring to be ordered is that, for each positive number, there exists a unique corresponding number less than 0 whose sum with the original positive number is 0. These numbers less than 0 are called the negative numbers. The numbers in each such pair are their respective additive inverses. This attribute of a number, being exclusively either zero (0), positive (+), or negative (−), is called its sign, and is often encoded to the real numbers 0, 1, and −1, respectively (similar to the way the sign function is defined). Since rational and real numbers are also ordered rings (in fact ordered fields), the sign attribute also applies to these number systems.
When a minus sign is used between two numbers, it represents the binary operation of subtraction. When a minus sign is written before a single number, it represents the unary operation of yielding the additive inverse (sometimes called negation) of the operand. Abstractly then, the difference of two numbers is the sum of the minuend with the additive inverse of the subtrahend. While 0 is its own additive inverse (−0 = 0), the additive inverse of a positive number is negative, and the additive inverse of a negative number is positive. A double application of this operation is written as −(−3) = 3. The plus sign is predominantly used in algebra to denote the binary operation of addition, and only rarely to emphasize the positivity of an expression.
In common numeral notation (used in arithmetic and elsewhere), the sign of a number is often made explicit by placing a plus or a minus sign before the number. For example, +3 denotes "positive three", and −3 denotes "negative three" (algebraically: the additive inverse of 3). Without specific context (or when no explicit sign is given), a number is interpreted by default as positive. This notation establishes a strong association of the minus sign "−" with negative numbers, and the plus sign "+" with positive numbers.
Sign of zero
Within the convention of zero being neither positive nor negative, a specific sign-value of 0 may be assigned to the number value 0. This is exploited in the sgn function, as defined for real numbers. In arithmetic, +0 and −0 both denote the same number 0. There is generally no danger of confusing the value with its sign, although the convention of assigning both signs to 0 does not immediately allow for this discrimination.
In certain European countries, e.g. in Belgium and France, 0 is considered to be both positive and negative following the convention set forth by Nicolas Bourbaki.
In some contexts, such as floating-point representations of real numbers within computers, it is useful to consider signed versions of zero, with signed zeros referring to different, discrete number representations (see signed number representations for more).
The symbols +0 and −0 rarely appear as substitutes for 0+ and 0−, used in calculus and mathematical analysis for one-sided limits (right-sided limit and left-sided limit, respectively). This notation refers to the behaviour of a function as its real input variable approaches 0 along positive (resp., negative) values; the two limits need not exist or agree.
Terminology for signs
When 0 is said to be neither positive nor negative, the following phrases may refer to the sign of a number:
A number is positive if it is greater than zero.
A number is negative if it is less than zero.
A number is non-negative if it is greater than or equal to zero.
A number is non-positive if it is less than or equal to zero.
When 0 is said to be both positive and negative, modified phrases are used to refer to the sign of a number:
A number is strictly positive if it is greater than zero.
A number is strictly negative if it is less than zero.
A number is positive if it is greater than or equal to zero.
A number is negative if it is less than or equal to zero.
For example, the absolute value of a real number is always "non-negative", but is not necessarily "positive" in the first interpretation, whereas in the second interpretation, it is called "positive"—though not necessarily "strictly positive".
The same terminology is sometimes used for functions that yield real or other signed values. For example, a function would be called a positive function if its values are positive for all arguments of its domain, or a non-negative function if all of its values are non-negative.
Complex numbers
The complex numbers cannot be ordered in a way compatible with their arithmetic, so they cannot carry the structure of an ordered ring, and, accordingly, cannot be partitioned into positive and negative complex numbers. They do, however, share an attribute with the reals, which is called absolute value or magnitude. Magnitudes are always non-negative real numbers, and to any non-zero number there belongs a positive real number, its absolute value.
For example, the absolute value of 3 and the absolute value of −3 are both equal to 3. This is written in symbols as |3| = 3 and |−3| = 3.
In general, any real value can be specified by its magnitude and its sign: using the standard encoding, any real value is given by the product of its magnitude and its sign. This relation can be generalized to define a sign for complex numbers.
Since the real and complex numbers both form a field and contain the positive reals, they also contain the reciprocals of the magnitudes of all non-zero numbers. This means that any non-zero number may be multiplied with the reciprocal of its magnitude, that is, divided by its magnitude. It is immediate that the quotient of any non-zero real number by its magnitude yields exactly its sign. By analogy, the sign of a complex number z can be defined as the quotient of z and its magnitude |z|. The sign of a non-zero complex number is the exponential of the product of its argument with the imaginary unit, and represents in some sense its complex argument. This is to be compared to the sign of real numbers, for which the argument is restricted to 0 (positive) or π (negative). For the definition of a complex sign function, see below.
Sign functions
When dealing with numbers, it is often convenient to have their sign available as a number. This is accomplished by functions that extract the sign of any number, and map it to a predefined value before making it available for further calculations. For example, it might be advantageous to formulate an intricate algorithm for positive values only, and take care of the sign only afterwards.
Real sign function
The sign function or signum function extracts the sign of a real number, by mapping the set of real numbers to the set of the three reals {−1, 0, 1}. It can be defined as follows:

sgn(x) = −1 if x < 0, sgn(x) = 0 if x = 0, and sgn(x) = 1 if x > 0.
Thus sgn(x) is 1 when x is positive, and sgn(x) is −1 when x is negative. For non-zero values of x, this function can also be defined by the formula

sgn(x) = x / |x|,

where |x| is the absolute value of x.
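The piecewise definition above translates directly into code; this short sketch also checks the x/|x| identity for a non-zero input:

```python
def sign(x: float) -> int:
    """The sgn function: return -1, 0, or 1 according to the sign of x."""
    if x > 0:
        return 1
    if x < 0:
        return -1
    return 0

# For non-zero x, sign(x) agrees with x / abs(x):
assert sign(-2.5) == -2.5 / abs(-2.5)
assert sign(7.0) == 7.0 / abs(7.0)
assert sign(0.0) == 0
```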
Complex sign function
While a real number has a 1-dimensional direction, a complex number has a 2-dimensional direction. The complex sign function requires the magnitude of its argument z = x + iy, which can be calculated as |z| = √(x² + y²).
Analogous to above, the complex sign function extracts the complex sign of a complex number by mapping the set of non-zero complex numbers to the set of unimodular complex numbers, and 0 to 0. It may be defined as follows:

Let z also be expressed by its magnitude and one of its arguments φ as z = |z|·e^(iφ). Then

sgn(z) = 0 if z = 0, and sgn(z) = z/|z| = e^(iφ) otherwise.
This definition may also be recognized as a normalized vector, that is, a vector whose direction is unchanged and whose length is fixed to unity. If the original value is written in polar form as R∠θ, then its sign is 1∠θ. Extending sign() or signum() to any number of dimensions is obvious, but this has already been defined as normalizing a vector.
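The quotient definition sgn(z) = z/|z| can be checked numerically against the exponential form e^(iφ) using Python's complex arithmetic:

```python
import cmath

def csign(z: complex) -> complex:
    """Complex sign: z/|z| (a unimodular number) for z != 0, and 0 for z == 0."""
    return z / abs(z) if z != 0 else 0

z = 3 + 4j                       # |z| = 5, so csign(z) = 0.6 + 0.8j
s = csign(z)

# The result is unimodular and equals e^{i*arg(z)}:
assert abs(abs(s) - 1.0) < 1e-12
assert abs(s - cmath.exp(1j * cmath.phase(z))) < 1e-12
assert csign(0) == 0
```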
Signs per convention
In situations where there are exactly two possibilities on equal footing for an attribute, these are often labelled by convention as plus and minus, respectively. In some contexts, the choice of this assignment (i.e., which range of values is considered positive and which negative) is natural, whereas in other contexts, the choice is arbitrary, making an explicit sign convention necessary, the only requirement being consistent use of the convention.
Sign of an angle
In many contexts, it is common to associate a sign with the measure of an angle, particularly an oriented angle or an angle of rotation. In such a situation, the sign indicates whether the angle is in the clockwise or counterclockwise direction. Though different conventions can be used, it is common in mathematics to have counterclockwise angles count as positive, and clockwise angles count as negative.
It is also possible to associate a sign to an angle of rotation in three dimensions, assuming that the axis of rotation has been oriented. Specifically, a right-handed rotation around an oriented axis typically counts as positive, while a left-handed rotation counts as negative.
An angle which is the negative of a given angle has an equal arc, but the opposite axis.
Sign of a change
When a quantity x changes over time, the change in the value of x is typically defined by the equation

Δx = x_final − x_initial.
Using this convention, an increase in x counts as positive change, while a decrease of x counts as negative change. In calculus, this same convention is used in the definition of the derivative. As a result, any increasing function has positive derivative, while any decreasing function has negative derivative.
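The convention is trivial to state in code; the function name here is illustrative:

```python
def delta(x_initial: float, x_final: float) -> float:
    """Change in x over an interval: positive for an increase, negative for a decrease."""
    return x_final - x_initial

assert delta(2.0, 5.0) > 0   # increase -> positive change
assert delta(5.0, 2.0) < 0   # decrease -> negative change
assert delta(3.0, 3.0) == 0  # no change
```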
Sign of a direction
When studying one-dimensional displacements and motions in analytic geometry and physics, it is common to label the two possible directions as positive and negative. Because the number line is usually drawn with positive numbers to the right, and negative numbers to the left, a common convention is for motions to the right to be given a positive sign, and for motions to the left to be given a negative sign.
On the Cartesian plane, the rightward and upward directions are usually thought of as positive, with rightward being the positive x-direction, and upward being the positive y-direction. If a displacement vector is separated into its vector components, then the horizontal part will be positive for motion to the right and negative for motion to the left, while the vertical part will be positive for motion upward and negative for motion downward.
Likewise, a negative speed (rate of change of displacement) implies a velocity in the opposite direction, i.e., receding instead of advancing; a special case is the radial speed.
In 3D space, notions related to sign can be found in the two normal orientations and orientability in general.
Signedness in computing
In computing, an integer value may be either signed or unsigned, depending on whether the computer is keeping track of a sign for the number. By restricting an integer variable to non-negative values only, one more bit can be used for storing the value of a number. Because of the way integer arithmetic is done within computers, signed number representations usually do not store the sign as a single independent bit, instead using e.g. two's complement.
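How two's complement represents negative values can be sketched by reinterpreting an unsigned bit pattern; the helper below is illustrative, not part of any standard library:

```python
def from_twos_complement(raw: int, bits: int = 8) -> int:
    """Interpret an unsigned 'bits'-wide bit pattern as a two's-complement
    signed integer: if the high bit is set, the value is negative."""
    if raw >= 1 << (bits - 1):   # high (sign) bit set
        return raw - (1 << bits)
    return raw

assert from_twos_complement(0b01111111) == 127   # high bit clear: value as-is
assert from_twos_complement(0b11111111) == -1    # all ones: -1
assert from_twos_complement(0b10000000) == -128  # most negative 8-bit value
```

Note that the sign is not a separate, independent bit here: the same subtraction rule covers every negative value, which is what makes two's-complement addition hardware-friendly.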
In contrast, real numbers are stored and manipulated as floating point values. The floating point values are represented using three separate fields: sign, exponent, and mantissa. Given this separate sign bit, it is possible to represent both positive and negative zero. Most programming languages normally treat positive zero and negative zero as equivalent values, although they provide means by which the distinction can be detected.
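Python's floats follow this model: the two zeros compare equal, yet the sign bit can be detected, for example with math.copysign:

```python
import math

pos_zero = 0.0
neg_zero = -0.0

# The two zeros compare equal...
assert pos_zero == neg_zero

# ...but the sign bit is still stored, and copysign exposes it:
assert math.copysign(1.0, pos_zero) == 1.0
assert math.copysign(1.0, neg_zero) == -1.0
```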
Other meanings
In addition to the sign of a real number, the word sign is also used in various related ways throughout mathematics and other sciences:
Words up to sign mean that, for a quantity q, it is known that either Q = q or Q = −q for a certain Q. It is often expressed as Q = ±q. For real numbers, it means that only the absolute value |q| of the quantity is known. For complex numbers and vectors, a quantity known up to sign is a stronger condition than a quantity with known magnitude: aside from q and −q, there are many other possible values of Q such that |Q| = |q|.
The sign of a permutation is defined to be positive if the permutation is even, and negative if the permutation is odd.
In graph theory, a signed graph is a graph in which each edge has been marked with a positive or negative sign.
In mathematical analysis, a signed measure is a generalization of the concept of measure in which the measure of a set may have positive or negative values.
The concept of signed distance is used to convey side, inside or out.
The ideas of signed area and signed volume are sometimes used when it is convenient for certain areas or volumes to count as negative. This is particularly true in the theory of determinants. In an (abstract) oriented vector space, each ordered basis for the vector space can be classified as either positively or negatively oriented.
In a signed-digit representation, each digit of a number may have a positive or negative sign.
In physics, any electric charge comes with a sign, either positive or negative. By convention, a positive charge is a charge with the same sign as that of a proton, and a negative charge is a charge with the same sign as that of an electron.
Restoring force

In physics, the restoring force is a force that acts to bring a body back to its equilibrium position. The restoring force is a function only of the position of the mass or particle, and it is always directed back toward the equilibrium position of the system. The restoring force is often referred to in simple harmonic motion; it is also the force responsible for restoring an object's original size and shape.
An example is the action of a spring. An idealized spring exerts a force proportional to the amount of deformation of the spring from its equilibrium length, directed so as to oppose the deformation. Pulling the spring to a greater length causes it to exert a force that brings the spring back toward its equilibrium length. The amount of force can be determined by multiplying the spring constant, a characteristic of the spring, by the amount of stretch; this relationship is known as Hooke's law.
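Hooke's law for an ideal spring is F = −kx, where k is the spring constant and x the displacement from equilibrium; the minus sign is what makes the force restoring. A minimal numerical sketch (values chosen for illustration):

```python
def spring_force(k: float, x: float) -> float:
    """Restoring force of an ideal spring (Hooke's law): F = -k*x.
    k: spring constant (N/m); x: displacement from equilibrium (m).
    The minus sign makes the force oppose the displacement."""
    return -k * x

k = 100.0  # N/m (illustrative value)

# Stretched spring (x > 0): force pulls back toward equilibrium (negative).
assert spring_force(k, 0.05) < 0
# Compressed spring (x < 0): force pushes back (positive).
assert spring_force(k, -0.05) > 0
# At equilibrium there is no restoring force.
assert spring_force(k, 0.0) == 0.0
```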
Another example is a pendulum. When a pendulum is not swinging, all the forces acting on it are in equilibrium: the force due to gravity on the mass at the end of the pendulum is balanced by the tension in the string holding it up. When a pendulum is put in motion, the place of equilibrium is at the bottom of the swing, the location where the pendulum rests. When the pendulum is at the top of its swing, the force returning it to this midpoint is gravity. As a result, gravity may be seen as a restoring force.
Montreal Metro

The Montreal Metro is a rubber-tired underground rapid transit system serving Greater Montreal, Quebec, Canada. The metro, operated by the Société de transport de Montréal (STM), was inaugurated on October 14, 1966, during the tenure of Mayor Jean Drapeau.
It has expanded since its opening from 22 stations on two lines to 68 stations on four lines totalling in length, serving the north, east and centre of the Island of Montreal with connections to Longueuil, via the Yellow Line, and Laval, via the Orange Line.
The Montreal Metro is Canada's busiest rapid transit system in terms of daily ridership, delivering an average of daily unlinked passenger trips per weekday as of . It is North America's third busiest rapid transit system, behind the New York City Subway and Mexico City Metro. In , trips on the Metro were completed. With the Metro and the newer driverless, steel-wheeled Réseau express métropolitain, Montreal has one of North America's largest urban rapid transit systems, attracting the second-highest ridership per capita behind New York City.
History
Urban transit began in Montreal in 1861 when a line of horse-drawn cars started to operate on Craig (now St-Antoine) and Notre-Dame streets. Eventually, as the city grew, a comprehensive network of streetcar lines provided service in most of the city. But urban congestion started to take its toll on streetcar punctuality, so the idea of an underground system was soon considered.
Fifty years of projects
In 1902, as European and American cities were inaugurating their first subway systems, the Canadian federal government created the Montreal Subway Company to promote the idea in Canada.
Starting in 1910, many proposals were tabled but the Montreal Metro would prove to be an elusive goal. The Montreal Street Railway Company, the Montreal Central Terminal Company and the Montreal Underground and Elevated Railway Company all undertook fruitless negotiations with the city. A year later, the Comptoir Financier Franco-Canadien and the Montreal Tunnel Company proposed tunnels under the city centre and the Saint-Lawrence River to link the emerging South Shore neighbourhoods but faced the opposition of railway companies. The Montreal Tramways Company (MTC) was the first to receive the approval of the provincial government, in 1913, with four years allotted to start construction. The reluctance of elected city officials to advance funds foiled this first attempt.
The issue of a subway remained present in the newspapers but World War I and the following recession prevented any execution. The gradual return to financial health during the 1920s brought the MTC project back and attracted support from the premier of Quebec. This new attempt was stalled by the Great Depression, which saw the city's streetcar ridership atrophy. A subway proposal was next made by Mayor Camillien Houde in 1939 as a way to provide work for the jobless masses.
World War II and the war effort in Montreal resurrected the idea of a metro. In 1944, the MTC proposed a two-line network, with one line running underneath Saint Catherine Street and the other under Saint Denis, Notre-Dame and Saint Jacques Streets. In 1953, the newly formed public Montreal Transportation Commission replaced streetcars with buses and proposed a single subway line reusing the 1944 plans and extending it all the way to Boulevard Crémazie, right by the D'Youville maintenance shops. By this point, construction was already well underway on Canada's first subway line in Toronto under Yonge Street, which would open in 1954. Still, Montreal councillors remained cautious and no work was initiated. For some of them, including Jean Drapeau during his first municipal term, public transit was a thing of the past.
In 1959, a private company, the Société d'expansion métropolitaine, offered to build a rubber-tired metro but the Transportation Commission wanted its own network and rejected the offer. This would be the last missed opportunity, for the re-election of Jean Drapeau as mayor and the arrival of his right-hand man, Lucien Saulnier, would prove decisive. In the early 1960s, the Western world experienced an economic boom and Quebec underwent its Quiet Revolution. From August 1, 1960, many municipal services reviewed the project and on November 3, 1961, the Montreal City Council voted appropriations amounting to $132 million ($1.06 billion in 2016) to construct and equip an initial network in length.
Construction
The 1961 plan reused several previous studies and planned three lines carved into the rock under the city centre to the most populated areas of the city. The City of Montreal (and its chief engineer, Lucien L'Allier) was assisted in the detailed design and engineering of the Metro by French consultant SOFRETU, owned by the operator of the Paris Métro. The French influence is clearly seen in the station design and rolling stock of the Metro. Rubber tires were chosen instead of steel ones, following the Parisian influence, as rubber-tired trains can climb steeper grades and accelerate faster. 80% of the tunnels were bored through rock, as opposed to the traditional cut-and-cover method used for the construction of the Yonge Subway in Toronto.
The first two lines
The main line, or Line 1 (Green Line), was to pass between the two most important arteries, Saint Catherine and Sherbrooke streets, more or less under De Maisonneuve Boulevard. It would extend between the English-speaking west at Atwater station and the French-speaking east. Line 2 (Orange Line) was to run from north of the downtown, from Crémazie station through various residential neighbourhoods to the business district at Place-d'Armes station.
Construction of the first two lines began May 23, 1962, under the supervision of the Director of Public Works, Lucien L'Allier. On June 11, 1963, the construction costs for tunnels being lower than expected, Line 2 (Orange Line) was extended by two stations at each end and the new termini became the and stations. The project, which employed more than 5,000 workers at its height, and cost the lives of 12 of them, ended on October 14, 1966. The service was opened gradually between October 1966 and April 1967 as the stations were completed.
Cancellation of Line 3
A third line was planned. It was to use Canadian National Railway (CN) tracks passing under Mount Royal to reach the northwest suburb of Cartierville from the city centre. Unlike the previous two lines, its trains were to run partly above ground. Negotiations with CN and the municipalities were stalling when, in November 1962, Montreal was chosen to hold the 1967 Universal Exposition (Expo 67). Forced to make a choice, the city decided that a number 4 line (Yellow Line) linking Montreal to the South Shore suburbs, following a plan similar to those proposed early in the 20th century, was more necessary.
Line 3 was never built and the number was never used again. The railway, already used for a commuter train to the North Shore at Deux-Montagnes, was completely renovated in the early 1990s and effectively replaced the planned third line. The next line would thus be numbered 5 (Blue Line). Subsequently, elements of the line, particularly the Deux-Montagnes commuter train, became the first line of the Réseau Express Métropolitain.
Expo 67
The Montreal municipal administration asked the municipalities of the South Shore of the Saint Lawrence River which would be interested in a Metro link, and Longueuil was chosen. Line 4 (Yellow Line) would therefore pass under the river, from Berri-de-Montigny station, the junction of Line 1 (Green Line) and Line 2 (Orange Line), to Longueuil. A stop was added in between to access the site of Expo 67, built on two islands of the Hochelaga Archipelago in the river. Saint Helen's Island, on which the station of the same name was built, was massively enlarged and consolidated with several nearby islands (including Ronde Island) using backfill excavated during the construction of the Metro. The adjacent Notre Dame Island was created from scratch with the same material. Line 4 (Yellow Line) was completed on April 1, 1967, in time for the opening of the World's Fair.
The first Metro network was completed with the public opening of Line 4 (Yellow Line) on April 28, 1967. The cities of Montreal, Longueuil and Westmount had assumed the entire construction and equipment cost of $213.7 million ($1.6 billion in 2016). Montreal became the seventh city in North America to operate a subway. The 1960s being very optimistic years, Metro planning did not escape the general exuberance of the time: a 1967 study, "Horizon 2000", imagined an expansive network of tunnels for the year 2000.
Extensions and unbuilt lines
In 1970, the Montreal Urban Community (MUC) was created. It was made up of the municipalities on the Island of Montreal, with the city of Montreal as its largest member. The MUC's mission was to provide standardized services at a regional level, one of them being transportation. The MUC Transportation Commission was created at the same time to serve as prime contractor for the Metro extensions. It merged all of the island's transport companies and became the Société de transport de la communauté urbaine de Montréal (STCUM) in 1985 and then the Société de transport de Montréal (STM) in 2002.
Montreal Olympics
The success of the Metro increased the pressure to extend the network to other populated areas, including the suburbs on the Island of Montreal. After being awarded, in May 1970, the 1976 Summer Olympics, a loan of $430 million ($2.7 billion in 2016) was approved by the MUC on February 12, 1971, to fund the extensions of Line 1 (Green Line) and Line 2 (Orange Line) and the construction of a transverse line: Line 5 (Blue Line). The Government of Quebec agreed to bear 60% of the costs.
Work on the extensions started on October 14, 1971, with Line 1 (Green Line) extended eastward to reach the site where the Olympic Stadium was to be built and Autoroute 25 ( station), which could serve as a transfer point for visitors arriving from outside the city. The extensions were an opportunity to improve the network with new trains, larger stations and even semi-automatic control. The first extension was completed in June 1976, just before the Olympics. Line 1 (Green Line) was later extended to the southwest to reach the suburbs of Verdun and LaSalle, with a terminus station named after the park and zoo. This segment opened in September 1978.
In the process, further extensions were planned, and by 1975 spending was expected to reach $1.6 billion ($7.3 billion in 2016). Faced with these soaring costs, the Government of Quebec declared a moratorium on May 19, 1976, on the all-out expansion desired by Mayor Jean Drapeau. Tenders were frozen, including those of Line 2 (Orange Line) beyond the station and those of Line 5 (Blue Line), whose works were already underway. A struggle then ensued between the MUC and the Government of Quebec, as no extension could proceed without the agreement of both parties. The Montreal Transportation Office may have tried to present the government with a fait accompli by awarding large contracts to build the tunnel between station and Bois-Franc station just before the moratorium took force.
Moratorium on expansion
In 1977, the newly elected government partially lifted the moratorium on the extension of Line 2 (Orange Line) and the construction of Line 5 (Blue Line). In 1978, the STCUM proposed a map featuring a western extension of Line 5 (Blue Line), with stations in N.D.G., Montreal West, Ville St. Pierre, Lachine, LaSalle, and potentially beyond.
Line 2 (Orange Line) was gradually extended westward, to station in 1980 and to station in 1981, service being extended as the stations were completed. In December 1979, Quebec presented its "integrated transport plan", in which Line 2 (Orange Line) was to be tunnelled to Du Collège station and Line 5 (Blue Line) from station to Anjou station. The plan proposed no other underground lines, as the government preferred converting existing railway lines into surface Metro lines. The mayors of the MUC, initially reluctant, accepted this plan when Quebec promised in February 1981 to fully finance future extensions. The moratorium was then partially lifted on Line 2 (Orange Line), which reached Du Collège station in 1984 and finally station in 1986. The line took the shape of a "U", linking the north of the island to the city centre and serving two very populous corridors.
The various moratoriums and technical difficulties encountered during construction stretched the fourth line's project over fourteen years. Line 5 (Blue Line), which runs through the centre of the island of Montreal, crossed the east branch of Line 2 (Orange Line) at the station in 1986 and its west branch at Snowdon station in 1988. Because it was not crowded, the STCUM at first operated Line 5 (Blue Line) on weekdays only, from 5:30 am to 7:30 pm, and ran only three-car trains instead of the nine-car trains used on the other lines. Students from the University of Montreal, the line's main source of riders, obtained an extension of the closing time to 11:10 pm and then to 12:15 am in 2002.
Recession and unfinished projects
By the late 1980s, the original network length had nearly quadrupled in twenty years and exceeded that of Toronto, but the plans did not stop there. In its 1983–1984 scenario, the MUC planned a new underground Metro Line 7 (White Line) ( station to Montréal-Nord) and several surface lines: Line 6 (Du College station to Repentigny), Line 8 ( station to Pointe-aux-Trembles), Line 10 (Vendome station to Lachine) and Line 11 ( terminus to LaSalle). In 1985, a new government in Quebec rejected the project, replacing the Metro lines with commuter train lines in its own 1988 transport plan. Yet with the provincial elections of 1989 approaching, the Line 7 (White Line) project reappeared, and extensions of Line 5 (Blue Line) to Anjou (Pie-IX, Viau, Lacordaire, Langelier and Galeries d'Anjou) and of Line 2 (Orange Line) northward (Deguire/Poirier, Bois-Franc and Salaberry) were announced.
At the beginning of the 1990s, Canada, and Quebec in particular, faced large public deficits and an economic recession. Metro ridership decreased and the Government of Quebec removed subsidies for the operation of urban public transport. Faced with this situation, the extension projects were put on hold and the MUC prioritized the renovation of its infrastructure.
Creation of AMT, RTM, ARTM, and improvements
In 1996, the Government of Quebec created a supra-municipal agency, the Agence métropolitaine de transport (AMT), whose mandate was to coordinate the development of transport throughout the Greater Montreal area. The AMT was responsible for, among other things, the development of the Metro and the suburban trains.
On June 1, 2017, the AMT was disbanded and replaced by two distinct agencies by the Loi 76 (English: Law 76), the Autorité régionale de transport métropolitain (ARTM), mandated to manage and integrate road transport and public transportation in Greater Montreal; and the Réseau de transport métropolitain (RTM, publicly known as exo), which took over all operations from the former Agence métropolitaine de transport. RTM now operates Montreal's commuter rail and metropolitan bus services, and is the second busiest such system in Canada after Toronto's GO Transit.
Laval extension
Announced in 1998 by the STCUM, the project to extend Line 2 (Orange) past the Henri-Bourassa terminus to the city of Laval, passing under the Rivière des Prairies, was launched on March 18, 2002. The extension was decided and funded by the Government of Quebec. The AMT received the mandate to implement it, but ownership and operation of the line stayed with the Société de transport de Montréal (successor to the STCUM). When the work was completed, the extension opened to the public on April 28, 2007, adding three stations in Laval (, De la Concorde and Montmorency) to the network. As of 2009, ridership had increased by 60,000 trips a day with these new stations.
Major renovations
Since 2004, most of the STM's investments have been directed to rolling stock and infrastructure renovation programs. New trains (MPM-10) have been delivered, replacing the older MR-63 trains. Tunnels are being repaired, and several stations, including , have undergone years-long rehabilitations. By 2016, many surface electrical and ventilation structures had been completely rebuilt to modern standards. In 2020, work to install cellular coverage in the Metro was completed. Station accessibility has also been improved, with more than 26 of the 68 stations having had elevators installed since 2007.
Réseau express métropolitain
In August 2023, the first phase of the Réseau express métropolitain (REM) opened between Gare Centrale and Brossard. The system is independent of, but connects to and hence complements, the Metro. Built by CDPQ Infra, part of the Quebec pension fund Caisse de dépôt et placement du Québec, the line will eventually run north-south across Montreal, with interchanges with the Metro at Gare Centrale (Bonaventure), McGill and Édouard-Montpetit.
Future growth
Blue line extension to Anjou
Following the opening of Line 5 (Blue) in the 1980s, various governments have proposed extending the line east to Anjou. In 2013, a proposal to extend the line to Anjou was announced by the STM and the Quebec government. On April 9, 2018, premier of Quebec Philippe Couillard and Prime Minister Justin Trudeau announced their commitment to fund and complete the extension, then planned to open in 2026. In March 2022, it was announced that the federal government had agreed to provide $1.3 billion to the extension, with further costs to be covered by the provincial government.
The extension will include five new stations, two bus terminals, a pedestrian tunnel connecting to the Pie-IX BRT and a new park-and-ride. Overall, the project is estimated to cost around $5.8 to $6.4 billion and is scheduled to be completed in 2030. Initial construction work began in August 2022.
Pink Line
In 2017, Valérie Plante proposed the Pink Line as part of her campaign for the office of Mayor of Montreal. The new route would have 29 stations and would primarily link northeastern Montreal with the downtown area, as well as the western end of NDG and Lachine. The project has since been added to Quebec's 10-year infrastructure plan, and feasibility studies for the line's western section began in June 2021.
Network
The Montreal Metro consists of four lines, which are usually identified by their colour or terminus station. The terminus station in the direction of travel is used to differentiate between directions.
Lines and operation
The Yellow Line is the shortest line, with three stations, built for Expo 67. Metro lines that leave the Île de Montréal are the Orange Line, which continues to Laval, and the Yellow Line, which continues to Longueuil.
Metro service starts at 05:30, and the last trains begin their runs between 00:30 and 01:00 on weekdays and Sundays, and between 01:00 and 01:30 on Saturdays. During rush hour, trains run every two to four minutes on the Orange and Green Lines. Headways lengthen to 12 minutes late at night.
Fares
The Société de transport de Montréal (STM) operates Metro and bus services in Montreal, and transfers between the two are free inside a 120-minute time frame after the first validation.
On July 1, 2022, the ARTM reorganized its fare system into four zones: A, B, C, and D. The island of Montreal was placed in zone A, and fares for zones B, C and D can be bought separately or together. Metro fares are fully integrated with the Exo commuter rail system, which links the metropolitan area to the outer suburbs via six interchange stations (, , , De la Concorde, , and Parc), and with the Réseau express métropolitain (REM), which opened in August 2023. Exo, REM and Metro fares for zone A are valid only on the island of Montreal. To take Exo, REM or Metro trains from Montreal to Laval (zone B), riders must hold the corresponding fare for that zone, for example an all-modes AB fare.
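The zone rule described above can be sketched as a simple lookup. This is an illustrative sketch only: the zone assignments and fare names below are hypothetical simplifications, not official ARTM fare data.

```python
# Hedged sketch of the ARTM zone logic described above.
# Zone assignments and fare names are illustrative, not official ARTM data.
ZONES = {"Montreal": "A", "Laval": "B", "Longueuil": "B"}

def required_fare(origin_city: str, destination_city: str) -> str:
    """Return the fare product covering every zone the trip touches."""
    origin_zone = ZONES[origin_city]
    destination_zone = ZONES[destination_city]
    # A trip within zone A needs only a zone-A fare; a trip spanning
    # zones A and B needs a combined "all modes AB" fare.
    covered = sorted({origin_zone, destination_zone})
    return "All modes " + "".join(covered)

print(required_fare("Montreal", "Montreal"))  # All modes A
print(required_fare("Montreal", "Laval"))     # All modes AB
```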
Fare payment is via a barrier system accepting magnetic tickets and contactless smart cards. The rechargeable contactless Opus card, unveiled on April 21, 2008, integrates seamlessly with the transit networks of neighbouring cities by holding multiple fare products at once: single tickets, books of tickets, Montreal-only subscriptions and commuter train tickets. Moreover, unlike the magnetic stripe cards, which were sold alongside the new Opus cards until May 2009, the contactless cards are not at risk of being demagnetized and rendered useless, and do not require patrons to slide them through a reader.
Since 2015, customers have been able to purchase an Opus card reader to recharge their personal card online from a computer. In April 2024, the ARTM added an option to recharge an Opus card directly from the Chrono mobile app. As of 2016, the STM was developing a smartphone application featuring NFC technology that could replace the Opus card.
MétroVision
Metro stations are equipped with MétroVision information screens displaying advertising, news headlines and MétéoMédia weather information, as well as STM-specific information regarding service changes, service delays and other information about using the system. By the end of 2014, the STM had installed screens in all 68 stations. Berri–UQAM station was the first station to have these screens installed.
Ridership
Montreal Metro ridership has more than doubled since it opened, growing from 136 million passengers in 1967 to 357 million in 2014. Montreal has one of North America's busiest public transportation systems, with the largest ridership relative to population after New York. This growth was not continuous, however: ridership declined during periods of the late 1960s and early 1990s. From 1996 to 2015, the number of passengers grew. Today, portions of the busiest lines, such as Line 1 between Berri–UQAM and McGill stations and Line 2 between Jean-Talon and Champ-de-Mars, experience overcrowding during peak hours. It is not uncommon for travellers in these sections to let several trains pass before being able to board. Conditions at these stations worsen in summer because of the lack of air conditioning and the heat generated by the trains.
In 2014, the five most popular stations (in millions of inbound travellers) were (12.8), (11.1), (8.1), (8.1) and (7.6); all of these but Côte-Vertu are located downtown. The least busy station is , with 773,078 entries in 2011.
Funding
The network operations funding (maintenance, equipment purchase and salaries) is provided by the STM. Tickets and subscriptions cover only 40% of the actual operational costs, with the shortfall offset by the urban agglomeration of Montreal (28%), the Montreal Metropolitan Community (5%) and the Government of Quebec (23%).
The STM does not keep separate accounts for Metro and bus services, so the following figures cover both activities. In 2016, direct operating revenue planned by the STM totalled $667 million, supplemented by $513 million from the city and $351 million from Quebec to compensate for reduced fares. Of its $1.53 billion budget, salaries account for 57% of expenditures, followed by financial expenses (22%) resulting from a $2.85 billion debt. For the Metro alone, wages represented 75% of the $292 million in operating costs, ahead of electricity costs (9%).
Heavy investment (network extensions) is entirely funded by the provincial government. Renovations and service improvements are subsidized up to 100% by the Government of Canada, the province and the urban agglomeration. For example, 74% of the rolling stock replacement cost is paid for by Quebec while 33% of the bill for upgrades to ventilation structures is covered by the federal government. Small investments to maintain the network in working order remain entirely the responsibility of the STM.
Security
Montreal Metro facilities are patrolled daily by 155 STM inspectors and 115 agents of the Montreal Police Service (SPVM) assigned to the subway. They are in contact with the command centre of the Metro which has 2,000 cameras distributed on the network, coupled with a computerized visual recognition system.
On station platforms, emergency points are available with a telephone connected to the command centre, an emergency power supply cut-off switch and a fire extinguisher. The power supply system is segmented into short sections that can be independently powered, so that following an incident a single train can be stopped while the others reach the nearest station.
In tunnels, a raised walkway at train floor level facilitates evacuation and allows people to move without walking on the tracks. Every 15 meters, directions are indicated by illuminated green signs. Every 150 meters, emergency stations with telephones, power switches and fire hoses can be found. Emergency exits reach the surface at the ventilation shaft locations in the older tunnels, and every 750 meters in the more recent tunnel sections (Laval).
On the surface, blue fire hydrants in the streets are dry risers connected to the Metro's fire control system. If a fire breaks out in the tunnels, firefighters connect a red fire hydrant to the blue terminals to supply water to the Metro system. Keeping the two systems separate prevents accidental flooding.
Station design
The design of the Metro was heavily influenced by Montreal's winter conditions. Unlike other cities' subways, nearly all station entrances in Montreal are set back from the sidewalk and completely enclosed; usually in small, separate buildings or within building facades. They are equipped with swivelling "butterfly" doors meant to mitigate the wind caused by train movements that can make doors difficult to open. The entire system runs underground and some stations are directly connected to buildings, making the Metro an integral part of Montreal's Underground City.
The network has 68 stations, four of which have connections between Metro lines, and five connect to the commuter train network. They are mostly named after streets adjacent to them.
The average distance between stations is , with a minimum in the city centre between and stations and a maximum between and stations of . Average station depth is . The deepest station of the network, , has its bound platform located underground. The shallowest stations are and Longueuil-Université-de-Sherbrooke terminus, below surface.
Platforms, long and at least wide, are positioned on either side of the tracks, except in the , and stations, where they are superimposed to facilitate transfers between lines in certain directions. and De l'Eglise stations were designed with stacked platforms for engineering reasons, the bedrock in their area (shale) being too brittle for a station with a larger footprint. The terminus stations of future extensions could be equipped with central platforms to accommodate a turning loop.
Architectural design and public art
The Montreal Metro is renowned for its architecture and public art. Under the direction of Drapeau, a competition among Canadian architects was held to decide the design of each station, ensuring that every station was built in a different style by a different architect. Several stations, such as , are important examples of modernist architecture, and various system-wide design choices were informed by the International Style. However, numerous interventions, such as the installation of public telephones and loudspeakers, with visible wiring, have had a significant impact on the elegance of many stations.
Along with the Stockholm metro, Montreal pioneered the installation of public art in the Metro among capitalist countries, a practice that beforehand was mostly found in socialist and communist nations (the Moscow Metro being a case in point). More than fifty stations are decorated with over one hundred works of public art, such as sculpture, stained glass, and murals by noted Quebec artists, including members of the famous art movement, the Automatistes.
Some of the most important works in the Metro include the stained-glass window at station, the masterpiece of major Quebec artist Marcelle Ferron; and the Guimard entrance at Square-Victoria-OACI station, largely consisting of parts from the famous entrances designed for the Paris Métro, on permanent loan since 1966 by the RATP to commemorate its cooperation in constructing the Metro. Installed in 1967 (the 100th anniversary of Hector Guimard's birth), this is the only authentic Guimard entrance in use outside Paris.
Accessibility
The Montreal Metro was a late adopter of accessibility compared to many metro systems (including those older than the Metro), much to the dismay and criticism of accessibility advocates in Montreal. The first accessible stations on the system were the three stations in Laval, , De la Concorde and , which opened in 2007 as part of the Orange Line extension. Four existing stations, , , and were made accessible between 2009 and 2010.
, there were 27 accessible stations on the system, most of which are on the Orange Line. All interchange stations between subway lines are accessible, but is currently accessible only for the Orange and Green lines. As of May 2022, work was underway at Berri–UQAM to make the station fully accessible.
In 2015, the new McGill University Health Centre mega-hospital opened adjacent to station, with a new underground pedestrian tunnel to link the hospital to the station. However, the STM was criticized as many visitors to the hospital have reduced mobility and the station was not accessible. Work began in 2017 to make the station accessible; it was completed in 2021.
The Montreal Metro aims to have over 30 accessible stations by 2025, 41 stations by 2030, and expects all subway stations to be accessible by 2038. In comparison, the Toronto subway (first opened in 1954) will be fully accessible by 2025, and all Vancouver SkyTrain stations have been accessible from that system's opening in 1985, save for Granville station, which became accessible in 2006.
Rolling stock
The Montreal Metro's car fleet uses rubber tires instead of steel wheels. As the Metro runs entirely underground, the cars and the electrical system are not weatherproof. The trains are wide, narrower than the trains used by most other North American subway systems. This narrow width allowed the use of single tunnels (for both tracks) in construction of the Metro lines.
The first generation of rolling stock in Montreal went beyond simply adopting the MP 59 car from the Paris Métro. North American cities building metro systems in the 1960s and 1970s (Washington, D.C., San Francisco and Atlanta) sought modern rolling stock that not only best fit their needs but also embodied a shift in industrial design toward aesthetics and performance. Until June 2018, some of the Montreal trains were among the oldest North American subway trains in service – the Canadian Vickers MR-63 dating back to the system's opening in 1966 – but extended longevity is expected of rolling stock operated under fully sheltered conditions.
Unlike the subway cars of most metro systems in North America, but like those of most of Europe, Montreal's cars do not have air conditioning. In summer, the lack of cooled air can make trips uncomfortable for passengers. According to the STM, because the Metro runs entirely underground, the waste heat rejected by air conditioning would raise tunnel temperatures too high for the trains to operate.
Models
Current
Bombardier Transportation MR-73, introduced in 1976. Once used mainly on the Orange Line, they migrated to the Green Line as the MR-63 were retired. They are now the sole rolling stock on the Blue and Yellow Lines, and run alongside the MPM-10 on the Green Line during weekday rush hours.
Bombardier-Alstom MPM-10, named "Azur" by the public in 2012, entered service in 2016. The order completely replaced the outgoing MR-63 model. They use an open gangway design that allows passengers to walk from one end of the train to the other. They are currently the sole rolling stock running on the Orange line, and run in mixed service with the MR-73 on the Green Line during weekday rush hours. On weekends, only Azur trains are used on the Green Line.
Retired
Canadian Vickers MR-63, in service from the Metro's opening in October 1966 until June 2018. Of the original 369 cars built, 33 were destroyed in two separate accidents. On June 21, 2018, the last MR-63 train was retired after 52 years of service.
Design
Montreal's Metro trains are made of low-alloy high-tensile steel, painted blue with a thick white band running lengthwise. Trains are assembled in three-, six- or nine-car lengths. Each three-car element consists of two motorized cab cars flanking a trailer car (M-T-M). Each car is wide and has three (MPM-10) or four (MR-63, MR-73) wide bi-parting leaf doors on each side for rapid passenger entry and egress. Design specifications called for station dwell times of typically 8 to 15 seconds. In response to overcrowding on the Orange Line, a redesign of the MR-73 cars removed some seats to provide more standing room. The newest Bombardier MPM-10 trains are open-gangway, allowing passengers to move between cars once on board so that the passenger load is distributed more evenly.
Each car has two bogies (trucks), each with four sets of support tires, guide tires and backup conventional steel wheels. The motor cars' bogies each carry two direct-current traction motors coupled to reduction gears and differentials. Montreal's Metro trains use electromagnetic braking, generated by the train's kinetic energy, until the train has slowed to about . The train then uses composite brake blocks, made of yellow birch injected with peanut oil, to come to a complete stop; two sets are applied against the treads of the steel wheels for friction braking. Hard braking produces a characteristic burnt-popcorn scent. The wooden brake shoes perform well, but if subjected to numerous high-speed applications they develop a carbon film that diminishes braking performance. The rationale for using wooden brake shoes soaked in peanut oil was health concerns: wooden shoes avoid releasing metal dust into the air upon braking. They also reduce screeching noise when braking and prolong the life of the steel wheels.
Rubber tires on the Montreal Metro transmit minimal vibration and help the cars climb hills more easily and negotiate turns at high speed. However, these advantages are offset by the noise of the traction motors, which are louder than those of the typical North American subway car, and the concrete trackbed, favoured over stone ballast, further amplifies the noise. Trains can climb grades of up to 6.5% and save the most energy when following a humped-station profile (track that descends on leaving a station, aiding acceleration, and ascends on entering the next, aiding braking). Steel-wheel technology has advanced significantly and can now better round tight curves and climb and descend similar grades, but despite these advances, steel-wheel trains still cannot operate at high speeds () on track profiles as steep or as tightly curved as those a rubber-tired train can handle.
The release of the MR-73 generation of Metro cars introduced three audible tones heard when departing, generated by chopper circuitry. The chopper circuitry incrementally increases the traction power fed to the trains' traction motors when accelerating from a stop, allowing trains to start smoothly and avoid overloads. The final tone is present throughout the train ride on MR-73s but is not heard at higher speeds because of ambient noise. Equipment on the newest generation of Metro cars does not produce the audible tones when accelerating, though a recording of similar tones is played as an auditory signal in advance of door closure, referred to as the "dou-dou-dou" door closing signal in a 2010 STM advertising campaign. The three tones are essentially the same as the iconic first three trumpet notes from Aaron Copland's musical piece "Fanfare for the Common Man".
Announcements for the Montreal Metro are pre-recorded and voiced by actress Michèle Deslauriers.
Train operation
The MR-73 and the former MR-63 trains are equipped with a manual train control system, while the MPM-10 is equipped with automatic train control. On MR-73 trains, the operator opens and closes the doors and controls the traction/brake system. On MPM-10 trains, the operator can open and close the doors manually or have them operated automatically, then presses a start button and the train drives itself; the operator can also drive the MPM-10 manually at their discretion. Signalling is effected through coded pulses sent through the rails. Coded speed orders and station stop positions transmitted through track beacons are captured by beacon readers mounted under the driver cabs. The information sent to the train's electronic modules conveys speed orders, and it is up to the automatic train control computer to conform to the imposed speed. Additionally, the train computer can receive energy-saving instructions from track beacons, providing the train with four different economical coasting modes, plus one mode for maximum performance. Under manual control, the track speed is displayed on the cab speedometer, indicating the maximum permissible speed. The wayside signals consist of point (switch/turnout) position indicators near switches and inter-station signalling placed at each station stop. Trains often reach their maximum permitted speed within 16 seconds, depending on grade and load.
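The beacon-based speed enforcement described above can be sketched as follows. This is a minimal illustration, assuming hypothetical beacon positions and speed values; it is not STM signalling data, only the general idea that a train obeys the speed order of the last beacon it has passed.

```python
# Sketch of beacon-based speed orders, as described above.
# Positions and speed values are illustrative, not STM specifications.
from dataclasses import dataclass

@dataclass
class Beacon:
    position_m: float       # location of the beacon along the track
    speed_order_kmh: float  # coded speed order it transmits

def current_speed_order(train_position_m: float, beacons: list[Beacon]) -> float:
    """Return the speed order of the last beacon the train has passed."""
    passed = [b for b in beacons if b.position_m <= train_position_m]
    if not passed:
        return 0.0  # no order received yet: remain stopped
    return max(passed, key=lambda b: b.position_m).speed_order_kmh

beacons = [Beacon(0, 40), Beacon(300, 72), Beacon(900, 40)]
print(current_speed_order(500, beacons))   # 72
print(current_speed_order(1000, beacons))  # 40
```

Under manual control, this is the value a cab speedometer would display as the maximum permissible speed; under automatic control, the train computer conforms to it directly.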
Trains are programmed to stop at precise station positions using an odometer accurate to plus or minus five centimetres (2"). They receive their braking program and station stop position orders (one-third, two-thirds, or end of station) from track beacons before entering the station, with additional beacons in the station to ensure stopping precision. The last beacon is positioned exactly 12 wheel turns from the end of the platform, which helps improve the overall precision of the system.
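The role of the final beacon can be illustrated with a small worked sketch. The 12-wheel-turn figure comes from the text; the wheel circumference used here is a hypothetical round number, not an actual MPM-10 or MR-73 dimension.

```python
# Sketch of the final-beacon stopping rule described above.
# WHEEL_CIRCUMFERENCE_M is a hypothetical value for illustration only.
WHEEL_CIRCUMFERENCE_M = 2.0

def distance_to_platform_end(turns_since_final_beacon: float,
                             beacon_turns_from_end: float = 12.0) -> float:
    """Distance remaining once the final beacon has been passed.

    The beacon sits exactly `beacon_turns_from_end` wheel turns from the
    end of the platform, so counting turns after it gives the train an
    absolute position fix independent of accumulated odometer drift.
    """
    remaining_turns = beacon_turns_from_end - turns_since_final_beacon
    return remaining_turns * WHEEL_CIRCUMFERENCE_M

print(distance_to_platform_end(0))   # 24.0 m left at the beacon
print(distance_to_platform_end(12))  # 0.0 m: stop point reached
```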
Trains draw current from two sets of 750-volt direct current guide bar/third rails on either side of each motor car. Nine-car trains draw large currents of up to 6,000 amperes, requiring that all models of rolling stock have calibrated traction motor control systems to prevent power surges, arcing and breaker tripping. Both models have electrical braking (using motors) to assist primary friction braking, reducing the need to replace the brake pads.
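As a quick check on the figures above, the peak electrical power follows directly from P = V × I:

```python
def traction_power_mw(voltage_v, current_a):
    """Instantaneous electrical power P = V * I, expressed in megawatts."""
    return voltage_v * current_a / 1e6

# A nine-car train drawing the stated peak of 6,000 A from the
# 750 V DC guide bars:
print(traction_power_mw(750, 6000))  # 4.5 MW
```

A momentary draw in the megawatt range is why uncalibrated traction control would risk arcing and breaker tripping.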
The trains are equipped with double coverage broadband radio systems, provided by Thales Group.
Rolling stock maintenance
Garages
Idle trains are stored in five garages: Angrignon, Beaugrand, Côte-Vertu, Saint-Charles and Montmorency. Except for Angrignon, they are all underground, and together they can accommodate around 46% of the rolling stock. The remaining trains are parked in terminus tail tracks.
Angrignon garage, west of Line 1 terminus, is a surface building next to Angrignon Park housing six tracks accepting two nine-car trains each.
Beaugrand garage is located east of the Line 1 terminus. It lies entirely under Chénier-Beaugrand Park, and its main access point is through Honoré-Beaugrand station. It has seven tracks and accommodates light maintenance on MR-63 cars, with two test tracks.
Saint-Charles garage, north of the terminus, is located under Gouin Park. With eight tracks allowing 20 trains to be parked, it is the main garage of Line 2. Also under Jeanne-Sauvé Park, a training facility used by firefighters contains one of the MR-63 cars burnt in 1973 and an obsolete picking train.
Montmorency garage is built perpendicular to its terminal station to allow an easier potential expansion of the Line 2 deeper in Laval territory.
Côte-Vertu garage was constructed underground at the end of Thimens Boulevard to accommodate additional MPM-10 trains on Line 2. Accessible via a tunnel, it houses a small maintenance facility and two long tracks for a total of twelve parking places. Two more tracks could be added later with the line extension.
Maintenance and repair facilities
Rolling stock maintenance is carried out in five facilities at four locations. Two small maintenance tracks are located at the Montmorency and Beaugrand garages, and two large ones at the Plateau d'Youville facility. A fifth facility was constructed at the Côte-Vertu garage.
The only repair facility for the Montreal Metro is the Atelier Plateau d'Youville, located at the intersection of Crémazie (part of Trans-Canada Highway) and Saint-Laurent Boulevards. Built alongside the first segment and opened in October 1966 on the site of a former streetcar depot, it is a large above-ground facility that provides major repairs to Metro cars and is the main base for the track assembly workshops (where track sections are pre-assembled prior to installation). The two-way service tunnel connecting the network to the Youville portal gate is found between Crémazie and Sauvé stations. Formerly, the Atelier Plateau d'Youville was connected to the Canadian national rail network with a connecting track to the CN St Laurent Subdivision, which was mainly used for delivery of MR-63 trains.
Tail tracks and connecting tracks
Centre d'attachement Duvernay is a garage and base for maintenance of way equipment. It accesses the network through the Line 1/Line 2 interchange southeast of . The access building is located at the corner of Duvernay and Vinet streets in Sainte-Cunégonde.
Centre d'attachement Viau is a garage and base for maintenance of way equipment. It accesses the network immediately west of the station (Line 1). The access building is within the Viau station building; facilities are visible from trains going west of the station.
Berri–UQAM link connects Lines 1 and 4 south of Berri–UQAM station.
Snowdon link and tail is an interchange track between Lines 2 and 5 south/west of station used for the storage of maintenance of way equipment. There are no surface facilities. The tail tracks west of Snowdon station extend about west of the station, reaching the border of the city of Hampstead. The end of the track is marked by an emergency exit on the corner of Queen Mary and Dufferin Roads.
Côte-Vertu tail track extends past the terminus station towards the intersection of Grenet and Deguire streets.
Future projects
City of Montreal
On June 12, 2008, the City of Montreal released its overall transportation plan for the immediate future. On April 9, 2018, construction on the Blue Line's five new stations was announced and began in 2021. The following projects were given priority status in the overall transportation scheme:
The Blue Line extension from station up to the boroughs of Saint-Leonard and Anjou, committing to the line's original design. It would consist of five new stations: Pie-IX, Viau, Lacordaire, Langelier and Anjou.
The Orange Line extension northwest from Côte-Vertu station, up to the existing Bois-Franc rail station, with an intermediate station at Rue Poirier. The station at Bois-Franc would be intermodal with the Réseau express métropolitain (part of the Deux-Montagnes commuter rail line at the time of the report).
In the long term, a new extension of the Yellow Line from Berri–UQAM is being studied that would go to station to ease congestion on that part of the Green Line.
In 2006 and 2007, Montreal's West Island newspapers discussed plans to extend the Blue Line from into the Notre-Dame-de-Grâce area of Montreal, as depicted in its original design.
City of Longueuil
In 2001, the Réseau de transport de Longueuil (RTL) considered an extension of the Yellow Line with four new stations (Vieux-Longueuil, Gentilly, Curé-Poirier/Roland-Therrien and Jacques-Cartier/De Mortagne) beyond , under the city of Longueuil to Collège Édouard-Montpetit, but its priority shifted to the construction of the proposed light rail project in the Champlain Bridge corridor. In 2008, Longueuil Mayor Claude Gladu brought the proposal back to life.
A 2006 study examined the possibility and cost of an extension from station to the City of Brossard on the south shore of Montreal and rejected it as an alternative to the proposed light rail project in the Champlain Bridge corridor.
In 2012, the AMT study Vision 2020 proposed extending the Yellow Line under Longueuil with six new stations.
City of Laval
On July 22, 2007, the mayor of Laval, Gilles Vaillancourt, citing the ridership success of the then-recent Laval extension, announced his wish to loop the Orange Line from to stations with the addition of six (or possibly seven) new stations (three in Laval and another three in Montreal). He proposed that Transports Québec, the provincial transport department, set aside $100 million annually to fund the project, which was expected to cost upwards of $1.5 billion.
On May 26, 2011, Vaillancourt, after the successful opening of the Highway 25 toll bridge in the eastern part of Laval, proposed that Laval develop its remaining territories with transit-oriented development (TOD) built around five new Metro stations: four on the west branch of the Orange Line (Gouin, Lévesque, Notre-Dame and Carrefour) and one more on the east branch (De l'Agora). The next-to-last station on the west branch would act as a transfer station between the east and west branches of the line.
Pioneer in tunnel advertising
In the early years of the Montreal Metro, a unique mode of advertising was used. In some downtown tunnels, cartoons depicting an advertiser's product were mounted on the tunnel walls at the level of the cars' windows. A retail film processing outfit called Direct Film advertised on the north wall of the westbound track between Guy (now Guy–Concordia) and Atwater stations (Green Line) between 1967 and 1969. Strobe lights, aimed at the frames of the cartoon and triggered by the passing train, sequentially illuminated the images so that they appeared to passengers on the train as a movie. Today known as "tunnel movies" or "tunnel advertising", such installations have appeared in many cities' subways in recent years, for example in the Southgate tube station in London and on the MBTA Red Line in Boston.
Accidents and incidents
On December 8, 1971, a speeding MR-63 train crashed into a parked MR-63 train near Henri-Bourassa station on the Orange Line, causing a 17-hour inferno that destroyed 24 MR-63 coaches parked at the Henri-Bourassa tail tracks. Operator Gerard Maccarone was the sole fatality in the accident, which was later revealed to have been caused by a jammed throttle that prevented the train from braking in time. It remained the deadliest subway accident to have occurred in Canada until the Russell Hill accident on the Toronto subway in 1995.
Trophic cascade
Trophic cascades are powerful indirect interactions that can control entire ecosystems, occurring when a trophic level in a food web is suppressed. For example, a top-down cascade will occur if predators are effective enough in predation to reduce the abundance, or alter the behavior, of their prey, thereby releasing the next lower trophic level from predation (or herbivory if the intermediate trophic level is a herbivore).
The trophic cascade is an ecological concept which has stimulated new research in many areas of ecology. For example, it can be important for understanding the knock-on effects of removing top predators from food webs, as humans have done in many places through hunting and fishing.
A top-down cascade is a trophic cascade where the top consumer/predator controls the primary consumer population. In turn, the primary producer population thrives. The removal of the top predator can alter the food web dynamics. In this case, the primary consumers would overpopulate and exploit the primary producers. Eventually there would not be enough primary producers to sustain the consumer population. Top-down food web stability depends on competition and predation in the higher trophic levels. Invasive species can also alter this cascade by removing or becoming a top predator. This interaction may not always be negative: studies have shown that certain invasive species have begun to shift cascades, and as a consequence, ecosystem degradation has been repaired.
For example, if the abundance of large piscivorous fish is increased in a lake, the abundance of their prey, smaller fish that eat zooplankton, should decrease. The resulting increase in zooplankton should, in turn, cause the biomass of its prey, phytoplankton, to decrease.
In a bottom-up cascade, the population of primary producers controls the increase or decrease of energy at the higher trophic levels. Primary producers, such as plants and phytoplankton, rely on photosynthesis. Although light is important, primary producer populations are chiefly altered by the amount of nutrients in the system. This food web relies on the availability and limitation of resources. All populations will experience growth if there is initially a large amount of nutrients.
In a subsidy cascade, species populations at one trophic level can be supplemented by external food. For example, native animals can forage on resources that don't originate in their same habitat, such as native predators eating livestock. This may increase their local abundances thereby affecting other species in the ecosystem and causing an ecological cascade. For example, Luskin et al. (2017) found that native animals living in protected primary rainforest in Malaysia found food subsidies in neighboring oil palm plantations. This subsidy allowed native animal populations to increase, which then triggered powerful secondary ‘cascading’ effects on forest tree community. Specifically, crop-raiding wild boar (Sus scrofa) built thousands of nests from the forest understory vegetation and this caused a 62% decline in forest tree sapling density over a 24-year study period. Such cross-boundary subsidy cascades may be widespread in both terrestrial and marine ecosystems and present significant conservation challenges.
These trophic interactions shape patterns of biodiversity globally. Humans and climate change have affected these cascades drastically. One example can be seen with sea otters (Enhydra lutris) on the Pacific coast of the United States of America. Over time, human interactions caused a removal of sea otters. One of their main prey, the Pacific purple sea urchin (Strongylocentrotus purpuratus) eventually began to overpopulate. The overpopulation caused increased predation of giant kelp (Macrocystis pyrifera). As a result, there was extreme deterioration of the kelp forests along the California coast. This is why it is important for countries to regulate marine and terrestrial ecosystems.
Predator-induced interactions could heavily influence the flux of atmospheric carbon if managed on a global scale. For example, a study was conducted to determine the cost of potential stored carbon in living kelp biomass in sea otter (Enhydra lutris) enhanced ecosystems. The study valued the potential storage between $205 million and $408 million dollars (US) on the European Carbon Exchange (2012).
Origins and theory
Aldo Leopold is generally credited with first describing the mechanism of a trophic cascade, based on his observations of overgrazing of mountain slopes by deer after human extermination of wolves. Nelson Hairston, Frederick E. Smith and Lawrence B. Slobodkin are generally credited with introducing the concept into scientific discourse, although they did not use the term either. Hairston, Smith and Slobodkin argued that predators reduce the abundance of herbivores, allowing plants to flourish. This is often referred to as the green world hypothesis. The green world hypothesis is credited with bringing attention to the role of top-down forces (e.g. predation) and indirect effects in shaping ecological communities. The prevailing view of communities prior to Hairston, Smith and Slobodkin was trophodynamics, which attempted to explain the structure of communities using only bottom-up forces (e.g. resource limitation). Smith may have been inspired by the experiments of a Czech ecologist, Hrbáček, whom he met on a United States State Department cultural exchange. Hrbáček had shown that fish in artificial ponds reduced the abundance of zooplankton, leading to an increase in the abundance of phytoplankton.
Hairston, Smith and Slobodkin argued that ecological communities acted as food chains with three trophic levels. Subsequent models expanded the argument to food chains with more or fewer than three trophic levels. Lauri Oksanen argued that the top trophic level in a food chain increases the abundance of producers in food chains with an odd number of trophic levels (such as Hairston, Smith and Slobodkin's three-level model), but decreases the abundance of producers in food chains with an even number of trophic levels. Additionally, he argued that the number of trophic levels in a food chain increases as the productivity of the ecosystem increases.
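Oksanen's odd/even argument can be sketched as a sign that flips at every predator-prey link on the way down the chain:

```python
def producer_effect(n_levels):
    """
    Sign of the top trophic level's indirect effect on the producers.
    The effect propagates through (n_levels - 1) predator-prey links,
    and each link flips the sign.
    """
    sign = (-1) ** (n_levels - 1)
    return "increased" if sign == 1 else "decreased"

# Odd chains (e.g. predator -> herbivore -> plant) release the producers;
# even chains suppress them.
for n in (2, 3, 4):
    print(n, "levels -> producer abundance", producer_effect(n))
```

In a three-level chain the predator's two negative links compose to a net positive effect on plants, recovering the green world hypothesis.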
Classic examples
Although Hairston, Smith and Slobodkin formulated their argument in terms of terrestrial food chains, the earliest empirical demonstrations of trophic cascades came from marine and, especially, aquatic ecosystems. Some of the most famous examples are:
In North American lakes, piscivorous fish can dramatically reduce populations of zooplanktivorous fish; zooplanktivorous fish can dramatically alter freshwater zooplankton communities, and zooplankton grazing can in turn have large impacts on phytoplankton communities. Removal of piscivorous fish can change lake water from clear to green by allowing phytoplankton to flourish.
In the Eel River, in Northern California, fish (steelhead and roach) consume fish larvae and predatory insects. These smaller predators prey on midge larvae, which feed on algae. Removal of the larger fish increases the abundance of algae.
In Pacific kelp forests, sea otters feed on sea urchins. In areas where sea otters have been hunted to extinction, sea urchins increase in abundance and kelp populations are reduced.
A classic example of a terrestrial trophic cascade is the reintroduction of gray wolves (Canis lupus) to Yellowstone National Park, which reduced the number, and changed the behavior, of elk (Cervus canadensis). This in turn released several plant species from grazing pressure and subsequently led to the transformation of riparian ecosystems.
Terrestrial trophic cascades
The fact that the earliest documented trophic cascades all occurred in lakes and streams led Donald Strong to speculate that fundamental differences between aquatic and terrestrial food webs made trophic cascades primarily an aquatic phenomenon. On this view, trophic cascades were restricted to communities with relatively low species diversity, in which a small number of species could have overwhelming influence and the food web could operate as a linear food chain. Additionally, well-documented trophic cascades at that point in time all occurred in food chains with algae as the primary producer. Trophic cascades, Strong argued, may only occur in communities with fast-growing producers which lack defenses against herbivory.
Subsequent research has documented trophic cascades in terrestrial ecosystems, including:
In the coastal prairie of Northern California, yellow bush lupines are fed upon by a particularly destructive herbivore, the root-boring caterpillar of the lupine ghost moth Phymatopus californicus. Entomopathogenic nematodes kill the caterpillars, and can increase the survival and seed production of lupines.
In Costa Rican rain forest, a clerid beetle specializes in eating ants. The ant Pheidole bicornis has a mutualistic association with Piper plants: the ant lives on the Piper and removes caterpillars and other insect herbivores. The clerid beetle, by reducing the abundance of ants, increases the leaf area removed from Piper plants by insect herbivores.
Critics pointed out that published terrestrial trophic cascades generally involved smaller subsets of the food web (often only a single plant species). This was quite different from aquatic trophic cascades, in which the biomass of producers as a whole were reduced when predators were removed. Additionally, most terrestrial trophic cascades did not demonstrate reduced plant biomass when predators were removed, but only increased plant damage from herbivores. It was unclear if such damage would actually result in reduced plant biomass or abundance. In 2002 a meta-analysis found trophic cascades to be generally weaker in terrestrial ecosystems, meaning that changes in predator biomass resulted in smaller changes in plant biomass. In contrast, a study published in 2009 demonstrated that multiple species of trees with highly varying autecologies are in fact heavily impacted by the loss of an apex predator. Another study, published in 2011, demonstrated that the loss of large terrestrial predators also significantly degrades the integrity of river and stream systems, impacting their morphology, hydrology, and associated biological communities.
The critics' model is challenged by studies accumulating since the reintroduction of gray wolves (Canis lupus) to Yellowstone National Park. The gray wolf, after being extirpated in the 1920s and absent for 70 years, was reintroduced to the park in 1995 and 1996. Since then a three-tiered trophic cascade has been reestablished involving wolves, elk (Cervus elaphus), and woody browse species such as aspen (Populus tremuloides), cottonwoods (Populus spp.), and willows (Salix spp.). Mechanisms likely include actual wolf predation of elk, which reduces their numbers, and the threat of predation, which alters elk behavior and feeding habits, resulting in these plant species being released from intensive browsing pressure. Subsequently, their survival and recruitment rates have significantly increased in some places within Yellowstone's northern range. This effect is particularly noted among the range's riparian plant communities, with upland communities only recently beginning to show similar signs of recovery.
Examples of this phenomenon include:
A 2–3 fold increase in deciduous woody vegetation cover, mostly of willow, in the Soda Butte Creek area between 1995 and 1999.
Heights of the tallest willows in the Gallatin River valley increasing from 75 cm to 200 cm between 1998 and 2002.
Heights of the tallest willows in the Blacktail Creek area increased from less than 50 cm to more than 250 cm between 1997 and 2003. Additionally, canopy cover over streams increased significantly, from only 5% to a range of 14–73%.
In the northern range, tall deciduous woody vegetation cover increased by 170% between 1991 and 2006.
In the Lamar and Soda Butte Valleys the number of young cottonwood trees that had been successfully recruited went from 0 to 156 between 2001 and 2010.
Trophic cascades also impact the biodiversity of ecosystems, and when examined from that perspective wolves appear to be having multiple, positive cascading impacts on the biodiversity of Yellowstone National Park. These impacts include:
Scavengers, such as ravens (Corvus corax), bald eagles (Haliaeetus leucocephalus), and even grizzly bears (Ursus arctos horribilis), are likely subsidized by the carcasses of wolf kills.
In the northern range, the relative abundance of six out of seven native songbirds which utilize willow was found to be greater in areas of willow recovery as opposed to those where willows remained suppressed.
Bison (Bison bison) numbers in the northern range have been steadily increasing as elk numbers have declined, presumably due to a decrease in interspecific competition between the two species.
Importantly, the number of beaver (Castor canadensis) colonies in the park has increased from one in 1996 to twelve in 2009. The recovery is likely due to the increase in willow availability, as they have been feeding almost exclusively on it. As keystone species, the resurgence of beaver is a critical event for the region. The presence of beavers has been shown to positively impact streambank erosion, sediment retention, water tables, nutrient cycling, and both the diversity and abundance of plant and animal life among riparian communities.
There are a number of other examples of trophic cascades involving large terrestrial mammals, including:
In both Zion National Park and Yosemite National Park, the increase in human visitation during the first half of the 20th century was found to correspond to the decline of native cougar (Puma concolor) populations in at least part of their range. Soon after, native populations of mule deer (Odocoileus hemionus) erupted, subjecting resident communities of cottonwoods (Populus fremontii) in Zion and California black oak (Quercus kelloggii) in Yosemite to intensified browsing. This halted successful recruitment of these species except in refugia inaccessible to the deer. In Zion the suppression of cottonwoods increased stream erosion and decreased the diversity and abundance of amphibians, reptiles, butterflies, and wildflowers. In parts of the park where cougars were still common these negative impacts were not expressed and riparian communities were significantly healthier.
In sub-Saharan Africa, the decline of lion (Panthera leo) and leopard (Panthera pardus) populations has led to a rising population of olive baboon (Papio anubis). This case of mesopredator release negatively impacted already declining ungulate populations and is one of the reasons for increased conflict between baboons and humans, as the primates raid crops and spread intestinal parasites.
In the Australian states of New South Wales and South Australia, the presence or absence of dingoes (Canis lupus dingo) was found to be inversely related to the abundance of invasive red foxes (Vulpes vulpes). In other words, the foxes were most common where the dingoes were least common. Subsequently, populations of an endangered prey species, the dusky hopping mouse (Notomys fuscus) were also less abundant where dingoes were absent due to the foxes, which consume the mice, no longer being held in check by the top predator.
Marine trophic cascades
In addition to the classic examples listed above, more recent examples of trophic cascades in marine ecosystems have been identified:
An example of a cascade in a complex, open-ocean ecosystem occurred in the northwest Atlantic during the 1980s and 1990s. The removal of Atlantic cod (Gadus morhua) and other ground fishes by sustained overfishing resulted in increases in the abundance of the prey species for these ground fishes, particularly smaller forage fishes and invertebrates such as the northern snow crab (Chionoecetes opilio) and northern shrimp (Pandalus borealis). The increased abundance of these prey species altered the community of zooplankton that serve as food for smaller fishes and invertebrates as an indirect effect.
A similar cascade, also involving the Atlantic cod, occurred in the Baltic Sea at the end of the 1980s. After a decline in Atlantic cod, the abundance of its main prey, the sprat (Sprattus sprattus), increased and the Baltic Sea ecosystem shifted from being dominated by cod into being dominated by sprat. The next level of trophic cascade was a decrease in the abundance of Pseudocalanus acuspes, a copepod which the sprat prey on.
On Caribbean coral reefs, several species of angelfishes and parrotfishes eat species of sponges that lack chemical defenses. Removal of these sponge-eating fish species from reefs by fish-trapping and netting has resulted in a shift in the sponge community toward fast-growing sponge species that lack chemical defenses. These fast-growing sponge species are superior competitors for space, and overgrow and smother reef-building corals to a greater extent on overfished reefs.
Criticisms
Although the existence of trophic cascades is not controversial, ecologists have long debated how ubiquitous they are. Hairston, Smith and Slobodkin argued that terrestrial ecosystems, as a rule, behave as a three trophic level trophic cascade, which provoked immediate controversy. Some of the criticisms, both of Hairston, Smith and Slobodkin's model and of Oksanen's later model, were:
Plants possess numerous defenses against herbivory, and these defenses also contribute to reducing the impact of herbivores on plant populations.
Herbivore populations may be limited by factors other than food or predation, such as nesting sites or available territory.
For trophic cascades to be ubiquitous, communities must generally act as food chains, with discrete trophic levels. Most communities, however, have complex food webs. In real food webs, consumers often feed at multiple trophic levels (omnivory), organisms often change their diet as they grow larger, cannibalism occurs, and consumers are subsidized by inputs of resources from outside the local community, all of which blur the distinctions between trophic levels.
When such food web complexity weakens a cascade, the attenuated effect is sometimes called a "trophic trickle".
Sea eagle
A sea eagle or fish eagle (also called erne or ern, mostly in reference to the white-tailed eagle) is any of the birds of prey in the subfamily Haliaeetinae of the bird of prey family Accipitridae. Ten extant species are currently recognized.
The subfamily has a significant reach, with a scholarly article in 2005 reporting that they were "found in riverine and coastal habitat[s] throughout the world". However, Haliaeetinae inhabited areas have experienced particular threats given the context of human impacts on the environment.
Taxonomy and evolution
The genus Haliaeetus was introduced in 1809 by French naturalist Marie Jules César Savigny in his chapter on birds in the Description de l'Égypte. The two fish eagles in the genus Ichthyophaga were found to lie within Haliaeetus in a genetic study in 2005. They were then moved accordingly. They are very similar to the tropical Haliaeetus species. A prehistoric (i.e. extinct before 1500) form from Maui in the Hawaiian Islands may represent a species or subspecies in this genus.
The relationships to other genera in the family are less clear; they have long been considered closer to the genus Milvus (kites) than to the true eagles in the genus Aquila on the basis of their morphology and display behaviour; more recent genetic evidence agrees with this, but points to their being related to the genus Buteo (buzzards/hawks), as well, a relationship not previously thought close.
A 2005 molecular study found that the genus is paraphyletic and subsumes Ichthyophaga, the species diverging into a temperate and tropical group.
Evolution
Haliaeetus is possibly one of the oldest genera of living birds. A distal left tarsometatarsus (DPC 1652) recovered from early Oligocene deposits of Fayyum, Egypt (Jebel Qatrani Formation, about 33 million years ago (Mya)) is similar in general pattern and some details to that of a modern sea eagle. The genus was present in the middle Miocene (12–16 Mya) with certainty.
The origin of the sea eagles and fishing eagles is probably in the general area of the Bay of Bengal. During the Eocene/Oligocene, as the Indian subcontinent slowly collided with Eurasia, this was a vast expanse of fairly shallow ocean; the initial sea eagle divergence seems to have resulted in the four tropical (and Southern Hemisphere subtropical) species living around the Indian Ocean today. The Central Asian Pallas's sea eagle's relationships to the other taxa is more obscure; it seems closer to the three Holarctic species which evolved later and may be an early offshoot of this northward expansion; it does not have the hefty yellow bill of the northern forms, retaining a smaller, darker beak like the tropical species.
The rate of molecular evolution in Haliaeetus is fairly slow, as is to be expected in long-lived birds which take years to successfully reproduce. In the mtDNA cytochrome b gene, a mutation rate of 0.5–0.7% per million years (if assuming an Early Miocene divergence) or maybe as little as 0.25–0.3% per million years (for a Late Eocene divergence) has been shown.
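As an illustration of how such rates are applied, a naive molecular-clock calculation converts an observed sequence divergence (the 3.5% below is a made-up example value) into a time estimate under each quoted calibration, assuming the rates are pairwise-divergence rates:

```python
def divergence_time_mya(divergence_pct, rate_pct_per_myr):
    """Naive molecular clock: time since divergence = observed divergence / rate."""
    return divergence_pct / rate_pct_per_myr

# A hypothetical 3.5% cytochrome-b divergence between two sea eagle taxa:
print(divergence_time_mya(3.5, 0.7))   # 5.0 Myr under the faster (Miocene) rate
print(divergence_time_mya(3.5, 0.25))  # 14.0 Myr under the slower (Eocene) rate
```

The nearly threefold spread between the two estimates shows why the choice of calibration point (Early Miocene versus Late Eocene divergence) matters so much when dating the genus.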
Issues in the modern era
The Haliaeetinae subfamily is an especially threatened collection of creatures within the broader Accipitridae species, according to the academic journal Molecular Phylogenetics and Evolution, given the "anthropogenic factors" involved. The publication reported in 2005 that prior trends had meant that sea eagles could be "found in riverine and coastal habitat[s] throughout the world". In terms of international scientific campaigns, the Convention on International Trade in Endangered Species (CITES) protects all entities in the broader species, including sea eagles.
Species
Current sea eagles
Description
Sea eagles vary in size, from Sanford's sea eagle, averaging , to Steller's sea eagle, weighing up to . At up to , the white-tailed eagle is the largest eagle in Europe. Bald eagles can weigh up to , making them the largest eagle native to North America. There are exceptional records of even heavier individuals in both the white-tailed and bald eagles, although not surpassing the largest Steller's sea eagles. The white-bellied sea eagle can weigh up to . They are generally overall brown (from rich brown to dull grey-brown), often with white to the head, tail or underparts. Some of the species have an all-yellow beak as adults, which is unusual among eagles.
Their diets consist mainly of fish, aquatic birds, and small mammals. Nests are typically very large and positioned in a tree, but sometimes on a cliff.
The tail is entirely white in adult Haliaeetus species except for Sanford's, white-bellied, and Pallas's. Three species pairs exist: white-tailed and bald eagles, Sanford's and white-bellied sea eagles, and the African and Madagascar fish eagles, each of these consists of a white- and a tan-headed species.
In popular culture
The bald eagle is the national symbol of the United States.
The silver eagle on red shield on the arms of Poland has been interpreted as the sea eagle.
Namibia, Zambia, and Zimbabwe have the African fish eagle as their national bird.
The white-tailed eagle is the national bird of Poland.
The Manly Warringah Sea Eagles are an Australian professional rugby league club that competes in the National Rugby League (NRL).
Nesting pairs of both the bald eagle and white-bellied sea eagle have been subject to live-streaming webcam footage.
In heraldic language, the osprey is termed a "sea-eagle", although ospreys come from the taxonomic family Pandionidae and are not classified as true sea eagles.
| Biology and health sciences | Accipitrimorphae | Animals |
1800732 | https://en.wikipedia.org/wiki/Second%20messenger%20system | Second messenger system | Second messengers are intracellular signaling molecules released by the cell in response to exposure to extracellular signaling molecules—the first messengers. (Intercellular signals, a non-local form of cell signaling, encompassing both first messengers and second messengers, are classified as autocrine, juxtacrine, paracrine, and endocrine depending on the range of the signal.) Second messengers trigger physiological changes at the cellular level, such as proliferation, differentiation, migration, survival, apoptosis and depolarization.
They are one of the triggers of intracellular signal transduction cascades.
Examples of second messenger molecules include cyclic AMP, cyclic GMP, inositol trisphosphate, diacylglycerol, and calcium. First messengers are extracellular factors, often hormones or neurotransmitters, such as epinephrine, growth hormone, and serotonin. Because peptide hormones and neurotransmitters typically are biochemically hydrophilic molecules, these first messengers may not physically cross the phospholipid bilayer to initiate changes within the cell directly—unlike steroid hormones, which usually do. This functional limitation requires the cell to have signal transduction mechanisms to transduce the first messenger into second messengers, so that the extracellular signal may be propagated intracellularly. An important feature of the second messenger signaling system is that second messengers may be coupled downstream to multi-cyclic kinase cascades to greatly amplify the strength of the original first messenger signal. For example, RasGTP signals link with the mitogen activated protein kinase (MAPK) cascade to amplify the allosteric activation of proliferative transcription factors such as Myc and CREB.
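The amplification property described above can be sketched numerically: each cascade stage multiplies the number of activated molecules by a per-stage gain. The gains used here are illustrative round numbers, not measured biochemical values.

```python
def cascade_output(first_messengers, gains):
    """Number of active molecules after a multi-stage kinase cascade.

    `gains` lists how many downstream targets each active molecule
    activates at each stage (illustrative values only).
    """
    active = first_messengers
    for gain in gains:
        active *= gain
    return active

# One hormone molecule, three stages each activating ~100 targets:
print(cascade_output(1, [100, 100, 100]))  # -> 1000000
```

Even a single first-messenger binding event can thus yield millions of activated effector molecules after a few stages.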
Earl Wilbur Sutherland Jr. discovered second messengers, for which he won the 1971 Nobel Prize in Physiology or Medicine. Sutherland saw that epinephrine would stimulate the liver to convert glycogen to glucose (sugar) in liver cells, but epinephrine alone would not convert glycogen to glucose. He found that epinephrine had to trigger a second messenger, cyclic AMP, for the liver to convert glycogen to glucose. The mechanisms were worked out in detail by Martin Rodbell and Alfred G. Gilman, who won the 1994 Nobel Prize.
Second messengers can be synthesized by enzymes, for example the cyclases that synthesize cyclic nucleotides, or released by the opening of ion channels that allow influx of metal ions, for example in Ca2+ signaling. These small molecules bind and activate protein kinases, ion channels, and other proteins, thus continuing the signaling cascade.
Types of second messenger molecules
There are three basic types of secondary messenger molecules:
Hydrophobic molecules: water-insoluble molecules such as diacylglycerol, and phosphatidylinositols, which are membrane-associated and diffuse from the plasma membrane into the intermembrane space where they can reach and regulate membrane-associated effector proteins.
Hydrophilic molecules: water-soluble molecules, such as cAMP, cGMP, IP3, and Ca2+, that are located within the cytosol.
Gases: nitric oxide (NO), carbon monoxide (CO) and hydrogen sulfide (H2S) which can diffuse both through cytosol and across cellular membranes.
These intracellular messengers have some properties in common:
They can be synthesized/released and broken down again in specific reactions by enzymes or ion channels.
Some (such as Ca2+) can be stored in special organelles and quickly released when needed.
Their production/release and destruction can be localized, enabling the cell to limit space and time of signal activity.
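The last property, localized production and destruction, has a simple quantitative consequence: a messenger produced at a point spreads over a characteristic length scale of roughly sqrt(D/k), where D is its diffusion coefficient and k its removal rate. The parameter values below are order-of-magnitude illustrations, not measurements.

```python
import math

def decay_length_um(D_um2_per_s, k_per_s):
    """Steady-state length scale sqrt(D/k), in micrometres, over which
    a messenger spreads before being degraded or buffered."""
    return math.sqrt(D_um2_per_s / k_per_s)

# Illustrative parameters only: a rapidly buffered messenger stays
# local; a slowly removed one spreads across the whole cell.
print(f"fast removal: ~{decay_length_um(300, 1000):.2f} um")
print(f"slow removal: ~{decay_length_um(300, 1):.2f} um")
```

This is why rapidly buffered messengers such as Ca2+ can act as highly local signals, while more slowly degraded ones act cell-wide.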
Common mechanisms of second messenger systems
There are several different secondary messenger systems (cAMP system, phosphoinositol system, and arachidonic acid system), but they all are quite similar in overall mechanism, although the substances involved and overall effects can vary.
In most cases, a ligand binds to a cell surface receptor. The binding of a ligand to the receptor causes a conformation change in the receptor. This conformation change can affect the activity of the receptor and result in the production of active second messengers.
In the case of G protein-coupled receptors, the conformation change exposes a binding site for a G-protein. The G-protein (named for the GDP and GTP molecules that bind to it) is bound to the inner membrane of the cell and consists of three subunits: alpha, beta and gamma. The G-protein is known as the "transducer."
When the G-protein binds with the receptor, it becomes able to exchange a GDP (guanosine diphosphate) molecule on its alpha subunit for a GTP (guanosine triphosphate) molecule. Once this exchange takes place, the alpha subunit of the G-protein transducer breaks free from the beta and gamma subunits, all parts remaining membrane-bound. The alpha subunit, now free to move along the inner membrane, eventually contacts another membrane-bound protein - the "primary effector."
The primary effector then has an action, which creates a signal that can diffuse within the cell. This signal is called the "second (or secondary) messenger." The secondary messenger may then activate a "secondary effector" whose effects depend on the particular secondary messenger system.
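The steps above form a fixed sequence and can be sketched as a minimal state machine. The event names are illustrative labels for the stages described in the text, not a formal nomenclature.

```python
def gpcr_cycle(ligand_bound):
    """Return the ordered events of one G-protein activation cycle,
    or an empty list if no ligand is bound (receptor stays inactive)."""
    if not ligand_bound:
        return []
    return [
        "receptor conformational change",
        "G-protein binds receptor",
        "alpha subunit exchanges GDP for GTP",
        "alpha subunit dissociates from beta/gamma",
        "alpha subunit activates primary effector",
        "primary effector produces second messenger",
    ]

for step in gpcr_cycle(True):
    print("-", step)
```

The cycle only proceeds when a ligand is bound; without it, no GDP/GTP exchange occurs and no second messenger is produced.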
Calcium ions are one type of second messenger and are responsible for many important physiological functions, including muscle contraction, fertilization, and neurotransmitter release. The ions are normally bound or stored in intracellular components (such as the endoplasmic reticulum (ER)) and can be released during signal transduction. The enzyme phospholipase C produces diacylglycerol and inositol trisphosphate, the latter of which increases membrane permeability to calcium ions. Active G-proteins open calcium channels to let calcium ions enter the cell through the plasma membrane. The other product of phospholipase C, diacylglycerol, activates protein kinase C, which assists in the activation of cAMP (another second messenger).
Examples
Second Messengers in the Phosphoinositol Signaling Pathway
IP3, DAG, and Ca2+ are second messengers in the phosphoinositol pathway. The pathway begins with the binding of extracellular primary messengers such as epinephrine, acetylcholine, and the hormones AGT, GnRH, GHRH, oxytocin, and TRH to their respective receptors. Epinephrine binds to the α1 G protein-coupled receptor (GPCR), and acetylcholine binds to the M1 and M2 GPCRs.
Binding of a primary messenger to these receptors results in a conformational change of the receptor. The α subunit, with the help of guanine nucleotide exchange factors (GEFs), releases GDP and binds GTP, resulting in dissociation of the subunit and its subsequent activation. The activated α subunit activates phospholipase C, which hydrolyzes membrane-bound phosphatidylinositol 4,5-bisphosphate (PIP2), resulting in the formation of the secondary messengers diacylglycerol (DAG) and inositol-1,4,5-trisphosphate (IP3). IP3 binds to IP3-gated calcium channels on the ER, releasing Ca2+, another second messenger, into the cytoplasm. Ca2+ ultimately binds to many proteins, activating a cascade of enzymatic pathways.
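The pathway above can be summarized as simple molecule bookkeeping: one PLC hydrolysis event yields one DAG and one IP3, and each IP3 that opens an ER channel releases many Ca2+ ions. The per-IP3 calcium gain below is an assumed illustrative number, not a measured value.

```python
def plc_hydrolysis(pip2_molecules):
    """Each PIP2 hydrolysed by phospholipase C yields one DAG
    (stays in the membrane) and one IP3 (diffuses into the cytosol)."""
    return {"DAG": pip2_molecules, "IP3": pip2_molecules}

def ip3_calcium_release(ip3, ca_per_ip3=20):
    """Ca2+ ions released from the ER; the 20x gain per IP3 is an
    assumed illustrative figure."""
    return ip3 * ca_per_ip3

products = plc_hydrolysis(1000)
ca = ip3_calcium_release(products["IP3"])
print(products, "Ca2+ released:", ca)  # 20000 Ca2+
```

The stoichiometric split of PIP2 into two distinct messengers is what lets one receptor event drive both the PKC branch (via DAG) and the calcium branch (via IP3) at once.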
| Biology and health sciences | Cell processes | Biology |
12610483 | https://en.wikipedia.org/wiki/Android%20%28operating%20system%29 | Android (operating system) | Android is a mobile operating system based on a modified version of the Linux kernel and other open-source software, designed primarily for touchscreen-based mobile devices such as smartphones and tablets. Android has historically been developed by a consortium of developers known as the Open Handset Alliance, but its most widely used version is primarily developed by Google. First released in 2008, Android is the world's most widely used operating system; the latest version, released on October 15, 2024, is Android 15.
At its core, the operating system is known as the Android Open Source Project (AOSP) and is free and open-source software (FOSS) primarily licensed under the Apache License. However, most devices run the proprietary Android version developed by Google, which ships with additional proprietary closed-source software pre-installed, most notably Google Mobile Services (GMS), which includes core apps such as Google Chrome, the digital distribution platform Google Play, and the associated Google Play Services development platform. Firebase Cloud Messaging is used for push notifications. While AOSP is free, the "Android" name and logo are trademarks of Google, who restrict the use of Android branding on "uncertified" products. The majority of smartphones based on AOSP run Google's ecosystem—which is known simply as Android—some with vendor-customized user interfaces and software suites, for example One UI. Numerous other distributions exist, both commercial and community-developed, which include Amazon Fire OS, Oppo ColorOS, LineageOS, and others; the source code has also been used to develop a variety of Android distributions on a range of other electronics, such as Android TV for televisions, Wear OS for wearables, and Meta Horizon OS for VR headsets.
Software packages on Android, which use the APK format, are generally distributed through proprietary application stores like Google Play Store, Amazon Appstore, Samsung Galaxy Store, Huawei AppGallery, Cafe Bazaar, GetJar, and Aptoide, or open source platforms like F-Droid. Since 2011 Android has been the most used operating system worldwide on smartphones. It has the largest installed base of any operating system in the world with over three billion monthly active users and accounting for 46% of the global operating system market.
History
2000s
Android Inc. was founded in Palo Alto, California, in October 2003 by Andy Rubin, Rich Miner, Nick Sears, and Chris White. Rubin described the Android project as having "tremendous potential in developing smarter mobile devices that are more aware of its owner's location and preferences". The early intentions of the company were to develop an advanced operating system for digital cameras, and this was the basis of its pitch to investors in April 2004. The company then decided that the market for cameras was not large enough for its goals, and five months later it had diverted its efforts and was pitching Android as a handset operating system that would rival Symbian and Microsoft Windows Mobile.
Rubin had difficulty attracting investors early on, and Android was facing eviction from its office space. Steve Perlman, a close friend of Rubin, brought him $10,000 in cash in an envelope, and shortly thereafter wired an undisclosed amount as seed funding. Perlman refused a stake in the company, and has stated "I did it because I believed in the thing, and I wanted to help Andy."
In 2005, Rubin tried to negotiate deals with Samsung and HTC. Shortly afterwards, Google acquired the company in July of that year for at least $50 million; this was Google's "best deal ever" according to Google's then-vice president of corporate development, David Lawee, in 2010. Android's key employees, including Rubin, Miner, Sears, and White, joined Google as part of the acquisition. Not much was known about the secretive Android Inc. at the time, with the company having provided few details other than that it was making software for mobile phones. At Google, the team led by Rubin developed a mobile device platform powered by the Linux kernel. Google marketed the platform to handset makers and carriers on the promise of providing a flexible, upgradeable system. Google had "lined up a series of hardware components and software partners and signaled to carriers that it was open to various degrees of cooperation".
Speculation about Google's intention to enter the mobile communications market continued to build through December 2006. An early prototype had a close resemblance to a BlackBerry phone, with no touchscreen and a physical QWERTY keyboard, but the arrival of Apple's 2007 iPhone meant that Android "had to go back to the drawing board". Google later changed its Android specification documents to state that "Touchscreens will be supported", although "the Product was designed with the presence of discrete physical buttons as an assumption, therefore a touchscreen cannot completely replace physical buttons". By 2008, both Nokia and BlackBerry announced touch-based smartphones to rival the iPhone 3G, and Android's focus eventually switched to just touchscreens. The first commercially available smartphone running Android was the HTC Dream, also known as T-Mobile G1, announced on September 23, 2008.
On November 5, 2007, the Open Handset Alliance, a consortium of technology companies including Google, device manufacturers such as HTC, Motorola and Samsung, wireless carriers such as Sprint and T-Mobile, and chipset makers such as Qualcomm and Texas Instruments, unveiled itself, with a goal to develop "the first truly open and comprehensive platform for mobile devices". Within a year, the Open Handset Alliance faced two other open source competitors, the Symbian Foundation and the LiMo Foundation, the latter also developing a Linux-based mobile operating system like Google. In September 2007, Google had filed several patent applications in the area of mobile telephony.
On September 23, 2008, Android was introduced by Andy Rubin, Larry Page, Sergey Brin, Cole Brodman, Christopher Schlaeffer and Peter Chou at a press conference in a New York City subway station.
Since 2008, Android has seen numerous updates which have incrementally improved the operating system, adding new features and fixing bugs in previous releases. Each major release is named in alphabetical order after a dessert or sugary treat, with the first few Android versions being called "Cupcake", "Donut", "Eclair", and "Froyo", in that order. During its announcement of Android KitKat in 2013, Google explained that "Since these devices make our lives so sweet, each Android version is named after a dessert", although a Google spokesperson told CNN in an interview that "It's kind of like an internal team thing, and we prefer to be a little bit—how should I say—a bit inscrutable in the matter, I'll say".
2010s
In 2010, Google launched its Nexus series of devices, a lineup in which Google partnered with different device manufacturers to produce new devices and introduce new Android versions. The series was described as having "played a pivotal role in Android's history by introducing new software iterations and hardware standards across the board", and became known for its "bloat-free" software with "timely ... updates". At its developer conference in May 2013, Google announced a special version of the Samsung Galaxy S4, where, instead of using Samsung's own Android customization, the phone ran "stock Android" and was promised to receive new system updates fast. The device would become the start of the Google Play edition program, and was followed by other devices, including the HTC One Google Play edition, and Moto G Google Play edition. In 2015, Ars Technica wrote that "Earlier this week, the last of the Google Play edition Android phones in Google's online storefront were listed as "no longer available for sale" and that "Now they're all gone, and it looks a whole lot like the program has wrapped up".
From 2008 to 2013, Hugo Barra served as product spokesperson, representing Android at press conferences and Google I/O, Google's annual developer-focused conference. He left Google in August 2013 to join Chinese phone maker Xiaomi. Less than six months earlier, Google's then-CEO Larry Page announced in a blog post that Andy Rubin had moved from the Android division to take on new projects at Google, and that Sundar Pichai would become the new Android lead. Pichai himself would eventually switch positions, becoming the new CEO of Google in August 2015 following the company's restructure into the Alphabet conglomerate, making Hiroshi Lockheimer the new head of Android.
In Android 4.4 KitKat, shared write access to MicroSD memory cards was locked for user-installed applications; only the dedicated directories with respective package names, located inside Android/data/, remained writeable. Write access was reinstated in Android 5 Lollipop through the backwards-incompatible Google Storage Access Framework interface.
In June 2014, Google announced Android One, a set of "hardware reference models" that would "allow [device makers] to easily create high-quality phones at low costs", designed for consumers in developing countries. In September, Google announced the first set of Android One phones for release in India. However, Recode reported in June 2015 that the project was "a disappointment", citing "reluctant consumers and manufacturing partners" and "misfires from the search company that has never quite cracked hardware". Plans to relaunch Android One surfaced in August 2015, with Africa announced as the next location for the program a week later. A report from The Information in January 2017 stated that Google is expanding its low-cost Android One program into the United States, although The Verge notes that the company will presumably not produce the actual devices itself. Google introduced the Pixel and Pixel XL smartphones in October 2016, marketed as being the first phones made by Google, and exclusively featured certain software features, such as the Google Assistant, before wider rollout. The Pixel phones replaced the Nexus series, with a new generation of Pixel phones launched in October 2017.
In May 2019, the operating system became entangled in the trade war between China and the United States involving Huawei, which, like many other tech firms, had become dependent on access to the Android platform. In the summer of 2019, Huawei announced it would create an alternative operating system to Android known as Harmony OS, and has filed for intellectual property rights across major global markets. Under such sanctions Huawei has long-term plans to replace Android in 2022 with the new operating system, as Harmony OS was originally designed for internet of things devices, rather than for smartphones and tablets.
On August 22, 2019, it was announced that Android "Q" would officially be branded as Android 10, ending the historic practice of naming major versions after desserts. Google stated that these names were not "inclusive" to international users (due either to the aforementioned foods not being internationally known, or being difficult to pronounce in some languages). On the same day, Android Police reported that Google had commissioned a statue of a giant number "10" to be installed in the lobby of the developers' new office. Android 10 was released on September 3, 2019, to Google Pixel phones first.
2020s
In late 2021, some users reported that they were unable to dial emergency services. The problem was caused by a combination of bugs in Android and in the Microsoft Teams app; both companies released updates addressing the issue.
On December 12, 2024, Google announced Android XR, a new operating system designed to enhance extended reality (XR) experiences on devices such as VR headsets and smart glasses. It was built in collaboration with Samsung and Qualcomm. The platform is also focused on supporting developers with tools like ARCore and Unity to build applications for upcoming XR devices.
Features
Interface
Android's default user interface is mainly based on direct manipulation, using touch inputs that loosely correspond to real-world actions, like swiping, tapping, pinching, and reverse pinching to manipulate on-screen objects, along with a virtual keyboard. Game controllers and full-size physical keyboards are supported via Bluetooth or USB. The response to user input is designed to be immediate and provides a fluid touch interface, often using the vibration capabilities of the device to provide haptic feedback to the user. Internal hardware, such as accelerometers, gyroscopes and proximity sensors are used by some applications to respond to additional user actions, for example adjusting the screen from portrait to landscape depending on how the device is oriented, or allowing the user to steer a vehicle in a racing game by rotating the device, simulating control of a steering wheel.
Home screen
Android devices boot to the home screen, the primary navigation and information "hub" on Android devices, analogous to the desktop found on personal computers. Android home screens are typically made up of app icons and widgets; app icons launch the associated app, whereas widgets display live, auto-updating content, such as a weather forecast, the user's email inbox, or a news ticker directly on the home screen. A home screen may be made up of several pages, between which the user can swipe back and forth. Third-party apps available on Google Play and other app stores can extensively re-theme the home screen, and even mimic the look of other operating systems, such as Windows Phone. Most manufacturers customize the look and features of their Android devices to differentiate themselves from their competitors.
Status bar
Along the top of the screen is a status bar, showing information about the device and its connectivity. This status bar can be pulled (swiped) down from the top of the screen to reveal a notification screen where apps display important information or updates, as well as quick access to system controls and toggles such as display brightness, connectivity settings (WiFi, Bluetooth, cellular data), audio mode, and flashlight. Vendors may implement extended settings, such as the ability to adjust the flashlight brightness.
Notifications
Notifications are "short, timely, and relevant information about your app when it's not in use", and when tapped, users are directed to a screen inside the app relating to the notification. Beginning with Android 4.1 "Jelly Bean", "expandable notifications" allow the user to tap an icon on the notification in order for it to expand and display more information and possible app actions right from the notification.
App lists
An "All Apps" screen lists all installed applications, with the ability for users to drag an app from the list onto the home screen. The app list may be accessed using a gesture or a button, depending on the Android version. A "Recents" screen, also known as "Overview", lets users switch between recently used apps.
The recent list may appear side-by-side or overlapping, depending on the Android version and manufacturer.
Navigation buttons
Many early Android OS smartphones were equipped with a dedicated search button for quick access to a web search engine and individual apps' internal search feature. More recent devices typically allow the former through a long press or swipe away from the home button.
The dedicated option key, also known as the menu key, and its on-screen simulation are no longer supported since Android version 10. Google recommends that mobile application developers locate menus within the user interface. On more recent phones, its place is occupied by a task key used to access the list of recently used apps when actuated. Depending on the device, a long press of this key may simulate a menu button press or engage split-screen view, the latter of which is the default behaviour since stock Android version 7.
Split-screen view
Native support for split screen view has been added in stock Android version 7.0 Nougat.
The earliest vendor-customized Android-based smartphones known to have featured a split-screen view mode are the 2012 Samsung Galaxy S3 and Note 2, the former of which received this feature with the premium suite upgrade delivered in TouchWiz with Android 4.1 Jelly Bean.
Charging while powered off
When charging power is connected or disconnected, or when the power button or home button is briefly pressed while the device is powered off, a visual battery meter, whose appearance varies among vendors, appears on the screen, allowing the user to quickly assess the charge status of a powered-off device without having to boot it up first. Some devices also display the battery percentage.
Applications
Most Android devices come with preinstalled Google apps including Gmail, Google Maps, Google Chrome, YouTube, Google Play Movies & TV, and others.
Applications ("apps"), which extend the functionality of devices (and must be 64-bit), are written using the Android software development kit (SDK) and, often, the Kotlin programming language, which replaced Java as Google's preferred language for Android app development in May 2019, having originally been announced in May 2017. Java is still supported (originally the only option for user-space programs, and often mixed with Kotlin), as is C++. Java or other JVM languages, such as Kotlin, may be combined with C/C++, together with a choice of non-default runtimes that allow better C++ support.
The SDK includes a comprehensive set of development tools, including a debugger, software libraries, a handset emulator based on QEMU, documentation, sample code, and tutorials. Initially, Google's supported integrated development environment (IDE) was Eclipse using the Android Development Tools (ADT) plugin; in December 2014, Google released Android Studio, based on IntelliJ IDEA, as its primary IDE for Android application development. Other development tools are available, including a native development kit (NDK) for applications or extensions in C or C++, Google App Inventor, a visual environment for novice programmers, and various cross platform mobile web applications frameworks. In January 2014, Google unveiled a framework based on Apache Cordova for porting Chrome HTML 5 web applications to Android, wrapped in a native application shell. Additionally, Firebase was acquired by Google in 2014 that provides helpful tools for app and web developers.
Android has a growing selection of third-party applications, which can be acquired by users by downloading and installing the application's APK (Android application package) file, or by downloading them using an application store program that allows users to install, update, and remove applications from their devices. Google Play Store is the primary application store installed on Android devices that comply with Google's compatibility requirements and license the Google Mobile Services software. Google Play Store allows users to browse, download and update applications published by Google and third-party developers; , there are more than three million applications available for Android in Play Store. , 50 billion application installations had been performed. Some carriers offer direct carrier billing for Google Play application purchases, where the cost of the application is added to the user's monthly bill. , there are over one billion active users a month for Gmail, Android, Chrome, Google Play and Maps.
Due to the open nature of Android, a number of third-party application marketplaces also exist for Android, either to provide a substitute for devices that are not allowed to ship with Google Play Store, provide applications that cannot be offered on Google Play Store due to policy violations, or for other reasons. Examples of these third-party stores have included the Amazon Appstore, GetJar, and SlideMe. F-Droid, another alternative marketplace, seeks to only provide applications that are distributed under free and open source licenses.
In October 2020, Google removed several Android applications from Play Store, as they were identified breaching its data collection rules. The firm was informed by International Digital Accountability Council (IDAC) that apps for children like Number Coloring, Princess Salon and Cats & Cosplay, with collective downloads of 20 million, were violating Google's policies.
At the Windows 11 announcement event in June 2021, Microsoft showcased the new Windows Subsystem for Android (WSA), which enabled support for the Android Open Source Project (AOSP) and was meant to allow users to run Android apps and games in Windows 11 on their desktop. On March 5, 2024, Microsoft announced the deprecation of WSA, with support ending on March 5, 2025.
Storage
The storage of Android devices can be expanded using secondary devices such as SD cards. Android recognizes two types of secondary storage: portable storage (which is used by default), and adoptable storage. Portable storage is treated as an external storage device. Adoptable storage, introduced on Android 6.0, allows the internal storage of the device to be spanned with the SD card, treating it as an extension of the internal storage. This has the disadvantage of preventing the memory card from being used with another device unless it is reformatted.
Android 4.4 introduced the Storage Access Framework (SAF), a set of APIs for accessing files on the device's filesystem. As of Android 11, Android has required apps to conform to a data privacy policy known as scoped storage, under which apps may only automatically have access to certain directories (such as those for pictures, music, and video), and app-specific directories they have created themselves. Apps are required to use the SAF to access any other part of the filesystem.
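The scoped-storage rule described above amounts to a simple path-based access policy: direct access is automatic for shared media collections and for the app's own directory, and everything else goes through the SAF. The sketch below is a toy model of that policy; the paths and package name are hypothetical, and it is not Android API code.

```python
# Shared media collections that apps may access directly under
# scoped storage (illustrative paths).
AUTO_ALLOWED = (
    "/storage/emulated/0/Pictures",
    "/storage/emulated/0/Music",
    "/storage/emulated/0/Movies",
)

def access_mode(path, package="com.example.app"):
    """Classify how a hypothetical app would have to access `path`."""
    app_dir = f"/storage/emulated/0/Android/data/{package}"
    if path.startswith(app_dir):
        return "direct (app-specific dir)"
    if path.startswith(AUTO_ALLOWED):
        return "direct (media collection)"
    return "requires Storage Access Framework"

print(access_mode("/storage/emulated/0/Pictures/cat.jpg"))
print(access_mode("/storage/emulated/0/Download/doc.pdf"))
```

A real implementation uses MediaStore and SAF intents rather than raw path checks; the point here is only the shape of the policy.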
Memory management
Since Android devices are usually battery-powered, Android is designed to manage processes to keep power consumption at a minimum. When an application is not in use the system suspends its operation so that, while available for immediate use rather than closed, it does not use battery power or CPU resources. Android manages the applications stored in memory automatically: when memory is low, the system will begin invisibly and automatically closing inactive processes, starting with those that have been inactive for the longest amount of time. Lifehacker reported in 2011 that third-party task-killer applications were doing more harm than good.
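The reclamation behaviour described above, closing inactive processes starting with the least recently used, is a classic LRU eviction policy. A minimal sketch, with an assumed process limit and illustrative app names:

```python
from collections import OrderedDict

class ProcessCache:
    """Toy LRU model of background-process reclamation: when more
    than `max_procs` processes are resident, the least recently
    used ones are closed first."""

    def __init__(self, max_procs):
        self.max_procs = max_procs
        self.procs = OrderedDict()  # ordered oldest-used first

    def touch(self, name):
        """Mark a process as just used (starting it if needed);
        return the list of processes reclaimed as a result."""
        self.procs.pop(name, None)      # remove old position, if any
        self.procs[name] = True         # re-insert as most recent
        killed = []
        while len(self.procs) > self.max_procs:
            victim, _ = self.procs.popitem(last=False)  # LRU victim
            killed.append(victim)
        return killed

cache = ProcessCache(max_procs=3)
for app in ["mail", "maps", "music", "camera"]:
    dead = cache.touch(app)
    if dead:
        print("reclaimed:", dead)  # -> reclaimed: ['mail']
```

Because suspended apps cost essentially no CPU or battery, this policy makes manual task killing redundant, which is why the task-killer apps mentioned above tended to do more harm than good.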
Developer options
Some settings for use by developers for debugging and power users are located in a "Developer options" sub menu, such as the ability to highlight updating parts of the display, show an overlay with the current status of the touch screen, show touching spots for possible use in screencasting, notify the user of unresponsive background processes with the option to end them ("Show all ANRs", i.e. "App's Not Responding"), prevent a Bluetooth audio client from controlling the system volume ("Disable absolute volume"), and adjust the duration of transition animations or deactivate them completely to speed up navigation.
Developer options have been hidden by default since Android 4.2 "Jelly Bean", but can be enabled by tapping the operating system's build number in the device information seven times. Hiding developer options again requires deleting the user data for the "Settings" app, possibly resetting some other preferences.
Hardware
The main hardware platform for Android is ARM (i.e. the 64-bit ARMv8-A architecture and previously 32-bit such as ARMv7), and x86 and x86-64 architectures were once also officially supported in later versions of Android. The unofficial Android-x86 project provided support for x86 architectures ahead of the official support. Since 2012, Android devices with Intel processors began to appear, including phones and tablets. While gaining support for 64-bit platforms, Android was first made to run on 64-bit x86 and then on ARM64. An unofficial experimental port of the operating system to the RISC-V architecture was released in 2021.
Minimum RAM requirements for devices running Android 7.1 range from 2 GB for the best hardware down to 1 GB for the most common screen class. Android supports all versions of OpenGL ES and Vulkan (with Vulkan 1.1 available on some devices).
Android devices incorporate many optional hardware components, including still or video cameras, GPS, orientation sensors, dedicated gaming controls, accelerometers, gyroscopes, barometers, magnetometers, proximity sensors, pressure sensors, thermometers, and touchscreens. Some hardware components are not required, but became standard in certain classes of devices, such as smartphones, and additional requirements apply if they are present. Some other hardware was initially required, but those requirements have been relaxed or eliminated altogether. For example, as Android was initially developed as a phone OS, hardware such as microphones were required, while over time the phone function became optional. Android used to require an autofocus camera, a requirement that was first relaxed to a fixed-focus camera and then dropped entirely when Android started to be used on set-top boxes.
In addition to running on smartphones and tablets, several vendors run Android natively on regular PC hardware with a keyboard and mouse. In addition to their availability on commercially available hardware, similar PC hardware-friendly versions of Android are freely available from the Android-x86 project, including customized Android 4.4. Using the Android emulator that is part of the Android SDK, or third-party emulators, Android can also run non-natively on x86 architectures. Chinese companies are building a PC and mobile operating system, based on Android, to "compete directly with Microsoft Windows and Google Android". The Chinese Academy of Engineering noted that "more than a dozen" companies were customizing Android following a Chinese ban on the use of Windows 8 on government PCs.
Development
Android is developed by Google until the latest changes and updates are ready to be released, at which point the source code is made available to the Android Open Source Project (AOSP), an open source initiative led by Google. The first source code release happened as part of the initial release in 2007. All releases are under the Apache License.
The AOSP code can be found with minimal modifications on select devices, mainly the former Nexus and current Android One series of devices. However, most original equipment manufacturers (OEMs) customize the source code to run on their hardware.
Android's source code does not contain the device drivers, often proprietary, that are needed for certain hardware components, and does not contain the source code of Google Play Services, which many apps depend on. As a result, most Android devices, including Google's own, ship with a combination of free and open source and proprietary software, with the software required for accessing Google services falling into the latter category. In response to this, there are some projects that build complete operating systems based on AOSP as free software, the first being CyanogenMod (see section Open-source community below).
Update schedule
Google provides annual Android releases, both for factory installation in new devices, and for over-the-air updates to existing devices. The latest major release is Android 15.
The extensive variation of hardware in Android devices has caused significant delays for software upgrades and security patches. Each upgrade has had to be specifically tailored, a time- and resource-consuming process. Except for devices within the Google Nexus and Pixel brands, updates have often arrived months after the release of the new version, or not at all. Manufacturers often prioritize their newest devices and leave old ones behind. Additional delays can be introduced by wireless carriers who, after receiving updates from manufacturers, further customize Android to their needs and conduct extensive testing on their networks before sending out the upgrade. There are also situations in which upgrades are impossible due to a manufacturer not updating necessary drivers.
The lack of after-sale support from manufacturers and carriers has been widely criticized by consumer groups and the technology media. Some commentators have noted that the industry has a financial incentive not to upgrade their devices, as the lack of updates for existing devices fuels the purchase of newer ones, an attitude described as "insulting". The Guardian complained that the method of distribution for updates is complicated only because manufacturers and carriers have designed it that way. In 2011, Google partnered with a number of industry players to announce an "Android Update Alliance", pledging to deliver timely updates for every device for 18 months after its release; however, there has not been another official word about that alliance since its announcement.
In 2012, Google began de-coupling certain aspects of the operating system (particularly its central applications) so they could be updated through the Google Play store independently of the OS. One of those components, Google Play Services, is a closed-source system-level process providing APIs for Google services, installed automatically on nearly all devices running Android 2.2 "Froyo" and higher. With these changes, Google can add new system functions and update apps without having to distribute an upgrade to the operating system itself. As a result, Android 4.2 and 4.3 "Jelly Bean" contained relatively few user-facing changes, focusing more on minor changes and platform improvements.
HTC's then-executive Jason Mackenzie called monthly security updates "unrealistic" in 2015, and Google was trying to persuade carriers to exclude security patches from the full testing procedures. In May 2016, Bloomberg Businessweek reported that Google was making efforts to keep Android more up-to-date, including accelerated rates of security updates, rolling out technological workarounds, reducing requirements for phone testing, and ranking phone makers in an attempt to "shame" them into better behavior. As stated by Bloomberg: "As smartphones get more capable, complex and hackable, having the latest software work closely with the hardware is increasingly important". Hiroshi Lockheimer, the Android lead, admitted that "It's not an ideal situation", further commenting that the lack of updates is "the weakest link on security on Android". Wireless carriers were described in the report as the "most challenging discussions", due to their slow approval time while testing on their networks, despite some carriers, including Verizon Wireless and Sprint Corporation, already shortening their approval times. In a further effort for persuasion, Google shared a list of top phone makers measured by updated devices with its Android partners, and is considering making the list public. Mike Chan, co-founder of phone maker Nextbit and former Android developer, said that "The best way to solve this problem is a massive re-architecture of the operating system", "or Google could invest in training manufacturers and carriers 'to be good Android citizens'".
In May 2017, with the announcement of Android 8.0, Google introduced Project Treble, a major re-architecture of the Android OS framework designed to make it easier, faster, and less costly for manufacturers to update devices to newer versions of Android. Project Treble separates the vendor implementation (device-specific, lower-level software written by silicon manufacturers) from the Android OS framework via a new "vendor interface". In Android 7.0 and earlier, no formal vendor interface exists, so device makers must update large portions of the Android code to move a device to a newer version of the operating system. With Treble, the new stable vendor interface provides access to the hardware-specific parts of Android, enabling device makers to deliver new Android releases simply by updating the Android OS framework, "without any additional work required from the silicon manufacturers."
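Treble's split can be illustrated with a stable interface between an updatable framework and a fixed vendor layer. This is a hedged sketch of the idea only; the class names, the `camera_capture` method, and the returned strings are invented for the example and do not correspond to real Treble HAL interfaces:

```python
from abc import ABC, abstractmethod

class VendorInterface(ABC):
    """Stand-in for Treble's stable vendor interface: the contract
    the OS framework relies on, implemented once per chipset."""

    @abstractmethod
    def camera_capture(self) -> str: ...

class ExampleSiliconHAL(VendorInterface):
    # Hypothetical vendor implementation, written by the silicon
    # maker and left untouched by an OS framework update.
    def camera_capture(self) -> str:
        return "frame from example ISP"

class AndroidFramework:
    """The updatable half: it talks to hardware only through the
    vendor interface, so a new framework release can ship without
    changing the vendor code below it."""

    def __init__(self, version: int, vendor: VendorInterface):
        self.version = version
        self.vendor = vendor

    def take_photo(self) -> str:
        return f"Android {self.version}: {self.vendor.camera_capture()}"

hal = ExampleSiliconHAL()        # shipped once with the device
oreo = AndroidFramework(8, hal)  # initial release
pie = AndroidFramework(9, hal)   # OS update reuses the same HAL
print(pie.take_photo())          # → Android 9: frame from example ISP
```

The design choice mirrors the text above: as long as the interface stays stable, swapping in a newer `AndroidFramework` requires no change to the vendor implementation.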
In September 2017, Google's Project Treble team revealed that, as part of their efforts to improve the security lifecycle of Android devices, Google had managed to get the Linux Foundation to agree to extend the support lifecycle of the Linux Long-Term Support (LTS) kernel branch from the 2 years that it has historically lasted to 6 years for future versions of the LTS kernel, starting with Linux kernel 4.4.
In May 2019, with the announcement of Android 10, Google introduced Project Mainline to simplify and expedite delivery of updates to the Android ecosystem. Project Mainline enables updates to core OS components through the Google Play Store. As a result, important security and performance improvements that previously needed to be part of full OS updates can be downloaded and installed as easily as an app update.
Google reported rolling out new amendments in Android 12 aimed at making the use of third-party application stores easier. This announcement rectified the concerns reported regarding the development of Android apps, including a fight over an alternative in-app payment system and difficulties faced by businesses moving online because of COVID-19.
Linux kernel
Android's kernel is based on the Linux kernel's long-term support (LTS) branches. Android 14 uses version 6.1 or 5.15 for its "feature kernels"; "launch kernels" can be older (e.g. android12-5.10 or android11-5.4), and older Android versions use a range of older kernels (down to e.g. android-4.14-stable or android-4.9-q). The actual kernel depends on the individual device.
Android's variant of the Linux kernel has further architectural changes that are implemented by Google outside the typical Linux kernel development cycle, such as the inclusion of components like device trees, ashmem, ION, and different out-of-memory (OOM) handling. Certain features that Google contributed back to the Linux kernel, notably a power management feature called "wakelocks", were initially rejected by mainline kernel developers partly because they felt that Google did not show any intent to maintain its own code. Google announced in April 2010 that they would hire two employees to work with the Linux kernel community, but Greg Kroah-Hartman, the current Linux kernel maintainer for the stable branch, said in December 2010 that he was concerned that Google was no longer trying to get their code changes included in mainstream Linux. Google engineer Patrick Brady once stated at the company's developer conference that "Android is not Linux", with Computerworld adding that "Let me make it simple for you, without Linux, there is no Android". Ars Technica wrote that "Although Android is built on top of the Linux kernel, the platform has very little in common with the conventional desktop Linux stack".
In August 2011, Linus Torvalds said that "eventually Android and Linux would come back to a common kernel, but it will probably not be for four to five years". (This has not happened yet: while some code has been upstreamed, not all of it has, so modified kernels continue to be used.) In December 2011, Greg Kroah-Hartman announced the start of the Android Mainlining Project, which aims to put some Android drivers, patches and features back into the Linux kernel, starting with Linux 3.3. Linux included the autosleep and wakelocks capabilities in the 3.5 kernel, after many previous attempts at a merger. The interfaces are the same but the upstream Linux implementation allows for two different suspend modes: to memory (the traditional suspend that Android uses), and to disk (hibernate, as it is known on the desktop). Google maintains a public code repository that contains their experimental work to re-base Android off the latest stable Linux versions.
Android is a Linux distribution according to the Linux Foundation, Google's open-source chief Chris DiBona, and several journalists. Others, such as Google engineer Patrick Brady, say that Android is not Linux in the traditional Unix-like Linux distribution sense; Android does not include the GNU C Library (it uses Bionic as an alternative C library) and some other components typically found in Linux distributions.
With the release of Android Oreo in 2017, Google began to require that devices shipped with new SoCs had Linux kernel version 4.4 or newer, for security reasons. Existing devices upgraded to Oreo, and new products launched with older SoCs, were exempt from this rule.
Rooting
The flash storage on Android devices is split into several partitions, such as /system/ for the operating system itself, and /data/ for user data and application installations.
In contrast to typical desktop Linux distributions, Android device owners are not given root access to the operating system, and sensitive partitions such as /system/ are partially read-only. However, root access can be obtained by exploiting security flaws in Android, which is used frequently by the open-source community to enhance the capabilities and customizability of their devices, but also by malicious parties to install viruses and malware. Root access can also be obtained by unlocking the bootloader, which is possible on most Android devices: for example, on most Google Pixel, OnePlus and Nothing models, the "OEM Unlocking" option in the developer settings allows Fastboot to unlock the bootloader, while most other OEMs have their own methods. The unlocking process resets the system to factory state, erasing all user data.
Software stack
On top of the Linux kernel, there are the middleware, libraries and APIs written in C, and application software running on an application framework which includes Java-compatible libraries. Development of the Linux kernel continues independently of Android's other source code projects.
Android uses Android Runtime (ART) as its runtime environment (introduced in version 4.4), which uses ahead-of-time (AOT) compilation to entirely compile the application bytecode into machine code upon the installation of an application. In Android 4.4, ART was an experimental feature and not enabled by default; it became the only runtime option in the next major version of Android, 5.0. Until version 5.0, when ART took over, Android used Dalvik as a process virtual machine with trace-based just-in-time (JIT) compilation to run Dalvik "dex-code" (Dalvik Executable), which is usually translated from Java bytecode. Following the trace-based JIT principle, in addition to interpreting the majority of application code, Dalvik performed compilation and native execution of select frequently executed code segments ("traces") each time an application was launched.
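The trace-based JIT principle described above can be sketched as a counter that promotes "hot" code fragments from interpretation to native execution. This is a toy model, not Dalvik's actual implementation; the threshold and identifiers are invented for the example (ART, by contrast, compiles everything at install time):

```python
class TraceJIT:
    """Toy model of Dalvik-style trace-based JIT: bytecode fragments
    are interpreted until they become "hot", after which they are
    compiled and subsequently run natively."""

    HOT_THRESHOLD = 3  # invented for the example

    def __init__(self):
        self.exec_counts = {}
        self.compiled = set()

    def run_trace(self, trace_id):
        if trace_id in self.compiled:
            return "native"
        self.exec_counts[trace_id] = self.exec_counts.get(trace_id, 0) + 1
        if self.exec_counts[trace_id] >= self.HOT_THRESHOLD:
            self.compiled.add(trace_id)  # compile the hot trace
        return "interpreted"

jit = TraceJIT()
modes = [jit.run_trace("loop-body") for _ in range(5)]
print(modes)
# → ['interpreted', 'interpreted', 'interpreted', 'native', 'native']
```

Rarely executed traces never pay the compilation cost, while hot loops eventually run as native code; AOT compilation trades that launch-time adaptivity for full native speed from the first run.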
For its Java library, the Android platform uses a subset of the now discontinued Apache Harmony project. In December 2015, Google announced that the next version of Android would switch to a Java implementation based on the OpenJDK project.
Android's standard C library, Bionic, was developed by Google specifically for Android, as a derivation of BSD's standard C library code. Bionic itself has been designed with several major features specific to the Linux kernel. The main benefits of using Bionic instead of the GNU C Library (glibc) or uClibc are its smaller runtime footprint and optimization for low-frequency CPUs. At the same time, Bionic is licensed under the terms of the BSD licence, which Google finds more suitable for Android's overall licensing model.
Aiming for a different licensing model, toward the end of 2012, Google switched the Bluetooth stack in Android from the GPL-licensed BlueZ to the Apache-licensed BlueDroid. A new Bluetooth stack, called Gabeldorsche, was developed to try to fix the bugs in the BlueDroid implementation.
Android does not have a native X Window System by default, nor does it support the full set of standard GNU libraries. This made it difficult to port existing Linux applications or libraries to Android, until version r5 of the Android Native Development Kit brought support for applications written completely in C or C++. Libraries written in C may also be used in applications by injection of a small shim and usage of the JNI.
Since the release of Marshmallow, current versions of Android use "Toybox", a collection of command-line utilities (mostly for use by apps, as Android does not provide a command-line interface by default), replacing the similar "Toolbox" collection found in previous Android versions.
Android contains another operating system, Trusty OS, as part of "Trusty", a set of "software components supporting a Trusted Execution Environment (TEE) on mobile devices." "Trusty and the Trusty API are subject to change. [..] Applications for the Trusty OS can be written in C/C++ (C++ support is limited), and they have access to a small C library. [..] All Trusty applications are single-threaded; multithreading in Trusty userspace currently is unsupported. [..] Third-party application development is not supported in" the current version. Software running on Trusty OS and its dedicated processor runs the "DRM framework for protected content. [..] There are many other uses for a TEE such as mobile payments, secure banking, full-disk encryption, multi-factor authentication, device reset protection, replay-protected persistent storage, wireless display ("cast") of protected content, secure PIN and fingerprint processing, and even malware detection."
Open-source community
Android's source code is released by Google under an open-source license, and its open nature has encouraged a large community of developers and enthusiasts to use the open-source code as a foundation for community-driven projects, which deliver updates to older devices, add new features for advanced users or bring Android to devices originally shipped with other operating systems. These community-developed releases often bring new features and updates to devices faster than through the official manufacturer/carrier channels, with a comparable level of quality; provide continued support for older devices that no longer receive official updates; or bring Android to devices that were officially released running other operating systems, such as the HP TouchPad. Community releases often come pre-rooted and contain modifications not provided by the original vendor, such as the ability to overclock or over/undervolt the device's processor. CyanogenMod was the most widely used community firmware, now discontinued and succeeded by LineageOS.
There are, as of August 2019, a handful of notable custom Android distributions (ROMs) of Android version 9.0 Pie, which was released publicly in August 2018. See List of custom Android distributions.
Historically, device manufacturers and mobile carriers have typically been unsupportive of third-party firmware development. Manufacturers express concern about improper functioning of devices running unofficial software and the support costs resulting from this. Moreover, modified firmware such as CyanogenMod sometimes offer features, such as tethering, for which carriers would otherwise charge a premium. As a result, technical obstacles including locked bootloaders and restricted access to root permissions are common in many devices. However, as community-developed software has grown more popular, and following a statement by the Librarian of Congress in the United States that permits the "jailbreaking" of mobile devices, manufacturers and carriers have softened their position regarding third party development, with some, including HTC, Motorola, Samsung and Sony, providing support and encouraging development. As a result of this, over time the need to circumvent hardware restrictions to install unofficial firmware has lessened as an increasing number of devices are shipped with unlocked or unlockable bootloaders, similar to Nexus series of phones, although usually requiring that users waive their devices' warranties to do so. However, despite manufacturer acceptance, some carriers in the US still require that phones are locked down.
Device codenames
Internally, Android identifies each supported device by its device codename, a short string, which may or may not be similar to the model name used in marketing the device. For example, the device codename of the Pixel smartphone is sailfish.
The device codename is usually not visible to the end user, but is important for determining compatibility with modified Android versions. It is sometimes also mentioned in articles discussing a device, because it makes it possible to distinguish different hardware variants of a device, even if the manufacturer offers them under the same name. The device codename is available to running applications under android.os.Build.DEVICE.
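A typical use of the codename is a compatibility check before installing a modified Android build. The sketch below is hypothetical: "sailfish" is the Pixel codename mentioned above, but the supported-device set and the helper function are invented for the example (a real installer would read `android.os.Build.DEVICE` on-device):

```python
# "sailfish" is the Pixel's codename from the text above; the rest
# of this example is an invented illustration.
SUPPORTED_DEVICES = {"sailfish", "marlin"}

def check_compatibility(build_device: str) -> str:
    """Mimics a custom-ROM installer matching the device codename
    (the value of android.os.Build.DEVICE) against the list of
    codenames the build was made for."""
    if build_device in SUPPORTED_DEVICES:
        return f"OK: build targets codename '{build_device}'"
    return f"Refusing to flash: unsupported codename '{build_device}'"

print(check_compatibility("sailfish"))
# → OK: build targets codename 'sailfish'
```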
Security and privacy
In 2020, Google launched the Android Partner Vulnerability Initiative to improve the security of Android. They also formed an Android security team.
Common security threats
Research from security company Trend Micro lists premium service abuse as the most common type of Android malware, where text messages are sent from infected phones to premium-rate telephone numbers without the consent or even knowledge of the user. Other malware displays unwanted and intrusive advertisements on the device, or sends personal information to unauthorised third parties. Security threats on Android are reportedly growing exponentially; however, Google engineers have argued that the malware and virus threat on Android is being exaggerated by security companies for commercial reasons, and have accused the security industry of playing on fears to sell virus protection software to users. Google maintains that dangerous malware is actually extremely rare, and a survey conducted by F-Secure showed that only 0.5% of Android malware reported had come from the Google Play store.
In 2021, journalists and researchers reported the discovery of spyware called Pegasus, developed and distributed by a private company, which can be, and has been, used to infect both iOS and Android smartphones – often without the need for any user interaction or significant clues to the user, partly via use of 0-day exploits – and then used to exfiltrate data, track user locations, capture video through the camera, and activate the microphone at any time. Analysis of data traffic by popular smartphones running variants of Android found substantial by-default data collection and sharing, with no opt-out, by this pre-installed software. Neither of these issues is addressed, or can be addressed, by security patches.
Scope of surveillance by public institutions
As part of the broader 2013 mass surveillance disclosures it was revealed in September 2013 that the American and British intelligence agencies, the National Security Agency (NSA) and Government Communications Headquarters (GCHQ), respectively, have access to the user data on iPhone, BlackBerry, and Android devices. They were reportedly able to read almost all smartphone information, including SMS, location, emails, and notes. In January 2014, further reports revealed the intelligence agencies' capabilities to intercept the personal information transmitted across the Internet by social networks and other popular applications such as Angry Birds, which collect personal information of their users for advertising and other commercial reasons. GCHQ has, according to The Guardian, a wiki-style guide of different apps and advertising networks, and the different data that can be siphoned from each. Later that week, the Finnish Angry Birds developer Rovio announced that it was reconsidering its relationships with its advertising platforms in the light of these revelations, and called upon the wider industry to do the same.
The documents revealed a further effort by the intelligence agencies to intercept Google Maps searches and queries submitted from Android and other smartphones to collect location information in bulk. The NSA and GCHQ insist their activities comply with all relevant domestic and international laws, although the Guardian stated "the latest disclosures could also add to mounting public concern about how the technology sector collects and uses information, especially for those outside the US, who enjoy fewer privacy protections than Americans."
Leaked documents codenamed Vault 7 and dated from 2013 to 2016, detail the capabilities of the Central Intelligence Agency (CIA) to perform electronic surveillance and cyber warfare, including the ability to compromise the operating systems of most smartphones (including Android).
Security patches
In August 2015, Google announced that devices in the Google Nexus series would begin to receive monthly security patches. Google also wrote that "Nexus devices will continue to receive major updates for at least two years and security patches for the longer of three years from initial availability or 18 months from last sale of the device via the Google Store." The following October, researchers at the University of Cambridge concluded that 87.7% of Android phones in use had known but unpatched security vulnerabilities due to lack of updates and support. Ron Amadeo of Ars Technica wrote also in August 2015 that "Android was originally designed, above all else, to be widely adopted. Google was starting from scratch with zero percent market share, so it was happy to give up control and give everyone a seat at the table in exchange for adoption. [...] Now, though, Android has around 75–80 percent of the worldwide smartphone market—making it not just the world's most popular mobile operating system but arguably the most popular operating system, period. As such, security has become a big issue. Android still uses a software update chain-of-command designed back when the Android ecosystem had zero devices to update, and it just doesn't work". Following news of Google's monthly schedule, some manufacturers, including Samsung and LG, promised to issue monthly security updates, but, as noted by Jerry Hildenbrand in Android Central in February 2016, "instead we got a few updates on specific versions of a small handful of models. And a bunch of broken promises".
In a March 2017 post on Google's Security Blog, Android security leads Adrian Ludwig and Mel Miller wrote that "More than 735 million devices from 200+ manufacturers received a platform security update in 2016" and that "Our carrier and hardware partners helped expand deployment of these updates, releasing updates for over half of the top 50 devices worldwide in the last quarter of 2016". They also wrote that "About half of devices in use at the end of 2016 had not received a platform security update in the previous year", stating that their work would continue to focus on streamlining the security updates program for easier deployment by manufacturers. Furthermore, in a comment to TechCrunch, Ludwig stated that the wait time for security updates had been reduced from "six to nine weeks down to just a few days", with 78% of flagship devices in North America being up-to-date on security at the end of 2016.
Patches to bugs found in the core operating system often do not reach users of older and lower-priced devices. However, the open-source nature of Android allows security contractors to take existing devices and adapt them for highly secure uses. For example, Samsung has worked with General Dynamics through their Open Kernel Labs acquisition to rebuild Jelly Bean on top of their hardened microvisor for the "Knox" project.
Location-tracking
Android smartphones have the ability to report the location of Wi-Fi access points, encountered as phone users move around, to build databases containing the physical locations of hundreds of millions of such access points. These databases form electronic maps to locate smartphones, allowing them to run apps like Foursquare, Google Latitude, Facebook Places, and to deliver location-based ads. Third party monitoring software such as TaintDroid, an academic research-funded project, can, in some cases, detect when personal information is being sent from applications to remote servers.
Further notable exploits
In 2018, Norwegian security firm Promon unearthed a serious Android security hole that could be exploited to steal login credentials, access messages, and track location, and that could be found in all versions of Android, including Android 10. The vulnerability exploited a bug in the multitasking system, enabling a malicious app to overlay legitimate apps with fake login screens that users are not aware of when entering security credentials. Users can also be tricked into granting additional permissions to the malicious apps, which later enable them to perform various nefarious activities, including intercepting texts or calls and stealing banking credentials. Avast Threat Labs also discovered that many pre-installed apps on several hundred new Android devices contain dangerous malware and adware. Some of the pre-installed malware can commit ad fraud or even take over its host device.
In 2020, the Which? watchdog reported that more than a billion Android devices released in 2012 or earlier, which was 40% of Android devices worldwide, were at risk of being hacked. This conclusion stemmed from the fact that no security updates were issued for the Android versions below 7.0 in 2019. Which? collaborated with the AV Comparatives anti-virus lab to infect five phone models with malware, and it succeeded in each case. Google refused to comment on the watchdog's speculations.
On August 5, 2020, Twitter published a blog post urging its users to update their applications to the latest version with regard to a security concern that allowed others to access direct messages. A hacker could easily use the "Android system permissions" to fetch the account credentials in order to do so. The security issue affects only Android 8 (Android Oreo) and Android 9 (Android Pie). Twitter confirmed that updating the app will restrict such practices.
Technical security features
Android applications run in a sandbox, an isolated area of the system that does not have access to the rest of the system's resources unless access permissions are explicitly granted by the user when the application is installed; however, this may not be possible for pre-installed apps. It is not possible, for example, to turn off the microphone access of the pre-installed camera app without disabling the camera completely. This also applies to Android versions 7 and 8.
Since February 2012, Google has used its Google Bouncer malware scanner to watch over and scan apps available in the Google Play store. A "Verify Apps" feature was introduced in November 2012, as part of the Android 4.2 "Jelly Bean" operating system version, to scan all apps, both from Google Play and from third-party sources, for malicious behaviour. Originally only doing so during installation, Verify Apps received an update in 2014 to "constantly" scan apps, and in 2017 the feature was made visible to users through a menu in Settings.
In older Android versions, before installing an application, the Google Play store displayed a list of the permissions an app needs to function. After reviewing these permissions, the user could choose to accept or refuse them, installing the application only if they accepted. In Android 6.0 "Marshmallow", the permissions system was changed; apps are no longer automatically granted all of their specified permissions at installation time. An opt-in system is used instead, in which users are prompted to grant or deny individual permissions to an app when they are needed for the first time. Applications remember the grants, which can be revoked by the user at any time. Pre-installed apps, however, are not always part of this approach. In some cases it may not be possible to deny certain permissions to pre-installed apps, nor be possible to disable them. The Google Play Services app can be neither uninstalled nor disabled; any force stop attempt results in the app restarting itself. The new permissions model is used only by applications developed for Marshmallow using its software development kit (SDK), and older apps will continue to use the previous all-or-nothing approach. Permissions can still be revoked for those apps, though this might prevent them from working properly, and a warning is displayed to that effect.
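The two permission models described above can be contrasted in a short sketch. This is an illustrative model under stated assumptions, not Android's actual permission code; the class, the permission names, and the API-level constant usage are simplifications for the example:

```python
MARSHMALLOW_API = 23  # Android 6.0

class App:
    """Toy model contrasting install-time all-or-nothing permissions
    (pre-Marshmallow) with runtime first-use prompts (Marshmallow+)."""

    def __init__(self, name, target_sdk, declared_permissions):
        self.name = name
        self.target_sdk = target_sdk
        self.declared = set(declared_permissions)
        self.granted = set()

    def install(self, user_accepts_all=True):
        # Legacy apps: the user accepts every declared permission at
        # install time, or the app is not installed at all.
        if self.target_sdk < MARSHMALLOW_API:
            if not user_accepts_all:
                raise RuntimeError("installation refused")
            self.granted = set(self.declared)

    def request(self, permission, user_grants):
        """Marshmallow-style prompt on first use; the grant is
        remembered until the user revokes it."""
        if permission in self.granted:
            return True
        if user_grants:
            self.granted.add(permission)
        return user_grants

    def revoke(self, permission):
        self.granted.discard(permission)

legacy = App("old-app", 19, {"CAMERA", "SMS"})
legacy.install()                  # everything granted up front
modern = App("new-app", 23, {"CAMERA", "SMS"})
modern.install()                  # nothing granted yet
modern.request("CAMERA", user_grants=True)
print(sorted(legacy.granted), sorted(modern.granted))
# → ['CAMERA', 'SMS'] ['CAMERA']
```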
In September 2014, Jason Nova of Android Authority reported on a study by the German security company Fraunhofer AISEC on antivirus software and malware threats on Android. Nova wrote that "The Android operating system deals with software packages by sandboxing them; this does not allow applications to list the directory contents of other apps to keep the system safe. By not allowing the antivirus to list the directories of other apps after installation, applications that show no inherent suspicious behavior when downloaded are cleared as safe. If then later on parts of the app are activated that turn out to be malicious, the antivirus will have no way to know since it is inside the app and out of the antivirus' jurisdiction". The study by Fraunhofer AISEC, examining antivirus software from Avast, AVG, Bitdefender, ESET, F-Secure, Kaspersky, Lookout, McAfee (formerly Intel Security), Norton, Sophos, and Trend Micro, revealed that "the tested antivirus apps do not provide protection against customized malware or targeted attacks", and that "the tested antivirus apps were also not able to detect malware which is completely unknown to date but does not make any efforts to hide its malignity".
In August 2013, Google announced Android Device Manager (renamed Find My Device in May 2017), a service that allows users to remotely track, locate, and wipe their Android device, with an Android app for the service released in December. In December 2016, Google introduced a Trusted Contacts app, letting users request location-tracking of loved ones during emergencies. In 2020, Trusted Contacts was shut down and the location-sharing feature rolled into Google Maps.
On October 8, 2018, Google announced new Google Play store requirements to combat over-sharing of potentially sensitive information, including call and text logs. The issue stems from the fact that many apps request permissions to access users' personal information (even if this information is not needed for the app to function) and some users unquestioningly grant these permissions. Alternatively, a permission might be listed in the app manifest as required (as opposed to optional) and the app would not install unless the user grants the permission; users can withdraw any permissions, even required ones, from any app in the device settings after installation, but few users do this. Google promised to work with developers and create exceptions if their apps require Phone or SMS permissions for "core app functionality". Enforcement of the new policies began on January 6, 2019, 90 days after the policy announcement on October 8, 2018. Furthermore, Google announced a new "target API level requirement" (targetSdkVersion in the manifest) of at least Android 8.0 (API level 26) for all new apps and app updates. The API level requirement might combat the practice of app developers bypassing some permission screens by specifying early Android versions that had a coarser permission model.
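The targetSdkVersion field mentioned above historically lived in the manifest's uses-sdk element (modern builds usually set it in the build configuration instead). A hypothetical fragment meeting the 2019 minimum:

```xml
<!-- Illustrative fragment; values reflect the 2019 requirement described above. -->
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.demo">
    <!-- Targeting at least API level 26 (Android 8.0) opts the app into
         the finer-grained runtime permission model, so developers cannot
         keep the older all-or-nothing permission screens. -->
    <uses-sdk android:minSdkVersion="21" android:targetSdkVersion="26" />
</manifest>
```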
Verified Boot
The Android Open Source Project implements a verified boot chain intended to verify that executed code, such as the kernel or bootloader, comes from an official source rather than a malicious actor. This implementation establishes a full chain of trust, starting at the hardware level. Subsequently, the boot loader is verified, and system partitions such as system and vendor are checked for integrity.
Furthermore, this process verifies that an older version of Android has not been installed over a newer one. This effectively provides rollback protection, which mitigates exploits similar to a downgrade attack.
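The chain-of-trust idea can be sketched in miniature: each boot stage records a digest of the stage it loads next and refuses to hand off control on a mismatch. The following Python sketch is a conceptual model only; real verified boot anchors the chain in hardware and uses cryptographic signatures, not a bare hash table.

```python
import hashlib

def digest(data: bytes) -> str:
    """SHA-256 digest of a boot image, as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical boot images; a real chain starts from a hardware root of trust.
bootloader = b"bootloader image v2"
kernel = b"kernel image v2"

# Each stage stores the expected digest of the stage it loads next.
trusted = {
    "bootloader": digest(bootloader),
    "kernel": digest(kernel),
}

def verify_chain(stages: list[tuple[str, bytes]]) -> bool:
    """Return True only if every stage matches its trusted digest."""
    return all(trusted[name] == digest(image) for name, image in stages)

# An untampered chain verifies; a modified kernel breaks it and boot is refused.
assert verify_chain([("bootloader", bootloader), ("kernel", kernel)])
assert not verify_chain([("bootloader", bootloader), ("kernel", b"evil kernel")])
```

In real implementations, rollback protection additionally tracks a version counter, so an older, once-valid image is also rejected.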
dm-verity
Android (all supported versions, as far back as version 4.4 of the Android Open Source Project) has the option to provide a verified boot chain with dm-verity. This is a feature in the Linux kernel that allows for transparent integrity checking of block devices.
This feature is designed to mitigate persistent rootkits.
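dm-verity's transparent integrity checking can likewise be illustrated conceptually: the read-only partition is divided into fixed-size blocks, each block's hash is precomputed, and every read is checked against the stored hash before data is returned. The sketch below is a toy Python model, not the kernel implementation; real dm-verity uses 4096-byte blocks and a hash tree whose root is verified at boot.

```python
import hashlib

BLOCK_SIZE = 16  # toy size; dm-verity typically uses 4096-byte blocks

def split_blocks(device: bytes) -> list[bytes]:
    return [device[i:i + BLOCK_SIZE] for i in range(0, len(device), BLOCK_SIZE)]

def build_hashes(device: bytes) -> list[bytes]:
    # In real dm-verity these hashes form a tree whose root hash is trusted.
    return [hashlib.sha256(b).digest() for b in split_blocks(device)]

def read_block(device: bytes, index: int, hashes: list[bytes]) -> bytes:
    """Return a block only if it still matches its recorded hash."""
    block = split_blocks(device)[index]
    if hashlib.sha256(block).digest() != hashes[index]:
        raise IOError(f"integrity failure in block {index}")
    return block

device = b"read-only system partition contents!"
hashes = build_hashes(device)
assert read_block(device, 0, hashes) == device[:BLOCK_SIZE]

# Simulate a rootkit overwriting block 1 on disk: the next read of it fails.
tampered = device[:BLOCK_SIZE] + b"X" * BLOCK_SIZE + device[2 * BLOCK_SIZE:]
try:
    read_block(tampered, 1, hashes)
except IOError:
    pass  # tampering detected
```

Because a rootkit must modify on-disk blocks to persist, any later read of a tampered block fails verification, which is how the feature mitigates persistent rootkits.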
Google Play Services and vendor changes
Dependence on proprietary Google Play Services and customizations added on top of the operating system by vendors who license Android from Google is causing privacy concerns.
Criticism and controversy
Privacy and GDPR compliance
France
In 2019, Google was fined €50 million by the French CNIL for failing to adequately inform its users about how their data was used.
Two years later, in 2021, researcher Douglas Leith showed, using data interception, that various data are sent from Android devices to Google's servers even when the phone is idle with no Google account registered on it. Several Google applications send data, such as Chrome, Messages, and Docs; however, YouTube is the only one to attach a unique identifier.
In 2022, Leith showed that an Android phone sent various communications-related data, including phone calls and text messages, to Google. Timestamps, sender and receiver, and several other data points are sent to Google Play Services infrastructure even if the "Usage and diagnostics" feature is disabled. These data are marked with a unique identifier of the Android device and do not comply with the GDPR.
Australia
Google was fined about A$60 million (approximately US$40 million) in Australia for having misled its Android customers. The case, brought by the Australian Competition & Consumer Commission, concerned misleading location-tracking settings during the 2017–2018 period. The trial concluded in 2021, when the court found that Google had breached consumer law with respect to about 1.3 million Google account owners.
United States of America
A case similar to the 2019 French one regarding location tracking was brought in the U.S. in a privacy lawsuit filed by a coalition of attorneys general from 40 U.S. states. Google agreed to a US$391 million settlement with the states. The New York Times published a long-term investigation into those privacy concerns at the time.
Short software support lifespans
Android devices, particularly low-end and mid-range models, have been criticized for their short software support lifespans. Starting in the 2010s, many users found that their devices received only one or two major updates and a limited number of security patches. This lack of long-term support stemmed from manufacturers’ unwillingness to invest in costly software upgrades, which were often tied to contractual agreements with chipset suppliers like Qualcomm. As a result, Android developed a reputation for rapid device obsolescence.
To address this concern, Google introduced Project Treble, a re-architecture of Android that separates the core OS framework from vendor-specific low-level software, designed to streamline the development and deployment of Android updates and reduce manufacturers' involvement in the update process.
However, for many devices, significant improvements were still limited by the chipset manufacturers. Fairphone, a company focused on sustainability, explained that its inability to extend software support was due to Qualcomm’s policies rather than its own. Apple executives also highlighted Android’s fragmented update ecosystem in their critiques of the platform, while quietly admitting that Qualcomm had also made it difficult for them to offer updates to the iPhone.
In response to this problem, several community-driven initiatives emerged to provide alternative operating systems for unsupported devices, such as LineageOS, Sailfish OS, Ubuntu Touch, and postmarketOS.
Starting in 2022, Samsung, the largest Android smartphone manufacturer, extended its software support from the previous two years, first to four years, then to five years in 2023 and six years in 2024. Shortly thereafter, Qualcomm followed suit, increasing its support timeline for manufacturers to seven years. These changes brought Samsung, and potentially other Qualcomm-powered devices, closer to competing platforms such as Apple, whose iPhones have received at least four years of support since the iPhone 4's release in 2010.
Licensing
The source code for Android is open-source: it is developed in private by Google, with the source code released publicly when a new version of Android is released. Google publishes most of the code (including network and telephony stacks) under the non-copyleft Apache License version 2.0, which allows modification and redistribution. The license does not grant rights to the "Android" trademark, so device manufacturers and wireless carriers have to license it from Google under individual contracts. Associated Linux kernel changes, developed by the Open Handset Alliance, are released under the copyleft GNU General Public License version 2, with the source code publicly available at all times. The only Android release which was not immediately made available as source code was the tablet-only 3.0 Honeycomb release. The reason, according to Andy Rubin in an official Android blog post, was because Honeycomb was rushed for production of the Motorola Xoom, and they did not want third parties creating a "really bad user experience" by attempting to put onto smartphones a version of Android intended for tablets.
Only the base Android operating system (including some applications) is open-source software, whereas most Android devices ship with a substantial amount of proprietary software, such as Google Mobile Services, which includes applications such as Google Play Store, Google Search, and Google Play Services, a software layer that provides APIs for the integration with Google-provided services, among others. These applications must be licensed from Google by device makers, and can only be shipped on devices which meet its compatibility guidelines and other requirements. Custom, certified distributions of Android produced by manufacturers (such as Samsung Experience) may also replace certain stock Android apps with their own proprietary variants and add additional software not included in the stock Android operating system. With the advent of the Google Pixel line of devices, Google itself has also made specific Android features timed or permanent exclusives to the Pixel series. There may also be "binary blob" drivers required for certain hardware components in the device. The best known fully open source Android services are the LineageOS distribution and MicroG, which acts as an open source replacement of Google Play Services.
Richard Stallman and the Free Software Foundation have been critical of Android and have recommended the usage of alternatives such as Replicant, because drivers and firmware vital for the proper functioning of Android devices are usually proprietary, and because the Google Play Store application can forcibly install or uninstall applications and, as a result, invite non-free software. In both cases, the use of closed-source software causes the system to become vulnerable to backdoors.
It has been argued that because developers are often required to purchase the Google-branded Android license, this has turned the theoretically open system into a freemium service.
Leverage over manufacturers
Google licenses their Google Mobile Services software, along with the Android trademarks, only to hardware manufacturers for devices that meet Google's compatibility standards specified in the Android Compatibility Program document. Thus, forks of Android that make major changes to the operating system itself do not include any of Google's non-free components, are incompatible with applications that require them, and must ship with an alternative software marketplace in lieu of Google Play Store. A prominent example of such an Android fork is Amazon's Fire OS, which is used on the Kindle Fire line of tablets, and oriented toward Amazon services. The shipment of Android devices without GMS is also common in mainland China, as Google does not do business there.
In 2014, Google also began to require that all Android devices which license the Google Mobile Services software display a prominent "Powered by Android" logo on their boot screens. Google has also enforced preferential bundling and placement of Google Mobile Services on devices, including mandated bundling of the entire main suite of Google applications, mandatory placement of shortcuts to Google Search and the Play Store app on or near the main home screen page in its default configuration, and granting a larger share of search revenue to OEMs who agree to not include third-party app stores on their devices. In March 2018, it was reported that Google had begun to block "uncertified" Android devices from using Google Mobile Services software, and display a warning indicating that "the device manufacturer has preloaded Google apps and services without certification from Google". Users of custom ROMs can register their device ID to their Google account to remove this block.
Some stock applications and components in AOSP code that were formerly used by earlier versions of Android, such as Search, Music, Calendar, and the location API, were abandoned by Google in favor of non-free replacements distributed through Play Store (Google Search, YouTube Music, and Google Calendar) and Google Play Services, which are no longer open-source. Moreover, open-source variants of some applications also exclude functions that are present in their non-free versions. These measures are likely intended to discourage forks and encourage commercial licensing in line with Google requirements, as the majority of the operating system's core functionality is dependent on proprietary components licensed exclusively by Google, and it would take significant development resources to develop an alternative suite of software and APIs to replicate or replace them. Apps that do not use Google components would also be at a functional disadvantage, as they can only use APIs contained within the OS itself. In turn, third-party apps may have dependencies on Google Play Services.
Members of the Open Handset Alliance, which include the majority of Android OEMs, are also contractually forbidden from producing Android devices based on forks of the OS; in 2012, Acer Inc. was forced by Google to halt production on a device powered by Alibaba Group's Aliyun OS with threats of removal from the OHA, as Google deemed the platform to be an incompatible version of Android. Alibaba Group disputed the allegations, arguing that the OS was a distinct platform from Android (primarily using HTML5 apps), but incorporated portions of Android's platform to allow backwards compatibility with third-party Android software. Indeed, the devices did ship with an application store which offered Android apps; however, the majority of them were pirated.
Reception
Android received a lukewarm reaction when it was unveiled in 2007. Although analysts were impressed with the respected technology companies that had partnered with Google to form the Open Handset Alliance, it was unclear whether mobile phone manufacturers would be willing to replace their existing operating systems with Android. The idea of an open-source, Linux-based development platform sparked interest, but there were additional worries about Android facing strong competition from established players in the smartphone market, such as Nokia and Microsoft, and rival Linux mobile operating systems that were in development. These established players were skeptical: Nokia was quoted as saying "we don't see this as a threat", and a member of Microsoft's Windows Mobile team stated "I don't understand the impact that they are going to have."
Since then Android has grown to become the most widely used smartphone operating system and "one of the fastest mobile experiences available". Reviewers have highlighted the open-source nature of the operating system as one of its defining strengths, allowing companies such as Nokia (Nokia X family), Amazon (Kindle Fire), Barnes & Noble (Nook), Ouya, Baidu and others to fork the software and release hardware running their own customised version of Android. As a result, it has been described by technology website Ars Technica as "practically the default operating system for launching new hardware" for companies without their own mobile platforms. This openness and flexibility is also present at the level of the end user: Android allows extensive customisation of devices by their owners and apps are freely available from non-Google app stores and third party websites. These have been cited as among the main advantages of Android phones over others.
Despite Android's popularity, including an activation rate three times that of iOS, there have been reports that Google has not been able to leverage their other products and web services successfully to turn Android into the money maker that analysts had expected. The Verge suggested that Google is losing control of Android due to the extensive customization and proliferation of non-Google apps and services. Amazon's Kindle Fire line uses Fire OS, a heavily modified fork of Android which does not include or support any of Google's proprietary components, and requires that users obtain software from its competing Amazon Appstore instead of Play Store. In 2014, in an effort to improve prominence of the Android brand, Google began to require that devices featuring its proprietary components display an Android logo on the boot screen.
Android has suffered from "fragmentation", a situation where the variety of Android devices, in terms of both hardware variations and differences in the software running on them, makes the task of developing applications that work consistently across the ecosystem harder than rival platforms such as iOS where hardware and software varies less. For example, according to data from OpenSignal in July 2013, there were 11,868 models of Android devices, numerous screen sizes and eight Android OS versions simultaneously in use, while the large majority of iOS users have upgraded to the latest iteration of that OS. Critics such as Apple Insider have asserted that fragmentation via hardware and software pushed Android's growth through large volumes of low end, budget-priced devices running older versions of Android. They maintain this forces Android developers to write for the "lowest common denominator" to reach as many users as possible, who have too little incentive to make use of the latest hardware or software features only available on a smaller percentage of devices. However, OpenSignal, who develops both Android and iOS apps, concluded that although fragmentation can make development trickier, Android's wider global reach also increases the potential reward.
Market share
Android is the most used operating system on phones in virtually all countries, with some countries, such as India, having over 96% market share. On tablets, usage is more even, as iOS is a bit more popular globally.
Research company Canalys estimated in the second quarter of 2009, that Android had a 2.8% share of worldwide smartphone shipments. By May 2010, Android had a 10% worldwide smartphone market share, overtaking Windows Mobile, whilst in the US Android held a 28% share, overtaking iPhone OS. By the fourth quarter of 2010, its worldwide share had grown to 33% of the market becoming the top-selling smartphone platform, overtaking Symbian. In the US it became the top-selling platform in April 2011, overtaking BlackBerry OS with a 31.2% smartphone share, according to comScore.
By the third quarter of 2011, Gartner estimated that more than half (52.5%) of the smartphone sales belonged to Android. By the third quarter of 2012 Android had a 75% share of the global smartphone market according to the research firm IDC.
In July 2011, Google said that 550,000 Android devices were being activated every day, up from 400,000 per day in May, and more than 100 million devices had been activated with 4.4% growth per week. In September 2012, 500 million devices had been activated with 1.3 million activations per day. In May 2013, at Google I/O, Sundar Pichai announced that 900 million Android devices had been activated.
Android market share varies by location. In July 2012, "mobile subscribers aged 13+" in the United States using Android were up to 52%, and rose to 90% in China. During the third quarter of 2012, Android's worldwide smartphone shipment market share was 75%, with 750 million devices activated in total. In April 2013, Android had 1.5 million activations per day. 48 billion application ("app") installations had been performed from the Google Play store, and by September 2013, one billion Android devices had been activated.
The Google Play store had over 3 million Android applications published, and apps had been downloaded more than 65 billion times. The operating system's success has made it a target for patent litigation as part of the so-called "smartphone wars" between technology companies.
Android devices account for more than half of smartphone sales in most markets, including the US, while "only in Japan was Apple on top" (September–November 2013 numbers). At the end of 2013, over 1.5 billion Android smartphones had been sold in the four years since 2010, making Android the most sold phone and tablet OS. Three billion Android smartphones were estimated to be sold by the end of 2014 (including previous years). According to Gartner research company, Android-based devices outsold all contenders every year since 2012. In 2013, it outsold Windows 2.8:1 or by 573 million. Android has the largest installed base of all operating systems; since 2013, devices running it have also sold more than Windows, iOS and Mac OS X devices combined.
According to StatCounter, which tracks only web-browsing use, Android has been the most popular mobile operating system since August 2013. Android is the most popular operating system for web browsing in India and several other countries (e.g. virtually all of Asia, with Japan and North Korea as exceptions). According to StatCounter, Android is most used on phones in all African countries, and it stated that "mobile usage has already overtaken desktop in several countries including India, South Africa and Saudi Arabia", with all countries in Africa having done so; mobile (including tablet) usage there is at 90.46%, with Android alone accounting for 75.81% of all use.
While Android phones in the Western world almost always include Google's proprietary code (such as Google Play) in the otherwise open-source operating system, Google's proprietary code and trademark is increasingly not used in emerging markets; "The growth of AOSP Android devices goes way beyond just China [..] ABI Research claims that 65 million devices shipped globally with open-source Android in the second quarter of [2014], up from 54 million in the first quarter"; depending on country, percent of phones estimated to be based only on AOSP source code, forgoing the Android trademark: Thailand (44%), Philippines (38%), Indonesia (31%), India (21%), Malaysia (24%), Mexico (18%), Brazil (9%).
According to a January 2015 Gartner report, "Android surpassed a billion shipments of devices in 2014, and will continue to grow at a double-digit pace in 2015, with a 26 percent increase year over year." This made it the first time that any general-purpose operating system has reached more than one billion end users within a year: by reaching close to 1.16 billion end users in 2014, Android shipped over four times more than iOS and OS X combined, and over three times more than Microsoft Windows. Gartner expected the whole mobile phone market to "reach two billion units in 2016", including Android. Describing the statistics, Farhad Manjoo wrote in The New York Times that "About one of every two computers sold today is running Android. [It] has become Earth's dominant computing platform."
According to a Statista estimate, Android smartphones had an installed base of 1.8 billion units in 2015, which was 76% of the estimated total number of smartphones worldwide. Android has the largest installed base of any mobile operating system and, since 2013, the highest-selling operating system overall, with sales in 2012, 2013 and 2014 close to the installed base of all PCs.
In the second quarter of 2014, Android's share of the global smartphone shipment market was 84.7%, a new record. This had grown to 87.5% worldwide market share by the third quarter of 2016, leaving main competitor iOS with 12.1% market share.
According to an April 2017 StatCounter report, Android overtook Microsoft Windows to become the most popular operating system for total Internet usage. It has maintained the plurality since then.
In September 2015, Google announced that Android had 1.4 billion monthly active users. This changed to 2 billion monthly active users in May 2017.
Adoption on tablets
Despite its success on smartphones, Android tablet adoption was initially slow, though it later caught up with the iPad in most countries. One of the main causes was a chicken-and-egg situation: consumers were hesitant to buy an Android tablet due to a lack of high-quality tablet applications, while developers were hesitant to spend time and resources developing tablet applications until there was a significant market for them. The content and app "ecosystem" proved more important than hardware specs as the selling point for tablets. Due to the lack of Android tablet-specific applications in 2011, early Android tablets had to make do with existing smartphone applications that were ill-suited to larger screen sizes, whereas the dominance of Apple's iPad was reinforced by the large number of tablet-specific iOS applications.
Despite app support in its infancy, a considerable number of Android tablets, like the Barnes & Noble Nook (alongside those using other operating systems, such as the HP TouchPad and BlackBerry PlayBook) were rushed out to market in an attempt to capitalize on the success of the iPad. InfoWorld has suggested that some Android manufacturers initially treated their first tablets as a "Frankenphone business", a short-term low-investment opportunity by placing a smartphone-optimized Android OS (before Android 3.0 Honeycomb for tablets was available) on a device while neglecting the user interface. This approach, such as with the Dell Streak, failed to gain market traction with consumers and damaged the early reputation of Android tablets. Furthermore, several Android tablets such as the Motorola Xoom were priced the same or higher than the iPad, which hurt sales. An exception was the Amazon Kindle Fire, which relied upon lower pricing as well as access to Amazon's ecosystem of applications and content.
This began to change in 2012, with the release of the affordable Nexus 7 and a push by Google for developers to write better tablet applications. According to International Data Corporation, shipments of Android-powered tablets surpassed iPads in Q3 2012.
As of the end of 2013, over 191.6 million Android tablets had sold in three years since 2011. This made Android tablets the most-sold type of tablet in 2013, surpassing iPads in the second quarter of 2013.
According to StatCounter's web use statistics, Android tablets represent the majority of tablet devices used in Africa (70%) and South America (65%), but less than half elsewhere, e.g. Europe (44%), Asia (44%), North America (34%) and Oceania/Australia (18%). There are countries on all continents where Android tablets are the majority, for example, Mexico.
Platform information
Android has 71% market share vs Apple's iOS/iPadOS at 28% (on tablets alone Apple is slightly ahead, i.e. 44% vs 56%, though Android is ahead in virtually all countries). The latest Android 14 is the most popular Android version on smartphones and on tablets.
Android 14 is the most popular single Android version on smartphones, at 26%, followed by Android 13 and 12, down to Pie 9.0, in that order. Android is more used than iOS in virtually all countries, with few exceptions, such as the US, where iOS has a 56% share. The latest Android 14 is the most used single version in several countries, e.g. the US, Canada and Australia, with over a third of the share in those countries, and it is also the single most used version in India and most European countries. Usage of Android 12 and newer, i.e. supported versions, is at 64%; the rest of users are not supported with security updates. Including the recently unsupported Android 11, usage is at 78.55%.
On tablets, Android 14 is again the most popular single version, at 17%. Usage of Android 12 and newer, i.e. supported versions, is at 46% on Android tablets, and including Android 11, until recently supported, at 56%. The usage share varies considerably by country.
Since April 2024, 85% of devices have Vulkan graphics support (77.6% support Vulkan 1.1 or higher, thereof 6.6% supporting Vulkan 1.3), the successor to OpenGL. At the same time 100.0% of the devices have support for or higher, 96% are on or higher, and 88.6% are using the latest version .
Application piracy
Paid Android applications in the past were simple to pirate. In a May 2012 interview with Eurogamer, the developers of Football Manager stated that the ratio of pirated players vs legitimate players was 9:1 for their game Football Manager Handheld. However, not every developer agreed that piracy rates were an issue; for example, in July 2012 the developers of the game Wind-up Knight said that piracy levels of their game were only 12%, and most of the piracy came from China, where people cannot purchase apps from Google Play.
In 2010, Google released a tool for validating authorized purchases for use within apps, but developers complained that this was insufficient and trivial to crack. Google responded that the tool, especially its initial release, was intended as a sample framework for developers to modify and build upon depending on their needs, not as a finished piracy solution. Android "Jelly Bean" introduced the ability for paid applications to be encrypted, so that they may work only on the device for which they were purchased.
Legal issues
The success of Android has made it a target for patent and copyright litigation between technology companies, with both Android and Android phone manufacturers having been involved in numerous patent lawsuits and other legal challenges.
Patent lawsuit with Oracle
On August 12, 2010, Oracle sued Google over claimed infringement of copyrights and patents related to the Java programming language. Oracle originally sought damages up to $6.1 billion, but this valuation was rejected by a United States federal judge who asked Oracle to revise the estimate. In response, Google submitted multiple lines of defense, counterclaiming that Android did not infringe on Oracle's patents or copyright, that Oracle's patents were invalid, and several other defenses. They said that Android's Java runtime environment is based on Apache Harmony, a clean room implementation of the Java class libraries, and an independently developed virtual machine called Dalvik. In May 2012, the jury in this case found that Google did not infringe on Oracle's patents, and the trial judge ruled that the structure of the Java APIs used by Google was not copyrightable. The parties agreed to zero dollars in statutory damages for a small amount of copied code. On May 9, 2014, the Federal Circuit partially reversed the district court ruling, ruling in Oracle's favor on the copyrightability issue, and remanding the issue of fair use to the district court.
In December 2015, Google announced that the next major release of Android (Android Nougat) would switch to OpenJDK, which is the official open-source implementation of the Java platform, instead of using the now-discontinued Apache Harmony project as its runtime. Code reflecting this change was also posted to the AOSP source repository. In its announcement, Google claimed this was part of an effort to create a "common code base" between Java on Android and other platforms. Google later admitted in a court filing that this was part of an effort to address the disputes with Oracle, as its use of OpenJDK code is governed under the GNU General Public License (GPL) with a linking exception, and that "any damages claim associated with the new versions expressly licensed by Oracle under OpenJDK would require a separate analysis of damages from earlier releases". In June 2016, a United States federal court ruled in favor of Google, stating that its use of the APIs was fair use.
In April 2021, the Supreme Court of the United States ruled that Google's use of the Java APIs was within the bounds of fair use, reversing the Federal Circuit Appeals Court ruling and remanding the case for further hearing. The majority opinion began with the assumption that the APIs may be copyrightable, and thus proceeded with a review of the factors that contributed to fair use.
Anti-competitive challenges in Europe
In 2013, FairSearch, a lobbying organization supported by Microsoft, Oracle and others, filed a complaint regarding Android with the European Commission, alleging that its free-of-charge distribution model constituted anti-competitive predatory pricing. The Free Software Foundation Europe, whose donors include Google, disputed the FairSearch allegations. On April 20, 2016, the EU filed a formal antitrust complaint against Google based upon the FairSearch allegations, arguing that its leverage over Android vendors constituted anti-competitive practice: the mandatory bundling of the entire suite of proprietary Google software, restrictions hindering competing search providers from being integrated into Android, and a bar on vendors producing devices running forks of Android. In August 2016, Google was fined US$6.75 million by the Russian Federal Antimonopoly Service (FAS) under similar allegations by Yandex. The European Commission issued its decision on July 18, 2018, determining that Google had conducted three operations related to Android that were in violation of antitrust regulations: bundling Google's search and Chrome as part of Android, blocking phone manufacturers from using forked versions of Android, and establishing deals with phone manufacturers and network providers to exclusively bundle the Google search application on handsets (a practice Google ended by 2014). The EU fined Google €4.34 billion (about US$5 billion) and required the company to end this conduct within 90 days. Google filed its appeal of the ruling in October 2018, though it would not ask for any interim measures to delay the onset of the conduct requirements.
On October 16, 2018, Google announced that it would change its distribution model for Google Mobile Services in the EU, since part of its revenue stream for Android, which came through use of Google Search and Chrome, was now prohibited by the EU's ruling. While the core Android system remains free, OEMs in Europe would be required to purchase a paid license to the core suite of Google applications, such as Gmail, Google Maps and the Google Play Store. Google Search will be licensed separately, with an option to include Google Chrome at no additional cost atop Search. European OEMs can bundle third-party alternatives on phones and devices sold to customers, if they so choose. OEMs will no longer be barred from selling any device running incompatible versions of Android in Europe.
Others
In addition to lawsuits against Google directly, various proxy wars have been waged against Android indirectly by targeting manufacturers of Android devices, with the effect of discouraging manufacturers from adopting the platform by increasing the costs of bringing an Android device to market. Both Apple and Microsoft have sued several manufacturers for patent infringement, with Apple's legal action against Samsung being a particularly high-profile case. In January 2012, Microsoft said they had signed patent license agreements with eleven Android device manufacturers, whose products account for "70 percent of all Android smartphones" sold in the US and 55% of the worldwide revenue for Android devices. These include Samsung and HTC. Samsung's patent settlement with Microsoft included an agreement to allocate more resources to developing and marketing phones running Microsoft's Windows Phone operating system. Microsoft has also tied its own Android software to patent licenses, requiring the bundling of Microsoft Office Mobile and Skype applications on Android devices to subsidize the licensing fees, while at the same time helping to promote its software lines.
Google has publicly expressed its frustration with the current patent landscape in the United States, accusing Apple, Oracle and Microsoft of trying to take down Android through patent litigation, rather than innovating and competing with better products and services. In August 2011, Google purchased Motorola Mobility for US$12.5 billion, which was viewed in part as a defensive measure to protect Android, since Motorola Mobility held more than 17,000 patents. In December 2011, Google bought over a thousand patents from IBM.
Investigations by Turkey's competition authority into the default search engine in Android, started in 2017, led to a US$17.4 million fine in September 2018 and a fine of 0.05 percent of Google's revenue per day in November 2019 when Google did not meet the requirements. In December 2019, Google stopped issuing licenses for new Android phone models sold in Turkey.
Other uses
Google has developed several variations of Android for specific use cases, including Android Wear, later renamed Wear OS, for wearable devices such as wrist watches, Android TV for televisions, Android Things for smart or Internet of things devices and Android Automotive for cars. Additionally, by providing infrastructure that combines dedicated hardware and dedicated applications running on regular Android, Google has opened up the platform for use in particular usage scenarios, such as the Android Auto app for cars, and Daydream, a virtual reality platform.
The open and customizable nature of Android allows device makers to use it on other electronics as well, including laptops, netbooks, and desktop computers, cameras, headphones, home automation systems, game consoles, media players, satellites, routers, printers, payment terminals, automated teller machines, inflight entertainment systems, and robots. Additionally, Android has been installed and run on a variety of less-technical objects, including calculators, single-board computers, feature phones, electronic dictionaries, alarm clocks, refrigerators, landline telephones, coffee machines, bicycles, and mirrors.
Ouya, a video game console running Android, became one of the most successful Kickstarter campaigns, crowdfunding US$8.5m for its development, and was later followed by other Android-based consoles, such as Nvidia's Shield Portable, an Android device in a video game controller form factor.
In 2011, Google demonstrated "Android@Home", a home automation technology which uses Android to control a range of household devices including light switches, power sockets and thermostats. Prototype light bulbs were announced that could be controlled from an Android phone or tablet, but Android head Andy Rubin was cautious to note that "turning a lightbulb on and off is nothing new", pointing to numerous failed home automation services. Google, he said, was thinking more ambitiously and the intention was to use their position as a cloud services provider to bring Google products into customers' homes.
Parrot unveiled an Android-based car stereo system known as Asteroid in 2011, followed by a successor, the touchscreen-based Asteroid Smart, in 2012. In 2013, Clarion released its own Android-based car stereo, the AX1. In January 2014, at the Consumer Electronics Show (CES), Google announced the formation of the Open Automotive Alliance, a group including several major automobile makers (Audi, General Motors, Hyundai, and Honda) and Nvidia, which aims to produce Android-based in-car entertainment systems for automobiles, "[bringing] the best of Android into the automobile in a safe and seamless way."
Android comes preinstalled on a few laptops (similar functionality for running Android applications is also available in Google's ChromeOS) and can also be installed on personal computers by end users. On those platforms Android provides additional functionality for physical keyboards and mice, together with the "Alt-Tab" key combination for switching applications quickly with a keyboard. In December 2014, one reviewer commented that Android's notification system is "vastly more complete and robust than in most environments" and that Android is "absolutely usable" as one's primary desktop operating system.
In October 2015, The Wall Street Journal reported that Android would serve as Google's future main laptop operating system, with a plan to fold ChromeOS into it by 2017. Google's Sundar Pichai, who led the development of Android, explained that "mobile as a computing paradigm is eventually going to blend with what we think of as desktop today." Back in 2009, Google co-founder Sergey Brin himself had said that ChromeOS and Android would "likely converge over time." Hiroshi Lockheimer, who replaced Pichai as head of Android and ChromeOS, responded to this claim with an official Google blog post stating that "While we've been working on ways to bring together the best of both operating systems, there's no plan to phase out ChromeOS [which has] guaranteed auto-updates for five years". That is unlike Android, where support is shorter, with "EOL dates [being..] at least 3 years [into the future] for Android tablets for education".
At Google I/O in May 2016, Google announced Daydream, a virtual reality platform that relied on a smartphone and provided VR capabilities through a virtual reality headset and controller designed by Google itself. However, this did not catch on and was discontinued in 2019.
Mascot
The mascot of Android is a green android robot, a reference to the software's name. Although it had no official name for a long time, the Android team at Google reportedly calls it "Bugdroid".
In 2024, a Google blog post revealed its official name, "The Bot".
It was designed by then-Google graphic designer Irina Blok on November 5, 2007, when Android was announced. Contrary to reports that she was tasked with a project to create an icon, Blok confirmed in an interview that she independently developed it and made it open source. The robot design was initially not presented to Google, but it quickly became commonplace in the Android development team, with numerous variations created by developers who liked the figure, as it was free under a Creative Commons license. Its popularity amongst the development team eventually led to Google adopting it as an official icon as part of the Android logo when it launched to consumers in 2008.
Magnoliids

Magnoliids, Magnoliidae or Magnolianae are a clade of flowering plants. With more than 10,000 species, including magnolias, nutmeg, bay laurel, cinnamon, avocado, black pepper, tulip tree and many others, it is the third-largest group of angiosperms after the eudicots and monocots. The group is characterized by trimerous flowers, pollen with one pore, and usually branching-veined leaves.
Some members of the subclass are among the earliest angiosperms and share anatomical similarities with gymnosperms, such as stamens that resemble the male cone scales of conifers and carpels borne on a long flowering axis. According to molecular clock calculations, the lineage that led to magnoliids split from other plants about 135 million years ago, or possibly as early as 160–165 million years ago.
Classification
"Magnoliidae" is the botanical name of a subclass, and "magnoliids" is an informal name that does not conform to the International Code of Nomenclature for algae, fungi, and plants. The circumscription of a subclass will vary with the taxonomic system being used. The only requirement is that it must include the family Magnoliaceae. The informal name "magnoliids" is used by some researchers to avoid the confusion that recently surrounds the name "Magnoliidae." More recently, the group has been redefined under the PhyloCode as a node-based clade comprising the Canellales, Laurales, Magnoliales, and Piperales. Chase & Reveal have proposed "Magnoliidae" as the name used for the entire group of flowering plants, and the formal name "Magnolianae" for the group of four orders discussed here.
APG system
The APG III (2009) and its predecessor systems did not originally use formal botanical names above the rank of order. Under those systems, larger clades were usually referred to by informal names, such as "magnoliids" (plural, not capitalized) or "magnoliid complex". The formal name in Linnean nomenclature was specified in a separate APG publication as the existing name "Magnolianae" Takht. (1967). The APG III recognizes a clade within the angiosperms for the magnoliids, whose circumscription comprises the orders Canellales, Laurales, Magnoliales, and Piperales.
The clade includes most of the basal groups of the angiosperms. This clade was formally named Magnoliidae in 2007 under provisions of the PhyloCode.
Cronquist system
The Cronquist system (1981) used the name Magnoliidae for one of six subclasses (within class Magnoliopsida = dicotyledons). In the original version of this system the circumscription was:
Subclass Magnoliidae :
Order Aristolochiales
Order Illiciales
Order Laurales
Order Magnoliales
Order Nymphaeales
Order Papaverales
Order Piperales
Order Ranunculales
Dahlgren and Thorne systems
Both Dahlgren and Thorne classified the magnoliids (sensu APG) in superorder Magnolianae, rather than as a subclass. In their systems, the name Magnoliidae is used for a much larger group including all dicotyledons. This is also the case in some of the systems derived from the Cronquist system.
Dahlgren divided his Magnolianae into ten orders, more than other systems of the time, and unlike Cronquist and Thorne, he did not include the Piperales. Thorne grouped most of his Magnolianae into two large orders, Magnoliales and Berberidales, although his Magnoliales was divided into suborders along lines similar to the ordinal groupings used by both Cronquist and Dahlgren. Thorne revised his system in 2000, restricting the name Magnoliidae to include only the Magnolianae, Nymphaeanae, and Rafflesianae, and removing the Berberidales and other previously included groups to his subclass Ranunculidae. This revised system diverges from the Cronquist system, but agrees more closely with the circumscription later published under APG II.
Comparison table
Comparison of classification systems is often difficult. Two authors may apply the same name to groups with different composition of members; for example, Dahlgren's Magnoliidae includes all dicots, whereas Cronquist's Magnoliidae is only one of six dicot subclasses. Two authors may also describe the same group with nearly identical composition, but each may then apply a different name to that group or place the group at a different taxonomic rank. For example, the composition of Cronquist's subclass Magnoliidae is nearly the same as Thorne's (1992) superorder Magnolianae, despite the difference in taxonomic rank.
Because of these difficulties and others, the synoptic table below offers only an imprecise comparison of the definitions of "magnoliid" groups in the systems of four authors. For each system, only orders are named in the table. All orders included by a particular author are listed and linked in that column. When a taxon is not included by that author, but was included by an author in another column, that item appears in unlinked italics and indicates remote placement. The sequence of each system has been altered from its publication in order to pair corresponding taxa between columns.
Economic uses
The magnoliids are a large group of plants, with many species that are economically important as food, drugs, perfumes, timber, and ornamentals, among many other uses.
One widely cultivated magnoliid fruit is the avocado (Persea americana), which is believed to have been cultivated in Mexico and Central America for nearly 10,000 years. Now grown throughout the tropics, it probably originates from the Chiapas region of Mexico or Guatemala, where "wild" avocados may still be found. The soft pulp of the fruit is eaten fresh or mashed into guacamole. The ancient peoples of Central America were also the first to cultivate several fruit-bearing species of Annona. These include the custard-apple (A. reticulata), soursop (A. muricata), sweetsop or sugar-apple (A. squamosa), and the cherimoya (A. cherimola). Both soursop and sweetsop are now widely grown for their fruits in the Old World as well.
Some members of the magnoliids have served as important food additives, such as black pepper, nutmeg, bay laurel and cinnamon. Oil of sassafras was formerly used as a key flavoring in both root beer and sarsaparilla. The primary ingredient responsible for the oil's flavor is safrole, but the oil is no longer used in either the United States or Canada; both nations banned safrole as a food additive in 1960 as a result of studies demonstrating that it promoted liver damage and tumors in mice. Consumption of more than a minute quantity of the oil causes nausea, vomiting, hallucinations, and shallow, rapid breathing. It is very toxic, and can severely damage the kidneys. In addition to its former use as a food additive, safrole from either Sassafras or Ocotea cymbarum is also the primary precursor for synthesis of MDMA (methylenedioxymethamphetamine), commonly known as the drug ecstasy.
Other magnoliids are known for their narcotic, hallucinogenic, or paralytic properties. The Polynesian beverage kava is prepared from the pulverized roots of Piper methysticum, and has both sedative and narcotic properties. It is used throughout the Pacific in social gatherings or after work to relax. Likewise, some native peoples of the Amazon take a hallucinogenic snuff made from the dried and powdered fluid exuded from the bark of Virola trees. Another hallucinogenic compound, myristicin, comes from the spice nutmeg. As with safrole, ingestion of nutmeg in sufficient quantity can lead to hallucinations, nausea, and vomiting, with symptoms lasting several days. A more severe reaction comes from poisoning by rodiasine and demethylrodiasine, the active ingredients in fruit extract from Chlorocardium venenosum. These chemicals paralyze muscles and nerves, resulting in tetanus-like reactions in animals. The Cofán peoples of the westernmost Amazon in Colombia and Ecuador use the compound as a poison to tip their hunting arrows.
Not all the effects of chemical compounds in the magnoliids are detrimental. In previous centuries, sailors would use Winter's Bark from the South American tree Drimys winteri to ward off the vitamin deficiency of scurvy. Today, benzoyl is extracted from Lindera benzoin (common spicebush) for use as a food additive and skin medicine, owing to its anti-bacterial and anti-fungal properties. Drugs extracted from the bark of Magnolia have long been used in traditional Chinese medicine. Scientific investigation of magnolol and honokiol has shown promise for their use in dental health: both compounds demonstrate effective anti-bacterial activity against the bacteria responsible for bad breath and dental caries. Several members of the family Annonaceae are also under investigation for uses of a group of chemicals called acetogenins. The first acetogenin discovered was uvaricin, which has anti-leukemic properties when used in living organisms. Other acetogenins have been discovered with anti-malarial and anti-tumor properties, and some even inhibit HIV replication in laboratory studies.
Many magnoliid species produce essential oils in their leaves, bark, or wood. The tree Virola surinamensis (Brazilian "nutmeg") contains trimyristin, which is extracted in the form of a fat and used in soaps and candles, as well as in shortenings. Other fragrant volatile oils are extracted from Aniba rosaeodora (bois-de-rose oil), Cinnamomum porrectum, Cinnamomum cassia, and Litsea odorifera for scenting soaps. Perfumes also are made from some of these oils; ylang-ylang comes from the flowers of Cananga odorata, and is used by Arab and Swahili women. A compound called nutmeg butter is produced from the same tree as the spice of that name, but the sweet-smelling "butter" is used in perfumery or as a lubricant rather than as a food.
Rover (space exploration)

A rover (or sometimes planetary rover) is a planetary surface exploration device designed to move over the rough surface of a planet or other planetary-mass celestial body. Some rovers have been designed as land vehicles to transport members of a human spaceflight crew; others have been partially or fully autonomous robots. Rovers are typically created to land on another planet via a lander-style spacecraft and are tasked with collecting information about the terrain and taking crust samples such as dust, soil, rocks, and even liquids. They are essential tools in space exploration.
Features
Rovers arrive on spacecraft and operate in conditions very different from those on Earth, which places particular demands on their design.
Reliability
Rovers have to withstand high levels of acceleration, high and low temperatures, pressure, dust, corrosion, and cosmic rays, while remaining functional without repair for the required period of time.
Autonomy
Rovers which land on celestial bodies far from the Earth, such as the Mars Exploration Rovers, cannot be remotely controlled in real time, since the radio propagation delay over such distances rules out real-time or near-real-time communication. For example, sending a signal from Mars to Earth takes between 3 and 21 minutes. These rovers are thus capable of operating autonomously with little assistance from ground control as far as navigation and data acquisition are concerned, although they still require human input for identifying promising targets in the distance to which to drive, and for determining how to position themselves to maximize solar energy. Giving a rover some rudimentary visual identification capabilities to make simple distinctions can allow engineers to speed up reconnaissance. During the NASA Sample Return Robot Centennial Challenge, a rover named Cataglyphis successfully demonstrated autonomous navigation, decision-making, and sample detection, retrieval, and return capabilities.
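The 3-to-21-minute figure follows directly from the varying Earth–Mars distance divided by the speed of light. A quick sketch, using approximate illustrative distances rather than mission data:

```python
# One-way light-time delay between Earth and Mars.
# Distances are approximate illustrative values, not mission data.
SPEED_OF_LIGHT_KM_S = 299_792.458

def one_way_delay_minutes(distance_km: float) -> float:
    """Return the one-way radio signal travel time in minutes."""
    return distance_km / SPEED_OF_LIGHT_KM_S / 60

closest_km = 54.6e6   # Mars near closest approach (~54.6 million km)
far_km = 378e6        # Mars near a far approach (~378 million km)

print(f"closest: {one_way_delay_minutes(closest_km):.1f} min")  # ~3.0 min
print(f"far:     {one_way_delay_minutes(far_km):.1f} min")      # ~21.0 min
```

A round trip doubles these figures, which is why a drive command cannot be corrected on the fly and rovers must handle hazards autonomously.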
Non-wheeled approaches
Other rover designs that do not use wheeled approaches are possible. Mechanisms that utilize "walking" on robotic legs, hopping, rolling, etc. are possible. For example, Stanford University researchers have proposed "Hedgehog", a small cube-shaped rover that can controllably hop—or even spin out of a sandy sinkhole by corkscrewing upward to escape—for surface exploration of low gravity celestial bodies.
Past missions
Moon
Lunokhod 0 (No.201)
The Soviet rover was intended to be the first roving remote-controlled robot on the Moon, but was lost when its launcher failed on 19 February 1969.
Lunokhod 1
The Lunokhod 1 rover landed on the Moon in November 1970. It was the first roving remote-controlled robot to land on any celestial body. The Soviet Union launched Lunokhod 1 aboard the Luna 17 spacecraft on November 10, 1970, and it entered lunar orbit on November 15. The spacecraft soft-landed in the Sea of Rains region on November 17. The lander had dual ramps from which Lunokhod 1 could descend to the lunar surface, which it did at 06:28 UT. From November 17, 1970, to November 22, 1970, the rover drove 197 m, and during 10 communication sessions returned 14 close-up pictures of the Moon and 12 panoramic views. It also analyzed the lunar soil. The last successful communications session with Lunokhod 1 was on September 14, 1971, having operated for 11 months.
Apollo Lunar Roving Vehicle
NASA included Lunar Roving Vehicles in three Apollo missions: Apollo 15 (which landed on the Moon July 30, 1971), Apollo 16 (which landed April 21, 1972), and Apollo 17 (which landed December 11, 1972).
Lunokhod 2
The Lunokhod 2 was the second of two uncrewed lunar rovers landed on the Moon by the Soviet Union as part of the Lunokhod program. The rover became operational on the Moon on January 16, 1973.
It was the second roving remote-controlled robot to land on any celestial body. The Soviet Union launched Lunokhod 2 aboard the Luna 21 spacecraft on January 8, 1973, and the spacecraft soft-landed on the eastern edge of the Mare Serenitatis region on January 15, 1973. Lunokhod 2 descended from the lander's dual ramps to the lunar surface at 01:14 UT on January 16, 1973. Lunokhod 2 operated for about four months, covered roughly 39 km of terrain, including hilly upland areas and rilles, and sent back 86 panoramic images and over 80,000 TV pictures. Based on wheel rotations, Lunokhod 2 was thought to have covered about 37 km, but Russian scientists at the Moscow State University of Geodesy and Cartography (MIIGAiK) revised that to an estimated distance of about 42 km based on Lunar Reconnaissance Orbiter (LRO) images of the lunar surface. Subsequent discussions with their American counterparts ended with an agreed-upon final distance of 39.0 km, which has stuck since.
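The wheel-rotation estimate mentioned above is simple odometry: distance is the rotation count times the wheel circumference, and slip or skid in loose regolith makes that count diverge from the true path length, which is why imagery-based measurements can differ. A minimal sketch with made-up numbers (not Lunokhod telemetry):

```python
import math

def odometry_distance_km(rotations: float, wheel_diameter_m: float) -> float:
    """Distance implied by wheel rotations, assuming no slip or skid."""
    return rotations * math.pi * wheel_diameter_m / 1000

# Hypothetical illustrative values only.
wheel_diameter_m = 0.51
rotations = 23_000

print(f"odometry estimate: {odometry_distance_km(rotations, wheel_diameter_m):.1f} km")
# ~36.9 km for these made-up inputs
```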
Lunokhod 3
The Soviet rover was intended to be the third roving remote-controlled robot on the Moon in 1977. The mission was canceled due to lack of launcher availability and funding, although the rover was built.
Yutu
Chang'e 3 is a Chinese Moon mission that includes the robotic rover Yutu, named after the pet rabbit of Chang'e, the goddess of the Moon in Chinese mythology. Launched in 2013 with the Chang'e 3 mission, it is China's first lunar rover, part of the first soft landing on the Moon since 1976 and the first rover to operate there since the Soviet Lunokhod 2 ceased operations on 11 May 1973. It was deployed on the Moon on December 14, 2013. The rover encountered operational difficulties toward the end of the second lunar day, after successfully surviving and recovering from the first 14-day lunar night, and was unable to move after the end of the second lunar night, though it continued to gather useful information for some months afterward. In October 2015, Yutu set the record for the longest operational period for a rover on the Moon. On 31 July 2016, Yutu ceased to operate after a total of 31 months, well beyond its original expected lifespan of three months.
Pragyan (Chandrayaan-2 rover)
Chandrayaan-2 was the second lunar mission by India, consisting of a lunar orbiter, a lander named Vikram, and a rover named Pragyan. The rover, weighing 27 kg, had six wheels and was to be operated on solar power. Launched on 22 July 2019, the mission entered lunar orbit on 20 August. Pragyan never got the chance to deploy: it was destroyed along with its lander when Vikram crash-landed on the Moon on 6 September 2019.
Rashid
Rashid was a lunar rover built by MBRSC to be launched onboard Ispace's lander called Hakuto-R. The rover was launched in November 2022, but was destroyed when the lander crash-landed in April 2023. It was equipped with two high-resolution cameras, a microscopic camera to capture small details, and a thermal imaging camera. The rover carried a Langmuir probe, designed to study the Moon's plasma and attempt to explain why Moon dust is so sticky. The rover was intended to study the lunar surface, mobility on the Moon's surface, and how different surfaces interact with lunar particles.
SORA-Q (Hakuto-R Mission 1 rover)
Takara Tomy, JAXA and Doshisha University made a rover to be launched onboard Ispace's lander called Hakuto-R. It was launched in 2022, but was destroyed when the lander crash-landed in April 2023.
Pragyan (Chandrayaan-3 rover)
Chandrayaan-3 is a mission by India's space agency (ISRO), consisting of a lunar lander and the Pragyan rover. It was a re-attempt to demonstrate a soft landing, following the failure of Chandrayaan-2's Vikram lander. It was launched on 14 July 2023 on the LVM-3 launch vehicle and soft-landed near the south pole of the Moon on 23 August at 6:04 pm IST. The 26 kg six-wheeled rover Pragyan descended from the lander's belly onto the Moon's surface, using one of the lander's side panels as a ramp, and carried out in-situ chemical analysis of the lunar surface as it moved. The rover was deployed on 23 August and was put into sleep mode after completing all its objectives on 3 September; it did not survive the subsequent lunar night.
Peregrine Mission One
Peregrine launched towards the Moon on 8 January 2024, carrying five Colmena rovers and an Iris rover. After separation from the launch vehicle, a fault occurred that prevented the lander from completing its mission. The spacecraft instead returned to Earth's atmosphere, where it disintegrated on 18 January.
SLIM rovers
The SLIM lander carried two rovers: Lunar Excursion Vehicle 1 (LEV-1), a hopper, and Lunar Excursion Vehicle 2 (LEV-2), a tiny rover developed by JAXA in joint cooperation with Tomy, Sony Group, and Doshisha University. The first rover has direct-to-Earth communication. The second rover is designed to change its shape to traverse the landing site over a short lifespan of two hours. SLIM was launched on September 6, 2023, and reached lunar orbit on 25 December 2023. The two rovers were successfully deployed shortly before SLIM's own landing on 19 January 2024. LEV-1 conducted six hops on the lunar surface, and LEV-2 imaged the SLIM lander.
Jinchan
The Chang'e 6 sample return mission also carried a small Chinese rover, Jinchan, which conducted infrared spectroscopy of the lunar surface and imaged the Chang'e 6 lander.
Mars
PrOP-M
The Soviet Mars 2 and Mars 3 landers each had a small 4.5 kg PrOP-M rover on board, which would have moved across the surface on skis while connected to the lander with a 15-meter umbilical. Two small metal rods were used for autonomous obstacle avoidance, as radio signals from Earth would have taken too long to drive the rovers using remote control. The rover was planned to be placed on the surface after landing by a manipulator arm and to move in the field of view of the television cameras and stop to make measurements every 1.5 meters. The rover tracks in the Martian soil would also have been recorded to determine material properties. Because of the crash landing of Mars 2 and the communication failure (15 seconds post landing) of Mars 3, neither rover was deployed.
Marsokhod
The Marsokhod was a Soviet rover (a hybrid, with both telecommand and automatic control) aimed at Mars, part of the Mars 4NM project and scheduled to commence after 1973 (according to plans from 1970). It was to be launched by an N1 rocket, which never flew successfully.
Sojourner
The Mars Pathfinder mission included Sojourner, the first rover to successfully deploy on another planet. NASA launched Mars Pathfinder on 4 December 1996; it landed on Mars in a region called Chryse Planitia on 4 July 1997. From its landing until the final data transmission on 27 September 1997, Mars Pathfinder returned 16,500 images from the lander and 550 images from Sojourner, as well as data from more than 15 chemical analyses of rocks and soil and extensive data on winds and other weather factors.
Beagle 2
Beagle 2 was designed to explore Mars with a small "mole" (Planetary Undersurface Tool, or PLUTO), to be deployed by the arm. PLUTO had a compressed spring mechanism designed to enable it to move across the surface at a rate of 20 mm per second and to burrow into the ground, collecting a subsurface sample in a cavity in its tip. Beagle 2 failed while attempting to land on Mars in 2003.
Mars Exploration Rover Spirit
Spirit is a robotic rover on Mars, active from 2004 to 2010. It was one of two rovers of NASA's ongoing Mars Exploration Rover mission. It landed successfully on Mars at 04:35 Ground UTC on January 4, 2004, three weeks before its twin, Opportunity (MER-B), landed on the other side of the planet. Its name was chosen through a NASA-sponsored student essay competition. The rover became stuck in late 2009, and its last communication with Earth was sent on March 22, 2010.
Mars Exploration Rover Opportunity
Opportunity is a robotic rover on the planet Mars, active from 2004 to early 2019. Launched from Earth on July 7, 2003, it landed on the Martian Meridiani Planum on January 25, 2004, at 05:05 Ground UTC (about 13:15 local time), three weeks after its twin Spirit (MER-A) touched down on the other side of the planet. On July 28, 2014, NASA announced that Opportunity had set a new "off-world" record for the greatest distance driven by a rover, surpassing the previous record held by the Soviet Union's Lunokhod 2 rover.
Zhurong
Zhurong was a Chinese Mars rover operated by CNSA. It was launched from Wenchang by a Long March 5 carrier rocket on 23 July 2020 at 23:18 UTC and deployed successfully on Mars on 22 May 2021 at 02:40 UTC. Designed for a mission of 90 sols (93 Earth days), it operated for 347 sols (356.5 Earth days) and travelled 1.921 km (1.194 mi). The rover was deactivated on 20 May 2022 because of an approaching sandstorm and the Martian winter, with the expectation that it would reactivate itself once conditions became favorable. Zhurong was expected to reactivate in December 2022, but excessive dust accumulation on its solar panels prevented it from waking. On 25 April 2023, chief designer Zhang Rongqiao indicated that the rover could be inactive "forever".
Active rover missions
Mars
Mars Science Laboratory Rover Curiosity
On 26 November 2011, NASA's Mars Science Laboratory mission was successfully launched for Mars. The mission successfully landed the robotic Curiosity rover on the surface of Mars in August 2012. The rover is currently helping to determine whether Mars could ever have supported life, and searching for evidence of past or present life on Mars.
Mars 2020 Perseverance rover
NASA's Perseverance rover is a part of the Mars 2020 mission, launched in 2020 and landed on Mars on February 18, 2021. It is intended to investigate an astrobiologically relevant ancient environment on Mars and to study its surface geological processes and history, including assessing its past habitability and the potential for preservation of biosignatures within accessible geological materials.
Moon
Yutu-2
The Chinese Chang'e 4 mission launched on 7 December 2018, and landed and deployed its rover on 3 January 2019 on the far side of the Moon. It was the first rover ever to operate on the far side of the Moon.
In December 2019, Yutu-2 broke the lunar longevity record previously held by the Soviet Union's Lunokhod 1 rover, which had operated on the lunar surface for eleven lunar days (321 Earth days).
In February 2020, Chinese astronomers reported, for the first time, a high-resolution image of a lunar ejecta sequence, and, as well, direct analysis of its internal architecture. These were based on observations made by the Lunar Penetrating Radar (LPR) on board the Yutu-2 rover while studying the far side of the Moon.
TENACIOUS
The Hakuto-R Mission 2 includes a rover called "TENACIOUS", designed and manufactured in Luxembourg, which will explore the area around the landing site after being lowered to the lunar surface from the lander.
Planned rover missions
ExoMars Rosalind Franklin
The European Space Agency (ESA) has designed and carried out early prototyping and testing of the Rosalind Franklin rover. As a result of Russia's invasion of Ukraine, ESA severed ties with Roscosmos and was left without a launch vehicle for this mission. The mission now plans to launch no earlier than (NET) 2028 with a landing around 2030.
| Technology | Rovers | null |
234328 | https://en.wikipedia.org/wiki/Stellar%20association | Stellar association | A stellar association is a very loose star cluster, looser than both open clusters and globular clusters. Stellar associations will normally contain from 10 to 100 or more visible stars. An association is primarily identified by commonalities in its member stars' movement vectors, ages, and chemical compositions. These shared features indicate that the members share a common origin. Nevertheless, they have become gravitationally unbound, unlike star clusters, and the member stars will drift apart over millions of years, becoming a moving group as they scatter throughout their neighborhood within the galaxy.
Stellar associations were discovered by Victor Ambartsumian in 1947. The conventional name for an association uses the names or abbreviations of the constellation (or constellations) in which it is located, the association type, and, sometimes, a numerical identifier.
Types
Victor Ambartsumian first categorized stellar associations into two groups, OB and T, based on the properties of their stars. A third category, R, was later suggested by Sidney van den Bergh for associations that illuminate reflection nebulae.
The OB, T, and R associations form a continuum of young stellar groupings. But it is currently uncertain whether they are an evolutionary sequence, or represent some other factor at work. Some groups also display properties of both OB and T associations, so the categorization is not always clear-cut.
OB associations
Young associations will contain 10–100 massive stars of spectral class O and B, and are known as OB associations. These are believed to form within the same small volume inside a giant molecular cloud. Once the surrounding dust and gas is blown away, the remaining stars become unbound and begin to drift apart. It is believed that the majority of all stars in the Milky Way were formed in OB associations.
O class stars are short-lived, and will expire as supernovae after roughly one to fifteen million years, depending on the mass of the star. As a result, OB associations are generally only a few million years in age or less. The O–B stars in the association will have burned all their fuel within 10 million years. (Compare this to the current age of the Sun at about 5 billion years.)
The Hipparcos satellite provided measurements that located a dozen OB associations within 650 parsecs of the Sun. The nearest OB association is the Scorpius–Centaurus association, located about 400 light-years from the Sun.
OB associations have also been found in the Large Magellanic Cloud and the Andromeda Galaxy. These associations can be quite sparse, spanning 1,500 light-years in diameter.
T associations
Young stellar groups can contain a number of infant T Tauri stars that are still in the process of entering the main sequence. These sparse populations of up to a thousand T Tauri stars are known as T associations. The nearest example is the Taurus-Auriga T association (Tau-Aur T association), located at a distance of 140 parsecs from the Sun. Other examples of T associations include the R Corona Australis T association, the Lupus T association, the Chamaeleon T association and the Velorum T association. T associations are often found in the vicinity of the molecular cloud from which they formed. Some, but not all, include O–B class stars. To summarize, members of moving groups share the same age, origin, and chemical composition, and their velocity vectors agree in both magnitude and direction.
R associations
Associations of stars that illuminate reflection nebulae are called R associations, a name suggested by Sidney van den Bergh after he discovered that the stars in these nebulae had a non-uniform distribution. These young stellar groupings contain main sequence stars that are not sufficiently massive to disperse the interstellar clouds in which they formed. This allows the properties of the surrounding dark cloud to be examined by astronomers. Because R-associations are more plentiful than OB associations, they can be used to trace out the structure of the galactic spiral arms. An example of an R-association is Monoceros R2, located 830 ± 50 parsecs from the Sun.
Known associations
The Ursa Major Moving Group is one example of a stellar association. (Except for α Ursae Majoris and η Ursae Majoris, all the stars in the Plough/Big Dipper are part of that group.)
Other young moving groups include:
Local Association (Pleiades moving group)
Hyades Stream
IC 2391 supercluster
Beta Pictoris moving group
Castor moving group
AB Doradus moving group
Zeta Herculis moving group
Alpha Persei moving cluster
Camelopardalis OB1 association
| Physical sciences | Stellar astronomy | Astronomy |
234417 | https://en.wikipedia.org/wiki/Rebar | Rebar | Rebar (short for reinforcement bar or reinforcing bar), known when massed as reinforcing steel or steel reinforcement, is a tension device added to concrete to form reinforced concrete and reinforced masonry structures to strengthen and aid the concrete under tension. Concrete is strong under compression, but has low tensile strength. Rebar usually consists of steel bars which significantly increase the tensile strength of the structure. Rebar surfaces feature a continuous series of ribs, lugs or indentations to promote a better bond with the concrete and reduce the risk of slippage.
The most common type of rebar is carbon steel, typically consisting of hot-rolled round bars with deformation patterns embossed into its surface. Steel and concrete have similar coefficients of thermal expansion, so a concrete structural member reinforced with steel will experience minimal differential stress as the temperature changes.
Other readily available types of rebar are manufactured of stainless steel, and composite bars made of glass fiber, carbon fiber, or basalt fiber. The carbon steel reinforcing bars may also be coated in zinc or an epoxy resin designed to resist the effects of corrosion, especially when used in saltwater environments. Bamboo has been shown to be a viable alternative to reinforcing steel in concrete construction. These alternative types tend to be more expensive or may have lesser mechanical properties and are thus more often used in specialty construction where their physical characteristics fulfill a specific performance requirement that carbon steel does not provide.
History
Reinforcing bars in masonry construction have been used since antiquity, with Rome using iron or wooden rods in arch construction. Iron tie rods and anchor plates were later employed across Medieval Europe, as a device to reinforce arches, vaults, and cupolas. 2,500 meters of rebar was used in the 14th-century Château de Vincennes.
During the 18th century, rebar was used to form the carcass of the Leaning Tower of Nevyansk in Russia, built on the orders of the industrialist Akinfiy Demidov. The cast iron used for the rebar was of high quality, and there is no corrosion on the bars to this day. The carcass of the tower was connected to its cast iron tented roof, crowned with one of the first known lightning rods.
However, not until the mid-19th century, with the embedding of steel bars into concrete (thus producing modern reinforced concrete), did rebar display its greatest strengths. Several people in Europe and North America developed reinforced concrete in the 1850s. These include Joseph-Louis Lambot of France, who built reinforced concrete boats in Paris (1854) and Thaddeus Hyatt of the United States, who produced and tested reinforced concrete beams. Joseph Monier of France is one of the most notable figures for the invention and popularization of reinforced concrete. As a French gardener, Monier patented reinforced concrete flowerpots in 1867, before proceeding to build reinforced concrete water tanks and bridges.
Ernest L. Ransome, an English engineer and architect who worked in the United States, made a significant contribution to the development of reinforcing bars in concrete construction. He invented twisted iron rebar, which he initially thought of while designing self-supporting sidewalks for the Masonic Hall in Stockton, California. His twisted rebar was, however, not initially appreciated and even ridiculed at the Technical Society of California, where members stated that the twisting would weaken the iron. In 1889, Ransome worked on the West Coast mainly designing bridges. One of these, the Alvord Lake Bridge in San Francisco's Golden Gate Park, was the first reinforced concrete bridge built in the United States. He used twisted rebar in this structure.
At the same time Ransome was inventing twisted steel rebar, C.A.P. Turner was designing his "mushroom system" of reinforced concrete floor slabs with smooth round rods and Julius Kahn was experimenting with an innovative rolled diamond-shaped rebar with flat-plate flanges angled upwards at 45° (patented in 1902). Kahn predicted concrete beams with this reinforcing system would bend like a Warren truss, and also thought of this rebar as shear reinforcement. Kahn's reinforcing system was built in concrete beams, joists, and columns.
The system was both praised and criticized by Kahn's engineering contemporaries: Turner voiced strong objections to this system as it could cause catastrophic failure to concrete structures. He rejected the idea that Kahn's reinforcing system in concrete beams would act as a Warren truss and also noted that this system would not provide the adequate amount of shear stress reinforcement at the ends of the simply supported beams, the place where the shear stress is greatest. Furthermore, Turner warned that Kahn's system could result in a brittle failure as it did not have longitudinal reinforcement in the beams at the columns.
This type of failure manifested in the partial collapse of the Bixby Hotel in Long Beach, California and total collapse of the Eastman Kodak Building in Rochester, New York, both during construction in 1906. It was, however, concluded that both failures were the consequences of poor-quality labor. With the increase in demand of construction standardization, innovative reinforcing systems such as Kahn's were pushed to the side in favor of the concrete reinforcing systems seen today.
Requirements for deformations on steel bar reinforcement were not standardized in US construction until about 1950. Modern requirements for deformations were established in "Tentative Specifications for the Deformations of Deformed Steel Bars for Concrete Reinforcement", ASTM A305-47T. Subsequently, changes were made that increased rib height and reduced rib spacing for certain bar sizes, and the qualification of “tentative” was removed when the updated standard ASTM A305-49 was issued in 1949. The requirements for deformations found in current specifications for steel bar reinforcing, such as ASTM A615 and ASTM A706, among others, are the same as those specified in ASTM A305-49.
Use in concrete and masonry
Concrete is a material that is very strong in compression, but relatively weak in tension. To compensate for this imbalance in concrete's behavior, rebar is cast into it to carry the tensile loads. Most steel reinforcement is divided into primary and secondary reinforcement:
Primary reinforcement refers to the steel which is employed to guarantee the resistance needed by the structure as a whole to support the design loads.
Secondary reinforcement, also known as distribution or thermal reinforcement, is employed for durability and aesthetic reasons, by providing enough localized resistance to limit cracking and resist stresses caused by effects such as temperature changes and shrinkage.
Secondary applications include rebar embedded in masonry walls, either placed horizontally in a mortar joint (every fourth or fifth course of block) or vertically (in the voids of cement blocks and cored bricks), then fixed in place with grout. Masonry structures held together with grout have similar properties to concrete – high compressive resistance but a limited ability to carry tensile loads. When rebar is added, they are known as "reinforced masonry".
A similar approach (embedding rebar vertically in designed voids in engineered blocks) is also used in dry-laid landscape walls, at minimum to pin the lowest course into the earth; the same technique secures the lowest course and/or deadmen in walls made of engineered concrete blocks or wooden landscape ties.
In unusual cases, steel reinforcement may be embedded and partially exposed, as in the steel tie bars that constrain and reinforce the masonry of Nevyansk Tower or ancient structures in Rome and the Vatican.
Physical characteristics
Steel has a thermal expansion coefficient nearly equal to that of modern concrete. If this were not so, it would cause problems through additional longitudinal and perpendicular stresses at temperatures different from the temperature at the time of setting. Although rebar has ribs that bind it mechanically to the concrete, it can still be pulled out of the concrete under high stresses, an occurrence that often accompanies a larger-scale collapse of the structure. To prevent such a failure, rebar is either deeply embedded into adjacent structural members (40–60 times the diameter), or bent and hooked at the ends to lock it around the concrete and other rebar. The first approach increases the friction locking the bar into place, while the second makes use of the high compressive strength of concrete.
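The 40–60 bar-diameter embedment rule of thumb above can be turned into a quick estimate. The following sketch simply scales the quoted range by the bar diameter; the function name is illustrative, and actual development lengths must be taken from the governing design code, not this heuristic.

```python
def straight_embedment_range(bar_diameter_mm: float) -> tuple[float, float]:
    """Rule-of-thumb deep-embedment range from the text: 40-60 bar
    diameters. Real development lengths come from the design code."""
    return 40 * bar_diameter_mm, 60 * bar_diameter_mm

# For an assumed 16 mm bar:
low, high = straight_embedment_range(16)
print(low, high)   # 640.0 960.0 (mm)
```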
Common rebar is made of unfinished tempered steel, making it susceptible to rusting. Normally the concrete cover provides a pH higher than 12, preventing the corrosion reaction. Too little concrete cover can compromise this protection through carbonation from the surface and salt penetration, while too much cover can lead to larger crack widths, which also compromise the local protection. As rust takes up a greater volume than the steel from which it formed, it exerts severe internal pressure on the surrounding concrete, leading to cracking, spalling, and, ultimately, structural failure. This phenomenon is known as oxide jacking.
This is a particular problem where the concrete is exposed to salt water, as in bridges where salt is applied to roadways in winter, or in marine applications. Uncoated, corrosion-resistant low-carbon/chromium (microcomposite), silicon bronze, epoxy-coated, galvanized, or stainless steel rebars may be employed in these situations at greater initial expense, but significantly lower expense over the service life of the project.
Extra care is taken during the transport, fabrication, handling, installation, and concrete placement process when working with epoxy-coated rebar, because damage will reduce the long-term corrosion resistance of these bars. Even damaged epoxy-coated bars have shown better performance than uncoated reinforcing bars, though issues from debonding of the epoxy coating from the bars and corrosion under the epoxy film have been reported. These epoxy-coated bars are used in over 70,000 bridge decks in the US, but this technology was slowly being phased out in favor of stainless steel rebar as of 2005 because of its poor performance.
Requirements for deformations are found in US-standard product specifications for steel bar reinforcing, such as ASTM A615 and ASTM A706, and dictate lug spacing and height.
Fibre-reinforced plastic rebar is also used in high-corrosion environments. It is available in many forms, such as spirals for reinforcing columns, common rods, and meshes. Most commercially available rebar is made from unidirectional fibers set in a thermoset polymer resin and is often referred to as FRP.
Some special construction, such as research and manufacturing facilities with very sensitive electronics, may require reinforcement that is non-conductive to electricity, and medical imaging equipment rooms may require non-magnetic properties to avoid interference. FRP rebar, notably glass-fibre types, has low electrical conductivity and is non-magnetic, and is commonly used for such needs. Stainless steel rebar with low magnetic permeability is available and is sometimes used to avoid magnetic interference issues.
Reinforcing steel can also be displaced by impacts such as earthquakes, resulting in structural failure. The prime example of this is the collapse of the Cypress Street Viaduct in Oakland, California as a result of the 1989 Loma Prieta earthquake, causing 42 fatalities. The shaking of the earthquake caused rebars to burst from the concrete and buckle. Updated building designs, including more circumferential rebar, can address this type of failure.
Sizes and grades
US sizes
US/Imperial bar sizes give the diameter in units of eighths of an inch for bar sizes #2 through #8, so that #8 = 8⁄8 inch = 1 inch in diameter.
There are no fractional bar sizes in this system. The "#" symbol indicates the number sign, and thus "#6" is read as "number six". The use of the "#" sign is customary for US sizes, but "No." is sometimes used instead. Within the trades rebar is known by a shorthand utilizing the bar diameter as descriptor, such as "four-bar" for bar that is four-eighths (or one-half) of an inch.
The cross-sectional area of a bar, as given by πr², works out to (bar size/9.027)², which is approximated as (bar size/9)² square inches. For example, the area of #8 bar is (8/9)² = 0.79 square inches.
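The eighth-inch sizing and the (bar size/9)² approximation above can be checked with a few lines of arithmetic. This is a minimal sketch; the function names are illustrative, not part of any standard.

```python
import math

def us_bar_properties(size: int) -> tuple[float, float]:
    """Nominal diameter (in) and exact cross-sectional area (in^2) for
    US bar sizes #2 through #8, which step in eighths of an inch."""
    diameter = size / 8.0                      # e.g. #8 -> 1.0 inch
    exact_area = math.pi * (diameter / 2) ** 2 # pi * r^2
    return diameter, exact_area

def approx_area(size: int) -> float:
    """The (bar size / 9)^2 rule of thumb described in the text."""
    return (size / 9) ** 2

d, a = us_bar_properties(8)
print(d)                        # 1.0 (inch diameter)
print(round(a, 2))              # 0.79 square inches
print(round(approx_area(8), 2)) # 0.79 -- the approximation agrees
```

Both the exact π·r² area and the rule of thumb round to 0.79 in² for a #8 bar, as stated above.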
Bar sizes larger than #8 follow the eighth-inch rule imperfectly and skip sizes #12–13 and #15–17 due to historical convention. In early concrete construction, bars of one inch and larger were only available in square sections, and when large-format deformed round bars became available around 1957, the industry manufactured them to provide the cross-sectional area equivalent of the standard square bar sizes that were formerly used. The diameter of the equivalent large-format round shape is rounded to the nearest eighth of an inch to provide the bar size. For example, #9 bar has a cross section of 1.00 square inch, and therefore a diameter of 1.128 inches. The #10, #11, #14, and #18 sizes correspond to 1⅛-inch, 1¼-inch, 1½-inch, and 2-inch square bars, respectively.
Sizes smaller than #3 are no longer recognized as standard sizes. These are most commonly manufactured as plain round undeformed rod steel but can be made with deformations. Sizes smaller than #3 are typically referred to as "wire" products and not "bar" and specified by either their nominal diameter or wire gage number. #2 bars are often informally called "pencil rod" as they are about the same size as a pencil.
When US/Imperial sized rebar are used in projects with metric units, the equivalent metric size is typically specified as the nominal diameter rounded to the nearest millimeter. These are not considered standard metric sizes, and the substitution is thus often referred to as a soft conversion or the "soft metric" size. The US/Imperial bar size system recognizes the use of true metric bar sizes (No. 10, 12, 16, 20, 25, 28, 32, 36, 40, 50 and 60 specifically), which indicate the nominal bar diameter in millimeters, as an "alternate size" specification. Substituting a true metric size for a US/Imperial size is called a hard conversion, and sometimes results in the use of a physically different sized bar.
Canadian sizes
Metric bar designations represent the nominal bar diameter in millimeters, rounded to the nearest 5 mm.
European sizes
Metric bar designations represent the nominal bar diameter in millimetres. Preferred bar sizes in Europe are specified to comply with Table 6 of the standard EN 10080, although various national standards still remain in force (e.g. BS 4449 in the United Kingdom). In Switzerland some sizes are different from European standard.
Australian sizes
Reinforcement for use in concrete construction is subject to the requirements of Australian Standards AS3600 (Concrete Structures) and AS/NZS4671 (Steel Reinforcing for Concrete). There are other standards that apply to testing, welding and galvanizing.
The designation of reinforcement is defined in AS/NZS4671 using the following formats:
Shape/ Section
D- deformed ribbed bar, R- round / plain bar, I- deformed indented bar
Ductility Class
L- low ductility, N- normal ductility, E- seismic (Earthquake) ductility
Standard grades (MPa)
250N, 300E, 500L, 500N, 500E
Examples:
D500N12 is deformed bar, 500 MPa strength, normal ductility and 12 mm nominal diameter – also known as "N12"
Bars are typically abbreviated to simply 'N' (hot-rolled deformed bar), 'R' (hot-rolled round bar), 'RW' (cold-drawn ribbed wire) or 'W' (cold-drawn round wire), as the yield strength and ductility class can be implied from the shape. For example, all commercially available wire has a yield strength of 500 MPa and low ductility, while round bars are 250 MPa and normal ductility.
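The AS/NZS 4671 designation format described above (shape letter, grade, ductility class, nominal diameter) is regular enough to parse mechanically. The sketch below is an assumption-laden illustration of that format, not code from any standard; the regex and field names are my own.

```python
import re

# Illustrative parser for AS/NZS 4671-style designations such as "D500N12":
# shape letter, grade in MPa, ductility class, nominal diameter in mm.
PATTERN = re.compile(
    r"^(?P<shape>[DRI])(?P<grade>\d{3})(?P<ductility>[LNE])(?P<diameter>\d+)$"
)

def parse_designation(code: str) -> dict:
    m = PATTERN.match(code)
    if m is None:
        raise ValueError(f"not a recognised designation: {code!r}")
    return {
        "shape": {"D": "deformed ribbed", "R": "round/plain",
                  "I": "deformed indented"}[m["shape"]],
        "grade_mpa": int(m["grade"]),
        "ductility": {"L": "low", "N": "normal", "E": "seismic"}[m["ductility"]],
        "diameter_mm": int(m["diameter"]),
    }

print(parse_designation("D500N12"))
# {'shape': 'deformed ribbed', 'grade_mpa': 500,
#  'ductility': 'normal', 'diameter_mm': 12}
```

This reproduces the worked example in the text: D500N12 is a deformed bar, 500 MPa, normal ductility, 12 mm nominal diameter.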
New Zealand
Reinforcement for use in concrete construction is subject to the requirements of AS/NZS4671 (Steel Reinforcing for Concrete). There are other standards that apply to testing, welding and galvanizing.
Reinforcing steel bar is available in Grade 300 and Grade 500 Class E.
India
Rebars are available in the following grades as per IS:1786-2008: Fe 415, Fe 415D, Fe 415S, Fe 500, Fe 500D, Fe 500S, Fe 550, Fe 550D, and Fe 600. Rebars are quenched with high-pressure water so that the outer surface is hardened while the inner core remains soft. Rebars are ribbed so that the concrete can get a better grip. Coastal regions use galvanized rebars to prolong their life. BIS rebar sizes are 10, 12, 16, 20, 25, 28, 32, 36, 40 and 50 millimeters.
Jumbo and threaded bar sizes
Very large format rebar sizes are widely available and produced by specialty manufacturers. The tower and sign industries commonly use "jumbo" bars as anchor rods for large structures which are fabricated from slightly oversized blanks such that threads can be cut at the ends to accept standard anchor nuts. Fully threaded rebar is also produced with very coarse threads which satisfy rebar deformation standards and allow for custom nuts and couplers to be used. These customary sizes, while in common use, do not have consensus standards associated with them, and properties may vary by manufacturer.
Grades
Rebar is available in grades and specifications that vary in yield strength, ultimate tensile strength, chemical composition, and percentage of elongation.
The use of a grade by itself only indicates the minimum permissible yield strength, and it must be used in the context of a material specification in order to fully describe product requirements for rebar. Material specifications set the requirements for grades as well as additional properties such as chemical composition, minimum elongation, physical tolerances, etc. Fabricated rebar must exceed the grade's minimum yield strength and any other material specification requirements when inspected and tested.
In US use, the grade designation is equal to the minimum yield strength of the bar in ksi (1000 psi); for example, grade 60 rebar has a minimum yield strength of 60 ksi. Rebar is most commonly manufactured in grades 40, 60, and 75 with higher strength readily available in grades 80, 100, 120 and 150. Grade 60 (420 MPa) is the most widely used rebar grade in modern US construction. Historic grades include 30, 33, 35, 36, 50 and 55, which are not in common use today.
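Since the US grade number equals the minimum yield strength in ksi, converting a grade to MPa is a single multiplication. A minimal sketch (function name is illustrative):

```python
KSI_TO_MPA = 6.89476  # 1 ksi = 6.89476 MPa

def min_yield_mpa(us_grade: int) -> float:
    """US grade number equals the minimum yield strength in ksi;
    convert it to MPa."""
    return us_grade * KSI_TO_MPA

print(round(min_yield_mpa(60), 1))  # 413.7
```

Grade 60 works out to about 413.7 MPa, which is why it is marketed as grade 420 in soft-metric terms, as noted above.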
Some grades are only manufactured for specific bar sizes; for example, under ASTM A615, Grade 40 (280 MPa) is only furnished for US bar sizes #3 through #6 (soft metric No.10 through 19). Sometimes limitations on available material grades for specific bar sizes is related to the manufacturing process used, as well as the availability of controlled quality raw materials used.
Some material specifications cover multiple grades, and in such cases it is necessary to indicate both the material specification and grade. Rebar grades are customarily noted on engineering documents, even when there are no other grade options within the material specification, in order to eliminate confusion and avoid potential quality issues such as might occur if a material substitution is made. "Gr." is the common engineering abbreviation for "grade", with variations on letter capitalization and the use of a period.
In certain cases, such as earthquake engineering and blast-resistant design where post-yield behavior is expected, it is important to be able to predict and control properties such as the maximum yield strength and minimum ratio of tensile strength to yield strength. ASTM A706 Gr. 60 is an example of a controlled property range material specification which has a minimum yield strength of 60 ksi (420 MPa), maximum yield strength of 78 ksi (540 MPa), minimum tensile strength of 80 ksi (550 MPa) and not less than 1.25 times the actual yield strength, and minimum elongation requirements that vary by bar size.
In countries that use the metric system, the grade designation is typically the yield strength in megapascals (MPa), for example grade 400 (similar to US grade 60; however, metric grade 420 is actually the exact substitution for the US grade).
Common US specifications, published by ACI and ASTM, are:
American Concrete Institute: "ACI 318-14 Building Code Requirements for Structural Concrete and Commentary", (2014)
ASTM A82: Specification for Plain Steel Wire for Concrete Reinforcement
ASTM A184/A184M: Specification for Fabricated Deformed Steel Bar Mats for Concrete Reinforcement
ASTM A185: Specification for Welded Plain Steel Wire Fabric for Concrete Reinforcement
ASTM A496: Specification for Deformed Steel Wire for Concrete Reinforcement
ASTM A497: Specification for Welded Deformed Steel Wire Fabric for Concrete Reinforcement
ASTM A615/A615M: Deformed and plain carbon-steel bars for concrete reinforcement
ASTM A616/A616M: Specification for Rail-Steel Deformed and Plain Bars for Concrete Reinforcement
ASTM A617/A617M: Specification for Axle-Steel Deformed and Plain Bars for Concrete Reinforcement
ASTM A706/A706M: Low-alloy steel deformed and plain bars for concrete reinforcement
ASTM A722/A722M: Standard Specification for High-Strength Steel Bars for Prestressed Concrete
ASTM A767/A767M: Specification for Zinc-Coated (Galvanized) Steel Bars for Concrete Reinforcement
ASTM A775/A775M: Specification for Epoxy-Coated Reinforcing Steel Bars
ASTM A934/A934M: Specification for Epoxy-Coated Prefabricated Steel Reinforcing Bars
ASTM A955: Deformed and plain stainless-steel bars for concrete reinforcement (Supplementary Requirement S1 is used when specifying magnetic permeability testing)
ASTM A996: Rail-steel and axle-steel deformed bars for concrete reinforcement
ASTM A1035: Standard Specification for Deformed and Plain, Low-carbon, Chromium, Steel Bars for Concrete Reinforcement
ASTM marking designations are:
'S' billet A615
'I' rail A616
'IR' rail meeting Supplementary Requirements S1 A616
'A' axle A617
'W' Low-alloy — A706
Historically in Europe, rebar is composed of mild steel material with a yield strength of approximately 250 MPa (36 ksi). Modern rebar is composed of high-yield steel, with a yield strength more typically 500 MPa (72.5 ksi). Rebar can be supplied with various grades of ductility. The more ductile steel is capable of absorbing considerably more energy when deformed – a behavior that resists earthquake forces and is used in design. These high-yield-strength ductile steels are usually produced using the TEMPCORE process, a method of thermomechanical processing. The manufacture of reinforcing steel by re-rolling finished products (e.g. sheets or rails) is not allowed. In contrast to structural steel, rebar steel grades are not yet harmonized across Europe, each country having its own national standards. However, some standardization of specification and testing methods exist under EN 10080 and EN ISO 15630:
BS EN 10080: Steel for the reinforcement of concrete. Weldable reinforcing steel. General. (2005)
BS 4449: Steel for the reinforcement of concrete. Weldable reinforcing steel. Bar, coil and product. Specification. (2005/2009)
BS 4482: Steel wire for the reinforcement of concrete products. Specification (2005)
BS 4483: Steel fabric for the reinforcement of concrete. Specification (2005)
BS 6744: Stainless steel bars for the reinforcement of and use in concrete. Requirements and test methods. (2001/2009)
DIN 488-1: Reinforcing steels - Part 1: Grades, properties, marking (2009)
DIN 488-2: Reinforcing steels - Part 2: Reinforcing steel bars (2009)
DIN 488-3: Reinforcing steels - Part 3: Reinforcing steel in coils, steel wire (2009)
DIN 488-4: Reinforcing steels - Part 4: Welded fabric (2009)
DIN 488-5: Reinforcing steels - Part 5: Lattice girders (2009)
DIN 488-6: Reinforcing steel - Part 6: Assessment of conformity (2010)
BS EN ISO 15630-1: Steel for the reinforcement and prestressing of concrete. Test methods. Reinforcing bars, wire rod and wire. (2010)
BS EN ISO 15630-2: Steel for the reinforcement and prestressing of concrete. Test methods. Welded fabric. (2010)
Placing rebar
Rebar cages are fabricated either on or off the project site commonly with the help of hydraulic benders and shears. However, for small or custom work a tool known as a Hickey, or hand rebar bender, is sufficient. The rebars are placed by steel fixers ("rodbusters" or concrete reinforcing iron workers), with bar supports and concrete or plastic rebar spacers separating the rebar from the concrete formwork to establish concrete cover and ensure that proper embedment is achieved. The rebars in the cages are connected by spot welding, tying steel wire, sometimes using an electric rebar tier, or with mechanical connections. For tying epoxy-coated or galvanized rebars, epoxy-coated or galvanized wire is normally used, respectively.
Stirrups
Stirrups form the outer part of a rebar cage. The function of stirrups (often referred to as 'reinforcing steel links' or 'shear links') is threefold: to give the main reinforcement bars structure, to maintain the correct level of concrete cover, and to maintain an even transference of force throughout the supporting elements. Stirrups are usually rectangular in beams and circular in piers, and are placed at regular intervals along a column or beam as defined by civil or structural engineers in construction drawings.
Welding
The American Welding Society (AWS) D1.4 sets out the practices for welding rebar in the US. Without special consideration, the only rebar that is ready to weld is W grade (low-alloy A706). Rebar that is not produced to the ASTM A706 specification is generally not suitable for welding without first calculating its "carbon equivalent". Material with a carbon equivalent of less than 0.55 can be welded.
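The carbon-equivalent check above is a simple weighted sum of the alloy contents from a mill certificate. As a sketch only: the expression below is the common IIW formula, used here as an assumption for illustration (AWS D1.4 defines its own carbon-equivalent expression, which should be consulted for actual work).

```python
def carbon_equivalent(c, mn, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """IIW carbon-equivalent from mass-percent alloy contents (illustrative)."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

# Hypothetical mill-certificate chemistry for one heat of steel:
ce = carbon_equivalent(c=0.30, mn=1.20, cr=0.10, ni=0.10, cu=0.20)
print(round(ce, 3), "weldable:", ce < 0.55)
```

With these assumed contents the result is just under the 0.55 threshold, so the material would be considered weldable under the rule quoted above.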
ASTM A616 and A617 (now replaced by the combined standard A996) reinforcing bars are re-rolled rail steel and re-rolled rail-axle steel with uncontrolled chemistry, including phosphorus and carbon content. These materials are not common.
Rebar cages are normally tied together with wire, although spot welding of cages has been the norm in Europe for many years, and is becoming more common in the United States. High strength steels for prestressed concrete cannot be welded.
Reinforcement placement in rolls
The roll reinforcement system is a fast and cost-efficient method for placing a large quantity of reinforcement in a short period of time. Roll reinforcement is usually prepared off-site and easily unrolled on site. It has been applied successfully in slabs (decks, foundations), wind energy mast foundations, walls, ramps, and similar elements.
Mechanical connections
Also known as "mechanical couplers" or "mechanical splices", mechanical connections are used to connect reinforcing bars together. Mechanical couplers are an effective means to reduce rebar congestion in highly reinforced areas for cast-in-place concrete construction. These couplers are also used in precast concrete construction at the joints between members.
The structural performance criteria for mechanical connections vary between countries, codes, and industries. As a minimum requirement, codes typically specify that the spliced connection meet or exceed 125% of the specified yield strength of the rebar. More stringent criteria also require development of the specified ultimate strength of the rebar. As an example, ACI 318 specifies either Type 1 (125% Fy) or Type 2 (125% Fy and 100% Fu) performance criteria.
For concrete structures designed with ductility in mind, it is recommended that the mechanical connections are also capable of failing in a ductile manner, typically known in the reinforcing steel industry as achieving "bar-break". As an example, Caltrans specifies a required mode of failure (i.e., "necking of the bar").
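The Type 1/Type 2 criteria described above reduce to simple force comparisons against the bar's yield and ultimate capacities. A minimal sketch, where the bar area and strengths are illustrative assumptions rather than code-tabulated values:

```python
def splice_checks(splice_force_kn, bar_area_mm2, fy_mpa, fu_mpa):
    """Check ACI 318-style splice criteria.

    Type 1: splice develops at least 1.25 * Fy * Ab.
    Type 2: Type 1 plus development of Fu * Ab.
    Force in kN, area in mm^2, stresses in MPa.
    """
    force_n = splice_force_kn * 1e3
    type1 = force_n >= 1.25 * fy_mpa * bar_area_mm2
    type2 = type1 and force_n >= fu_mpa * bar_area_mm2
    return type1, type2

# Hypothetical 25 mm bar: Ab = 510 mm^2, Fy = 420 MPa, Fu = 620 MPa,
# and a coupler whose tested capacity is 330 kN.
print(splice_checks(330, 510, 420, 620))
```

Here 1.25·Fy·Ab ≈ 268 kN and Fu·Ab ≈ 316 kN, so the assumed 330 kN coupler would satisfy both criteria.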
Safety
To prevent injury, the protruding ends of steel rebar are often bent over or covered with special steel-reinforced plastic caps.
Designations
Reinforcement is usually tabulated in a "reinforcement schedule" on construction drawings. This eliminates ambiguity in the notations used around the world. The following list provides examples of the notations used in the architectural, engineering, and construction industry.
Reuse and recycling
Rebar is frequently recycled, and rebar is often made entirely from recycled steel. Nucor, the largest steel producer in the United States, claims its steel bar products are made from 97% recycled steel.
European eel

The European eel (Anguilla anguilla) is a species of eel. Their life history was a mystery for thousands of years, and mating in the wild has not yet been observed. The five stages of their development were originally thought to be different species. They are critically endangered due to hydroelectric dams, overfishing by fisheries on coasts for human consumption, and parasites.
Description
European eels live through 5 stages of development: larva (leptocephalus), glass eel, elver, yellow eel, and silver eel. Adults in the yellow phase are normally around and rarely reach more than , but can reach a length of up to in exceptional cases. In addition, they range from having 110 to 120 vertebrae. While European eels tend to live approximately 15–20 years in the wild, some captive specimens have lived for over 80 years. A specimen known as "the Brantevik Eel" lived for 155 years in the well of a family home in Brantevik, a fishing village in southern Sweden.
Ecology
Eels tend to range from underwater and, after spawning in the Sargasso Sea, disperse north throughout the Atlantic Ocean, its coasts, and the rivers that empty into it. Feeding occurs mainly at night, via scent; prey consists of worms, fish (including ones too big to eat without biting off chunks), mollusks such as slugs, crustaceans such as crayfish, and, when available in large quantities, plankton. European eels are preyed upon by bigger eels, herons, cormorants, and pike. Seagulls also prey on elvers. Eels usually find and compete for shelter by hiding in plants or tube-shaped crevices in rocks. They also hide in muddy fields when inland.
Conservation status
The European eel is a critically endangered species. Since the 1970s, the number of eels reaching Europe is thought to have declined by around 90% (possibly even 98%). Contributing factors include overfishing, parasites such as Anguillicola crassus, barriers to migration such as hydroelectric dams, and natural changes in the North Atlantic oscillation, Gulf Stream, and North Atlantic drift. Recent work suggests polychlorinated biphenyl (PCB) pollution may be a factor in the decline. The TRAFFIC program is introducing traceability and legality systems throughout the trade chain to control and reverse the decline of the species. The species is listed in Appendix II of the CITES Convention. Hydroelectric dams have been shown to have a significant negative impact on eel populations: over an 80-year period, waters with large dams experienced almost twice the reduction in eel numbers as dam-free waters.
Sustainable consumption
Eels have been important sources of food both as adults (including jellied eels of East London) and as glass eels. Glass-eel fishing using basket traps has been of significant economic value in many river estuaries on the western seaboard of Europe. In addition, the United States imports 11 million pounds of eel every year to support its sushi industry, including European eels. In order to make eel consumption sustainable, in 2010, Greenpeace International added the European eel to its "seafood red list", and the Sustainable Eel Group launched the Sustainable Eel Standard.
Breeding projects
As the European eel population has been falling for some time, several projects have been started. In 1997, Innovatie Netwerk in the Netherlands initiated a project where they attempted to get European eels to breed in captivity by simulating the journey from Europe to the Sargasso Sea with a swimming machine for the fish.
The first to achieve some success was DTU Aqua, a part of the Technical University of Denmark. Through a combination of fresh and salt water, as well as hormones, they were able to breed the eel in captivity in 2006 and keep the larvae alive for 4.5 days after hatching. By 2007, DTU Aqua scientists set a new record in which the larvae survived for 12 days, achieved by feeding the mother eel a special arginine-enriched diet. At this age the contents of the larval yolk sac have been used up, the mouth and digestive channel have developed, and the larva requires feeding. Attempts with various feed substances failed. Deep-water sampling of the presumed habitat of larval European eels in the Sargasso Sea was performed by the Galathea 3 expedition in 2006–07, in the hope of revealing their likely feeding preference at this early stage. The results indicated that they feed on various planktonic organisms, but especially microscopic jellyfish. A follow-up expedition was performed by DTU's own research ship to the Sargasso Sea region in 2014.
To further the research, the PRO-EEL project, led by DTU Aqua and involving several research institutes elsewhere in Denmark (University of Copenhagen and others), Norway (Norwegian Institute of Fisheries and Food Research and others), the Netherlands (Leiden University and others), Belgium (Ghent University), France (French National Center for Scientific Research and others), Spain (ICTA at Polytechnic University of Valencia) and Tunisia (National Institute of Marine Sciences and Technologies), was started in 2010. By 2014, the eel larvae at their facilities typically survived 20–22 days, and by 2022 they were surviving up to around 140 days, well into the leptocephalus stage (the stage just before glass eel), but the full life cycle has still not been completed in captivity.
Life history
Much of the European eel's life history was a mystery for centuries, as fishermen never caught anything they could identify as a young eel. Unlike many other migrating fish, eels begin their life cycle in the ocean and spend most of their lives in fresh inland water, or brackish coastal water, returning to the ocean to spawn and then die. In the early 1900s, Danish researcher Johannes Schmidt identified the Sargasso Sea as the most likely spawning grounds for European eels. The larvae (leptocephali) drift towards Europe in a 300-day migration.
When approaching the European coast, the larvae metamorphose into a transparent larval stage called "glass eel", enter estuaries, and many start migrating upstream. After entering their continental habitat, the glass eels metamorphose into elvers, miniature versions of the adult eels. As the eel grows, it becomes known as a "yellow eel" due to the brownish-yellow color of their sides and belly. After 5–20 years in fresh or brackish water, the eels become sexually mature, their eyes grow larger, their flanks become silver, and their bellies white in color. In this stage, the eels are known as "silver eels", and they begin their migration back to the Sargasso Sea to spawn. Silvering is important in an eel's development because it allows for increased levels of the steroid hormone cortisol, which is needed for their migration from fresh water back to the sea. Cortisol plays a role in the long migration because it allows for the mobilization of energy during migration. Also playing a key role in silvering is the production of the steroid 11-Ketotestosterone (11-KT), which prepares the eel for structural changes to the skin to endure the migration from fresh water to saltwater.
Some eels never enter fresh water, remaining in a marine environment their entire lives. Others grow up in brackish water, or migrate between salt water, brackish water, and fresh water several times in their lifetimes.
Magnetoreception has also been reported in the European eel by at least one study, and may be used for navigation.
Commercial fisheries
Production
The eel farming industry uses recirculating pools to raise glass eels taken from the wild for 8 months to 2 years, until they mature enough for sale. Valliculture on coasts, using weirs, is also employed instead of recirculating pools. New eels are quarantined to prevent the spread of disease, and eels are sorted by size every couple of weeks to prevent cannibalism and to remove dead animals. A range of 23°C to 28°C is optimal for growth, and protein-based pellets and pastes are used as food after an initial few days of cod roe for the small glass eels. European eels typically have a feed conversion ratio (FCR) in the range of 1.8–2.5, although European fisheries typically achieve 1.6–1.7. Filters are essential for eliminating waste and ensuring the eels have clean water to live in. Eels are typically transported by road in tanks of water, or by air in styrofoam boxes with a beaker of ice. The beakers keep condensation on the outside and ice on the inside, keeping the environment moist enough for the 1–3 kg of eels to survive while also keeping the temperature low.
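The feed conversion ratio quoted above is simply the mass of feed consumed divided by the biomass gained over the same period. A minimal sketch with illustrative (assumed) batch figures:

```python
def feed_conversion_ratio(feed_kg, start_biomass_kg, end_biomass_kg):
    """FCR = feed given / biomass gained; lower is more efficient."""
    return feed_kg / (end_biomass_kg - start_biomass_kg)

# Hypothetical batch: 800 kg of feed while biomass grows from 500 kg to 1000 kg
print(feed_conversion_ratio(800, 500, 1000))  # → 1.6
```

An FCR of 1.6 would sit at the efficient end of the 1.6–1.7 range cited for European fisheries.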
Diseases/parasites in fisheries
Diseases can spread rapidly in the highly populated environments of fisheries if quarantine measures are not taken immediately upon arrival of new eels. Some common bacterial infections observed in eel fisheries are red fin and red eel pest. When an eel has a red fin infection, its tail and fins start rotting, and a salt solution should be used to treat it. Antibiotics can be used to treat red eel pest, which is characterized by ulcerated lesions, swelling, and spots of red on the skin of the eel. In addition, Aeromonas sobria and Streptococcus spp. are rarer bacteria that infect European eels; they have been observed in necropsies, are likely the result of other stresses increasing the eels' susceptibility to disease, and can be treated with antimicrobials. Parasites such as those of the genus Dactylogyrus have also been observed in necropsies; symptoms of parasitic infections in European eels include white spots, increased mucus, fin fraying, rubbing of infected spots against the enclosure, respiratory distress, and lethargy. These parasites are best treated with salt or formaldehyde solutions. Viral infections such as red head have also been observed; symptoms include red hemorrhaging spreading from the head to the rest of the body, which can be treated with vaccination at a young age, salt solutions, or a decrease in water temperature within the enclosure. Salt solutions can also treat fungal infections that cause swelling of the gills and brown or white skin patches.
Industry
The exportation of European eels has been restricted since 2010, yet on average 44% of eel sales in the United States consist of these eels. Eel aquaculture is most prominent in Japan, but China, Scandinavia, the rest of Europe, Australia, Morocco, and Taiwan also practice it. Eel breeding programs initiated by humans have been unsuccessful thus far, so the entire industry depends on the number of eels spawning in the wild, leaving it unsustainable and vulnerable to the same factors that have made the European eel critically endangered.
Viverridae

Viverridae is a family of small to medium-sized feliform mammals, comprising 14 genera with 33 species. This family was named and first described by John Edward Gray in 1821. Viverrids occur all over Africa, southern Europe, and South and Southeast Asia, across the Wallace Line. The word viverridae comes from the Latin word .
The species of the subfamily Genettinae are known as genets and oyans. The viverrids of the subfamily Viverrinae are commonly called civets; the Paradoxurinae and most Hemigalinae species are called palm civets.
Characteristics
Viverrids have four or five toes on each foot and half-retractile claws. They have six incisors in each jaw and molars with two tubercular grinders behind in the upper jaw, and one in the lower jaw. The tongue is rough with sharp prickles. A pouch or gland occurs beneath the anus, but there is no cecum. The male's urethral opening is directed backward.
Viverrids are the most primitive of all the families of feliform Carnivora and clearly less specialized than the Felidae. In external characteristics, they are distinguished from the Felidae by the longer muzzle and tuft of facial vibrissae between the lower jaw bones, and by the shorter limbs and the five-toed hind foot with the first digit present. The skull differs by the position of the postpalatine foramina on the maxilla, almost always well in advance of the maxillopalatine suture, and usually about the level of the second premolar; and by the distinct external division of the auditory bulla into its two elements either by a definite groove or, when rarely this is obliterated, by the depression of the tympanic bone in front of the swollen entotympanic. The typical dental formula is: , but the number may be reduced, although never to the same extent as in the Felidae.
Their flesh-shearing carnassial teeth are relatively undeveloped compared to those of other feliform carnivorans. Most viverrid species have a penis bone (a baculum).
Classification
Living species
In 1821, Gray defined this family as consisting of the genera Viverra, Genetta, Herpestes, and Suricata. Reginald Innes Pocock later redefined the family as containing a great number of highly diversified genera, and being susceptible of division into several subfamilies, based mainly on the structure of the feet and of some highly specialized scent glands, derived from the skin, which are present in most of the species and are situated in the region of the external generative organs. He subordinated the subfamilies Hemigalinae, Paradoxurinae, Prionodontinae, and Viverrinae to the Viverridae.
In 1833, Edward Turner Bennett described the Malagasy fossa (Cryptoprocta ferox) and subordinated the Cryptoprocta to the Viverridae. A molecular and morphological analysis based on DNA/DNA hybridization experiments suggests that Cryptoprocta does not belong within Viverridae, but is a member of the Eupleridae.
The African palm civet (Nandinia binotata) resembles the civets of the Viverridae, but is genetically distinct and belongs in its own monotypic family, the Nandiniidae. There is little dispute that the Poiana species are viverrids.
DNA analysis based on 29 carnivoran species, comprising 13 Viverrinae species and three species representing Paradoxurus, Paguma and Hemigalinae, confirmed Pocock's assumption that the African linsang Poiana represents the sister group of the genus Genetta. The placement of Prionodon as the sister group of the family Felidae is strongly supported, and it was proposed that the Asiatic linsangs be placed in the monogeneric family Prionodontidae.
Phylogeny
The phylogenetic relationships of Viverridae are shown in the following cladogram:
Extinct species
Medical imaging

Medical imaging is the technique and process of imaging the interior of a body for clinical analysis and medical intervention, as well as visual representation of the function of some organs or tissues (physiology). Medical imaging seeks to reveal internal structures hidden by the skin and bones, as well as to diagnose and treat disease. Medical imaging also establishes a database of normal anatomy and physiology to make it possible to identify abnormalities. Although imaging of removed organs and tissues can be performed for medical reasons, such procedures are usually considered part of pathology instead of medical imaging.
Measurement and recording techniques that are not primarily designed to produce images, such as electroencephalography (EEG), magnetoencephalography (MEG), electrocardiography (ECG), and others, represent other technologies that produce data susceptible to representation as a parameter graph versus time or maps that contain data about the measurement locations. In a limited comparison, these technologies can be considered forms of medical imaging in another discipline of medical instrumentation.
As of 2010, 5 billion medical imaging studies had been conducted worldwide. Radiation exposure from medical imaging in 2006 made up about 50% of total ionizing radiation exposure in the United States. Medical imaging equipment is manufactured using technology from the semiconductor industry, including CMOS integrated circuit chips, power semiconductor devices, sensors such as image sensors (particularly CMOS sensors) and biosensors, and processors such as microcontrollers, microprocessors, digital signal processors, media processors and system-on-chip devices. Annual shipments of medical imaging chips amount to 46 million units.
The term "noninvasive" is used to denote a procedure where no instrument is introduced into a patient's body, which is the case for most imaging techniques used.
Types
In the clinical context, "invisible light" medical imaging is generally equated to radiology or "clinical imaging". "Visible light" medical imaging involves digital video or still pictures that can be seen without special equipment. Dermatology and wound care are two modalities that use visible-light imagery. Interpretation of medical images is generally undertaken by a physician specialising in radiology, known as a radiologist; however, it may be undertaken by any healthcare professional who is trained and certified in radiological clinical evaluation. Increasingly, interpretation is being undertaken by non-physicians; for example, radiographers frequently train in interpretation as part of expanded practice. Diagnostic radiography designates the technical aspects of medical imaging, in particular the acquisition of medical images. The radiographer (also known as a radiologic technologist) is usually responsible for acquiring medical images of diagnostic quality, although other professionals may train in this area; notably, some radiological interventions performed by radiologists are carried out without a radiographer.
As a field of scientific investigation, medical imaging constitutes a sub-discipline of biomedical engineering, medical physics or medicine depending on the context: Research and development in the area of instrumentation, image acquisition (e.g., radiography), modeling and quantification are usually the preserve of biomedical engineering, medical physics, and computer science; Research into the application and interpretation of medical images is usually the preserve of radiology and the medical sub-discipline relevant to medical condition or area of medical science (neuroscience, cardiology, psychiatry, psychology, etc.) under investigation. Many of the techniques developed for medical imaging also have scientific and industrial applications.
Radiography
Two forms of radiographic images are in use in medical imaging: projection radiography and fluoroscopy, with the latter being useful for catheter guidance. These 2D techniques are still in wide use despite the advance of 3D tomography, due to their low cost and high resolution and, depending on the application, lower radiation dosages. This imaging modality uses a wide beam of X-rays for image acquisition and was the first imaging technique available in modern medicine.
Fluoroscopy produces real-time images of internal structures of the body in a similar fashion to radiography, but employs a constant input of X-rays, at a lower dose rate. Contrast media, such as barium, iodine, and air are used to visualize internal organs as they work. Fluoroscopy is also used in image-guided procedures when constant feedback during a procedure is required. An image receptor is required to convert the radiation into an image after it has passed through the area of interest. Early on, this was a fluorescing screen, which gave way to an Image Amplifier (IA) which was a large vacuum tube that had the receiving end coated with cesium iodide, and a mirror at the opposite end. Eventually the mirror was replaced with a TV camera.
Projectional radiographs, more commonly known as X-rays, are often used to determine the type and extent of a fracture as well as for detecting pathological changes in the lungs. With the use of radio-opaque contrast media, such as barium, they can also be used to visualize the structure of the stomach and intestines – this can help diagnose ulcers or certain types of colon cancer.
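The contrast in a projectional radiograph arises from differential attenuation, which for a narrow monoenergetic beam follows the Beer–Lambert law, I = I₀·e^(−μx). A sketch of the arithmetic; the attenuation coefficients below are rough assumed values, not reference data:

```python
import math

def transmitted_intensity(i0, mu_per_cm, thickness_cm):
    """Beer–Lambert attenuation of a narrow X-ray beam through uniform material."""
    return i0 * math.exp(-mu_per_cm * thickness_cm)

# Assumed linear attenuation coefficients (illustrative only)
soft_tissue = transmitted_intensity(1.0, 0.2, 10)  # ~0.135 of the beam survives
bone = transmitted_intensity(1.0, 0.5, 10)         # far less survives
print(soft_tissue, bone)
```

The large difference in transmitted intensity between the two materials is exactly what registers as contrast on the image receptor.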
Magnetic resonance imaging
A magnetic resonance imaging instrument (MRI scanner), or "nuclear magnetic resonance (NMR) imaging" scanner as it was originally known, uses powerful magnets to polarize and excite hydrogen nuclei (i.e., single protons) of water molecules in human tissue, producing a detectable signal which is spatially encoded, resulting in images of the body. The MRI machine emits a radio frequency (RF) pulse at the resonant frequency of the hydrogen atoms on water molecules. Radio frequency antennas ("RF coils") send the pulse to the area of the body to be examined. The RF pulse is absorbed by protons, causing their direction with respect to the primary magnetic field to change. When the RF pulse is turned off, the protons "relax" back to alignment with the primary magnet and emit radio-waves in the process. This radio-frequency emission from the hydrogen-atoms on water is what is detected and reconstructed into an image. The resonant frequency of a spinning magnetic dipole (of which protons are one example) is called the Larmor frequency and is determined by the strength of the main magnetic field and the chemical environment of the nuclei of interest. MRI uses three electromagnetic fields: a very strong (typically 1.5 to 3 teslas) static magnetic field to polarize the hydrogen nuclei, called the primary field; gradient fields that can be modified to vary in space and time (on the order of 1 kHz) for spatial encoding, often simply called gradients; and a spatially homogeneous radio-frequency (RF) field for manipulation of the hydrogen nuclei to produce measurable signals, collected through an RF antenna.
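The Larmor relationship described above is linear in the static field strength: f = γ̄·B₀, where γ̄ ≈ 42.577 MHz/T for ¹H. A quick sketch evaluating it at the common clinical field strengths mentioned in the text:

```python
GAMMA_BAR_H1_MHZ_PER_T = 42.577  # gyromagnetic ratio / 2π for the proton

def larmor_mhz(b0_tesla):
    """Proton resonant (Larmor) frequency for a given static field strength."""
    return GAMMA_BAR_H1_MHZ_PER_T * b0_tesla

for b0 in (1.5, 3.0):
    print(b0, "T ->", larmor_mhz(b0), "MHz")
```

This is why a 1.5 T scanner transmits its RF pulse near 64 MHz and a 3 T scanner near 128 MHz.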
Like CT, MRI traditionally creates a two-dimensional image of a thin "slice" of the body and is therefore considered a tomographic imaging technique. Modern MRI instruments are capable of producing images in the form of 3D blocks, which may be considered a generalization of the single-slice, tomographic, concept. Unlike CT, MRI does not involve the use of ionizing radiation and is therefore not associated with the same health hazards. For example, because MRI has only been in use since the early 1980s, there are no known long-term effects of exposure to strong static fields (this is the subject of some debate; see 'Safety' in MRI) and therefore there is no limit to the number of scans to which an individual can be subjected, in contrast with X-ray and CT. However, there are well-identified health risks associated with tissue heating from exposure to the RF field and the presence of implanted devices in the body, such as pacemakers. These risks are strictly controlled as part of the design of the instrument and the scanning protocols used.
Because CT and MRI are sensitive to different tissue properties, the appearances of the images obtained with the two techniques differ markedly. In CT, X-rays must be blocked by some form of dense tissue to create an image, so the image quality when looking at soft tissues will be poor. In MRI, while any nucleus with a net nuclear spin can be used, the proton of the hydrogen atom remains the most widely used, especially in the clinical setting, because it is so ubiquitous and returns a large signal. This nucleus, present in water molecules, allows the excellent soft-tissue contrast achievable with MRI.
A number of different pulse sequences can be used for specific MRI diagnostic imaging (multiparametric MRI or mpMRI). It is possible to differentiate tissue characteristics by combining two or more of the following imaging sequences, depending on the information being sought: T1-weighted (T1-MRI), T2-weighted (T2-MRI), diffusion weighted imaging (DWI-MRI), dynamic contrast enhancement (DCE-MRI), and spectroscopy (MRI-S). For example, imaging of prostate tumors is better accomplished using T2-MRI and DWI-MRI than T2-weighted imaging alone. The number of applications of mpMRI for detecting disease in various organs continues to expand, including liver studies, breast tumors, pancreatic tumors, and assessing the effects of vascular disruption agents on cancer tumors.
Nuclear medicine
Nuclear medicine encompasses both diagnostic imaging and treatment of disease, and may also be referred to as molecular medicine or molecular imaging and therapeutics. Nuclear medicine uses certain properties of isotopes and the energetic particles emitted from radioactive material to diagnose or treat various pathologies. Different from the typical concept of anatomic radiology, nuclear medicine enables assessment of physiology. This function-based approach to medical evaluation has useful applications in most subspecialties, notably oncology, neurology, and cardiology. Gamma cameras and PET scanners are used in e.g. scintigraphy, SPECT and PET to detect regions of biologic activity that may be associated with a disease. A relatively short-lived isotope, such as 99mTc, is administered to the patient. Isotopes are often preferentially absorbed by biologically active tissue in the body, and can be used to identify tumors or fracture points in bone. Images are acquired after collimated photons are detected by a crystal that gives off a light signal, which is in turn amplified and converted into count data.
Scintigraphy ("scint") is a form of diagnostic test wherein radioisotopes are taken internally, for example, intravenously or orally. Then, gamma cameras capture and form two-dimensional images from the radiation emitted by the radiopharmaceuticals.
SPECT is a 3D tomographic technique that uses gamma camera data from many projections, which can be reconstructed in different planes. A dual-detector-head gamma camera combined with a CT scanner, which provides localization of functional SPECT data, is termed a SPECT-CT camera, and has shown utility in advancing the field of molecular imaging. In most other medical imaging modalities, energy is passed through the body and the reaction or result is read by detectors. In SPECT imaging, the patient is injected with a radioisotope, most commonly thallium-201, technetium-99m, iodine-123, or gallium-67. The radioactive gamma rays are emitted through the body as the natural decay process of these isotopes takes place. The emissions of the gamma rays are captured by detectors that surround the body. This essentially means that the patient is now the source of the radioactivity, rather than the medical imaging device, as with X-ray or CT.
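Because the patient is the radiation source, the usable imaging window is set by the isotope's half-life; 99mTc, for example, decays with a half-life of about 6 hours. A sketch of the decay arithmetic (the dose value is an illustrative assumption):

```python
def remaining_activity(a0, t_hours, half_life_hours=6.01):
    """Exponential decay of a radioisotope's activity (99mTc half-life by default)."""
    return a0 * 0.5 ** (t_hours / half_life_hours)

# A hypothetical 740 MBq 99mTc dose after one half-life:
print(remaining_activity(740, 6.01))  # → 370.0
```

The same function with a different half-life argument covers the other SPECT isotopes listed above.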
Positron emission tomography (PET) uses coincidence detection to image functional processes. A short-lived positron-emitting isotope, such as 18F, is incorporated into an organic substance such as glucose, creating 18F-fluorodeoxyglucose, which can be used as a marker of metabolic utilization. Images of activity distribution throughout the body can reveal rapidly growing tissue, such as tumors, metastases, or infection. PET images can be viewed alongside computed tomography scans to determine an anatomic correlate. Modern scanners may integrate PET with CT (PET-CT) or MRI (PET-MRI) to optimize the image reconstruction involved with positron imaging. This is performed on the same equipment without physically moving the patient off the gantry. The resultant hybrid of functional and anatomic imaging information is a useful tool in non-invasive diagnosis and patient management.
Fiducial markers are used in a wide range of medical imaging applications. Images of the same subject produced with two different imaging systems may be correlated (called image registration) by placing a fiducial marker in the area imaged by both systems. In this case, a marker that is visible in the images produced by both imaging modalities must be used. By this method, functional information from SPECT or positron emission tomography can be related to anatomical information provided by magnetic resonance imaging (MRI). Similarly, fiducial points established during MRI can be correlated with brain images generated by magnetoencephalography to localize the source of brain activity.
Ultrasound
Medical ultrasound uses high-frequency broadband sound waves in the megahertz range that are reflected by tissue to varying degrees to produce (up to 3D) images. This is commonly associated with imaging the fetus in pregnant women. Uses of ultrasound are much broader, however. Other important uses include imaging the abdominal organs, heart, breast, muscles, tendons, arteries and veins. While it may provide less anatomical detail than techniques such as CT or MRI, it has several advantages which make it ideal in numerous situations, in particular that it studies the function of moving structures in real time, emits no ionizing radiation, and contains speckle that can be used in elastography. Ultrasound is also used as a popular research tool for capturing raw data, which can be made available through an ultrasound research interface, for the purpose of tissue characterization and implementation of new image-processing techniques. Ultrasound differs from other medical imaging modalities in that it operates by the transmission and receipt of sound waves. High-frequency sound waves are sent into the tissue and, depending on the composition of the different tissues, the signal is attenuated and returned at separate intervals. The path of reflected sound waves in a multilayered structure can be described by the input acoustic impedance of each tissue and the reflection and transmission coefficients at the interfaces between structures. Ultrasound is very safe to use and does not appear to cause any adverse effects. It is also relatively inexpensive and quick to perform. Ultrasound scanners can be taken to critically ill patients in intensive care units, avoiding the danger posed by moving the patient to the radiology department. The real-time moving image obtained can be used to guide drainage and biopsy procedures. Doppler capabilities on modern scanners allow the blood flow in arteries and veins to be assessed.
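The reflection behaviour described above can be illustrated with the standard amplitude reflection coefficient for a boundary between two media. The sketch below uses typical textbook acoustic impedance figures (not values from this article), and the function name is illustrative:

```python
# Sketch of how strongly an ultrasound pulse reflects at a tissue
# boundary. For normal incidence the amplitude reflection coefficient is
# R = (Z2 - Z1) / (Z2 + Z1), where Z is the acoustic impedance of each
# medium (in MRayl). Impedance values below are typical textbook figures.

def reflection_coefficient(z1, z2):
    """Amplitude reflection coefficient at a boundary, normal incidence."""
    return (z2 - z1) / (z2 + z1)

Z = {"soft tissue": 1.63, "fat": 1.38, "bone": 7.8, "air": 0.0004}

# Soft tissue -> fat: weak reflection, most energy is transmitted onward,
# which is what lets ultrasound image deep layered structures.
r_fat = reflection_coefficient(Z["soft tissue"], Z["fat"])

# Soft tissue -> air: near-total reflection, which is why coupling gel
# is needed between the transducer and the skin.
r_air = reflection_coefficient(Z["soft tissue"], Z["air"])

print(f"tissue/fat: R = {r_fat:+.3f}")
print(f"tissue/air: R = {r_air:+.3f}")
```

The sign of R indicates a phase inversion when the wave enters a lower-impedance medium; the echo amplitude recorded by the scanner depends on the magnitude.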
Elastography
Elastography is a relatively new imaging modality that maps the elastic properties of soft tissue. It emerged in the last two decades. Elastography is useful in medical diagnosis, as elasticity can discern healthy from unhealthy tissue for specific organs and growths. For example, cancerous tumours will often be harder than the surrounding tissue, and diseased livers are stiffer than healthy ones. There are several elastographic techniques based on the use of ultrasound, magnetic resonance imaging and tactile imaging. The wide clinical use of ultrasound elastography is a result of the implementation of the technology in clinical ultrasound machines. Main branches of ultrasound elastography include Quasistatic Elastography/Strain Imaging, Shear Wave Elasticity Imaging (SWEI), Acoustic Radiation Force Impulse imaging (ARFI), Supersonic Shear Imaging (SSI), and Transient Elastography. Over the last decade, a steady increase of activity in the field of elastography has demonstrated successful application of the technology in various areas of medical diagnostics and treatment monitoring.
Photoacoustic imaging
Photoacoustic imaging is a recently developed hybrid biomedical imaging modality based on the photoacoustic effect. It combines the advantages of optical absorption contrast with ultrasonic spatial resolution for deep imaging in the (optical) diffusive or quasi-diffusive regime. Recent studies have shown that photoacoustic imaging can be used in vivo for tumor angiogenesis monitoring, blood oxygenation mapping, functional brain imaging, and skin melanoma detection.
Tomography
Tomography is imaging by sections or sectioning. The main such methods in medical imaging are:
X-ray computed tomography (CT), or computed axial tomography (CAT) scan, is a helical tomography technique (in its latest generation) that traditionally produces a 2D image of the structures in a thin section of the body. In CT, a beam of X-rays spins around the object being examined and is picked up by sensitive radiation detectors after having penetrated the object from multiple angles. A computer then analyses the information received from the scanner's detectors and constructs a detailed image of the object and its contents using the mathematical principles laid out in the Radon transform. CT carries a greater ionizing radiation dose burden than projection radiography, so repeated scans must be limited to avoid health effects. CT is based on the same principles as X-ray projection, but in this case the patient is enclosed in a surrounding ring fitted with 500–1000 scintillation detectors (fourth-generation X-ray CT scanner geometry). In older-generation scanners, the X-ray beam was swept by a translating source and detector pair. Computed tomography has almost completely replaced focal plane tomography in X-ray tomography imaging.
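The Radon-transform principle behind CT reconstruction can be shown with a toy example: each projection value is simply the sum of attenuation along one ray path through the slice. The phantom values and the two angles below are invented for demonstration; a real scanner records many projections at finely spaced angles and inverts them with an algorithm such as filtered back-projection.

```python
# A 3x3 "phantom" of attenuation coefficients, projected at 0 degrees
# (column sums) and 90 degrees (row sums). Each projection bin holds a
# discrete line integral of attenuation along one X-ray path.

phantom = [
    [0.0, 0.2, 0.0],
    [0.2, 1.0, 0.2],  # dense structure in the centre
    [0.0, 0.2, 0.0],
]

def project_rows(image):
    """Projection with rays travelling horizontally (sum of each row)."""
    return [sum(row) for row in image]

def project_cols(image):
    """Projection with rays travelling vertically (sum of each column)."""
    return [sum(col) for col in zip(*image)]

p0 = project_cols(phantom)   # approximately [0.2, 1.4, 0.2]
p90 = project_rows(phantom)  # approximately [0.2, 1.4, 0.2]

# Every projection conserves the total attenuation in the slice -- a
# basic consistency condition also used when checking real sinograms.
assert abs(sum(p0) - sum(p90)) < 1e-9
print(p0, p90)
```

Both projections peak at the central bin, where rays pass through the dense structure; combining projections from many angles is what allows the 2D attenuation map to be recovered.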
Positron emission tomography (PET), also used in conjunction with computed tomography (PET-CT) and with magnetic resonance imaging (PET-MRI).
Magnetic resonance imaging (MRI) commonly produces tomographic images of cross-sections of the body. (See separate MRI section in this article.)
Echocardiography
When ultrasound is used to image the heart it is referred to as an echocardiogram. Echocardiography allows detailed structures of the heart, including chamber size, heart function, the valves of the heart, as well as the pericardium (the sac around the heart) to be seen. Echocardiography uses 2D, 3D, and Doppler imaging to create pictures of the heart and visualize the blood flowing through each of the four heart valves. Echocardiography is widely used in an array of patients ranging from those experiencing symptoms, such as shortness of breath or chest pain, to those undergoing cancer treatments. Transthoracic ultrasound has been proven to be safe for patients of all ages, from infants to the elderly, without risk of harmful side effects or radiation, differentiating it from other imaging modalities. Echocardiography is one of the most commonly used imaging modalities in the world due to its portability and use in a variety of applications. In emergency situations, echocardiography is quick, easily accessible, and able to be performed at the bedside, making it the modality of choice for many physicians.
Functional near-infrared spectroscopy
fNIR is a relatively new non-invasive imaging technique. Near-infrared spectroscopy (NIRS) is used for the purpose of functional neuroimaging and has been widely accepted as a brain imaging technique.
Magnetic particle imaging
Magnetic particle imaging (MPI) is a developing diagnostic imaging technique for tracking superparamagnetic iron oxide nanoparticles. Its primary advantages are high sensitivity and specificity, along with the lack of signal decrease with tissue depth. MPI has been used in medical research to image cardiovascular performance, neuroperfusion, and cell tracking.
In pregnancy
Medical imaging may be indicated in pregnancy because of pregnancy complications, a pre-existing disease or an acquired disease in pregnancy, or routine prenatal care. Magnetic resonance imaging (MRI) without MRI contrast agents as well as obstetric ultrasonography are not associated with any risk for the mother or the fetus, and are the imaging techniques of choice for pregnant women. Projectional radiography, CT scans and nuclear medicine imaging result in some degree of ionizing radiation exposure, but with a few exceptions the absorbed doses are much lower than those associated with fetal harm. At higher dosages, effects can include miscarriage, birth defects and intellectual disability.
Maximizing imaging procedure use
The amount of data obtained in a single MR or CT scan is very extensive. Some of the data that radiologists discard could save patients time and money, while reducing their exposure to radiation and risk of complications from invasive procedures. Another approach for making the procedures more efficient is based on utilizing additional constraints, e.g., in some medical imaging modalities one can improve the efficiency of data acquisition by taking into account the fact that the reconstructed density is positive.
Creation of three-dimensional images
Volume rendering techniques have been developed to enable CT, MRI and ultrasound scanning software to produce 3D images for the physician. Traditionally CT and MRI scans produced 2D static output on film. To produce 3D images, many scans are made and then combined by computers to produce a 3D model, which can then be manipulated by the physician. 3D ultrasounds are produced using a somewhat similar technique.
In diagnosing disease of the viscera of the abdomen, ultrasound is particularly sensitive for imaging the biliary tract, the urinary tract and the female reproductive organs (ovaries, fallopian tubes). For example, gallstones can be diagnosed from dilatation of the common bile duct and the presence of a stone within it.
With the ability to visualize important structures in great detail, 3D visualization methods are a valuable resource for the diagnosis and surgical treatment of many pathologies. It was a key resource for the famous, but ultimately unsuccessful attempt by Singaporean surgeons to separate Iranian twins Ladan and Laleh Bijani in 2003. The 3D equipment was used previously for similar operations with great success.
Other proposed or developed techniques include:
Diffuse optical tomography
Elastography
Electrical impedance tomography
Optoacoustic imaging
Ophthalmology
A-scan
B-scan
Corneal topography
Optical coherence tomography
Scanning laser ophthalmoscopy
Some of these techniques are still at a research stage and not yet used in clinical routines.
Non-diagnostic imaging
Neuroimaging has also been used in experimental circumstances to allow people (especially disabled persons) to control outside devices, acting as a brain computer interface.
Many medical imaging software applications are used for non-diagnostic imaging, specifically because they lack FDA approval and are not permitted for use in clinical research for patient diagnosis. Note that many clinical research studies are not designed for patient diagnosis anyway.
Archiving and recording
Used primarily in ultrasound imaging, capturing the image produced by a medical imaging device is required for archiving and telemedicine applications. In most scenarios, a frame grabber is used in order to capture the video signal from the medical device and relay it to a computer for further processing and operations.
DICOM
The Digital Imaging and Communications in Medicine (DICOM) Standard is used globally to store, exchange, and transmit medical images. The DICOM Standard incorporates protocols for imaging techniques such as radiography, computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, and radiation therapy.
Compression of medical images
Medical imaging techniques produce very large amounts of data, especially from CT, MRI and PET modalities. As a result, storage and communications of electronic image data are prohibitive without the use of compression. JPEG 2000 image compression is used by the DICOM standard for storage and transmission of medical images. The cost and feasibility of accessing large image data sets over low or various bandwidths are further addressed by use of another DICOM standard, called JPIP, to enable efficient streaming of the JPEG 2000 compressed image data.
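A rough back-of-the-envelope calculation shows why compression matters for storage and transmission. The slice matrix, bit depth, slice count and compression ratio below are typical illustrative assumptions, not figures from this article:

```python
# Estimate of the uncompressed size of a single CT study, and the effect
# of a lossless compression ratio in the range JPEG 2000 typically
# achieves on CT data (roughly 2:1 to 4:1; the ratio used here is an
# illustrative assumption, not a measurement).

ROWS, COLS = 512, 512        # standard CT slice matrix
BYTES_PER_PIXEL = 2          # 12-16 bit data stored as 16-bit words
N_SLICES = 500               # a thin-slice chest/abdomen study

raw_bytes = ROWS * COLS * BYTES_PER_PIXEL * N_SLICES
raw_mb = raw_bytes / (1024 ** 2)

lossless_ratio = 2.5         # assumed JPEG 2000 lossless ratio
compressed_mb = raw_mb / lossless_ratio

print(f"uncompressed: {raw_mb:.0f} MiB")
print(f"compressed:   {compressed_mb:.0f} MiB (assumed {lossless_ratio}:1)")
```

At roughly a quarter of a gigabyte per study before compression, a busy department producing hundreds of studies a day quickly reaches the terabyte-to-petabyte scale mentioned later in this article.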
Medical imaging in the cloud
There has been a growing trend to migrate from on-premises PACS to cloud-based PACS. A recent article by Applied Radiology said, "As the digital-imaging realm is embraced across the healthcare enterprise, the swift transition from terabytes to petabytes of data has put radiology on the brink of information overload. Cloud computing offers the imaging department of the future the tools to manage data much more intelligently."
Use in pharmaceutical clinical trials
Medical imaging has become a major tool in clinical trials since it enables rapid diagnosis with visualization and quantitative assessment.
A typical clinical trial goes through multiple phases and can take up to eight years. Clinical endpoints or outcomes are used to determine whether the therapy is safe and effective. Once a patient reaches the endpoint, he or she is generally excluded from further experimental interaction. Trials that rely solely on clinical endpoints are very costly as they have long durations and tend to need large numbers of patients.
In contrast to clinical endpoints, surrogate endpoints have been shown to cut down the time required to confirm whether a drug has clinical benefits. Imaging biomarkers (a characteristic that is objectively measured by an imaging technique, which is used as an indicator of pharmacological response to a therapy) and surrogate endpoints have been shown to facilitate the use of small group sizes, obtaining quick results with good statistical power.
Imaging is able to reveal subtle changes indicative of the progression of therapy that may be missed by more subjective, traditional approaches. Statistical bias is reduced as the findings are evaluated without any direct patient contact.
Imaging techniques such as positron emission tomography (PET) and magnetic resonance imaging (MRI) are routinely used in oncology and neuroscience areas. For example, measurement of tumour shrinkage is a commonly used surrogate endpoint in solid tumour response evaluation. This allows for faster and more objective assessment of the effects of anticancer drugs. In Alzheimer's disease, MRI scans of the entire brain can accurately assess the rate of hippocampal atrophy, while PET scans can measure the brain's metabolic activity by measuring regional glucose metabolism, and beta-amyloid plaques using tracers such as Pittsburgh compound B (PiB). Historically less use has been made of quantitative medical imaging in other areas of drug development although interest is growing.
An imaging-based trial will usually be made up of three components:
A realistic imaging protocol. The protocol is an outline that standardizes (as far as practically possible) the way in which the images are acquired using the various modalities (PET, SPECT, CT, MRI). It covers the specifics of how images are to be stored, processed and evaluated.
An imaging centre that is responsible for collecting the images, performing quality control, and providing tools for data storage, distribution and analysis. It is important that images acquired at different time points are displayed in a standardised format to maintain the reliability of the evaluation. Certain specialised imaging contract research organizations provide end-to-end medical imaging services, from protocol design and site management through to data quality assurance and image analysis.
Clinical sites that recruit patients to generate the images to send back to the imaging centre.
Risks and safety issues
Medical imaging can lead to patient and healthcare provider harm through exposure to ionizing radiation, iodinated contrast, magnetic fields, and other hazards.
Lead is the main material used for radiographic shielding against scattered X-rays.
In magnetic resonance imaging, there is MRI RF shielding as well as magnetic shielding to prevent external disturbance of image quality.
Privacy protection
Medical images are generally covered by laws of medical privacy. For example, in the United States the Health Insurance Portability and Accountability Act (HIPAA) sets restrictions for health care providers on utilizing protected health information (PHI), which is any individually identifiable information relating to the past, present, or future physical or mental health of any individual. While there has not been any definitive legal decision in the matter, at least one study has indicated that medical imaging may contain biometric information that can uniquely identify a person, and so may qualify as PHI.
The UK General Medical Council's ethical guidelines indicate that the Council does not require consent prior to making recordings of X-ray images. However, the same guidance indicates that the images and recordings need to be anonymised, and acknowledges that, in deciding whether a recording is anonymised, one should bear in mind that apparently insignificant details may still be capable of identifying a patient. As such, one should be particularly careful about the anonymity of recordings of X-ray images before using or publishing them without consent in journals and other learning materials, whether printed or in an electronic format.
Industry
Organizations in the medical imaging industry include manufacturers of imaging equipment, freestanding radiology facilities, and hospitals.
The global market for manufactured devices was estimated at $5 billion in 2018. Notable manufacturers as of 2012 included Fujifilm, GE HealthCare, Siemens Healthineers, Philips, Shimadzu, Toshiba, Carestream Health, Hitachi, Hologic, and Esaote. In 2016, the manufacturing industry was characterized as oligopolistic and mature; new entrants included Samsung and Neusoft Medical.
In the United States, an estimate as of 2015 places the US market for imaging scans at about $100 billion, with 60% occurring in hospitals and 40% occurring in freestanding clinics, such as the RadNet chain.
Copyright
United States
As per chapter 300 of the Compendium of U.S. Copyright Office Practices, "the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author" including "Medical imaging produced by X-rays, ultrasounds, magnetic resonance imaging, or other diagnostic equipment." This position differs from the broad copyright protections afforded to photographs. While the Copyright Compendium is an agency statutory interpretation and not legally binding, courts are likely to give deference to it if they find it reasonable. Yet, there is no U.S. federal case law directly addressing the issue of the copyrightability of X-ray images.
Derivatives
An extensive definition of the term derivative work is given by the United States Copyright Act in :
A "derivative work" is a work based upon one or more preexisting works, such as a translation... art reproduction, abridgment, condensation, or any other form in which a work may be recast, transformed, or adapted. A work consisting of editorial revisions, annotations, elaborations, or other modifications which, as a whole, represent an original work of authorship, is a "derivative work".
provides:
The copyright in a compilation or derivative work extends only to the material contributed by the author of such work, as distinguished from the preexisting material employed in the work, and does not imply any exclusive right in the preexisting material. The copyright in such work is independent of, and does not affect or enlarge the scope, duration, ownership, or subsistence of, any copyright protection in the preexisting material.
Germany
In Germany, X-ray images as well as MRI, medical ultrasound, PET and scintigraphy images are protected by (copyright-like) related rights or neighbouring rights. This protection does not require creativity (as would be necessary for regular copyright protection) and lasts only for 50 years after image creation, if not published within 50 years, or for 50 years after the first legitimate publication. The letter of the law grants this right to the "Lichtbildner", i.e. the person who created the image. The literature seems to uniformly consider the medical doctor, dentist or veterinary physician as the rights holder, which may result from the circumstance that in Germany many X-rays are performed in ambulatory settings.
United Kingdom
Medical images created in the United Kingdom will normally be protected by copyright due to "the high level of skill, labour and judgement required to produce a good quality X-ray, particularly to show contrast between bones and various soft tissues". The Society of Radiographers believes this copyright is owned by the employer (unless the radiographer is self-employed—though even then their contract might require them to transfer ownership to the hospital). The copyright owner can grant certain permissions to whoever they wish, without giving up their ownership of the copyright. So the hospital and its employees will be given permission to use such radiographic images for the various purposes that they require for medical care. Physicians employed at the hospital will, in their contracts, be given the right to publish patient information in journal papers or books they write (providing it is made anonymous). Patients may also be granted permission to "do what they like with" their own images.
Sweden
The Cyber Law in Sweden states: "Pictures can be protected as photographic works or as photographic pictures. The former requires a higher level of originality; the latter protects all types of photographs, also the ones taken by amateurs, or within medicine or science. The protection requires some sort of photographic technique being used, which includes digital cameras as well as holograms created by laser technique. The difference between the two types of work is the term of protection, which amounts to seventy years after the death of the author of a photographic work as opposed to fifty years, from the year in which the photographic picture was taken."
Medical imaging may possibly be included in the scope of "photography", similarly to a U.S. statement that "MRI images, CT scans, and the like are analogous to photography."
Blackboard

A blackboard or a chalkboard is a reusable writing surface on which text or drawings are made with sticks of calcium sulphate or calcium carbonate, known, when used for this purpose, as chalk. Blackboards were originally made of smooth, thin sheets of black or dark grey slate stone.
Design
A blackboard can simply be a board painted with a dark matte paint (usually black, occasionally dark green). Matte black plastic sign material (known as closed-cell PVC foamboard) is also used to create custom chalkboard art. Blackboards on an A-frame are used by restaurants and bars to advertise daily specials. Adhesive chalkboard surface is also available in stores as rolls of textured black plastic shelf covering, which is applied to the desired wall, door or other surface.
A more modern variation consists of a coiled sheet of plastic drawn across two parallel rollers, which can be scrolled to create additional writing space while saving what has been written. The highest-grade blackboards are made of porcelain-enamelled steel (black, green, blue or sometimes other colours). Porcelain is very hard-wearing, and blackboards made of porcelain usually last 10–20 years in intensive use.
Lecture theatres may contain a number of blackboards in a grid arrangement. The lecturer then moves boards into reach for writing and then moves them out of reach, allowing a large amount of material to be shown simultaneously.
The chalk marks can be easily wiped off with a damp cloth, a sponge or a special blackboard eraser usually consisting of a block of wood covered by a felt pad. However, chalk marks made on some types of wet blackboard can be difficult to remove. Blackboard manufacturers often advise that a new or newly resurfaced blackboard be completely covered using the side of a stick of chalk and then that chalk brushed off as normal to prepare it for use.
Chalk sticks
Chalk sticks are produced in white and in various colours, especially for use with blackboards. White chalk sticks are made mainly from calcium carbonate derived from mineral chalk or limestone, while coloured chalk sticks are made from calcium sulphate in its dihydrate form, CaSO4·2H2O, derived from gypsum. Chalk sticks containing calcium carbonate typically contain 40–60% of CaCO3 (calcite).
Advantages and disadvantages
Advantages
Low Maintenance: Chalk requires no special care; unlike whiteboard markers, chalk does not dry out if left uncapped.
Cost-Efficiency: Chalk is significantly cheaper than whiteboard markers, providing a cost-effective option for extensive writing.
Drawing Versatility: Chalk allows for the easy creation of lines with different weights and thicknesses, surpassing the capabilities of whiteboard markers.
Quick Dashed Lines: The friction technique with chalk enables the swift creation of dashed lines, a task that might be more cumbersome with whiteboard markers.
Odor Considerations: Chalk's mild smell contrasts with the often pungent odor of whiteboard markers, offering a more pleasant writing experience.
Contrast and Visibility: Chalk writing generally provides better contrast than whiteboard markers, ensuring clear visibility in various lighting conditions.
Non-Reflective Surface: Blackboards do not reflect light like whiteboards, allowing information to be viewable from all angles without glare.
Ease of Erasure: Chalk can be easily erased, while whiteboard markings left for an extended period may require solvents for removal.
Stain Resistance: Chalk can be easily removed from most surfaces, including clothing, in contrast to whiteboard markers that may leave permanent stains.
Environmental Impact: Chalk is mostly biodegradable, while whiteboard markers pose challenges for plastic recycling.
Disadvantages
On the other hand, chalk produces dust, the amount depending on the quality of chalk used. Some people find this uncomfortable or may be allergic to it, and according to the American Academy of Allergy, Asthma and Immunology (AAAAI), there are links between chalk dust and allergy and asthma problems. The dust also precludes the use of chalk in areas shared with dust-sensitive equipment such as computers. The writing on blackboards is difficult to read in the dark. Chalk sticks shrink through use, and are notorious for breaking in half unless inserted in a writing utensil designed for chalk. Blackboards can suffer from ghosting. Ghosting occurs when old coloured chalk, pastels or chalkpen ink absorbs into the black finish of the board, making it impossible to remove.
The scratching of fingernails on a blackboard, as well as of other pointed, especially metal, objects against blackboards, produces a sound that is well known for being extremely irritating to most people. According to a study run by Michael Oehler, a professor at the University of Cologne, Germany, humans are "predisposed to detest" the sound of nails on a blackboard. The findings of the study were presented at the Acoustical Society of America conference and support earlier findings from a 1986 study by Vanderbilt psychologist Randolph Blake and two colleagues, which found that the sound of nails on a chalkboard annoyed people even when the high-pitch frequencies were removed. The study earned Blake a 2006 Ig Nobel Prize.
Etymology and history
The writing slate was in use in Indian schools as mentioned in Alberuni's Indica (Tarikh Al-Hind), written in the early 11th century:
They use black tablets for the children in the schools, and write upon them along the long side, not the broadside, writing with a white material from the left to the right.
The first classroom uses of large blackboards are difficult to date, but they were used for music education and composition in Europe as far back as the 16th century. The term "blackboard" is attested in English from the mid-18th century; the Oxford English Dictionary provides a citation from 1739, to write "with Chalk on a black-Board". The first attested use of chalk on blackboard in the United States dates to September 21, 1801, in a lecture course in mathematics given by George Baron. James Pillans has been credited with the invention of coloured chalk (1814); he had a recipe with ground chalk, dyes and porridge.
The use of blackboards changed methods of education and testing, as found in the Conic Sections Rebellion of 1830 in Yale.
Manufacturing of slate blackboards began by the 1840s. The green porcelain enamel surface was first used in 1930, and as this type of board became popular, the word "chalkboard" appeared. In the US, green porcelain-enamelled boards started to appear in schools in the 1950s.
Heart valve

A heart valve is a biological one-way valve that allows blood to flow in one direction through the chambers of the heart. A mammalian heart usually has four valves. Together, the valves determine the direction of blood flow through the heart. Heart valves are opened or closed by a difference in blood pressure on each side.
The mammalian heart has two atrioventricular valves separating the upper atria from the lower ventricles: the mitral valve in the left heart, and the tricuspid valve in the right heart. The two semilunar valves are at the entrance of the arteries leaving the heart. These are the aortic valve at the aorta, and the pulmonary valve at the pulmonary artery.
The heart also has a coronary sinus valve and an inferior vena cava valve, not discussed here.
Structure
The heart valves and the chambers are lined with endocardium. Heart valves separate the atria from the ventricles, or the ventricles from a blood vessel. Heart valves are situated around the fibrous rings of the cardiac skeleton. The valves incorporate flaps called leaflets or cusps, similar to a duckbill valve or flutter valve, which are pushed open to allow blood flow and which then close together to seal and prevent backflow. The mitral valve has two cusps, whereas the others have three. There are nodules at the tips of the cusps that make the seal tighter.
The pulmonary valve has left, right, and anterior cusps. The aortic valve has left, right, and posterior cusps. The tricuspid valve has anterior, posterior, and septal cusps; and the mitral valve has just anterior and posterior cusps.
The valves of the human heart can be grouped in two sets:
Two atrioventricular valves to prevent backflow of blood from the ventricles into the atria:
Tricuspid valve or right atrioventricular valve, between the right atrium and right ventricle
Mitral valve or bicuspid valve, between the left atrium and left ventricle
Two semilunar valves to prevent the backflow of blood into the ventricle:
Pulmonary valve, located at the opening between the right ventricle and the pulmonary trunk
Aortic valve, located at the opening between the left ventricle and the aorta.
Atrioventricular valves
The atrioventricular valves are the mitral valve, and the tricuspid valve, which are situated between the atria and the ventricles, and prevent backflow from the ventricles into the atria during systole. They are anchored to the walls of the ventricles by chordae tendineae, which prevent them from inverting.
The chordae tendineae are attached to papillary muscles that cause tension to better hold the valve. Together, the papillary muscles and the chordae tendineae are known as the subvalvular apparatus. The function of the subvalvular apparatus is to keep the valves from prolapsing into the atria when they close. The subvalvular apparatus has no effect on the opening and closure of the valves, however, which is caused entirely by the pressure gradient across the valve. The peculiar insertion of chords on the leaflet free margin, however, provides systolic stress sharing between chords according to their different thickness.
The closure of the AV valves is heard as lub, the first heart sound (S1). The closure of the SL valves is heard as dub, the second heart sound (S2).
The mitral valve is also called the bicuspid valve because it contains two leaflets or cusps. The mitral valve gets its name from the resemblance to a bishop's mitre (a type of hat). It is on the left side of the heart and allows the blood to flow from the left atrium into the left ventricle.
During diastole, a normally-functioning mitral valve opens as a result of increased pressure from the left atrium as it fills with blood (preloading). As atrial pressure increases above that of the left ventricle, the mitral valve opens. Opening facilitates the passive flow of blood into the left ventricle. Diastole ends with atrial contraction, which ejects the final 30% of blood that is transferred from the left atrium to the left ventricle. This amount of blood is known as the end diastolic volume (EDV), and the mitral valve closes at the end of atrial contraction to prevent a reversal of blood flow.
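The 30% atrial contribution can be illustrated with a small arithmetic sketch; the 120 mL end-diastolic volume used here is an assumed typical value, not a figure from the text, and the function name is illustrative:

```python
# Illustrative arithmetic only: splits the end-diastolic volume (EDV)
# into passive filling and the ~30% contributed by atrial contraction,
# as described in the text. The 120 mL EDV is an assumed typical value.
def filling_split(edv_ml: float, atrial_fraction: float = 0.30):
    """Return (passive_ml, atrial_ml) portions of the EDV."""
    atrial = edv_ml * atrial_fraction
    passive = edv_ml - atrial
    return passive, atrial

passive, atrial = filling_split(120.0)
# For a 120 mL EDV: ~84 mL passive filling, ~36 mL from atrial contraction.
```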
The tricuspid valve has three leaflets or cusps and is on the right side of the heart. It is between the right atrium and the right ventricle, and stops the backflow of blood between the two.
Semilunar valves
The aortic and pulmonary valves are located at the base of the aorta and the pulmonary trunk respectively. These are also called the "semilunar valves". These two arteries receive blood from the ventricles and their semilunar valves permit blood to be forced into the arteries, and prevent backflow from the arteries into the ventricles. These valves do not have chordae tendineae, and are more similar to the valves in veins than they are to the atrioventricular valves. The closure of the semilunar valves causes the second heart sound.
The aortic valve, which has three cusps, lies between the left ventricle and the aorta. During ventricular systole, pressure rises in the left ventricle and when it is greater than the pressure in the aorta, the aortic valve opens, allowing blood to exit the left ventricle into the aorta. When ventricular systole ends, pressure in the left ventricle rapidly drops and the pressure in the aorta forces the aortic valve to close. The closure of the aortic valve contributes the A2 component of the second heart sound.
The pulmonary valve (sometimes referred to as the pulmonic valve) lies between the right ventricle and the pulmonary artery, and has three cusps. Similar to the aortic valve, the pulmonary valve opens in ventricular systole, when the pressure in the right ventricle rises above the pressure in the pulmonary artery. At the end of ventricular systole, when the pressure in the right ventricle falls rapidly, the pressure in the pulmonary artery will close the pulmonary valve. The closure of the pulmonary valve contributes the P2 component of the second heart sound. The right heart is a low-pressure system, so the P2 component of the second heart sound is usually softer than the A2 component of the second heart sound. However, it is physiologically normal in some young people to hear both components separated during inhalation.
Development
In the developing heart, the valves between the atria and ventricles, the bicuspid and the tricuspid valves, develop on either side of the atrioventricular canals. The upward extension of the bases of the ventricles causes the canal to become invaginated into the ventricle cavities. The invaginated margins form the rudiments of the lateral cusps of the AV valves. The middle and septal cusps develop from the downward extension of the septum intermedium.
The semilunar valves (the pulmonary and aortic valves) are formed from four thickenings at the cardiac end of the truncus arteriosus. These thickenings are called endocardial cushions. The truncus arteriosus is originally a single outflow tract from the embryonic heart that will later split to become the ascending aorta and pulmonary trunk. Before it has split, four thickenings occur. There are anterior, posterior, and two lateral thickenings. A septum begins to form between what will later become the ascending aorta and pulmonary tract. As the septum forms, the two lateral thickenings are split, so that the ascending aorta and pulmonary trunk have three thickenings each (an anterior or posterior, and half of each of the lateral thickenings). The thickenings are the origins of the three cusps of the semilunar valves. The valves are visible as unique structures by the ninth week. As they mature, they rotate slightly as the outward vessels spiral, and move slightly closer to the heart.
Physiology
In general, the motion of the heart valves is determined using the Navier–Stokes equation, using boundary conditions of the blood pressures, pericardial fluid, and external loading as the constraints.
The motion of the heart valves is used as a boundary condition in the Navier–Stokes equation in determining the fluid dynamics of blood ejection from the left and right ventricles into the aorta and the lungs.
Relationship between pressure and flow in open valves
The pressure drop, Δp, across an open heart valve relates to the flow rate, Q, through the valve:
If:
Inflow energy conserved
Stagnant region behind leaflets
Outflow momentum conserved
Flat velocity profile
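Under the four assumptions above, a Bernoulli-type orifice analysis yields a quadratic pressure–flow relation. As a sketch (introducing ρ for blood density and A_eff for an effective orifice area, symbols not defined in the text above):

```latex
\Delta p \;=\; \frac{\rho}{2}\left(\frac{Q}{A_{\mathrm{eff}}}\right)^{2}
```

that is, halving the effective orifice area quadruples the pressure drop at a given flow rate.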
Valves with a single degree of freedom
Valve studies usually model the aortic and mitral valves as structures with a single degree of freedom. The governing relationships are based on the Euler equations.
Equations for the aortic valve in this case:
where:
u = axial velocity
p = pressure
A = cross sectional area of valve
L = axial length of valve
Λ(t) = single degree of freedom; when
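For reference, the one-dimensional Euler equations that such single-degree-of-freedom valve models build on can be written, in a standard form (not necessarily the notation of the original source), as mass and momentum balances for flow in a tube of cross section A(x, t):

```latex
\frac{\partial A}{\partial t} + \frac{\partial (uA)}{\partial x} = 0,
\qquad
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x}
  + \frac{1}{\rho}\,\frac{\partial p}{\partial x} = 0
```

where u, p, and A are the axial velocity, pressure, and cross-sectional area defined above, and ρ is the blood density.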
Atrioventricular valve
Clinical significance
Valvular heart disease is a general term referring to dysfunction of the valves. It takes two primary forms: regurgitation (also called insufficiency or incompetence), where a dysfunctional valve lets blood flow in the wrong direction, and stenosis, where a valve is narrowed.
Regurgitation occurs when a valve becomes insufficient and malfunctions, allowing some blood to flow in the wrong direction. This insufficiency can affect any of the valves as in aortic insufficiency, mitral insufficiency, pulmonary insufficiency and tricuspid insufficiency. The other form of valvular heart disease is stenosis, a narrowing of the valve. This is a result of the valve becoming thickened and any of the heart valves can be affected, as in mitral valve stenosis, tricuspid valve stenosis, pulmonary valve stenosis and aortic valve stenosis. Stenosis of the mitral valve is a common complication of rheumatic fever. Inflammation of the valves can be caused by infective endocarditis, usually a bacterial infection but can sometimes be caused by other organisms. Bacteria can more readily attach to damaged valves. Another type of endocarditis which doesn't provoke an inflammatory response, is nonbacterial thrombotic endocarditis. This is commonly found on previously undamaged valves. A major valvular heart disease is mitral valve prolapse, which is a weakening of connective tissue called myxomatous degeneration of the valve. This sees the displacement of a thickened mitral valve cusp into the left atrium during systole.
Disease of the heart valves can be congenital, such as aortic regurgitation or acquired, for example infective endocarditis. Different forms are associated with cardiovascular disease, connective tissue disorders and hypertension. The symptoms of the disease will depend on the affected valve, the type of disease, and the severity of the disease. For example, valvular disease of the aortic valve, such as aortic stenosis or aortic regurgitation, may cause breathlessness, whereas valvular diseases of the tricuspid valve may lead to dysfunction of the liver and jaundice. When valvular heart disease results from infectious causes, such as infective endocarditis, an affected person may have a fever and unique signs such as splinter haemorrhages of the nails, Janeway lesions, Osler nodes and Roth spots. A particularly feared complication of valvular disease is the creation of emboli because of turbulent blood flow, and the development of heart failure.
Valvular heart disease is diagnosed by echocardiography, which is a form of ultrasound. Damaged and defective heart valves can be repaired, or replaced with artificial heart valves. Infectious causes may also require treatment with antibiotics.
Congenital heart disease
The most common form of valvular anomaly is a congenital heart defect (CHD) called a bicuspid aortic valve. This results from the fusing of two of the cusps during embryonic development, forming a bicuspid valve instead of a tricuspid valve. The condition is often undiagnosed until calcific aortic stenosis has developed, which usually happens around ten years earlier than it otherwise would.
Less common CHDs are tricuspid and pulmonary atresia, and Ebstein's anomaly. Tricuspid atresia is the complete absence of the tricuspid valve, which can lead to an underdeveloped or absent right ventricle. Pulmonary atresia is the complete closure of the pulmonary valve. Ebstein's anomaly is the displacement of the septal leaflet of the tricuspid valve, causing a larger atrium and a smaller ventricle than normal.
History
Heart valves were first documented by Leonardo da Vinci over 500 years ago, through dissections of cow, pig, and human hearts. Da Vinci also performed in vivo studies on pigs, using small metallic tracers to analyze the movement of blood in the heart. He made wax casts of bull hearts and used them to construct glass models, with which he studied the hydraulic characteristics of blood flowing through the heart and heart valves; seeds suspended in the flow allowed him to visualize turbulence. This was done to build a circulation model that would mimic human circulation.
Function of heart valves
Artificial heart valve
Pericardial heart valves
Bjork–Shiley valve
Diazepam
Diazepam, sold under the brand name Valium among others, is a medicine of the benzodiazepine family that acts as an anxiolytic. It is used to treat a range of conditions, including anxiety, seizures, alcohol withdrawal syndrome, muscle spasms, insomnia, and restless legs syndrome. It may also be used to cause memory loss during certain medical procedures. It can be taken orally (by mouth), as a suppository inserted into the rectum, intramuscularly (injected into muscle), intravenously (injection into a vein) or used as a nasal spray. When injected intravenously, effects begin in one to five minutes and last up to an hour. When taken by mouth, effects begin after 15 to 60 minutes.
Common side effects include sleepiness and trouble with coordination. Serious side effects are rare. They include increased risk of suicide, decreased breathing, and an increased risk of seizures if used too frequently in those with epilepsy. Occasionally, excitement or agitation may occur. Long-term use can result in tolerance, dependence, and withdrawal symptoms on dose reduction. Abrupt stopping after long-term use can be potentially dangerous. After stopping, cognitive problems may persist for six months or longer. It is not recommended during pregnancy or breastfeeding. It works by increasing the effect of the neurotransmitter gamma-aminobutyric acid (GABA).
Diazepam was patented in 1959 by Hoffmann-La Roche. It has been one of the most frequently prescribed medications in the world since its launch in 1963. In the United States it was the best-selling medication between 1968 and 1982, selling more than 2 billion tablets in 1978 alone. In 2022, it was the 169th most commonly prescribed medication in the United States, with more than 3 million prescriptions. In 1985, the patent ended, and there are more than 500 brands available on the market. It is on the World Health Organization's List of Essential Medicines.
Structure, physical and chemical properties
Diazepam does not possess any chiral centers in its structure, but it does occur as two conformers, the 'P'-conformer and the 'M'-conformer. In solution, diazepam is an equimolar mixture of the two; CD spectra in serum protein solutions showed that the 'P'-conformer is preferred for α1-acid glycoprotein binding.
The drug diazepam occurs as a pale yellow-white crystalline powder without a distinctive smell and has a low molecular weight (MW = 284.74 g/mol). This classic aryl 1,4-benzodiazepine possesses three hydrogen bond acceptors and no hydrogen bond donors. Diazepam is moderately lipophilic, with a LogP (octanol–water partition coefficient) value of 2.82, and mildly hydrophilic, with a TPSA (topological polar surface area) value of 32.7 Ų. The LogP value indicates that diazepam tends to dissolve more readily in lipid-based environments, such as chloroform, acetone, ethanol, and ether, than in water. The TPSA value represents the collective surface area of polar atoms, such as oxygen and nitrogen, together with their attached hydrogen atoms, and implies that a segment of the molecule is polar or hydrophilic. A TPSA value of 32.7 Ų signifies a moderate level of polarity within the compound. TPSA is especially useful in medicinal chemistry because it indicates the ability of a molecule to permeate cells: molecules with a TPSA value smaller than 60–70 Ų generally permeate cells well. The balance between lipophilic and hydrophilic characteristics can affect many aspects of the molecule's behavior, including its solubility, absorption, distribution, metabolism, and interactions within biological systems.
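The TPSA permeability rule of thumb quoted above can be expressed as a minimal sketch; the 70 Ų threshold is the upper bound of the 60–70 Ų range given in the text, and the helper name is hypothetical:

```python
# Sketch of the permeability rule of thumb stated in the text:
# molecules with TPSA below roughly 60-70 Å² tend to permeate cells well.
# The diazepam value (TPSA 32.7 Å²) is taken from the text; the 70 Å²
# threshold is the upper bound of the quoted range.
def likely_permeable(tpsa_a2: float, threshold_a2: float = 70.0) -> bool:
    """Apply the TPSA cell-permeability heuristic from the text."""
    return tpsa_a2 < threshold_a2

diazepam_tpsa = 32.7  # Å², moderate polarity per the text
```

By this heuristic, diazepam falls comfortably inside the permeable range, consistent with its rapid membrane penetration noted later in the article.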
Diazepam is overall a stable molecule. The British Pharmacopoeia lists it as being very slightly soluble in water, soluble in alcohol, and freely soluble in chloroform. The United States Pharmacopoeia lists diazepam as soluble 1 in 16 of ethyl alcohol, 1 in 2 of chloroform, 1 in 39 of ether, and practically insoluble in water. The pH of diazepam itself is neutral (i.e., pH = 7), though the injectable form contains additives such as benzoic acid/benzoate. Diazepam has a shelf life of five years for oral tablets and three years for IV/IM solutions.
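The pharmacopoeial "1 in n" notation means 1 g of solute dissolving in n mL of solvent, so it converts directly to an approximate concentration; a minimal sketch using the USP figures above (function name is illustrative):

```python
# Converts pharmacopoeial "soluble 1 in n" notation (1 g of solute per
# n mL of solvent) to an approximate concentration in mg/mL.
def one_in_n_to_mg_per_ml(n: float) -> float:
    return 1000.0 / n

# Figures from the USP entry quoted in the text:
ethanol = one_in_n_to_mg_per_ml(16)    # ethyl alcohol
chloroform = one_in_n_to_mg_per_ml(2)
ether = one_in_n_to_mg_per_ml(39)
```

This makes the ordering explicit: diazepam is roughly eight times more soluble in chloroform than in ethanol, and far more soluble in either than in water.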
Diazepam is stored at room temperature (15–30 °C). The solution for parenteral injection is kept so that it is protected from light and kept from freezing. The oral forms are stored in air-tight containers and protected from light.
Diazepam can be absorbed into plastics, so liquid preparations are not kept in plastic bottles or syringes. For the same reason, it can leach into the plastic bags and tubing used for intravenous infusions. Absorption appears to depend on several factors, such as temperature, concentration, flow rate, and tube length. Diazepam should not be administered if a precipitate has formed that does not dissolve.
Medical uses
Diazepam is mainly used to treat anxiety, insomnia, panic attacks, and symptoms of acute alcohol withdrawal. It is also used as a premedication for inducing sedation, anxiolysis, or amnesia before certain medical procedures (e.g., endoscopy). In 2020, it was approved for use in the United States as a nasal spray to interrupt seizure activity in people with epilepsy. Diazepam is the most commonly used benzodiazepine for "tapering" benzodiazepine dependence due to the drug's comparatively long half-life, allowing for more efficient dose reduction. Benzodiazepines have a relatively low toxicity in overdose.
Diazepam has several uses, including:
Treatment of anxiety, panic attacks, and states of agitation
Treatment of neurovegetative symptoms associated with vertigo
Treatment of the symptoms of alcohol, opiate, and benzodiazepine withdrawal
Short-term treatment of insomnia
Treatment of muscle spasms
Treatment of tetanus, together with other measures of intensive treatment
Adjunctive treatment of spastic muscular paresis (paraplegia/tetraplegia) caused by cerebral or spinal cord conditions such as stroke, multiple sclerosis, or spinal cord injury (long-term treatment is coupled with other rehabilitative measures)
Palliative treatment of stiff person syndrome
Pre- or postoperative sedation, anxiolysis or amnesia (e.g., before endoscopic or surgical procedures)
Treatment of complications with stimulant overdoses and psychosis, such as cocaine or methamphetamine
Used in the treatment of organophosphate poisoning and reduces the risk of seizure-induced brain and cardiac damage.
Preventive treatment of oxygen toxicity during hyperbaric oxygen therapy
Dosages are typically determined on an individual basis, depending on the condition being treated, severity of symptoms, patient body weight, and any other conditions the person may have.
Seizures
Intravenous diazepam or lorazepam are first-line treatments for status epilepticus. However, intravenous lorazepam has advantages over intravenous diazepam, including a higher rate of terminating seizures and a more prolonged anticonvulsant effect. Diazepam gel was better than placebo gel in reducing the risk of non-cessation of seizures. Diazepam is rarely used for the long-term treatment of epilepsy because tolerance to its anticonvulsant effects usually develops within six to twelve months of treatment, effectively rendering it useless for that purpose.
The anticonvulsant effects of diazepam can help in the treatment of seizures due to a drug overdose or chemical toxicity as a result of exposure to sarin, VX, or soman (or other organophosphate poisons), lindane, chloroquine, physostigmine, or pyrethroids.
Diazepam is sometimes used intermittently for the prevention of febrile seizures that may occur in children under five years of age. Recurrence rates are reduced, but side effects are common and the decision to treat febrile seizures (which are benign in nature) with medication uses these factors as part of the evaluation. Long-term use of diazepam for the management of epilepsy is not recommended; however, a subgroup of individuals with treatment-resistant epilepsy benefit from long-term benzodiazepines, and for such individuals, clorazepate has been recommended due to its slower onset of tolerance to the anticonvulsant effects.
Alcohol withdrawal
Because of its relatively long duration of action, and evidence of safety and efficacy, diazepam is preferred over other benzodiazepines for the treatment of persons experiencing moderate to severe alcohol withdrawal. An exception is when a medication must be given intramuscularly, in which case either lorazepam or midazolam is recommended.
Other
Diazepam is used for the emergency treatment of eclampsia when IV magnesium sulfate and blood-pressure control measures have failed. Benzodiazepines do not have any pain-relieving properties themselves and are generally recommended to be avoided in individuals with pain. However, benzodiazepines such as diazepam can be used for their muscle-relaxant properties to alleviate pain caused by muscle spasms and various dystonias, including blepharospasm. Tolerance often develops to the muscle relaxant effects of benzodiazepines such as diazepam. Baclofen is sometimes used as an alternative to diazepam.
Availability
Diazepam is marketed in over 500 brands throughout the world. It is supplied in oral, injectable, inhalation, and rectal forms.
The United States military employs a specialized diazepam preparation known as Convulsive Antidote, Nerve Agent (CANA), which contains diazepam. One CANA kit is typically issued to service members, along with three Mark I NAAK kits, when operating in circumstances where chemical weapons in the form of nerve agents are considered a potential hazard. Both of these kits deliver drugs using autoinjectors. They are intended for use in "buddy aid" or "self-aid" administration of the drugs in the field before decontamination and delivery of the patient to definitive medical care.
Contraindications
Use of diazepam is avoided, when possible, in individuals with:
Ataxia
Severe hypoventilation
Acute narrow-angle glaucoma
Severe hepatic deficiencies (hepatitis and liver cirrhosis decrease elimination by a factor of two)
Severe renal deficiencies (for example, patients on dialysis)
Liver disorders
Severe sleep apnea
Severe depression, particularly when accompanied by suicidal tendencies
Psychosis
Pregnancy or breast feeding
Caution required in elderly or debilitated patients
Coma or shock
Abrupt discontinuation of therapy
Acute intoxication with alcohol, narcotics, or other psychoactive substances (with the exception of hallucinogens or some stimulants, where it is occasionally used as a treatment for overdose)
History of alcohol or drug dependence
Myasthenia gravis, an autoimmune disorder causing marked fatiguability
Hypersensitivity or allergy to any drug in the benzodiazepine class
Abuse and special populations
Benzodiazepine abuse and misuse are guarded against when benzodiazepines are prescribed to those with alcohol or drug dependencies or psychiatric disorders.
Pediatric patients
For patients less than 18 years of age, this treatment is usually not indicated, except for the treatment of epilepsy and pre- or postoperative treatment. The smallest possible effective dose is typically used for this group of patients.
Under 6 months of age, safety and effectiveness have not been established; diazepam is not given to those in this age group.
Elderly and very ill patients can experience apnea or cardiac arrest. Concomitant use of other central nervous system depressants increases this risk. The smallest possible effective dose is generally used for this group of people. The elderly metabolise benzodiazepines much more slowly than younger adults, and are also more sensitive to the effects of benzodiazepines, even at similar blood plasma levels. Doses of diazepam are recommended to be about half of those given to younger people, and treatment is limited to a maximum of two weeks. Long-acting benzodiazepines such as diazepam are not recommended for the elderly. Diazepam can also be dangerous in geriatric patients owing to a significantly increased risk of falls.
Intravenous or intramuscular injections in hypotensive people or those in shock is administered carefully and vital signs are closely monitored.
Benzodiazepines such as diazepam are lipophilic and rapidly penetrate membranes, so they rapidly cross the placenta with significant uptake of the drug. Use of benzodiazepines including diazepam in late pregnancy, especially at high doses, can result in floppy infant syndrome. Diazepam taken late in pregnancy, during the third trimester, carries a definite risk of a severe benzodiazepine withdrawal syndrome in the neonate, with symptoms ranging from hypotonia and reluctance to suck to apnoeic spells, cyanosis, and impaired metabolic responses to cold stress. Floppy infant syndrome and sedation in the newborn may also occur. Symptoms of floppy infant syndrome and the neonatal benzodiazepine withdrawal syndrome have been reported to persist from hours to months after birth.
Adverse effects
Benzodiazepines, such as diazepam, can cause anterograde amnesia, confusion, and sedation. The elderly are more prone to adverse effects of diazepam, such as confusion, amnesia, ataxia, hangover effects, and falls. Long-term use of benzodiazepines such as diazepam induces tolerance, dependency, and withdrawal syndrome. Like other benzodiazepines, diazepam impairs short-term memory and the learning of new information. Diazepam and other benzodiazepines can produce anterograde amnesia, but not retrograde amnesia: information learned before using benzodiazepines is not impaired. Short-term benzodiazepine use does not lead to tolerance, but the elderly remain more sensitive to benzodiazepines even then. Additionally, after stopping benzodiazepines, cognitive problems may last at least six months; it is unclear whether these problems last longer than six months or are permanent. Benzodiazepines may also cause or worsen depression. Infusions or repeated intravenous injections of diazepam, for example when managing seizures, may lead to drug toxicity, including respiratory depression, sedation, and hypotension. Drug tolerance may also develop to infusions of diazepam if it is given for longer than 24 hours. Sedatives and sleeping pills, including diazepam, have been associated with an increased risk of death.
In September 2020, the U.S. Food and Drug Administration (FDA) required the boxed warning be updated for all benzodiazepine medicines to describe the risks of abuse, misuse, addiction, physical dependence, and withdrawal reactions consistently across all the medicines in the class.
Diazepam has a range of side effects common to most benzodiazepines, including:
Suppression of REM sleep and slow wave sleep
Impaired motor function
Impaired coordination
Impaired balance
Dizziness
Reflex tachycardia
Less commonly, paradoxical reactions can occur, including nervousness, irritability, excitement, worsening of seizures, insomnia, muscle cramps, changes in libido, and in some cases, rage and violence. These adverse reactions are more likely to occur in children, the elderly, and individuals with a history of a substance use disorder, such as an alcohol use disorder, or a history of aggressive behavior. In some people, diazepam may increase the propensity toward self-harming behavior and, in extreme cases, may provoke suicidal tendencies or acts. Very rarely dystonia can occur.
Diazepam may impair the ability to drive vehicles or operate machinery. The impairment is worsened by the consumption of alcohol because both act as central nervous system depressants.
During therapy, tolerance to the sedative effects usually develops, but not to the anxiolytic and myorelaxant effects.
Patients with severe attacks of apnea during sleep may experience respiratory depression (hypoventilation), leading to respiratory arrest and death.
Diazepam in doses of or more causes significant deterioration in alertness performance combined with increased feelings of sleepiness.
Tolerance and withdrawal
Diazepam, as with other benzodiazepine drugs, can cause tolerance, physical dependence, substance use disorder, and benzodiazepine withdrawal syndrome. Withdrawal from diazepam or other benzodiazepines often leads to withdrawal symptoms similar to those seen during barbiturate or alcohol withdrawal. The higher the dose and the longer the drug is taken, the greater the risk of experiencing unpleasant withdrawal symptoms.
Withdrawal symptoms can occur from standard dosages and also after short-term use, and can range from insomnia and anxiety to more serious symptoms, including seizures and psychosis. Withdrawal symptoms can sometimes resemble pre-existing conditions and be misdiagnosed. Diazepam may produce less intense withdrawal symptoms due to its long elimination half-life.
Benzodiazepine treatment is recommended to be discontinued as soon as possible by a slow and gradual dose reduction regimen. Tolerance develops to the therapeutic effects of benzodiazepines; for example, tolerance occurs to the anticonvulsant effects and as a result benzodiazepines are not generally recommended for the long-term management of epilepsy. Dose increases may overcome the effects of tolerance, but tolerance may then develop to the higher dose and adverse effects may increase. The mechanism of tolerance to benzodiazepines includes uncoupling of receptor sites, alterations in gene expression, down-regulation of receptor sites, and desensitisation of receptor sites to the effect of GABA. About one-third of individuals who take benzodiazepines for longer than four weeks become dependent and experience withdrawal syndrome on cessation.
Differences in rates of withdrawal (50–100%) vary depending on the patient sample. For example, a random sample of long-term benzodiazepine users typically finds around 50% experience few or no withdrawal symptoms, with the other 50% experiencing notable withdrawal symptoms. Certain select patient groups show a higher rate of notable withdrawal symptoms, up to 100%.
Rebound anxiety, more severe than baseline anxiety, is also a common withdrawal symptom when discontinuing diazepam or other benzodiazepines. Diazepam is therefore only recommended for short-term therapy at the lowest possible dose, owing to the risk of severe withdrawal problems from low doses even after gradual reduction. The risk of pharmacological dependence on diazepam is significant, and tolerance to its anticonvulsant effects occurs frequently in humans.
Dependence
Improper or excessive use of diazepam can lead to dependence. Those at particularly high risk for diazepam misuse, substance use disorder, or dependence are:
People with a history of a substance use disorder or substance dependence. Diazepam increases craving for alcohol in problem alcohol consumers.
People with severe personality disorders, such as borderline personality disorder
Patients from the aforementioned groups are monitored very closely during therapy for signs of abuse and development of dependence. Therapy is recommended to be discontinued if any of these signs are noted. If dependence has developed, therapy is still discontinued gradually to avoid severe withdrawal symptoms. Long-term therapy in such instances is not recommended.
People suspected of being dependent on benzodiazepine drugs are very gradually tapered off the drug. Withdrawals can be life-threatening, particularly when excessive doses have been taken for extended periods. Therefore, equal prudence is used whether dependence has occurred in therapeutic or recreational contexts.
Diazepam is seen as a good choice for tapering for those using high doses of other benzodiazepines since it has a long half-life thus withdrawal symptoms are tolerable. The process is very slow (usually from 14 to 28 weeks) but is considered safe when done appropriately.
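The kind of slow, gradual dose-reduction regimen described above can be sketched as simple arithmetic. This is purely illustrative (the 10% step, 20 mg starting dose, and 1 mg floor are assumptions, not figures from the text), and it is not clinical guidance:

```python
# Purely illustrative arithmetic, not clinical guidance: generates the kind
# of slow, gradual dose-reduction schedule described in the text, cutting
# the dose by a fixed fraction at each step until it falls below a floor.
# The 10% step and the starting/floor doses are assumptions for illustration.
def taper_schedule(start_mg: float, step_fraction: float = 0.10,
                   floor_mg: float = 1.0) -> list:
    doses = [start_mg]
    while doses[-1] * (1 - step_fraction) >= floor_mg:
        doses.append(round(doses[-1] * (1 - step_fraction), 2))
    return doses

schedule = taper_schedule(20.0)
```

With one step every two weeks, a schedule of this length spans roughly the 14-to-28-week range the text mentions, which is why the reduction must be so gradual.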
Overdose
An individual who has consumed too much diazepam typically displays one or more of these symptoms within approximately four hours of a suspected overdose:
Drowsiness
Mental confusion
Hypotension
Impaired motor function
Impaired reflexes
Impaired coordination
Impaired balance
Dizziness
Coma
Although not usually fatal when taken alone, a diazepam overdose is considered a medical emergency and generally requires the immediate attention of medical personnel. The antidote for an overdose of diazepam (or any other benzodiazepine) is flumazenil (Anexate). This drug is only used in cases with severe respiratory depression or cardiovascular complications. Because flumazenil is a short-acting drug, and the effects of diazepam can last for days, several doses of flumazenil may be necessary. Artificial respiration and stabilization of cardiovascular functions may also be necessary. Though not routinely indicated, activated charcoal can be used for decontamination of the stomach following a diazepam overdose. Emesis is contraindicated. Dialysis is minimally effective. Hypotension may be treated with levarterenol or metaraminol.
The oral LD50 (lethal dose in 50% of the population) of diazepam is in mice and in rats. D. J. Greenblatt and colleagues reported in 1978 on two patients who had taken and of diazepam, respectively; both went into moderately deep comas and were discharged within 48 hours without having experienced any significant complications, despite high concentrations of diazepam and its metabolites desmethyldiazepam, oxazepam, and temazepam in samples taken in the hospital and at follow-up.
Overdoses of diazepam with alcohol, opiates, or other depressants may be fatal.
Interactions
If diazepam is administered concomitantly with other drugs, it is recommended that attention be paid to the possible pharmacological interactions. Particular care is taken with drugs that potentiate the effects of diazepam, such as barbiturates, phenothiazines, opioids, and antidepressants.
Diazepam does not increase or decrease hepatic enzyme activity and does not alter the metabolism of other compounds. No evidence has suggested that diazepam alters its metabolism with chronic administration.
Agents with an effect on hepatic cytochrome P450 pathways or conjugation can alter the rate of diazepam metabolism. These interactions would be expected to be most significant with long-term diazepam therapy, and their clinical significance is variable.
Diazepam increases the central depressive effects of alcohol, other hypnotics/sedatives (e.g., barbiturates), other muscle relaxants, certain antidepressants, sedative antihistamines, opioids, and antipsychotics, as well as anticonvulsants such as phenobarbital, phenytoin, and carbamazepine. The euphoriant effects of opioids may be increased, leading to an increased risk of psychological dependence.
Cimetidine, omeprazole, oxcarbazepine, ticlopidine, topiramate, ketoconazole, itraconazole, disulfiram, fluvoxamine, isoniazid, erythromycin, probenecid, propranolol, imipramine, ciprofloxacin, fluoxetine, and valproic acid prolong the action of diazepam by inhibiting its elimination.
Alcohol in combination with diazepam may cause a synergistic enhancement of the hypotensive properties of benzodiazepines and alcohol.
Oral contraceptives significantly decrease the elimination of desmethyldiazepam, a major metabolite of diazepam.
Rifampin, phenytoin, carbamazepine, and phenobarbital increase the metabolism of diazepam, thus decreasing drug levels and effects. Dexamethasone and St John's wort also increase the metabolism of diazepam.
Diazepam increases the serum levels of phenobarbital.
Nefazodone can cause increased blood levels of benzodiazepines.
Cisapride may enhance the absorption, and therefore the sedative activity, of diazepam.
Small doses of theophylline may inhibit the action of diazepam.
Diazepam may block the action of levodopa (used in the treatment of Parkinson's disease).
Diazepam may alter digoxin serum concentrations.
Other drugs that may have interactions with diazepam include antipsychotics (e.g. chlorpromazine), MAO inhibitors, and ranitidine.
Because it acts on the GABA receptor, the herb valerian may produce an adverse effect.
Foods that acidify the urine can lead to faster absorption and elimination of diazepam, reducing drug levels and activity.
Foods that alkalinize the urine can lead to slower absorption and elimination of diazepam, increasing drug levels and activity.
Reports conflict as to whether food in general has any effects on the absorption and activity of orally administered diazepam.
Pharmacology
Diazepam is a long-acting "classical" benzodiazepine. Other classical benzodiazepines include chlordiazepoxide, clonazepam, lorazepam, oxazepam, nitrazepam, temazepam, flurazepam, bromazepam, and clorazepate. Diazepam has anticonvulsant properties. Benzodiazepines act via micromolar benzodiazepine binding sites as calcium channel blockers and significantly inhibit depolarization-sensitive calcium uptake in rat nerve cell preparations.
Diazepam inhibits acetylcholine release in mouse hippocampal synaptosomes. This has been found by measuring sodium-dependent high-affinity choline uptake in mouse brain cells in vitro, after pretreatment of the mice with diazepam in vivo. This may play a role in explaining diazepam's anticonvulsant properties.
Diazepam binds with high affinity to glial cells in animal cell cultures. Diazepam at high doses has been found to decrease histamine turnover in mouse brain via diazepam's action at the benzodiazepine-GABA receptor complex. Diazepam also decreases prolactin release in rats.
Mechanism of action
Benzodiazepines are positive allosteric modulators of the GABA type A receptors (GABAA). The GABAA receptors are ligand-gated chloride-selective ion channels that are activated by GABA, the major inhibitory neurotransmitter in the brain. The binding of benzodiazepines to this receptor complex promotes the binding of GABA, which in turn increases the total conduction of chloride ions across the neuronal cell membrane. This increased chloride ion influx hyperpolarizes the neuron's membrane potential. As a result, the difference between resting potential and threshold potential is increased and firing is less likely. As a result, the arousal of the cortical and limbic systems in the central nervous system is reduced.
The GABAA receptor is a heteromer composed of five subunits, the most common ones being two αs, two βs, and one γ (α2β2γ). For each subunit, many subtypes exist (α1–6, β1–3, and γ1–3). GABAA receptors containing the α1 subunit mediate the sedative, the anterograde amnesic, and partly the anticonvulsive effects of diazepam. GABAA receptors containing α2 mediate the anxiolytic actions and to a large degree the myorelaxant effects. GABAA receptors containing α3 and α5 also contribute to benzodiazepines' myorelaxant actions, whereas GABAA receptors comprising the α5 subunit were shown to modulate the temporal and spatial memory effects of benzodiazepines. Diazepam is not the only drug to target these GABAA receptors. Drugs such as flumazenil also bind to GABAA to induce their effects.
Diazepam appears to act on areas of the limbic system, thalamus, and hypothalamus, inducing anxiolytic effects. Benzodiazepine drugs including diazepam increase the inhibitory processes in the cerebral cortex.
The anticonvulsant properties of diazepam and other benzodiazepines may be in part or entirely due to binding to voltage-dependent sodium channels rather than GABAA receptors. Sustained repetitive firing seems limited by benzodiazepines' effect of slowing recovery of sodium channels from inactivation.
The muscle relaxant properties of diazepam are produced via inhibition of polysynaptic pathways in the spinal cord.
Pharmacokinetics
Diazepam can be administered orally, intravenously (it is always diluted, as it is painful and damaging to veins), intramuscularly (IM), or as a suppository.
The onset of action is one to five minutes for IV administration and 15–30 minutes for IM administration. The duration of diazepam's peak pharmacological effects is 15 minutes to one hour for both routes of administration. The half-life of diazepam, in general, is 30–56 hours. Peak plasma levels occur between 30 and 90 minutes after oral administration and between 30 and 60 minutes after intramuscular administration; after rectal administration, peak plasma levels occur after 10 to 45 minutes. Diazepam is highly plasma protein-bound, with 96–99% of the absorbed drug being protein-bound. The distribution half-life of diazepam is two to 13 minutes.
Diazepam is highly lipid-soluble and is widely distributed throughout the body after administration. It easily crosses both the blood–brain barrier and the placenta, and is excreted into breast milk. After absorption, diazepam is redistributed into muscle and adipose tissue. Continual daily doses of diazepam quickly build to a high concentration in the body (mainly in adipose tissue), far above the actual dose for any given day.
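Because elimination is first order and the half-life far exceeds the 24-hour dosing interval, repeated daily doses accumulate as a geometric series. A minimal sketch, assuming a single-compartment model with a 43-hour half-life (mid-range of the 30–56 hours given above) and arbitrary dose units:

```python
import math

# Rough first-order accumulation sketch in arbitrary dose units.
# Assumes a single-compartment model and a 43 h half-life (midpoint of
# the 30-56 h range above); real kinetics are biphasic with active
# metabolites, so this illustrates only the build-up, not actual levels.

HALF_LIFE_H = 43.0
K = math.log(2) / HALF_LIFE_H            # first-order elimination constant

def level_after_daily_doses(n_days, dose=1.0, interval_h=24.0):
    """Body level immediately after the n-th once-daily dose."""
    r = math.exp(-K * interval_h)        # fraction remaining after one day
    # each earlier dose has decayed by a factor of r per elapsed day,
    # so the total is the geometric series dose * (1 + r + ... + r^(n-1))
    return dose * (1 - r ** n_days) / (1 - r)

print(round(level_after_daily_doses(1), 2))   # first dose
print(round(level_after_daily_doses(14), 2))  # after two weeks of daily dosing
```

With these assumptions, body levels plateau at roughly three times a single dose within a couple of weeks, consistent with the accumulation described above.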
Diazepam is stored preferentially in some organs, including the heart. Absorption by any administered route and the risk of accumulation is significantly increased in the neonate, and withdrawal of diazepam during pregnancy and breastfeeding is clinically justified.
Diazepam undergoes oxidative metabolism by demethylation (CYP2C9, 2C19, 2B6, 3A4, and 3A5), hydroxylation (CYP3A4 and 2C19) and glucuronidation in the liver as part of the cytochrome P450 enzyme system. It has several pharmacologically active metabolites. The main active metabolite of diazepam is desmethyldiazepam (also known as nordazepam or nordiazepam). Its other active metabolites include the minor active metabolites temazepam and oxazepam. These metabolites are conjugated with glucuronide and are excreted primarily in the urine. Because of these active metabolites, the serum values of diazepam alone are not useful in predicting the effects of the drug. Diazepam has a biphasic half-life of about one to three days and two to seven days for the active metabolite desmethyldiazepam. Most of the drug is metabolized; very little diazepam is excreted unchanged. The elimination half-life of diazepam and also the active metabolite desmethyldiazepam increases significantly in the elderly, which may result in prolonged action, as well as accumulation of the drug during repeated administration.
Synthesis
The synthesis of diazepam was first achieved through a reaction pathway developed by Leo Sternbach and his team at Hoffmann-La Roche in the late 1950s.
Sternbach's method commenced with 2-amino-5-chlorobenzophenone, which undergoes cyclocondensation with glycine ethyl ester hydrochloride to construct the benzodiazepine core. This core is subsequently alkylated at the nitrogen in the 1-position using dimethyl sulfate in the presence of sodium methoxide and methanol under reflux conditions. Although the direct transformation from 2-amino-5-chlorobenzophenone to nordazepam is conceptually straightforward, an alternative approach involving the treatment of 2-amino-5-chlorobenzophenone with chloroacetyl chloride, followed by ammoniation and heating, gives nordazepam in enhanced yield and facilitates easier purification.
Detection in body fluids
Diazepam may be quantified in blood or plasma to confirm a diagnosis of poisoning in hospitalized patients, provide evidence in an impaired driving arrest, or to assist in a medicolegal death investigation. Blood or plasma diazepam concentrations are usually in a range of in persons receiving the drug therapeutically. Most commercial immunoassays for the benzodiazepine class of drugs cross-react with diazepam, but confirmation and quantitation are usually performed using chromatographic techniques.
Environmental
Diazepam is commonly found as an environmental contaminant near human settlements.
History
Diazepam was the second benzodiazepine invented by Leo Sternbach of Hoffmann-La Roche at the company's Nutley, New Jersey, facility following chlordiazepoxide (Librium), which was approved for use in 1960. Released in 1963 as an improved version of Librium, diazepam became incredibly popular, helping Roche to become a pharmaceutical industry giant. It is 2.5 times more potent than its predecessor, which it quickly surpassed in terms of sales. After this initial success, other pharmaceutical companies began to introduce other benzodiazepine derivatives.
The benzodiazepines gained popularity among medical professionals as an improvement over barbiturates, which have a comparatively narrow therapeutic index, and are far more sedative at therapeutic doses. The benzodiazepines are also far less dangerous; death rarely results from diazepam overdose, except in cases where it is consumed with large amounts of other depressants (such as alcohol or opioids). Benzodiazepine drugs such as diazepam initially had widespread public support, but with time the view changed to one of growing criticism and calls for restrictions on their prescription.
Marketed by Roche using an advertising campaign conceived by the William Douglas McAdams Agency under the leadership of Arthur Sackler, diazepam was the top-selling pharmaceutical in the United States from 1969 to 1982, with peak annual sales in 1978 of 2.3 billion tablets. Diazepam, along with oxazepam, nitrazepam and temazepam, represents 82% of the benzodiazepine market in Australia. While psychiatrists continue to prescribe diazepam for the short-term relief of anxiety, neurology has taken the lead in prescribing diazepam for the palliative treatment of certain types of epilepsy and spastic activity, for example, forms of paresis. It is also the first line of defense for a rare disorder called stiff-person syndrome.
Society and culture
Recreational use
Diazepam is a medication with a high risk of misuse and can cause drug dependence. Urgent action by national governments has been recommended to improve prescribing patterns of benzodiazepines such as diazepam. A single dose of diazepam modulates the dopamine system in similar ways to how morphine and alcohol modulate the dopaminergic pathways.
Between 50 and 64% of rats will self-administer diazepam.
Diazepam can substitute for the behavioral effects of barbiturates in a primate study.
Diazepam has been found as an adulterant in heroin.
Diazepam drug misuse can occur either through recreational misuse where the drug is taken to achieve a high or when the drug is continued long term against medical advice.
Sometimes, it is used by stimulant users to "come down" and sleep and to help control the urge to binge. These users often escalate the dosage to between 2 and 25 times the therapeutic dose.
A large-scale study in the US, conducted by SAMHSA using data from 2011, determined that benzodiazepines were present in 28.7% of emergency department visits involving nonmedical use of pharmaceuticals. In this regard, benzodiazepines are second only to opiates, which the study found in 39.2% of visits. About 29.3% of drug-related suicide attempts involve benzodiazepines, making them the most frequently represented class in drug-related suicide attempts. Males misuse benzodiazepines as commonly as females.
Diazepam was detected in 26% of cases of people suspected of driving under the influence of drugs in Sweden and its active metabolite nordazepam was detected in 28% of cases. Other benzodiazepines, zolpidem, and zopiclone also were found in high numbers. Many drivers had blood levels far exceeding the therapeutic dose range, suggesting a high degree of potential for misuse of benzodiazepines, zolpidem, and zopiclone. In Northern Ireland, in cases where drugs were detected in samples from impaired drivers who were not impaired by alcohol, benzodiazepines were found in 87% of cases. Diazepam was the most commonly detected benzodiazepine.
Legal status
Diazepam is regulated as a prescription medication:
International
Diazepam is a Schedule IV controlled drug under the Convention on Psychotropic Substances.
UK
Classified as a controlled drug, listed under Schedule IV, Part I (CD Benz POM) of the Misuse of Drugs Regulations 2001, allowing possession with a valid prescription. The Misuse of Drugs Act 1971 makes it illegal to possess the drug without a prescription, and for such purposes, it is classified as a Class C drug.
Germany
Classified as a prescription drug, or in high dosage as a restricted drug (Betäubungsmittelgesetz, Anlage III).
Australia
Diazepam is a Schedule 4 substance under the Poisons Standard (June 2018). A Schedule 4 drug is outlined in the Poisons Act 1964 as, "Substances, the use or supply of which should be by or on the order of persons permitted by State or Territory legislation to prescribe and should be available from a pharmacist on prescription".
United States
Diazepam is controlled as a Schedule IV substance.
Judicial executions
The states of California and Florida offer diazepam to condemned inmates as a pre-execution sedative as part of their lethal injection program, although the state of California has not executed a prisoner since 2006. In August 2018, Nebraska used diazepam as part of the drug combination used to execute Carey Dean Moore, the first death row inmate executed in Nebraska in over 21 years.
Veterinary uses
Diazepam is used as a short-term sedative and anxiolytic for cats and dogs, sometimes used as an appetite stimulant. It can also be used to stop seizures in dogs and cats.
Yawn
A yawn is a reflex in vertebrate animals characterized by a long inspiratory phase with gradual mouth gaping, followed by a brief climax (or acme) with muscle stretching, and a rapid expiratory phase with muscle relaxation, which typically lasts a few seconds. For fish and birds, this is described as gradual mouth gaping, staying open for at least three seconds and subsequently a rapid closure of the mouth. Almost all vertebrate animals, including mammals, birds, reptiles, amphibians, and even fish, experience yawning. The study of yawning is called chasmology.
Yawning (oscitation) most often occurs in adults immediately before and after sleep, during tedious activities and as a result of its contagious quality. It is commonly associated with tiredness, stress, sleepiness, boredom, or even hunger. In humans, yawning is often triggered by the perception that others are yawning (for example, seeing a person yawning, or talking to someone on the phone who is yawning). This is a typical example of echopraxia and positive feedback. This "contagious" yawning has also been observed in chimpanzees, dogs, cats, birds, and reptiles and can occur between members of different species. Approximately twenty psychological reasons for yawning have been proposed by scholars but there is little agreement on the primacy of any one.
During a yawn, muscles around the airway are fully stretched, including chewing and swallowing muscles. Due to these strong repositioning muscle movements, the airway (lungs and throat) dilates to three or four times its original size. The tensor tympani muscle in the middle ear contracts, which creates a rumbling noise perceived as coming from within the head; however, the noise is due to mechanical disturbance of the hearing apparatus and is not generated by the motion of air. Yawning is sometimes accompanied, in humans and other animals, by an instinctive act of stretching several parts of the body including the arms, neck, shoulders and back.
In humans, the nostrils often visibly dilate involuntarily during yawning.
Etymology
The English yawn continues a number of Middle English forms: from Old English , and from Old English frequentatives , from a Germanic root *gīn-. The Germanic root has Proto-Indo-European cognates, from a root found also with -n- suffix in Greek ('to yawn'), and without the -n- in English gap (compare the figura etymologica in Norse ), gum ('palate') and gasp (via Old Norse), Latin , and Greek .
The Latin term used in medicine is (anglicized as oscitation), from the verb oscito ('to open the mouth').
Pandiculation is the act of yawning and stretching simultaneously.
Proposed causes
There are a number of theories that attempt to explain why humans and other animals yawn.
One study states that yawning occurs when one's blood contains increased amounts of carbon dioxide and therefore needs the influx of oxygen (or expulsion of carbon dioxide) that a yawn can provide. However, yawning may actually reduce oxygen intake compared to normal respiration, and the frequency of yawning is not decreased by providing more oxygen or reducing carbon dioxide in the air.
Animals subject to predation or other dangers must be ready to physically exert themselves at any given moment. At least one study suggests that yawning, especially psychological "contagious" yawning, may have developed as a way of keeping a group of animals alert. If an animal is drowsy or bored, it will be less alert than when fully awake and less prepared to spring into action. "Contagious" yawning could be an instinctual signal between group members to stay alert.
Nervousness, which often indicates the perception of an impending need for action, has also been suggested as a cause. Anecdotal evidence suggests that yawning helps increase a person's alertness. Paratroopers have been noted to yawn during the moments before they exit their aircraft and athletes often yawn just before intense exertions.
Another notion states that yawning is the body's way of controlling brain temperature. In 2007, researchers, including a professor of psychology from the SUNY Albany (US), proposed yawning may be a means to keep the brain cool. Mammalian brains operate best within a narrow temperature range. In two experiments, subjects with cold packs attached to their foreheads and subjects asked to breathe strictly nasally exhibited reduced contagious yawning when watching videos of people yawning.
A similar hypothesis suggests yawning is used for regulation of body temperature. Similarly, Guttmann and Dopart (2011) found that when a subject wearing earplugs yawns, the air moving between the subject's ear and the environment causes a breeze to be heard. Guttmann and Dopart determined that a yawn causes one of three possible situations to occur: the brain cools down due to an influx or outflux of oxygen; pressure in the brain is reduced by an outflux of oxygen; or the pressure of the brain is increased by an influx of air caused by increased cranial space.
One review hypothesized that yawning's goal is to periodically stretch the muscles of the throat, which may be important for efficient vocalization, swallowing, chewing, and also keeping the airway wide.
Yawning behavior may be altered as a result of medical issues such as diabetes, stroke, or adrenal conditions. Excessive yawning is seen in immunosuppressed patients such as those with multiple sclerosis. A professor of clinical and forensic neuropsychology at Bournemouth University has demonstrated that cortisol levels rise during yawning.
Social function
With respect to a possible evolutionary advantage, yawning might be a herd instinct. Theories suggest that the yawn serves to synchronize mood in gregarious animals, similar to howling in a wolf pack. It signals fatigue among members of a group in order to synchronize sleeping patterns and periods.
Research by Garrett Norris (2013) involving monitoring the behaviour of students kept waiting in a reception area indicates a connection (supported by neuro-imaging research) between empathic ability and yawning. "We believe that contagious yawning indicates empathy. It indicates an appreciation of other peoples' behavioral and physiological state," says Norris.
The yawn reflex has long been observed to be contagious. In 1508, Erasmus wrote, "One man's yawning makes another yawn", and the French proverbialized the idea to ('One good gaper makes seven others gape'). Often, if one person yawns, this may cause another person to "empathetically" yawn. Observing another person's yawning face (especially their eyes), reading or thinking about yawning, or looking at a yawning picture can cause a person to yawn. The proximate cause for contagious yawning may lie with mirror neurons in the frontal cortex of certain vertebrates, which, upon being exposed to a stimulus from conspecific (same species) and occasionally interspecific organisms, activates the same regions in the brain. Mirror neurons have been proposed as a driving force for imitation, which lies at the root of much human learning, such as language acquisition. Yawning may be an offshoot of the same imitative impulse.
A 2007 study found that young children with autism spectrum disorders do not increase their yawning frequency after seeing videos of other people yawning, in contrast to non-autistic children. In fact, the autistic children actually yawned less during the videos of yawning than during the control videos.
The relationship between yawn contagion and empathy is strongly supported by a 2011 behavioural study, conducted by Ivan Norscia and Elisabetta Palagi (University of Pisa, Italy). The study revealed that—among other variables such as nationality, gender, and sensory modality—only social bonding predicted the occurrence, frequency, and latency of yawn contagion. As with other measures of empathy, the rate of contagion was found to be greatest in response to kin, then friends, then acquaintances, and lastly strangers. Related individuals (r≥0.25) showed the greatest contagion, in terms of both occurrence of yawning and frequency of yawns. Strangers and acquaintances showed a longer delay in the yawn response (latency) compared to friends and kin. Hence, yawn contagion appears to be primarily driven by the emotional closeness between individuals. The social asymmetry in contagious yawning (with contagious yawning being more frequent between familiar subjects than between strangers) remains when only yawns that are heard, but not seen, are considered. This finding makes it unlikely that visual attentional biases are at the basis of the social asymmetry observed in contagious yawning.
Two classes of yawning have been observed among primates. In some cases, the yawn is used as a threat gesture as a way of maintaining order in the primates' social structure. Specific studies were conducted on chimpanzees and stumptail macaques. A group of these animals was shown a video of other members of their own species yawning; both species yawned as well. This helps to partly confirm a yawn's "contagiousness".
The Discovery Channel's show MythBusters also tested this concept. In their small-scale, informal study they concluded that yawning is contagious, although elsewhere the statistical significance of this finding has been disputed.
Gordon Gallup, who hypothesizes that yawning may be a means of keeping the brain cool, also hypothesizes that "contagious" yawning may be a survival instinct inherited from our evolutionary past. "During human evolutionary history, when we were subject to predation and attacks by other groups, if everybody yawns in response to seeing someone yawn the whole group becomes much more vigilant and much better at being able to detect danger."
A study by the University of London has suggested that the "contagiousness" of yawns by a human will pass to dogs. The study observed that 21 of 29 dogs yawned when a stranger yawned in front of them but did not yawn when the stranger only opened his mouth.
Helt and Eigsti (2010) showed that dogs, like humans, develop a susceptibility to contagious yawning gradually, and that while dogs above seven months 'catch' yawns from humans, younger dogs are immune to contagion. The study also indicated that nearly half of the dogs responded to the human's yawn by becoming relaxed and sleepy, suggesting that the dogs copied not just the yawn, but also the physical state that yawns typically reflect.
Relation to empathy
In a study involving gelada baboons, yawning was contagious between individuals, especially those that were socially close. This suggests that emotional proximity rather than spatial proximity is an indicator of yawn contagion.
Evidence for the occurrence of contagious yawning linked to empathy is rare outside of primates. It has been studied in Canidae species, such as the domestic dog and wolf. Domestic dogs have shown the ability to yawn contagiously in response to human yawns. Domestic dogs have demonstrated they are skilled at reading human communication behaviours. This ability makes it difficult to ascertain whether yawn contagion among domestic dogs is deeply rooted in their evolutionary history or is a result of domestication.
In a 2014 study, wolves were observed in an effort to answer this question. The results of the study showed that wolves are capable of yawn contagion. This study also found that the social bond strength between individuals affected the frequency of contagious yawning in wolves, supporting previous research which ties contagious yawning to emotional proximity.
Some evidence for contagious yawning has also been found in budgerigars (Melopsittacus undulatus), a species of social parrots. This indicates that contagious yawning may have evolved several times in different lineages. In budgerigars, contagious yawning does not seem to be related to social closeness.
In certain neurological and psychiatric disorders, such as schizophrenia and autism, the patient has an impaired ability to infer the mental states of others. In such cases, yawn contagion can be used to evaluate their ability to infer or empathize with others. Autism spectrum disorder (ASD) is a developmental disorder which severely affects social and communicative development, including empathy. Various studies have shown a diminished susceptibility to contagious yawning in children with ASD compared to control groups of typically developing children. Since atypical development of empathy is reported in autism spectrum disorder, these results support the claim that contagious yawning and the capacity for empathy share common neural and cognitive mechanisms. Similarly, patients with neurological and psychiatric conditions, such as schizophrenia, have shown an impaired ability to empathize with others. Contagious yawning is one means of evaluating such disorders. The Canadian psychiatrist Heinz Lehmann claimed that increases in yawning could predict recovery in schizophrenia. The impairment of contagious yawning can provide greater insight into its connection to the underlying causes of empathy.
There is still substantial disagreement in the existing literature about whether or not yawn contagion is related to empathy at all. Empathy is a notoriously difficult trait to measure, and the literature on the subject is mixed, with the same species sometimes displaying a connection between contagious yawning and social closeness, and sometimes apparently not. Different experimenters typically use slightly different measures of empathy, making comparisons between studies difficult, and there may be a publication bias, whereby studies that find a significant correlation between the two tested variables are more likely to be published than studies that do not. Critically reviewing the literature for and against yawn contagion as an empathy-related phenomenon, a 2020 review showed that the social and emotional relevance of the stimulus (based on who the yawner is) can be related to the levels of yawn contagion, as suggested by neurobiological, ethological, and psychological findings. The discussion therefore remains open.
Non-human
Mammals, birds, and other vertebrates yawn.
In animals, yawning can serve as a warning signal. Charles Darwin's book, The Expression of the Emotions in Man and Animals, mentions that baboons yawn to threaten their enemies, possibly by displaying large canine teeth. Similarly, Siamese fighting fish yawn only when they see a conspecific (same species) or their own mirror-image, and their yawn often accompanies aggressive attack. Guinea pigs also yawn in a display of dominance or anger, displaying their impressive incisor teeth. This is often accompanied by teeth chattering, purring and scent marking.
Adelie penguins employ yawning as part of their courtship ritual. Penguin couples face off and the males engage in what is described as an "ecstatic display", opening their beaks and pointing their faces skyward. This trait has also been seen among emperor penguins. Researchers have been attempting to discover why these two different species share this trait, despite not sharing a habitat. Snakes yawn, both to realign their jaws after a meal and for respiratory reasons, as their trachea can be seen to expand when they do this. Dogs, and occasionally cats, often yawn after seeing people yawn and when they feel uncertain. Dogs demonstrate contagious yawning when exposed to human yawning. Dogs are very adept at reading human communication actions, so it is unclear if this phenomenon is rooted in evolutionary history or a result of domestication. Fish can also yawn, and they will increase this behavior when experiencing a lack of oxygen. Socially contagious yawning has been observed in budgerigars, and anecdotally when tired in other parrot species.
Culture
Some cultures lend yawning moral or spiritual significance. An open mouth has been associated with letting good immaterial things (such as the soul) escape or letting bad ones (evil spirits) enter, and yawning may have been thought to increase these risks. Covering the mouth when yawning may have been a way to prevent such transmission. Exorcists believe yawning can indicate that a demon or possessive spirit is leaving its human host during the course of an exorcism. Thus, covering one's mouth has been conceived as a protective measure against this.
Yawning has also been described as disrespectful (when done before others) or improper (when done alone). For example, in his commentary on Al-Bukhari's hadith collection, Ibn Hajar, an Islamic theologian, mentions that yawning, in addition to its risks of letting demons enter or take hold of one's body, is unbefitting for humans as it makes them look and sound like dogs by crooking men's upright posture and making them howl:
Superstitions regarding the act of yawning may have arisen from concerns over public health. Polydore Vergil (–1555), in his De Rerum Inventoribus, writes that it was customary to make the Sign of the Cross over one's mouth, since "alike deadly plague was sometime in yawning, wherefore men used to fence themselves with the sign of the cross... which custom we retain at this day."
Yawning is often perceived as implying boredom, and yawning conspicuously in another's presence has historically been a faux pas. In 1663 Francis Hawkins advised, "In yawning howl not, and thou shouldst abstain as much as thou can to yawn, especially when thou speakest." George Washington said, "If You Cough, Sneeze, Sigh, or Yawn, do it not Loud but Privately; and Speak not in your Yawning, but put Your handkerchief or Hand before your face and turn aside." These customary beliefs persist in the modern age. One of Mason Cooley's aphorisms is "A yawn is more disconcerting than a contradiction." A loud yawn may even lead to penalties for contempt of court.
Loriini (https://en.wikipedia.org/wiki/Loriini)
Loriini is a tribe of small to medium-sized arboreal parrots characterized by their specialized brush-tipped tongues for feeding on nectar of various blossoms and soft fruits, preferably berries. The species form a monophyletic group within the parrot family Psittaculidae. The group consists of the lories and lorikeets. Traditionally, they were considered a separate subfamily (Loriinae) from the other subfamily (Psittacinae) based on the specialized characteristics, but recent molecular and morphological studies show that the group is positioned in the middle of various other groups. They are widely distributed throughout the Australasian region, including south-eastern Asia, Polynesia, Papua New Guinea, Timor Leste and Australia, and the majority have very brightly coloured plumage.
Etymology
The word "lory" comes from the Malay lūri, a name used for a number of species of colourful parrots. The name was used by the Dutch writer Johan Nieuhof in 1682 in a book describing his travels in the East Indies.
The spelling "laurey" was used by English naturalist Eleazar Albin in 1731 for a species of parrot from Brazil,
and then in 1751 the English naturalist George Edwards used the spelling "lory" when introducing names for five species of parrot from the East Indies in the fourth volume of his A Natural History of Uncommon Birds. Edwards credited Nieuhof for the name.
The choice of the terms "lory" and "lorikeet" is subjective, like the use of "parrot" and "parakeet". Species with longer tapering tails are generally referred to as "lorikeets", while species with short blunt tails are generally referred to as "lories".
Taxonomy
Traditionally, lories and lorikeets have been classified either as a subfamily, Loriinae, or as a family of their own, Loriidae, but they are currently classified as a tribe. Neither traditional view is confirmed by molecular studies. Those studies show that the lories and lorikeets form a single group, closely related to the budgerigar and the fig parrots (Cyclopsitta and Psittaculirostris).
A comprehensive molecular phylogenetic study of the Loriini published in 2020 led to major changes in the generic boundaries. The reorganisation involved the resurrection of four genera: Charminetta, Hypocharmosyna, Charmosynopsis and Glossoptilus, as well as the erection of three entirely new genera: Synorhacma, Charmosynoides and Saudareos. One genus disappeared, as the collared lory, which had previously been placed in the monotypic genus Phigys, was found to be embedded in the genus Vini. The extinct New Caledonian lorikeet, although not sampled, was assumed to be a member of the genus Vini on plumage and biogeographic grounds. The tribe Loriini now contains 61 species divided into 19 genera.
Genera
Morphology
Lories and lorikeets have specialized brush-tipped tongues for feeding on nectar and soft fruits. They can feed from the flowers of about 5,000 species of plants and use their specialized tongues to take the nectar. The tips of their tongues have tufts of papillae (extremely fine hairs), which collect nectar and pollen.
The multi-coloured rainbow lorikeet was one of the species of parrots appearing in the first edition of The Parrots of the World and also in John Gould's lithographs of the Birds of Australia.
Diet
In the wild, rainbow lorikeets feed mainly on pollen and nectar, and possess a tongue adapted especially for their particular diet. Many fruit orchard owners consider them a pest, as they often fly in groups and strip trees containing fresh fruit. They are also frequent visitors at bird feeders that supply lorikeet-friendly treats, such as store-bought nectar, sunflower seeds, and fruits such as apples, grapes and pears. Occasionally they have been observed feeding on meat.
Conservation
The ultramarine lorikeet is endangered. It is now one of the 50 rarest birds in the world. The blue lorikeet is classified as vulnerable. The introduction of European rats to the small island habitats of these birds is a major cause of their endangerment. Various conservation efforts have been made to relocate some of these birds to locations free of predation and habitat destruction.
In literature
A "Lory" famously appears in Chapter III of Lewis Carroll's Alice's Adventures in Wonderland. Alice argues with the Lory about its age.
Gallery
Wrist (https://en.wikipedia.org/wiki/Wrist)
In human anatomy, the wrist is variously defined as (1) the carpus or carpal bones, the complex of eight bones forming the proximal skeletal segment of the hand; (2) the wrist joint or radiocarpal joint, the joint between the radius and the carpus; and (3) the anatomical region surrounding the carpus including the distal parts of the bones of the forearm and the proximal parts of the metacarpus or five metacarpal bones and the series of joints between these bones, thus referred to as wrist joints. This region also includes the carpal tunnel, the anatomical snuff box, bracelet lines, the flexor retinaculum, and the extensor retinaculum.
As a consequence of these various definitions, fractures to the carpal bones are referred to as carpal fractures, while fractures such as distal radius fracture are often considered fractures to the wrist.
Structure
The distal radioulnar joint (DRUJ) is a pivot joint located between the distal ends of the radius and ulna, which make up the forearm. Formed by the head of the ulna and the ulnar notch of the radius, the DRUJ is separated from the radiocarpal (wrist) joint by an articular disk lying between the radius and the styloid process of the ulna. The capsule of the joint is lax and extends from the inferior sacciform recess to the ulnar shaft. The DRUJ works with the proximal radioulnar joint (at the elbow) for pronation and supination.
The radiocarpal (wrist) joint is an ellipsoid joint formed by the radius and the articular disc proximally and the proximal row of carpal bones distally. The carpal bones on the ulnar side only make intermittent contact with the proximal side — the triquetrum only makes contact during ulnar abduction. The capsule, lax and un-branched, is thin on the dorsal side and can contain synovial folds. The capsule is continuous with the midcarpal joint and strengthened by numerous ligaments, including the palmar and dorsal radiocarpal ligaments, and the ulnar and radial collateral ligaments.
The parts forming the radiocarpal joint are the lower end of the radius and under surface of the articular disk above; and the scaphoid, lunate, and triquetral bones below. The articular surface of the radius and the undersurface of the articular disk together form a transversely elliptical concave surface, the receiving cavity. The superior articular surfaces of the scaphoid, lunate, and triquetrum form a smooth convex surface, the condyle, which is received into the concavity.
Carpal bones of the hand:
Proximal: A=Scaphoid, B=Lunate, C=Triquetrum, D=Pisiform
Distal: E=Trapezium, F=Trapezoid, G=Capitate, H=Hamate
In the hand proper a total of 13 bones form part of the wrist: eight carpal bones—scaphoid, lunate, triquetral, pisiform, trapezium, trapezoid, capitate, and hamate— and five metacarpal bones—the first, second, third, fourth, and fifth metacarpal bones.
The midcarpal joint is the S-shaped joint space separating the proximal and distal rows of carpal bones. The intercarpal joints, between the bones of each row, are strengthened by the radiate carpal and pisohamate ligaments and the palmar, interosseous, and dorsal intercarpal ligaments. Some degree of mobility is possible between the bones of the proximal row while the bones of the distal row are connected to each other and to the metacarpal bones —at the carpometacarpal joints— by strong ligaments —the pisometacarpal and palmar and dorsal carpometacarpal ligament— that makes a functional entity of these bones. Additionally, the joints between the bases of the metacarpal bones —the intermetacarpal articulations— are strengthened by dorsal, interosseous, and palmar intermetacarpal ligaments.
The earliest carpal bones to ossify are the capitate and hamate bones, within the first six months of an infant's life.
Articulations
The radiocarpal, intercarpal, midcarpal, carpometacarpal, and intermetacarpal joints often intercommunicate through a common synovial cavity.
Articular surfaces
The wrist joint has two articular surfaces, named the proximal and distal articular surfaces respectively. The proximal articular surface is made up of the lower end of the radius and the triangular articular disc of the inferior radio-ulnar joint. The distal articular surface is made up of the proximal surfaces of the scaphoid, lunate and triquetral bones.
Function
Movement
The extrinsic hand muscles are located in the forearm where their bellies form the proximal fleshy roundness. When contracted, most of the tendons of these muscles are prevented from standing up like taut bowstrings around the wrist by passing under the flexor retinaculum on the palmar side and the extensor retinaculum on the dorsal side. On the palmar side the carpal bones form the carpal tunnel, through which some of the flexor tendons pass in tendon sheaths that enable them to slide back and forth through the narrow passageway (see carpal tunnel syndrome).
Starting from the mid-position of the hand, the movements permitted in the wrist proper are (muscles in order of importance):
Marginal movements: radial deviation (abduction, movement towards the thumb) and ulnar deviation (adduction, movement towards the little finger). These movements take place about a dorsopalmar axis (back to front) at the radiocarpal and midcarpal joints passing through the capitate bone.
Radial abduction (up to 20°): extensor carpi radialis longus, abductor pollicis longus, extensor pollicis longus, flexor carpi radialis, flexor pollicis longus
Ulnar adduction (up to 30°): extensor carpi ulnaris, flexor carpi ulnaris, extensor digitorum, extensor digiti minimi
Movements in the plane of the hand: flexion (palmar flexion, tilting towards the palm) and extension (dorsiflexion, tilting towards the back of the hand). These movements take place through a transverse axis passing through the capitate bone. Palmar flexion is the most powerful of these movements because the flexors, especially the finger flexors, are considerably stronger than the extensors.
Extension (up to 60°): extensor digitorum, extensor carpi radialis longus, extensor carpi radialis brevis, extensor indicis, extensor pollicis longus, extensor digiti minimi, extensor carpi ulnaris
Palmar flexion (up to 70°): flexor digitorum superficialis, flexor digitorum profundus, flexor carpi ulnaris, flexor pollicis longus, flexor carpi radialis, abductor pollicis longus
Intermediate or combined movements
However, movements at the wrist can not be properly described without including movements in the distal radioulnar joint in which the rotary actions of supination and pronation occur and this joint is therefore normally regarded as part of the wrist.
Clinical significance
Wrist pain has a number of causes, including carpal tunnel syndrome, ganglion cyst, tendinitis, and osteoarthritis. Tests such as Phalen's test involve palmarflexion at the wrist.
The hand may deviate at the wrist in some conditions, such as rheumatoid arthritis.
Ossification of the bones around the wrist is one indicator used in taking a bone age.
A wrist fracture typically refers to a distal radius fracture. It is more common in non-Hispanic women and is associated with factors such as alcohol consumption, smoking, high serum phosphate levels, osteoporosis, and obesity.
History
Etymology
The English word "wrist" is etymologically derived from the Proto-Germanic word wristiz from which are derived modern German Rist ("instep", "wrist") and modern Swedish vrist ("instep", "ankle"). The base writh- and its variants are associated with Old English words "wreath", "wrest", and "writhe". The wr- sound of this base seems originally to have been symbolic of the action of twisting.
Nth root (https://en.wikipedia.org/wiki/Nth%20root)
In mathematics, an nth root of a number x is a number r (the root) which, when raised to the power of the positive integer n, yields x:
r^n = x
The integer n is called the index or degree, and the number x of which the root is taken is the radicand. A root of degree 2 is called a square root and a root of degree 3, a cube root. Roots of higher degree are referred to by using ordinal numbers, as in fourth root, twentieth root, etc. The computation of an nth root is a root extraction.
For example, 3 is a square root of 9, since 3^2 = 9, and −3 is also a square root of 9, since (−3)^2 = 9.
The nth root of x is written as ⁿ√x using the radical symbol √. The square root is usually written without the n as just √x. Taking the nth root of a number is the inverse operation of exponentiation, and can be written as a fractional exponent:
ⁿ√x = x^(1/n)
For a positive real number x, √x denotes the positive square root of x and ⁿ√x denotes the positive real nth root. A negative real number −x has no real-valued square roots, but when −x is treated as a complex number it has two imaginary square roots, +i√x and −i√x, where i is the imaginary unit.
In general, any non-zero complex number has n distinct complex-valued nth roots, equally distributed around a complex circle of constant absolute value. (The nth root of 0 is zero with multiplicity n, and this circle degenerates to a point.) Extracting the nth roots of a complex number x can thus be taken to be a multivalued function. By convention the principal value of this function, called the principal root and denoted ⁿ√x, is taken to be the nth root with the greatest real part and in the special case when x is a negative real number, the one with a positive imaginary part. The principal root of a positive real number is thus also a positive real number. As a function, the principal root is continuous in the whole complex plane, except along the negative real axis.
An unresolved root, especially one using the radical symbol, is sometimes referred to as a surd or a radical. Any expression containing a radical, whether it is a square root, a cube root, or a higher root, is called a radical expression, and if it contains no transcendental functions or transcendental numbers it is called an algebraic expression.
Roots are used for determining the radius of convergence of a power series with the root test. The th roots of 1 are called roots of unity and play a fundamental role in various areas of mathematics, such as number theory, theory of equations, and Fourier transform.
History
An archaic term for the operation of taking nth roots is radication.
Definition and notation
An nth root of a number x, where n is a positive integer, is any of the n real or complex numbers r whose nth power is x:
r^n = x
Every positive real number x has a single positive nth root, called the principal nth root, which is written ⁿ√x. For n equal to 2 this is called the principal square root and the n is omitted. The nth root can also be represented using exponentiation as x^(1/n).
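As a side note (an illustration of our own, not part of the article), the fractional-exponent form x^(1/n) maps directly onto floating-point exponentiation in most programming languages; a minimal Python sketch:

```python
def principal_root(x, n):
    """Principal nth root of a non-negative real x, via x**(1/n)."""
    if x < 0:
        raise ValueError("principal root of a negative real is not real")
    return x ** (1.0 / n)
```

For instance, principal_root(32, 5) is 2 up to floating-point rounding.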
For even values of n, positive numbers also have a negative nth root, while negative numbers do not have a real nth root. For odd values of n, every negative number x has a real negative nth root. For example, −2 has a real 5th root, but −2 does not have any real 6th roots.
Every non-zero number x, real or complex, has n different complex nth roots. (In the case x is real, this count includes any real nth roots.) The only complex root of 0 is 0.
The nth roots of almost all numbers (all integers except the nth powers, and all rationals except the quotients of two nth powers) are irrational. For example, √2 is irrational.
All nth roots of rational numbers are algebraic numbers, and all nth roots of integers are algebraic integers.
The term "surd" traces back to Al-Khwarizmi, who referred to rational and irrational numbers as audible and inaudible, respectively. This later led to the Arabic word asamm (meaning "deaf" or "dumb") for irrational number being translated into Latin as surdus (meaning "deaf" or "mute"). Gerard of Cremona, Fibonacci (1202), and then Robert Recorde (1551) all used the term to refer to unresolved irrational roots, that is, expressions of the form ⁿ√a, in which n and a are integer numerals and the whole expression denotes an irrational number. Irrational numbers of the form ±√a, where a is rational, are called pure quadratic surds; irrational numbers of the form a ± √b, where a and b are rational, are called mixed quadratic surds.
Square roots
A square root of a number x is a number r which, when squared, becomes x:
r^2 = x
Every positive real number has two square roots, one positive and one negative. For example, the two square roots of 25 are 5 and −5. The positive square root is also known as the principal square root, and is denoted with a radical sign:
√25 = 5
Since the square of every real number is nonnegative, negative numbers do not have real square roots. However, for every negative real number there are two imaginary square roots. For example, the square roots of −25 are 5i and −5i, where i represents a number whose square is −1.
Cube roots
A cube root of a number x is a number r whose cube is x:
r^3 = x
Every real number x has exactly one real cube root, written ³√x. For example,
³√8 = 2 and ³√(−8) = −2.
Every real number has two additional complex cube roots.
Identities and properties
Expressing the degree of an nth root in its exponent form, as in x^(1/n), makes it easier to manipulate powers and roots. If a is a non-negative real number,
ⁿ√(a^m) = (a^m)^(1/n) = a^(m/n) = (a^(1/n))^m = (ⁿ√a)^m.
Every non-negative number has exactly one non-negative real nth root, and so the rules for operations with surds involving non-negative radicands a and b are straightforward within the real numbers:
ⁿ√(ab) = ⁿ√a · ⁿ√b
ⁿ√(a/b) = ⁿ√a / ⁿ√b
Subtleties can occur when taking the nth roots of negative or complex numbers. For instance:
√(−1) × √(−1) ≠ √((−1) × (−1)) = 1
but, rather,
√(−1) × √(−1) = i × i = i^2 = −1.
Since the rule √a × √b = √(ab) strictly holds for non-negative real radicands only, its application leads to the inequality in the first step above.
Simplified form of a radical expression
A non-nested radical expression is said to be in simplified form if no factor of the radicand can be written as a power greater than or equal to the index; there are no fractions inside the radical sign; and there are no radicals in the denominator.
For example, to write the radical expression √(32/5) in simplified form, we can proceed as follows. First, look for a perfect square under the square root sign and remove it:
√(32/5) = √(16 · 2/5) = √16 · √(2/5) = 4√(2/5)
Next, there is a fraction under the radical sign, which we change as follows:
4√(2/5) = 4√2 / √5
Finally, we remove the radical from the denominator as follows:
4√2 / √5 = (4√2 / √5) · (√5 / √5) = 4√10 / 5
When there is a denominator involving surds it is always possible to find a factor to multiply both numerator and denominator by to simplify the expression. For instance, using the factorization of the sum of two cubes x^3 + y^3 = (x + y)(x^2 − xy + y^2):
1 / (³√a + ³√b) = (³√(a^2) − ³√(ab) + ³√(b^2)) / (a + b).
Simplifying radical expressions involving nested radicals can be quite difficult. In particular, denesting is not always possible, and when possible, it may involve advanced Galois theory. Moreover, when complete denesting is impossible, there is no general canonical form such that the equality of two numbers can be tested by simply looking at their canonical expressions.
For example, it is not obvious that
The above can be derived through:
Let x = p/q, with p and q coprime and positive integers. Then ⁿ√x is rational if and only if both ⁿ√p and ⁿ√q are integers, which means that both p and q are nth powers of some integer.
Infinite series
The radical or root may be represented by the infinite series:
(1 + x)^(s/t) = Σ_{n=0}^∞ ( ∏_{k=0}^{n−1} (s − kt) ) / (n! t^n) · x^n
with |x| < 1. This expression can be derived from the binomial series.
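As a numerical sanity check (a sketch of our own, not from the article), the series for (1 + x)^(1/n), i.e. s = 1 and t = n, can be summed term by term, with each binomial-series term obtained from the previous one:

```python
def root_series(x, n, terms=60):
    """Partial sum of the binomial series for (1 + x)**(1/n), |x| < 1."""
    a = 1.0 / n          # the exponent s/t with s = 1, t = n
    total, term = 0.0, 1.0
    for k in range(terms):
        total += term                   # add C(a, k) * x**k
        term *= (a - k) / (k + 1) * x   # update to the next series term
    return total
```

With x = 0.2 and n = 2, for example, the partial sums converge quickly to √1.2.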
Computing principal roots
Using Newton's method
The nth root of a number A can be computed with Newton's method, which starts with an initial guess x₀ and then iterates using the recurrence relation
x_{k+1} = x_k − (x_k^n − A) / (n · x_k^(n−1))
until the desired precision is reached. For computational efficiency, the recurrence relation is commonly rewritten
x_{k+1} = ((n − 1)/n) · x_k + (A/n) · 1/x_k^(n−1).
This requires only one exponentiation per iteration, and the first factor of each term, (n − 1)/n and A/n, can be computed once for all iterations.
For example, to find the fifth root of 34, we plug in n = 5, A = 34, and x₀ = 2 (initial guess). The first 5 iterations are, approximately:
(All correct digits shown.)
The approximation x₅ is accurate to 25 decimal places and x₆ is good for 51.
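A minimal Python implementation of the rewritten recurrence (the helper name and stopping rule are our own choices):

```python
def nth_root_newton(A, n, x0=1.0, tol=1e-12):
    """Iterate x_{k+1} = ((n-1)/n)*x_k + (A/n)/x_k**(n-1) until it settles."""
    c1, c2 = (n - 1) / n, A / n          # first factors, computed once
    x = float(x0)
    while True:
        x_next = c1 * x + c2 / x ** (n - 1)
        if abs(x_next - x) <= tol * abs(x_next):
            return x_next
        x = x_next
```

Calling nth_root_newton(34, 5, 2.0) reproduces the fifth-root example above to machine precision.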
Newton's method can be modified to produce various generalized continued fractions for the nth root. For example,
Digit-by-digit calculation of principal roots of decimal (base 10) numbers
Building on the digit-by-digit calculation of a square root, it can be seen that the formula used there, x(20p + x) ≤ c, follows a pattern involving Pascal's triangle: for an nth root, the corresponding expression is the expansion of (10p + x)^n − (10p)^n, whose coefficients are the entries of row n of Pascal's triangle. For convenience, call the result of this expression y. Using this more general expression, any positive principal root can be computed, digit-by-digit, as follows.
Write the original number in decimal form. The numbers are written similarly to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into groups of n digits, where n is the degree of the root being taken, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the radicand. One digit of the root will appear above each group of digits of the original number.
Beginning with the left-most group of digits, do the following procedure for each group:
Starting on the left, bring down the most significant (leftmost) group of digits not yet used (if all the digits have been used, write "0" the number of times required to make a group) and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 10^n and add the digits from the next group. This will be the current value c.
Find p and x, as follows:
Let p be the part of the root found so far, ignoring any decimal point. (For the first step, p = 0.)
Determine the greatest digit x such that y ≤ c.
Place the digit as the next digit of the root, i.e., above the group of digits you just brought down. Thus the next p will be the old p times 10 plus x.
Subtract y from c to form a new remainder.
If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration.
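For integer radicands the procedure above boils down to: bring down one group of n digits, then find the greatest digit d such that (10p + d)^n is still at most the value processed so far. A Python sketch under that simplification (integer-only; the helper name is our own):

```python
def int_nth_root(x, n):
    """Floor of the nth root of a non-negative integer x, digit by digit."""
    s = str(x)
    s = s.zfill(-(-len(s) // n) * n)          # pad so digits split into groups of n
    groups = [int(s[i:i + n]) for i in range(0, len(s), n)]
    p = 0           # root found so far
    prefix = 0      # value of the digit groups consumed so far
    for g in groups:
        prefix = prefix * 10**n + g           # bring down the next group
        # greatest digit d with (10p + d)^n <= prefix
        d = max(d for d in range(10) if (10 * p + d) ** n <= prefix)
        p = 10 * p + d
    return p
```

Run on the worked examples below, int_nth_root(1522756, 2) gives 1234 and int_nth_root(4192, 3) gives 16, matching the tableaux.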
Examples
Find the square root of 152.2756.
1 2. 3 4
/
\/ 01 52.27 56 (Results) (Explanations)
01 x = 1 10·1·0·1 + 10·2·0·1 ≤ 1 < 10·1·0·2 + 10·2·0·2
01 y = 1 y = 10·1·0·1 + 10·2·0·1 = 1 + 0 = 1
00 52 x = 2 10·1·1·2 + 10·2·1·2 ≤ 52 < 10·1·1·3 + 10·2·1·3
00 44 y = 44 y = 10·1·1·2 + 10·2·1·2 = 4 + 40 = 44
08 27 x = 3 10·1·12·3 + 10·2·12·3 ≤ 827 < 10·1·12·4 + 10·2·12·4
07 29 y = 729 y = 10·1·12·3 + 10·2·12·3 = 9 + 720 = 729
98 56 x = 4 10·1·123·4 + 10·2·123·4 ≤ 9856 < 10·1·123·5 + 10·2·123·5
98 56 y = 9856 y = 10·1·123·4 + 10·2·123·4 = 16 + 9840 = 9856
00 00
Algorithm terminates: Answer is 12.34
Find the cube root of 4192 truncated to the nearest thousandth.
1 6. 1 2 4
3 /
\/ 004 192.000 000 000 (Results) (Explanations)
004 x = 1 10·1·0·1 + 10·3·0·1 + 10·3·0·1 ≤ 4 < 10·1·0·2 + 10·3·0·2 + 10·3·0·2
001 y = 1 y = 10·1·0·1 + 10·3·0·1 + 10·3·0·1 = 1 + 0 + 0 = 1
003 192 x = 6 10·1·1·6 + 10·3·1·6 + 10·3·1·6 ≤ 3192 < 10·1·1·7 + 10·3·1·7 + 10·3·1·7
003 096 y = 3096 y = 10·1·1·6 + 10·3·1·6 + 10·3·1·6 = 216 + 1,080 + 1,800 = 3,096
096 000 x = 1 10·1·16·1 + 10·3·16·1 + 10·3·16·1 ≤ 96000 < 10·1·16·2 + 10·3·16·2 + 10·3·16·2
077 281 y = 77281 y = 10·1·16·1 + 10·3·16·1 + 10·3·16·1 = 1 + 480 + 76,800 = 77,281
018 719 000 x = 2 10·1·161·2 + 10·3·161·2 + 10·3·161·2 ≤ 18719000 < 10·1·161·3 + 10·3·161·3 + 10·3·161·3
015 571 928 y = 15571928 y = 10·1·161·2 + 10·3·161·2 + 10·3·161·2 = 8 + 19,320 + 15,552,600 = 15,571,928
003 147 072 000 x = 4 10·1·1612·4 + 10·3·1612·4 + 10·3·1612·4 ≤ 3147072000 < 10·1·1612·5 + 10·3·1612·5 + 10·3·1612·5
The desired precision is achieved. The cube root of 4192 is 16.124...
Logarithmic calculation
The principal nth root of a positive number can be computed using logarithms. Starting from the equation that defines r as an nth root of x, namely r^n = x, with x positive and therefore its principal root r also positive, one takes logarithms of both sides (any base b of the logarithm will do) to obtain
n log_b r = log_b x, hence log_b r = (log_b x) / n.
The root r is recovered from this by taking the antilog:
r = b^((log_b x) / n)
(Note: That formula shows b raised to the power of the result of the division, not b multiplied by the result of the division.)
For the case in which x is negative and n is odd, there is one real root r which is also negative. This can be found by first multiplying both sides of the defining equation by −1 to obtain |r|^n = |x|, then proceeding as before to find |r|, and using r = −|r|.
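The log/antilog route is short in practice; the sketch below (a helper of our own) uses natural logarithms and restores the sign in the odd-n negative case described above:

```python
import math

def nth_root_log(x, n):
    """r = exp(log|x| / n), with the sign restored when n is odd and x < 0."""
    if x == 0:
        return 0.0
    if x < 0:
        if n % 2 == 0:
            raise ValueError("no real nth root of a negative number for even n")
        return -math.exp(math.log(-x) / n)
    return math.exp(math.log(x) / n)
```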
Geometric constructibility
The ancient Greek mathematicians knew how to use compass and straightedge to construct a length equal to the square root of a given length, when an auxiliary line of unit length is given. In 1837 Pierre Wantzel proved that an nth root of a given length cannot be constructed if n is not a power of 2.
Complex roots
Every complex number other than 0 has n different nth roots.
Square roots
The two square roots of a complex number are always negatives of each other. For example, the square roots of −4 are 2i and −2i, and the square roots of i are (1 + i)/√2 and −(1 + i)/√2.
If we express a complex number in polar form, then the square root can be obtained by taking the square root of the radius and halving the angle:
√(r e^(iθ)) = ±√r · e^(iθ/2)
A principal root of a complex number may be chosen in various ways, for example
√(r e^(iθ)) = √r · e^(iθ/2)
which introduces a branch cut in the complex plane along the positive real axis with the condition 0 ≤ θ < 2π, or along the negative real axis with −π < θ ≤ π.
Using the first (respectively last) branch cut, the principal square root maps to the half plane with non-negative imaginary (respectively real) part. The last branch cut is presupposed in mathematical software like Matlab or Scilab.
Roots of unity
The number 1 has n different nth roots in the complex plane, namely
1, ω, ω^2, ..., ω^(n−1),
where
ω = e^(2πi/n) = cos(2π/n) + i sin(2π/n).
These roots are evenly spaced around the unit circle in the complex plane, at angles which are multiples of 2π/n. For example, the square roots of unity are 1 and −1, and the fourth roots of unity are 1, i, −1, and −i.
nth roots
Every complex number has n different nth roots in the complex plane. These are
η, ηω, ηω^2, ..., ηω^(n−1),
where η is a single nth root, and 1, ω, ω^2, ..., ω^(n−1) are the nth roots of unity. For example, the four different fourth roots of 2 are
⁴√2, i·⁴√2, −⁴√2, and −i·⁴√2.
In polar form, a single nth root may be found by the formula
ⁿ√(r e^(iθ)) = ⁿ√r · e^(iθ/n)
Here r is the magnitude (the modulus, also called the absolute value) of the number whose root is to be taken; if the number can be written as a + bi then r = √(a^2 + b^2). Also, θ is the angle formed as one pivots on the origin counterclockwise from the positive horizontal axis to a ray going from the origin to the number; it has the properties that cos θ = a/r and sin θ = b/r.
Thus finding nth roots in the complex plane can be segmented into two steps. First, the magnitude of all the nth roots is the nth root of the magnitude of the original number. Second, the angle between the positive horizontal axis and a ray from the origin to one of the nth roots is θ/n, where θ is the angle defined in the same way for the number whose root is being taken. Furthermore, all n of the nth roots are at equally spaced angles of 2π/n from each other.
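These two steps translate directly to code; a sketch using Python's cmath module (the helper name is our own):

```python
import cmath

def complex_nth_roots(z, n):
    """All n complex nth roots of z: common magnitude |z|**(1/n),
    angles (theta + 2*pi*k)/n for k = 0..n-1."""
    r, theta = abs(z), cmath.phase(z)
    mag = r ** (1.0 / n)
    return [mag * cmath.exp(1j * (theta + 2 * cmath.pi * k) / n)
            for k in range(n)]
```

For example, complex_nth_roots(16, 4) yields the four fourth roots 2, 2i, −2, and −2i (up to rounding).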
If n is even, a complex number's nth roots, of which there are an even number, come in additive inverse pairs, so that if a number r₁ is one of the nth roots then r₂ = −r₁ is another. This is because raising the latter's coefficient −1 to the nth power for even n yields 1: that is, (−r₁)^n = (−1)^n × r₁^n = r₁^n.
As with square roots, the formula above does not define a continuous function over the entire complex plane, but instead has a branch cut at points where θ / n is discontinuous.
Solving polynomials
It was once conjectured that all polynomial equations could be solved algebraically (that is, that all roots of a polynomial could be expressed in terms of a finite number of radicals and elementary operations). However, while this is true for third degree polynomials (cubics) and fourth degree polynomials (quartics), the Abel–Ruffini theorem (1824) shows that this is not true in general when the degree is 5 or greater. For example, the solutions of the equation
cannot be expressed in terms of radicals. (cf. quintic equation)
Proof of irrationality for non-perfect nth power x
Assume that ⁿ√x is rational. That is, it can be reduced to a fraction a/b, where a and b are integers without a common factor.
This means that x = a^n / b^n.
Since x is an integer, a^n and b^n must share a common factor if b ≠ 1, and hence so must a and b. This means that if b ≠ 1, a/b is not in simplest form. Thus b should equal 1.
Since b = 1 and b^n = 1, x = a^n.
This means that x = a^n and thus ⁿ√x = a. This implies that ⁿ√x is an integer. Since x is not a perfect nth power, this is impossible. Thus ⁿ√x is irrational.
Chainsaw (https://en.wikipedia.org/wiki/Chainsaw)
A chainsaw (or chain saw) is a portable handheld power saw that cuts with a set of teeth attached to a rotating chain driven along a guide bar.
Modern chainsaws are typically gasoline or electric and are used in activities such as tree felling, limbing, bucking, pruning, cutting firebreaks in wildland fire suppression, harvesting of firewood, for use in chainsaw art and chainsaw mills, for cutting concrete, and cutting ice. Precursors to modern chainsaws were first used in surgery, with patents for wood chainsaws beginning in the late 19th century.
A chainsaw comprises an engine, a drive mechanism, a guide bar, a cutting chain, a tensioning mechanism, and safety features. Various safety practices and working techniques are used with chainsaws.
History
In surgery
A "flexible saw", consisting of a fine serrated link chain held between two wooden handles, was pioneered in the late 18th century (–1785) by two Scottish doctors, John Aitken and James Jeffray, for symphysiotomy and excision of diseased bone, respectively. It was illustrated in the second edition of Aitken's Principles of Midwifery, or Puerperal Medicine (1785) in the context of a pelviotomy. In 1806, Jeffray published Cases of the Excision of Carious Joints, which collected a paper previously published by H. Park in 1782 and a translation of an 1803 paper by French physician P. F. Moreau, with additional observations by Park and Jeffray. In it, Jeffray reported having conceived the idea of a saw "with joints like the chain of a watch" independently very soon after Park's original 1782 publication, but that he was not able to have it produced until 1790, after which it was used in the anatomy lab and occasionally lent out to surgeons. Park and Moreau described successful excision of diseased joints, particularly the knee and elbow, and Jeffray explained that the chainsaw would allow a smaller wound and protect the adjacent muscles, nerves, and veins. While symphysiotomy had too many complications for most obstetricians, Jeffray's ideas about the excision of the ends of bones became more accepted, especially after the widespread adoption of anaesthetics. For much of the 19th century the chainsaw was a useful surgical instrument, but it was superseded in 1894 by the Gigli twisted-wire saw, which was substantially cheaper to manufacture, and gave a quicker, narrower cut, without risk of breaking and being entrapped in the bone.
A precursor of the chainsaw familiar today in the timber industry was another medical instrument developed around 1830, by German precision mechanic and orthopaedist Bernhard Heine. This instrument, the osteotome, had links of a chain carrying small cutting teeth with the edges set at an angle; the chain was moved around a guiding blade by turning the handle of a sprocket wheel. As the name implies, this was used to cut bone.
For cutting wood
One of the earliest patents for an "endless chain saw" comprising a chain of links carrying saw teeth was granted to Frederick L. Magaw of Flatlands, New York in 1883, apparently for the purpose of producing boards by stretching the chain between grooved drums. A later patent incorporating a guide frame was granted to Samuel J. Bens of San Francisco on January 17, 1905, his intent being to fell giant redwoods. The first portable chainsaw was developed and patented in 1918 by Canadian millwright James Shand. After he allowed his rights to lapse in 1930, his invention was further developed by what became the German company Festo in 1933. The company, now operating as Festool, produces portable power tools. Other important contributors to the modern chainsaw are Joseph Buford Cox and Andreas Stihl; the latter patented and developed an electric chainsaw for use on bucking sites in 1926 and a gasoline-powered chainsaw in 1929, and founded a company to mass-produce them. In 1927, Emil Lerp, the founder of Dolmar, developed the world's first gasoline-powered chainsaw and mass-produced them.
World War II interrupted the supply of German chainsaws to North America, so new manufacturers sprang up, including Industrial Engineering Ltd (IEL) in 1939, the forerunner of Pioneer Saws Ltd and part of Outboard Marine Corporation, the oldest manufacturer of chainsaws in North America.
The first one-man chainsaw was introduced in 1950 and quickly gained attention, though it was relatively heavy. By 1959, the average weight was around 12 kg; today, chainsaws typically weigh between 4 and 5 kg, with heavy-duty models ranging from 7 to 9 kg.
McCulloch in North America started to produce chainsaws in 1948. The early models were heavy, two-person devices with long bars. Often, chainsaws were so heavy that they had wheels like dragsaws. Other outfits used driven lines from a wheeled power unit to drive the cutting bar.
After World War II, improvements in aluminum and engine design lightened chainsaws to the point where one person could carry them. In some areas, the chainsaw and skidder crews have been replaced by the feller buncher and harvester.
Chainsaws have almost entirely replaced simple man-powered saws in forestry. They are made in many sizes, from small electric saws intended for home and garden use, to large "lumberjack" saws. Members of military engineer units are trained to use chainsaws, as are firefighters to fight forest fires and to ventilate structure fires.
Three main types of chainsaw sharpener are used: handheld files, electric sharpeners, and bar-mounted sharpeners.
The first electric chainsaw was invented by Stihl in 1926. Corded chainsaws became available for sale to the public from the 1960s onwards, but these were never as successful commercially as the older gas-powered type, due to their limited range, their dependency on an electrical socket, and the health and safety risk posed by the cutting chain's proximity to the cable.
For most of the early 21st century, petrol-driven chainsaws remained the most common type, but from the late 2010s onwards they faced competition from cordless, lithium-battery-powered chainsaws. Although most cordless chainsaws are small and suitable only for hedge trimming and tree surgery, Husqvarna and Stihl began manufacturing full-size chainsaws for cutting logs during the early 2020s. Battery-powered chainsaws should eventually see increased market share in California due to state restrictions on gas-powered gardening equipment, planned to take effect in 2024.
Construction
A chainsaw consists of several parts:
Engine
Chainsaw engines are traditionally either a two-stroke single-cylinder gasoline (petrol) internal combustion engine (usually with a cylinder volume of 30 to 120 cm³) or an electric motor driven by a battery or electric power cord. In a petrol chainsaw, fuel is generally supplied to the engine by a carburetor at the intake. Two-stroke engines have been preferred for chainsaws due to their higher power-to-weight ratio and simplicity.
Hydraulic power may be used for chainsaws for underwater use.
To allow use in any orientation, modern gas chainsaws use a diaphragm carburetor, which draws fuel from the tank using the alternating pressure differential within the crankcase. Early engines used carburetors with gravity fed float chambers, which caused the engine to stall when tilted. The carburetor may need to be adjusted to maintain an appropriate idle speed and air-fuel ratio, such as when moving to a higher/lower altitude or as the air filter clogs. Carburetors are adjusted either by the operator or, in some saws, automatically by an electronic control unit.
To prevent vibration induced injury and reduce user fatigue, saws generally have an anti-vibration system to physically decouple the handles from the engine and bar. This is achieved by constructing the saw in two pieces, connected by springs or rubber in the same way an automobile suspension isolates the chassis from the wheels and road. In cold weather, carburetor icing can occur, so many saws have a vent between the cylinders and carburetor which may be opened to allow hot air to pass. Cold temperature can also contribute to vibration-induced injury, and some saws have a small alternator connected to resistive heating elements in the handles and/or carburetor.
Drive mechanism
Typically, a centrifugal clutch and sprocket are used. The centrifugal clutch expands with increasing engine speed and engages a drum, which carries either a fixed or an exchangeable sprocket. The clutch has three jobs: at idle (typically 2500–2700 rpm), the chain does not move; if the chain stops in the wood for some other reason, the clutch slips and protects the engine; and, most importantly, it protects the operator in case of a kickback, when the chain brake stops the drum and the clutch releases immediately.
Guide bar
The guide bar is an elongated bar of wear-resistant alloy steel with a rounded end, typically 40 to 90 cm (16 to 36 in) in length. An edge slot guides the cutting chain. Specialized loop-style bars, called bow bars, were also used at one time for bucking logs and clearing brush, although they are now rarely encountered due to the increased hazards of operating them.
All guide bars have some elements for operation:
Gauge
The lower parts of the chain's drive links run in the bar's groove, where the chain pulls the lubrication oil along to the nose. The gauge is essentially the thickness of the drive links, which must match the width of the groove.
Oil holes
The end of the saw power head has two oil holes, one on each side. These holes must match with the outlet of the oil pump. The pump sends the oil through the hole in the lower part of the gauge.
Saw bar producers provide a large variety of bars matching different saws.
Grease holes at bar nose
Through this hole, grease is pumped, typically at each tank filling, to keep the nose sprocket well lubricated.
Guide slot
Here, one or two bolts from the saw run through. The clutch cover is put on top of the bar and secured with these bolts; the number of bolts depends on the size of the saw.
Bar types
Different bar types are available:
Laminated bars consist of different layers to reduce the weight of the bar.
Solid bars are solid steel, intended for professional use. They commonly have an exchangeable nose, since the sprocket at the bar nose wears out faster than the bar.
Safety bars are laminated bars with a small sprocket at the nose. The small nose reduces the kickback effect. Such bars are used on consumer saws.
Cutting chain
Usually, each segment in a chain (which is constructed from riveted metal sections similar to a bicycle chain, but without rollers) features small, sharp, cutting teeth. Each tooth takes the form of a folded tab of chromium-plated steel with a sharp angular or curved corner and two beveled cutting edges, one on the top plate and one on the side plate. Left-handed and right-handed teeth are alternated in the chain. Chains are made in varying pitch and gauge; the pitch of a chain is defined as half of the length spanned by any three consecutive rivets (e.g., 8 mm, 0.325 inch), while the gauge is the thickness of the drive link where it fits into the guide bar (e.g., 1.5 mm, 0.05 inch). The conventional "full complement" chain has one tooth for every two drive links. "Full skip" chain has one tooth for every three drive links. Built into each tooth is a depth gauge or "raker", which rides ahead of the tooth and limits the depth of cut, typically to around 0.5 mm (0.025"). Depth gauges are critical to safe chain operation. If left too high, they cause very slow cutting; if filed too low, the chain becomes more prone to kick back. Low depth gauges also cause the saw to vibrate excessively. Vibration is uncomfortable for the operator and is detrimental to the saw.
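The pitch definition above is simple arithmetic. As a minimal sketch (the helper name here is ours, not a standard term), it can be written as:

```python
def pitch_from_rivets(span_over_three_rivets):
    """Chain pitch, defined as half the length spanned by any
    three consecutive rivets (result is in the input's units)."""
    return span_over_three_rivets / 2

# Two common pitches mentioned in the text:
assert pitch_from_rivets(16.0) == 8.0      # three rivets spanning 16 mm -> 8 mm pitch
assert pitch_from_rivets(0.650) == 0.325   # three rivets spanning 0.650 in -> 0.325 in pitch
```

Measuring across three rivets rather than between two adjacent ones halves the measurement error, which is why pitch is conventionally defined this way.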
Tensioning mechanism
The tension of the cutting chain is adjusted so that the chain neither binds on nor comes loose from the guide bar. The tensioner is operated either by turning a screw or by a manual wheel, and sits either in a lateral position underneath the exhaust or integrated into the clutch cover.
Lateral tensioners have the advantage that the clutch cover is easier to mount, but the disadvantage that they are more difficult to reach next to the bar. Tensioners in the clutch cover are easier to operate, but make the clutch cover more difficult to attach.
Turning the screw moves a hook engaged in a hole in the bar, pushing the bar out (tensioning the chain) or letting it in (loosening the chain). Tension is correct when the chain can be pulled around the bar easily by hand but does not hang loose from it. When tensioning, hold the bar nose up and then tighten the bar nuts; otherwise, the chain might derail.
The underside of each link features a small, metal finger called a "drive link", which locates the chain on the bar, helps to carry lubricating oil around the bar, and engages with the engine's drive sprocket inside the body of the saw. The engine drives the chain around the track by a centrifugal clutch, engaging the chain as engine speed increases under power, but allowing it to stop as the engine speed slows to idle speed.
Consistent improvement to overall chainsaw design, including adding safety features, has taken place over the years. These include chain-brake systems, better chain design, and lighter, more ergonomic saws, including fatigue-reducing antivibration systems.
As chainsaw carving has become more popular, manufacturers are making special short, narrow-tipped bars (called "quarter-tipped", "nickel-tipped", or "dime-tipped" bars, based on the size of their tips). Some chainsaws are built specifically for carving applications. Echo sponsors a carving series.
Safety features
Today's chainsaws have multiple safety features to protect the operator. These include:
Chain brake
A chain brake activator is located forward of the upper handle and is activated by a kickback event. When triggered, it tensions a band around the clutch drum, stopping the chain within milliseconds.
A chain catcher is located between the saw body and the clutch cover; in most cases, it resembles a hook made of aluminum. When the chain derails from the bar, it swings from underneath the saw towards the operator. The chain catcher stops the chain and shortens its free length, so that the chain strikes the catcher and the rear handle guard instead of the operator.
A rear handle guard protects the hand of the operator when the chain derails.
Some chains have safety features such as safety links, as on micro-chisel chains. These links sit close to the gap between two cutting links; when the space at a safety link fills with saw chips, the link lifts the chain so that it cuts more slowly. Nonprofessional chains also have less aggressive teeth, with shallower depth gauges.
Maintenance
Two-stroke chainsaws require about 2–5% oil mixed into the fuel to lubricate the engine, while the motor in electric chainsaws is normally lubricated for life. Most modern gasoline-operated saws require a fuel mix of 2% (1:50). Gasoline that contains ethanol can cause problems, because ethanol attacks plastic, rubber, and other materials; this is especially true for older equipment. A workaround is to use only fresh fuel and to run the saw dry at the end of the work.
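The 2% (1:50) mix works out to 20 ml of oil per litre of gasoline. A minimal sketch of the arithmetic (the function name is ours):

```python
def premix_oil_ml(fuel_liters, ratio=50):
    """Two-stroke oil in millilitres for a given fuel volume and
    fuel:oil ratio. The default 1:50 (2%) is the mix most modern
    gasoline saws require; some older engines need richer mixes
    such as 1:25 (4%)."""
    return fuel_liters * 1000 / ratio

print(premix_oil_ml(5))      # 5 L at 1:50 -> 100.0 ml of oil
print(premix_oil_ml(5, 25))  # 5 L at 1:25 -> 200.0 ml of oil
```

The exact ratio for a given saw is specified by its manufacturer; the values here only illustrate the calculation.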
Separate chain oil or bar oil is used for the lubrication of the bar and chain on all types of chainsaws. The chain oil is depleted quickly because it tends to be thrown off by chain centrifugal force, and it is soaked up by sawdust. On two-stroke chainsaws, the chain oil reservoir is usually filled up at the same time as refueling. The reservoir is normally large enough to provide sufficient chain oil between refueling. Lack of chain oil, or using an oil of incorrect viscosity, is a common source of damage to chainsaws, and tends to lead to rapid wear of the bar, or the chain seizing or coming off the bar. In addition to being quite thick, chain oil is particularly sticky (due to "tackifier" additives) to reduce the amount thrown off the chain. Although motor oil is a common emergency substitute, it is lost even faster, so leaves the chain under-lubricated.
The oil is delivered by a small pump to a hole in the bar. From there, the lower ends of the chain's drive links carry a portion of the oil along the groove towards the bar nose. The pump outlet and the bar hole must be aligned; since the bar moves out and in depending on chain length, the oil outlet on the saw side is an elongated, banana-shaped slot.
Chains must be kept sharp to perform well. They become blunt rapidly if they touch soil, metal, or stones. When blunt, they tend to produce powdery sawdust, rather than the longer, clean shavings characteristic of a sharp chain; a sharp saw also needs very little force from the operator to push it into the cut. Specially-hardened chains (made with tungsten carbide) are used for applications where the soil is likely to contaminate the cut, such as for cutting through roots.
A clear sign of a blunt chain is increased vibration of the saw; a sharp chain pulls itself into the wood without the operator pressing on the saw.
Since the air intake filter tends to clog with sawdust, it must be cleaned from time to time; with regular cleaning, clogging is not a problem during normal operation.
Safety
Protective clothing is designed to protect operators in the event of a moving chain touching their clothing by snarling the chain and sprocket, by using special synthetic fibers woven into the garment. Despite safety features and protective clothing, injuries can still arise from chainsaw use, from the large forces involved in the work, from the fast-moving, sharp chain, or the vibration and noise of the machinery.
A common accident arises from "kickback" when a chain tooth at the tip of the guide bar catches on wood without cutting through it. This throws the bar (with its moving chain) in an upward arc toward the operator, which can cause serious injury or even death.
Another dangerous situation occurs when heavy timber begins to fall or shift before a cut is complete. The chainsaw operator may be trapped or crushed. Similarly, timber falling in an unplanned direction may harm the operator or other workers, or an operator working at a height may fall or be injured by falling timber.
Like other hand-held machinery, the operation of chainsaws can cause vibration white finger, tinnitus, or industrial deafness. These symptoms were very common before vibration dampening using rubber mounts or steel springs was introduced; heated handles are an additional help. Newer, lighter, and easier-to-wield cordless electric chainsaws use brushless motors, which further decrease noise and vibration compared to traditional petrol-powered models.
The risks associated with chainsaw use mean that protective clothing such as chainsaw boots, chaps, and hearing protectors are normally worn while operating them, and many jurisdictions require that operators be certified or licensed to work with chainsaws. Injury can also result if the chain breaks during operation due to poor maintenance or attempting to cut inappropriate materials.
Gasoline-powered chainsaws expose operators to harmful carbon monoxide gas, especially indoors or in partially enclosed outdoor areas.
Drop starting, or turning on a chainsaw by dropping it with one hand while pulling the starting cord with the other, is a safety violation in most states in the U.S. Keeping both hands on the saw for stability is essential for safe chainsaw use.
Safe and effective chainsaw and crosscut use on federally administered public lands within the United States has been codified since 2016 in the Final Directive for National Saw Program issued by the United States Forest Service, which specifies the training, testing, and certification process for employees and unpaid volunteers who operate chainsaws within public lands.
Working techniques
Chainsaw training is designed to provide working technical knowledge and skills to safely operate the equipment.
Sizeup – Scouting and planning safe cuts, the felling direction, danger zones, and retreat paths before starting the saw. The tree's location relative to other objects, together with its support and tension, determines whether it can fall safely, whether it may split, or whether the saw will jam. Factors to consider include tree lean and bend, wind direction, branch arrangement, snow load, obstacles, and damaged or rotting tree parts, which might behave unexpectedly when cut. A tree may have to fall in its natural direction if it is too dangerous or impossible to fell in a desired direction. The aim is for the tree to fall safely for limbing and cross-cutting of the log, without landing on another tree or obstacle.
Felling – After clearing the undergrowth at the tree's base, along the retreat path, and in the felling direction, felling is properly done with three main cuts. To control the fall, the directional cut should run 1/4 of the tree diameter deep to form a 45-degree wedge, oriented at 90 degrees to the felling direction and perfectly horizontal. The top cut is made first, and then the bottom cut is made to complete the wedge at the directional cut line. A narrow or nonexistent hinge lessens control of the felling direction. From the opposite side of the wedge, the final felling cut is stopped one-tenth of the tree diameter short of the directional cut line; it is made horizontally and slightly (1.5–2 inches) above the bottom cut. When the hinge is properly set, the felling cut starts the fall in the desired direction. A sitback is when the tree settles back opposite the intended direction; placing a wedge in the felling cut can prevent a sitback from pinching the saw.
Freeing – Working a badly fallen tree that has become trapped in other trees. The operator works out the locations of maximum tension to decide the safest way to release it; a winch may be needed in complicated situations. Rather than cutting straight through a tree under tension, one or two cuts of sufficient depth at the tension point may be needed to reduce the tension. After the tension releases, cuts are made outside the bend.
Limbing – Cutting the branches off the log. The operator must be able to properly reach the cut to avoid kickback.
Bucking – Cross-cutting the felled log into sections. The cuts are planned to avoid binding the chainsaw in the log's changing tensions and compressions. Safe bucking starts at the high side of the log, with sections then worked off-side toward the butt end; the off-side sections fall away, letting gravity help prevent binds. The movement of the log's kerf while cutting can help indicate binds. Additional equipment (lifts, bars, wedges, and winches) and special cutting techniques can help prevent binds.
Binds – This is when the chainsaw is at risk or is stuck in the log compression. A log bound chainsaw is unsafe and must be carefully removed to prevent equipment damage.
Top bind – The tension area on log bottom, compression on top.
Bottom bind – The tension area on log top, compression on bottom.
Side bind – Sideways pressure exerted on the log.
End bind – Weight compresses the log's entire cross-section.
Brushing and slashing – Quickly clearing small trees and branches under five inches in diameter. A hand piler may follow along to move out debris.
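The felling rule of thumb described above (directional cut about 1/4 of the diameter deep, hinge about 1/10 of the diameter wide) reduces to simple proportions. A minimal sketch of that arithmetic, with a function name of our own choosing; it is an illustration only, not a substitute for proper felling training:

```python
def felling_cut_plan(diameter_cm):
    """Rule-of-thumb felling dimensions from the text:
    notch (directional cut) depth ~1/4 of the trunk diameter,
    hinge width ~1/10 of the diameter, with the felling cut
    covering the remainder."""
    notch_depth = diameter_cm / 4
    hinge_width = diameter_cm / 10
    felling_cut_depth = diameter_cm - notch_depth - hinge_width
    return notch_depth, hinge_width, felling_cut_depth

# A 40 cm trunk: 10 cm notch, 4 cm hinge, 26 cm felling cut.
print(felling_cut_plan(40))  # (10.0, 4.0, 26.0)
```

Real cut planning also has to account for lean, wind, and wood condition, which the sizeup step above covers.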
Cutting
Chainsaws with specially designed bar-and-chain combinations have been developed as tools for use in chainsaw art and chainsaw mills. Specialized chainsaws are used for cutting concrete during construction developments. Chainsaws are sometimes used for cutting ice; for example, ice sculpture and winter swimming in Finland.
As a sawmill
When fastened into a special guide frame, a chainsaw can be used as a portable sawmill to cut bulk wood into planks or boards. Such usage is called a chainsaw mill or Alaskan sawmill.
Cutting stone, concrete, and brick
Special chainsaws can cut concrete, brick, and natural stone. These use similar chains to ordinary chainsaws, but with cutting edges embedded with diamond grit. They may use gasoline or hydraulic power, and the chain is lubricated with water, because of high friction and to remove stone dust. The machine is used in construction, for example, in cutting deep, square holes in walls or floors, in stone sculpture for removing large chunks of stone during pre-carving, by fire departments for gaining access to buildings, and in restoration of buildings and monuments for removing parts with minimal damage to the surrounding structure. More recently, concrete chainsaws with electric motors of 230 volts have also been developed.
Because the material to be cut is not fibrous, much less kickback occurs. So, the most-used method of cutting is plunge-cutting, by pushing the tip of the blade into the material. With this method, square cuts as small as the blade width can be achieved. Pushback can occur if a block shifts when nearly cut through and pinches the blade, but overall, the machine is less dangerous than a wood-cutting chainsaw.
Underwater work
Chainsaws are used for underwater cutting by professional divers. They are usually driven by hydraulic power supplied from the surface and operated by commercial divers using surface-supplied diving equipment. Underwater chainsaw cutting may also be used by public safety divers.
Hydraulic chainsaws can be used to cut wood, concrete, brick and steel if the appropriate chain is used. Underwater cutting may be done in conditions of moving water and low visibility, which can increase risk, and appropriate safety precautions and suitable procedures are required for safety.
Underwater wood structures may include bridge pilings, pier, and dock timbers. Chain saws generally include an interlocking safety trigger with hand guard.
Trace element
A trace element is a chemical element present in only a minute quantity (a trace amount). The term is used especially for micronutrients, but also for minor elements in the composition of a rock or other chemical substance.
In nutrition, trace elements are classified into two groups: essential trace elements and non-essential trace elements. Essential trace elements are needed for many physiological and biochemical processes in both plants and animals. Trace elements not only play a role in biological processes but also serve as catalysts in redox (oxidation and reduction) mechanisms. Some heavy-metal trace elements have a biological role as essential micronutrients.
Types
In biochemistry, trace elements are classed as either essential or non-essential.
Essential trace elements
An essential trace element is a dietary element, a mineral that is only needed in minute quantities for the proper growth, development, and physiology of the organism. The essential trace elements are those that are required to perform vital metabolic activities in organisms. Essential trace elements in human nutrition, and other animals include iron (Fe) (hemoglobin), copper (Cu) (respiratory pigments), cobalt (Co) (Vitamin B12), iodine (I), manganese (Mn), chlorine (Cl), molybdenum (Mo), selenium (Se) and zinc (Zn) (enzymes). Although they are essential, they become toxic at high concentrations.
Non-essential trace elements
Non-essential trace elements include silver (Ag), cadmium (Cd), mercury (Hg), and lead (Pb). They have no known biological function in mammals, with toxic effects even at low concentration.
The structural components of cells and tissues that are required in the diet in gram quantities daily are known as bulk elements.
Mineral (nutrient)
In the context of nutrition, a mineral is a chemical element. Some "minerals" are essential for life, but most are not. Minerals are one of the four groups of essential nutrients; the others are vitamins, essential fatty acids, and essential amino acids. The five major minerals in the human body are calcium, phosphorus, potassium, sodium, and magnesium. The remaining minerals are called "trace elements". The generally accepted trace elements are iron, chlorine, cobalt, copper, zinc, manganese, molybdenum, iodine, selenium, and bromine; there is some evidence that there may be more.
The four organogenic elements, namely carbon, hydrogen, oxygen, and nitrogen (CHON), which comprise roughly 96% of the human body by weight, are usually not considered minerals in the nutritional sense. In nutrition, the term "mineral" refers more generally to all the other functional and structural elements found in living organisms.
Plants obtain minerals from soil. Animals ingest plants, thus moving minerals up the food chain. Larger organisms may also consume soil (geophagia) or use mineral resources such as salt licks to obtain minerals.
Finally, although minerals and elements are in many ways synonymous, minerals are only bioavailable to the extent that they can be absorbed. To be absorbed, minerals must either be soluble or readily extractable by the consuming organism. For example, molybdenum is an essential mineral, but metallic molybdenum has no nutritional benefit; many molybdates, by contrast, are sources of molybdenum.
Essential chemical elements for humans
Twenty chemical elements are known to be required to support human biochemical processes by serving structural and functional roles, and there is evidence for a few more.
Oxygen, hydrogen, carbon and nitrogen are the most abundant elements in the body by weight and make up about 96% of the weight of a human body. Calcium makes up 920 to 1200 grams of adult body weight, with 99% of it contained in bones and teeth. This is about 1.5% of body weight. Phosphorus occurs in amounts of about 2/3 of calcium, and makes up about 1% of a person's body weight. The other major minerals (potassium, sodium, chlorine, sulfur and magnesium) make up only about 0.85% of the weight of the body. Together these eleven chemical elements (H, C, N, O, Ca, P, K, Na, Cl, S, Mg) make up 99.85% of the body. The remaining ≈18 ultratrace minerals comprise just 0.15% of the body, or about one hundred grams in total for the average person. Total fractions in this paragraph are amounts based on summing percentages from the article on chemical composition of the human body.
Some diversity of opinion exists about the essential nature of various ultratrace elements in humans (and other mammals), even based on the same data. For example, whether chromium is essential in humans is debated; no Cr-containing biochemical has been purified. The United States and Japan designate chromium as an essential nutrient, but the European Food Safety Authority (EFSA), representing the European Union, reviewed the question in 2014 and does not agree.
Most of the known and suggested mineral nutrients are of relatively low atomic weight, and are reasonably common on land, or for sodium and iodine, in the ocean. They also tend to have soluble compounds at physiological pH ranges: elements without such soluble compounds tend to be either non-essential (Al) or, at best, may only be needed in traces (Si).
Roles in biological processes
RDA = Recommended Dietary Allowance; AI = Adequate intake; UL = Tolerable upper intake level; Figures shown are for adults age 31–50, male or female neither pregnant nor lactating
* One serving of seaweed exceeds the US UL of 1100 μg but not the 3000 μg UL set by Japan.
Dietary nutrition
Dietitians may recommend that minerals are best supplied by ingesting specific foods rich with the chemical element(s) of interest. The elements may be naturally present in the food (e.g., calcium in dairy milk) or added to the food (e.g., orange juice fortified with calcium; iodized salt fortified with iodine). Dietary supplements can be formulated to contain several different chemical elements (as compounds), a combination of vitamins and/or other chemical compounds, or a single element (as a compound or mixture of compounds), such as calcium (calcium carbonate, calcium citrate) or magnesium (magnesium oxide), or iron (ferrous sulfate, iron bis-glycinate).
The dietary focus on chemical elements derives from an interest in supporting the biochemical reactions of metabolism with the required elemental components. Appropriate intake levels of certain chemical elements have been demonstrated to be required to maintain optimal health. Diet can meet all the body's chemical element requirements, although supplements can be used when some recommendations are not adequately met by the diet. An example would be a diet low in dairy products, and hence not meeting the recommendation for calcium.
Plants
The list of minerals required for plants is similar to that for animals. Both use very similar enzymes, although differences exist. For example, legumes host molybdenum-containing nitrogenase, but animals do not. Many animals rely on hemoglobin (Fe) for oxygen transport, but plants do not. Fertilizers are often tailored to address mineral deficiencies in particular soils. Examples include molybdenum deficiency, manganese deficiency, zinc deficiency, and so on.
Safety
The gap between recommended daily intake and what are considered safe upper limits (ULs) can be small. For example, for calcium the U.S. Food and Drug Administration set the recommended intake for adults over 70 years at 1,200 mg/day and the UL at 2,000 mg/day. The European Union also sets recommended amounts and upper limits, which are not always in accord with the U.S.; likewise, Japan sets the UL for iodine at 3000 μg, versus 1100 μg for the U.S. and 600 μg for the EU. In the table above, magnesium appears to be an anomaly, as the recommended intake for adult men is 420 mg/day (women 350 mg/day) while the UL is lower than the recommended intake, at 350 mg. The reason is that the UL applies specifically to consuming more than 350 mg of magnesium at once in the form of a dietary supplement, as this may cause diarrhea; magnesium-rich foods do not cause this problem.
Elements considered possibly essential for humans but not confirmed
Many ultratrace elements have been suggested as essential, but such claims have usually not been confirmed. Definitive evidence for efficacy comes from the characterization of a biomolecule containing the element with an identifiable and testable function. One problem with identifying efficacy is that some elements are innocuous at low concentrations and are pervasive (examples: silicon and nickel in solid and dust), so proof of efficacy is lacking because deficiencies are difficult to reproduce. Some elements were once thought to have a role with unknown biochemical nature, but the evidence has not always been strong. For example, it was once thought that arsenic was probably essential in mammals, but it seems to be only used by microbes; and while chromium was long thought to be an essential trace element based on rodent models, and was proposed to be involved in glucose and lipid metabolism, more recent studies have conclusively ruled this possibility out. It may still have a role in insulin signalling, but the evidence is not clear, and it only seems to occur at doses not found in normal diets. Boron is essential to plants, but not animals.
Non-essential elements can sometimes appear in the body when they are chemically similar to essential elements (e.g. Rb+ and Cs+ replacing Na+), so that essentiality is not the same thing as uptake by a biological system.
Mineral ecology
Diverse ions are used by animals and microorganisms for the process of mineralizing structures, called biomineralization, used to construct bones, seashells, eggshells, exoskeletons and mollusc shells.
Minerals can be bioengineered by bacteria which act on metals to catalyze mineral dissolution and precipitation. Mineral nutrients are recycled by bacteria distributed throughout soils, oceans, freshwater, groundwater, and glacier meltwater systems worldwide. Bacteria absorb dissolved organic matter containing minerals as they scavenge phytoplankton blooms. Mineral nutrients cycle through this marine food chain, from bacteria and phytoplankton to flagellates and zooplankton, which are then eaten by other marine life. In terrestrial ecosystems, fungi play similar roles to bacteria, mobilizing minerals from matter inaccessible to other organisms and transporting the acquired nutrients to local ecosystems.
| Biology and health sciences | Health and fitness: General | Health |
235248 | https://en.wikipedia.org/wiki/Transmissible%20spongiform%20encephalopathy | Transmissible spongiform encephalopathy | Transmissible spongiform encephalopathies (TSEs), also known as prion diseases, are a group of progressive, incurable, and fatal conditions that are associated with the prion hypothesis and affect the brain and nervous system of many animals, including humans, cattle, and sheep. According to the most widespread hypothesis, they are transmitted by prions, though some other data suggest an involvement of a Spiroplasma infection. Mental and physical abilities deteriorate and many tiny holes appear in the cortex causing it to appear like a sponge when brain tissue obtained at autopsy is examined under a microscope. The disorders cause impairment of brain function which may result in memory loss, personality changes, and abnormal or impaired movement which worsen over time.
TSEs of humans include Creutzfeldt–Jakob disease, Gerstmann–Sträussler–Scheinker syndrome, fatal familial insomnia, and kuru, as well as the recently discovered variably protease-sensitive prionopathy and familial spongiform encephalopathy. Creutzfeldt-Jakob disease itself has four main forms, the sporadic (sCJD), the hereditary/familial (fCJD), the iatrogenic (iCJD) and the variant form (vCJD). These conditions form a spectrum of diseases with overlapping signs and symptoms.
TSEs in non-human mammals include scrapie in sheep, bovine spongiform encephalopathy (BSE) in cattle – popularly known as "mad cow disease" – and chronic wasting disease (CWD) in deer and elk. The variant form of Creutzfeldt–Jakob disease in humans is caused by exposure to bovine spongiform encephalopathy prions.
Unlike other kinds of infectious disease, which are spread by agents with a DNA or RNA genome (such as virus or bacteria), the infectious agent in TSEs is believed to be a prion, thus being composed solely of protein material. Misfolded prion proteins carry the disease between individuals and cause deterioration of the brain. TSEs are unique diseases in that their aetiology may be genetic, sporadic, or infectious via ingestion of infected foodstuffs and via iatrogenic means (e.g., blood transfusion). Most TSEs are sporadic and occur in an animal with no prion protein mutation. Inherited TSE occurs in animals carrying a rare mutant prion allele, which expresses prion proteins that contort by themselves into the disease-causing conformation. Transmission occurs when healthy animals consume tainted tissues from others with the disease. In the 1980s and 1990s, bovine spongiform encephalopathy spread in cattle in an epidemic fashion. This occurred because cattle were fed the processed remains of other cattle, a practice now banned in many countries. In turn, consumption (by humans) of bovine-derived foodstuff which contained prion-contaminated tissues resulted in an outbreak of the variant form of Creutzfeldt–Jakob disease in the 1990s and 2000s.
Prions cannot be transmitted through the air, through touching, or most other forms of casual contact. However, they may be transmitted through contact with infected tissue, body fluids, or contaminated medical instruments. Normal sterilization procedures such as boiling or irradiating materials fail to render prions non-infective. However, treatment with strong, almost undiluted bleach and/or sodium hydroxide, or heating to a minimum of 134 °C, does destroy prions.
Classification
Differences in shape between the different prion protein forms are poorly understood.
Signs and symptoms
The degenerative tissue damage caused by human prion diseases (CJD, GSS, and kuru) is characterised by four features: spongiform change (the presence of many small holes), the death of neurons, astrocytosis (abnormal increase in the number of astrocytes due to the destruction of nearby neurons), and amyloid plaque formation. These features are shared with prion diseases in animals, and the recognition of these similarities prompted the first attempts to transmit a human prion disease (kuru) to a primate in 1966, followed by CJD in 1968 and GSS in 1981. These neuropathological features have formed the basis of the histological diagnosis of human prion diseases for many years, although it was recognized that these changes are enormously variable both from case to case and within the central nervous system in individual cases.
The clinical signs in humans vary, but commonly include personality changes, psychiatric problems such as depression, lack of coordination, and/or an unsteady gait (ataxia). Patients also may experience involuntary jerking movements called myoclonus, unusual sensations, insomnia, confusion, or memory problems. In the later stages of the disease, patients have severe mental impairment (dementia) and lose the ability to move or speak.
Early neuropathological reports on human prion diseases suffered from a confusion of nomenclature, in which the significance of the diagnostic feature of spongiform change was occasionally overlooked. The subsequent demonstration that human prion diseases were transmissible reinforced the importance of spongiform change as a diagnostic feature, reflected in the use of the term "spongiform encephalopathy" for this group of disorders.
Prions appear to be most infectious when in direct contact with affected tissues. For example, Creutzfeldt–Jakob disease has been transmitted to patients taking injections of growth hormone harvested from human pituitary glands, from cadaver dura allografts and from instruments used for brain surgery (Brown, 2000) (prions can survive the "autoclave" sterilization process used for most surgical instruments). It is also believed that dietary consumption of affected animals can cause prions to accumulate slowly, especially when cannibalism or similar practices allow the proteins to accumulate over more than one generation. An example is kuru, which reached epidemic proportions in the mid-20th century in the Fore people of Papua New Guinea, who used to consume their dead as a funerary ritual. Laws in developed countries now ban the use of rendered ruminant proteins in ruminant feed as a precaution against the spread of prion infection in cattle and other ruminants.
Note that not all encephalopathies are caused by prions, as in the cases of PML (caused by the JC virus), CADASIL (caused by abnormal NOTCH3 protein activity), and Krabbe disease (caused by a deficiency of the enzyme galactosylceramidase). Progressive Spongiform Leukoencephalopathy (PSL)—which is a spongiform encephalopathy—is also probably not caused by a prion, although the adulterant that causes it among heroin smokers has not yet been identified. This, combined with the highly variable nature of prion disease pathology, is why a prion disease cannot be diagnosed based solely on a patient's symptoms.
Cause
Genetics
Familial forms of prion disease are caused by inherited mutations in the PRNP gene. Only a small percentage of all cases of prion disease run in families, however. Most cases of prion disease are sporadic, which means they occur in people without any known risk factors or gene mutations. In rare circumstances, prion diseases can also be transmitted by exposure to prion-contaminated tissues or other biological materials obtained from individuals with prion disease.
Five cases of atypical BSE were reported in cattle in the EU, and five more were reported by other countries. A total of 721 cases of scrapie were detected in small ruminants in the 27 Member States and in the United Kingdom in respect of Northern Ireland: 538 in sheep (557 in 2022) and 183 in goats (224 in 2022). Surveillance of TSE in cervids is voluntary in the EU; only Norway confirmed one case of chronic wasting disease (one wild European moose).
The PRNP gene provides the instructions to make a protein called the prion protein (PrP). Under normal circumstances, this protein may be involved in transporting copper into cells. The protein may also be involved in protecting brain cells and helping them communicate.
Protein-only hypothesis
Protein could be the infectious agent, inducing its own replication by causing conformational change of normal cellular PrPC into PrPSc. Evidence for this hypothesis:
Infectivity titre correlates with PrPSc levels. However, this is disputed.
PrPSc is an isomer of PrPC
Denaturing PrP removes infectivity
PrP-null mice cannot be infected
PrPC depletion in the neural system of mice with established neuroinvasive prion infection reverses early spongiosis and behavioural deficits, halts further disease progression and increases lifespan
Multi-component hypothesis
While not containing a nucleic acid genome, prions may be composed of more than just a protein. Purified PrPC appears unable to convert to the infectious PrPSc form, unless other components are added, such as RNA and lipids. These other components, termed cofactors, may form part of the infectious prion, or they may serve as catalysts for the replication of a protein-only prion.
Viral hypothesis
This hypothesis postulates that a yet undiscovered infectious viral agent is the cause of the disease. Although this was once the leading hypothesis, it is now a minority view. Evidence for this hypothesis is as follows:
Brain particle titers purified of PrP retain infectivity.
Brain titers exposed to nucleases reduced infectivity by ≥99%.
Incubation time is comparable to a lentivirus.
Strain variation of different isolates of PrPSc.
Diagnosis
There continues to be a very practical problem with diagnosis of prion diseases, including BSE and CJD. They have an incubation period of months to decades during which there are no symptoms, even though the pathway of converting the normal brain PrP protein into the toxic, disease-related PrPSc form has started. At present, there is virtually no way to detect PrPSc reliably except by examining the brain using neuropathological and immunohistochemical methods after death. Accumulation of the abnormally folded PrPSc form of the PrP protein is a characteristic of the disease, but it is present at very low levels in easily accessible body fluids like blood or urine. Researchers have tried to develop methods to measure PrPSc, but there are still no fully accepted methods for use in materials such as blood.
In 2010, a team from New York described detection of PrPSc even when initially present at only one part in a hundred billion (10−11) in brain tissue. The method combines amplification with a novel technology called Surround Optical Fiber Immunoassay (SOFIA) and some specific antibodies against PrPSc. After amplifying and then concentrating any PrPSc, the samples are labelled with a fluorescent dye using an antibody for specificity and then finally loaded into a micro-capillary tube. This tube is placed in a specially constructed apparatus so that it is totally surrounded by optical fibres to capture all light emitted once the dye is excited using a laser. The technique allowed detection of PrPSc after many fewer cycles of conversion than others have achieved, substantially reducing the possibility of artefacts, as well as speeding up the assay. The researchers also tested their method on blood samples from apparently healthy sheep that went on to develop scrapie. The animals' brains were analysed once any symptoms became apparent. The researchers could therefore compare results from brain tissue and blood taken once the animals exhibited symptoms of the diseases, with blood obtained earlier in the animals' lives, and from uninfected animals. The results showed very clearly that PrPSc could be detected in the blood of animals long before the symptoms appeared.
Treatment
There are currently no known ways to cure or prevent prion disease. Certain medications may slow its progression, but ultimately supportive care is the only option for infected individuals.
Epidemiology
Transmissible spongiform encephalopathies (TSE) are very rare but can reach epidemic proportions. It is very hard to map the spread of the disease due to the difficulty of identifying individual strains of the prions. This means that, if animals at one farm begin to show the disease after an outbreak on a nearby farm, it is very difficult to determine whether it is the same strain affecting both herds—suggesting transmission—or if the second outbreak came from a completely different source.
Classic Creutzfeldt-Jakob disease (CJD) was discovered in 1920. It occurs sporadically over the world but is very rare. It affects about one person per million each year. Typically, the cause is unknown for these cases. It has been found to be passed on genetically in some cases. 250 patients contracted the disease through iatrogenic transmission (from use of contaminated surgical equipment). This was before equipment sterilization was required in 1976, and there have been no other iatrogenic cases since then. In order to prevent the spread of infection, the World Health Organization created a guide to tell health care workers what to do when CJD appears and how to dispose of contaminated equipment. The Centers for Disease Control and Prevention (CDC) have been keeping surveillance on CJD cases, particularly by looking at death certificate information.
Chronic wasting disease (CWD) is a prion disease found in North America in deer and elk. The first case was identified as a fatal wasting syndrome in the 1960s. It was then recognized as a transmissible spongiform encephalopathy in 1978. Surveillance studies showed that CWD was endemic among free-ranging deer and elk in northeastern Colorado, southeastern Wyoming and western Nebraska. It was also discovered that CWD may have been present in a proportion of free-ranging animals decades before its initial recognition. In the United States, the discovery of CWD raised concerns about the transmission of this prion disease to humans. Several apparent cases of CJD were suspected to have resulted from CWD transmission; however, the evidence was lacking and unconvincing.
In the 1980s and 1990s, bovine spongiform encephalopathy (BSE or "mad cow disease") spread in cattle at an epidemic rate. The total estimated number of cattle infected was approximately 750,000 between 1980 and 1996. This occurred because the cattle were fed processed remains of other cattle. Human consumption of these infected cattle then caused an outbreak of the human form, variant CJD. There was a dramatic decline in BSE when feeding bans were put in place. On May 20, 2003, the first case of BSE was confirmed in North America. The source could not be clearly identified, but researchers suspect it came from imported BSE-infected cow meat. In the United States, the USDA created safeguards to minimize the risk of BSE exposure to humans.
Variant Creutzfeldt-Jakob disease (vCJD) was discovered in 1996 in England. There is strong evidence to suggest that vCJD was caused by the same prion as bovine spongiform encephalopathy. A total of 231 cases of vCJD have been reported since it was first discovered. These cases have been found in a total of 12 countries with 178 in the United Kingdom, 27 in France, five in Spain, four in Ireland, four in the United States, three in the Netherlands, three in Italy, two in Portugal, two in Canada, and one each in Japan, Saudi Arabia, and Taiwan.
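The country-by-country vCJD figures above can be tallied to confirm they match the reported total; the numbers below are exactly those given in the text.

```python
# Tally the vCJD case counts listed above and confirm they sum to the
# reported total of 231 across 12 countries.
vcjd_cases = {
    "United Kingdom": 178, "France": 27, "Spain": 5, "Ireland": 4,
    "United States": 4, "Netherlands": 3, "Italy": 3, "Portugal": 2,
    "Canada": 2, "Japan": 1, "Saudi Arabia": 1, "Taiwan": 1,
}

total = sum(vcjd_cases.values())
print(f"{len(vcjd_cases)} countries, {total} cases")  # 12 countries, 231 cases
```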
History
In the 5th century BCE, Hippocrates described a disease like TSE in cattle and sheep, which he believed also occurred in humans. Publius Flavius Vegetius Renatus records cases of a disease with similar characteristics in the 4th and 5th centuries AD. In 1755, an outbreak of scrapie was discussed in the British House of Commons, and the disease may have been present in Britain for some time before that. Although there were unsupported claims in 1759 that the disease was contagious, in general it was thought to be due to inbreeding, and countermeasures appeared to be successful. Early-20th-century experiments failed to show transmission of scrapie between animals, until extraordinary measures were taken such as the intra-ocular injection of infected nervous tissue. No direct link between scrapie and human disease was suspected then or has been found since. TSE was first described in humans by Alfons Maria Jakob in 1921. Daniel Carleton Gajdusek's discovery that kuru was transmitted by cannibalism, accompanied by the finding of scrapie-like lesions in the brains of kuru victims, strongly suggested an infectious basis for TSE. A paradigm shift to a non-nucleic infectious entity was required when the results were validated with an explanation of how a prion protein might transmit spongiform encephalopathy. Not until 1988 was the neuropathology of spongiform encephalopathy properly described in cows. The alarming amplification of BSE in the British cattle herd heightened fear of transmission to humans and reinforced the belief in the infectious nature of TSE. This was confirmed with the identification of a kuru-like disease, called new variant Creutzfeldt–Jakob disease, in humans exposed to BSE. Although the infectious disease model of TSE has been questioned in favour of a prion transplantation model that explains why cannibalism favours transmission, the search for a viral agent was, as of 2007, being continued in some laboratories.
| Biology and health sciences | Prion diseases | Health |
235255 | https://en.wikipedia.org/wiki/Wrench | Wrench | A wrench or spanner is a tool used to provide grip and mechanical advantage in applying torque to turn objects—usually rotary fasteners, such as nuts and bolts—or keep them from turning.
In the UK, Ireland, Australia, and New Zealand spanner is the standard term. The most common shapes are called open-ended spanner and ring spanner. The term wrench is generally used for tools that turn non-fastening devices (e.g. tap wrench and pipe wrench), or may be used for a monkey wrench—an adjustable pipe wrench.
In North American English, wrench is the standard term. The most common shapes are called open-end wrench and box-end wrench. In American English, spanner refers to a specialized wrench with a series of pins or tabs around the circumference. (These pins or tabs fit into the holes or notches cut into the object to be turned). In American commerce, such a wrench may be called a spanner wrench to distinguish it from the British sense of spanner.
Higher quality wrenches are typically made from chromium-vanadium alloy tool steels and are often drop-forged. They are frequently chrome-plated to resist corrosion and for ease of cleaning.
Hinged tools, such as pliers or tongs, are not generally considered wrenches in English, but exceptions are the plumber wrench (pipe wrench in British English) and Mole wrench (sometimes Mole grips in British English).
The word can also be used in slang to describe an unexpected obstacle, for example, "He threw a spanner in the works" (in U.S. English, "monkey wrench").
Etymology
'Wrench' is derived from Middle English wrench, from Old English wrenċ, from Proto-Germanic *wrankiz ("a turning, twisting"). The oldest recorded use dates to 1794.
'Spanner' came into use in the 1630s, referring to the tool for winding the spring of a wheel-lock firearm. From German Spanner (n.), from spannen (v.) ("to join, fasten, extend, connect"), from Proto-Germanic *spannan, from PIE root *(s)pen- ("to draw, stretch, spin").
History
Wrenches and applications using wrenches or devices that needed wrenches, such as pipe clamps and suits of armor, have been noted by historians as far back as the 15th century. Adjustable coach wrenches for the odd-sized nuts of wagon wheels were manufactured in England and exported to North America in the late eighteenth and early nineteenth centuries. The mid 19th century began to see patented wrenches that used a screw for narrowing and widening the jaws, including patented monkey wrenches.
Most box end wrenches are sold as '12-point' because 12-point wrenches fit over both 12-point and 6-point bolts. 12-point wrenches also offer more engagement positions than six-point wrenches. However, 12-point wrenches can cause round-off damage to 6-point bolts because they provide less contact area.
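The engagement-point tradeoff above comes down to simple geometry: a 12-point wrench can re-engage the fastener at smaller swing angles than a 6-point one. A minimal sketch:

```python
# Minimum arc needed to reposition a wrench on a fastener: 360° divided by
# the number of engagement points.
def swing_angle(points: int) -> float:
    """Degrees of swing needed before the wrench can re-engage."""
    return 360.0 / points

print(swing_angle(12), swing_angle(6))  # 30.0 60.0
```

The smaller swing angle is why 12-point wrenches are preferred in confined spaces, at the cost of the reduced contact area noted above.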
Types
Other types of keys
These types of keys are not emically classified as wrenches by English speakers, but they are etically similar in function to wrenches.
| Technology | Hand tools | null |
235287 | https://en.wikipedia.org/wiki/Nitric%20oxide | Nitric oxide | Nitric oxide (nitrogen oxide or nitrogen monoxide) is a colorless gas with the formula NO. It is one of the principal oxides of nitrogen. Nitric oxide is a free radical: it has an unpaired electron, which is sometimes denoted by a dot in its chemical formula (•N=O or •NO). Nitric oxide is also a heteronuclear diatomic molecule, a class of molecules whose study spawned early modern theories of chemical bonding.
An important intermediate in industrial chemistry, nitric oxide forms in combustion systems and can be generated by lightning in thunderstorms. In mammals, including humans, nitric oxide is a signaling molecule in many physiological and pathological processes. It was proclaimed the "Molecule of the Year" in 1992. The 1998 Nobel Prize in Physiology or Medicine was awarded for discovering nitric oxide's role as a cardiovascular signalling molecule. Its impact extends beyond biology, with applications in medicine, such as the development of sildenafil (Viagra), and in industry, including semiconductor manufacturing.
Nitric oxide should not be confused with nitrogen dioxide (NO2), a brown gas and major air pollutant, or with nitrous oxide (N2O), an anesthetic gas.
History
Nitric oxide (NO) was first identified by Joseph Priestley in the late 18th century, originally seen as merely a toxic byproduct of combustion and an environmental pollutant. Its biological significance was later uncovered in the 1980s when researchers Robert F. Furchgott, Louis J. Ignarro, and Ferid Murad discovered its critical role as a vasodilator in the cardiovascular system, a breakthrough that earned them the 1998 Nobel Prize in Physiology or Medicine.
Physical properties
Electronic configuration
The ground state electronic configuration of NO is, in united atom notation: (1σ)²(2σ)²(3σ)²(4σ*)²(5σ)²(1π)⁴(2π*)¹
The first two orbitals are actually the pure atomic 1sO and 1sN orbitals of oxygen and nitrogen respectively, and are therefore usually not noted in the united atom notation. Orbitals noted with an asterisk are antibonding. The ordering of 5σ and 1π according to their binding energies is subject to discussion. Removal of a 1π electron leads to six states whose energies span a range starting at a lower level than that of a 5σ electron and extending to a higher level. This is due to the different orbital momentum couplings between a 1π and a 2π electron.
The lone electron in the 2π orbital makes NO a doublet (X ²Π) in its ground state, whose degeneracy is split in the fine structure by spin–orbit coupling, with a total angular momentum J = 1/2 or J = 3/2.
Dipole
The dipole of NO has been measured experimentally to 0.15740 D and is oriented from O to N (⁻NO⁺) due to the transfer of negative electronic charge from oxygen to nitrogen.
Reactions
With di- and triatomic molecules
Upon condensing to a liquid, nitric oxide dimerizes to dinitrogen dioxide, but the association is weak and reversible. The N–N distance in crystalline NO is 218 pm, nearly twice the N–O distance.
Since the formation of •NO from the elements is endothermic, NO can be decomposed to the elements. Catalytic converters in cars exploit this reaction:
2 •NO → O2 + N2
When exposed to oxygen, nitric oxide converts into nitrogen dioxide:
2 •NO + O2 → 2 •NO2
This reaction is thought to occur via the intermediates ONOO• and the red compound ONOONO.
In water, nitric oxide reacts with oxygen to form nitrous acid (HNO2). The reaction is thought to proceed via the following stoichiometry:
4 •NO + O2 + 2 H2O → 4 HNO2
Nitric oxide reacts with fluorine, chlorine, and bromine to form the nitrosyl halides, such as nitrosyl chloride:
2 •NO + Cl2 → 2 NOCl
With NO2, also a radical, NO combines to form the intensely blue dinitrogen trioxide:
•NO + •NO2 ⇌ ON−NO2
Organic chemistry
The addition of a nitric oxide moiety to another molecule is often referred to as nitrosylation. The Traube reaction is the addition of two equivalents of nitric oxide onto an enolate, giving a diazeniumdiolate (also called a nitrosohydroxylamine). The product can undergo a subsequent retro-aldol reaction, giving an overall process similar to the haloform reaction. For example, nitric oxide reacts with acetone and an alkoxide to form a diazeniumdiolate on each α position, with subsequent loss of methyl acetate as a by-product:
This reaction, which was discovered around 1898, remains of interest in nitric oxide prodrug research. Nitric oxide can also react directly with sodium methoxide, ultimately forming sodium formate and nitrous oxide by way of an N-methoxydiazeniumdiolate.
Coordination complexes
Nitric oxide reacts with transition metals to give complexes called metal nitrosyls. The most common bonding mode of nitric oxide is the terminal linear type (M−NO). Alternatively, nitric oxide can serve as a one-electron pseudohalide. In such complexes, the M−N−O group is characterized by an angle between 120° and 140°. The NO group can also bridge between metal centers through the nitrogen atom in a variety of geometries.
Production and preparation
In commercial settings, nitric oxide is produced by the oxidation of ammonia at 750–900 °C (normally at 850 °C) with platinum as catalyst in the Ostwald process:
4 NH3 + 5 O2 → 4 •NO + 6 H2O
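The Ostwald stoichiometry above fixes the theoretical NO yield per unit of ammonia: 4 mol NH3 give 4 mol NO (a 1:1 mole ratio) and consume 5 mol O2. A worked sketch, assuming complete conversion (which real plants only approach):

```python
# Theoretical yields for 4 NH3 + 5 O2 -> 4 NO + 6 H2O.
# Molar masses in g/mol.
M_NH3, M_NO, M_O2 = 17.03, 30.01, 32.00

def no_yield(mass_nh3_g: float) -> float:
    """Mass of NO (g) from a given mass of NH3 at 100% conversion (1:1 mole ratio)."""
    return mass_nh3_g / M_NH3 * M_NO

def o2_required(mass_nh3_g: float) -> float:
    """Mass of O2 (g) consumed (5 mol O2 per 4 mol NH3)."""
    return mass_nh3_g / M_NH3 * (5 / 4) * M_O2

print(f"1 kg NH3 -> {no_yield(1000):.0f} g NO, consuming {o2_required(1000):.0f} g O2")
```

One kilogram of ammonia thus yields about 1.76 kg of NO at full conversion.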
The uncatalyzed endothermic reaction of oxygen (O2) and nitrogen (N2), which is effected at high temperature (>2000 °C) by lightning has not been developed into a practical commercial synthesis (see Birkeland–Eyde process):
N2 + O2 → 2 •NO
Laboratory methods
In the laboratory, nitric oxide is conveniently generated by reduction of dilute nitric acid with copper:
8 HNO3 + 3 Cu → 3 Cu(NO3)2 + 4 H2O + 2 •NO
An alternative route involves the reduction of nitrous acid in the form of sodium nitrite or potassium nitrite:
2 NaNO2 + 2 NaI + 2 H2SO4 → I2 + 2 Na2SO4 + 2 H2O + 2 •NO
2 NaNO2 + 2 FeSO4 + 3 H2SO4 → Fe2(SO4)3 + 2 NaHSO4 + 2 H2O + 2 •NO
3 KNO2 + KNO3 + Cr2O3 → 2 K2CrO4 + 4 •NO
The iron(II) sulfate route is simple and has been used in undergraduate laboratory experiments. So-called NONOate compounds are also used for nitric oxide generation.
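As a quick sanity check, the copper route above can be verified to be atom-balanced. The helper below is an illustrative sketch only: the formula parser handles a single level of parentheses (enough for Cu(NO3)2), not general chemical nomenclature.

```python
import re
from collections import Counter

def parse(formula: str) -> Counter:
    """Count atoms in a simple formula like 'Cu(NO3)2' (one paren level only)."""
    def expand(m):
        inner, n = m.group(1), int(m.group(2) or 1)
        return inner * n
    flat = re.sub(r"\(([^()]*)\)(\d*)", expand, formula)  # expand (NO3)2 -> NO3NO3
    counts = Counter()
    for elem, n in re.findall(r"([A-Z][a-z]?)(\d*)", flat):
        counts[elem] += int(n or 1)
    return counts

def side(species):
    """Total atom counts for one side, given (coefficient, formula) pairs."""
    total = Counter()
    for coeff, f in species:
        for elem, n in parse(f).items():
            total[elem] += coeff * n
    return total

# 8 HNO3 + 3 Cu -> 3 Cu(NO3)2 + 4 H2O + 2 NO
lhs = side([(8, "HNO3"), (3, "Cu")])
rhs = side([(3, "Cu(NO3)2"), (4, "H2O"), (2, "NO")])
assert lhs == rhs
print("balanced:", dict(lhs))
```

The same `side` comparison can be applied to the nitrite routes listed above.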
Detection and assay
Nitric oxide concentration can be determined using a chemiluminescent reaction involving ozone. A sample containing nitric oxide is mixed with a large quantity of ozone. The nitric oxide reacts with the ozone to produce oxygen and nitrogen dioxide, accompanied with emission of light (chemiluminescence):
•NO + O3 → •NO2 + O2 + hν
which can be measured with a photodetector. The amount of light produced is proportional to the amount of nitric oxide in the sample.
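Because the light output is proportional to the NO concentration, a chemiluminescence analyser is calibrated with a straight line. The sketch below uses made-up calibration numbers purely for illustration; a real instrument is calibrated against certified NO standards.

```python
# Hypothetical calibration: NO concentration (ppb) vs detector signal (counts).
conc = [0.0, 50.0, 100.0, 200.0, 400.0]
signal = [2.0, 103.0, 201.0, 405.0, 802.0]

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

slope, intercept = linear_fit(conc, signal)

def to_conc(counts: float) -> float:
    """Invert the calibration to estimate NO concentration from a reading."""
    return (counts - intercept) / slope

print(f"slope {slope:.3f} counts/ppb; 300 counts -> {to_conc(300):.0f} ppb")
```

With these synthetic data the fitted slope comes out close to 2 counts per ppb, so a 300-count reading maps back to roughly 150 ppb.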
Other methods of testing include electroanalysis (amperometric approach), where ·NO reacts with an electrode to induce a current or voltage change. The detection of NO radicals in biological tissues is particularly difficult due to the short lifetime and concentration of these radicals in tissues. One of the few practical methods is spin trapping of nitric oxide with iron-dithiocarbamate complexes and subsequent detection of the mono-nitrosyl-iron complex with electron paramagnetic resonance (EPR).
A group of fluorescent dye indicators, also available in acetylated form for intracellular measurements, exists. The most common compound is 4,5-diaminofluorescein (DAF-2).
Environmental effects
Acid rain deposition
Nitric oxide reacts with the hydroperoxyl radical (HO2•) to form nitrogen dioxide (NO2), which then can react with a hydroxyl radical (HO•) to produce nitric acid (HNO3):
•NO + HO2• → •NO2 + HO•
•NO2 + HO• → HNO3
Nitric acid, along with sulfuric acid, contributes to acid rain deposition.
Ozone depletion
•NO participates in ozone layer depletion. Nitric oxide reacts with stratospheric ozone to form O2 and nitrogen dioxide:
•NO + O3 → •NO2 + O2
This reaction is also utilized to measure concentrations of •NO in control volumes.
Precursor to NO2
As seen in the acid deposition section, nitric oxide can transform into nitrogen dioxide (this can happen with the hydroperoxyl radical, HO2•, or diatomic oxygen, O2). Symptoms of short-term nitrogen dioxide exposure include nausea, dyspnea and headache. Long-term effects can include impaired immune and respiratory function.
Biological functions
NO is a gaseous signaling molecule. It is a key vertebrate biological messenger, playing a role in a variety of biological processes. It is a bioproduct in almost all types of organisms, including bacteria, plants, fungi, and animal cells.
Nitric oxide, an endothelium-derived relaxing factor (EDRF), is biosynthesized endogenously from L-arginine, oxygen, and NADPH by various nitric oxide synthase (NOS) enzymes. Reduction of inorganic nitrate may also make nitric oxide. One of the main enzymatic targets of nitric oxide is guanylyl cyclase. The binding of nitric oxide to the heme region of the enzyme leads to activation, in the presence of iron. Nitric oxide is highly reactive (having a lifetime of a few seconds), yet diffuses freely across membranes. These attributes make nitric oxide ideal for a transient paracrine (between adjacent cells) and autocrine (within a single cell) signaling molecule. Once nitric oxide is converted to nitrates and nitrites by oxygen and water, cell signaling is deactivated.
The endothelium (inner lining) of blood vessels uses nitric oxide to signal the surrounding smooth muscle to relax, resulting in vasodilation and increasing blood flow. Sildenafil (Viagra) is a drug that uses the nitric oxide pathway. Sildenafil does not produce nitric oxide, but enhances the signals that are downstream of the nitric oxide pathway by protecting cyclic guanosine monophosphate (cGMP) from degradation by cGMP-specific phosphodiesterase type 5 (PDE5) in the corpus cavernosum, allowing for the signal to be enhanced, and thus vasodilation. Another endogenous gaseous transmitter, hydrogen sulfide (H2S) works with NO to induce vasodilation and angiogenesis in a cooperative manner.
Nasal breathing produces nitric oxide within the body, while oral breathing does not.
Occupational safety and health
In the U.S., the Occupational Safety and Health Administration (OSHA) has set the legal limit (permissible exposure limit) for nitric oxide exposure in the workplace as 25 ppm (30 mg/m3) over an 8-hour workday. The National Institute for Occupational Safety and Health (NIOSH) has set a recommended exposure limit (REL) of 25 ppm (30 mg/m3) over an 8-hour workday. At levels of 100 ppm, nitric oxide is immediately dangerous to life and health.
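The paired limits above (25 ppm ≈ 30 mg/m3) can be cross-checked with the standard gas-phase conversion: at 25 °C and 1 atm, mg/m3 = ppm × molar mass / 24.45, where 24.45 L/mol is the molar volume under those conditions.

```python
# Convert an exposure limit in ppm to mg/m3 at 25 °C and 1 atm.
M_NO = 30.01          # g/mol, molar mass of nitric oxide
MOLAR_VOLUME = 24.45  # L/mol at 25 °C, 1 atm

def ppm_to_mg_m3(ppm: float, molar_mass: float = M_NO) -> float:
    return ppm * molar_mass / MOLAR_VOLUME

print(f"25 ppm NO ≈ {ppm_to_mg_m3(25):.1f} mg/m3")  # ≈ 30.7, matching OSHA's 30 mg/m3
```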
Explosion hazard
Liquid nitric oxide is very sensitive to detonation even in the absence of fuel, and can be initiated as readily as nitroglycerin. Detonation of the endothermic liquid oxide close to its boiling point (−152 °C) generated a 100 kbar pulse and fragmented the test equipment. Nitric oxide is the simplest molecule capable of detonation in all three phases. The liquid oxide is sensitive and may explode during distillation, which has been the cause of industrial accidents. Gaseous nitric oxide detonates at about 2300 m/s, but as a solid it can reach a detonation velocity of 6100 m/s.
Potassium hydroxide
Potassium hydroxide is an inorganic compound with the formula KOH, and is commonly called caustic potash.
Along with sodium hydroxide (NaOH), KOH is a prototypical strong base. It has many industrial and niche applications, most of which utilize its caustic nature and its reactivity toward acids. An estimated 700,000 to 800,000 tonnes were produced in 2005. KOH is noteworthy as the precursor to most soft and liquid soaps, as well as numerous potassium-containing chemicals. It is a white solid that is dangerously corrosive.
Properties and structure
KOH exhibits high thermal stability. Because of this high stability and relatively low melting point, it is often melt-cast as pellets or rods, forms that have low surface area and convenient handling properties. These pellets become tacky in air because KOH is hygroscopic. Most commercial samples are ca. 90% pure, the remainder being water and carbonates. Its dissolution in water is strongly exothermic. Concentrated aqueous solutions are sometimes called potassium lyes. Even at high temperatures, solid KOH does not dehydrate readily.
Structure
At higher temperatures, solid KOH crystallizes in the NaCl crystal structure. The OH− group is either rapidly or randomly disordered so that it is effectively a spherical anion of radius 1.53 Å (between Cl− and F− in size). At room temperature, the OH− groups are ordered, and the environment about the K+ centers is distorted, with K+–OH− distances ranging from 2.69 to 3.15 Å, depending on the orientation of the OH group. KOH forms a series of crystalline hydrates, namely the monohydrate KOH·H2O, the dihydrate KOH·2H2O, and the tetrahydrate KOH·4H2O.
Reactions
Solubility and desiccating properties
About 112 g of KOH dissolve in 100 mL water at room temperature, which contrasts with 100 g/100 mL for NaOH. Thus on a molar basis, NaOH is slightly more soluble than KOH. Lower molecular-weight alcohols such as methanol, ethanol, and propanols are also excellent solvents, in which KOH participates in an acid–base equilibrium. In the case of methanol, the potassium methoxide (methylate) forms:
KOH + CH3OH ⇌ CH3OK + H2O
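The molar comparison follows directly from the quoted solubilities and the standard molar masses (KOH ≈ 56.11 g/mol, NaOH = 40.00 g/mol):

```python
# Comparing the quoted solubilities of KOH and NaOH on a molar basis.
M_KOH, M_NAOH = 56.11, 40.00          # g/mol, standard molar masses
sol_koh_g, sol_naoh_g = 112.0, 100.0  # g per 100 mL water at room temperature

mol_koh = sol_koh_g / M_KOH    # ~2.0 mol per 100 mL
mol_naoh = sol_naoh_g / M_NAOH # 2.5 mol per 100 mL
print(mol_naoh > mol_koh)      # True: NaOH is more soluble on a molar basis
```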
Because of its high affinity for water, KOH serves as a desiccant in the laboratory. It is often used to dry basic solvents, especially amines and pyridines.
As a nucleophile in organic chemistry
KOH, like NaOH, serves as a source of the hydroxide ion OH−, a highly nucleophilic anion that attacks polar bonds in both inorganic and organic materials. Aqueous KOH saponifies esters:
KOH + RCO2R′ → RCO2K + R′OH
When R is a long chain, the product is called a potassium soap. This reaction is manifested by the "greasy" feel that KOH gives when touched; fats on the skin are rapidly converted to soap and glycerol.
Molten KOH is used to displace halides and other leaving groups. The reaction is especially useful for aromatic reagents to give the corresponding phenols.
Reactions with inorganic compounds
Complementary to its reactivity toward acids, KOH attacks oxides. Thus, SiO2 is attacked by KOH to give soluble potassium silicates. KOH reacts with carbon dioxide to give potassium bicarbonate:
KOH + CO2 → KHCO3
Manufacture
Historically, KOH was made by adding potassium carbonate to a strong solution of calcium hydroxide (slaked lime). The salt metathesis reaction results in precipitation of solid calcium carbonate, leaving potassium hydroxide in solution:
Ca(OH)2 + K2CO3 → CaCO3 + 2 KOH
Filtering off the precipitated calcium carbonate and boiling down the solution gives potassium hydroxide ("calcined or caustic potash"). This method of producing potassium hydroxide remained dominant until the late 19th century, when it was largely replaced by the current method of electrolysis of potassium chloride solutions. The method is analogous to the manufacture of sodium hydroxide (see chloralkali process):
2 KCl + 2 H2O → 2 KOH + Cl2 + H2
Hydrogen gas forms as a byproduct on the cathode; concurrently, an anodic oxidation of the chloride ion takes place, forming chlorine gas as a byproduct. Separation of the anodic and cathodic spaces in the electrolysis cell is essential for this process.
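The electrolysis stoichiometry implies one mole of KOH per mole of electrons passed. A back-of-the-envelope sketch of the yield, assuming 100% current efficiency (real cells run lower):

```python
# Faraday's law applied to the chloralkali-type electrolysis of KCl:
# one electron of charge corresponds to one formula unit of KOH.
FARADAY = 96485.0  # C per mole of electrons
M_KOH = 56.11      # g/mol

def koh_mass_from_charge(coulombs: float) -> float:
    """Grams of KOH produced for a given charge, at 100% current efficiency."""
    return coulombs / FARADAY * M_KOH

# Example: one hour of electrolysis at 1 kA.
charge = 1000.0 * 3600.0  # coulombs
print(round(koh_mass_from_charge(charge) / 1000, 2))  # -> 2.09 (kg of KOH)
```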
Uses
KOH and NaOH can be used interchangeably for a number of applications, although in industry, NaOH is preferred because of its lower cost.
Catalyst for hydrothermal gasification process
In industry, KOH is a useful catalyst for the hydrothermal gasification process, where it improves both the gas yield and the hydrogen content of the product. For example, the production of coke (fuel) from coal generates large amounts of coking wastewater. To degrade it, supercritical water is used to convert it into a syngas containing carbon monoxide, carbon dioxide, hydrogen, and methane. Pressure swing adsorption can then separate the various gases, and power-to-gas technology can convert them into fuel. The hydrothermal gasification process can also degrade other waste, such as sewage sludge and waste from food factories.
Precursor to other potassium compounds
Many potassium salts are prepared by neutralization reactions involving KOH. The potassium salts of carbonate, cyanide, permanganate, phosphate, and various silicates are prepared by treating either the oxides or the acids with KOH. The high solubility of potassium phosphate is desirable in fertilizers.
Manufacture of soft soaps
The saponification of fats with KOH is used to prepare the corresponding "potassium soaps", which are softer than the more common sodium hydroxide-derived soaps. Because of their softness and greater solubility, potassium soaps require less water to liquefy, and can thus contain more cleaning agent than liquefied sodium soaps.
As an electrolyte
Aqueous potassium hydroxide is employed as the electrolyte in alkaline batteries based on nickel-cadmium, nickel-hydrogen, and manganese dioxide-zinc. Potassium hydroxide is preferred over sodium hydroxide because its solutions are more conductive. The nickel–metal hydride batteries in the Toyota Prius use a mixture of potassium hydroxide and sodium hydroxide. Nickel–iron batteries also use potassium hydroxide electrolyte.
Food industry
In food products, potassium hydroxide acts as a food thickener, pH control agent and food stabilizer. The FDA considers it generally safe as a direct food ingredient when used in accordance with Good Manufacturing Practices. It is known in the E number system as E525.
Niche applications
Like sodium hydroxide, potassium hydroxide attracts numerous specialized applications, virtually all of which rely on its properties as a strong chemical base with its consequent ability to degrade many materials. For example, in a process commonly referred to as "chemical cremation" or "resomation", potassium hydroxide hastens the decomposition of soft tissues, both animal and human, to leave behind only the bones and other hard tissues. Entomologists wishing to study the fine structure of insect anatomy may use a 10% aqueous solution of KOH to apply this process.
In chemical synthesis, the choice between the use of KOH and the use of NaOH is guided by the solubility or keeping quality of the resulting salt.
The corrosive properties of potassium hydroxide make it a useful ingredient in agents and preparations that clean and disinfect surfaces and materials that can themselves resist corrosion by KOH.
KOH is also used for semiconductor chip fabrication (for example anisotropic wet etching).
Potassium hydroxide is often the main active ingredient in chemical "cuticle removers" used in manicure treatments.
Because aggressive bases like KOH damage the cuticle of the hair shaft, potassium hydroxide is used to chemically assist the removal of hair from animal hides. The hides are soaked for several hours in a solution of KOH and water to prepare them for the unhairing stage of the tanning process. This same effect is also used to weaken human hair in preparation for shaving. Preshave products and some shave creams contain potassium hydroxide to force open the hair cuticle and to act as a hygroscopic agent to attract and force water into the hair shaft, causing further damage to the hair. In this weakened state, the hair is more easily cut by a razor blade.
Potassium hydroxide is used to identify some species of fungi. A 3–5% aqueous solution of KOH is applied to the flesh of a mushroom and the researcher notes whether or not the color of the flesh changes. Certain species of gilled mushrooms, boletes, polypores, and lichens are identifiable based on this color-change reaction.
Safety
Potassium hydroxide is a caustic alkali and its solutions range from irritating to skin and other tissue in low concentrations, to highly corrosive in high concentrations. Eyes are particularly vulnerable, and dust or mist is severely irritating to lungs and can cause pulmonary edema. Safety considerations are similar to those of sodium hydroxide.
The caustic effects arise from being highly alkaline, but if potassium hydroxide is neutralised with a non-toxic acid then it becomes a non-toxic potassium salt. It is approved as a food additive under the code E525.
Amenorrhea
Amenorrhea or amenorrhoea is the absence of a menstrual period in a female who has reached reproductive age. Physiological states of amenorrhoea are most commonly seen during pregnancy and lactation (breastfeeding).
Amenorrhoea is a symptom with many potential causes. Primary amenorrhea is defined as the absence of both menarche and secondary sexual characteristics by age 13, or the absence of menarche by age 15 despite normal secondary sexual characteristics. It may be caused by developmental problems, such as the congenital absence of the uterus, failure of the ovary to receive or maintain egg cells, or delay in pubertal development. Secondary amenorrhoea, the cessation of menstrual cycles after menarche, is defined as the absence of menses for three months in a woman with previously normal menstruation, or six months for women with a history of oligomenorrhoea. It is often caused by hormonal disturbances from the hypothalamus and the pituitary gland, premature menopause, intrauterine scar formation, or eating disorders.
Pathophysiology
Although amenorrhea has multiple potential causes, ultimately, it is the result of hormonal imbalance or an anatomical abnormality.
Physiologically, menstruation is controlled by the release of gonadotropin-releasing hormone (GnRH) from the hypothalamus. GnRH acts on the pituitary to stimulate the release of follicle stimulating hormone (FSH) and luteinizing hormone (LH). FSH and LH then act on the ovaries to stimulate the production of estrogen and progesterone which, respectively, control the proliferative and secretory phases of the menstrual cycle. Prolactin also influences the menstrual cycle as it suppresses the release of LH and FSH from the pituitary. Similarly, thyroid hormone also affects the menstrual cycle. Low levels of thyroid hormone stimulate the release of TRH from the hypothalamus, which in turn increases both TSH and prolactin release. This increase in prolactin suppresses the release of LH and FSH through a negative feedback mechanism. Amenorrhea can be caused by any mechanism that disrupts this hypothalamic-pituitary-ovarian axis, whether by hormonal imbalance or by disruption of feedback mechanisms.
Classification
Amenorrhea is classified as either primary or secondary.
Primary amenorrhea
Primary amenorrhoea is the absence of menstruation in a woman by the age of 16. Females who have not reached menarche at 14 and who have no signs of secondary sexual characteristics (thelarche or pubarche) are also considered to have primary amenorrhea. Examples of causes of primary amenorrhea include constitutional delay of puberty, Turner syndrome, and Mayer–Rokitansky–Küster–Hauser (MRKH) syndrome.
Primary amenorrhea may be accompanied by a lack of secondary sexual characteristics, such as the sprouting of pubic and armpit hair, development of the breasts, and definition of the female body structure at the waist and hips.
Secondary amenorrhea
Secondary amenorrhoea is defined as the absence of menstruation for three months in a woman with a history of regular cyclic bleeding or six months in a woman with a history of irregular menstrual periods. Examples of secondary amenorrhea include hypothyroidism, hyperthyroidism, hyperprolactinemia, polycystic ovarian syndrome, primary ovarian insufficiency, and functional hypothalamic amenorrhea.
Causes
Primary amenorrhea
Turner syndrome
Turner syndrome, monosomy 45,X, is a genetic disorder characterized by a missing, or partially missing, X chromosome. Turner syndrome is associated with a wide spectrum of features that vary with each case. However, one common feature of this syndrome is ovarian insufficiency due to gonadal dysgenesis. Most people with Turner syndrome experience ovarian insufficiency within the first few years of life, prior to menarche. Therefore, most patients with Turner syndrome will have primary amenorrhea. However, the incidence of spontaneous puberty varies between 8% and 40%, depending on whether the X chromosome is completely or partially absent.
MRKH
MRKH (Mayer–Rokitansky–Küster–Hauser) syndrome is the second-most common cause of primary amenorrhoea. The syndrome is characterized by Müllerian agenesis. In MRKH Syndrome, the Müllerian ducts develop abnormally and result in the absence of a uterus and cervix. Even though patients with MRKH have functioning ovaries, and therefore have secondary sexual characteristics, they experience primary amenorrhea since there is no functioning uterus.
Other Intersex conditions
Individuals with a female phenotype can present with primary amenorrhea due to complete androgen insensitivity syndrome (CAIS), 5-alpha-reductase 2 deficiency, pure gonadal dysgenesis, 17β-hydroxysteroid dehydrogenase deficiency, and mixed gonadal dysgenesis.
Constitutional delay of puberty
Constitutional delay of puberty is a diagnosis of exclusion that is made when the workup for primary amenorrhea does not reveal another cause. Constitutional delay of puberty is not due to a pathologic cause; it is considered a variant of the timeline of puberty. Although the condition is more common in boys, girls with delayed puberty present with onset of secondary sexual characteristics after the age of 14, as well as menarche after the age of 16. This may be due to genetics, as some cases of constitutional delay of puberty are familial.
Secondary amenorrhea
Breastfeeding
Physiologic amenorrhea is present before menarche, during pregnancy and breastfeeding, and after menopause.
Breastfeeding or lactational amenorrhea is also a common cause of secondary amenorrhoea. Lactational amenorrhea is due to the presence of elevated prolactin and low levels of LH, which suppress ovarian hormone secretion. Breastfeeding typically prolongs postpartum lactational amenorrhoea, and the duration of amenorrhoea varies depending on how often a woman breastfeeds. Due to this reason, breastfeeding has been advocated as a method of family planning, especially in developing countries where access to other methods of contraception may be limited.
Diseases of the thyroid
Disturbances in thyroid hormone regulation are a known cause of menstrual irregularities, including secondary amenorrhea.
Patients with hypothyroidism frequently present with changes in their menstrual cycle. It is hypothesized that this is due to increased TRH, which goes on to stimulate the release of both TSH and prolactin. Increased prolactin inhibits the release of LH and FSH which are needed for ovulation to occur.
Patients with hyperthyroidism may also present with oligomenorrhea or amenorrhea. Sex hormone binding globulin is increased in hyperthyroid states. This, in turn, increases the total levels of testosterone and estradiol. Increased levels of LH and FSH have also been reported in patients with hyperthyroidism.
Hypothalamic and pituitary causes
Changes in the hypothalamic-pituitary axis are a common cause of secondary amenorrhea. GnRH is released from the hypothalamus and stimulates the anterior pituitary to release FSH and LH, which in turn stimulate the ovaries to release estrogen and progesterone. Any pathology in the hypothalamus or pituitary can alter the way this feedback mechanism works and can cause secondary amenorrhea.
Pituitary adenomas are a common cause of amenorrhea. Prolactin-secreting pituitary adenomas cause amenorrhea through hyper-secretion of prolactin, which inhibits FSH and LH release. Other space-occupying pituitary lesions can also cause amenorrhea because compression of the pituitary gland inhibits dopamine, an inhibitor of prolactin.
Polycystic ovary syndrome
Polycystic ovary syndrome (PCOS) is a common endocrine disorder affecting 4–8% of women worldwide. It is characterized by multiple cysts on the ovary, amenorrhea or oligomenorrhea, and increased androgens. Although the exact cause remains unknown, it is hypothesized that increased levels of circulating androgens is what results in secondary amenorrhea. PCOS may also be a cause of primary amenorrhea if androgen excess is present prior to menarche. Although multiple cysts on the ovary are characteristic of the syndrome, this has not been noted to be a cause of the disease.
Low body weight
Women who perform strenuous exercise on a regular basis or lose a significant amount of weight are at risk of developing hypothalamic amenorrhoea. Functional hypothalamic amenorrhoea (FHA) can be caused by stress, weight loss, or excessive exercise. Many women who diet or who exercise at a high level do not take in enough calories to maintain their normal menstrual cycles. The threshold of developing amenorrhoea appears to be dependent on low energy availability rather than absolute weight, because a critical minimum amount of stored, easily mobilized energy is necessary to maintain regular menstrual cycles. Amenorrhoea is often associated with anorexia nervosa and other eating disorders. Relative energy deficiency in sport, formerly known as the female athlete triad, is when a woman experiences amenorrhoea, disordered eating, and osteoporosis.
Energy imbalance and weight loss can disrupt menstrual cycles through several hormonal mechanisms. Weight loss can cause elevations in the hormone ghrelin, which inhibits the hypothalamic-pituitary-ovarian axis. Elevated concentrations of ghrelin alter the amplitude of GnRH pulses, which causes diminished pituitary release of LH and follicle-stimulating hormone (FSH). Low levels of the hormone leptin are also seen in females with low body weight. Like ghrelin, leptin signals energy balance and fat stores to the reproductive axis. Decreased levels of leptin are closely related to low levels of body fat, and correlate with a slowing of GnRH pulsing.
Drug-induced
Certain medications, particularly contraceptive medications, can induce amenorrhoea in a healthy woman. The lack of menstruation usually begins shortly after beginning the medication and can take up to a year to resume after stopping its use. Hormonal contraceptives that contain only progestogen, like the oral contraceptive Micronor, and especially higher-dose formulations, such as the injectable Depo-Provera, commonly induce this side effect. Extended cycle use of combined hormonal contraceptives also allows suppression of menstruation. Patients who stop using combined oral contraceptive pills (COCP) may experience secondary amenorrhoea as a withdrawal symptom. The link is not well understood, as studies have found no difference in hormone levels between women who develop amenorrhoea as a withdrawal symptom following the cessation of COCP use and women who experience secondary amenorrhoea for other reasons. Newer contraceptive pills, which do not have the normal seven days of placebo pills in each cycle, have been shown to increase rates of amenorrhoea in women. Studies show that women are most likely to experience amenorrhoea after one year of treatment with continuous OCP use.
The use of opiates (such as heroin) on a regular basis has also been known to cause amenorrhoea in longer term users.
Anti-psychotic drugs, which are commonly used to treat schizophrenia, have been known to cause amenorrhoea as well. Research suggests that anti-psychotic medications affect levels of prolactin, insulin, FSH, LH, and testosterone. Recent research suggests that adding a dosage of Metformin to an anti-psychotic drug regimen can restore menstruation. Metformin has been shown to decrease resistance to the hormone insulin, as well as levels of prolactin, testosterone, and luteinizing hormone (LH).
Primary ovarian insufficiency
Primary ovarian insufficiency (POI) affects 1% of females and is defined as the loss of ovarian function before the age of 40. Although the cause of POI can vary, it has been linked to chromosomal abnormalities, chemotherapy, and autoimmune conditions. Hormone levels in POI are similar to menopause and are categorized by low estradiol and high levels of gonadotropins. Since the pathogenesis of POI involves the depletion of ovarian reserve, restoration of menstrual cycles typically does not occur in this form of secondary amenorrhea.
Diagnosis
Primary amenorrhoea
Primary amenorrhoea can be diagnosed in female children by age 14 if no secondary sex characteristics, such as enlarged breasts and body hair, are present. In the absence of secondary sex characteristics, the most common cause of amenorrhoea is low levels of FSH and LH caused by a delay in puberty. Gonadal dysgenesis, often associated with Turner syndrome, or premature ovarian failure may also be to blame. If secondary sex characteristics are present, but menstruation is not, primary amenorrhoea can be diagnosed by age 16.
Evaluation of primary amenorrhea begins with a pregnancy test, prolactin, FSH, LH, and TSH levels. Abnormal TSH levels prompt evaluation for hyper- and hypo-thyroidism with additional thyroid function tests. Elevated prolactin levels prompt evaluation of the pituitary with an MRI to assess for any masses or malignancies. A pelvic ultrasound can also be obtained in the initial evaluation. If a uterus is not present on ultrasound, karyotype analysis and testosterone levels are obtained to assess for MRKH or androgen insensitivity syndrome. If a uterus is present, LH and FSH levels are used to make a diagnosis. Low levels of LH and FSH suggest delayed puberty or functional hypothalamic amenorrhea. Elevated levels of FSH and LH suggest primary ovarian insufficiency, typically due to Turner syndrome. Normal levels of FSH and LH can suggest an anatomical outflow obstruction.
Secondary amenorrhea
Secondary amenorrhea's most common and most easily diagnosable causes are pregnancy, thyroid disease, and hyperprolactinemia. A pregnancy test is a common first step for diagnosis.
Similar to primary amenorrhea, evaluation of secondary amenorrhea also begins with a pregnancy test, prolactin, FSH, LH, and TSH levels. A pelvic ultrasound is also obtained. Abnormal TSH should prompt a thyroid workup with a full thyroid function test panel. Elevated prolactin should be followed with an MRI to look for masses. If LH and FSH are elevated, menopause or primary ovarian insufficiency should be considered. Normal or low levels of FSH and LH prompt further evaluation with patient history and the physical exam. Testosterone, DHEA-S, and 17-hydroxyprogesterone levels should be obtained if there is evidence of excess androgens, such as hirsutism or acne. 17-hydroxyprogesterone is elevated in congenital adrenal hyperplasia. Elevated testosterone and amenorrhea can suggest PCOS. Elevated androgens can also be present in ovarian or adrenal tumors, so additional imaging may also be needed. A history of disordered eating or excessive exercise should raise concern for hypothalamic amenorrhea. Headache, vomiting, and vision changes can be signs of a tumor and need evaluation with MRI. Finally, a history of gynecologic procedures should lead to evaluation of Asherman syndrome with a hysteroscopy or progesterone withdrawal bleeding test.
Treatment
Treatment for amenorrhea varies based on the underlying condition. Treatment not only focuses on restoring menstruation, if possible, but also preventing additional complications associated with the underlying cause of amenorrhea.
Primary amenorrhea
In primary amenorrhea, the goal is to continue pubertal development, if possible. For example, most patients with Turner syndrome will be infertile due to gonadal dysgenesis. However, patients are frequently prescribed growth hormone therapy and estrogen supplementation to achieve taller stature and prevent osteoporosis. In other cases, such as MRKH, hormones do not need to be prescribed since the ovaries are able to function normally. Patients with constitutional delay of puberty may be monitored by an endocrinologist, but definitive treatment may not be needed as there will eventually be progression to normal puberty.
Secondary amenorrhea
Treatment for secondary amenorrhea varies greatly based on the root cause. Functional hypothalamic amenorrhoea is typically treated by weight gain through increased calorie intake and decreased expenditure. Multidisciplinary treatment with monitoring from a physician, dietitian, and mental health counselor is recommended, along with support from family, friends, and coaches. Although oral contraceptives can cause menses to return, they should not be the initial treatment, as they can mask the underlying problem and allow other effects of the eating disorder, like osteoporosis, to continue to develop.
Patients with hyperprolactinemia are often treated with dopamine agonists to reduce the levels of prolactin and restore menstruation. Surgery and radiation may also be considered if dopamine agonists, such as cabergoline and bromocriptine, are ineffective. Once prolactin levels are lowered, the resulting secondary amenorrhea is typically resolved. Similarly, treatment of thyroid abnormalities often resolves the associated amenorrhea. For example, administration of thyroxine in patients with low thyroid levels restored normal menstruation in a majority of patients.
Although there is currently no definitive treatment for PCOS, various interventions are used to restore more frequent ovulation in patients. Weight loss and exercise have been associated with a return of ovulation in patients with PCOS due to normalization of androgen levels. Metformin has also been recently studied to regularize menstrual cycles in patients with PCOS. Although the exact mechanism still remains unknown, it is hypothesized that this is due to metformin's ability to increase the body's sensitivity to insulin. Anti-androgen medications, such as spironolactone, can also be used to lower body androgen levels and restore menstruation. Oral contraceptive pills are also often prescribed to patients with secondary amenorrhea due to PCOS in order to regularize the menstrual cycle, although this is due to the suppression of ovulation.
Sardine
Sardine and pilchard are common names for various species of small, oily forage fish in the herring suborder Clupeoidei. The term 'sardine' was first used in English during the early 15th century; a somewhat dubious etymology says it comes from the Italian island of Sardinia, around which sardines were once supposedly abundant.
The terms 'sardine' and 'pilchard' are not precise, and what is meant depends on the region. The United Kingdom's Sea Fish Industry Authority, for example, classifies sardines as young pilchards. One criterion classifies the shorter fish as sardines and the larger fish as pilchards.
The FAO/WHO Codex standard for canned sardines cites 21 species that may be classed as sardines. FishBase, a database of information about fish, calls at least six species pilchards, over a dozen just sardines, and many more with the two basic names qualified by various adjectives.
Etymology
The word 'sardine' first appeared in English in the 15th century, a loanword from French sardine, derived from Latin sardina, from Ancient Greek sardínē or sardĩnos, possibly from Greek Sardō 'Sardinia'. Athenaios quotes a fragmentary passage from Aristotle mentioning the fish sardĩnos, referring to the sardine or pilchard. However, Sardinia is over 1000 km from Athens, so it seems "hardly probable that the Greeks would have obtained fish from so far as Sardinia at a time relatively so early as that of Aristotle."
The flesh of some sardines or pilchards is a reddish-brown colour similar to some varieties of red sardonyx or sardine stone; this word derives from sardĩon, with a root meaning 'red', and is possibly cognate with Sardis, the capital of ancient Lydia (now western Turkey), where it was obtained. However, the name may refer to the reddish-pink colour of the gemstone sard (or carnelian) known to the ancients.
The phrase "packed like sardines" (in a tin) is recorded from 1911. The phrase "packed up like sardines" appears in The Mirror of Literature, Amusement, and Instruction from 1841, and is a translation of "encaissés comme des sardines", which appears in a French work from 1829. Other early appearances of the idiom are "packed together ... like sardines in a tin-box" (1845), and "packed ... like sardines in a can" (1854).
Genera
Sardines occur in several genera.
Genus Dussumieria
Rainbow sardine (Dussumieria acuta)
Slender rainbow sardine (Dussumieria elopsoides)
Genus Escualosa
Slender white sardine (Escualosa elongata)
White sardine (Escualosa thoracata)
Genus Sardina
European pilchard (true sardine) (Sardina pilchardus)
Genus Sardinella
Goldstripe sardinella (Sardinella gibbosa)
Indian oil sardine (Sardinella longiceps)
Round sardinella (Sardinella aurita)
Freshwater sardine (Sardinella tawilis)
Marquesan sardinella (Sardinella marquesensis)
Genus Sardinops
South American pilchard (Sardinops sagax)
Although they are not true sardines, sprats are sometimes marketed as sardines. For example, the European sprat, Sprattus sprattus, is sometimes marketed as the 'brisling sardine'.
Species
Feeding
Sardines feed almost exclusively on zooplankton (lit. "animal plankton") and congregate wherever this is abundant.
Fisheries
Typically, sardines are caught with encircling nets, particularly purse seines. Many modifications of encircling nets are used, including traps or fishing weirs. The latter are stationary enclosures composed of stakes into which schools of sardines are diverted as they swim along the coast. The fish are caught mainly at night, when they approach the surface to feed on plankton. After harvesting, the fish are submerged in brine while they are transported to shore.
Sardines are commercially fished for a variety of uses: for bait; for immediate consumption; for drying, salting, or smoking; and for reduction into fish meal or oil. The chief use of sardines is for human consumption, but fish meal is used as animal feed, while sardine oil has many uses, including the manufacture of paint, varnish, and linoleum.
Food and nutrition
Sardines are commonly consumed by humans as a source of protein, omega-3 fatty acids, and micronutrients. Sardines may be grilled, pickled, smoked, or preserved in cans.
Canned sardines are 67% water, 21% protein, 10% fat, and contain negligible carbohydrates (table). In a reference amount, canned sardines supply 185 calories of food energy and are a rich source (20% or more of the Daily Value, DV) of vitamin B12 (375% DV), phosphorus (29% DV), and niacin (26% DV) (table). Sardines are a moderate source (10–19% DV) of the B vitamins riboflavin and pantothenic acid, and several dietary minerals, including calcium and sodium (18% DV each) (table). A 100 g serving of canned sardines supplies about 7 g combined of monounsaturated and polyunsaturated fatty acids (USDA source in table).
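The quoted macronutrient composition can be cross-checked against the quoted energy value with the general Atwater factors (4 kcal/g for protein and carbohydrate, 9 kcal/g for fat). The factors are an approximation, so the estimate lands near, not exactly on, the 185 calories quoted above:

```python
# Atwater-factor estimate of food energy from the quoted composition
# (per reference amount): 21 g protein, 10 g fat, negligible carbohydrate.
protein_g, fat_g, carb_g = 21.0, 10.0, 0.0
kcal = protein_g * 4 + fat_g * 9 + carb_g * 4  # general Atwater factors
print(kcal)  # -> 174.0, in the same range as the quoted 185 kcal
```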
Because they are low in the food chain, sardines are low in contaminants, such as mercury, relative to other fish commonly eaten by humans, and have a relatively low impact in production of greenhouse gases.
History
History of sardine fishing in the UK
Pilchard fishing and processing became a thriving industry in Cornwall, England from around 1750 to around 1880, after which it went into decline. Catches varied from year to year, and in 1871, the catch was 47,000 hogsheads, while in 1877, only 9,477 hogsheads. A hogshead contained 2,300 to 4,000 pilchards, and when filled with pressed pilchards, weighed 476 lbs. The pilchards were mostly exported to Roman Catholic countries such as Italy and Spain, where they are known as fermades. The chief market for the oil was Bristol, where it was used on machinery.
Since 1997, sardines from Cornwall have been sold as 'Cornish sardines', and since March 2010, under EU law, Cornish sardines have Protected Geographical Status. The industry has featured in numerous works of art, particularly by Stanhope Forbes and other Newlyn School artists.
The traditional "Toast to Pilchards" refers to the lucrative export of the fish to Catholic Europe:
Here's health to the Pope, may he live to repent
And add just six months to the term of his Lent
And tell all his vassals from Rome to the Poles,
There's nothing like pilchards for saving their souls!
History of sardine fishing in the United States
In the United States, the sardine canning industry peaked in the 1950s. Since then, the industry has been on the decline. The canneries in Monterey Bay, in what was known as Cannery Row in Monterey County, California (where John Steinbeck's novel of the same name was set), failed in the mid-1950s. The last large sardine cannery in the United States, the Stinson Seafood plant in Prospect Harbor, Maine, closed its doors on 15 April 2010 after 135 years in operation.
In April 2015 the Pacific Fishery Management Council voted to direct NOAA Fisheries Service to halt the current commercial season in Oregon, Washington and California, because of a dramatic collapse in Pacific sardine stocks. The ban affected about 100 fishing boats with sardine permits, although far fewer were actively fishing at the time. The season normally would end 30 June. The ban was expected to last for more than a year, and was still in place.
In popular culture
The manner in which sardines can be packed in a can has led to the popular English language saying "packed like sardines", which is used metaphorically to describe situations where people or objects are crowded closely together.
'Sardines' is also the name of a children's game, where one person hides and each successive person who finds the hidden one packs into the same space until only one is left out, who becomes the next one to hide.
Among the residents of the Mediterranean city of Marseille, the local tendency to exaggerate is linked to a folk tale about a sardine that supposedly blocked the city's port in the 18th century. It was actually blocked by a ship called the Sartine.
Gallery
Random walk

In mathematics, a random walk, sometimes known as a drunkard's walk, is a stochastic process that describes a path that consists of a succession of random steps on some mathematical space.
An elementary example of a random walk is the random walk on the integer number line which starts at 0, and at each step moves +1 or −1 with equal probability. Other examples include the path traced by a molecule as it travels in a liquid or a gas (see Brownian motion), the search path of a foraging animal, or the price of a fluctuating stock and the financial status of a gambler. Random walks have applications to engineering and many scientific fields including ecology, psychology, computer science, physics, chemistry, biology, economics, and sociology. The term random walk was first introduced by Karl Pearson in 1905.
Realizations of random walks can be obtained by Monte Carlo simulation.
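As a concrete illustration (a minimal Python sketch added here, not part of the original text; the function name is chosen for this example), one Monte Carlo realization can be generated like this:

```python
import random

def simple_random_walk(n, rng):
    """Generate one Monte Carlo realization of the simple random walk
    on the integers: start at 0, step +1 or -1 with equal probability."""
    position, path = 0, [0]
    for _ in range(n):
        position += rng.choice((-1, 1))
        path.append(position)
    return path

walk = simple_random_walk(10, random.Random(0))
# `walk` holds 11 positions: the start plus one entry per step,
# and consecutive entries always differ by exactly 1.
```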
Lattice random walk
A popular random walk model is that of a random walk on a regular lattice, where at each step the location jumps to another site according to some probability distribution. In a simple random walk, the location can only jump to neighboring sites of the lattice, forming a lattice path. In a simple symmetric random walk on a locally finite lattice, the probabilities of the location jumping to each one of its immediate neighbors are the same. The best-studied example is the random walk on the d-dimensional integer lattice (sometimes called the hypercubic lattice) .
If the state space is limited to finite dimensions, the random walk model is called a simple bordered symmetric random walk, and the transition probabilities depend on the location of the state because on margin and corner states the movement is limited.
One-dimensional random walk
An elementary example of a random walk is the random walk on the integer number line, , which starts at 0 and at each step moves +1 or −1 with equal probability.
This walk can be illustrated as follows. A marker is placed at zero on the number line, and a fair coin is flipped. If it lands on heads, the marker is moved one unit to the right. If it lands on tails, the marker is moved one unit to the left. After five flips, the marker could now be on −5, −3, −1, 1, 3, or 5. With five flips, three heads and two tails, in any order, it will land on 1. There are 10 ways of landing on 1 (by flipping three heads and two tails), 10 ways of landing on −1 (by flipping three tails and two heads), 5 ways of landing on 3 (by flipping four heads and one tail), 5 ways of landing on −3 (by flipping four tails and one head), 1 way of landing on 5 (by flipping five heads), and 1 way of landing on −5 (by flipping five tails). See the figure below for an illustration of the possible outcomes of 5 flips.
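These counts can be verified by brute force; the short Python sketch below (an illustration added here, not from the original text) enumerates all 2^5 equally likely flip sequences and tallies where the marker ends up:

```python
from itertools import product
from collections import Counter

# Count, over all 2**5 flip sequences of +1/-1 steps, where the
# marker ends up after five flips.
endpoints = Counter(sum(steps) for steps in product((1, -1), repeat=5))

print(sorted(endpoints.items()))
# [(-5, 1), (-3, 5), (-1, 10), (1, 10), (3, 5), (5, 1)]
```

The counts match the text: 10 ways each to land on ±1, 5 ways each for ±3, and a single way each for ±5.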
To define this walk formally, take independent random variables Z1, Z2, …, where each variable is either 1 or −1, with a 50% probability for either value, and set S0 = 0 and Sn = Z1 + Z2 + ⋯ + Zn. The series {Sn} is called the simple random walk on the integers. This series (the sum of the sequence of −1s and 1s) gives the net distance walked, if each part of the walk is of length one.
The expectation E(Sn) of Sn is zero. That is, the mean of all coin flips approaches zero as the number of flips increases. This follows by the finite additivity property of expectation: E(Sn) = E(Z1) + E(Z2) + ⋯ + E(Zn) = 0.
A similar calculation, using the independence of the random variables and the fact that E(Zj²) = 1, shows that: E(Sn²) = n.
This hints that E(|Sn|), the expected translation distance after n steps, should be of the order of √n. In fact, E(|Sn|) approaches √(2n/π) as n grows.
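A quick Monte Carlo check of this square-root growth (an illustrative sketch with arbitrary sample sizes, added here): quadrupling the number of steps should roughly double the mean absolute endpoint.

```python
import random

def mean_abs_endpoint(n, trials, rng):
    """Estimate the expected absolute endpoint of an n-step simple
    random walk by averaging over many simulated walks."""
    return sum(abs(sum(rng.choice((-1, 1)) for _ in range(n)))
               for _ in range(trials)) / trials

rng = random.Random(2)
m100 = mean_abs_endpoint(100, 4000, rng)
m400 = mean_abs_endpoint(400, 4000, rng)
ratio = m400 / m100
# square-root growth predicts a ratio near sqrt(400/100) = 2
```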
To answer the question of how many times will a random walk cross a boundary line if permitted to continue walking forever, a simple random walk on will cross every point an infinite number of times. This result has many names: the level-crossing phenomenon, recurrence or the gambler's ruin. The reason for the last name is as follows: a gambler with a finite amount of money will eventually lose when playing a fair game against a bank with an infinite amount of money. The gambler's money will perform a random walk, and it will reach zero at some point, and the game will be over.
If a and b are positive integers, then the expected number of steps until a one-dimensional simple random walk starting at 0 first hits b or −a is ab. The probability that this walk will hit b before −a is a/(a + b), which can be derived from the fact that simple random walk is a martingale. These expectations and hitting probabilities can also be computed in the general one-dimensional random walk Markov chain.
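Both facts can be checked empirically. The sketch below (an illustration added here, with arbitrary choices of a, b, and trial count) simulates walks until they leave the interval (−a, b):

```python
import random

def exit_interval(a, b, rng):
    """Run a simple random walk from 0 until it hits -a or b;
    report the number of steps taken and whether b was hit first."""
    pos, steps = 0, 0
    while -a < pos < b:
        pos += rng.choice((-1, 1))
        steps += 1
    return steps, pos == b

rng = random.Random(0)
a, b, trials = 3, 5, 20000
runs = [exit_interval(a, b, rng) for _ in range(trials)]
mean_steps = sum(s for s, _ in runs) / trials   # theory: a*b = 15
frac_hit_b = sum(h for _, h in runs) / trials   # theory: a/(a+b) = 0.375
```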
Some of the results mentioned above can be derived from properties of Pascal's triangle. The number of different walks of n steps where each step is +1 or −1 is 2n. For the simple random walk, each of these walks is equally likely.
In order for Sn to be equal to a number k, it is necessary and sufficient that the number of +1s in the walk exceeds the number of −1s by k. It follows that +1 must appear (n + k)/2 times among the n steps of a walk, hence the number of walks which satisfy Sn = k equals the number of ways of choosing (n + k)/2 elements from an n-element set, denoted C(n, (n + k)/2). For this to have meaning, it is necessary that n + k be an even number, which implies n and k are either both even or both odd. Therefore, the probability that Sn = k is equal to C(n, (n + k)/2) / 2^n. By representing entries of Pascal's triangle in terms of factorials and using Stirling's formula, one can obtain good estimates for these probabilities for large values of n.
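The binomial formula for these probabilities is straightforward to code (a sketch added here; the function name `prob_endpoint` is chosen for this example):

```python
from math import comb

def prob_endpoint(n, k):
    """P(S_n = k) for the simple symmetric walk: C(n, (n+k)/2) / 2**n,
    and 0 when n + k is odd or |k| > n."""
    if (n + k) % 2 != 0 or abs(k) > n:
        return 0.0
    return comb(n, (n + k) // 2) / 2 ** n

print(prob_endpoint(5, 1))   # 0.3125, i.e. 10/32
print(prob_endpoint(5, 2))   # 0.0 (wrong parity)
```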
If movement is confined to the nonnegative integers for brevity (the walk is not allowed to pass below zero), the number of ways in which such a walk of five flips can land on each of the numbers 0 through 5 is {0, 5, 0, 4, 0, 1}.
This relation with Pascal's triangle is demonstrated for small values of n. At zero turns, the only possibility will be to remain at zero. However, at one turn, there is one chance of landing on −1 or one chance of landing on 1. At two turns, a marker at 1 could move to 2 or back to zero. A marker at −1, could move to −2 or back to zero. Therefore, there is one chance of landing on −2, two chances of landing on zero, and one chance of landing on 2.
The central limit theorem and the law of the iterated logarithm describe important aspects of the behavior of simple random walks on . In particular, the former entails that as n increases, the probabilities (proportional to the numbers in each row) approach a normal distribution.
To be precise, knowing that P(Sn = k) = C(n, (n + k)/2) / 2^n, and using Stirling's formula one has, for large n,
P(Sn = k) ≈ √(2/(πn)) · exp(−k²/(2n)).
Fixing the scaling k = ⌊x√n⌋ for x fixed, and using the expansion log(1 + t) ≈ t − t²/2 when t vanishes, it follows that
P(Sn = ⌊x√n⌋) ≈ √(2/(πn)) · exp(−x²/2);
taking the limit n → ∞ (and observing that 2/√n corresponds to the spacing of the scaling grid) one finds the Gaussian density f(x) = (1/√(2π)) · exp(−x²/2). Indeed, for an absolutely continuous random variable X with density f it holds that P(X ∈ [x, x + dx]) ≈ f(x) dx, with dx corresponding to an infinitesimal spacing.
As a direct generalization, one can consider random walks on crystal lattices (infinite-fold abelian covering graphs over finite graphs). Actually it is possible to establish the central limit theorem and large deviation theorem in this setting.
As a Markov chain
A one-dimensional random walk can also be looked at as a Markov chain whose state space is given by the integers i = 0, ±1, ±2, …. For some number p satisfying 0 < p < 1, the transition probabilities (the probability Pi,j of moving from state i to state j) are given by Pi,i+1 = p and Pi,i−1 = 1 − p, with all other transition probabilities equal to zero.
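Written as code, the transition kernel is only a few lines (a minimal sketch added here; the function name is illustrative):

```python
def transition(i, j, p):
    """Transition probability P(i, j) of the one-dimensional walk:
    p to step right, 1 - p to step left, 0 otherwise."""
    if j == i + 1:
        return p
    if j == i - 1:
        return 1 - p
    return 0.0

# One row of the kernel for p = 0.25: mass only on the two neighbors.
row = [transition(0, j, 0.25) for j in (-2, -1, 0, 1, 2)]
# [0.0, 0.75, 0.0, 0.25, 0.0] -- each row of the kernel sums to 1
```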
Heterogeneous generalization
The heterogeneous random walk draws in each time step a random number that determines the local jumping probabilities and then a random number that determines the actual jump direction. The main question is the probability of staying in each of the various sites after t jumps, and in the limit of this probability when t is very large.
Higher dimensions
In higher dimensions, the set of randomly walked points has interesting geometric properties. In fact, one gets a discrete fractal, that is, a set which exhibits stochastic self-similarity on large scales. On small scales, one can observe "jaggedness" resulting from the grid on which the walk is performed. The trajectory of a random walk is the collection of points visited, considered as a set with disregard to when the walk arrived at the point. In one dimension, the trajectory is simply all points between the minimum height and the maximum height the walk achieved (both are, on average, on the order of √n).
To visualize the two-dimensional case, one can imagine a person walking randomly around a city. The city is effectively infinite and arranged in a square grid of sidewalks. At every intersection, the person randomly chooses one of the four possible routes (including the one originally travelled from). Formally, this is a random walk on the set of all points in the plane with integer coordinates.
To answer the question of the person ever getting back to the original starting point of the walk, this is the 2-dimensional equivalent of the level-crossing problem discussed above. In 1921 George Pólya proved that the person almost surely would in a 2-dimensional random walk, but for 3 dimensions or higher, the probability of returning to the origin decreases as the number of dimensions increases. In 3 dimensions, the probability decreases to roughly 34%. The mathematician Shizuo Kakutani was known to refer to this result with the following quote: "A drunk man will find his way home, but a drunk bird may get lost forever".
The probability of recurrence can in general be derived using generating functions or a Poisson process.
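The dimension dependence is visible in simulation. The sketch below (added here for illustration) uses a finite step horizon, so the two-dimensional estimate undershoots the true value of 1, but it still sits well above the three-dimensional one:

```python
import random

def returns_to_origin(dim, max_steps, rng):
    """Does a simple walk on the dim-dimensional integer lattice
    revisit the origin within max_steps steps?"""
    pos = [0] * dim
    for _ in range(max_steps):
        axis = rng.randrange(dim)          # pick a coordinate direction
        pos[axis] += rng.choice((-1, 1))   # step +1 or -1 along it
        if not any(pos):                   # back at the origin?
            return True
    return False

rng = random.Random(1)
trials, horizon = 2000, 500
p2d = sum(returns_to_origin(2, horizon, rng) for _ in range(trials)) / trials
p3d = sum(returns_to_origin(3, horizon, rng) for _ in range(trials)) / trials
# p2d keeps growing as the horizon grows (recurrence);
# p3d stays near the ~34% figure quoted above
```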
Another variation of this question which was also asked by Pólya is: "if two people leave the same starting point, then will they ever meet again?" It can be shown that the difference between their locations (two independent random walks) is also a simple random walk, so they almost surely meet again in a 2-dimensional walk, but for 3 dimensions and higher the probability decreases with the number of the dimensions. Paul Erdős and Samuel James Taylor also showed in 1960 that for dimensions less than or equal to 4, two independent random walks starting from any two given points have infinitely many intersections almost surely, but for dimensions 5 and higher, they almost surely intersect only finitely often.
The asymptotic distribution of the distance from the origin for a two-dimensional random walk, as the number of steps increases, is given by a Rayleigh distribution. With the step length assumed to be 1 and N the total number of steps, the probability density of finding the walker at radius r from the origin is P(r) = (2r/N) · exp(−r²/N).
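This can be checked in simulation (a sketch added here, using continuous uniformly random step directions and arbitrary parameters): for unit steps the squared endpoint distance should average N, as the Rayleigh form implies.

```python
import math
import random

# Sample endpoints of 2D walks with unit-length steps in random
# directions; the radius r is asymptotically Rayleigh with E[r^2] = N.
rng = random.Random(5)
N, walks = 200, 4000
sq_radii = []
for _ in range(walks):
    x = y = 0.0
    for _ in range(N):
        theta = rng.uniform(0.0, 2.0 * math.pi)
        x += math.cos(theta)
        y += math.sin(theta)
    sq_radii.append(x * x + y * y)

mean_sq = sum(sq_radii) / walks   # should be close to N = 200
```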
Relation to Wiener process
A Wiener process is a stochastic process with similar behavior to Brownian motion, the physical phenomenon of a minute particle diffusing in a fluid. (Sometimes the Wiener process is called "Brownian motion", although this is strictly speaking a confusion of a model with the phenomenon being modeled.)
A Wiener process is the scaling limit of random walk in dimension 1. This means that if there is a random walk with very small steps, there is an approximation to a Wiener process (and, less accurately, to Brownian motion). To be more precise, if the step size is ε, one needs to take a walk of length L/ε2 to approximate a Wiener length of L. As the step size tends to 0 (and the number of steps increases proportionally), random walk converges to a Wiener process in an appropriate sense. Formally, if B is the space of all paths of length L with the maximum topology, and if M is the space of measure over B with the norm topology, then the convergence is in the space M. Similarly, a Wiener process in several dimensions is the scaling limit of random walk in the same number of dimensions.
A random walk is a discrete fractal (a function with integer dimensions; 1, 2, ...), but a Wiener process trajectory is a true fractal, and there is a connection between the two. For example, take a random walk until it hits a circle of radius r times the step length. The average number of steps it performs is r². This fact is the discrete version of the fact that a Wiener process walk is a fractal of Hausdorff dimension 2.
In two dimensions, the average number of points the same random walk has on the boundary of its trajectory is r^(4/3). This corresponds to the fact that the boundary of the trajectory of a Wiener process is a fractal of dimension 4/3, a fact predicted by Mandelbrot using simulations but proved only in 2000 by Lawler, Schramm and Werner.
A Wiener process enjoys many symmetries a random walk does not. For example, a Wiener process walk is invariant to rotations, but the random walk is not, since the underlying grid is not (random walk is invariant to rotations by 90 degrees, but Wiener processes are invariant to rotations by, for example, 17 degrees too). This means that in many cases, problems on a random walk are easier to solve by translating them to a Wiener process, solving the problem there, and then translating back. On the other hand, some problems are easier to solve with random walks due to its discrete nature.
Random walk and Wiener process can be coupled, namely manifested on the same probability space in a dependent way that forces them to be quite close. The simplest such coupling is the Skorokhod embedding, but there exist more precise couplings, such as Komlós–Major–Tusnády approximation theorem.
The convergence of a random walk toward the Wiener process is controlled by the central limit theorem, and by Donsker's theorem. For a particle in a known fixed position at t = 0, the central limit theorem tells us that after a large number of independent steps in the random walk, the walker's position is distributed according to a normal distribution of total variance:
σ² = (t/δt) · ε²,
where t is the time elapsed since the start of the random walk, ε is the size of a step of the random walk, and δt is the time elapsed between two successive steps.
This corresponds to the Green's function of the diffusion equation that controls the Wiener process, which suggests that, after a large number of steps, the random walk converges toward a Wiener process.
In 3D, the variance corresponding to the Green's function of the diffusion equation is:
σ² = 6Dt.
By equalizing this quantity with the variance associated to the position of the random walker, one obtains the equivalent diffusion coefficient to be considered for the asymptotic Wiener process toward which the random walk converges after a large number of steps:
D = ε²/(6 δt) (valid only in 3D).
The two expressions of the variance above correspond to the distribution associated to the vector that links the two ends of the random walk, in 3D. The variance associated to each component Rx, Ry or Rz is only one third of this value (still in 3D).
For 2D: D = ε²/(4 δt).
For 1D: D = ε²/(2 δt).
Gaussian random walk
A random walk having a step size that varies according to a normal distribution is used as a model for real-world time series data such as financial markets.
Here, the step size is the inverse cumulative normal distribution Φ⁻¹(z, μ, σ), where 0 ≤ z ≤ 1 is a uniformly distributed random number, and μ and σ are the mean and standard deviation of the normal distribution, respectively.
If μ is nonzero, the random walk will vary about a linear trend. If vs is the starting value of the random walk, the expected value after n steps will be vs + nμ.
For the special case where μ is equal to zero, after n steps, the translation distance's probability distribution is given by N(0, nσ2), where N() is the notation for the normal distribution, n is the number of steps, and σ is from the inverse cumulative normal distribution as given above.
Proof: The Gaussian random walk can be thought of as the sum of a sequence of independent and identically distributed random variables, Xi from the inverse cumulative normal distribution with mean equal to zero and σ of the original inverse cumulative normal distribution:
Z = X1 + X2 + ⋯ + Xn,
but the distribution for the sum of two independent normally distributed random variables, Z = X + Y, is given by
N(μX + μY, σX² + σY²)
(see the article on the sum of normally distributed random variables).
In our case, μX = μY = 0 and σX² = σY² = σ² yield
N(0, 2σ²).
By induction, for n steps we have
Z ~ N(0, nσ²).
For steps distributed according to any distribution with zero mean and a finite variance (not necessarily just a normal distribution), the root mean square translation distance after n steps is σ√n (see Bienaymé's identity).
But for the Gaussian random walk, this is just the standard deviation of the translation distance's distribution after n steps. Hence, if μ is equal to zero, and since the root mean square (RMS) translation distance is one standard deviation, there is a 68.27% probability that the translation distance after n steps will fall between ±σ√n. Likewise, there is a 50% probability that the translation distance after n steps will fall between ±0.6745σ√n.
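The N(0, nσ²) claim is easy to verify numerically (an illustrative sketch added here; the sample sizes are arbitrary):

```python
import random
import statistics

# Endpoints of many zero-mean Gaussian random walks; their standard
# deviation should match sigma * sqrt(n), and about 68% of endpoints
# should fall within one standard deviation of zero.
rng = random.Random(42)
n, sigma, walks = 100, 2.0, 5000
endpoints = [sum(rng.gauss(0.0, sigma) for _ in range(n))
             for _ in range(walks)]

spread = statistics.stdev(endpoints)   # theory: 2 * sqrt(100) = 20
inside_one_sd = sum(abs(e) <= spread for e in endpoints) / walks
```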
Number of distinct sites
The number of distinct sites visited by a single random walker has been studied extensively for square and cubic lattices and for fractals. This quantity is useful for the analysis of problems of trapping and kinetic reactions. It is also related to the vibrational density of states, diffusion–reaction processes, and the spread of populations in ecology.
Information rate
The information rate of a Gaussian random walk with respect to the squared error distance, i.e. its quadratic rate distortion function, is given parametrically by
where . Therefore, it is impossible to encode using a binary code of less than bits and recover it with expected mean squared error less than . On the other hand, for any , there exists an n large enough and a binary code of no more than distinct elements such that the expected mean squared error in recovering from this code is at most .
Applications
As mentioned the range of natural phenomena which have been subject to attempts at description by some flavour of random walks is considerable, particularly in physics and chemistry, materials science, and biology. The following are some specific applications of random walks:
In financial economics, the random walk hypothesis is used to model share prices and other factors. Empirical studies found some deviations from this theoretical model, especially in short-term and long-term correlations. See share prices.
In population genetics, random walk describes the statistical properties of genetic drift.
In physics, random walks are used as simplified models of physical Brownian motion and diffusion such as the random movement of molecules in liquids and gases. See for example diffusion-limited aggregation. Also in physics, random walks and some of the self interacting walks play a role in quantum field theory.
In semiconductor manufacturing, random walks are used to analyze the effects of thermal treatment at smaller nodes. They are applied to understand the diffusion of dopants, defects, impurities, etc., during critical fabrication steps. Random walk treatments are also used to study the diffusion of reactants, products and plasma during chemical vapor deposition processes. Continuum diffusion has been used to study the flow of gases, at macroscopic scales, in CVD reactors. However, smaller dimensions and increased complexity have forced a shift to random walk treatments, which allow for accurate analysis of stochastic processes, at the molecular level and smaller, in semiconductor manufacturing.
In mathematical ecology, random walks are used to describe individual animal movements, to empirically support processes of biodiffusion, and occasionally to model population dynamics.
In polymer physics, random walk describes an ideal chain. It is the simplest model to study polymers.
In other fields of mathematics, random walk is used to calculate solutions to Laplace's equation, to estimate the harmonic measure, and for various constructions in analysis and combinatorics.
In computer science, random walks are used to estimate the size of the Web.
In image segmentation, random walks are used to determine the labels (i.e., "object" or "background") to associate with each pixel. This algorithm is typically referred to as the random walker segmentation algorithm.
In brain research, random walks and reinforced random walks are used to model cascades of neuron firing in the brain.
In vision science, ocular drift tends to behave like a random walk. According to some authors, fixational eye movements in general are also well described by a random walk.
In psychology, random walks explain accurately the relation between the time needed to make a decision and the probability that a certain decision will be made.
Random walks can be used to sample from a state space which is unknown or very large, for example to pick a random page off the internet. In computer science, this method is known as Markov Chain Monte Carlo (MCMC).
In wireless networking, a random walk is used to model node movement.
Motile bacteria engage in biased random walks.
In physics, random walks underlie the method of Fermi estimation.
On the web, the Twitter website uses random walks to make suggestions of whom to follow.
Dave Bayer and Persi Diaconis have proven that 7 riffle shuffles are sufficient to mix a deck of cards (see more details under shuffle). This result translates to a statement about random walk on the symmetric group which is what they prove, with a crucial use of the group structure via Fourier analysis.
Variants
A number of types of stochastic processes have been considered that are similar to the pure random walks but where the simple structure is allowed to be more generalized. The pure structure can be characterized by the steps being defined by independent and identically distributed random variables. Random walks can take place on a variety of spaces, such as graphs, the integers, the real line, the plane or higher-dimensional vector spaces, on curved surfaces or higher-dimensional Riemannian manifolds, and on groups. It is also possible to define random walks which take their steps at random times, and in that case, the position has to be defined for all times t ≥ 0. Specific cases or limits of random walks include the Lévy flight and diffusion models such as Brownian motion.
On graphs
A random walk of length k on a possibly infinite graph G with a root 0 is a stochastic process with random variables X1, X2, …, Xk such that X1 = 0 and
Xi+1 is a vertex chosen uniformly at random from the neighbors of Xi.
Then the number pv,w,k(G) is the probability that a random walk of length k starting at v ends at w.
In particular, if G is a graph with root 0, p0,0,2k is the probability that a 2k-step random walk returns to 0.
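In code, a walk on an adjacency-list graph takes only a few lines (a sketch added here; the 4-cycle example and the names are chosen for illustration):

```python
import random

def graph_walk(graph, start, k, rng):
    """k-step random walk on an adjacency-list graph: at each vertex,
    move to a uniformly chosen neighbor."""
    v = start
    for _ in range(k):
        v = rng.choice(graph[v])
    return v

# A 4-cycle: every vertex has exactly two neighbors.
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
rng = random.Random(7)
trials = 10000
p_return = sum(graph_walk(cycle, 0, 2, rng) == 0
               for _ in range(trials)) / trials
# a 2-step walk on the 4-cycle returns to its start with probability 1/2
```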
Building on the analogy from the earlier section on higher dimensions, assume now that our city is no longer a perfect square grid. When our person reaches a certain junction, he picks between the variously available roads with equal probability. Thus, if the junction has seven exits the person will go to each one with probability one-seventh. This is a random walk on a graph. Will our person reach his home? It turns out that under rather mild conditions, the answer is still yes, but depending on the graph, the answer to the variant question 'Will two persons meet again?' may not be that they meet infinitely often almost surely.
An example of a case where the person will reach his home almost surely is when the lengths of all the blocks are between a and b (where a and b are any two finite positive numbers). Notice that we do not assume that the graph is planar, i.e. the city may contain tunnels and bridges. One way to prove this result is using the connection to electrical networks. Take a map of the city and place a one ohm resistor on every block. Now measure the "resistance between a point and infinity". In other words, choose some number R and take all the points in the electrical network with distance bigger than R from our point and wire them together. This is now a finite electrical network, and we may measure the resistance from our point to the wired points. Take R to infinity. The limit is called the resistance between a point and infinity. It turns out that the following is true (an elementary proof can be found in the book by Doyle and Snell):
Theorem: a graph is transient if and only if the resistance between a point and infinity is finite. It is not important which point is chosen if the graph is connected.
In other words, in a transient system, one only needs to overcome a finite resistance to get to infinity from any point. In a recurrent system, the resistance from any point to infinity is infinite.
This characterization of transience and recurrence is very useful, and specifically it allows us to analyze the case of a city drawn in the plane with the distances bounded.
A random walk on a graph is a very special case of a Markov chain. Unlike a general Markov chain, random walk on a graph enjoys a property called time symmetry or reversibility. Roughly speaking, this property, also called the principle of detailed balance, means that the probabilities to traverse a given path in one direction or the other have a very simple connection between them (if the graph is regular, they are just equal). This property has important consequences.
Starting in the 1980s, much research has gone into connecting properties of the graph to random walks. In addition to the electrical network connection described above, there are important connections to isoperimetric inequalities, see more here, functional inequalities such as Sobolev and Poincaré inequalities and properties of solutions of Laplace's equation. A significant portion of this research was focused on Cayley graphs of finitely generated groups. In many cases these discrete results carry over to, or are derived from manifolds and Lie groups.
In the context of random graphs, particularly that of the Erdős–Rényi model, analytical results for some properties of random walkers have been obtained. These include the distribution of first and last hitting times of the walker, where the first hitting time is given by the first time the walker steps into a previously visited site of the graph, and the last hitting time corresponds to the first time the walker cannot perform an additional move without revisiting a previously visited site.
A good reference for random walk on graphs is the online book by Aldous and Fill. For groups see the book of Woess.
If the transition kernel is itself random (based on an environment ω), then the random walk is called a "random walk in random environment". When the law of the random walk includes the randomness of ω, the law is called the annealed law; on the other hand, if ω is seen as fixed, the law is called a quenched law. See the book of Hughes, the book of Revesz, or the lecture notes of Zeitouni.
We can think about choosing every possible edge with the same probability as maximizing uncertainty (entropy) locally. We could also do it globally – in maximal entropy random walk (MERW) we want all paths to be equally probable; in other words, for every two vertices, each path of given length is equally probable. This random walk has much stronger localization properties.
Self-interacting random walks
There are a number of interesting models of random paths in which each step depends on the past in a complicated manner. All are more complex to solve analytically than the usual random walk; still, the behavior of any model of a random walker is obtainable using computers. Examples include:
The self-avoiding walk.
The self-avoiding walk of length n on Z^d is the random n-step path which starts at the origin, makes transitions only between adjacent sites in Z^d, never revisits a site, and is chosen uniformly among all such paths. In two dimensions, due to self-trapping, a typical self-avoiding walk is very short, while in higher dimensions it grows beyond all bounds.
This model has often been used in polymer physics (since the 1960s).
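The two-dimensional trapping phenomenon can be seen with the "kinetic growth" variant sketched below (added here for illustration): the path is extended one uniformly chosen unvisited neighbor at a time. Note that this samples a different distribution than the uniform self-avoiding walk, but it traps itself in the same way.

```python
import random

def grow_until_trapped(rng, cap=100000):
    """Grow a self-avoiding path on Z^2 step by step, choosing uniformly
    among unvisited neighbors; return its length when trapped."""
    pos, visited = (0, 0), {(0, 0)}
    for step in range(cap):
        nbrs = [(pos[0] + dx, pos[1] + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        free = [q for q in nbrs if q not in visited]
        if not free:
            return step        # trapped: every neighbor already visited
        pos = rng.choice(free)
        visited.add(pos)
    return cap

rng = random.Random(3)
lengths = [grow_until_trapped(rng) for _ in range(500)]
mean_length = sum(lengths) / len(lengths)
# in two dimensions such walks trap themselves after a few dozen steps
```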
The loop-erased random walk.
The reinforced random walk.
The exploration process.
The multiagent random walk.
Biased random walks on graphs
Maximal entropy random walk
A random walk chosen to maximize the entropy rate has much stronger localization properties.
Correlated random walks
Random walks where the direction of movement at one time is correlated with the direction of movement at the next time. It is used to model animal movements.
Western grey kangaroo

The western grey kangaroo (Macropus fuliginosus), also referred to as a western grey giant kangaroo, black-faced kangaroo, mallee kangaroo, sooty kangaroo and (when referring to the Kangaroo Island subspecies) Kangaroo Island grey kangaroo, is a large and very common kangaroo found across almost the entire southern part of Australia, from just south of Shark Bay through coastal Western Australia and South Australia, into western Victoria, and in the entire Murray–Darling basin in New South Wales and Queensland.
Taxonomy
Long known to the Aboriginal Australians, for Europeans, the western grey kangaroo was the centre of a great deal of sometimes comical taxonomic confusion for almost 200 years. It was first noted by European explorers when Matthew Flinders landed on Kangaroo Island in 1802. Flinders shot several for food, but assumed that they were eastern grey kangaroos. In 1803, French explorers captured several Kangaroo Island western grey kangaroos and shipped them to Paris, where they lived in the Ménagerie du Jardin des Plantes for some years. Eventually, researchers at the Paris Museum of Natural History recognized that these animals were indeed distinct from the eastern grey kangaroo and formally described the species as Macropus fuliginosus in 1817. For reasons that remain unclear, the species was, later in 1888, incorrectly described as native to Tasmania. It was not until 1924 that researchers realized that the "forester kangaroo" of Tasmania was in fact Macropus giganteus, the same eastern grey kangaroo that was, and still is, widespread in the southeastern part of the mainland, and reaffirmed Kangaroo Island as the source of the type specimens. By 1971, it was understood that the Kangaroo Island western grey kangaroo belonged to the same species as the kangaroos of southern and Western Australia, and that this population extended through much of the eastern part of the continent as well (see range map). For a time, three subspecies were described, two on the mainland and one on Kangaroo Island. The current classification scheme emerged in the 1990s.
The western grey kangaroo is not found in the north or the far southeast of Australia, and the eastern grey does not extend beyond the New South Wales–South Australia border, but the two species are both common in the Murray–Darling basin area. They rarely interbreed in the wild, although it has proved possible to produce hybrids between eastern grey females and western grey males in captivity.
Subspecies
There are two subspecies:
Macropus fuliginosus fuliginosus (commonly known as the Kangaroo Island western grey kangaroo or simply Kangaroo Island grey kangaroo) is endemic to Kangaroo Island, South Australia
Macropus fuliginosus melanops has a range of different forms that intergrade clinally from west to east.
Description
The western grey kangaroo is one of the largest macropods in Australia. It exhibits sexual dimorphism, with males up to twice the size of females. It has thick, coarse fur with colour ranging from pale grey to brown; its throat, chest and belly are paler.
This species is difficult to distinguish from its sibling species, the eastern grey kangaroo (Macropus giganteus). However, the western grey kangaroo has darker grey-brown fur, darker colouration around the head, and sometimes has a blackish patch around the elbow.
Ecology and behaviour
Diet
It feeds at night, mainly on grasses and forbs but also on leafy shrubs and low trees. During the Late Pleistocene, its diet was more varied, incorporating a greater proportion of C4 plants than the diet of present-day western grey kangaroos. The species has the nickname "stinker" because mature males have a distinctive curry-like odour.
Thermoregulation
The western grey kangaroo is a nocturnal species that varies its core body temperature based on daily ambient temperatures. The kangaroo's lowest daily core body temperature occurs mid-morning. In the summer, this was 2.2 °C (4 °F) lower than during cooler spring days. This reduced summer body temperature is thought to allow the species to conserve energy during a time when food availability is low.
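The 2.2 °C summer-versus-spring figure is a temperature interval, not an absolute reading, so its Fahrenheit equivalent uses only the 9/5 scale factor (no +32 offset); as a quick check of the stated conversion:

```latex
\Delta T_{\mathrm{F}} \;=\; \tfrac{9}{5}\,\Delta T_{\mathrm{C}}
             \;=\; 1.8 \times 2.2\ ^{\circ}\mathrm{C}
             \;\approx\; 4.0\ ^{\circ}\mathrm{F}
```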
Reproduction and development
The western grey kangaroo lives in groups of up to 15, and the males compete for females during the breeding season. During these "boxing" contests, they lock arms and try to push each other over. Usually, only the dominant male in the group mates. The gestation period is 30–31 days, after which the incompletely developed fetus (referred to as a joey) attaches to the teat in the pouch for 130–150 days. Females reach sexual maturity at 17 months while males mature at around 20 months.
The western grey kangaroo is closely related to the eastern grey kangaroo (M. giganteus), and their distributions overlap extensively, especially in the Murray–Darling basin. However, the two species rarely interbreed in the wild. Interbreeding does occasionally occur in captive animals, but viable offspring are produced only when the mating pair consists of a female eastern grey kangaroo and a male western grey kangaroo. This is an example of unidirectional hybridisation.
Relationship with humans
The western grey kangaroo is classified as Least Concern by the IUCN Red List, with a population showing an increasing trend. The total population within the commercial harvest areas was estimated at 3,781,023 individuals in 2020.
Though the feeding habits of M. fuliginosus can be problematic for agriculture, it is protected, and its management is controlled exclusively by the state faunal authorities. Because ranchers regard it as a competitor for water and pasture, the species is considered a pest in some areas. To limit agricultural damage, kangaroo culling has been allowed under licence each year.
Commercial hunting for meat and skins is also allowed under regulation, with the skins providing a high-quality, long-lasting leather. About 40% of harvested meat is used for human consumption; the leather is used as a material for handbags, briefcases, and belts.
Commercial hunting is permitted in New South Wales, mainland South Australia, and Western Australia, but prohibited in Tasmania, Northern Territory and Kangaroo Island.
| Biology and health sciences | Diprotodontia | Animals |
235548 | https://en.wikipedia.org/wiki/Appetite | Appetite | Appetite is the desire to eat food items, usually due to hunger. Appealing foods can stimulate appetite even when hunger is absent, although appetite can be greatly reduced by satiety. Appetite exists in all higher life-forms, and serves to regulate adequate energy intake to maintain metabolic needs. It is regulated by a close interplay between the digestive tract, adipose tissue and the brain. Appetite is closely linked to every individual's behaviour. Appetitive behaviour, also known as approach behaviour, and consummatory behaviour are the only processes that involve energy intake, whereas all other behaviours affect the release of energy. When stressed, appetite levels may increase, resulting in increased food intake. Decreased desire to eat is termed anorexia, while polyphagia (or "hyperphagia") is increased eating. Dysregulation of appetite contributes to ARFID, anorexia nervosa, bulimia nervosa, cachexia, overeating, and binge eating disorder.
Role in disease
A limited or excessive appetite is not necessarily pathological. Abnormal appetite could be defined as eating habits causing malnutrition and related conditions such as obesity and its related problems.
Both genetic and environmental factors may regulate appetite, and abnormalities in either may lead to abnormal appetite. Poor appetite (anorexia) can have numerous causes, but may be a result of physical (infectious, autoimmune or malignant disease) or psychological (stress, mental disorders) factors. Likewise, hyperphagia (excessive eating) may be a result of hormonal imbalances, mental disorders (e.g., depression) and other factors. Dyspepsia, also known as indigestion, can also affect appetite, as one of its symptoms is feeling "overly full" soon after beginning a meal. Altered taste and smell ("dysgeusia", a bad taste) or their loss may also affect appetite.
Abnormal appetite may also be linked to genetics on a chromosomal scale, shown by the 1950s discovery of Prader–Willi syndrome, a type of obesity caused by chromosome alterations. Additionally, anorexia nervosa and bulimia nervosa are more commonly found in females than males – thus hinting at a possibility of a linkage to the X-chromosome.
Eating disorders
Dysregulation of appetite lies at the root of anorexia nervosa, bulimia nervosa, and binge eating disorder. Anorexia nervosa is a mental disorder characterized by severe dietary restriction and an intense fear of weight gain. Furthermore, persons with anorexia nervosa may exercise ritualistically. Individuals with anorexia have high levels of ghrelin, a hormone that stimulates appetite; the body is trying to induce hunger, but the urge to eat is suppressed by the person. Binge eating disorder (commonly referred to as BED) is described as eating excessively (or uncontrollably) between periodic time intervals. The risk for BED can be present in children and most commonly manifests during adulthood. Studies suggest that the heritability of BED in adults is approximately 50%. As in bulimia, some people may engage in purging and binging; they might vomit after food intake or take purgatives. Body dysmorphic disorder may involve food restriction in an attempt to deal with a perceived fault, and may be associated with depression and social isolation.
Obesity
Various hereditary forms of obesity have been traced to defects in hypothalamic signaling (such as the leptin receptor and the MC4 receptor) or are still awaiting characterization (as in Prader–Willi syndrome). In addition, a decreased response to satiety may promote development of obesity. It has been found that ghrelin-reactive IgG immunoglobulins affect ghrelin's orexigenic response.
Besides genetically driven appetite abnormalities, there are physiological ones that do not require genes for activation. For example, ghrelin and leptin are released into the bloodstream from the stomach and adipose cells, respectively. Ghrelin stimulates feelings of hunger, whereas leptin stimulates feelings of satiety. Any change in the normal production levels of these two hormones can lead to obesity. Leptin production is stimulated by body fat percentage: when body fat accumulates, leptin is overproduced, causing leptin resistance in the hypothalamus and eventually almost no leptin effect; ghrelin signalling then produces an insatiable appetite.
Pediatric eating problems
Eating issues such as "picky eating" affect about 25% of children, but among children with developmental disorders this number may be significantly higher; in some cases it may be related to sensitivity to sounds, smells, and tastes (sensory processing disorder).
Pharmacology and treatment
The glycemic index is thought to affect satiety; a study investigating effects on satiety found that potatoes, a high-glycemic-index food, reduced appetite more than a low-glycemic-index food.
Suppression
Mechanisms controlling appetite are a potential target for weight loss drugs. Appetite control mechanisms seem to strongly counteract undereating, but appear weak in controlling overeating. Early anorectics (appetite suppressants) were fenfluramine and phentermine. A more recent addition is sibutramine, which increases serotonin and noradrenaline levels in the central nervous system, but had to be withdrawn from the market when it was shown to have an adverse cardiovascular risk profile. Similarly, the appetite suppressant rimonabant (a cannabinoid receptor antagonist) had to be withdrawn when it was linked with worsening depression and an increased risk of suicide. Recent reports on recombinant PYY 3-36 suggest that this agent may contribute to weight loss by suppressing appetite.
Given the epidemic proportions of obesity in the Western world and the fact that it is increasing rapidly in some poorer countries, observers expect developments in this area to snowball in the near future.
Stimulation
Weight loss with loss of appetite (cachexia) is an effect of some diseases, and a side effect of some prescription drugs. Stimulants such as methylphenidate commonly reduce appetite in patients, and have been prescribed off-label for weight loss. Three agents are approved for appetite stimulation in the United States: megestrol acetate, a progesterone available as an oral tablet; oxandrolone, an oral anabolic steroid; and dronabinol, THC, the principal cannabinoid in marijuana, available in an oral capsule.
Ghrelin, a gut hormone recognized as affecting appetite, is under investigation. Ghrelin itself must be delivered parenterally and research has therefore focused on substances that can be taken orally. Rikkunshito, a traditional Japanese Kampo medicine, is under preliminary research for its potential to stimulate ghrelin and appetite.
| Biology and health sciences | Health and fitness: General | Health |
235562 | https://en.wikipedia.org/wiki/Fatigue | Fatigue | Fatigue is a state of tiredness (which is not sleepiness), exhaustion or loss of energy.
Fatigue (in the medical sense) is sometimes associated with medical conditions including autoimmune disease, organ failure, chronic pain conditions, mood disorders, heart disease, infectious diseases, and post-infectious-disease states. However, fatigue is complex and in up to a third of primary care cases no medical or psychiatric diagnosis is found.
Fatigue (in the general usage sense of normal tiredness) often follows prolonged physical or mental activity. Physical fatigue results from muscle fatigue brought about by intense physical activity. Mental fatigue results from prolonged periods of cognitive activity which impairs cognitive ability, can manifest as sleepiness, lethargy, or directed attention fatigue, and can also impair physical performance.
Definition
Fatigue in a medical context is used to cover experiences of low energy that are not caused by normal life activities.
A 2021 review proposed a definition for fatigue as a starting point for discussion: "A multi-dimensional phenomenon in which the biophysiological, cognitive, motivational and emotional state of the body is affected resulting in significant impairment of the individual's ability to function in their normal capacity".
Another definition is that fatigue is "a significant subjective sensation of weariness, increasing sense of effort, mismatch between effort expended and actual performance, or exhaustion independent from medications, chronic pain, physical deconditioning, anaemia, respiratory dysfunction, depression, and sleep disorders".
Terminology
The use of the term "fatigue" in medical contexts may carry inaccurate connotations from the more general usage of the same word. More accurate terminology may also be needed for variants within the umbrella term of fatigue.
Comparison with other terms
Tiredness
Tiredness which is a normal result of work, mental stress, anxiety, overstimulation and understimulation, jet lag, active recreation, boredom, or lack of sleep is not considered medical fatigue. This is the tiredness described in MeSH Descriptor Data.
Sleepiness
Sleepiness refers to a tendency to fall asleep, whereas fatigue refers to an overwhelming sense of tiredness, lack of energy, and a feeling of exhaustion. Sleepiness and fatigue often coexist as a consequence of sleep deprivation. However sleepiness and fatigue may not correlate. Fatigue is generally considered a longer-term condition than sleepiness (somnolence).
Presentation
Common features
Distinguishing features of medical fatigue include
unpredictability,
variability in severity,
fatigue being relatively profound/overwhelming, and having extensive impact on daily living,
lack of improvement with rest,
where an underlying disease is present, the amount of fatigue is often not commensurate with the severity of the underlying disease.
Differentiating features
Differentiating characteristics of fatigue that may help identify the possible cause of fatigue include
Post-exertional malaise; a common feature of ME/CFS, and experienced by a significant proportion of people with Long Covid, but not a feature of other fatigues.
Increased by heat or cold; MS fatigue is in many cases affected in this way.
Remission; MS fatigue may reduce during periods of other MS symptom remission. ME/CFS may also have periods of lower symptom activity.
Cognitive declines; sleep deprivation causes cognitive and neurobehavioral effects including unstable attention and slowing of response times. ME/CFS and MS may cause brain fog over longer timescales.
Intermittency; Fatigues often vary in how and when they occur. Some fatigues (RA, cancer fatigue) seem to often be continual (24/7) whilst others (MS, Sjögren's, lupus, brain injury) are often intermittent. A 2010 study found that Sjögren's patients reported fatigue after rising, an improvement in mid-morning, and worsening later in the day, whereas lupus (SLE) patients reported lower fatigue after rising followed by increasing fatigue through the day. ME/CFS symptoms can be continual, or can fluctuate during the day, from day to day, and over longer periods.
The pace of onset may be a related differentiating factor; MS fatigue can have abrupt onset.
Feeling of weight; some fatigues, including that caused by MS, create a sense of weight or gravity; "I feel like I have lead weights attached to my limbs ... or I am being pulled down by gravity."
Some people may have multiple causes of fatigue.
Causes
Drug use
A 2021 study in a Korean city found that alcohol consumption was the variable most correlated with overall fatigue. A 2020 Norwegian study found that 69% of substance use disorder patients had severe fatigue symptoms, particularly those with extensive use of benzodiazepines. Causality, as opposed to correlation, was not proven in these studies.
Unknown
In up to a third of fatigue primary care cases, no medical or psychiatric diagnosis is found.
Tiredness is a common medically unexplained symptom.
Sleep disturbance
Fatigue can often be traced to poor sleep habits. Sleep deprivation and disruption is associated with subsequent fatigue. Sleep disturbances due to disease may impact fatigue. Caffeine and alcohol can disrupt sleep, causing fatigue.
Medications
Fatigue may be a side effect of certain medications (e.g., lithium salts, ciprofloxacin); beta blockers, which can induce exercise intolerance; medicines used to treat allergies or coughs; and many cancer treatments, particularly chemotherapy and radiotherapy. Use of benzodiazepines has been found to correlate with higher fatigue.
Association with diseases and illnesses
Fatigue is often associated with diseases and conditions. Some major categories of conditions that often list fatigue as a symptom include physical diseases, substance use illness, mental illnesses, and other diseases and conditions.
Physical diseases
autoimmune diseases, such as celiac disease, lupus, multiple sclerosis, myasthenia gravis, NMOSD, Sjögren's syndrome, rheumatoid arthritis, spondyloarthropathy and UCTD, for which fatigue is the primary concern of patients;
blood disorders, such as anemia and hemochromatosis;
brain injury;
cancer, in which case it is called cancer fatigue;
Covid-19 and long Covid;
developmental disorders such as autism spectrum disorder;
endocrine diseases or metabolic disorders: diabetes mellitus, hypothyroidism and Addison's disease;
fibromyalgia;
heart failure and heart attack;
HIV
inborn errors of metabolism such as fructose malabsorption;
infectious diseases such as infectious mononucleosis or tuberculosis;
irritable bowel syndrome;
kidney diseases, e.g., acute renal failure, chronic renal failure;
leukemia or lymphoma;
liver failure or liver diseases, e.g., hepatitis;
Lyme disease;
neurological disorders such as narcolepsy, Parkinson's disease, postural orthostatic tachycardia syndrome (POTS) and post-concussion syndrome;
physical trauma and other pain-causing conditions, such as arthritis;
sleep deprivation or sleep disorders, e.g. sleep apnea;
stroke
thyroid disease such as hypothyroidism;
sarcoidosis
Mental illnesses
anxiety disorders, such as generalized anxiety disorder;
depression;
eating disorders, which can produce fatigue due to inadequate nutrition;
Other
myalgic encephalomyelitis/chronic fatigue syndrome (ME/CFS)
idiopathic chronic fatigue, a term used to describe chronic fatigue which does not have the symptoms of ME/CFS. However, ICF does not have a dedicated diagnostic code in the World Health Organization's ICD-11 classification.
Gulf War syndrome;
Primary vs. secondary
In some areas, it has been proposed that fatigue be separated into primary fatigue, caused directly by a disease process, and ordinary or secondary fatigue, caused by a range of causes including exertion and also secondary impacts on a person of having a disease (such as disrupted sleep). The ICD-11 MG22 definition of fatigue captures both types of fatigue; it includes fatigue that "occur[s] in the absence of... exertion... as a symptom of health conditions."
Obesity
Obesity correlates with higher fatigue levels and incidence.
Somatic symptom disorder
In somatic symptom disorder the patient is overfocused on a physical symptom, such as fatigue, that may or may not be explained by a medical condition.
Adverse life events
Adverse life events have been associated with fatigue.
Scientifically unsupported causes
The concept of adrenal fatigue is often raised in media but no scientific basis has been found for it.
Mechanisms
The mechanisms that cause fatigue are not well understood. Several mechanisms may be in operation within a patient, with the relative contribution of each mechanism differing over time.
Proposed fatigue explanations due to permanent changes in the brain may have difficulty in explaining the "unpredictability" and "variability" (i.e. appearing intermittently during the day, and not on all days) of the fatigue associated with inflammatory rheumatic diseases and autoimmune diseases (such as multiple sclerosis).
Inflammation
Inflammation distorts neural chemistry, brain function and functional connectivity across a broad range of brain networks, and has been linked to many types of fatigue. Findings implicate neuroinflammation in the etiology of fatigue in autoimmune and related disorders. Low-grade inflammation may cause an imbalance between energy availability and expenditure.
Cytokines are small protein molecules that modulate immune responses and inflammation (as well as other functions) and may have causal roles in fatigue. However a 2019 review was inconclusive as to whether cytokines play any definitive role in ME/CFS.
Reduced brain connectivity
Fatigue has been correlated with reductions in structural and functional connectivity in the brain. This has included in post-stroke, MS, NMOSD and MOG, and ME/CFS. This was also found for fatigue after brain injury, including a significant linear correlation between self-reported fatigue and brain functional connectivity.
Areas of the brain for which there is evidence of relation to fatigue are the thalamus and middle frontal cortex, fronto-parietal and cingulo-opercular, and default mode network, salience network, and thalamocortical loop areas.
A 2024 review found that structural connectivity changes may underlie fatigue in people with relapsing-remitting MS (RRMS), but the overall results were inconclusive, possibly explained by heterogeneity and the limited number of studies.
A small 2023 study found that infratentorial lesion volume (cerebellar and brainstem) was a relatively good predictor of RRMS fatigue severity.
Damage to brain white matter
Studies have found that MS fatigue correlates with damage to normal-appearing white matter (NAWM), which will not show on standard MRI but will show on diffusion tensor imaging (DTI). The correlation becomes unreliable in patients aged over 65 because of age-related damage.
Heat shock proteins
A small 2016 study found that primary Sjögren's syndrome patients with high fatigue, when compared with those with low fatigue, had significantly higher plasma concentrations of HSP90α and a tendency towards higher concentrations of HSP72. A small 2020 study of Crohn's disease patients found that higher fatigue visual analogue scale (fVAS) scores correlated with higher HSP90α levels. A related small 2012 trial investigating whether an IL-1 receptor antagonist (anakinra) would reduce fatigue in primary Sjögren's syndrome patients was inconclusive.
Measurement
Fatigue is currently measured by many different self-measurement surveys. Examples are the Fatigue Symptom Inventory (FSI) and the Fatigue Severity Scale. There is no consensus on best practice, and the existing surveys do not capture the intermittent nature of some forms of fatigue.
Diagnosis
Diagnosis guidance
A 2023 guidance indicates the following
in the primary care setting, a medical or psychiatric diagnosis is found in at least two-thirds of patients;
the most common diagnoses are viral illness, upper respiratory infection, iron-deficiency anaemia, acute bronchitis, adverse effects of a medical agent in the proper dose, and depression or other mental disorder, such as panic disorder, and somatisation disorder;
the origin of fatigue may be central (brain-derived) or peripheral (usually neuromuscular); it may be attributed to physical illness, psychological factors (e.g., psychiatric disorder), social factors (e.g., family problems), physiological factors (e.g., old age), or occupational illness (e.g., workplace stress);
when unexplained, clinically evaluated chronic fatigue can be separated into ME/CFS and idiopathic chronic fatigue.
A 2016 German review found that
about 20% of people complaining of tiredness to a GP (general practitioner) suffered from a depressive disorder.
anaemia, malignancies and other serious somatic diseases were only very rarely found in fatigued primary care patients, with prevalence rates hardly differing from non-fatigued patients.
if fatigue occurred in primary care patients as an isolated symptom without additional abnormalities in the medical history and in the clinical examination, then extensive diagnostic testing rarely helped detect serious diseases. Such testing might also lead to false-positive tests.
A 2014 Australian review recommended that a period of watchful waiting may be appropriate if there are no major warning signs.
A 2009 study found that about 50% of people who had fatigue received a diagnosis that could explain the fatigue after a year with the condition. In those people who had a possible diagnosis, musculoskeletal (19.4%) and psychological problems (16.5%) were the most common. Definitive physical conditions were only found in 8.2% of cases.
Classification
By type
Uni- or multi-dimensional
Fatigue can be seen as a uni-dimensional phenomenon that influences different aspects of human life. It can be multi-faceted and broadly defined, making understanding the causes of its manifestations especially difficult in conditions with diverse pathology including autoimmune diseases.
A 2021 review considered that different "types/subsets" of fatigue may exist and that patients normally present with more than one such "type/subset". These different "types/subsets" of fatigue may be different dimensions of the same symptom, and the relative manifestations of each may depend on the relative contribution of different mechanisms. Inflammation may be the root causal mechanism in many cases.
Physical
Physical fatigue, or muscle fatigue, is the temporary physical inability of muscles to perform optimally. The onset of muscle fatigue during physical activity is gradual, and depends upon an individual's level of physical fitness; other factors include sleep deprivation and overall health. Physical fatigue can be caused by a lack of energy in the muscle, by a decrease in the efficiency of the neuromuscular junction, or by a reduction of the drive originating from the central nervous system, and can be reversed by rest. The central component of fatigue is triggered by an increase in the level of serotonin in the central nervous system. During motor activity, serotonin released in synapses that contact motor neurons promotes muscle contraction. During high levels of motor activity, the amount of serotonin released increases and a spillover occurs. Serotonin binds to extrasynaptic receptors located on the axonal initial segment of motor neurons, with the result that nerve impulse initiation, and thereby muscle contraction, are inhibited.
Muscle strength testing can be used to determine the presence of a neuromuscular disease, but cannot determine its cause. Additional testing, such as electromyography, can provide diagnostic information, but information gained from muscle strength testing alone is not enough to diagnose most neuromuscular disorders.
Mental
Mental fatigue is a temporary inability to maintain optimal cognitive performance. The onset of mental fatigue during any cognitive activity is gradual, and depends upon an individual's cognitive ability, and also upon other factors, such as sleep deprivation and overall health.
Mental fatigue has also been shown to decrease physical performance. It can manifest as somnolence, lethargy, directed attention fatigue, or disengagement. Research also suggests that mental fatigue is closely linked to the concept of ego depletion, though the validity of the concept is disputed. For example, one pre-registered study of 686 participants found that after exerting mental effort, people are likely to disengage and become less interested in exerting further effort.
Decreased attention can also be described as a more or less decreased level of consciousness. In any case, this can be dangerous when performing tasks that require constant concentration, such as operating large vehicles. For instance, a person who is sufficiently somnolent may experience microsleep. However, objective cognitive testing can be used to differentiate the neurocognitive deficits of brain disease from those attributable to tiredness.
The perception of mental fatigue is believed to be modulated by the brain's reticular activating system (RAS).
Fatigue impacts a driver's reaction time, awareness of hazards around them and their attention. Drowsy drivers are three times more likely to be involved in a car crash, and being awake over 20 hours is the equivalent of driving with a blood-alcohol concentration level of 0.08%.
Neurological fatigue
People with multiple sclerosis experience a form of overwhelming tiredness that can occur at any time of the day, for any duration, and that does not necessarily recur in a recognizable pattern for any given patient, referred to as "neurological fatigue", and often as "multiple sclerosis fatigue" or "lassitude". People with autoimmune diseases including inflammatory rheumatic diseases such as rheumatoid arthritis, psoriatic arthritis and primary Sjögren's syndrome, experience similar fatigue.
Attempts have been made to isolate causes of central nervous system fatigue.
By timescale
Acute
Acute fatigue is temporary and self-limited. It is most often caused by an infection, such as the common cold, and can be understood as part of the sickness behaviour response that occurs when the immune system fights an infection. Other common causes of acute fatigue include depression and chemical causes, such as dehydration, poisoning, low blood sugar, or mineral or vitamin deficiencies.
Prolonged
Prolonged fatigue is a self-reported, persistent (constant) fatigue lasting at least one month.
Chronic
Chronic fatigue is a self-reported fatigue lasting at least 6 consecutive months. Chronic fatigue may be either persistent or relapsing. Chronic fatigue is a symptom of many chronic illnesses and of idiopathic chronic fatigue.
By effect
Fatigue can have significant negative impacts on quality of life. Profound and debilitating fatigue is the most common complaint reported among individuals with autoimmune disease, such as systemic lupus erythematosus, multiple sclerosis, type 1 diabetes, celiac disease, Myalgic Encephalomyelitis/chronic fatigue syndrome, and rheumatoid arthritis. Fatigue has been described by sufferers as 'incomprehensible' due to its unpredictable occurrence, lack of relationship to physical effort and different character as compared to tiredness.
WHO classification
The World Health Organization's ICD-11 classification includes a category MG22 Fatigue (typically fatigue following exertion, but which may sometimes occur in the absence of such exertion as a symptom of health conditions), and many other categories where fatigue is mentioned as a secondary result of other factors. It does not include any fatigue-based psychiatric illness (unless it is accompanied by related psychiatric symptoms).
DSM-5 lists 'fatigue or loss of energy nearly every day' as one factor in diagnosing depression.
Treatment and management
Management may include review of factors and methods as explained below.
Cessation of medications causing fatigue
Medications whose side effects contribute to fatigue may be discontinued.
Medications to treat fatigue
The UK NICE recommends consideration of amantadine, modafinil, and selective serotonin reuptake inhibitors (SSRIs) for MS fatigue. A PCORI review, however, found amantadine, methylphenidate, and modafinil no more effective than placebo in reducing fatigue, with side effects reported. Psychostimulants such as methylphenidate, amphetamines, and modafinil have been used in the treatment of fatigue related to depression, and medical illness such as chronic fatigue syndrome and cancer. They have also been used to counteract fatigue in sleep loss and in aviation.
Mental health tools
CBT can be useful for fatigue, including in ME/CFS, but it is not included in NICE guidelines for ME/CFS treatment.
Other approaches
Avoidance of body heat
Fatigue in MS often correlates with relatively high endogenous body temperature.
Improved sleep
Improving sleep has been associated with reduced fatigue but only in small studies.
Intermittent fasting
A very small 2022 study found 40% reductions in fatigue categorisations after three months of 16:8 intermittent fasting.
Vagus nerve stimulation
A very small 2023 study of Sjogren's patients showed reductions in self-reported fatigue after 56 days of vagus nerve stimulation.
Qigong and Tai Chi
Qigong and Tai chi have been postulated as helpful to reduce fatigue, but the evidence is of low quality.
Approaches to managing fatigue
Some health systems help people manage their fatigue better through attitude changes and skills transference.
Prevalence
2023 guidance stated fatigue prevalence is between 4.3% and 21.9%. Prevalence is higher in women than men.
A 2021 German study found that fatigue was the main or secondary reason for 10–20% of all consultations with a primary care physician.
A large study based on the 2004 Health and Retirement Study (HRS), a biennial longitudinal survey of US adults aged 51 and above, with mean age 65, found that 33% of women and 29% of men self-reported fatigue.
Fatigue represents a large health economic burden and unmet need to patients and to society.
Possible purposes of fatigue
Body resource management purposes
Fatigue has been posited as a bio-psycho-physiological state reflecting the body's overall strategy in resource (energy) management. Fatigue may occur when the body wants to limit resource utilisation ("rationing") in order to use resources for healing (part of sickness behaviour) or conserve energy for a particular current or future anticipated need, including a threat.
Evolutionary purposes
It has been posited that fatigue had evolutionary benefits in making more of the body's resources available for healing processes, such as immune responses, and in limiting disease spread by tending to reduce social interactions.
| Biology and health sciences | Symptoms and signs | Health |
235589 | https://en.wikipedia.org/wiki/Axolotl | Axolotl | The axolotl (Ambystoma mexicanum) is a paedomorphic salamander closely related to the tiger salamander. It is unusual among amphibians in that it reaches adulthood without undergoing metamorphosis. Instead of taking to the land, adults remain aquatic and gilled. The species was originally found in several lakes underlying what is now Mexico City, such as Lake Xochimilco and Lake Chalco. These lakes were drained by Spanish settlers after the conquest of the Aztec Empire, leading to the destruction of much of the axolotl's natural habitat.
The axolotl is near extinction due to urbanization in Mexico City and consequent water pollution, as well as the introduction of invasive species such as tilapia and perch. It is listed as critically endangered in the wild, with a decreasing population of around 50 to 1,000 adult individuals, by the International Union for Conservation of Nature and Natural Resources (IUCN) and is listed under Appendix II of the Convention on International Trade in Endangered Species (CITES). Axolotls are used extensively in scientific research for their ability to regenerate limbs, gills and parts of their eyes and brains. Notably, their ability to regenerate declines with age, but it does not disappear. Axolotls continue to grow modestly throughout their lives, and some consider this trait a direct contributor to their regenerative abilities. Further research has been conducted to examine their heart as a model of human single ventricle and excessive trabeculation. Axolotls were also sold as food in Mexican markets and were a staple in the Aztec diet.
Axolotls may be confused with the larval stage of the closely related tiger salamander (A. tigrinum), which are widespread in much of North America and occasionally become paedomorphic, or with mudpuppies (Necturus spp.), fully aquatic salamanders from a different family that are not closely related to the axolotl but bear a superficial resemblance.
Description
A sexually mature adult axolotl, at age 18–27 months, ranges in length from 15 to 45 cm (6 to 18 in), although a size close to 23 cm (9 in) is most common and greater than 30 cm (12 in) is rare. Axolotls possess features typical of salamander larvae, including external gills and a caudal fin extending from behind the head to the vent. External gills are usually lost when salamander species mature into adulthood, although the axolotl retains this feature. This is a result of its neotenic evolution: axolotls are much more aquatic than other salamander species.
Their heads are wide, and their eyes are lidless. Their limbs are underdeveloped and possess long, thin digits. Males are identified by their swollen cloacae lined with papillae, while females are noticeable for their wider bodies full of eggs. Three pairs of external gill stalks (rami) originate behind their heads and are used to move oxygenated water. The external gill rami are lined with filaments (fimbriae) to increase surface area for gas exchange. Four gill slits lined with gill rakers, which prevent food from entering and allow particles to filter through, are hidden underneath the external gills.
Axolotls have barely visible vestigial teeth, which develop during metamorphosis. The primary method of feeding is by suction, during which their rakers interlock to close the gill slits. External gills are used for respiration, although buccal pumping (gulping air from the surface) may also be used to provide oxygen to the lungs. Buccal pumping can occur in a two-stroke manner that pumps air from the mouth to the lungs, or in a four-stroke manner that reverses this pathway with compression forces.
Axolotls have four pigmentation genes; when mutated, these genes create different color variants. The normal wild-type animal is brown or tan with gold speckles and an olive undertone. The most common mutant colors are listed below.
Leucistic: pale pink with black eyes.
Xanthic: grey, with black eyes.
Albino: pale pink or white, with red eyes; albinism is more common in axolotls than in other species.
Melanoid: all black or dark blue with no gold speckling or olive tone.
In addition, there is wide individual variability in the size, frequency, and intensity of the gold speckling, and at least one variant develops a black and white piebald appearance upon reaching maturity. Because pet breeders frequently cross the variant colors, double homozygous mutants are common in the pet trade, especially white/pink animals with pink eyes that are double homozygous mutants for both the albino and leucistic trait. Axolotls also have some limited ability to alter their color to provide better camouflage by changing the relative size and thickness of their melanophores.
Habitat and ecology
The axolotl is native only to the freshwater of Lake Xochimilco and Lake Chalco in the Valley of Mexico. Lake Chalco no longer exists, having been drained as a flood control measure, and Lake Xochimilco remains a remnant of its former self, existing mainly as canals. The water temperature in Xochimilco rarely rises above 20 °C (68 °F), although it may fall to 6–7 °C (43–45 °F) in the winter, and perhaps lower.
Surveys in 1998, 2003, and 2008 found 6,000, 1,000, and 100 axolotls per square kilometer in its Lake Xochimilco habitat, respectively. A four-month-long search in 2013, however, turned up no surviving individuals in the wild. Just a month later, two wild ones were spotted in a network of canals leading from Xochimilco.
The wild population has been put under heavy pressure by the growth of Mexico City. The axolotl is currently on the International Union for Conservation of Nature's annual Red List of threatened species. Non-native fish, such as African tilapia and Asian carp, have also recently been introduced to the waters. These new fish have been eating the axolotls' young, as well as their primary source of food.
Axolotls are members of the tiger salamander, or Ambystoma tigrinum, species complex, along with all other Mexican species of Ambystoma. Their habitat is like that of most neotenic species—a high-altitude body of water surrounded by a risky terrestrial environment. These conditions are thought to favor neoteny. However, a terrestrial population of Mexican tiger salamanders occupies and breeds in the axolotl's habitat.
Diet
The axolotl is carnivorous, consuming small prey such as mollusks, worms, insects, other arthropods, and small fish in the wild. Axolotls locate food by smell, and will "snap" at any potential meal, sucking the food into their stomachs with vacuum force.
Use as a model organism
Today, the axolotl is still used in research as a model organism, and large numbers are bred in captivity. They are especially easy to breed compared to other salamanders in their family, which are rarely captive-bred due to the demands of terrestrial life. One attractive feature for research is the large and easily manipulated embryo, which allows viewing of the full development of a vertebrate. Axolotls are used in heart defect studies due to the presence of a mutant gene that causes heart failure in embryos. Since the embryos survive almost to hatching with no heart function, the defect is very observable. The axolotl is also considered an ideal animal model for the study of neural tube closure due to the similarities between human and axolotl neural plate and tube formation; the axolotl's neural tube, unlike the frog's, is not hidden under a layer of superficial epithelium. There are also mutations affecting other organ systems, some of which are well characterized and others not. The genetics of the color variants of the axolotl have also been widely studied.
Regeneration
The feature of the axolotl that attracts most attention is its healing ability: the axolotl does not heal by scarring and is capable of the regeneration of entire lost appendages in a period of months, and, in certain cases, more vital structures, such as tail, limb, central nervous system, and tissues of the eye and heart. They can even restore less vital parts of their brains. They can also readily accept transplants from other individuals, including eyes and parts of the brain—restoring these alien organs to full functionality. In some cases, axolotls have been known to repair a damaged limb, as well as regenerating an additional one, ending up with an extra appendage that makes them attractive to pet owners as a novelty. In metamorphosed individuals, however, the ability to regenerate is greatly diminished. The axolotl is therefore used as a model for the development of limbs in vertebrates. There are three basic requirements for regeneration of the limb: the wound epithelium, nerve signaling, and the presence of cells from the different limb axes. A wound epidermis is quickly formed by the cells to cover up the site of the wound. In the following days, the cells of the wound epidermis divide and grow quickly forming a blastema, which means the wound is ready to heal and undergo patterning to form the new limb.
It is believed that during limb regeneration, axolotls have a different system to regulate internal macrophage levels and suppress inflammation, as scarring prevents proper healing and regeneration. However, this belief has been questioned by other studies. The axolotl's regenerative properties make the species an ideal model for studying stem cell processes and its own neotenic features. Current research can record specific examples of these regenerative properties through tracking cell fates and behaviors, lineage tracing of skin triploid cell grafts, pigmentation imaging, electroporation, tissue clearing, and lineage tracing from dye labeling. The newer technologies of germline modification and transgenesis are better suited for live imaging of the regenerative processes that occur in axolotls.
Genome
The 32 billion base pair long sequence of the axolotl's genome was published in 2018 and was the largest animal genome completed at the time. It revealed species-specific genetic pathways that may be responsible for limb regeneration. Although the axolotl genome is about 10 times as large as the human genome, it encodes a similar number of proteins, namely 23,251 (the human genome encodes about 20,000 proteins). The size difference is mostly explained by a large fraction of repetitive sequences, but such repeated elements also contribute to increased median intron sizes (22,759 bp) which are 13, 16 and 25 times that observed in human (1,750 bp), mouse (1,469 bp) and Tibetan frog (906 bp), respectively.
Neoteny
Most amphibians begin their lives as aquatic animals unable to live on dry land, often dubbed tadpoles. To reach adulthood, they go through a process called metamorphosis, in which they lose their gills and start living on land. The axolotl is unusual in that it lacks the thyroid-stimulating hormone needed for its thyroid to produce thyroxine and trigger metamorphosis; it keeps its gills and lives in water all its life, even after it becomes an adult and is able to reproduce. Neoteny is the term for reaching sexual maturity without undergoing metamorphosis.
The genes responsible for neoteny in laboratory animals may have been identified; they are not linked in wild populations, suggesting artificial selection is the cause of complete neoteny in laboratory and pet axolotls. The genes responsible have been narrowed down to a small chromosomal region called met1, which contains several candidate genes.
Metamorphosis
The axolotl's body has the capacity to go through metamorphosis if given the necessary hormone, but axolotls do not produce it, and must be exposed to it from an external source, after which an axolotl undergoes an artificially-induced metamorphosis and begins living on land. In laboratory conditions, metamorphosis is reliably induced by administering either the thyroid hormone thyroxine or thyroid-stimulating hormone. The former is more commonly used.
An axolotl undergoing metamorphosis experiences a number of physiological changes that help it adapt to life on land. These include increased muscle tone in the limbs, the absorption of gills and fins into the body, the development of eyelids, and a reduction in the skin's permeability to water, allowing the axolotl to stay more easily hydrated when on land. The lungs of an axolotl, though present alongside gills after reaching non-metamorphosed adulthood, develop further during metamorphosis.
An axolotl that has gone through metamorphosis resembles an adult plateau tiger salamander, though the axolotl differs in its longer toes. Among hobbyists, the process of artificially inducing metamorphosis can often result in death during or even following a successful attempt, and so casual hobbyists are generally discouraged from attempting to induce metamorphosis in pet axolotls. Morphed pet axolotls should be given solid footholds in their enclosure to satisfy their need for land. They should not be given live animals as food.
History
Six adult axolotls (including a leucistic specimen) were shipped from Mexico City to the Jardin des Plantes in Paris in 1863. Unaware of their neoteny, Auguste Duméril was surprised when, instead of the axolotl, he found in the vivarium a new species, similar to the salamander. This discovery was the starting point of research about neoteny. It is not certain that Ambystoma velasci specimens were not included in the original shipment. Vilem Laufberger in Prague used thyroid hormone injections to induce an axolotl to grow into a terrestrial adult salamander. The experiment was repeated by the Englishman Julian Huxley, who, unaware that the experiment had already been done, used ground thyroids. Since then, experiments have often been done with injections of iodine or various thyroid hormones to induce metamorphosis.
In other salamanders
Many other species within the axolotl's genus are also either entirely neotenic or have neotenic populations. Sirens and Necturus are other neotenic salamanders, although unlike axolotls, they cannot be induced to metamorphose by an injection of iodine or thyroxine hormone.
Neoteny has been observed in all salamander families, where it seems to be a survival mechanism in aquatic mountain and hill environments with little food and, in particular, little iodine. In this way, salamanders can reproduce and survive in the form of a smaller larval stage, which is aquatic and requires a lower quality and quantity of food compared to the big adult, which is terrestrial. If the salamander larvae ingest a sufficient amount of iodine, directly or indirectly through cannibalism, they quickly begin metamorphosis and transform into bigger terrestrial adults, with higher dietary requirements. In fact, in some high mountain lakes there live dwarf forms of salmonids caused by deficiencies in food and, in particular, iodine, which causes cretinism and dwarfism due to hypothyroidism, as it does in humans.
Online Model Organism Database
Xenbase provides limited support (BLAST, JBrowse tracks, genome download) for the axolotl.
Threats
Axolotls are only native to the Mexican Central Valley. Although the native axolotl population once extended through most of the lakes and wetlands that make up this region, the native habitat is now limited to Lake Xochimilco as a result of the expansion of Mexico City. Lake Xochimilco is not a large body of water, but rather a small series of artificial channels, small lakes, and temporary wetlands.
Lake Xochimilco has poor water quality, caused by the region's aquaculture and agriculture demands. It is also maintained by inputs of only partially treated wastewater. Water quality tests reveal a low nitrogen-phosphorus ratio and a high concentration of chlorophyll a, which are indicative of an oxygen-poor environment that is not well-suited for axolotls. In addition, the intensive use of pesticides from agriculture around Lake Xochimilco causes run off into the lake and a reduction of habitat quality for axolotls. The pesticides used contain chemical compounds that studies show to sharply increase mortality in axolotl embryos and larvae. Of the surviving embryo and larvae, there is also an increase of morphological, behavior, and activity abnormalities.
Another factor that threatens the native axolotl population is the introduction of invasive species such as the Nile tilapia and common carp. These invasive fish species threaten axolotl populations by eating their eggs or young and by out-competing them for natural resources. The presence of these species has also been shown to change the behavior of axolotls, causing them to be less active to avoid predation. This reduction in activity greatly impacts the axolotls' foraging and mating opportunities.
With such a small native population, there is a large loss of genetic diversity. This lack of genetic diversity can be dangerous for the remaining population, causing an increase in inbreeding and a decrease in general fitness and adaptive potential. It ultimately raises the axolotl's risk for extinction, something that they are already in danger of. Studies have found indicators of a low interpopulation gene flow and higher rates of genetic drift. These are likely the result of multiple “bottleneck” incidents in which events that kill off several individuals of a population occur and sharply reduce the genetic diversity of the remaining population. The offspring produced after bottleneck events have a greater risk of showing decreased fitness and are often less capable of adaptation down the line. Multiple bottleneck events can have disastrous effects on a population. Studies have also found high rates of relatedness that are indicative of inbreeding. Inbreeding can be especially harmful as it can cause an increase in the presence of deleterious, or harmful, genes within a population. The detection of introgressed tiger salamander (A. tigrinum) DNA in the laboratory axolotl population raises further concerns about the suitability of the captive population as an ark for potential reintroduction purposes.
There has been little improvement in the conditions of the lake or the population of native axolotls. Many scientists are focusing their conservation efforts on translocation of captive-bred individuals into new habitats or reintroduction into Lake Xochimilco. The Laboratorio de Restauracion Ecologica (LRE) in the Universidad Nacional Autonoma de Mexico (UNAM) has built up a population of more than 100 captive-bred individuals. These axolotls are mostly used for research by the lab but plans of a semi-artificial wetland inside the university have been established and the goal is to establish a viable population of axolotls within it. Studies have shown that captive-bred axolotls that are raised in a semi-natural environment can catch prey, survive in the wild, and have moderate success in escaping predators. These captive-bred individuals can be introduced into unpolluted bodies of water or back into Lake Xochimilco to establish or re-establish a wild population.
Captive care
The axolotl is a popular exotic pet like its relative, the tiger salamander (Ambystoma tigrinum). As with all poikilothermic organisms, lower temperatures result in a slower metabolism and an unhealthily reduced appetite. Temperatures of approximately 16 °C (61 °F) to 18 °C (64 °F) are suggested for captive axolotls to ensure sufficient food intake; stress resulting from more than a day's exposure to lower temperatures may quickly lead to disease and death, and temperatures higher than 24 °C (75 °F) may lead to an increased metabolic rate, also causing stress and eventually death. Chlorine, commonly added to tap water, is harmful to axolotls. A single axolotl typically requires a 150-litre (40-US-gallon) tank. Axolotls spend the majority of the time at the bottom of the tank.
Salts, such as Holtfreter's solution, are often added to the water to prevent infection.
In captivity, axolotls eat a variety of readily available foods, including trout and salmon pellets, frozen or live bloodworms, earthworms, and waxworms. Axolotls can also eat feeder fish, but care should be taken as fish may contain parasites.
Substrates are another important consideration for captive axolotls, as axolotls (like other amphibians and reptiles) tend to ingest bedding material together with food and are commonly prone to gastrointestinal obstruction and foreign body ingestion. Some common substrates used for animal enclosures can be harmful to amphibians and reptiles. Gravel (common in aquarium use) should not be used, and it is recommended that any sand consist of smooth particles with a grain size under 1 mm. One guide to axolotl care for laboratories notes that bowel obstructions are a common cause of death and recommends that no items with a diameter below 3 cm (or approximately the size of the animal's head) be available to the animal.
There is some evidence that axolotls might seek out appropriately-sized gravel for use as gastroliths based on experiments conducted at the University of Manitoba axolotl colony. As there is no conclusive evidence pointing to gastrolith use, gravel should be avoided due to the high risk of impaction.
Cultural significance
The species is named after the Aztec deity Xolotl, the god of fire and lightning, who transformed himself into an axolotl to avoid being sacrificed by fellow gods. They continue to play an outsized cultural role in Mexico. Axólotl also means water monster in the Nahuatl language.
They appear in the works of Mexican muralist Diego Rivera. In 2021, Mexico released a new design for its 50-peso banknote featuring an axolotl along with maize and chinampas on its back. It was recognized as "Bank Note of the Year" by the International Bank Note Society. HD 224693, a star in the equatorial constellation of Cetus, was named Axólotl in 2019.
The Pokémon Mudkip and its evolutions, added in Pokémon Ruby and Sapphire (2002), take some visual inspiration from axolotls. Additionally, the Pokémon Wooper, added in Pokémon Gold, Silver and Crystal (1999), is directly based on an axolotl. The looks of the dragons Toothless and The Night Fury in the How to Train Your Dragon movies are based on axolotls. Axolotls were also added to the video game Minecraft in 2020, following Mojang Studios' trend of adding endangered species to the game to raise awareness. They were also added to its spin-off Minecraft: Dungeons in 2022 and are available in Lego Minecraft. An anthropomorphic axolotl named Axo was also added as a purchasable outfit in Fortnite Battle Royale on August 9, 2020.
| Biology and health sciences | Salamanders and newts | Animals |
235757 | https://en.wikipedia.org/wiki/Sensor | Sensor | A sensor is a device that produces an output signal for the purpose of detecting a physical phenomenon.
In the broadest definition, a sensor is a device, module, machine, or subsystem that detects events or changes in its environment and sends the information to other electronics, frequently a computer processor.
Sensors are used in everyday objects such as touch-sensitive elevator buttons (tactile sensor) and lamps which dim or brighten by touching the base, and in innumerable applications of which most people are never aware. With advances in micromachinery and easy-to-use microcontroller platforms, the uses of sensors have expanded beyond the traditional fields of temperature, pressure and flow measurement, for example into MARG sensors.
Analog sensors such as potentiometers and force-sensing resistors are still widely used. Their applications include manufacturing and machinery, airplanes and aerospace, cars, medicine, robotics and many other aspects of our day-to-day life. There is a wide range of other sensors that measure chemical and physical properties of materials, including optical sensors for refractive index measurement, vibrational sensors for fluid viscosity measurement, and electro-chemical sensors for monitoring pH of fluids.
A sensor's sensitivity indicates how much its output changes when the input quantity it measures changes. For instance, if the mercury in a thermometer moves 1 cm when the temperature changes by 1 °C, its sensitivity is 1 cm/°C (it is basically the slope assuming a linear characteristic). Some sensors can also affect what they measure; for instance, a room temperature thermometer inserted into a hot cup of liquid cools the liquid while the liquid heats the thermometer. Sensors are usually designed to have a small effect on what is measured; making the sensor smaller often improves this and may introduce other advantages.
Technological progress allows more and more sensors to be manufactured on a microscopic scale as microsensors using MEMS technology. In most cases, a microsensor reaches a significantly faster measurement time and higher sensitivity compared with macroscopic approaches. Due to the increasing demand for rapid, affordable and reliable information in today's world, disposable sensors—low-cost and easy-to-use devices for short-term monitoring or single-shot measurements—have recently gained growing importance. Using this class of sensors, critical analytical information can be obtained by anyone, anywhere and at any time, without the need for recalibration or concerns about contamination.
Classification of measurement errors
A good sensor obeys the following rules:
it is sensitive to the measured property
it is insensitive to any other property likely to be encountered in its application, and
it does not influence the measured property.
Most sensors have a linear transfer function. The sensitivity is then defined as the ratio between the output signal and measured property. For example, if a sensor measures temperature and has a voltage output, the sensitivity is a constant with the units [V/K]. The sensitivity is the slope of the transfer function. Converting the sensor's electrical output (for example V) to the measured units (for example K) requires dividing the electrical output by the slope (or multiplying by its reciprocal). In addition, an offset is frequently added or subtracted. For example, −40 must be added to the output if 0 V output corresponds to −40 °C input.
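The slope-and-offset conversion described above can be sketched in a few lines of Python. The sensitivity and offset figures below are illustrative, not taken from any particular sensor datasheet:

```python
def sensor_to_units(output_v, sensitivity_v_per_unit, offset_units=0.0):
    """Convert a linear sensor's electrical output (volts) to measured units.

    Dividing by the slope (sensitivity) maps volts back onto the measured
    quantity; the offset is then added, as in the -40 degC example above.
    """
    return output_v / sensitivity_v_per_unit + offset_units

# Hypothetical temperature sensor: 0.01 V/degC sensitivity, 0 V at -40 degC
temp_c = sensor_to_units(0.8, 0.01, offset_units=-40.0)  # roughly 40 degC
```

The same function inverted (multiply by the slope, subtract the offset) recovers the expected electrical output from a known input, which is how such a linear model is checked during calibration.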
For an analog sensor signal to be processed or used in digital equipment, it needs to be converted to a digital signal, using an analog-to-digital converter.
Sensor deviations
Since sensors cannot replicate an ideal transfer function, several types of deviations can occur which limit sensor accuracy:
Since the range of the output signal is always limited, the output signal will eventually reach a minimum or maximum when the measured property exceeds the limits. The full scale range defines the maximum and minimum values of the measured property.
The sensitivity may in practice differ from the value specified. This is called a sensitivity error. This is an error in the slope of a linear transfer function.
If the output signal differs from the correct value by a constant, the sensor has an offset error or bias. This is an error in the y-intercept of a linear transfer function.
Nonlinearity is deviation of a sensor's transfer function from a straight line transfer function. Usually, this is defined by the amount the output differs from ideal behavior over the full range of the sensor, often noted as a percentage of the full range.
Deviation caused by rapid changes of the measured property over time is a dynamic error. Often, this behavior is described with a Bode plot showing sensitivity error and phase shift as a function of the frequency of a periodic input signal.
If the output signal slowly changes independent of the measured property, this is defined as drift. Long term drift over months or years is caused by physical changes in the sensor.
Noise is a random deviation of the signal that varies in time.
A hysteresis error causes the output value to vary depending on the previous input values. If a sensor's output is different depending on whether a specific input value was reached by increasing vs. decreasing the input, then the sensor has a hysteresis error.
If the sensor has a digital output, the output is essentially an approximation of the measured property. This approximation error is called quantization error.
If the signal is monitored digitally, the sampling frequency can cause a dynamic error, or if the input variable or added noise changes periodically at a frequency near a multiple of the sampling rate, aliasing errors may occur.
The sensor may to some extent be sensitive to properties other than the property being measured. For example, most sensors are influenced by the temperature of their environment.
All these deviations can be classified as systematic errors or random errors. Systematic errors can sometimes be compensated for by means of some kind of calibration strategy. Noise is a random error that can be reduced by signal processing, such as filtering, usually at the expense of the dynamic behavior of the sensor.
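As a minimal illustration of reducing random noise by filtering, the sketch below applies a moving-average (low-pass) filter to synthetic sensor readings. All values here are made up for demonstration purposes:

```python
import random

def moving_average(signal, window):
    """Crude low-pass filter: each output sample is the mean of up to
    `window` preceding input samples. Random noise is averaged down, at
    the cost of a slower response to genuine changes in the input."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i - lo + 1))
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

random.seed(0)                  # deterministic synthetic data
true_value = 20.0               # e.g. a constant temperature in degC
noisy = [true_value + random.gauss(0, 0.5) for _ in range(200)]
filtered = moving_average(noisy, 20)

# filtering shrinks the random scatter around the true value
assert variance(filtered[20:]) < variance(noisy[20:])
```

Note the trade-off mentioned above: a larger window suppresses more noise but responds more slowly to real changes, worsening the sensor's dynamic behavior.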
Resolution
The sensor resolution or measurement resolution is the smallest change that can be detected in the quantity that is being measured. The resolution of a sensor with a digital output is usually the numerical resolution of the digital output. The resolution is related to the precision with which the measurement is made, but they are not the same thing. A sensor's accuracy may be considerably worse than its resolution.
For example, the distance resolution is the minimum distance that can be accurately measured by any distance-measuring devices. In a time-of-flight camera, the distance resolution is usually equal to the standard deviation (total noise) of the signal expressed in unit of length.
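For a sensor with a digital output, the numerical resolution is set by the quantization step. The sketch below models an ideal analog-to-digital converter; the 8-bit, 5 V figures are hypothetical:

```python
def quantize(value, full_scale, bits):
    """Simulate an ideal ADC: map a value in [0, full_scale) onto an
    n-bit code. The step size (full_scale / 2**bits) is the smallest
    change the digital output can represent; the difference between the
    input and the reconstructed value is the quantization error."""
    step = full_scale / (1 << bits)
    code = min(int(value / step), (1 << bits) - 1)
    return code, code * step

# Hypothetical 8-bit ADC over a 5 V range: step is about 19.5 mV
code, approx = quantize(1.23, 5.0, 8)
assert abs(approx - 1.23) < 5.0 / (1 << 8)  # error bounded by one step
```

This is why resolution and accuracy differ: even with zero noise and a perfect transfer function, the reconstructed value can be off by up to one quantization step.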
Chemical sensor
A chemical sensor is a self-contained analytical device that can provide information about the chemical composition of its environment, that is, a liquid or a gas phase. The information is provided in the form of a measurable physical signal that is correlated with the concentration of a certain chemical species (termed the analyte). Two main steps are involved in the functioning of a chemical sensor, namely, recognition and transduction. In the recognition step, analyte molecules interact selectively with receptor molecules or sites included in the structure of the recognition element of the sensor. Consequently, a characteristic physical parameter varies, and this variation is reported by means of an integrated transducer that generates the output signal.
A chemical sensor based on a recognition material of biological nature is a biosensor. However, as synthetic biomimetic materials increasingly substitute for biological recognition materials, a sharp distinction between a biosensor and a standard chemical sensor is superfluous. Typical biomimetic materials used in sensor development are molecularly imprinted polymers and aptamers.
Chemical sensor array
Biosensor
In biomedicine and biotechnology, sensors which detect analytes thanks to a biological component, such as cells, protein, nucleic acid or biomimetic polymers, are called biosensors.
In contrast, a non-biological sensor for biological analytes, even an organic one (based on carbon chemistry), is referred to simply as a sensor or nanosensor. This terminology applies to both in vitro and in vivo applications.
The encapsulation of the biological component in biosensors presents a slightly different problem than in ordinary sensors; this can be done either by means of a semipermeable barrier, such as a dialysis membrane or a hydrogel, or by a 3D polymer matrix, which either physically constrains the sensing macromolecule or chemically constrains it by binding it to the scaffold.
Neuromorphic sensors
Neuromorphic sensors are sensors that physically mimic structures and functions of biological neural entities. One example of this is the event camera.
MOS sensors
Following the invention of the MOSFET at Bell Labs between 1955 and 1960, MOSFET sensors (MOS sensors) were developed, and they have since been widely used to measure physical, chemical, biological and environmental parameters.
Biochemical sensors
A number of MOSFET sensors have been developed, for measuring physical, chemical, biological, and environmental parameters. The earliest MOSFET sensors include the open-gate field-effect transistor (OGFET) introduced by Johannessen in 1970, the ion-sensitive field-effect transistor (ISFET) invented by Piet Bergveld in 1970, the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. The ISFET is widely used in biomedical applications, such as the detection of DNA hybridization, biomarker detection from blood, antibody detection, glucose measurement, pH sensing, and genetic technology.
By the mid-1980s, numerous other MOSFET sensors had been developed, including the gas sensor FET (GASFET), surface accessible FET (SAFET), charge flow transistor (CFT), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), biosensor FET (BioFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFET types such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed.
Image sensors
MOS technology is the basis for modern image sensors, including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras. Willard Boyle and George E. Smith developed the CCD in 1969. While researching the MOS process, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting.
The MOS active-pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. The CMOS active-pixel sensor was later developed by Eric Fossum and his team in the early 1990s.
MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5μm NMOS sensor chip. Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors.
Monitoring sensors
MOS monitoring sensors are used for house monitoring, office and agriculture monitoring, traffic monitoring (including car speed, traffic jams, and traffic accidents), weather monitoring (such as for rain, wind, lightning and storms), defense monitoring, and monitoring temperature, humidity, air pollution, fire, health, security and lighting. MOS gas detector sensors are used to detect carbon monoxide, sulfur dioxide, hydrogen sulfide, ammonia, and other gas substances. Other MOS sensors include intelligent sensors and wireless sensor network (WSN) technology.
Electronics sensors
Typical modern CPUs, GPUs and SoCs usually integrate electronic sensors to monitor chip temperature, voltage and power.