Science

Science is a systematic discipline that builds and organises knowledge in the form of testable hypotheses and predictions about the universe. Modern science is typically divided into two or three major branches: the natural sciences (e.g., physics, chemistry, and biology), which study the physical world; and the behavioural sciences (e.g., economics, psychology, and sociology), which study individuals and societies. The formal sciences (e.g., logic, mathematics, and theoretical computer science), which study formal systems governed by axioms and rules, are sometimes described as being sciences as well; however, they are often regarded as a separate field because they rely on deductive reasoning instead of the scientific method or empirical evidence as their main methodology. Applied sciences are disciplines that use scientific knowledge for practical purposes, such as engineering and medicine.
The history of science spans the majority of the historical record, with the earliest identifiable predecessors to modern science dating to the Bronze Age in Egypt and Mesopotamia. Their contributions to mathematics, astronomy, and medicine entered and shaped the Greek natural philosophy of classical antiquity, whereby formal attempts were made to provide explanations of events in the physical world based on natural causes, while further advancements, including the introduction of the Hindu–Arabic numeral system, were made during the Golden Age of India. Scientific research deteriorated in these regions after the fall of the Western Roman Empire during the Early Middle Ages (400–1000 CE), but in the Medieval renaissances (Carolingian Renaissance, Ottonian Renaissance and the Renaissance of the 12th century) scholarship flourished again. Some Greek manuscripts lost in Western Europe were preserved and expanded upon in the Middle East during the Islamic Golden Age, along with the later efforts of Byzantine Greek scholars who brought Greek manuscripts from the dying Byzantine Empire to Western Europe at the start of the Renaissance.
The recovery and assimilation of Greek works and Islamic inquiries into Western Europe from the 10th to 13th centuries revived natural philosophy, which was later transformed by the Scientific Revolution that began in the 16th century as new ideas and discoveries departed from previous Greek conceptions and traditions. The scientific method soon played a greater role in knowledge creation and it was not until the 19th century that many of the institutional and professional features of science began to take shape, along with the changing of "natural philosophy" to "natural science".
New knowledge in science is advanced by research from scientists who are motivated by curiosity about the world and a desire to solve problems. Contemporary scientific research is highly collaborative and is usually done by teams in academic and research institutions, government agencies, and companies. The practical impact of their work has led to the emergence of science policies that seek to influence the scientific enterprise by prioritising the ethical and moral development of commercial products, armaments, health care, public infrastructure, and environmental protection.
Etymology
The word science has been used in Middle English since the 14th century in the sense of "the state of knowing". It was borrowed from Anglo-Norman, which had in turn taken it from the Latin scientia, meaning "knowledge, awareness, understanding". Scientia is a noun derivative of sciens, "knowing", the present participle of the verb scire, "to know".
There are many hypotheses for the ultimate origin of the word. According to Michiel de Vaan, a Dutch linguist and Indo-Europeanist, the Latin verb may go back to a Proto-Italic form meaning "to know", which may in turn derive from a Proto-Indo-European root meaning "to incise". The Lexikon der indogermanischen Verben instead proposes that the verb is a back-formation of nescire, "to not know, be unfamiliar with", which may derive from a Proto-Indo-European root meaning "to cut", as in the Latin secare.
In the past, science was a synonym for "knowledge" or "study", in keeping with its Latin origin. A person who conducted scientific research was called a "natural philosopher" or "man of science". In 1834, William Whewell introduced the term scientist in a review of Mary Somerville's book On the Connexion of the Physical Sciences, crediting it to "some ingenious gentleman" (possibly himself).
History
Early history
Science has no single origin. Rather, scientific thinking emerged gradually over the course of tens of thousands of years, taking different forms around the world, and few details are known about the very earliest developments. Women likely played a central role in prehistoric science, as did religious rituals. Some scholars use the term "protoscience" to label activities in the past that resemble modern science in some but not all features; however, this label has also been criticised as denigrating, or too suggestive of presentism, thinking about those activities only in relation to modern categories.
Direct evidence for scientific processes becomes clearer with the advent of writing systems in the Bronze Age civilisations of Ancient Egypt and Mesopotamia, creating the earliest written records in the history of science. Although the words and concepts of "science" and "nature" were not part of the conceptual landscape at the time, the ancient Egyptians and Mesopotamians made contributions that would later find a place in Greek and medieval science: mathematics, astronomy, and medicine. From the 3rd millennium BCE, the ancient Egyptians developed a non-positional decimal numbering system, solved practical problems using geometry, and developed a calendar. Their healing therapies involved drug treatments and the supernatural, such as prayers, incantations, and rituals.
The ancient Mesopotamians used knowledge about the properties of various natural chemicals for manufacturing pottery, faience, glass, soap, metals, lime plaster, and waterproofing. They studied animal physiology, anatomy, behaviour, and astrology for divinatory purposes. The Mesopotamians had an intense interest in medicine and the earliest medical prescriptions appeared in Sumerian during the Third Dynasty of Ur. They seem to have studied scientific subjects which had practical or religious applications and had little interest in satisfying curiosity.
Classical antiquity
In classical antiquity, there is no real ancient analogue of a modern scientist. Instead, well-educated, usually upper-class, and almost universally male individuals performed various investigations into nature whenever they could afford the time. Before the invention or discovery of the concept of phusis or nature by the pre-Socratic philosophers, the same words tended to be used to describe the natural "way" in which a plant grows, and the "way" in which, for example, one tribe worships a particular god. For this reason, it is claimed that these men were the first philosophers in the strict sense and the first to clearly distinguish "nature" and "convention".
The early Greek philosophers of the Milesian school, which was founded by Thales of Miletus and later continued by his successors Anaximander and Anaximenes, were the first to attempt to explain natural phenomena without relying on the supernatural. The Pythagoreans developed a complex number philosophy and contributed significantly to the development of mathematical science. The theory of atoms was developed by the Greek philosopher Leucippus and his student Democritus. Later, Epicurus would develop a full natural cosmology based on atomism, and would adopt a "canon" (ruler, standard) which established physical criteria or standards of scientific truth. The Greek doctor Hippocrates established the tradition of systematic medical science and is known as "The Father of Medicine".
A turning point in the history of early philosophical science was Socrates' example of applying philosophy to the study of human matters, including human nature, the nature of political communities, and human knowledge itself. The Socratic method as documented by Plato's dialogues is a dialectic method of hypothesis elimination: better hypotheses are found by steadily identifying and eliminating those that lead to contradictions. The Socratic method searches for general commonly-held truths that shape beliefs and scrutinises them for consistency. Socrates criticised the older type of study of physics as too purely speculative and lacking in self-criticism.
In the 4th century BCE, Aristotle created a systematic programme of teleological philosophy. In the 3rd century BCE, Greek astronomer Aristarchus of Samos was the first to propose a heliocentric model of the universe, with the Sun at the centre and all the planets orbiting it. Aristarchus's model was widely rejected because it was believed to violate the laws of physics, while Ptolemy's Almagest, which contains a geocentric description of the Solar System, was accepted through the early Renaissance instead. The inventor and mathematician Archimedes of Syracuse made major contributions to the beginnings of calculus. Pliny the Elder was a Roman writer and polymath, who wrote the seminal encyclopaedia Natural History.
Positional notation for representing numbers likely emerged between the 3rd and 5th centuries CE along Indian trade routes. This numeral system made efficient arithmetic operations more accessible and would eventually become standard for mathematics worldwide.
Middle Ages
Due to the collapse of the Western Roman Empire, the 5th century saw an intellectual decline, with knowledge of classical Greek conceptions of the world deteriorating in Western Europe. Latin encyclopaedists of the period such as Isidore of Seville preserved the majority of general ancient knowledge. In contrast, because the Byzantine Empire resisted attacks from invaders, it was able to preserve and improve prior learning. John Philoponus, a Byzantine scholar in the 6th century, started to question Aristotle's teaching of physics, introducing the theory of impetus. His criticism served as an inspiration to medieval scholars and Galileo Galilei, who extensively cited his works ten centuries later.
During late antiquity and the Early Middle Ages, natural phenomena were mainly examined via the Aristotelian approach. The approach includes Aristotle's four causes: material, formal, moving, and final cause. Many Greek classical texts were preserved by the Byzantine Empire and Arabic translations were done by groups such as the Nestorians and the Monophysites. Under the Abbasids, these Arabic translations were later improved and developed by Arabic scientists. By the 6th and 7th centuries, the neighbouring Sasanian Empire established the medical Academy of Gondishapur, which was considered by Greek, Syriac, and Persian physicians as the most important medical hub of the ancient world.
Islamic study of Aristotelianism flourished in the House of Wisdom, established in the Abbasid capital of Baghdad, Iraq, and continued until the Mongol invasions of the 13th century. Ibn al-Haytham, better known as Alhazen, used controlled experiments in his optical study. Avicenna's compilation of The Canon of Medicine, a medical encyclopaedia, is considered to be one of the most important publications in medicine and was used until the 18th century.
By the 11th century, most of Europe had become Christian, and in 1088, the University of Bologna emerged as the first university in Europe. As such, demand for Latin translation of ancient and scientific texts grew, a major contributor to the Renaissance of the 12th century. Renaissance scholasticism in western Europe flourished, with experiments done by observing, describing, and classifying subjects in nature. In the 13th century, medical teachers and students at Bologna began opening human bodies, leading to the first anatomy textbook based on human dissection by Mondino de Luzzi.
Renaissance
New developments in optics played a role in the inception of the Renaissance, both by challenging long-held metaphysical ideas on perception, as well as by contributing to the improvement and development of technology such as the camera obscura and the telescope. At the start of the Renaissance, Roger Bacon, Vitello, and John Peckham each built up a scholastic ontology upon a causal chain beginning with sensation, perception, and finally apperception of the individual and universal forms of Aristotle. A model of vision later known as perspectivism was exploited and studied by the artists of the Renaissance. This theory uses only three of Aristotle's four causes: formal, material, and final.
In the 16th century, Nicolaus Copernicus formulated a heliocentric model of the Solar System, stating that the planets revolve around the Sun, instead of the geocentric model where the planets and the Sun revolve around the Earth. This was based on a theorem that the orbital periods of the planets are longer as their orbs are farther from the centre of motion, which he found not to agree with Ptolemy's model.
Johannes Kepler and others challenged the notion that the only function of the eye is perception, and shifted the main focus in optics from the eye to the propagation of light. Kepler is best known, however, for improving Copernicus' heliocentric model through the discovery of Kepler's laws of planetary motion. Kepler did not reject Aristotelian metaphysics and described his work as a search for the Harmony of the Spheres. Galileo made significant contributions to astronomy, physics and engineering. However, he was persecuted after Pope Urban VIII condemned him for writing about the heliocentric model.
The printing press was widely used to publish scholarly arguments, including some that disagreed widely with contemporary ideas of nature. Francis Bacon and René Descartes published philosophical arguments in favour of a new type of non-Aristotelian science. Bacon emphasised the importance of experiment over contemplation, questioned the Aristotelian concepts of formal and final cause, promoted the idea that science should study the laws of nature and the improvement of all human life. Descartes emphasised individual thought and argued that mathematics rather than geometry should be used to study nature.
Age of Enlightenment
At the start of the Age of Enlightenment, Isaac Newton laid the foundation of classical mechanics with his Philosophiæ Naturalis Principia Mathematica, greatly influencing future physicists. Gottfried Wilhelm Leibniz incorporated terms from Aristotelian physics, now used in a new non-teleological way. This implied a shift in the view of objects: objects were now considered as having no innate goals. Leibniz assumed that different types of things all work according to the same general laws of nature, with no special formal or final causes.
During this time the declared purpose and value of science became producing wealth and inventions that would improve human lives, in the materialistic sense of having more food, clothing, and other things. In Bacon's words, "the real and legitimate goal of sciences" lay in such practical benefits to human life, and he discouraged scientists from pursuing intangible philosophical or spiritual ideas, which he believed contributed little to human happiness beyond "the fume of subtle, sublime or pleasing [speculation]".
Science during the Enlightenment was dominated by scientific societies and academies, which had largely replaced universities as centres of scientific research and development. Societies and academies were the backbones of the maturation of the scientific profession. Another important development was the popularisation of science among an increasingly literate population. Enlightenment philosophers turned to a few of their scientific predecessors – Galileo, Kepler, Boyle, and Newton principally – as the guides to every physical and social field of the day.
The 18th century saw significant advancements in the practice of medicine and physics; the development of biological taxonomy by Carl Linnaeus; a new understanding of magnetism and electricity; and the maturation of chemistry as a discipline. Ideas on human nature, society, and economics evolved during the Enlightenment. Hume and other Scottish Enlightenment thinkers developed a "science of man", articulated in Hume's A Treatise of Human Nature and expressed historically in works by authors including James Burnett, Adam Ferguson, John Millar and William Robertson, all of whom merged a scientific study of how humans behaved in ancient and primitive cultures with a strong awareness of the determining forces of modernity. Modern sociology largely originated from this movement. In 1776, Adam Smith published The Wealth of Nations, which is often considered the first work on modern economics.
19th century
During the 19th century, many distinguishing characteristics of contemporary modern science began to take shape. These included the transformation of the life and physical sciences; the frequent use of precision instruments; the emergence of terms such as "biologist", "physicist", and "scientist"; an increased professionalisation of those studying nature; scientists gaining cultural authority over many dimensions of society; the industrialisation of numerous countries; the thriving of popular science writings; and the emergence of science journals. During the late 19th century, psychology emerged as a separate discipline from philosophy when Wilhelm Wundt founded the first laboratory for psychological research in 1879.
In 1858, Charles Darwin and Alfred Russel Wallace independently proposed the theory of evolution by natural selection, which explained how different plants and animals originated and evolved. Their theory was set out in detail in Darwin's book On the Origin of Species, published in 1859. Separately, Gregor Mendel presented his paper "Experiments on Plant Hybridisation" in 1865, which outlined the principles of biological inheritance, serving as the basis for modern genetics.
Early in the 19th century John Dalton suggested the modern atomic theory, based on Democritus's original idea of indivisible particles called atoms. The laws of conservation of energy, conservation of momentum and conservation of mass suggested a highly stable universe where there could be little loss of resources. However, with the advent of the steam engine and the Industrial Revolution there was an increased understanding that not all forms of energy have the same quality, that is, the same ease of conversion to useful work or to another form of energy. This realisation led to the development of the laws of thermodynamics, in which the free energy of the universe is seen as constantly declining: the entropy of a closed universe increases over time.
The electromagnetic theory was established in the 19th century by the works of Hans Christian Ørsted, André-Marie Ampère, Michael Faraday, James Clerk Maxwell, Oliver Heaviside, and Heinrich Hertz. The new theory raised questions that could not easily be answered using Newton's framework. The discovery of X-rays inspired the discovery of radioactivity by Henri Becquerel and Marie Curie in 1896; Marie Curie later became the first person to win two Nobel Prizes. In the next year came the discovery of the first subatomic particle, the electron.
20th century
In the first half of the century the development of antibiotics and artificial fertilisers improved human living standards globally. Harmful environmental issues such as ozone depletion, ocean acidification, eutrophication, and climate change came to the public's attention and prompted the emergence of environmental studies.
During this period scientific experimentation became increasingly larger in scale and funding. The extensive technological innovation stimulated by World War I, World War II, and the Cold War led to competitions between global powers, such as the Space Race and nuclear arms race. Substantial international collaborations were also made, despite armed conflicts.
In the late 20th century active recruitment of women and elimination of sex discrimination greatly increased the number of women scientists, but large gender disparities remained in some fields. The discovery of the cosmic microwave background in 1964 led to a rejection of the steady-state model of the universe in favour of the Big Bang theory of Georges Lemaître.
The century saw fundamental changes within science disciplines. Evolution became a unified theory in the early 20th century when the modern synthesis reconciled Darwinian evolution with classical genetics. Albert Einstein's theory of relativity and the development of quantum mechanics complemented classical mechanics, describing physics at extreme scales of length, time, and gravity. Widespread use of integrated circuits in the last quarter of the 20th century combined with communications satellites led to a revolution in information technology and the rise of the global internet and mobile computing, including smartphones. The need for mass systematisation of long, intertwined causal chains and large amounts of data led to the rise of the fields of systems theory and computer-assisted scientific modelling.
21st century
The Human Genome Project was completed in 2003 by identifying and mapping all of the genes of the human genome. The first induced pluripotent human stem cells were made in 2006, allowing adult cells to be transformed into stem cells and turn into any cell type found in the body. With the affirmation of the Higgs boson discovery in 2013, the last particle predicted by the Standard Model of particle physics was found. In 2015, gravitational waves, predicted by general relativity a century before, were first observed. In 2019, the international collaboration Event Horizon Telescope presented the first direct image of a black hole's accretion disc.
Branches
Modern science is commonly divided into three major branches: natural science, social science, and formal science. Each of these branches comprises various specialised yet overlapping scientific disciplines that often possess their own nomenclature and expertise. Both natural and social sciences are empirical sciences, as their knowledge is based on empirical observations and is capable of being tested for its validity by other researchers working under the same conditions.
Natural science
Natural science is the study of the physical world. It can be divided into two main branches: life science and physical science. These two branches may be further divided into more specialised disciplines. For example, physical science can be subdivided into physics, chemistry, astronomy, and earth science. Modern natural science is the successor to the natural philosophy that began in Ancient Greece. Galileo, Descartes, Bacon, and Newton debated the benefits of using approaches that were more mathematical and more experimental in a methodical way. Still, philosophical perspectives, conjectures, and presuppositions, often overlooked, remain necessary in natural science. Systematic data collection, including discovery science, succeeded natural history, which emerged in the 16th century by describing and classifying plants, animals, minerals, and other biotic beings. Today, "natural history" suggests observational descriptions aimed at popular audiences.
Social science
Social science is the study of human behaviour and the functioning of societies. It has many disciplines that include, but are not limited to anthropology, economics, history, human geography, political science, psychology, and sociology. In the social sciences, there are many competing theoretical perspectives, many of which are extended through competing research programmes such as the functionalists, conflict theorists, and interactionists in sociology. Due to the limitations of conducting controlled experiments involving large groups of individuals or complex situations, social scientists may adopt other research methods such as the historical method, case studies, and cross-cultural studies. Moreover, if quantitative information is available, social scientists may rely on statistical approaches to better understand social relationships and processes.
Formal science
Formal science is an area of study that generates knowledge using formal systems. A formal system is an abstract structure used for inferring theorems from axioms according to a set of rules. It includes mathematics, systems theory, and theoretical computer science. The formal sciences share similarities with the other two branches by relying on objective, careful, and systematic study of an area of knowledge. They are, however, different from the empirical sciences as they rely exclusively on deductive reasoning, without the need for empirical evidence, to verify their abstract concepts. The formal sciences are therefore a priori disciplines and because of this, there is disagreement on whether they constitute a science. Nevertheless, the formal sciences play an important role in the empirical sciences. Calculus, for example, was initially invented to understand motion in physics. Natural and social sciences that rely heavily on mathematical applications include mathematical physics, chemistry, biology, finance, and economics.
Applied science
Applied science is the use of the scientific method and knowledge to attain practical goals and includes a broad range of disciplines such as engineering and medicine. Engineering is the use of scientific principles to invent, design and build machines, structures and technologies. Science may contribute to the development of new technologies. Medicine is the practice of caring for patients by maintaining and restoring health through the prevention, diagnosis, and treatment of injury or disease. The applied sciences are often contrasted with the basic sciences, which are focused on advancing scientific theories and laws that explain and predict events in the natural world.
Computational science applies computing power to simulate real-world situations, enabling a better understanding of scientific problems than formal mathematics alone can achieve. The use of machine learning and artificial intelligence is becoming a central feature of computational contributions to science, for example in agent-based computational economics, random forests, topic modeling and various forms of prediction. However, machines alone rarely advance knowledge: they require human guidance and the capacity to reason, and they can introduce bias against certain social groups or sometimes underperform against humans.
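As a rough illustration of one of the machine-learning methods named above, the following Python sketch fits a random forest classifier to a small labelled dataset and checks its accuracy on held-out data. It assumes the scikit-learn library and its bundled iris dataset, neither of which is mentioned in the text; it is a minimal example under those assumptions, not a description of any particular study.

```python
# Minimal sketch: fitting a random forest (one of the methods mentioned above)
# to a toy labelled dataset and evaluating it on held-out data.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # small, well-known labelled dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)  # reserve a quarter of the data for testing

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)  # learn an ensemble of decision trees from the training split

predictions = model.predict(X_test)
print("held-out accuracy:", accuracy_score(y_test, predictions))
```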
Interdisciplinary science
Interdisciplinary science involves the combination of two or more disciplines into one, such as bioinformatics, a combination of biology and computer science, or the cognitive sciences. The concept has existed since the ancient Greek period, and it became popular again in the 20th century.
Scientific research
Scientific research can be labelled as either basic or applied research. Basic research is the search for knowledge and applied research is the search for solutions to practical problems using this knowledge. Most understanding comes from basic research, though sometimes applied research targets specific practical problems. This leads to technological advances that were not previously imaginable.
Scientific method
Scientific research involves using the scientific method, which seeks to objectively explain the events of nature in a reproducible way. Scientists usually take for granted a set of basic assumptions that are needed to justify the scientific method: there is an objective reality shared by all rational observers; this objective reality is governed by natural laws; and these laws can be discovered by means of systematic observation and experimentation. Mathematics is essential in the formation of hypotheses, theories, and laws, because it is used extensively in quantitative modelling, observing, and collecting measurements. Statistics is used to summarise and analyse data, which allows scientists to assess the reliability of experimental results.
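To make the role of statistics concrete, here is a small Python sketch that summarises two sets of measurements and applies a two-sample t-test to gauge whether the observed difference is likely to be due to chance. The measurement values are invented purely for illustration, and the use of SciPy's ttest_ind function is an assumption of this example, not something the text prescribes.

```python
# Illustration of using statistics to assess the reliability of experimental results.
# The measurement values below are invented purely for demonstration.
import statistics
from scipy import stats

control = [9.8, 10.1, 10.0, 9.9, 10.2, 10.0]
treatment = [10.4, 10.6, 10.3, 10.7, 10.5, 10.4]

print("control mean:", statistics.mean(control))
print("treatment mean:", statistics.mean(treatment))

# A two-sample t-test estimates how probable a difference this large would be
# if both groups were actually drawn from the same population.
t_stat, p_value = stats.ttest_ind(treatment, control)
print("t statistic:", round(t_stat, 2), "p-value:", round(p_value, 4))
```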
In the scientific method an explanatory thought experiment or hypothesis is put forward as an explanation using parsimony principles and is expected to seek consilience – fitting with other accepted facts related to an observation or scientific question. This tentative explanation is used to make falsifiable predictions, which are typically posited before being tested by experimentation. Disproof of a prediction is evidence of progress. Experimentation is especially important in science to help establish causal relationships to avoid the correlation fallacy, though in some sciences such as astronomy or geology, a predicted observation might be more appropriate.
When a hypothesis proves unsatisfactory it is modified or discarded. If the hypothesis survives testing, it may become adopted into the framework of a scientific theory, a validly reasoned, self-consistent model or framework for describing the behaviour of certain natural events. A theory typically describes the behaviour of much broader sets of observations than a hypothesis; commonly, a large number of hypotheses can be logically bound together by a single theory. Thus, a theory is a hypothesis explaining various other hypotheses. In that vein, theories are formulated according to most of the same scientific principles as hypotheses. Scientists may generate a model, an attempt to describe or depict an observation in terms of a logical, physical or mathematical representation, and to generate new hypotheses that can be tested by experimentation.
While performing experiments to test hypotheses, scientists may have a preference for one outcome over another. Eliminating the bias can be achieved through transparency, careful experimental design, and a thorough peer review process of the experimental results and conclusions. After the results of an experiment are announced or published, it is normal practice for independent researchers to double-check how the research was performed, and to follow up by performing similar experiments to determine how dependable the results might be. Taken in its entirety, the scientific method allows for highly creative problem solving while minimising the effects of subjective and confirmation bias. Intersubjective verifiability, the ability to reach a consensus and reproduce results, is fundamental to the creation of all scientific knowledge.
Scientific literature
Scientific research is published in a range of literature. Scientific journals communicate and document the results of research carried out in universities and various other research institutions, serving as an archival record of science. The first scientific journals, the Journal des sçavans followed by the Philosophical Transactions, began publication in 1665. Since that time the total number of active periodicals has steadily increased. In 1981, one estimate for the number of scientific and technical journals in publication was 11,500.
Most scientific journals cover a single scientific field and publish the research within that field; the research is normally expressed in the form of a scientific paper. Science has become so pervasive in modern societies that it is considered necessary to communicate the achievements, news, and ambitions of scientists to a wider population.
Challenges
The replication crisis is an ongoing methodological crisis affecting parts of the social and life sciences: the results of many scientific studies have proven difficult or impossible to reproduce in subsequent investigations. The crisis has long-standing roots; the phrase was coined in the early 2010s as part of a growing awareness of the problem. The replication crisis represents an important body of research in metascience, which aims to improve the quality of all scientific research while reducing waste.
An area of study or speculation that masquerades as science in an attempt to claim legitimacy that it would not otherwise be able to achieve is sometimes referred to as pseudoscience, fringe science, or junk science. Physicist Richard Feynman coined the term "cargo cult science" for cases in which researchers believe they are doing science, and their work superficially resembles it, but they lack the honesty to allow their results to be rigorously evaluated. Various types of commercial advertising, ranging from hype to fraud, may fall into these categories. Science has been described as "the most important tool" for separating valid claims from invalid ones.
There can also be an element of political or ideological bias on all sides of scientific debates. Sometimes, research may be characterised as "bad science", research that may be well-intended but amounts to incorrect, obsolete, incomplete, or over-simplified expositions of scientific ideas. The term "scientific misconduct" refers to situations such as where researchers have intentionally misrepresented their published data or have purposely given credit for a discovery to the wrong person.
Philosophy of science
There are different schools of thought in the philosophy of science. The most popular position is empiricism, which holds that knowledge is created by a process involving observation; scientific theories generalise observations. Empiricism generally encompasses inductivism, a position that explains how general theories can be made from the finite amount of empirical evidence available. Many versions of empiricism exist, with the predominant ones being Bayesianism and the hypothetico-deductive method.
Empiricism has stood in contrast to rationalism, the position originally associated with Descartes, which holds that knowledge is created by the human intellect, not by observation. Critical rationalism is a contrasting 20th-century approach to science, first defined by Austrian-British philosopher Karl Popper. Popper rejected the way that empiricism describes the connection between theory and observation. He claimed that theories are not generated by observation, but that observation is made in the light of theories, and that a theory A is affected by observation only when it conflicts with an observation that a rival theory B survives.
Popper proposed replacing verifiability with falsifiability as the landmark of scientific theories, replacing induction with falsification as the empirical method. Popper further claimed that there is actually only one universal method, not specific to science: the negative method of criticism, trial and error, covering all products of the human mind, including science, mathematics, philosophy, and art.
Another approach, instrumentalism, emphasises the utility of theories as instruments for explaining and predicting phenomena. It views scientific theories as black boxes, with only their input (initial conditions) and output (predictions) being relevant. Consequences, theoretical entities, and logical structure are claimed to be things that should be ignored. Close to instrumentalism is constructive empiricism, according to which the main criterion for the success of a scientific theory is whether what it says about observable entities is true.
Thomas Kuhn argued that the process of observation and evaluation takes place within a paradigm, a logically consistent "portrait" of the world that is consistent with observations made from its framing. He characterised normal science as the process of observation and "puzzle solving", which takes place within a paradigm, whereas revolutionary science occurs when one paradigm overtakes another in a paradigm shift. Each paradigm has its own distinct questions, aims, and interpretations. The choice between paradigms involves setting two or more "portraits" against the world and deciding which likeness is most promising. A paradigm shift occurs when a significant number of observational anomalies arise in the old paradigm and a new paradigm makes sense of them. That is, the choice of a new paradigm is based on observations, even though those observations are made against the background of the old paradigm. For Kuhn, acceptance or rejection of a paradigm is a social process as much as a logical process. Kuhn's position, however, is not one of relativism.
Another approach often cited in debates of scientific scepticism against controversial movements like "creation science" is methodological naturalism. Naturalists maintain that a distinction should be drawn between the natural and the supernatural, and that science should be restricted to natural explanations. Methodological naturalism maintains that science requires strict adherence to empirical study and independent verification.
Scientific community
The scientific community is a network of interacting scientists who conduct scientific research. The community consists of smaller groups working in scientific fields. By having peer review, through discussion and debate within journals and conferences, scientists maintain the quality of research methodology and objectivity when interpreting results.
Scientists
Scientists are individuals who conduct scientific research to advance knowledge in an area of interest. In modern times, many professional scientists are trained in an academic setting and, upon completion, attain an academic degree, with the highest degree being a doctorate (e.g. a Doctor of Philosophy, or PhD). Many scientists pursue careers in various sectors of the economy such as academia, industry, government, and nonprofit organisations.
Scientists exhibit a strong curiosity about reality and a desire to apply scientific knowledge for the benefit of health, nations, the environment, or industries. Other motivations include recognition by their peers and prestige.
Science has historically been a male-dominated field, with notable exceptions. Women faced considerable discrimination in science, much as they did in other areas of male-dominated societies. For example, women were frequently passed over for job opportunities and denied credit for their work. The achievements of women in science have been attributed to the defiance of their traditional role as labourers within the domestic sphere.
Learned societies
Learned societies for the communication and promotion of scientific thought and experimentation have existed since the Renaissance. Many scientists belong to a learned society that promotes their respective scientific discipline, profession, or group of related disciplines. Membership may either be open to all, require possession of scientific credentials, or conferred by election. Most scientific societies are nonprofit organisations, and many are professional associations. Their activities typically include holding regular conferences for the presentation and discussion of new research results and publishing or sponsoring academic journals in their discipline. Some societies act as professional bodies, regulating the activities of their members in the public interest, or the collective interest of the membership.
The professionalisation of science, begun in the 19th century, was partly enabled by the creation of national distinguished academies of sciences such as the Italian Accademia dei Lincei in 1603, the British Royal Society in 1660, the French Academy of Sciences in 1666, the American National Academy of Sciences in 1863, the German Kaiser Wilhelm Society in 1911, and the Chinese Academy of Sciences in 1949. International scientific organisations, such as the International Science Council, are devoted to international cooperation for science advancement.
Awards
Science awards are usually given to individuals or organisations that have made significant contributions to a discipline. They are often given by prestigious institutions, so it is considered a great honour for a scientist to receive one. Since the early Renaissance, scientists have often been awarded medals, money, and titles. The Nobel Prize, widely regarded as a prestigious award, is given annually to those who have achieved scientific advances in the fields of medicine, physics, and chemistry.
Society
Funding and policies
Scientific research is often funded through a competitive process in which potential research projects are evaluated and only the most promising receive funding. Such processes, which are run by government, corporations, or foundations, allocate scarce funds. Total research funding in most developed countries is between 1.5% and 3% of GDP. In the OECD, around two-thirds of research and development in scientific and technical fields is carried out by industry, and 20% and 10%, respectively, by universities and government. The government funding proportion in certain fields is higher, and it dominates research in social science and the humanities. In less developed nations, the government provides the bulk of the funds for their basic scientific research.
Many governments have dedicated agencies to support scientific research, such as the National Science Foundation in the United States, the National Scientific and Technical Research Council in Argentina, Commonwealth Scientific and Industrial Research Organisation in Australia, National Centre for Scientific Research in France, the Max Planck Society in Germany, and National Research Council in Spain. In commercial research and development, all but the most research-orientated corporations focus more heavily on near-term commercialisation possibilities than research driven by curiosity.
Science policy is concerned with policies that affect the conduct of the scientific enterprise, including research funding, often in pursuance of other national policy goals such as technological innovation to promote commercial product development, weapons development, health care, and environmental monitoring. Science policy sometimes refers to the act of applying scientific knowledge and consensus to the development of public policies. In accordance with public policy being concerned about the well-being of its citizens, science policy's goal is to consider how science and technology can best serve the public. Public policy can directly affect the funding of capital equipment and intellectual infrastructure for industrial research by providing tax incentives to those organisations that fund research.
Education and awareness
Science education for the general public is embedded in the school curriculum, and is supplemented by online pedagogical content (for example, YouTube and Khan Academy), museums, and science magazines and blogs. Major organisations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, along with philosophy and history. Scientific literacy is chiefly concerned with an understanding of the scientific method, units and methods of measurement, empiricism, a basic understanding of statistics (correlations, qualitative versus quantitative observations, aggregate statistics), and a basic understanding of core scientific fields such as physics, chemistry, biology, ecology, geology, and computation. As a student advances into higher stages of formal education, the curriculum becomes more in depth. Traditional subjects usually included in the curriculum are natural and formal sciences, although recent movements include social and applied science as well.
The mass media face pressures that can prevent them from accurately depicting competing scientific claims in terms of their credibility within the scientific community as a whole. Determining how much weight to give different sides in a scientific debate may require considerable expertise regarding the matter. Few journalists have real scientific knowledge, and even beat reporters who are knowledgeable about certain scientific issues may be ignorant about other scientific issues that they are suddenly asked to cover.
Science magazines such as New Scientist, Science & Vie, and Scientific American cater to the needs of a much wider readership and provide a non-technical summary of popular areas of research, including notable discoveries and advances in certain fields of research. The science fiction genre, primarily speculative fiction, can transmit the ideas and methods of science to the general public. Recent efforts to intensify or develop links between science and non-scientific disciplines, such as literature or poetry, include the Creative Writing Science resource developed through the Royal Literary Fund.
Anti-science attitudes
While the scientific method is broadly accepted in the scientific community, some segments of society reject certain scientific positions or are sceptical about science. Examples are the common notion that COVID-19 is not a major health threat to the US (held by 39% of Americans in August 2021) or the belief that climate change is not a major threat to the US (held by 40% of Americans in late 2019 and early 2020). Psychologists have pointed to four factors driving rejection of scientific results:
Scientific authorities are sometimes seen as inexpert, untrustworthy, or biased.
Some marginalised social groups hold anti-science attitudes, in part because these groups have often been exploited in unethical experiments.
Messages from scientists may contradict deeply held existing beliefs or morals.
The delivery of a scientific message may not be appropriately targeted to a recipient's learning style.
Anti-science attitudes often seem to be caused by fear of rejection in social groups. For instance, climate change is perceived as a threat by only 22% of Americans on the right side of the political spectrum, but by 85% on the left. That is, if someone on the left does not consider climate change a threat, they may face contempt and be rejected in that social group. Indeed, people may sooner deny a scientifically accepted fact than lose or jeopardise their social status.
Politics
Attitudes towards science are often determined by political opinions and goals. Government, business and advocacy groups have been known to use legal and economic pressure to influence scientific researchers. Many factors can act as facets of the politicisation of science such as anti-intellectualism, perceived threats to religious beliefs, and fear for business interests. Politicisation of science is usually accomplished when scientific information is presented in a way that emphasises the uncertainty associated with the scientific evidence. Tactics such as shifting conversation, failing to acknowledge facts, and capitalising on doubt of scientific consensus have been used to gain more attention for views that have been undermined by scientific evidence. Examples of issues that have involved the politicisation of science include the global warming controversy, health effects of pesticides, and health effects of tobacco.
Slime mold

Slime mold or slime mould is an informal name given to a polyphyletic assemblage of unrelated eukaryotic organisms in the Stramenopiles, Rhizaria, Discoba, Amoebozoa and Holomycota clades. Most are microscopic; those in the Myxogastria form larger plasmodial slime molds visible to the naked eye.
The slime mold life cycle includes a free-living single-celled stage and the formation of spores. Spores are often produced in macroscopic multicellular or multinucleate fruiting bodies that may be formed through aggregation or fusion; aggregation is driven by chemical signals called acrasins. Slime molds contribute to the decomposition of dead vegetation; some are parasitic.
Most slime molds are terrestrial and free-living, typically in damp shady habitats such as in or on the surface of rotting wood. Some myxogastrians and protostelians are aquatic or semi-aquatic. The Phytomyxea are parasitic, living inside their plant hosts. Geographically, slime molds are cosmopolitan in distribution. A small number of species occur in regions as dry as the Atacama Desert and as cold as the Arctic; they are abundant in the tropics, especially in rainforests.
Slime molds have a variety of behaviors otherwise seen in animals with brains. Species such as Physarum polycephalum have been used to simulate traffic networks. Some species have traditionally been eaten in countries such as Ecuador.
Evolution
Taxonomic history
The first account of slime molds was a 1654 discussion of Lycogala epidendrum, whose author described it as "a fast-growing fungus".
German mycologist Heinrich Anton de Bary, in 1860 and 1887, classified the Myxomycetes (plasmodial slime molds) and Acrasieae (cellular slime molds) as Mycetozoa, a new class. He also introduced a "Doubtful Mycetozoa" section for Plasmodiophora (now in Phytomyxea) and Labyrinthula, emphasizing their distinction from plants and fungi. In 1880, the French botanist Philippe van Tieghem analyzed the two groups further.
In 1868, the German biologist Ernst Haeckel placed the Mycetozoa in a kingdom he named Protista. In 1885, the British zoologist Ray Lankester grouped the Mycetozoa alongside the Proteomyxa as part of the Gymnomyxa in the phylum Protozoa. Arthur and Gulielma Lister published monographs of the group in 1894, 1911, and 1925.
In 1932 and 1960, the American mycologist George Willard Martin argued that the slime molds evolved from fungi. In 1956, the American biologist Herbert Copeland placed the Mycetozoa (the myxomycetes and plasmodiophorids) and the Sarkodina (the labyrinthulids and the cellular slime molds) in a phylum called Protoplasta, which he placed alongside the fungi and the algae in a new kingdom, Protoctista.
In 1969, the taxonomist R. H. Whittaker observed that slime molds were highly conspicuous and distinct within the Fungi, the group in which they were then classified. He concurred with Lindsay S. Olive's proposal to reclassify the Gymnomycota, which includes slime molds, as part of the Protista. Whittaker placed three phyla, namely the Myxomycota, Acrasiomycota, and Labyrinthulomycota, in a subkingdom Gymnomycota within the Fungi. The same year, Martin and Alexopoulos published their influential textbook The Myxomycetes.
In 1975, Olive distinguished the dictyostelids and the acrasids as separate groups. In 1992, David J. Patterson and M. L. Sogin proposed that the dictyostelids diverged before plants, animals, and fungi.
Phylogeny
Slime molds have little or no fossil history, as might be expected given that they are small and soft-bodied. The grouping is polyphyletic, consisting of multiple clades widely scattered across the Eukaryotes.
Diversity
Various estimates of the number of species of slime molds agree that there are around 1000 species, most being Myxogastria. Collection of environmental DNA gives a higher estimate, from 1200 to 1500 species. These are diverse both taxonomically and in appearance, the largest and most familiar species being among the Myxogastria. The growth forms most commonly noticed are the sporangia, the spore-forming bodies, which are often roughly spherical; these may be directly on the surface, such as on rotting wood, or may be on a thin stalk which elevates the spores for release above the surface. Other species have the spores in a large mass, which may be visited by insects for food; they disperse spores when they leave.
Macroscopic, plasmodial slime molds: Myxogastria
The Myxogastria or plasmodial slime molds are the only slime molds at macroscopic scale; they gave the group its informal name, since for part of their life cycle they are slimy to the touch. A myxogastrian consists of a large cell with thousands of nuclei within a single membrane without walls, forming a syncytium. Most are smaller than a few centimeters, but some species may reach sizes of up to several square meters and, in the case of Brefeldia maxima, an exceptionally large mass.
Cellular slime molds: Dictyosteliida
The Dictyosteliida or cellular slime molds do not form huge coenocytes like the Myxogastria; their amoebae remain separate for most of their lives, living as individual unicellular protists that feed on microorganisms. When food is depleted and they are ready to form sporangia, they form swarms. The amoebae join up into a tiny multicellular slug which crawls to an open lit place and grows into a fruiting body, a sorocarp. Some of the amoebae become spores to begin the next generation, but others sacrifice themselves to become a dead stalk, lifting the spores up into the air.
Protosteliida
The Protosteliida, a polyphyletic group, have characters intermediate between the previous two groups, but they are much smaller, the fruiting bodies only forming one to a few spores.
Copromyxa
The lobosans, a paraphyletic group of amoebae, include the Copromyxa slime molds.
Non-amoebozoan slime molds
Among the non-amoebozoan slime molds are the Acrasids, which have sluglike amoebae. In locomotion, the amoebae's pseudopodia are eruptive, meaning that hemispherical bulges appear at the front. The Phytomyxea are obligate parasites, with hosts among the plants, diatoms, oomycetes, and brown algae. They cause plant diseases like cabbage club root and powdery scab. The Labyrinthulomycetes are marine slime nets, forming labyrinthine networks of tubes in which amoebae without pseudopods can travel. The Fonticulida are cellular slime molds that form a fruiting body in a "volcano" shape.
Distribution, habitats, and ecology
Slime molds, with their small size and moist surface, live mostly in damp habitats including shaded forests, rotting wood, fallen or living leaves, and on bryophytes. Most Myxogastria are terrestrial, though some, like Didymium aquatilis, are aquatic, and D. nigripes is semi-aquatic. Myxogastria are not limited to wet regions; 34 species are known from Saudi Arabia, living on bark, in plant litter, and in rotting wood, even in deserts. They occur, too, in Arizona's Sonoran Desert (46 species), and in Chile's exceptionally dry Atacama Desert (24 species). In contrast, the semi-dry Tehuacán-Cuicatlán Biosphere Reserve has 105 species, and Russia and Kazakhstan's Volga river basin has 158 species. In tropical rainforests of Latin America, species of Arcyria and Didymium are commonly epiphyllous, growing on the leaves of liverworts.
The dictyostelids are mostly terrestrial. On Changbai Mountain in China, six species of dictyostelids were found in forest soils across a range of elevations, with Dictyostelium mucoroides recorded at the highest elevations there.
The protostelids live mainly on dead plant matter, where they consume the spores of bacteria, yeasts, and fungi. They include some aquatic species, which live on dead plant parts submerged in ponds. Cellular slime molds are most numerous in the tropics, decreasing with latitude, but are cosmopolitan in distribution, occurring in soil even in the Arctic and the Antarctic. In the Alaskan tundra, the only slime molds are the dictyostelids D. mucoroides and D. sphaerocephalum.
The species of Copromyxa are coprophilous, feeding on dung.
Some myxogastrians have their spores dispersed by animals. The slime mold fly Epicypta testata lays its eggs within the spore mass of Enteridium lycoperdon, which the larvae feed on. The larvae pupate, and the emerging adults carry and disperse spores that have stuck to them. While various insects consume slime molds, Sphindidae slime mold beetles, both larvae and adults, feed exclusively on them.
Life cycle
Plasmodial slime molds
Plasmodial slime molds begin life as amoeba-like cells. These unicellular amoebae are commonly haploid and feed on small prey such as bacteria, yeast cells, and fungal spores by phagocytosis, engulfing them with their cell membranes. These amoebae can mate if they encounter the correct mating type and form zygotes that then grow into plasmodia. These contain many nuclei without cell membranes between them, and can grow to meters in size. The species Fuligo septica is often seen as a slimy yellow network in and on rotting logs. The amoebae and the plasmodia engulf microorganisms. The plasmodium grows into an interconnected network of protoplasmic strands. Within each protoplasmic strand, the cytoplasmic contents rapidly stream, periodically reversing direction. The streaming protoplasm within a plasmodial strand can reach speeds of up to 1.35 mm per second in Physarum polycephalum, the fastest recorded for any microorganism.
Slime molds are isogamous, which means that their gametes (reproductive cells) are all the same size, unlike the eggs and sperms of animals. Physarum polycephalum has three genes involved in reproduction: matA and matB, with thirteen variants each, and matC with three variants. Each reproductively mature slime mold is diploid, meaning that it contains two copies of each of the three reproductive genes. When P. polycephalum is ready to make its reproductive cells, it grows a bulbous extension of its body to contain them. Each cell has a random combination of the genes that the slime mold contains within its genome. Therefore, it can create cells of up to eight different gene types. Released cells then independently seek another compatible cell for fusion. Other individuals of P. polycephalum may contain different combinations of the matA, matB, and matC genes, allowing over 500 possible variations. It is advantageous for organisms with this type of reproductive cell to have many mating types because the likelihood of the cells finding a partner is greatly increased, and the risk of inbreeding is drastically reduced.
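As a quick illustration of the arithmetic behind these figures, the counts can be checked directly. This is only a sketch using the variant numbers quoted above; the variable names are invented for the example.

```python
from itertools import product

# Variant counts reported above for P. polycephalum's three mating-type genes.
MAT_A_VARIANTS = 13
MAT_B_VARIANTS = 13
MAT_C_VARIANTS = 3

# A diploid individual carries two alleles of each of the three genes, so its
# haploid reproductive cells can combine them in at most 2 * 2 * 2 ways.
gamete_types_per_individual = 2 ** 3
print(gamete_types_per_individual)   # 8, matching "up to eight different gene types"

# Across the species, a haploid cell may carry any one variant of each gene,
# giving 13 * 13 * 3 possible combinations.
total_mating_types = len(list(product(range(MAT_A_VARIANTS),
                                      range(MAT_B_VARIANTS),
                                      range(MAT_C_VARIANTS))))
print(total_mating_types)            # 507, i.e. "over 500 possible variations"
```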
Cellular slime molds
The cellular slime molds are a group of approximately 150 described species. They occur primarily in the humus layer of forest soils and feed on bacteria but also are found in animal dung and agricultural fields. They exist as single-celled organisms while food is plentiful. When food is in short supply, many of the single-celled amoebae congregate and start moving as a single body, called a 'slug'. The ability of the single-celled organisms to aggregate into multicellular forms is why they are also called the social amoebae. In this state they are sensitive to airborne chemicals and can detect food sources. They readily change the shape and function of parts, and may form stalks that produce fruiting bodies, releasing countless spores, light enough to be carried on the wind or on passing animals. The cellular slime mold Dictyostelium discoideum has many different mating types. When this organism enters its reproductive stage, it releases a chemical attractant. When it comes time for the cells to fuse, its mating types dictate which cells are compatible with each other. There are at least eleven mating types; macrocysts form after cell contact between compatible mating types.
Chemical signals
The chemicals that cause cellular slime molds to aggregate are small molecules called acrasins; motion towards a chemical signal is called chemotaxis. The first acrasin to be discovered, in Dictyostelium discoideum, was cyclic adenosine monophosphate (cyclic AMP), a common cell signaling molecule. During the aggregation phase of their life cycle, Dictyostelium discoideum amoebae communicate with each other using traveling waves of cyclic AMP, and the cyclic AMP signal is amplified as they aggregate. Pre-stalk cells move toward cyclic AMP, but pre-spore cells ignore the signal. Other acrasins exist; the acrasin for Polysphondylium violaceum, purified in 1983, is the dipeptide glorin. Calcium ions too serve to attract slime mold amoebae, at least at short distances. It has been suggested that acrasins may be taxon-specific, since specificity is required to form an aggregation of genetically similar cells. Many dictyostelid species indeed do not respond to cyclic AMP, but as of 2023 their acrasins remained unknown.
Study
Use in research and teaching
The practical study of slime molds was facilitated by the introduction of the "moist culture chamber" by H. C. Gilbert and G. W. Martin in 1933. Slime molds can be used to teach convergent evolution, as the habit of forming a stalk with a sporangium that can release spores into the air, off the ground, has evolved repeatedly, such as in myxogastria (eukaryotes) and in myxobacteria (prokaryotes). Further, both the (macroscopic) dictyostelids and the (microscopic) protostelids have a phase with motile amoebae and a phase with a stalk; in the protostelids, the stalk is tiny, supporting just one spore, but the logic of airborne spore dispersal is the same.
O. R. Collins showed that the slime mold Didymium iridis had two strains (+ and −) of cells, equivalent to gametes, that these could form immortal cell lines in culture, and that the system was controlled by alleles of a single gene. This made the species a model organism for exploring incompatibility, asexual reproduction, and mating types.
Biochemicals
Slime molds have been studied for their production of unusual organic compounds, including pigments, antibiotics, and anti-cancer drugs. Pigments include naphthoquinones, physarochrome A, and compounds of tetramic acid. Bisindolylmaleimides produced by Arcyria denudata include some phosphorescent compounds. The sporophores (fruiting bodies) of Arcyria denudata are colored red by arcyriaflavins A–C, which contain an unusual indolo[2,3-a]carbazole alkaloid ring. By 2022, more than 100 pigments had been isolated from slime molds, mostly from sporophores. It has been suggested that the many yellow-to-red pigments might be useful in cosmetics. Some 42% of patients with seasonal allergic rhinitis reacted to myxogastrian spores, so the spores may contribute significantly as airborne allergens.
Computation
Slime molds share some similarities with neural systems in animals. The membranes of both slime molds and neural cells contain receptor sites, which alter the electrical properties of the membrane when bound. Therefore, some studies on the early evolution of animal neural systems are inspired by slime molds. When a slime mold mass or mound is physically separated, the cells find their way back to re-unite. Studies on Physarum polycephalum have even shown the organism to have an ability to learn and predict periodic unfavorable conditions in laboratory experiments. John Tyler Bonner, a professor of ecology known for his studies of slime molds, argues that they are "no more than a bag of amoebae encased in a thin slime sheath, yet they manage to have various behaviors that are equal to those of animals who possess muscles and nerves with ganglia – that is, simple brains."
The slime mold algorithm is a meta-heuristic algorithm, based on the behavior of aggregated slime molds as they stream in search of food. It is described as a simple, efficient, and flexible way of solving optimization problems, such as finding the shortest path between nodes in a network. However, it can become trapped in a local optimum.
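As a rough illustration of the idea, the sketch below shows a deliberately simplified, hypothetical slime-mold-inspired search heuristic; it is not the published slime mold algorithm, and the function and parameter names are invented for the example.

```python
import random

def slime_search(objective, bounds, n_agents=30, n_steps=200, seed=0):
    """Toy slime-mold-style search: agents drift toward the best 'food'
    found so far, with a shrinking random wiggle standing in for the
    oscillating plasmodium. A sketch only, not the published algorithm."""
    rng = random.Random(seed)
    lo, hi = bounds
    agents = [rng.uniform(lo, hi) for _ in range(n_agents)]
    best = min(agents, key=objective)
    for step in range(n_steps):
        spread = (hi - lo) * (1 - step / n_steps)    # exploration shrinks over time
        for i, x in enumerate(agents):
            candidate = best + rng.uniform(-spread, spread) * 0.5
            candidate = min(max(candidate, lo), hi)  # keep inside the search box
            if objective(candidate) < objective(x):  # keep only improving moves
                agents[i] = candidate
        best = min(agents + [best], key=objective)
    return best

# Example: minimise a 1-D function with a known minimum near x = 2.
print(slime_search(lambda x: (x - 2.0) ** 2 + 1.0, bounds=(-10.0, 10.0)))
```

Like the algorithm described above, such a heuristic can stall in a local optimum if the random exploration shrinks too quickly.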
Toshiyuki Nakagaki and colleagues studied slime molds and their ability to solve mazes by placing nodes at two points separated by a maze of plastic film. The mold explored all possible paths and then converged on the shortest one.
Traffic system inspirations
Atsushi Tero and colleagues grew Physarum in a flat wet dish, placing the mold in a central position representing Tokyo, with oat flakes around it corresponding to the locations of other major cities in the Greater Tokyo Area. As Physarum avoids bright light, light was used to simulate mountains, water and other obstacles in the dish. The mold first densely filled the space with plasmodia, and then thinned the network to focus on efficiently connected branches. The network closely resembled Tokyo's rail system. P. polycephalum was used in experimental laboratory approximations of motorway networks of 14 geographical areas: Australia, Africa, Belgium, Brazil, Canada, China, Germany, Iberia, Italy, Malaysia, Mexico, the Netherlands, UK and US. The filamentary structure of P. polycephalum forming a network to food sources is similar to the large-scale galaxy filament structure of the universe. This observation has led astronomers to use simulations based on the behaviour of slime molds to inform their search for dark matter.
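The behavior exploited in these experiments is often described with a "current-reinforcement" rule: tubes carrying more flow thicken while underused tubes decay, so the surviving tubes trace efficient routes. The sketch below is a toy, hypothetical implementation of that idea on a small graph; the function name, the unit-flow boundary condition, and the retention threshold are all assumptions made for the example rather than the model used in the published studies.

```python
import numpy as np

def physarum_shortest_path(nodes, edges, source, sink, steps=200, dt=0.2):
    """Toy adaptive-tube model: edges with high flow are reinforced,
    little-used edges decay, and the surviving tubes trace a short path.
    `edges` is a list of (u, v, length) tuples; names are illustrative."""
    index = {name: k for k, name in enumerate(nodes)}
    n = len(nodes)
    D = np.ones(len(edges))                            # tube conductivities
    L = np.array([length for _, _, length in edges], dtype=float)

    for _ in range(steps):
        # Weighted graph Laplacian built from the current conductivities.
        A = np.zeros((n, n))
        for k, (u, v, _) in enumerate(edges):
            i, j = index[u], index[v]
            w = D[k] / L[k]
            A[i, i] += w
            A[j, j] += w
            A[i, j] -= w
            A[j, i] -= w
        # Unit flow enters at the source and leaves at the sink.
        b = np.zeros(n)
        b[index[source]] = 1.0
        b[index[sink]] = -1.0
        # Ground the sink (pressure 0) and solve for the other node pressures.
        keep = [k for k in range(n) if k != index[sink]]
        p = np.zeros(n)
        p[keep] = np.linalg.solve(A[np.ix_(keep, keep)], b[keep])
        # Flow in each tube, then reinforce used tubes and let others decay.
        for k, (u, v, _) in enumerate(edges):
            Q = D[k] / L[k] * (p[index[u]] - p[index[v]])
            D[k] += dt * (abs(Q) - D[k])

    # Tubes keeping a sizeable conductivity approximate the shortest route.
    return [(u, v) for k, (u, v, _) in enumerate(edges) if D[k] > 0.5]

# Tiny example: two routes from A to D; the shorter one (A-B-D, length 3) survives.
nodes = ["A", "B", "C", "D"]
edges = [("A", "B", 1.0), ("B", "D", 2.0), ("A", "C", 2.0), ("C", "D", 3.0)]
print(physarum_shortest_path(nodes, edges, "A", "D"))  # [('A', 'B'), ('B', 'D')]
```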
Use as food
In central Mexico, the false puffball Enteridium lycoperdon was traditionally used as food; it was one of the species which mushroom-collectors or hongueros gathered on trips into the forest in the rainy season. One of its local names is "cheese mushroom", so called for its texture and flavor when cooked. It was salted, wrapped in a maize leaf, and baked in the ashes of a campfire; or boiled and eaten with maize tortillas. Fuligo septica was similarly collected in Mexico, cooked with onions and peppers and eaten in a tortilla. In Ecuador, Lycogala epidendrum was called "yakich" and eaten raw as an appetizer.
In popular culture
Oscar Requejo and N. Floro Andres-Rodriguez suggest that Fuligo septica may have inspired Irvin Yeaworth's 1958 film The Blob, in which a giant amoeba from space sets about engulfing people in a small American town.
| Biology and health sciences | Other organisms | null |
26751 | https://en.wikipedia.org/wiki/Sun | Sun | The Sun is the star at the center of the Solar System. It is a massive, nearly perfect sphere of hot plasma, heated to incandescence by nuclear fusion reactions in its core, radiating the energy from its surface mainly as visible light and infrared radiation with 10% at ultraviolet energies. It is by far the most important source of energy for life on Earth. The Sun has been an object of veneration in many cultures. It has been a central subject for astronomical research since antiquity.
The Sun orbits the Galactic Center at a distance of 24,000 to 28,000 light-years. From Earth, it is () or about 8 light-minutes away. Its diameter is about (), 109 times that of Earth. Its mass is about 330,000 times that of Earth, making up about 99.86% of the total mass of the Solar System. Roughly three-quarters of the Sun's mass consists of hydrogen (~73%); the rest is mostly helium (~25%), with much smaller quantities of heavier elements, including oxygen, carbon, neon, and iron.
The Sun is a G-type main-sequence star (G2V), informally called a yellow dwarf, though its light is actually white. It formed approximately 4.6 billion years ago from the gravitational collapse of matter within a region of a large molecular cloud. Most of this matter gathered in the center, whereas the rest flattened into an orbiting disk that became the Solar System. The central mass became so hot and dense that it eventually initiated nuclear fusion in its core. Every second, the Sun's core fuses about 600 billion kilograms (kg) of hydrogen into helium and converts 4 billion kg of matter into energy.
About 4 to 7 billion years from now, when hydrogen fusion in the Sun's core diminishes to the point where the Sun is no longer in hydrostatic equilibrium, its core will undergo a marked increase in density and temperature which will cause its outer layers to expand, eventually transforming the Sun into a red giant. This process will make the Sun large enough to render Earth uninhabitable approximately five billion years from the present. After the red giant phase, models suggest the Sun will shed its outer layers and become a dense type of cooling star (a white dwarf), and no longer produce energy by fusion, but will still glow and give off heat from its previous fusion for perhaps trillions of years. After that, it is theorized to become a super dense black dwarf, giving off negligible energy.
Etymology
The English word sun developed from Old English . Cognates appear in other Germanic languages, including West Frisian , Dutch , Low German , Standard German , Bavarian , Old Norse , and Gothic . All these words stem from Proto-Germanic . This is ultimately related to the word for sun in other branches of the Indo-European language family, though in most cases a nominative stem with an l is found, rather than the genitive stem in n, as for example in Latin , ancient Greek (), Welsh and Czech , as well as (with *l > r) Sanskrit () and Persian (). Indeed, the l-stem survived in Proto-Germanic as well, as , which gave rise to Gothic (alongside ) and Old Norse prosaic (alongside poetic ), and through it the words for sun in the modern Scandinavian languages: Swedish and Danish , Icelandic , etc.
The principal adjectives for the Sun in English are sunny for sunlight and, in technical contexts, solar (), from Latin . From the Greek comes the rare adjective heliac (). In English, the Greek and Latin words occur in poetry as personifications of the Sun, Helios () and Sol (), while in science fiction Sol may be used to distinguish the Sun from other stars. The term sol with a lowercase s is used by planetary astronomers for the duration of a solar day on another planet such as Mars.
The astronomical symbol for the Sun is a circle with a center dot, . It is used for such units as M☉ (Solar mass), R☉ (Solar radius) and L☉ (Solar luminosity).
The scientific study of the Sun is called heliology.
General characteristics
The Sun is a G-type main-sequence star that makes up about 99.86% of the mass of the Solar System. It has an absolute magnitude of +4.83, estimated to be brighter than about 85% of the stars in the Milky Way, most of which are red dwarfs. It is more massive than 95% of the stars within .
The Sun is a Population I, or heavy-element-rich, star. Its formation approximately 4.6 billion years ago may have been triggered by shockwaves from one or more nearby supernovae. This is suggested by a high abundance of heavy elements in the Solar System, such as gold and uranium, relative to the abundances of these elements in so-called Population II, heavy-element-poor, stars. The heavy elements could most plausibly have been produced by endothermic nuclear reactions during a supernova, or by transmutation through neutron absorption within a massive second-generation star.
The Sun is by far the brightest object in the Earth's sky, with an apparent magnitude of −26.74. This is about 13 billion times brighter than the next brightest star, Sirius, which has an apparent magnitude of −1.46.
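As a consistency check on that factor, using only the two apparent magnitudes quoted above and the standard magnitude–flux relation:
\[
\frac{F_\odot}{F_{\mathrm{Sirius}}} = 10^{\,0.4\,(m_{\mathrm{Sirius}} - m_\odot)} = 10^{\,0.4\,(-1.46 - (-26.74))} = 10^{\,10.1} \approx 1.3 \times 10^{10},
\]
that is, roughly 13 billion.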
One astronomical unit is defined as the mean distance between the centers of the Sun and the Earth. The instantaneous distance varies by about as Earth moves from perihelion around 3 January to aphelion around 4 July. At its average distance, light travels from the Sun's horizon to Earth's horizon in about 8 minutes and 20 seconds, while light from the closest points of the Sun and Earth takes about two seconds less. The energy of this sunlight supports almost all life on Earth by photosynthesis, and drives Earth's climate and weather.
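The quoted light travel time follows directly from the mean distance; as a rough check, assuming the standard value of 1 AU ≈ 1.496 × 10¹¹ m:
\[
t = \frac{d}{c} \approx \frac{1.496 \times 10^{11}\ \mathrm{m}}{2.998 \times 10^{8}\ \mathrm{m\,s^{-1}}} \approx 499\ \mathrm{s} \approx 8\ \mathrm{min}\ 19\ \mathrm{s},
\]
in agreement with the roughly 8 minutes 20 seconds given above.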
The Sun does not have a definite boundary, but its density decreases exponentially with increasing height above the photosphere. For the purpose of measurement, the Sun's radius is considered to be the distance from its center to the edge of the photosphere, the apparent visible surface of the Sun.
The roundness of the Sun is the relative difference between its radius at the equator, R_eq, and at the poles, R_pol, called the oblateness:
\[
\Delta_\odot = \frac{R_{\mathrm{eq}} - R_{\mathrm{pol}}}{R_{\mathrm{eq}}}.
\]
The value is difficult to measure: atmospheric distortion means that measurements must be made from satellites, and because the value is so small, very precise techniques are needed.
The oblateness was once proposed to be sufficient to explain the perihelion precession of Mercury, but Einstein proposed that general relativity could explain the precession using a spherical Sun. When high-precision measurements of the oblateness became available via the Solar Dynamics Observatory and the Picard satellite, the measured value was even smaller than expected, 8.2 × 10⁻⁶, or about 8 parts per million.
This makes the Sun the natural object closest to a perfect sphere. The oblateness value remains constant independent of solar irradiation changes. The tidal effect of the planets is weak and does not significantly affect the shape of the Sun.
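To express the measured oblateness in absolute terms, assuming the conventional solar radius of about 6.96 × 10⁵ km (a standard value not stated in this passage):
\[
R_{\mathrm{eq}} - R_{\mathrm{pol}} \approx \Delta_\odot\,R_\odot \approx 8.2 \times 10^{-6} \times 6.96 \times 10^{5}\ \mathrm{km} \approx 6\ \mathrm{km},
\]
a difference of only a few kilometres across a body some 1.4 million kilometres in diameter.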
Rotation
The Sun rotates faster at its equator than at its poles. This differential rotation is caused by convective motion due to heat transport and the Coriolis force due to the Sun's rotation. In a frame of reference defined by the stars, the rotational period is approximately 25.6 days at the equator and 33.5 days at the poles. Viewed from Earth as it orbits the Sun, the apparent rotational period of the Sun at its equator is about 28 days. Viewed from a vantage point above its north pole, the Sun rotates counterclockwise around its axis of spin.
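The sidereal and Earth-viewed (synodic) equatorial periods quoted above are related in the usual way, taking Earth's orbital period as roughly 365.25 days:
\[
\frac{1}{P_{\mathrm{syn}}} = \frac{1}{P_{\mathrm{sid}}} - \frac{1}{P_{\mathrm{orb}}} \approx \frac{1}{25.6\ \mathrm{d}} - \frac{1}{365.25\ \mathrm{d}} \;\;\Rightarrow\;\; P_{\mathrm{syn}} \approx 27.5\ \mathrm{d},
\]
consistent with the roughly 28 days seen from Earth.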
A survey of solar analogs suggests the early Sun was rotating up to ten times faster than it does today. This would have made the surface much more active, with greater X-ray and UV emission. Sunspots would have covered 5–30% of the surface. The rotation rate was gradually slowed by magnetic braking, as the Sun's magnetic field interacted with the outflowing solar wind. A vestige of this rapid primordial rotation still survives at the Sun's core, which has been found to be rotating at a rate of about once per week, four times the mean surface rotation rate.
Composition
The Sun consists mainly of the elements hydrogen and helium. At this time in the Sun's life, they account for 74.9% and 23.8%, respectively, of the mass of the Sun in the photosphere. All heavier elements, called metals in astronomy, account for less than 2% of the mass, with oxygen (roughly 1% of the Sun's mass), carbon (0.3%), neon (0.2%), and iron (0.2%) being the most abundant.
The Sun's original chemical composition was inherited from the interstellar medium out of which it formed. Originally it would have been about 71.1% hydrogen, 27.4% helium, and 1.5% heavier elements. The hydrogen and most of the helium in the Sun would have been produced by Big Bang nucleosynthesis in the first 20 minutes of the universe, and the heavier elements were produced by previous generations of stars before the Sun was formed, and spread into the interstellar medium during the final stages of stellar life and by events such as supernovae.
Since the Sun formed, the main fusion process has involved fusing hydrogen into helium. Over the past 4.6 billion years, the amount of helium and its location within the Sun has gradually changed. The proportion of helium within the core has increased from about 24% to about 60% due to fusion, and some of the helium and heavy elements have settled from the photosphere toward the center of the Sun because of gravity. The proportions of heavier elements are unchanged. Heat is transferred outward from the Sun's core by radiation rather than by convection (see Radiative zone below), so the fusion products are not lifted outward by heat; they remain in the core, and gradually an inner core of helium has begun to form that cannot be fused because presently the Sun's core is not hot or dense enough to fuse helium. In the current photosphere, the helium fraction is reduced, and the metallicity is only 84% of what it was in the protostellar phase (before nuclear fusion in the core started). In the future, helium will continue to accumulate in the core, and in about 5 billion years this gradual build-up will eventually cause the Sun to exit the main sequence and become a red giant.
The chemical composition of the photosphere is normally considered representative of the composition of the primordial Solar System. Typically, the solar heavy-element abundances described above are measured both by using spectroscopy of the Sun's photosphere and by measuring abundances in meteorites that have never been heated to melting temperatures. These meteorites are thought to retain the composition of the protostellar Sun and are thus not affected by the settling of heavy elements. The two methods generally agree well.
Structure and fusion
Core
The core of the Sun extends from the center to about 20–25% of the solar radius. It has a density of up to (about 150 times the density of water) and a temperature of close to 15.7 million kelvin (K). By contrast, the Sun's surface temperature is about . Recent analysis of SOHO mission data favors the idea that the core is rotating faster than the radiative zone outside it. Through most of the Sun's life, energy has been produced by nuclear fusion in the core region through the proton–proton chain; this process converts hydrogen into helium. Currently, 0.8% of the energy generated in the Sun comes from another sequence of fusion reactions called the CNO cycle; the proportion coming from the CNO cycle is expected to increase as the Sun becomes older and more luminous.
The core is the only region of the Sun that produces an appreciable amount of thermal energy through fusion; 99% of the Sun's power is generated in the innermost 24% of its radius, and almost no fusion occurs beyond 30% of the radius. The rest of the Sun is heated by this energy as it is transferred outward through many successive layers, finally to the solar photosphere where it escapes into space through radiation (photons) or advection (massive particles).
The proton–proton chain occurs around 9.2 × 10³⁷ times each second in the core, converting about 3.7 × 10³⁸ protons into alpha particles (helium nuclei) every second (out of a total of ~8.9 × 10⁵⁶ free protons in the Sun), or about 620 billion kilograms of protons per second. However, each proton (on average) takes around 9 billion years to fuse with another using the PP chain. Fusing four free protons (hydrogen nuclei) into a single alpha particle (helium nucleus) releases around 0.7% of the fused mass as energy, so the Sun releases energy at the mass–energy conversion rate of 4.26 billion kg/s (which requires 600 billion kg of hydrogen), for 384.6 yottawatts (3.846 × 10²⁶ W), or 9.192 × 10¹⁰ megatons of TNT per second. The large power output of the Sun is mainly due to the huge size and density of its core (compared to Earth and objects on Earth), with only a fairly small amount of power being generated per cubic metre. Theoretical models of the Sun's interior indicate a maximum power density, or energy production, of approximately 276.5 watts per cubic metre at the center of the core, which, according to Karl Kruszelnicki, is about the same power density as inside a compost pile.
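The quoted power output and mass-conversion rate are two expressions of the same thing via mass–energy equivalence; as a check:
\[
\dot m = \frac{P}{c^{2}} = \frac{3.846 \times 10^{26}\ \mathrm{W}}{(2.998 \times 10^{8}\ \mathrm{m\,s^{-1}})^{2}} \approx 4.3 \times 10^{9}\ \mathrm{kg\,s^{-1}},
\]
about 4.3 billion kilograms of matter converted to energy each second, matching the conversion rate given above.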
The fusion rate in the core is in a self-correcting equilibrium: a slightly higher rate of fusion would cause the core to heat up more and expand slightly against the weight of the outer layers, reducing the density and hence the fusion rate and correcting the perturbation; and a slightly lower rate would cause the core to cool and shrink slightly, increasing the density and increasing the fusion rate and again reverting it to its present rate.
Radiative zone
The radiative zone is the thickest layer of the Sun, at 0.45 solar radii. From the core out to about 0.7 solar radii, thermal radiation is the primary means of energy transfer. The temperature drops from approximately 7 million to 2 million kelvins with increasing distance from the core. This temperature gradient is less than the value of the adiabatic lapse rate and hence cannot drive convection, which explains why the transfer of energy through this zone is by radiation instead of thermal convection. Ions of hydrogen and helium emit photons, which travel only a brief distance before being reabsorbed by other ions. The density drops a hundredfold (from 20,000 kg/m³ to 200 kg/m³) between 0.25 solar radii and 0.7 radii, the top of the radiative zone.
Tachocline
The radiative zone and the convective zone are separated by a transition layer, the tachocline. This is a region where the sharp regime change between the uniform rotation of the radiative zone and the differential rotation of the convection zone results in a large shear between the two—a condition where successive horizontal layers slide past one another. Presently, it is hypothesized that a magnetic dynamo, or solar dynamo, within this layer generates the Sun's magnetic field.
Convective zone
The Sun's convection zone extends from 0.7 solar radii (500,000 km) to near the surface. In this layer, the solar plasma is not dense or hot enough to transfer the heat energy of the interior outward via radiation. Instead, the density of the plasma is low enough to allow convective currents to develop and move the Sun's energy outward towards its surface. Material heated at the tachocline picks up heat and expands, thereby reducing its density and allowing it to rise. As a result, an orderly motion of the mass develops into thermal cells that carry most of the heat outward to the Sun's photosphere above. Once the material diffusively and radiatively cools just beneath the photospheric surface, its density increases, and it sinks to the base of the convection zone, where it again picks up heat from the top of the radiative zone and the convective cycle continues. At the photosphere, the temperature has dropped 350-fold to and the density to only 0.2 g/m³ (about 1/10,000 the density of air at sea level, and 1 millionth that of the inner layer of the convective zone).
The thermal columns of the convection zone form an imprint on the surface of the Sun giving it a granular appearance called the solar granulation at the smallest scale and supergranulation at larger scales. Turbulent convection in this outer part of the solar interior sustains "small-scale" dynamo action over the near-surface volume of the Sun. The Sun's thermal columns are Bénard cells and take the shape of roughly hexagonal prisms.
Photosphere
The visible surface of the Sun, the photosphere, is the layer below which the Sun becomes opaque to visible light. Photons produced in this layer escape the Sun through the transparent solar atmosphere above it and become solar radiation, sunlight. The change in opacity is due to the decreasing amount of H− ions, which absorb visible light easily. Conversely, the visible light perceived is produced as electrons react with hydrogen atoms to produce H− ions.
The photosphere is tens to hundreds of kilometers thick, and is slightly less opaque than air on Earth. Because the upper part of the photosphere is cooler than the lower part, an image of the Sun appears brighter in the center than on the edge or limb of the solar disk, in a phenomenon known as limb darkening. The spectrum of sunlight has approximately the spectrum of a black-body radiating at , interspersed with atomic absorption lines from the tenuous layers above the photosphere. The photosphere has a particle density of ~10²³ m⁻³ (about 0.37% of the particle number per volume of Earth's atmosphere at sea level). The photosphere is not fully ionized—the extent of ionization is about 3%, leaving almost all of the hydrogen in atomic form.
Atmosphere
The Sun's atmosphere is composed of five layers: the photosphere, the chromosphere, the transition region, the corona, and the heliosphere.
The coolest layer of the Sun is a temperature minimum region extending to about above the photosphere, and has a temperature of about . This part of the Sun is cool enough to allow for the existence of simple molecules such as carbon monoxide and water. The chromosphere, transition region, and corona are much hotter than the surface of the Sun. The reason is not well understood, but evidence suggests that Alfvén waves may have enough energy to heat the corona.
Above the temperature minimum layer is a layer about thick, dominated by a spectrum of emission and absorption lines. It is called the chromosphere from the Greek root chroma, meaning color, because the chromosphere is visible as a colored flash at the beginning and end of total solar eclipses. The temperature of the chromosphere increases gradually with altitude, ranging up to around near the top. In the upper part of the chromosphere helium becomes partially ionized.
Above the chromosphere, in a thin (about ) transition region, the temperature rises rapidly from around in the upper chromosphere to coronal temperatures closer to . The temperature increase is facilitated by the full ionization of helium in the transition region, which significantly reduces radiative cooling of the plasma. The transition region does not occur at a well-defined altitude, but forms a kind of nimbus around chromospheric features such as spicules and filaments, and is in constant, chaotic motion. The transition region is not easily visible from Earth's surface, but is readily observable from space by instruments sensitive to extreme ultraviolet.
The corona is the next layer of the Sun. The low corona, near the surface of the Sun, has a particle density around 10¹⁵ m⁻³ to 10¹⁶ m⁻³. The average temperature of the corona and solar wind is about 1,000,000–2,000,000 K; however, in the hottest regions it is 8,000,000–20,000,000 K. Although no complete theory yet exists to account for the temperature of the corona, at least some of its heat is known to be from magnetic reconnection.
The corona is the extended atmosphere of the Sun, which has a volume much larger than the volume enclosed by the Sun's photosphere. A flow of plasma outward from the Sun into interplanetary space is the solar wind.
The heliosphere, the tenuous outermost atmosphere of the Sun, is filled with solar wind plasma and is defined to begin at the distance where the flow of the solar wind becomes superalfvénic—that is, where the flow becomes faster than the speed of Alfvén waves, at approximately 20 solar radii (). Turbulence and dynamic forces in the heliosphere cannot affect the shape of the solar corona within, because the information can only travel at the speed of Alfvén waves. The solar wind travels outward continuously through the heliosphere, forming the solar magnetic field into a spiral shape, until it impacts the heliopause more than from the Sun. In December 2004, the Voyager 1 probe passed through a shock front that is thought to be part of the heliopause. In late 2012, Voyager 1 recorded a marked increase in cosmic ray collisions and a sharp drop in lower energy particles from the solar wind, which suggested that the probe had passed through the heliopause and entered the interstellar medium, and indeed did so on August 25, 2012, at approximately 122 astronomical units (18 Tm) from the Sun. The heliosphere has a heliotail which stretches out behind it due to the Sun's peculiar motion through the galaxy.
On April 28, 2021, NASA's Parker Solar Probe encountered the specific magnetic and particle conditions at 18.8 solar radii that indicated that it penetrated the Alfvén surface, the boundary separating the corona from the solar wind, defined as where the coronal plasma's Alfvén speed and the large-scale solar wind speed are equal. During the flyby, Parker Solar Probe passed into and out of the corona several times. This proved the predictions that the Alfvén critical surface is not shaped like a smooth ball, but has spikes and valleys that wrinkle its surface.
Solar radiation
The Sun emits light across the visible spectrum, so its color is white, with a CIE color-space index near (0.3, 0.3), when viewed from space or when the Sun is high in the sky. The solar radiance per wavelength peaks in the green portion of the spectrum when viewed from space. When the Sun is very low in the sky, atmospheric scattering renders the Sun yellow, red, orange, or magenta, and on rare occasions even green or blue. Some cultures mentally picture the Sun as yellow and some even red; the cultural reasons for this are debated. The Sun is classed as a G2 star, meaning it is a G-type star, with 2 indicating its surface temperature is in the second range of the G class.
The solar constant is the amount of power that the Sun deposits per unit area that is directly exposed to sunlight. The solar constant is equal to approximately (watts per square meter) at a distance of one astronomical unit (AU) from the Sun (that is, at or near Earth's orbit). Sunlight on the surface of Earth is attenuated by Earth's atmosphere, so that less power arrives at the surface (closer to ) in clear conditions when the Sun is near the zenith. Sunlight at the top of Earth's atmosphere is composed (by total energy) of about 50% infrared light, 40% visible light, and 10% ultraviolet light. The atmosphere filters out over 70% of solar ultraviolet, especially at the shorter wavelengths. Solar ultraviolet radiation ionizes Earth's dayside upper atmosphere, creating the electrically conducting ionosphere.
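The solar constant is simply the Sun's luminosity spread over a sphere of radius 1 AU; using the luminosity of about 3.846 × 10²⁶ W quoted earlier in the article and 1 AU ≈ 1.496 × 10¹¹ m:
\[
S = \frac{L_\odot}{4\pi d^{2}} = \frac{3.846 \times 10^{26}\ \mathrm{W}}{4\pi\,(1.496 \times 10^{11}\ \mathrm{m})^{2}} \approx 1.37 \times 10^{3}\ \mathrm{W\,m^{-2}},
\]
roughly 1.4 kW per square metre at the top of Earth's atmosphere.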
Ultraviolet light from the Sun has antiseptic properties and can be used to sanitize tools and water. This radiation causes sunburn, and has other biological effects such as the production of vitamin D and sun tanning. It is the main cause of skin cancer. Ultraviolet light is strongly attenuated by Earth's ozone layer, so that the amount of UV varies greatly with latitude and has been partially responsible for many biological adaptations, including variations in human skin color.
High-energy gamma ray photons initially released with fusion reactions in the core are almost immediately absorbed by the solar plasma of the radiative zone, usually after traveling only a few millimeters. Re-emission happens in a random direction and usually at slightly lower energy. With this sequence of emissions and absorptions, it takes a long time for radiation to reach the Sun's surface. Estimates of the photon travel time range between 10,000 and 170,000 years. In contrast, it takes only 2.3 seconds for neutrinos, which account for about 2% of the total energy production of the Sun, to reach the surface. Because energy transport in the Sun is a process that involves photons in thermodynamic equilibrium with matter, the time scale of energy transport in the Sun is longer, on the order of 30,000,000 years. This is the time it would take the Sun to return to a stable state if the rate of energy generation in its core were suddenly changed.
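The long escape time can be understood as a photon random walk. A crude order-of-magnitude sketch, assuming a mean free path of about 1 mm (consistent with the "few millimeters" above) and the standard solar radius of about 7 × 10⁸ m:
\[
t \sim \frac{R_\odot^{2}}{\ell\,c} \approx \frac{(7 \times 10^{8}\ \mathrm{m})^{2}}{(10^{-3}\ \mathrm{m})\,(3 \times 10^{8}\ \mathrm{m\,s^{-1}})} \approx 1.6 \times 10^{12}\ \mathrm{s} \approx 5 \times 10^{4}\ \mathrm{yr},
\]
within the quoted range, whereas a non-interacting particle crosses the same distance in only \(R_\odot / c \approx 2.3\) s, the neutrino figure given above.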
Electron neutrinos are released by fusion reactions in the core, but, unlike photons, they rarely interact with matter, so almost all are able to escape the Sun immediately. However, measurements of the number of these neutrinos produced in the Sun are lower than theories predict by a factor of 3. In 2001, the discovery of neutrino oscillation resolved the discrepancy: the Sun emits the number of electron neutrinos predicted by the theory, but neutrino detectors were missing two-thirds of them because the neutrinos had changed flavor by the time they were detected.
Magnetic activity
The Sun has a stellar magnetic field that varies across its surface. Its polar field is , whereas the field is typically in features on the Sun called sunspots and in solar prominences. The magnetic field varies in time and location. The quasi-periodic 11-year solar cycle is the most prominent variation in which the number and size of sunspots waxes and wanes.
The solar magnetic field extends well beyond the Sun itself. The electrically conducting solar wind plasma carries the Sun's magnetic field into space, forming what is called the interplanetary magnetic field. In an approximation known as ideal magnetohydrodynamics, plasma particles only move along magnetic field lines. As a result, the outward-flowing solar wind stretches the interplanetary magnetic field outward, forcing it into a roughly radial structure. For a simple dipolar solar magnetic field, with opposite hemispherical polarities on either side of the solar magnetic equator, a thin current sheet is formed in the solar wind. At great distances, the rotation of the Sun twists the dipolar magnetic field and corresponding current sheet into an Archimedean spiral structure called the Parker spiral.
Sunspot
Sunspots are visible as dark patches on the Sun's photosphere and correspond to concentrations of magnetic field where convective transport of heat is inhibited from the solar interior to the surface. As a result, sunspots are slightly cooler than the surrounding photosphere, so they appear dark. At a typical solar minimum, few sunspots are visible, and occasionally none can be seen at all. Those that do appear are at high solar latitudes. As the solar cycle progresses toward its maximum, sunspots tend to form closer to the solar equator, a phenomenon known as Spörer's law. The largest sunspots can be tens of thousands of kilometers across.
An 11-year sunspot cycle is half of a 22-year Babcock–Leighton dynamo cycle, which corresponds to an oscillatory exchange of energy between toroidal and poloidal solar magnetic fields. At solar-cycle maximum, the external poloidal dipolar magnetic field is near its dynamo-cycle minimum strength; but an internal toroidal quadrupolar field, generated through differential rotation within the tachocline, is near its maximum strength. At this point in the dynamo cycle, buoyant upwelling within the convective zone forces emergence of the toroidal magnetic field through the photosphere, giving rise to pairs of sunspots, roughly aligned east–west and having footprints with opposite magnetic polarities. The magnetic polarity of sunspot pairs alternates every solar cycle, a phenomenon described by Hale's law.
During the solar cycle's declining phase, energy shifts from the internal toroidal magnetic field to the external poloidal field, and sunspots diminish in number and size. At solar-cycle minimum, the toroidal field is, correspondingly, at minimum strength, sunspots are relatively rare, and the poloidal field is at its maximum strength. With the rise of the next 11-year sunspot cycle, differential rotation shifts magnetic energy back from the poloidal to the toroidal field, but with a polarity that is opposite to the previous cycle. The process carries on continuously, and in an idealized, simplified scenario, each 11-year sunspot cycle corresponds to a change, then, in the overall polarity of the Sun's large-scale magnetic field.
Solar activity
The Sun's magnetic field leads to many effects that are collectively called solar activity. Solar flares and coronal mass ejections tend to occur at sunspot groups. Slowly changing high-speed streams of solar wind are emitted from coronal holes at the photospheric surface. Both coronal mass ejections and high-speed streams of solar wind carry plasma and the interplanetary magnetic field outward into the Solar System. The effects of solar activity on Earth include auroras at moderate to high latitudes and the disruption of radio communications and electric power. Solar activity is thought to have played a large role in the formation and evolution of the Solar System.
Change in solar irradiance over the 11 year solar cycle has been correlated with change in sunspot number. The solar cycle influences space weather conditions, including those surrounding Earth. For example, in the 17th century, the solar cycle appeared to have stopped entirely for several decades; few sunspots were observed during a period known as the Maunder minimum. This coincided in time with the era of the Little Ice Age, when Europe experienced unusually cold temperatures. Earlier extended minima have been discovered through analysis of tree rings and appear to have coincided with lower-than-average global temperatures.
Coronal heating
The temperature of the photosphere is approximately 6,000 K, whereas the temperature of the corona reaches . The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere.
It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational or magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient matter in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events—nanoflares.
Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms.
Life phases
The Sun today is roughly halfway through the main-sequence portion of its life. It has not changed dramatically in over four billion years and will remain fairly stable for about five billion more. However, after hydrogen fusion in its core has stopped, the Sun will undergo dramatic changes, both internally and externally.
Formation
The Sun formed about 4.6 billion years ago from the collapse of part of a giant molecular cloud that consisted mostly of hydrogen and helium and that probably gave birth to many other stars. This age is estimated using computer models of stellar evolution and through nucleocosmochronology. The result is consistent with the radiometric date of the oldest Solar System material, at 4.567 billion years ago. Studies of ancient meteorites reveal traces of stable daughter nuclei of short-lived isotopes, such as iron-60, that form only in exploding, short-lived stars. This indicates that one or more supernovae must have occurred near the location where the Sun formed. A shock wave from a nearby supernova would have triggered the formation of the Sun by compressing the matter within the molecular cloud and causing certain regions to collapse under their own gravity. As one fragment of the cloud collapsed it also began to rotate due to conservation of angular momentum and heat up with the increasing pressure. Much of the mass became concentrated in the center, whereas the rest flattened out into a disk that would become the planets and other Solar System bodies. Gravity and pressure within the core of the cloud generated a lot of heat as it accumulated more matter from the surrounding disk, eventually triggering nuclear fusion.
The stars HD 162826 and HD 186302 share similarities with the Sun and are thus hypothesized to be its stellar siblings, formed in the same molecular cloud.
Main sequence
The Sun is about halfway through its main-sequence stage, during which nuclear fusion reactions in its core fuse hydrogen into helium. Each second, more than four billion kilograms of matter are converted into energy within the Sun's core, producing neutrinos and solar radiation. At this rate, the Sun has so far converted around 100 times the mass of Earth into energy, about 0.03% of the total mass of the Sun. The Sun will spend a total of approximately 10 to 11 billion years as a main-sequence star before its red giant phase. According to 2022 results from ESA's Gaia space observatory, the Sun will be at its hottest point at around the 8-billion-year mark.
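The "100 times the mass of Earth" figure follows from the conversion rate sustained over the Sun's age; a rough check, assuming standard values for the Earth's mass (~6 × 10²⁴ kg) and the Sun's mass (~2 × 10³⁰ kg):
\[
\Delta m \approx 4.3 \times 10^{9}\ \mathrm{kg\,s^{-1}} \times 4.6 \times 10^{9}\ \mathrm{yr} \times 3.15 \times 10^{7}\ \mathrm{s\,yr^{-1}} \approx 6 \times 10^{26}\ \mathrm{kg} \approx 100\,M_\oplus \approx 0.03\%\ \text{of}\ M_\odot.
\]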
The Sun is gradually becoming hotter in its core, hotter at the surface, larger in radius, and more luminous during its time on the main sequence: since the beginning of its main-sequence life, it has expanded in radius by 15% and the surface has increased in temperature from to , resulting in a 48% increase in luminosity from 0.677 solar luminosities to its present-day 1.0 solar luminosity. This occurs because the helium atoms in the core have a higher mean molecular weight than the hydrogen atoms that were fused, resulting in less thermal pressure. The core is therefore shrinking, allowing the outer layers of the Sun to move closer to the center, releasing gravitational potential energy. According to the virial theorem, half of this released gravitational energy goes into heating, which leads to a gradual increase in the rate at which fusion occurs and thus an increase in the luminosity. This process speeds up as the core gradually becomes denser. At present, it is increasing in brightness by about 1% every 100 million years. At this rate of increase, it will take at least 1 billion years from now for the Earth's liquid water to be depleted. After that, the Earth will cease to be able to support complex, multicellular life, and the last remaining multicellular organisms on the planet will suffer a final, complete mass extinction.
After core hydrogen exhaustion
The Sun does not have enough mass to explode as a supernova. Instead, when it runs out of hydrogen in the core in approximately 5 billion years, core hydrogen fusion will stop, and there will be nothing to prevent the core from contracting. The release of gravitational potential energy will cause the luminosity of the Sun to increase, ending the main sequence phase and leading the Sun to expand over the next billion years: first into a subgiant, and then into a red giant. The heating due to gravitational contraction will also lead to expansion of the Sun and hydrogen fusion in a shell just outside the core, where unfused hydrogen remains, contributing to the increased luminosity, which will eventually reach more than 1,000 times its present luminosity. When the Sun enters its red-giant branch (RGB) phase, it will engulf (and very likely destroy) Mercury and Venus. According to a 2008 article, Earth's orbit will have initially expanded to at most due to the Sun's loss of mass. However, Earth's orbit will then start shrinking due to tidal forces (and, eventually, drag from the lower chromosphere) so that it is engulfed by the Sun during the tip of the red-giant branch phase 7.59 billion years from now, 3.8 and 1 million years after Mercury and Venus have respectively suffered the same fate.
By the time the Sun reaches the tip of the red-giant branch, it will be about 256 times larger than it is today, with a radius of . The Sun will spend around a billion years in the RGB and lose around a third of its mass.
After the red-giant branch, the Sun has approximately 120 million years of active life left, but much happens. First, the core (full of degenerate helium) ignites violently in the helium flash; it is estimated that 6% of the core—itself 40% of the Sun's mass—will be converted into carbon within a matter of minutes through the triple-alpha process. The Sun then shrinks to around 10 times its current size and 50 times the luminosity, with a temperature a little lower than today. It will then have reached the red clump or horizontal branch, but a star of the Sun's metallicity does not evolve blueward along the horizontal branch. Instead, it just becomes moderately larger and more luminous over about 100 million years as it continues to react helium in the core.
When the helium is exhausted, the Sun will repeat the expansion it followed when the hydrogen in the core was exhausted. This time, however, it all happens faster, and the Sun becomes larger and more luminous. This is the asymptotic-giant-branch phase, and the Sun is alternately reacting hydrogen in a shell or helium in a deeper shell. After about 20 million years on the early asymptotic giant branch, the Sun becomes increasingly unstable, with rapid mass loss and thermal pulses that increase the size and luminosity for a few hundred years every 100,000 years or so. The thermal pulses become larger each time, with the later pulses pushing the luminosity to as much as 5,000 times the current level. Despite this, the Sun's maximum AGB radius will not be as large as its tip-RGB maximum: 179 , or about .
Models vary depending on the rate and timing of mass loss. Models that have higher mass loss on the red-giant branch produce smaller, less luminous stars at the tip of the asymptotic giant branch, perhaps only 2,000 times the luminosity and less than 200 times the radius. For the Sun, four thermal pulses are predicted before it completely loses its outer envelope and starts to make a planetary nebula.
The post-asymptotic-giant-branch evolution is even faster. The luminosity stays approximately constant as the temperature increases, with the ejected half of the Sun's mass becoming ionized into a planetary nebula as the exposed core reaches , as if it is in a sort of blue loop. The final naked core, a white dwarf, will have a temperature of over and contain an estimated 54.05% of the Sun's present-day mass. Simulations indicate that the Sun may be among the least massive stars capable of forming a planetary nebula. The planetary nebula will disperse in about 10,000 years, but the white dwarf will survive for trillions of years before fading to a hypothetical super-dense black dwarf. As such, it would give off no more energy.
Location
Solar System
The Sun has eight known planets orbiting it. This includes four terrestrial planets (Mercury, Venus, Earth, and Mars), two gas giants (Jupiter and Saturn), and two ice giants (Uranus and Neptune). The Solar System also has nine bodies generally considered as dwarf planets and some more candidates, an asteroid belt, numerous comets, and a large number of icy bodies which lie beyond the orbit of Neptune. Six of the planets and many smaller bodies also have their own natural satellites: in particular, the satellite systems of Jupiter, Saturn, and Uranus are in some ways like miniature versions of the Sun's system.
The Sun is moved by the gravitational pull of the planets. The center of the Sun moves around the Solar System barycenter, within a range from 0.1 to 2.2 solar radii. The Sun's motion around the barycenter approximately repeats every 179 years, rotated by about 30° due primarily to the synodic period of Jupiter and Saturn.
The Sun's gravitational field is estimated to dominate the gravitational forces of surrounding stars out to about two light-years (). Lower estimates for the radius of the Oort cloud, by contrast, do not place it farther than . Most of the mass is orbiting in the region between 3,000 and . The furthest known objects, such as Comet West, have aphelia around from the Sun. The Sun's Hill sphere with respect to the galactic nucleus, the effective range of its gravitational influence, was calculated by G. A. Chebotarev to be 230,000 AU.
Celestial neighborhood
Motion
The Sun, taking along the whole Solar System, orbits the galaxy's center of mass at an average speed of 230 km/s (828,000 km/h) or 143 mi/s (514,000 mph), taking about 220–250 million Earth years to complete a revolution (a Galactic year), having done so about 20 times since the Sun's formation. The direction of the Sun's motion, the Solar apex, is roughly in the direction of the star Vega.
The Milky Way is moving with respect to the cosmic microwave background radiation (CMB) in the direction of the constellation Hydra with a speed of 550 km/s. Since the Sun is moving with respect to the galactic center in the direction of Cygnus (galactic longitude 90°, latitude 0°) at more than 200 km/s, the resultant velocity with respect to the CMB is about 370 km/s in the direction of Crater or Leo (galactic longitude 264°, latitude 48°).
Observational history
Early understanding
In many prehistoric and ancient cultures, the Sun was thought to be a solar deity or other supernatural entity. In the early first millennium BC, Babylonian astronomers observed that the Sun's motion along the ecliptic is not uniform, though they did not know why; it is today known that this is due to the movement of Earth in an elliptic orbit, moving faster when it is nearer to the Sun at perihelion and moving slower when it is farther away at aphelion.
One of the first people to offer a scientific or philosophical explanation for the Sun was the Greek philosopher Anaxagoras. He reasoned that it was a giant flaming ball of metal even larger than the land of the Peloponnesus and that the Moon reflected the light of the Sun. Eratosthenes estimated the distance between Earth and the Sun in the third century BC as "of stadia myriads 400 and 80000", the translation of which is ambiguous, implying either 4,080,000 stadia (755,000 km) or 804,000,000 stadia (148 to 153 million kilometers or 0.99 to 1.02 AU); the latter value is correct to within a few percent. In the first century AD, Ptolemy estimated the distance as 1,210 times the radius of Earth, approximately .
The theory that the Sun is the center around which the planets orbit was first proposed by the ancient Greek Aristarchus of Samos in the third century BC, and later adopted by Seleucus of Seleucia (see Heliocentrism). This view was developed in a more detailed mathematical model of a heliocentric system in the 16th century by Nicolaus Copernicus.
Development of scientific understanding
Observations of sunspots were recorded during the Han dynasty (206 BC–AD 220) by Chinese astronomers, who maintained records of these observations for centuries. Averroes also provided a description of sunspots in the 12th century. The invention of the telescope in the early 17th century permitted detailed observations of sunspots by Thomas Harriot, Galileo Galilei and other astronomers. Galileo posited that sunspots were on the surface of the Sun rather than small objects passing between Earth and the Sun.
Arabic astronomical contributions include Al-Battani's discovery that the direction of the Sun's apogee (the place in the Sun's orbit against the fixed stars where it seems to be moving slowest) is changing. In modern heliocentric terms, this is caused by a gradual motion of the aphelion of the Earth's orbit. Ibn Yunus observed more than 10,000 entries for the Sun's position for many years using a large astrolabe.
From an observation of a transit of Venus in 1032, the Persian astronomer and polymath Ibn Sina concluded that Venus was closer to Earth than the Sun. In 1677, Edmond Halley observed a transit of Mercury across the Sun, leading him to realize that observations of the solar parallax of a planet (more ideally using the transit of Venus) could be used to trigonometrically determine the distances between Earth, Venus, and the Sun. Careful observations of the 1769 transit of Venus allowed astronomers to calculate the average Earth–Sun distance as , only 0.8% greater than the modern value.
In 1666, Isaac Newton observed the Sun's light using a prism, and showed that it is made up of light of many colors. In 1800, William Herschel discovered infrared radiation beyond the red part of the solar spectrum. The 19th century saw advancement in spectroscopic studies of the Sun; Joseph von Fraunhofer recorded more than 600 absorption lines in the spectrum, the strongest of which are still often referred to as Fraunhofer lines. The 20th century brought about several specialized systems for observing the Sun, especially at different narrowband wavelengths, such as those using Calcium H (396.9 nm), K (393.37 nm) and Hydrogen-alpha (656.46 nm) filtering.
During early studies of the optical spectrum of the photosphere, some absorption lines were found that did not correspond to any chemical elements then known on Earth. In 1868, Norman Lockyer hypothesized that these absorption lines were caused by a new element that he dubbed helium, after the Greek Sun god Helios. Twenty-five years later, helium was isolated on Earth.
In the early years of the modern scientific era, the source of the Sun's energy was a significant puzzle. Lord Kelvin suggested that the Sun is a gradually cooling liquid body that is radiating an internal store of heat. Kelvin and Hermann von Helmholtz then proposed a gravitational contraction mechanism to explain the energy output, but the resulting age estimate was only 20 million years, well short of the time span of at least 300 million years suggested by some geological discoveries of that time. In 1890, Lockyer proposed a meteoritic hypothesis for the formation and evolution of the Sun.
Not until 1904 was a documented solution offered. Ernest Rutherford suggested that the Sun's output could be maintained by an internal source of heat, and suggested radioactive decay as the source. However, it would be Albert Einstein who would provide the essential clue to the source of the Sun's energy output with his mass–energy equivalence relation . In 1920, Sir Arthur Eddington proposed that the pressures and temperatures at the core of the Sun could produce a nuclear fusion reaction that merged hydrogen (protons) into helium nuclei, resulting in a production of energy from the net change in mass. The preponderance of hydrogen in the Sun was confirmed in 1925 by Cecilia Payne using the ionization theory developed by Meghnad Saha. The theoretical concept of fusion was developed in the 1930s by the astrophysicists Subrahmanyan Chandrasekhar and Hans Bethe. Hans Bethe calculated the details of the two main energy-producing nuclear reactions that power the Sun. In 1957, Margaret Burbidge, Geoffrey Burbidge, William Fowler and Fred Hoyle showed that most of the elements in the universe have been synthesized by nuclear reactions inside stars, some like the Sun.
Solar space missions
The first satellites designed for long-term observation of the Sun from interplanetary space were NASA's Pioneers 6, 7, 8 and 9, which were launched between 1965 and 1968. These probes orbited the Sun at a distance similar to that of Earth, and made the first detailed measurements of the solar wind and the solar magnetic field. Pioneer 9 operated for a particularly long time, transmitting data until May 1983.
In the 1970s, two Helios spacecraft and the Skylab Apollo Telescope Mount provided scientists with significant new data on solar wind and the solar corona. The Helios 1 and 2 probes were U.S.–German collaborations that studied the solar wind from an orbit carrying the spacecraft inside Mercury's orbit at perihelion. The Skylab space station, launched by NASA in 1973, included a solar observatory module called the Apollo Telescope Mount that was operated by astronauts resident on the station. Skylab made the first time-resolved observations of the solar transition region and of ultraviolet emissions from the solar corona. Discoveries included the first observations of coronal mass ejections, then called "coronal transients", and of coronal holes, now known to be intimately associated with the solar wind.
In 1980, NASA launched the Solar Maximum Mission satellite. This spacecraft was designed to observe gamma rays, X-rays and UV radiation from solar flares during a time of high solar activity and solar luminosity. Just a few months after launch, however, an electronics failure caused the probe to go into standby mode, and it spent the next three years in this inactive state. In 1984, Space Shuttle Challenger mission STS-41C retrieved the satellite and repaired its electronics before re-releasing it into orbit. The Solar Maximum Mission subsequently acquired thousands of images of the solar corona before re-entering Earth's atmosphere in June 1989.
Launched in 1991, Japan's Yohkoh (Sunbeam) satellite observed solar flares at X-ray wavelengths. Mission data allowed scientists to identify several different types of flares and demonstrated that the corona away from regions of peak activity was much more dynamic and active than had previously been supposed. Yohkoh observed an entire solar cycle but went into standby mode when an annular eclipse in 2001 caused it to lose its lock on the Sun. It was destroyed by atmospheric re-entry in 2005.
The Solar and Heliospheric Observatory, jointly built by the European Space Agency and NASA, was launched on 2 December 1995. Originally intended to serve a two-year mission, SOHO remains in operation as of 2024. Situated at the Sun–Earth L1 Lagrangian point (where the combined gravitational pull of the two bodies keeps a spacecraft orbiting the Sun in step with Earth), SOHO has provided a constant view of the Sun at many wavelengths since its launch. Besides its direct solar observation, SOHO has enabled the discovery of a large number of comets, mostly tiny sungrazing comets that incinerate as they pass the Sun.
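For a sense of where that point lies, a back-of-the-envelope estimate (not from this article) using the standard restricted three-body approximation r ≈ R(M_Earth / 3M_Sun)^(1/3) places L1 about 1.5 million kilometres sunward of Earth; the masses and distance below are rounded reference values.

```python
# Rough sketch: distance from Earth to the Sun-Earth L1 point, using the
# standard approximation r ~ R * (M_earth / (3 * M_sun))**(1/3).
R = 1.496e11        # mean Sun-Earth distance, in metres (1 au)
M_sun = 1.989e30    # solar mass, in kg
M_earth = 5.972e24  # Earth mass, in kg

r_L1 = R * (M_earth / (3 * M_sun)) ** (1 / 3)
print(f"L1 lies roughly {r_L1 / 1e9:.2f} million km sunward of Earth")  # ~1.5 million km
```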
All these satellites have observed the Sun from the plane of the ecliptic, and so have only observed its equatorial regions in detail. The Ulysses probe was launched in 1990 to study the Sun's polar regions. It first traveled to Jupiter, to "slingshot" into an orbit that would take it far above the plane of the ecliptic. Once Ulysses was in its scheduled orbit, it began observing the solar wind and magnetic field strength at high solar latitudes, finding that the solar wind from high latitudes was moving at about 750 km/s, which was slower than expected, and that there were large magnetic waves emerging from high latitudes that scattered galactic cosmic rays.
Elemental abundances in the photosphere are well known from spectroscopic studies, but the composition of the interior of the Sun is more poorly understood. A solar wind sample return mission, Genesis, was designed to allow astronomers to directly measure the composition of solar material.
Observation by eyes
Exposure to the eye
The brightness of the Sun can cause pain from looking at it with the naked eye; however, doing so for brief periods is not hazardous for normal non-dilated eyes. Looking directly at the Sun (sungazing) causes phosphene visual artifacts and temporary partial blindness. It also delivers about 4 milliwatts of sunlight to the retina, slightly heating it and potentially causing damage in eyes that cannot respond properly to the brightness. Viewing of the direct Sun with the naked eye can cause UV-induced, sunburn-like lesions on the retina beginning after about 100 seconds, particularly under conditions where the UV light from the Sun is intense and well focused.
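The quoted figure of a few milliwatts reaching the retina can be roughly reproduced with a back-of-the-envelope calculation; the pupil diameter and ground-level solar irradiance below are assumed typical values, not numbers from this article.

```python
# Rough check of the "about 4 milliwatts" figure: power entering the eye is
# approximately the direct solar irradiance times the area of a constricted pupil.
import math

irradiance = 1000.0     # W/m^2, typical direct sunlight at the surface (assumed)
pupil_diameter = 0.002  # m, a bright-light (constricted) pupil of about 2 mm (assumed)

pupil_area = math.pi * (pupil_diameter / 2) ** 2
power_mw = irradiance * pupil_area * 1000
print(f"power entering the eye: about {power_mw:.1f} mW")  # ~3 mW, same order as the quoted 4 mW
```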
Viewing the Sun through light-concentrating optics such as binoculars may result in permanent damage to the retina without an appropriate filter that blocks UV and substantially dims the sunlight. When using an attenuating filter to view the Sun, the viewer is cautioned to use a filter specifically designed for that use. Some improvised filters that pass UV or IR rays can actually harm the eye at high brightness levels. Brief glances at the midday Sun through an unfiltered telescope can cause permanent damage.
During sunrise and sunset, sunlight is attenuated because of Rayleigh scattering and Mie scattering from a particularly long passage through Earth's atmosphere, and the Sun is sometimes faint enough to be viewed comfortably with the naked eye or safely with optics (provided there is no risk of bright sunlight suddenly appearing through a break between clouds). Hazy conditions, atmospheric dust, and high humidity contribute to this atmospheric attenuation.
Phenomena
An optical phenomenon, known as a green flash, can sometimes be seen shortly after sunset or before sunrise. The flash is caused by light from the Sun just below the horizon being bent (usually through a temperature inversion) towards the observer. Light of shorter wavelengths (violet, blue, green) is bent more than that of longer wavelengths (yellow, orange, red) but the violet and blue light is scattered more, leaving light that is perceived as green.
Religious aspects
Solar deities play a major role in many world religions and mythologies. Worship of the Sun was central to civilizations such as the ancient Egyptians, the Inca of South America and the Aztecs of what is now Mexico. In religions such as Hinduism, the Sun is still considered a god, known as Surya. Many ancient monuments were constructed with solar phenomena in mind; for example, stone megaliths accurately mark the summer or winter solstice (such as at Nabta Playa, Egypt; Mnajdra, Malta; and Stonehenge, England); Newgrange, a prehistoric human-built mound in Ireland, was designed to detect the winter solstice; the pyramid of El Castillo at Chichén Itzá in Mexico is designed to cast shadows in the shape of serpents climbing the pyramid at the vernal and autumnal equinoxes.
The ancient Sumerians believed that the Sun was Utu, the god of justice and twin brother of Inanna, the Queen of Heaven, who was identified as the planet Venus. Later, Utu was identified with the East Semitic god Shamash. Utu was regarded as a helper-deity, who aided those in distress.
From at least the Fourth Dynasty of Ancient Egypt, the Sun was worshipped as the god Ra, portrayed as a falcon-headed divinity surmounted by the solar disk, and surrounded by a serpent. In the New Kingdom period, the Sun became identified with the dung beetle. In the form of the sun disc Aten, the Sun had a brief resurgence during the Amarna Period when it again became the preeminent, if not only, divinity for the Pharaoh Akhenaton. The Egyptians portrayed the god Ra as being carried across the sky in a solar barque, accompanied by lesser gods, and to the Greeks, he was Helios, carried by a chariot drawn by fiery horses. From the reign of Elagabalus in the late Roman Empire, the Sun's birthday was a holiday celebrated as Sol Invictus (literally "Unconquered Sun") soon after the winter solstice, which may have been an antecedent to Christmas. Relative to the fixed stars, the Sun appears from Earth to revolve once a year along the ecliptic through the zodiac, and so Greek astronomers categorized it as one of the seven planets (Greek planetes, "wanderer"); the naming of the days of the week after the seven planets dates to the Roman era.
In Proto-Indo-European religion, the Sun was personified as the goddess *Seh2ul. Derivatives of this goddess in Indo-European languages include the Old Norse Sól, Sanskrit Surya, Gaulish Sulis, Lithuanian Saulė, and Slavic Solntse. In ancient Greek religion, the sun deity was the male god Helios, who in later times was syncretized with Apollo.
In the Bible, Malachi 4:2 mentions the "Sun of Righteousness" (sometimes translated as the "Sun of Justice"), which some Christians have interpreted as a reference to the Messiah (Christ). In ancient Roman culture, Sunday was the day of the sun god. In paganism, the Sun was a source of life, giving warmth and illumination. It was the center of a popular cult among Romans, who would stand at dawn to catch the first rays of sunshine as they prayed. The celebration of the winter solstice (which influenced Christmas) was part of the Roman cult of the unconquered Sun (Sol Invictus). Sunday, the day of the sun god, was later adopted as the Sabbath day by Christians. The symbol of light was a pagan device adopted by Christians, and perhaps the most important one that did not come from Jewish traditions. Christian churches were built so that the congregation faced toward the sunrise.
Tonatiuh, the Aztec god of the sun, was closely associated with the practice of human sacrifice. The sun goddess Amaterasu is the most important deity in the Shinto religion, and she is believed to be the direct ancestor of all Japanese emperors.
| Physical sciences | Science and medicine | null |
26764 | https://en.wikipedia.org/wiki/International%20System%20of%20Units | International System of Units | The International System of Units, internationally known by the abbreviation SI (from French Système international d'unités), is the modern form of the metric system and the world's most widely used system of measurement. It is the only system of measurement with official status in nearly every country in the world, employed in science, technology, industry, and everyday commerce. The SI is coordinated by the International Bureau of Weights and Measures, which is abbreviated BIPM from the French Bureau international des poids et mesures.
The SI comprises a coherent system of units of measurement starting with seven base units, which are the second (symbol s, the unit of time), metre (m, length), kilogram (kg, mass), ampere (A, electric current), kelvin (K, thermodynamic temperature), mole (mol, amount of substance), and candela (cd, luminous intensity). The system can accommodate coherent units for an unlimited number of additional quantities. These are called coherent derived units, which can always be represented as products of powers of the base units. Twenty-two coherent derived units have been provided with special names and symbols.
The seven base units and the 22 coherent derived units with special names and symbols may be used in combination to express other coherent derived units. Since the sizes of coherent units will be convenient for only some applications and not for others, the SI provides twenty-four prefixes which, when added to the name and symbol of a coherent unit, produce twenty-four additional (non-coherent) SI units for the same quantity; these non-coherent units are always decimal (i.e. power-of-ten) multiples and sub-multiples of the coherent unit.
The current way of defining the SI is a result of a decades-long move towards increasingly abstract and idealised formulation in which the realisations of the units are separated conceptually from the definitions. A consequence is that as science and technologies develop, new and superior realisations may be introduced without the need to redefine the unit. One problem with artefacts is that they can be lost, damaged, or changed; another is that they introduce uncertainties that cannot be reduced by advancements in science and technology.
The original motivation for the development of the SI was the diversity of units that had sprung up within the centimetre–gram–second (CGS) systems (specifically the inconsistency between the systems of electrostatic units and electromagnetic units) and the lack of coordination between the various disciplines that used them. The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention of 1875, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements. The system was published in 1960 as a result of an initiative that began in 1948, and is based on the metre–kilogram–second system of units (MKS) combined with ideas from the development of the CGS system.
Definition
The International System of Units consists of a set of defining constants with corresponding base units, derived units, and a set of decimal-based multipliers that are used as prefixes.
SI defining constants
The seven defining constants are the most fundamental feature of the definition of the system of units.
The magnitudes of all SI units are defined by declaring that seven constants have certain exact numerical values when expressed in terms of their SI units. These defining constants are the speed of light in vacuum c, the hyperfine transition frequency of caesium Δν_Cs, the Planck constant h, the elementary charge e, the Boltzmann constant k, the Avogadro constant N_A, and the luminous efficacy K_cd. The nature of the defining constants ranges from fundamental constants of nature such as c to the purely technical constant K_cd. The values assigned to these constants were fixed to ensure continuity with previous definitions of the base units.
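For reference, the exact numerical values fixed for these seven constants by the 2019 revision of the SI are collected in the short sketch below; the dictionary layout and variable names are illustrative only, not anything defined by the BIPM.

```python
# The seven SI defining constants and their exact fixed numerical values
# (as set by the 2019 revision of the SI), arranged as a simple reference table.
SI_DEFINING_CONSTANTS = {
    "delta_nu_Cs": (9_192_631_770, "Hz", "hyperfine transition frequency of caesium-133"),
    "c":           (299_792_458, "m/s", "speed of light in vacuum"),
    "h":           (6.626_070_15e-34, "J s", "Planck constant"),
    "e":           (1.602_176_634e-19, "C", "elementary charge"),
    "k":           (1.380_649e-23, "J/K", "Boltzmann constant"),
    "N_A":         (6.022_140_76e23, "1/mol", "Avogadro constant"),
    "K_cd":        (683, "lm/W", "luminous efficacy of 540 THz radiation"),
}

for symbol, (value, unit, name) in SI_DEFINING_CONSTANTS.items():
    print(f"{symbol:>11} = {value} {unit:6} ({name})")
```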
SI base units
The SI selects seven units to serve as base units, corresponding to seven base physical quantities. They are the second, with the symbol s, which is the SI unit of the physical quantity of time; the metre, symbol m, the SI unit of length; kilogram (kg, the unit of mass); ampere (A, electric current); kelvin (K, thermodynamic temperature); mole (mol, amount of substance); and candela (cd, luminous intensity).
The base units are defined in terms of the defining constants. For example, the kilogram is defined by taking the Planck constant h to be exactly 6.62607015×10⁻³⁴ J s, giving the expression for the kilogram in terms of the defining constants: 1 kg = (h / 6.62607015×10⁻³⁴) m⁻² s.
All units in the SI can be expressed in terms of the base units, and the base units serve as a preferred set for expressing or analysing the relationships between units. The choice of which and even how many quantities to use as base quantities is not fundamental or even unique – it is a matter of convention.
Derived units
The system allows for an unlimited number of additional units, called derived units, which can always be represented as products of powers of the base units, possibly with a nontrivial numeric multiplier. When that multiplier is one, the unit is called a coherent derived unit. For example, the coherent derived SI unit of velocity is the metre per second, with the symbol m/s. The base and coherent derived units of the SI together form a coherent system of units (the set of coherent SI units). A useful property of a coherent system is that when the numerical values of physical quantities are expressed in terms of the units of the system, then the equations between the numerical values have exactly the same form, including numerical factors, as the corresponding equations between the physical quantities.
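A minimal sketch of this idea, assuming a toy representation in which a unit is modelled as an exponent vector over the seven base units together with a numeric factor; the function names and data layout are invented for illustration and are not part of the SI.

```python
# Toy model (illustrative only): a unit is a numeric factor plus exponents over
# the seven SI base units. A unit is a *coherent* SI unit exactly when the factor is 1.
BASE = ("s", "m", "kg", "A", "K", "mol", "cd")

def unit(factor=1.0, **exponents):
    return factor, tuple(exponents.get(b, 0) for b in BASE)

metre_per_second = unit(s=-1, m=1)                 # coherent derived unit of velocity
kilometre_per_hour = unit(1000 / 3600, s=-1, m=1)  # same quantity, but not coherent

def is_coherent(u):
    factor, _ = u
    return factor == 1.0

print(is_coherent(metre_per_second))    # True
print(is_coherent(kilometre_per_hour))  # False
```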
Twenty-two coherent derived units have been provided with special names and symbols as shown in the table below. The radian and steradian have no base units but are treated as derived units for historical reasons.
The derived units in the SI are formed by powers, products, or quotients of the base units and are unlimited in number.
Derived units apply to some derived quantities, which may by definition be expressed in terms of base quantities, and thus are not independent; for example, electrical conductance is the inverse of electrical resistance, with the consequence that the siemens is the inverse of the ohm, and similarly, the ohm and siemens can be replaced with a ratio of an ampere and a volt, because those quantities bear a defined relationship to each other. Other useful derived quantities can be specified in terms of the SI base and derived units that have no named units in the SI, such as acceleration, which has the SI unit m/s².
A combination of base and derived units may be used to express a derived unit. For example, the SI unit of force is the newton (N), the SI unit of pressure is the pascal (Pa) – and the pascal can be defined as one newton per square metre (N/m²).
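Continuing the same toy representation from above, composing derived units amounts to adding or subtracting base-unit exponents, so one newton per square metre reduces to exactly the same base-unit form as the pascal; the helper names are again illustrative only.

```python
# Illustrative sketch: derived units combine by adding/subtracting base-unit exponents,
# so "newton per square metre" reduces to the same base-unit form as the pascal.
BASE = ("s", "m", "kg", "A", "K", "mol", "cd")

def dims(**exponents):
    return tuple(exponents.get(b, 0) for b in BASE)

def divide(a, b):
    return tuple(x - y for x, y in zip(a, b))

newton = dims(kg=1, m=1, s=-2)   # N  = kg m s^-2
square_metre = dims(m=2)
pascal = dims(kg=1, m=-1, s=-2)  # Pa = kg m^-1 s^-2

print(divide(newton, square_metre) == pascal)  # True: the pascal is one newton per square metre
```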
Prefixes
Like all metric systems, the SI uses metric prefixes to systematically construct, for the same physical quantity, a set of units that are decimal multiples of each other over a wide range. For example, driving distances are normally given in kilometres (symbol km) rather than in metres. Here the metric prefix 'kilo-' (symbol 'k') stands for a factor of 1000; thus, 1 km = 1000 m.
The SI provides twenty-four metric prefixes that signify decimal powers ranging from 10⁻³⁰ to 10³⁰, the most recent being adopted in 2022. Most prefixes correspond to integer powers of 1000; the only ones that do not are those for 10, 1/10, 100, and 1/100.
The conversion between different SI units for one and the same physical quantity is always through a power of ten. This is why the SI (and metric systems more generally) are called decimal systems of measurement units.
The grouping formed by a prefix symbol attached to a unit symbol (e.g. 'km', 'cm') constitutes a new inseparable unit symbol. This new symbol can be raised to a positive or negative power. It can also be combined with other unit symbols to form compound unit symbols. For example, g/cm³ is an SI unit of density, where cm³ is to be interpreted as (cm)³.
Prefixes are added to unit names to produce multiples and submultiples of the original unit. All of these are integer powers of ten, and above a hundred or below a hundredth all are integer powers of a thousand. For example, kilo- denotes a multiple of a thousand and milli- denotes a multiple of a thousandth, so there are one thousand millimetres to the metre and one thousand metres to the kilometre. The prefixes are never combined, so for example a millionth of a metre is a micrometre, not a millimillimetre. Multiples of the kilogram are named as if the gram were the base unit, so a millionth of a kilogram is a milligram, not a microkilogram.
The BIPM specifies 24 prefixes for the International System of Units (SI), ranging from quecto- (10⁻³⁰) to quetta- (10³⁰).
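As an illustrative reference (not text from the SI Brochure), the sketch below lists the 24 prefix names, symbols, and powers of ten; the conversion helper and its name are invented for illustration.

```python
# The 24 SI prefixes and their powers of ten. Note the special case for mass:
# prefixes attach to the gram, so 1e-6 kg is written "mg", never "µkg".
SI_PREFIXES = {
    "quetta": ("Q", 30), "ronna": ("R", 27), "yotta": ("Y", 24), "zetta": ("Z", 21),
    "exa": ("E", 18), "peta": ("P", 15), "tera": ("T", 12), "giga": ("G", 9),
    "mega": ("M", 6), "kilo": ("k", 3), "hecto": ("h", 2), "deca": ("da", 1),
    "deci": ("d", -1), "centi": ("c", -2), "milli": ("m", -3), "micro": ("µ", -6),
    "nano": ("n", -9), "pico": ("p", -12), "femto": ("f", -15), "atto": ("a", -18),
    "zepto": ("z", -21), "yocto": ("y", -24), "ronto": ("r", -27), "quecto": ("q", -30),
}

def to_prefixed(value_in_coherent_unit, prefix):
    """Re-express a value given in a coherent unit in the corresponding prefixed unit."""
    _, power = SI_PREFIXES[prefix]
    return value_in_coherent_unit / 10 ** power

print(to_prefixed(4700.0, "kilo"))   # 4700 m expressed in km -> 4.7
print(to_prefixed(2.5e-7, "micro"))  # 2.5e-7 m expressed in µm -> 0.25
```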
Coherent and non-coherent SI units
The base units and the derived units formed as the product of powers of the base units with a numerical factor of one form a coherent system of units. Every physical quantity has exactly one coherent SI unit. For example, the metre per second (m/s) is the coherent derived unit for velocity. With the exception of the kilogram (for which the prefix kilo- is required for a coherent unit), when prefixes are used with the coherent SI units, the resulting units are no longer coherent, because the prefix introduces a numerical factor other than one. For example, the metre, kilometre, centimetre, nanometre, etc. are all SI units of length, though only the metre is a coherent SI unit. The complete set of SI units consists of both the coherent set and the multiples and sub-multiples of coherent units formed by using the SI prefixes.
The kilogram is the only coherent SI unit whose name and symbol include a prefix. For historical reasons, the names and symbols for multiples and sub-multiples of the unit of mass are formed as if the gram were the base unit. Prefix names and symbols are attached to the unit name gram and the unit symbol g respectively. For example, 10⁻⁶ kg is written milligram and mg, not microkilogram and µkg.
Several different quantities may share the same coherent SI unit. For example, the joule per kelvin (symbol J/K) is the coherent SI unit for two distinct quantities: heat capacity and entropy; another example is the ampere, which is the coherent SI unit for both electric current and magnetomotive force. This illustrates why it is important not to use the unit alone to specify the quantity. As the SI Brochure states, "this applies not only to technical texts, but also, for example, to measuring instruments (i.e. the instrument read-out needs to indicate both the unit and the quantity measured)".
Furthermore, the same coherent SI unit may be a base unit in one context, but a coherent derived unit in another. For example, the ampere is a base unit when it is a unit of electric current, but a coherent derived unit when it is a unit of magnetomotive force.
Lexicographic conventions
Unit names
According to the SI Brochure, unit names should be treated as common nouns of the context language. This means that they should be typeset in the same character set as other common nouns (e.g. Latin alphabet in English, Cyrillic script in Russian, etc.), following the usual grammatical and orthographical rules of the context language. For example, in English and French, even when the unit is named after a person and its symbol begins with a capital letter, the unit name in running text should start with a lowercase letter (e.g., newton, hertz, pascal) and is capitalised only at the beginning of a sentence and in headings and publication titles. As a nontrivial application of this rule, the SI Brochure notes that the name of the unit with the symbol °C is correctly spelled as 'degree Celsius': the first letter of the name of the unit, 'd', is in lowercase, while the modifier 'Celsius' is capitalised because it is a proper name.
The English spelling and even names for certain SI units and metric prefixes depend on the variety of English used. US English uses the spelling deka-, meter, and liter, and International English uses deca-, metre, and litre. The name of the unit whose symbol is t and which is defined according to 1 t = 1000 kg is 'metric ton' in US English and 'tonne' in International English.
Unit symbols and the values of quantities
Symbols of SI units are intended to be unique and universal, independent of the context language. The SI Brochure has specific rules for writing them.
In addition, the SI Brochure provides style conventions covering, among other aspects of displaying quantities and units: quantity symbols, the formatting of numbers and the decimal marker, the expression of measurement uncertainty, the multiplication and division of quantity symbols, and the use of pure numbers and various angles.
In the United States, the guideline produced by the National Institute of Standards and Technology (NIST) clarifies language-specific details for American English that were left unclear by the SI Brochure, but is otherwise identical to the SI Brochure. For example, since 1979, the litre may exceptionally be written using either an uppercase "L" or a lowercase "l", a decision prompted by the similarity of the lowercase letter "l" to the numeral "1", especially with certain typefaces or English-style handwriting. The American NIST recommends that within the United States "L" be used rather than "l".
Realisation of units
Metrologists carefully distinguish between the definition of a unit and its realisation. The SI units are defined by declaring that seven defining constants have certain exact numerical values when expressed in terms of their SI units. The realisation of the definition of a unit is the procedure by which the definition may be used to establish the value and associated uncertainty of a quantity of the same kind as the unit.
For each base unit the BIPM publishes a mise en pratique (French for 'putting into practice; implementation') describing the current best practical realisations of the unit. The separation of the defining constants from the definitions of units means that improved measurements can be developed leading to changes in the mise en pratique as science and technology develop, without having to revise the definitions.
The published mise en pratique is not the only way in which a base unit can be determined: the SI Brochure states that "any method consistent with the laws of physics could be used to realise any SI unit". Various consultative committees of the CIPM decided in 2016 that more than one mise en pratique would be developed for determining the value of each unit. These methods include the following:
At least three separate experiments should be carried out yielding values having a relative standard uncertainty in the determination of the kilogram of no more than 5 parts in 10⁸, and at least one of these values should be better than 2 parts in 10⁸. Both the Kibble balance and the Avogadro project should be included in the experiments and any differences between them should be reconciled.
The definition of the kelvin required that the relative uncertainty of the Boltzmann constant derived from two fundamentally different methods, such as acoustic gas thermometry and dielectric constant gas thermometry, be better than one part in 10⁶, and that these values be corroborated by other measurements.
Organizational status
The International System of Units, or SI, is a decimal and metric system of units established in 1960 and periodically updated since then. The SI has an official status in most countries, including the United States, Canada, and the United Kingdom, although these three countries are among the handful of nations that, to various degrees, also continue to use their customary systems. Nevertheless, with this nearly universal level of acceptance, the SI "has been used around the world as the preferred system of units, the basic language for science, technology, industry, and trade."
The only other types of measurement system that still have widespread use across the world are the imperial and US customary measurement systems. The international yard and pound are defined in terms of the SI.
International System of Quantities
The quantities and equations that provide the context in which the SI units are defined are now referred to as the International System of Quantities (ISQ).
The ISQ is based on the quantities underlying each of the seven base units of the SI. Other quantities, such as area, pressure, and electrical resistance, are derived from these base quantities by clear, non-contradictory equations. The ISQ defines the quantities that are measured with the SI units. The ISQ is formalised, in part, in the international standard ISO/IEC 80000, which was completed in 2009 with the publication of ISO 80000-1, and has largely been revised in 2019–2020.
Controlling authority
The SI is regulated and continually developed by three international organisations that were established in 1875 under the terms of the Metre Convention. They are the General Conference on Weights and Measures (CGPM), the International Committee for Weights and Measures (CIPM), and the International Bureau of Weights and Measures (BIPM).
All the decisions and recommendations concerning units are collected in a brochure called The International System of Units (SI), which is published in French and English by the BIPM and periodically updated. The writing and maintenance of the brochure is carried out by one of the committees of the CIPM. The definitions of the terms "quantity", "unit", "dimension", etc. that are used in the SI Brochure are those given in the international vocabulary of metrology. The brochure leaves some scope for local variations, particularly regarding unit names and terms in different languages. For example, the United States' National Institute of Standards and Technology (NIST) has produced a version of the CGPM document (NIST SP 330) which clarifies usage for English-language publications that use American English.
History
CGS and MKS systems
The concept of a system of units emerged a hundred years before the SI.
In the 1860s and 1870s, James Clerk Maxwell, William Thomson (later Lord Kelvin), and others working under the auspices of the British Association for the Advancement of Science, building on previous work of Carl Gauss, developed the centimetre–gram–second system of units, or CGS system, publishing it in 1874. The system formalised the concept of a collection of related units called a coherent system of units. In a coherent system, base units combine to define derived units without extra factors. For example, the metre per second is coherent in a system that uses the metre for length and the second for time, but the kilometre per hour is not. The principle of coherence was successfully used to define a number of units of measure based on the CGS system, including the erg for energy, the dyne for force, the barye for pressure, the poise for dynamic viscosity, and the stokes for kinematic viscosity.
Metre Convention
A French-inspired initiative for international cooperation in metrology led to the signing in 1875 of the Metre Convention, also called Treaty of the Metre, by 17 nations.
The General Conference on Weights and Measures (French: Conférence générale des poids et mesures – CGPM), which was established by the Metre Convention, brought together many international organisations to establish the definitions and standards of a new system and to standardise the rules for writing and presenting measurements.
Initially the convention only covered standards for the metre and the kilogram. This became the foundation of the MKS system of units.
Giovanni Giorgi and the problem of electrical units
At the close of the 19th century three different systems of units of measure existed for electrical measurements: a CGS-based system for electrostatic units, also known as the Gaussian or ESU system, a CGS-based system for electromechanical units (EMU), and an International system based on units defined by the Metre Convention for electrical distribution systems.
Attempts to resolve the electrical units in terms of length, mass, and time using dimensional analysis were beset with difficulties – the dimensions depended on whether one used the ESU or EMU systems. This anomaly was resolved in 1901 when Giovanni Giorgi published a paper in which he advocated using a fourth base unit alongside the existing three base units. The fourth unit could be chosen to be electric current, voltage, or electrical resistance.
Electric current with named unit 'ampere' was chosen as the base unit, and the other electrical quantities derived from it according to the laws of physics.
When this fourth unit was combined with the MKS system, the resulting system, known as MKSA, was approved in 1946.
9th CGPM, the precursor to SI
In 1948, the 9th CGPM commissioned a study to assess the measurement needs of the scientific, technical, and educational communities and "to make recommendations for a single practical system of units of measurement, suitable for adoption by all countries adhering to the Metre Convention". This working document was Practical system of units of measurement. Based on this study, the 10th CGPM in 1954 defined an international system derived from six base units: the metre, kilogram, second, ampere, degree Kelvin, and candela.
The 9th CGPM also approved the first formal recommendation for the writing of symbols in the metric system when the basis of the rules as they are now known was laid down. These rules were subsequently extended and now cover unit symbols and names, prefix symbols and names, how quantity symbols should be written and used, and how the values of quantities should be expressed.
Birth of the SI
The 10th CGPM in 1954 resolved to create an international system of units and
in 1960, the 11th CGPM adopted the International System of Units, abbreviated SI from the French name Système international d'unités, which included a specification for units of measurement.
The International Bureau of Weights and Measures (BIPM) has described SI as "the modern form of metric system". In 1971 the mole became the seventh base unit of the SI.
2019 redefinition
After the metre was redefined in 1960, the International Prototype of the Kilogram (IPK) was the only physical artefact upon which base units (directly the kilogram and indirectly the ampere, mole and candela) depended for their definition, making these units subject to periodic comparisons of national standard kilograms with the IPK. During the 2nd and 3rd Periodic Verification of National Prototypes of the Kilogram, a significant divergence had occurred between the mass of the IPK and all of its official copies stored around the world: the copies had all noticeably increased in mass with respect to the IPK. During extraordinary verifications carried out in 2014 preparatory to redefinition of metric standards, continuing divergence was not confirmed. Nonetheless, the residual and irreducible instability of a physical IPK undermined the reliability of the entire metric system to precision measurement from small (atomic) to large (astrophysical) scales.
By avoiding the use of an artefact to define units, all issues with the loss, damage, and change of the artefact are avoided.
A proposal was made that:
In addition to the speed of light, four constants of nature – the Planck constant, the elementary charge, the Boltzmann constant, and the Avogadro constant – be defined to have exact values
The International Prototype of the Kilogram be retired
The current definitions of the kilogram, ampere, kelvin, and mole be revised
The wording of base unit definitions should change emphasis from explicit unit to explicit constant definitions.
The new definitions were adopted at the 26th CGPM on 16 November 2018, and came into effect on 20 May 2019. The change was adopted by the European Union through Directive (EU) 2019/1258.
Prior to its redefinition in 2019, the SI was defined through the seven base units from which the derived units were constructed as products of powers of the base units. After the redefinition, the SI is defined by fixing the numerical values of seven defining constants. This has the effect that the distinction between the base units and derived units is, in principle, not needed, since all units, base as well as derived, may be constructed directly from the defining constants. Nevertheless, the distinction is retained because "it is useful and historically well established", and also because the ISO/IEC 80000 series of standards, which define the International System of Quantities (ISQ), specifies base and derived quantities that necessarily have the corresponding SI units.
Related units
Non-SI units accepted for use with SI
Many non-SI units continue to be used in the scientific, technical, and commercial literature. Some units are deeply embedded in history and culture, and their use has not been entirely replaced by their SI alternatives. The CIPM recognised and acknowledged such traditions by compiling a list of non-SI units accepted for use with SI, including the hour, minute, degree of angle, litre, and decibel.
Metric units not recognised by SI
Although the term metric system is often used as an informal alternative name for the International System of Units, other metric systems exist, some of which were in widespread use in the past or are even still used in particular areas. There are also individual metric units such as the sverdrup and the darcy that exist outside of any system of units. Most of the units of the other metric systems are not recognised by the SI.
Unacceptable uses
Sometimes, SI unit name variations are introduced, mixing information about the corresponding physical quantity or the conditions of its measurement; however, this practice is unacceptable with the SI. "Unacceptability of mixing information with units: When one gives the value of a quantity, any information concerning the quantity or its conditions of measurement must be presented in such a way as not to be associated with the unit."
Instances include: "watt-peak" and "watt RMS"; "geopotential metre" and "vertical metre"; "standard cubic metre"; "atomic second", "ephemeris second", and "sidereal second".
| Physical sciences | Science: General | null |
26768 | https://en.wikipedia.org/wiki/Sirenia | Sirenia | The Sirenia (), commonly referred to as sea cows or sirenians, are an order of fully aquatic, herbivorous mammals that inhabit swamps, rivers, estuaries, marine wetlands, and coastal marine waters. The extant Sirenia comprise two distinct families: Dugongidae (the dugong and the now extinct Steller's sea cow) and Trichechidae (manatees, namely the Amazonian manatee, West Indian manatee, and West African manatee) with a total of four species. The Protosirenidae (Eocene sirenians) and Prorastomidae (terrestrial sirenians) families are extinct. Sirenians are classified in the clade Paenungulata, alongside the elephants and the hyraxes, and evolved in the Eocene 50 million years ago (mya). The Dugongidae diverged from the Trichechidae in the late Eocene or early Oligocene (30–35 mya).
Sirenians grow to between in length and in weight. The recently extinct Steller's sea cow was the largest known sirenian to have lived, reaching lengths of and weights of .
Sirenians have a large, fusiform body which reduces drag through the water and heavy bones that act as ballast to counteract the buoyancy of their blubber. They have a thin layer of blubber and consequently are sensitive to temperature fluctuations, which cause migrations when water temperatures dip too low. Sirenians are slow-moving, typically coasting at , but they can reach in short bursts. They use their strong lips to pull out seagrasses, consuming 10–15% of their body weight per day.
While breathing, sirenians hold just their nostrils above the surface, sometimes standing on their tails to do so. They typically inhabit warm, shallow, coastal waters, or rivers. They are mainly herbivorous, but have been known to consume animals such as birds and jellyfish. Males typically mate with more than one female and may gather in leks to mate. Sirenians are K-selected, displaying parental care.
The meat, oil, bones, and skins are commercially valuable. Mortality is often caused by direct hunting by humans or other human-induced causes, such as habitat destruction, entanglement in fishing gear, and watercraft collisions. Steller's sea cow was driven to extinction due to overhunting in 1768.
Taxonomy
Etymology
Sirenia, commonly sirenians, are also referred to by the common name sirens, deriving from the sirens of Greek mythology.
Classification
Sirenians are classified within the cohort Afrotheria in the clade Paenungulata, alongside Proboscidea (elephants), Hyracoidea (hyraxes), Embrithopoda, Desmostylia, and Afroinsectiphilia. This clade was first established by George Gaylord Simpson in 1945 on the basis of anatomical evidence, such as testicondy and similar fetal development. The Paenungulata, along with the Afrotheria, are one of the most well-supported mammalian clades in molecular phylogeny. Sirenia, Proboscidea, and Desmostylia are grouped together in the clade Tethytheria. On the basis of morphological similarities, Tethytheria, Perissodactyla, and Hyracoidea were previously thought to be grouped together as the Altungulata, but this has been invalidated by molecular data.
Sirenia families
† = Extinct
Family Dugongidae:
Genus Dugong
D. dugon
Genus †Anisosiren
†A. pannonica
Genus †Indosiren
†I. javanense
Genus †Bharatisiren
†B. indica
Genus †Callistosiren
†C. boriquensis
Genus †Crenatosiren
†C. olseni
Genus †Corystosiren
†C. varguezi
Genus †Dioplotherium
†D. allisoni
†D. manigualti
Genus †Domningia
†D. sodhae
Genus †Kutchisiren
†K. cylindrica
Genus †Nanosiren
†N. garciae
†N. sanchezi
Genus †Rytiodus
†R. capgrandi
†R. heali
Genus †Xenosiren
†X. yucateca
Genus †Caribosiren
†C. turneri
Genus †Halitherium
†H. alleni
†H. schinzii
Genus †Paralitherium
†P. tarkanyense
Genus †Priscosiren
†P. atlantica
Genus †Sirenavus
†S. hungaricus
Genus †Metaxytherium
†M. albifontanum
†M. arctodites
†M. crataegense
†M. floridanum
†M. krahuletzi
†M. medium
†M. serresii
†M. subapenninum
Genus †Dusisiren
†D. dewana
†D. jordani
†D. reinharti
†D. takasatensis
Genus †Hydrodamalis
†H. cuestae
†H. gigas
Family Trichechidae:
Genus Trichechus
T. manatus
T. senegalensis
T. inunguis
Genus †Anomotherium
†A. langewieschei
Genus †Miosiren
†M. canhami
†M. kocki
Genus †Potamosiren
†P. magdalenensis
Genus †Ribodon
†R. limbatus
†Family Protosirenidae:
Genus †Ashokia
†A. antiqua
Genus †Libysiren
†L. sickenbergi
Genus †Protosiren
†P. eothene
†P. fraasi
†P. minima
†P. sattaensis
†P. smithae
†Family Prorastomidae:
Genus †Pezosiren
†P. portelli
Genus †Prorastomus
†P. sirenoides
Distribution
The warm shallow waters of the equator have been the center of sirenian habitation. The northernmost living population, the Florida subspecies of the West Indian manatee (T. manatus latirostris), inhabits the coast and frequents freshwater springs, power plants, and canals in Florida to stay warm during the winter. Individuals may migrate north in the warm summer months, some up to 1,000 kilometers (about 620 mi) from their winter range. The Antillean subspecies (T. manatus manatus) occurs in the Caribbean, South America, and Central America and frequents drowned cays, mangroves, lagoons, and seagrass beds.
The Amazonian manatee (T. inunguis) has been documented in all parts of the Amazon River Basin in South America. River channels that connect allow easy travel to other waterways where food may be plentiful. The Amazonian manatee lives only in freshwater.
The West African manatee (T. senegalensis) lives in murky isolated inland mangroves and coastal flats in West Africa. It is found in waters above 18 °C, and its range spans Senegal to Angola.
The dugong (Dugong dugon), the closest living relative of Steller's sea cow, lives in the Indo-West Pacific Ocean in more than 40 different countries. Dugongs are coastal animals supported by wide, protected seagrass meadows.
Steller's sea cow was discovered in 1741 around islands in the Bering Sea and was specialized for cold subarctic temperatures. It ranged from Alaska through the Amchitka and Aleutian Islands, and even to Japan. Steller's sea cow was reported to have congregated in shallow, sandy areas along coastline and mouths of rivers and creeks to feed on kelp.
Evolution
The evolution of sirenians is characterized by the appearance of several traits that are found in all sirenians. The nostrils are large and retracted, the upper-jaw bone contacts the frontal bone, the sagittal crest is missing, the mastoid fills the supratemporal fenestra (an opening on the top of the skull), there is a drop-like ectotympanic (a bony ring that holds the ear drum), and the bones are pachyosteosclerotic (dense and bulky).
Sirenians first appeared in the fossil record in the Early Eocene and diversified throughout the epoch. They inhabited rivers, estuaries, and nearshore marine waters. Sirenians, unlike other marine mammals such as cetaceans, lived in the New World. In Western Europe the first and oldest sirenian remains have been found in a new paleontological site, in Santa Brígida, Amer (La Selva, Catalonia, Spain). One of the earliest aquatic sirenians discovered is Prorastomus, which dates back to 40 million years ago, and the first known sirenian, the quadruped Pezosiren, lived 50 million years ago. An ancient sirenian fossil of a petrosal bone was found in Tunisia, dating back to approximately the same time as Prorastomus. This is the oldest sirenian fossil to be found in Africa and supports molecular data suggesting that sirenians may have originated in Africa. Prorastomidae and Protosirenidae, the earliest sirenian families, consisted of pig-like amphibious creatures who died out at the end of the Eocene. With the appearance of the Dugongidae at this time, sirenians had evolved the characteristics of the modern order, including an aquatic, streamlined body with flipper-like fore limbs and no hind limbs, and a powerful tail with horizontal caudal fins which uses an up-and-down motion to move them through the water.
The last of the sirenian families to appear, Trichechidae, apparently arose from early dugongids in the late Eocene or early Oligocene. In 1994, the family was expanded to include not only the subfamily Trichechinae (Potamosiren, Ribodon, and Trichechus), but also Miosireninae (Anomotherium and Miosiren). The African manatee and the West Indian manatee are more closely related to each other than to the Amazonian manatee.
Dugongidae comprises the subfamilies Dugonginae and Hydrodamalinae and the paraphyletic Halitheriinae. The tusks of modern-day dugongs may have originally been used for digging, but they are now used for social interaction. The genus Dugong probably originated in the Indo-Pacific.
Description
Adaptations
The tail fluke of a dugong is notched and similar to those of dolphins, whereas the tail fluke of manatee is paddle-shaped. The fluke is pumped up and down in long strokes to move the animal forward, or twisted to turn. The forelimbs are paddle-like flippers which aid in turning and slowing. Unlike manatees, the dugong lacks nails on its flippers, which are only 15% of a dugong's body length. Manatees generally glide at speeds of , but can reach speeds of in short bursts. The body is fusiform to reduce drag in the water. Like those of cetaceans, the hind limbs are internal and vestigial. The snout is angled downwards to aid in bottom-feeding. Sirenians typically make two- to three-minute dives, but manatees can hold their breath for up to 15 minutes while resting and dugongs up to six minutes. They may stand on their tails to hold their heads above water.
Much like elephants, manatees are polyphyodonts, continuously replacing their teeth from the back of the jaw. Adults lack incisors, canines, and premolars, and instead have eight to ten cheek teeth. Manatees have an unlimited supply of teeth moving in from the back and shedding in the front; these are continuously formed by a dental capsule behind the tooth row. These teeth are constantly worn down by the abrasive vascular plants they forage, particularly aquatic grasses. Unlike those of manatees, the dugong's teeth do not continually grow back via horizontal tooth replacement. The dugong has two tusks which emerge in males during puberty, and sometime later in life for females after reaching the base of the premaxilla. The number of growth layer groups in a tusk indicates the age of a dugong.
Sirenians exhibit pachyostosis, a condition in which the ribs and other long bones are solid and contain little or no bone marrow. They have among the densest bones in the animal kingdom. These may act as ballast, countering the buoyancy of their blubber and helping them remain suspended slightly below the water's surface. Manatees do not possess blubber per se, but rather have thick skin and consequently are sensitive to temperature changes. They often migrate to warmer waters whenever the water temperature dips below . The lungs of sirenians are unlobed; along with the diaphragm, these extend the entire length of the vertebral column, helping the animals control their buoyancy and reducing tipping in the water.
Extant sirenians grow to between in length and can weigh up to . Steller's sea cow was the largest known sirenian to have lived, and could reach lengths of and weight of . A dugong's brain weighs a maximum of , about 0.1% of the animal's body weight. The bodies of sirenians are sparsely covered in short hair (vibrissae), except that it becomes denser on the muzzle, which may allow for tactile interpretation of their environment. Manatees are the only known organism with uniformly vascularized corneas. This may be the result of irritation from or protection against their hypotonic freshwater environment.
Diet
Sirenians are referred to as "sea cows" because their diet consists mainly of seagrass. Dugongs sift through the seafloor in search of seagrasses, using their sense of smell because their eyesight is poor. They ingest the whole plant, including the roots, although they will feed on just the leaves if this is not possible. Using its divided upper lip, the West Indian manatee is known to consume over 60 different freshwater and saltwater plants, such as shoalweed, water lettuce, muskgrass, manatee grass, and turtle grass. An adult manatee will commonly eat up to 10–15% of its body weight per day, which requires the manatee to graze for several hours per day. By contrast, 10% of the diet of the African manatee is fish and mollusks. Manatees have been known to eat small amounts of fish from nets.
As opposed to bulk feeding, dugongs target high-nitrogen grasses to maximize nutrient intake, and, although predominantly herbivorous, dugongs will occasionally eat invertebrates such as jellyfish, sea squirts, and shellfish. Some populations of dugongs, such as the one in Moreton Bay, Australia, are omnivorous, feeding on invertebrates such as polychaetes or marine algae when their supply of seagrasses is low. In other dugong populations in western and eastern Australia, there is evidence that dugongs actively seek out large invertebrates.
Populations of Amazonian manatees become restricted to lakes during the July–August dry season when water levels begin to fall, and are thought to fast during this period. Their large fat reserves and low metabolic rates—only 36% of the usual placental mammal metabolic rate—allow them to survive for up to seven months with little or no food.
Feeding behavior
Important anatomy in feeding
Perioral bristles are not only used to sense things, but can be used to grasp and manipulate food. Of the six distinct fields of bristles on upper and lower lips, the perioral fields have distinct length-to-diameter ratios, defining their boundaries. Macrovibrissae are used to detect food by its size and microvibrissae to manipulate food. They can be used to break off leaves and undesirable parts while feeding. Sirenians use their elaborate facial musculature along with perioral bristles to acquire, manipulate, and ingest aquatic vegetation. The snout makes up a muscular hydrostat, a biological structure that relies on muscular pressure and muscle contractions to manipulate and move food. The manatee uses its large upper perioral bristles to carry out a grasping motion: it performs a flare that tightens the muscular hydrostat while the large upper bristles get pushed out and the lower jaw drops and sweeps the vegetation in by closing. The primary bristles used for vegetation ingestion are the U2 and L1 fields. Dugongs and trichechids differ in how they use the U1 and U2 bristle fields during feeding. Dugongs use a medial-to-lateral motion for U2 bristles, while trichechids use a prehensile, lateral-to-medial grasping motion. These divergent feeding behaviors allow dugongs to exploit benthic foraging, including rhizome consumption, more effectively than trichechids.
Food handling
Food handling has been measured by observing the length of the cyclic movements (feeding cycles) of the manatees' perioral bristles used to introduce food into the mouth. Mean feeding cycle lengths vary with the manatee's body size and the species of plant being consumed, and the rates of food introduction derived from them are comparable to chewing rates reported in other studies. Manatees consume plants with tubular stems and numerous branches more quickly than plants with flat blades. Florida manatees thus adapt their food handling strategies and efficiency to the characteristics of the plants they consume, which provides insight into their feeding ecology.
Cultivation grazing
Dugongs are constrained in their feeding by their rudimentary dentition and the limited nitrogen abundance in seagrasses. To counter this, they use a strategy called "cultivation grazing". This grazing can alter the composition of seagrass communities, favoring early, rapidly growing species over slow-growing ones. Oftentimes, these "pioneer" species are high in nitrogen and low in fibre, making them a preferred diet for the dugongs. To ensure the abundance of favored seagrasses, dugongs exert sustained grazing pressure on seagrass patches for a month or more, maximizing the presence of preferred species at the expense of less nutritious and less favored ones. This grazing method also encourages rapid recovery of seagrass meadows: the dugongs graze in meandering single trails that leave uncropped patches of seagrass, and these ungrazed reserves, with their surviving rhizomes, are key to the expansion and restoration of the seagrasses. Seagrasses respond to cropping by increasing nitrogen levels and decreasing lignin. Cultivation grazing thus allows dugongs to increase both the abundance of nutritionally superior seagrasses and the overall nutritional quality of the meadow; by maintaining the seagrasses in an immature state, dugongs ensure the highest level of nutrition.
Reproduction
Despite being mostly solitary, sirenians congregate in groups while females are in estrus. These groups usually include one female with multiple males. Sirenians are K-selectors; despite their longevity, females give birth only a few times during their lives and invest considerable parental care in their young. Dugongs generally gather in groups of less than a dozen individuals for one to two days. Since they congregate in turbid waters, little is known about their reproductive behavior. The males are often seen with scars, and the tusks on dugongs grow in first for males, suggesting they are important in lekking. They have also been known to lunge at each other. The age when a female first gives birth is disputed, ranging anywhere from six to 17 years. The time between births is unclear, with estimates ranging from two to seven years. In Sarasota, Florida, 53 females under observation produced at least 55 calves during a five-year period.
Manatees can reach sexual maturity as early as two to five years of age. Manatee gestation is around one year, and then they lactate for one to two years. West Indian manatees and African manatees can breed year-round, and a female will mate with multiple males. Amazonian manatees have a breeding season, usually mating when the river levels begin to rise, which varies from place to place.
Manatees in captivity
Manatees may be taken into captivity after being found stranded to facilitate their recovery, and there are many instances of manatees being successfully rehabilitated and released into the wild. As all extant sirenian species are rated as Vulnerable, these rehabilitation programs present a useful means to support these species. However, the vulnerability of these animals also means that the taking of manatees from the wild for commercial purposes is a conservation issue.
Diet in captivity
Manatees tend to do well in a captive environment and have been known to thrive. However, it can be difficult to replicate the conditions of their natural environment to the extent necessary to maintain a manatee at its healthiest; the typical diet fed to captive manatee populations may contain insufficient quantities of the nutrients they need.
Manatee captive-fed diets vary greatly from the manatee's diet in the wild. In captivity, manatees are fed 70–80% leafy green vegetables, 10–20% dried forage, and 5% vegetables and fruits. Dried forage is foods such as hay and timothy grass, which are often used as horse and cattle feed. The vegetables and fruits that are fed to manatees include romaine lettuce, carrots, and apples. In their natural habitat, approximately half of the manatee's diet is marine or estuarine plants. When compared to the captive diet, aquatic plants have more dry matter and soluble neutral detergent fiber, and less digestible nutrients. Although more easily digestible nutrients may seem to represent a better diet, a manatee's gastrointestinal tract is adapted to the wild diet through microbial processes of fermentation.
Rescue and rehabilitation efforts often involve orphaned infant manatees. In captivity, young manatees will be bottle-fed an amino acid-based milk formula that includes a protein source, oils, and a stabilizing agent. This concoction is supplemented with vitamins. During intake, young manatees might require electrolytes via intravenous hydration or even tube feeding if they continuously reject the bottle. After six months, they will be introduced to solid foods like romaine and iceberg lettuce, pumpkin, and root vegetables. After a year and a half, the weaning process will begin and the juvenile manatees will be offered less and less milk during feeding times, slowly transitioning to a completely solid food diet.
Threats and conservation
The three extant manatee species (family Trichechidae) and the dugong (family Dugongidae) are rated as Vulnerable on the IUCN Red List of Threatened Species. All four are vulnerable to extinction from habitat loss and other negative impacts related to human population growth and coastal development. Steller's sea cow, extinct since 1768, was hunted to extinction by humans.
The meat, oil, bones, and skin of manatees have commercial value. In some countries, such as Nigeria and Cameroon, African manatees are sold to zoos, aquariums, and online as pets, sometimes being shipped internationally. Though hunting of them is illegal, lack of law enforcement in these areas allows poaching. Some residents of West African countries, such as Mali and Chad, believe that the oil of the African manatee can cure ailments such as ear infections, rheumatism, and skin conditions. Hunting is the largest source of mortality in Amazonian manatees, and there are no management plans except in Colombia. Amazonian manatees, especially calves, are sometimes illegally sold as pets, but there are several institutions that care for and rescue these orphans, with the possibility of releasing them into the wild. The body parts of dugongs are used as medicinal remedies across the Indian Ocean.
Manatees in Cuba have faced poaching, entanglement, and pollution. The area has some of the most extensive and best manatee habitat in the Caribbean, but the population has been unable to thrive there. Existing information about manatees in Cuba is limited; this makes it difficult to spread awareness, which in turn increases the risks of poaching and entanglement in fishing nets in coastal communities. Poaching of manatees has been a significant issue since the 1970s, when it was first reported that hunting was taking its toll on the Cuban manatee population. In 1975, manatees in Cuba were reported to be rare and declining at an alarming rate due to pollution and hunting. In 1996, manatees were placed under protection through Fishery Decree Law 164, which provided penalties against those who manipulate, harm, or injure manatees. The hunting of manatees in Cuba in the 1990s may have been the result of economic hardship, with the manatees being seen as a source of protein. Although efforts have been made to protect the Cuban manatee population, they have not proven as effective as those working to protect the population had hoped. Many of the protected areas are parks that exist only on paper, and they do not have a significant impact on conservation and protection.
Human-induced environmental hazards also put sirenians at risk. Sirenians, especially the West Indian manatee, face high mortality from watercraft collisions, which cause about half of all West Indian manatee deaths. An increased use of hydroelectric power and the consequent damming of rivers increase waterway traffic, which can lead to vessel collisions, and manatees may become entrapped in navigational locks. The urbanization of coastlines in areas such as the Caribbean, Florida, and Australia can result in a decline in seagrass populations. Seagrass meadows are also highly susceptible to pollution and are currently among the most threatened ecosystems on Earth. Reliable areas of warm water in Florida are generally the result of discharge from power plants, but newer plants with more efficient cooling systems may disrupt the pattern of warm-water refuges, and increased demand on artesian springs, the animals' natural source of warm water, for human use decreases the number of warm-water refuges. Congregating in the warm waters of industrial areas of Florida can expose manatees to pollutants and toxins at a time of year when their immune systems are already compromised.
Sirenians can be caught as bycatch from fisheries, and they can be seen as pests that interfere with local fishermen and damage their nets. African manatees have also been known to venture into rice paddies and destroy the crops during the rainy season, and these confrontations with locals may lead to intentional killing of the manatees.
Red tide, a harmful algal bloom of Karenia brevis that releases toxins into the water, kills many marine species. In 1982, many manatees were sickened by consuming brevetoxins that had accumulated in filter-feeding organisms attached to seagrass blades. Manatees can also inhale these brevetoxins from the surface of the water as they come up for air, leading to respiratory symptoms and even drowning. Manatee die-offs from exposure to red tide toxins were recorded by the Florida Fish and Wildlife Conservation Commission in southwest Florida in 2002, 2003, 2005, 2007, and 2013. A 2018 red tide bloom spread from Pasco County to Collier County off the west coast of Florida. As of January 2018, a total of 472 manatee deaths had been attributed to this red tide together with watercraft strikes, cold stress, and other factors.
Manatees have been negatively impacted by plastics and other debris that make their way into the ocean and other waterways. Plastic and debris can result in manatee entanglement, ingestion, amputation, or even death. Plastic ingestion is often not discovered until after death, when a necropsy is conducted and debris is found in the manatee's GI tract. Of the 40 Antillean manatees that were rescued, rehabilitated, and released by the Clearwater Marine Aquarium Manatee Reintroduction Program along the Central and South American coast, four were found to have plastic in their GI tracts. Treatment was completed and the manatees were released. Later, three of the four were found dead, two as a direct result of plastic ingestion and the third with plastic pieces in its GI tract. Items found in the deceased manatees' GI tracts included condoms, plastic bags, Raschel knit polyester, unidentified plastic debris, and ice cream and sanitary product wrappers.
A study at the University of Miami in Florida assessed 439 manatee carcasses that were recovered and necropsied between 1978 and 1986. Of these, 63 (14.4%) had ingested debris; four of the animals died as a direct result of ingestion of plastic or other debris. Some of the debris that was found in the animals' GI tracts included monofilament fishing line (the most common item found), plastic bags, string, twine, rope, fishhooks, wire, paper, cellophane, synthetic sponges, rubber bands and stockings.
Infectious diseases may also play an important role in morbidity and mortality. Although viruses have been identified only from Florida manatees, parasites and bacteria have been observed in at least three of the four sirenian species. The viruses that have been detected in Florida manatees include trichechid herpesvirus 1 (TrHV-1) and manatee papillomaviruses (TmPV) 1 through 4. Mycobacteriosis has reportedly led to mortality in captive Florida manatees and illness in Amazonian manatees while bacteria such as Vibrio, Pasteurella, Pseudomonas, Streptococcus, and Clostridium have been cultured from dead dugongs in Australia. Salmonellosis has been associated with mortality in dugongs since at least 1981. Although not well studied, the Senegal manatee is known to host the nematode Heterocheilus tunicatus, just as its sister species the West Indian manatee does. There is still a great deal to learn about the threat that infectious diseases pose to both wild and captive populations of manatees. The relationship between the presence of certain potential pathogens, including those listed above, and their effect on disease in individuals is still largely unknown, although many wild manatees are found to be positive for papillomavirus with no known negative health effects. Immunosuppressed individuals that test positive for papillomavirus can sometimes develop cutaneous lesions; however, cutaneous papillomatosis is not always correlated with a papillomavirus infection, and further study is warranted.
In Florida, agricultural runoff can negatively affect manatee habitat, and during the rainy season, over 50 counties impose fertilizer bans to try to limit the pollutants that end up in waterways. Weather disasters and other natural occurrences are also sources of mortality. The West Indian manatee and dugong face risks from hurricanes and cyclones, which are predicted to increase in the future. These storms can spread pollutants and may damage seagrass populations. African manatees can become stranded during the dry season when rivers and lakes become too small or dry up completely.
Climate change is a growing concern for manatees, as changes in temperature can affect sea levels, pH, precipitation, salinity, and the circulation patterns of coastal ecosystems. Climate change is also predicted to make winter months even colder, leading to increased instances of cold stress in manatees. Manatees have an unusually low metabolic rate and poor insulation, so it is harder for them to thermoregulate in cold water. They typically migrate to warmer waters once water temperatures drop below 20 degrees Celsius; these can be naturally warmer waters or artificially warmed habitats created by the outfalls of power plants and energy centers. Manatee cold stress syndrome can occur with prolonged exposure to water temperatures below the 20-degree threshold, and can ultimately result in frostbite-like skin lesions, anorexia, fat atrophy, lymphoid depletion, and secondary infections and diseases. According to the Florida Fish and Wildlife Conservation Commission, the past three years have seen the highest number of cold-related deaths documented to date. This is a common condition treated at manatee rehabilitation facilities in Florida such as SeaWorld Orlando, Zoo Tampa at Lowry Park, Miami Seaquarium, and the Jacksonville Zoo and Gardens.
Manatee rehabilitation for conditions like cold stress syndrome is possible through the support of veterinary staff, zookeepers, researchers, and volunteers in the field. The number of manatees needing intervention is likely to rise as the number of warm-water habitats decreases due to declining spring discharges and the retirement of power plants. Common treatments for manatee cold stress syndrome include warm clean water, antibiotics, rehydration, enemas and mineral oil for constipation and foreign debris, and, most importantly, nutritional supplementation. For some manatee patients with frostbite-like lesions, long-term supplementation may be recommended for optimal recovery. Recently weaned juveniles are most affected by cold stress syndrome, and they can be more difficult to treat because of concerns surrounding hypothermia.
Warming ocean temperatures can cause harmful algal blooms, which can choke out the light needed for seagrass growth. Reduced seagrass beds mean that more manatees end up congregating in smaller areas to feed, increasing competition for resources and the spread of pathogens. Exposure to brevetoxin during a red tide event is also a source of mortality; manatees may be exposed to brevetoxin even after a red tide has subsided, as it can accumulate on seagrasses. The act of eating vegetation also stirs up sediment, resulting in the ingestion of contaminants trapped in the mud.
All sirenians are protected by the US Marine Mammal Protection Act of 1972, the US Endangered Species Act of 1973, and the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). In addition, the four species are covered by various other agreements and initiatives; the dugong, for example, is listed in the Convention on Biological Diversity, the Convention on Migratory Species, and the Coral Triangle Initiative. In Florida, manatees are protected by the Florida Manatee Sanctuary Act of 1978, which imposes measures such as prohibiting watercraft, or limiting their speed, in areas where manatees occur. Marine mammal rehabilitation programs have been underway and regulated in the United States for more than 40 years. Efforts to rescue and aid injured and distressed manatees in Florida began in 1973 and were eventually formalized into the Manatee Rescue, Rehabilitation, and Release Program managed by the US Fish and Wildlife Service. In 2012, the program became the Manatee Rescue & Rehabilitation Partnership, with permitting and oversight by the USFWS. From 1973 through 2014, the program rescued 1,619 manatees and released 526.
| Biology and health sciences | Mammals | null |
26805 | https://en.wikipedia.org/wiki/Sex | Sex | Sex is the biological trait that determines whether a sexually reproducing organism produces male or female gametes. During sexual reproduction, a male and a female gamete fuse to form a zygote, which develops into an offspring that inherits traits from each parent. By convention, organisms that produce smaller, more mobile gametes (spermatozoa, sperm) are called male, while organisms that produce larger, non-mobile gametes (ova, often called egg cells) are called female. An organism that produces both types of gamete is hermaphrodite.
In non-hermaphroditic species, the sex of an individual is determined through one of several biological sex-determination systems. Most mammalian species have the XY sex-determination system, where the male usually carries an X and a Y chromosome (XY), and the female usually carries two X chromosomes (XX). Other chromosomal sex-determination systems in animals include the ZW system in birds, and the XO system in some insects. Various environmental systems include temperature-dependent sex determination in reptiles and crustaceans.
The male and female of a species may be physically alike (sexual monomorphism) or have physical differences (sexual dimorphism). In sexually dimorphic species, including most birds and mammals, the sex of an individual is usually identified through observation of that individual's sexual characteristics. Sexual selection or mate choice can accelerate the evolution of differences between the sexes.
The terms male and female typically do not apply in sexually undifferentiated species in which the individuals are isomorphic (look the same) and the gametes are isogamous (indistinguishable in size and shape), such as the green alga Ulva lactuca. Some kinds of functional differences between individuals, such as in fungi, may be referred to as mating types.
Sexual reproduction
Sexual reproduction, in which two individuals produce an offspring that possesses a selection of the genetic traits of each parent, is exclusive to eukaryotes. Genetic traits are encoded in the deoxyribonucleic acid (DNA) of chromosomes. The eukaryote cell has a set of paired homologous chromosomes, one from each parent, and this double-chromosome stage is called "diploid". During sexual reproduction, a diploid organism produces specialized haploid sex cells called gametes via meiosis, each of which has a single set of chromosomes. Meiosis involves a stage of genetic recombination via chromosomal crossover, in which regions of DNA are exchanged between matched pairs of chromosomes, to form new chromosomes, each with a new combination of the genes of the parents. Then the chromosomes are separated into single sets in the gametes. When gametes fuse during fertilization, the resulting zygote has half of the genetic material of the mother and half of the father. The combination of chromosomal crossover and fertilization, bringing the two single sets of chromosomes together to make a new diploid zygote, results in a new organism that contains a different set of the genetic traits of each parent.
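The recombination step described above can be illustrated with a toy simulation: two homologous chromosomes are modelled as allele lists, a single random crossover point is chosen, and one of the resulting recombinant chromosomes becomes the gamete's copy. This is a didactic sketch with made-up gene labels, not a biological model.

```python
import random

# Toy model of chromosomal crossover during meiosis: each homologue is a
# list of alleles (labels are hypothetical). One crossover point is chosen
# and the segments beyond it are exchanged between the two homologues.
maternal = ["A1", "B1", "C1", "D1", "E1"]
paternal = ["A2", "B2", "C2", "D2", "E2"]

def crossover(chrom_a, chrom_b):
    """Return two recombinant chromosomes produced by a single crossover."""
    point = random.randrange(1, len(chrom_a))   # crossover position
    recomb_1 = chrom_a[:point] + chrom_b[point:]
    recomb_2 = chrom_b[:point] + chrom_a[point:]
    return recomb_1, recomb_2

recomb_1, recomb_2 = crossover(maternal, paternal)
gamete = random.choice([recomb_1, recomb_2])    # haploid set passed on
print("gamete alleles:", gamete)
```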
In animals, the haploid stage only occurs in the gametes, the sex cells that fuse to form a zygote that develops directly into a new diploid organism. In a plant species, the diploid organism produces a type of haploid spore by meiosis that is capable of undergoing repeated cell division to produce a multicellular haploid organism. In either case, the gametes may be externally similar (isogamy) as in the green alga Ulva or may be different in size and other aspects (anisogamy). The size difference is greatest in oogamy, a type of anisogamy in which a small, motile gamete combines with a much larger, non-motile gamete.
In anisogamic organisms, by convention, the larger gamete (called an ovum, or egg cell) is considered female, while the smaller gamete (called a spermatozoon, or sperm cell) is considered male. An individual that produces large gametes is female, and one that produces small gametes is male. An individual that produces both types of gamete is a hermaphrodite. In some species, a hermaphrodite can self-fertilize and produce an offspring on its own.
Animals
Most sexually reproducing animals spend their lives as diploid, with the haploid stage reduced to single-cell gametes. The gametes of animals have male and female forms—spermatozoa and egg cells, respectively. These gametes combine to form embryos which develop into new organisms.
The male gamete, a spermatozoon (produced in vertebrates within the testes), is a small cell containing a single long flagellum which propels it. Spermatozoa are extremely reduced cells, lacking many cellular components that would be necessary for embryonic development. They are specialized for motility, seeking out an egg cell and fusing with it in a process called fertilization.
Female gametes are egg cells. In vertebrates, they are produced within the ovaries. They are large, immobile cells that contain the nutrients and cellular components necessary for a developing embryo. Egg cells are often associated with other cells which support the development of the embryo, forming an egg. In mammals, the fertilized embryo instead develops within the female, receiving nutrition directly from its mother.
Animals are usually mobile and seek out a partner of the opposite sex for mating. Animals which live in the water can mate using external fertilization, where the eggs and sperm are released into and combine within the surrounding water. Most animals that live outside of water, however, use internal fertilization, transferring sperm directly into the female to prevent the gametes from drying up.
In most birds, both excretion and reproduction are done through a single posterior opening, called the cloaca—male and female birds touch cloaca to transfer sperm, a process called "cloacal kissing". In many other terrestrial animals, males use specialized sex organs to assist the transport of sperm—these male sex organs are called intromittent organs. In humans and other mammals, this male organ is known as the penis, which enters the female reproductive tract (called the vagina) to achieve insemination—a process called sexual intercourse. The penis contains a tube through which semen (a fluid containing sperm) travels. In female mammals, the vagina connects with the uterus, an organ which directly supports the development of a fertilized embryo within (a process called gestation).
Because of their motility, animal sexual behavior can involve coercive sex. Traumatic insemination, for example, is used by some insect species to inseminate females through a wound in the abdominal cavity—a process detrimental to the female's health.
Plants
Like animals, land plants have specialized male and female gametes. In seed plants, male gametes are produced by reduced male gametophytes that are contained within pollen which have hard coats that protect the male gamete forming cells during transport from the anthers to the stigma. The female gametes of seed plants are contained within ovules. Once fertilized, these form seeds which, like eggs, contain the nutrients necessary for the initial development of the embryonic plant.
The flowers of flowering plants contain their sexual organs. Most flowering plants are hermaphroditic, with both male and female parts in the same flower or on the same plant in single-sex flowers; only about 5% of plant species have individual plants that are one sex or the other. The female parts, in the center of a hermaphroditic or female flower, are the pistils, each unit consisting of a carpel, a style and a stigma. Two or more of these reproductive units may be merged to form a single compound pistil, the fused carpels forming an ovary. Within the carpels are ovules which develop into seeds after fertilization. The male parts of the flower are the stamens: these consist of long filaments arranged between the pistil and the petals that produce pollen in anthers at their tips. When a pollen grain lands upon the stigma on top of a carpel's style, it germinates to produce a pollen tube that grows down through the tissues of the style into the carpel, where it delivers male gamete nuclei to fertilize an ovule that eventually develops into a seed.
Some hermaphroditic plants are self-fertile, but plants have evolved multiple different self-incompatibility mechanisms to avoid self-fertilization, involving sequential hermaphroditism, molecular recognition systems and morphological mechanisms such as heterostyly.
In pines and other conifers, the sex organs are produced within cones that have male and female forms. Male cones are smaller than female ones and produce pollen, which is transported by wind to land in female cones. The larger female cones are typically more durable and longer-lived, and contain ovules that develop into seeds after fertilization.
Because seed plants are immobile, they depend upon passive methods for transporting pollen grains to other plants. Many, including conifers and grasses, produce lightweight pollen which is carried by wind to neighboring plants. Some flowering plants have heavier, sticky pollen that is specialized for transportation by insects or larger animals such as hummingbirds and bats, which may be attracted to flowers containing rewards of nectar and pollen. These animals transport the pollen as they move to other flowers, which also contain female reproductive organs, resulting in pollination.
Fungi
Most species of fungus can reproduce sexually and have life cycles with both haploid and diploid phases. These species of fungus are typically isogamous, i.e. lacking male and female specialization. One haploid fungus grows into contact with another, and then they fuse their cells. In some cases, the fusion is asymmetric, and the cell which donates only a nucleus (and no accompanying cellular material) could arguably be considered male. Fungi may also have more complex allelic mating systems, with other sexes not accurately described as male, female, or hermaphroditic.
Some fungi, including baker's yeast, have mating types that determine compatibility. Yeasts with the same mating types will not fuse with each other to form diploid cells, only with yeast carrying another mating type.
Many species of higher fungi produce mushrooms as part of their sexual reproduction. Within the mushroom, diploid cells are formed, later dividing into haploid spores.
Sexual systems
A sexual system is a distribution of male and female functions across organisms in a species.
Animals
Approximately 95% of animal species have separate male and female individuals, and are said to be gonochoric. About 5% of animal species are hermaphroditic. This low percentage is partially attributable to the very large number of insect species, in which hermaphroditism is absent. About 99% of vertebrates are gonochoric, and the remaining 1% that are hermaphroditic are almost all fishes.
Plants
The majority of plants are bisexual, either hermaphrodite (with both stamens and pistil in the same flower) or monoecious. In dioecious species male and female sexes are on separate plants. About 5% of flowering plants are dioecious, resulting from as many as 5000 independent origins. Dioecy is common in gymnosperms, in which about 65% of species are dioecious, but most conifers are monoecious.
Evolution of sex
It is generally accepted that isogamy was ancestral to anisogamy and that anisogamy evolved several times independently in different groups of eukaryotes, including protists, algae, plants, and animals. The evolution of anisogamy is synonymous with the origin of male and the origin of female. It is also the first step towards sexual dimorphism and influenced the evolution of various sex differences.
It is unclear whether anisogamy first led to the evolution of hermaphroditism or the evolution of gonochorism, and the evolution of sperm and eggs has left no fossil evidence.
A 1.2-billion-year-old fossil of Bangiomorpha pubescens has provided the oldest fossil record for the differentiation of male and female reproductive types, showing that sexes evolved early in eukaryotes. Studies on green algae have provided genetic evidence for the evolutionary link between sexes and mating types.
The original form of sex was external fertilization. Internal fertilization evolved later and became dominant among vertebrates after their emergence on land.
Adaptive function of sex
The most basic role of meiosis appears to be conservation of the integrity of the genome that is passed on to progeny by parents. The two most fundamental aspects of sexual reproduction, meiotic recombination and outcrossing, are likely maintained respectively by the adaptive advantages of recombinational repair of genomic DNA damage and genetic complementation which masks the expression of deleterious recessive mutations. Genetic variation, often produced as a byproduct of these processes, may provide long-term advantages in those sexual lineages that favor outcrossing.
Sex-determination systems
The biological cause of an organism developing into one sex or the other is called sex determination. The cause may be genetic or environmental, may involve haplodiploidy, or may depend on multiple factors. In animals and other organisms that have genetic sex-determination systems, the determining factor may be the presence of a sex chromosome. In plants that are sexually dimorphic, such as Ginkgo biloba, the liverwort Marchantia polymorpha, or the dioecious species in the flowering plant genus Silene, sex may also be determined by sex chromosomes. Non-genetic systems may use environmental cues, such as the temperature during early development in crocodiles, to determine the sex of the offspring.
Sex determination is often distinct from sex differentiation. Sex determination is the designation for the development stage towards either male or female while sex differentiation is the pathway towards the development of the phenotype.
Genetic
XY sex determination
Humans and most other mammals have an XY sex-determination system: the Y chromosome carries factors responsible for triggering male development, making XY sex determination mostly based on the presence or absence of the Y chromosome. It is the male gamete that determines the sex of the offspring. In this system, XX mammals are typically female and XY are typically male. However, individuals with XXY or XYY are male, while individuals with a single X chromosome (X0) or with XXX are female. Unusually, the platypus, a monotreme mammal, has ten sex chromosomes; females have ten X chromosomes, and males have five X chromosomes and five Y chromosomes. Platypus egg cells all have five X chromosomes, whereas sperm cells can have either five X chromosomes or five Y chromosomes.
XY sex determination is found in other organisms, including insects such as the common fruit fly, and some plants. In some cases, it is the number of X chromosomes that determines sex rather than the presence of a Y chromosome. In the fruit fly, individuals with XY are male and individuals with XX are female; however, individuals with XXY or XXX can also be female, and individuals with a single X can be male.
ZW sex determination
In birds, which have a ZW sex-determination system, the W chromosome carries factors responsible for female development, and default development is male. In this case, ZZ individuals are male and ZW are female. It is the female gamete that determines the sex of the offspring. This system is used by birds, some fish, and some crustaceans.
The majority of butterflies and moths also have a ZW sex-determination system. Females can have Z, ZZW, and even ZZWW.
XO sex determination
In the XO sex-determination system, males have one X chromosome (XO) while females have two (XX). All other chromosomes in these diploid organisms are paired, but organisms may inherit one or two X chromosomes. This system is found in most arachnids, insects such as silverfish (Apterygota), dragonflies (Paleoptera) and grasshoppers (Exopterygota), and some nematodes, crustaceans, and gastropods.
In field crickets, for example, insects with a single X chromosome develop as male, while those with two develop as female.
In the nematode Caenorhabditis elegans, most worms are self-fertilizing hermaphrodites with an XX karyotype, but occasional abnormalities in chromosome inheritance can give rise to individuals with only one X chromosome—these XO individuals are fertile males (and half their offspring are male).
ZO sex determination
In the ZO sex-determination system, males have two Z chromosomes whereas females have one. This system is found in several species of moths.
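The chromosomal rules described in the preceding subsections can be summarised as simple presence or count checks. The sketch below encodes them for illustration only; it ignores exceptions noted above, such as X-counting in fruit flies and the platypus's multiple sex chromosomes.

```python
# Simplified classifiers for the genetic sex-determination systems described
# above. Real systems have exceptions (e.g., X-counting in fruit flies),
# which this illustrative sketch ignores.
def sex_xy(karyotype: str) -> str:
    # Mammalian-style: presence of a Y chromosome triggers male development.
    return "male" if "Y" in karyotype else "female"

def sex_zw(karyotype: str) -> str:
    # Bird-style: presence of a W chromosome triggers female development.
    return "female" if "W" in karyotype else "male"

def sex_xo(karyotype: str) -> str:
    # XO system: one X -> male, two X -> female.
    return "male" if karyotype.count("X") == 1 else "female"

def sex_zo(karyotype: str) -> str:
    # ZO system: two Z -> male, one Z -> female.
    return "male" if karyotype.count("Z") == 2 else "female"

print(sex_xy("XXY"))  # male, matching the mammalian example above
print(sex_zw("ZW"))   # female
print(sex_xo("XO"))   # male
print(sex_zo("ZZ"))   # male
```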
Environmental
For many species, sex is not determined by inherited traits, but instead by environmental factors such as temperature experienced during development or later in life.
In the fern Ceratopteris and other homosporous fern species, the default sex is hermaphrodite, but individuals which grow in soil that has previously supported hermaphrodites are influenced by the pheromone antheridiogen to develop as male. Larvae of the Bonelliidae can only develop as males when they encounter a female.
Sequential hermaphroditism
Some species can change sex over the course of their lifespan, a phenomenon called sequential hermaphroditism.
Teleost fishes are the only vertebrate lineage where sequential hermaphroditism occurs. In clownfish, smaller fish are male, and the dominant and largest fish in a group becomes female; when a dominant female is absent, then her partner changes sex from male to female. In many wrasses the opposite is true: the fish are initially female and become male when they reach a certain size.
Sequential hermaphroditism also occurs in plants such as Arisaema triphyllum.
Temperature-dependent sex determination
Many reptiles, including all crocodiles and most turtles, have temperature-dependent sex determination. In these species, the temperature experienced by the embryos during their development determines their sex.
In some turtles, for example, males are produced at lower temperatures than females; but Macroclemys females are produced at temperatures lower than 22 °C or above 28 °C, while males are produced in between those temperatures.
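For the Macroclemys pattern just described, the quoted thresholds translate directly into a small rule. The following is a minimal sketch using the 22 °C and 28 °C values from the text.

```python
def macroclemys_sex(incubation_temp_c: float) -> str:
    """Illustrative rule for Macroclemys temperature-dependent sex
    determination, using the thresholds quoted above: females develop
    below 22 °C or above 28 °C, males in between."""
    if incubation_temp_c < 22.0 or incubation_temp_c > 28.0:
        return "female"
    return "male"

for t in (20.0, 25.0, 30.0):
    print(t, "°C ->", macroclemys_sex(t))   # female, male, female
```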
Haplodiploidy
Certain insects, such as honey bees and ants, use a haplodiploid sex-determination system. Diploid bees and ants are generally female, and haploid individuals (which develop from unfertilized eggs) are male. This sex-determination system results in highly biased sex ratios, as the sex of offspring is determined by fertilization (arrhenotoky or pseudo-arrhenotoky resulting in males) rather than the assortment of chromosomes during meiosis.
Sex ratio
Sex differences
Anisogamy is the fundamental difference between male and female. Richard Dawkins has stated that it is possible to interpret all the differences between the sexes as stemming from this.
Sexual characteristics
Sexual dimorphism
In many animals and some plants, individuals of male and female sex differ in size and appearance, a phenomenon called sexual dimorphism. Sexual dimorphism in animals is often associated with sexual selection: competition between individuals of one sex for mates of the opposite sex. Other examples demonstrate that it is the preference of females that drives sexual dimorphism, as in the case of the stalk-eyed fly.
Sex differences in humans include a generally larger size and more body hair in men, while women have larger breasts, wider hips, and a higher body fat percentage. In other species, there may be differences in coloration or other features, and may be so pronounced that the different sexes may be mistaken for two entirely different taxa.
Females are the larger sex in a majority of animals. For instance, female southern black widow spiders are typically twice as long as the males. This size disparity may be associated with the cost of producing egg cells, which requires more nutrition than producing sperm: larger females are able to produce more eggs. In many other cases, the male of a species is larger than the female. Mammal species with extreme sexual size dimorphism, such as elephant seals, tend to have highly polygynous mating systems, presumably due to selection for success in competition with other males.
Sexual dimorphism can be extreme, with males, such as some anglerfish, living parasitically on the female. Some plant species also exhibit dimorphism in which the females are significantly larger than the males, such as in the moss genus Dicranum and the liverwort genus Sphaerocarpos. There is some evidence that, in these genera, the dimorphism may be tied to a sex chromosome, or to chemical signaling from females.
In birds, males often have a more colorful appearance and may have features (like the long tail of male peacocks) that would seem to put them at a disadvantage (e.g. bright colors would seem to make a bird more visible to predators). One proposed explanation for this is the handicap principle. This hypothesis argues that, by demonstrating he can survive with such handicaps, the male is advertising his genetic fitness to females—traits that will benefit daughters as well, who will not be encumbered with such handicaps.
Sex differences in behavior
The sexes of gonochoric species usually differ in behavior. In most animal species, females invest more in parental care, although in some species, such as some coucals, the males invest more. Females also tend to be choosier about whom they mate with, as in most bird species. Males tend to compete more than females for mating opportunities.
Distinction from gender
| Biology and health sciences | Biology | null |
26808 | https://en.wikipedia.org/wiki/Star | Star | A star is a luminous spheroid of plasma held together by self-gravity. The nearest star to Earth is the Sun. Many other stars are visible to the naked eye at night; their immense distances from Earth make them appear as fixed points of light. The most prominent stars have been categorised into constellations and asterisms, and many of the brightest stars have proper names. Astronomers have assembled star catalogues that identify the known stars and provide standardized stellar designations. The observable universe contains an estimated to stars. Only about 4,000 of these stars are visible to the naked eye—all within the Milky Way galaxy.
A star's life begins with the gravitational collapse of a gaseous nebula of material largely comprising hydrogen, helium, and trace heavier elements. Its total mass mainly determines its evolution and eventual fate. A star shines for most of its active life due to the thermonuclear fusion of hydrogen into helium in its core. This process releases energy that traverses the star's interior and radiates into outer space. At the end of a star's lifetime as a fusor, its core becomes a stellar remnant: a white dwarf, a neutron star, or—if it is sufficiently massive—a black hole.
Stellar nucleosynthesis in stars or their remnants creates almost all naturally occurring chemical elements heavier than lithium. Stellar mass loss or supernova explosions return chemically enriched material to the interstellar medium. These elements are then recycled into new stars. Astronomers can determine stellar properties—including mass, age, metallicity (chemical composition), variability, distance, and motion through space—by carrying out observations of a star's apparent brightness, spectrum, and changes in its position in the sky over time.
Stars can form orbital systems with other astronomical objects, as in planetary systems and star systems with two or more stars. When two such stars orbit closely, their gravitational interaction can significantly impact their evolution. Stars can form part of a much larger gravitationally bound structure, such as a star cluster or a galaxy.
Etymology
The word "star" ultimately derives from the Proto-Indo-European root "h₂stḗr" also meaning star, but further analyzable as h₂eh₁s- ("to burn", also the source of the word "ash") + -tēr (agentive suffix). Compare Latin , Greek aster, German . Some scholars believe the word is a borrowing from Akkadian "istar" (Venus). "Star" is cognate (shares the same root) with the following words: asterisk, asteroid, astral, constellation, Esther.
Observation history
Historically, stars have been important to civilizations throughout the world. They have been part of religious practices, divination rituals, mythology, used for celestial navigation and orientation, to mark the passage of seasons, and to define calendars.
Early astronomers recognized a difference between "fixed stars", whose position on the celestial sphere does not change, and "wandering stars" (planets), which move noticeably relative to the fixed stars over days or weeks. Many ancient astronomers believed that the stars were permanently affixed to a heavenly sphere and that they were immutable. By convention, astronomers grouped prominent stars into asterisms and constellations and used them to track the motions of the planets and the inferred position of the Sun. The motion of the Sun against the background stars (and the horizon) was used to create calendars, which could be used to regulate agricultural practices. The Gregorian calendar, currently used nearly everywhere in the world, is a solar calendar based on the angle of the Earth's rotational axis relative to its local star, the Sun.
The oldest accurately dated star chart was the result of ancient Egyptian astronomy in 1534 BC. The earliest known star catalogues were compiled by the ancient Babylonian astronomers of Mesopotamia in the late 2nd millennium BC, during the Kassite Period ().
The first star catalogue in Greek astronomy was created by Aristillus in approximately 300 BC, with the help of Timocharis. The star catalog of Hipparchus (2nd century BC) included 1,020 stars, and was used to assemble Ptolemy's star catalogue. Hipparchus is known for the discovery of the first recorded nova (new star). Many of the constellations and star names in use today derive from Greek astronomy.
Despite the apparent immutability of the heavens, Chinese astronomers were aware that new stars could appear. In 185 AD, they were the first to observe and write about a supernova, now known as SN 185. The brightest stellar event in recorded history was the SN 1006 supernova, which was observed in 1006 and written about by the Egyptian astronomer Ali ibn Ridwan and several Chinese astronomers. The SN 1054 supernova, which gave birth to the Crab Nebula, was also observed by Chinese and Islamic astronomers.
Medieval Islamic astronomers gave Arabic names to many stars that are still used today and they invented numerous astronomical instruments that could compute the positions of the stars. They built the first large observatory research institutes, mainly to produce Zij star catalogues. Among these, the Book of Fixed Stars (964) was written by the Persian astronomer Abd al-Rahman al-Sufi, who observed a number of stars, star clusters (including the Omicron Velorum and Brocchi's Clusters) and galaxies (including the Andromeda Galaxy). According to A. Zahoor, in the 11th century, the Persian polymath scholar Abu Rayhan Biruni described the Milky Way galaxy as a multitude of fragments having the properties of nebulous stars, and gave the latitudes of various stars during a lunar eclipse in 1019.
According to Josep Puig, the Andalusian astronomer Ibn Bajjah proposed that the Milky Way was made up of many stars that almost touched one another and appeared to be a continuous image due to the effect of refraction from sublunary material, citing his observation of the conjunction of Jupiter and Mars on 500 AH (1106/1107 AD) as evidence.
Early European astronomers such as Tycho Brahe identified new stars in the night sky (later termed novae), suggesting that the heavens were not immutable. In 1584, Giordano Bruno suggested that the stars were like the Sun, and may have other planets, possibly even Earth-like, in orbit around them, an idea that had been suggested earlier by the ancient Greek philosophers, Democritus and Epicurus, and by medieval Islamic cosmologists such as Fakhr al-Din al-Razi. By the following century, the idea of the stars being the same as the Sun was reaching a consensus among astronomers. To explain why these stars exerted no net gravitational pull on the Solar System, Isaac Newton suggested that the stars were equally distributed in every direction, an idea prompted by the theologian Richard Bentley.
The Italian astronomer Geminiano Montanari recorded observing variations in luminosity of the star Algol in 1667. Edmond Halley published the first measurements of the proper motion of a pair of nearby "fixed" stars, demonstrating that they had changed positions since the time of the ancient Greek astronomers Ptolemy and Hipparchus.
William Herschel was the first astronomer to attempt to determine the distribution of stars in the sky. During the 1780s, he established a series of gauges in 600 directions and counted the stars observed along each line of sight. From this, he deduced that the number of stars steadily increased toward one side of the sky, in the direction of the Milky Way core. His son John Herschel repeated this study in the southern hemisphere and found a corresponding increase in the same direction. In addition to his other accomplishments, William Herschel is noted for his discovery that some stars do not merely lie along the same line of sight, but are physical companions that form binary star systems.
The science of stellar spectroscopy was pioneered by Joseph von Fraunhofer and Angelo Secchi. By comparing the spectra of stars such as Sirius to the Sun, they found differences in the strength and number of their absorption lines—the dark lines in stellar spectra caused by the atmosphere's absorption of specific frequencies. In 1865, Secchi began classifying stars into spectral types. The modern version of the stellar classification scheme was developed by Annie J. Cannon during the early 1900s.
The first direct measurement of the distance to a star (61 Cygni at 11.4 light-years) was made in 1838 by Friedrich Bessel using the parallax technique. Parallax measurements demonstrated the vast separation of the stars in the heavens. Observation of double stars gained increasing importance during the 19th century. In 1834, Friedrich Bessel observed changes in the proper motion of the star Sirius and inferred a hidden companion. Edward Pickering discovered the first spectroscopic binary in 1899 when he observed the periodic splitting of the spectral lines of the star Mizar in a 104-day period. Detailed observations of many binary star systems were collected by astronomers such as Friedrich Georg Wilhelm von Struve and S. W. Burnham, allowing the masses of stars to be determined from computation of orbital elements. The first solution to the problem of deriving an orbit of binary stars from telescope observations was made by Felix Savary in 1827.
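The parallax technique mentioned above rests on a simple relation: a star's distance in parsecs is the reciprocal of its parallax angle in arcseconds. The sketch below applies it; the 0.286-arcsecond parallax is back-computed from the quoted 11.4 light-years and is used purely as an illustration, not as Bessel's actual measurement.

```python
# Distance from stellar parallax: d [parsec] = 1 / p [arcsec].
# 1 parsec is approximately 3.2616 light-years.
LY_PER_PARSEC = 3.2616

def distance_light_years(parallax_arcsec: float) -> float:
    """Convert a parallax angle (arcseconds) into a distance (light-years)."""
    return (1.0 / parallax_arcsec) * LY_PER_PARSEC

# A parallax of about 0.286 arcsec (an illustrative value back-computed
# from the 11.4 ly quoted above) corresponds to roughly 11.4 light-years.
print(f"{distance_light_years(0.286):.1f} light-years")
```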
The twentieth century saw increasingly rapid advances in the scientific study of stars. The photograph became a valuable astronomical tool. Karl Schwarzschild discovered that the color of a star and, hence, its temperature, could be determined by comparing the visual magnitude against the photographic magnitude. The development of the photoelectric photometer allowed precise measurements of magnitude at multiple wavelength intervals. In 1921 Albert A. Michelson made the first measurements of a stellar diameter using an interferometer on the Hooker telescope at Mount Wilson Observatory.
Important theoretical work on the physical structure of stars occurred during the first decades of the twentieth century. In 1913, the Hertzsprung-Russell diagram was developed, propelling the astrophysical study of stars. Successful models were developed to explain the interiors of stars and stellar evolution. Cecilia Payne-Gaposchkin first proposed that stars were made primarily of hydrogen and helium in her 1925 PhD thesis. The spectra of stars were further understood through advances in quantum physics. This allowed the chemical composition of the stellar atmosphere to be determined.
With the exception of rare events such as supernovae and supernova impostors, individual stars have primarily been observed in the Local Group, and especially in the visible part of the Milky Way (as demonstrated by the detailed star catalogues available for the Milky Way galaxy) and its satellites. Individual stars such as Cepheid variables have been observed in the M87 and M100 galaxies of the Virgo Cluster, as well as luminous stars in some other relatively nearby galaxies. With the aid of gravitational lensing, a single star (named Icarus) has been observed at 9 billion light-years away.
Designations
The concept of a constellation was known to exist during the Babylonian period. Ancient sky watchers imagined that prominent arrangements of stars formed patterns, and they associated these with particular aspects of nature or their myths. Twelve of these formations lay along the band of the ecliptic and these became the basis of astrology. Many of the more prominent individual stars were given names, particularly with Arabic or Latin designations.
As well as certain constellations and the Sun itself, individual stars have their own myths. To the Ancient Greeks, some "stars", known as planets (Greek πλανήτης (planētēs), meaning "wanderer"), represented various important deities, from which the names of the planets Mercury, Venus, Mars, Jupiter and Saturn were taken. (Uranus and Neptune were Greek and Roman gods, but neither planet was known in Antiquity because of their low brightness. Their names were assigned by later astronomers.)
Circa 1600, the names of the constellations were used to name the stars in the corresponding regions of the sky. The German astronomer Johann Bayer created a series of star maps and applied Greek letters as designations to the stars in each constellation. Later a numbering system based on the star's right ascension was invented and added to John Flamsteed's star catalogue in his book "Historia coelestis Britannica" (the 1712 edition), whereby this numbering system came to be called Flamsteed designation or Flamsteed numbering.
The internationally recognized authority for naming celestial bodies is the International Astronomical Union (IAU). The International Astronomical Union maintains the Working Group on Star Names (WGSN) which catalogs and standardizes proper names for stars. A number of private companies sell names of stars which are not recognized by the IAU, professional astronomers, or the amateur astronomy community. The British Library calls this an unregulated commercial enterprise, and the New York City Department of Consumer and Worker Protection issued a violation against one such star-naming company for engaging in a deceptive trade practice.
Units of measurement
Although stellar parameters can be expressed in SI units or Gaussian units, it is often most convenient to express mass, luminosity, and radii in solar units, based on the characteristics of the Sun. In 2015, the IAU defined a set of nominal solar values (defined as SI constants, without uncertainties) which can be used for quoting stellar parameters:
nominal solar luminosity =
nominal solar radius =
The solar mass was not explicitly defined by the IAU due to the large relative uncertainty of the Newtonian constant of gravitation G. Since the product of the Newtonian constant of gravitation and the solar mass together (GM☉) has been determined to much greater precision, the IAU defined the nominal solar mass parameter to be:
nominal solar mass parameter: GM☉ = /s2
The nominal solar mass parameter can be combined with the most recent (2014) CODATA estimate of the Newtonian constant of gravitation G to derive the solar mass to be approximately . Although the exact values for the luminosity, radius, mass parameter, and mass may vary slightly in the future due to observational uncertainties, the 2015 IAU nominal constants will remain the same SI values as they remain useful measures for quoting stellar parameters.
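The derivation just described, dividing the nominal solar mass parameter by an independent estimate of G, is straightforward arithmetic. The numerical values below (the IAU 2015 nominal GM☉ and the CODATA 2014 G) are commonly quoted figures supplied here only for illustration, since the table above omits them.

```python
# Deriving the solar mass from the nominal solar mass parameter and G.
# Values are commonly quoted figures (IAU 2015 nominal GM_sun, CODATA 2014 G),
# supplied here only to illustrate the calculation described above.
GM_SUN = 1.3271244e20      # m^3 s^-2, nominal solar mass parameter
G      = 6.67408e-11       # m^3 kg^-1 s^-2, Newtonian constant of gravitation

solar_mass = GM_SUN / G
print(f"Solar mass ≈ {solar_mass:.4e} kg")   # ≈ 1.9885e+30 kg
```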
Large lengths, such as the radius of a giant star or the semi-major axis of a binary star system, are often expressed in terms of the astronomical unit—approximately equal to the mean distance between the Earth and the Sun (150 million km or approximately 93 million miles). In 2012, the IAU defined the astronomical constant to be an exact length in meters: 149,597,870,700 m.
Formation and evolution
Stars condense from regions of space of higher matter density, yet those regions are less dense than within a vacuum chamber. These regions—known as molecular clouds—consist mostly of hydrogen, with about 23 to 28 percent helium and a few percent heavier elements. One example of such a star-forming region is the Orion Nebula. Most stars form in groups of dozens to hundreds of thousands of stars. Massive stars in these groups may powerfully illuminate those clouds, ionizing the hydrogen, and creating H II regions. Such feedback effects, from star formation, may ultimately disrupt the cloud and prevent further star formation.
All stars spend the majority of their existence as main sequence stars, fueled primarily by the nuclear fusion of hydrogen into helium within their cores. However, stars of different masses have markedly different properties at various stages of their development. The ultimate fate of more massive stars differs from that of less massive stars, as do their luminosities and the impact they have on their environment. Accordingly, astronomers often group stars by their mass:
Very low mass stars, with masses below , are fully convective and distribute helium evenly throughout the whole star while on the main sequence. Therefore, they never undergo shell burning and never become red giants. After exhausting their hydrogen they become helium white dwarfs and slowly cool. However, as the lifetimes of such stars exceed the current age of the universe, no such star has yet reached the white dwarf stage.
Low mass stars (including the Sun), with a mass between and ~ depending on composition, do become red giants as their core hydrogen is depleted and they begin to burn helium in core in a helium flash; they develop a degenerate carbon-oxygen core later on the asymptotic giant branch; they finally blow off their outer shell as a planetary nebula and leave behind their core in the form of a white dwarf.
Intermediate-mass stars, between ~ and ~, pass through evolutionary stages similar to low mass stars, but after a relatively short period on the red-giant branch they ignite helium without a flash and spend an extended period in the red clump before forming a degenerate carbon-oxygen core.
Massive stars generally have a minimum mass of ~. After exhausting the hydrogen at the core these stars become supergiants and go on to fuse elements heavier than helium. Many end their lives when their cores collapse and they explode as supernovae.
Star formation
The formation of a star begins with gravitational instability within a molecular cloud, caused by regions of higher density—often triggered by compression of clouds by radiation from massive stars, expanding bubbles in the interstellar medium, the collision of different molecular clouds, or the collision of galaxies (as in a starburst galaxy). When a region reaches a sufficient density of matter to satisfy the criteria for Jeans instability, it begins to collapse under its own gravitational force.
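The Jeans criterion referred to above can be made concrete with one common form of the Jeans mass. The coefficient varies between derivations, and the temperature, density, and mean molecular weight used below are typical illustrative values for a cold molecular cloud, not figures from this article.

```python
import math

# One common form of the Jeans mass for a uniform cloud:
#   M_J = (5 k T / (G mu m_H))^(3/2) * (3 / (4 pi rho))^(1/2)
# Illustrative inputs: T = 10 K, n = 1e4 molecules/cm^3, mu = 2.33.
k_B   = 1.380649e-23    # J/K
G     = 6.674e-11       # m^3 kg^-1 s^-2
m_H   = 1.6726e-27      # kg
M_SUN = 1.989e30        # kg

def jeans_mass(T, n_per_cm3, mu=2.33):
    rho = n_per_cm3 * 1e6 * mu * m_H                  # mass density, kg/m^3
    term1 = (5 * k_B * T / (G * mu * m_H)) ** 1.5
    term2 = math.sqrt(3.0 / (4.0 * math.pi * rho))
    return term1 * term2                              # kg

mj = jeans_mass(T=10.0, n_per_cm3=1e4)
print(f"Jeans mass ≈ {mj / M_SUN:.1f} solar masses")  # a few solar masses
```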
As the cloud collapses, individual conglomerations of dense dust and gas form "Bok globules". As a globule collapses and the density increases, the gravitational energy converts into heat and the temperature rises. When the protostellar cloud has approximately reached the stable condition of hydrostatic equilibrium, a protostar forms at the core. These pre-main-sequence stars are often surrounded by a protoplanetary disk and powered mainly by the conversion of gravitational energy. The period of gravitational contraction lasts about 10 million years for a star like the sun, up to 100 million years for a red dwarf.
Early stars of less than are called T Tauri stars, while those with greater mass are Herbig Ae/Be stars. These newly formed stars emit jets of gas along their axis of rotation, which may reduce the angular momentum of the collapsing star and result in small patches of nebulosity known as Herbig–Haro objects.
These jets, in combination with radiation from nearby massive stars, may help to drive away the surrounding cloud from which the star was formed.
Early in their development, T Tauri stars follow the Hayashi track—they contract and decrease in luminosity while remaining at roughly the same temperature. Less massive T Tauri stars follow this track to the main sequence, while more massive stars turn onto the Henyey track.
Most stars are observed to be members of binary star systems, and the properties of those binaries are the result of the conditions in which they formed. A gas cloud must lose its angular momentum in order to collapse and form a star. The fragmentation of the cloud into multiple stars distributes some of that angular momentum. The primordial binaries transfer some angular momentum by gravitational interactions during close encounters with other stars in young stellar clusters. These interactions tend to split apart more widely separated (soft) binaries while causing hard binaries to become more tightly bound. This produces the separation of binaries into their two observed population distributions.
Main sequence
Stars spend about 90% of their lifetimes fusing hydrogen into helium in high-temperature, high-pressure reactions in their cores. Such stars are said to be on the main sequence and are called dwarf stars. Starting at the zero-age main sequence, the proportion of helium in a star's core steadily increases, and the rate of nuclear fusion at the core slowly increases, as do the star's temperature and luminosity.
The Sun, for example, is estimated to have increased in luminosity by about 40% since it reached the main sequence 4.6 billion () years ago.
Every star generates a stellar wind of particles that causes a continual outflow of gas into space. For most stars, the mass lost is negligible. The Sun loses every year, or about 0.01% of its total mass over its entire lifespan. However, very massive stars can lose to each year, significantly affecting their evolution. Stars that begin with more than can lose over half their total mass while on the main sequence.
The time a star spends on the main sequence depends primarily on the amount of fuel it has and the rate at which it fuses it. The Sun is expected to live 10 billion () years. Massive stars consume their fuel very rapidly and are short-lived. Low mass stars consume their fuel very slowly. Stars less massive than , called red dwarfs, are able to fuse nearly all of their mass while stars of about can only fuse about 10% of their mass. The combination of their slow fuel-consumption and relatively large usable fuel supply allows low mass stars to last about one trillion () years; the most extreme of will last for about 12 trillion years. Red dwarfs become hotter and more luminous as they accumulate helium. When they eventually run out of hydrogen, they contract into a white dwarf and decline in temperature. Since the lifespan of such stars is greater than the current age of the universe (13.8 billion years), no stars under about are expected to have moved off the main sequence.
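A rough scaling often used to capture the mass dependence described above is t ≈ 10 Gyr × (M/M☉)^−2.5, which follows from taking lifetime proportional to fuel divided by luminosity with L ∝ M^3.5. This relation is an approximation not stated in the article; it illustrates why massive stars are short-lived, but understates red-dwarf lifetimes because, as noted above, the lowest-mass stars can fuse nearly all of their hydrogen.

```python
# Rough main-sequence lifetime scaling (an approximation, not from this
# article): t ≈ 10 Gyr * (M / M_sun)^(-2.5), i.e. lifetime ∝ fuel / luminosity
# with L ∝ M^3.5. It understates red-dwarf lifetimes, which the text notes
# can reach about a trillion years because they burn nearly all their mass.
def ms_lifetime_gyr(mass_solar: float) -> float:
    return 10.0 * mass_solar ** -2.5

for m in (0.5, 1.0, 10.0, 25.0):
    print(f"{m:>4} M_sun -> ~{ms_lifetime_gyr(m):,.2f} Gyr")
```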
Besides mass, the elements heavier than helium can play a significant role in the evolution of stars. Astronomers label all elements heavier than helium "metals", and call the chemical concentration of these elements in a star, its metallicity. A star's metallicity can influence the time the star takes to burn its fuel, and controls the formation of its magnetic fields, which affects the strength of its stellar wind. Older, population II stars have substantially less metallicity than the younger, population I stars due to the composition of the molecular clouds from which they formed. Over time, such clouds become increasingly enriched in heavier elements as older stars die and shed portions of their atmospheres.
Post–main sequence
As stars of at least exhaust the supply of hydrogen at their core, they start to fuse hydrogen in a shell surrounding the helium core. The outer layers of the star expand and cool greatly as they transition into a red giant. In some cases, they will fuse heavier elements at the core or in shells around the core. As the stars expand, they throw part of their mass, enriched with those heavier elements, into the interstellar environment, to be recycled later as new stars. In about 5 billion years, when the Sun enters the helium burning phase, it will expand to a maximum radius of roughly , 250 times its present size, and lose 30% of its current mass. | Physical sciences | Astronomy | null |
26826 | https://en.wikipedia.org/wiki/Sodium | Sodium | Sodium is a chemical element; it has symbol Na (from Neo-Latin ) and atomic number 11. It is a soft, silvery-white, highly reactive metal. Sodium is an alkali metal, being in group 1 of the periodic table. Its only stable isotope is 23Na. The free metal does not occur in nature and must be prepared from compounds. Sodium is the sixth most abundant element in the Earth's crust and exists in numerous minerals such as feldspars, sodalite, and halite (NaCl). Many salts of sodium are highly water-soluble: sodium ions have been leached by the action of water from the Earth's minerals over eons, and thus sodium and chlorine are the most common dissolved elements by weight in the oceans.
Sodium was first isolated by Humphry Davy in 1807 by the electrolysis of sodium hydroxide. Among many other useful sodium compounds, sodium hydroxide (lye) is used in soap manufacture, and sodium chloride (edible salt) is a de-icing agent and a nutrient for animals including humans.
Sodium is an essential element for all animals and some plants. Sodium ions are the major cation in the extracellular fluid (ECF) and as such are the major contributor to the ECF osmotic pressure. Animal cells actively pump sodium ions out of the cells by means of the sodium–potassium pump, an enzyme complex embedded in the cell membrane, in order to maintain a roughly ten-times higher concentration of sodium ions outside the cell than inside. In nerve cells, the sudden flow of sodium ions into the cell through voltage-gated sodium channels enables transmission of a nerve impulse in a process called the action potential.
Characteristics
Physical
Sodium at standard temperature and pressure is a soft silvery metal that combines with oxygen in the air, forming sodium oxides. Bulk sodium is usually stored in oil or an inert gas. Sodium metal can be easily cut with a knife. It is a good conductor of electricity and heat. Due to its low atomic mass and large atomic radius, sodium is the third-least dense of all elemental metals and is one of only three metals that can float on water, the other two being lithium and potassium.
The melting (98 °C) and boiling (883 °C) points of sodium are lower than those of lithium but higher than those of the heavier alkali metals potassium, rubidium, and caesium, following periodic trends down the group. These properties change dramatically at elevated pressures: at 1.5 Mbar, the color changes from silvery metallic to black; at 1.9 Mbar the material becomes transparent with a red color; and at 3 Mbar, sodium is a clear and transparent solid. All of these high-pressure allotropes are insulators and electrides.
In a flame test, sodium and its compounds glow yellow because the excited 3s electrons of sodium emit a photon when they fall from 3p to 3s; the wavelength of this photon corresponds to the D line at about 589.3 nm. Spin-orbit interactions involving the electron in the 3p orbital split the D line into two, at 589.0 and 589.6 nm; hyperfine structures involving both orbitals cause many more lines.
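As an illustrative calculation (not part of the original article), the photon energy corresponding to the D line follows directly from its wavelength; a minimal Python sketch:
# Photon energy of the sodium D line (illustrative; constants rounded)
h = 6.626e-34          # Planck constant, J·s
c = 2.998e8            # speed of light, m/s
wavelength = 589.3e-9  # sodium D line, m
energy_J = h * c / wavelength
energy_eV = energy_J / 1.602e-19
print(round(energy_eV, 2))  # about 2.1 eV, the energy of the 3p to 3s transition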
Isotopes
Twenty isotopes of sodium are known, but only 23Na is stable. 23Na is created in the carbon-burning process in stars by fusing two carbon atoms together; this requires temperatures above 600 megakelvins and a star of at least three solar masses. Two radioactive, cosmogenic isotopes are the byproduct of cosmic ray spallation: 22Na has a half-life of 2.6 years and 24Na, a half-life of 15 hours; all other isotopes have a half-life of less than one minute.
Two nuclear isomers have been discovered, the longer-lived one being 24mNa with a half-life of around 20.2 milliseconds. Acute neutron radiation, as from a nuclear criticality accident, converts some of the stable 23Na in human blood to 24Na; the neutron radiation dosage of a victim can be calculated by measuring the concentration of 24Na relative to 23Na.
Chemistry
Sodium atoms have 11 electrons, one more than the stable configuration of the noble gas neon. The first and second ionization energies are 495.8 kJ/mol and 4562 kJ/mol, respectively. As a result, sodium usually forms ionic compounds involving the Na+ cation.
Metallic sodium
Metallic sodium is generally less reactive than potassium and more reactive than lithium. Sodium metal is highly reducing, with the standard reduction potential for the Na+/Na couple being −2.71 volts, though potassium and lithium have even more negative potentials.
Salts and oxides
Sodium compounds are of immense commercial importance, being particularly central to industries producing glass, paper, soap, and textiles. The most important sodium compounds are table salt (NaCl), soda ash (Na2CO3), baking soda (NaHCO3), caustic soda (NaOH), sodium nitrate (NaNO3), di- and tri-sodium phosphates, sodium thiosulfate (Na2S2O3·5H2O), and borax (Na2B4O7·10H2O). In compounds, sodium is usually ionically bonded to water and anions and is viewed as a hard Lewis acid.
Most soaps are sodium salts of fatty acids. Sodium soaps have a higher melting temperature (and seem "harder") than potassium soaps.
Like all the alkali metals, sodium reacts exothermically with water. The reaction produces caustic soda (sodium hydroxide) and flammable hydrogen gas. When burned in air, it forms primarily sodium peroxide with some sodium oxide.
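For clarity, the reactions described above can be written as balanced equations (standard chemistry, added here rather than quoted from the source):
2 Na + 2 H2O → 2 NaOH + H2
2 Na + O2 → Na2O2 (with some 4 Na + O2 → 2 Na2O)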
Aqueous solutions
Sodium tends to form water-soluble compounds, such as halides, sulfates, nitrates, carboxylates and carbonates. The main aqueous species are the aquo complexes [Na(H2O)n]+, where n = 4–8; with n = 6 indicated from X-ray diffraction data and computer simulations.
Direct precipitation of sodium salts from aqueous solutions is rare because sodium salts typically have a high affinity for water. An exception is sodium bismuthate (NaBiO3), which is insoluble in cold water and decomposes in hot water. Because of the high solubility of its compounds, sodium salts are usually isolated as solids by evaporation or by precipitation with an organic antisolvent, such as ethanol; for example, only 0.35 g/L of sodium chloride will dissolve in ethanol. A crown ether such as 15-crown-5 may be used as a phase-transfer catalyst.
Sodium content of samples is determined by atomic absorption spectrophotometry or by potentiometry using ion-selective electrodes.
Electrides and sodides
Like the other alkali metals, sodium dissolves in ammonia and some amines to give deeply colored solutions; evaporation of these solutions leaves a shiny film of metallic sodium. The solutions contain the coordination complex [Na(NH3)6]+, with the positive charge counterbalanced by electrons as anions; cryptands permit the isolation of these complexes as crystalline solids. Sodium forms complexes with crown ethers, cryptands and other ligands.
For example, 15-crown-5 has a high affinity for sodium because the cavity size of 15-crown-5 is 1.7–2.2 Å, which is enough to fit the sodium ion (1.9 Å). Cryptands, like crown ethers and other ionophores, also have a high affinity for the sodium ion; derivatives of the alkalide Na− are obtainable by the addition of cryptands to solutions of sodium in ammonia via disproportionation.
Organosodium compounds
Many organosodium compounds have been prepared. Because of the high polarity of the C-Na bonds, they behave like sources of carbanions (salts with organic anions). Some well-known derivatives include sodium cyclopentadienide (NaC5H5) and trityl sodium ((C6H5)3CNa). Sodium naphthalene, Na+[C10H8•]−, a strong reducing agent, forms upon mixing Na and naphthalene in ethereal solutions.
Intermetallic compounds
Sodium forms alloys with many metals, such as potassium, calcium, lead, and the group 11 and 12 elements. Sodium and potassium form KNa2 and NaK. NaK is 40–90% potassium and it is liquid at ambient temperature. It is an excellent thermal and electrical conductor. Sodium-calcium alloys are by-products of the electrolytic production of sodium from a binary salt mixture of NaCl-CaCl2 and ternary mixture NaCl-CaCl2-BaCl2. Calcium is only partially miscible with sodium, and the 1–2% of it dissolved in the sodium obtained from said mixtures can be precipitated by cooling to 120 °C and filtering.
In a liquid state, sodium is completely miscible with lead. There are several methods to make sodium-lead alloys. One is to melt them together and another is to deposit sodium electrolytically on molten lead cathodes. NaPb3, NaPb, Na9Pb4, Na5Pb2, and Na15Pb4 are some of the known sodium-lead alloys. Sodium also forms alloys with gold (NaAu2) and silver (NaAg2). Group 12 metals (zinc, cadmium and mercury) are known to make alloys with sodium. NaZn13 and NaCd2 are alloys of zinc and cadmium. Sodium and mercury form NaHg, NaHg4, NaHg2, Na3Hg2, and Na3Hg.
History
Because of its importance in human health, salt has long been an important commodity. In medieval Europe, a compound of sodium with the Latin name of sodanum was used as a headache remedy. The name sodium is thought to originate from the Arabic suda, meaning headache, as the headache-alleviating properties of sodium carbonate or soda were well known in early times.
Although sodium, sometimes called soda, had long been recognized in compounds, the metal itself was not isolated until 1807 by Sir Humphry Davy through the electrolysis of sodium hydroxide. In 1809, the German physicist and chemist Ludwig Wilhelm Gilbert proposed the names Natronium for Humphry Davy's "sodium" and Kalium for Davy's "potassium".
The chemical abbreviation for sodium was first published in 1814 by Jöns Jakob Berzelius in his system of atomic symbols, and is an abbreviation of the element's Neo-Latin name natrium, which refers to the Egyptian natron, a natural mineral salt mainly consisting of hydrated sodium carbonate. Natron historically had several important industrial and household uses, later eclipsed by other sodium compounds.
Sodium imparts an intense yellow color to flames. As early as 1860, Kirchhoff and Bunsen noted the high sensitivity of a sodium flame test, and stated in Annalen der Physik und Chemie:
In a corner of our 60 m3 room farthest away from the apparatus, we exploded 3 mg of sodium chlorate with milk sugar while observing the nonluminous flame before the slit. After a while, it glowed a bright yellow and showed a strong sodium line that disappeared only after 10 minutes. From the weight of the sodium salt and the volume of air in the room, we easily calculate that one part by weight of air could not contain more than 1/20 millionth weight of sodium.
Occurrence
The Earth's crust contains 2.27% sodium, making it the sixth most abundant element on Earth and the fourth most abundant metal, behind aluminium, iron, calcium, and magnesium and ahead of potassium. Sodium's estimated oceanic abundance is 10.8 grams per liter. Because of its high reactivity, it is never found as a pure element. It is found in many minerals, some very soluble, such as halite and natron, others much less soluble, such as amphibole and zeolite. The insolubility of certain sodium minerals such as cryolite and feldspar arises from their polymeric anions, which in the case of feldspar is a polysilicate. In the universe, sodium is the 15th most abundant element with a 20,000 parts-per-billion abundance, making sodium 0.002% of the total atoms in the universe.
Astronomical observations
Atomic sodium has a very strong spectral line in the yellow-orange part of the spectrum (the same line as is used in sodium-vapour street lights). This appears as an absorption line in many types of stars, including the Sun. The line was first studied in 1814 by Joseph von Fraunhofer during his investigation of the lines in the solar spectrum, now known as the Fraunhofer lines. Fraunhofer named it the "D" line, although it is now known to actually be a group of closely spaced lines split by a fine and hyperfine structure.
The strength of the D line allows its detection in many other astronomical environments. In stars, it is seen in any whose surfaces are cool enough for sodium to exist in atomic form (rather than ionised). This corresponds to stars of roughly F-type and cooler. Many other stars appear to have a sodium absorption line, but this is actually caused by gas in the foreground interstellar medium. The two can be distinguished via high-resolution spectroscopy, because interstellar lines are much narrower than those broadened by stellar rotation.
Sodium has also been detected in numerous Solar System environments, including the exospheres of Mercury and the Moon, and numerous other bodies. Some comets have a sodium tail, which was first detected in observations of Comet Hale–Bopp in 1997. Sodium has even been detected in the atmospheres of some extrasolar planets via transit spectroscopy.
Commercial production
About 100,000 tonnes of metallic sodium, employed in rather specialized applications, are produced annually. Metallic sodium was first produced commercially in the late nineteenth century by carbothermal reduction of sodium carbonate at 1100 °C, as the first step of the Deville process for the production of aluminium:
Na2CO3 + 2 C → 2 Na + 3 CO
The high demand for aluminium created the need for the production of sodium. The introduction of the Hall–Héroult process for the production of aluminium by electrolysing a molten salt bath ended the need for large quantities of sodium. A related process based on the reduction of sodium hydroxide was developed in 1886.
Sodium is now produced commercially through the electrolysis of molten sodium chloride (common salt), based on a process patented in 1924. This is done in a Downs cell in which the NaCl is mixed with calcium chloride to lower the melting point below 700 °C. As calcium is less electropositive than sodium, no calcium will be deposited at the cathode. This method is less expensive than the previous Castner process (the electrolysis of sodium hydroxide).
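As a back-of-the-envelope sketch of the electrolysis arithmetic (an illustrative calculation assuming 100% current efficiency, which a real Downs cell does not achieve), Faraday's law gives the sodium deposited per unit charge:
# Sodium deposited at the cathode per ampere-hour (Na+ + e- -> Na), ideal case
FARADAY = 96485.0        # coulombs per mole of electrons
MOLAR_MASS_NA = 22.99    # g/mol
charge = 1.0 * 3600      # one ampere flowing for one hour, in coulombs
mass_na = charge * MOLAR_MASS_NA / FARADAY
print(round(mass_na, 2))  # about 0.86 g of sodium per ampere-hour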
If sodium of high purity is required, it can be distilled once or several times.
The market for sodium is volatile due to the difficulty in its storage and shipping; it must be stored under a dry inert gas atmosphere or anhydrous mineral oil to prevent the formation of a surface layer of sodium oxide or sodium superoxide.
Uses
Though metallic sodium has some important uses, the major applications for sodium use compounds; millions of tons of sodium chloride, hydroxide, and carbonate are produced annually. Sodium chloride is extensively used for anti-icing and de-icing and as a preservative; examples of the uses of sodium bicarbonate include baking, as a raising agent, and sodablasting. Along with potassium, many important medicines have sodium added to improve their bioavailability; though potassium is the better ion in most cases, sodium is chosen for its lower price and atomic weight. Sodium hydride is used as a base for various reactions (such as the aldol reaction) in organic chemistry.
Metallic sodium is used mainly for the production of sodium borohydride, sodium azide, indigo, and triphenylphosphine. A once-common use was the making of tetraethyllead and titanium metal; because of the move away from TEL and new titanium production methods, the production of sodium declined after 1970. Sodium is also used as an alloying metal, an anti-scaling agent, and as a reducing agent for metals when other materials are ineffective.
Note that the free element is not used as an anti-scaling agent; rather, ions in the water are exchanged for sodium ions. Sodium plasma ("vapor") lamps are often used for street lighting in cities, shedding light that ranges from yellow-orange to peach as the pressure increases. By itself or with potassium, sodium is a desiccant; it gives an intense blue coloration with benzophenone when the desiccate is dry.
In organic synthesis, sodium is used in various reactions such as the Birch reduction, and the sodium fusion test is conducted to qualitatively analyse compounds. Sodium reacts with alcohols and gives alkoxides, and when sodium is dissolved in ammonia solution, it can be used to reduce alkynes to trans-alkenes. Lasers emitting light at the sodium D line are used to create artificial laser guide stars that assist in the adaptive optics for land-based visible-light telescopes.
Heat transfer
Liquid sodium is used as a heat transfer fluid in sodium-cooled fast reactors because it has the high thermal conductivity and low neutron absorption cross section required to achieve a high neutron flux in the reactor. The high boiling point of sodium allows the reactor to operate at ambient (normal) pressure, but drawbacks include its opacity, which hinders visual maintenance, and its strongly reducing properties. Sodium will explode in contact with water, although it will only burn gently in air.
Radioactive sodium-24 may be produced by neutron bombardment during operation, posing a slight radiation hazard; the radioactivity stops within a few days after removal from the reactor. If a reactor needs to be shut down frequently, sodium-potassium alloy (NaK) is used. Because NaK is a liquid at room temperature, the coolant does not solidify in the pipes. The pyrophoricity of the NaK means extra precautions must be taken to prevent and detect leaks.
Another heat transfer application of sodium is in poppet valves in high-performance internal combustion engines; the valve stems are partially filled with sodium and work as a heat pipe to cool the valves.
Biological role
Biological role in humans
In humans, sodium is an essential mineral that regulates blood volume, blood pressure, osmotic equilibrium and pH. The minimum physiological requirement for sodium is estimated to range from about 120 milligrams per day in newborns to 500 milligrams per day over the age of 10.
Diet
Sodium chloride, also known as 'edible salt' or 'table salt' (chemical formula ), is the principal source of sodium () in the diet and is used as seasoning and preservative in such commodities as pickled preserves and jerky. For Americans, most sodium chloride comes from processed foods. Other sources of sodium are its natural occurrence in food and such food additives as monosodium glutamate (MSG), sodium nitrite, sodium saccharin, baking soda (sodium bicarbonate), and sodium benzoate.
The U.S. Institute of Medicine set its tolerable upper intake level for sodium at 2.3 grams per day, but the average person in the United States consumes 3.4 grams per day. The American Heart Association recommends no more than 1.5 g of sodium per day.
The Committee to Review the Dietary Reference Intakes for Sodium and Potassium, which is part of the National Academies of Sciences, Engineering, and Medicine, has determined that there is not enough evidence from research studies to establish Estimated Average Requirement (EAR) and Recommended Dietary Allowance (RDA) values for sodium. As a result, the committee has established Adequate Intake (AI) levels instead: 110 mg/day for infants aged 0–6 months and 370 mg/day for 7–12 months; 800 mg/day for children aged 1–3 years and 1,000 mg/day for 4–8 years; 1,200 mg/day for adolescents aged 9–13 years and 1,500 mg/day for 14–18 years; and 1,500 mg/day for adults regardless of age or sex.
Sodium chloride () contains approximately 39.34% of its total mass as elemental sodium (). This means that of sodium chloride contains approximately of elemental sodium. For example, to find out how much sodium chloride contains 1500 mg of elemental sodium (the value of 1500 mg sodium is the adequate intake (AI) for an adult), we can use the proportion:
393.4 mg Na : 1000 mg NaCl = 1500 mg Na : x mg NaCl
Solving for x gives us the amount of sodium chloride that contains 1500 mg of elemental sodium
x = (1500 mg Na × 1000 mg NaCl) / 393.4 mg Na = 3812.91 mg
This means that 3812.91 mg of sodium chloride contains 1500 mg of elemental sodium.
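The same conversion is easy to express in code (a minimal sketch; the 39.34% sodium-by-mass figure comes from the paragraph above, and the function name is purely illustrative):
# Mass of sodium chloride that supplies a given mass of elemental sodium
SODIUM_MASS_FRACTION = 0.3934  # fraction of NaCl mass that is sodium

def nacl_for_sodium(sodium_mg):
    """Return the mass of NaCl (mg) containing sodium_mg of elemental sodium."""
    return sodium_mg / SODIUM_MASS_FRACTION

print(round(nacl_for_sodium(1500)))  # about 3813 mg of NaCl for the 1500 mg adult AI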
High sodium consumption
High sodium consumption is unhealthy, and can lead to alteration in the mechanical performance of the heart. High sodium consumption is also associated with chronic kidney disease, high blood pressure, cardiovascular diseases, and stroke.
High blood pressure
There is a strong correlation between higher sodium intake and higher blood pressure. Studies have found that lowering sodium intake by 2 g per day tends to lower systolic blood pressure by about two to four mm Hg. It has been estimated that such a decrease in sodium intake would lead to 9–17% fewer cases of hypertension.
Hypertension causes 7.6 million premature deaths worldwide each year. Edible salt is about 39.3% sodium by mass, the rest being chlorine and trace chemicals; thus, 2.3 g of sodium corresponds to about 5.9 g of salt (5.3 ml, or about one US teaspoon).
One scientific review found that people with or without hypertension who excreted less than 3 grams of sodium per day in their urine (and therefore were taking in less than 3 g/d) had a higher risk of death, stroke, or heart attack than those excreting 4 to 5 grams per day. Levels of 7 g per day or more in people with hypertension were associated with higher mortality and cardiovascular events, but this was not found to be true for people without hypertension. The US FDA states that adults with hypertension and prehypertension should reduce daily sodium intake to 1.5 g.
Physiology
The renin–angiotensin system regulates the amount of fluid and sodium concentration in the body. Reduction of blood pressure and sodium concentration in the kidney results in the production of renin, which in turn produces aldosterone and angiotensin, which stimulates the reabsorption of sodium back into the bloodstream. When the concentration of sodium increases, the production of renin decreases, and the sodium concentration returns to normal. The sodium ion (Na+) is an important electrolyte in neuron function, and in osmoregulation between cells and the extracellular fluid. This is accomplished in all animals by Na+/K+-ATPase, an active transporter pumping ions against the gradient, and sodium/potassium channels. The difference in extracellular and intracellular ion concentration, maintained by the sodium-potassium pump, produces electrical signals in the form of action potentials that support cardiac muscle contraction and promote long-distance communication between neurons. Sodium is the most prevalent metallic ion in extracellular fluid.
In humans, unusually low or high sodium levels in the blood are recognized in medicine as hyponatremia and hypernatremia, respectively. These conditions may be caused by genetic factors, ageing, or prolonged vomiting or diarrhea.
Biological role in plants
In C4 plants, sodium is a micronutrient that aids metabolism, specifically in regeneration of phosphoenolpyruvate and synthesis of chlorophyll. In others, it substitutes for potassium in several roles, such as maintaining turgor pressure and aiding in the opening and closing of stomata. Excess sodium in the soil can limit the uptake of water by decreasing the water potential, which may result in plant wilting; excess concentrations in the cytoplasm can lead to enzyme inhibition, which in turn causes necrosis and chlorosis.
In response, some plants have developed mechanisms to limit sodium uptake in the roots, to store it in cell vacuoles, and restrict salt transport from roots to leaves. Excess sodium may also be stored in old plant tissue, limiting the damage to new growth. Halophytes have adapted to be able to flourish in sodium rich environments.
Safety and precautions
Sodium forms flammable hydrogen and caustic sodium hydroxide on contact with water; ingestion and contact with moisture on skin, eyes or mucous membranes can cause severe burns. Sodium spontaneously explodes in the presence of water due to the formation of hydrogen (highly explosive) and sodium hydroxide (which dissolves in the water, liberating more surface). However, sodium exposed to air and ignited or reaching autoignition (reported to occur when a molten pool of sodium reaches about ) displays a relatively mild fire.
In the case of massive (non-molten) pieces of sodium, the reaction with oxygen eventually becomes slow due to formation of a protective layer. Fire extinguishers based on water accelerate sodium fires, and those based on carbon dioxide and bromochlorodifluoromethane should not be used on sodium fires. Metal fires are Class D, but not all Class D extinguishers are effective when used to extinguish sodium fires. An effective extinguishing agent for sodium fires is Met-L-X. Other effective agents include Lith-X, which has graphite powder and an organophosphate flame retardant, and dry sand.
Sodium fires are prevented in nuclear reactors by isolating sodium from oxygen with surrounding pipes containing inert gas. Pool-type sodium fires are prevented using diverse design measures called catch pan systems. They collect leaking sodium into a leak-recovery tank where it is isolated from oxygen.
Liquid sodium fires are more dangerous to handle than solid sodium fires, particularly if there is insufficient experience with the safe handling of molten sodium. In a technical report for the United States Fire Administration, R. J. Gordon writes (emphasis in original)
Stramenopile
The stramenopiles, also called heterokonts, are a clade of organisms distinguished by the presence of stiff tripartite external hairs. In most species, the hairs are attached to flagella, in some they are attached to other areas of the cellular surface, and in some they have been secondarily lost (in which case relatedness to stramenopile ancestors is evident from other shared cytological features or from genetic similarity). Stramenopiles represent one of the three major clades in the SAR supergroup, along with Alveolata and Rhizaria.
Stramenopiles are eukaryotes; most are single-celled, but some are multicellular including some large seaweeds, the brown algae. The group includes a variety of algal protists, heterotrophic flagellates, opalines and closely related proteromonad flagellates (all endobionts in other organisms); the actinophryid Heliozoa, and oomycetes. The tripartite hairs characteristic of the group have been lost in some of the included taxa – for example in most diatoms.
Many stramenopiles are unicellular flagellates, and most others produce flagellated cells at some point in their lifecycles, for instance as gametes or zoospores. Most flagellated heterokonts have two flagella; the anterior flagellum has one or two rows of stiff hairs or mastigonemes, and the posterior flagellum is without such embellishments, being smooth, usually shorter, or in a few cases not projecting from the cell.
The term 'heterokont' is used both as an adjective – indicating that a cell has two dissimilar flagella – and as the name of a taxon. The groups included in that taxon have however varied widely, creating the 'heterokont problem', now resolved by the definition of the stramenopiles.
History
The term 'stramenopile' was introduced by D. J. Patterson in 1989, defining a group that overlapped with the ambiguously defined heterokonts. The name "stramenopile" has been discussed by J. C. David.
The heterokont problem
The term 'heterokont' is used as both an adjective – indicating that a cell has two dissimilar flagella – and as the name of a taxon. The taxon 'Heterokontae' was introduced in 1899 by Alexander Luther for algae that are now considered the Xanthophyceae. But the same term was used for other groupings of algae. For example, in 1956, Copeland used it to include the xanthophytes (using the name Vaucheriacea), a group that included what became known as the chrysophytes, the silicoflagellates, and the hyphochytrids. Copeland also included the unrelated collar flagellates (as the choanoflagellates) in which he placed the bicosoecids. He also included the not-closely related haptophytes. The consequence of associating multiple concepts to the taxon 'heterokont' is that the meaning of 'heterokont' can only be made clear by making reference to its usage: Heterokontae sensu Luther 1899; Heterokontae sensu Copeland 1956, etc. This contextual clarification is rare, such that when the taxon name is used, it is unclear how it should be understood. The term 'heterokont' has lost its usefulness in critical discussions about the identity, nature, character and relatedness of the group. The term 'stramenopile' sought to identify a clade (monophyletic and holophyletic lineage) using the approach developed by transformed cladists of pointing to a defining innovative characteristic or apomorphy.
Over time, the scope of application has changed, especially when in the 1970s ultrastructural studies revealed greater diversity among the algae with chromoplasts (chlorophylls a and c) than had previously been recognized. At the same time, a protistological perspective was replacing the 19th century one based on the division of unicellular eukaryotes into animals and plants. One consequence was that an array of heterotrophic organisms, many not previously considered as 'heterokonts', were seen as related to the 'core heterokonts' (those having anterior flagella with stiff hairs). Newly recognized relatives included the parasitic opalines, proteromonads, and actinophryid Heliozoa. They joined other heterotrophic protists, such as bicosoecids, labyrinthulids, and oomycete fungi, that were included by some as heterokonts and excluded by others. Rather than continue to use a name whose meaning had changed over time and was hence ambiguous, the name 'stramenopile' was introduced to refer to the clade of protists that had tripartite stiff (usually flagellar) hairs and all their descendants. Molecular studies confirm that the genes that code for the proteins of these hairs are exclusive to stramenopiles.
Characteristics
The presumed apomorphy of tripartite flagellar hairs in stramenopiles is well characterized. The basal part of the hair is flexible and inserts into the cell membrane; the second part is dominated by a long stiff tube (the 'straw' or 'stramen'); and finally the tube is tipped by many delicate hairs called mastigonemes. The proteins that code for the mastigonemes appear to be exclusive to the stramenopile clade, and are present even in taxa (such as diatoms) that no longer have such hairs.
Most stramenopiles have two flagella near the apex. They are usually supported by four microtubule roots in a distinctive pattern. There is a transitional helix inside the flagellum where the beating axoneme with its distinctive geometric pattern of nine peripheral doublets around two central microtubules changes into the nine-triplet structure of the basal body.
Plastids
Many stramenopiles have plastids which enable them to photosynthesise, using light to make their own food. Those plastids are coloured off-green, orange, golden or brown because of the presence of chlorophyll a, chlorophyll c, and fucoxanthin. This form of plastid is called a stramenochrome or chromoplast. The most significant autotrophic stramenopiles are the brown algae (wracks and many other seaweeds), and the diatoms. The latter are among the most significant primary producers in marine and freshwater ecosystems. Most molecular analyses suggest that the most basal stramenopiles lacked plastids and were accordingly colourless heterotrophs, feeding on other organisms. This implies that the stramenopiles arose as heterotrophs, diversified, and then some of them acquired chromoplasts. Some lineages (such as the axodine lineage that included the chromophytic pedinellids, colourless ciliophryids, and colourless actinophryid heliozoa) have secondarily reverted to heterotrophy.
Ecology
Some stramenopiles are significant as autotrophs and as heterotrophs in natural ecosystems; others are parasitic.
Blastocystis is a gastrointestinal parasite of humans;
opalines and proteromonads live in the intestines of cold-blooded vertebrates and have been described as parasitic;
oomycetes include some significant plant pathogens such as the cause of potato blight, Phytophthora infestans.
Diatoms are major contributors to global carbon cycles because they are the most important autotrophs in most marine habitats.
The brown algae, including familiar seaweeds like wrack and kelp, are major autotrophs of the intertidal and subtidal marine habitats.
Some of the bacterivorous stramenopiles, such as Cafeteria, are common and widespread consumers of bacteria, and thus play a major role in recycling carbon and nutrients within microbial food webs.
Evolution
External
Stramenopiles are most closely related to alveolates and Rhizaria, all of which have tubular mitochondrial cristae and collectively form the SAR supergroup, whose name is formed from their initials. The ancestor of the SAR supergroup appears to have captured a unicellular photosynthetic red alga, and many stramenopiles, as well as members of other SAR groups such as the Rhizaria, still have plastids which retain the double membrane of the red alga and a double membrane surrounding it, for a total of four membranes. In addition, species of Telonemia, the sister group to SAR, exhibit heterokont flagella with tripartite mastigonemes, implying a more ancient origin of stramenopile characteristics.
Internal
The following cladogram summarizes the evolutionary relationships between stramenopiles. The phylogenetic relationships of Bigyra vary greatly from one analysis to the next: it has been recovered as either monophyletic or paraphyletic. When paraphyletic, the branching order of the bigyran groups also varies: in some studies, Sagenista is the most basal-branching clade, while in others Opalozoa is the most basal. Nonetheless, Platysulcea is consistently recovered as the sister clade to all other stramenopiles. In addition, a flagellate species discovered in 2023, Kaonashia insperata, remains in an uncertain phylogenetic position, but more closely related to Gyrista than to other clades.
Classification
The classification of the stramenopiles according to Adl et al. (2019), with additions from newer research:
Platysulcea
Bigyra
Opalozoa
Nanomonadea
Opalinata [=Slopalinida]
Bicosoecida
Sagenista
Labyrinthulomycetes
Pseudophyllomitidae
Gyrista
Bigyromonada
Developea
Pirsoniales
Pseudofungi
Hyphochytriales
Peronosporomycetes [=Oomycetes]
Actinophryidae
Ochrophyta
Chrysista
Chrysoparadoxophyceae
Chrysophyceae
Chloromorophyceae (nomen dubium)
Eustigmatophyceae
Olisthodiscophyceae
Phaeophyceae
Phaeosacciophyceae
Phaeothamniophyceae
Raphidophyceae
Schizocladiophyceae
Synchromophyceae [=Picophagea]
Xanthophyceae [Heterokontae; Heteromonadea; Xanthophyta]
Diatomista
Bolidophyceae
Diatomeae [=Bacillariophyta]
Dictyochophyceae
Pelagophyceae
Pinguiophyceae
Scientific method
The scientific method is an empirical method for acquiring knowledge that has been referred to while doing science since at least the 17th century. The scientific method involves careful observation coupled with rigorous skepticism, because cognitive assumptions can distort the interpretation of the observation. Scientific inquiry includes creating a testable hypothesis through inductive reasoning, testing it through experiments and statistical analysis, and adjusting or discarding the hypothesis based on the results.
Although procedures vary between fields, the underlying process is often similar. In more detail: the scientific method involves making conjectures (hypothetical explanations), predicting the logical consequences of hypothesis, then carrying out experiments or empirical observations based on those predictions. A hypothesis is a conjecture based on knowledge obtained while seeking answers to the question. Hypotheses can be very specific or broad but must be falsifiable, implying that it is possible to identify a possible outcome of an experiment or observation that conflicts with predictions deduced from the hypothesis; otherwise, the hypothesis cannot be meaningfully tested.
While the scientific method is often presented as a fixed sequence of steps, it actually represents a set of general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always in the same order. Numerous discoveries have not followed the textbook model of the scientific method and chance has played a role, for instance.
History
The history of scientific method considers changes in the methodology of scientific inquiry, not the history of science itself. The development of rules for scientific reasoning has not been straightforward; scientific method has been the subject of intense and recurring debate throughout the history of science, and eminent natural philosophers and scientists have argued for the primacy of various approaches to establishing scientific knowledge.
Different early expressions of empiricism and the scientific method can be found throughout history, for instance with the ancient Stoics, Epicurus, Alhazen, Avicenna, Al-Biruni, Roger Bacon, and William of Ockham.
In the scientific revolution of the 16th and 17th centuries some of the most important developments were the furthering of empiricism by Francis Bacon and Robert Hooke, the rationalist approach described by René Descartes and inductivism, brought to particular prominence by Isaac Newton and those who followed him. Experiments were advocated by Francis Bacon, and performed by Giambattista della Porta, Johannes Kepler, and Galileo Galilei. There was particular development aided by theoretical works by a skeptic Francisco Sanches, by idealists as well as empiricists John Locke, George Berkeley, and David Hume. C. S. Peirce formulated the hypothetico-deductive model in the 20th century, and the model has undergone significant revision since.
The term "scientific method" emerged in the 19th century, as a result of significant institutional development of science, and terminologies establishing clear boundaries between science and non-science, such as "scientist" and "pseudoscience", appearing. Throughout the 1830s and 1850s, when Baconianism was popular, naturalists like William Whewell, John Herschel and John Stuart Mill engaged in debates over "induction" and "facts" and were focused on how to generate knowledge. In the late 19th and early 20th centuries, a debate over realism vs. antirealism was conducted as powerful scientific theories extended beyond the realm of the observable.
Modern use and critical thought
The term "scientific method" came into popular use in the twentieth century; Dewey's 1910 book, How We Think, inspired popular guidelines, appearing in dictionaries and science textbooks, although there was little consensus over its meaning. Although there was growth through the middle of the twentieth century, by the 1960s and 1970s numerous influential philosophers of science such as Thomas Kuhn and Paul Feyerabend had questioned the universality of the "scientific method" and in doing so largely replaced the notion of science as a homogeneous and universal method with that of it being a heterogeneous and local practice. In particular, Paul Feyerabend, in the 1975 first edition of his book Against Method, argued against there being any universal rules of science; Karl Popper, and Gauch 2003, disagree with Feyerabend's claim.
Later stances include physicist Lee Smolin's 2013 essay "There Is No Scientific Method", in which he espouses two ethical principles, and historian of science Daniel Thurs' chapter in the 2015 book Newton's Apple and Other Myths about Science, which concluded that the scientific method is a myth or, at best, an idealization. As myths are beliefs, they are subject to the narrative fallacy as Taleb points out. Philosophers Robert Nola and Howard Sankey, in their 2007 book Theories of Scientific Method, said that debates over the scientific method continue, and argued that Feyerabend, despite the title of Against Method, accepted certain rules of method and attempted to justify those rules with a meta methodology.
Staddon (2017) argues it is a mistake to try to follow rules in the absence of an algorithmic scientific method; in that case, "science is best understood through examples". But algorithmic methods, such as disproof of existing theory by experiment, have been used since Alhacen (1027) and his Book of Optics, and Galileo (1638) and his Two New Sciences and The Assayer, which still stand as scientific method.
Elements of inquiry
Overview
The scientific method is the process by which science is carried out. As in other areas of inquiry, science (through the scientific method) can build on previous knowledge, and unify understanding of its studied topics over time. This model can be seen to underlie the scientific revolution.
The overall process involves making conjectures (hypotheses), predicting their logical consequences, then carrying out experiments based on those predictions to determine whether the original conjecture was correct. However, there are difficulties in a formulaic statement of method. Though the scientific method is often presented as a fixed sequence of steps, these actions are more accurately general principles. Not all steps take place in every scientific inquiry (nor to the same degree), and they are not always done in the same order.
Factors of scientific inquiry
There are different ways of outlining the basic method used for scientific inquiry. The scientific community and philosophers of science generally agree on the following classification of method components. These methodological elements and organization of procedures tend to be more characteristic of experimental sciences than social sciences. Nonetheless, the cycle of formulating hypotheses, testing and analyzing the results, and formulating new hypotheses, will resemble the cycle described below.
The scientific method is an iterative, cyclical process through which information is continually revised. It is generally recognized to develop advances in knowledge through the following elements, in varying combinations or contributions:
Characterizations (observations, definitions, and measurements of the subject of inquiry)
Hypotheses (theoretical, hypothetical explanations of observations and measurements of the subject)
Predictions (inductive and deductive reasoning from the hypothesis or theory)
Experiments (tests of all of the above)
Each element of the scientific method is subject to peer review for possible mistakes. These activities do not describe all that scientists do but apply mostly to experimental sciences (e.g., physics, chemistry, biology, and psychology). The elements above are often taught in the educational system as "the scientific method".
The scientific method is not a single recipe: it requires intelligence, imagination, and creativity. In this sense, it is not a mindless set of standards and procedures to follow but is rather an ongoing cycle, constantly developing more useful, accurate, and comprehensive models and methods. For example, when Einstein developed the Special and General Theories of Relativity, he did not in any way refute or discount Newton's Principia. On the contrary, if the astronomically massive, the feather-light, and the extremely fast are removed from Einstein's theories – all phenomena Newton could not have observed – Newton's equations are what remain. Einstein's theories are expansions and refinements of Newton's theories and, thus, increase confidence in Newton's work.
An iterative, pragmatic scheme of the four points above is sometimes offered as a guideline for proceeding:
Define a question
Gather information and resources (observe)
Form an explanatory hypothesis
Test the hypothesis by performing an experiment and collecting data in a reproducible manner
Analyze the data
Interpret the data and draw conclusions that serve as a starting point for a new hypothesis
Publish results
Retest (frequently done by other scientists)
The iterative cycle inherent in this step-by-step method goes from point 3 to 6 and back to 3 again.
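The looping character of points 3–6 can be sketched in code (a purely illustrative outline built around a toy "experiment"; the functions and values here are invented for the example and are not a published algorithm):
# Toy illustration of the iterative cycle: revise a hypothesis until experiment agrees with it
def run_experiment(guess, true_value=42):
    """Pretend experiment: report whether the hypothesised value is too low, too high, or right."""
    if guess < true_value:
        return "too low"
    if guess > true_value:
        return "too high"
    return "matches"

def inquiry_cycle(initial_guess, max_iterations=100):
    hypothesis = initial_guess                  # step 3: form an explanatory hypothesis
    low, high = 0, 100
    for _ in range(max_iterations):
        result = run_experiment(hypothesis)     # step 4: test by experiment
        if result == "matches":                 # steps 5-6: analyze and interpret
            return hypothesis                   # steps 7-8: publish and invite retesting
        if result == "too low":
            low = hypothesis + 1
        else:
            high = hypothesis - 1
        hypothesis = (low + high) // 2          # back to step 3 with a revised hypothesis
    return hypothesis

print(inquiry_cycle(50))  # converges on 42 after a handful of iterations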
While this schema outlines a typical hypothesis/testing method, many philosophers, historians, and sociologists of science, including Paul Feyerabend, claim that such descriptions of scientific method have little relation to the ways that science is actually practiced.
Characterizations
The basic elements of the scientific method are illustrated by the following example (which occurred from 1944 to 1953) from the discovery of the structure of DNA (marked with and indented).
In 1950, it was known that genetic inheritance had a mathematical description, starting with the studies of Gregor Mendel, and that DNA contained genetic information (Oswald Avery's transforming principle). But the mechanism of storing genetic information (i.e., genes) in DNA was unclear. Researchers in Bragg's laboratory at Cambridge University made X-ray diffraction pictures of various molecules, starting with crystals of salt, and proceeding to more complicated substances. Using clues painstakingly assembled over decades, beginning with its chemical composition, it was determined that it should be possible to characterize the physical structure of DNA, and the X-ray images would be the vehicle.
The scientific method depends upon increasingly sophisticated characterizations of the subjects of investigation. (The subjects can also be called unsolved problems or the unknowns.) For example, Benjamin Franklin conjectured, correctly, that St. Elmo's fire was electrical in nature, but it has taken a long series of experiments and theoretical changes to establish this. While seeking the pertinent properties of the subjects, careful thought may also entail some definitions and observations; these observations often demand careful measurements and/or counting, which can take the form of expansive empirical research.
A scientific question can refer to the explanation of a specific observation, as in "Why is the sky blue?" but can also be open-ended, as in "How can I design a drug to cure this particular disease?" This stage frequently involves finding and evaluating evidence from previous experiments, personal scientific observations or assertions, as well as the work of other scientists. If the answer is already known, a different question that builds on the evidence can be posed. When applying the scientific method to research, determining a good question can be very difficult and it will affect the outcome of the investigation.
The systematic, careful collection of measurements or counts of relevant quantities is often the critical difference between pseudo-sciences, such as alchemy, and science, such as chemistry or biology. Scientific measurements are usually tabulated, graphed, or mapped, and statistical manipulations, such as correlation and regression, performed on them. The measurements might be made in a controlled setting, such as a laboratory, or made on more or less inaccessible or unmanipulatable objects such as stars or human populations. The measurements often require specialized scientific instruments such as thermometers, spectroscopes, particle accelerators, or voltmeters, and the progress of a scientific field is usually intimately tied to their invention and improvement.
Definition
The scientific definition of a term sometimes differs substantially from its natural language usage. For example, mass and weight overlap in meaning in common discourse, but have distinct meanings in mechanics. Scientific quantities are often characterized by their units of measure which can later be described in terms of conventional physical units when communicating the work.
New theories are sometimes developed after realizing certain terms have not previously been sufficiently clearly defined. For example, Albert Einstein's first paper on relativity begins by defining simultaneity and the means for determining length. These ideas were skipped over by Isaac Newton with, "I do not define time, space, place and motion, as being well known to all." Einstein's paper then demonstrates that they (viz., absolute time and length independent of motion) were approximations. Francis Crick cautions us that when characterizing a subject, however, it can be premature to define something when it remains ill-understood. In Crick's study of consciousness, he actually found it easier to study awareness in the visual system, rather than to study free will, for example. His cautionary example was the gene; the gene was much more poorly understood before Watson and Crick's pioneering discovery of the structure of DNA; it would have been counterproductive to spend much time on the definition of the gene, before them.
Hypothesis development
Linus Pauling proposed that DNA might be a triple helix: "The structure that we propose is a three-chain structure, each chain being a helix" – Linus Pauling. This hypothesis was also considered by Francis Crick and James D. Watson but discarded. When Watson and Crick learned of Pauling's hypothesis, they understood from existing data that Pauling was wrong and that Pauling would soon admit his difficulties with that structure.
A hypothesis is a suggested explanation of a phenomenon, or alternately a reasoned proposal suggesting a possible correlation between or among a set of phenomena. Normally, hypotheses have the form of a mathematical model. Sometimes, but not always, they can also be formulated as existential statements, stating that some particular instance of the phenomenon being studied has some characteristic and causal explanations, which have the general form of universal statements, stating that every instance of the phenomenon has a particular characteristic.
Scientists are free to use whatever resources they have – their own creativity, ideas from other fields, inductive reasoning, Bayesian inference, and so on – to imagine possible explanations for a phenomenon under study. Albert Einstein once observed that "there is no logical bridge between phenomena and their theoretical principles." Charles Sanders Peirce, borrowing a page from Aristotle (Prior Analytics, 2.25) described the incipient stages of inquiry, instigated by the "irritation of doubt" to venture a plausible guess, as abductive reasoning. The history of science is filled with stories of scientists claiming a "flash of inspiration", or a hunch, which then motivated them to look for evidence to support or refute their idea. Michael Polanyi made such creativity the centerpiece of his discussion of methodology.
William Glen observes that
In general, scientists tend to look for theories that are "elegant" or "beautiful". Scientists often use these terms to refer to a theory that is following the known facts but is nevertheless relatively simple and easy to handle. Occam's Razor serves as a rule of thumb for choosing the most desirable amongst a group of equally explanatory hypotheses.
To minimize the confirmation bias that results from entertaining a single hypothesis, strong inference emphasizes the need for entertaining multiple alternative hypotheses, and avoiding artifacts.
Predictions from the hypothesis
James D. Watson, Francis Crick, and others hypothesized that DNA had a helical structure. This implied that DNA's X-ray diffraction pattern would be 'x shaped'. (By June 1952, Watson had succeeded in getting X-ray pictures of TMV showing a diffraction pattern consistent with the transform of a helix.) This prediction followed from the work of Cochran, Crick and Vand (and independently by Stokes). The Cochran-Crick-Vand-Stokes theorem provided a mathematical explanation for the empirical observation that diffraction from helical structures produces x-shaped patterns.
In their first paper, Watson and Crick also noted that the double helix structure they proposed provided a simple mechanism for DNA replication, writing, "It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material".
Any useful hypothesis will enable predictions, by reasoning including deductive reasoning. It might predict the outcome of an experiment in a laboratory setting or the observation of a phenomenon in nature. The prediction can also be statistical and deal only with probabilities.
It is essential that the outcome of testing such a prediction be currently unknown. Only in this case does a successful outcome increase the probability that the hypothesis is true. If the outcome is already known, it is called a consequence and should have already been considered while formulating the hypothesis.
If the predictions are not accessible by observation or experience, the hypothesis is not yet testable and so will remain to that extent unscientific in a strict sense. A new technology or theory might make the necessary experiments feasible. For example, while a hypothesis on the existence of other intelligent species may be convincing with scientifically based speculation, no known experiment can test this hypothesis. Therefore, science itself can have little to say about the possibility. In the future, a new technique may allow for an experimental test and the speculation would then become part of accepted science.
For example, Einstein's theory of general relativity makes several specific predictions about the observable structure of spacetime, such as that light bends in a gravitational field, and that the amount of bending depends in a precise way on the strength of that gravitational field. Arthur Eddington's observations made during a 1919 solar eclipse supported General Relativity rather than Newtonian gravitation.
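For a concrete sense of the size of the predicted effect (a standard figure added here for illustration, not stated in the text), the deflection of starlight grazing the Sun's limb is
θ = 4GM☉ / (c²R☉) ≈ 1.75 arcseconds,
about twice the value given by a purely Newtonian treatment, which is the difference the 1919 eclipse measurements were designed to detect.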
Experiments
Watson and Crick showed an initial (and incorrect) proposal for the structure of DNA to a team from King's College London – Rosalind Franklin, Maurice Wilkins, and Raymond Gosling. Franklin immediately spotted the flaws, which concerned the water content. Later Watson saw Franklin's photo 51, a detailed X-ray diffraction image which showed an X-shape (Cynthia Wolberger (2021), Photograph 51 explained), and was able to confirm the structure was helical: "The instant I saw the picture my mouth fell open and my pulse began to race." Page 168 shows the X-shaped pattern of the B-form of DNA, clearly indicating crucial details of its helical structure to Watson and Crick.
Once predictions are made, they can be sought by experiments. If the test results contradict the predictions, the hypotheses which entailed them are called into question and become less tenable. Sometimes the experiments are conducted incorrectly or are not very well designed when compared to a crucial experiment. If the experimental results confirm the predictions, then the hypotheses are considered more likely to be correct, but might still be wrong and continue to be subject to further testing. The experimental control is a technique for dealing with observational error. This technique uses the contrast between multiple samples, or observations, or populations, under differing conditions, to see what varies or what remains the same. We vary the conditions for the acts of measurement, to help isolate what has changed. Mill's canons can then help us figure out what the important factor is. Factor analysis is one technique for discovering the important factor in an effect.
Depending on the predictions, the experiments can have different shapes. It could be a classical experiment in a laboratory setting, a double-blind study or an archaeological excavation. Even taking a plane from New York to Paris is an experiment that tests the aerodynamical hypotheses used for constructing the plane.
These institutions thereby reduce the research function to a cost/benefit, which is expressed as money, and the time and attention of the researchers to be expended, in exchange for a report to their constituents. Current large instruments, such as CERN's Large Hadron Collider (LHC), or LIGO, or the National Ignition Facility (NIF), or the International Space Station (ISS), or the James Webb Space Telescope (JWST), entail expected costs of billions of dollars, and timeframes extending over decades. These kinds of institutions affect public policy, on a national or even international basis, and the researchers would require shared access to such machines and their adjunct infrastructure.
Scientists assume an attitude of openness and accountability on the part of those experimenting. Detailed record-keeping is essential, to aid in recording and reporting on the experimental results, and supports the effectiveness and integrity of the procedure. They will also assist in reproducing the experimental results, likely by others. Traces of this approach can be seen in the work of Hipparchus (190–120 BCE), when determining a value for the precession of the Earth, while controlled experiments can be seen in the works of al-Battani (853–929 CE) and Alhazen (965–1039 CE).
Communication and iteration
Watson and Crick then produced their model, using this information along with the previously known information about DNA's composition, especially Chargaff's rules of base pairing. After considerable fruitless experimentation, being discouraged by their superior from continuing, and numerous false starts (on Sunday, February 8, 1953, Maurice Wilkins gave Watson and Crick permission to work on models, as Wilkins would not be building models until Franklin left DNA research), Watson and Crick were able to infer the essential structure of DNA by concrete modeling of the physical shapes of the nucleotides which comprise it: "Suddenly I became aware that an adenine-thymine pair held together by two hydrogen bonds was identical in shape to a guanine-cytosine pair held together by at least two hydrogen bonds. ..." They were guided by the bond lengths which had been deduced by Linus Pauling and by Rosalind Franklin's X-ray diffraction images.
The scientific method is iterative. At any stage, it is possible to refine its accuracy and precision, so that some consideration will lead the scientist to repeat an earlier part of the process. Failure to develop an interesting hypothesis may lead a scientist to re-define the subject under consideration. Failure of a hypothesis to produce interesting and testable predictions may lead to reconsideration of the hypothesis or of the definition of the subject. Failure of an experiment to produce interesting results may lead a scientist to reconsider the experimental method, the hypothesis, or the definition of the subject.
This manner of iteration can span decades and sometimes centuries. Published papers can be built upon. For example: by 1027, Alhazen, based on his measurements of the refraction of light, was able to deduce that outer space was less dense than air, that is: "the body of the heavens is rarer than the body of air". In 1079, Ibn Mu'adh, in his Treatise On Twilight, was able to infer that Earth's atmosphere was 50 miles thick, based on atmospheric refraction of the sun's rays.
This is why the scientific method is often represented as circular – new information leads to new characterisations, and the cycle of science continues. Measurements collected can be archived, passed onwards and used by others. Other scientists may start their own research and enter the process at any stage. They might adopt the characterization and formulate their own hypothesis, or they might adopt the hypothesis and deduce their own predictions. Often the experiment is not done by the person who made the prediction, and the characterization is based on experiments done by someone else. Published results of experiments can also serve as a hypothesis predicting their own reproducibility.
Confirmation
Science is a social enterprise, and scientific work tends to be accepted by the scientific community when it has been confirmed. Crucially, experimental and theoretical results must be reproduced by others within the scientific community. Researchers have given their lives for this vision; Georg Wilhelm Richmann was killed by ball lightning (1753) when attempting to replicate the 1752 kite-flying experiment of Benjamin Franklin.
If an experiment cannot be repeated to produce the same results, this implies that the original results might have been in error. As a result, it is common for a single experiment to be performed multiple times, especially when there are uncontrolled variables or other indications of experimental error. For significant or surprising results, other scientists may also attempt to replicate the results for themselves, especially if those results would be important to their own work. Replication has become a contentious issue in social and biomedical science where treatments are administered to groups of individuals. Typically an experimental group gets the treatment, such as a drug, and the control group gets a placebo. John Ioannidis in 2005 pointed out that the method being used has led to many findings that cannot be replicated.
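A small simulation can illustrate why replication matters: even when a treatment has no real effect, a fraction of individual experiments will look "positive" by chance. The sketch below is illustrative only, with made-up group sizes and a rough significance cutoff:

    # Illustrative only: simulate experiments where the treatment has NO effect
    # and count how often a single experiment nevertheless looks "significant".
    import random

    random.seed(0)
    n_experiments, n_per_group, cutoff = 1000, 20, 2.0  # cutoff roughly |z| > 2
    false_positives = 0
    for _ in range(n_experiments):
        control   = [random.gauss(0, 1) for _ in range(n_per_group)]
        treatment = [random.gauss(0, 1) for _ in range(n_per_group)]  # same distribution
        mean_diff = sum(treatment) / n_per_group - sum(control) / n_per_group
        std_error = (2 / n_per_group) ** 0.5  # crude, assumes known unit variance
        if abs(mean_diff / std_error) > cutoff:
            false_positives += 1
    print(f"chance 'positives' with no real effect: {false_positives} of {n_experiments}")

Roughly five percent of the simulated experiments pass the cutoff by chance alone, which is one reason surprising single results invite replication before acceptance.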
The process of peer review involves the evaluation of the experiment by experts, who typically give their opinions anonymously. Some journals request that the experimenter provide lists of possible peer reviewers, especially if the field is highly specialized. Peer review does not certify the correctness of the results, only that, in the opinion of the reviewer, the experiments themselves were sound (based on the description supplied by the experimenter). If the work passes peer review, which occasionally may require new experiments requested by the reviewers, it will be published in a peer-reviewed scientific journal. The specific journal that publishes the results indicates the perceived quality of the work.
Scientists typically are careful in recording their data, a requirement promoted by Ludwik Fleck (1896–1961) and others. Though not typically required, they might be requested to supply this data to other scientists who wish to replicate their original results (or parts of their original results), extending to the sharing of any experimental samples that may be difficult to obtain. To protect against bad science and fraudulent data, government research-granting agencies such as the National Science Foundation, and science journals, including Nature and Science, have a policy that researchers must archive their data and methods so that other researchers can test the data and methods and build on the research that has gone before. Scientific data archiving can be done at several national archives in the U.S. or the World Data Center.
Foundational principles
Honesty, openness, and falsifiability
The unfettered principles of science are to strive for accuracy and to hold to a creed of honesty; openness, by contrast, is already a matter of degree, being restricted by the general rigour of scepticism and, of course, by the matter of non-science.
Smolin, in 2013, espoused ethical principles rather than giving any potentially limited definition of the rules of inquiry. His ideas stand in the context of the scale of data-driven and big science, which has seen increased importance of honesty and consequently reproducibility. His thought is that science is a community effort by those who have accreditation and are working within the community. He also warns against overzealous parsimony.
Popper previously took ethical principles even further, going as far as to ascribe value to theories only if they were falsifiable. Popper used the falsifiability criterion to demarcate a scientific theory from a theory like astrology: both "explain" observations, but the scientific theory takes the risk of making predictions that decide whether it is right or wrong.
Theory's interactions with observation
Science has limits. Those limits are usually taken to concern questions that are not in science's domain, such as matters of faith. Science has other limits as well, as it seeks to make true statements about reality. The nature of truth, and the discussion of how scientific statements relate to reality, is largely a matter for the philosophy of science. More immediately topical limitations show themselves in the observation of reality.
It is a natural limitation of scientific inquiry that there is no pure observation: theory is required to interpret empirical data, and observation is therefore influenced by the observer's conceptual framework. As science is an unfinished project, this does lead to difficulties, namely that false conclusions may be drawn from limited information.
An example here is the work of Kepler and Brahe, used by Hanson to illustrate the concept. Despite observing the same sunrise, the two scientists came to different conclusions, their differing conceptual frameworks (intersubjectivity) leading them apart. Johannes Kepler used Tycho Brahe's method of observation, which was to project the image of the Sun on a piece of paper through a pinhole aperture, instead of looking directly at the Sun. He disagreed with Brahe's conclusion that total eclipses of the Sun were impossible because, contrary to Brahe, he knew that there were historical accounts of total eclipses. Instead, he deduced that the images taken would become more accurate, the larger the aperture—this fact is now fundamental for optical system design. Another historic example is the discovery of Neptune, credited as being found via mathematics because previous observers did not know what they were looking at.
Empiricism, rationalism, and more pragmatic views
Scientific endeavour can be characterised as the pursuit of truths about the natural world or as the elimination of doubt about the same. The former is the direct construction of explanations from empirical data and logic, the latter the reduction of potential explanations. It was established above how the interpretation of empirical data is theory-laden, so neither approach is trivial.
The ubiquitous element in the scientific method is empiricism, which holds that knowledge is created by a process involving observation; scientific theories generalize observations. This is in opposition to stringent forms of rationalism, which holds that knowledge is created by the human intellect; later clarified by Popper to be built on prior theory. The scientific method embodies the position that reason alone cannot solve a particular scientific problem; it unequivocally refutes claims that revelation, political or religious dogma, appeals to tradition, commonly held beliefs, common sense, or currently held theories pose the only possible means of demonstrating truth.
In 1877, C. S. Peirce characterized inquiry in general not as the pursuit of truth per se but as the struggle to move from irritating, inhibitory doubts born of surprises, disagreements, and the like, and to reach a secure belief, the belief being that on which one is prepared to act. His pragmatic views framed scientific inquiry as part of a broader spectrum and as spurred, like inquiry generally, by actual doubt, not mere verbal or "hyperbolic doubt", which he held to be fruitless. This "hyperbolic doubt" Peirce argues against here is of course just another name for Cartesian doubt associated with René Descartes. It is a methodological route to certain knowledge by identifying what can't be doubted.
A strong formulation of the scientific method is not always aligned with a form of empiricism in which empirical data is put forward only in the form of experience or other abstracted forms of knowledge; in current scientific practice, the use of scientific modelling and reliance on abstract typologies and theories is normally accepted. In 2010, Hawking suggested that physics' models of reality should simply be accepted where they prove to make useful predictions. He calls the concept model-dependent realism.
Rationality
Rationality embodies the essence of sound reasoning, a cornerstone not only in philosophical discourse but also in the realms of science and practical decision-making. According to the traditional viewpoint, rationality serves a dual purpose: it governs beliefs, ensuring they align with logical principles, and it steers actions, directing them towards coherent and beneficial outcomes. This understanding underscores the pivotal role of reason in shaping our understanding of the world and in informing our choices and behaviours. The following section will first explore beliefs and biases, and then get to the rational reasoning most associated with the sciences.
Beliefs and biases
Scientific methodology often directs that hypotheses be tested in controlled conditions wherever possible. This is frequently possible in certain areas, such as in the biological sciences, and more difficult in other areas, such as in astronomy.
The practice of experimental control and reproducibility can have the effect of diminishing the potentially harmful effects of circumstance, and to a degree, personal bias. For example, pre-existing beliefs can alter the interpretation of results, as in confirmation bias; this is a heuristic that leads a person with a particular belief to see things as reinforcing their belief, even if another observer might disagree (in other words, people tend to observe what they expect to observe).
A historical example is the belief that the legs of a galloping horse are splayed at the point when none of the horse's legs touch the ground, to the point of this image being included in paintings by its supporters. However, the first stop-action pictures of a horse's gallop by Eadweard Muybridge showed this to be false, and that the legs are instead gathered together.
Another important human bias that plays a role is a preference for new, surprising statements (see Appeal to novelty), which can result in a search for evidence that the new is true. Poorly attested beliefs can be believed and acted upon via a less rigorous heuristic.
Goldhaber and Nieto published in 2010 the observation that if theoretical structures with "many closely neighboring subjects are described by connecting theoretical concepts, then the theoretical structure acquires a robustness which makes it increasingly hard – though certainly never impossible – to overturn". When a narrative is constructed its elements become easier to believe.
notes "Words and ideas are originally phonetic and mental equivalences of the experiences coinciding with them. ... Such proto-ideas are at first always too broad and insufficiently specialized. ... Once a structurally complete and closed system of opinions consisting of many details and relations has been formed, it offers enduring resistance to anything that contradicts it". Sometimes, these relations have their elements assumed a priori, or contain some other logical or methodological flaw in the process that ultimately produced them. Donald M. MacKay has analyzed these elements in terms of limits to the accuracy of measurement and has related them to instrumental elements in a category of measurement.
Deductive and inductive reasoning
The idea of there being two opposed justifications for truth has shown up throughout the history of scientific method as analysis versus synthesis, non-ampliative/ampliative, or even confirmation and verification. (And there are other kinds of reasoning.) One uses what is observed to build towards fundamental truths – the other derives more specific principles from those fundamental truths.
Deductive reasoning is the building of knowledge based on what has been shown to be true before. It requires the assumption of fact established prior, and, given the truth of the assumptions, a valid deduction guarantees the truth of the conclusion. Inductive reasoning builds knowledge not from established truth, but from a body of observations. It requires stringent scepticism regarding observed phenomena, because cognitive assumptions can distort the interpretation of initial perceptions.
An example of how inductive and deductive reasoning work together can be found in the history of gravitational theory. It took thousands of years of measurements, from the Chaldean, Indian, Persian, Greek, Arabic, and European astronomers, to fully record the motion of planet Earth. Kepler (and others) were then able to build their early theories by generalizing the collected data inductively, and Newton was able to unify prior theory and measurements into the consequences of his laws of motion in 1727.
Another common example of inductive reasoning is the observation of a counterexample to current theory inducing the need for new ideas. Le Verrier in 1859 pointed out problems with the perihelion of Mercury that showed Newton's theory to be at least incomplete. The observed difference in Mercury's precession between Newtonian theory and observation was one of the things that occurred to Einstein as a possible early test of his theory of relativity. His relativistic calculations matched observation much more closely than Newtonian theory did. Although today's Standard Model of physics suggests that there is still much we do not understand around Einstein's theory, it holds to this day and continues to be built on deductively.
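For illustration only (not part of the source text), the relativistic correction referred to here can be sketched with the standard first-order formula for perihelion advance per orbit, 6πGM/(c²a(1 − e²)), using approximate orbital constants for Mercury:

    # Rough, illustrative check of general relativity's extra perihelion advance
    # for Mercury, using the standard first-order formula and rounded constants.
    import math

    GM_sun = 1.327e20      # m^3/s^2, solar gravitational parameter
    c      = 2.998e8       # m/s, speed of light
    a      = 5.791e10      # m, Mercury's semi-major axis
    e      = 0.2056        # orbital eccentricity
    period_days = 87.97    # Mercury's orbital period

    advance_per_orbit = 6 * math.pi * GM_sun / (c**2 * a * (1 - e**2))  # radians
    orbits_per_century = 100 * 365.25 / period_days
    arcsec_per_century = advance_per_orbit * orbits_per_century * math.degrees(1) * 3600
    print(f"relativistic perihelion advance: ~{arcsec_per_century:.0f} arcsec per century")

The result is roughly 43 arcseconds per century, the size of the discrepancy Le Verrier had identified in the Newtonian account.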
A theory being assumed as true and subsequently built on is a common example of deductive reasoning. Theory building on Einstein's achievement can simply state that 'we have shown that this case fulfils the conditions under which general/special relativity applies, therefore its conclusions apply also'. If it was properly shown that 'this case' fulfils the conditions, the conclusion follows. An extension of this is the assumption of a solution to an open problem. This weaker kind of deductive reasoning will get used in current research, when multiple scientists or even teams of researchers are all gradually solving specific cases in working towards proving a larger theory. This often sees hypotheses being revised again and again as new proof emerges.
This way of presenting inductive and deductive reasoning shows part of why science is often presented as being a cycle of iteration. It is important to keep in mind that that cycle's foundations lie in reasoning, and not wholly in the following of procedure.
Certainty, probabilities, and statistical inference
Claims of scientific truth can be opposed in three ways: by falsifying them, by questioning their certainty, or by asserting the claim itself to be incoherent. Incoherence, here, means internal errors in logic, like stating opposites to be true; falsification is what Popper would have called the honest work of conjecture and refutation — certainty, perhaps, is where difficulties in telling truths from non-truths arise most easily.
Measurements in scientific work are usually accompanied by estimates of their uncertainty. The uncertainty is often estimated by making repeated measurements of the desired quantity. Uncertainties may also be calculated by consideration of the uncertainties of the individual underlying quantities used. Counts of things, such as the number of people in a nation at a particular time, may also have an uncertainty due to data collection limitations. Or counts may represent a sample of desired quantities, with an uncertainty that depends upon the sampling method used and the number of samples taken.
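As a minimal illustration (with hypothetical readings), the uncertainty of a repeatedly measured quantity is often summarised by the standard error of the mean:

    # Illustrative only: estimate a quantity and its uncertainty from repeated
    # measurements, using the standard error of the mean.
    from statistics import mean, stdev

    readings = [9.81, 9.79, 9.83, 9.80, 9.82, 9.78]  # hypothetical repeated measurements
    estimate  = mean(readings)
    std_error = stdev(readings) / len(readings) ** 0.5
    print(f"estimate: {estimate:.3f} +/- {std_error:.3f}")
    # The standard error shrinks roughly as 1/sqrt(N), which is why repeating a
    # measurement is a routine way of estimating and reducing uncertainty.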
In the case of measurement imprecision, there will simply be a 'probable deviation' expressing itself in a study's conclusions. Statistics are different. Inductive statistical generalisation will take sample data and extrapolate more general conclusions, which has to be justified — and scrutinised. It can even be said that statistical models are only ever useful, but never a complete representation of circumstances.
In statistical analysis, expected and unexpected bias is a large factor. Research questions, the collection of data, or the interpretation of results, all are subject to larger amounts of scrutiny than in comfortably logical environments. Statistical models go through a process for validation, for which one could even say that awareness of potential biases is more important than the hard logic; errors in logic are easier to find in peer review, after all. More generally, claims to rational knowledge, and especially statistics, have to be put into their appropriate context. Simple statements such as '9 out of 10 doctors recommend' are therefore of unknown quality because they do not justify their methodology.
Lack of familiarity with statistical methodologies can result in erroneous conclusions. Setting aside simple examples, the interaction of multiple probabilities is an area where, for example, medical professionals have shown a lack of proper understanding. Bayes' theorem is the mathematical principle setting out how prior probabilities are adjusted given new information. The boy or girl paradox is a common example. In knowledge representation, Bayesian estimation of mutual information between random variables is a way to measure dependence, independence, or interdependence of the information under scrutiny.
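A worked example of the base-rate reasoning that Bayes' theorem governs (all numbers hypothetical) shows how a prior probability is adjusted by a positive test result:

    # Hypothetical numbers, purely to illustrate Bayes' theorem.
    prior       = 0.01   # P(condition): 1% of the population has the condition
    sensitivity = 0.95   # P(positive test | condition)
    false_pos   = 0.05   # P(positive test | no condition)

    p_positive = sensitivity * prior + false_pos * (1 - prior)
    posterior  = sensitivity * prior / p_positive   # P(condition | positive test)
    print(f"P(condition | positive test) = {posterior:.2%}")

Despite the apparently accurate test, the posterior probability is only about 16 percent, because the condition is rare; making that adjustment explicit is exactly what Bayes' theorem does.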
Beyond the survey methodology commonly associated with field research, the concept, together with probabilistic reasoning, is used to advance fields of science where research objects have no definitive states of being, for example in statistical mechanics.
Methods of inquiry
Hypothetico-deductive method
The hypothetico-deductive model, or hypothesis-testing method, or "traditional" scientific method is, as the name implies, based on the formation of hypotheses and their testing via deductive reasoning. A hypothesis stating implications, often called predictions, that are falsifiable via experiment is of central importance here, as not the hypothesis but its implications are what is tested. Basically, scientists will look at the hypothetical consequences a (potential) theory holds and prove or disprove those instead of the theory itself. If an experimental test of those hypothetical consequences shows them to be false, it follows logically that the part of the theory that implied them was false also. If they are shown to be true, however, this does not prove the theory definitively.
The logic of this testing is what affords this method of inquiry to be reasoned deductively. The formulated hypothesis is assumed to be 'true', and from that 'true' statement implications are inferred. If the subsequent tests show the implications to be false, it follows that the hypothesis was false also. If tests show the implications to be true, new insights will be gained. It is important to be aware that a positive test here will at best strongly imply but not definitively prove the tested hypothesis, since affirming the consequent is not a valid deductive inference; from (A ⇒ B), only (¬B ⇒ ¬A) follows. Positive outcomes, however, as Hempel put it, provide "at least some support, some corroboration or confirmation for it". This is why Popper insisted that fielded hypotheses be falsifiable, as successful tests imply very little otherwise. As Gillies put it, "successful theories are those that survive elimination through falsification".
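The asymmetry described above can be checked mechanically. The following sketch (illustrative only) enumerates truth values to confirm that (A ⇒ B) together with ¬B forces ¬A, while (A ⇒ B) together with B does not force A:

    # Truth-table check: modus tollens is valid, affirming the consequent is not.
    from itertools import product

    def implies(a, b):
        return (not a) or b

    modus_tollens = all(not a
                        for a, b in product([True, False], repeat=2)
                        if implies(a, b) and not b)   # premises: A => B, not B
    affirm_consequent = all(a
                            for a, b in product([True, False], repeat=2)
                            if implies(a, b) and b)   # premises: A => B, B
    print(modus_tollens, affirm_consequent)  # True False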
Deductive reasoning in this mode of inquiry will sometimes be replaced by abductive reasoning—the search for the most plausible explanation via logical inference. This is the case, for example, in biology, where general laws are few, since valid deductions rely on solid presuppositions.
Inductive method
The inductivist approach to deriving scientific truth first rose to prominence with Francis Bacon and particularly with Isaac Newton and those who followed him. After the establishment of the HD-method, it was often put aside as something of a "fishing expedition" though. It is still valid to some degree, but today's inductive method is often far removed from the historic approach—the scale of the data collected lending new effectiveness to the method. It is most-associated with data-mining projects or large-scale observation projects. In both these cases, it is often not at all clear what the results of proposed experiments will be, and thus knowledge will arise after the collection of data through inductive reasoning.
Where the traditional method of inquiry does both, the inductive approach usually formulates only a research question, not a hypothesis. Following the initial question instead, a suitable "high-throughput method" of data-collection is determined, the resulting data processed and 'cleaned up', and conclusions drawn after. "This shift in focus elevates the data to the supreme role of revealing novel insights by themselves".
The advantage the inductive method has over methods formulating a hypothesis is that it is essentially free of "a researcher's preconceived notions" regarding their subject. On the other hand, inductive reasoning is always attached to a measure of certainty, as all inductively reasoned conclusions are. This measure of certainty can reach quite high degrees, though – for example, in the determination of large primes, which are used in encryption software.
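As a sketch of how such high but non-deductive certainty arises (illustrative only), a probabilistic primality test of the Miller–Rabin type declares a number prime only in the sense that repeated random trials have failed to expose it as composite:

    # Illustrative Miller-Rabin probabilistic primality test: each passing round
    # makes compositeness less likely, but never yields a deductive proof.
    import random

    def probably_prime(n, rounds=20):
        if n < 2:
            return False
        for p in (2, 3, 5, 7, 11, 13):
            if n % p == 0:
                return n == p
        d, r = n - 1, 0
        while d % 2 == 0:          # write n - 1 as d * 2^r with d odd
            d //= 2
            r += 1
        for _ in range(rounds):
            a = random.randrange(2, n - 1)
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(r - 1):
                x = pow(x, 2, n)
                if x == n - 1:
                    break
            else:
                return False       # a witness was found: n is definitely composite
        return True                # no witness found: prime with very high probability

    print(probably_prime(2**61 - 1))  # a known Mersenne prime; expect True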
Mathematical modelling
Mathematical modelling, or allochthonous reasoning, typically is the formulation of a hypothesis followed by building mathematical constructs that can be tested in place of conducting physical laboratory experiments. This approach has two main factors: simplification or abstraction, and a set of correspondence rules. The correspondence rules lay out how the constructed model will relate back to reality – how truth is derived; and the simplifying steps taken in the abstraction of the given system are to reduce factors that do not bear relevance and thereby reduce unexpected errors. These steps can also help the researcher in understanding the important factors of the system, and how far parsimony can be taken until the system becomes more and more unchangeable and thereby stable. Parsimony and related principles are further explored below.
Once this translation into mathematics is complete, the resulting model, in place of the corresponding system, can be analysed through purely mathematical and computational means. The results of this analysis are of course also purely mathematical in nature and get translated back to the system as it exists in reality via the previously determined correspondence rules—iteration following review and interpretation of the findings. The way such models are reasoned will often be mathematically deductive—but they don't have to be. An example here are Monte-Carlo simulations. These generate empirical data "arbitrarily", and, while they may not be able to reveal universal principles, they can nevertheless be useful.
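A minimal illustration of the Monte-Carlo idea (not from the source): random samples are generated arbitrarily, yet their aggregate estimates a quantity, here the area of a circle, with accuracy that improves as more samples are drawn:

    # Illustrative Monte-Carlo simulation: estimate pi by sampling random points
    # in the unit square and counting those inside the quarter circle.
    import random

    random.seed(1)
    n_samples = 100_000
    inside = sum(1 for _ in range(n_samples)
                 if random.random() ** 2 + random.random() ** 2 <= 1.0)
    print(f"pi is roughly {4 * inside / n_samples:.3f}")
    # No universal principle is derived, but the empirical estimate is useful.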
Scientific inquiry
Scientific inquiry generally aims to obtain knowledge in the form of testable explanations that scientists can use to predict the results of future experiments. This allows scientists to gain a better understanding of the topic under study, and later to use that understanding to intervene in its causal mechanisms (such as to cure disease). The better an explanation is at making predictions, the more useful it frequently can be, and the more likely it will continue to explain a body of evidence better than its alternatives. The most successful explanations – those that explain and make accurate predictions in a wide range of circumstances – are often called scientific theories.
Most experimental results do not produce large changes in human understanding; improvements in theoretical scientific understanding typically result from a gradual process of development over time, sometimes across different domains of science. Scientific models vary in the extent to which they have been experimentally tested and for how long, and in their acceptance in the scientific community. In general, explanations become accepted over time as evidence accumulates on a given topic, and the explanation in question proves more powerful than its alternatives at explaining the evidence. Often subsequent researchers re-formulate the explanations over time, or combined explanations to produce new explanations.
Properties of scientific inquiry
Scientific knowledge is closely tied to empirical findings and can remain subject to falsification if new experimental observations are incompatible with what is found. That is, no theory can ever be considered final since new problematic evidence might be discovered. If such evidence is found, a new theory may be proposed, or (more commonly) it is found that modifications to the previous theory are sufficient to explain the new evidence. The strength of a theory relates to how long it has persisted without major alteration to its core principles.
Theories can also become subsumed by other theories. For example, Newton's laws explained thousands of years of scientific observations of the planets almost perfectly. However, these laws were then determined to be special cases of a more general theory (relativity), which explained both the (previously unexplained) exceptions to Newton's laws and predicted and explained other observations such as the deflection of light by gravity. Thus, in certain cases independent, unconnected, scientific observations can be connected, unified by principles of increasing explanatory power.
Since new theories might be more comprehensive than what preceded them, and thus be able to explain more than previous ones, successor theories might be able to meet a higher standard by explaining a larger body of observations than their predecessors. For example, the theory of evolution explains the diversity of life on Earth, how species adapt to their environments, and many other patterns observed in the natural world; its most recent major modification was unification with genetics to form the modern evolutionary synthesis. In subsequent modifications, it has also subsumed aspects of many other fields such as biochemistry and molecular biology.
Heuristics
Confirmation theory
During the course of history, one theory has succeeded another, and some have suggested further work while others have seemed content just to explain the phenomena. The reasons why one theory has replaced another are not always obvious or simple. The philosophy of science includes the question: what criteria are satisfied by a 'good' theory? This question has a long history, and many scientists, as well as philosophers, have considered it. The objective is to be able to choose one theory as preferable to another without introducing cognitive bias. Though different thinkers emphasize different aspects, a good theory:
is accurate (the trivial element);
is consistent, both internally and with other relevant currently accepted theories;
has explanatory power, meaning its consequences extend beyond the data it is required to explain;
has unificatory power, organizing otherwise confused and isolated phenomena;
and is fruitful for further research.
In trying to look for such theories, scientists will, given a lack of guidance by empirical evidence, try to adhere to:
parsimony in causal explanations
and look for invariant observations.
Scientists will sometimes also list the very subjective criterion of "formal elegance", which can indicate multiple different things.
The goal here is to make the choice between theories less arbitrary. Nonetheless, these criteria contain subjective elements, and should be considered heuristics rather than definitive rules. Also, as Bird notes, criteria such as these do not necessarily decide between alternative theories.
It also is debatable whether existing scientific theories satisfy all these criteria, which may represent goals not yet achieved. For example, explanatory power over all existing observations is satisfied by no one theory at the moment.
Parsimony
The desiderata of a "good" theory have been debated for centuries, going back perhaps even earlier than Occam's razor, which is often taken as an attribute of a good theory. Science tries to be simple. When gathered data supports multiple explanations, the simplest explanation for a phenomenon or the simplest formulation of a theory is recommended by the principle of parsimony. Scientists go as far as to call simple proofs of complex statements beautiful.
The concept of parsimony should not be held to imply complete frugality in the pursuit of scientific truth. The general process starts at the opposite end of there being a vast number of potential explanations and general disorder. An example can be seen in Paul Krugman's process: he makes it explicit that one should "dare to be silly". He writes that in his work on new theories of international trade he reviewed prior work with an open frame of mind and broadened his initial viewpoint even in unlikely directions. Once he had a sufficient body of ideas, he would try to simplify and thus find what worked among what did not. Specific to Krugman here was to "question the question". He recognised that prior work had applied erroneous models to already present evidence, commenting that "intelligent commentary was ignored". He thus touches on the need to bridge the common bias against other circles of thought.
Elegance
Occam's razor might fall under the heading of "simple elegance", but it is arguable that parsimony and elegance pull in different directions. Introducing additional elements could simplify theory formulation, whereas simplifying a theory's ontology might lead to increased syntactical complexity.
Sometimes ad-hoc modifications of a failing idea may also be dismissed as lacking "formal elegance". This appeal to what may be called "aesthetic" is hard to characterise, but it is essentially about a sort of familiarity. Argument based on "elegance" is contentious, though, and over-reliance on familiarity will breed stagnation.
Invariance
Principles of invariance have been a theme in scientific writing, and especially physics, since at least the early 20th century. The basic idea here is that good structures to look for are those independent of perspective, an idea that has, of course, featured earlier, for example in Mill's Methods of difference and agreement—methods that would be referred back to in the context of contrast and invariance. But as tends to be the case, there is a difference between something being a basic consideration and something being given weight. Principles of invariance have only been given weight in the wake of Einstein's theories of relativity, which reduced everything to relations and were thereby fundamentally unchangeable, unable to be varied. As David Deutsch put it in 2009: "the search for hard-to-vary explanations is the origin of all progress".
An example here can be found in one of Einstein's thought experiments: that of a lab suspended in empty space, which is an example of a useful invariant observation. He imagined the absence of gravity and an experimenter free floating in the lab. If an entity now pulls the lab upwards, accelerating uniformly, the experimenter would perceive the resulting force as gravity, while the entity would feel the work needed to accelerate the lab continuously. Through this experiment Einstein was able to equate gravitational and inertial mass, something unexplained by Newton's laws, and an early but "powerful argument for a generalised postulate of relativity".
The discussion on invariance in physics is often had in the more specific context of symmetry. The Einstein example above, in the parlance of Mill, would be an agreement between two values. In the context of invariance, it is a variable that remains unchanged through some kind of transformation or change in perspective. And discussion focused on symmetry would view the two perspectives as systems that share a relevant aspect and are therefore symmetrical.
Related principles here are falsifiability and testability. The opposite of something being hard-to-vary are theories that resist falsification—a frustration that was expressed colourfully by Wolfgang Pauli as them being "not even wrong". The importance of scientific theories to be falsifiable finds especial emphasis in the philosophy of Karl Popper. The broader view here is testability, since it includes the former and allows for additional practical considerations.
Philosophy and discourse
Philosophy of science looks at the underpinning logic of the scientific method, at what separates science from non-science, and the ethic that is implicit in science. There are basic assumptions, derived from philosophy by at least one prominent scientist, that form the base of the scientific method – namely, that reality is objective and consistent, that humans have the capacity to perceive reality accurately, and that rational explanations exist for elements of the real world. These assumptions from methodological naturalism form a basis on which science may be grounded. Logical positivist, empiricist, falsificationist, and other theories have criticized these assumptions and given alternative accounts of the logic of science, but each has also itself been criticized.
There are several kinds of modern philosophical conceptualizations and attempts at definitions of the method of science. There are the unificationists, who argue for the existence of a unified definition that is useful (or at least 'works') in every context of science; the pluralists, who argue that the sciences are too fractured for a universal definition of method to be useful; and those who argue that the very attempt at definition is already detrimental to the free flow of ideas.
Additionally, there have been views on the social framework in which science is done, and on the impact of science's social environment on research. There is also 'scientific method' as popularised by Dewey in How We Think (1910) and Karl Pearson in Grammar of Science (1892), as used in a fairly uncritical manner in education.
Pluralism
Scientific pluralism is a position within the philosophy of science that rejects various proposed unities of scientific method and subject matter. Scientific pluralists hold that science is not unified in one or more of the following ways: the metaphysics of its subject matter, the epistemology of scientific knowledge, or the research methods and models that should be used. Some pluralists believe that pluralism is necessary due to the nature of science. Others say that since scientific disciplines already vary in practice, there is no reason to believe this variation is wrong until a specific unification is empirically proven. Finally, some hold that pluralism should be allowed for normative reasons, even if unity were possible in theory.
Unificationism
Unificationism, in science, was a central tenet of logical positivism. Different logical positivists construed this doctrine in several different ways, e.g. as a reductionist thesis, that the objects investigated by the special sciences reduce to the objects of a common, putatively more basic domain of science, usually thought to be physics; as the thesis that all theories and results of the various sciences can or ought to be expressed in a common language or "universal slang"; or as the thesis that all the special sciences share a common scientific method.
Development of the idea has been troubled by accelerated advancement in technology that has opened up many new ways to look at the world.
Epistemological anarchism
Paul Feyerabend examined the history of science, and was led to deny that science is genuinely a methodological process. In his book Against Method he argued that no description of scientific method could possibly be broad enough to include all the approaches and methods used by scientists, and that there are no useful and exception-free methodological rules governing the progress of science. In essence, he said that for any specific method or norm of science, one can find a historic episode where violating it has contributed to the progress of science. He jokingly suggested that, if believers in the scientific method wish to express a single universally valid rule, it should be 'anything goes'. As has been argued before him, however, this is uneconomic; problem solvers and researchers are to be prudent with their resources during their inquiry.
A more general inference against formalised method has been found through research involving interviews with scientists regarding their conception of method. This research indicated that scientists frequently encounter difficulty in determining whether the available evidence supports their hypotheses. This reveals that there are no straightforward mappings between overarching methodological concepts and precise strategies to direct the conduct of research.
Education
In science education, the idea of a general and universal scientific method has been notably influential, and numerous studies (in the US) have shown that this framing of method often forms part of both students’ and teachers’ conception of science. This convention of traditional education has been argued against by scientists, as there is a consensus that education's sequential elements and unified view of scientific method do not reflect how scientists actually work. Major organizations of scientists such as the American Association for the Advancement of Science (AAAS) consider the sciences to be a part of the liberal arts traditions of learning, and proper understanding of science includes understanding of philosophy and history, not just science in isolation.
How the sciences make knowledge has been taught in the context of "the" scientific method (singular) since the early 20th century. Various systems of education, including but not limited to the US, have taught the method of science as a process or procedure, structured as a definitive series of steps: observation, hypothesis, prediction, experiment.
This version of the method of science has been a long-established standard in primary and secondary education, as well as the biomedical sciences. It has long been held to be an inaccurate idealisation of how some scientific inquiries are structured.
The taught presentation of science has been criticised for demerits such as:
it pays no regard to the social context of science,
it suggests a singular methodology of deriving knowledge,
it overemphasises experimentation,
it oversimplifies science, giving the impression that following a scientific process automatically leads to knowledge,
it gives the illusion of determination; that questions necessarily lead to some kind of answers and answers are preceded by (specific) questions,
and, it holds that scientific theories arise from observed phenomena only.
The scientific method no longer features in the standards for US education of 2013 (NGSS) that replaced those of 1996 (NRC). They, too, influenced international science education, and the standards measured for have shifted since from the singular hypothesis-testing method to a broader conception of scientific methods. These scientific methods, which are rooted in scientific practices and not epistemology, are described as the 3 dimensions of scientific and engineering practices, crosscutting concepts (interdisciplinary ideas), and disciplinary core ideas.
The scientific method, as a result of simplified and universal explanations, is often held to have reached a kind of mythological status; as a tool for communication or, at best, an idealisation. Education's approach was heavily influenced by John Dewey's How We Think (1910). Van der Ploeg (2016) indicated that Dewey's views on education had long been used to further an idea of citizen education removed from "sound education", claiming that references to Dewey in such arguments were undue interpretations (of Dewey).
Sociology of knowledge
The sociology of knowledge is a concept in the discussion around scientific method, claiming the underlying method of science to be sociological. King explains that sociology distinguishes here between the system of ideas that govern the sciences through an inner logic, and the social system in which those ideas arise.
Thought collectives
A perhaps accessible lead into what is claimed is Fleck's thought, echoed in Kuhn's concept of normal science. According to Fleck, scientists' work is based on a thought-style that cannot be rationally reconstructed. It gets instilled through the experience of learning, and science is then advanced based on a tradition of shared assumptions held by what he called thought collectives. Fleck also claims this phenomenon to be largely invisible to members of the group.
Comparably, following the field research in an academic scientific laboratory by Latour and Woolgar, Karin Knorr Cetina has conducted a comparative study of two scientific fields (namely high energy physics and molecular biology) to conclude that the epistemic practices and reasonings within both scientific communities are different enough to introduce the concept of "epistemic cultures", in contradiction with the idea that a so-called "scientific method" is unique and a unifying concept.
Situated cognition and relativism
On the idea of Fleck's thought collectives, sociologists built the concept of situated cognition – that the perspective of the researcher fundamentally affects their work – as well as more radical views.
Norwood Russell Hanson, alongside Thomas Kuhn and Paul Feyerabend, extensively explored the theory-laden nature of observation in science. Hanson introduced the concept in 1958, emphasizing that observation is influenced by the observer's conceptual framework. He used the concept of gestalt to show how preconceptions can affect both observation and description, and illustrated this with examples like the initial rejection of Golgi bodies as an artefact of staining technique, and the differing interpretations of the same sunrise by Tycho Brahe and Johannes Kepler. Intersubjectivity led to different conclusions.
Kuhn and Feyerabend acknowledged Hanson's pioneering work, although Feyerabend's views on methodological pluralism were more radical. Criticisms like those from Kuhn and Feyerabend prompted discussions leading to the development of the strong programme, a sociological approach that seeks to explain scientific knowledge without recourse to the truth or validity of scientific theories. It examines how scientific beliefs are shaped by social factors such as power, ideology, and interests.
The postmodernist critiques of science have themselves been the subject of intense controversy. This ongoing debate, known as the science wars, is the result of conflicting values and assumptions between postmodernist and realist perspectives. Postmodernists argue that scientific knowledge is merely a discourse, devoid of any claim to fundamental truth. In contrast, realists within the scientific community maintain that science uncovers real and fundamental truths about reality. Many books have been written by scientists which take on this problem and challenge the assertions of the postmodernists while defending science as a legitimate way of deriving truth.
Limits of method
Role of chance in discovery
Somewhere between 33% and 50% of all scientific discoveries are estimated to have been stumbled upon, rather than sought out. This may explain why scientists so often express that they were lucky. Scientists themselves in the 19th and 20th century acknowledged the role of fortunate luck or serendipity in discoveries. Louis Pasteur is credited with the famous saying that "Luck favours the prepared mind", but some psychologists have begun to study what it means to be 'prepared for luck' in the scientific context. Research is showing that scientists are taught various heuristics that tend to harness chance and the unexpected. This is what Nassim Nicholas Taleb calls "Anti-fragility"; while some systems of investigation are fragile in the face of human error, human bias, and randomness, the scientific method is more than resistant or tough – it actually benefits from such randomness in many ways (it is anti-fragile). Taleb believes that the more anti-fragile the system, the more it will flourish in the real world.
Psychologist Kevin Dunbar says the process of discovery often starts with researchers finding bugs in their experiments. These unexpected results lead researchers to try to fix what they think is an error in their method. Eventually, the researcher decides the error is too persistent and systematic to be a coincidence. The highly controlled, cautious, and curious aspects of the scientific method are thus what make it well suited for identifying such persistent systematic errors. At this point, the researcher will begin to think of theoretical explanations for the error, often seeking the help of colleagues across different domains of expertise.
Relationship with statistics
When the scientific method employs statistics as a key part of its arsenal, there are mathematical and practical issues that can have a deleterious effect on the reliability of the output of scientific methods. This is described in a popular 2005 scientific paper "Why Most Published Research Findings Are False" by John Ioannidis, which is considered foundational to the field of metascience. Much research in metascience seeks to identify poor use of statistics and improve its use, an example being the misuse of p-values.
The particular points raised are statistical ("The smaller the studies conducted in a scientific field, the less likely the research findings are to be true" and "The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.") and economical ("The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true" and "The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.") Hence: "Most research findings are false for most research designs and for most fields" and "As shown, the majority of modern biomedical research is operating in areas with very low pre- and poststudy probability for true findings." However: "Nevertheless, most new discoveries will continue to stem from hypothesis-generating research with low or very low pre-study odds," which means that new discoveries will come from research that, when that research started, had low or very low odds (a low or very low chance) of succeeding. Hence, if the scientific method is used to expand the frontiers of knowledge, research into areas that are outside the mainstream will yield the newest discoveries.
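These statistical points can be made concrete with a simple positive-predictive-value calculation, a sketch in the spirit of the paper using made-up numbers: when the pre-study odds R of a tested relationship being true are low, most "positive" findings are false even with conventional power and significance levels:

    # Illustrative sketch (made-up numbers): probability that a "positive"
    # finding is true, given pre-study odds R, statistical power, and alpha.
    def prob_finding_true(R, power=0.8, alpha=0.05):
        true_positives  = power * R   # true relationships correctly detected
        false_positives = alpha * 1   # null relationships wrongly "detected"
        return true_positives / (true_positives + false_positives)

    for R in (1.0, 0.1, 0.01):        # pre-study odds of a true relationship
        print(f"R = {R:>4}: P(finding is true) = {prob_finding_true(R):.2f}")
    # With low pre-study odds (exploratory or "hot" fields), most positive
    # findings are false, even before bias and flexibility are accounted for.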
Science of complex systems
Science applied to complex systems can involve elements such as transdisciplinarity, systems theory, control theory, and scientific modelling.
In general, the scientific method may be difficult to apply stringently to diverse, interconnected systems and large data sets. In particular, practices used within Big data, such as predictive analytics, may be considered to be at odds with the scientific method, as some of the data may have been stripped of the parameters which might be material in alternative hypotheses for an explanation; thus the stripped data would only serve to support the null hypothesis in the predictive analytics application. One commentator notes that "a scientific discovery remains incomplete without considerations of the social practices that condition it".
Relationship with mathematics
Science is the process of gathering, comparing, and evaluating proposed models against observables. A model can be a simulation, mathematical or chemical formula, or set of proposed steps. Science is like mathematics in that researchers in both disciplines try to distinguish what is known from what is unknown at each stage of discovery. Models, in both science and mathematics, need to be internally consistent and also ought to be falsifiable (capable of disproof). In mathematics, a statement need not yet be proved; at such a stage, that statement would be called a conjecture.
Mathematical work and scientific work can inspire each other. For example, the technical concept of time arose in science, and timelessness was a hallmark of a mathematical topic. But today, the Poincaré conjecture has been proved using time as a mathematical concept in which objects can flow (see Ricci flow).
Nevertheless, the connection between mathematics and reality (and so science to the extent it describes reality) remains obscure. Eugene Wigner's paper, "The Unreasonable Effectiveness of Mathematics in the Natural Sciences", is a very well-known account of the issue from a Nobel Prize-winning physicist. In fact, some observers (including some well-known mathematicians such as Gregory Chaitin, and others such as Lakoff and Núñez) have suggested that mathematics is the result of practitioner bias and human limitation (including cultural ones), somewhat like the post-modernist view of science.
George Pólya's work on problem solving, the construction of mathematical proofs, and heuristic show that the mathematical method and the scientific method differ in detail, while nevertheless resembling each other in using iterative or recursive steps.
In Pólya's view, understanding involves restating unfamiliar definitions in your own words, resorting to geometrical figures, and questioning what we know and do not know already; analysis, which Pólya takes from Pappus, involves free and heuristic construction of plausible arguments, working backward from the goal, and devising a plan for constructing the proof; synthesis is the strict Euclidean exposition of step-by-step details of the proof; review involves reconsidering and re-examining the result and the path taken to it.
Building on Pólya's work, Imre Lakatos argued that mathematicians actually use contradiction, criticism, and revision as principles for improving their work. In like manner to science, where truth is sought, but certainty is not found, in Proofs and Refutations, what Lakatos tried to establish was that no theorem of informal mathematics is final or perfect. This means that, in non-axiomatic mathematics, we should not think that a theorem is ultimately true, only that no counterexample has yet been found. Once a counterexample, i.e. an entity contradicting/not explained by the theorem, is found, we adjust the theorem, possibly extending the domain of its validity. This is a continuous way our knowledge accumulates, through the logic and process of proofs and refutations. (However, if axioms are given for a branch of mathematics, this creates a logical system – Wittgenstein 1921, Tractatus Logico-Philosophicus 5.13. Lakatos claimed that proofs from such a system were tautological, i.e. internally logically true, by rewriting forms, as shown by Poincaré, who demonstrated the technique of transforming tautologically true forms (viz. the Euler characteristic) into or out of forms from homology, or more abstractly, from homological algebra.)
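For illustration, the running example of Proofs and Refutations is Euler's polyhedron formula, the Euler characteristic V − E + F = 2 mentioned above; a quick check on two polyhedra, and on one of Lakatos's counterexamples, shows how the conjecture and its domain get refined:

    # Illustrative check of the Euler characteristic V - E + F.
    solids = {
        "tetrahedron":   (4, 6, 4),     # (vertices, edges, faces): V - E + F = 2
        "cube":          (8, 12, 6),    # V - E + F = 2
        "picture frame": (16, 32, 16),  # a ring-shaped polyhedron: V - E + F = 0
    }
    for name, (v, e, f) in solids.items():
        print(f"{name}: V - E + F = {v - e + f}")
    # "Exceptions" such as the picture frame drive the adjustment of the theorem
    # and of the domain over which it is claimed to hold.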
Lakatos proposed an account of mathematical knowledge based on Pólya's idea of heuristics. In Proofs and Refutations, Lakatos gave several basic rules for finding proofs and counterexamples to conjectures. He thought that mathematical 'thought experiments' are a valid way to discover mathematical conjectures and proofs.
Gauss, when asked how he came about his theorems, once replied "durch planmässiges Tattonieren" (through systematic palpable experimentation).
| Physical sciences | Science: General | null |
26838 | https://en.wikipedia.org/wiki/Shotgun | Shotgun | A shotgun (also known as a scattergun, peppergun, or historically as a fowling piece) is a long-barreled firearm designed to shoot a straight-walled cartridge known as a shotshell, which discharges numerous small spherical projectiles called shot, or a single solid projectile called a slug. Shotguns are most commonly used as smoothbore firearms, meaning that their gun barrels have no rifling on the inner wall, but rifled barrels for shooting sabot slugs (slug barrels) are also available.
Shotguns come in a wide variety of calibers and gauges ranging from 5.5 mm (.22 inch) to up to , though the 12-gauge (18.53 mm or 0.729 in) and 20-gauge (15.63 mm or 0.615 in) bores are by far the most common. Almost all are breechloading, and can be single barreled, double barreled, or in the form of a combination gun. Like rifles, shotguns also come in a range of different action types, both single-shot and repeating. For non-repeating designs, over-and-under and side-by-side break action shotguns are by far the most common variants. Although revolving shotguns do exist, most modern repeating shotguns are either pump action or semi-automatic, with fully automatic, lever-action, and bolt-action designs used to a lesser extent.
Preceding smoothbore firearms (such as the musket) were widely used by European militaries from the 17th until the mid-19th century. The muzzleloading blunderbuss, the direct ancestor of the shotgun, was also used in similar roles from self-defense to riot control. Shotguns were often favored by cavalry troops in the early to mid-19th century because of their ease of use and generally good effectiveness on the move, as well as by coachmen for their substantial power. But by the late 19th century, these weapons were largely replaced on the battlefield by breechloading rifled firearms shooting spin-stabilized cylindro-conoidal bullets, which were far more accurate with longer effective ranges. The military value of shotguns was rediscovered in the First World War, when American forces used the pump-action Winchester Model 1897 shotgun in trench fighting to great effect. Since then, shotguns have been used in a variety of close quarters combat roles in civilian, law enforcement, and military applications.
The smoothbore shotgun barrel generates less resistance and thus allows greater propellant loads for heavier projectiles without as much risk of overpressure or a squib load, and is also easier to clean. The shot pellets from a shotshell are propelled indirectly through a wadding inside the shell and scatter upon leaving the barrel, which is usually choked at the muzzle end to control the projectile scatter. This means each shotgun discharge will produce a cluster of impact points instead of a single point of impact like other firearms. Having multiple projectiles also means the muzzle energy is divided among the pellets, leaving each individual projectile with less penetrative kinetic energy. The lack of spin stabilization and the generally suboptimal aerodynamic shape of the shot pellets also make them less accurate and cause them to decelerate quite quickly in flight due to drag, giving shotguns short effective ranges. In a hunting context, this makes shotguns useful primarily for hunting fast-flying birds and other agile small/medium-sized game without risking overpenetration and stray shots to distant bystanders and objects. However, in a military or law enforcement context, the high short-range blunt knockback force and large number of projectiles make the shotgun useful as a door breaching tool, a crowd control or close-quarters defensive weapon. Militants or insurgents may use shotguns in asymmetric engagements, as shotguns are commonly owned civilian weapons in many countries. Shotguns are also used for target-shooting sports such as skeet, trap, and sporting clays, which involve flying clay disks, known as "clay pigeons", thrown in various ways by a dedicated launching device called a "trap".
Design factors
Action
The action is the operating mechanism of a gun. There are many types of shotguns, typically categorized by the number of barrels or the way the gun is reloaded.
Break-action
For most of the history of the shotgun, the breechloading break-action shotgun was the most common type, and double-barreled variants remain by far the most commonly seen today. These are typically divided into two subtypes: the traditional "side-by-side" shotgun has two barrels mounted horizontally beside each other (as the name suggests), whereas the "over-and-under" shotgun has the two barrels mounted vertically, one on top of the other. Side-by-side shotguns were traditionally used for hunting and other sporting pursuits (early long-barreled side-by-side shotguns were known as "fowling pieces" for their use hunting ducks and other waterbirds as well as some landfowl), whereas over-and-under shotguns are more commonly associated with recreational use (such as clay pigeon shooting). Both types of double-barrel shotgun are used for hunting and sporting purposes, with the individual configuration largely being a matter of personal preference.
Another, less commonly encountered type of break-action shotgun is the combination gun, an over-and-under design with one smoothbore barrel and one rifled barrel (more often with the rifled barrel on top, though a rifled barrel on the bottom was not uncommon). There is also a class of break-action guns called drillings, which contain three barrels, usually two smoothbore barrels of the same gauge and one rifled barrel, though the only common theme is that at least one barrel be smoothbore. The most common arrangement was essentially a side-by-side shotgun with the rifled barrel below and centered. Usually a drilling containing more than one rifled barrel would have both rifled barrels in the same caliber, but examples do exist with barrels in different calibers, usually a .22 Long Rifle and a centerfire cartridge. Although very rare, drillings with three shotgun barrels, and even four-barreled guns (vierlings), were made.
Pump-action
In pump-action shotguns, a linearly sliding fore-end handguard (i.e. pump) is manually moved back-and-forth like a hand pump to work the action, extracting the spent shell and inserting a new round, while cocking the hammer or striker. A pump-action shotgun is typically fed from a tubular magazine underneath the barrel, which also serves as a guide rail for the pump. The rounds are fed in one by one through a port in the receiver, where they are lifted by a lever called the elevator and pushed forward into the chamber by the bolt. A pair of latches at the rear of the magazine hold the rounds in place and facilitate feeding of one shell at a time. If it is desired to load the gun fully, a round may be loaded through the ejection port directly into the chamber, or cycled from the magazine, which is then topped off with another round. Well-known examples include the Winchester Model 1897, Remington 870, and Mossberg 500/590.
Pump-action shotguns are common hunting, fowling and sporting shotguns. Hunting models generally have a barrel between . Tube-fed models designed for hunting often come with a dowel rod or other stop that is inserted into the magazine and reduces the capacity of the gun to three shells (two in the magazine and one chambered), as is mandated by U.S. federal law when hunting migratory birds. They can also easily be used with an empty magazine as a single-shot weapon, by simply dropping the next round to be fired into the open ejection port after the spent round is ejected. For this reason, pump-actions are commonly used to teach novice shooters under supervision: the trainer can load each round more quickly than with a break-action, while unlike a break-action the student can maintain their grip on the gun and concentrate on proper handling and firing of the weapon.
Pump-action shotguns with shorter barrels and little or no barrel choke are highly popular for use in home defense, military and law enforcement, and are commonly known as riot guns. The minimum barrel length for shotguns in most of the U.S. is , and this barrel length (sometimes to increase magazine capacity and/or ensure the gun is legal regardless of measuring differences) is the primary choice for riot shotguns. The shorter barrel makes the weapon easier to maneuver around corners and in tight spaces, though slightly longer barrels are sometimes used outdoors for a tighter spread pattern or increased accuracy of slug projectiles. Home-defense and law enforcement shotguns are usually chambered for 12-gauge shells, providing maximum shot power and the use of a variety of projectiles such as buckshot, rubber, sandbag and slug shells, but 20-gauge (common in bird-hunting shotguns) or .410 (common in youth-size shotguns) are also available in defense-type shotgun models allowing easier use by novice shooters.
A riot shotgun has many advantages over a handgun or rifle. Compared to "defense-caliber" handguns (chambered for 9mm Parabellum, .38 Special, .357 Magnum, .40 S&W, .45 ACP and similar), a shotgun has far more power and damage potential (up to 10 times the muzzle energy of a .45 ACP cartridge), allowing a "one-shot stop" that is more difficult to achieve with typical handgun loads. Compared to a rifle, riot shotguns are easier to maneuver due to the shorter barrel, still provide better damage potential at indoor distances (generally 3–5 meters/yards), and reduce the risk of "overpenetration"; that is, the bullet or shot passing completely through the target and continuing beyond, which poses a risk to those behind the target or behind walls. The wide spread of the shot reduces the importance of shot placement compared to a single projectile, which increases the effectiveness of "point shooting" – rapidly aiming simply by pointing the weapon in the direction of the target. This allows easy, fast use by novices.
Lever-action
Early attempts at repeating shotguns invariably centered around either bolt- or lever-action designs, drawing inspiration from contemporary repeating rifles. The earliest successful repeating shotgun was the lever-action Winchester Model 1887, designed by John Browning at the behest of the Winchester Repeating Arms Company.
Lever-action shotguns, while now less common, were popular in the late 19th century, with the Winchester Model 1887 and Model 1901 being prime examples. Demand waned after the introduction of pump-action shotguns around the start of the 20th century, and production was eventually discontinued in 1920.
One major issue with lever-actions (and to a lesser extent pump-actions) was that early shotgun shells were often made of paper or similar fragile materials (modern hulls are plastic or metal). As a result, the loading of shells, or working of the action of the shotgun, could often result in cartridges getting crushed and becoming unusable, or even damaging the gun.
Lever-action shotguns have seen a return to the gun market in recent years, however, with Winchester producing the Model 9410 (chambering the .410 bore shotgun shell and using the action of the Winchester Model 94 series lever-action rifle, hence the name), and a handful of other firearm manufacturers (primarily Norinco of China and ADI Ltd. of Australia) producing versions of the Winchester Model 1887/1901 designed for modern 12-gauge smokeless shotshells with more durable plastic casings. There has been a notable uptick in lever-action shotgun sales in Australia since 1997, when pump-actions were effectively outlawed.
Bolt-action
Bolt-action shotguns, while uncommon, do exist. One of the best-known examples is a 12-gauge with a 3-round magazine manufactured by Mossberg, marketed in Australia just after changes to the gun laws in 1997 heavily restricted the ownership and use of pump-action and semi-automatic shotguns. They were not a huge success, as they were somewhat slow and awkward to operate, and the rate of fire was noticeably slower (on average) than that of a double-barrelled gun. The Rifle Factory Ishapore in India also manufactured a single-shot .410 bore shotgun based on the SMLE Mk III* rifle. The Russian Berdana shotgun was effectively a single-shot bolt-action rifle that had become obsolete and was subsequently modified to chamber 16-gauge shotgun shells for civilian sale. The U.S. military M26 is also a bolt-action weapon. Bolt-action shotguns have also been used in the "goose gun" application, intended to kill birds such as geese at greater range. Typically, goose guns have long barrels (up to 36 inches) and small bolt-fed magazines. Bolt-action shotguns are also used in conjunction with slug shells for the maximum possible accuracy from a shotgun.
In Australia, some straight-pull bolt-action shotguns, such as the Turkish-made Pardus BA12 and Dickinson T1000, the American C-More Competition M26, as well as the indigenous-designed SHS STP 12, have become increasingly popular alternatives to lever-action shotguns, largely due to the better ergonomics with less stress on the shooter's trigger hand and fingers when cycling the action.
Revolver
Colt briefly manufactured several revolving shotguns that were met with mixed success. The Colt Model 1839 Shotgun was manufactured between 1839 and 1841. Later, the Colt Model 1855 Shotgun, based on the Model 1855 revolving rifle, was manufactured between 1860 and 1863. Because of their low production numbers and age they are among the rarest of all Colt firearms.
The Armsel Striker was a modern take on the revolving shotgun that held 10 rounds of 12-gauge ammunition in its cylinder. It was copied by Cobray as the Streetsweeper.
Taurus, along with its Brazilian partner company Rossi, manufactures a carbine variant of the Taurus Judge revolver known as the Taurus/Rossi Circuit Judge. It comes in the original combination chambering of .410 bore and .45 Long Colt, as well as in .44 Remington Magnum. The carbine has small blast shields attached to the cylinder to protect the shooter from hot gases escaping between the cylinder and barrel.
The MTs255 is a shotgun fed by a 5-round internal revolving cylinder. It is produced by TsKIB SOO, the Central Design and Research Bureau of Sporting and Hunting Arms, and is available in 12, 20, 28 and 32 gauge, and .410 bore.
Semi-automatic
Recoil/inertia-driven or gas-operated actions are other popular methods of increasing the rate of fire of a shotgun; these self-loading shotguns are generally referred to as autoloaders. Instead of having the action manually operated by a pump or lever, the action automatically cycles each time the shotgun is fired, ejecting the spent shell and reloading a fresh one into the chamber. The first successful semi-automatic shotgun was John Browning's Auto-5, first produced by Fabrique Nationale beginning in 1902. Other well-known examples include the Remington 1100, Benelli M1, and Saiga-12.
Some, such as the Franchi SPAS-12 and Benelli M3, are capable of switching between semi-automatic and pump action. These are popular for two reasons; first, some jurisdictions forbid the use of semi-automatic actions for hunting, and second, lower-powered rounds, like "reduced-recoil" buckshot shells and many less-lethal cartridges, have insufficient power to reliably cycle a semi-automatic shotgun.
Automatic
Fully automatic shotguns also exist but remain rare; examples include the Vietnam War-era Remington Model 7188 (introduced in 1967 and designed for and used by US Navy SEALs in Vietnam), the Auto Assault-12 (AA-12) and the USAS-12.
Other
In addition to the commonly encountered shotgun actions already listed, there are also shotguns based on the Martini-Henry rifle action, such as those produced by British arms maker W.W. Greener.
Some of the more interesting advances in shotgun technology include the versatile NeoStead 2000 and fully automatics such as the Pancor Jackhammer or Auto-Assault 12.
In 1925, Rodolfo Cosmi produced the first working prototype of a hybrid semi-automatic shotgun, which had an 8-round magazine located in the stock. While it reloaded automatically after each shot like a semi-automatic, it had a break-action to load the first shell. This design has only been repeated once, by Beretta with their UGB25 semi-automatic shotgun. The user loads the first shell by breaking the gun in the manner of a break-action shotgun, then closes it and inserts the second shell into a clip on the gun's right side. The spent hulls are ejected downwards. These guns combine the advantages of the break action (they can be shown to be safe by breaking them open, and there are no flying hulls) with those of the semi-automatic (low recoil and a low barrel axis, hence little muzzle flip).
The Italian firearms manufacturer Benelli Armi SpA also makes the Benelli M3, a dual-mode hybrid shotgun that allows the user the choice of semi-automatic or pump-action operation. Pump-action operation is employed when shooting less energetic shells (such as baton rounds) that do not generate enough recoil to operate the semi-automatic mechanism. Conversely, the semi-automatic mode can be employed with more powerful shells, absorbing some of the recoil. Switching between the two modes is done by manipulating the ring located at the front of the foregrip.
The French firearm manufacturer Verney-Carron produces the Véloce shotgun, a "lever-release blowback firearm" using a bolt catch mechanism like its similarly designed SpeedLine rifle. The Véloce is in essence a modified inertia-driven semi-automatic shotgun, but after blowback the bolt is trapped by a bolt stop and cannot return to battery unless it is manually released by depressing a thumb lever near the tang of the grip. Because the gun will not chamber a new round without manual actuation, the design is technically not self-loading, and Verney-Carron describes it as a "manual repeating shotgun". When Australian firearm dealers tried to import the Véloce shotgun in 2018, the Greens' David Shoebridge and anti-gun groups such as Gun Control Australia caused a moral panic in the mainstream media, calling it "semi-semi-automatic" and demanding it be prohibited as a "rapid-fire weapon".
Gauge
The gauge number is determined by the weight, in fractions of a pound, of a solid sphere of lead with a diameter equal to the inside diameter of the barrel. So a 10-gauge shotgun nominally has an inside diameter equal to that of a sphere made from one-tenth of a pound of lead. Each gauge therefore corresponds to a set bore diameter. By far the most common gauges are 12 (0.729 in, 18.5 mm diameter) and 20 (0.614 in, 15.6 mm); other more or less common gauges include 10, 16, 24, 28, 32, and 67 (.410 bore).
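The diameter implied by a gauge number can be worked out directly from that definition. The sketch below is a minimal illustration only (not part of any sizing standard); it assumes a nominal lead density of 11.34 g/cm³ and converts between gauge and bore diameter.

```python
import math

# Assumed nominal values for the illustration (not from the article).
LEAD_DENSITY_G_PER_CM3 = 11.34
GRAMS_PER_POUND = 453.592

def bore_diameter_mm(gauge: float) -> float:
    """Diameter of a lead sphere weighing 1/gauge pound, in millimetres."""
    mass_g = GRAMS_PER_POUND / gauge
    volume_cm3 = mass_g / LEAD_DENSITY_G_PER_CM3
    diameter_cm = (6.0 * volume_cm3 / math.pi) ** (1.0 / 3.0)
    return diameter_cm * 10.0

def nominal_gauge(diameter_in: float) -> float:
    """Inverse relation: how many lead spheres of this diameter weigh one pound."""
    diameter_cm = diameter_in * 2.54
    sphere_mass_g = (math.pi / 6.0) * diameter_cm ** 3 * LEAD_DENSITY_G_PER_CM3
    return GRAMS_PER_POUND / sphere_mass_g

print(round(bore_diameter_mm(12), 1))  # ~18.5 mm (about 0.73 in)
print(round(bore_diameter_mm(20), 1))  # ~15.6 mm
print(round(nominal_gauge(0.410), 1))  # ~67.6, close to the "67 gauge" figure for the .410 bore
```

The computed values land close to the nominal 12- and 20-gauge diameters quoted above; small differences come from the assumed lead density and from rounding in the standardized bore sizes.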
Different gauges have different typical applications. 12-gauge shotguns are common for hunting geese, large ducks, and other larger gamebirds; professional skeet and trap shooting; military applications; and home-defense applications. 16-gauge shotguns were once common for hunters who wanted to use a single shotgun for gamebirds normally pursued with either 12- or 20-gauge shotguns, but have become rarer in recent years. 20-gauge shotguns are often used for gamebirds such as doves, smaller ducks, and quail. 28-gauge shotguns are not as common, but are classic quail-hunting guns. .410 bore shotguns are typically used for squirrel hunting or by sportsmen seeking the challenge of killing game with a smaller load.
Other, less common shotshell loadings have their own unique uses. Ammunition manufacturer CCI produces shot cartridges in several popular pistol calibers, from .22 Long Rifle (5.5 mm) and .22 Magnum (5.5 mm) up to 9mm Parabellum (.355 in) and .45 ACP (11.43 mm); these are commonly called snake shot cartridges. Larger gauges, up to 4 bore, too powerful to fire from the shoulder, have been built, but these were generally affixed to small boats and referred to as punt guns. They were used for commercial waterfowl hunting, to kill large numbers of birds resting on the water.
Handguns have also been produced that are capable of firing either .45 (Long) Colt or .410 shotgun shells from the same chamber; they are commonly known as "snake guns". Derringers such as the "Snake Slayer" and "Cowboy Defender" are popular among some outdoorsmen in the South and Southwest regions of the United States. There are also some revolvers, such as the Taurus Judge and Smith & Wesson Governor, that are capable of shooting the .45 LC/.410 rounds, but as with derringers they are not considered shotguns.
The .410 bore (10.4 mm) is unusual in being measured in inches rather than gauge; it would be approximately 67 "real" gauge, though its short hull versions are nominally called 36-gauge in Europe. It uses a relatively small charge of shot. It is used for hunting and for skeet. Because of its very light recoil (approx. 10 N), it is often used as a beginner's gun. However, the small charge and typically tight choke make it more difficult to hit targets. It is also frequently used by expert shooters because of that difficulty, especially in expensive side-by-side and over/under models for hunting small bird game such as quail and doves. Inexpensive bolt-action .410 shotguns are a very common first hunting shotgun among young pre-teen hunters, as they are used mostly for hunting squirrels, while additionally teaching bolt-action manipulation skills that will transfer easily later to adult-sized hunting rifles. Most of these young hunters move up to a 20-gauge within a few years, and to 12-gauge shotguns and full-size hunting rifles by their late teens. Still, many who are particularly recoil-averse choose to stay with the 20-gauge all their adult life, as it is a suitable gauge for many popular hunting uses.
A recent innovation is the back-boring of barrels, in which the barrels are bored out slightly larger than their actual gauge. This reduces the compression forces on the shot when it transitions from the chamber to the barrel. This leads to a slight reduction in perceived recoil, and an improvement in shot pattern due to reduced deformation of the shot.
Shot
Most shotguns are used to fire a number of pellets of ball shot, though slugs and sabot rounds are also available. The shot pellets are for the most part made of lead, but lead has been partially replaced by bismuth, steel, tungsten-iron, tungsten-nickel-iron and even tungsten-polymer loads. Non-toxic loads are required by federal law for waterfowl hunting in the US, as the shot may be ingested by waterfowl, which some authorities believe can lead to health problems due to lead exposure. Shot is termed either birdshot or buckshot depending on the shot size. Informally, birdshot pellets have a diameter smaller than and buckshot are larger than that. Pellet size is indicated by a number; for bird shot this ranges from the smallest, 12 (1.2 mm, 0.05 in), up to 2 (3.8 mm, 0.15 in) and then BB (4.6 mm, 0.18 in).
For buckshot, the sizes run from 4 down through 3, 2, and 1 to 0 ("single-aught"), 00 ("double-aught"), 000 ("triple-aught"), and 0000 ("quadruple-aught"). A different informal distinction is that "bird shot" pellets are small enough that they can be measured into the cartridge by weight and simply poured in, whereas "buckshot" pellets are so large they must be stacked inside the cartridge in a fixed geometric arrangement to fit. The diameter in hundredths of an inch of bird shot sizes from No. 9 to No. 1 can be obtained by subtracting the shot size from 17; thus, No. 4 bird shot is 17 − 4 = 13 hundredths of an inch (0.13 in) in diameter. Different terminology is used outside the United States. In England and Australia, for example, 00 buckshot cartridges are commonly referred to as "S.G." (Swanshot gauge) cartridges.
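The American bird shot rule of thumb above is simple enough to express as a small helper; the function below is only a restatement of that arithmetic, not an official sizing table.

```python
def birdshot_diameter_in(size: int) -> float:
    """American bird shot sizes No. 9 through No. 1: diameter in inches is (17 - size)/100."""
    if not 1 <= size <= 9:
        raise ValueError("the rule of thumb covers sizes No. 9 through No. 1 only")
    return (17 - size) / 100.0

print(birdshot_diameter_in(4))  # 0.13 (in), matching the No. 4 example above
print(birdshot_diameter_in(9))  # 0.08 (in)
```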
Pattern and choke
Shot, being small, round, and delivered without spin, is ballistically inefficient. As the shot leaves the barrel it begins to disperse in the air. The resulting cloud of pellets is known as the shot pattern, or shotgun spread. The ideal pattern would be a circle with an even distribution of shot throughout, with a density sufficient to ensure enough pellets will intersect the target to achieve the desired result, such as a kill when hunting or a break when shooting clay targets. In reality the pattern is closer to a Gaussian, or normal, distribution, with a higher density in the center that tapers off at the edges. Patterns are usually measured by firing at a diameter circle on a large sheet of paper placed at varying distances. The hits inside the circle are counted and compared to the total number of pellets, and the density of the pattern inside the circle is examined. An "ideal" pattern would put nearly 100% of the pellets in the circle and would have no voids: any region where a target silhouette will fit without covering three or more holes is considered a potential problem.
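As a rough sketch of the measurement idea, the example below treats the pattern as a circular (two-dimensional, isotropic) normal distribution and estimates the fraction of pellets expected inside a test circle. The spread value and circle size are hypothetical numbers chosen purely for illustration; real patterns depend on the gun, choke, load, and distance.

```python
import math

def fraction_in_circle(circle_diameter_in: float, sigma_in: float) -> float:
    """Expected fraction of pellets inside a circle of the given diameter, assuming
    the pattern is a circular normal distribution with per-axis standard deviation
    sigma_in (inches) centered on the point of aim."""
    radius = circle_diameter_in / 2.0
    return 1.0 - math.exp(-(radius ** 2) / (2.0 * sigma_in ** 2))

# Hypothetical figures for illustration: a 30 in test circle and a 10 in per-axis spread.
print(round(fraction_in_circle(30.0, 10.0), 2))  # ~0.68, i.e. roughly two-thirds of the pellets
```

Counting actual holes on paper, as described above, is the practical equivalent of evaluating this fraction, and also reveals voids that the idealized distribution cannot show.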
A constriction in the end of the barrel known as the choke is used to tailor the pattern for different purposes. Chokes may either be formed as part of the barrel at the time of manufacture, by squeezing the end of the bore down over a mandrel, or by threading the barrel and screwing in an interchangeable choke tube. The choke typically consists of a conical section that smoothly tapers from the bore diameter down to the choke diameter, followed by a cylindrical section of the choke diameter. Briley Manufacturing, a maker of interchangeable shotgun chokes, uses a conical portion about three times the bore diameter in length, so the shot is gradually squeezed down with minimal deformation. The cylindrical section is shorter, usually . The use of interchangeable chokes has made it easy to tune the performance of a given combination of shotgun and shotshell to achieve the desired performance.
The choke should be tailored to the range and size of the targets. A skeet shooter shooting at close targets might use 127 micrometres (0.005 inches) of constriction to produce a diameter pattern at a distance of . A trap shooter shooting at distant targets might use 762 micrometres (0.030 inches) of constriction to produce a diameter pattern at . Special chokes for turkey hunting, which requires long range shots at the small head and neck of the bird, can go as high as 1500 micrometres (0.060 inches). The use of too much choke and a small pattern increases the difficulty of hitting the target, whereas the use of too little choke produces large patterns with insufficient pellet density to reliably break targets or kill game. "Cylinder barrels" have no constriction.
Other specialized choke tubes exist as well. Some turkey hunting tubes have constrictions greater than "Super Full", or additional features like porting to reduce recoil, or "straight rifling" that is designed to stop any spin that the shot column might acquire when traveling down the barrel. These tubes are often extended tubes, meaning they project beyond the end of the bore, giving more room for things like a longer conical section. Shot spreaders or diffusion chokes work opposite of normal chokes—they are designed to spread the shot more than a cylinder bore, generating wider patterns for very short range use. A number of recent spreader chokes, such as the Briley "Diffusion" line, actually use rifling in the choke to spin the shot slightly, creating a wider spread. The Briley Diffusion uses a 1 in 36 cm twist, as does the FABARM Lion Paradox shotgun.
Oval chokes, which are designed to provide a shot pattern wider than it is tall, are sometimes found on combat shotguns, primarily those of the Vietnam War era. They were available for aftermarket addition in the 1970s from companies like A & W Engineering. Military versions of the Ithaca 37 with duckbill chokes were used in limited numbers during the Vietnam War by US Navy SEALs, and arguably increased effectiveness in close-range engagements against multiple targets. Two major disadvantages plagued the system: erratic patterning, and shot that spread too quickly, giving a limited effective zone.
Offset chokes, where the pattern is intentionally slightly off center, are used to change the point of impact. For instance, an offset choke can be used to make a double-barreled shotgun with poorly aligned barrels hit the same spot with both barrels.
Barrel length
Shotguns generally have longer barrels than modern rifles. Unlike rifles, however, the long shotgun barrel is not for ballistic purposes; shotgun shells use small powder charges in large diameter bores, and this leads to very low muzzle pressures (see internal ballistics) and very little velocity change with increasing barrel length. According to Remington, modern powder in a shotgun burns completely in barrels.
Since shotguns are generally used for shooting at small, fast-moving targets, it is important to lead the target by firing slightly ahead of it, so that by the time the shot arrives, the target will have moved into the pattern.
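A back-of-the-envelope way to see how much lead is needed: the required lead is roughly the target's crossing speed multiplied by the shot's time of flight. The sketch below uses assumed round-number figures, not data from this article, and simplifies shot deceleration into a single average velocity.

```python
def required_lead_m(target_speed_m_s: float, range_m: float,
                    avg_shot_velocity_m_s: float) -> float:
    """Distance to aim ahead of a crossing target: lead = target speed * time of flight."""
    time_of_flight_s = range_m / avg_shot_velocity_m_s
    return target_speed_m_s * time_of_flight_s

# Assumed illustrative figures: a bird crossing at 20 m/s, shot at 35 m,
# with the shot averaging roughly 300 m/s over that distance.
print(round(required_lead_m(20.0, 35.0, 300.0), 1))  # ~2.3 m of lead
```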
Shotguns made for close ranges, where the angular speed of the targets is great (such as skeet or upland bird hunting), tend to have shorter barrels, around . Shotguns for longer range shooting, where angular speeds are small (trap shooting; quail, pheasant, and waterfowl hunting), tend to have longer barrels, 28 to . The longer barrels have more angular momentum, and will therefore swing more slowly but more steadily. The short, low angular momentum barrels swing faster, but are less steady. These lengths are for pump or semi-auto shotguns; break open guns have shorter overall lengths for the same barrel length, and so will use longer barrels. The break open design saves between in overall length, but in most cases pays for this by having two barrels, which adds weight at the muzzle. Barrels for shotguns have been getting longer as modern steels and production methods make the barrels stronger and lighter; a longer, lighter barrel gives the same inertia for less overall weight.
Shotguns for use against larger, slower targets generally have even shorter barrels. Small game shotguns, for hunting game like rabbits and squirrels, or shotguns for use with buckshot for deer, are often .
Shotguns intended for all-round hunting are a compromise, but a barrel pump-action 12-gauge shotgun with a modified choke can serve admirably as a single gun for general all-round hunting of small game such as quail, rabbits, pheasants, doves, and squirrels in semi-open wooded or farmland areas in many parts of the eastern US (Kentucky, Indiana, Tennessee), where dense brush is less of a hindrance and having more reach is important. For hunting in dense brush, shorter barrel lengths are often preferred when hunting the same types of game.
Caliber conversion sleeves
Shotguns are well suited to the use of caliber conversion sleeves, allowing most single- and double-barrel shotguns to fire a wide range of ammunition. The X Caliber system, for example, consists of eight adapter sleeves that allow 12-gauge models to fire .380 ACP, 9mm Luger, .38 Special, .357 Magnum, .40 S&W, .44 Special, .44 Magnum, .45 ACP, .45 Long Colt, .410 bore and 20-gauge ammunition, and four further adapter sleeves that allow 20-gauge models to fire 9mm Luger, .38 Special, .357 Magnum, .45 ACP, .45 Long Colt, and .410 bore ammunition.
Ammunition
The extremely large caliber of shotgun shells has led to a wide variety of ammunition types.
Shotshells are the most commonly used round, filled with lead or lead substitute pellets.
Of this general class, the most common subset is birdshot, which uses a large number (from dozens to hundreds) of small pellets, meant to create a wide "kill spread" to hunt birds in flight. Shot shells are described by the size and number of the pellets within, and numbered in reverse order (the smaller the number, the bigger the pellet size, similar to bore gauge). Size nine (#9) shot is the smallest size normally used for hunting and is used on small upland game birds such as dove and quail. Larger sizes are used for hunting larger upland game birds and waterfowl.
Buckshot is similar to but larger than birdshot, and was originally designed for hunting larger game, such as deer (hence the name). While the advent of new, more accurate slug technologies is making buckshot less attractive for hunting, it is still the most common choice for police, military, and home defense uses. Like birdshot, buckshot is described by pellet size, with larger numbers indicating smaller shot. From the smallest to the largest, buckshot sizes are #4 (called "number four"), #1, 0 ("one-aught"), 00 ("double-aught"), 000 ("triple-aught") and 0000 ("four-aught"). A typical round for defensive use would be a 12-gauge length 00 buck shell, which contains 9 pellets roughly in diameter, each comparable to a .38 Special bullet in damage potential. New "tactical" buckshot rounds, designed specifically for defensive use, use slightly fewer pellets at lower velocity to reduce recoil and increase the controllability of the shotgun. There are some shotgun rounds designed specifically for police use that shoot effectively from with a grouping of the balls.
Slug rounds are rounds that fire a single solid slug. They are used for hunting large game, and in certain military and law enforcement applications. Modern slugs are moderately accurate, especially when fired from special rifled slug barrels. They are often used in "shotgun-only" hunting zones near inhabited areas, where rifles are prohibited due to their greater range.
Sabots are a common type of slug round. While some slugs are exactly that—a 12-gauge metal projectile in a cartridge—a sabot is a smaller but more aerodynamic projectile surrounded by a "shoe" of some other material. This "sabot" jacket seals the barrel, increasing pressure and acceleration, while also inducing spin on the projectile in a rifled barrel. Once the projectile clears the barrel, the sabot material falls away, leaving an unmarked, aerodynamic bullet to continue toward the target. The advantages over a traditional slug are increased shot power, increased bullet velocity due to the lighter-mass bullet, and increased accuracy due to the velocity and the reduction in deformation of the slug itself. Disadvantages versus a traditional slug include lower muzzle momentum due to reduced mass, reduced damage due to smaller bullet diameter, and significantly higher per-unit cost.
Specialty ammunition
The unique properties of the shotgun, such as large case capacity, a large bore, and the lack of rifling, have led to the development of a large variety of specialty shells, ranging from novelties to high-tech military rounds.
Hunting, defensive, and military
Brenneke and Foster type slugs have the same basic configuration as normal slugs, but have increased accuracy. The hollowed rear of the Foster slug improves accuracy by placing more mass in the front of the projectile, therefore inhibiting the "tumble" that normal slugs may generate. The Brenneke slug takes this concept a bit further, with the addition of a wad that stays connected to the projectile after discharge, increasing accuracy. Both slugs are commonly found with fins or ribs, which are meant to allow the projectile to safely squeeze down during passage through chokes, but they do not increase stability in flight.
Flechette rounds contain aerodynamic darts, typically from 8 to 20 in number. The flechettes provide greatly extended range due to their aerodynamic shape and improved penetration of light armor. American troops during the Vietnam War packed their own flechette shotgun rounds, called beehive rounds after the similar artillery rounds. However, terminal performance was poor due to the very light weight of the flechettes, and their use was quickly dropped.
Grenade rounds use exploding projectiles to increase long range lethality. These are currently experimental, but the British FRAG-12, which comes in High Explosive (HE), High Explosive Armor-piercing (HEAP) and High Explosive Fragmenting Antipersonnel (HEFA) forms, is under consideration by military forces.
Less-lethal rounds, for riot and animal control
Flexible baton rounds, commonly called bean bags, fire a fabric bag filled with birdshot or a similar loose, dense substance. The "punch" effect of the bag is useful for knocking down targets; the rounds are used by police to subdue violent suspects. The bean bag round is by far the most common less-lethal round used. Due to the large surface area of these rounds, they lose velocity rapidly, and must be used at fairly short ranges to be effective, though use at extremely short ranges, under , can result in broken bones or other serious or lethal injuries. The rounds can also fly in a frisbee-like fashion and cut the person or animal being fired at. For this reason, these types of rounds are referred to as less-lethal, as opposed to less-than-lethal.
Gas shells spray a cone of gas for several meters. These are primarily used by riot police. They normally contain pepper gas or tear gas. Other variations launch a gas-grenade-like projectile.
Rock salt shells are hand loaded with coarse rock salt crystals, replacing the standard lead or steel shot. Rock salt shells could be seen as the forerunners of modern less-lethal rounds. In the United States, rock salt shells were and are sometimes still used by rural civilians to defend their property. The brittle salt was unlikely to cause serious injury at long ranges, but would cause painful stinging injuries and served as a warning. British gamekeepers have used rock salt shells to deter poachers. Rather than get into a physical confrontation, they stalk the poachers, making themselves known by a loud shout of "Run!" just before firing, to avoid hitting the now-fleeing subject in the eyes.
Rubber slugs or rubber buckshot are similar in principle to the bean bag rounds. Composed of flexible rubber or plastic and fired at low velocities, these rounds are probably the most common choice for riot control.
Taser International announced in 2007 a new 12-gauge eXtended Range Electronic Projectile or XREP, which contains a small electroshock weapon unit in a carrier that can be fired from a standard 12-gauge shotgun. The XREP projectile is fin stabilized, and travels at an initial velocity of 100 m/s (300 ft/s). Barbs on the front attach the electroshock unit to the target, with a tassel deploying from the rear to widen the circuit. A twenty-second burst of electrical energy is delivered to the target. This product was expected to be released to market in 2008. They were used—despite still being subject to testing, in breach of the supplier's license—by Northumbria police in their standoff with Raoul Moat in 2010.
Breaching rounds, often called frangible, Disintegrator, or Hatton rounds, are designed to destroy door locking mechanisms without risking lives. They are constructed of a very brittle substance that transfers most of the energy to the primary target but then fragment into much smaller pieces or dust so as not to injure unseen targets such as hostages or non-combatants that may be standing behind a breached door.
Bird bombs are low-powered rounds that fire a firecracker that is fused to explode a short time after firing. They are designed to scare animals, such as birds that congregate on airport runways.
Screechers fire a pyrotechnic whistle that emits a loud whistling sound for the duration of its flight. These are also used to scare animals.
Blank shells contain only a small amount of powder and no actual load. When fired, the blanks provide the sound and flash of a real load, but with no projectile. These may be used for simulation of gunfire, scaring wildlife, or as power for a launching device such as the Mossberg #50298 marine line launcher.
Another less-lethal design is a type of shotgun shell which contains sixteen 00-buck balls made of Zytel, designed as non-lethal ammunition ideally used in small spaces.
Novelty and other
Bolo rounds are made of two or more slugs molded onto steel wire. When fired, the slugs separate, pulling the wire taut creating a flying blade, which could theoretically decapitate people and animals or amputate limbs. However, many active shotgun users consider this to be overstated, and view bolo shells as being less effective than conventional ammunition. Bolo shell rounds are banned in many locations (including the US states of Florida and Illinois) due to concerns about their potential lethality. The round is named in reference to bolas, which use two or more weighted balls on a rope to trap cattle or game.
Dragon's breath usually refers to a zirconium-based pyrotechnic shotgun round. When fired, a gout of flame erupts from the barrel of the gun (up to ). The visual effect it produces is impressive, similar to that of a short ranged flamethrower. However, it has few tactical uses, mainly distraction/disorientation.
Flare rounds are sometimes carried by hunters for safety and rescue purposes. They are available in low and high altitude versions. Some brands claim they can reach a height of up to .
Uses
The typical use of a shotgun is against small and fast moving targets, often while in the air. The spreading of the shot allows the user to point the shotgun close to the target, rather than having to aim precisely as in the case of a single projectile. The disadvantages of shot are limited range and limited penetration of the shot, which is why shotguns are used at short ranges, and typically against smaller targets. Larger shot sizes, up to the extreme case of the single projectile slug load, result in increased penetration, but at the expense of fewer projectiles and lower probability of hitting the target.
Aside from the most common use against small, fast moving targets, the shotgun has several advantages when used against still targets. First, it has enormous stopping power at short range, more than nearly all handguns and many rifles. Though many believe the shotgun is a great firearm for inexperienced shooters, the truth is, at close range, the spread of shot is not very large at all, and competency in aiming is still required. A typical self-defense load of buckshot contains 8–27 large lead pellets, resulting in many wound tracks in the target. Also, unlike a fully jacketed rifle bullet, each pellet of shot is less likely to penetrate walls and hit bystanders (though in the case of traditional 00-Buck, overpenetration of soft and hard targets may be an issue). It is favored by law enforcement for its low penetration and high stopping power.
On the other hand, the hit potential of a defensive shotgun is often overstated. The typical defensive shot is taken at very close ranges, at which the shot charge expands no more than a few centimeters. This means the shotgun must still be aimed at the target with some care. Balancing this is the fact that shot spreads further upon entering the target, and the multiple wound channels are far more likely to produce a disabling wound than a rifle or handgun.
Sporting
Some of the most common uses of shotguns are the sports of skeet shooting, trap shooting, and sporting clays. These involve shooting at clay discs, also known as clay pigeons, thrown by hand or by machine. Both skeet and trap competitions are featured at the Olympic Games.
Hunting
The shotgun is popular for bird hunting (called "game-shooting" in the United Kingdom, where "hunting" refers to hunting mammals with a pack of hounds); it is also used for more general forms of hunting, especially in semi-populated areas where the range of rifle bullets may pose a hazard. Use of a smoothbore shotgun with a rifled slug, or alternatively a rifled-barrel shotgun with a sabot slug, improves accuracy to or more. This is well within the range of the majority of kill shots by experienced hunters using shotguns.
However, given the relatively low muzzle velocity of slug ammunition, typically around 500 m/s (about 1600 feet per second), and the blunt, poorly streamlined shape of typical slugs (which cause them to lose velocity very rapidly, compared to rifle bullets), a hunter must pay close attention to the ballistics of the particular ammunition used to ensure an effective and humane kill shot.
At any reasonable range, shotgun slugs make effective lethal wounds due to their tremendous mass, reducing the length of time that an animal might suffer. For example, a typical 12-gauge shotgun slug is a blunt piece of metal of roughly 18 mm (.729 inch) caliber that weighs 28 grams (432 grains). For comparison, a common deer-hunting rifle round is a 7.62 mm (.308 inch) bullet weighing 9.7 grams (150 grains), but the dynamics of the rifle cartridge allow for a different type of wound, and a much longer reach.
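To put rough numbers on that comparison, the sketch below computes muzzle energy and momentum for both projectiles. The slug figures (28 g at about 500 m/s) come from the preceding paragraphs; the rifle muzzle velocity is an assumed typical value, not taken from this article.

```python
def energy_j(mass_kg: float, velocity_m_s: float) -> float:
    """Kinetic energy in joules: E = 0.5 * m * v^2."""
    return 0.5 * mass_kg * velocity_m_s ** 2

def momentum_kg_m_s(mass_kg: float, velocity_m_s: float) -> float:
    """Momentum in kg*m/s: p = m * v."""
    return mass_kg * velocity_m_s

# 12-gauge slug: 28 g at roughly 500 m/s (figures from the text above).
print(round(energy_j(0.028, 500.0)))            # ~3500 J
print(round(momentum_kg_m_s(0.028, 500.0), 1))  # ~14.0 kg*m/s

# Deer rifle round: 9.7 g; ~850 m/s is an assumed typical muzzle velocity.
print(round(energy_j(0.0097, 850.0)))            # ~3500 J: comparable energy
print(round(momentum_kg_m_s(0.0097, 850.0), 1))  # ~8.2 kg*m/s: considerably less momentum
```

Under these assumed figures the two projectiles carry broadly similar muzzle energy, but the heavy slug carries substantially more momentum while shedding velocity, and therefore reach, much faster.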
Shotguns are often used with rifled barrels in locations where it is not lawful to hunt with a rifle. Typically, a sabot slug is used in these barrels for maximum accuracy and performance. Shotguns are often used to hunt whitetail deer in the thick brush and briers of the Southeastern and upper Midwestern United States, where, due to the dense cover, ranges tend to be close – 25m or less.
Sabot slugs are essentially very large hollow point bullets, and are streamlined for maximum spin and accuracy when shot through a rifled barrel. They have greater ranges than older Foster and Brenneke-type slugs.
People often use semi-automatic or pump-action shotguns for hunting everything from waterfowl to small game.
Industrial
Another use is the industrial shotgun, used to knock down buildups of slag in kilns. These are similar in use to, but distinct from, powder-actuated tools. They are also used in the mining and cement industries for knocking down overhangs, residue buildup, and the like. A well-known example of an industrial shotgun is the 8-gauge Remington MasterBlaster.
Law enforcement
In many countries, especially the United States and Canada, shotguns are widely used as a support weapon by police forces. One of the rationales for issuing shotguns is that, even without much training, an officer will probably be able to hit targets at close to intermediate range, due to the "spreading" effect of buckshot. This is largely a myth, as the spread of buckshot at 25 feet averages 8 inches, which is still very capable of missing a target. Some police forces are replacing shotguns in this role with carbine rifles such as AR-15s. Shotguns are also used in roadblock situations, where police are blocking a highway to search cars for suspects. In the US, law enforcement agencies often use riot shotguns, especially for crowd and riot control where they may be loaded with less-lethal rounds such as rubber bullets or bean bags. Shotguns are also often used as breaching devices to defeat locks.
Military
Shotguns are common weapons in military use, particularly for special purposes. Shotguns are found aboard naval vessels for shipboard security, because the weapon is very effective at close range as a way of repelling enemy boarding parties. In a naval setting, stainless steel shotguns are often used, because regular steel is more prone to corrosion in the marine environment. Shotguns are also used by military police units. U.S. Marines have used shotguns since their inception at the squad level, often in the hands of NCOs, while the U.S. Army often issued them to a squad's point man. Shotguns were modified for and used in the trench warfare of World War I, in the jungle combat of World War II and the Vietnam War. Shotguns were also used in the Iraq War, being popular with soldiers in urban combat environments. Some U.S. units in Iraq used shotguns with special frangible breaching rounds to blow the locks or hinges off doors when making a surprise entry into a dwelling.
Home and personal defense
Shotguns are a popular means of home defense for many of the same reasons they are preferred for close-quarters tasks in law enforcement and the military.
History
Most early firearms, such as the blunderbuss, arquebus, and musket had large diameter, smoothbore barrels, and could fire shot as well as solid balls. A firearm intended for use in wing shooting of birds was known as a fowling piece. The 1728 Cyclopaedia defines a fowling piece as:
For example, the Brown Bess musket, in service with the British army from 1722 to 1838, had a 19 mm (.75 inch) smoothbore barrel, roughly the same as a 10-gauge shotgun, and was long, just short of the above recommended 168 cm (5 feet). On the other hand, records from the Plymouth colony show a maximum length of 137 cm (4 feet) for fowling pieces, shorter than the typical musket.
Shot was also used in warfare; the buck and ball loading, combining a musket ball with three or six buckshot, was used throughout the history of the smoothbore musket. The first recorded use of the term shotgun was in 1776 in Kentucky. It was noted as part of the "frontier language of the West" by James Fenimore Cooper.
With the adoption of smaller bores and rifled barrels, the shotgun began to emerge as a separate entity. Shotguns have long been the preferred method for sport hunting of birds, and the largest shotguns, the punt guns, were used for commercial hunting. The double-barreled shotgun has changed little since the development of the boxlock action in 1875. Modern innovations such as interchangeable chokes and subgauge inserts make the double-barreled shotgun the shotgun of choice in skeet, trap shooting, and sporting clays, as well as with many hunters.
As wing shooting has been a prestige sport, specialty gunsmiths such as Krieghoff or Perazzi have produced fancy double-barrel guns for wealthy European and American hunters. These weapons can cost US$5,000 or more; some elaborately decorated presentation guns have sold for up to US$100,000.
During its long history, the shotgun has been favored by bird hunters, guards, and law enforcement officials. The shotgun has fallen in and out of favor with military forces several times in its long history. Shotguns and similar weapons are simpler than long-range rifles, and were developed earlier. The development of more accurate and deadlier long-range rifles minimized the usefulness of the shotgun on the open battlefields of European wars. But armies have "rediscovered" the shotgun for specialty uses many times.
19th century
During the 19th century, shotguns were mainly employed by cavalry units. Both sides of the American Civil War employed shotguns. U.S. cavalry used the shotgun extensively during the Indian Wars in the latter half of the 19th century. Mounted units favored the shotgun for its moving target effectiveness, and devastating close-range firepower. The shotgun was also favored by citizen militias and similar groups.
With the exception of cavalry units, the shotgun saw less and less use throughout the 19th century on the battlefield. As a defense weapon it remained popular with guards and lawmen, however, and the shotgun became one of many symbols of the American Old West. Lawman Cody Lyons killed two men with a shotgun; his friend Doc Holliday's only confirmed kill was with a shotgun. The weapon both these men used was the short-barreled version favored by private strongbox guards on stages and trains. These guards, called express messengers, became known as shotgun messengers, since they rode with the weapon (loaded with buckshot) for defense against bandits. Passenger carriages carrying a strongbox usually had at least one private guard armed with a shotgun riding in front of the coach, next to the driver. This practice has survived in American slang; the term "riding shotgun" is used for the passenger who sits in the front passenger seat. The shotgun was a popular weapon for personal protection in the American Old West, requiring less skill on the part of the user than a revolver.
Hammerless shotguns
The origins of the hammerless shotgun are European but otherwise obscure. The earliest breechloading shotguns originated in France and Belgium in the early 19th century (see also the history of the Pinfire), and a number of them, such as those by Robert and Chateauvillard from the 1830s and 1840s, did not use hammers. In fact, during these decades a wide variety of ingenious weapons, including rifles, adopted what is now often known as a 'needle-fire' method of igniting the charge, where a firing pin or a longer, sharper needle provided the necessary impact. The most widely used British hammerless needle-fire shotgun was the unusual hinged-chamber fixed-barrel breech-loader by Joseph Needham, produced from the 1850s. By the 1860s hammerless guns were increasingly used in Europe, both in war and sport, although hammer guns were still very much in the majority. The first significant encroachment on hammer guns was a hammerless patent that could be used with a conventional side-lock: British gunmaker T. Murcott's 1871 action, nicknamed the 'mousetrap' on account of its loud snap action. However, the most successful hammerless innovation of the 1870s was Anson and Deeley's boxlock patent of 1875. This simple but ingenious design used only four moving parts, allowing the production of cheaper and more reliable shotguns.
Daniel Myron LeFever is credited with the invention of the American hammerless shotgun. Working for Barber & LeFever in Syracuse, New York, he introduced his first hammerless shotgun in 1878. This gun was cocked with external cocking levers on the side of the breech. He went on to patent the first truly automatic hammerless shotgun in 1883. This gun automatically cocked itself when the breech was closed. He later developed the mechanism to automatically eject the shells when the breech was opened.
John Moses Browning
One of the men most responsible for the modern development of the shotgun was prolific gun designer John Browning. While working for Winchester Firearms, Browning revolutionized shotgun design. In 1887, Browning introduced the Model 1887 Lever Action Repeating Shotgun, which loaded a fresh cartridge from its internal magazine by the operation of the action lever. Before this time most shotguns were the 'break open' type.
This development was greatly overshadowed by two further innovations he introduced at the end of the 19th century. In 1893, Browning produced the Model 1893 Pump Action Shotgun, introducing the now-familiar pump action to the market. In 1900, he patented the Browning Auto-5, America's first semi-automatic shotgun; the first semi-automatic shotgun in the world had been patented in 1891–1893 by the Clair brothers of France. The Browning Auto-5 remained in production until 1998.
World wars
The decline in military use of shotguns reversed in World War I. American forces under General Pershing employed 12-gauge pump-action shotguns when they were deployed to the Western Front in 1917. These shotguns were fitted with bayonets and a heat shield so the barrel could be gripped while the bayonet was deployed. Shotguns fitted in this fashion became known as trench guns by the United States Army. Those without such modifications were known as riot guns. After World War I, the United States military began referring to all shotguns as riot guns.
Due to the cramped conditions of trench warfare, the American shotguns were extremely effective. Germany even filed an official diplomatic protest against their use, alleging they violated the laws of warfare. The judge advocate general reviewed the protest, and it was rejected because the Germans protested use of lead shot (which would have been illegal) but military shot was plated. This is the only occasion the legality of the shotgun's use in warfare has been questioned.
During World War II, the shotgun was not heavily used in the war in Europe by official military forces. However, the shotgun was a favorite weapon of Allied-supported partisans, such as the French Resistance. By contrast, in the Pacific theater, thick jungles and heavily fortified positions made the shotgun a favorite weapon of the United States Marines. Marines tended to use pump shotguns, since the pump action was less likely to jam in the humid and dirty conditions of the Pacific campaign. Similarly, the United States Navy used pump shotguns to guard ships when in port in Chinese harbors (e.g., Shanghai). The United States Army Air Forces also used pump shotguns to guard bombers and other aircraft against saboteurs when parked on airbases across the Pacific and on the West Coast of the United States. Pump and semi-automatic shotguns were used in marksmanship training, particularly for bomber gunners. The most common pump shotguns used for these duties were the 12-gauge Winchester Model 97 and Model 12. The break-open action, single barrel shotgun was used by the British Home Guard and U.S. home security forces. Notably, industrial centers (such as the Gopher State Steel Works) were guarded by National Guard soldiers with Winchester Model 37 12-gauge shotguns.
Late 20th century to present
Since the end of World War II, the shotgun has remained a specialty weapon for modern armies. It has been deployed for specialized tasks where its strengths were put to particularly good use. It was used to defend machine gun emplacements during the Korean War, American and French jungle patrols used shotguns during the Vietnam War, and shotguns saw extensive use as door-breaching and close-quarters weapons in the early stages of the Iraq War, as well as limited use by tank crews. Many modern navies make extensive use of shotguns by personnel engaged in boarding hostile ships, as any shots fired will almost certainly be over a short range. Nonetheless, shotguns are far less common in military use than rifles, carbines, submachine guns, or pistols.
On the other hand, the shotgun has become a standard in law enforcement use. A variety of specialty less-lethal or non-lethal ammunitions, such as tear gas shells, bean bags, flares, explosive sonic stun rounds, and rubber projectiles, all packaged into 12-gauge shotgun shells, are produced specifically for the law enforcement market. Recently, Taser International introduced a self-contained electronic weapon which is fired from a standard 12-gauge shotgun.
The shotgun remains a standard firearm for hunting throughout the world for all sorts of game from birds and small game to large game such as deer. The versatility of the shotgun as a hunting weapon has steadily increased as slug rounds and more advanced rifled barrels have given shotguns longer range and higher killing power. The shotgun has become a ubiquitous firearm in the hunting community.
Legal issues
Globally, shotguns are generally not as heavily regulated as rifles or handguns, likely because they lack the range of rifles and are not as easily concealed as handguns; thus, they are perceived as a lesser threat by legislative authorities. One exception is the sawed-off shotgun, especially the lupara, as it is more easily concealed than a normal shotgun.
Australia
Within Australia, all shotguns manufactured after 1 January 1901 are considered firearms and are subject to registration and licensing. Most shotguns (including break-action, bolt-action and lever-action shotguns) are classed as "Category A" weapons and, as such, are comparatively easy to obtain a licence for, given a legally recognised "legitimate reason" (compare to the British requirement for "good reason" for a FAC), such as sport shooting or hunting. However, pump-action and semi-automatic shotguns are classed as "Category C" (magazine capacity no more than 5 rounds) or "Category D" (magazine capacity more than 5 rounds) weapons; a licence for this type of firearm is, practically speaking, unavailable to the average citizen due to the difficulty and red tape of acquiring one. For more information, see Gun politics in Australia.
Canada
Canada has three classifications of firearms: non-restricted, restricted, and prohibited. Shotguns are found in all three classes.
All non-restricted shotguns must have an overall length of at least . Semi-automatic shotguns must also have a barrel length of no less than and a capacity of 5 shells or fewer in the magazine to remain non-restricted. All other shotgun action types (pump/slide, break open, lever, bolt) do not have a magazine limit restriction or a minimum barrel length, provided the overall length of the firearm remains more than and the barrel was produced by an approved manufacturer. Shotgun barrels may only be reduced in length to a minimum of . Non-restricted shotguns may be possessed with any Possession and Acquisition Licence (PAL) or Possession-Only Licence (POL), may be transported throughout the country without special authorization, and may be used for hunting certain species at certain times of the year.
Semi-automatic shotguns with a barrel length of less than are considered restricted and any shotgun that has been altered so its barrel length is less than or if its overall length is less than is considered prohibited. Restricted and prohibited shotguns may be possessed with a PAL or POL that has been endorsed for restricted or prohibited grandfathered firearms. These shotguns require special Authorization to Transport (ATT).
The Canadian Firearms Registry was a government-run registry of all legally owned firearms in Canada. The government provided amnesty from prosecution to owners who failed to register their non-restricted shotguns and rifles. The long-gun portion of the registry was scrapped in 2011.
An official Canadian list of non-restricted, restricted, and prohibited firearms is available online.
United Kingdom
In the United Kingdom, a Shotgun Certificate (SGC) is required to possess a "Section 2" shotgun. A certificate costs £50 and can be denied only if the chief of police for the area believes and can prove that the applicant poses a real danger to the public, if the applicant has been convicted of a crime punishable by imprisonment for a term of three years or more, or if the applicant cannot securely store a shotgun (gun clamps, wire locks and locking gun cabinets are considered secure). To qualify as a Section 2 shotgun, a gun must be a smooth-bore weapon with a barrel at least 24 inches long and either no magazine or a non-detachable magazine incapable of holding more than two rounds. The magazine restriction does not count the chamber, so it is legal to have a single-barrelled semi-automatic or pump-action shotgun that holds three rounds in total (two in the magazine plus one in the chamber), or a shotgun with separate chambers (which would need to be multi-barrelled) that holds more.
Prior to an SGC being issued, an interview is conducted with the local Firearms Officer. In the past this was a duty undertaken by the local police, but more recently the function has been contracted out to civilian staff. The officer checks the location and suitability of the gun safe that is to be used for storage and conducts a general interview to establish the applicant's reasons for requiring an SGC.
An SGC holder can own any number of shotguns meeting these requirements, so long as they can all be stored securely. No certificate is required to own shotgun ammunition, but one is required to buy it. There is no restriction on the amount of shotgun ammunition that can be bought or owned, and there are no rules regarding its storage.
However, shotgun ammunition which contains fewer than 6 projectiles requires a section 1 Firearms Certificate (FAC). Shotguns with a magazine capacity greater than 2 rounds are also considered to be section 1 firearms and, as such, require an FAC to own. An FAC costs £50 but is much more restrictive than an SGC. The applicant must nominate two referees who are known to the applicant to vouch for his or her character; a new 'variation' is required for each new caliber of gun to be owned; limits are set on how much ammunition a person can own at any one time; and an FAC can be denied if the applicant does not have sufficient 'good reason'. 'Good reason' generally means hunting, collecting, or target shooting – though other reasons may be acceptable. Personal defense is not an acceptable reason.
Any pump-action or semi-automatic smoothbore gun (such as a shotgun) with a barrel length of less than 24 inches or total length of less than 40 inches is considered to be a section 5 firearm, that is, one that is subject to general prohibition, unless it is chambered for .22 caliber rimfire ammunition.
United States
In the US, federal law prohibits shotguns from being capable of holding more than three shells including the round in the chamber when used for hunting migratory gamebirds such as doves, ducks, and geese. For other uses, a capacity of any number of shells is generally permitted. Most magazine-fed shotguns come with a removable magazine plug to limit the capacity to 2, plus 1 in the chamber, for a total of 3 rounds, while hunting migratory gamebirds. Certain states have restrictions on magazine capacity or design features under hunting or assault weapon laws.
Shotguns intended for defensive private use have barrels as short as 18 inches, the minimum shotgun barrel length allowed by law in the United States without federal registration. Shotguns with a barrel length of less than 18 inches, as measured from the breechface to the muzzle with the weapon in battery, or with an overall length of less than 26 inches, are classified as short-barreled shotguns (SBS) under the 1934 National Firearms Act and are regulated. A similar short-barreled weapon having a pistol grip may be classified as an AOW ("Any Other Weapon") or a "Firearm", depending on barrel length; a shotgun is defined as a weapon (with a buttstock) designed to be fired from the shoulder, and the classification varies depending on how the weapon was originally manufactured.
Shotguns used by military, police, and other government agencies are regulated under the National Firearms Act of 1934; however, they are exempt from transfer taxes. These weapons commonly have very short barrels so that they are easier to handle in confined spaces. Private citizens who are not prohibited from owning firearms may own short-barreled shotguns by passing extensive background checks (state and local laws may be more restrictive), paying a $200 federal tax, and being issued a stamp. Defensive shotguns sometimes have no buttstock or have a folding stock to reduce overall length even more when required. AOWs transfer with a $5 tax stamp from the BATFE.
| Technology | Projectile weapons | null |
26872 | https://en.wikipedia.org/wiki/SI%20base%20unit | SI base unit | The SI base units are the standard units of measurement defined by the International System of Units (SI) for the seven base quantities of what is now known as the International System of Quantities: they are notably a basic set from which all other SI units can be derived. The units and their physical quantities are the second for time, the metre (sometimes spelled meter) for length or distance, the kilogram for mass, the ampere for electric current, the kelvin for thermodynamic temperature, the mole for amount of substance, and the candela for luminous intensity. The SI base units are a fundamental part of modern metrology, and thus part of the foundation of modern science and technology.
The SI base units form a set of mutually independent dimensions as required by dimensional analysis commonly employed in science and technology.
The names and symbols of SI base units are written in lowercase, except the symbols of those named after a person, which are written with an initial capital letter. For example, the metre has the symbol m, but the kelvin has the symbol K because it is named after Lord Kelvin; likewise, the ampere, named after André-Marie Ampère, has the symbol A.
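To make the rule concrete, here is a minimal Python sketch (the mapping and its name are our own illustration, not part of any standard library) listing the seven base units and marking where a person's name forces a capital symbol:

```python
# Minimal sketch: the seven SI base units as a Python mapping, illustrating
# the capitalisation rule for unit symbols.
SI_BASE_UNITS = {
    "time":                      ("second",   "s"),
    "length":                    ("metre",    "m"),
    "mass":                      ("kilogram", "kg"),
    "electric current":          ("ampere",   "A"),   # named after Ampère -> capital symbol
    "thermodynamic temperature": ("kelvin",   "K"),   # named after Lord Kelvin -> capital symbol
    "amount of substance":       ("mole",     "mol"),
    "luminous intensity":        ("candela",  "cd"),
}

for quantity, (name, symbol) in SI_BASE_UNITS.items():
    # Unit names are always lowercase; only symbols of units named after a
    # person (A, K) start with a capital letter.
    print(f"{quantity}: {name} ({symbol})")
```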
Definitions
On 20 May 2019, as the final act of the 2019 revision of the SI, the BIPM officially introduced new definitions of the seven base units, replacing the preceding definitions.
2019 revision of the SI
New base unit definitions were adopted on 16 November 2018, and they became effective on 20 May 2019. The definitions of the base units have been modified several times since the Metre Convention in 1875, and new additions of base units have occurred. Since the redefinition of the metre in 1960, the kilogram had been the only base unit still defined directly in terms of a physical artefact, rather than a property of nature. This led to a number of the other SI base units being defined indirectly in terms of the mass of the same artefact; the mole, the ampere, and the candela were linked through their definitions to the mass of the International Prototype of the Kilogram, a roughly golfball-sized platinum–iridium cylinder stored in a vault near Paris.
It has long been an objective in metrology to define the kilogram in terms of a fundamental constant, in the same way that the metre is now defined in terms of the speed of light. The 21st General Conference on Weights and Measures (CGPM, 1999) placed these efforts on an official footing, and recommended "that national laboratories continue their efforts to refine experiments that link the unit of mass to fundamental or atomic constants with a view to a future redefinition of the kilogram". Two possibilities attracted particular attention: the Planck constant and the Avogadro constant.
In 2005, the International Committee for Weights and Measures (CIPM) approved preparation of new definitions for the kilogram, the ampere, and the kelvin and it noted the possibility of a new definition of the mole based on the Avogadro constant. The 23rd CGPM (2007) decided to postpone any formal change until the next General Conference in 2011.
In a note to the CIPM in October 2009, Ian Mills, the President of the CIPM Consultative Committee – Units (CCU) catalogued the uncertainties of the fundamental constants of physics according to the current definitions and their values under the proposed new definition. He urged the CIPM to accept the proposed changes in the definition of the kilogram, ampere, kelvin, and mole so that they are referenced to the values of the fundamental constants, namely the Planck constant (h), the elementary charge (e), the Boltzmann constant (k), and the Avogadro constant (NA). This approach was approved in 2018, only after measurements of these constants were achieved with sufficient accuracy.
| Physical sciences | Measurement systems | Basics and measurement |
26873 | https://en.wikipedia.org/wiki/Second | Second | The second (symbol: s) is a unit of time, historically defined as 1/86400 of a day – a factor derived from the division of the day first into 24 hours, then into 60 minutes and finally into 60 seconds each (24 × 60 × 60 = 86400).
The current and formal definition in the International System of Units (SI) is more precise: The second [...] is defined by taking the fixed numerical value of the caesium frequency, ΔνCs, the unperturbed ground-state hyperfine transition frequency of the caesium-133 atom, to be 9 192 631 770 when expressed in the unit Hz, which is equal to s−1.
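As a rough illustration of that definition, the following Python sketch (assumed values only; the fixed frequency is the one quoted above) shows how one second corresponds to 9 192 631 770 periods of the caesium radiation:

```python
from fractions import Fraction

# Hedged sketch: the SI fixes the caesium hyperfine frequency ΔνCs; one
# second is then 9 192 631 770 periods of the corresponding radiation.
DELTA_NU_CS = 9_192_631_770             # Hz, fixed numerical value

period = Fraction(1, DELTA_NU_CS)       # duration of one period, in seconds
print(float(period))                    # ≈ 1.088e-10 s
print(DELTA_NU_CS * period)             # exactly 1 (one second)
```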
This current definition was adopted in 1967 when it became feasible to define the second based on fundamental properties of nature with caesium clocks. Because the speed of Earth's rotation varies and is slowing ever so slightly, a leap second is added at irregular intervals to civil time to keep clocks in sync with Earth's rotation.
Etymology
"Minute" comes from the Latin pars minuta prima, meaning "first small part", i.e. the first sexagesimal division of the hour into sixty, and "second" comes from pars minuta secunda, "second small part", from dividing again into sixty.
Uses
Analog clocks and watches often have sixty tick marks on their faces, representing seconds (and minutes), and a "second hand" to mark the passage of time in seconds. Digital clocks and watches often have a two-digit seconds counter.
SI prefixes are frequently combined with the word second to denote subdivisions of the second: milliseconds (thousandths), microseconds (millionths), nanoseconds (billionths), and sometimes smaller units of a second. Multiples of seconds are usually counted in hours and minutes. Though SI prefixes may also be used to form multiples of the second such as kiloseconds (thousands of seconds), such units are rarely used in practice. An everyday experience with small fractions of a second is a 1-gigahertz microprocessor, which has a cycle time of 1 nanosecond. Camera shutter speeds are also often expressed as fractions of a second.
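A small Python sketch of the prefix arithmetic and the gigahertz example above (values assumed for illustration):

```python
# Illustrative sketch: SI prefixes applied to the second, and the
# 1-gigahertz processor example from the text.
milli, micro, nano = 1e-3, 1e-6, 1e-9   # thousandth, millionth, billionth of a second

clock_frequency_hz = 1e9                # a 1 GHz microprocessor
cycle_time_s = 1 / clock_frequency_hz   # duration of one clock cycle
print(cycle_time_s / nano, "ns")        # -> 1.0 ns
```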
Sexagesimal divisions of the day from a calendar based on astronomical observation have existed since the third millennium BC, though they were not seconds as we know them today. Small divisions of time could not be measured back then, so such divisions were mathematically derived. The first timekeepers that could count seconds accurately were pendulum clocks invented in the 17th century. Starting in the 1950s, atomic clocks became better timekeepers than Earth's rotation, and they continue to set the standard today.
Clocks and solar time
A mechanical clock, which does not depend on measuring the relative rotational position of the Earth, keeps uniform time called mean time, within whatever accuracy is intrinsic to it. That means that every second, minute and every other division of time counted by the clock has the same duration as any other identical division of time. But a sundial, which measures the relative position of the Sun in the sky, called apparent time, does not keep uniform time. The time kept by a sundial varies by time of year, meaning that seconds, minutes and every other division of time are of different durations at different times of the year. The time of day measured with mean time versus apparent time may differ by as much as 15 minutes, but a single day differs from the next by only a small amount; 15 minutes is a cumulative difference over a part of the year. The effect is due chiefly to the obliquity of Earth's axis with respect to its orbit around the Sun.
The difference between apparent solar time and mean time was recognized by astronomers since antiquity, but prior to the invention of accurate mechanical clocks in the mid-17th century, sundials were the only reliable timepieces, and apparent solar time was the only generally accepted standard.
Events and units of time in seconds
Fractions of a second are usually denoted in decimal notation, for example 2.01 seconds, or two and one hundredth seconds. Multiples of seconds are usually expressed as minutes and seconds, or hours, minutes and seconds of clock time, separated by colons, such as 11:23:24, or 45:23 (the latter notation can give rise to ambiguity, because the same notation is used to denote hours and minutes). It rarely makes sense to express longer periods of time like hours or days in seconds, because they are awkwardly large numbers. For the metric unit of second, there are decimal prefixes covering submultiples and multiples across many orders of magnitude.
Some common units of time in seconds are: a minute is 60 seconds; an hour is 3,600 seconds; a day is 86,400 seconds; a week is 604,800 seconds; a year (other than leap years) is 31,536,000 seconds; and a (Gregorian) century averages 3,155,695,200 seconds; with all of the above excluding any possible leap seconds. In astronomy, a Julian year is precisely 31,557,600 seconds.
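These conversions can be checked with a few lines of Python; the figures below simply re-derive the numbers quoted above (leap seconds ignored):

```python
# Sketch checking the unit conversions quoted in the text.
minute = 60
hour   = 60 * minute               # 3,600 s
day    = 24 * hour                 # 86,400 s
week   = 7 * day                   # 604,800 s
year   = 365 * day                 # 31,536,000 s (non-leap year)
julian_year = 365.25 * day         # 31,557,600 s, used in astronomy
gregorian_century = 100 * 365.2425 * day   # mean Gregorian year = 365.2425 days

print(hour, day, week, year)                         # 3600 86400 604800 31536000
print(round(julian_year), round(gregorian_century))  # 31557600 3155695200
```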
Some common events in seconds are: a stone falls about 4.9 meters from rest in one second; a pendulum of length about one meter has a swing of one second, so pendulum clocks have pendulums about a meter long; the fastest human sprinters run 10 meters in a second; an ocean wave in deep water travels about 23 meters in one second; sound travels about 343 meters in one second in air; light takes 1.3 seconds to reach Earth from the surface of the Moon, a distance of 384,400 kilometers.
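A hedged back-of-the-envelope sketch of the one-second examples above, using assumed standard values for g, the speed of light, and the Earth–Moon distance:

```python
import math

# Rough checks of the one-second examples; g, c and the Earth-Moon distance
# are assumed standard figures, not values taken from this article.
g = 9.81                         # m/s^2, standard gravity
c = 299_792_458                  # m/s, speed of light

fall = 0.5 * g * 1**2            # distance fallen from rest in 1 s, ≈ 4.9 m

# A "swing" of one second is half a pendulum period, so the full period T = 2 s:
L = g * (2 / (2 * math.pi))**2   # pendulum length, ≈ 0.99 m

moon = 384_400_000 / c           # light travel time from the Moon, ≈ 1.28 s

print(f"fall ≈ {fall:.1f} m, pendulum ≈ {L:.2f} m, Moon light time ≈ {moon:.2f} s")
```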
Other units incorporating seconds
A second is directly part of other units, such as frequency measured in hertz (inverse seconds or s−1), speed in meters per second, and acceleration in meters per second squared. The metric system unit becquerel, a measure of radioactive decay, is measured in inverse seconds and higher powers of second are involved in derivatives of acceleration such as jerk. Though many derivative units for everyday things are reported in terms of larger units of time, not seconds, they are ultimately defined in terms of the SI second; this includes time expressed in hours and minutes, velocity of a car in kilometers per hour or miles per hour, kilowatt hours of electricity usage, and speed of a turntable in rotations per minute.
Moreover, most other SI base units are defined by their relationship to the second: the meter is defined by setting the speed of light (in vacuum) to be 299 792 458 m/s, exactly; definitions of the SI base units kilogram, ampere, kelvin, and candela also depend on the second. The only base unit whose definition does not depend on the second is the mole, and only two of the 22 named derived units, radian and steradian, do not depend on the second either.
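For instance, the fixed value of the speed of light ties the metre directly to the second, and everyday derived units reduce to SI seconds underneath; a minimal sketch:

```python
# Minimal sketch: the metre follows from the second once the speed of light
# is fixed, and a km/h figure reduces to metres per second.
c = 299_792_458                      # m/s, exact by definition
one_metre = c * (1 / 299_792_458)    # distance light travels in 1/299 792 458 s (1.0, up to rounding)

kmh = 1000 / 3600                    # 1 km/h expressed in metres per second
print(one_metre, f"{kmh:.3f} m/s")   # -> 1.0 0.278 m/s
```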
Timekeeping standards
A set of atomic clocks throughout the world keeps time by consensus: the clocks "vote" on the correct time, and all voting clocks are steered to agree with the consensus, which is called International Atomic Time (TAI). TAI "ticks" atomic seconds.
Civil time is defined to agree with the rotation of the Earth. The international standard for timekeeping is Coordinated Universal Time (UTC). This time scale "ticks" the same atomic seconds as TAI, but inserts or omits leap seconds as necessary to correct for variations in the rate of rotation of the Earth.
A time scale in which the seconds are not exactly equal to atomic seconds is UT1, a form of universal time. UT1 is defined by the rotation of the Earth with respect to the Sun, and does not contain any leap seconds. UT1 always differs from UTC by less than a second.
Optical lattice clock
While they are not yet part of any timekeeping standard, optical lattice clocks with frequencies in the visible light spectrum now exist and are the most accurate timekeepers of all. A strontium clock with frequency 430 THz, in the red range of visible light, during the 2010s held the accuracy record: it gains or loses less than a second in 15 billion years, which is longer than the estimated age of the universe. Such a clock can measure a change in its elevation of as little as 2 cm by the change in its rate due to gravitational time dilation.
History of definition
There have only ever been three definitions of the second: as a fraction of the day, as a fraction of an extrapolated year, and as the microwave frequency of a caesium atomic clock; each of these has realised the sexagesimal division of the day inherited from ancient astronomical calendars.
Sexagesimal divisions of calendar time and day
Civilizations in the classic period and earlier created divisions of the calendar as well as arcs using a sexagesimal system of counting, so at that time the second was a sexagesimal subdivision of the day (ancient second = 1/60 × 1/60 of a day), not of the hour like the modern second (= 1/60 × 1/60 of an hour). Sundials and water clocks were among the earliest timekeeping devices, and units of time were measured in degrees of arc. Conceptual units of time smaller than could be realised on sundials were also used.
There are references to "second" as part of a lunar month in the writings of natural philosophers of the Middle Ages, which were mathematical subdivisions that could not be measured mechanically.
Fraction of solar day
The earliest mechanical clocks, which appeared starting in the 14th century, had displays that divided the hour into halves, thirds, quarters and sometimes even 12 parts, but never by 60. In fact, the hour was not commonly divided into 60 minutes, as the hour itself was not uniform in duration. It was not practical for timekeepers to consider minutes until the first mechanical clocks that displayed minutes appeared near the end of the 16th century. Mechanical clocks kept the mean time, as opposed to the apparent time displayed by sundials.
By that time, sexagesimal divisions of time were well established in Europe.
The earliest clocks to display seconds appeared during the last half of the 16th century. The second became accurately measurable with the development of mechanical clocks. The earliest spring-driven timepiece with a second hand that marked seconds is an unsigned clock depicting Orpheus in the Fremersdorf collection, dated between 1560 and 1570. During the third quarter of the 16th century, Taqi al-Din built a clock with marks every 1/5 minute.
In 1579, Jost Bürgi built a clock for William of Hesse that marked seconds. In 1581, Tycho Brahe redesigned clocks that had displayed only minutes at his observatory so they also displayed seconds, even though those seconds were not accurate. In 1587, Tycho complained that his four clocks disagreed by plus or minus four seconds.
In 1656, Dutch scientist Christiaan Huygens invented the first pendulum clock. It had a pendulum length of just under a meter, giving it a swing of one second, and an escapement that ticked every second. It was the first clock that could accurately keep time in seconds. By the 1730s, 80 years later, John Harrison's maritime chronometers could keep time accurate to within one second in 100 days.
In 1832, Gauss proposed using the second as the base unit of time in his millimeter–milligram–second system of units. The British Association for the Advancement of Science (BAAS) in 1862 stated that "All men of science are agreed to use the second of mean solar time as the unit of time." BAAS formally proposed the CGS system in 1874, although this system was gradually replaced over the next 70 years by MKS units. Both the CGS and MKS systems used the same second as their base unit of time. MKS was adopted internationally during the 1940s, defining the second as 1/86,400 of a mean solar day.
Fraction of an ephemeris year
Sometime in the late 1940s, quartz crystal oscillator clocks with an operating frequency of ~100 kHz advanced to keep time with accuracy better than 1 part in 10⁸ over an operating period of a day. It became apparent that a consensus of such clocks kept better time than the rotation of the Earth. Metrologists also knew that Earth's orbit around the Sun (a year) was much more stable than Earth's rotation. This led to proposals as early as 1950 to define the second as a fraction of a year.
The Earth's motion was described in Newcomb's Tables of the Sun (1895), which provided a formula for estimating the motion of the Sun relative to the epoch 1900 based on astronomical observations made between 1750 and 1892. This resulted in adoption of an ephemeris time scale expressed in units of the sidereal year at that epoch by the IAU in 1952. This extrapolated timescale brings the observed positions of the celestial bodies into accord with Newtonian dynamical theories of their motion. In 1955, the tropical year, considered more fundamental than the sidereal year, was chosen by the IAU as the unit of time. The tropical year in the definition was not measured but calculated from a formula describing a mean tropical year that decreased linearly over time.
In 1956, the second was redefined in terms of a year relative to that epoch. The second was thus defined as "the fraction 1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time". This definition was adopted as part of the International System of Units in 1960.
Atomic definition
Even the best mechanical, electric motorized and quartz crystal-based clocks develop discrepancies from environmental conditions; far better for timekeeping is the natural and exact "vibration" in an energized atom. The frequency of vibration (i.e., radiation) is very specific depending on the type of atom and how it is excited. Since 1967, the second has been defined as exactly "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom". This length of a second was selected to correspond exactly to the length of the ephemeris second previously defined. Atomic clocks use such a frequency to measure seconds by counting cycles per second at that frequency. Radiation of this kind is one of the most stable and reproducible phenomena of nature. The current generation of atomic clocks is accurate to within one second in a few hundred million years. Since 1967, atomic clocks based on atoms other than caesium-133 have been developed with increased precision by a factor of 100. Therefore a new definition of the second is planned.
Atomic clocks now set the length of a second and the time standard for the world.
Table
Future redefinition
As of 2022, the best realisation of the second is achieved with caesium primary standard clocks such as IT-CsF2, NIST-F2, NPL-CsF2, PTB-CSF2, SU–CsFO2 or SYRTE-FO2. These clocks work by laser-cooling a cloud of Cs atoms to a microkelvin in a magneto-optic trap. These cold atoms are then launched vertically by laser light. The atoms then undergo Ramsey excitation in a microwave cavity. The fraction of excited atoms is then detected by laser beams. These clocks have a systematic uncertainty equivalent to 50 picoseconds per day. A system of several fountains worldwide contributes to International Atomic Time. These caesium clocks also underpin optical frequency measurements.
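As a sanity check on the figure quoted above, an accumulated error of 50 picoseconds per day corresponds to a fractional frequency uncertainty of roughly 6 parts in 10¹⁶; a small arithmetic sketch under that assumed interpretation:

```python
# Hedged arithmetic: converting 50 picoseconds of accumulated error per day
# into a dimensionless (fractional) uncertainty.
seconds_per_day = 86_400
error_per_day_s = 50e-12                        # 50 ps
fractional = error_per_day_s / seconds_per_day  # ≈ 5.8e-16
print(f"{fractional:.1e}")                      # 5.8e-16
```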
Optical clocks are based on forbidden optical transitions in ions or atoms. They have frequencies around 10¹⁵ Hz, with a natural linewidth of typically 1 Hz, so the Q-factor is about 10¹⁵, or even higher. They have better stabilities than microwave clocks, which means that they can facilitate evaluation of lower uncertainties. They also have better time resolution, which means the clock "ticks" faster. Optical clocks use either a single ion, or an optical lattice with 10⁴–10⁶ atoms.
Rydberg constant
A definition based on the Rydberg constant would involve fixing its value to an exact number. The Rydberg constant describes the energy levels in a hydrogen atom through the nonrelativistic approximation En = −hcR∞/n².
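A short numerical sketch of that approximation, using assumed CODATA-style constant values that are not taken from this article:

```python
# Sketch of the nonrelativistic formula E_n = -h*c*R_inf / n**2 for the
# hydrogen energy levels; constant values below are assumptions (rounded
# CODATA-style figures).
h     = 6.626_070_15e-34     # J s, Planck constant (exact in the SI)
c     = 299_792_458          # m/s, speed of light (exact)
R_inf = 1.097_373_156_8e7    # 1/m, Rydberg constant (measured)
eV    = 1.602_176_634e-19    # J per electronvolt

def E(n: int) -> float:
    """Energy of hydrogen level n, in joules (nonrelativistic approximation)."""
    return -h * c * R_inf / n**2

print(f"E1 ≈ {E(1) / eV:.2f} eV")                       # ≈ -13.61 eV
print(f"1s-2s interval ≈ {(E(2) - E(1)) / eV:.2f} eV")  # ≈ 10.20 eV
```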
The only viable way to fix the Rydberg constant involves trapping and cooling hydrogen. This is difficult because hydrogen is very light and the atoms move very fast, causing Doppler shifts. The radiation needed to cool the hydrogen – 121.6 nm Lyman-alpha light – is also difficult to produce. Another hurdle involves improving the uncertainty in QED calculations, specifically the Lamb shift in the 1s–2s transition of the hydrogen atom.
Requirements
A redefinition must include improved optical clock reliability. Optical clocks must contribute regularly to TAI before the BIPM affirms a redefinition, and a consistent method of disseminating the signal, such as fibre-optic links, must be developed before the second is redefined.
SI multiples
SI prefixes are commonly used for times shorter than one second, but rarely for multiples of a second. Instead, certain non-SI units are permitted for use with SI: minutes, hours, days, and in astronomy Julian years.
| Physical sciences | Time | null |
26884 | https://en.wikipedia.org/wiki/Superconductivity | Superconductivity | Superconductivity is a set of physical properties observed in superconductors: materials where electrical resistance vanishes and magnetic fields are expelled from the material. Unlike an ordinary metallic conductor, whose resistance decreases gradually as its temperature is lowered, even down to near absolute zero, a superconductor has a characteristic critical temperature below which the resistance drops abruptly to zero. An electric current through a loop of superconducting wire can persist indefinitely with no power source.
The superconductivity phenomenon was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes. Like ferromagnetism and atomic spectral lines, superconductivity is a phenomenon which can only be explained by quantum mechanics. It is characterized by the Meissner effect, the complete cancelation of the magnetic field in the interior of the superconductor during its transitions into the superconducting state. The occurrence of the Meissner effect indicates that superconductivity cannot be understood simply as the idealization of perfect conductivity in classical physics.
In 1986, it was discovered that some cuprate–perovskite ceramic materials have a critical temperature above 35 K. It was shortly found (by Ching-Wu Chu) that replacing the lanthanum with yttrium, i.e. making YBCO, raised the critical temperature to about 92 K, which was important because liquid nitrogen could then be used as a refrigerant.
Such a high transition temperature is theoretically impossible for a conventional superconductor, leading the materials to be termed high-temperature superconductors. The cheaply available coolant liquid nitrogen boils at 77 K, and thus the existence of superconductivity at higher temperatures than this facilitates many experiments and applications that are less practical at lower temperatures.
History
Superconductivity was discovered on April 8, 1911, by Heike Kamerlingh Onnes, who was studying the resistance of solid mercury at cryogenic temperatures using the recently produced liquid helium as a refrigerant. At the temperature of 4.2 K, he observed that the resistance abruptly disappeared. In the same experiment, he also observed the superfluid transition of helium at 2.2 K, without recognizing its significance. The precise date and circumstances of the discovery were only reconstructed a century later, when Onnes's notebook was found. In subsequent decades, superconductivity was observed in several other materials. In 1913, lead was found to superconduct at 7 K, and in 1941 niobium nitride was found to superconduct at 16 K.
Great efforts have been devoted to finding out how and why superconductivity works; the important step occurred in 1933, when Meissner and Ochsenfeld discovered that superconductors expelled applied magnetic fields, a phenomenon which has come to be known as the Meissner effect. In 1935, Fritz and Heinz London showed that the Meissner effect was a consequence of the minimization of the electromagnetic free energy carried by superconducting current.
London constitutive equations
The theoretical model that was first conceived for superconductivity was completely classical: it is summarized by London constitutive equations.
It was put forward by the brothers Fritz and Heinz London in 1935, shortly after the discovery that magnetic fields are expelled from superconductors. A major triumph of the equations of this theory is their ability to explain the Meissner effect, wherein a material exponentially expels all internal magnetic fields as it crosses the superconducting threshold. By using the London equation, one can obtain the dependence of the magnetic field inside the superconductor on the distance to the surface.
The two constitutive equations for a superconductor given by London are ∂js/∂t = (nse²/m)E and ∇ × js = −(nse²/m)B, where js is the superconducting current density, E and B are the electric and magnetic fields within the superconductor, e is the electron charge, m is the electron mass, and ns is the density of superconducting electrons.
The first equation follows from Newton's second law for superconducting electrons.
Conventional theories (1950s)
During the 1950s, theoretical condensed matter physicists arrived at an understanding of "conventional" superconductivity, through a pair of remarkable and important theories: the phenomenological Ginzburg–Landau theory (1950) and the microscopic BCS theory (1957).
In 1950, the phenomenological Ginzburg–Landau theory of superconductivity was devised by Landau and Ginzburg. This theory, which combined Landau's theory of second-order phase transitions with a Schrödinger-like wave equation, had great success in explaining the macroscopic properties of superconductors. In particular, Abrikosov showed that Ginzburg–Landau theory predicts the division of superconductors into the two categories now referred to as Type I and Type II. Abrikosov and Ginzburg were awarded the 2003 Nobel Prize for their work (Landau had received the 1962 Nobel Prize for other work, and died in 1968). The four-dimensional extension of the Ginzburg–Landau theory, the Coleman-Weinberg model, is important in quantum field theory and cosmology.
Also in 1950, Maxwell and Reynolds et al. found that the critical temperature of a superconductor depends on the isotopic mass of the constituent element. This important discovery pointed to the electron–phonon interaction as the microscopic mechanism responsible for superconductivity.
The complete microscopic theory of superconductivity was finally proposed in 1957 by Bardeen, Cooper and Schrieffer. This BCS theory explained the superconducting current as a superfluid of Cooper pairs, pairs of electrons interacting through the exchange of phonons. For this work, the authors were awarded the Nobel Prize in 1972.
The BCS theory was set on a firmer footing in 1958, when N. N. Bogolyubov showed that the BCS wavefunction, which had originally been derived from a variational argument, could be obtained using a canonical transformation of the electronic Hamiltonian. In 1959, Lev Gor'kov showed that the BCS theory reduced to the Ginzburg–Landau theory close to the critical temperature.
Generalizations of BCS theory for conventional superconductors form the basis for the understanding of the phenomenon of superfluidity, because they fall into the lambda transition universality class. The extent to which such generalizations can be applied to unconventional superconductors is still controversial.
Further history
The first practical application of superconductivity was developed in 1954 with Dudley Allen Buck's invention of the cryotron. Two superconductors with greatly different values of the critical magnetic field are combined to produce a fast, simple switch for computer elements.
Soon after discovering superconductivity in 1911, Kamerlingh Onnes attempted to make an electromagnet with superconducting windings but found that relatively low magnetic fields destroyed superconductivity in the materials he investigated. Much later, in 1955, G. B. Yntema succeeded in constructing a small 0.7-tesla iron-core electromagnet with superconducting niobium wire windings. Then, in 1961, J. E. Kunzler, E. Buehler, F. S. L. Hsu, and J. H. Wernick made the startling discovery that, at 4.2 kelvin, niobium–tin, a compound consisting of three parts niobium and one part tin, was capable of supporting a current density of more than 100,000 amperes per square centimeter in a magnetic field of 8.8 tesla. Despite being brittle and difficult to fabricate, niobium–tin has since proved extremely useful in supermagnets generating magnetic fields as high as 20 tesla. In 1962, T. G. Berlincourt and R. R. Hake discovered that more ductile alloys of niobium and titanium are suitable for applications up to 10 tesla. Promptly thereafter, commercial production of niobium–titanium supermagnet wire commenced at Westinghouse Electric Corporation and at Wah Chang Corporation. Although niobium–titanium boasts less-impressive superconducting properties than those of niobium–tin, niobium–titanium has, nevertheless, become the most widely used "workhorse" supermagnet material, in large measure a consequence of its very high ductility and ease of fabrication. However, both niobium–tin and niobium–titanium find wide application in MRI medical imagers, bending and focusing magnets for enormous high-energy-particle accelerators, and a host of other applications. Conectus, a European superconductivity consortium, estimated that in 2014, global economic activity for which superconductivity was indispensable amounted to about five billion euros, with MRI systems accounting for about 80% of that total.
In 1962, Josephson made the important theoretical prediction that a supercurrent can flow between two pieces of superconductor separated by a thin layer of insulator. This phenomenon, now called the Josephson effect, is exploited by superconducting devices such as SQUIDs. It is used in the most accurate available measurements of the magnetic flux quantum Φ0 = h/(2e), where h is the Planck constant. Coupled with the quantum Hall resistivity, this leads to a precise measurement of the Planck constant. Josephson was awarded the Nobel Prize for this work in 1973.
In 2008, it was proposed that the same mechanism that produces superconductivity could produce a superinsulator state in some materials, with almost infinite electrical resistance. The first development and study of superconducting Bose–Einstein condensate (BEC) in 2020 suggests that there is a "smooth transition between" BEC and Bardeen–Cooper–Schrieffer regimes.
Classification
There are many criteria by which superconductors are classified. The most common are:
Response to a magnetic field
A superconductor can be Type I, meaning it has a single critical field, above which all superconductivity is lost and below which the magnetic field is completely expelled from the superconductor; or Type II, meaning it has two critical fields, between which it allows partial penetration of the magnetic field through isolated points. These points are called vortices. Furthermore, in multicomponent superconductors it is possible to have a combination of the two behaviours. In that case the superconductor is of Type-1.5.
By theory of operation
A superconductor is conventional if it is driven by electron–phonon interaction and explained by the usual BCS theory or its extension, the Eliashberg theory. Otherwise, it is unconventional. Alternatively, a superconductor is called unconventional if the superconducting order parameter transforms according to a non-trivial irreducible representation of the point group or space group of the system.
By critical temperature
A superconductor is generally considered high-temperature if it reaches a superconducting state above a temperature of 30 K (−243.15 °C), as in the initial discovery by Georg Bednorz and K. Alex Müller. The term may also refer to materials that transition to superconductivity when cooled using liquid nitrogen – that is, at Tc > 77 K – although this usage is generally reserved for emphasizing that liquid nitrogen coolant is sufficient. Low-temperature superconductors are materials with a critical temperature below 30 K, and are cooled mainly by liquid helium (Tc > 4.2 K). One exception to this rule is the iron pnictide group of superconductors, which display behaviour and properties typical of high-temperature superconductors even though some of the group have critical temperatures below 30 K.
By material
Superconductor material classes include chemical elements (e.g. mercury or lead), alloys (such as niobium–titanium, germanium–niobium, and niobium nitride), ceramics (YBCO and magnesium diboride), superconducting pnictides (like fluorine-doped LaOFeAs) or organic superconductors (fullerenes and carbon nanotubes; though perhaps these examples should be included among the chemical elements, as they are composed entirely of carbon).
Elementary properties
Several physical properties of superconductors vary from material to material, such as the critical temperature, the value of the superconducting gap, the critical magnetic field, and the critical current density at which superconductivity is destroyed. On the other hand, there is a class of properties that are independent of the underlying material: the most important examples are the Meissner effect, the quantization of the magnetic flux, and persistent currents, i.e. the state of zero resistance. The existence of these "universal" properties is rooted in the nature of the broken symmetry of the superconductor and the emergence of off-diagonal long-range order. Superconductivity is a thermodynamic phase, and thus possesses certain distinguishing properties which are largely independent of microscopic details. Off-diagonal long-range order is closely connected to the formation of Cooper pairs.
Zero electrical DC resistance
The simplest method to measure the electrical resistance of a sample of some material is to place it in an electrical circuit in series with a current source I and measure the resulting voltage V across the sample. The resistance of the sample is given by Ohm's law as R = V / I. If the voltage is zero, this means that the resistance is zero.
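A minimal sketch of this measurement logic, with invented example numbers:

```python
# Minimal sketch of the measurement described above: drive a known current
# through the sample and read the voltage across it; R = V / I by Ohm's law.
def resistance(voltage_v: float, current_a: float) -> float:
    return voltage_v / current_a

print(resistance(2e-3, 1.0))   # normal conductor: small but finite resistance (2 mOhm)
print(resistance(0.0, 1.0))    # superconductor: zero voltage, hence zero resistance
```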
Superconductors are also able to maintain a current with no applied voltage whatsoever, a property exploited in superconducting electromagnets such as those found in MRI machines. Experiments have demonstrated that currents in superconducting coils can persist for years without any measurable degradation. Experimental evidence points to a lifetime of at least 100,000 years. Theoretical estimates for the lifetime of a persistent current can exceed the estimated lifetime of the universe, depending on the wire geometry and the temperature. In practice, currents injected in superconducting coils persisted for 28 years, 7 months, 27 days in a superconducting gravimeter in Belgium, from August 4, 1995 until March 31, 2024. In such instruments, the measurement is based on the monitoring of the levitation of a superconducting niobium sphere with a mass of four grams.
In a normal conductor, an electric current may be visualized as a fluid of electrons moving across a heavy ionic lattice. The electrons are constantly colliding with the ions in the lattice, and during each collision some of the energy carried by the current is absorbed by the lattice and converted into heat, which is essentially the vibrational kinetic energy of the lattice ions. As a result, the energy carried by the current is constantly being dissipated. This is the phenomenon of electrical resistance and Joule heating.
The situation is different in a superconductor. In a conventional superconductor, the electronic fluid cannot be resolved into individual electrons. Instead, it consists of bound pairs of electrons known as Cooper pairs. This pairing is caused by an attractive force between electrons from the exchange of phonons. This pairing is very weak, and small thermal vibrations can fracture the bond. Due to quantum mechanics, the energy spectrum of this Cooper pair fluid possesses an energy gap, meaning there is a minimum amount of energy ΔE that must be supplied in order to excite the fluid. Therefore, if ΔE is larger than the thermal energy of the lattice, given by kT, where k is the Boltzmann constant and T is the temperature, the fluid will not be scattered by the lattice. The Cooper pair fluid is thus a superfluid, meaning it can flow without energy dissipation.
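To illustrate the comparison between the gap and the thermal energy, here is a hedged Python sketch; the weak-coupling BCS relation 2Δ ≈ 3.5 kTc and the example value Tc = 9.2 K (roughly that of niobium) are assumptions for illustration, not statements from the text:

```python
# Hedged sketch comparing the thermal energy kT with a Cooper-pair gap.
# The BCS estimate 2*Delta ≈ 3.5*k*Tc and Tc = 9.2 K are assumed values.
k = 1.380_649e-23             # J/K, Boltzmann constant
meV = 1.602_176_634e-22       # J per millielectronvolt

Tc = 9.2                      # K, assumed example critical temperature
gap = 3.5 * k * Tc            # ≈ 2*Delta, the pair-breaking energy

for T in (1.0, 4.2, 9.2):
    print(f"T = {T} K: kT = {k*T/meV:.2f} meV, 2Δ ≈ {gap/meV:.2f} meV")
# Well below Tc the thermal energy kT is much smaller than the gap, so the
# lattice cannot scatter the Cooper-pair fluid.
```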
In the class of superconductors known as type II superconductors, including all known high-temperature superconductors, an extremely low but non-zero resistivity appears at temperatures not too far below the nominal superconducting transition when an electric current is applied in conjunction with a strong magnetic field, which may be caused by the electric current. This is due to the motion of magnetic vortices in the electronic superfluid, which dissipates some of the energy carried by the current. If the current is sufficiently small, the vortices are stationary, and the resistivity vanishes. The resistance due to this effect is minuscule compared with that of non-superconducting materials, but must be taken into account in sensitive experiments. However, as the temperature decreases far enough below the nominal superconducting transition, these vortices can become frozen into a disordered but stationary phase known as a "vortex glass". Below this vortex glass transition temperature, the resistance of the material becomes truly zero.
Phase transition
In superconducting materials, the characteristics of superconductivity appear when the temperature T is lowered below a critical temperature Tc. The value of this critical temperature varies from material to material. Conventional superconductors usually have critical temperatures ranging from around 20 K to less than 1 K. Solid mercury, for example, has a critical temperature of 4.2 K. As of 2015, the highest critical temperature found for a conventional superconductor is 203 K for H2S, although high pressures of approximately 90 gigapascals were required. Cuprate superconductors can have much higher critical temperatures: YBa2Cu3O7, one of the first cuprate superconductors to be discovered, has a critical temperature above 90 K, and mercury-based cuprates have been found with critical temperatures in excess of 130 K. The basic physical mechanism responsible for the high critical temperature is not yet clear. However, it is clear that a two-electron pairing is involved, although the nature of the pairing (s-wave vs. d-wave) remains controversial.
Similarly, at a fixed temperature below the critical temperature, superconducting materials cease to superconduct when an external magnetic field is applied which is greater than the critical magnetic field. This is because the Gibbs free energy of the superconducting phase increases quadratically with the magnetic field while the free energy of the normal phase is roughly independent of the magnetic field. If the material superconducts in the absence of a field, then the superconducting phase free energy is lower than that of the normal phase and so for some finite value of the magnetic field (proportional to the square root of the difference of the free energies at zero magnetic field) the two free energies will be equal and a phase transition to the normal phase will occur. More generally, a higher temperature and a stronger magnetic field lead to a smaller fraction of electrons that are superconducting and consequently to a longer London penetration depth of external magnetic fields and currents. The penetration depth becomes infinite at the phase transition.
The onset of superconductivity is accompanied by abrupt changes in various physical properties, which is the hallmark of a phase transition. For example, the electronic heat capacity is proportional to the temperature in the normal (non-superconducting) regime. At the superconducting transition, it suffers a discontinuous jump and thereafter ceases to be linear. At low temperatures, it varies instead as e^(−α/T) for some constant α. This exponential behavior is one of the pieces of evidence for the existence of the energy gap.
The order of the superconducting phase transition was long a matter of debate. Experiments indicate that the transition is second-order, meaning there is no latent heat. However, in the presence of an external magnetic field there is latent heat, because the superconducting phase has a lower entropy below the critical temperature than the normal phase. It has been experimentally demonstrated that, as a consequence, when the magnetic field is increased beyond the critical field, the resulting phase transition leads to a decrease in the temperature of the superconducting material.
Calculations in the 1970s suggested that it may actually be weakly first-order due to the effect of long-range fluctuations in the electromagnetic field. In the 1980s it was shown theoretically with the help of a disorder field theory, in which the vortex lines of the superconductor play a major role, that the transition is of second order within the type II regime and of first order (i.e., latent heat) within the type I regime, and that the two regions are separated by a tricritical point. The results were strongly supported by Monte Carlo computer simulations.
Meissner effect
When a superconductor is placed in a weak external magnetic field H and cooled below its transition temperature, the magnetic field is ejected. The Meissner effect does not cause the field to be ejected completely: instead, the field penetrates the superconductor, but only to a very small distance, characterized by a parameter λ, called the London penetration depth, decaying exponentially to zero within the bulk of the material. The Meissner effect is a defining characteristic of superconductivity. For most superconductors, the London penetration depth is on the order of 100 nm.
The Meissner effect is sometimes confused with the kind of diamagnetism one would expect in a perfect electrical conductor: according to Lenz's law, when a changing magnetic field is applied to a conductor, it will induce an electric current in the conductor that creates an opposing magnetic field. In a perfect conductor, an arbitrarily large current can be induced, and the resulting magnetic field exactly cancels the applied field.
The Meissner effect is distinct from this: it is the spontaneous expulsion that occurs during the transition to superconductivity. Suppose we have a material in its normal state, containing a constant internal magnetic field. When the material is cooled below the critical temperature, we would observe the abrupt expulsion of the internal magnetic field, which we would not expect based on Lenz's law.
The Meissner effect was given a phenomenological explanation by the brothers Fritz and Heinz London, who showed that the electromagnetic free energy in a superconductor is minimized provided ∇²H = H/λ²,
where H is the magnetic field and λ is the London penetration depth.
This equation, which is known as the London equation, predicts that the magnetic field in a superconductor decays exponentially from whatever value it possesses at the surface.
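A small sketch of that exponential decay, taking the order-of-magnitude penetration depth of 100 nm mentioned above (an assumed illustrative figure, not a specific material):

```python
import math

# Sketch of the exponential field decay implied by the London equation:
# B(x) = B(0) * exp(-x / lambda), with an assumed ~100 nm penetration depth.
lam = 100e-9                      # m, London penetration depth (illustrative)
B0 = 1.0                          # field at the surface (arbitrary units)

for x_nm in (0, 100, 300, 1000):
    B = B0 * math.exp(-x_nm * 1e-9 / lam)
    print(f"{x_nm:5d} nm inside: B/B0 = {B:.4f}")
# The field falls to ~1/e of its surface value one penetration depth in,
# and is essentially zero deep in the bulk (the Meissner state).
```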
A superconductor with little or no magnetic field within it is said to be in the Meissner state. The Meissner state breaks down when the applied magnetic field is too large. Superconductors can be divided into two classes according to how this breakdown occurs. In Type I superconductors, superconductivity is abruptly destroyed when the strength of the applied field rises above a critical value Hc. Depending on the geometry of the sample, one may obtain an intermediate state consisting of a baroque pattern of regions of normal material carrying a magnetic field mixed with regions of superconducting material containing no field. In Type II superconductors, raising the applied field past a critical value Hc1 leads to a mixed state (also known as the vortex state) in which an increasing amount of magnetic flux penetrates the material, but there remains no resistance to the flow of electric current as long as the current is not too large. At a second critical field strength Hc2, superconductivity is destroyed. The mixed state is actually caused by vortices in the electronic superfluid, sometimes called fluxons because the flux carried by these vortices is quantized. Most pure elemental superconductors, except niobium and carbon nanotubes, are Type I, while almost all impure and compound superconductors are Type II.
London moment
Conversely, a spinning superconductor generates a magnetic field, precisely aligned with the spin axis. The effect, the London moment, was put to good use in Gravity Probe B. This experiment measured the magnetic fields of four superconducting gyroscopes to determine their spin axes. This was critical to the experiment since it is one of the few ways to accurately determine the spin axis of an otherwise featureless sphere.
High-temperature superconductivity
Until 1986, physicists had believed that BCS theory forbade superconductivity at temperatures above about 30 K. In that year, Bednorz and Müller discovered superconductivity in lanthanum barium copper oxide (LBCO), a lanthanum-based cuprate perovskite material, which had a transition temperature of 35 K (Nobel Prize in Physics, 1987). It was soon found that replacing the lanthanum with yttrium (i.e., making YBCO) raised the critical temperature above 90 K.
This temperature jump is of particular engineering significance, since it allows liquid nitrogen as a refrigerant, replacing liquid helium. Liquid nitrogen can be produced relatively cheaply, even on-site. The higher temperatures additionally help to avoid some of the problems that arise at liquid helium temperatures, such as the formation of plugs of frozen air that can block cryogenic lines and cause unanticipated and potentially hazardous pressure buildup.
Many other cuprate superconductors have since been discovered, and the theory of superconductivity in these materials is one of the major outstanding challenges of theoretical condensed matter physics. There are currently two main hypotheses – the resonating-valence-bond theory and spin-fluctuation theory, the latter of which has the most support in the research community. The second hypothesis proposed that electron pairing in high-temperature superconductors is mediated by short-range spin waves known as paramagnons.
In 2008, holographic superconductivity, which uses holographic duality or AdS/CFT correspondence theory, was proposed by Gubser, Hartnoll, Herzog, and Horowitz, as a possible explanation of high-temperature superconductivity in certain materials.
From about 1993, the highest-temperature superconductor known was a ceramic material consisting of mercury, barium, calcium, copper and oxygen (HgBa2Ca2Cu3O8+δ) with a critical temperature of about 133 K.
In February 2008, an iron-based family of high-temperature superconductors was discovered. Hideo Hosono, of the Tokyo Institute of Technology, and colleagues found lanthanum oxygen fluorine iron arsenide (LaO1−xFxFeAs), an oxypnictide that superconducts below 26 K. Replacing the lanthanum in LaO1−xFxFeAs with samarium leads to superconductors that work at 55 K.
In 2014 and 2015, hydrogen sulfide () at extremely high pressures (around 150 gigapascals) was first predicted and then confirmed to be a high-temperature superconductor with a transition temperature of 80 K. Additionally, in 2019 it was discovered that lanthanum hydride () becomes a superconductor at 250 K under a pressure of 170 gigapascals.
In 2018, a research team from the Department of Physics, Massachusetts Institute of Technology, discovered superconductivity in bilayer graphene with one layer twisted at an angle of approximately 1.1 degrees, upon cooling and application of a small electric charge. Although the experiments were not carried out in a high-temperature environment, the results resemble those of high-temperature superconductors more than classical ones, given that no foreign atoms need to be introduced. The superconductivity effect came about as a result of electrons twisted into a vortex between the graphene layers, called "skyrmions". These act as a single particle and can pair up across the graphene's layers, leading to the basic conditions required for superconductivity.
In 2020, a room-temperature superconductor (critical temperature 288 K) made from hydrogen, carbon and sulfur under pressures of around 270 gigapascals was described in a paper in Nature. However, in 2022 the article was retracted by the editors because the validity of background subtraction procedures had been called into question. All nine authors maintain that the raw data strongly support the main claims of the paper.
On 31 December 2023, "Global Room-Temperature Superconductivity in Graphite" was published in the journal Advanced Quantum Technologies, claiming to demonstrate superconductivity at room temperature and ambient pressure in highly oriented pyrolytic graphite with dense arrays of nearly parallel line defects.
Applications
Superconductors are promising candidate materials for devising fundamental circuit elements of electronic, spintronic, and quantum technologies. One such example is the superconducting diode, in which supercurrent flows along one direction only, which promises dissipationless superconducting and semiconducting–superconducting hybrid technologies.
Superconducting magnets are some of the most powerful electromagnets known. They are used in MRI/NMR machines, mass spectrometers, the beam-steering magnets used in particle accelerators and plasma confining magnets in some tokamaks. They can also be used for magnetic separation, where weakly magnetic particles are extracted from a background of less or non-magnetic particles, as in the pigment industries. They can also be used in large wind turbines to overcome the restrictions imposed by high electrical currents, with an industrial grade 3.6 megawatt superconducting windmill generator having been tested successfully in Denmark.
In the 1950s and 1960s, superconductors were used to build experimental digital computers using cryotron switches. More recently, superconductors have been used to make digital circuits based on rapid single flux quantum technology and RF and microwave filters for mobile phone base stations.
Superconductors are used to build Josephson junctions which are the building blocks of SQUIDs (superconducting quantum interference devices), the most sensitive magnetometers known. SQUIDs are used in scanning SQUID microscopes and magnetoencephalography. Series of Josephson devices are used to realize the SI volt. Superconducting photon detectors can be realised in a variety of device configurations. Depending on the particular mode of operation, a superconductor–insulator–superconductor Josephson junction can be used as a photon detector or as a mixer. The large resistance change at the transition from the normal to the superconducting state is used to build thermometers in cryogenic micro-calorimeter photon detectors. The same effect is used in ultrasensitive bolometers made from superconducting materials. Superconducting nanowire single-photon detectors offer high speed, low noise single-photon detection and have been employed widely in advanced photon-counting applications.
Other early markets are arising where the relative efficiency, size and weight advantages of devices based on high-temperature superconductivity outweigh the additional costs involved. For example, in wind turbines the lower weight and volume of superconducting generators could lead to savings in construction and tower costs, offsetting the higher costs for the generator and lowering the total levelized cost of electricity (LCOE).
Promising future applications include high-performance smart grid, electric power transmission, transformers, power storage devices, compact fusion power devices, electric motors (e.g. for vehicle propulsion, as in vactrains or maglev trains), magnetic levitation devices, fault current limiters, enhancing spintronic devices with superconducting materials, and superconducting magnetic refrigeration. However, superconductivity is sensitive to moving magnetic fields, so applications that use alternating current (e.g. transformers) will be more difficult to develop than those that rely upon direct current. Compared to traditional power lines, superconducting transmission lines are more efficient and require only a fraction of the space, which would not only lead to a better environmental performance but could also improve public acceptance for expansion of the electric grid. Another attractive industrial aspect is the ability for high power transmission at lower voltages. Advancements in the efficiency of cooling systems and use of cheap coolants such as liquid nitrogen have also significantly decreased cooling costs needed for superconductivity.
Nobel Prizes
As of 2022, five Nobel Prizes in Physics have been awarded for superconductivity-related subjects:
Heike Kamerlingh Onnes (1913), "for his investigations on the properties of matter at low temperatures which led, inter alia, to the production of liquid helium".
John Bardeen, Leon N. Cooper, and J. Robert Schrieffer (1972), "for their jointly developed theory of superconductivity, usually called the BCS-theory".
Leo Esaki, Ivar Giaever, and Brian D. Josephson (1973), "for their experimental discoveries regarding tunneling phenomena in semiconductors and superconductors, respectively" and "for his theoretical predictions of the properties of a supercurrent through a tunnel barrier, in particular those phenomena which are generally known as the Josephson effects".
Georg Bednorz and K. Alex Müller (1987), "for their important break-through in the discovery of superconductivity in ceramic materials".
Alexei A. Abrikosov, Vitaly L. Ginzburg, and Anthony J. Leggett (2003), "for pioneering contributions to the theory of superconductors and superfluids".
| Physical sciences | Electrical circuits | null |
26899 | https://en.wikipedia.org/wiki/Spearmint | Spearmint | Spearmint (Mentha spicata), also known as garden mint, common mint, lamb mint and mackerel mint, is native to Europe and southern temperate Asia, extending from Ireland in the west to southern China in the east. It is naturalized in many other temperate parts of the world, including northern and southern Africa, North America, and South America. It is used as a flavouring in food and herbal teas. The aromatic oil, called oil of spearmint, is also used as a flavoring and sometimes as a scent.
The species and its subspecies have many synonyms, including Mentha crispa, Mentha crispata, and Mentha viridis.
Description
Spearmint is a perennial herbaceous plant. It is tall, with variably hairless to hairy stems and foliage, and a wide-spreading fleshy underground rhizome from which it grows. The leaves are long and broad, with a serrated margin. The stem is square-shaped, a defining characteristic of the mint family of herbs. Spearmint produces flowers in slender spikes, each flower pink or white in colour, long and broad. Spearmint flowers in the summer (from July to September in the northern hemisphere), and has relatively large seeds, which measure . The name "spearmint" derives from the plant's pointed leaf tips.
Mentha spicata varies considerably in leaf blade dimensions, the prominence of leaf veins, and pubescence.
Taxonomy
Mentha spicata was first described scientifically by Carl Linnaeus in 1753. The epithet spicata means 'bearing a spike'. The species has two accepted subspecies, each of which has acquired a large number of synonyms:
Mentha spicata subsp. condensata (Briq.) Greuter & Burdet – eastern Mediterranean, from Italy to Egypt
Mentha spicata subsp. spicata – distribution as for the species as a whole
Origin
The plant is an allopolyploid species (2n = 48), which could be a result of hybridization and chromosome doubling. Mentha longifolia and Mentha suaveolens (2n = 24) are likely to be the contributing diploid species.
Hybrids
Mentha spicata hybridizes with other Mentha species, forming hybrids such as:
Mentha × piperita (hybrid with Mentha aquatica), black peppermint, hairy peppermint
Mentha × gracilis (hybrid with Mentha arvensis), Scotch spearmint
Mentha × villosa (hybrid with Mentha suaveolens)
Varieties and cultivars
There are several commonly available varieties and cultivars of Mentha spicata:
M. spicata var. crispa (syn. M. spicata 'Crispa') – with very crinkled leaves.
M. spicata var. crispa 'Moroccan' – with crinkled leaves and white flowers.
M. spicata 'Tashkent' – with slightly crinkled leaves.
M. spicata 'Spanish' – with mauve-pink flowers.
History and domestication
Mention of spearmint dates back to at least the 1st century AD, with references from the naturalist Pliny and mentions in the Bible. Further records show descriptions of mint in ancient mythology. Findings of early versions of toothpaste using mint in the 14th century suggest widespread domestication by this point. It was introduced into England by the Romans by the 5th century, and William Turner, the "Father of British Botany", mentions mint as being good for the stomach. John Gerard's Herbal (1597) states that "It is good against watering eyes and all manner of break outs on the head and sores", that "It is applied with salt to the biting of mad dogs", and that "They lay it on the stinging of wasps and bees with good success". He also mentions that "the smell rejoices the heart of man", for which reason mint used to be strewn in chambers and places of recreation, pleasure, and repose, where feasts and banquets are made.
Spearmint is documented as being an important cash crop in Connecticut during the period of the American Revolution, at which time mint tea was noted as being a popular drink due to it not being taxed.
Ecology
Spearmint can readily adapt to grow in various types of soil. Spearmint tends to thrive with plenty of organic material in full sun to part shade. The plant is also known to be found in moist habitats such as swamps or creeks, where the soil is sand or clay.
Spearmint ideally thrives in soils that are deep, well-drained, moist, rich in nutrients and organic matter, and have a crumbly texture. The pH range should be between 6.0 and 7.5.
Diseases and pests
Fungal diseases
Fungal diseases are common in spearmint. Two main diseases are rust and leaf spot. Puccinia menthae is a fungus that causes the disease called rust; rust affects the leaves of spearmint by producing pustules that cause the leaves to fall off. Leaf spot is a fungal disease that occurs when Alternaria alternata is present on spearmint leaves; the infection appears as circular dark spots on the upper side of the leaf. Other fungi that cause disease in spearmint are Rhizoctonia solani, Verticillium dahliae, Phoma strasseri, and Erysiphe cichoracearum.
Nematode diseases
Nematode diseases of spearmint include root knot and root lesions. Root knots are caused by various Meloidogyne species, while root lesions are caused by Pratylenchus species.
Viral and phytoplasmal diseases
Spearmint can be infected by tobacco ringspot virus, which can lead to stunted growth and deformation of the leaves. In China, spearmint has been observed with mosaic symptoms and deformed leaves, an indication that the plant can also be infected by cucumber mosaic virus and tomato aspermy virus.
Cultivation and harvest
Spearmint grows well in nearly all temperate climates. Gardeners often grow it in pots or planters due to its invasive, spreading rhizomes.
Spearmint leaves can be used fresh, dried, or frozen. The leaves lose their aromatic appeal after the plant flowers. The plant can be dried by cutting just before, or right as, the flowers open (at peak), about one-half to three-quarters of the way down the stalk (leaving smaller shoots room to grow). Some dispute exists as to which drying method works best; some prefer different materials (such as plastic or cloth) and different lighting conditions (such as darkness or sunlight). The leaves can also be preserved in salt, sugar, sugar syrup, alcohol, or oil.
Oil uses
Spearmint is used for its aromatic oil, called oil of spearmint. The most abundant compound in spearmint oil is R-(–)-carvone, which gives spearmint its distinctive smell. Spearmint oil also contains significant amounts of limonene, dihydrocarvone, and 1,8-cineol. Unlike oil of peppermint, oil of spearmint contains minimal amounts of menthol and menthone. It is used as a flavouring for toothpaste and confectionery, and is sometimes added to shampoos and soaps.
Traditional medicine
Spearmint has been used in traditional medicine.
Insecticide and pesticide
Spearmint essential oil has had success as a larvicide against mosquitoes. Using spearmint as a larvicide would be a greener alternative to synthetic insecticides, given the toxicity and negative environmental effects of the latter.
Used as a fumigant, spearmint essential oil is an effective insecticide against adult moths.
Antimicrobial research
Spearmint has been used for its supposed antimicrobial activity, which may be related to carvone. Its in vitro antibacterial activity has been compared to that of amoxicillin, penicillin, and streptomycin. Spearmint oil is found to have higher activity against gram-positive bacteria compared to gram-negative bacteria in vitro, which may be due to differing sensitivities to oils.
Beverages
Spearmint leaves are infused in water to make spearmint tea. Spearmint is an ingredient of Maghrebi mint tea. Grown in the mountainous regions of Morocco, this variety of mint possesses a clear, pungent, but mild aroma. Spearmint is an ingredient in several cocktails, such as the mojito and mint julep. Sweet tea, iced and flavored with spearmint, is a summer tradition in the Southern United States.
Gallery
| Biology and health sciences | Herbs and spices | Plants |
26903 | https://en.wikipedia.org/wiki/Solar%20System | Solar System | The Solar System is the gravitationally bound system of the Sun and the objects that orbit it. It formed about 4.6 billion years ago when a dense region of a molecular cloud collapsed, forming the Sun and a protoplanetary disc. The Sun is a typical star that maintains a balanced equilibrium by the fusion of hydrogen into helium at its core, releasing this energy from its outer photosphere. Astronomers classify it as a G-type main-sequence star.
The largest objects that orbit the Sun are the eight planets. In order from the Sun, they are four terrestrial planets (Mercury, Venus, Earth and Mars); two gas giants (Jupiter and Saturn); and two ice giants (Uranus and Neptune). All terrestrial planets have solid surfaces; conversely, the giant planets have no definite surface, as they are mainly composed of gases and liquids. Over 99.86% of the Solar System's mass is in the Sun and nearly 90% of the remaining mass is in Jupiter and Saturn.
There is a strong consensus among astronomers that the Solar System has at least nine dwarf planets: Ceres, Orcus, Pluto, Haumea, Quaoar, Makemake, Gonggong, Eris, and Sedna. There are a vast number of small Solar System bodies, such as asteroids, comets, centaurs, meteoroids, and interplanetary dust clouds. Some of these bodies are in the asteroid belt (between Mars's and Jupiter's orbits) and the Kuiper belt (just outside Neptune's orbit). Six planets, seven dwarf planets, and other bodies have orbiting natural satellites, which are commonly called 'moons'.
The Solar System is constantly flooded by the Sun's charged particles, the solar wind, forming the heliosphere. Around 75–90 astronomical units from the Sun, the solar wind is halted, resulting in the heliopause, which marks the boundary between the Solar System and interstellar space. The outermost region of the Solar System is the theorized Oort cloud, the source for long-period comets, extending to a radius of . The closest star to the Solar System, Proxima Centauri, is away. Both stars belong to the Milky Way galaxy.
Formation and evolution
Past
The Solar System formed at least 4.568 billion years ago from the gravitational collapse of a region within a large molecular cloud. This initial cloud was likely several light-years across and probably birthed several stars. As is typical of molecular clouds, this one consisted mostly of hydrogen, with some helium, and small amounts of heavier elements fused by previous generations of stars.
As the pre-solar nebula collapsed, conservation of angular momentum caused it to rotate faster. The center, where most of the mass collected, became increasingly hotter than the surroundings. As the contracting nebula spun faster, it began to flatten into a protoplanetary disc with a diameter of roughly and a hot, dense protostar at the center. The planets formed by accretion from this disc, in which dust and gas gravitationally attracted each other, coalescing to form ever larger bodies. Hundreds of protoplanets may have existed in the early Solar System, but they either merged or were destroyed or ejected, leaving the planets, dwarf planets, and leftover minor bodies.
Due to their higher boiling points, only metals and silicates could exist in solid form in the warm inner Solar System close to the Sun (within the frost line). They eventually formed the rocky planets of Mercury, Venus, Earth, and Mars. Because these refractory materials only comprised a small fraction of the solar nebula, the terrestrial planets could not grow very large.
The giant planets (Jupiter, Saturn, Uranus, and Neptune) formed further out, beyond the frost line, the point between the orbits of Mars and Jupiter where material is cool enough for volatile icy compounds to remain solid. The ices that formed these planets were more plentiful than the metals and silicates that formed the terrestrial inner planets, allowing them to grow massive enough to capture large atmospheres of hydrogen and helium, the lightest and most abundant elements. Leftover debris that never became planets congregated in regions such as the asteroid belt, Kuiper belt, and Oort cloud.
Within 50 million years, the pressure and density of hydrogen in the center of the protostar became great enough for it to begin thermonuclear fusion. The temperature, reaction rate, pressure, and density increased until hydrostatic equilibrium was achieved, with the thermal pressure counterbalancing the force of gravity. At this point, the Sun became a main-sequence star. As helium accumulates at its core, the Sun is growing brighter; early in its main-sequence life its brightness was 70% of what it is today. Solar wind from the Sun created the heliosphere and swept away the remaining gas and dust from the protoplanetary disc into interstellar space.
Following the dissipation of the protoplanetary disk, the Nice model proposes that gravitational encounters between planetesimals and the gas giants caused each to migrate into different orbits. This led to dynamical instability of the entire system, which scattered the planetesimals and ultimately placed the gas giants in their current positions. During this period, the grand tack hypothesis suggests that a final inward migration of Jupiter dispersed much of the asteroid belt, leading to the Late Heavy Bombardment of the inner planets.
Present and future
The Solar System remains in a relatively stable, slowly evolving state by following isolated, gravitationally bound orbits around the Sun. Although the Solar System has been fairly stable for billions of years, it is technically chaotic, and may eventually be disrupted. There is a small chance that another star will pass through the Solar System in the next few billion years. Although this could destabilize the system and eventually lead millions of years later to expulsion of planets, collisions of planets, or planets hitting the Sun, it would most likely leave the Solar System much as it is today.
The Sun's main-sequence phase, from beginning to end, will last about 10 billion years, compared to around two billion years for all other subsequent phases of the Sun's pre-remnant life combined. The Solar System will remain roughly as it is known today until the hydrogen in the core of the Sun has been entirely converted to helium, which will occur roughly 5 billion years from now. This will mark the end of the Sun's main-sequence life. At that time, the core of the Sun will contract with hydrogen fusion occurring along a shell surrounding the inert helium, and the energy output will be greater than at present. The outer layers of the Sun will expand to roughly 260 times its current diameter, and the Sun will become a red giant. Because of its vastly increased surface area, the surface of the Sun will be cooler ( at its coolest) than it is on the main sequence.
The expanding Sun is expected to vaporize Mercury as well as Venus, and render Earth and Mars uninhabitable (possibly destroying Earth as well). Eventually, the core will be hot enough for helium fusion; the Sun will burn helium for a fraction of the time it burned hydrogen in the core. The Sun is not massive enough to commence the fusion of heavier elements, and nuclear reactions in the core will dwindle. Its outer layers will be ejected into space, leaving behind a dense white dwarf, half the original mass of the Sun but only the size of Earth. The ejected outer layers may form a planetary nebula, returning some of the material that formed the Sun—but now enriched with heavier elements like carbon—to the interstellar medium.
General characteristics
Astronomers sometimes divide the Solar System structure into separate regions. The inner Solar System includes Mercury, Venus, Earth, Mars, and the bodies in the asteroid belt. The outer Solar System includes Jupiter, Saturn, Uranus, Neptune, and the bodies in the Kuiper belt. Since the discovery of the Kuiper belt, the outermost parts of the Solar System are considered a distinct region consisting of the objects beyond Neptune.
Composition
The principal component of the Solar System is the Sun, a G-type main-sequence star that contains 99.86% of the system's known mass and dominates it gravitationally. The Sun's four largest orbiting bodies, the giant planets, account for 99% of the remaining mass, with Jupiter and Saturn together comprising more than 90%. The remaining objects of the Solar System (including the four terrestrial planets, the dwarf planets, moons, asteroids, and comets) together comprise less than 0.002% of the Solar System's total mass.
The Sun is composed of roughly 98% hydrogen and helium, as are Jupiter and Saturn. A composition gradient exists in the Solar System, created by heat and light pressure from the early Sun; those objects closer to the Sun, which are more affected by heat and light pressure, are composed of elements with high melting points. Objects farther from the Sun are composed largely of materials with lower melting points. The boundary in the Solar System beyond which those volatile substances could coalesce is known as the frost line, and it lies at roughly five times the Earth's distance from the Sun.
Orbits
The planets and other large objects in orbit around the Sun lie near the plane of Earth's orbit, known as the ecliptic. Smaller icy objects such as comets frequently orbit at significantly greater angles to this plane. Most of the planets in the Solar System have secondary systems of their own, being orbited by natural satellites called moons. All of the largest natural satellites are in synchronous rotation, with one face permanently turned toward their parent. The four giant planets have planetary rings, thin discs of tiny particles that orbit them in unison.
As a result of the formation of the Solar System, planets and most other objects orbit the Sun in the same direction that the Sun is rotating. That is, counter-clockwise, as viewed from above Earth's north pole. There are exceptions, such as Halley's Comet. Most of the larger moons orbit their planets in prograde direction, matching the direction of planetary rotation; Neptune's moon Triton is the largest to orbit in the opposite, retrograde manner. Most larger objects rotate around their own axes in the prograde direction relative to their orbit, though the rotation of Venus is retrograde.
To a good first approximation, Kepler's laws of planetary motion describe the orbits of objects around the Sun. These laws stipulate that each object travels along an ellipse with the Sun at one focus, which causes the body's distance from the Sun to vary over the course of its year. A body's closest approach to the Sun is called its perihelion, whereas its most distant point from the Sun is called its aphelion. With the exception of Mercury, the orbits of the planets are nearly circular, but many comets, asteroids, and Kuiper belt objects follow highly elliptical orbits. Kepler's laws only account for the influence of the Sun's gravity upon an orbiting body, not the gravitational pulls of different bodies upon each other. On a human time scale, these perturbations can be accounted for using numerical models, but the planetary system can change chaotically over billions of years.
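As a minimal sketch of these two relations, in units of astronomical units and years, where Kepler's third law reduces to T² = a³; the sample orbital elements below are approximate illustrative values, not authoritative data:

```python
import math

def orbit_summary(a_au: float, e: float) -> dict:
    """Return perihelion, aphelion and period for a Sun-centred elliptical orbit.

    a_au : semi-major axis in astronomical units
    e    : orbital eccentricity (0 = circle, close to 1 = very elongated)
    In these units Kepler's third law reduces to T^2 = a^3.
    """
    return {
        "perihelion_au": a_au * (1 - e),   # closest approach to the Sun
        "aphelion_au": a_au * (1 + e),     # most distant point from the Sun
        "period_years": math.sqrt(a_au ** 3),
    }

# Illustrative values: a nearly circular Earth-like orbit versus an eccentric comet-like one
# (roughly the elements usually quoted for Halley's Comet).
print(orbit_summary(1.0, 0.017))   # ~1 AU perihelion and aphelion, ~1 year period
print(orbit_summary(17.8, 0.967))  # perihelion ~0.6 AU, aphelion ~35 AU, period ~75 years
```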
The angular momentum of the Solar System is a measure of the total amount of orbital and rotational momentum possessed by all its moving components. Although the Sun dominates the system by mass, it accounts for only about 2% of the angular momentum. The planets, dominated by Jupiter, account for most of the rest of the angular momentum due to the combination of their mass, orbit, and distance from the Sun, with a possibly significant contribution from comets.
Distances and scales
The radius of the Sun is . Thus, the Sun occupies 0.00001% (1 part in 10⁷) of the volume of a sphere with a radius the size of Earth's orbit, whereas Earth's volume is roughly one millionth (10⁻⁶) that of the Sun. Jupiter, the largest planet, is from the Sun and has a radius of , whereas the most distant planet, Neptune, is from the Sun.
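The quoted ratios follow from cubing the ratio of the radii. Using a solar radius of roughly 696,000 km and 1 AU ≈ 149.6 million km:

$$\left(\frac{R_\odot}{1\ \text{AU}}\right)^{3} \approx \left(\frac{6.96\times10^{5}\ \text{km}}{1.496\times10^{8}\ \text{km}}\right)^{3} \approx \left(4.7\times10^{-3}\right)^{3} \approx 1\times10^{-7},$$

i.e. about 1 part in 10⁷. The Earth-to-Sun volume ratio follows the same way from the ratio of their radii (about 1:109), giving roughly 10⁻⁶.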
With a few exceptions, the farther a planet or belt is from the Sun, the larger the distance between its orbit and the orbit of the next nearest object to the Sun. For example, Venus is approximately 0.33 AU farther out from the Sun than Mercury, whereas Saturn is 4.3 AU out from Jupiter, and Neptune lies 10.5 AU out from Uranus. Attempts have been made to determine a relationship between these orbital distances, like the Titius–Bode law and Johannes Kepler's model based on the Platonic solids, but ongoing discoveries have invalidated these hypotheses.
Some Solar System models attempt to convey the relative scales involved in the Solar System in human terms. Some are small in scale (and may be mechanical—called orreries)—whereas others extend across cities or regional areas. The largest such scale model, the Sweden Solar System, uses the 110-meter (361-foot) Avicii Arena in Stockholm as its substitute Sun, and, following the scale, Jupiter is a 7.5-meter (25-foot) sphere at Stockholm Arlanda Airport, 40 km (25 mi) away, whereas the farthest current object, Sedna, is a 10 cm (4 in) sphere in Luleå, 912 km (567 mi) away. At that scale, the distance to Proxima Centauri would be roughly 8 times further than the Moon is from Earth.
If the Sun–Neptune distance is scaled to , then the Sun would be about in diameter (roughly two-thirds the diameter of a golf ball), the giant planets would all be smaller than about , and Earth's diameter along with that of the other terrestrial planets would be smaller than a flea () at this scale.
Habitability
Besides solar energy, the primary characteristic of the Solar System enabling the presence of life is the heliosphere and planetary magnetic fields (for those planets that have them). These magnetic fields partially shield the Solar System from high-energy interstellar particles called cosmic rays. The density of cosmic rays in the interstellar medium and the strength of the Sun's magnetic field change on very long timescales, so the level of cosmic-ray penetration in the Solar System varies, though by how much is unknown.
The zone of habitability of the Solar System is conventionally located in the inner Solar System, where planetary surface or atmospheric temperatures admit the possibility of liquid water. Habitability might be possible in subsurface oceans of various outer Solar System moons.
Comparison with extrasolar systems
Compared to many extrasolar systems, the Solar System stands out in lacking planets interior to the orbit of Mercury. The known Solar System lacks super-Earths, planets between one and ten times as massive as the Earth, although the hypothetical Planet Nine, if it does exist, could be a super-Earth orbiting in the edge of the Solar System.
Uncommonly, the Solar System has only small terrestrial planets and large gas giants; elsewhere planets of intermediate size are typical, both rocky and gaseous, so there is no "gap" like the one seen between the size of Earth and that of Neptune (which has a radius 3.8 times as large). As many of these super-Earths are closer to their respective stars than Mercury is to the Sun, a hypothesis has arisen that all planetary systems start with many close-in planets, and that typically a sequence of their collisions causes consolidation of mass into a few larger planets, but in the case of the Solar System the collisions caused their destruction and ejection.
The orbits of Solar System planets are nearly circular. Compared to many other systems, they have smaller orbital eccentricity. Although there are attempts to explain it partly with a bias in the radial-velocity detection method and partly with long interactions of a quite high number of planets, the exact causes remain undetermined.
Sun
The Sun is the Solar System's star and by far its most massive component. Its large mass (332,900 Earth masses), which comprises 99.86% of all the mass in the Solar System, produces temperatures and densities in its core high enough to sustain nuclear fusion of hydrogen into helium. This releases an enormous amount of energy, mostly radiated into space as electromagnetic radiation peaking in visible light.
Because the Sun fuses hydrogen at its core, it is a main-sequence star. More specifically, it is a G2-type main-sequence star, where the type designation refers to its effective temperature. Hotter main-sequence stars are more luminous but shorter lived. The Sun's temperature is intermediate between that of the hottest stars and that of the coolest stars. Stars brighter and hotter than the Sun are rare, whereas substantially dimmer and cooler stars, known as red dwarfs, make up about 75% of the fusor stars in the Milky Way.
The Sun is a population I star, having formed in the spiral arms of the Milky Way galaxy. It has a higher abundance of elements heavier than hydrogen and helium ("metals" in astronomical parlance) than the older population II stars in the galactic bulge and halo. Elements heavier than hydrogen and helium were formed in the cores of ancient and exploding stars, so the first generation of stars had to die before the universe could be enriched with these atoms. The oldest stars contain few metals, whereas stars born later have more. This higher metallicity is thought to have been crucial to the Sun's development of a planetary system because the planets formed from the accretion of "metals".
The region of space dominated by the Solar magnetosphere is the heliosphere, which spans much of the Solar System. Along with light, the Sun radiates a continuous stream of charged particles (a plasma) called the solar wind. This stream spreads outwards at speeds from to , filling the vacuum between the bodies of the Solar System. The result is a thin, dusty atmosphere, called the interplanetary medium, which extends to at least .
Activity on the Sun's surface, such as solar flares and coronal mass ejections, disturbs the heliosphere, creating space weather and causing geomagnetic storms. Coronal mass ejections and similar events blow a magnetic field and huge quantities of material from the surface of the Sun. The interaction of this magnetic field and material with Earth's magnetic field funnels charged particles into Earth's upper atmosphere, where its interactions create aurorae seen near the magnetic poles. The largest stable structure within the heliosphere is the heliospheric current sheet, a spiral form created by the actions of the Sun's rotating magnetic field on the interplanetary medium.
Inner Solar System
The inner Solar System is the region comprising the terrestrial planets and the asteroids. Composed mainly of silicates and metals, the objects of the inner Solar System are relatively close to the Sun; the radius of this entire region is less than the distance between the orbits of Jupiter and Saturn. This region is within the frost line, which is a little less than from the Sun.
Inner planets
The four terrestrial or inner planets have dense, rocky compositions, few or no moons, and no ring systems. They are composed largely of refractory minerals such as silicates, which form their crusts and mantles, and metals such as iron and nickel, which form their cores. Three of the four inner planets (Venus, Earth, and Mars) have atmospheres substantial enough to generate weather; all have impact craters and tectonic surface features, such as rift valleys and volcanoes.
Mercury (0.31–0.59 AU from the Sun) is the smallest planet in the Solar System. Its surface is grayish, with an expansive rupes (cliff) system generated from thrust faults and bright ray systems formed by impact event remnants. The surface has widely varying temperatures, with the equatorial regions ranging from at night to during sunlight. In the past, Mercury was volcanically active, producing smooth basaltic plains similar to the Moon. It is likely that Mercury has a silicate crust and a large iron core. Mercury has a very tenuous atmosphere, consisting of solar-wind particles and ejected atoms. Mercury has no natural satellites.
Venus (0.72–0.73 AU) has a reflective, whitish atmosphere that is mainly composed of carbon dioxide. At the surface, the atmospheric pressure is ninety times that at Earth's sea level. Venus has a surface temperature over , mainly due to the amount of greenhouse gases in the atmosphere. The planet lacks a magnetic field that could protect against stripping by the solar wind, which suggests that its atmosphere is sustained by volcanic activity. Its surface displays extensive evidence of volcanic activity with stagnant lid tectonics. Venus has no natural satellites.
Earth (0.98–1.02 AU) is the only place in the universe where life and surface liquid water are known to exist. Earth's atmosphere contains 78% nitrogen and 21% oxygen, the latter being the result of the presence of life. The planet has a complex climate and weather system, with conditions differing drastically between climate regions. The solid surface of Earth is dominated by green vegetation, deserts, and white ice sheets. Earth's surface is shaped by plate tectonics that formed the continental masses. Earth's planetary magnetosphere shields the surface from radiation, limiting atmospheric stripping and maintaining habitability.
The Moon is Earth's only natural satellite. Its diameter is one-quarter of Earth's. Its surface is covered in very fine regolith and dominated by impact craters. Large dark patches on the Moon, the maria, were formed by past volcanic activity. The Moon's atmosphere is extremely thin, effectively a partial vacuum with particle densities under 10⁷ per cm³.
Mars (1.38–1.67 AU) has a radius about half of that of Earth. Most of the planet is red due to iron oxide in Martian soil, and the polar regions are covered in white ice caps made of water and carbon dioxide. Mars has an atmosphere composed mostly of carbon dioxide, with a surface pressure 0.6% of that of Earth, which is sufficient to support some weather phenomena. During the Martian year (687 Earth days), there are large temperature swings at the surface, between and . The surface is peppered with volcanoes and rift valleys, and has a rich collection of minerals. Mars has a highly differentiated internal structure, and lost its magnetosphere 4 billion years ago. Mars has two tiny moons:
Phobos is Mars's inner moon. It is a small, irregularly shaped object with a mean radius of . Its surface is very unreflective and dominated by impact craters. In particular, Phobos's surface has a very large Stickney impact crater that is roughly in radius.
Deimos is Mars's outer moon. Like Phobos, it is irregularly shaped, with a mean radius of and its surface reflects little light. However, the surface of Deimos is noticeably smoother than Phobos because the regolith partially covers the impact craters.
Asteroids
Asteroids, except for the largest, Ceres, are classified as small Solar System bodies and are composed mainly of carbonaceous, refractory rocky, and metallic minerals, with some ice. They range from a few meters to hundreds of kilometers in size, and are grouped into populations based on their orbital characteristics. Some asteroids have natural satellites, that is, smaller asteroids that orbit them.
Mercury-crossing asteroids are those with perihelia within the orbit of Mercury. At least 362 are known to date, and include the closest objects to the Sun known in the Solar System. No vulcanoids, asteroids between the orbit of Mercury and the Sun, have been discovered. As of 2024, one asteroid has been discovered to orbit completely within Venus's orbit, 594913 ꞌAylóꞌchaxnim.
Venus-crossing asteroids are those that cross the orbit of Venus. There are 2,809 as of 2015.
Near-Earth asteroids have orbits that approach relatively close to Earth's orbit, and some of them are potentially hazardous objects because they might collide with Earth in the future.
Mars-crossing asteroids are those with perihelia above 1.3 AU which cross the orbit of Mars. As of 2024, NASA lists 26,182 confirmed Mars-crossing asteroids.
Asteroid belt
The asteroid belt occupies a torus-shaped region between 2.3 and from the Sun, which lies between the orbits of Mars and Jupiter. It is thought to be remnants from the Solar System's formation that failed to coalesce because of the gravitational interference of Jupiter. The asteroid belt contains tens of thousands, possibly millions, of objects over one kilometer in diameter. Despite this, the total mass of the asteroid belt is unlikely to be more than a thousandth of that of Earth. The asteroid belt is very sparsely populated; spacecraft routinely pass through without incident.
Below are the descriptions of the three largest bodies in the asteroid belt. They are all considered to be relatively intact protoplanets, a precursor stage before becoming a fully-formed planet (see List of exceptional asteroids):
Ceres (2.55–2.98 AU) is the only dwarf planet in the asteroid belt. It is the largest object in the belt, with a diameter of . Its surface contains a mixture of carbon, frozen water, and hydrated minerals. There are signs of past cryovolcanic activity, in which volatile material such as water was erupted onto the surface, as seen in surface bright spots. Ceres has a very thin water vapor atmosphere, but practically speaking it is indistinguishable from a vacuum.
Vesta (2.13–3.41 AU) is the second-largest object in the asteroid belt. Its fragments survive as the Vesta asteroid family and numerous HED meteorites found on Earth. Vesta's surface, dominated by basaltic and metamorphic material, has a denser composition than Ceres's. Its surface is marked by two giant craters: Rheasilvia and Veneneia.
Pallas (2.15–2.57 AU) is the third-largest object in the asteroid belt. It has its own Pallas asteroid family. Not much is known about Pallas because it has never been visited by a spacecraft, though its surface is predicted to be composed of silicates.
Hilda asteroids are in a 3:2 resonance with Jupiter; that is, they go around the Sun three times for every two Jovian orbits. They lie in three linked clusters between Jupiter and the main asteroid belt.
Trojans are bodies located within another body's gravitationally stable Lagrange points: L4, 60° ahead of it in its orbit, or L5, 60° behind it in its orbit. Every planet except Mercury and Saturn is known to possess at least one trojan. The Jupiter trojan population is roughly equal in number to that of the asteroid belt. After Jupiter, Neptune possesses the most confirmed trojans, at 28.
Outer Solar System
The outer region of the Solar System is home to the giant planets and their large moons. The centaurs and many short-period comets orbit in this region. Due to their greater distance from the Sun, the solid objects in the outer Solar System contain a higher proportion of volatiles such as water, ammonia, and methane, than planets of the inner Solar System because their lower temperatures allow these compounds to remain solid, without significant sublimation.
Outer planets
The four outer planets, called giant planets or Jovian planets, collectively make up 99% of the mass orbiting the Sun. All four giant planets have multiple moons and a ring system, although only Saturn's rings are easily observed from Earth. Jupiter and Saturn are composed mainly of gases with extremely low melting points, such as hydrogen, helium, and neon, hence their designation as gas giants. Uranus and Neptune are ice giants, meaning they are largely composed of 'ice' in the astronomical sense (chemical compounds with melting points of up to a few hundred kelvins, such as water, methane, ammonia, hydrogen sulfide, and carbon dioxide). Icy substances comprise the majority of the satellites of the giant planets and of the small objects that lie beyond Neptune's orbit.
Jupiter (4.95–5.46 AU) is the biggest and most massive planet in the Solar System. Its visible surface consists of orange-brown and white cloud bands moving via the principles of atmospheric circulation, with giant storms such as the Great Red Spot and white 'ovals' swirling through them. Jupiter possesses a magnetosphere strong enough to redirect ionizing radiation and cause auroras at its poles. As of 2024, Jupiter has 95 confirmed satellites, which can roughly be sorted into three groups:
The Amalthea group, consisting of Metis, Adrastea, Amalthea, and Thebe. They orbit substantially closer to Jupiter than other satellites. Materials from these natural satellites are the source of Jupiter's faint ring.
The Galilean moons, consisting of Ganymede, Callisto, Io, and Europa. They are the largest moons of Jupiter and exhibit planetary properties.
Irregular satellites, consisting of substantially smaller natural satellites. They have more distant orbits than the other objects.
Saturn (9.08–10.12 AU) has a distinctive visible ring system orbiting around its equator, composed of small ice and rock particles. Like Jupiter, it is mostly made of hydrogen and helium. Saturn has persistent polar storms, including a peculiar hexagon-shaped jet stream at its north pole that is wider than the diameter of Earth. Saturn has a magnetosphere capable of producing weak auroras. As of 2024, Saturn has 146 confirmed satellites, grouped into:
Ring moonlets and shepherds, which orbit inside or close to Saturn's rings. A moonlet can only partially clear out dust in its orbit, while the ring shepherds are able to completely clear out dust, forming visible gaps in the rings.
Inner large satellites Mimas, Enceladus, Tethys, and Dione. These satellites orbit within Saturn's E ring. They are composed mostly of water ice and are believed to have differentiated internal structures.
Trojan moons Calypso and Telesto (trojans of Tethys), and Helene and Polydeuces (trojans of Dione). These small moons share their orbits with Tethys and Dione, leading or trailing either.
Outer large satellites Rhea, Titan, Hyperion, and Iapetus. Titan is the only satellite in the Solar System to have a substantial atmosphere.
Irregular satellites, consisting of substantially smaller natural satellites. They have more distant orbits than the other objects. Phoebe is the largest irregular satellite of Saturn.
Uranus (18.3–20.1 AU), uniquely among the planets, orbits the Sun on its side, with an axial tilt greater than 90°. This gives the planet extreme seasonal variation, as each pole points alternately toward and then away from the Sun. Uranus's outer layer has a muted cyan color, but underneath these clouds many aspects of its climate remain mysterious, such as its unusually low internal heat and erratic cloud formation. As of 2024, Uranus has 28 confirmed satellites, divided into three groups:
Inner satellites, which orbit inside Uranus' ring system. They are very close to each other, which suggests that their orbits are chaotic.
Large satellites, consisting of Titania, Oberon, Umbriel, Ariel, and Miranda. Most of them have roughly equal amounts of rock and ice, except Miranda, which is made primarily of ice.
Irregular satellites, having more distant and eccentric orbits than the other objects.
Neptune (29.9–30.5 AU) is the furthest known planet in the Solar System. Its outer atmosphere has a slightly muted cyan color, with occasional storms on the visible surface that appear as dark spots. Like Uranus, many atmospheric phenomena of Neptune are unexplained, such as the abnormally high temperature of its thermosphere or the strong tilt (47°) of its magnetosphere. As of 2024, Neptune has 16 confirmed satellites, divided into two groups:
Regular satellites, which have circular orbits that lie near Neptune's equator.
Irregular satellites, which as the name implies, have less regular orbits. One of them, Triton, is Neptune's largest moon. It is geologically active, with erupting geysers of nitrogen gas, and possesses a thin, cloudy nitrogen atmosphere.
Centaurs
The centaurs are icy, comet-like bodies whose semi-major axes are longer than Jupiter's and shorter than Neptune's (between 5.5 and 30 AU). These are former Kuiper belt and scattered disc objects (SDOs) that were gravitationally perturbed closer to the Sun by the outer planets, and are expected to become comets or be ejected out of the Solar System. While most centaurs are inactive and asteroid-like, some exhibit cometary activity, such as the first centaur discovered, 2060 Chiron, which has been classified as a comet (95P) because it develops a coma just as comets do when they approach the Sun. The largest known centaur, 10199 Chariklo, has a diameter of about and is one of the few minor planets possessing a ring system.
Trans-Neptunian region
Beyond the orbit of Neptune lies the area of the "trans-Neptunian region", with the doughnut-shaped Kuiper belt, home of Pluto and several other dwarf planets, and an overlapping disc of scattered objects, which is tilted toward the plane of the Solar System and reaches much further out than the Kuiper belt. The entire region is still largely unexplored. It appears to consist overwhelmingly of many thousands of small worlds—the largest having a diameter only a fifth that of Earth and a mass far smaller than that of the Moon—composed mainly of rock and ice. This region is sometimes described as the "third zone of the Solar System", enclosing the inner and the outer Solar System.
Kuiper belt
The Kuiper belt is a great ring of debris similar to the asteroid belt, but consisting mainly of objects composed primarily of ice. It extends between 30 and 50 AU from the Sun. It is composed mainly of small Solar System bodies, although the largest few are probably large enough to be dwarf planets. There are estimated to be over 100,000 Kuiper belt objects with a diameter greater than , but the total mass of the Kuiper belt is thought to be only a tenth or even a hundredth the mass of Earth. Many Kuiper belt objects have satellites, and most have orbits that are substantially inclined (~10°) to the plane of the ecliptic.
The Kuiper belt can be roughly divided into the "classical" belt and the resonant trans-Neptunian objects. The latter have orbits whose periods are in a simple ratio to that of Neptune: for example, going around the Sun twice for every three times that Neptune does, or once for every two. The classical belt consists of objects having no resonance with Neptune, and extends from roughly 39.4 to 47.7 AU. Members of the classical Kuiper belt are sometimes called "cubewanos", after the first of their kind to be discovered, originally designated 1992 QB1 (since named Albion); they are still in near-primordial, low-eccentricity orbits.
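The 2:3 case can be checked directly from the orbital periods involved (about 164.8 years for Neptune and about 248 years for a plutino such as Pluto):

$$2 \times 248\ \text{yr} \approx 496\ \text{yr} \approx 3 \times 164.8\ \text{yr},$$

so such an object completes two orbits in almost exactly the time Neptune takes to complete three.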
There is strong consensus among astronomers that five members of the Kuiper belt are dwarf planets. Many dwarf planet candidates are being considered, pending further data for verification.
Pluto (29.7–49.3 AU) is the largest known object in the Kuiper belt. Pluto has a relatively eccentric orbit, inclined 17 degrees to the ecliptic plane. Pluto has a 2:3 resonance with Neptune, meaning that Pluto orbits the Sun twice for every three Neptunian orbits. Kuiper belt objects whose orbits share this resonance are called plutinos. Pluto has five moons: Charon, Styx, Nix, Kerberos, and Hydra.
Charon, the largest of Pluto's moons, is sometimes described as part of a binary system with Pluto, as the two bodies orbit a barycenter located above their surfaces (i.e. they appear to "orbit each other").
Orcus (30.3–48.1 AU) is in the same 2:3 orbital resonance with Neptune as Pluto, and is the largest such object after Pluto itself. Its eccentricity and inclination are similar to Pluto's, but its perihelion lies about 120° from that of Pluto. Thus, the phase of Orcus's orbit is opposite to Pluto's: Orcus is at aphelion (most recently in 2019) around when Pluto is at perihelion (most recently in 1989) and vice versa. For this reason, it has been called the anti-Pluto. It has one known moon, Vanth.
Haumea (34.6–51.6 AU) was discovered in 2005. It is in a temporary 7:12 orbital resonance with Neptune. Haumea possesses a ring system, two known moons named Hiʻiaka and Namaka, and rotates so quickly (once every 3.9 hours) that it is stretched into an ellipsoid. It is part of a collisional family of Kuiper belt objects that share similar orbits, which suggests a giant impact on Haumea ejected fragments into space billions of years ago.
Makemake (38.1–52.8 AU), although smaller than Pluto, is the largest known object in the classical Kuiper belt (that is, a Kuiper belt object not in a confirmed resonance with Neptune). Makemake is the brightest object in the Kuiper belt after Pluto. Discovered in 2005, it was officially named in 2009. Its orbit is far more inclined than Pluto's, at 29°. It has one known moon, S/2015 (136472) 1.
Quaoar (41.9–45.5 AU) is the second-largest known object in the classical Kuiper belt, after Makemake. Its orbit is significantly less eccentric and inclined than those of Makemake or Haumea. It possesses a ring system and one known moon, Weywot.
Scattered disc
The scattered disc, which overlaps the Kuiper belt but extends out to near 500 AU, is thought to be the source of short-period comets. Scattered-disc objects are believed to have been perturbed into erratic orbits by the gravitational influence of Neptune's early outward migration. Most scattered disc objects have perihelia within the Kuiper belt but aphelia far beyond it (some more than 150 AU from the Sun). SDOs' orbits can be inclined up to 46.8° from the ecliptic plane. Some astronomers consider the scattered disc to be merely another region of the Kuiper belt and describe scattered-disc objects as "scattered Kuiper belt objects". Some astronomers classify centaurs as inward-scattered Kuiper belt objects along with the outward-scattered residents of the scattered disc.
Currently, there is strong consensus among astronomers that two of the bodies in the scattered disc are dwarf planets:
Eris (38.3–97.5 AU) is the largest known scattered disc object and the most massive known dwarf planet. Eris's discovery contributed to a debate about the definition of a planet because it is 25% more massive than Pluto and about the same diameter. It has one known moon, Dysnomia. Like Pluto, its orbit is highly eccentric, with a perihelion of 38.2 AU (roughly Pluto's distance from the Sun) and an aphelion of 97.6 AU, and steeply inclined to the ecliptic plane at an angle of 44°.
Gonggong (33.8–101.2 AU) is a dwarf planet in a comparable orbit to Eris, except that it is in a 3:10 resonance with Neptune. It has one known moon, Xiangliu.
Extreme trans-Neptunian objects
Some objects in the Solar System have very large orbits, and are therefore much less affected by the known giant planets than other minor-planet populations. These bodies are called extreme trans-Neptunian objects, or ETNOs for short. Generally, ETNOs have semi-major axes of at least 150–250 AU. For example, 541132 Leleākūhonua orbits the Sun once every ~32,000 years, at a distance of 65–2,000 AU from the Sun.
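These figures are mutually consistent under Kepler's third law: taking the semi-major axis as the mean of the perihelion and aphelion distances gives

$$a \approx \frac{65 + 2000}{2}\ \text{AU} \approx 1030\ \text{AU}, \qquad T \approx a^{3/2} \approx 3.3\times10^{4}\ \text{yr},$$

in line with the ~32,000-year period quoted above, with the small difference reflecting the rounded distances.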
This population is divided into three subgroups by astronomers. The scattered ETNOs have perihelia around 38–45 AU and an exceptionally high eccentricity of more than 0.85. As with the regular scattered disc objects, they were likely formed as result of gravitational scattering by Neptune and still interact with the giant planets. The detached ETNOs, with perihelia approximately between 40–45 and 50–60 AU, are less affected by Neptune than the scattered ETNOs, but are still relatively close to Neptune. The sednoids or inner Oort cloud objects, with perihelia beyond 50–60 AU, are too far from Neptune to be strongly influenced by it.
Currently, there is one ETNO that is classified as a dwarf planet:
Sedna (76.2–937 AU) was the first extreme trans-Neptunian object to be discovered. It is a large, reddish object, and it takes ~11,400 years for Sedna to complete one orbit. Mike Brown, who discovered the object in 2003, asserts that it cannot be part of the scattered disc or the Kuiper belt because its perihelion is too distant to have been affected by Neptune's migration. The sednoid population is named after Sedna.
Edge of the heliosphere
The Sun's stellar-wind bubble, the heliosphere, a region of space dominated by the Sun, has its boundary at the termination shock. Based on the Sun's peculiar motion relative to the local standard of rest, this boundary is roughly 80–100 AU from the Sun upwind of the interstellar medium and roughly 200 AU from the Sun downwind. Here the solar wind collides with the interstellar medium and dramatically slows, condenses and becomes more turbulent, forming a great oval structure known as the heliosheath.
The heliosheath has been theorized to look and behave very much like a comet's tail, extending outward for a further 40 AU on the upwind side but tailing many times that distance downwind to possibly several thousands of AU. Evidence from the Cassini and Interstellar Boundary Explorer spacecraft has suggested that it is forced into a bubble shape by the constraining action of the interstellar magnetic field, but the actual shape remains unknown.
The shape and form of the outer edge of the heliosphere is likely affected by the fluid dynamics of interactions with the interstellar medium as well as solar magnetic fields prevailing to the south, e.g. it is bluntly shaped with the northern hemisphere extending 9 AU farther than the southern hemisphere. The heliopause is considered the beginning of the interstellar medium. Beyond the heliopause, at around 230 AU, lies the bow shock: a plasma "wake" left by the Sun as it travels through the Milky Way. Large objects outside the heliopause remain gravitationally bound to the Sun, but the flow of matter in the interstellar medium homogenizes the distribution of micro-scale objects.
Miscellaneous populations
Comets
Comets are small Solar System bodies, typically only a few kilometers across, composed largely of volatile ices. They have highly eccentric orbits, generally with a perihelion within the orbits of the inner planets and an aphelion far beyond Pluto. When a comet enters the inner Solar System, its proximity to the Sun causes its icy surface to sublimate and ionise, creating a coma and a long tail of gas and dust that is often visible to the naked eye.
Short-period comets have orbits lasting less than two hundred years. Long-period comets have orbits lasting thousands of years. Short-period comets are thought to originate in the Kuiper belt, whereas long-period comets, such as Hale–Bopp, are thought to originate in the Oort cloud. Many comet groups, such as the Kreutz sungrazers, formed from the breakup of a single parent. Some comets with hyperbolic orbits may originate outside the Solar System, but determining their precise orbits is difficult. Old comets whose volatiles have mostly been driven out by solar warming are often categorized as asteroids.
Meteoroids, meteors and dust
Solid objects smaller than one meter are usually called meteoroids and micrometeoroids (grain-sized), with the exact division between the two categories having been debated over the years. By 2017, the IAU designated any solid object having a diameter between roughly 30 micrometers and 1 meter as a meteoroid, and deprecated the micrometeoroid categorization, instead terming smaller particles simply 'dust particles'.
Some meteoroids formed via the disintegration of comets and asteroids, while a few formed via impact debris ejected from planetary bodies. Most meteoroids are made of silicates and heavier metals like nickel and iron. When passing through the Solar System, comets produce a trail of meteoroids; it is hypothesized that this is caused either by vaporization of the comet's material or by the simple breakup of dormant comets. When crossing an atmosphere, these meteoroids produce bright streaks in the sky due to atmospheric entry, called meteors. If a stream of meteoroids enters the atmosphere on parallel trajectories, the meteors will seemingly 'radiate' from a point in the sky, hence the phenomenon's name: meteor shower.
The inner Solar System is home to the zodiacal dust cloud, which is visible as the hazy zodiacal light in dark, unpolluted skies. It may be generated by collisions within the asteroid belt brought on by gravitational interactions with the planets; a more recently proposed origin is material from the planet Mars. The outer Solar System hosts a cosmic dust cloud. It extends from about to about , and was probably created by collisions within the Kuiper belt.
Boundary region and uncertainties
Much of the Solar System is still unknown. Regions beyond thousands of AU away are still virtually unmapped and learning about this region of space is difficult. Study in this region depends upon inferences from those few objects whose orbits happen to be perturbed such that they fall closer to the Sun, and even then, detecting these objects has often been possible only when they happened to become bright enough to register as comets. Many objects may yet be discovered in the Solar System's uncharted regions.
The Oort cloud is a theorized spherical shell of up to a trillion icy objects that is thought to be the source for all long-period comets. No direct observation of the Oort cloud is possible with present imaging technology. It is theorized to surround the Solar System at roughly 50,000 AU (~0.9 ly) from the Sun and possibly to as far as 100,000 AU (~1.8 ly). The Oort cloud is thought to be composed of comets that were ejected from the inner Solar System by gravitational interactions with the outer planets. Oort cloud objects move very slowly, and can be perturbed by infrequent events, such as collisions, the gravitational effects of a passing star, or the galactic tide, the tidal force exerted by the Milky Way.
As of the 2020s, a few astronomers have hypothesized that Planet Nine (a planet beyond Neptune) might exist, based on statistical variance in the orbit of extreme trans-Neptunian objects. Their closest approaches to the Sun are mostly clustered around one sector and their orbits are similarly tilted, suggesting that a large planet might be influencing their orbit over millions of years. However, some astronomers said that this observation might be credited to observational biases or just sheer coincidence. An alternative hypothesis has a close flyby of another star disrupting the outer Solar System.
The Sun's gravitational field is estimated to dominate the gravitational forces of surrounding stars out to about two light-years. Lower estimates for the radius of the Oort cloud, by contrast, do not place it farther than . Most of the mass is orbiting in the region between 3,000 and . The furthest known objects, such as Comet West, have aphelia around from the Sun. The Sun's Hill sphere with respect to the galactic nucleus, the effective range of its gravitational influence, is thought to extend up to a thousand times farther and encompasses the hypothetical Oort cloud. It was calculated by G. A. Chebotarev to be 230,000 AU.
Celestial neighborhood
Within 10 light-years of the Sun there are relatively few stars, the closest being the triple star system Alpha Centauri, which is about 4.4 light-years away and may be in the Local Bubble's G-Cloud. Alpha Centauri A and B are a closely tied pair of Sun-like stars, whereas the closest star to the Sun, the small red dwarf Proxima Centauri, orbits the pair at a distance of 0.2 light-years. In 2016, a potentially habitable exoplanet was found to be orbiting Proxima Centauri, called Proxima Centauri b, the closest confirmed exoplanet to the Sun.
The Solar System is surrounded by the Local Interstellar Cloud, although it is not clear if it is embedded in the Local Interstellar Cloud or if it lies just outside the cloud's edge. Multiple other interstellar clouds exist in the region within 300 light-years of the Sun, known as the Local Bubble. The latter feature is an hourglass-shaped cavity or superbubble in the interstellar medium roughly 300 light-years across. The bubble is suffused with high-temperature plasma, suggesting that it may be the product of several recent supernovae.
The Local Bubble is a small superbubble compared to the neighboring wider Radcliffe Wave and Split linear structures (formerly Gould Belt), each of which is some thousands of light-years in length. All these structures are part of the Orion Arm, which contains most of the stars in the Milky Way that are visible to the unaided eye.
Groups of stars form together in star clusters, before dissolving into co-moving associations. A prominent grouping that is visible to the naked eye is the Ursa Major moving group, which is around 80 light-years away within the Local Bubble. The nearest star cluster is the Hyades, which lies at the edge of the Local Bubble. The closest star-forming regions are the Corona Australis Molecular Cloud, the Rho Ophiuchi cloud complex, and the Taurus molecular cloud; the last of these lies just beyond the Local Bubble and is part of the Radcliffe Wave.
Stellar flybys that pass within of the Sun occur roughly once every 100,000 years. The closest well-measured approach was Scholz's Star, which approached to ~ of the Sun some 70 thousand years ago, likely passing through the outer Oort cloud. There is a 1% chance every billion years that a star will pass within of the Sun, potentially disrupting the Solar System.
Galactic position
The Solar System is located in the Milky Way, a barred spiral galaxy with a diameter of about 100,000 light-years containing more than 100 billion stars. The Sun is part of one of the Milky Way's outer spiral arms, known as the Orion–Cygnus Arm or Local Spur. It is a member of the thin disk population of stars orbiting close to the galactic plane.
Its speed around the center of the Milky Way is about 220 km/s, so that it completes one revolution every 240 million years. This revolution is known as the Solar System's galactic year. The solar apex, the direction of the Sun's path through interstellar space, is near the constellation Hercules in the direction of the current location of the bright star Vega. The plane of the ecliptic lies at an angle of about 60° to the galactic plane.
The Sun follows a nearly circular orbit around the Galactic Center (where the supermassive black hole Sagittarius A* resides) at a distance of 26,660 light-years, orbiting at roughly the same speed as that of the spiral arms. If it orbited close to the center, gravitational tugs from nearby stars could perturb bodies in the Oort cloud and send many comets into the inner Solar System, producing collisions with potentially catastrophic implications for life on Earth. In this scenario, the intense radiation of the Galactic Center could interfere with the development of complex life.
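As a rough consistency check of the quoted orbital speed, distance, and galactic year, the following minimal sketch assumes a circular orbit at the stated radius; the conversion constants are approximate.

```python
import math

# Sketch: estimate the galactic year from the quoted orbital radius and speed,
# assuming a circular orbit around the Galactic Center.
KM_PER_LIGHT_YEAR = 9.461e12
SECONDS_PER_YEAR = 3.156e7

radius_km = 26_660 * KM_PER_LIGHT_YEAR      # distance to the Galactic Center
speed_km_s = 220                            # orbital speed of the Sun

period_years = 2 * math.pi * radius_km / speed_km_s / SECONDS_PER_YEAR
print(f"Galactic year ~ {period_years / 1e6:.0f} million years")  # ~230 million years, same order as the quoted 240
```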
The Solar System's location in the Milky Way is a factor in the evolutionary history of life on Earth. Spiral arms are home to a far larger concentration of supernovae, gravitational instabilities, and radiation that could disrupt the Solar System, but since Earth stays in the Local Spur and therefore does not pass frequently through spiral arms, this has given Earth long periods of stability for life to evolve. However, according to the controversial Shiva hypothesis, the changing position of the Solar System relative to other parts of the Milky Way could explain periodic extinction events on Earth.
Discovery and exploration
Humanity's knowledge of the Solar System has grown incrementally over the centuries. Up to the Late Middle Ages–Renaissance, astronomers from Europe to India believed Earth to be stationary at the center of the universe and categorically different from the divine or ethereal objects that moved through the sky. Although the Greek philosopher Aristarchus of Samos had speculated on a heliocentric reordering of the cosmos, Nicolaus Copernicus was the first person known to have developed a mathematically predictive heliocentric system.
Heliocentrism did not triumph immediately over geocentrism, but the work of Copernicus had its champions, notably Johannes Kepler. Using a heliocentric model that improved upon Copernicus by allowing orbits to be elliptical, and the precise observational data of Tycho Brahe, Kepler produced the Rudolphine Tables, which enabled accurate computations of the positions of the then-known planets. Pierre Gassendi used them to predict a transit of Mercury in 1631, and Jeremiah Horrocks did the same for a transit of Venus in 1639. This provided a strong vindication of heliocentrism and Kepler's elliptical orbits.
In the 17th century, Galileo publicized the use of the telescope in astronomy; he and Simon Marius independently discovered that Jupiter had four satellites in orbit around it. Christiaan Huygens followed on from these observations by discovering Saturn's moon Titan and the shape of the rings of Saturn. In 1677, Edmond Halley observed a transit of Mercury across the Sun, leading him to realize that observations of the solar parallax of a planet (more ideally using the transit of Venus) could be used to trigonometrically determine the distances between Earth, Venus, and the Sun. Halley's friend Isaac Newton, in his magisterial Principia Mathematica of 1687, demonstrated that celestial bodies are not quintessentially different from Earthly ones: the same laws of motion and of gravity apply on Earth and in the skies.
The term "Solar System" entered the English language by 1704, when John Locke used it to refer to the Sun, planets, and comets. In 1705, Halley realized that repeated sightings of a comet were of the same object, returning regularly once every 75–76 years. This was the first evidence that anything other than the planets repeatedly orbited the Sun, though Seneca had theorized this about comets in the 1st century. Careful observations of the 1769 transit of Venus allowed astronomers to calculate the average Earth–Sun distance as , only 0.8% greater than the modern value.
Uranus, having occasionally been observed since 1690 and possibly from antiquity, was recognized to be a planet orbiting beyond Saturn by 1783. In 1838, Friedrich Bessel successfully measured a stellar parallax, an apparent shift in the position of a star created by Earth's motion around the Sun, providing the first direct, experimental proof of heliocentrism. Neptune was identified as a planet some years later, in 1846, thanks to its gravitational pull causing a slight but detectable variation in the orbit of Uranus. Observations of an anomaly in Mercury's orbit led to searches for Vulcan, a hypothetical planet interior to Mercury, but the anomaly was ultimately explained by Albert Einstein's theory of general relativity in 1915.
In the 20th century, humans began exploring the Solar System with spacecraft, starting with the placement of telescopes in space in the 1960s. By 1989, all eight planets had been visited by space probes. Probes have returned samples from comets and asteroids, flown through the Sun's corona, and visited two dwarf planets (Pluto and Ceres). To save fuel, some space missions make use of gravity assist maneuvers, such as the two Voyager probes accelerating during flybys of planets in the outer Solar System and the Parker Solar Probe decelerating toward the Sun after each flyby of Venus.
Humans landed on the Moon during the Apollo program in the 1960s and 1970s and will return to the Moon in the 2020s with the Artemis program. Discoveries in the 20th and 21st centuries have prompted the redefinition of the term planet in 2006, hence the demotion of Pluto to a dwarf planet, and further interest in trans-Neptunian objects.
| Physical sciences | Science and medicine | null |
26904 | https://en.wikipedia.org/wiki/Silurian | Silurian | The Silurian ( ) is a geologic period and system spanning 24.6 million years from the end of the Ordovician Period, at million years ago (Mya), to the beginning of the Devonian Period, Mya. The Silurian is the third and shortest period of the Paleozoic Era, and the third of twelve periods of the Phanerozoic Eon. As with other geologic periods, the rock beds that define the period's start and end are well identified, but the exact dates are uncertain by a few million years. The base of the Silurian is set at a series of major Ordovician–Silurian extinction events when up to 60% of marine genera were wiped out.
One important event in this period was the initial establishment of terrestrial life in what is known as the Silurian-Devonian Terrestrial Revolution: vascular plants emerged from more primitive land plants, dikaryan fungi started expanding and diversifying along with glomeromycotan fungi, and three groups of arthropods (myriapods, arachnids and hexapods) became fully terrestrialized.
Another significant evolutionary milestone during the Silurian was the diversification of jawed fish, which include placoderms, acanthodians (which gave rise to cartilaginous fish) and osteichthyans (bony fish, further divided into lobe-finned and ray-finned fishes), although this corresponded with a sharp decline of jawless fish such as conodonts and ostracoderms.
History of study
The Silurian system was first identified by the Scottish geologist Roderick Murchison, who was examining fossil-bearing sedimentary rock strata in south Wales in the early 1830s. He named the sequences for a Celtic tribe of Wales, the Silures, inspired by his friend Adam Sedgwick, who had named the period of his study the Cambrian, from a Latin name for Wales. Whilst the British rocks now identified as belonging to the Silurian System and the lands now thought to have been inhabited in antiquity by the Silures show little correlation (compare the geologic map of Wales with a map of the pre-Roman tribes of Wales), Murchison conjectured that their territory included the Caer Caradoc and Wenlock Edge exposures, and that if it did not, there were plenty of Silurian rocks elsewhere 'to sanction the name proposed'. In 1835 the two men presented a joint paper, under the title On the Silurian and Cambrian Systems, Exhibiting the Order in which the Older Sedimentary Strata Succeed each other in England and Wales, which was the germ of the modern geological time scale. However, when traced farther afield, the "Silurian" series as first identified quickly came to overlap Sedgwick's "Cambrian" sequence, provoking furious disagreements that ended the friendship.
The English geologist Charles Lapworth resolved the conflict by defining a new Ordovician system including the contested beds. An alternative name for the Silurian was "Gotlandian" after the strata of the Baltic island of Gotland.
The French geologist Joachim Barrande, building on Murchison's work, used the term Silurian in a more comprehensive sense than was justified by subsequent knowledge. He divided the Silurian rocks of Bohemia into eight stages. His interpretation was questioned in 1854 by Edward Forbes, and Barrande's later stages (F, G and H) have since been shown to be Devonian. Despite these modifications to the original groupings of the strata, it is recognized that Barrande established Bohemia as a classic ground for the study of the earliest Silurian fossils.
Subdivisions
Paleogeography
With the supercontinent Gondwana covering the equator and much of the southern hemisphere, a large ocean occupied most of the northern half of the globe. The high sea levels of the Silurian and the relatively flat land (with few significant mountain belts) resulted in a number of island chains, and thus a rich diversity of environmental settings.
During the Silurian, Gondwana continued a slow southward drift to high southern latitudes, but there is evidence that the Silurian icecaps were less extensive than those of the late-Ordovician glaciation. The southern continents remained united during this period. The melting of icecaps and glaciers contributed to a rise in sea level, recognizable from the fact that Silurian sediments overlie eroded Ordovician sediments, forming an unconformity. The continents of Avalonia, Baltica, and Laurentia drifted together near the equator, starting the formation of a second supercontinent known as Euramerica.
When proto-Europe collided with North America, the collision folded coastal sediments that had been accumulating since the Cambrian off the east coast of North America and the west coast of Europe. This event is the Caledonian orogeny, a spate of mountain building that stretched from New York State through conjoined Europe and Greenland to Norway. At the end of the Silurian, sea levels dropped again, leaving telltale basins of evaporites extending from Michigan to West Virginia, and the new mountain ranges were rapidly eroded. The Teays River, flowing into the shallow mid-continental sea, eroded Ordovician Period strata, forming deposits of Silurian strata in northern Ohio and Indiana.
The vast ocean of Panthalassa covered most of the northern hemisphere. Other minor oceans include two phases of the Tethys, the Proto-Tethys and Paleo-Tethys, the Rheic Ocean, the Iapetus Ocean (a narrow seaway between Avalonia and Laurentia), and the newly formed Ural Ocean.
Climate and sea level
The Silurian period was once believed to have enjoyed relatively stable and warm temperatures, in contrast with the extreme glaciations of the Ordovician before it and the extreme heat of the ensuing Devonian; however, it is now known that the global climate underwent many drastic fluctuations throughout the Silurian, evidenced by numerous major carbon and oxygen isotope excursions during this geologic period. Sea levels rose from their Hirnantian low throughout the first half of the Silurian; they subsequently fell throughout the rest of the period, although smaller scale patterns are superimposed on this general trend; fifteen high-stands (periods when sea levels were above the edge of the continental shelf) can be identified, and the highest Silurian sea level was probably around higher than the lowest level reached.
During this period, the Earth entered a warm greenhouse phase, supported by high CO2 levels of 4500 ppm, and warm shallow seas covered much of the equatorial land masses. Early in the Silurian, glaciers retreated toward the South Pole until they had almost disappeared by the middle of the Silurian. Layers of broken shells (called coquina) provide strong evidence of a climate dominated by violent storms, generated then as now by warm sea surfaces.
Perturbations
The climate and carbon cycle appear to have been rather unsettled during the Silurian, which had a higher frequency of isotopic excursions (indicative of climate fluctuations) than any other period. The Ireviken event, Mulde event, and Lau event each represent isotopic excursions following a minor mass extinction and associated with rapid sea-level change. Each one leaves a similar signature in the geological record, both geochemically and biologically; pelagic (free-swimming) organisms were particularly hard hit, as were brachiopods, corals, and trilobites, and extinctions rarely occur in a rapid series of fast bursts. The climate fluctuations are best explained by a sequence of glaciations, but the lack of tillites in the middle to late Silurian makes this explanation problematic.
Flora and fauna
The Silurian period has been viewed by some palaeontologists as an extended recovery interval following the Late Ordovician mass extinction (LOME), which interrupted the cascading increase in biodiversity that had continuously gone on throughout the Cambrian and most of the Ordovician.
The Silurian was the first period to see megafossils of extensive terrestrial biota in the form of moss-like miniature forests along lakes and streams and networks of large, mycorrhizal nematophytes, heralding the beginning of the Silurian-Devonian Terrestrial Revolution. However, the land fauna did not have a major impact on the Earth until it diversified in the Devonian.
The first fossil records of vascular plants, that is, land plants with tissues that carry water and food, appeared in the second half of the Silurian Period. The earliest-known representatives of this group are Cooksonia. Most of the sediments containing Cooksonia are marine in nature. Preferred habitats were likely along rivers and streams. Baragwanathia appears to be almost as old, dating to the early Ludlow (420 million years) and has branching stems and needle-like leaves of . The plant shows a high degree of development in relation to the age of its fossil remains. Fossils of this plant have been recorded in Australia, Canada, and China. Eohostimella heathana is an early, probably terrestrial, "plant" known from compression fossils of Early Silurian (Llandovery) age. The chemistry of its fossils is similar to that of fossilised vascular plants, rather than algae.
Fossils considered to be terrestrial animals are also known from the Silurian. The oldest definitive millipede records are Kampecaris obanensis and Archidesmus sp. from the late Silurian (425 million years ago) of Kerrera. Other millipedes, centipedes, and trigonotarbid arachnids are known from the Ludlow (420 million years ago). The presence of predatory invertebrates would indicate that simple food webs were in place that included non-predatory prey animals. Extrapolating back from Early Devonian biota, Andrew Jeram et al. in 1990 suggested a food web based on as-yet-undiscovered detritivores and grazers on micro-organisms. Millipedes from the Cowie Formation, such as Cowiedesmus and Pneumodesmus, were once considered the oldest millipedes, dating from the middle Silurian at 428–430 million years ago, although the age of this formation has since been reinterpreted as early Devonian by some researchers. Regardless, Pneumodesmus remains an important fossil as the oldest definitive evidence of spiracles for breathing air.
The first bony fish, the Osteichthyes, appeared, represented by the Acanthodians covered with bony scales. Fish reached considerable diversity and developed movable jaws, adapted from the supports of the front two or three gill arches. A diverse fauna of eurypterids (sea scorpions)—some of them several meters in length—prowled the shallow Silurian seas and lakes of North America; many of their fossils have been found in New York state. Brachiopods were abundant and diverse, with the taxonomic composition, ecology, and biodiversity of Silurian brachiopods mirroring Ordovician ones. Brachiopods that survived the LOME developed novel adaptations for environmental stress, and they tended to be endemic to a single palaeoplate in the mass extinction's aftermath, but expanded their range afterwards. The most abundant brachiopods were atrypids and pentamerides; atrypids were the first to recover and rediversify in the Rhuddanian after LOME, while pentameride recovery was delayed until the Aeronian. Bryozoans exhibited significant degrees of endemism to a particular shelf. They also developed symbiotic relationships with cnidarians and stromatolites. Many bivalve fossils have also been found in Silurian deposits, and the first deep-boring bivalves are known from this period. Chitons saw a peak in diversity during the middle of the Silurian. Hederelloids enjoyed significant success in the Silurian, with some developing symbioses with the colonial rugose coral Entelophyllum. The Silurian was a heyday for tentaculitoids, which experienced an evolutionary radiation focused mainly in Baltoscandia, along with an expansion of their geographic range in the Llandovery and Wenlock. Trilobites started to recover in the Rhuddanian, and they continued to enjoy success in the Silurian as they had in the Ordovician despite their reduction in clade diversity as a result of LOME. The Early Silurian was a chaotic time of turnover for crinoids as they rediversified after LOME. Members of Flexibilia, which were minimally impacted by LOME, took on an increasing ecological prominence in Silurian seas. Monobathrid camerates, like flexibles, diversified in the Llandovery, whereas cyathocrinids and dendrocrinids diversified later in the Silurian. Scyphocrinoid loboliths suddenly appeared in the terminal Silurian, shortly before the Silurian-Devonian boundary, and disappeared as abruptly as they appeared very shortly after their first appearance. Endobiotic symbionts were common in the corals and stromatoporoids. Rugose corals especially were colonised and encrusted by a diverse range of epibionts, including certain hederelloids as aforementioned. Photosymbiotic scleractinians made their first appearance during the Middle Silurian. Reef abundance was patchy; sometimes, fossils are frequent, but at other points, are virtually absent from the rock record.
| Physical sciences | Geological timescale | Earth science |
26932 | https://en.wikipedia.org/wiki/Sagittarius%20%28constellation%29 | Sagittarius (constellation) | Sagittarius is one of the constellations of the zodiac and is located in the Southern celestial hemisphere. It is one of the 48 constellations listed by the 2nd-century astronomer Ptolemy and remains one of the 88 modern constellations. Its old astronomical symbol is (♐︎). Its name is Latin for "archer". Sagittarius is commonly represented as a centaur drawing a bow. It lies between Scorpius and Ophiuchus to the west and Capricornus and Microscopium to the east.
The center of the Milky Way lies in the westernmost part of Sagittarius (see Sagittarius A).
Visualizations
As seen from the northern hemisphere, the constellation's brighter stars form an easily recognizable asterism known as "the Teapot". The stars δ Sgr (Kaus Media), ε Sgr (Kaus Australis), ζ Sgr (Ascella), and φ Sgr form the body of the pot; λ Sgr (Kaus Borealis) is the point of the lid; γ2 Sgr (Alnasl) is the tip of the spout; and σ Sgr (Nunki) and τ Sgr the handle. These same stars originally formed the bow and arrow of Sagittarius.
Marking the bottom of the teapot's "handle" (or the shoulder area of the archer) is the bright star (2.59 magnitude) Zeta Sagittarii (ζ Sgr), named Ascella, and the fainter Tau Sagittarii (τ Sgr).
To complete the teapot metaphor, under dark skies a particularly dense area of the Milky Way (the Large Sagittarius Star Cloud) can be seen rising in a north-westerly arc above the spout, like a puff of steam rising from a boiling kettle.
The constellation as a whole is often depicted as having the rough appearance of a stick-figure archer drawing its bow, with the fainter stars providing the outline of the horse's body. Sagittarius famously points its arrow at the heart of Scorpius, represented by the reddish star Antares, as the two constellations race around the sky. Following the direct line formed by Delta Sagittarii (δ Sgr) and Gamma2 Sagittarii (γ2 Sgr) leads nearly directly to Antares. Fittingly, Gamma2 Sagittarii is Alnasl, the Arabic word for "arrowhead", and Delta Sagittarii is called Kaus Media, the "center of the bow," from which the arrow protrudes. Kaus Media bisects Lambda Sagittarii (λ Sgr) and Epsilon Sagittarii (ε Sgr), whose names Kaus Borealis and Kaus Australis refer to the northern and southern portions of the bow, respectively.
Due to its astronomical interest and its status as a Zodiac constellation, Sagittarius is one of the best-known constellations and is considered a prominent feature of the summer skies in the northern hemisphere. However, at locations north of 43°N the constellation either drags along the southern horizon, or it does not rise at all. By contrast, in most of the southern hemisphere Sagittarius can appear overhead or nearly so. It is hidden behind the Sun's glare from mid-November to mid-January and is the location of the Sun at the December solstice. By March, Sagittarius is rising at midnight. In June, it achieves opposition and can be seen all night. The June full moon appears in Sagittarius.
In classical antiquity, Capricorn was the location of the Sun at the December solstice, but due to the precession of the equinoxes, this had shifted to Sagittarius by the time of the Roman Empire. By approximately 2700 AD, the Sun will be in Scorpius at the December solstice.
Notable features
Stars
α Sgr (Rukbat, meaning "the archer's knee"), despite having the "alpha" designation, is not the brightest star of the constellation, having a magnitude of only 3.96. It is towards the bottom center of the map as shown. Instead, the brightest star is Epsilon Sagittarii (ε Sgr) ("Kaus Australis", or "southern part of the bow"), at magnitude 1.85, or about seven times as bright as α Sgr.
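The "about seven times as bright" comparison follows from the Pogson magnitude relation, in which a difference of five magnitudes corresponds to a brightness factor of 100. A minimal sketch using the magnitudes quoted above:

```python
# Sketch: brightness ratio implied by the magnitude difference between
# Epsilon Sagittarii (1.85) and Alpha Sagittarii (3.96).
m_epsilon = 1.85
m_alpha = 3.96

ratio = 100 ** ((m_alpha - m_epsilon) / 5)   # standard Pogson magnitude relation
print(f"Epsilon Sgr is about {ratio:.1f} times brighter than Alpha Sgr")  # ~7
```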
Sigma Sagittarii (σ Sgr) ("Nunki") is the constellation's second-brightest star at magnitude 2.08. Nunki is a B2V star approximately 260 light-years away. "Nunki" is a Babylonian name of uncertain origin, but thought to represent the sacred Babylonian city of Eridu on the Euphrates, which would make Nunki the oldest star name currently in use.
Zeta Sagittarii (ζ Sgr) ("Ascella"), with apparent magnitude 2.61 and an A2 spectral type, is actually a double star whose two components have magnitudes 3.3 and 3.5.
Delta Sagittarii (δ Sgr) ("Kaus Meridionalis") is a star of spectral type K2 with magnitude 2.71, about 350 light-years from Earth.
Eta Sagittarii (η Sgr) is a double star with component magnitudes of 3.18 and 10, while Pi Sagittarii (π Sgr) ("Albaldah") is actually a triple system whose components have magnitudes 3.7, 3.8, and 6.0.
The Bayer designation Beta Sagittarii (Beta Sgr, β Sagittarii, β Sgr) is shared by two star systems, β¹ Sagittarii, with apparent magnitude 3.96, and β² Sagittarii, magnitude 7.4. The two stars are separated by 0.36° in the sky and are 378 light-years from Earth. Beta Sagittarii, located at a position associated with the forelegs of the centaur, has the traditional name "Arkab", meaning "Achilles tendon".
Nova Sagittarii 2015 No. 2 was discovered on 15 March 2015 from Chatsworth Island, NSW, Australia. It lies near the center of the constellation. It reached a peak magnitude of 4.3 before steadily fading.
Deep-sky objects
The Milky Way is at its densest near Sagittarius, as this is where the Galactic Center lies. As a result, Sagittarius contains many star clusters and nebulae.
Star clouds
Sagittarius contains two well-known star clouds, both considered fine binocular objects.
The Large Sagittarius Star Cloud is the brightest visible region of the Milky Way. It is a portion of the central bulge of the galaxy seen around the thick dust of the Great Rift, and is the innermost galactic structure that can be observed in visible wavelengths. It has several embedded clusters and superimposed dark nebulae.
The Small Sagittarius Star Cloud, also known as Messier 24, has an apparent magnitude of 2.5. The cloud fills a space of significant volume to a depth of 10,000 to 16,000 light-years. Embedded in M24 is NGC 6603, a small star cluster that is very dense. NGC 6567, a dim planetary nebula, and Barnard 92, a Bok globule, are also nearby.
Nebulae
Sagittarius contains several well-known nebulae, including the Lagoon Nebula (Messier 8), near λ Sagittarii; the Omega Nebula (Messier 17), near the border with Scutum; and the Trifid Nebula (Messier 20), a large nebula containing some very young, hot stars.
The Lagoon Nebula (M8) is an emission nebula that is located 5,000 light-years from Earth and measures 140 light-years by 60 light-years (1.5°). Though it appears grey when viewed visually through a telescope, long-exposure photographs reveal its pink hue, common to emission nebulae. It is fairly bright, with an integrated magnitude of 3.0. The Lagoon Nebula was discovered independently by John Flamsteed in 1680, Guillaume Le Gentil in 1747, and Charles Messier in 1764. The central area of the Lagoon Nebula is also known as the Hourglass Nebula, so named for its distinctive shape. The Hourglass Nebula has its shape because of matter propelled by Herschel 36. The Lagoon Nebula also features three dark nebulae listed in the Barnard Catalogue. The Lagoon Nebula was instrumental in the discovery of Bok globules, as Bart Bok studied prints of the nebula intensively in 1947. Approximately 17,000 Bok globules were discovered in the nebula nine years later as a part of the Palomar Sky Survey; studies later showed that Bok's hypothesis that the globules held protostars was correct.
The Omega Nebula is a fairly bright nebula, sometimes called the Horseshoe Nebula or Swan Nebula. It has an integrated magnitude of 6.0 and is 4890 light-years from Earth. It was discovered in 1746 by Philippe Loys de Chésaux; observers since him have differed greatly in how they view the nebula, hence its myriad of names. Most often viewed as a checkmark, it was seen as a swan by George F. Chambers in 1889, a loon by Roy Bishop, and as a curl of smoke by Camille Flammarion.
The Trifid Nebula (M20, NGC 6514) is an emission nebula in Sagittarius that lies less than two degrees from the Lagoon Nebula. Discovered by French comet-hunter Charles Messier, it is located between 2,000 and 9,000 light-years from Earth and has a diameter of approximately 50 light-years. The outside of the Trifid Nebula is a bluish reflection nebula; the interior is pink with two dark bands that divide it into three areas, sometimes called "lobes". Hydrogen in the nebula is ionized, creating its characteristic color, by a central triple star, which formed in the intersection of the two dark bands. M20 is associated with a cluster that has a magnitude of 6.3.
The Red Spider Nebula (NGC 6537) is a planetary nebula located at a distance of about 4000 light-years from Earth.
NGC 6559 is a star-forming region located at a distance of about 5000 light-years from Earth, in the constellation of Sagittarius, showing both emission (red) and reflection (blue) regions.
In addition, several other nebulae have been located within Sagittarius and are of interest to astronomy.
NGC 6445 is a planetary nebula with an approximate magnitude of 11. A large nebula at over one arcminute in diameter, it appears very close to the globular cluster NGC 6440.
NGC 6638 is a dimmer globular at magnitude 9.2, though it is more distant than M71 at a distance of 26,000 light-years. It is a Shapley class VI cluster; the classification means that it has an intermediate concentration at its core. It is approximately a degree away from the brighter globulars M22 and M28; NGC 6638 is southeast and southwest of the clusters respectively.
Other deep sky objects
In 1999 a violent outburst at V4641 Sgr was thought to have revealed the location of the closest known black hole to Earth, but later investigation increased its estimated distance by a factor of 15. The complex radio source Sagittarius A is also in Sagittarius, near its western boundary with Ophiuchus. Astronomers believe that one of its components, known as Sagittarius A*, is associated with a supermassive black hole at the center of the galaxy, with a mass of 2.6 million solar masses. Although not visible to the naked eye, Sagittarius A* is located off the top of the spout of the Teapot asterism. The Sagittarius Dwarf Elliptical Galaxy is located just outside the Milky Way.
Baade's Window is an area with very little obscuring dust that shows objects closer to the Milky Way's center than would normally be visible. NGC 6522, magnitude 8.6, and NGC 6528, magnitude 9.5, are both globular clusters visible through Baade's Window. 20,000 and 24,000 light-years from Earth, with Shapley classes of VI and V respectively, both are moderately concentrated at their cores. NGC 6528 is closer to the galactic core at an approximate distance of 2,000 light-years.
2MASS-GC02, also known as Hurt 2, is a globular cluster at a distance of about 16 thousand light-years from Earth. It was discovered in 2000 by Joselino Vasquez, and confirmed by a team of astronomers under the leadership of R. J. Hurt at 2MASS.
Exploration
The space probe New Horizons is moving on a trajectory out of the Solar System as of 2016 that places the probe in front of Sagittarius as seen from the Earth. New Horizons will exhaust its radioisotope thermoelectric generator long before it reaches any other stars.
The Wow! signal was a strong narrowband radio signal that appeared to have come from the direction of Sagittarius.
Mythology
The Babylonians identified Sagittarius as the god Nergal, a centaur-like creature firing an arrow from a bow. It is generally depicted with wings, with two heads, one panther head and one human head, as well as a scorpion's stinger raised above its more conventional horse's tail. The Sumerian name Pabilsag is composed of two elements – Pabil, meaning 'elder paternal kinsman' and Sag, meaning 'chief, head'. The name may thus be translated as the 'Forefather' or 'Chief Ancestor'. The figure is reminiscent of modern depictions of Sagittarius.
Greek mythology
In Greek mythology, Sagittarius is usually identified as a centaur: half human, half horse. However, perhaps due to the Greeks' adoption of the Sumerian constellation, some confusion surrounds the identity of the archer. Some identify Sagittarius as the centaur Chiron, the son of Philyra and Cronus (who was said to have changed himself into a horse to escape his jealous wife, Rhea) and tutor to Jason. As there are two centaurs in the sky, some identify Chiron with the other constellation, known as Centaurus. An alternative tradition holds that Chiron devised the constellations Sagittarius and Centaurus to help guide the Argonauts in their quest for the Golden Fleece.
A competing mythological tradition, as espoused by Eratosthenes, identified the Archer not as a centaur but as the satyr Crotus, son of Pan, who Greeks credited with the invention of archery. According to myth, Crotus often went hunting on horseback and lived among the Muses, who requested that Zeus place him in the sky, where he is seen demonstrating archery.
The arrow of this constellation points towards the star Antares, the "heart of the scorpion", and Sagittarius stands poised to attack should Scorpius ever attack the nearby Hercules, or to avenge Scorpius's slaying of Orion.
Terebellum
On the west side of the constellation, Ptolemy also described the asterism Terebellum consisting of four 4th magnitude stars, including the closest and fastest moving member, Omega Sagittarii.
Astrology
The Sun appears in the constellation Sagittarius from 18 December to 18 January. In tropical astrology, the Sun is considered to be in the sign Sagittarius from 22 November to 21 December, and in sidereal astrology, from 16 December to 14 January.
| Physical sciences | Zodiac | Astronomy |
26933 | https://en.wikipedia.org/wiki/Scorpius | Scorpius | Scorpius is a zodiac constellation located in the Southern celestial hemisphere, where it sits near the center of the Milky Way, between Libra to the west and Sagittarius to the east. Scorpius is an ancient constellation whose recognition predates Greek culture; it is one of the 48 constellations identified by the Greek astronomer Ptolemy in the second century.
Notable features
Stars
Scorpius contains many bright stars, including Antares (α Sco), "rival of Mars," so named because of its distinct reddish hue; β1 Sco (Graffias or Acrab), a triple star; δ Sco (Dschubba, "the forehead"); θ Sco (Sargas, of Sumerian origin); ν Sco (Jabbah); ξ Sco; π Sco (Fang); σ Sco (Alniyat); and τ Sco (Paikauhale).
Marking the tip of the scorpion's curved tail are λ Sco (Shaula) and υ Sco (Lesath), whose names both mean "sting." Given their proximity to one another, λ Sco and υ Sco are sometimes referred to as the Cat's Eyes.
The constellation's bright stars form a pattern like a longshoreman's hook. Most of them are massive members of the nearest OB association: Scorpius–Centaurus.
The star δ Sco, after having been a stable 2.3 magnitude star, flared in July 2000 to 1.9 in a matter of weeks. It has since become a variable star fluctuating between 2.0 and 1.6. This means that at its brightest it is the second brightest star in Scorpius.
U Scorpii is the fastest known nova, with a period of about 10 years.
AH Scorpii is a red supergiant star and one of the largest known stars, being 1,400 times larger than the Sun. It is also a luminous star, 340,000 times brighter than the Sun, although it is too faint to be seen with the naked eye, with an apparent magnitude varying from 6.5 to 9.6.
The close pair of stars ω1 Scorpii and ω² Scorpii are an optical double, which can be resolved by the unaided eye. One is a yellow giant, while the other is a blue B-type star in the Scorpius-Centaurus Association.
The star once designated γ Sco (despite being well within the boundaries of Libra) is today known as σ Lib. Moreover, the entire constellation of Libra was considered to be claws of Scorpius (Chelae Scorpionis) in Ancient Greek times, with a set of scales held aloft by Astraea (represented by adjacent Virgo) being formed from these westernmost stars during later Greek times. The division into Libra was formalised during Roman times.
Deep-sky objects
Due to its location straddling the Milky Way, this constellation contains many deep-sky objects such as the open clusters Messier 6 (the Butterfly Cluster) and Messier 7 (the Ptolemy Cluster), NGC 6231 (by ζ² Sco), and the globular clusters Messier 4 and Messier 80.
Messier 80 (NGC 6093) is a globular cluster of magnitude 7.3, 33,000 light-years from Earth. It is a compact Shapley class II cluster; the classification indicates that it is highly concentrated and dense at its nucleus. M80 was discovered in 1781 by Charles Messier. It was the site of a rare discovery in 1860 when Arthur von Auwers discovered the nova T Scorpii.
NGC 6302, also called the Bug Nebula, is a bipolar planetary nebula. NGC 6334, also known as the Cat's Paw Nebula, is an emission nebula and star-forming region.
Mythology
In Greek mythology, several myths associated with Scorpius attribute it to Orion. According to one version, Orion boasted to the goddess Artemis and her mother, Leto, that he would kill every animal on Earth. Artemis and Leto sent a scorpion to kill Orion. Their battle caught the attention of Zeus, who raised both combatants to the sky to serve as a reminder for mortals to curb their excessive pride. In another version of the myth, Artemis' twin brother, Apollo, was the one who sent the scorpion to kill Orion after the hunter earned the goddess' favor by admitting she was better than him. After Zeus raised Orion and the scorpion to the sky, the former hunts every winter but flees every summer when the scorpion comes. In both versions, Artemis asked Zeus to raise Orion.
In a Greek myth without Orion, the celestial scorpion encountered Phaethon while he was driving his father Helios' Sun Chariot.
Origins
The Babylonians called this constellation MUL.GIR.TAB - the 'Scorpion'; the signs can be literally read as 'the (creature with) a burning sting'.
In some old descriptions the constellation of Libra is treated as the Scorpion's claws. Libra was known as the Claws of the Scorpion in Babylonian (zibānītu (compare Arabic zubānā)) and in Greek (χηλαι).
Astrology
The Western astrological sign Scorpio differs from the astronomical constellation. Astronomically, the Sun is in Scorpius's IAU boundaries for just six days, from November 23 to November 28. Much of the difference is due to the constellation Ophiuchus, which is used by few astrologers. Scorpius corresponds to the Hindu nakshatras Anuradha, Jyeshtha, and Mula.
Culture
The Javanese people of Indonesia call this constellation Banyakangrem ("the brooded swan") or Kalapa Doyong ("leaning coconut tree") due to the shape similarity.
In Hawaii, Scorpius is known as the demigod Maui's Fishhook or Ka Makau Nui o Māui (meaning the Big Fishhook of Māui) and the name of the fishhook was Manaiakalani.
Scorpius was divided into two asterisms which were used by Bugis sailors for navigation. The northern part of Scorpius (α, β, γ or σ Lib, δ, ε, ζ, μ, σ and τ Scorpii) was called bintoéng lambarué, meaning "skate stars". The southern part of Scorpius (η, θ, ι, κ, λ and ν Scorpii) was called bintoéng balé mangngiwéng, meaning "shark stars".
| Physical sciences | Zodiac | Astronomy |
26962 | https://en.wikipedia.org/wiki/Special%20relativity | Special relativity | In physics, the special theory of relativity, or special relativity for short, is a scientific theory of the relationship between space and time. In Albert Einstein's 1905 paper, On the Electrodynamics of Moving Bodies, the theory is presented as being based on just two postulates:
The laws of physics are invariant (identical) in all inertial frames of reference (that is, frames of reference with no acceleration). This is known as the principle of relativity.
The speed of light in vacuum is the same for all observers, regardless of the motion of light source or observer. This is known as the principle of light constancy, or the principle of light speed invariance.
The first postulate was first formulated by Galileo Galilei (see Galilean invariance).
Origins and significance
Special relativity was described by Albert Einstein in a paper published on 26 September 1905 titled "On the Electrodynamics of Moving Bodies". Maxwell's equations of electromagnetism appeared to be incompatible with Newtonian mechanics, and the Michelson–Morley experiment failed to detect the Earth's motion against the hypothesized luminiferous aether. These led to the development of the Lorentz transformations, by Hendrik Lorentz, which adjust distances and times for moving objects. Special relativity corrects the previously accepted laws of mechanics to handle situations involving all motions, especially those at speeds close to that of light (known as relativistic velocities). Today, special relativity has proven to be the most accurate model of motion at any speed when gravitational and quantum effects are negligible. Even so, the Newtonian model is still valid as a simple and accurate approximation at low velocities (relative to the speed of light), for example, everyday motions on Earth.
Special relativity has a wide range of consequences that have been experimentally verified. These include the relativity of simultaneity, length contraction, time dilation, the relativistic velocity addition formula, the relativistic Doppler effect, relativistic mass, a universal speed limit, mass–energy equivalence, the speed of causality and the Thomas precession. It has, for example, replaced the conventional notion of an absolute universal time with the notion of a time that is dependent on reference frame and spatial position. Rather than an invariant time interval between two events, there is an invariant spacetime interval. Combined with other laws of physics, the two postulates of special relativity predict the equivalence of mass and energy, as expressed in the mass–energy equivalence formula E = mc², where c is the speed of light in vacuum. It also explains how the phenomena of electricity and magnetism are related.
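As a simple numerical illustration of the mass–energy relation (a sketch with an arbitrarily chosen test mass of one gram):

```python
# Sketch: rest energy of a one-gram test mass from E = m c^2.
c = 2.998e8          # speed of light in vacuum, m/s
mass_kg = 1.0e-3     # arbitrary illustrative mass: one gram

energy_joules = mass_kg * c ** 2
print(f"E = {energy_joules:.2e} J")   # roughly 9e13 joules
```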
A defining feature of special relativity is the replacement of the Galilean transformations of Newtonian mechanics with the Lorentz transformations. Time and space cannot be defined separately from each other (as was previously thought to be the case). Rather, space and time are interwoven into a single continuum known as "spacetime". Events that occur at the same time for one observer can occur at different times for another.
The phrase "special relativity" was not used until several years later, when Einstein developed general relativity, which introduced a curved spacetime to incorporate gravity. A translation sometimes used is "restricted relativity"; "special" really means "special case". Some of the work of Albert Einstein in special relativity is built on the earlier work by Hendrik Lorentz and Henri Poincaré. The theory became essentially complete in 1907, with Hermann Minkowski's papers on spacetime.
The theory is "special" in that it only applies in the special case where the spacetime is "flat", that is, where the curvature of spacetime (a consequence of the energy–momentum tensor and representing gravity) is negligible. To correctly accommodate gravity, Einstein formulated general relativity in 1915. Special relativity, contrary to some historical descriptions, does accommodate accelerations as well as accelerating frames of reference.
Just as Galilean relativity is now accepted to be an approximation of special relativity that is valid for low speeds, special relativity is considered an approximation of general relativity that is valid for weak gravitational fields, that is, at a sufficiently small scale (e.g., when tidal forces are negligible) and in conditions of free fall. But general relativity incorporates non-Euclidean geometry to represent gravitational effects as the geometric curvature of spacetime. Special relativity is restricted to the flat spacetime known as Minkowski space. As long as the universe can be modeled as a pseudo-Riemannian manifold, a Lorentz-invariant frame that abides by special relativity can be defined for a sufficiently small neighborhood of each point in this curved spacetime.
Galileo Galilei had already postulated that there is no absolute and well-defined state of rest (no privileged reference frames), a principle now called Galileo's principle of relativity. Einstein extended this principle so that it accounted for the constant speed of light, a phenomenon that had been observed in the Michelson–Morley experiment. He also postulated that it holds for all the laws of physics, including both the laws of mechanics and of electrodynamics.
Traditional "two postulates" approach to special relativity
Einstein discerned two fundamental propositions that seemed to be the most assured, regardless of the exact validity of the (then) known laws of either mechanics or electrodynamics. These propositions were the constancy of the speed of light in vacuum and the independence of physical laws (especially the constancy of the speed of light) from the choice of inertial system. In his initial presentation of special relativity in 1905 he expressed these postulates as:
The principle of relativity – the laws by which the states of physical systems undergo change are not affected, whether these changes of state be referred to the one or the other of two systems in uniform translatory motion relative to each other.
The principle of invariant light speed – "... light is always propagated in empty space with a definite velocity [speed] c which is independent of the state of motion of the emitting body" (from the preface). That is, light in vacuum propagates with the speed c (a fixed constant, independent of direction) in at least one system of inertial coordinates (the "stationary system"), regardless of the state of motion of the light source.
The constancy of the speed of light was motivated by Maxwell's theory of electromagnetism and the lack of evidence for the luminiferous ether. There is conflicting evidence on the extent to which Einstein was influenced by the null result of the Michelson–Morley experiment. In any case, the null result of the Michelson–Morley experiment helped the notion of the constancy of the speed of light gain widespread and rapid acceptance.
The derivation of special relativity depends not only on these two explicit postulates, but also on several tacit assumptions (made in almost all theories of physics), including the isotropy and homogeneity of space and the independence of measuring rods and clocks from their past history.
Following Einstein's original presentation of special relativity in 1905, many different sets of postulates have been proposed in various alternative derivations. But the most common set of postulates remains those employed by Einstein in his original paper. A more mathematical statement of the principle of relativity made later by Einstein, which introduces the concept of simplicity not mentioned above is:
Henri Poincaré provided the mathematical framework for relativity theory by proving that Lorentz transformations are a subset of his Poincaré group of symmetry transformations. Einstein later derived these transformations from his axioms.
Many of Einstein's papers present derivations of the Lorentz transformation based upon these two principles.
Principle of relativity
Reference frames and relative motion
Reference frames play a crucial role in relativity theory. The term reference frame as used here is an observational perspective in space that is not undergoing any change in motion (acceleration), from which a position can be measured along 3 spatial axes (so, at rest or constant velocity). In addition, a reference frame has the ability to determine measurements of the time of events using a "clock" (any reference device with uniform periodicity).
An event is an occurrence that can be assigned a single unique moment and location in space relative to a reference frame: it is a "point" in spacetime. Since the speed of light is constant in relativity irrespective of the reference frame, pulses of light can be used to unambiguously measure distances and refer back to the times that events occurred to the clock, even though light takes time to reach the clock after the event has transpired.
For example, the explosion of a firecracker may be considered to be an "event". We can completely specify an event by its four spacetime coordinates: The time of occurrence and its 3-dimensional spatial location define a reference point. Let's call this reference frame S.
In relativity theory, we often want to calculate the coordinates of an event from differing reference frames. The equations that relate measurements made in different frames are called transformation equations.
Standard configuration
To gain insight into how the spacetime coordinates measured by observers in different reference frames compare with each other, it is useful to work with a simplified setup with frames in a standard configuration. With care, this allows simplification of the math with no loss of generality in the conclusions that are reached. In Fig. 2-1, two Galilean reference frames (i.e., conventional 3-space frames) are displayed in relative motion. Frame S belongs to a first observer O, and frame S′ (pronounced "S prime" or "S dash") belongs to a second observer O′.
The x, y, z axes of frame S are oriented parallel to the respective primed axes of frame S′.
Frame S′ moves, for simplicity, in a single direction: the x-direction of frame S, with a constant velocity v as measured in frame S.
The origins of frames S and S′ are coincident when time t = 0 for frame S and t′ = 0 for frame S′.
Since there is no absolute reference frame in relativity theory, a concept of "moving" does not strictly exist, as everything may be moving with respect to some other reference frame. Instead, any two frames that move at the same speed in the same direction are said to be comoving. Therefore, S and S′ are not comoving.
Lack of an absolute reference frame
The principle of relativity, which states that physical laws have the same form in each inertial reference frame, dates back to Galileo, and was incorporated into Newtonian physics. But in the late 19th century the existence of electromagnetic waves led some physicists to suggest that the universe was filled with a substance they called "aether", which, they postulated, would act as the medium through which these waves, or vibrations, propagated (in many respects similar to the way sound propagates through air). The aether was thought to be an absolute reference frame against which all speeds could be measured, and could be considered fixed and motionless relative to Earth or some other fixed reference point. The aether was supposed to be sufficiently elastic to support electromagnetic waves, while those waves could interact with matter, yet offering no resistance to bodies passing through it (its one property was that it allowed electromagnetic waves to propagate). The results of various experiments, including the Michelson–Morley experiment in 1887 (subsequently verified with more accurate and innovative experiments), led to the theory of special relativity, by showing that the aether did not exist. Einstein's solution was to discard the notion of an aether and the absolute state of rest. In relativity, any reference frame moving with uniform motion will observe the same laws of physics. In particular, the speed of light in vacuum is always measured to be c, even when measured by multiple systems that are moving at different (but constant) velocities.
Relativity without the second postulate
From the principle of relativity alone without assuming the constancy of the speed of light (i.e., using the isotropy of space and the symmetry implied by the principle of special relativity) it can be shown that the spacetime transformations between inertial frames are either Euclidean, Galilean, or Lorentzian. In the Lorentzian case, one can then obtain relativistic interval conservation and a certain finite limiting speed. Experiments suggest that this speed is the speed of light in vacuum.
Lorentz invariance as the essential core of special relativity
Alternative approaches to special relativity
Einstein consistently based the derivation of Lorentz invariance (the essential core of special relativity) on just the two basic principles of: relativity and invariance of the speed of light. He wrote:
Thus many modern treatments of special relativity base it on the single postulate of universal Lorentz covariance, or, equivalently, on the single postulate of Minkowski spacetime.
Rather than considering universal Lorentz covariance to be a derived principle, this article considers it to be the fundamental postulate of special relativity. The traditional two-postulate approach to special relativity is presented in innumerable college textbooks and popular presentations. Textbooks starting with the single postulate of Minkowski spacetime include those by Taylor and Wheeler and by Callahan. This is also the approach followed by the Wikipedia articles Spacetime and Minkowski diagram.
Lorentz transformation and its inverse
Define an event to have spacetime coordinates (t, x, y, z) in system S and (t′, x′, y′, z′) in a reference frame S′ moving at a velocity v along the x-axis with respect to that frame. Then the Lorentz transformation specifies that these coordinates are related in the following way:
t′ = γ(t − vx/c²)
x′ = γ(x − vt)
y′ = y
z′ = z
where γ = 1/√(1 − v²/c²) is the Lorentz factor and c is the speed of light in vacuum, and the velocity v of S′, relative to S, is parallel to the x-axis. For simplicity, the y and z coordinates are unaffected; only the x and t coordinates are transformed. These Lorentz transformations form a one-parameter group of linear mappings, that parameter being called rapidity.
Solving the four transformation equations above for the unprimed coordinates yields the inverse Lorentz transformation:
t = γ(t′ + vx′/c²)
x = γ(x′ + vt′)
y = y′
z = z′
This shows that the unprimed frame is moving with the velocity −v, as measured in the primed frame.
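The forward and inverse transformations written out above can be checked numerically. The sketch below uses arbitrary values for the event coordinates and the boost speed; the function names are illustrative only.

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def lorentz(t, x, v):
    """Transform (t, x) from frame S to a frame S' moving at velocity v along +x."""
    gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
    t_prime = gamma * (t - v * x / C ** 2)
    x_prime = gamma * (x - v * t)
    return t_prime, x_prime

def inverse_lorentz(t_prime, x_prime, v):
    """Inverse transformation: equivalent to boosting by -v."""
    return lorentz(t_prime, x_prime, -v)

# Example event and boost speed (both arbitrary).
t, x = 1.0, 4.0e8          # seconds, metres
v = 0.6 * C

t_p, x_p = lorentz(t, x, v)
t_back, x_back = inverse_lorentz(t_p, x_p, v)
print(t_p, x_p)                                      # coordinates in S'
print(round(t_back - t, 12), round(x_back - x, 3))   # round-trip error ~ 0
```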
There is nothing special about the x-axis. The transformation can apply to the y- or z-axis, or indeed in any direction parallel to the motion (which are warped by the γ factor) and perpendicular; see the article Lorentz transformation for details.
A quantity that is invariant under Lorentz transformations is known as a Lorentz scalar.
Writing the Lorentz transformation and its inverse in terms of coordinate differences, where one event has coordinates (x₁, t₁) and (x′₁, t′₁), another event has coordinates (x₂, t₂) and (x′₂, t′₂), and the differences are defined as Δx = x₂ − x₁, Δt = t₂ − t₁, Δx′ = x′₂ − x′₁, Δt′ = t′₂ − t′₁,
we get
Δx′ = γ(Δx − vΔt),  Δt′ = γ(Δt − vΔx/c²)
Δx = γ(Δx′ + vΔt′),  Δt = γ(Δt′ + vΔx′/c²)
If we take differentials instead of taking differences, we get
dx′ = γ(dx − v dt),  dt′ = γ(dt − v dx/c²)
dx = γ(dx′ + v dt′),  dt = γ(dt′ + v dx′/c²)
Graphical representation of the Lorentz transformation
Spacetime diagrams (Minkowski diagrams) are an extremely useful aid to visualizing how coordinates transform between different reference frames. Although it is not as easy to perform exact computations using them as directly invoking the Lorentz transformations, their main power is their ability to provide an intuitive grasp of the results of a relativistic scenario.
To draw a spacetime diagram, begin by considering two Galilean reference frames, S and S′, in standard configuration, as shown in Fig. 2-1.
Fig. 3-1a. Draw the x and t axes of frame S. The x axis is horizontal and the t (actually ct) axis is vertical, which is the opposite of the usual convention in kinematics. The ct axis is scaled by a factor of c so that both axes have common units of length. In the diagram shown, the gridlines are spaced one unit distance apart. The 45° diagonal lines represent the worldlines of two photons passing through the origin at time t = 0. The slope of these worldlines is 1 because the photons advance one unit in space per unit of time. Two events have been plotted on this graph so that their coordinates may be compared in the S and S′ frames.
Fig. 3-1b. Draw the x′ and ct′ axes of frame S′. The ct′ axis represents the worldline of the origin of the S′ coordinate system as measured in frame S. Both the ct′ and x′ axes are tilted from the unprimed axes by an angle α = tan⁻¹(β), where β = v/c. The primed and unprimed axes share a common origin because frames S and S′ had been set up in standard configuration, so that t = 0 when t′ = 0.
Fig. 3-1c. Units in the primed axes have a different scale from units in the unprimed axes. From the Lorentz transformations, we observe that coordinates of in the primed coordinate system transform to in the unprimed coordinate system. Likewise, coordinates of in the primed coordinate system transform to in the unprimed system. Draw gridlines parallel with the axis through points as measured in the unprimed frame, where is an integer. Likewise, draw gridlines parallel with the axis through as measured in the unprimed frame. Using the Pythagorean theorem, we observe that the spacing between units equals times the spacing between units, as measured in frame S. This ratio is always greater than 1, and ultimately it approaches infinity as
Fig. 3-1d. Since the speed of light is an invariant, the worldlines of two photons passing through the origin at time still plot as 45° diagonal lines. The primed coordinates of and are related to the unprimed coordinates through the Lorentz transformations and could be approximately measured from the graph (assuming that it has been plotted accurately enough), but the real merit of a Minkowski diagram is its granting us a geometric view of the scenario. For example, in this figure, we observe that the two timelike-separated events that had different x-coordinates in the unprimed frame are now at the same position in space.
While the unprimed frame is drawn with space and time axes that meet at right angles, the primed frame is drawn with axes that meet at acute or obtuse angles. This asymmetry is due to unavoidable distortions in how spacetime coordinates map onto a Cartesian plane, but the frames are actually equivalent.
Consequences derived from the Lorentz transformation
The consequences of special relativity can be derived from the Lorentz transformation equations. These transformations, and hence special relativity, lead to physical predictions that differ from those of Newtonian mechanics at all relative velocities, most pronouncedly when relative velocities become comparable to the speed of light. The speed of light is so much larger than anything most humans encounter that some of the effects predicted by relativity are initially counterintuitive.
Invariant interval
In Galilean relativity, the spatial separation, Δr, and the temporal separation, Δt, between two events are independent invariants, the values of which do not change when observed from different frames of reference.
In special relativity, however, the interweaving of spatial and temporal coordinates generates the concept of an invariant interval, denoted as Δs²:
Δs² ≡ c²Δt² − (Δx² + Δy² + Δz²)
In considering the physical significance of Δs², there are three cases to note:
Δs² > 0: In this case, the two events are separated by more time than space, and they are hence said to be timelike separated. This implies that |Δx/Δt| < c, and given the Lorentz transformation Δx′ = γ(Δx − v Δt), it is evident that there exists a v less than c for which Δx′ = 0 (in particular, v = Δx/Δt). In other words, given two events that are timelike separated, it is possible to find a frame in which the two events happen at the same place. In this frame, the separation in time, Δs/c, is called the proper time.
Δs² < 0: In this case, the two events are separated by more space than time, and they are hence said to be spacelike separated. This implies that |Δx/Δt| > c, and given the Lorentz transformation Δt′ = γ(Δt − v Δx/c²), there exists a v less than c for which Δt′ = 0 (in particular, v = c²Δt/Δx). In other words, given two events that are spacelike separated, it is possible to find a frame in which the two events happen at the same time. In this frame, the separation in space, √(−Δs²), is called the proper distance, or proper length. For values of v greater than and less than c²Δt/Δx, the sign of Δt′ changes, meaning that the temporal order of spacelike-separated events changes depending on the frame in which the events are viewed. But the temporal order of timelike-separated events is absolute, since the only way that v could be greater than c²Δt/Δx would be if v > c.
Δs² = 0: In this case, the two events are said to be lightlike separated. This implies that |Δx/Δt| = c, and this relationship is frame independent due to the invariance of Δs². From this, we observe that the speed of light is c in every inertial frame. In other words, starting from the assumption of universal Lorentz covariance, the constant speed of light is a derived result, rather than a postulate as in the two-postulates formulation of the special theory.
The interweaving of space and time revokes the implicitly assumed concepts of absolute simultaneity and synchronization across non-comoving frames.
The form of , being the difference of the squared time lapse and the squared spatial distance, demonstrates a fundamental discrepancy between Euclidean and spacetime distances. The invariance of this interval is a property of the general Lorentz transform (also called the Poincaré transformation), making it an isometry of spacetime. The general Lorentz transform extends the standard Lorentz transform (which deals with translations without rotation, that is, Lorentz boosts, in the x-direction) with all other translations, reflections, and rotations between any Cartesian inertial frame.
In the analysis of simplified scenarios, such as spacetime diagrams, a reduced-dimensionality form of the invariant interval is often employed:
Δs² = c²Δt² − Δx²
Demonstrating that the interval is invariant is straightforward for the reduced-dimensionality case and with frames in standard configuration:
c²Δt′² − Δx′² = γ²(cΔt − βΔx)² − γ²(Δx − βcΔt)²
              = γ²[(1 − β²)c²Δt² − (1 − β²)Δx²]
              = c²Δt² − Δx²
The value of Δs² is hence independent of the frame in which it is measured.
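The algebra above can also be verified numerically. A minimal Python sketch (the helper names boost and interval are illustrative) confirms that c²Δt² − Δx² is unchanged by boosts of various speeds:

    import math

    C = 299_792_458.0  # speed of light in m/s

    def boost(dt, dx, v):
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        return gamma * (dt - v * dx / C**2), gamma * (dx - v * dt)

    def interval(dt, dx):
        return (C * dt) ** 2 - dx ** 2

    dt, dx = 2.0, 3.0e8             # an example pair of coordinate differences
    for v in (0.1 * C, 0.5 * C, 0.9 * C):
        dtp, dxp = boost(dt, dx, v)
        print(interval(dt, dx), interval(dtp, dxp))   # equal up to rounding error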
Relativity of simultaneity
Consider two events happening in two different locations that occur simultaneously in the reference frame of one inertial observer. They may occur non-simultaneously in the reference frame of another inertial observer (lack of absolute simultaneity).
From the forward Lorentz transformation in terms of coordinate differences,
Δt′ = γ(Δt − v Δx/c²)
it is clear that two events that are simultaneous in frame S (satisfying Δt = 0) are not necessarily simultaneous in another inertial frame S′ (satisfying Δt′ = 0). Only if these events are additionally co-local in frame S (satisfying Δx = 0) will they be simultaneous in another frame S′.
The Sagnac effect can be considered a manifestation of the relativity of simultaneity. Since relativity of simultaneity is a first-order effect in v, instruments based on the Sagnac effect for their operation, such as ring laser gyroscopes and fiber optic gyroscopes, are capable of extreme levels of sensitivity.
Time dilation
The time lapse between two events is not invariant from one observer to another, but is dependent on the relative speeds of the observers' reference frames.
Suppose a clock is at rest in the unprimed system S. The location of the clock on two different ticks is then characterized by Δx = 0. To find the relation between the times between these ticks as measured in both systems, the transformation of coordinate differences can be used to find:
Δt′ = γ Δt   for events satisfying Δx = 0.
This shows that the time (Δt′) between the two ticks as seen in the frame in which the clock is moving (S′) is longer than the time (Δt) between these ticks as measured in the rest frame of the clock (S). Time dilation explains a number of physical phenomena; for example, the lifetime of high-speed muons created by the collision of cosmic rays with particles in the Earth's outer atmosphere and moving towards the surface is greater than the lifetime of slowly moving muons, created and decaying in a laboratory.
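The size of the effect is easy to estimate. The following sketch (in Python, using an approximate muon mean lifetime of 2.2 microseconds purely as an illustrative number) computes the dilated lifetime and the corresponding mean path length at several speeds:

    import math

    C = 299_792_458.0      # speed of light in m/s
    TAU_REST = 2.2e-6      # approximate muon mean lifetime at rest, in seconds

    def dilated_lifetime(beta):
        """Lifetime measured in the frame in which the muon moves at speed beta*c."""
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        return gamma * TAU_REST

    for beta in (0.5, 0.9, 0.99, 0.999):
        tau = dilated_lifetime(beta)
        print(f"beta = {beta}: lifetime {tau:.2e} s, "
              f"mean path {beta * C * tau / 1000:.2f} km")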
Whenever one hears a statement to the effect that "moving clocks run slow", one should envision an inertial reference frame thickly populated with identical, synchronized clocks. As a moving clock travels through this array, its reading at any particular point is compared with a stationary clock at the same point.
The measurements that we would get if we actually looked at a moving clock would, in general, not at all be the same thing, because the time that we would see would be delayed by the finite speed of light, i.e. the times that we see would be distorted by the Doppler effect. Measurements of relativistic effects must always be understood as having been made after finite speed-of-light effects have been factored out.
Langevin's light-clock
Paul Langevin, an early proponent of the theory of relativity, did much to popularize the theory in the face of resistance by many physicists to Einstein's revolutionary concepts. Among his numerous contributions to the foundations of special relativity were independent work on the mass–energy relationship, a thorough examination of the twin paradox, and investigations into rotating coordinate systems. His name is frequently attached to a hypothetical construct called a "light-clock" (originally developed by Lewis and Tolman in 1909), which he used to perform a novel derivation of the Lorentz transformation.
A light-clock is imagined to be a box of perfectly reflecting walls wherein a light signal reflects back and forth from opposite faces. The concept of time dilation is frequently taught using a light-clock that is traveling in uniform inertial motion perpendicular to a line connecting the two mirrors. (Langevin himself made use of a light-clock oriented parallel to its line of motion.)
Consider the scenario illustrated in Fig. 4-3A. Observer A holds a light-clock of length L as well as an electronic timer with which she measures how long it takes a pulse to make a round trip up and down along the light-clock. Although observer A is traveling rapidly aboard a train, from her point of view the emission and receipt of the pulse occur at the same place, and she measures the interval using a single clock located at the precise position of these two events. For the interval between these two events, observer A finds 2L/c. A time interval measured using a single clock that is motionless in a particular reference frame is called a proper time interval.
Fig. 4-3B illustrates these same two events from the standpoint of observer B, who is parked by the tracks as the train goes by at a speed of v. Instead of making straight up-and-down motions, observer B sees the pulses moving along a zig-zag line. However, because of the postulate of the constancy of the speed of light, the speed of the pulses along these diagonal lines is the same c that observer A saw for her up-and-down pulses. B measures the speed of the vertical component of these pulses as √(c² − v²), so that the total round-trip time of the pulses is 2L/√(c² − v²) = γ·(2L/c). Note that for observer B, the emission and receipt of the light pulse occurred at different places, and he measured the interval using two stationary and synchronized clocks located at two different positions in his reference frame. The interval that B measured was therefore not a proper time interval because he did not measure it with a single resting clock.
Reciprocal time dilation
In the above description of the Langevin light-clock, the labeling of one observer as stationary and the other as in motion was completely arbitrary. One could just as well have observer B carrying the light-clock and moving at a speed of v to the left, in which case observer A would perceive B's clock as running slower than her local clock.
There is no paradox here, because there is no independent observer C who will agree with both A and B. Observer C necessarily makes his measurements from his own reference frame. If that reference frame coincides with A's reference frame, then C will agree with A's measurement of time. If C's reference frame coincides with B's reference frame, then C will agree with B's measurement of time. If C's reference frame coincides with neither A's frame nor B's frame, then C's measurement of time will disagree with both A's and B's measurement of time.
Twin paradox
The reciprocity of time dilation between two observers in separate inertial frames leads to the so-called twin paradox, articulated in its present form by Langevin in 1911. Langevin imagined an adventurer wishing to explore the future of the Earth. This traveler boards a projectile capable of traveling at 99.995% of the speed of light. After making a round-trip journey to and from a nearby star lasting only two years of his own life, he returns to an Earth that is two hundred years older.
This result appears puzzling because both the traveler and an Earthbound observer would see the other as moving, and so, because of the reciprocity of time dilation, one might initially expect that each should have found the other to have aged less. In reality, there is no paradox at all, because in order for the two observers to perform side-by-side comparisons of their elapsed proper times, the symmetry of the situation must be broken: At least one of the two observers must change their state of motion to match that of the other.
Knowing the general resolution of the paradox, however, does not immediately yield the ability to calculate correct quantitative results. Many solutions to this puzzle have been provided in the literature and have been reviewed in the Twin paradox article. We will examine in the following one such solution to the paradox.
Our basic aim will be to demonstrate that, after the trip, both twins are in perfect agreement about who aged by how much, regardless of their different experiences. Fig. 4-4 illustrates a scenario where the traveling twin flies at 0.6 c to and from a star 3 light-years distant. During the trip, each twin sends yearly time signals (measured in their own proper times) to the other. After the trip, the cumulative counts are compared. On the outward phase of the trip, each twin receives the other's signals at the lowered rate of 1/2. Initially, the situation is perfectly symmetric: note that each twin receives the other's one-year signal at two years measured on their own clock. The symmetry is broken when the traveling twin turns around at the four-year mark as measured by her clock. During the remaining four years of her trip, she receives signals at the enhanced rate of 2. The situation is quite different with the stationary twin. Because of light-speed delay, he does not see his sister turn around until eight years have passed on his own clock. Thus, he receives enhanced-rate signals from his sister for only a relatively brief period. Although the twins disagree in their respective measures of total time, we see in the following table, as well as by simple observation of the Minkowski diagram, that each twin is in total agreement with the other as to the total number of signals sent from one to the other. There is hence no paradox.
Length contraction
The dimensions (e.g., length) of an object as measured by one observer may be smaller than the results of measurements of the same object made by another observer (e.g., the ladder paradox involves a long ladder traveling near the speed of light and being contained within a smaller garage).
Similarly, suppose a measuring rod is at rest and aligned along the x-axis in the unprimed system S. In this system, the length of this rod is written as Δx. To measure the length of this rod in the system S′, in which the rod is moving, the distances to the end points of the rod must be measured simultaneously in that system S′. In other words, the measurement is characterized by Δt′ = 0, which can be combined with the transformation of coordinate differences to find the relation between the lengths Δx and Δx′:
Δx′ = Δx / γ   for events satisfying Δt′ = 0.
This shows that the length (Δx′) of the rod as measured in the frame in which it is moving (S′) is shorter than its length (Δx) in its own rest frame (S).
Time dilation and length contraction are not merely appearances. Time dilation is explicitly related to our way of measuring time intervals between events that occur at the same place in a given coordinate system (called "co-local" events). These time intervals (which can be, and are, actually measured experimentally by relevant observers) are different in another coordinate system moving with respect to the first, unless the events, in addition to being co-local, are also simultaneous. Similarly, length contraction relates to our measured distances between separated but simultaneous events in a given coordinate system of choice. If these events are not co-local, but are separated by distance (space), they will not occur at the same spatial distance from each other when seen from another moving coordinate system.
Lorentz transformation of velocities
Consider two frames S and S′ in standard configuration. A particle in S moves in the x direction with velocity u. What is its velocity u′ in frame S′?
We can write
u = dx/dt,   u′ = dx′/dt′.
Substituting the expressions for dx′ and dt′ from the differential form of the Lorentz transformation into the expression for u′, followed by straightforward mathematical manipulations and back-substitution, yields the Lorentz transformation of the speed u to u′:
u′ = (u − v) / (1 − uv/c²)
The inverse relation is obtained by interchanging the primed and unprimed symbols and replacing v with −v.
For u not aligned along the x-axis, we write:
u = (u₁, u₂, u₃),   u′ = (u′₁, u′₂, u′₃).
The forward and inverse transformations for this case are:
u′₁ = (u₁ − v) / (1 − u₁v/c²),   u′₂ = u₂ / [γ(1 − u₁v/c²)],   u′₃ = u₃ / [γ(1 − u₁v/c²)]
u₁ = (u′₁ + v) / (1 + u′₁v/c²),   u₂ = u′₂ / [γ(1 + u′₁v/c²)],   u₃ = u′₃ / [γ(1 + u′₁v/c²)]
The forward and inverse transformations can be interpreted as giving the resultant u of the two velocities v and u′, and they replace the formula u = u′ + v, which is valid in Galilean relativity. Interpreted in such a fashion, they are commonly referred to as the relativistic velocity addition (or composition) formulas, valid for the three axes of S and S′ being aligned with each other (although not necessarily in standard configuration).
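A short numerical sketch (Python; the function name add_velocities is illustrative) makes the behaviour of this composition law concrete, and also illustrates the points noted in the list that follows:

    def add_velocities(u_prime, v, c=1.0):
        """Relativistic composition of two collinear velocities, in units of c by default."""
        return (u_prime + v) / (1.0 + u_prime * v / c**2)

    print(add_velocities(0.5, 0.5))     # 0.8, not the Galilean 1.0
    print(add_velocities(1.0, 0.9))     # 1.0: the speed of light is preserved
    print(add_velocities(1e-4, 1e-4))   # ~2e-4: the Galilean result at low speeds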
We note the following points:
If an object (e.g., a photon) were moving at the speed of light in one frame (that is, u = ±c or u′ = ±c), then it would also be moving at the speed of light in any other frame, moving at |v| < c.
The resultant speed of two velocities with magnitude less than c is always a velocity with magnitude less than c.
If both u′ and v (and then also u′₁ and v) are small with respect to the speed of light (that is, e.g., |u′v/c²| ≪ 1), then the intuitive Galilean transformations are recovered from the transformation equations for special relativity.
Attaching a frame to a photon (riding a light beam, as Einstein considered) requires special treatment of the transformations.
There is nothing special about the x direction in the standard configuration. The above formalism applies to any direction; and three orthogonal directions allow dealing with all directions in space by decomposing the velocity vectors to their components in these directions. See Velocity-addition formula for details.
Thomas rotation
The composition of two non-collinear Lorentz boosts (i.e., two non-collinear Lorentz transformations, neither of which involve rotation) results in a Lorentz transformation that is not a pure boost but is the composition of a boost and a rotation.
Thomas rotation results from the relativity of simultaneity. In Fig. 4-5a, a rod of length L in its rest frame (i.e., having a proper length of L) rises vertically along the y-axis in the ground frame.
In Fig. 4-5b, the same rod is observed from the frame of a rocket moving at speed v to the right. If we imagine two clocks situated at the left and right ends of the rod that are synchronized in the frame of the rod, relativity of simultaneity causes the observer in the rocket frame to observe (not see) the clock at the right end of the rod as being advanced in time by Lv/c², and the rod is correspondingly observed as tilted.
Unlike second-order relativistic effects such as length contraction or time dilation, this effect becomes quite significant even at fairly low velocities. For example, this can be seen in the spin of moving particles, where Thomas precession is a relativistic correction that applies to the spin of an elementary particle or the rotation of a macroscopic gyroscope, relating the angular velocity of the spin of a particle following a curvilinear orbit to the angular velocity of the orbital motion.
Thomas rotation provides the resolution to the well-known "meter stick and hole paradox".
Causality and prohibition of motion faster than light
In Fig. 4-6, the time interval between the events A (the "cause") and B (the "effect") is 'timelike'; that is, there is a frame of reference in which events A and B occur at the same location in space, separated only by occurring at different times. If A precedes B in that frame, then A precedes B in all frames accessible by a Lorentz transformation. It is possible for matter (or information) to travel (below light speed) from the location of A, starting at the time of A, to the location of B, arriving at the time of B, so there can be a causal relationship (with A the cause and B the effect).
The interval AC in the diagram is 'spacelike'; that is, there is a frame of reference in which events A and C occur simultaneously, separated only in space. There are also frames in which A precedes C (as shown) and frames in which C precedes A. But no frames are accessible by a Lorentz transformation, in which events A and C occur at the same location. If it were possible for a cause-and-effect relationship to exist between events A and C, paradoxes of causality would result.
For example, if signals could be sent faster than light, then signals could be sent into the sender's past (observer B in the diagrams). A variety of causal paradoxes could then be constructed.
Consider the spacetime diagrams in Fig. 4-7. A and B stand alongside a railroad track when a high-speed train passes by, with C riding in the last car of the train and D riding in the leading car. The world lines of A and B are vertical (parallel to the ct axis), distinguishing the stationary position of these observers on the ground, while the world lines of C and D are tilted forwards (parallel to the ct′ axis), reflecting the rapid motion of the observers C and D stationary in their train, as observed from the ground.
Fig. 4-7a. The event of "B passing a message to D", as the leading car passes by, is at the origin of D's frame. D sends the message along the train to C in the rear car, using a fictitious "instantaneous communicator". The worldline of this message is the fat red arrow along the x′ axis, which is a line of simultaneity in the primed frames of C and D. In the (unprimed) ground frame the signal arrives earlier than it was sent.
Fig. 4-7b. The event of "C passing the message to A", who is standing by the railroad tracks, is at the origin of their frames. Now A sends the message along the tracks to B via an "instantaneous communicator". The worldline of this message is the blue fat arrow, along the x axis, which is a line of simultaneity for the frames of A and B. As seen from the spacetime diagram, in the primed frames of C and D, B will receive the message before it was sent out, a violation of causality.
It is not necessary for signals to be instantaneous to violate causality. Even if the signal from D to C were slightly shallower than the x′ axis (and the signal from A to B slightly steeper than the x axis), it would still be possible for B to receive his message before he had sent it. By increasing the speed of the train to near light speeds, the ct′ and x′ axes can be squeezed very close to the dashed line representing the speed of light. With this modified setup, it can be demonstrated that even signals only slightly faster than the speed of light will result in causality violation.
Therefore, if causality is to be preserved, one of the consequences of special relativity is that no information signal or material object can travel faster than light in vacuum.
This is not to say that all faster than light speeds are impossible. Various trivial situations can be described where some "things" (not actual matter or energy) move faster than light. For example, the location where the beam of a search light hits the bottom of a cloud can move faster than light when the search light is turned rapidly (although this does not violate causality or any other relativistic phenomenon).
Optical effects
Dragging effects
In 1850, Hippolyte Fizeau and Léon Foucault independently established that light travels more slowly in water than in air, thus validating a prediction of Fresnel's wave theory of light and invalidating the corresponding prediction of Newton's corpuscular theory. The speed of light was measured in still water. What would be the speed of light in flowing water?
In 1851, Fizeau conducted an experiment to answer this question, a simplified representation of which is illustrated in Fig. 5-1. A beam of light is divided by a beam splitter, and the split beams are passed in opposite directions through a tube of flowing water. They are recombined to form interference fringes, indicating a difference in optical path length, that an observer can view. The experiment demonstrated that dragging of the light by the flowing water caused a displacement of the fringes, showing that the motion of the water had affected the speed of the light.
According to the theories prevailing at the time, light traveling through a moving medium would be a simple sum of its speed through the medium plus the speed of the medium. Contrary to expectation, Fizeau found that although light appeared to be dragged by the water, the magnitude of the dragging was much lower than expected. If u = c/n is the speed of light in still water, and v is the speed of the water, and u± is the water-borne speed of light in the lab frame with the flow of water adding to or subtracting from the speed of light, then
u± = c/n ± v(1 − 1/n²)
Fizeau's results, although consistent with Fresnel's earlier hypothesis of partial aether dragging, were extremely disconcerting to physicists of the time. Among other things, the presence of an index of refraction term meant that, since n depends on wavelength, the aether must be capable of sustaining different motions at the same time. A variety of theoretical explanations were proposed to explain Fresnel's dragging coefficient, which were completely at odds with each other. Even before the Michelson–Morley experiment, Fizeau's experimental results were among a number of observations that created a critical situation in explaining the optics of moving bodies.
From the point of view of special relativity, Fizeau's result is nothing but an approximation to the relativistic formula for composition of velocities: for v much smaller than c,
(c/n ± v) / (1 ± v/(nc)) ≈ c/n ± v(1 − 1/n²)
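This agreement can be checked numerically. The sketch below (Python; the refractive index of water is taken as roughly 1.33 purely for illustration) compares Fresnel's partial-drag expression with the exact relativistic composition of c/n and the flow speed v:

    def fresnel_drag(n, v, c=299_792_458.0):
        """Fresnel's partial-drag prediction for light moving with the flow."""
        return c / n + v * (1.0 - 1.0 / n**2)

    def relativistic_composition(n, v, c=299_792_458.0):
        """Exact relativistic composition of c/n with the water speed v."""
        u = c / n
        return (u + v) / (1.0 + u * v / c**2)

    n, v = 1.33, 5.0    # approximate index of water; flow speed in m/s
    print(fresnel_drag(n, v))
    print(relativistic_composition(n, v))   # agrees to within terms of order (v/c)^2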
Relativistic aberration of light
Because of the finite speed of light, if the relative motions of a source and receiver include a transverse component, then the direction from which light arrives at the receiver will be displaced from the geometric position in space of the source relative to the receiver. The classical calculation of the displacement takes two forms and makes different predictions depending on whether the receiver, the source, or both are in motion with respect to the medium. (1) If the receiver is in motion, the displacement would be the consequence of the aberration of light. The incident angle of the beam relative to the receiver would be calculable from the vector sum of the receiver's motions and the velocity of the incident light. (2) If the source is in motion, the displacement would be the consequence of light-time correction. The displacement of the apparent position of the source from its geometric position would be the result of the source's motion during the time that its light takes to reach the receiver.
The classical explanation failed experimental test. Since the aberration angle depends on the relationship between the velocity of the receiver and the speed of the incident light, passage of the incident light through a refractive medium should change the aberration angle. In 1810, Arago used this expected phenomenon in a failed attempt to measure the speed of light, and in 1870, George Airy tested the hypothesis using a water-filled telescope, finding that, against expectation, the measured aberration was identical to the aberration measured with an air-filled telescope. A "cumbrous" attempt to explain these results used the hypothesis of partial aether-drag, but was incompatible with the results of the Michelson–Morley experiment, which apparently demanded complete aether-drag.
Assuming inertial frames, the relativistic expression for the aberration of light is applicable to both the receiver moving and source moving cases. A variety of trigonometrically equivalent formulas have been published. Expressed in terms of the variables in Fig. 5-2, these include
cos θ′ = (cos θ + v/c) / (1 + (v/c) cos θ)   or   sin θ′ = sin θ / [γ(1 + (v/c) cos θ)]   or   tan(θ′/2) = √((c − v)/(c + v)) · tan(θ/2)
Relativistic Doppler effect
Relativistic longitudinal Doppler effect
The classical Doppler effect depends on whether the source, receiver, or both are in motion with respect to the medium. The relativistic Doppler effect is independent of any medium. Nevertheless, relativistic Doppler shift for the longitudinal case, with source and receiver moving directly towards or away from each other, can be derived as if it were the classical phenomenon, but modified by the addition of a time dilation term, and that is the treatment described here.
Assume the receiver and the source are moving away from each other with a relative speed v as measured by an observer on the receiver or the source (the sign convention adopted here is that v is negative if the receiver and the source are moving towards each other). Assume that the source is stationary in the medium. Then
f_r = (1 − v/c_s) f_s
where c_s is the speed of sound.
For light, and with the receiver moving at relativistic speeds, clocks on the receiver are time dilated relative to clocks at the source. The receiver will measure the received frequency to be
f_r = γ(1 − β) f_s = √((1 − β)/(1 + β)) f_s
where
β = v/c
and
γ = 1/√(1 − β²)
is the Lorentz factor.
An identical expression for relativistic Doppler shift is obtained when performing the analysis in the reference frame of the receiver with a moving source.
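The longitudinal shift is a one-line computation. The sketch below (Python; doppler_factor is an illustrative name) evaluates the ratio of received to emitted frequency, using the same sign convention as above (β negative for approach):

    import math

    def doppler_factor(beta):
        """Ratio of received to source frequency for recession at speed beta*c."""
        return math.sqrt((1.0 - beta) / (1.0 + beta))

    print(doppler_factor(0.6))     # 0.5: frequency halved while receding at 0.6 c
    print(doppler_factor(-0.6))    # 2.0: frequency doubled while approaching at 0.6 c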
Transverse Doppler effect
The transverse Doppler effect is one of the main novel predictions of the special theory of relativity.
Classically, one might expect that if source and receiver are moving transversely with respect to each other with no longitudinal component to their relative motions, that there should be no Doppler shift in the light arriving at the receiver.
Special relativity predicts otherwise. Fig. 5-3 illustrates two common variants of this scenario. Both variants can be analyzed using simple time dilation arguments. In Fig. 5-3a, the receiver observes light from the source as being blueshifted by a factor of γ. In Fig. 5-3b, the light is redshifted by the same factor.
Measurement versus visual appearance
Time dilation and length contraction are not optical illusions, but genuine effects. Measurements of these effects are not an artifact of Doppler shift, nor are they the result of neglecting to take into account the time it takes light to travel from an event to an observer.
Scientists make a fundamental distinction between measurement or observation on the one hand, versus visual appearance, or what one sees. The measured shape of an object is a hypothetical snapshot of all of the object's points as they exist at a single moment in time. But the visual appearance of an object is affected by the varying lengths of time that light takes to travel from different points on the object to one's eye.
For many years, the distinction between the two had not been generally appreciated, and it had generally been thought that a length contracted object passing by an observer would in fact actually be seen as length contracted. In 1959, James Terrell and Roger Penrose independently pointed out that differential time lag effects in signals reaching the observer from the different parts of a moving object result in a fast moving object's visual appearance being quite different from its measured shape. For example, a receding object would appear contracted, an approaching object would appear elongated, and a passing object would have a skew appearance that has been likened to a rotation. A sphere in motion retains a circular outline for all speeds, for any distance, and for all view angles, although the surface of the sphere and the images on it will appear distorted.
Both Fig. 5-4 and Fig. 5-5 illustrate objects moving transversely to the line of sight. In Fig. 5-4, a cube is viewed from a distance of four times the length of its sides. At high speeds, the sides of the cube that are perpendicular to the direction of motion appear hyperbolic in shape. The cube is actually not rotated. Rather, light from the rear of the cube takes longer to reach one's eyes compared with light from the front, during which time the cube has moved to the right. At high speeds, the sphere in Fig. 5-5 takes on the appearance of a flattened disk tilted up to 45° from the line of sight. If the objects' motions are not strictly transverse but instead include a longitudinal component, exaggerated distortions in perspective may be seen. This illusion has come to be known as Terrell rotation or the Terrell–Penrose effect.
Another example where visual appearance is at odds with measurement comes from the observation of apparent superluminal motion in various radio galaxies, BL Lac objects, quasars, and other astronomical objects that eject relativistic-speed jets of matter at narrow angles with respect to the viewer. An apparent optical illusion results giving the appearance of faster than light travel. In Fig. 5-6, galaxy M87 streams out a high-speed jet of subatomic particles almost directly towards us, but Penrose–Terrell rotation causes the jet to appear to be moving laterally in the same manner that the appearance of the cube in Fig. 5-4 has been stretched out.
Dynamics
Section dealt strictly with kinematics, the study of the motion of points, bodies, and systems of bodies without considering the forces that caused the motion. This section discusses masses, forces, energy and so forth, and as such requires consideration of physical effects beyond those encompassed by the Lorentz transformation itself.
Equivalence of mass and energy
As an object's speed approaches the speed of light from an observer's point of view, its relativistic mass increases, thereby making it more and more difficult to accelerate it from within the observer's frame of reference.
The energy content of an object at rest with mass m equals mc². Conservation of energy implies that, in any reaction, a decrease of the sum of the masses of particles must be accompanied by an increase in kinetic energies of the particles after the reaction. Similarly, the mass of an object can be increased by taking in kinetic energies.
In addition to the papers referenced above – which give derivations of the Lorentz transformation and describe the foundations of special relativity – Einstein also wrote at least four papers giving heuristic arguments for the equivalence (and transmutability) of mass and energy, for E = mc².
Mass–energy equivalence is a consequence of special relativity. The energy and momentum, which are separate in Newtonian mechanics, form a four-vector in relativity, and this relates the time component (the energy) to the space components (the momentum) in a non-trivial way. For an object at rest, the energy–momentum four-vector is (E, 0, 0, 0): it has a time component, which is the energy, and three space components, which are zero. By changing frames with a Lorentz transformation in the x direction with a small value of the velocity v, the energy–momentum four-vector becomes (E, Ev/c², 0, 0). The momentum is equal to the energy multiplied by the velocity divided by c². As such, the Newtonian mass of an object, which is the ratio of the momentum to the velocity for slow velocities, is equal to E/c².
The energy and momentum are properties of matter and radiation, and it is impossible to deduce that they form a four-vector just from the two basic postulates of special relativity by themselves, because these do not talk about matter or radiation, they only talk about space and time. The derivation therefore requires some additional physical reasoning. In his 1905 paper, Einstein used the additional principles that Newtonian mechanics should hold for slow velocities, so that there is one energy scalar and one three-vector momentum at slow velocities, and that the conservation law for energy and momentum is exactly true in relativity. Furthermore, he assumed that the energy of light is transformed by the same Doppler-shift factor as its frequency, which he had previously shown to be true based on Maxwell's equations. The first of Einstein's papers on this subject was "Does the Inertia of a Body Depend upon its Energy Content?" in 1905. Although Einstein's argument in this paper is nearly universally accepted by physicists as correct, even self-evident, many authors over the years have suggested that it is wrong. Other authors suggest that the argument was merely inconclusive because it relied on some implicit assumptions.
Einstein acknowledged the controversy over his derivation in his 1907 survey paper on special relativity. There he notes that it is problematic to rely on Maxwell's equations for the heuristic mass–energy argument. The argument in his 1905 paper can be carried out with the emission of any massless particles, but the Maxwell equations are implicitly used to make it obvious that the emission of light in particular can be achieved only by doing work. To emit electromagnetic waves, all you have to do is shake a charged particle, and this is clearly doing work, so that the emission is of energy.
Einstein's 1905 demonstration of E = mc²
In his fourth of his 1905 Annus mirabilis papers, Einstein presented a heuristic argument for the equivalence of mass and energy. Although, as discussed above, subsequent scholarship has established that his arguments fell short of a broadly definitive proof, the conclusions that he reached in this paper have stood the test of time.
Einstein took as starting assumptions his recently discovered formula for relativistic Doppler shift, the laws of conservation of energy and conservation of momentum, and the relationship between the frequency of light and its energy as implied by Maxwell's equations.
Fig. 6-1 (top). Consider a system of plane waves of light having frequency f traveling in a direction making an angle φ with the x-axis of reference frame S. The frequency (and hence energy) of the waves as measured in frame S′ that is moving along the x-axis at velocity v is given by the relativistic Doppler shift formula that Einstein had developed in his 1905 paper on special relativity:
f′ = f · (1 − (v/c) cos φ) / √(1 − v²/c²)
Fig. 6-1 (bottom). Consider an arbitrary body that is stationary in reference frame S. Let this body emit a pair of equal-energy light-pulses in opposite directions at angle φ with respect to the x-axis. Each pulse has energy L/2. Because of conservation of momentum, the body remains stationary in S after emission of the two pulses. Let E₀ be the energy of the body before emission of the two pulses and E₁ after their emission.
Next, consider the same system observed from frame S′ that is moving along the x-axis at speed v relative to frame S. In this frame, light from the forwards and reverse pulses will be relativistically Doppler-shifted. Let H₀ be the energy of the body measured in reference frame S′ before emission of the two pulses and H₁ after their emission. We obtain the following relationships:
E₀ = E₁ + ½L + ½L = E₁ + L
H₀ = H₁ + ½L (1 − (v/c) cos φ)/√(1 − v²/c²) + ½L (1 + (v/c) cos φ)/√(1 − v²/c²) = H₁ + L/√(1 − v²/c²)
From the above equations, we obtain the following:
(H₀ − E₀) − (H₁ − E₁) = L [1/√(1 − v²/c²) − 1]
The two differences of form H − E seen in the above equation have a straightforward physical interpretation. Since H and E are the energies of the arbitrary body in the moving and stationary frames, H₀ − E₀ and H₁ − E₁ represent the kinetic energies of the body before and after the emission of light (except for an additive constant that fixes the zero point of energy and is conventionally set to zero). Hence,
K₀ − K₁ = L [1/√(1 − v²/c²) − 1]
Taking a Taylor series expansion and neglecting higher order terms, he obtained
K₀ − K₁ = (1/2)(L/c²) v²
Comparing the above expression with the classical expression for kinetic energy, K.E. = (1/2)mv², Einstein then noted: "If a body gives off the energy L in the form of radiation, its mass diminishes by L/c²."
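The low-velocity limit used in this last step is easy to verify numerically. The sketch below (Python) compares the exact factor (γ − 1)c², which multiplies L/c² in the kinetic-energy difference, with the Newtonian value v²/2:

    import math

    C = 299_792_458.0   # speed of light in m/s

    def exact_factor(v):
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        return (gamma - 1.0) * C**2

    for v in (3.0e3, 3.0e6, 3.0e7):            # 3 km/s up to 0.1 c
        print(v, exact_factor(v), 0.5 * v**2)  # nearly equal until v becomes relativistic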
Rindler has observed that Einstein's heuristic argument suggested merely that energy contributes to mass. In 1905, Einstein's cautious expression of the mass–energy relationship allowed for the possibility that "dormant" mass might exist that would remain behind after all the energy of a body was removed. By 1907, however, Einstein was ready to assert that all inertial mass represented a reserve of energy. "To equate all mass with energy required an act of aesthetic faith, very characteristic of Einstein." Einstein's bold hypothesis has been amply confirmed in the years subsequent to his original proposal.
For a variety of reasons, Einstein's original derivation is currently seldom taught. Besides the vigorous debate that continues until this day as to the formal correctness of his original derivation, the recognition of special relativity as being what Einstein called a "principle theory" has led to a shift away from reliance on electromagnetic phenomena to purely dynamic methods of proof.
How far can you travel from the Earth?
Since nothing can travel faster than light, one might conclude that a human can never travel farther from Earth than about 100 light years, and that a traveler could never visit more than the few star systems that exist within that limit. However, because of time dilation, a hypothetical spaceship can travel thousands of light years during a passenger's lifetime. If a spaceship could be built that accelerates at a constant 1 g, it will, after one year, be travelling at almost the speed of light as seen from Earth. This is described by:
v(t) = at / √(1 + (at/c)²)
where v(t) is the velocity at a time t, a is the acceleration of the spaceship and t is the coordinate time as measured by people on Earth. After three years of accelerating at 9.81 m/s², with the spaceship achieving a velocity of 94.6% of the speed of light relative to Earth, time dilation will result in each second experienced on the spaceship corresponding to 3.1 seconds back on Earth. During the journey, people on Earth will experience more time than the travellers do – their clocks (and all physical phenomena around them) would really be ticking 3.1 times faster than those of the spaceship. A 5-year round trip for the traveller will take 6.5 Earth years and cover a distance of over 6 light-years. A 20-year round trip for them (5 years accelerating, 5 decelerating, twice each) will land them back on Earth having travelled for 335 Earth years and a distance of 331 light years. A full 40-year trip at 1 g will appear on Earth to last 58,000 years and cover a distance of 55,000 light years. A one-way 28-year (14 years accelerating, 14 decelerating, as measured with the astronaut's clock) trip at 1 g acceleration could reach 2,000,000 light-years, to the Andromeda Galaxy. This same time dilation is why a muon travelling close to c is observed to travel much farther than c times its half-life (when at rest).
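These figures follow from the standard formulas for motion at constant proper acceleration (hyperbolic motion). The sketch below (Python; the function name coast_values is illustrative) evaluates the Earth-frame velocity and distance and the ship's elapsed proper time:

    import math

    C = 299_792_458.0              # speed of light, m/s
    G = 9.81                       # proper acceleration, m/s^2
    YEAR = 365.25 * 24 * 3600.0    # seconds in a year

    def coast_values(t_earth):
        """Velocity, Earth-frame distance, and ship proper time after accelerating
        at G for Earth coordinate time t_earth (constant proper acceleration)."""
        at = G * t_earth
        v = at / math.sqrt(1.0 + (at / C) ** 2)
        x = (C**2 / G) * (math.sqrt(1.0 + (at / C) ** 2) - 1.0)
        tau = (C / G) * math.asinh(at / C)
        return v, x, tau

    for years in (1, 3, 10):
        v, x, tau = coast_values(years * YEAR)
        print(f"{years} Earth years: v = {v / C:.3f} c, "
              f"distance = {x / (C * YEAR):.2f} ly, ship time = {tau / YEAR:.2f} yr")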
Elastic collisions
Examination of the collision products generated by particle accelerators around the world provides scientists evidence of the structure of the subatomic world and the natural laws governing it. Analysis of the collision products, the sum of whose masses may vastly exceed the masses of the incident particles, requires special relativity.
In Newtonian mechanics, analysis of collisions involves use of the conservation laws for mass, momentum and energy. In relativistic mechanics, mass is not independently conserved, because it has been subsumed into the total relativistic energy. We illustrate the differences that arise between the Newtonian and relativistic treatments of particle collisions by examining the simple case of two perfectly elastic colliding particles of equal mass. (Inelastic collisions are discussed in Spacetime#Conservation laws. Radioactive decay may be considered a sort of time-reversed inelastic collision.)
Elastic scattering of charged elementary particles deviates from ideality due to the production of Bremsstrahlung radiation.
Newtonian analysis
Fig. 6-2 provides a demonstration of the result, familiar to billiard players, that if a stationary ball is struck elastically by another one of the same mass (assuming no sidespin, or "English"), then after collision, the diverging paths of the two balls will subtend a right angle. (a) In the stationary frame, an incident sphere traveling at 2v strikes a stationary sphere. (b) In the center of momentum frame, the two spheres approach each other symmetrically at ±v. After elastic collision, the two spheres rebound from each other with equal and opposite velocities ±u. Energy conservation requires that |u| = |v|. (c) Reverting to the stationary frame, the rebound velocities are v ± u. The dot product (v + u) · (v − u) = v² − u² = 0, indicating that the vectors are orthogonal.
Relativistic analysis
Consider the elastic collision scenario in Fig. 6-3 between a moving particle colliding with an equal mass stationary particle. Unlike the Newtonian case, the angle between the two particles after collision is less than 90°, is dependent on the angle of scattering, and becomes smaller and smaller as the velocity of the incident particle approaches the speed of light:
The relativistic momentum and total relativistic energy of a particle are given by
p = γmv   and   E = γmc²
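A short numerical sketch (Python; the electron mass is included only as an approximate illustrative value) evaluates these quantities and checks the frame-invariant combination E² − (pc)² = (mc²)²:

    import math

    C = 299_792_458.0   # speed of light, m/s

    def momentum_energy(m, v):
        """Relativistic momentum p = gamma*m*v and total energy E = gamma*m*c^2."""
        gamma = 1.0 / math.sqrt(1.0 - (v / C) ** 2)
        return gamma * m * v, gamma * m * C**2

    m_e = 9.109e-31                        # approximate electron mass, kg
    p, E = momentum_energy(m_e, 0.9 * C)
    print(p, E)
    print(E**2 - (p * C) ** 2, (m_e * C**2) ** 2)   # equal: the invariant mass term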
Conservation of momentum dictates that the sum of the momenta of the incoming particle and the stationary particle (which initially has momentum = 0) equals the sum of the momenta of the emergent particles:
Likewise, the sum of the total relativistic energies of the incoming particle and the stationary particle (which initially has total energy mc2) equals the sum of the total energies of the emergent particles:
Breaking down the momentum conservation equation into its components, replacing v with the dimensionless β, and factoring out common terms from the momentum and energy equations yields the following:
From these we obtain the following relationships:
For the symmetrical case, the result takes on the simpler form:
Beyond the basics
Rapidity
Lorentz transformations relate coordinates of events in one reference frame to those of another frame. Relativistic composition of velocities is used to add two velocities together. The formulas to perform the latter computations are nonlinear, making them more complex than the corresponding Galilean formulas.
This nonlinearity is an artifact of our choice of parameters. We have previously noted that in a spacetime diagram, the points at some constant spacetime interval from the origin form an invariant hyperbola. We have also noted that the coordinate systems of two spacetime reference frames in standard configuration are hyperbolically rotated with respect to each other.
The natural functions for expressing these relationships are the hyperbolic analogs of the trigonometric functions. Fig. 7-1a shows a unit circle with sin(a) and cos(a), the only difference between this diagram and the familiar unit circle of elementary trigonometry being that a is interpreted, not as the angle between the ray and the x-axis, but as twice the area of the sector swept out by the ray from the x-axis. Numerically, the angle and the 2 × area measures for the unit circle are identical. Fig. 7-1b shows a unit hyperbola with sinh(a) and cosh(a), where a is likewise interpreted as twice the tinted area. Fig. 7-2 presents plots of the sinh, cosh, and tanh functions.
For the unit circle, the slope of the ray is given by
slope = tan a = sin a / cos a
In the Cartesian plane, rotation of point (x, y) into point (x′, y′) by angle θ is given by
x′ = x cos θ − y sin θ
y′ = x sin θ + y cos θ
In a spacetime diagram, the velocity parameter β is the analog of slope. The rapidity, φ, is defined by
β ≡ tanh φ
where
β = v/c
The rapidity defined above is very useful in special relativity because many expressions take on a considerably simpler form when expressed in terms of it. For example, rapidity is simply additive in the collinear velocity-addition formula:
β = (β₁ + β₂) / (1 + β₁β₂) = (tanh φ₁ + tanh φ₂) / (1 + tanh φ₁ tanh φ₂) = tanh(φ₁ + φ₂)
or in other words, φ = φ₁ + φ₂.
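The additivity is easily confirmed numerically (a Python sketch; math.atanh and math.tanh are the inverse and forward hyperbolic tangents):

    import math

    b1, b2 = 0.5, 0.5
    phi = math.atanh(b1) + math.atanh(b2)    # add the rapidities
    print(math.tanh(phi))                    # 0.8
    print((b1 + b2) / (1.0 + b1 * b2))       # 0.8, the velocity-addition formula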
The Lorentz transformations take a simple form when expressed in terms of rapidity. The γ factor can be written as
γ = 1/√(1 − β²) = cosh φ,    γβ = sinh φ
Transformations describing relative motion with uniform velocity and without rotation of the space coordinate axes are called boosts.
Substituting γ and γβ into the transformations as previously presented and rewriting in matrix form, the Lorentz boost in the x-direction may be written as
(ct′)   =   ( cosh φ   −sinh φ ) (ct)
(x′ )       ( −sinh φ   cosh φ ) (x )
and the inverse Lorentz boost in the x-direction may be written as
(ct)   =   ( cosh φ   sinh φ ) (ct′)
(x )       ( sinh φ   cosh φ ) (x′ )
In other words, Lorentz boosts represent hyperbolic rotations in Minkowski spacetime.
The advantages of using hyperbolic functions are such that some textbooks such as the classic ones by Taylor and Wheeler introduce their use at a very early stage.
4‑vectors
Four-vectors have been mentioned above in the context of the energy–momentum 4-vector, but without any great emphasis. Indeed, none of the elementary derivations of special relativity require them. But once understood, 4-vectors, and more generally tensors, greatly simplify the mathematics and conceptual understanding of special relativity. Working exclusively with such objects leads to formulas that are manifestly relativistically invariant, which is a considerable advantage in non-trivial contexts. For instance, demonstrating relativistic invariance of Maxwell's equations in their usual form is not trivial, while it is merely a routine calculation, really no more than an observation, using the field strength tensor formulation.
On the other hand, general relativity, from the outset, relies heavily on 4-vectors, and more generally tensors, representing physically relevant entities. Relating these via equations that do not rely on specific coordinates requires tensors, capable of connecting such 4-vectors even within a curved spacetime, and not just within a flat one as in special relativity. The study of tensors is outside the scope of this article, which provides only a basic discussion of spacetime.
Definition of 4-vectors
A 4-tuple, A = (A0, A1, A2, A3), is a "4-vector" if its components Ai transform between frames according to the Lorentz transformation.
If using (ct, x, y, z) coordinates, A is a 4-vector if it transforms (in the x-direction) according to
A0′ = γ(A0 − β A1)
A1′ = γ(A1 − β A0)
A2′ = A2
A3′ = A3
which comes from simply replacing ct with A0 and x with A1 in the earlier presentation of the Lorentz transformation.
As usual, when we write x, t, etc. we generally mean Δx, Δt etc.
The last three components of a 4-vector must be a standard vector in three-dimensional space. Therefore, a 4-vector must transform like (cΔt, Δx, Δy, Δz) under Lorentz transformations as well as rotations.
Properties of 4-vectors
Closure under linear combination: If A and B are 4-vectors, then C = aA + bB is also a 4-vector.
Inner-product invariance: If A and B are 4-vectors, then their inner product (scalar product) is invariant, i.e. their inner product is independent of the frame in which it is calculated. Note how the calculation of inner product differs from the calculation of the inner product of a 3-vector. In the following, A and B are 4-vectors:
A · B ≡ A0B0 − A1B1 − A2B2 − A3B3
In addition to being invariant under Lorentz transformation, the above inner product is also invariant under rotation in 3-space.
Two vectors are said to be orthogonal if A · B = 0. Unlike the case with 3-vectors, orthogonal 4-vectors are not necessarily at right angles to each other. The rule is that two 4-vectors are orthogonal if they are offset by equal and opposite angles from the 45° line, which is the world line of a light ray. This implies that a lightlike 4-vector is orthogonal to itself.
Invariance of the magnitude of a vector: The magnitude of a vector is the inner product of a 4-vector with itself, and is a frame-independent property. As with intervals, the magnitude may be positive, negative or zero, so that the vectors are referred to as timelike, spacelike or null (lightlike). Note that a null vector is not the same as a zero vector. A null vector is one for which A · A = 0, while a zero vector is one whose components are all zero. Special cases illustrating the invariance of the norm include the invariant interval c²Δt² − Δx² − Δy² − Δz² and the invariant length of the relativistic momentum vector, E² − p²c².
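These properties are straightforward to check numerically. The sketch below (Python; the helper names boost_4vector and inner are illustrative) boosts two 4-vectors along the x-axis and confirms that their inner product is unchanged:

    import math

    def boost_4vector(A, beta):
        """Boost a 4-vector (A0, A1, A2, A3) along the x-axis by velocity beta*c."""
        gamma = 1.0 / math.sqrt(1.0 - beta ** 2)
        A0, A1, A2, A3 = A
        return (gamma * (A0 - beta * A1), gamma * (A1 - beta * A0), A2, A3)

    def inner(A, B):
        """Minkowski inner product with signature (+, -, -, -)."""
        return A[0] * B[0] - A[1] * B[1] - A[2] * B[2] - A[3] * B[3]

    A = (5.0, 1.0, 2.0, 0.0)
    B = (2.0, 3.0, -1.0, 4.0)
    print(inner(A, B), inner(boost_4vector(A, 0.7), boost_4vector(B, 0.7)))   # equal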
Examples of 4-vectors
Displacement 4-vector: Otherwise known as the spacetime separation, this is (cΔt, Δx, Δy, Δz), or for infinitesimal separations, (c dt, dx, dy, dz).
Velocity 4-vector: This results when the displacement 4-vector is divided by dτ, where dτ is the proper time between the two events that yield dt, dx, dy, and dz.
The 4-velocity is tangent to the world line of a particle, and has a length equal to one unit of time in the frame of the particle.
An accelerated particle does not have an inertial frame in which it is always at rest. However, an inertial frame can always be found that is momentarily comoving with the particle. This frame, the momentarily comoving reference frame (MCRF), enables application of special relativity to the analysis of accelerated particles.
Since photons move on null lines, dτ = 0 for a photon, and a 4-velocity cannot be defined. There is no frame in which a photon is at rest, and no MCRF can be established along a photon's path.
Energy–momentum 4-vector: P ≡ (E/c, p_x, p_y, p_z)
As indicated before, there are varying treatments for the energy–momentum 4-vector, so that one may also see it expressed as (E, p_x, p_y, p_z) or as (E, p_x c, p_y c, p_z c). The first component is the total energy (including mass) of the particle (or system of particles) in a given frame, while the remaining components are its spatial momentum. The energy–momentum 4-vector is a conserved quantity.
Acceleration 4-vector: This results from taking the derivative of the velocity 4-vector with respect to τ.
Force 4-vector: This is the derivative of the momentum 4-vector with respect to τ.
As expected, the final components of the above 4-vectors are all standard 3-vectors corresponding to spatial 3-momentum, 3-force, etc.
4-vectors and physical law
The first postulate of special relativity declares the equivalency of all inertial frames. A physical law holding in one frame must apply in all frames, since otherwise it would be possible to differentiate between frames. Newtonian momenta fail to behave properly under Lorentzian transformation, and Einstein preferred to change the definition of momentum to one involving 4-vectors rather than give up on conservation of momentum.
Physical laws must be based on constructs that are frame independent. This means that physical laws may take the form of equations connecting scalars, which are always frame independent. However, equations involving 4-vectors require the use of tensors with appropriate rank, which themselves can be thought of as being built up from 4-vectors.
Acceleration
It is a common misconception that special relativity is applicable only to inertial frames, and that it is unable to handle accelerating objects or accelerating reference frames. Actually, accelerating objects can generally be analyzed without needing to deal with accelerating frames at all. It is only when gravitation is significant that general relativity is required.
Properly handling accelerating frames does require some care, however. The difference between special and general relativity is that (1) In special relativity, all velocities are relative, but acceleration is absolute. (2) In general relativity, all motion is relative, whether inertial, accelerating, or rotating. To accommodate this difference, general relativity uses curved spacetime.
In this section, we analyze several scenarios involving accelerated reference frames.
Dewan–Beran–Bell spaceship paradox
The Dewan–Beran–Bell spaceship paradox (Bell's spaceship paradox) is a good example of a problem where intuitive reasoning unassisted by the geometric insight of the spacetime approach can lead to issues.
In Fig. 7-4, two identical spaceships float in space and are at rest relative to each other. They are connected by a string that is capable of only a limited amount of stretching before breaking. At a given instant in our frame, the observer frame, both spaceships accelerate in the same direction along the line between them with the same constant proper acceleration. Will the string break?
When the paradox was new and relatively unknown, even professional physicists had difficulty working out the solution. Two lines of reasoning lead to opposite conclusions. Both arguments, which are presented below, are flawed even though one of them yields the correct answer.
To observers in the rest frame, the spaceships start a distance L apart and remain the same distance apart during acceleration. During acceleration, L is a length-contracted version of the distance L′ = γL in the frame of the accelerating spaceships. After a sufficiently long time, γ will increase to a sufficiently large factor that the string must break.
Let A and B be the rear and front spaceships. In the frame of the spaceships, each spaceship sees the other spaceship doing the same thing that it is doing. A says that B has the same acceleration that he has, and B sees that A matches her every move. So the spaceships stay the same distance apart, and the string does not break.
The problem with the first argument is that there is no "frame of the spaceships." There cannot be, because the two spaceships measure a growing distance between the two. Because there is no common frame of the spaceships, the length of the string is ill-defined. Nevertheless, the conclusion is correct, and the argument is mostly right. The second argument, however, completely ignores the relativity of simultaneity.
A spacetime diagram (Fig. 7-5) makes the correct solution to this paradox almost immediately evident. Two observers in Minkowski spacetime accelerate with constant magnitude acceleration for proper time (acceleration and elapsed time measured by the observers themselves, not some inertial observer). They are comoving and inertial before and after this phase. In Minkowski geometry, the length along the line of simultaneity turns out to be greater than the length along the line of simultaneity .
The length increase can be calculated with the help of the Lorentz transformation. If, as illustrated in Fig. 7-5, the acceleration is finished, the ships will remain at a constant offset in some frame S'. If x_A and x_B = x_A + L are the ships' positions in S, the positions in frame S' are:

x'_A = \gamma(x_A - vt), \qquad x'_B = \gamma(x_A + L - vt), \qquad L' = x'_B - x'_A = \gamma L.
The "paradox", as it were, comes from the way that Bell constructed his example. In the usual discussion of Lorentz contraction, the rest length is fixed and the moving length shortens as measured in frame . As shown in Fig. 7-5, Bell's example asserts the moving lengths and measured in frame to be fixed, thereby forcing the rest frame length in frame to increase.
Accelerated observer with horizon
Certain special relativity problem setups can lead to insight about phenomena normally associated with general relativity, such as event horizons. In the text accompanying Section "Invariant hyperbola" of the article Spacetime, the magenta hyperbolae represented actual paths that are tracked by a constantly accelerating traveler in spacetime. During periods of positive acceleration, the traveler's velocity asymptotically approaches the speed of light, while, measured in our frame, the traveler's acceleration constantly decreases.
Fig. 7-6 details various features of the traveler's motions with more specificity. At any given moment, her space axis is formed by a line passing through the origin and her current position on the hyperbola, while her time axis is the tangent to the hyperbola at her position. The velocity parameter β approaches a limit of one as ct increases. Likewise, γ approaches infinity.
The shape of the invariant hyperbola corresponds to a path of constant proper acceleration. This is demonstrable as follows:
We remember that .
Since , we conclude that .
From the relativistic force law, .
Substituting from step 2 and the expression for from step 3 yields , which is a constant expression.
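The same conclusion can be reached from the standard parametrization of hyperbolic motion (a textbook sketch, with the symbol α introduced here for the proper acceleration rather than taken from the steps above). A worldline of constant proper acceleration α can be written as

x(\tau) = (c^2/\alpha)\cosh(\alpha\tau/c), \qquad ct(\tau) = (c^2/\alpha)\sinh(\alpha\tau/c),

which satisfies x^2 - (ct)^2 = (c^2/\alpha)^2 for every proper time τ. Every point of an invariant hyperbola x^2 - (ct)^2 = s^2 therefore corresponds to one and the same proper acceleration α = c^2/s.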
Fig. 7-6 illustrates a specific calculated scenario. Terence (A) and Stella (B) initially stand together 100 light hours from the origin. Stella lifts off at time 0, her spacecraft accelerating at 0.01 c per hour. Every twenty hours, Terence radios updates to Stella about the situation at home (solid green lines). Stella receives these regular transmissions, but the increasing distance (offset in part by time dilation) causes her to receive Terence's communications later and later as measured on her clock, and she never receives any communications from Terence after 100 hours on his clock (dashed green lines).
After 100 hours according to Terence's clock, Stella enters a dark region. She has traveled outside Terence's timelike future. On the other hand, Terence can continue to receive Stella's messages to him indefinitely. He just has to wait long enough. Spacetime has been divided into distinct regions separated by an apparent event horizon. So long as Stella continues to accelerate, she can never know what takes place behind this horizon.
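As a rough check of this scenario (a back-of-the-envelope calculation using the standard hyperbolic-motion result, not a figure quoted in the sources above): for constant proper acceleration α, the apparent horizon trails the accelerating traveler by the fixed distance c^2/α. With α = 0.01c per hour this distance is

c^2/\alpha = c \times \frac{c}{0.01c\ \text{per hour}} = 100\ \text{light hours},

so the horizon is the asymptote of Stella's hyperbola passing 100 light hours behind her starting point, and Terence's worldline crosses it 100 hours after liftoff on his clock, consistent with no signal he sends after that time ever reaching her.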
Relativity and unifying electromagnetism
Theoretical investigation in classical electromagnetism led to the discovery of wave propagation. Equations generalizing the electromagnetic effects showed that the finite propagation speed of the E and B fields required certain behaviors of charged particles. The general study of moving charges leads to the Liénard–Wiechert potential, which is a step towards special relativity.
The Lorentz transformation of the electric field of a moving charge into a non-moving observer's reference frame results in the appearance of a mathematical term commonly called the magnetic field. Conversely, the magnetic field generated by a moving charge disappears and becomes a purely electrostatic field in a comoving frame of reference. Maxwell's equations are thus simply an empirical fit to special relativistic effects in a classical model of the Universe. As electric and magnetic fields are reference frame dependent and thus intertwined, one speaks of electromagnetic fields. Special relativity provides the transformation rules for how an electromagnetic field in one inertial frame appears in another inertial frame.
Maxwell's equations in the 3D form are already consistent with the physical content of special relativity, although they are easier to manipulate in a manifestly covariant form, that is, in the language of tensor calculus.
Theories of relativity and quantum mechanics
Special relativity can be combined with quantum mechanics to form relativistic quantum mechanics and quantum electrodynamics. How general relativity and quantum mechanics can be unified is one of the unsolved problems in physics; quantum gravity and a "theory of everything", which require a unification including general relativity too, are active and ongoing areas in theoretical research.
The early Bohr–Sommerfeld atomic model explained the fine structure of alkali metal atoms using both special relativity and the preliminary knowledge on quantum mechanics of the time.
In 1928, Paul Dirac constructed an influential relativistic wave equation, now known as the Dirac equation in his honour, that is fully compatible both with special relativity and with the final version of quantum theory existing after 1926. This equation not only described the intrinsic angular momentum of the electrons called spin, it also led to the prediction of the antiparticle of the electron (the positron), and fine structure could only be fully explained with special relativity. It was the first foundation of relativistic quantum mechanics.
On the other hand, the existence of antiparticles leads to the conclusion that relativistic quantum mechanics is not enough for a more accurate and complete theory of particle interactions. Instead, a theory of particles interpreted as quantized fields, called quantum field theory, becomes necessary; in which particles can be created and destroyed throughout space and time.
Status
Special relativity in its Minkowski spacetime is accurate only when the absolute value of the gravitational potential is much less than c^2 in the region of interest. In a strong gravitational field, one must use general relativity. General relativity becomes special relativity in the limit of a weak field. At very small scales, such as at the Planck length and below, quantum effects must be taken into consideration, resulting in quantum gravity. However, at macroscopic scales and in the absence of strong gravitational fields, special relativity is experimentally tested to an extremely high degree of accuracy (10^{-20}) and is thus accepted by the physics community. Experimental results that appear to contradict it are not reproducible and are thus widely believed to be due to experimental errors.
Special relativity is mathematically self-consistent, and it is an organic part of all modern physical theories, most notably quantum field theory, string theory, and general relativity (in the limiting case of negligible gravitational fields).
Newtonian mechanics mathematically follows from special relativity at small velocities (compared to the speed of light) – thus Newtonian mechanics can be considered as a special relativity of slow moving bodies. See Classical mechanics for a more detailed discussion.
Several experiments predating Einstein's 1905 paper are now interpreted as evidence for relativity. Of these it is known Einstein was aware of the Fizeau experiment before 1905, and historians have concluded that Einstein was at least aware of the Michelson–Morley experiment as early as 1899 despite claims he made in his later years that it played no role in his development of the theory.
The Fizeau experiment (1851, repeated by Michelson and Morley in 1886) measured the speed of light in moving media, with results that are consistent with relativistic addition of colinear velocities.
The famous Michelson–Morley experiment (1881, 1887) gave further support to the postulate that detecting an absolute reference velocity was not achievable. It should be stated here that, contrary to many alternative claims, it said little about the invariance of the speed of light with respect to the source and observer's velocity, as both source and observer were travelling together at the same velocity at all times.
The Trouton–Noble experiment (1903) showed that the torque on a capacitor is independent of position and inertial reference frame.
The Experiments of Rayleigh and Brace (1902, 1904) showed that length contraction does not lead to birefringence for a co-moving observer, in accordance with the relativity principle.
Particle accelerators accelerate and measure the properties of particles moving at near the speed of light, where their behavior is consistent with relativity theory and inconsistent with the earlier Newtonian mechanics. These machines would simply not work if they were not engineered according to relativistic principles. In addition, a considerable number of modern experiments have been conducted to test special relativity. Some examples:
Tests of relativistic energy and momentum – testing the limiting speed of particles
Ives–Stilwell experiment – testing relativistic Doppler effect and time dilation
Experimental testing of time dilation – relativistic effects on a fast-moving particle's half-life
Kennedy–Thorndike experiment – time dilation in accordance with Lorentz transformations
Hughes–Drever experiment – testing isotropy of space and mass
Modern searches for Lorentz violation – various modern tests
Experiments to test emission theory demonstrated that the speed of light is independent of the speed of the emitter.
Experiments to test the aether drag hypothesis – no "aether flow obstruction".
Technical discussion of spacetime
Geometry of spacetime
Comparison between flat Euclidean space and Minkowski space
Special relativity uses a "flat" 4-dimensional Minkowski space – an example of a spacetime. Minkowski spacetime appears to be very similar to the standard 3-dimensional Euclidean space, but there is a crucial difference with respect to time.
In 3D space, the differential of distance (line element) ds is defined by

ds^2 = d\mathbf{x}\cdot d\mathbf{x} = dx_1^2 + dx_2^2 + dx_3^2,

where d\mathbf{x} = (dx_1, dx_2, dx_3) are the differentials of the three spatial dimensions. In Minkowski geometry, there is an extra dimension with coordinate X^0 derived from time, such that the distance differential fulfills

ds^2 = -dX_0^2 + dX_1^2 + dX_2^2 + dX_3^2,

where dX = (dX_0, dX_1, dX_2, dX_3) are the differentials of the four spacetime dimensions. This suggests a deep theoretical insight: special relativity is simply a rotational symmetry of our spacetime, analogous to the rotational symmetry of Euclidean space (see Fig. 10-1). Just as Euclidean space uses a Euclidean metric, so spacetime uses a Minkowski metric. Basically, special relativity can be stated as the invariance of any spacetime interval (that is, the 4D distance between any two events) when viewed from any inertial reference frame. All equations and effects of special relativity can be derived from this rotational symmetry (the Poincaré group) of Minkowski spacetime.
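To make the analogy with ordinary rotations concrete, a boost along x can be written as a hyperbolic rotation; the following identity is standard and is included here only as a worked illustration. With rapidity φ defined by \tanh\varphi = v/c (so that \cosh\varphi = \gamma),

x' = x\cosh\varphi - ct\sinh\varphi, \qquad ct' = ct\cosh\varphi - x\sinh\varphi,

and therefore

x'^2 - (ct')^2 = \left(x^2 - (ct)^2\right)\left(\cosh^2\varphi - \sinh^2\varphi\right) = x^2 - (ct)^2,

so the spacetime interval is preserved in exactly the way x^2 + y^2 is preserved by an ordinary rotation of the plane.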
The actual form of ds above depends on the metric and on the choices for the X0 coordinate.
To make the time coordinate look like the space coordinates, it can be treated as imaginary: X^0 = ict (this is called a Wick rotation).
According to Misner, Thorne and Wheeler (1971, §2.3), ultimately the deeper understanding of both special and general relativity will come from the study of the Minkowski metric (described below) with X^0 = ct, rather than from a "disguised" Euclidean metric using ict as the time coordinate.
Some authors use X^0 = t, with factors of c elsewhere to compensate; for instance, spatial coordinates are divided by c, or factors of c^{±2} are included in the metric tensor.
These numerous conventions can be superseded by using natural units where c = 1. Then space and time have equivalent units, and no factors of c appear anywhere.
3D spacetime
If we reduce the spatial dimensions to 2, so that we can represent the physics in a 3D space

ds^2 = dx_1^2 + dx_2^2 - c^2 dt^2,

we see that the null geodesics lie along a dual-cone (see Fig. 10-2) defined by the equation

ds^2 = 0 = dx_1^2 + dx_2^2 - c^2 dt^2,

or simply

dx_1^2 + dx_2^2 = c^2 dt^2,
which is the equation of a circle of radius c dt.
4D spacetime
If we extend this to three spatial dimensions, the null geodesics are the 4-dimensional cone:

ds^2 = 0 = dx_1^2 + dx_2^2 + dx_3^2 - c^2 dt^2,

so

dx_1^2 + dx_2^2 + dx_3^2 = c^2 dt^2.
As illustrated in Fig. 10-3, the null geodesics can be visualized as a set of continuous concentric spheres with radii = c dt.
This null dual-cone represents the "line of sight" of a point in space. That is, when we look at the stars and say "The light from that star that I am receiving is X years old", we are looking down this line of sight: a null geodesic. We are looking at an event a distance away and a time d/c in the past. For this reason the null dual cone is also known as the "light cone". (The point in the lower left of the Fig. 10-2 represents the star, the origin represents the observer, and the line represents the null geodesic "line of sight".)
The cone in the −t region is the information that the point is "receiving", while the cone in the +t section is the information that the point is "sending".
The geometry of Minkowski space can be depicted using Minkowski diagrams, which are useful also in understanding many of the thought experiments in special relativity.
Physics in spacetime
Transformations of physical quantities between reference frames
Above, the Lorentz transformation for the time coordinate and three space coordinates illustrates that they are intertwined. This is true more generally: certain pairs of "timelike" and "spacelike" quantities naturally combine on equal footing under the same Lorentz transformation.
The Lorentz transformation in standard configuration above, that is, for a boost in the x-direction, can be recast into matrix form as follows:

\begin{pmatrix} ct' \\ x' \\ y' \\ z' \end{pmatrix} = \begin{pmatrix} \gamma & -\gamma\beta & 0 & 0 \\ -\gamma\beta & \gamma & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix} \begin{pmatrix} ct \\ x \\ y \\ z \end{pmatrix} = \begin{pmatrix} \gamma(ct - \beta x) \\ \gamma(x - \beta ct) \\ y \\ z \end{pmatrix}.
In Newtonian mechanics, quantities that have magnitude and direction are mathematically described as 3d vectors in Euclidean space, and in general they are parametrized by time. In special relativity, this notion is extended by adding the appropriate timelike quantity to a spacelike vector quantity, and we have 4d vectors, or "four-vectors", in Minkowski spacetime. The components of vectors are written using tensor index notation, as this has numerous advantages. The notation makes it clear the equations are manifestly covariant under the Poincaré group, thus bypassing the tedious calculations to check this fact. In constructing such equations, we often find that equations previously thought to be unrelated are, in fact, closely connected being part of the same tensor equation. Recognizing other physical quantities as tensors simplifies their transformation laws. Throughout, upper indices (superscripts) are contravariant indices rather than exponents except when they indicate a square (this should be clear from the context), and lower indices (subscripts) are covariant indices. For simplicity and consistency with the earlier equations, Cartesian coordinates will be used.
The simplest example of a four-vector is the position of an event in spacetime, which constitutes a timelike component ct and a spacelike component x = (x, y, z), in a contravariant position four-vector with components:

X^{\nu} = (X^0, X^1, X^2, X^3) = (ct, x, y, z),

where we define X^0 = ct so that the time coordinate has the same dimension of distance as the other spatial dimensions, so that space and time are treated equally. Now the transformation of the contravariant components of the position 4-vector can be compactly written as:

X'^{\mu} = \Lambda^{\mu}{}_{\nu} X^{\nu},

where there is an implied summation on ν from 0 to 3, and \Lambda^{\mu}{}_{\nu} is a matrix.
More generally, all contravariant components of a four-vector transform from one frame to another frame by a Lorentz transformation:

T'^{\mu} = \Lambda^{\mu}{}_{\nu} T^{\nu}.
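As a quick numerical illustration of this transformation rule (a minimal Python sketch; the chosen boost speed and event coordinates are arbitrary illustrative values), one can apply the boost matrix to an event and verify that the spacetime interval is unchanged:

    import numpy as np

    beta = 0.6                               # boost speed v/c along the x-axis (illustrative)
    gamma = 1.0 / np.sqrt(1.0 - beta**2)

    # Lorentz boost matrix Lambda^mu_nu for the standard configuration (x-boost)
    L = np.array([[ gamma,      -gamma*beta, 0.0, 0.0],
                  [-gamma*beta,  gamma,      0.0, 0.0],
                  [ 0.0,         0.0,        1.0, 0.0],
                  [ 0.0,         0.0,        0.0, 1.0]])

    X = np.array([2.0, 1.0, 0.5, 0.0])       # event coordinates (ct, x, y, z), arbitrary values
    Xp = L @ X                               # X'^mu = Lambda^mu_nu X^nu

    eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # Minkowski metric, signature (-,+,+,+)
    print(Xp)
    print(X @ eta @ X, Xp @ eta @ Xp)        # the two intervals agree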
Examples of other 4-vectors include the four-velocity U^{\mu}, defined as the derivative of the position 4-vector with respect to proper time:

U^{\mu} = \frac{dX^{\mu}}{d\tau} = \gamma(v)\,(c, v_x, v_y, v_z),

where the Lorentz factor is:

\gamma(v) = \frac{1}{\sqrt{1 - v^2/c^2}}, \qquad v^2 = v_x^2 + v_y^2 + v_z^2.
The relativistic energy E = \gamma m c^2 and relativistic momentum \mathbf{p} = \gamma m \mathbf{v} of an object are respectively the timelike and spacelike components of a contravariant four-momentum vector:

P^{\mu} = m U^{\mu} = (E/c, p_x, p_y, p_z),

where m is the invariant mass.
The four-acceleration is the proper time derivative of the 4-velocity:

A^{\mu} = \frac{dU^{\mu}}{d\tau}.
The transformation rules for three-dimensional velocities and accelerations are very awkward; even above in standard configuration the velocity equations are quite complicated owing to their non-linearity. On the other hand, the transformation of four-velocity and four-acceleration are simpler by means of the Lorentz transformation matrix.
The four-gradient of a scalar field φ transforms covariantly rather than contravariantly:

\partial'_{\mu}\varphi = \Lambda_{\mu}{}^{\nu}\,\partial_{\nu}\varphi,

that is, using the inverse of the matrix \Lambda^{\mu}{}_{\nu} that transforms contravariant components; this identification of the gradient components with ordinary partial derivatives holds only in Cartesian coordinates. It is the covariant derivative that transforms with manifest covariance; in Cartesian coordinates this happens to reduce to the partial derivatives, but not in other coordinates.
More generally, the covariant components of a 4-vector transform according to the inverse Lorentz transformation:

T'_{\mu} = \Lambda_{\mu}{}^{\nu} T_{\nu},

where \Lambda_{\mu}{}^{\nu} is the reciprocal matrix of \Lambda^{\nu}{}_{\mu}.
The postulates of special relativity constrain the exact form the Lorentz transformation matrices take.
More generally, most physical quantities are best described as (components of) tensors. So to transform from one frame to another, we use the well-known tensor transformation law

T'^{\alpha'\beta'\cdots}{}_{\theta'\iota'\cdots} = \Lambda^{\alpha'}{}_{\alpha}\,\Lambda^{\beta'}{}_{\beta}\cdots\,\Lambda_{\theta'}{}^{\theta}\,\Lambda_{\iota'}{}^{\iota}\cdots\, T^{\alpha\beta\cdots}{}_{\theta\iota\cdots},

where \Lambda_{\theta'}{}^{\theta} is the reciprocal matrix of \Lambda^{\theta'}{}_{\theta}. All tensors transform by this rule.
An example of a four-dimensional second order antisymmetric tensor is the relativistic angular momentum, which has six components: three are the classical angular momentum, and the other three are related to the boost of the center of mass of the system. The derivative of the relativistic angular momentum with respect to proper time is the relativistic torque, which is also a second order antisymmetric tensor.
The electromagnetic field tensor is another second order antisymmetric tensor field, with six components: three for the electric field and another three for the magnetic field. There is also the stress–energy tensor for the electromagnetic field, namely the electromagnetic stress–energy tensor.
Metric
The metric tensor allows one to define the inner product of two vectors, which in turn allows one to assign a magnitude to the vector. Given the four-dimensional nature of spacetime, the Minkowski metric η has components (valid with suitably chosen coordinates) that can be arranged in a 4 × 4 matrix:

\eta_{\alpha\beta} = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},

which is equal to its reciprocal, \eta^{\alpha\beta}, in those frames. Throughout we use the signs as above; different authors use different conventions – see Minkowski metric alternative signs.
The Poincaré group is the most general group of transformations that preserves the Minkowski metric:

\eta_{\alpha\beta} = \eta_{\mu\nu}\,\Lambda^{\mu}{}_{\alpha}\,\Lambda^{\nu}{}_{\beta},
and this is the physical symmetry underlying special relativity.
The metric can be used for raising and lowering indices on vectors and tensors. Invariants can be constructed using the metric; the inner product of a 4-vector T with another 4-vector S is:

T^{\mu} S_{\mu} = T^{\mu} \eta_{\mu\nu} S^{\nu} = T_{\nu} S^{\nu} = \text{invariant scalar}.

Invariant means that it takes the same value in all inertial frames, because it is a scalar (rank 0 tensor), and so no Λ appears in its trivial transformation. The magnitude of the 4-vector T is the positive square root of the inner product with itself:

|\mathbf{T}| = \sqrt{T^{\mu} T_{\mu}}.
One can extend this idea to tensors of higher order; for a second order tensor we can form the invariants:

T^{\mu}{}_{\mu}, \qquad T^{\mu}{}_{\nu} T^{\nu}{}_{\mu}, \qquad T^{\mu}{}_{\nu} T^{\nu}{}_{\rho} T^{\rho}{}_{\mu},

and similarly for higher order tensors. Invariant expressions, particularly inner products of 4-vectors with themselves, provide equations that are useful for calculations, because one does not need to perform Lorentz transformations to determine the invariants.
Relativistic kinematics and invariance
The coordinate differentials also transform contravariantly:

dX'^{\mu} = \Lambda^{\mu}{}_{\nu}\, dX^{\nu},

so the squared length of the differential of the position four-vector dX^{\mu}, constructed using

dX^{\mu} dX_{\mu} = \eta_{\mu\nu}\, dX^{\mu} dX^{\nu} = -(c\,dt)^2 + dx^2 + dy^2 + dz^2,

is an invariant. Notice that when the line element dX^2 is negative, c\,d\tau = \sqrt{-dX^2} is the differential of proper time, while when dX^2 is positive, d\sigma = \sqrt{dX^2} is the differential of the proper distance.
The 4-velocity U^{\mu} has an invariant form:

U^{\mu} U_{\mu} = -c^2,

which means all velocity four-vectors have a magnitude of c. This is an expression of the fact that there is no such thing as being at coordinate rest in relativity: at the least, you are always moving forward through time. Differentiating the above equation by τ produces:

2\, A_{\mu} U^{\mu} = 0.
So in special relativity, the acceleration four-vector and the velocity four-vector are orthogonal.
Relativistic dynamics and invariance
The invariant magnitude of the momentum 4-vector generates the energy–momentum relation:

P^{\mu} P_{\mu} = -m^2 c^2, \qquad \text{equivalently} \qquad E^2 = (pc)^2 + (mc^2)^2.
We can work out what this invariant is by first arguing that, since it is a scalar, it does not matter in which reference frame we calculate it, and then by transforming to a frame where the total momentum is zero.
We see that the rest energy is an independent invariant. A rest energy can be calculated even for particles and systems in motion, by translating to a frame in which momentum is zero.
The rest energy is related to the mass according to the celebrated equation discussed above:

E_{\text{rest}} = m c^2.
The mass of systems measured in their center of momentum frame (where total momentum is zero) is given by the total energy of the system in this frame. It may not be equal to the sum of individual system masses measured in other frames.
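A standard textbook illustration of this point (not drawn from the discussion above) is a system of two photons of equal energy E travelling in opposite directions. Each photon is massless, yet the total momentum of the system is zero and its total energy is 2E, so the invariant mass of the system is

M = \sqrt{(E_{\text{tot}}/c^2)^2 - (|\mathbf{p}_{\text{tot}}|/c)^2} = 2E/c^2,

which is not the sum (namely zero) of the masses of its constituents.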
To use Newton's third law of motion, both forces must be defined as the rate of change of momentum with respect to the same time coordinate. That is, it requires the 3D force defined above. Unfortunately, there is no tensor in 4D that contains the components of the 3D force vector among its components.
If a particle is not traveling at c, one can transform the 3D force from the particle's co-moving reference frame into the observer's reference frame. This yields a 4-vector called the four-force. It is the rate of change of the above energy–momentum four-vector with respect to proper time. The covariant version of the four-force is:

F_{\nu} = \frac{dP_{\nu}}{d\tau}.
In the rest frame of the object, the time component of the four-force is zero unless the "invariant mass" of the object is changing (this requires a non-closed system in which energy/mass is being directly added or removed from the object) in which case it is the negative of that rate of change of mass, times c. In general, though, the components of the four-force are not equal to the components of the three-force, because the three force is defined by the rate of change of momentum with respect to coordinate time, that is, dp/dt while the four-force is defined by the rate of change of momentum with respect to proper time, that is, dp/dτ.
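For a particle of constant invariant mass, the component form implied by this distinction can be written out explicitly (a standard result, quoted here for reference). Since d\tau = dt/\gamma,

F^{\mu} = \frac{dP^{\mu}}{d\tau} = \gamma\left(\frac{1}{c}\frac{dE}{dt},\ \frac{d\mathbf{p}}{dt}\right) = \gamma\left(\frac{\mathbf{f}\cdot\mathbf{u}}{c},\ \mathbf{f}\right),

so each spatial component of the four-force is γ times the corresponding component of the three-force \mathbf{f} = d\mathbf{p}/dt, and the time component is γ times the delivered power divided by c.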
In a continuous medium, the 3D density of force combines with the density of power to form a covariant 4-vector. The spatial part is the result of dividing the force on a small cell (in 3-space) by the volume of that cell. The time component is −1/c times the power transferred to that cell divided by the volume of the cell. This will be used below in the section on electromagnetism.
| Physical sciences | Theory of relativity | null |
26985 | https://en.wikipedia.org/wiki/Salinity | Salinity | Salinity () is the saltiness or amount of salt dissolved in a body of water, called saline water (see also soil salinity). It is usually measured in g/L or g/kg (grams of salt per liter/kilogram of water; the latter is dimensionless and equal to ‰).
Salinity is an important factor in determining many aspects of the chemistry of natural waters and of biological processes within it, and is a thermodynamic state variable that, along with temperature and pressure, governs physical characteristics like the density and heat capacity of the water.
A contour line of constant salinity is called an isohaline, or sometimes isohale.
Definitions
Salinity in rivers, lakes, and the ocean is conceptually simple, but technically challenging to define and measure precisely. Conceptually the salinity is the quantity of dissolved salt content of the water. Salts are compounds like sodium chloride, magnesium sulfate, potassium nitrate, and sodium bicarbonate which dissolve into ions. The concentration of dissolved chloride ions is sometimes referred to as chlorinity. Operationally, dissolved matter is defined as that which can pass through a very fine filter (historically a filter with a pore size of 0.45 μm, but later usually 0.2 μm). Salinity can be expressed in the form of a mass fraction, i.e. the mass of the dissolved material in a unit mass of solution.
Seawater typically has a mass salinity of around 35 g/kg, although lower values are typical near coasts where rivers enter the ocean. Rivers and lakes can have a wide range of salinities, from less than 0.01 g/kg to a few g/kg, although there are many places where higher salinities are found. The Dead Sea has a salinity of more than 200 g/kg. Precipitation typically has a TDS of 20 mg/kg or less.
Whatever pore size is used in the definition, the resulting salinity value of a given sample of natural water will not vary by more than a few percent. Physical oceanographers working in the abyssal ocean, however, are often concerned with precision and intercomparability of measurements by different researchers, at different times, to almost five significant digits. A bottled seawater product known as IAPSO Standard Seawater is used by oceanographers to standardize their measurements with enough precision to meet this requirement.
Composition
Measurement and definition difficulties arise because natural waters contain a complex mixture of many different elements from different sources (not all from dissolved salts) in different molecular forms. The chemical properties of some of these forms depend on temperature and pressure. Many of these forms are difficult to measure with high accuracy, and in any case complete chemical analysis is not practical when analyzing multiple samples. Different practical definitions of salinity result from different attempts to account for these problems, to different levels of precision, while still remaining reasonably easy to use.
For practical reasons salinity is usually related to the sum of masses of a subset of these dissolved chemical constituents (so-called solution salinity), rather than to the unknown mass of salts that gave rise to this composition (an exception is when artificial seawater is created). For many purposes this sum can be limited to a set of eight major ions in natural waters, although for seawater at highest precision an additional seven minor ions are also included. The major ions dominate the inorganic composition of most (but by no means all) natural waters. Exceptions include some pit lakes and waters from some hydrothermal springs.
The concentrations of dissolved gases like oxygen and nitrogen are not usually included in descriptions of salinity. However, carbon dioxide gas, which when dissolved is partially converted into carbonates and bicarbonates, is often included. Silicon in the form of silicic acid, which usually appears as a neutral molecule in the pH range of most natural waters, may also be included for some purposes (e.g., when salinity/density relationships are being investigated).
Seawater
The term 'salinity' is, for oceanographers, usually associated with one of a set of specific measurement techniques. As the dominant techniques evolve, so do different descriptions of salinity. Salinities were largely measured using titration-based techniques before the 1980s. Titration with silver nitrate could be used to determine the concentration of halide ions (mainly chlorine and bromine) to give a chlorinity. The chlorinity was then multiplied by a factor to account for all other constituents. The resulting 'Knudsen salinities' are expressed in units of parts per thousand (ppt or ‰).
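As a worked illustration, using the conventional conversion factor of 1.80655 (a standard value that is not quoted in the text above): S (‰) = 1.80655 × Cl (‰), so a chlorinity of 19.37 ‰ corresponds to a Knudsen salinity of about 1.80655 × 19.37 ≈ 35.00 ‰, consistent with the reference seawater sample discussed below.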
The use of electrical conductivity measurements to estimate the ionic content of seawater led to the development of the scale called the practical salinity scale 1978 (PSS-78). Salinities measured using PSS-78 do not have units. The suffix psu or PSU (denoting practical salinity unit) is sometimes added to PSS-78 measurement values. The addition of PSU as a unit after the value is "formally incorrect and strongly discouraged".
In 2010 a new standard for the properties of seawater called the thermodynamic equation of seawater 2010 (TEOS-10) was introduced, advocating absolute salinity as a replacement for practical salinity, and conservative temperature as a replacement for potential temperature. This standard includes a new scale called the reference composition salinity scale. Absolute salinities on this scale are expressed as a mass fraction, in grams per kilogram of solution. Salinities on this scale are determined by combining electrical conductivity measurements with other information that can account for regional changes in the composition of seawater. They can also be determined by making direct density measurements.
A sample of seawater from most locations with a chlorinity of 19.37 ppt will have a Knudsen salinity of 35.00 ppt, a PSS-78 practical salinity of about 35.0, and a TEOS-10 absolute salinity of about 35.2 g/kg. The electrical conductivity of this water at a temperature of 15 °C is 42.9 mS/cm.
On the global scale, it is extremely likely that human-caused climate change has contributed to observed surface and subsurface salinity changes since the 1950s, and projections of surface salinity changes throughout the 21st century indicate that fresh ocean regions will continue to get fresher and salty regions will continue to get saltier.
Salinity serves as a tracer of different water masses. Surface water is pulled in to replace sinking water, and in turn eventually becomes cold and salty enough to sink itself. The salinity distribution thus contributes to shaping the oceanic circulation.
Lakes and rivers
Limnologists and chemists often define salinity in terms of mass of salt per unit volume, expressed in units of mg/L or g/L. It is implied, although often not stated, that this value applies accurately only at some reference temperature because solution volume varies with temperature. Values presented in this way are typically accurate to the order of 1%. Limnologists also use electrical conductivity, or "reference conductivity", as a proxy for salinity. This measurement may be corrected for temperature effects, and is usually expressed in units of μS/cm.
A river or lake water with a salinity of around 70 mg/L will typically have a specific conductivity at 25 °C of between 80 and 130 μS/cm. The actual ratio depends on the ions present. The actual conductivity usually changes by about 2% per degree Celsius, so the measured conductivity at 5 °C might only be in the range of 50–80 μS/cm.
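A rough, linear estimate using the figure of about 2% per degree Celsius reproduces these numbers: κ(5 °C) ≈ κ(25 °C) × [1 − 0.02 × (25 − 5)] ≈ 0.6 × (80–130 μS/cm) ≈ 48–78 μS/cm, in line with the 50–80 μS/cm quoted above.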
Direct density measurements are also used to estimate salinities, particularly in highly saline lakes. Sometimes density at a specific temperature is used as a proxy for salinity. At other times an empirical salinity/density relationship developed for a particular body of water is used to estimate the salinity of samples from a measured density.
Classification of water bodies based upon salinity
Marine waters are those of the ocean, another term for which is euhaline seas. The salinity of euhaline seas is 30 to 35 ‰. Brackish seas or waters have salinity in the range of 0.5 to 29 ‰ and metahaline seas from 36 to 40 ‰. These waters are all regarded as thalassic because their salinity is derived from the ocean and defined as homoiohaline if salinity does not vary much over time (essentially constant). The table on the right, modified from Por (1972), follows the "Venice system" (1959).
In contrast to homoiohaline environments are certain poikilohaline environments (which may also be thalassic) in which the salinity variation is biologically significant. Poikilohaline water salinities may range anywhere from 0.5 to greater than 300 ‰. The important characteristic is that these waters tend to vary in salinity over some biologically meaningful range seasonally or on some other roughly comparable time scale. Put simply, these are bodies of water with quite variable salinity.
Highly saline water, from which salts crystallize (or are about to), is referred to as brine.
Environmental considerations
Salinity is an ecological factor of considerable importance, influencing the types of organisms that live in a body of water. As well, salinity influences the kinds of plants that will grow either in a water body, or on land fed by a body of water (or by groundwater). A plant adapted to saline conditions is called a halophyte. Halophytes that are tolerant to residual sodium carbonate salinity are called glasswort, saltwort or barilla plants. Organisms (mostly bacteria) that can live in very salty conditions are classified as extremophiles, or halophiles specifically. An organism that can withstand a wide range of salinities is euryhaline.
Salts are expensive to remove from water, and salt content is an important factor in water use, factoring into potability and suitability for irrigation. Increases in salinity have been observed in lakes and rivers in the United States, due to common road salt and other salt de-icers in runoff.
The degree of salinity in oceans is a driver of the world's ocean circulation, where density changes due to both salinity changes and temperature changes at the surface of the ocean produce changes in buoyancy, which cause the sinking and rising of water masses. Changes in the salinity of the oceans are thought to contribute to global changes in carbon dioxide as more saline waters are less soluble to carbon dioxide. In addition, during glacial periods, the hydrography is such that a possible cause of reduced circulation is the production of stratified oceans. In such cases, it is more difficult to subduct water through the thermohaline circulation.
Not only is salinity a driver of ocean circulation, but changes in ocean circulation also affect salinity, particularly in the subpolar North Atlantic where from 1990 to 2010 increased contributions of Greenland meltwater were counteracted by increased northward transport of salty Atlantic waters. However, North Atlantic waters have become fresher since the mid-2010s due to increased Greenland meltwater flux.
| Physical sciences | Water: General | Earth science |
26997 | https://en.wikipedia.org/wiki/Scientist | Scientist | A scientist is a person who researches to advance knowledge in an area of the natural sciences.
In classical antiquity, there was no real ancient analog of a modern scientist. Instead, philosophers engaged in the philosophical study of nature called natural philosophy, a precursor of natural science. Though Thales (c. 624–545 BC) was arguably the first scientist for describing how cosmic events may be seen as natural, not necessarily caused by gods, it was not until the 19th century that the term scientist came into regular use after it was coined by the theologian, philosopher, and historian of science William Whewell in 1833.
History
The roles of "scientists", and their predecessors before the emergence of modern scientific disciplines, have evolved considerably over time. Scientists of different eras (and before them, natural philosophers, mathematicians, natural historians, natural theologians, engineers, and others who contributed to the development of science) have had widely different places in society, and the social norms, ethical values, and epistemic virtues associated with scientists—and expected of them—have changed over time as well. Accordingly, many different historical figures can be identified as early scientists, depending on which characteristics of modern science are taken to be essential.
Some historians point to the Scientific Revolution that began in 16th century as the period when science in a recognizably modern form developed. It was not until the 19th century that sufficient socioeconomic changes had occurred for scientists to emerge as a major profession.
Classical antiquity
Knowledge about nature in classical antiquity was pursued by many kinds of scholars. Greek contributions to science—including works of geometry and mathematical astronomy, early accounts of biological processes and catalogs of plants and animals, and theories of knowledge and learning—were produced by philosophers and physicians, as well as practitioners of various trades. These roles, and their associations with scientific knowledge, spread with the Roman Empire and, with the spread of Christianity, became closely linked to religious institutions in most European countries. Astrology and astronomy became an important area of knowledge, and the role of astronomer/astrologer developed with the support of political and religious patronage. By the time of the medieval university system, knowledge was divided into the trivium—philosophy, including natural philosophy—and the quadrivium—mathematics, including astronomy. Hence, the medieval analogs of scientists were often either philosophers or mathematicians. Knowledge of plants and animals was broadly the province of physicians.
Middle Ages
Science in medieval Islam generated some new modes of developing natural knowledge, although still within the bounds of existing social roles such as philosopher and mathematician. Many proto-scientists from the Islamic Golden Age are considered polymaths, in part because of the lack of anything corresponding to modern scientific disciplines. Many of these early polymaths were also religious priests and theologians: for example, Alhazen and al-Biruni were mutakallimiin; the physician Avicenna was a hafiz; the physician Ibn al-Nafis was a hafiz, muhaddith and ulema; the botanist Otto Brunfels was a theologian and historian of Protestantism; the astronomer and physician Nicolaus Copernicus was a priest. Figures of the Italian Renaissance such as Leonardo da Vinci, Michelangelo, Galileo Galilei and Gerolamo Cardano have been considered among the most recognizable polymaths.
Renaissance
During the Renaissance, Italians made substantial contributions in science. Leonardo da Vinci made significant discoveries in paleontology and anatomy. The father of modern science, Galileo Galilei, made key improvements on the thermometer and telescope which allowed him to observe and clearly describe the solar system. Descartes was not only a pioneer of analytic geometry but formulated a theory of mechanics and advanced ideas about the origins of animal movement and perception. Vision interested the physicists Young and Helmholtz, who also studied optics, hearing and music. Newton extended Descartes's mathematics by inventing calculus (at the same time as Leibniz). He provided a comprehensive formulation of classical mechanics and investigated light and optics. Fourier founded a new branch of mathematics – infinite, periodic series – studied heat flow and infrared radiation, and discovered the greenhouse effect. Girolamo Cardano, Blaise Pascal, Pierre de Fermat, Von Neumann, Turing, Khinchin, Markov and Wiener, all mathematicians, made major contributions to science and probability theory, including the ideas behind computers, and some of the foundations of statistical mechanics and quantum mechanics. Many mathematically inclined scientists, including Galileo, were also musicians.
There are many compelling stories in medicine and biology, such as the development of ideas about the circulation of blood from Galen to Harvey. Some scholars and historians credit Christianity with having contributed to the rise of the Scientific Revolution.
Age of Enlightenment
During the age of Enlightenment, Luigi Galvani, the pioneer of bioelectromagnetics, discovered animal electricity. He discovered that a charge applied to the spinal cord of a frog could generate muscular spasms throughout its body. Charges could make frog legs jump even if the legs were no longer attached to a frog. While cutting a frog leg, Galvani's steel scalpel touched a brass hook that was holding the leg in place. The leg twitched. Further experiments confirmed this effect, and Galvani was convinced that he was seeing the effects of what he called animal electricity, the life force within the muscles of the frog. At the University of Pavia, Galvani's colleague Alessandro Volta was able to reproduce the results, but was sceptical of Galvani's explanation.
Lazzaro Spallanzani is one of the most influential figures in experimental physiology and the natural sciences. His investigations have exerted a lasting influence on the medical sciences. He made important contributions to the experimental study of bodily functions and animal reproduction.
Francesco Redi discovered that microorganisms can cause disease.
19th century
Until the late 19th or early 20th century, scientists were still referred to as "natural philosophers" or "men of science".
English philosopher and historian of science William Whewell coined the term scientist in 1833, and it first appeared in print in Whewell's anonymous 1834 review of Mary Somerville's On the Connexion of the Physical Sciences published in the Quarterly Review. Whewell wrote of "an increasing proclivity of separation and dismemberment" in the sciences; while highly specific terms proliferated—chemist, mathematician, naturalist—the broad term "philosopher" was no longer satisfactory to group together those who pursued science, without the caveats of "natural" or "experimental" philosopher. Whewell compared these increasing divisions with Somerville's aim of "[rendering] a most important service to science" "by showing how detached branches have, in the history of science, united by the discovery of general principles." Whewell reported in his review that members of the British Association for the Advancement of Science had been complaining at recent meetings about the lack of a good term for "students of the knowledge of the material world collectively." Alluding to himself, he noted that "some ingenious gentleman proposed that, by analogy with artist, they might form [the word] scientist, and added that there could be no scruple in making free with this term since we already have such words as economist, and atheist—but this was not generally palatable".
Whewell proposed the word again more seriously (and not anonymously) in his 1840 The Philosophy of the Inductive Sciences:
He also proposed the term physicist at the same time, as a counterpart to the French word physicien. Neither term gained wide acceptance until decades later; scientist became a common term in the late 19th century in the United States and around the turn of the 20th century in Great Britain. By the twentieth century, the modern notion of science as a special brand of information about the world, practiced by a distinct group and pursued through a unique method, was essentially in place.
20th century
Marie Curie became the first woman to win the Nobel Prize and the first person to win it twice. Her efforts led to the development of nuclear energy and radiotherapy for the treatment of cancer. In 1922, she was appointed a member of the International Commission on Intellectual Co-operation by the Council of the League of Nations. She campaigned for scientists' right to patent their discoveries and inventions. She also campaigned for free access to international scientific literature and for internationally recognized scientific symbols.
Profession
As a profession, the scientist of today is widely recognized. However, there is no formal process to determine who is a scientist and who is not a scientist. Anyone can be a scientist in some sense. Some professions have legal requirements for their practice (e.g. licensure) and some scientists are independent scientists meaning that they practice science on their own, but to practice science there are no known licensure requirements.
Education
In modern times, many professional scientists are trained in an academic setting (e.g., universities and research institutes), mostly at the level of graduate schools. Upon completion, they would normally attain an academic degree, with the highest degree being a doctorate such as a Doctor of Philosophy (PhD). Although graduate education for scientists varies among institutions and countries, some common training requirements include specializing in an area of interest, publishing research findings in peer-reviewed scientific journals and presenting them at scientific conferences, giving lectures or teaching, and defending a thesis (or dissertation) during an oral examination. To aid them in this endeavor, graduate students often work under the guidance of a mentor, usually a senior scientist, which may continue after the completion of their doctorates whereby they work as postdoctoral researchers.
Career
After the completion of their training, many scientists pursue careers in a variety of work settings and conditions. In 2017, the British scientific journal Nature published the results of a large-scale survey of more than 5,700 doctoral students worldwide, asking them which sectors of the economy they would like to work in. A little over half of the respondents wanted to pursue a career in academia, with smaller proportions hoping to work in industry, government, and nonprofit environments.
Other motivations are recognition by their peers and prestige. The Nobel Prize, a widely regarded prestigious award, is awarded annually to those who have achieved scientific advances in the fields of medicine, physics, and chemistry.
Some scientists have a desire to apply scientific knowledge for the benefit of people's health, the nations, the world, nature, or industries (academic scientist and industrial scientist). Scientists tend to be less motivated by direct financial reward for their work than other careers. As a result, scientific researchers often accept lower average salaries when compared with many other professions which require a similar amount of training and qualification.
Research interests
Scientists include experimentalists who mainly perform experiments to test hypotheses, and theoreticians who mainly develop models to explain existing data and predict new results. There is a continuum between the two activities and the division between them is not clear-cut, with many scientists performing both tasks.
Those considering science as a career often look to the frontiers. These include cosmology and biology, especially molecular biology and the human genome project. Other areas of active research include the exploration of matter at the scale of elementary particles as described by high-energy physics, and materials science, which seeks to discover and design new materials. Others choose to study brain function and neurotransmitters, which is considered by many to be the "final frontier". There are many important discoveries to make regarding the nature of the mind and human thought, much of which still remains unknown.
By specialization
Natural science
Physical science
Life science
Social science
Formal science
Applied
Interdisciplinary
By employer
Academic
Independent scientist
Industrial/applied scientist
Citizen scientist
Government scientist
Demography
By country
The number of scientists is vastly different from country to country. For instance, there are only four full-time scientists per 10,000 workers in India, while this number is 79 for the United Kingdom, and 85 for the United States.
United States
According to the National Science Foundation, 4.7 million people with science degrees worked in the United States in 2015, across all disciplines and employment sectors. The figure included twice as many men as women. Of that total, 17% worked in academia, that is, at universities and undergraduate institutions, and men held 53% of those positions. 5% of scientists worked for the federal government, and about 3.5% were self-employed. Of the latter two groups, two-thirds were men. 59% of scientists in the United States were employed in industry or business, and another 6% worked in non-profit positions.
By gender
Scientist and engineering statistics are usually intertwined, but they indicate that women enter the field far less than men, though this gap is narrowing. The proportion of science and engineering doctorates awarded to women rose from a mere 7 percent in 1970 to 34 percent in 1985, and in engineering alone the number of bachelor's degrees awarded to women rose from only 385 in 1975 to more than 11,000 in 1985.
| Physical sciences | Science basics | Basics and measurement |
27000 | https://en.wikipedia.org/wiki/Smog | Smog | Smog, or smoke fog, is a type of intense air pollution. The word "smog" was coined in the early 20th century, and is a portmanteau of the words smoke and fog to refer to smoky fog due to its opacity, and odor. The word was then intended to refer to what was sometimes known as pea soup fog, a familiar and serious problem in London from the 19th century to the mid-20th century, where it was commonly known as a London particular or London fog. This kind of visible air pollution is composed of nitrogen oxides, sulfur oxide, ozone, smoke and other particulates. Man-made smog is derived from coal combustion emissions, vehicular emissions, industrial emissions, forest and agricultural fires and photochemical reactions of these emissions.
Smog is often categorized as being either summer smog or winter smog. Summer smog is primarily associated with the photochemical formation of ozone. During the summer season when the temperatures are warmer and there is more sunlight present, photochemical smog is the dominant type of smog formation. During the winter months when the temperatures are colder, and atmospheric inversions are common, there is an increase in coal and other fossil fuel usage to heat homes and buildings. These combustion emissions, together with the lack of pollutant dispersion under inversions, characterize winter smog formation. Smog formation in general relies on both primary and secondary pollutants. Primary pollutants are emitted directly from a source, such as emissions of sulfur dioxide from coal combustion. Secondary pollutants, such as ozone, are formed when primary pollutants undergo chemical reactions in the atmosphere.
Photochemical smog, as found for example in Los Angeles, is a type of air pollution derived from vehicular emission from internal combustion engines and industrial fumes. These pollutants react in the atmosphere with sunlight to form secondary pollutants that also combine with the primary emissions to form photochemical smog. In certain other cities, such as Delhi, smog severity is often aggravated by stubble burning in neighboring agricultural areas since the 1980s. The atmospheric pollution levels of Los Angeles, Beijing, Delhi, Lahore, Mexico City, Tehran and other cities are often increased by an inversion that traps pollution close to the ground. The developing smog is usually toxic to humans and can cause severe sickness, a shortened life span, or premature death.
Etymology
Coinage of the term "smog" has been attributed to Henry Antoine Des Voeux in his 1905 paper, "Fog and Smoke" for a meeting of the Public Health Congress. The 26 July 1905 edition of the London newspaper Daily Graphic quoted Des Voeux, "He said it required no science to see that there was something produced in great cities which was not found in the country, and that was smoky fog, or what was known as 'smog'." The following day the newspaper stated that "Dr. Des Voeux did a public service in coining a new word for the London fog."
However, the term appeared twenty-five years earlier than Des Voeux's paper, in the Santa Cruz & Monterey Illustrated Handbook published in 1880, and also appears in print in a column quoting from the book in the 3 July 1880 Santa Cruz Weekly Sentinel. On 17 December 1881, in the publication Sporting Times, the author claims to have invented the word: "The 'Smog', a word I have invented, combined of smoke and fog, to designate the London atmosphere..."
Anthropogenic causes
Coal
Coal fire can emit significant clouds of smoke that contribute to the formation of winter smog. Coal fires can be used to heat individual buildings or to provide energy in a power-producing plant. Air pollution from this source has been reported in England since the Middle Ages. London, in particular, was notorious up through the mid-20th century for its coal-caused smogs, which were nicknamed "pea-soupers". Air pollution of this type is still a problem in areas that generate significant smoke from burning coal. The emissions from coal combustion are one of the main causes of air pollution in China. Especially during autumn and winter when coal-fired heating ramps up, the amount of produced smoke at times forces some Chinese cities to close down roads, schools or airports. One prominent example for this was China's Northeastern city of Harbin in 2013.
Transportation emissions
Traffic emissions – such as from trucks, buses, and automobiles – also contribute to the formation of smog. Airborne by-products from vehicle exhaust systems and air conditioning cause air pollution and are a major ingredient in the creation of smog in some large cities.
The major culprits from transportation sources are carbon monoxide (CO), nitrogen oxides (NO and NO2) and volatile organic compounds including hydrocarbons (hydrocarbons are the main component of petroleum fuels such as gasoline and diesel fuel). Transportation emissions also include sulfur dioxides and particulate matter but in much smaller quantities than the pollutants mentioned previously. The nitrogen oxides and volatile organic compounds can undergo a series of chemical reactions with sunlight, heat, ammonia, moisture, and other compounds to form the noxious vapors, ground level ozone, and particles that comprise smog.
Photochemical smog
Photochemical smog, often referred to as "summer smog", is the chemical reaction of sunlight, nitrogen oxides and volatile organic compounds in the atmosphere, which leaves airborne particles and ground-level ozone. Photochemical smog depends on primary pollutants as well as the formation of secondary pollutants. These primary pollutants include nitrogen oxides, particularly nitric oxide (NO) and nitrogen dioxide (NO2), and volatile organic compounds. The relevant secondary pollutants include peroxylacyl nitrates (PAN), tropospheric ozone, and aldehydes. An important secondary pollutant for photochemical smog is ozone, which is formed when hydrocarbons (HC) and nitrogen oxides (NOx) combine in the presence of sunlight; nitrogen dioxide (NO2), which is formed as nitric oxide (NO) combines with oxygen (O2) in the air. In addition, when SO2 and NOx are emitted they eventually are oxidized in the troposphere to nitric acid and sulfuric acid, which, when mixed with water, form the main components of acid rain. All of these harsh chemicals are usually highly reactive and oxidizing. Photochemical smog is therefore considered to be a problem of modern industrialization. It is present in all modern cities, but it is more common in cities with sunny, warm, dry climates and a large number of motor vehicles. Because it travels with the wind, it can affect sparsely populated areas as well.
The composition and chemical reactions involved in photochemical smog were not understood until the 1950s. In 1948, flavor chemist Arie Haagen-Smit adapted some of his equipment to collect chemicals from polluted air, and identified ozone as a component of Los Angeles smog. Haagen-Smit went on to discover that nitrogen oxides from automotive exhausts and gaseous hydrocarbons from cars and oil refineries, exposed to sunlight, were key ingredients in the formation of ozone and photochemical smog. Haagen-Smit worked with Arnold Beckman, who developed various equipment for detecting smog, ranging from an "Apparatus for recording gas concentrations in the atmosphere" patented on 7 October 1952, to "air quality monitoring vans" for use by government and industry.
Formation and reactions
During the morning rush hour, a high concentration of nitric oxide and hydrocarbons are emitted to the atmosphere, mostly via on-road traffic but also from industrial sources. Some hydrocarbons are rapidly oxidized by OH· and form peroxy radicals, which convert nitric oxide (NO) to nitrogen dioxide (NO2).
(1) R• + O2 + M → RO2• + M
(2) RO2• + NO → NO2 + RO•
(3) HO2• + NO → NO2 + OH•
Nitrogen dioxide (NO2) and nitric oxide (NO) further react with ozone (O3) in a series of chemical reactions:
(4) NO2 + hν → O(³P) + NO
(5) O(³P) + O2 + M → O3 + M (+ heat)
(6) O3 + NO → NO2 + O2
This series of equations is referred to as the photostationary state (PSS). However, because of the presence of Reactions 2 and 3, NOx and ozone are not in a perfectly steady state. When Reactions 2 and 3, rather than Reaction 6, convert NO back to NO2, no O3 molecule is destroyed in the process. Therefore, the concentration of ozone keeps increasing throughout the day. This mechanism can escalate the formation of ozone in smog. Other reactions, such as the photooxidation of formaldehyde (HCHO), a common secondary pollutant, can also contribute to the increased concentrations of ozone and NO2. Photochemical smog is more prevalent during summer days since incident solar radiation fluxes are high, which favors the formation of ozone (Reactions 4 and 5). The presence of a temperature inversion layer is another important factor, because it prevents the vertical convective mixing of the air and thus allows the pollutants, including ozone, to accumulate near ground level, which again favors the formation of photochemical smog.
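Setting the production of O3 by Reactions 4 and 5 equal to its loss in Reaction 6 gives the photostationary-state (Leighton) relationship; the rate symbols below are introduced here for illustration and do not appear in the text above:

[O3] = j_NO2 [NO2] / (k_6 [NO]),

where j_NO2 is the photolysis rate of NO2 (Reaction 4) and k_6 is the rate constant of Reaction 6. When peroxy radicals (Reactions 2 and 3) convert NO back to NO2 without consuming O3, the ratio [NO2]/[NO] rises and the ozone concentration given by this expression increases.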
There are certain reactions that can limit the formation of O3 in smog. The main limiting reaction in polluted areas is:
(7) NO2 + OH· + M → HNO3 + M
This reaction removes NO2, which limits the amount of O3 that can be produced from its photolysis (Reaction 4). HNO3, nitric acid, is a sticky compound that is easily deposited onto surfaces (dry deposition) or dissolved in water and rained out (wet deposition). Both pathways are common in the atmosphere and can efficiently remove radicals and nitrogen dioxide.
Natural causes
Volcanoes
An erupting volcano can emit high levels of sulfur dioxide along with a large quantity of particulate matter, two key components in the creation of smog. However, the smog created as a result of a volcanic eruption is often known as vog to distinguish it as a natural occurrence. The chemical reactions that form smog following a volcanic eruption are different from the reactions that form photochemical smog. The term smog encompasses the effect when a large number of gas-phase molecules and particulate matter are emitted to the atmosphere, creating a visible haze. The event causing such emissions can vary but still result in the formation of smog.
Plants
Plants are another natural source of hydrocarbons that can undergo reactions in the atmosphere and produce smog. Globally, both plants and soil contribute a substantial amount to the production of hydrocarbons, mainly by producing isoprene and terpenes. Hydrocarbons released by plants can often be more reactive than man-made hydrocarbons. For example, when plants release isoprene, it reacts very quickly in the atmosphere with hydroxyl radicals. These reactions produce hydroperoxides, which increase ozone formation.
Health effects
Smog is a serious problem in many cities and continues to harm human health. Ground-level ozone, sulfur dioxide, nitrogen dioxide and carbon monoxide are especially harmful for senior citizens, children, and people with heart and lung conditions such as emphysema, bronchitis, and asthma. It can inflame breathing passages, decrease the lungs' working capacity, cause shortness of breath, pain when inhaling deeply, wheezing, and coughing. It can cause eye and nose irritation and it dries out the protective membranes of the nose and throat and interferes with the body's ability to fight infection, increasing susceptibility to illness. Hospital admissions and respiratory deaths often increase during periods when ozone levels are high.
There is a lack of knowledge on the long-term effects of air pollution exposure and the origin of asthma. An experiment was carried out using intense air pollution similar to that of the 1952 Great Smog of London. Its results indicated a link between early-life pollution exposure and the later development of asthma, suggesting an ongoing effect of the Great Smog.
Modern studies continue to find links between mortality and the presence of smog. One study, published in the journal Nature, found that smog episodes in Jinan, a large city in eastern China, during 2011–15 were associated with a 5.87% (95% CI 0.16–11.58%) increase in the rate of overall mortality. This study highlights the effect of exposure to air pollution on the rate of mortality in China. A similar study in Xi'an found an association between ambient air pollution and increased mortality from respiratory diseases.
Levels of unhealthy exposure
The U.S. EPA has developed an air quality index to help explain air pollution levels to the general public. Eight-hour average ozone concentrations of 85 to 104 ppbv are described as "Unhealthy for Sensitive Groups", 105 to 124 ppbv as "Unhealthy" and 125 to 404 ppbv as "Very Unhealthy". The "Very Unhealthy" ranges for some other pollutants are: 355–424 μg m−3 for PM10, 15.5–30.4 ppm for CO and 0.65–1.24 ppm for NO2.
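As a rough illustration, the ozone thresholds quoted above can be turned into a simple lookup. The sketch below covers only the three ranges named in this paragraph and is not the EPA's full AQI calculation.

```python
# Simplified lookup of the EPA descriptors for 8-hour average ozone,
# using only the ppbv breakpoints quoted in the text above.

def ozone_descriptor(ppbv: float) -> str:
    if 85 <= ppbv <= 104:
        return "Unhealthy for Sensitive Groups"
    if 105 <= ppbv <= 124:
        return "Unhealthy"
    if 125 <= ppbv <= 404:
        return "Very Unhealthy"
    return "Outside the ranges quoted in the text"

print(ozone_descriptor(130))  # -> "Very Unhealthy"
```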
Premature deaths due to cancer and respiratory disease
In 2016, the Ontario Medical Association announced that smog is responsible for an estimated 9,500 premature deaths in the province each year.
A 20-year American Cancer Society study found that cumulative exposure also increases the likelihood of premature death from respiratory disease, implying the 8-hour standard may be insufficient.
Alzheimer risk
Tiny magnetic particles from air pollution have for the first time been discovered lodged in human brains, and researchers think they could be a possible cause of Alzheimer's disease.
Researchers at Lancaster University found abundant magnetite nanoparticles in brain tissue from 37 individuals, aged 3 to 92, who had lived in Mexico City or Manchester. This strongly magnetic mineral is toxic and has been implicated in the production of reactive oxygen species (free radicals) in the human brain, which are associated with neurodegenerative diseases including Alzheimer's disease.
Risk of certain birth defects
A study examining 806 women who had babies with birth defects between 1997 and 2006, and 849 women who had healthy babies, found that smog in the San Joaquin Valley area of California was linked to two types of neural tube defects: spina bifida (a condition involving, among other manifestations, certain malformations of the spinal column), and anencephaly (the underdevelopment or absence of part or all of the brain, which if not fatal usually results in profound impairment). An emerging cohort study in China linked early-life smog exposure to an increased risk for adverse pregnancy outcomes, in particular oxidative stress.
Low birth weight
According to a study published in The Lancet, even a very small (5 μg m−3) increase in PM2.5 exposure was associated with an 18% increase in the risk of low birth weight at delivery, and this relationship held even below currently accepted safe levels.
Other negative effects
Although the severe health effects of smog are the chief concern, intense haze from air pollution, dust storm particles, and bushfire smoke also reduces irradiance, which hurts both solar photovoltaic production and agricultural yields.
Areas affected
Smog can form in almost any climate where industries or cities release large amounts of air pollution, such as smoke or gases. However, it is worse during periods of warmer, sunnier weather when the upper air is warm enough to inhibit vertical circulation. It is especially prevalent in geologic basins encircled by hills or mountains. It often stays for an extended period of time over densely populated cities or urban areas and can build up to dangerous levels.
Asia
India
For the past few years, cities in northern India have been covered in a thick layer of winter smog. The situation is particularly severe in the national capital, Delhi. This smog is caused by the accumulation of particulate matter (a very fine type of dust and toxic gases) in the air due to stagnant air movement during winter. Moreover, during the post-monsoon to winter transition, air quality in the Indo-Gangetic Plain (IGP) worsens significantly due to shifts in weather patterns, such as changes in wind, temperature, and boundary layer mixing. The impact of emissions from both biomass burning and urban activities has intensified, leading to a rise in aerosols, mainly particulate matter. The nearby Himalayan region is also affected, as its mountainous topography traps air pollutants and worsens air quality, particularly in northern India.
Delhi is the most polluted city in the world and, according to one estimate, air pollution causes the death of about 10,500 people in Delhi every year. During 2013–14, peak levels of fine particulate matter (PM) in Delhi increased by about 44%, primarily due to high vehicular and industrial emissions, construction work and crop burning in adjoining states. Delhi has the highest level of the airborne particulate matter PM2.5, considered most harmful to health, at 153 micrograms per cubic metre. Rising air pollution levels have significantly increased lung-related ailments (especially asthma and lung cancer) among Delhi's children and women. The dense smog in Delhi during the winter season results in major air and rail traffic disruptions every year. According to Indian meteorologists, the average maximum temperature in Delhi during winter has declined notably since 1998 due to rising air pollution.
Environmentalists have criticized the Delhi government for not doing enough to curb air pollution and to inform people about air quality issues. Most of Delhi's residents are unaware of alarming levels of air pollution in the city and the health risks associated with it. Since the mid-1990s, Delhi has undertaken some measures to curb air pollution – Delhi has the third highest quantity of trees among Indian cities and the Delhi Transport Corporation operates the world's largest fleet of environmentally friendly compressed natural gas (CNG) buses. In 1996, the Centre for Science and Environment (CSE) started a public interest litigation in the Supreme Court of India that ordered the conversion of Delhi's fleet of buses and taxis to run on CNG and banned the use of leaded petrol in 1998. In 2003, Delhi won the United States Department of Energy's first 'Clean Cities International Partner of the Year' award for its "bold efforts to curb air pollution and support alternative fuel initiatives". The Delhi Metro has also been credited for significantly reducing air pollutants in the city.
However, according to several authors, most of these gains have been lost, especially due to stubble burning, a rise in the market share of diesel cars and a considerable decline in bus ridership. According to the CSE and the System of Air Quality and Weather Forecasting and Research (SAFAR), burning of agricultural waste in the nearby Punjab, Haryana and Uttar Pradesh regions results in severe intensification of smog over Delhi. The state government of adjoining Uttar Pradesh is considering imposing a ban on crop burning to reduce pollution in Delhi NCR, and an environmental panel has appealed to India's Supreme Court to impose a 30% cess on diesel cars.
China
Joint research between American and Chinese researchers in 2006 concluded that much of Beijing's pollution comes from surrounding cities and provinces. On average 35–60% of the ozone can be traced to sources outside the city. Shandong Province and Tianjin Municipality have a "significant influence on Beijing's air quality", partly due to the prevailing south/southeasterly flow during the summer and the mountains to the north and northwest.
Iran
In December 2005, schools and public offices were forced to close in Tehran and 1,600 people were taken to hospital, in a severe smog blamed largely on unfiltered car exhaust.
Mongolia
In the late 1990s, massive immigration to Ulaanbaatar from the countryside began. An estimated 150,000 households, mainly living in traditional Mongolian gers on the outskirts of Ulaanbaatar, burn wood and coal (some poor families even burn car tires and trash) to heat their homes during the harsh winter, which lasts from October to April, since these outskirts are not connected to the city's central heating system. A temporary solution to reduce smog was proposed in the form of stoves with improved efficiency, although with no visible results.
Coal-fired ger stoves release high levels of ash and other particulate matter (PM). When inhaled, these particles can settle in the lungs and respiratory tract and cause health problems. At two to 10 times above Mongolian and international air quality standards, Ulaanbaatar's PM rates are among the worst in the world, according to a December 2009 World Bank report. The Asian Development Bank (ADB) estimates that health costs related to this air pollution account for as much as 4 percent of Mongolia's GDP.
Southeast Asia
Smog is a regular problem in Southeast Asia caused by land and forest fires in Indonesia, especially Sumatra and Kalimantan, although the term haze is preferred in describing the problem. Farmers and plantation owners are usually responsible for the fires, which they use to clear tracts of land for further plantings. Those fires mainly affect Brunei, Indonesia, the Philippines, Malaysia, Singapore and Thailand, and occasionally Guam and Saipan. The economic losses of the fires in 1997 have been estimated at more than US$9 billion. This includes damage to agricultural production, destruction of forest lands, health, transportation, tourism, and other economic endeavours. Not included are social, environmental, and psychological problems and long-term health effects. The second-latest bout of haze in Malaysia, Singapore and the Malacca Straits occurred in October 2006 and was caused by smoke from fires in Indonesia being blown across the Strait of Malacca by south-westerly winds. A similar haze occurred in June 2013, with the PSI setting a new record in Singapore on 21 June at 12 pm with a reading of 401, which is in the "Hazardous" range.
The Association of Southeast Asian Nations (ASEAN) reacted. In 2002, the Agreement on Transboundary Haze Pollution was signed between all ASEAN nations. ASEAN formed a Regional Haze Action Plan (RHAP) and established a co-ordination and support unit (CSU). RHAP, with the help of Canada, established a monitoring and warning system for forest/vegetation fires and implemented a Fire Danger Rating System (FDRS). The Malaysian Meteorological Department (MMD) has issued a daily rating of fire danger since September 2003. Indonesia has been ineffective at enforcing legal policies on errant farmers.
Pakistan
Since the start of the winter season, heavy smog loaded with pollutants has covered major parts of Punjab, especially the city of Lahore, causing breathing problems and disrupting normal traffic. A 2022 study shows that the primary source of pollution in Lahore is traffic-related PM (from both exhaust and non-exhaust sources). Air quality in Punjab, Pakistan deteriorates markedly during the post-monsoon to winter transition, driven by shifts in weather patterns such as changes in wind, temperature, and boundary layer mixing. In the post-monsoon period, anthropogenic emissions from sources such as vehicle exhaust, industrial activities, and crop burning impact air quality across Punjab, Pakistan, affecting the region by 90–100%.
Doctors advised residents to stay indoors and wear facemasks outside.
United Kingdom
London
In 1306, concerns over air pollution were sufficient for Edward I to (briefly) ban coal fires in London. In 1661, John Evelyn's Fumifugium suggested burning fragrant wood instead of mineral coal, which he believed would reduce coughing. The "Ballad of Gresham College" the same year describes how the smoke "does our lungs and spirits choke, Our hanging spoil, and rust our iron."
Severe episodes of smog continued in the 19th and 20th centuries, mainly in the winter, and were nicknamed "pea-soupers," from the phrase "as thick as pea soup". The Great Smog of 1952 darkened the streets of London and killed approximately 4,000 people in the short time of four days (a further 8,000 died from its effects in the following weeks and months). Initially, a flu epidemic was blamed for the loss of life.
In 1956 the Clean Air Act started legally enforcing smokeless zones in the capital. There were areas where no soft coal was allowed to be burned in homes or in businesses, only coke, which produces no smoke. Because of the smokeless zones, reduced levels of sooty particulates eliminated the intense and persistent London smog.
It was after this that the great clean-up of London began. One by one, historic buildings which had gradually blackened externally over the previous two centuries had their stone facades cleaned and restored to their original appearance. Victorian buildings whose appearance changed dramatically after cleaning included the British Museum of Natural History. A more recent example was the Palace of Westminster, which was cleaned in the 1980s. A notable exception to the restoration trend was 10 Downing Street, whose bricks upon cleaning in the late 1950s proved to be naturally yellow; the smog-derived black color of the façade was considered so iconic that the bricks were painted black to preserve the image. Smog caused by traffic pollution, however, does still occur in modern London.
Other areas
Other areas of the United Kingdom were affected by smog, especially heavily industrialised areas.
The cities of Glasgow and Edinburgh, in Scotland, suffered smoke-laden fogs in 1909. Des Voeux, commonly credited with creating the "smog" moniker, presented a paper in 1911 to the Manchester Conference of the Smoke Abatement League of Great Britain about the fogs and resulting deaths.
One Birmingham resident described near black-out conditions in the 1900s before the Clean Air Act, with visibility so poor that cyclists had to dismount and walk to stay on the road.
On 29 April 2015, the UK Supreme Court ruled that the government must take immediate action to cut air pollution, following a case brought by environmental lawyers at ClientEarth.
Latin America
Mexico
Because Mexico City sits in a highland "bowl", cold air sinks down onto the urban area, trapping industrial and vehicle pollution underneath and turning it into the most infamously smog-plagued city of Latin America. Within one generation, the city has changed from being known for some of the cleanest air in the world to having some of the worst pollution, with pollutants like nitrogen dioxide at double or even triple international standards.
Chile
Similar to Mexico City, air pollution in the Santiago valley of Chile, located between the Andes and the Chilean Coast Range, makes Santiago the most infamously smog-plagued city of South America. Other aggravating factors are its latitude (31 degrees south) and the dry weather during most of the year.
North America
Canada
According to the Canadian Science Smog Assessment published in 2012, smog is responsible for detrimental effects on human and ecosystem health, as well as socioeconomic well-being across the country. It was estimated that the province of Ontario sustains $201 million in damages annually for selected crops, with estimated losses of tourism revenue due to decreased visibility of $7.5 million in Vancouver and $1.32 million in the Fraser Valley. Air pollution in British Columbia is of particular concern, especially in the Fraser Valley, because of a meteorological effect called an inversion, which decreases air dispersion and leads to smog concentration.
United States
Smog was brought to the attention of the general U.S. public in 1933 with the publication of the book "Stop That Smoke" by Henry Obermeyer, a New York public utility official, in which he pointed out its effect on human life and even the destruction of a farmer's spinach crop. Since then, the United States Environmental Protection Agency has designated over 300 U.S. counties as non-attainment areas for one or more pollutants tracked as part of the National Ambient Air Quality Standards. These areas are largely clustered around large metropolitan areas, with the largest contiguous non-attainment zones in California and the Northeast. Various U.S. and Canadian government agencies collaborate to produce real-time air quality maps and forecasts. To combat smog conditions, localities may declare "smog alert" days, such as in the Spare the Air program in the San Francisco Bay Area. In 1970, Congress enacted the Clean Air Act to regulate air pollutant emissions.
In the United States, smog pollution kills 24,000 Americans every year. The U.S. is among the dirtier countries in terms of smog, ranked 123 out of 195 countries measured, where 1 is cleanest and 195 is most smog polluted.
Los Angeles and the San Joaquin Valley
Because of their locations in low basins surrounded by mountains, Los Angeles and the San Joaquin Valley are notorious for their smog. Heavy automobile traffic, combined with the additional effects of the San Francisco Bay and Los Angeles/Long Beach port complexes, frequently contributes to further air pollution.
Los Angeles, in particular, is strongly predisposed to the accumulation of smog, because of the peculiarities of its geography and weather patterns. Los Angeles is situated in a flat basin with the ocean on one side and mountain ranges on three sides. A nearby cold ocean current depresses surface air temperatures in the area, resulting in an inversion layer: a phenomenon where air temperature increases, instead of decreasing, with altitude, suppressing thermals and restricting vertical convection. All taken together, this results in a relatively thin, enclosed layer of air above the city that cannot easily escape out of the basin and tends to accumulate pollution.
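A temperature inversion of the kind described above can be spotted in a vertical temperature profile wherever the temperature rises with altitude instead of falling. The Python sketch below illustrates this check; the sample sounding values are made up for the example.

```python
# Toy check for a temperature inversion in a vertical sounding profile:
# normally temperature falls with altitude; a layer in which it rises
# instead suppresses vertical mixing and can trap pollutants near the surface.
# The sample profile values are illustrative, not measured data.

def has_inversion(profile):
    """profile: list of (altitude_m, temperature_c) pairs ordered by altitude."""
    return any(t_above > t_below
               for (_, t_below), (_, t_above) in zip(profile, profile[1:]))

sounding = [(0, 18.0), (250, 16.5), (500, 19.0), (1000, 15.0)]  # warming near 500 m
print(has_inversion(sounding))  # -> True
```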
Los Angeles was one of the best-known cities suffering from transportation smog for much of the 20th century, so much so that Los Angeles was sometimes said to be a synonym for smog. In 1970, when the Clean Air Act was passed, Los Angeles was the most polluted basin in the country, and California was unable to create a State Implementation Plan that would enable it to meet the new air quality standards. However, ensuing strict regulations by the state and federal agencies overseeing this problem (such as the California Air Resources Board and the United States Environmental Protection Agency), including tight restrictions on allowed emission levels for all new cars sold in California and mandatory regular emission tests of older vehicles, resulted in significant improvements in air quality. For example, air concentrations of volatile organic compounds declined by a factor of 50 between 1962 and 2012. Concentrations of air pollutants such as nitrogen oxides and ozone declined by 70% to 80% over the same period of time.
Major incidents in the U.S.
26 July 1943, Los Angeles, California: A smog so sudden and severe that "Los Angeles residents believe the Japanese are attacking them with chemical warfare."
30-31 October 1948, Donora, Pennsylvania: 20 died, 600 hospitalized, thousands more stricken. Lawsuits were not settled until 1951.
24 November 1966, New York City, New York: Smog kills at least 169 people.
Pollution index
The severity of smog is often measured using automated optical instruments such as nephelometers, as haze is associated with visibility and traffic control in ports. Haze, however, can also be an indication of poor air quality, though this is often better reflected using accurate purpose-built air indexes such as the American Air Quality Index, the Malaysian API (Air Pollution Index), and the Singaporean Pollutant Standards Index.
In hazy conditions, it is likely that the index will report the suspended particulate level. The disclosure of the responsible pollutant is mandated in some jurisdictions.
The Malaysian API does not have a capped value; hence, its most hazardous readings can go above 500. When the reading goes above 500, a state of emergency is declared in the affected area. Usually, this means that non-essential government services are suspended and all ports in the affected area are closed. There may also be a prohibition on private sector commercial and industrial activities in the affected area, excluding the food sector. So far, state of emergency rulings due to hazardous API levels have been applied to the Malaysian towns of Port Klang and Kuala Selangor and the state of Sarawak, during the 1997 Southeast Asian haze and the 2005 Malaysian haze.
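A minimal sketch of the threshold rule described above, encoding only the >500 emergency condition and the measures listed in this paragraph; the function name and the returned messages are illustrative, not official policy.

```python
# Toy illustration of the uncapped Malaysian API emergency rule described above.

def api_actions(reading: int) -> str:
    if reading > 500:
        return ("State of emergency: non-essential government services suspended, "
                "ports closed, and private-sector commercial and industrial "
                "activity (except the food sector) may be prohibited")
    return "No state of emergency under the >500 rule described above"

print(api_actions(531))
```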
Cultural references
The London "pea-soupers" earned the capital the nickname of "The Smoke". Similarly, Edinburgh was known as "Auld Reekie". The smogs feature in many London novels as a motif indicating hidden danger or a mystery, perhaps most overtly in Margery Allingham's The Tiger in the Smoke (1952), but also in Dickens's Bleak House (1852) and T.S. Eliot's "The Love Song of J. Alfred Prufrock".
In the 1957 Warner Brothers cartoon What's Opera, Doc?, Elmer Fudd called for various calamities to befall Bugs Bunny, ending in a screamed "SMOG!!"
The 1970 made-for-TV movie A Clear and Present Danger was one of the first American television network entertainment programs to warn about the problem of smog and air pollution, as it dramatized a man's efforts toward clean air after emphysema killed his friend.
The history of smog in LA is detailed in Smogtown by Chip Jacobs and William J. Kelly.
| Physical sciences | Atmospheric optics | Earth science |
27008 | https://en.wikipedia.org/wiki/Ship | Ship | A ship is a large vessel that travels the world's oceans and other navigable waterways, carrying cargo or passengers, or in support of specialized missions, such as defense, research and fishing. Ships are generally distinguished from boats, based on size, shape, load capacity and purpose. Ships have supported exploration, trade, warfare, migration, colonization, and science. Ship transport is responsible for the largest portion of world commerce.
The word ship has meant, depending on the era and the context, either just a large vessel or specifically a ship-rigged sailing ship with three or more masts, each of which is square-rigged.
The earliest historical evidence of boats is found in Egypt during the 4th millennium BCE. In 2024, ships had a global cargo capacity of 2.4 billion tons, with the three largest classes being ships carrying dry bulk (43%), oil tankers (28%) and container ships (14%).
Nomenclature
Ships are typically larger than boats, but there is no universally accepted distinction between the two. Ships generally can remain at sea for longer periods of time than boats. A legal definition of ship from Indian case law is a vessel that carries goods by sea. A common notion is that a ship can carry a boat, but not vice versa. A ship is likely to have a full-time crew assigned. A US Navy rule of thumb is that ships heel towards the outside of a sharp turn, whereas boats heel towards the inside because of the relative location of the center of mass versus the center of buoyancy. American and British 19th century maritime law distinguished "vessels" from other watercraft; ships and boats fall in one legal category, whereas open boats and rafts are not considered vessels.
Starting around the middle of the 18th century, sailing vessels started to be categorised by their type of rig. (Previously they had been described by their hull type, for example pink or cat.) Alongside the other rig types such as schooner and brig, the term "ship" referred to the rig type. In this sense, a ship is a vessel with three or more masts, all of which are square-rigged. For clarity, this may be referred to as a full-rigged ship, or a vessel may be described as "ship-rigged". Alongside this rig-specific usage, "ship" continued to have the more general meaning of a large sea-going vessel. Often the meaning can only be determined from the context.
Some large vessels are traditionally called boats, notably submarines. Others include Great Lakes freighters, riverboats, and ferryboats, which may be designed for operation on inland or protected coastal waters.
In most maritime traditions ships have individual names, and modern ships may belong to a ship class often named after its first ship.
In many documents the ship name is introduced with a ship prefix being an abbreviation of the ship class, for example "MS" (motor ship) or "SV" (sailing vessel), making it easier to distinguish a ship name from other individual names in a text.
"Ship" (along with "nation") is an English word that has retained a female grammatical gender in some usages, which allows it sometimes to be referred to as a "she" without being of female natural gender.
History
For most of history, transport by ship, provided there was a feasible route, has generally been cheaper, safer and faster than making the same journey on land. Only the coming of railways in the middle of the 19th century and the growth of commercial aviation in the second half of the 20th century have changed this principle. This applied equally to sea crossings, coastal voyages and the use of rivers and lakes.
Examples of the consequences of this include the large grain trade in the Mediterranean during the classical period. Cities such as Rome were totally reliant on the delivery, by sailing and oar-powered ships, of the large amounts of grain needed. It has been estimated that it cost less for a sailing ship of the Roman Empire to carry grain the length of the Mediterranean than to move the same amount 15 miles by road. Rome consumed about 150,000 tons of Egyptian grain each year over the first three centuries AD.
Until recently, a ship generally represented the most advanced technology that any society could achieve.
Prehistory and antiquity
Asian developments
The earliest attestations of ships in maritime transport in Mesopotamia are model ships, which date back to the 4th millennium BC. In archaic texts in Uruk, Sumer, the ideogram for "ship" is attested, but in the inscriptions of the kings of Lagash, ships were first mentioned in connection to maritime trade and naval warfare at around 2500–2350 BCE.
Austronesian peoples originated in what is now Taiwan. From here, they took part in the Austronesian Expansion. Their distinctive maritime technology was integral to this movement and included catamarans and outriggers. It has been suggested that they had sails some time before 2000 BCE. Their crab claw sails enabled them to sail for vast distances in open ocean. From Taiwan, they rapidly colonized the islands of Maritime Southeast Asia, then sailed further onwards to Micronesia, Island Melanesia, Polynesia, and Madagascar, eventually colonizing a territory spanning half the globe.
Austronesian sails were made from woven leaves, usually from pandan plants. These were complemented by paddlers, who usually positioned themselves on platforms on the outriggers in the larger boats. Austronesian ships ranged in complexity from simple dugout canoes with outriggers or lashed together to large edge-pegged plank-built boats built around a keel made from a dugout canoe. Their designs were unique, evolving from ancient rafts to the characteristic double-hulled, single-outrigger, and double-outrigger designs of Austronesian ships.
In the 2nd century AD, people from the Indonesian archipelago were already making large ships measuring over 50 m long and standing 4–7 m out of the water. They could carry 600–1000 people and 250–1000 tons of cargo. These ships were known as kunlun bo or k'unlun po (崑崙舶, lit. "ship of the Kunlun people") by the Chinese, and kolandiaphonta by the Greeks. They had 4–7 masts and were able to sail against the wind thanks to the use of tanja sails. These ships may have reached as far as Ghana. In the 11th century, a new type of ship called djong or jong was recorded in Java and Bali. This type of ship was built using wooden dowels and treenails, unlike the kunlun bo, which used vegetal fibres for lashings.
In China, miniature models of ships that feature steering oars have been dated to the Warring States period (c. 475–221 BC). By the Han dynasty, a well-kept naval fleet was an integral part of the military. Centre-line rudders, mounted at the stern, began to appear on Chinese ship models in the 1st century AD. However, these early Chinese ships were fluvial (riverine) and were not seaworthy. The Chinese only acquired sea-going ship technologies in the 10th century AD, during the Song dynasty, after contact with Southeast Asian k'un-lun po trading ships, leading to the development of junks.
Mediterranean developments
The earliest historical evidence of boats is found in Egypt during the 4th millennium BCE. The Greek historian and geographer Agatharchides documented ship-faring among the early Egyptians: "During the prosperous period of the Old Kingdom, between the 30th and 25th centuries BC, the river-routes were kept in order, and Egyptian ships sailed the Red Sea as far as the myrrh-country." Sneferu's ancient cedar wood ship Praise of the Two Lands is the first recorded instance (2613 BC) of a ship being referred to by name.
The ancient Egyptians were perfectly at ease building sailboats. A remarkable example of their shipbuilding skills was the Khufu ship, a full-size vessel entombed at the foot of the Great Pyramid of Giza around 2500 BC and found intact in 1954.
The oldest discovered sea faring hulled boat is the Late Bronze Age Uluburun shipwreck off the coast of Turkey, dating back to 1300 BC.
By 1200 B.C., the Phoenicians were building large merchant ships. In world maritime history, declares Richard Woodman, they are recognized as "the first true seafarers, founding the art of pilotage, cabotage, and navigation" and the architects of "the first true ship, built of planks, capable of carrying a deadweight cargo and being sailed and steered."
14th through the 18th centuries
Asian developments
At this time, ships were developing in Asia in much the same way as in Europe. Japan used defensive naval techniques in the Mongol invasions of Japan in 1281. It is likely that the Mongols of the time took advantage of both European and Asian shipbuilding techniques. During the 15th century, China's Ming dynasty assembled one of the largest and most powerful naval fleets in the world for the diplomatic and power projection voyages of Zheng He. Elsewhere, in Japan in the 15th century, one of the world's first ironclads, the "Tekkōsen" (鉄甲船), literally meaning "iron ships", was developed. During the Sengoku era, from the 15th to the 17th century, the great struggle for feudal supremacy in Japan was fought, in part, by coastal fleets of several hundred boats, including the atakebune. In Korea, in the early 15th century during the Joseon era, the "Geobukseon" (거북선) was developed.
The empire of Majapahit used large ships called jong, built in northern Java, for transporting troops overseas. The jongs were transport ships which could carry 100–2000 tons of cargo and 50–1000 people, and were 28.99–88.56 meters in length. The exact number of jong fielded by Majapahit is unknown, but the largest number deployed in a single expedition was about 400, when Majapahit attacked Pasai in 1350.
Europe
Until the late 13th or early 14th century, European shipbuilding had two separate traditions. In Northern Europe, clinker construction predominated. In this, the hull planks are fastened together in an overlapping manner. This is a "shell first" construction technique, with the hull shape being defined by the shaping and fitting of the hull planks. The reinforcing frames (or ribs) are fitted after the planks. Clinker construction in this era usually used planks that were cleft (split radially from the log) and could be made thinner, and stronger per unit of thickness, than sawn planks, thanks to preserving the radial integrity of the grain.
An exception to clinker construction in the Northern European tradition is the bottom planking of the cog. Here, the hull planks are not joined to each other and are laid flush (not overlapped). They are held together by fastening to the frames but this is done after the shaping and fitting of these planks. Therefore, this is another case of a "shell first" construction technique.
These Northern European ships were rigged with a single mast setting a square sail. They were steered by rudders hung on the sternpost.
In contrast, the ship-building tradition of the Mediterranean was of carvel construction: the fitting of the hull planking to the frames of the hull. Depending on the precise detail of this method, it may be characterised as either "frame first" or "frame-led". In either variant, during construction the hull shape is determined by the frames, not the planking. The hull planks are not fastened to each other, only to the frames.
These Mediterranean ships were rigged with lateen sails on one or more masts (depending on the size of the vessel) and were steered with a side rudder. They are often referred to as "round ships".
Crucially, the Mediterranean and Northern European traditions merged. Cogs are known to have travelled to the Mediterranean in the 12th and 13th centuries. Some aspects of their designs were being copied by Mediterranean ship-builders early in the 14th century. Iconography shows square sails being used on the mainmast but a lateen on the mizzen, and a sternpost hung rudder replacing the side rudder. The name for this type of vessel was "coche" or, for a larger example, "carrack". Some of these new Mediterranean types travelled to Northern European waters and, in the first two decades of the 15th century, a few were captured by the English, two of which had previously been under charter to the French. The two-masted rig started to be copied immediately, but at this stage on a clinker hull. The adoption of carvel hulls had to wait until sufficient shipwrights with appropriate skills could be hired, but by late in the 1430s, there were instances of carvel ships being built in Northern Europe, and in increasing numbers over the rest of the century.
This hybridisation of Mediterranean and Northern European ship types created the full-rigged ship, a three-masted vessel with a square-rigged foremast and mainmast and a lateen sail on the mizzen. This provided most of the ships used in the Age of Discovery, being able to carry sufficient stores for a long voyage and with a rig suited to the open ocean. Over the next four hundred years, steady evolution and development, from the starting point of the carrack, gave types such as the galleon, fluit, East Indiaman, ordinary cargo ships, warships, clippers and many more, all based on this three-masted square-rigged type.
The transition from clinker to carvel construction facilitated the use of artillery at sea, since the internal framing of the hull could be made strong enough to accommodate the weight of guns, and it was easier to fit gunports in a carvel hull. As vessels became larger and the demand for ship-building timber affected the size of trees available, clinker construction became limited by the difficulty of finding logs large enough to cleave planks from. Nonetheless, some clinker vessels approached the size of contemporary carracks. Before the adoption of carvel construction, the increasing size of clinker-built vessels necessitated greater amounts of internal framing of their hulls for strength, something that somewhat lessened the conceptual change to the new technique.
19th and 20th centuries: specialization and modernization
Parallel to the development of warships, ships in service of marine fishery and trade also developed in the period between antiquity and the Renaissance.
Maritime trade was driven by the development of shipping companies with significant financial resources. Canal barges, towed by draft animals on an adjacent towpath, contended with the railway up to and past the early days of the Industrial Revolution. Flat-bottomed and flexible scow boats also became widely used for transporting small cargoes. Mercantile trade went hand-in-hand with exploration, self-financed by the commercial benefits of exploration.
During the first half of the 18th century, the French Navy began to develop a new type of vessel known as the ship of the line, featuring seventy-four guns. This type of ship became the backbone of all European fighting fleets. These ships were notably long, and their construction required 2,800 oak trees and large quantities of rope; they carried a crew of about 800 sailors and soldiers. During the 19th century, the Royal Navy enforced a ban on the slave trade, acted to suppress piracy, and continued to map the world. Ships and their owners grew with the 19th-century Industrial Revolution across Europe and North America, leading to increased numbers of oceangoing ships, as well as other coastal and canal-based vessels.
Through more than half of the 19th century and into the early years of the 20th century, steam ships coexisted with sailing vessels. Initially, steam was only viable on shorter routes, typically transporting passengers who could afford higher fares, and mail. Steam went through many developmental steps that gave greater fuel efficiency, thereby increasingly making steamships commercially competitive with sail. Screw propulsion, which relied, among other things, on the invention of an effective stern gland for the propeller shaft, worked better than paddle wheels. Higher boiler pressures, powering compound engines, were introduced in 1865, making long-distance steam cargo vessels commercially viable on the route from England to China, even before the opening of the Suez Canal in 1869. Within a few years, steam had replaced many of the sailing ships that had served this route. Even greater fuel efficiency was obtained with triple-expansion steam engines, but this had to wait for higher-quality steel to become available for the high-pressure boilers first used in SS Aberdeen (1881). By this point virtually all routes could be served competitively by steamships. Sail continued with some cargoes, where low costs were more important to the shipper than a predictable and rapid journey time.
The Second Industrial Revolution in particular led to new mechanical methods of propulsion, and the ability to construct ships from metal triggered an explosion in ship design. These developments led to long-distance commercial ships and ocean liners, as well as technological changes including the marine steam engine, screw propellers, triple-expansion engines and others. The quest for more efficient ships, the end of long-running and wasteful maritime conflicts, and the increased financial capacity of industrial powers created more specialized ships and other maritime vessels. Ship types built for entirely new functions that appeared by the 20th century included research ships, offshore support vessels (OSVs), floating production storage and offloading units (FPSOs), pipe- and cable-laying ships, drill ships and survey vessels.
The late 20th century saw changes to ships that included the decline of ocean liners as air travel increased. The rise of container ships from the 1960s onwards dramatically changed the nature of commercial merchant shipping, as containerization led to larger ship sizes, dedicated container routes and the decline of general cargo vessels as well as tramp steaming. The late 20th century also saw a rise in cruise ships for tourism around the world.
21st century
In 2016, there were more than 49,000 merchant ships, totaling almost 1.8 billion deadweight tons. Of these, 28% were oil tankers, 43% were bulk carriers, and 13% were container ships. By 2019, the world's fleet included 51,684 commercial vessels with gross tonnage of more than 1,000 tons, totaling 1.96 billion tons. Such ships carried 11 billion tons of cargo in 2018, a sum that grew by 2.7% over the previous year. In terms of tonnage, 29% of ships were tankers, 43% bulk carriers, 13% container ships and 15% other types.
In 2008, there were 1,240 warships operating in the world, not counting small vessels such as patrol boats. The United States accounted for 3 million tons worth of these vessels, Russia 1.35 million tons, the United Kingdom 504,660 tons and China 402,830 tons. The 20th century saw many naval engagements during the two world wars, the Cold War, and the rise to power of naval forces of the two blocs. The world's major powers have recently used their naval power in cases such as the United Kingdom in the Falkland Islands and the United States in Iraq.
The size of the world's fishing fleet is more difficult to estimate. The largest of these are counted as commercial vessels, but the smallest are legion. Fishing vessels can be found in most seaside villages in the world. As of 2004, the United Nations Food and Agriculture Organization estimated 4 million fishing vessels were operating worldwide. The same study estimated that the world's 29 million fishermen caught tens of millions of tons of fish and shellfish that year.
In 2023, the number of ships globally grew by 3.4%. In 2024, new ships are increasingly being built with alternative fuel capability to increase sustainability and reduce carbon emissions. Alternative ship fuels include LNG, LPG, methanol, biofuel, ammonia and hydrogen among others.
As of 2024, wind power for ships had received renewed interest for its potential to mitigate greenhouse gas emissions.
Types of ships
Because ships are constructed using principles of naval architecture that require the same basic structural components, their classification is based on their function, such as the scheme suggested by Paulet and Presles, which requires modification of those components. The categories generally accepted by naval architects are:
High-speed craft – Multihulls including wave piercers, small-waterplane-area twin hull (SWATH), surface effect ships and hovercraft, hydrofoil, wing in ground effect craft (WIG).
Offshore oil vessels – Platform supply vessels, pipe layers, accommodation and crane barges, non- and semi-submersible drilling rigs, drill ships, production platforms, floating production storage and offloading units.
Fishing vessels
Motorised fishing trawlers, trap setters, seiners, longliners, trollers & factory ships.
Traditional sailing and rowed fishing vessels and boats used for handline fishing
Harbour work craft
Cable layers
Tugboats, dredgers, salvage vessels, tenders, pilot boats.
Floating dry docks, crane vessels, lighterships.
Dry cargo ships – tramp freighters, bulk carriers, cargo liners, container vessels, barge carriers, Ro-Ro ships, refrigerated cargo ships, timber carriers, livestock carriers & light vehicle carriers.
Liquid cargo ships – tankers, oil tankers, liquefied gas carriers, LNG carriers, chemical carriers.
Passenger ships
Liners, cruise and special trade passenger (STP) ships
Cross-channel, coastal and harbour ferries
Luxury and cruising yachts and superyachts
Sail training and sailing ships
Galleys – biremes, triremes and quinqueremes
Recreational boats and craft – rowed, masted and motorised craft
Special-purpose vessels – weather and research vessels, deep sea survey vessels, and icebreakers.
Submarines – watercraft capable of independent operation underwater.
Naval ships
Warships – aircraft carriers, amphibious warfare ships, battleships, battlecruisers, coastal defence ships, cruisers, destroyers, frigates, corvettes, patrol ships, minesweepers, etc.
Auxiliary ships – ammunition ships, replenishment oilers, repair ships, storeships, troopships, etc.
Hospital ships
Some of these are discussed in the following sections.
Inland vessels
Freshwater shipping may occur on lakes, rivers and canals. Ships designed for those bodies of water may be specially adapted to the widths and depths of specific waterways. Examples of freshwater waterways that are navigable in part by large vessels include the Danube, Mississippi, Rhine, Yangtze and Amazon rivers, and the Great Lakes.
Great Lakes
Lake freighters, also called lakers, are cargo vessels that ply the Great Lakes. The most well-known is the SS Edmund Fitzgerald, the latest major vessel to be wrecked on the Lakes. These vessels are traditionally called boats, not ships. Visiting ocean-going vessels are called "salties". Because of their additional beam, very large salties are never seen inland of the Saint Lawrence Seaway. Because the smallest of the Soo Locks is larger than any Seaway lock, salties that can pass through the Seaway may travel anywhere in the Great Lakes. Because of their deeper draft, salties may accept partial loads on the Great Lakes, "topping off" when they have exited the Seaway. Similarly, the largest lakers are confined to the Upper Lakes (Superior, Michigan, Huron, Erie) because they are too large to use the Seaway locks, beginning at the Welland Canal that bypasses the Niagara River.
Since the freshwater lakes are less corrosive to ships than the salt water of the oceans, lakers tend to last much longer than ocean freighters. Lakers older than 50 years are not unusual, and as of 2005, all were over 20 years of age.
St. Marys Challenger, built in 1906 as William P Snyder, was the oldest laker still working on the Lakes until its conversion into a barge starting in 2013. Similarly, E.M. Ford, built in 1898 as Presque Isle, was sailing the lakes 98 years later in 1996. As of 2007, E.M. Ford was still afloat as a stationary transfer vessel at a riverside cement silo in Saginaw, Michigan.
Merchant ship
Merchant ships are ships used for commercial purposes and can be divided into four broad categories: fishing vessels, cargo ships, passenger ships, and special-purpose ships. The UNCTAD review of maritime transport categorizes ships as: oil tankers, bulk (and combination) carriers, general cargo ships, container ships, and "other ships", which includes "liquefied petroleum gas carriers, liquefied natural gas carriers, parcel (chemical) tankers, specialized tankers, reefers, offshore supply, tugs, dredgers, cruise, ferries, other non-cargo". General cargo ships include "multi-purpose and project vessels and roll-on/roll-off cargo".
Modern commercial vessels are typically powered by a single propeller driven by a diesel engine or, less usually, a gas turbine, but until the mid-19th century they were predominantly square-sail rigged. The fastest vessels may use pump-jet engines. Most commercial vessels, such as container ships, have full hull forms (higher block coefficients) to maximize cargo capacity. Merchant ships and fishing vessels are usually made of steel, although aluminum can be used on faster craft, and fiberglass or wood on smaller vessels. Commercial vessels generally have a crew headed by a sea captain, with deck officers and engine officers on larger vessels. Special-purpose vessels often have specialized crews if necessary, for example scientists aboard research vessels.
Fishing boats are generally small, although a large tuna boat or whaling ship can be much larger. Aboard a fish processing vessel, the catch can be made ready for market and sold more quickly once the ship makes port. Special purpose vessels have special gear. For example, trawlers have winches and arms, stern-trawlers have a rear ramp, and tuna seiners have skiffs. In 2004, anchoveta represented the largest single catch in the marine capture fishery. That year, the top ten marine capture species also included Alaska pollock, Blue whiting, Skipjack tuna, Atlantic herring, Chub mackerel, Japanese anchovy, Chilean jack mackerel, Largehead hairtail, and Yellowfin tuna. Other species, including salmon, shrimp, lobster, clams, squid and crab, are also commercially fished. Modern commercial fishermen use many methods. One is fishing by nets, such as purse seine, beach seine, lift nets, gillnets, or entangling nets. Another is trawling, including bottom trawling. Hooks and lines are used in methods like long-line fishing and hand-line fishing. Another method is the use of fishing traps.
Cargo ships transport dry and liquid cargo. Dry cargo can be transported in bulk by bulk carriers, packed directly onto a general cargo ship in break-bulk, packed in intermodal containers as aboard a container ship, or driven aboard as in roll-on roll-off ships. Liquid cargo is generally carried in bulk aboard tankers, such as oil tankers (which may carry both crude oil and finished products), chemical tankers (which may also carry vegetable oils in addition to chemicals) and gas carriers, although smaller shipments may be carried on container ships in tank containers.
Passenger ships range in size from small river ferries to very large cruise ships. This type of vessel includes ferries, which move passengers and vehicles on short trips; ocean liners, which carry passengers from one place to another; and cruise ships, which carry passengers on voyages undertaken for pleasure, visiting several places and with leisure activities on board, often returning them to the port of embarkation. Riverboats and inland ferries are specially designed to carry passengers, cargo, or both in the challenging river environment. Rivers present special hazards to vessels. They usually have varying water flows that alternately lead to high speed water flows or protruding rock hazards. Changing siltation patterns may cause the sudden appearance of shoal waters, and often floating or sunken logs and trees (called snags) can endanger the hulls and propulsion of riverboats. Riverboats are generally of shallow draft, being broad of beam and rather square in plan, with a low freeboard and high topsides. Riverboats can survive with this type of configuration as they do not have to withstand the high winds or large waves that are seen on large lakes, seas, or oceans.
Fishing vessels are a subset of commercial vessels, but generally small in size and often subject to different regulations and classification. They can be categorized by several criteria: architecture, the type of fish they catch, the fishing method used, geographical origin, and technical features such as rigging. As of 2004, the world's fishing fleet consisted of some 4 million vessels. Of these, 1.3 million were decked vessels with enclosed areas and the rest were open vessels. Most decked vessels were mechanized, but two-thirds of the open vessels were traditional craft propelled by sails and oars. More than 60% of all existing large fishing vessels were built in Japan, Peru, the Russian Federation, Spain or the United States of America.
Special purpose vessels
A weather ship was a ship stationed in the ocean as a platform for surface and upper air meteorological observations for use in marine weather forecasting. Surface weather observations were taken hourly, and four radiosonde releases occurred daily. It was also meant to aid in search and rescue operations and to support transatlantic flights. Proposed as early as 1927 by the aviation community, the establishment of weather ships proved to be so useful during World War II that the International Civil Aviation Organization (ICAO) established a global network of weather ships in 1948, with 13 to be supplied by the United States. This number was eventually negotiated down to nine.
The weather ship crews were normally at sea for three weeks at a time, returning to port for 10-day stretches. Weather ship observations proved to be helpful in wind and wave studies, as they did not avoid weather systems like other ships tended to for safety reasons. They were also helpful in monitoring storms at sea, such as tropical cyclones. The removal of a weather ship became a negative factor in forecasts leading up to the Great Storm of 1987. Beginning in the 1970s, their role became largely superseded by weather buoys due to the ships' significant cost. The agreement of the use of weather ships by the international community ended in 1990. The last weather ship was Polarfront, known as weather station M ("Mike"), which was put out of operation on 1 January 2010. Weather observations from ships continue from a fleet of voluntary merchant vessels in routine commercial operation.
Naval vessels
Naval ships encompass many types of vessel. They include surface warships, submarines, and auxiliary ships.
Modern warships are generally divided into seven main categories: aircraft carriers, cruisers, destroyers, frigates, corvettes, submarines and amphibious warfare ships. The distinctions among cruisers, destroyers, frigates, and corvettes are not codified; the same vessel may be described differently in different navies. Battleships were used during the Second World War and occasionally since then (the last battleships were removed from the U.S. Naval Vessel Register in March 2006), but were made obsolete by the use of carrier-borne aircraft and guided missiles.
Most military submarines are either attack submarines or ballistic missile submarines. Until the end of World War II the primary role of the diesel/electric submarine was anti-ship warfare, inserting and removing covert agents and military forces, and intelligence-gathering. With the development of the homing torpedo, better sonar systems, and nuclear propulsion, submarines also became able to effectively hunt each other. The development of submarine-launched nuclear and cruise missiles gave submarines a substantial and long-ranged ability to attack both land and sea targets with a variety of weapons ranging from cluster munitions to nuclear weapons.
Most navies also include many types of support and auxiliary vessel, such as minesweepers, patrol boats, offshore patrol vessels, replenishment ships, and hospital ships which are designated medical treatment facilities.
Fast combat vessels such as cruisers and destroyers usually have fine hulls to maximize speed and maneuverability. They also usually have advanced marine electronics and communication systems, as well as weapons.
Architecture
Some components exist in vessels of any size and purpose. Every vessel has a hull of sorts. Every vessel has some sort of propulsion, whether it's a pole, an ox, or a nuclear reactor. Most vessels have some sort of steering system. Other characteristics are common, but not as universal, such as compartments, holds, a superstructure, and equipment such as anchors and winches.
Hull
For a ship to float, its weight must be less than that of the water displaced by the ship's hull. There are many types of hulls, from logs lashed together to form a raft to the advanced hulls of America's Cup sailboats. A vessel may have a single hull (called a monohull design), two in the case of catamarans, or three in the case of trimarans. Vessels with more than three hulls are rare, but some experiments have been conducted with designs such as pentamarans. Multiple hulls are generally parallel to each other and connected by rigid arms.
Hulls have several elements. The bow is the foremost part of the hull. Many ships feature a bulbous bow. The keel is at the very bottom of the hull, extending the entire length of the ship. The rear part of the hull is known as the stern, and many hulls have a flat back known as a transom. Common hull appendages include propellers for propulsion, rudders for steering, and stabilizers to quell a ship's rolling motion. Other hull features can be related to the vessel's work, such as fishing gear and sonar domes.
Hulls are subject to various hydrostatic and hydrodynamic constraints. The key hydrostatic constraint is that the hull must be able to support the entire weight of the boat and maintain stability even with often unevenly distributed weight. Hydrodynamic constraints include the ability to withstand shock waves, weather, collisions and groundings.
Older ships and pleasure craft often have or had wooden hulls. Steel is used for most commercial vessels. Aluminium is frequently used for fast vessels, and composite materials are often found in sailboats and pleasure craft. Some ships have been made with concrete hulls.
Propulsion systems
Propulsion systems for ships fall into three categories: human propulsion, sailing, and mechanical propulsion. Human propulsion includes rowing, which was used even on large galleys. Propulsion by sail generally consists of a sail hoisted on an erect mast, supported by stays and spars and controlled by ropes. Sail systems were the dominant form of propulsion until the 19th century. They are now generally used for recreation and competition, although experimental sail systems, such as the turbosails, rotorsails, and wingsails have been used on larger modern vessels for fuel savings.
Mechanical propulsion systems generally consist of a motor or engine turning a propeller, or less frequently, an impeller or wave propulsion fins. Steam engines were first used for this purpose, but have mostly been replaced by two-stroke or four-stroke diesel engines, outboard motors, and gas turbine engines on faster ships. Nuclear reactors producing steam are used to propel warships and icebreakers, and there have been attempts to use them to power commercial vessels (see NS Savannah).
In addition to traditional fixed and controllable pitch propellers there are many specialized variations, such as contra-rotating and nozzle-style propellers. Most vessels have a single propeller, but some large vessels may have up to four propellers supplemented with transverse thrusters for maneuvering in ports. The propeller is connected to the main engine via a propeller shaft and, in the case of medium- and high-speed engines, a reduction gearbox. Some modern vessels have a diesel–electric powertrain in which the propeller is turned by an electric motor powered by the ship's generators.
As environmental sustainability becomes a paramount concern, the maritime industry is exploring cleaner propulsion technologies. Alternatives like LPG (Liquefied Petroleum Gas), ammonia, and hydrogen are emerging as viable options. LPG is already utilized as fuel for long-distance shipping, offering a cleaner option with a lower carbon footprint. Meanwhile, hydrogen and ammonia technologies are in development stages for long-haul applications, promising even more significant reductions in emissions and a step closer to achieving carbon-neutral shipping.
Steering systems
For ships with independent propulsion systems for each side, such as manual oars or some paddles, steering systems may not be necessary. In most designs, such as boats propelled by engines or sails, a steering system becomes necessary. The most common is a rudder, a submerged plane located at the rear of the hull. Rudders are rotated to generate a lateral force which turns the boat. Rudders can be rotated by a tiller, manual wheels, or electro-hydraulic systems. Autopilot systems combine mechanical rudders with navigation systems. Ducted propellers are sometimes used for steering.
Some propulsion systems are inherently steering systems. Examples include the outboard motor, the bow thruster, and the azimuth thruster.
Holds, compartments, and the superstructure
Larger boats and ships generally have multiple decks and compartments. Separate berthings and heads are found on sailboats over about . Fishing boats and cargo ships typically have one or more cargo holds. Most larger vessels have an engine room, a galley, and various compartments for work. Tanks are used to store fuel, engine oil, and fresh water. Ballast tanks are equipped to change a ship's trim and modify its stability.
Superstructures are found above the main deck. On sailboats, these are usually very low. On modern cargo ships, they are almost always located near the ship's stern. On passenger ships and warships, the superstructure generally extends far forward.
Equipment
Shipboard equipment varies from ship to ship depending on such factors as the ship's era, design, area of operation, and purpose. Some types of equipment that are widely found include:
Masts can be the home of antennas, navigation lights, radar transponders, fog signals, and similar devices often required by law.
Ground tackle comprises the anchor, its chain or cable, and connecting fittings.
Cargo equipment such as cranes and cargo booms may be used to load and unload cargo and ship's stores.
Safety equipment such as lifeboats, liferafts, and survival suits are carried aboard many vessels for emergency use.
Design considerations
Hydrostatics
Ships float in the water at a level where the mass of the displaced water equals the mass of the vessel, so that the downward force of gravity equals the upward force of buoyancy. As a vessel is lowered into the water its weight remains constant but the corresponding weight of water displaced by its hull increases. If the vessel's mass is evenly distributed throughout, it floats evenly along its length and across its beam (width). A vessel's stability is considered in both this hydrostatic sense as well as a hydrodynamic sense, when subjected to movement, rolling and pitching, and the action of waves and wind. Stability problems can lead to excessive pitching and rolling, and eventually capsizing and sinking.
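As a rough illustration of this displacement principle, the draft of an idealized box-shaped barge can be estimated directly, since the displaced volume is simply the waterplane area times the draft. The sketch below is a simplified example with assumed dimensions and an assumed seawater density; real hull forms require full hydrostatic calculations.

```python
# Minimal sketch: draft of a box-shaped barge (illustrative values only).
RHO_SEAWATER = 1025.0  # kg per cubic metre (typical assumed value)

def box_barge_draft(mass_kg, length_m, beam_m):
    """Depth to which the barge sinks so that the displaced water mass equals the vessel mass."""
    waterplane_area = length_m * beam_m  # square metres
    return mass_kg / (RHO_SEAWATER * waterplane_area)  # metres

# Example: an assumed 500-tonne barge, 30 m long and 8 m in beam, floats at about 2 m draft.
print(round(box_barge_draft(500_000, 30.0, 8.0), 2))  # 2.03
```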
Hydrodynamics
The advance of a vessel through water is resisted by the water. This resistance can be broken down into several components, the main ones being the friction of the water on the hull and wave making resistance. To reduce resistance and therefore increase the speed for a given power, it is necessary to reduce the wetted surface and use submerged hull shapes that produce low amplitude waves. To do so, high-speed vessels are often more slender, with fewer or smaller appendages. The friction of the water is also reduced by regular maintenance of the hull to remove the sea creatures and algae that accumulate there. Antifouling paint is commonly used to assist in this. Advanced designs such as the bulbous bow assist in decreasing wave resistance.
A simple way of considering wave-making resistance is to look at the hull in relation to its wake. At speeds lower than the wave propagation speed, the wave rapidly dissipates to the sides. As the hull approaches the wave propagation speed, however, the wake at the bow begins to build up faster than it can dissipate, and so it grows in amplitude. Since the water is not able to "get out of the way of the hull fast enough", the hull, in essence, has to climb over or push through the bow wave. This results in an exponential increase in resistance with increasing speed.
This hull speed is found by the formula:

v (in knots) ≈ 1.34 × √L

or, in metric units:

v (in km/h) ≈ 4.5 × √L

where L is the length of the waterline in feet or meters, respectively.
When the vessel exceeds a speed/length ratio of 0.94, it starts to outrun most of its bow wave, and the hull actually settles slightly in the water as it is now only supported by two wave peaks. As the vessel exceeds a speed/length ratio of 1.34, the hull speed, the wavelength is now longer than the hull, and the stern is no longer supported by the wake, causing the stern to squat and the bow to rise. The hull is now starting to climb its own bow wave, and resistance begins to increase at a very high rate. While it is possible to drive a displacement hull faster than a speed/length ratio of 1.34, it is prohibitively expensive to do so. Most large vessels operate at speed/length ratios well below that level, at speed/length ratios of under 1.0.
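As an illustration of these relations, the short sketch below evaluates the approximate hull speed for an assumed 36 ft (about 11 m) waterline; the numbers are examples only, using the 1.34 speed/length ratio discussed above.

```python
import math

def hull_speed_knots(waterline_length_ft):
    """Approximate displacement hull speed in knots (speed/length ratio of 1.34)."""
    return 1.34 * math.sqrt(waterline_length_ft)

def hull_speed_kmh(waterline_length_m):
    """The same relation with the waterline length in metres, giving km/h."""
    return 4.5 * math.sqrt(waterline_length_m)

print(round(hull_speed_knots(36.0), 1))  # about 8.0 knots
print(round(hull_speed_kmh(11.0), 1))    # about 14.9 km/h (roughly the same speed)
```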
For large projects with adequate funding, hydrodynamic resistance can be tested experimentally in a hull testing pool or using tools of computational fluid dynamics.
Vessels are also subject to ocean surface waves and sea swell as well as effects of wind and weather. These movements can be stressful for passengers and equipment, and must be controlled if possible. The rolling movement can be controlled, to an extent, by ballasting or by devices such as fin stabilizers. Pitching movement is more difficult to limit and can be dangerous if the bow submerges in the waves, a phenomenon called pounding. Sometimes, ships must change course or speed to stop violent rolling or pitching.
Lifecycle
A ship will pass through several stages during its career. The first is usually an initial contract to build the ship, the details of which can vary widely based on relationships between the shipowners, operators, designers and the shipyard. Then comes the design phase, carried out by a naval architect. Then the ship is constructed in a shipyard. After construction, the vessel is launched and goes into service. Ships end their careers in a number of ways, ranging from shipwrecks to service as a museum ship to the scrapyard.
Design
A vessel's design starts with a specification, which a naval architect uses to create a project outline, assess required dimensions, and create a basic layout of spaces and a rough displacement. After this initial rough draft, the architect can create an initial hull design, a general profile and an initial overview of the ship's propulsion. At this stage, the designer can iterate on the ship's design, adding detail and refining the design at each stage.
The designer will typically produce an overall plan, a general specification describing the peculiarities of the vessel, and construction blueprints to be used at the building site. Designs for larger or more complex vessels may also include sail plans, electrical schematics, and plumbing and ventilation plans.
As environmental laws become stricter, ship designers need to create their designs in such a way that the ship, when it nears the end of its service life, can be disassembled or disposed of easily and that waste is reduced to a minimum.
Construction
Ship construction takes place in a shipyard, and can last from a few months for a unit produced in series, to several years to reconstruct a wooden boat like the frigate Hermione, to more than 10 years for an aircraft carrier. During World War II, the need for cargo ships was so urgent that construction time for Liberty Ships went from initially eight months or longer, down to weeks or even days. Builders employed production line and prefabrication techniques such as those used in shipyards today.
Hull materials and vessel size play a large part in determining the method of construction. The hull of a mass-produced fiberglass sailboat is constructed from a mold, while the steel hull of a cargo ship is made from large sections welded together as they are built.
Generally, construction starts with the hull, and on vessels over about , by the laying of the keel. This is done in a drydock or on land. Once the hull is assembled and painted, it is launched. The last stages, such as raising the superstructure and adding equipment and accommodation, can be done after the vessel is afloat.
Once completed, the vessel is delivered to the customer. Ship launching is often a ceremony of some significance, and is usually when the vessel is formally named. A typical small rowboat can cost under US$100, $1,000 for a small speedboat, tens of thousands of dollars for a cruising sailboat, and about $2,000,000 for a Vendée Globe class sailboat. A trawler may cost $2.5 million, and a 1,000-person-capacity high-speed passenger ferry can cost in the neighborhood of $50 million. A ship's cost partly depends on its complexity: a small, general cargo ship will cost $20 million, a Panamax-sized bulk carrier around $35 million, a supertanker around $105 million and a large LNG carrier nearly $200 million. The most expensive ships generally are so because of the cost of embedded electronics: a costs around $2 billion, and an aircraft carrier goes for about $3.5 billion.
In 2023, the majority of the world's ships (95% of global output) were built in just three countries: China, South Korea and Japan.
Repair and conversion
Ships undergo nearly constant maintenance during their career, whether they be underway, pierside, or in some cases, in periods of reduced operating status between charters or shipping seasons.
Most ships, however, require trips to special facilities such as a drydock at regular intervals. Tasks often done at drydock include removing biological growths on the hull, sandblasting and repainting the hull, and replacing sacrificial anodes used to protect submerged equipment from corrosion. Major repairs to the propulsion and steering systems as well as major electrical systems are also often performed at dry dock.
Some vessels that sustain major damage at sea may be repaired at a facility equipped for major repairs, such as a shipyard. Ships may also be converted for a new purpose: oil tankers are often converted into floating production storage and offloading units.
End of service
Most ocean-going cargo ships have a life expectancy of between 20 and 30 years. A sailboat made of plywood or fiberglass can last between 30 and 40 years. Solid wooden ships can last much longer but require regular maintenance. Carefully maintained steel-hulled yachts can have a lifespan of over 100 years.
As ships age, forces such as corrosion, osmosis, and rotting compromise hull strength, and a vessel becomes too dangerous to sail. At this point, it can be scuttled at sea or scrapped by shipbreakers. Ships can also be used as museum ships, or expended to construct breakwaters or artificial reefs.
Many ships do not make it to the scrapyard, and are lost in fires, collisions, grounding, or sinking at sea. The Allies lost some 5,150 ships during World War II.
Measuring ships
One can measure ships in terms of length overall, length between perpendiculars, length of the ship at the waterline, beam (breadth), depth (distance between the crown of the weather deck and the top of the keelson), draft (distance between the highest waterline and the bottom of the ship) and tonnage. A number of different tonnage definitions exist and are used when describing merchant ships for the purpose of tolls, taxation, etc.
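As one example of such a tonnage definition, gross tonnage under the 1969 International Convention on Tonnage Measurement of Ships is derived from the ship's total enclosed volume. The sketch below shows that calculation for an assumed volume; it is illustrative only and omits the convention's detailed measurement rules.

```python
import math

def gross_tonnage(enclosed_volume_m3):
    """Gross tonnage as GT = K * V, with K = 0.2 + 0.02 * log10(V) and V the enclosed volume in cubic metres."""
    k = 0.2 + 0.02 * math.log10(enclosed_volume_m3)
    return k * enclosed_volume_m3

# An assumed vessel with 10,000 cubic metres of enclosed volume:
print(round(gross_tonnage(10_000)))  # 2800 (gross tonnage is a dimensionless index)
```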
In Britain until Samuel Plimsoll's Merchant Shipping Act of 1876, ship-owners could load their vessels until their decks were almost awash, resulting in a dangerously unstable condition. Anyone who signed on to such a ship for a voyage and, upon realizing the danger, chose to leave the ship, could end up in jail. Plimsoll, a Member of Parliament, realised the problem and engaged some engineers to derive a fairly simple formula to determine the position of a line on the side of any specific ship's hull which, when it reached the surface of the water during loading of cargo, meant the ship had reached its maximum safe loading level. To this day, that mark, called the "Plimsoll mark", "freeboard mark" or "load line mark", exists on ships' sides, and consists of a circle with a horizontal line through the centre. On the Great Lakes of North America the circle is replaced with a diamond. Because different types of water (summer, fresh, tropical fresh, winter north Atlantic) have different densities, subsequent regulations required painting a group of lines forward of the Plimsoll mark to indicate the safe depth (or freeboard above the surface) to which a specific ship could load in water of various densities. Hence the "ladder" of lines seen forward of the Plimsoll mark to this day. These are called the "load lines" in the marine industry.
Ship pollution
Ship pollution is the pollution of air and water by shipping. It is a problem that has been accelerating as trade has become increasingly globalized, posing an increasing threat to the world's oceans and waterways as globalization continues. It is expected that "shipping traffic to and from the United States is projected to double by 2020." Because of increased traffic in ocean ports, pollution from ships also directly affects coastal areas. The pollution produced affects biodiversity, climate, food, and human health. However, the degree to which humans are polluting and how it affects the world is highly debated and has been a hot international topic for the past 30 years.
Oil spills
Oil spills have devastating effects on the environment. Crude oil contains polycyclic aromatic hydrocarbons (PAHs) which are very difficult to clean up, and last for years in the sediment and marine environment. Marine species constantly exposed to PAHs can exhibit developmental problems, susceptibility to disease, and abnormal reproductive cycles.
By the sheer amount of oil carried, modern oil tankers must be considered something of a threat to the environment. An oil tanker can carry of crude oil, or . This is more than six times the amount spilled in the widely known Exxon Valdez incident. In this spill, the ship ran aground and dumped of oil into the ocean in March 1989. Despite efforts of scientists, managers, and volunteers, over 400,000 seabirds, about 1,000 sea otters, and immense numbers of fish were killed.
The International Tanker Owners Pollution Federation has researched 9,351 accidental spills since 1974. According to this study, most spills result from routine operations such as loading cargo, discharging cargo, and taking on fuel oil. 91% of the operational oil spills were small, resulting in less than 7 tons per spill. Spills resulting from accidents like collisions, groundings, hull failures, and explosions are much larger, with 84% of these involving losses of over 700 tons.
Following the Exxon Valdez spill, the United States passed the Oil Pollution Act of 1990 (OPA-90), which included a stipulation that all tankers entering its waters be double-hulled by 2015. Following the sinkings of Erika (1999) and Prestige (2002), the European Union passed its own stringent anti-pollution packages (known as Erika I, II, and III), which require all tankers entering its waters to be double-hulled by 2010. The Erika packages are controversial because they introduced the new legal concept of "serious negligence".
Ballast water
When a large vessel such as a container ship or an oil tanker unloads cargo, seawater is pumped into other compartments in the hull to help stabilize and balance the ship. During loading, this ballast water is pumped out from these compartments.
One of the problems with ballast water transfer is the transport of harmful organisms. Meinesz believes that one of the worst cases of a single invasive species causing harm to an ecosystem can be attributed to a seemingly harmless planktonic organism: Mnemiopsis leidyi, a species of comb jelly that inhabits estuaries from the United States to the Valdés peninsula in Argentina along the Atlantic coast, has caused notable damage in the Black Sea. It was first introduced in 1982, and thought to have been transported to the Black Sea in a ship's ballast water. The population of the comb jelly shot up exponentially and, by 1988, it was wreaking havoc upon the local fishing industry. "The anchovy catch fell from in 1984 to in 1993; sprat from in 1984 to in 1993; horse mackerel from in 1984 to zero in 1993." Now that the comb jellies have exhausted the zooplankton, including fish larvae, their numbers have fallen dramatically, yet they continue to maintain a stranglehold on the ecosystem. Recently the comb jellies have been discovered in the Caspian Sea. Invasive species can take over once occupied areas, facilitate the spread of new diseases, introduce new genetic material, alter landscapes and jeopardize the ability of native species to obtain food. "On land and in the sea, invasive species are responsible for about 137 billion dollars in lost revenue and management costs in the U.S. each year."
Ballast and bilge discharge from ships can also spread human pathogens and other harmful diseases and toxins potentially causing health issues for humans and marine life alike. Discharges into coastal waters, along with other sources of marine pollution, have the potential to be toxic to marine plants, animals, and microorganisms, causing alterations such as changes in growth, disruption of hormone cycles, birth defects, suppression of the immune system, and disorders resulting in cancer, tumors, and genetic abnormalities or even death.
Exhaust emissions
Exhaust emissions from ships are considered to be a significant source of air pollution. "Seagoing vessels are responsible for an estimated 14 percent of emissions of nitrogen from fossil fuels and 16 percent of the emissions of sulfur from petroleum uses into the atmosphere." In Europe ships make up a large percentage of the sulfur introduced to the air, "as much sulfur as all the cars, lorries and factories in Europe put together". "By 2010, up to 40% of air pollution over land could come from ships." Sulfur in the air creates acid rain which damages crops and buildings. When inhaled, sulfur is known to cause respiratory problems and increase the risk of a heart attack.
Ship breaking
Ship breaking or ship demolition is a type of ship disposal involving the breaking up of ships for scrap recycling, with the hulls being discarded in ship graveyards. Most ships have a lifespan of a few decades before there is so much wear that refitting and repair becomes uneconomical. Ship breaking allows materials from the ship, especially steel, to be reused.
In addition to steel and other useful materials, however, ships (particularly older vessels) can contain many substances that are banned or considered dangerous in developed countries. Asbestos and polychlorinated biphenyls (PCBs) are typical examples. Asbestos was used heavily in ship construction until it was finally banned in most of the developed world in the mid-1980s. Currently, the costs associated with removing asbestos, along with the potentially expensive insurance and health risks, have meant that ship-breaking in most developed countries is no longer economically viable. Removing the metal for scrap can potentially cost more than the scrap value of the metal itself. In most of the developing world, however, shipyards can operate without the risk of personal injury lawsuits or workers' health claims, meaning many of these shipyards may operate with high health risks. Furthermore, workers are paid very low rates with no overtime or other allowances. Protective equipment is sometimes absent or inadequate. Dangerous vapors and fumes from burning materials can be inhaled, and dusty asbestos-laden areas around such breakdown locations are commonplace.
Aside from the health of the yard workers, in recent years, ship breaking has also become an issue of major environmental concern. Many developing nations, in which ship breaking yards are located, have lax or no environmental law, enabling large quantities of highly toxic materials to escape into the environment and causing serious health problems among ship breakers, the local population and wildlife. Environmental campaign groups such as Greenpeace have made the issue a high priority for their campaigns.
| Technology | Transportation | null |
27010 | https://en.wikipedia.org/wiki/Software%20engineering | Software engineering | Software engineering is a field within computer science focused on designing, developing, testing, and maintaining software applications. It involves applying engineering principles and computer programming expertise to develop software systems that meet user needs.
The terms programmer and coder overlap with software engineer, but they imply only the construction aspect of a typical software engineer's workload.
A software engineer applies a software development process, which involves defining, implementing, testing, managing, and maintaining software systems, as well as creating and modifying the development process itself.
History
Beginning in the 1960s, software engineering was recognized as a separate field of engineering.
The development of software engineering was seen as a struggle. Problems included software that was over budget, exceeded deadlines, required extensive debugging and maintenance, failed to meet the needs of consumers, or was never completed.
In 1968, NATO held the first software engineering conference where issues related to software were addressed. Guidelines and best practices for the development of software were established.
The origins of the term software engineering have been attributed to various sources. The term appeared in a list of services offered by companies in the June 1965 issue of "Computers and Automation" and was used more formally in the August 1966 issue of Communications of the ACM (Volume 9, number 8) in "President's Letter to the ACM Membership" by Anthony A. Oettinger. It is also associated with the title of a NATO conference in 1968 by Professor Friedrich L. Bauer. Margaret Hamilton described the discipline of "software engineering" during the Apollo missions to give what they were doing legitimacy. At the time there was perceived to be a "software crisis". The 40th International Conference on Software Engineering (ICSE 2018) celebrated 50 years of "Software Engineering" with plenary keynotes by Frederick Brooks and Margaret Hamilton.
In 1984, the Software Engineering Institute (SEI) was established as a federally funded research and development center headquartered on the campus of Carnegie Mellon University in Pittsburgh, Pennsylvania, United States.
Watts Humphrey founded the SEI Software Process Program, aimed at understanding and managing the software engineering process. The Process Maturity Levels introduced became the Capability Maturity Model Integration for Development (CMMI-DEV), which defined how the US Government evaluates the abilities of a software development team.
Modern, generally accepted best-practices for software engineering have been collected by the ISO/IEC JTC 1/SC 7 subcommittee and published as the Software Engineering Body of Knowledge (SWEBOK). Software engineering is considered one of the major computing disciplines.
Terminology
Definition
Notable definitions of software engineering include:
"The systematic application of scientific and technological knowledge, methods, and experience to the design, implementation, testing, and documentation of software."—The Bureau of Labor Statistics—IEEE Systems and software engineering – Vocabulary
"The application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software."—IEEE Standard Glossary of Software Engineering Terminology
"An engineering discipline that is concerned with all aspects of software production."—Ian Sommerville
"The establishment and use of sound engineering principles in order to economically obtain software that is reliable and works efficiently on real machines."—Fritz Bauer
"A branch of computer science that deals with the design, implementation, and maintenance of complex computer programs."—Merriam-Webster
"Software engineering encompasses not just the act of writing code, but all of the tools and processes an organization uses to build and maintain that code over time. [...] Software engineering can be thought of as 'programming integrated over time'."—Software Engineering at Google
The term has also been used less formally:
as the informal contemporary term for the broad range of activities that were formerly called computer programming and systems analysis
as the broad term for all aspects of the practice of computer programming, as opposed to the theory of computer programming, which is formally studied as a sub-discipline of computer science
as the term embodying the advocacy of a specific approach to computer programming, one that urges that it be treated as an engineering discipline rather than an art or a craft, and advocates the codification of recommended practices
Etymology
Margaret Hamilton promoted the term "software engineering" during her work on the Apollo program. The term "engineering" was used to acknowledge that the work should be taken just as seriously as other contributions toward the advancement of technology. Hamilton details her use of the term: When I first came up with the term, no one had heard of it before, at least in our world. It was an ongoing joke for a long time. They liked to kid me about my radical ideas. It was a memorable day when one of the most respected hardware gurus explained to everyone in a meeting that he agreed with me that the process of building software should also be considered an engineering discipline, just like with hardware. Not because of his acceptance of the new "term" per se, but because we had earned his and the acceptance of the others in the room as being in an engineering field in its own right.
Suitability
Individual commentators have disagreed sharply on how to define software engineering or its legitimacy as an engineering discipline. David Parnas has said that software engineering is, in fact, a form of engineering. Steve McConnell has said that it is not, but that it should be. Donald Knuth has said that programming is an art and a science. Edsger W. Dijkstra claimed that the terms software engineering and software engineer have been misused in the United States.
Workload
Requirements analysis
Requirements engineering is about elicitation, analysis, specification, and validation of requirements for software. Software requirements can be functional, non-functional or domain.
Functional requirements describe expected behaviors (i.e. outputs). Non-functional requirements specify issues like portability, security, maintainability, reliability, scalability, performance, reusability, and flexibility. They are classified into the following types: interface constraints, performance constraints (such as response time, security, storage space, etc.), operating constraints, life cycle constraints (maintainability, portability, etc.), and economic constraints. Knowledge of how the system or software works is needed when it comes to specifying non-functional requirements. Domain requirements have to do with the characteristic of a certain category or domain of projects.
Design
Software design is the process of making high-level plans for the software. Design is sometimes divided into levels:
Interface design plans the interaction between a system and its environment as well as the inner workings of the system.
Architectural design plans the major components of a system, including their responsibilities, properties, and interfaces between them.
Detailed design plans internal elements, including their properties, relationships, algorithms and data structures.
Construction
Software construction typically involves programming (a.k.a. coding), unit testing, integration testing, and debugging so as to implement the design. “Software testing is related to, but different from, ... debugging”.
Testing during this phase is generally performed by the programmer and with the purpose to verify that the code behaves as designed and to know when the code is ready for the next level of testing.
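As a small illustration of the kind of unit testing a programmer might do during construction, the sketch below exercises a hypothetical helper function with Python's standard unittest module; the function and test cases are invented for this example.

```python
import unittest

def normalize_whitespace(text):
    """Hypothetical function under test: collapse runs of whitespace into single spaces."""
    return " ".join(text.split())

class NormalizeWhitespaceTest(unittest.TestCase):
    def test_collapses_internal_runs(self):
        self.assertEqual(normalize_whitespace("a  b\tc"), "a b c")

    def test_strips_leading_and_trailing_whitespace(self):
        self.assertEqual(normalize_whitespace("  hello  "), "hello")

if __name__ == "__main__":
    unittest.main()
```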
Testing
Software testing is an empirical, technical investigation conducted to provide stakeholders with information about the quality of the software under test.
When described separately from construction, testing typically is performed by test engineers or quality assurance instead of the programmers who wrote it. It is performed at the system level and is considered an aspect of software quality.
Program analysis
Program analysis is the process of analyzing computer programs with respect to an aspect such as performance, robustness, and security.
Maintenance
Software maintenance refers to supporting the software after release. It may include but is not limited to: error correction, optimization, deletion of unused and discarded features, and enhancement of existing features.
Usually, maintenance takes up 40% to 80% of project cost.
Education
Knowledge of computer programming is a prerequisite for becoming a software engineer. In 2004, the IEEE Computer Society produced the SWEBOK, which has been published as ISO/IEC Technical Report 19759:2005, describing the body of knowledge that they recommend to be mastered by a graduate software engineer with four years of experience.
Many software engineers enter the profession by obtaining a university degree or training at a vocational school. One standard international curriculum for undergraduate software engineering degrees was defined by the Joint Task Force on Computing Curricula of the IEEE Computer Society and the Association for Computing Machinery, and updated in 2014. A number of universities have Software Engineering degree programs; , there were 244 Campus Bachelor of Software Engineering programs, 70 Online programs, 230 Masters-level programs, 41 Doctorate-level programs, and 69 Certificate-level programs in the United States.
In addition to university education, many companies sponsor internships for students wishing to pursue careers in information technology. These internships can introduce the student to real-world tasks that typical software engineers encounter every day. Similar experience can be gained through military service in software engineering.
Software engineering degree programs
Half of all practitioners today have degrees in computer science, information systems, or information technology. A small but growing number of practitioners have software engineering degrees. In 1987, the Department of Computing at Imperial College London introduced the first three-year software engineering bachelor's degree in the world; in the following year, the University of Sheffield established a similar program. In 1996, the Rochester Institute of Technology established the first software engineering bachelor's degree program in the United States; however, it did not obtain ABET accreditation until 2003, the same year as Rice University, Clarkson University, Milwaukee School of Engineering, and Mississippi State University. In 1997, PSG College of Technology in Coimbatore, India was the first to start a five-year integrated Master of Science degree in Software Engineering.
Since then, software engineering undergraduate degrees have been established at many universities. A standard international curriculum for undergraduate software engineering degrees, SE2004, was defined by a steering committee between 2001 and 2004 with funding from the Association for Computing Machinery and the IEEE Computer Society. , about 50 universities in the U.S. offer software engineering degrees, which teach both computer science and engineering principles and practices. The first software engineering master's degree was established at Seattle University in 1979. Since then, graduate software engineering degrees have been made available from many more universities. Likewise in Canada, the Canadian Engineering Accreditation Board (CEAB) of the Canadian Council of Professional Engineers has recognized several software engineering programs.
In 1998, the US Naval Postgraduate School (NPS) established the first doctorate program in Software Engineering in the world. Additionally, many online advanced degrees in Software Engineering have appeared such as the Master of Science in Software Engineering (MSE) degree offered through the Computer Science and Engineering Department at California State University, Fullerton. Steve McConnell opines that because most universities teach computer science rather than software engineering, there is a shortage of true software engineers. ETS (École de technologie supérieure) University and UQAM (Université du Québec à Montréal) were mandated by IEEE to develop the Software Engineering Body of Knowledge (SWEBOK), which has become an ISO standard describing the body of knowledge covered by a software engineer.
Profession
Legal requirements for the licensing or certification of professional software engineers vary around the world. In the UK, there is no licensing or legal requirement to assume or use the job title Software Engineer. In some areas of Canada, such as Alberta, British Columbia, Ontario, and Quebec, software engineers can hold the Professional Engineer (P.Eng) designation and/or the Information Systems Professional (I.S.P.) designation. In Europe, Software Engineers can obtain the European Engineer (EUR ING) professional title. Software Engineers can also become professionally qualified as a Chartered Engineer through the British Computer Society.
In the United States, the NCEES began offering a Professional Engineer exam for Software Engineering in 2013, thereby allowing Software Engineers to be licensed and recognized. NCEES ended the exam after April 2019 due to lack of participation. Mandatory licensing is currently still largely debated, and perceived as controversial.
The IEEE Computer Society and the ACM, the two main US-based professional organizations of software engineering, publish guides to the profession of software engineering. The IEEE's Guide to the Software Engineering Body of Knowledge – 2004 Version, or SWEBOK, defines the field and describes the knowledge the IEEE expects a practicing software engineer to have. The most current version is SWEBOK v4. The IEEE also promulgates a "Software Engineering Code of Ethics".
Employment
There are an estimated 26.9 million professional software engineers in the world as of 2022, up from 21 million in 2016.
Many software engineers work as employees or contractors. Software engineers work with businesses, government agencies (civilian or military), and non-profit organizations. Some software engineers work for themselves as freelancers. Some organizations have specialists to perform each of the tasks in the software development process. Other organizations require software engineers to do many or all of them. In large projects, people may specialize in only one role. In small projects, people may fill several or all roles at the same time. Many companies hire interns, often university or college students during a summer break, or offer externships. Specializations include analysts, architects, developers, testers, technical support, middleware analysts, project managers, software product managers, educators, and researchers.
Most software engineers and programmers work 40 hours a week, but about 15 percent of software engineers and 11 percent of programmers worked more than 50 hours a week in 2008. Injuries are possible in these occupations because, like other workers who spend long periods sitting in front of a computer terminal typing at a keyboard, engineers and programmers are susceptible to eyestrain, back discomfort, thrombosis, obesity, and hand and wrist problems such as carpal tunnel syndrome.
United States
The U.S. Bureau of Labor Statistics (BLS) counted 1,365,500 software developers holding jobs in the U.S. in 2018. Due to its relative newness as a field of study, formal education in software engineering is often taught as part of a computer science curriculum, and many software engineers hold computer science degrees. The BLS estimates that employment in computer software engineering will increase by 17% from 2023 to 2033, down from its estimate of 25% for 2022 to 2032 and 30% for 2010 to 2020. Because of this trend, job growth may not be as fast as during the last decade, as jobs that would have gone to computer software engineers in the United States are instead outsourced to computer software engineers in countries such as India. For computer programmers, the BLS Occupational Outlook predicts a decline of 7 percent from 2016 to 2026, 9 percent from 2019 to 2029, 10 percent from 2021 to 2031, and 11 percent from 2022 to 2032. Since computer programming can be done from anywhere in the world, companies sometimes hire programmers in countries where wages are lower. Furthermore, the proportion of women in many software fields has been declining over the years compared with other engineering fields. There is also concern that recent advances in artificial intelligence might reduce the demand for future generations of software engineers. However, this trend may change or slow as many current software engineers in the U.S. market leave the profession or age out of the market in the next few decades.
Certification
The Software Engineering Institute offers certifications on specific topics like security, process improvement and software architecture. IBM, Microsoft and other companies also sponsor their own certification examinations. Many IT certification programs are oriented toward specific technologies, and managed by the vendors of these technologies. These certification programs are tailored to the institutions that would employ people who use these technologies.
Broader certification of general software engineering skills is available through various professional societies. , the IEEE had certified over 575 software professionals as a Certified Software Development Professional (CSDP). In 2008 they added an entry-level certification known as the Certified Software Development Associate (CSDA). The ACM had a professional certification program in the early 1980s, which was discontinued due to lack of interest. The ACM and the IEEE Computer Society together examined the possibility of licensing of software engineers as Professional Engineers in the 1990s,
but eventually decided that such licensing was inappropriate for the professional industrial practice of software engineering. John C. Knight and Nancy G. Leveson presented a more balanced analysis of the licensing issue in 2002.
In the U.K. the British Computer Society has developed a legally recognized professional certification called Chartered IT Professional (CITP), available to fully qualified members (MBCS). Software engineers may be eligible for membership of the British Computer Society or Institution of Engineering and Technology and so qualify to be considered for Chartered Engineer status through either of those institutions. In Canada the Canadian Information Processing Society has developed a legally recognized professional certification called Information Systems Professional (ISP). In Ontario, Canada, Software Engineers who graduate from a Canadian Engineering Accreditation Board (CEAB) accredited program, successfully complete PEO's (Professional Engineers Ontario) Professional Practice Examination (PPE) and have at least 48 months of acceptable engineering experience are eligible to be licensed through the Professional Engineers Ontario and can become Professional Engineers P.Eng. The PEO, however, does not recognize any online or distance education, and does not consider computer science programs to be equivalent to software engineering programs despite the tremendous overlap between the two. This has sparked controversy and a certification war. It has also kept the number of P.Eng holders for the profession exceptionally low. The vast majority of working professionals in the field hold a degree in CS, not SE. Given the difficult certification path for holders of non-SE degrees, most never bother to pursue the license.
Impact of globalization
The initial impact of outsourcing, and the relatively lower cost of international human resources in developing third world countries, led to a massive migration of software development activities from corporations in North America and Europe to India and later China, Russia, and other developing countries. This approach had some flaws, mainly the distance and time-zone differences that hindered human interaction between clients and developers, as well as the massive job transfer. This had a negative impact on many aspects of the software engineering profession. For example, some students in the developed world avoid education related to software engineering because of the fear of offshore outsourcing (importing software products or services from other countries) and of being displaced by foreign visa workers. Although statistics do not currently show a threat to software engineering itself, a related career, computer programming, does appear to have been affected. Nevertheless, the ability to smartly leverage offshore and near-shore resources via the follow-the-sun workflow has improved the overall operational capability of many organizations. When North Americans leave work, Asians are just arriving to work. When Asians are leaving work, Europeans arrive to work. This provides a continuous ability to have human oversight on business-critical processes 24 hours per day, without paying overtime compensation or disrupting sleep patterns, a key human need.
While global outsourcing has several advantages, global – and generally distributed – development can run into serious difficulties resulting from the distance between developers. The key elements of this type of distance have been identified as geographical, temporal, cultural and communicational (which includes the use of different languages and dialects of English in different locations). Research has been carried out in the area of global software development over the last 15 years and an extensive body of relevant work published that highlights the benefits and problems associated with this complex activity. As with other aspects of software engineering, research is ongoing in this and related areas.
Prizes
There are various prizes in the field of software engineering:
ACM-AAAI Allen Newell Award- USA. Awarded to career contributions that have breadth within computer science, or that bridge computer science and other disciplines.
BCS Lovelace Medal. Awarded to individuals who have made outstanding contributions to the understanding or advancement of computing.
ACM SIGSOFT Outstanding Research Award, selected for individual(s) who have made “significant and lasting research contributions to the theory or practice of software engineering.”
More ACM SIGSOFT Awards.
The Codie award, a yearly award issued by the Software and Information Industry Association for excellence in software development within the software industry.
Harlan Mills Award for "contributions to the theory and practice of the information sciences, focused on software engineering".
ICSE Most Influential Paper Award.
Jolt Award, also for the software industry.
Stevens Award given in memory of Wayne Stevens.
Criticism
Some call for licensing, certification and codified bodies of knowledge as mechanisms for spreading the engineering knowledge and maturing the field.
Some claim that the concept of software engineering is so new that it is rarely understood, and it is widely misinterpreted, including in software engineering textbooks, papers, and among the communities of programmers and crafters.
Some claim that a core issue with software engineering is that its approaches are not empirical enough because a real-world validation of approaches is usually absent, or very limited and hence software engineering is often misinterpreted as feasible only in a "theoretical environment."
Edsger Dijkstra, a founder of many of the concepts in software development today, rejected the idea of "software engineering" up until his death in 2002, arguing that those terms were poor analogies for what he called the "radical novelty" of computer science:
| Technology | Disciplines | null |
18070419 | https://en.wikipedia.org/wiki/Yellow%20supergiant | Yellow supergiant | A yellow supergiant (YSG) is a star, generally of spectral type F or G, having a supergiant luminosity class (e.g. Ia or Ib). They are stars that have evolved away from the main sequence, expanding and becoming more luminous.
Yellow supergiants are hotter and smaller than red supergiants; naked eye examples include Polaris, Alpha Leporis, Alpha Persei, Delta Canis Majoris and Iota¹ Scorpii. Many of them are variable stars, mostly pulsating Cepheids such as δ Cephei itself.
Spectrum
Yellow supergiants generally have spectral types of F and G, although sometimes late A or early K stars are included. These spectral types are characterised by hydrogen lines that are very strong in class A, weakening through F and G until they are very weak or absent in class K. Calcium H and K lines are present in late A spectra, but stronger in class F, and strongest in class G, before weakening again in cooler stars. Lines of ionised metals are strong in class A, weaker in class F and G, and absent from cooler stars. In class G, neutral metal lines are also found, along with CH molecular bands.
Supergiants are identified in the Yerkes spectral classification by luminosities classes Ia and Ib, with intermediates such as Iab and Ia/ab sometimes being used. These luminosity classes are assigned using spectral lines that are sensitive to luminosity. Historically, the Ca H and K line strengths have been used for yellow stars, as well as the strengths of various metal lines. The neutral oxygen lines, such as the 777.3 nm triplet, have also been used since they are extremely sensitive to luminosity across a wide range of spectral types. Modern atmospheric models can accurately match all the spectral line strengths and profiles to give a spectral classification, or even skip straight to the physical parameters of the star, but in practice luminosity classes are still usually assigned by comparison against standard stars.
Some yellow supergiant spectral standard stars:
F0 Ib: α Leporis
F2 Ib: 89 Herculis
F5 Ib: α Persei
F8 Ia: δ Canis Majoris
G0 Ib: μ Persei
G2 Ib: α Aquarii
G5 Ib: 9 Pegasi
G8 Ib: ε Geminorum
Properties
Yellow supergiants have a relatively narrow range of temperatures corresponding to their spectral types, from about 4,000 K to 7,000 K. Their luminosities range from about upwards, with the most luminous stars exceeding . The high luminosities indicate that they are much larger than the sun, from about to .
The masses of yellow supergiants vary greatly, from less than the sun for stars such as W Virginis to or more (e.g. V810 Centauri). Corresponding surface gravities (log(g) cgs) are around 1–2 for high-mass supergiants, but can be as low as 0 for low-mass supergiants.
Yellow supergiants are rare stars, much less common than red supergiants and main sequence stars. In M31 (the Andromeda Galaxy), only 16 yellow supergiants are known to be associated with evolution from class O stars, of which around 25,000 are visible.
Variability
Many yellow supergiants are in a region of the HR diagram known as the instability strip because their temperatures and luminosities cause them to be dynamically unstable. Most yellow supergiants observed in the instability strip are Cepheid variables, named for δ Cephei, which pulsate with well-defined periods that are related to their luminosities. This means they can be used as standard candles for determining the distance of stars knowing only their period of variability. Cepheids with longer periods are cooler and more luminous.
Two distinct types of Cepheid variable have been identified, which have different period-luminosity relationships: Classical Cepheid variables are young massive population I stars; type II Cepheids are older population II stars with low masses, including W Virginis variables, BL Herculis variables and RV Tauri variables. The Classical Cepheids are more luminous than the type II Cepheids with the same period.
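As a rough illustration of the standard-candle idea, the sketch below converts a classical Cepheid's period and apparent magnitude into a distance through the distance modulus. The period–luminosity coefficients used are only an illustrative calibration (published calibrations differ slightly), and interstellar extinction is ignored.

```python
import math

def absolute_magnitude(period_days):
    """Illustrative classical Cepheid period-luminosity relation: M_V = -2.43*(log10(P) - 1) - 4.05."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(period_days, apparent_magnitude):
    """Distance from the distance modulus m - M = 5*log10(d / 10 pc), neglecting extinction."""
    modulus = apparent_magnitude - absolute_magnitude(period_days)
    return 10.0 ** (modulus / 5.0 + 1.0)

# An assumed Cepheid with a 10-day period observed at apparent magnitude 12:
print(round(distance_parsecs(10.0, 12.0)))  # roughly 16,000 parsecs with this calibration
```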
R Coronae Borealis variables are often yellow supergiants, but their variability is produced by a different mechanism from the Cepheids. At irregular intervals, they become obscured by dust condensation around the star and their brightness drops dramatically.
Evolution
Supergiants are stars that have evolved away from the main sequence after exhausting the hydrogen in their cores. Yellow supergiants are a heterogeneous group of stars crossing the standard categories of stars in the HR diagram at various different stages of their evolution.
Stars more massive than spend a few million years on the main sequence as class O and early B stars until the hydrogen in their cores becomes depleted. Then they expand and cool to become supergiants. They typically spend a few thousand years as a yellow supergiant while cooling, then one to four million years as a red supergiant. Supergiants make up less than 1% of stars, though the proportion was different in earlier eras of the universe. The brevity of these phases and the scarcity of such massive stars explain why yellow supergiants are so uncommon.
Some red supergiants undergo a blue loop, temporarily re-heating and becoming yellow or even blue supergiants before cooling again. Stellar models show that blue loops rely on particular chemical makeups and other assumptions, but they are most likely for stars of low red supergiant mass. While cooling for the first time or when performing a sufficiently extended blue loop, yellow supergiants will cross the instability strip and pulsate as Classical Cepheid variables with periods around ten days and longer.
Intermediate mass stars leave the main sequence by cooling along the subgiant branch until they reach the red-giant branch. Stars more massive than about have a sufficiently large helium core that it begins fusion before becoming degenerate. These stars will perform a blue loop.
For masses between about and , the blue loop can extend to F and G spectral types at luminosities reaching . These stars may develop supergiant luminosity classes, especially if they are pulsating. When these stars cross the instability strip they will pulsate as short period Cepheids. Blue loops in these stars can last for around 10 million years, so this type of yellow supergiant is more common than the more luminous types.
Stars with masses similar to the sun develop degenerate helium cores after they leave the main sequence and ascend to the tip of the red-giant branch where they ignite helium in a flash. They then fuse core helium on the horizontal branch with luminosities too low to be considered supergiants.
Stars leaving the blue half of the horizontal branch to be classified in the asymptotic giant branch (AGB) pass through the yellow classifications and will pulsate as BL Herculis variables. Such yellow stars may be given a supergiant luminosity class despite their low masses, aided by their luminous pulsations. On the AGB, thermal pulses from the helium-fusing shell may cause a blue loop across the instability strip. Such stars will pulsate as W Virginis variables and again may be classified as relatively low luminosity yellow supergiants. When the hydrogen-fusing shell of a low or intermediate mass star on the AGB nears its surface, the cool outer layers are rapidly lost, which causes the star to heat up, eventually becoming a white dwarf. These stars have masses lower than the sun, but luminosities that can be or higher, so they will become yellow supergiants for a short time. Post-AGB stars are believed to pulsate as RV Tauri variables when they cross the instability strip.
The evolutionary status of yellow supergiant R Coronae Borealis variables is unclear. They may be post-AGB stars reignited by a late helium shell flash, or they could be formed from white dwarf mergers.
It is expected that first-time yellow supergiants mature to the red supergiant stage without any supernova. The cores of some post-red supergiant yellow supergiants might collapse and trigger a supernova. A handful of supernovae have been associated with apparent yellow supergiant progenitors that are not luminous enough to be post-red supergiants. If these are confirmed then an explanation must be found for how a star of moderate mass still with a helium core would cause a core-collapse supernova. The obvious candidate in such cases is always some form of binary interaction.
Based on reports from Chinese astronomers in the 2nd/1st century BC, the red supergiant Betelgeuse was described as yellow, hinting it may have been a yellow supergiant at the time.
Yellow hypergiants
Particularly luminous and unstable yellow supergiants are often grouped into a separate class of stars called the yellow hypergiants. These are mostly thought to be post-red supergiant stars, very massive stars that have lost a considerable portion of their outer layers and are now evolving towards becoming blue supergiants and Wolf-Rayet stars.
| Physical sciences | Stellar astronomy | Astronomy |
18075548 | https://en.wikipedia.org/wiki/Units%20of%20information | Units of information | A unit of information is any unit of measure of digital data size. In digital computing, a unit of information is used to describe the capacity of a digital data storage device. In telecommunications, a unit of information is used to describe the throughput of a communication channel. In information theory, a unit of information is used to measure information contained in messages and the entropy of random variables.
Due to the need to work with data sizes that range from very small to very large, units of information cover a wide range of data sizes. Units are defined as multiples of a smaller unit except for the smallest unit which is based on convention and hardware design. Multiplier prefixes are used to describe relatively large sizes.
For binary hardware, by far the most common hardware today, the smallest unit is the bit, a portmanteau of binary digit, which represents a value that is one of two possible values, typically shown as 0 and 1. The nibble, 4 bits, represents the value of a single hexadecimal digit. The byte, 8 bits, 2 nibbles, is possibly the most commonly known and used base unit to describe data size. The word is a size that varies with, and has special importance for, a particular hardware context. On modern hardware, a word is typically 2, 4 or 8 bytes, but the size varies dramatically on older hardware. Larger sizes can be expressed as multiples of a base unit via SI metric prefixes (powers of ten) or the newer and generally more accurate IEC binary prefixes (powers of two).
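As a small illustration of these groupings (a quick Python sketch, not tied to any particular hardware), one byte can be split into its two nibbles, each corresponding to one hexadecimal digit:

    # Split one byte (8 bits) into two nibbles (4 bits each).
    value = 0xA7                       # one byte
    high_nibble = (value >> 4) & 0xF   # upper 4 bits -> 0xA
    low_nibble = value & 0xF           # lower 4 bits -> 0x7
    print(bin(value), hex(high_nibble), hex(low_nibble))   # 0b10100111 0xa 0x7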
Information theory
In 1928, Ralph Hartley observed a fundamental storage principle, which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of the number N of possible states of that system, denoted logb N. Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely logc b.
Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.
When b is 2, the unit is the shannon, equal to the information content of one "bit". A system with 8 possible states, for example, can store up to log2 8 = 3 bits of information. Other units that have been named include:
Base b = 3: the unit is called the trit, and is equal to log2 3 (≈ 1.585) bits.
Base b = 10: the unit is called the decimal digit, hartley, ban, decit, or dit, and is equal to log2 10 (≈ 3.322) bits.
Base b = e, the base of natural logarithms: the unit is called the nat, nit, or nepit (from Neperian), and is worth log2 e (≈ 1.443) bits.
The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.
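As a concrete illustration of the relationships above, the following Python sketch (illustrative only) computes the information content of a system of N equally likely states in whichever unit the logarithm base selects:

    import math

    def information_content(num_states, base=2):
        # Information needed to distinguish num_states equally likely states,
        # in units set by the logarithm base: 2 -> shannons (bits), 3 -> trits,
        # 10 -> hartleys, math.e -> nats.
        return math.log(num_states, base)

    print(information_content(8))                 # 3.0 bits for 8 states
    print(information_content(10))                # ~3.322 bits in one decimal digit
    print(math.log(3, 2), math.log(math.e, 2))    # ~1.585 bits per trit, ~1.443 bits per nat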
Units derived from bit
Several conventional names are used for collections or groups of bits.
Byte
Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture, but today it almost always means eight bits – that is, an octet. An 8-bit byte can represent 256 (28) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte (IEC 80000-13 uses "o" for octet in French, but also allows "B" in English). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
Nibble
A group of four bits, or half a byte, is sometimes called a nibble, nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same number of possible values as one hexadecimal digit has.
Word, block, and page
Computers usually manipulate bits in groups of a fixed size, conventionally called words. The number of bits in a word is usually defined by the size of the registers in the computer's CPU, or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture more commonly known as x86-32, a word is 32 bits, but other past and current architectures use words with 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, 72 bits or others.
Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad").
Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks, or, in CPU caches, cache lines.
Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages.
Multiplicative prefixes
A unit for a large amount of data can be formed using either a metric or binary prefix with a base unit. For storage, the base unit is typically byte. For communication throughput, a base unit of bit is common. For example, using the metric kilo prefix, a kilobyte is 1000 bytes and a kilobit is 1000 bits.
Use of metric prefixes is common, but often inaccurate, since binary storage hardware is organized with capacities that are powers of 2, not powers of 10 as the metric prefixes are. In the context of computing, the metric prefixes are often intended to mean something other than their normal meaning: a kilobyte frequently denotes 1024 bytes even though the standard meaning of kilo is 1000, and mega, which normally means one million, is often used to mean 2^20 = 1,048,576. The mismatch between the nominal metric size and the implied binary size grows with each successive prefix.
The International Electrotechnical Commission (IEC) issued a standard that introduces binary prefixes that accurately represent binary sizes without changing the meaning of the standard metric terms. Rather than being based on powers of 1000, these are based on powers of 1024, which is itself a power of 2 (2^10).
The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated.
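A short sketch (with illustrative values) shows how far the metric and binary prefixes diverge, and why a drive marketed as "500 GB" reports fewer gibibytes:

    SI = {"kB": 10**3, "MB": 10**6, "GB": 10**9, "TB": 10**12}
    IEC = {"KiB": 2**10, "MiB": 2**20, "GiB": 2**30, "TiB": 2**40}

    # A "500 GB" (metric) capacity expressed in binary units:
    size_bytes = 500 * SI["GB"]
    print(size_bytes / IEC["GiB"])    # ~465.66 GiB

    # The relative gap between matching prefixes grows with magnitude:
    for si, iec in [("kB", "KiB"), ("MB", "MiB"), ("GB", "GiB"), ("TB", "TiB")]:
        print(si, iec, round(IEC[iec] / SI[si], 4))   # 1.024, 1.0486, 1.0737, 1.0995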
Size examples
1 bit: Answer to a yes/no question
1 byte: A number from 0 to 255
90 bytes: Enough to store a typical line of text from a book
512 bytes = 0.5 KiB: The typical sector size of an old style hard disk drive (modern Advanced Format sectors are 4096 bytes).
1024 bytes = 1 KiB: A block size in some older UNIX filesystems
2048 bytes = 2 KiB: A CD-ROM sector
4096 bytes = 4 KiB: A memory page in x86 (since Intel 80386) and many other architectures, also the modern Advanced Format hard disk drive sector size.
4 kB: About one page of text from a novel
120 kB: The text of a typical pocket book
1 MiB: A 1024×1024 pixel bitmap image with 256 colors (8 bpp color depth)
3 MB: A three-minute song (133 kbit/s)
650–900 MB – a CD-ROM
1 GB: 114 minutes of uncompressed CD-quality audio at 1.4 Mbit/s
16 GB: DDR5 DRAM laptop memory under $40 (as of early 2024)
32/64/128 GB: Three common sizes of USB flash drives
1 TB: The size of a $30 hard disk (as of early 2024)
6 TB: The size of a $100 hard disk (as of early 2022)
16 TB: The size of a small/cheap $130 (as of early 2024) enterprise SAS hard disk drive
24 TB: The size of a $440 (as of early 2024) "video" hard disk drive
32 TB: Largest hard disk drive (as of mid-2024)
100 TB: Largest commercially available solid-state drive (as of mid-2024)
200 TB: Largest solid-state drive constructed (prediction for mid-2022)
1.6 PB (1600 TB): Amount of possible storage in one 2U server (world record as of 2021, using 100 TB solid-state drives).
1.3 ZB: Prediction of the volume of the whole internet in 2016
Obsolete and unusual units
Some notable unit names that are today obsolete or used only in limited contexts:
1 bit: unibit
2 bits: dibit, crumb, quartic digit, quad, semi-nibble, nyp
3 bits: tribit, triad, triade
4 bits: see nibble
5 bits: pentad, pentade,
6 bits: byte (in early IBM machines using BCD alphamerics), hexad, hexade, sextet
7 bits: heptad, heptade
8 bits: octad, octade
9 bits: nonet, rarely used
10 bits: declet, decle
12 bits: slab
15 bits: parcel (on CDC 6600 and CDC 7600)
16 bits: doublet, wyde, parcel (on Cray-1), chawmp (on a 32-bit machine)
18 bits: chomp, chawmp (on a 36-bit machine)
32 bits: quadlet, tetra
64 bits: octlet, octa
96 bits: bentobox (in ITRON OS)
128 bits: hexlet, paragraph (on Intel x86 processors)
256 bytes: page (on Intel 4004, 8080 and 8086 processors, also many other 8-bit processors – typically much larger on many 16-bit/32-bit processors)
6 trits: tryte
combit, comword
| Physical sciences | Basics | Basics and measurement |
16874172 | https://en.wikipedia.org/wiki/Topographic%20Abney%20level | Topographic Abney level | An Abney level and clinometer is an instrument used in surveying which consists of a fixed sighting tube, a movable spirit level that is connected to a pointing arm, and a protractor scale. An internal mirror allows the user to see the bubble in the level while sighting a distant target. It can be used as a hand-held instrument or mounted on a Jacob's staff for more precise measurement, and it is small enough to carry in a coat pocket.
The Abney level is an easy to use, relatively inexpensive and, when used correctly, accurate surveying tool. Abney levels typically include scales graduated to measure degrees of arc and percent grade and, in topographic Abney levels, grade in feet per surveyor's chain and a chainage correction. The latter is the cosine of the angle, used to convert distances measured along the slope to horizontal distances. By using trigonometry, the user of an Abney level can determine height, volume, and grade, as in the sketch below.
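As a sketch of that trigonometry (the numbers are hypothetical and purely illustrative), converting a slope reading into a horizontal distance and a rise looks like this:

    import math

    # Hypothetical reading: a 12% grade sighted over 80 m measured along the slope.
    slope_distance_m = 80.0
    grade_percent = 12.0

    angle_rad = math.atan(grade_percent / 100)              # grade -> slope angle
    horizontal_m = slope_distance_m * math.cos(angle_rad)   # chainage correction (cosine)
    rise_m = slope_distance_m * math.sin(angle_rad)         # height gained over the sight

    print(round(math.degrees(angle_rad), 2),
          round(horizontal_m, 2), round(rise_m, 2))         # ~6.84 deg, ~79.43 m, ~9.53 m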
Abney levels are made with square tubular bodies so that they may also be used to directly measure the slopes of plane surfaces by simply placing the body of the level on the surface, adjusting the level, and then reading the angle off of the scale.
Origins
The Abney level was invented by Sir William de Wiveleslie Abney (24 July 1843 – 3 December 1920), an English astronomer and chemist best known for his pioneering work on color photography and color vision. Abney invented this instrument while employed by the School of Military Engineering in Chatham, England, prior to late 1870. It is described by W. & L. E. Gurley as an English modification of the Locke hand level.
Elliott Brothers of London registered an "improved clinometer and spirit level combined" in December 1870 based on "the old form as originally designed by Lieutenant Abney."
By 1871, a committee of the Royal Geographical Society recommended a long list of instruments that explorers should carry. Along with necessary tools such as a watch, compass, sextant and plenty of paper, the committee included "a pocket level (Abney's)" in a secondary list of "additional instruments, not necessary, but convenient."
Usage
In 1914 and 1915, the Forestry Quarterly published a series of articles on the use of the Abney level. These tutorial articles remain useful today, but the primary reference for usage is the 1927 Abney Level Handbook.
The Abney level is typically used at the eye height of the surveyor, either hand-held or mounted on a staff at that height. To measure lines on a particular slope, the desired angle or grade is first set on the level and then the surveyor sights through the sighting tube and brings the cross-hair in line with the bubble in the level while viewing the target. This allows the surveyor to see if the target is above or below the line of sight.
To measure an unknown slope, the surveyor first sights a target along that slope and then adjusts the angle of the level until the bubble is centered on the cross-hair. Once this is done, the slope may be read from the scale.
Because the level is typically held at the surveyor's eye-height, it is common to use the face of a second surveyor of similar height as a target. If the second surveyor is not the same height, the approximate location of eye-height must be noted (i.e. chin, nose, top of head). Mounting a face-sized target at eye-height on a level staff may be more accurate. Because most Abney levels do not contain a telescope, direct reading from a level staff is only possible at short range, although it is possible to make special staffs that can be read at a distance without magnification.
Common uses
Abney levels remain in common use in several fields:
In topographic surveying, to determine slope to horizontal distance calculation, contour tracing and relative heights.
In forestry, for tree height measurement.
In mining and mine safety inspection, to measure the grades of haulage roads.
In geology, in measurements of rock outcrops and fault scarps.
| Technology | Surveying tools | null |
695215 | https://en.wikipedia.org/wiki/Horizon%20problem | Horizon problem | The horizon problem (also known as the homogeneity problem) is a cosmological fine-tuning problem within the Big Bang model of the universe. It arises due to the difficulty in explaining the observed homogeneity of causally disconnected regions of space in the absence of a mechanism that sets the same initial conditions everywhere. It was first pointed out by Wolfgang Rindler in 1956.
The most commonly accepted solution is cosmic inflation. Different solutions propose a cyclic universe or a variable speed of light.
Background
Astronomical distances and particle horizons
The distances of observable objects in the night sky correspond to times in the past. We use the light-year (the distance light can travel in the time of one Earth year) to describe these cosmological distances. A galaxy measured at ten billion light-years appears to us as it was ten billion years ago, because the light has taken that long to travel to the observer. If one were to look at a galaxy ten billion light-years away in one direction and another in the opposite direction, the total distance between them is twenty billion light-years. This means that the light from the first has not yet reached the second because the universe is only about 13.8 billion years old. In a more general sense, there are portions of the universe that are visible to us, but invisible to each other, outside each other's respective particle horizons.
Causal information propagation
In accepted relativistic physical theories, no information can travel faster than the speed of light. In this context, "information" means "any sort of physical interaction". For instance, heat will naturally flow from a hotter area to a cooler one, and in physics terms, this is one example of information exchange. Given the example above, the two galaxies in question cannot have shared any sort of information; they are not in causal contact. In the absence of common initial conditions, one would expect, then, that their physical properties would be different, and more generally, that the universe as a whole would have varying properties in causally disconnected regions.
Horizon problem
Contrary to this expectation, the observations of the cosmic microwave background (CMB) and galaxy surveys show that the observable universe is nearly isotropic, which, through the Copernican principle, also implies homogeneity. CMB sky surveys show that the temperatures of the CMB are coordinated to a level of ΔT/T ≈ 10^(−5), where ΔT is the difference between the observed temperature in a region of the sky and the average temperature of the sky T. This coordination implies that the entire sky, and thus the entire observable universe, must have been causally connected long enough for the universe to come into thermal equilibrium.
According to the Big Bang model, as the density of the expanding universe dropped, it eventually reached a temperature where photons fell out of thermal equilibrium with matter; they decoupled from the electron-proton plasma and began free-streaming across the universe. This moment in time is referred to as the epoch of Recombination, when electrons and protons became bound to form electrically neutral hydrogen; without free electrons to scatter the photons, the photons began free-streaming. This epoch is observed through the CMB. Since we observe the CMB as a background to objects at a smaller redshift, we describe this epoch as the transition of the universe from opaque to transparent. The CMB physically describes the ‘surface of last scattering’ as it appears to us as a surface, or a background, as shown in the figure below.
Note we use conformal time in the following diagrams. Conformal time describes the amount of time it would take a photon to travel from the location of the observer to the farthest observable distance (if the universe stopped expanding right now).
The decoupling, or the last scattering, is thought to have occurred about 300,000 years after the Big Bang, or at a redshift of about z = 1100. We can determine both the approximate angular diameter of the universe and the physical size of the particle horizon that had existed at this time.
The angular diameter distance, in terms of redshift z, is described by d_A(z) = r(z)/(1 + z), where r(z) is the comoving distance. If we assume a flat cosmology then r(z) is obtained by integrating c dz'/H(z') from 0 to z.
The epoch of recombination occurred during a matter dominated era of the universe, so we can approximate H(z) as H(z) ≈ H0 √Ωm (1 + z)^(3/2). Putting these together, we see that the comoving distance out to a redshift z is
r(z) = (2c / (H0 √Ωm)) (1 − 1/√(1 + z)).
Since z_rec ≈ 1100 ≫ 1, we can approximate the above equation as
r(z_rec) ≈ 2c / (H0 √Ωm).
Substituting this into our definition of angular diameter distance, we obtain
d_A(z_rec) ≈ 2c / (H0 √Ωm (1 + z_rec)).
From this formula, we obtain the angular diameter distance of the cosmic microwave background as roughly 14 Mpc in physical units (for typical values of H0 and Ωm).
The particle horizon describes the maximum distance light particles could have traveled to the observer given the age of the universe. We can determine the comoving particle horizon at the time of recombination using the same integral, this time taken from z_rec to infinity:
r_hor ≈ (2c / (H0 √Ωm)) × 1/√(1 + z_rec).
To get the physical size of the particle horizon d_hor, the comoving size is divided by (1 + z_rec):
d_hor ≈ (2c / (H0 √Ωm)) (1 + z_rec)^(−3/2).
The angle this horizon subtends on the sky is then d_hor/d_A ≈ 1/√(1 + z_rec) ≈ 0.03 radians, or just under 2 degrees.
We would expect any region of the CMB within 2 degrees of angular separation to have been in causal contact, but at any scale larger than 2° there should have been no exchange of information.
CMB regions that are separated by more than 2° lie outside one another's particle horizons and are causally disconnected. The horizon problem describes the fact that we see isotropy in the CMB temperature across the entire sky, despite the entire sky not being in causal contact to establish thermal equilibrium. Refer to the timespace diagram to the right for a visualization of this problem.
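The numbers above can be reproduced with a short calculation. The following Python sketch assumes illustrative parameter values (H0 ≈ 70 km/s/Mpc, Ωm ≈ 0.3, z_rec ≈ 1100) and the matter-dominated approximations used in the derivation, so it is a rough estimate rather than a precise cosmological computation:

    import math

    H0 = 70.0             # Hubble constant, km/s/Mpc (assumed illustrative value)
    omega_m = 0.3         # matter density parameter (assumed illustrative value)
    z_rec = 1100.0        # redshift of recombination
    c = 299792.458        # speed of light, km/s

    prefactor = 2 * (c / H0) / math.sqrt(omega_m)    # 2c / (H0 sqrt(omega_m)), in Mpc

    d_A = prefactor / (1 + z_rec)                    # angular diameter distance of the CMB
    d_hor = prefactor * (1 + z_rec) ** -1.5          # physical particle horizon at recombination
    theta = d_hor / d_A                              # equals (1 + z_rec) ** -0.5, in radians

    print(round(d_A, 1), round(d_hor, 2), round(math.degrees(theta), 2))
    # roughly 14.2 Mpc, 0.43 Mpc, and 1.7 degrees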
If the universe started with even slightly different temperatures in different places, the CMB should not be isotropic unless there is a mechanism that evens out the temperature by the time of decoupling. In reality, the CMB has the same temperature in the entire sky, about 2.725 K.
Inflationary model
The theory of cosmic inflation has attempted to address the problem by positing an extremely brief (roughly 10^(−32) second) period of exponential expansion within the first second of the history of the universe, due to a scalar field interaction. According to the inflationary model, the universe increased in size by many orders of magnitude, from a small and causally connected region in near equilibrium. Inflation then expanded the universe rapidly, isolating nearby regions of spacetime by growing them beyond the limits of causal contact, effectively "locking in" the uniformity at large distances. Essentially, the inflationary model suggests that the universe was entirely in causal contact in the very early universe. Inflation then expands this universe by approximately 60 e-foldings (the scale factor increases by a factor of e^60 ≈ 10^26). We observe the CMB after inflation has occurred at a very large scale. It maintained thermal equilibrium to this large size because of the rapid expansion from inflation.
One consequence of cosmic inflation is that the anisotropies in the Big Bang due to quantum fluctuations are reduced but not eliminated. Differences in the temperature of the cosmic background are smoothed by cosmic inflation, but they still exist. The theory predicts a spectrum for the anisotropies in the microwave background which is mostly consistent with observations from WMAP and COBE.
However, gravity alone may be sufficient to explain this homogeneity.
Variable-speed-of-light theories
Cosmological models employing a variable speed of light have been proposed to resolve the horizon problem and to provide an alternative to cosmic inflation. In the VSL models, the fundamental constant c, denoting the speed of light in vacuum, is greater in the early universe than its present value, effectively increasing the particle horizon at the time of decoupling sufficiently to account for the observed isotropy of the CMB.
| Physical sciences | Physical cosmology | Astronomy |
695241 | https://en.wikipedia.org/wiki/Scale%20invariance | Scale invariance | In physics, mathematics and statistics, scale invariance is a feature of objects or laws that do not change if scales of length, energy, or other variables, are multiplied by a common factor, and thus represent a universality.
The technical term for this transformation is a dilatation (also known as dilation). Dilatations can form part of a larger conformal symmetry.
In mathematics, scale invariance usually refers to an invariance of individual functions or curves. A closely related concept is self-similarity, where a function or curve is invariant under a discrete subset of the dilations. It is also possible for the probability distributions of random processes to display this kind of scale invariance or self-similarity.
In classical field theory, scale invariance most commonly applies to the invariance of a whole theory under dilatations. Such theories typically describe classical physical processes with no characteristic length scale.
In quantum field theory, scale invariance has an interpretation in terms of particle physics. In a scale-invariant theory, the strength of particle interactions does not depend on the energy of the particles involved.
In statistical mechanics, scale invariance is a feature of phase transitions. The key observation is that near a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for an explicitly scale-invariant theory to describe the phenomena. Such theories are scale-invariant statistical field theories, and are formally very similar to scale-invariant quantum field theories.
Universality is the observation that widely different microscopic systems can display the same behaviour at a phase transition. Thus phase transitions in many different systems may be described by the same underlying scale-invariant theory.
In general, dimensionless quantities are scale-invariant. The analogous concept in statistics are standardized moments, which are scale-invariant statistics of a variable, while the unstandardized moments are not.
Scale-invariant curves and self-similarity
In mathematics, one can consider the scaling properties of a function or curve f(x) under rescalings of the variable x. That is, one is interested in the shape of f(λx) for some scale factor λ, which can be taken to be a length or size rescaling. The requirement for f(x) to be invariant under all rescalings is usually taken to be
f(λx) = λ^Δ f(x)
for some choice of exponent Δ, and for all dilations λ. This is equivalent to f being a homogeneous function of degree Δ.
Examples of scale-invariant functions are the monomials f(x) = x^n, for which Δ = n, in that clearly f(λx) = (λx)^n = λ^n f(x).
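A one-line numerical check of this homogeneity property, with arbitrary illustrative values:

    # Check that f(x) = x**n satisfies f(lam * x) == lam**n * f(x).
    def f(x, n=3):
        return x ** n

    x, lam, n = 2.5, 7.0, 3
    print(f(lam * x, n), lam ** n * f(x, n))   # both 5359.375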
An example of a scale-invariant curve is the logarithmic spiral, a kind of curve that often appears in nature. In polar coordinates (r, θ), the spiral can be written as
θ = (1/b) ln(r/a).
Allowing for rotations of the curve, it is invariant under all rescalings λ; that is, θ(λr) is identical to a rotated version of θ(r).
Projective geometry
The idea of scale invariance of a monomial generalizes in higher dimensions to the idea of a homogeneous polynomial, and more generally to a homogeneous function. Homogeneous functions are the natural denizens of projective space, and homogeneous polynomials are studied as projective varieties in projective geometry. Projective geometry is a particularly rich field of mathematics; in its most abstract forms, the geometry of schemes, it has connections to various topics in string theory.
Fractals
It is sometimes said that fractals are scale-invariant, although more precisely, one should say that they are self-similar. A fractal is equal to itself typically for only a discrete set of values λ, and even then a translation and rotation may have to be applied to match the fractal up to itself.
Thus, for example, the Koch curve scales with a fixed exponent Δ, but the scaling holds only for values of λ = (1/3)^n for integer n. In addition, the Koch curve scales not only at the origin, but, in a certain sense, "everywhere": miniature copies of itself can be found all along the curve.
Some fractals may have multiple scaling factors at play at once; such scaling is studied with multi-fractal analysis.
Periodic external and internal rays are invariant curves.
Scale invariance in stochastic processes
If P(f) is the average, expected power at frequency f, then noise scales as
P(f) ∝ f^Δ,
with Δ = 0 for white noise, Δ = −1 for pink noise, and Δ = −2 for Brownian noise (and more generally, Brownian motion).
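A quick numerical illustration of this scaling (a sketch with an arbitrary seed and signal length) estimates the spectral exponent Δ of simulated white noise and of Brownian motion obtained by summing it:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2 ** 16
    white = rng.standard_normal(n)    # expected slope about 0
    brown = np.cumsum(white)          # Brownian motion, expected slope about -2

    def spectral_slope(signal):
        # Fit a power law to the periodogram on log-log axes.
        power = np.abs(np.fft.rfft(signal)) ** 2
        freq = np.fft.rfftfreq(signal.size)
        mask = freq > 0
        slope, _ = np.polyfit(np.log(freq[mask]), np.log(power[mask]), 1)
        return slope

    print(round(spectral_slope(white), 2), round(spectral_slope(brown), 2))   # roughly 0 and -2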
More precisely, scaling in stochastic systems concerns itself with the likelihood of choosing a particular configuration out of the set of all possible random configurations. This likelihood is given by the probability distribution.
Examples of scale-invariant distributions are the Pareto distribution and the Zipfian distribution.
Scale-invariant Tweedie distributions
Tweedie distributions are a special case of exponential dispersion models, a class of statistical models used to describe error distributions for the generalized linear model and characterized by closure under additive and reproductive convolution as well as under scale transformation. These include a number of common distributions: the normal distribution, Poisson distribution and gamma distribution, as well as more unusual distributions like the compound Poisson-gamma distribution, positive stable distributions, and extreme stable distributions.
Consequent to their inherent scale invariance Tweedie random variables Y demonstrate a variance var(Y) to mean E(Y) power law:
var(Y) = a [E(Y)]^p,
where a and p are positive constants. This variance to mean power law is known in the physics literature as fluctuation scaling, and in the ecology literature as Taylor's law.
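A minimal illustration of the power law, assuming Poisson-distributed counts (the Tweedie case p = 1, for which var(Y) = E(Y)): the exponent p can be recovered by fitting the variance to mean relationship on simulated data.

    import numpy as np

    rng = np.random.default_rng(1)
    means, variances = [], []
    for mu in [0.5, 2, 8, 32, 128]:
        y = rng.poisson(mu, size=200_000)
        means.append(y.mean())
        variances.append(y.var())

    # Fit log var(Y) = log a + p * log E(Y); for Poisson data expect a ~ 1 and p ~ 1.
    p, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    print(round(p, 2), round(np.exp(log_a), 2))   # approximately 1.0 and 1.0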
Random sequences, governed by the Tweedie distributions and evaluated by the method of expanding bins, exhibit a biconditional relationship between the variance to mean power law and power law autocorrelations. The Wiener–Khinchin theorem further implies that any sequence exhibiting a variance to mean power law under these conditions will also manifest 1/f noise.
The Tweedie convergence theorem provides a hypothetical explanation for the wide manifestation of fluctuation scaling and 1/f noise. It requires, in essence, that any exponential dispersion model that asymptotically manifests a variance to mean power law will be required to express a variance function that comes within the domain of attraction of a Tweedie model. Almost all distribution functions with finite cumulant generating functions qualify as exponential dispersion models and most exponential dispersion models manifest variance functions of this form. Hence many probability distributions have variance functions that express this asymptotic behavior, and the Tweedie distributions become foci of convergence for a wide range of data types.
Much as the central limit theorem requires certain kinds of random variables to have as a focus of convergence the Gaussian distribution and express white noise, the Tweedie convergence theorem requires certain non-Gaussian random variables to express 1/f noise and fluctuation scaling.
Cosmology
In physical cosmology, the power spectrum of the spatial distribution of the cosmic microwave background is near to being a scale-invariant function. Although in mathematics this means that the spectrum is a power-law, in cosmology the term "scale-invariant" indicates that the amplitude, P(k), of primordial fluctuations as a function of wave number, k, is approximately constant, i.e. a flat spectrum. This pattern is consistent with the proposal of cosmic inflation.
Scale invariance in classical field theory
Classical field theory is generically described by a field, or set of fields, φ, that depend on coordinates, x. Valid field configurations are then determined by solving differential equations for φ, and these equations are known as field equations.
For a theory to be scale-invariant, its field equations should be invariant under a rescaling of the coordinates, combined with some specified rescaling of the fields,
x → λx, φ → λ^(−Δ) φ.
The parameter Δ is known as the scaling dimension of the field, and its value depends on the theory under consideration. Scale invariance will typically hold provided that no fixed length scale appears in the theory. Conversely, the presence of a fixed length scale indicates that a theory is not scale-invariant.
A consequence of scale invariance is that given a solution of a scale-invariant field equation, we can automatically find other solutions by rescaling both the coordinates and the fields appropriately. In technical terms, given a solution, φ(x), one always has other solutions of the form
λ^Δ φ(λx).
Scale invariance of field configurations
For a particular field configuration, φ(x), to be scale-invariant, we require that
φ(x) = λ^Δ φ(λx),
where Δ is, again, the scaling dimension of the field.
We note that this condition is rather restrictive. In general, solutions even of scale-invariant field equations will not be scale-invariant, and in such cases the symmetry is said to be spontaneously broken.
Classical electromagnetism
An example of a scale-invariant classical field theory is electromagnetism with no charges or currents. The fields are the electric and magnetic fields, E(x,t) and B(x,t), while their field equations are Maxwell's equations.
With no charges or currents, these field equations take the form of wave equations
∇²E = (1/c²) ∂²E/∂t²,   ∇²B = (1/c²) ∂²B/∂t²,
where c is the speed of light.
These field equations are invariant under the transformation
x → λx, t → λt.
Moreover, given solutions of Maxwell's equations, E(x, t) and B(x, t), it holds that
E(λx, λt) and B(λx, λt) are also solutions.
Massless scalar field theory
Another example of a scale-invariant classical field theory is the massless scalar field (note that the name scalar is unrelated to scale invariance). The scalar field, φ, is a function of a set of spatial variables, x, and a time variable, t.
Consider first the linear theory. Like the electromagnetic field equations above, the equation of motion for this theory is also a wave equation,
∇²φ = (1/c²) ∂²φ/∂t²,
and is invariant under the transformation
x → λx, t → λt.
The name massless refers to the absence of a term proportional to m²φ in the field equation. Such a term is often referred to as a `mass' term, and would break the invariance under the above transformation. In relativistic field theories, a mass-scale m is physically equivalent to a fixed length scale through
L = ħ/(mc),
and so it should not be surprising that massive scalar field theory is not scale-invariant.
φ4 theory
The field equations in the examples above are all linear in the fields, which has meant that the scaling dimension, Δ, has not been so important. However, one usually requires that the scalar field action is dimensionless, and this fixes the scaling dimension of φ. In particular,
Δ = (D − 2)/2,
where D is the combined number of spatial and time dimensions.
Given this scaling dimension for φ, there are certain nonlinear modifications of massless scalar field theory which are also scale-invariant. One example is massless φ4 theory for D = 4. The field equation is
∂²φ/∂t² − c²∇²φ + g φ³ = 0.
(Note that the name φ4 derives from the form of the Lagrangian, which contains the fourth power of φ.)
When D = 4 (e.g. three spatial dimensions and one time dimension), the scalar field scaling dimension is Δ = 1. The field equation is then invariant under the transformation
x → λx, t → λt, φ(x, t) → λ^(−1) φ(x, t).
The key point is that the parameter g must be dimensionless, otherwise one introduces a fixed length scale into the theory: for φ4 theory, this is only the case in D = 4.
Note that under these transformations the argument of the function is unchanged.
Scale invariance in quantum field theory
The scale-dependence of a quantum field theory (QFT) is characterised by the way its coupling parameters depend on the energy-scale of a given physical process. This energy dependence is described by the renormalization group, and is encoded in the beta-functions of the theory.
For a QFT to be scale-invariant, its coupling parameters must be independent of the energy-scale, and this is indicated by the vanishing of the beta-functions of the theory. Such theories are also known as fixed points of the corresponding renormalization group flow.
Quantum electrodynamics
A simple example of a scale-invariant QFT is the quantized electromagnetic field without charged particles. This theory actually has no coupling parameters (since photons are massless and non-interacting) and is therefore scale-invariant, much like the classical theory.
However, in nature the electromagnetic field is coupled to charged particles, such as electrons. The QFT describing the interactions of photons and charged particles is quantum electrodynamics (QED), and this theory is not scale-invariant. We can see this from the QED beta-function. This tells us that the electric charge (which is the coupling parameter in the theory) increases with increasing energy. Therefore, while the quantized electromagnetic field without charged particles is scale-invariant, QED is not scale-invariant.
Massless scalar field theory
Free, massless quantized scalar field theory has no coupling parameters. Therefore, like the classical version, it is scale-invariant. In the language of the renormalization group, this theory is known as the Gaussian fixed point.
However, even though the classical massless φ4 theory is scale-invariant in D = 4, the quantized version is not scale-invariant. We can see this from the beta-function for the coupling parameter, g.
Even though the quantized massless φ4 is not scale-invariant, there do exist scale-invariant quantized scalar field theories other than the Gaussian fixed point. One example is the Wilson–Fisher fixed point, below.
Conformal field theory
Scale-invariant QFTs are almost always invariant under the full conformal symmetry, and the study of such QFTs is conformal field theory (CFT). Operators in a CFT have a well-defined scaling dimension, analogous to the scaling dimension, ∆, of a classical field discussed above. However, the scaling dimensions of operators in a CFT typically differ from those of the fields in the corresponding classical theory. The additional contributions appearing in the CFT are known as anomalous scaling dimensions.
Scale and conformal anomalies
The φ4 theory example above demonstrates that the coupling parameters of a quantum field theory can be scale-dependent even if the corresponding classical field theory is scale-invariant (or conformally invariant). If this is the case, the classical scale (or conformal) invariance is said to be anomalous. A classically scale-invariant field theory, where scale invariance is broken by quantum effects, provides an explication of the nearly exponential expansion of the early universe called cosmic inflation, as long as the theory can be studied through perturbation theory.
Phase transitions
In statistical mechanics, as a system undergoes a phase transition, its fluctuations are described by a scale-invariant statistical field theory. For a system in equilibrium (i.e. time-independent) in spatial dimensions, the corresponding statistical field theory is formally similar to a -dimensional CFT. The scaling dimensions in such problems are usually referred to as critical exponents, and one can in principle compute these exponents in the appropriate CFT.
The Ising model
An example that links together many of the ideas in this article is the phase transition of the Ising model, a simple model of ferromagnetic substances. This is a statistical mechanics model, which also has a description in terms of conformal field theory. The system consists of an array of lattice sites, which form a -dimensional periodic lattice. Associated with each lattice site is a magnetic moment, or spin, and this spin can take either the value +1 or −1. (These states are also called up and down, respectively.)
The key point is that the Ising model has a spin-spin interaction, making it energetically favourable for two adjacent spins to be aligned. On the other hand, thermal fluctuations typically introduce a randomness into the alignment of spins. At some critical temperature, Tc, spontaneous magnetization is said to occur. This means that below Tc the spin-spin interaction will begin to dominate, and there is some net alignment of spins in one of the two directions.
An example of the kind of physical quantities one would like to calculate at this critical temperature is the correlation between spins separated by a distance r. This has the generic behaviour:
G(r) ∝ 1/r^(d − 2 + η),
for some particular value of η, which is an example of a critical exponent.
CFT description
The fluctuations at temperature Tc are scale-invariant, and so the Ising model at this phase transition is expected to be described by a scale-invariant statistical field theory. In fact, this theory is the Wilson–Fisher fixed point, a particular scale-invariant scalar field theory.
In this context, G(r) is understood as a correlation function of scalar fields,
G(r) = ⟨φ(0) φ(r)⟩.
Now we can fit together a number of the ideas seen already.
From the above, one sees that the critical exponent, η, for this phase transition, is also an anomalous dimension. This is because the classical dimension of the scalar field,
Δ = (d − 2)/2,
is modified to become
Δ = (d − 2 + η)/2,
where d is the number of dimensions of the Ising model lattice.
So this anomalous dimension in the conformal field theory is the same as a particular critical exponent of the Ising model phase transition.
Note that for dimension d = 4 − ε, η can be calculated approximately, using the epsilon expansion, and one finds that
η = ε²/54 + O(ε³).
In the physically interesting case of three spatial dimensions, we have ε = 1, and so this expansion is not strictly reliable. However, a semi-quantitative prediction is that η is numerically small in three dimensions.
On the other hand, in the two-dimensional case the Ising model is exactly soluble. In particular, it is equivalent to one of the minimal models, a family of well-understood CFTs, and it is possible to compute η (and the other critical exponents) exactly:
η = 1/4.
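The two estimates can be compared with a line of arithmetic (simply evaluating the expressions quoted above):

    # Leading epsilon-expansion estimate at epsilon = 1 versus the exact two-dimensional value.
    epsilon = 1.0
    eta_epsilon = epsilon ** 2 / 54   # about 0.019, i.e. "numerically small"
    eta_2d_exact = 1 / 4              # exact result for the two-dimensional Ising model
    print(round(eta_epsilon, 4), eta_2d_exact)   # 0.0185 0.25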
Schramm–Loewner evolution
The anomalous dimensions in certain two-dimensional CFTs can be related to the typical fractal dimensions of random walks, where the random walks are defined via Schramm–Loewner evolution (SLE). As we have seen above, CFTs describe the physics of phase transitions, and so one can relate the critical exponents of certain phase transitions to these fractal dimensions. Examples include the 2d critical Ising model and the more general 2d critical Potts model. Relating other 2d CFTs to SLE is an active area of research.
Universality
A phenomenon known as universality is seen in a large variety of physical systems. It expresses the idea that different microscopic physics can give rise to the same scaling behaviour at a phase transition. A canonical example of universality involves the following two systems:
The Ising model phase transition, described above.
The liquid-vapour transition in classical fluids.
Even though the microscopic physics of these two systems is completely different, their critical exponents turn out to be the same. Moreover, one can calculate these exponents using the same statistical field theory. The key observation is that at a phase transition or critical point, fluctuations occur at all length scales, and thus one should look for a scale-invariant statistical field theory to describe the phenomena. In a sense, universality is the observation that there are relatively few such scale-invariant theories.
The set of different microscopic theories described by the same scale-invariant theory is known as a universality class. Other examples of systems which belong to a universality class are:
Avalanches in piles of sand. The likelihood of an avalanche is in power-law proportion to the size of the avalanche, and avalanches are seen to occur at all size scales.
The frequency of network outages on the Internet, as a function of size and duration.
The frequency of citations of journal articles, considered in the network of all citations amongst all papers, as a function of the number of citations in a given paper.
The formation and propagation of cracks and tears in materials ranging from steel to rock to paper. The variations of the direction of the tear, or the roughness of a fractured surface, are in power-law proportion to the size scale.
The electrical breakdown of dielectrics, which resemble cracks and tears.
The percolation of fluids through disordered media, such as petroleum through fractured rock beds, or water through filter paper, such as in chromatography. Power-law scaling connects the rate of flow to the distribution of fractures.
The diffusion of molecules in solution, and the phenomenon of diffusion-limited aggregation.
The distribution of rocks of different sizes in an aggregate mixture that is being shaken (with gravity acting on the rocks).
The key observation is that, for all of these different systems, the behaviour resembles a phase transition, and that the language of statistical mechanics and scale-invariant statistical field theory may be applied to describe them.
Other examples of scale invariance
Newtonian fluid mechanics with no applied forces
Under certain circumstances, fluid mechanics is a scale-invariant classical field theory. The fields are the velocity of the fluid flow, u(x, t), the fluid density, ρ(x, t), and the fluid pressure, P(x, t). These fields must satisfy both the Navier–Stokes equation and the continuity equation. For a Newtonian fluid these take the respective forms
ρ ∂u/∂t + ρ u·∇u = −∇P + μ ∇²u,
∂ρ/∂t + ∇·(ρu) = 0,
where μ is the dynamic viscosity.
In order to deduce the scale invariance of these equations we specify an equation of state, relating the fluid pressure to the fluid density. The equation of state depends on the type of fluid and the conditions to which it is subjected. For example, we consider the isothermal ideal gas, which satisfies
P = c_s² ρ,
where c_s is the speed of sound in the fluid. Given this equation of state, Navier–Stokes and the continuity equation are invariant under the transformations
x → λx, t → λt, u → u, ρ → λ^(−1) ρ, P → λ^(−1) P.
Given the solutions u(x, t) and ρ(x, t), we automatically have that
u(λx, λt) and λ ρ(λx, λt) are also solutions.
Computer vision
In computer vision and biological vision, scaling transformations arise because of the perspective image mapping and because of objects having different physical size in the world. In these areas, scale invariance refers to local image descriptors or visual representations of the image data that remain invariant when the local scale in the image domain is changed.
Detecting local maxima over scales of normalized derivative responses provides a general framework for obtaining scale invariance from image data.
Examples of applications include blob detection, corner detection, ridge detection, and object recognition via the scale-invariant feature transform.
| Physical sciences | Physics basics: General | Physics |
695423 | https://en.wikipedia.org/wiki/Cephalosporin | Cephalosporin | The cephalosporins (sg. ) are a class of β-lactam antibiotics originally derived from the fungus Acremonium, which was previously known as Cephalosporium.
Together with cephamycins, they constitute a subgroup of β-lactam antibiotics called cephems. Cephalosporins were discovered in 1945, and first sold in 1964.
Discovery
The aerobic mold which yielded cephalosporin C was found in the sea near a sewage outfall in Su Siccu, by Cagliari harbour in Sardinia, by the Italian pharmacologist Giuseppe Brotzu in July 1945.
Structure
Cephalosporin contains a 6-membered dihydrothiazine ring. Substitutions at position 3 generally affect pharmacology, while substitutions at position 7 generally affect antibacterial activity, although these generalisations do not always hold.
Medical uses
Cephalosporins can be indicated for the prophylaxis and treatment of infections caused by bacteria susceptible to this particular form of antibiotic. First-generation cephalosporins are active predominantly against Gram-positive bacteria, such as Staphylococcus and Streptococcus. They are therefore used mostly for skin and soft tissue infections and the prevention of hospital-acquired surgical infections. Successive generations of cephalosporins have increased activity against Gram-negative bacteria, albeit often with reduced activity against Gram-positive organisms.
The antibiotic may be used for patients who are allergic to penicillin due to the different β-lactam antibiotic structure. The drug is able to be excreted in the urine.
Side effects
Common adverse drug reactions (ADRs) (≥ 1% of patients) associated with the cephalosporin therapy include: diarrhea, nausea, rash, electrolyte disturbances, and pain and inflammation at injection site. Infrequent ADRs (0.1–1% of patients) include vomiting, headache, dizziness, oral and vaginal candidiasis, pseudomembranous colitis, superinfection, eosinophilia, nephrotoxicity, neutropenia, thrombocytopenia, and fever.
Allergic hypersensitivity
The commonly quoted figure of 10% of patients with allergic hypersensitivity to penicillins and/or carbapenems also having cross-reactivity with cephalosporins originated from a 1975 study looking at the original cephalosporins, and subsequent "safety first" policy meant this was widely quoted and assumed to apply to all members of the group. Hence, it was commonly stated that they are contraindicated in patients with a history of severe, immediate allergic reactions (urticaria, anaphylaxis, interstitial nephritis, etc.) to penicillins or carbapenems.
The contraindication, however, should be viewed in the light of recent epidemiological work suggesting, for many second-generation (or later) cephalosporins, the cross-reactivity rate with penicillin is much lower, having no significantly increased risk of reactivity over the first generation based on the studies examined. The British National Formulary previously issued blanket warnings of 10% cross-reactivity, but, since the September 2008 edition, suggests, in the absence of suitable alternatives, oral cefixime or cefuroxime and injectable cefotaxime, ceftazidime, and ceftriaxone can be used with caution, but the use of cefaclor, cefadroxil, cefalexin, and cefradine should be avoided. A 2012 literature review similarly finds that the risk is negligible with third- and fourth-generation cephalosporins. The risk with first-generation cephalosporins having similar R1 sidechains was also found to be overestimated, with the real value closer to 1%.
MTT side chain
Several cephalosporins are associated with hypoprothrombinemia and a disulfiram-like reaction with ethanol. These include latamoxef (moxalactam), cefmenoxime, cefoperazone, cefamandole, cefmetazole, and cefotetan. This is thought to be due to the methylthiotetrazole side-chain of these cephalosporins, which blocks the enzyme vitamin K epoxide reductase (likely causing hypoprothrombinemia) and aldehyde dehydrogenase (causing alcohol intolerance). Thus, consumption of alcohol after taking these cephalosporins orally or intravenously is contraindicated, and in severe cases can lead to death. The methylthiodioxotriazine sidechain found in ceftriaxone has a similar effect. Cephalosporins without these structural elements are believed to be safe with alcohol.
Mechanism of action
Cephalosporins are bactericidal and, like other β-lactam antibiotics, disrupt the synthesis of the peptidoglycan layer forming the bacterial cell wall. The peptidoglycan layer is important for cell wall structural integrity. The final transpeptidation step in the synthesis of the peptidoglycan is facilitated by penicillin-binding proteins (PBPs). PBPs bind to the D-Ala-D-Ala at the end of muropeptides (peptidoglycan precursors) to crosslink the peptidoglycan. Beta-lactam antibiotics mimic the D-Ala-D-Ala site, thereby irreversibly inhibiting PBP crosslinking of peptidoglycan.
Resistance
Resistance to cephalosporin antibiotics can involve either reduced affinity of existing PBP components or the acquisition of a supplementary β-lactam-insensitive PBP. Compared to other β-lactam antibiotics (such as penicillins), they are less susceptible to β-lactamases. Currently, some Citrobacter freundii, Enterobacter cloacae, Neisseria gonorrhoeae, and Escherichia coli strains are resistant to cephalosporins. Some Morganella morganii, Proteus vulgaris, Providencia rettgeri, Pseudomonas aeruginosa, Serratia marcescens and Klebsiella pneumoniae strains have also developed resistance to cephalosporins to varying degrees.
Classification
The cephalosporin nucleus can be modified to gain different properties. Cephalosporins are sometimes grouped into "generations" by their antimicrobial properties.
The first cephalosporins were designated first-generation cephalosporins, whereas, later, more extended-spectrum cephalosporins were classified as second-generation cephalosporins. Each newer generation has significantly greater Gram-negative antimicrobial properties than the preceding generation, in most cases with decreased activity against Gram-positive organisms. Fourth-generation cephalosporins, however, have true broad-spectrum activity.
The classification of cephalosporins into "generations" is commonly practised, although the exact categorization is often imprecise. For example, the fourth generation of cephalosporins is not recognized as such in Japan. In Japan, cefaclor is classed as a first-generation cephalosporin, though in the United States it is a second-generation one; and cefbuperazone, cefminox, and cefotetan are classed as second-generation cephalosporins.
First generation
Cefalotin, cefazolin, cefalexin, cefapirin, cefradine, and cefadroxil are drugs belonging to this group.
Second generation
Cefoxitin, cefuroxime, cefaclor, cefprozil, and cefmetazole are classed as second-generation cephems.
Third generation
Ceftazidime, ceftriaxone, and cefotaxime are classed as third-generation cephalosporins. Flomoxef and latamoxef are in a new, related class called oxacephems.
Fourth generation
Drugs included in this group are cefepime and cefpirome.
Further generations
Some state that cephalosporins can be divided into five or even six generations, although the usefulness of this organization system is of limited clinical relevance.
Naming
Most first-generation cephalosporins were originally spelled "ceph-" in English-speaking countries. This continues to be the preferred spelling in the United States, Australia, and New Zealand, while European countries (including the United Kingdom) have adopted the International Nonproprietary Names, which are always spelled "cef-". Newer first-generation cephalosporins and all cephalosporins of later generations are spelled "cef-", even in the United States.
Activity
There exist bacteria which cannot be treated with cephalosporins of generations first through fourth:
Listeria spp.
Atypicals (including Mycoplasma and Chlamydia)
MRSA
Enterococci
Fifth-generation cephalosporins (e.g. ceftaroline) are effective against MRSA, Listeria spp., and Enterococcus faecalis.
Overview table
History
Cephalosporin compounds were first isolated from cultures of Acremonium strictum from a sewer in Sardinia in 1948 by Italian scientist Giuseppe Brotzu. He noticed these cultures produced substances that were effective against Salmonella typhi, the cause of typhoid fever, which had β-lactamase. Guy Newton and Edward Abraham at the Sir William Dunn School of Pathology at the University of Oxford isolated cephalosporin C. The cephalosporin nucleus, 7-aminocephalosporanic acid (7-ACA), was derived from cephalosporin C and proved to be analogous to the penicillin nucleus 6-aminopenicillanic acid (6-APA), but it was not sufficiently potent for clinical use. Modification of the 7-ACA side chains resulted in the development of useful antibiotic agents, and the first agent, cefalotin (cephalothin), was launched by Eli Lilly and Company in 1964.
| Biology and health sciences | Antibiotics | Health |
695530 | https://en.wikipedia.org/wiki/Decade | Decade | A decade () is a period of 10 years. Decades may describe any 10-year period, such as those of a person's life, or refer to specific groupings of calendar years.
Usage
Any period of ten years is a "decade". For example, the statement that "during his last decade, Mozart explored chromatic harmony to a degree rare at the time" refers to the last 10 years of Wolfgang Amadeus Mozart's life without regard to which calendar years are encompassed. Also, 'the first decade' of a person's life begins on the day of their birth and ends at the end of their 10th year of life when they have their 10th birthday; the second decade of life starts with their 11th year of life (during which one is typically still referred to as being "10") and ends at the end of their 20th year of life, on their 20th birthday; similarly, the third decade of life, when one is in one's twenties or 20s, starts with the 21st year of life, and so on, with subsequent decades of life similarly described by referencing the tens digit of one's age. Another word for 'decade' is 'decennium'.
0-to-9 decade
The most widely used method for denominating decades is to group years based on their shared tens digit, from a year ending in a 0 to a year ending in a 9; for example, the period from 1960 to 1969 is the 1960s, and the period from 1970 to 1979 is the 1970s. Sometimes, only the tens part is mentioned (60s or sixties, and 70s or seventies), although this may leave it ambiguous as to which century is meant. However, this method of grouping decades cannot be applied to the decade immediately preceding AD 10, because there was no year 0.
Referring to ten-year periods as decades in this way only became common in the late 19th century. Particularly in the 20th century, 0-to-9 decades came to be referred to with associated nicknames, such as the "Roaring Twenties" (1920s), the "Warring Forties" (1940s), and the "Swinging Sixties" (1960s). This practice is occasionally also applied to decades of earlier centuries; for example, referencing the 1890s as the "Gay Nineties" or "Naughty Nineties".
1-to-0 decade
A rarer approach groups years from the beginning of the AD calendar era to produce successive decades from a year ending in a 1 to a year ending in a 0, with the years 1–10 described as "the 1st decade", years 11–20 "the 2nd decade", and so on; later decades are more usually described as 'the first, second, third ... decade of the relevant century' (using the strict interpretation of 'century'). For example, "the second decad of the 12th. Cent."; "The last decade of that century"; "1st decade of the 16th century"; "third decade of the 16th century"; "the first decade of the 18th century". This decade grouping may also be identified explicitly; for example, "1961–1970"; "2001–2010"; "2021–2030". The BC calendar era ended with the year 1 BC and the AD calendar era began the following year, AD 1. There was no year 0.
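The two conventions can be made concrete with a small sketch (hypothetical helper functions, shown purely for illustration):

    def decade_0_to_9(year):
        # Group by shared tens digit: 1969 -> "1960s", 1970 -> "1970s".
        return f"{year - year % 10}s"

    def decade_1_to_0(year):
        # Count decades from AD 1: years 1961-1970 form the 197th decade.
        return (year + 9) // 10

    print(decade_0_to_9(1969), decade_0_to_9(1970))   # 1960s 1970s
    print(decade_1_to_0(1970), decade_1_to_0(1971))   # 197 198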
Public usage of the two methods
A YouGov poll was conducted on December 2, 2019, asking 13,582 adults in the United States, "When do you think the next decade will begin and end?" Results showed that 64% answered that "the next decade" would begin on January 1, 2020, and end on December 31, 2029 (0-to-9 method); 17% answered that "the next decade" would begin on January 1, 2021, and end on December 31, 2030 (1-to-0 method); 19% replied that they did not know.
| Physical sciences | Time | Basics and measurement |
695713 | https://en.wikipedia.org/wiki/Toothpick | Toothpick | A toothpick is a small thin stick of wood, plastic, bamboo, metal, bone or other substance with at least one and sometimes two pointed ends to insert between teeth to remove detritus, usually after a meal. Toothpicks are also used for festive occasions to hold or spear small appetizers (like cheese cubes or olives) or as a cocktail stick, and can be decorated with plastic frills or small paper umbrellas or flags.
History
Known in all cultures, the toothpick is the oldest instrument for dental cleaning. Hominin remains from Dmanisi, Georgia, dated to about 1.8 million years ago, bear lesions indicating the repeated use of a "toothpick". A Neanderthal man's jawbone found in the Cova Foradà in Spain evidenced use of a toothpick to alleviate pain in his teeth caused by periodontal disease and dental wear. Toothpicks made of bronze have been found as burial objects in prehistoric graves in Northern Italy and in the East Alps. In 1986, researchers in Florida discovered the 7,500-year-old remains of ancient Native Americans and found small grooves between many of the molar teeth. One of the researchers, Justin Martin of Concordia University Wisconsin, said, "The enamel on teeth is quite tough, so they must have used the probes quite rigorously to make the grooves."
Materials and manufacture
Delicate, artistic toothpicks made of silver survive from antiquity; the Romans also used toothpicks made of mastic wood.
In the 17th century, toothpicks were luxury objects and like jewelry, were artfully stylized using precious metal and set with expensive stones.
In the Southern United States, the baculum (penis bone) of a raccoon, called a "coon rod", was sometimes filed to a point for use as a toothpick.
The first toothpick-manufacturing machine was developed in 1869, by Marc Signorello. Another was patented in 1872, by Silas Noble and J. P. Cooley.
Wooden toothpicks are cut from birch wood. Logs are first spiral-cut into thin sheets, which are then cut, chopped, milled and bleached (to lighten) into the individual toothpicks. Nowadays, other means of interdental cleaning are preferred, such as dental floss, toothbrushes, and oral irrigators.
Dentistry
Dentists generally prefer floss to picks because picks can damage oral health: specifically the gums, tooth enamel (if chewed), and tooth roots (if the gum is pushed low enough). Picks may also damage veneers and crowns, splinter, or be accidentally swallowed.
A review of small-scale studies indicates that toothpicks and triangular woodsticks are similar in their ability to remove plaque.
| Biology and health sciences | Hygiene products | Health |
695978 | https://en.wikipedia.org/wiki/Paranoid%20personality%20disorder | Paranoid personality disorder | Paranoid personality disorder (PPD) is a mental disorder characterized by paranoia, and a pervasive, long-standing suspiciousness and generalized mistrust of others. People with this personality disorder may be hypersensitive, easily insulted, and habitually relate to the world by vigilant scanning of the environment for clues or suggestions that may validate their fears or biases. They are eager observers and they often think they are in danger and look for signs and threats of that danger, potentially not appreciating other interpretations or evidence.
They tend to be guarded and suspicious and have quite constricted emotional lives. Their reduced capacity for meaningful emotional involvement and the general pattern of isolated withdrawal often lend a quality of schizoid isolation to their life experience. People with PPD may show a tendency to bear grudges, suspiciousness, a tendency to interpret others' actions as hostile, a persistent tendency to self-reference, or a tenacious sense of personal rights. Patients with this disorder can also have significant comorbidity with other personality disorders, such as schizotypal, schizoid, narcissistic, avoidant, and borderline.
Causes
A genetic contribution to paranoid traits and a possible genetic link between this personality disorder and schizophrenia exist. A large long-term Norwegian twin study found paranoid personality disorder to be modestly heritable and to share a portion of its genetic and environmental risk factors with the other cluster A personality disorders, schizoid and schizotypal.
Psychosocial theories implicate projection of negative internal feelings and parental modeling. Cognitive theorists believe the disorder to be a result of an underlying belief that other people are unfriendly in combination with a lack of self-awareness.
Diagnosis
ICD-10
The World Health Organization's ICD-10 lists paranoid personality disorder under (F60.0). It is a requirement of ICD-10 that a diagnosis of any specific personality disorder also satisfies a set of general personality disorder criteria. It is also pointed out that for different cultures it may be necessary to develop specific sets of criteria with regard to social norms, rules and other obligations.
PPD is characterized by at least three of the following symptoms:
excessive sensitivity to setbacks and rebuffs;
tendency to bear grudges persistently (i.e. refusal to forgive insults and injuries or slights);
suspiciousness and a pervasive tendency to distort experience by misconstruing the neutral or friendly actions of others as hostile or contemptuous;
a combative and tenacious sense of self-righteousness out of keeping with the actual situation;
recurrent suspicions, without justification, regarding sexual fidelity of spouse or sexual partner;
tendency to experience excessive self-aggrandizement, manifest in a persistent self-referential attitude;
preoccupation with unsubstantiated "conspiratorial" explanations of events both immediate to the patient and in the world at large.
Includes: expansive paranoid, fanatic, querulant and sensitive paranoid personality disorder.
Excludes: delusional disorder and schizophrenia.
DSM-5
The American Psychiatric Association's DSM-5 has similar criteria for paranoid personality disorder. They require in general the presence of lasting distrust and suspicion of others, interpreting their motives as malevolent, from an early adult age, occurring in a range of situations. Four of seven specific issues must be present, which include different types of suspicions or doubt (such as of being exploited, or that remarks have a subtle threatening meaning), in some cases regarding others in general or specifically friends or partners, and in some cases referring to a response of holding grudges or reacting angrily.
PPD is characterized by a pervasive distrust and suspiciousness of others such that their motives are interpreted as malevolent, beginning by early adulthood and present in a variety of contexts. To qualify for a diagnosis, the patient must meet at least four out of the following criteria:
Suspects, without sufficient basis, that others are exploiting, harming, or deceiving them.
Is preoccupied with unjustified doubts about the loyalty or trustworthiness of friends or associates.
Is reluctant to confide in others because of unwarranted fear that the information will be used maliciously against them.
Reads hidden demeaning or threatening meanings into benign remarks or events.
Persistently bears grudges (i.e., is unforgiving of insults, injuries, or slights).
Perceives attacks on their character or reputation that are not apparent to others and is quick to react angrily or to counterattack.
Has recurrent suspicions, without justification, regarding fidelity of spouse or sexual partner.
The DSM-5 lists paranoid personality disorder essentially unchanged from the DSM-IV-TR version and lists associated features that describe it in a more quotidian way. These features include suspiciousness, intimacy avoidance, hostility and unusual beliefs/experiences.
Millon's subtypes
Various researchers and clinicians may propose varieties and subsets or dimensions of personality related to the official diagnoses. Psychologist Theodore Millon has proposed five subtypes of paranoid personality.
Differential diagnosis
The paranoid may be at greater than average risk of experiencing major depressive disorder, agoraphobia, social anxiety disorder, obsessive-compulsive disorder, and substance-related disorders. Criteria for other personality disorder diagnoses are commonly also met, such as schizoid, schizotypal, narcissistic, avoidant, borderline, and negativistic personality disorder.
Treatment
Partly as a result of patients' tendency to mistrust others, few studies have been conducted on the treatment of paranoid personality disorder. Currently, no medicines are FDA-approved for treating PPD, but antidepressants, antipsychotics, and mood stabilizers may be prescribed, despite limited evidence, to manage some of the symptoms. Another form of treatment for PPD is psychoanalysis, normally used in cases where both PPD and borderline personality disorder (BPD) are present. However, no published studies directly assess the effectiveness of this form of treatment on PPD specifically, as opposed to its effects on BPD. Cognitive behavioral therapy (CBT) has also been suggested as a possible treatment for paranoid personality disorder, but while case studies have shown improvement in symptoms, no systematic data have been collected to support this. Treatment of PPD can be challenging, as individuals with PPD are reluctant to seek help and have difficulty trusting others.
Epidemiology
PPD occurs in about 0.5–4.4% of the general population. It is seen in 2–10% of psychiatric outpatients. In clinical samples, men have higher rates, whereas epidemiological studies report a higher rate among women.
History
Paranoid personality disorder is listed in the DSM-5 and was included in all previous versions of the DSM. One of the earliest descriptions of the paranoid personality comes from the French psychiatrist Valentin Magnan, who described a "fragile personality" that showed idiosyncratic thinking, hypochondriasis, undue sensitivity, referential thinking, and suspiciousness.
Closely related to this description is Emil Kraepelin's description from 1905 of a pseudo-querulous personality who is "always on the alert to find grievance, but without delusions", vain, self-absorbed, sensitive, irritable, litigious, obstinate, and living at strife with the world. In 1921, he renamed the condition paranoid personality and described these people as distrustful, feeling unjustly treated and feeling subjected to hostility, interference and oppression. He also observed a contradiction in these personalities: on the one hand, they stubbornly hold on to their unusual ideas, on the other hand, they often accept every piece of gossip as the truth. Kraepelin also noted that paranoid personalities were often present in people who later developed paranoid psychosis. Subsequent writers also considered traits like suspiciousness and hostility to predispose people to developing delusional illnesses, particularly "late paraphrenias" of old age.
Following Kraepelin, Eugen Bleuler described "contentious psychopathy" or "paranoid constitution" as displaying the characteristic triad of suspiciousness, grandiosity, and feelings of persecution. He also emphasized that these people's false assumptions do not attain the form of real delusion.
Ernst Kretschmer emphasized the sensitive inner core of the paranoia-prone personality: they feel shy and inadequate but at the same time they have an attitude of entitlement. They attribute their failures to the machinations of others, but secretly attribute them to their own inadequacy. They experience constant tension between feelings of self-importance and experiencing the environment as unappreciative and humiliating.
Karl Jaspers, a German phenomenologist, described "self-insecure" personalities who resemble the paranoid personality. According to Jaspers, such people experience inner humiliation, brought about by outside experiences and their interpretations of them. They have an urge to get external confirmation of their self-deprecation, and that makes them see insults in the behavior of other people. They suffer from every slight because they seek the real reason for such slights in themselves. This kind of insecurity leads to overcompensation: compulsive formality, strict social observances, and exaggerated displays of assurance.
In 1950, Kurt Schneider described the "fanatic psychopaths" and divided them into two categories: the combative type that is very insistent about his false notions and actively quarrelsome, and the eccentric type that is passive, secretive, vulnerable to esoteric sects, but nonetheless suspicious about others.
The descriptions of Leonhard and Shepherd from the 1960s describe paranoid people as overvaluing their abilities and attributing their failure to the ill-will of others; they also mention that their interpersonal relations are disturbed and that they are in constant conflict with others.
In 1975, Polatin described the paranoid personality as rigid, suspicious, watchful, self-centered and selfish, inwardly hypersensitive, but emotionally undemonstrative. However, when there is a difference of opinion, the underlying mistrust, authoritarianism, and rage burst through.
In the 1980s, paranoid personality disorder received little attention, and when it did receive attention, the focus was on its potential relationship to paranoid schizophrenia. The most significant contribution of this decade comes from Theodore Millon, who divided the features of paranoid personality disorder into four categories:
1) Behavioral characteristics of vigilance, abrasive irritability, and counterattack
2) Complaints indicating oversensitivity, social isolation, and mistrust
3) The dynamics of denying personal insecurities, attributing these to others, and self-inflation through grandiose fantasies
4) Coping style of detesting dependence and hostile distancing of oneself from others
Controversy
Due to repeated concerns about the validity of PPD and its poor empirical evidence base, it has been suggested that PPD be removed from the DSM; these concerns are also believed to contribute to the low research output on PPD.
| Biology and health sciences | Mental disorders | Health |
696619 | https://en.wikipedia.org/wiki/Binomial%20series | Binomial series | In mathematics, the binomial series is a generalization of the polynomial that comes from a binomial formula expression like for a nonnegative integer . Specifically, the binomial series is the MacLaurin series for the function , where and . Explicitly,
where the power series on the right-hand side of () is expressed in terms of the (generalized) binomial coefficients
Note that if is a nonnegative integer then the term and all later terms in the series are , since each contains a factor of . Thus, in this case, the series is finite and gives the algebraic binomial formula.
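For reference, the series in its commonly stated form (a standard identity, quoted here with the usual notation: x the variable, α the exponent) is:

\[
(1+x)^{\alpha} \;=\; \sum_{k=0}^{\infty} \binom{\alpha}{k} x^{k},
\qquad
\binom{\alpha}{k} \;=\; \frac{\alpha(\alpha-1)\cdots(\alpha-k+1)}{k!}, \qquad \binom{\alpha}{0} = 1 .
\]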
Convergence
Conditions for convergence
Whether () converges depends on the values of the complex numbers and . More precisely:
If , the series converges absolutely for any complex number .
If , the series converges absolutely if and only if either or , where denotes the real part of .
If and , the series converges if and only if .
If , the series converges if and only if either or .
If , the series diverges except when is a non-negative integer, in which case the series is a finite sum.
In particular, if is not a non-negative integer, the situation at the boundary of the disk of convergence, , is summarized as follows:
If , the series converges absolutely.
If , the series converges conditionally if and diverges if .
If , the series diverges.
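For orientation, the standard statement of these conditions, in the usual notation (x the variable, α the exponent; supplied here as a reference summary), reads:

\[
\begin{aligned}
&|x| < 1: && \text{absolute convergence for every complex } \alpha;\\
&|x| = 1: && \text{absolute convergence iff } \operatorname{Re}(\alpha) > 0 \text{ or } \alpha = 0;\\
&|x| = 1,\ x \neq -1: && \text{convergence iff } \operatorname{Re}(\alpha) > -1;\\
&x = -1: && \text{convergence iff } \operatorname{Re}(\alpha) > 0 \text{ or } \alpha = 0;\\
&|x| > 1: && \text{divergence unless } \alpha \text{ is a non-negative integer.}
\end{aligned}
\]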
Identities to be used in the proof
The following hold for any complex number :
Unless is a nonnegative integer (in which case the binomial coefficients vanish as is larger than ), a useful asymptotic relationship for the binomial coefficients is, in Landau notation:
This is essentially equivalent to Euler's definition of the Gamma function:
and implies immediately the coarser bounds
for some positive constants and .
Formula () for the generalized binomial coefficient can be rewritten as
Proof
To prove (i) and (v), apply the ratio test and use formula () above to show that whenever is not a nonnegative integer, the radius of convergence is exactly 1. Part (ii) follows from formula (), by comparison with the -series
with . To prove (iii), first use formula () to obtain
and then use (ii) and formula () again to prove convergence of the right-hand side when is assumed. On the other hand, the series does not converge if and , again by formula (). Alternatively, we may observe that for all , . Thus, by formula (), for all . This completes the proof of (iii). Turning to (iv), we use identity () above with and in place of , along with formula (), to obtain
as . Assertion (iv) now follows from the asymptotic behavior of the sequence . (Precisely,
certainly converges to if and diverges to if . If , then converges if and only if the sequence converges , which is certainly true if but false if : in the latter case the sequence is dense , due to the fact that diverges and converges to zero).
Summation of the binomial series
The usual argument to compute the sum of the binomial series goes as follows. Differentiating term-wise the binomial series within the disk of convergence and using formula (), one has that the sum of the series is an analytic function solving the ordinary differential equation with initial condition .
The unique solution of this problem is the function . Indeed, multiplying by the integrating factor gives
so the function is a constant, which the initial condition tells us is . That is, is the sum of the binomial series for .
The equality extends to whenever the series converges, as a consequence of Abel's theorem and by continuity of .
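A compact restatement of the argument above, writing u(x) for the sum of the series (notation chosen here for illustration), is:

\[
(1+x)\,u'(x) = \alpha\,u(x), \qquad u(0) = 1,
\qquad\Longrightarrow\qquad
\frac{d}{dx}\!\left[(1+x)^{-\alpha}u(x)\right] = 0,
\]

so \( u(x) = (1+x)^{\alpha} \) throughout the disk of convergence \(|x| < 1\).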
Negative binomial series
Closely related is the negative binomial series defined by the MacLaurin series for the function , where and . Explicitly,
which is written in terms of the multiset coefficient
When is a positive integer, several common sequences are apparent. The case gives the series , where the coefficient of each term of the series is simply . The case gives the series , which has the counting numbers as coefficients. The case gives the series , which has the triangle numbers as coefficients. The case gives the series , which has the tetrahedral numbers as coefficients, and similarly for higher integer values of .
The negative binomial series includes the case of the geometric series, the power series
(which is the negative binomial series when , convergent in the disc ) and, more generally, series obtained by differentiation of the geometric power series:
with , a positive integer.
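For reference, the negative binomial series in its commonly stated form (standard notation, supplied here for illustration, with β the exponent) is:

\[
\frac{1}{(1-x)^{\beta}} \;=\; \sum_{k=0}^{\infty} \binom{\beta+k-1}{k}\, x^{k}, \qquad |x| < 1,
\]

so that β = 1 gives the geometric series 1 + x + x² + ⋯, β = 2 gives coefficients 1, 2, 3, … (the counting numbers), β = 3 gives 1, 3, 6, 10, … (the triangle numbers), and β = 4 gives the tetrahedral numbers.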
History
The first results concerning binomial series for other than positive-integer exponents were given by Sir Isaac Newton in the study of areas enclosed under certain curves. John Wallis built upon this work by considering expressions of the form where is a fraction. He found that (written in modern terms) the successive coefficients of are to be found by multiplying the preceding coefficient by (as in the case of integer exponents), thereby implicitly giving a formula for these coefficients. He explicitly wrote out several such instances.
The binomial series is therefore sometimes referred to as Newton's binomial theorem. Newton gives no proof and is not explicit about the nature of the series. Later, in 1826, Niels Henrik Abel discussed the subject in a paper published in Crelle's Journal, notably treating questions of convergence.
| Mathematics | Sequences and series | null |
696912 | https://en.wikipedia.org/wiki/Bound%20state | Bound state | A bound state is a composite of two or more fundamental building blocks, such as particles, atoms, or bodies, that behaves as a single object and in which energy is required to split them.
In quantum physics, a bound state is a quantum state of a particle subject to a potential such that the particle has a tendency to remain localized in one or more regions of space. The potential may be external or it may be the result of the presence of another particle; in the latter case, one can equivalently define a bound state as a state representing two or more particles whose interaction energy exceeds the total energy of each separate particle. One consequence is that, given a potential vanishing at infinity, negative-energy states must be bound. The energy spectrum of the set of bound states is most commonly discrete, unlike scattering states of free particles, which have a continuous spectrum.
Although not bound states in the strict sense, metastable states with a net positive interaction energy, but long decay time, are often considered unstable bound states as well and are called "quasi-bound states". Examples include radionuclides and Rydberg atoms.
In relativistic quantum field theory, a stable bound state of particles with masses corresponds to a pole in the S-matrix with a center-of-mass energy less than . An unstable bound state shows up as a pole with a complex center-of-mass energy.
Examples
A proton and an electron can move separately; when they do, the total center-of-mass energy is positive, and such a pair of particles can be described as an ionized atom. Once the electron starts to "orbit" the proton, the energy becomes negative, and a bound state – namely the hydrogen atom – is formed. Only the lowest-energy bound state, the ground state, is stable. Other excited states are unstable and will decay into stable (but not other unstable) bound states with less energy by emitting a photon.
A positronium "atom" is an unstable bound state of an electron and a positron. It decays into photons.
Any state in the quantum harmonic oscillator is bound, but has positive energy. Note that the harmonic potential grows without bound at infinity rather than vanishing, so the criterion below does not apply.
A nucleus is a bound state of protons and neutrons (nucleons).
The proton itself is a bound state of three quarks (two up and one down; one red, one green and one blue). However, unlike the case of the hydrogen atom, the individual quarks can never be isolated. See confinement.
The Hubbard and Jaynes–Cummings–Hubbard (JCH) models support similar bound states. In the Hubbard model, two repulsive bosonic atoms can form a bound pair in an optical lattice. The JCH Hamiltonian also supports two-polariton bound states when the photon-atom interaction is sufficiently strong.
Definition
Let a σ-finite measure space be a probability space associated with a separable complex Hilbert space . Define a one-parameter group of unitary operators , a density operator and an observable on . Let be the induced probability distribution of with respect to . Then the evolution
is bound with respect to if
,
where .
A quantum particle is in a bound state if at no point in time it is found "too far away" from any finite region . Using a wave function representation, for example, this means
such that
In general, a quantum state is a bound state if and only if it is finitely normalizable for all times . Furthermore, a bound state lies within the pure point part of the spectrum of if and only if it is an eigenvector of .
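In wave-function language, one standard way to express this localization requirement (written here for illustration, with ψ the wave function) is:

\[
\lim_{R \to \infty} \; \sup_{t} \int_{|x| > R} |\psi(x,t)|^{2}\, dx \;=\; 0 .
\]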
More informally, "boundedness" results foremost from the choice of domain of definition and characteristics of the state rather than the observable. For a concrete example: let and let be the position operator. Given compactly supported and .
If the state evolution of "moves this wave packet to the right", e.g., if for all , then is not a bound state with respect to position.
If does not change in time, i.e., for all , then is bound with respect to position.
More generally: If the state evolution of "just moves inside a bounded domain", then is bound with respect to position.
Properties
As finitely normalizable states must lie within the pure point part of the spectrum, bound states must lie within the pure point part. However, as von Neumann and Wigner pointed out, it is possible for the energy of a bound state to be located in the continuous part of the spectrum. This phenomenon is referred to as bound state in the continuum.
Position-bound states
Consider the one-particle Schrödinger equation. If a state has energy , then the wavefunction satisfies, for some
so that is exponentially suppressed at large . This behaviour is well-studied for smoothly varying potentials in the WKB approximation for the wavefunction, where an oscillatory behaviour is observed if the right-hand side of the equation is negative, and growing/decaying behaviour if it is positive. Hence, negative-energy states are bound if the potential vanishes at infinity.
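As a sketch, for a potential V(x) that vanishes at infinity and a state of energy E < 0 (standard notation chosen here), the time-independent Schrödinger equation gives:

\[
\frac{d^{2}\psi}{dx^{2}} \;=\; \frac{2m}{\hbar^{2}}\,\bigl(V(x) - E\bigr)\,\psi(x),
\qquad
\psi(x) \sim e^{-\kappa |x|} \ \text{ as } |x| \to \infty, \qquad \kappa = \frac{\sqrt{-2mE}}{\hbar},
\]

so the probability of finding the particle far from the potential well is exponentially small.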
Non-degeneracy in one-dimensional bound states
One-dimensional bound states can be shown to be non-degenerate in energy for well-behaved wavefunctions that decay to zero at infinities. This need not hold true for wavefunctions in higher dimensions. Due to the property of non-degenerate states, one-dimensional bound states can always be expressed as real wavefunctions.
Node theorem
The node theorem states that the nth bound wavefunction, ordered according to increasing energy, has exactly n − 1 nodes, i.e., interior points where the wavefunction vanishes. Due to the form of Schrödinger's time-independent equation, it is not possible for a physical wavefunction to vanish together with its first derivative at a point, since this would correspond to the trivial zero solution.
Requirements
A boson with mass mediating a weakly coupled interaction produces a Yukawa-like interaction potential,
,
where , is the gauge coupling constant, and is the reduced Compton wavelength. A scalar boson produces a universally attractive potential, whereas a vector attracts particles to antiparticles but repels like pairs. For two particles of mass and , the Bohr radius of the system becomes
and yields the dimensionless number
.
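A sketch of the quantities involved, with symbols chosen here for illustration (m_X the mediator mass, g the gauge coupling, μ the reduced mass of the two particles), is:

\[
V(r) = \pm\,\frac{\alpha_{\chi}}{r}\,e^{-r/\lambda_{X}},
\qquad
\alpha_{\chi} = \frac{g^{2}}{4\pi},
\qquad
\lambda_{X} = \frac{\hbar}{m_{X} c},
\]
\[
a_{0} = \frac{\lambda_{1} + \lambda_{2}}{\alpha_{\chi}},
\qquad
D \;\equiv\; \frac{\lambda_{X}}{a_{0}} \;=\; \alpha_{\chi}\,\frac{\mu}{m_{X}} \quad (\hbar = c = 1),
\]

where λ_1 and λ_2 are the reduced Compton wavelengths of the two particles; a bound state can form only when the range of the screened potential is not too short compared with the Bohr radius.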
In order for the first bound state to exist at all, this dimensionless number must be sufficiently large. Because the photon is massless, it is infinite for electromagnetism. For the weak interaction, the Z boson's mass is about 91 GeV, which prevents the formation of bound states between most particles, as that mass far exceeds both the proton's mass and the electron's mass.
Note, however, that, if the Higgs interaction did not break electroweak symmetry at the electroweak scale, then the SU(2) weak interaction would become confining.
| Physical sciences | Quantum mechanics | Physics |
697155 | https://en.wikipedia.org/wiki/Hierarchy%20problem | Hierarchy problem | In theoretical physics, the hierarchy problem is the problem concerning the large discrepancy between aspects of the weak force and gravity. There is no scientific consensus on why, for example, the weak force is 10²⁴ times stronger than gravity.
Technical definition
A hierarchy problem occurs when the fundamental value of some physical parameter, such as a coupling constant or a mass, in some Lagrangian is vastly different from its effective value, which is the value that gets measured in an experiment. This happens because the effective value is related to the fundamental value by a prescription known as renormalization, which applies corrections to it.
Typically the renormalized values of parameters are close to their fundamental values, but in some cases, it appears that there has been a delicate cancellation between the fundamental quantity and the quantum corrections. Hierarchy problems are related to fine-tuning problems and problems of naturalness.
Over the past decade, many scientists have argued that the hierarchy problem is a specific application of Bayesian statistics.
Studying renormalization in hierarchy problems is difficult, because such quantum corrections are usually power-law divergent, which means that the shortest-distance physics is most important. Because we do not know the precise details of quantum gravity, we cannot even address how this delicate cancellation between two large terms occurs. Therefore, researchers are led to postulate new physical phenomena that resolve hierarchy problems without fine-tuning.
Overview
Suppose a physics model requires four parameters to produce a very high-quality working model capable of generating predictions regarding some aspect of our physical universe. Suppose we find through experiments that the parameters have values: 1.2, 1.31, 0.9 and a value near .
One might wonder how such figures arise, but might be especially curious about a theory where three values are close to one while the fourth is so different; in other words, about the huge disproportion we seem to find between the first three parameters and the fourth. We might also wonder: if one force is so much weaker than the others that it needs so large a factor to relate it to them in terms of effects, how did our universe come to be so exactly balanced when its forces emerged? In current particle physics, the differences between some parameters are much larger than this, so the question is even more noteworthy.
One answer given by philosophers is the anthropic principle. If the universe came to exist by chance, and perhaps vast numbers of other universes exist or have existed, then life capable of physics experiments only arose in universes that, by chance, had very balanced forces. All of the universes where the forces were not balanced did not develop life capable of asking this question. So if lifeforms like human beings are aware and capable of asking such a question, humans must have arisen in a universe having balanced forces, however rare that might be.
A second possible answer is that there is a deeper understanding of physics that we currently do not possess. There might be parameters that we can derive physical constants from that have less unbalanced values, or there might be a model with fewer parameters.
Examples in particle physics
Higgs mass
In particle physics, the most important hierarchy problem is the question that asks why the weak force is 10²⁴ times as strong as gravity. Both of these forces involve constants of nature, the Fermi constant for the weak force and the Newtonian constant of gravitation for gravity. Furthermore, if the Standard Model is used to calculate the quantum corrections to Fermi's constant, it appears that Fermi's constant is surprisingly large and is expected to be closer to Newton's constant unless there is a delicate cancellation between the bare value of Fermi's constant and the quantum corrections to it.
More technically, the question is why the Higgs boson is so much lighter than the Planck mass (or the grand unification energy, or a heavy neutrino mass scale): one would expect that the large quantum contributions to the square of the Higgs boson mass would inevitably make the mass huge, comparable to the scale at which new physics appears unless there is an incredible fine-tuning cancellation between the quadratic radiative corrections and the bare mass.
The problem cannot even be formulated in the strict context of the Standard Model, for the Higgs mass cannot be calculated. In a sense, the problem amounts to the worry that a future theory of fundamental particles, in which the Higgs boson mass will be calculable, should not have excessive fine-tunings.
Theoretical solutions
Many solutions have been proposed by physicists.
Supersymmetry
Some physicists believe that one may solve the hierarchy problem via supersymmetry. Supersymmetry can explain how a tiny Higgs mass can be protected from quantum corrections. Supersymmetry removes the power-law divergences of the radiative corrections to the Higgs mass and solves the hierarchy problem as long as the supersymmetric particles are light enough to satisfy the Barbieri–Giudice criterion. This still leaves open the mu problem, however. The tenets of supersymmetry are being tested at the LHC, although no evidence has been found so far for supersymmetry.
Each particle that couples to the Higgs field has an associated Yukawa coupling . The coupling with the Higgs field for fermions gives an interaction term , with being the Dirac field and the Higgs field. Also, the mass of a fermion is proportional to its Yukawa coupling, meaning that the Higgs boson will couple most to the most massive particle. This means that the most significant corrections to the Higgs mass will originate from the heaviest particles, most prominently the top quark. By applying the Feynman rules, one gets the quantum corrections to the Higgs mass squared from a fermion to be:
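The standard one-loop expression for this correction, as it is usually quoted in discussions of the hierarchy problem (reproduced here as a sketch; λ_f denotes the fermion's Yukawa coupling and Λ_UV a hard momentum cutoff, symbols chosen for illustration), is:

\[
\Delta m_{H}^{2} \;=\; -\,\frac{|\lambda_{f}|^{2}}{8\pi^{2}}\,\Lambda_{\mathrm{UV}}^{2} \;+\; \cdots
\]

where the ellipsis denotes terms that grow at most logarithmically with Λ_UV.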
Here Λ_UV is called the ultraviolet cutoff and is the scale up to which the Standard Model is valid. If we take this scale to be the Planck scale, then we have a quadratically divergent correction to the Higgs mass. However, suppose there existed two complex scalars (taken to be spin 0) such that:
(the couplings to the Higgs are exactly the same).
Then by the Feynman rules, the correction (from both scalars) is:
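In the same conventions (a sketch, with λ_S the common quartic coupling of each scalar to the Higgs), the combined quadratically divergent contribution is:

\[
\Delta m_{H}^{2} \;=\; +\,2\times\frac{\lambda_{S}}{16\pi^{2}}\,\Lambda_{\mathrm{UV}}^{2} \;+\; \cdots
\;=\; +\,\frac{\lambda_{S}}{8\pi^{2}}\,\Lambda_{\mathrm{UV}}^{2} \;+\; \cdots,
\]

which cancels the fermionic contribution above when λ_S = |λ_f|².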
(Note that the contribution here is positive. This is because of the spin-statistics theorem, which means that fermions will have a negative contribution and bosons a positive contribution. This fact is exploited.)
This gives a total contribution to the Higgs mass to be zero if we include both the fermionic and bosonic particles. Supersymmetry is an extension of this that creates 'superpartners' for all Standard Model particles.
Conformal
Without supersymmetry, a solution to the hierarchy problem has been proposed using just the Standard Model. The idea can be traced back to the fact that the term in the Higgs field that produces the uncontrolled quadratic correction upon renormalization is the quadratic one. If the Higgs field had no mass term, then no hierarchy problem would arise. But in the absence of a quadratic term in the Higgs field, one must find a way to recover the breaking of electroweak symmetry through a non-null vacuum expectation value. This can be obtained using the Weinberg–Coleman mechanism with terms in the Higgs potential arising from quantum corrections. Mass obtained in this way is far too small with respect to what is seen in accelerator facilities, and so a conformal Standard Model needs more than one Higgs particle. This proposal was put forward in 2006 by Krzysztof Antoni Meissner and Hermann Nicolai and is currently under scrutiny. But if no further excitation is observed beyond the one seen so far at the LHC, this model would have to be abandoned.
Extra dimensions
No experimental or observational evidence of extra dimensions has been officially reported. Analyses of results from the Large Hadron Collider severely constrain theories with large extra dimensions. However, extra dimensions could explain why the gravity force is so weak, and why the expansion of the universe is faster than expected.
If we live in a 3+1 dimensional world, then we calculate the gravitational force via Gauss's law for gravity:
which is simply Newton's law of gravitation. Note that Newton's constant can be rewritten in terms of the Planck mass.
If we extend this idea to extra dimensions, then we get:
where is the -dimensional Planck mass. However, we are assuming that these extra dimensions are the same size as the normal 3+1 dimensions. Let us say that the extra dimensions are of size than normal dimensions. If we let , then we get (2). However, if we let , then we get our usual Newton's law. However, when , the flux in the extra dimensions becomes a constant, because there is no extra room for gravitational flux to flow through. Thus the flux will be proportional to because this is the flux in the extra dimensions. The formula is:
which gives:
Thus the fundamental Planck mass (the extra-dimensional one) could actually be small, meaning that gravity is actually strong, but this must be compensated by the number of the extra dimensions and their size. Physically, this means that gravity is weak because there is a loss of flux to the extra dimensions.
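The resulting relation between the observed four-dimensional Planck mass and the higher-dimensional fundamental scale (the standard large-extra-dimensions relation, sketched here with n extra dimensions of common size R) is:

\[
M_{\mathrm{Pl}}^{2} \;\sim\; M_{*}^{\,n+2}\, R^{\,n},
\]

so that a sufficiently large R allows the fundamental scale M_* to lie far below the apparent Planck mass M_Pl.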
This section is adapted from Quantum Field Theory in a Nutshell by A. Zee.
Braneworld models
In 1998 Nima Arkani-Hamed, Savas Dimopoulos, and Gia Dvali proposed the ADD model, also known as the model with large extra dimensions, an alternative scenario to explain the weakness of gravity relative to the other forces. This theory requires that the fields of the Standard Model are confined to a four-dimensional membrane, while gravity propagates in several additional spatial dimensions that are large compared to the Planck scale.
In 1998–99 Merab Gogberashvili published on arXiv (and subsequently in peer-reviewed journals) a number of articles where he showed that if the Universe is considered as a thin shell (a mathematical synonym for "brane") expanding in 5-dimensional space then it is possible to obtain one scale for particle theory corresponding to the 5-dimensional cosmological constant and Universe thickness, and thus to solve the hierarchy problem. It was also shown that four-dimensionality of the Universe is the result of a stability requirement since the extra component of the Einstein field equations giving the localized solution for matter fields coincides with one of the conditions of stability.
Subsequently, the closely related Randall–Sundrum scenarios were proposed, which offered their own solution to the hierarchy problem.
UV/IR mixing
In 2019, a pair of researchers proposed that IR/UV mixing resulting in the breakdown of the effective quantum field theory could resolve the hierarchy problem. In 2021, another group of researchers showed that UV/IR mixing could resolve the hierarchy problem in string theory.
Cosmological constant
In physical cosmology, current observations in favor of an accelerating universe imply the existence of a tiny, but nonzero cosmological constant. This problem, called the cosmological constant problem, is a hierarchy problem very similar to that of the Higgs boson mass problem, since the cosmological constant is also very sensitive to quantum corrections, but it is complicated by the necessary involvement of general relativity in the problem. Proposed solutions to the cosmological constant problem include modifying and/or extending gravity, adding matter with unvanishing pressure, and UV/IR mixing in the Standard Model and gravity. Some physicists have resorted to anthropic reasoning to solve the cosmological constant problem, but it is disputed whether anthropic reasoning is scientific.
| Physical sciences | Particle physics: General | Physics |
697208 | https://en.wikipedia.org/wiki/Dizziness | Dizziness | Dizziness is an imprecise term that can refer to a sense of disorientation in space, vertigo, or lightheadedness. It can also refer to disequilibrium or a non-specific feeling, such as giddiness or foolishness.
Dizziness is a common medical complaint, affecting 20–30% of persons. Dizziness is broken down into four main subtypes: vertigo (~25–50%), disequilibrium (less than ~15%), presyncope (less than ~15%), and nonspecific dizziness (~10%).
Vertigo is the sensation of spinning or having one's surroundings spin about them. Many people find vertigo very disturbing and often report associated nausea and vomiting.
Presyncope describes lightheadedness or feeling faint; the name relates to syncope, which is actually fainting.
Disequilibrium is the sensation of being off balance and is most often characterized by frequent falls in a specific direction. This condition is not often associated with nausea or vomiting.
Non-specific dizziness may be psychiatric in origin. It is a diagnosis of exclusion and can sometimes be brought about by hyperventilation.
Mechanism and causes
Many conditions cause dizziness because multiple parts of the body are required for maintaining balance including the inner ear, eyes, muscles, skeleton, and the nervous system. Thus dizziness can be caused by a variety of problems and may reflect a focal process (such as one affecting balance or coordination) or a diffuse one (such as a toxic exposure or low perfusion state).
Common causes of dizziness include:
Inadequate blood supply to the brain due to:
A sudden fall in blood pressure
Heart problems or artery blockages
Anaemias, such as vitamin B12 deficiency anemia, iron deficiency anemia
Loss or distortion of vision or visual cues
Standing too quickly/prolonged standing
Disorders of the inner ear
Dehydration
Distortion of brain/nervous function by medications such as anticonvulsants and sedatives
Dysfunction of cervical proprioception
Side effects from other prescription drugs, such as proton-pump inhibitors or Coumadin (warfarin)
Diagnosis
Differential diagnosis
Dizziness may occur from an abnormality involving the brain (in particular the brainstem or cerebellum), inner ear, eyes, heart, vascular system, fluid or blood volume, spinal cord, peripheral nerves, or body electrolytes. Dizziness can accompany certain serious events, such as a concussion or brain bleed, epilepsy and seizures (convulsions), stroke, and cases of meningitis and encephalitis. However, the most common subcategories can be broken down as follows: 40% peripheral vestibular dysfunction, 10% central nervous system lesion, 15% psychiatric disorder, 25% presyncope/disequilibrium, and 10% nonspecific dizziness. Some vestibular pathologies have symptoms that are comorbid with mental disorders.
While traditional medical teaching has focused on determining the cause of dizziness based on the category (such as vertigo vs. presyncope), research published in 2017 suggests that this analysis is of limited clinical utility.
Medical conditions that often have dizziness as a symptom include:
Benign paroxysmal positional vertigo
Ménière's disease
Labyrinthitis
Otitis media
Brain tumor
Acoustic neuroma
Motion sickness
Ramsay Hunt syndrome
Fatal Familial Insomnia
Migraine
Multiple sclerosis
Pregnancy
Low blood pressure (hypotension)
Low blood oxygen content (hypoxemia)
Heart attack
Iron deficiency (anemia)
Vitamin B12 deficiency
Low blood sugar (hypoglycemia)
Hormonal changes (e.g. thyroid disease, menstruation, pregnancy)
Panic disorder
Hyperventilation
Anxiety
Depression
Age-diminished visual, balance, and perception of spatial orientation abilities
Stroke; cause of isolated dizziness in 0.7% of people who present to the emergency department
Disequilibrium
In medicine, disequilibrium refers to impaired equilibrioception that can be characterised as a sensation of impending fall or of the need to obtain external assistance for proper locomotion. It is sometimes described as a feeling of improper tilt of the floor, or as a sense of floating. This sensation can originate in the inner ear or other motion sensors, or in the central nervous system. Neurologic disorders tend to cause constant vertigo or disequilibrium and usually have other symptoms of neurologic dysfunction associated with the vertigo. Many medications used to treat seizures, depression, anxiety, and pain affect the vestibular system and the central nervous system which can cause the symptom of disequilibrium.
| Biology and health sciences | Symptoms and signs | Health |
697387 | https://en.wikipedia.org/wiki/Granular%20convection | Granular convection | Granular convection is a phenomenon where granular material subjected to shaking or vibration will exhibit circulation patterns similar to types of fluid convection. It is sometimes called the Brazil nut effect, when the largest of irregularly shaped particles end up on the surface of a granular material containing a mixture of variously sized objects. This name derives from the example of a typical container of mixed nuts, in which the largest will be Brazil nuts. The phenomenon is also known as the muesli effect since it is seen in packets of breakfast cereal containing particles of different sizes but similar density, such as muesli mix.
Under experimental conditions, granular convection of variously sized particles has been observed forming convection cells similar to fluid motion.
Explanation
It may be counterintuitive to find that the largest and (presumably) heaviest particles rise to the top, but several explanations are possible:
When the objects are irregularly shaped, random motion causes some oblong items to occasionally turn in a vertical orientation. The vertical orientation allows smaller items to fall beneath the larger item. If subsequent motion causes the larger item to re-orient horizontally, then it will remain at the top of the mixture.
The center of mass of the whole system (containing the mixed nuts) in an arbitrary state is not optimally low; it has the tendency to be higher due to there being more empty space around the larger Brazil nuts than around smaller nuts. When the nuts are shaken, the system has the tendency to move to a lower energy state, which means moving the center of mass down by moving the smaller nuts down and thereby the Brazil nuts up.
Including the effects of air in spaces between particles, larger particles may become buoyant or sink. Smaller particles can fall into the spaces underneath a larger particle after each shake. Over time, the larger particle rises in the mixture. (According to Heinrich Jaeger, "[this] explanation for size separation might work in situations in which there is no granular convection, for example for containers with completely frictionless side walls or deep below the surface of tall containers (where convection is strongly suppressed). On the other hand, when friction with the side walls or other mechanisms set up a convection roll pattern inside the vibrated container, we found that the convective motion immediately takes over as the dominant mechanism for size separation.")
The same explanation without buoyancy or center of mass arguments: As a larger particle moves upward, any motion of smaller particles into the spaces underneath blocks the larger particle from settling back in its previous position. Repetitive motion results in more smaller particles slipping beneath larger particles. A greater density of the larger particles has no effect on this process. Shaking is not necessary; any process which raises particles and then lets them settle would have this effect. The process of raising the particles imparts potential energy into the system. The result of all the particles settling in a different order may be an increase in the potential energy—a raising of the center of mass.
When shaken, the particles move in vibration-induced convection flow; individual particles move up through the middle, across the surface, and down the sides. If a large particle is involved, it will be moved up to the top by convection flow. Once at the top, the large particle will stay there because the convection currents are too narrow to sweep it down along the wall.
The pore size distribution of a random packing of hard spheres of various sizes means that smaller spheres have a higher probability of moving downwards under gravity than larger spheres.
The phenomenon is related to Parrondo's paradox in as much as the Brazil nuts move to the top of the mixed nuts against the gravitational gradient when subjected to random shaking.
Study techniques
Granular convection has been probed by the use of magnetic resonance imaging (MRI), where convection rolls similar to those in fluids (Bénard cells) can be visualized.
Other studies have used time-lapse CT scans, refractive index matched fluids, and positron emission tracing. On the lower-tech end of the scale, researchers have also used thin, clear plastic boxes, so that the motion of some objects is directly visible.
The effect has been observed even in tiny particles driven only by Brownian motion with no external energy input.
Applications
Manufacturing
The effect is of interest to food manufacturing and similar operations. Once a homogeneous mixture of granular materials has been produced, it is usually undesirable for the different particle types to segregate. Several factors determine the severity of the Brazil nut effect, including the sizes and densities of the particles, the pressure of any gas between the particles, and the shape of the container. A rectangular box (such as a box of breakfast cereal) or cylinder (such as a can of nuts) works well to favour the effect, while a container with outwardly slanting walls (such as in a conical or spherical geometry) results in what is known as the reverse Brazil nut effect.
Astronomy
In astronomy, granular convection is common in low-density or rubble-pile asteroids, for example the asteroids 25143 Itokawa and 101955 Bennu.
Geology
In geology, the effect is common in formerly glaciated areas such as New England and in regions of permafrost where the landscape is shaped into hummocks by frost heave: new stones appear in the fields every year from deeper underground. Horace Greeley noted, "Picking stones is a never-ending labor on one of those New England farms. Pick as closely as you may, the next plowing turns up a fresh eruption of boulders and pebbles, from the size of a hickory nut to that of a tea-kettle." A hint to the cause appears in his further description that "this work is mainly to be done in March or April, when the earth is saturated with ice-cold water". Underground water freezes, lifting all particles above it. As the water starts to melt, smaller particles can settle into the opening spaces while larger particles are still raised. By the time ice no longer supports the larger rocks, they are at least partially supported by the smaller particles that slipped below them. Repeated freeze-thaw cycles in a single year speed up the process.
This phenomenon is one of the causes of inverse grading which can be observed in many situations including soil liquefaction during earthquakes or mudslides. Liquefaction is a general phenomenon where a mixture of fluid and granular material subjected to vibration ultimately leads to circulation patterns similar to both fluid convection and granular convection. Indeed, liquefaction is fluid-granular convection with circulation patterns which are known as sand boils or sand volcanoes in the study of soil liquefaction. Granular convection is also exemplified by debris flow, which is a fast moving, liquefied landslide of unconsolidated, saturated debris that looks like flowing concrete. These flows can carry material ranging in size from clay to boulders, including woody debris such as logs and tree stumps. Flows can be triggered by intense rainfall, glacial melt, or a combination of the two.
| Physical sciences | Soil mechanics | Physics |
697411 | https://en.wikipedia.org/wiki/Safe | Safe | A safe (also called a strongbox or coffer) is a secure lockable enclosure used for securing valuable objects against theft or fire. A safe is usually a hollow cuboid or cylinder, with one face being removable or hinged to form a door. The body and door may be cast from metal (such as steel) or formed out of plastic through blow molding. Bank teller safes typically are secured to the counter, have a slit opening for dropping valuables into the safe without opening it, and a time-delay combination lock to foil thieves. One significant distinction between types of safes is whether the safe is secured to a wall or structure or if it can be moved around.
History
The first known safe dates back to the 13th century BC and was found in the tomb of Pharaoh Ramesses II. It was made of wood and consisted of a locking system resembling the modern pin tumbler lock.
In the 16th century, blacksmiths in southern Germany, Austria, and France first forged cash boxes in sheet iron. These sheet-iron money chests served as the models for mass-produced cash boxes in the 19th century.
In the 17th century, in northern Europe, iron safes were sometimes made in the shape of a barrel, with a padlock on top.
In 1835, English inventors Charles and Jeremiah Chubb in Wolverhampton, England, received a patent for a burglar-resisting safe and began a production of safes. The Chubb brothers had produced locks since 1818. Chubb Locks was an independent company until 2000 when it was sold to Assa Abloy.
On November 2, 1886, inventor Henry Brown patented a "receptacle for storing and preserving papers". The container was fire retardant and accident resistant as it was made from forged metal. The box was able to be safely secured with a lock and key and also able to maintain organization by offering different slots to organize important papers.
Specifications
Specifications for safes include some or all of the following parameters:
Burglar-resistance
Fire-resistance
Environmental resistance (e.g., to water or dust)
Type of lock (e.g., combination, key, time lock, electronic locking)
Location (e.g., wall safe, floor safe)
Smart safes as part of an automated cash handling system
It is often possible to open a safe without access to the key or knowledge of the combination; this activity is known as safe-cracking and is a popular theme in heist films.
A diversion safe, or hidden safe, is a safe that is made from an otherwise ordinary object such as a book, a candle, a can, or wall outlet. Valuables are placed in these hidden safes, which are themselves placed inconspicuously (for example, a book would be placed on a book shelf).
Fire-resistant record protection equipment consists of self-contained devices that incorporate insulated bodies, doors, drawers or lids, or non-rated multi-drawer devices housing individually rated containers that contain one or more inner compartments for storage of records. These devices are intended to provide protection to one or more types of records, as evidenced by the assigned Class rating or ratings: Class 350 for paper, Class 150 for microfilm, microfiche, and other photographic film, and Class 125 for magnetic media and hard drives. Enclosures of this type are typically rated to protect contents for ½, 1, 2, or 4 hours; they will not protect indefinitely. They may also be rated for their resistance to impact should the safe fall a specified distance onto a hard surface, or have debris fall upon it during a fire.
Burglary-resistant safes are rated as to their resistance to various types of tools and the duration of the attack.
Safes can contain hardware that automatically dispenses cash or validates bills as part of an automated cash handling system.
Room-sized fireproof vaults
For larger volumes of heat-sensitive materials, a modular room-sized vault is much more economical than purchasing and storing many fire rated safes. Typically these room-sized vaults are utilized by corporations, government agencies and off-site storage service firms. Fireproof vaults are rated up to Class 125-4 Hour for large data storage applications. These vaults utilize ceramic fiber, a high temperature industrial insulating material, as the core of their modular panel system. All components of the vault, not just the walls and roof panels, must be Class 125 rated to achieve that overall rating for the vault. This includes the door assembly (a double door is needed since there is no single Class 125 vault door available), cable penetrations, coolant line penetrations (for split HVAC systems), and air duct penetrations.
There are also Class 150 applications (such as microfilm) and Class 350 vaults for protecting valuable paper documents. Like the data-rated (Class 125) structures, these vault systems employ ceramic fiber insulation and components rated to meet or exceed the required level of protection.
In recent years room-sized Class 125 vaults have been installed to protect entire data centers. As data storage technologies migrate from tape-based storage methods to hard drives, this trend is likely to continue.
Fire-resistant safes
A fire-resistant safe is a type of safe that is designed to protect its contents from high temperatures or actual fire. Fire resistant safes are usually rated by the amount of time they can withstand the extreme temperatures a fire produces, while not exceeding a set internal temperature, e.g., less than . Models are typically available between half-hour and four-hour durations.
In the UK, the BS EN-1047 standard is set aside for data and document safes to determine their ability to withstand prolonged intense heat and impact damage.
Document safes are designed to maintain an internal temperature no greater than while in a constantly heated environment in excess of .
Data safes are designed to maintain an internal temperature no greater than while in a constantly heated environment in excess of .
These conditions are maintained for the duration of the test. This is usually at least 30 minutes but can extend to many hours depending on grade.
Both kinds of safe are also tested for impact by dropping from a set height onto a solid surface and then tested for fire survivability once again.
In the United States, both the writing of standards for fire-resistance and the actual testing of safes is performed by Underwriters Laboratories.
An in-floor safe installed in a concrete floor is very resistant to fire. However, not all floor safes are watertight; they may fill with water from fire hoses. Contents can be protected against water damage by appropriate packaging.
Reinforced, fireproof cabinets are also used for dangerous chemicals or flammable goods.
Wall safes
Wall safes are designed to provide hidden protection for documents and miscellaneous valuables. Adjustable depth allows the maximization of usable space when installed in different wall thicknesses. Some wall safes have pry-resistant recessed doors with concealed hinges. A painting can be hung over a wall safe to hide it.
Small safes may be fixed to a wall to prevent the entire safe being removed, without concealment. Very small secure enclosures known as key safes, opened by entering a combination, are attached to the wall of a building to store the keys allowing access, so that they are available only to a person knowing the combination, typically for holiday lets, carers, or emergency use.
Safe-cracking
Safe-cracking is opening a safe without a combination or key. There are many methods of safe-cracking ranging from brute force methods to guessing the combination. The easiest method that can be used on many safes is "safe bouncing", which involves hitting the safe on top; this may cause the locking pin to budge, opening the safe.
Physicist Richard Feynman gained a reputation for safe-cracking while working on the Manhattan Project during the Second World War. He did this for recreation, describing his experiences and methods in detail in his book Surely You're Joking, Mr. Feynman!. He made the point that the secure storage he clandestinely opened (to which he would have been given access had he asked) held contents far more important than anything a thief had ever accessed: all the secrets of the wartime atomic bomb project.
UL Safe Standards
Underwriters Laboratories (UL) testing certifications are known to be some of the most rigorous and most respected in the world. UL provides numerous ratings; the most common security and fire ratings are discussed below. UL ratings are the typical rating standards used for safes within the United States. They are only matched by B.T.U/VDMA certifications (Germany).
Fire ratings
UL provides a variety of fire rating classifications (125, 150, and 350), the number representing the maximum internal temperature in degrees Fahrenheit that the safe's interior may not exceed during the test. The classifications come in durations from ½ hour to 4 hours. The safe is exposed to progressively higher temperatures depending on the duration of the test. The most common ratings are the Class 350 one-hour (1,700 degrees Fahrenheit) and Class 350 two-hour (1,850 degrees) ratings, since paper chars at approximately 451 degrees Fahrenheit.
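As a rough illustration, the class-and-duration scheme described above can be expressed as a small lookup. The class numbers, protected media, and durations come from the text; the helper function and its name are hypothetical and not part of any UL document.

```python
# Illustrative sketch only: UL fire classes as described above.
# The class number is the maximum internal temperature (degrees Fahrenheit)
# the contents may reach during the test; rated durations run from 1/2 to 4 hours.
UL_FIRE_CLASSES = {
    125: "magnetic media and hard drives",
    150: "microfilm, microfiche and other photographic film",
    350: "paper records",
}
RATED_DURATIONS_HOURS = (0.5, 1, 2, 4)

def describe_rating(fire_class: int, duration_hours: float) -> str:
    """Hypothetical helper returning a label such as 'Class 350-1 Hour'."""
    if fire_class not in UL_FIRE_CLASSES:
        raise ValueError(f"unknown UL fire class: {fire_class}")
    if duration_hours not in RATED_DURATIONS_HOURS:
        raise ValueError(f"unrated duration: {duration_hours}")
    return f"Class {fire_class}-{duration_hours} Hour (protects {UL_FIRE_CLASSES[fire_class]})"

print(describe_rating(350, 1))  # Class 350-1 Hour (protects paper records)
```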
Burglary ratings
UL standards are one of the principal North American protection standards. The resistance time limit specifies "tools on the safe" time without access to contents. The test might take hours to run and can be repeated as many times as the UL staff feel necessary to ensure that all prospective avenues of attack have been thoroughly explored.
Residential Security Containers (RSC)
This is the entry-level security rating offered by Underwriters Laboratories and it has its own standard, UL 1037. The standard originally had one level, now known as RSC Level I; it was expanded in 2016 to provide a greater range of security options. The standard also involves a drop test for products weighing not more than 750 pounds, simulating an attempt to gain entry by dropping the safe. The three levels are summarised in the short sketch after the list below.
RSC Level I - Must withstand a five-minute attack by one technician using common hand tools such as drills, screwdrivers and hammers.
RSC Level II - Must withstand a ten-minute attack by two technicians who use more aggressive tools such as picks, sledgehammers, pry bars, high-speed carbide drills and pressure applying devices. In addition, the technicians will attempt to make a six-square-inch opening in the door or the front face of the safe.
RSC Level III - Also gives two technicians a ten-minute window to perform the test, but the range of tools becomes even more aggressive, and the size of the maximum attack opening must not exceed two square inches.
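The figures quoted above can be summarised in a small data structure; this is only an illustrative sketch, and the field names are invented for the example rather than taken from UL 1037.

```python
# Illustrative summary of the UL 1037 RSC levels described above.
# Field names are invented for this sketch; the figures come from the text.
# "max_opening_sq_in" is the opening the technicians attempt (and must fail) to make.
RSC_LEVELS = {
    "I":   {"attack_minutes": 5,  "technicians": 1, "max_opening_sq_in": None},
    "II":  {"attack_minutes": 10, "technicians": 2, "max_opening_sq_in": 6},
    "III": {"attack_minutes": 10, "technicians": 2, "max_opening_sq_in": 2},
}

print(RSC_LEVELS["III"]["attack_minutes"])  # 10
```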
Tool-Resistant Safe (TL)
Safes at this level are typically, but not exclusively, used for commercial applications such as jewelers and coin dealers. These ratings are granted to combination-locked safes that successfully resist attack by two technicians using common hand tools, picking tools, mechanical or portable electric tools, grinding points, carbide drills, and pressure-applying devices or mechanisms. In addition, the safe must weigh at least 750 pounds or come with instructions for anchoring, and must have body walls of material equivalent to at least 1-inch open-hearth steel with a minimum tensile strength of 50,000 psi. Tool-resistant safes and higher grades are governed by UL Standard 687.
TL-15 - This is a combination-locked safe that offers limited protection against combinations of common mechanical and electrical tools. The safe will resist abuse for 15 minutes from tools such as hand tools, picking tools, mechanical or electric tools, grinding points, carbide drills and devices that apply pressure. While the UL 687 defines this as a "limited degree" of protection, that standard is used for commercial applications, and the TL-15 rating offers significantly better protection than many unrated safes.
TL-30 - This safe offers moderate protection against combinations of mechanical and electrical tools. The safe will resist abuse for 30 minutes from the same tools as the TL-15 test, plus more aggressive tools including cutting wheels and power saws.
TL-30x6 - This safe can withstand the same assaults as the TL-30, but protection is offered on all six sides of the body rather than only the door.
Torch & Tool Resistant Safe (TRTL)
TRTL-30x6 - This is a combination-locked safe that offers high protection against combinations of mechanical, electrical, and cutting tools. The safe will resist abuse for 30 minutes from tools such as hand tools, picking tools, mechanical or electrical tools, grinding points, carbide drills, devices that apply pressure, cutting wheels, power saws, impact tools and, in addition, can withstand an oxy-fuel welding and cutting torch (tested gas limited to combined total oxygen and fuel gas.)
TRTL-60x6 - This class will withstand the same assaults as Class TRTL-30x6 for 60 minutes.
Torch, Explosive & Tool Resistant Safe (TXTL)
TXTL-60x6 - This class meets all the requirements for Class TRTL-60x6 and, in addition, can withstand the detonation of one charge of nitroglycerin or other high explosive of equivalent energy; multiple charges up to a specified total may be used.
European safe standards
Depending on the usage, the European Committee for Standardization has published different European standards for safes. Testing and certification according to these standards should be done by an accredited certification body, e.g. European Certification Body.
EN 1143-1 is the main testing standard for safes, ATM safes, strongroom doors and strongrooms. For safes it features eleven resistance grades (0, I, II, ..., to X). From one grade to the next the security rises by approximately 50%. Testing is based on a free choice of attack tools and methods. Testing requires partial access (hand hole) and complete access attempts, on all sides of the product. The security is calculated by using ratings of tools and the attack time. The result is expressed in resistance units (RU).
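Because the standard specifies that security rises by roughly 50% from one grade to the next, the relative resistance of the grades can be sketched as a geometric progression. The baseline value and the helper below are purely illustrative and do not reproduce the actual resistance-unit (RU) thresholds defined in EN 1143-1.

```python
# Illustrative only: relative resistance of EN 1143-1 grades, assuming the
# roughly 50% increase per grade mentioned above. The baseline of 1.0 for
# grade 0 is an arbitrary choice, not an actual RU value from the standard.
GRADES = ["0", "I", "II", "III", "IV", "V", "VI", "VII", "VIII", "IX", "X"]

def relative_resistance(grade: str, step: float = 1.5) -> float:
    """Resistance of a grade relative to grade 0 (hypothetical helper)."""
    return step ** GRADES.index(grade)

for g in GRADES:
    print(f"Grade {g:>4}: about {relative_resistance(g):.1f}x grade 0")
```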
EN 14450 is a testing standard for secure cabinets and strongboxes. The standard covers products meant for purposes where the security resistance required is less than that of EN 1143-1.
For fire-resistant safes the EN 1047-1 (fire resistance standard similar to the fire resistance safe standard of UL) and EN 15659 (for light fire storage units) were published.
Crested penguin (https://en.wikipedia.org/wiki/Crested%20penguin)

Eudyptes is a genus of penguins whose members are collectively called crested penguins. The exact number of species in the genus varies between four and seven depending on the authority, and a Chatham Islands species became extinct in recent centuries. All are black and white penguins with yellow crests, red bills and eyes, and are found on Subantarctic islands in the world's southern oceans. All lay two eggs, but raise only one young per breeding season; the first egg laid is substantially smaller than the second.
Taxonomy
The genus Eudyptes was introduced by the French ornithologist Louis Pierre Vieillot in 1816; the name is derived from the Ancient Greek eu, meaning "fine", and dyptes, meaning "diver". The type species was designated as the southern rockhopper penguin by George Robert Gray in 1840.
Six extant species have been classically recognised, with the recent splitting of the rockhopper penguin increasing the total to seven. Conversely, the close relationship of the macaroni and royal penguins, and of the erect-crested and Snares penguins, has led some to propose that each pair should be regarded as a single species.
Order Sphenisciformes
Family Spheniscidae
Fiordland penguin, Eudyptes pachyrhynchus
Snares penguin, Eudyptes robustus – has been considered a subspecies of the Fiordland penguin
Erect-crested penguin, Eudyptes sclateri
Southern rockhopper penguin, Eudyptes chrysocome
Eastern rockhopper penguin, Eudyptes (chrysocome) filholi
Western rockhopper penguin, Eudyptes (chrysocome) chrysocome
Northern rockhopper penguin, Eudyptes moseleyi – traditionally considered a subspecies of Eudyptes chrysocome as the rockhopper penguin.
Royal penguin, Eudyptes schlegeli – sometimes considered a morph of E. chrysolophus
Macaroni penguin, Eudyptes chrysolophus
Chatham penguin, Eudyptes warhami (extinct)
The Chatham Islands Eudyptes warhami is known only from subfossil bones, and became extinct shortly following human colonisation of the Chatham Islands. This genetically-distinct species was relatively large, with a thin, slim and low bill. (T.L. Cole et al. (2019) Mol. Biol. Evol.)
Evolution
Mitochondrial and nuclear DNA evidence suggests that the crested penguins split from the ancestors of their closest living relative, the yellow-eyed penguin, in the mid-Miocene around 15 million years ago, before splitting into separate species around 8 million years ago in the late Miocene.
A fossil penguin genus, Madrynornis, has been identified as the closest known relative of the crested penguins. Found in late Miocene deposits dated to about 10 million years ago, it must have separated from the crested penguins around 12 million years ago. Given that the head ornamentation by yellow filoplumes seems plesiomorphic for the Eudyptes-Megadyptes lineage, Madrynornis probably had them too.
Description
The crested penguins are all similar in appearance, having sharply delineated black and white plumage with red beaks and prominent yellow crests. Their calls are more complex than those of other species, with several phrases of differing lengths. The royal penguin (mostly) has a white face, while other species have black faces.
Breeding
Crested penguins breed on Subantarctic islands in the southern reaches of the world's oceans, with the greatest diversity occurring around New Zealand and its surrounding islands. Their breeding displays and behaviours are generally more complex than those of other penguin species. Both male and female parents take shifts incubating the eggs and young.
Crested penguins lay two eggs but almost always raise only one young successfully. All species exhibit the odd phenomenon of egg-size dimorphism in breeding: the first egg laid (the A-egg) is substantially smaller than the second (the B-egg). This is most extreme in the macaroni penguin, where the first egg averages only 60% of the size of the second. The reason for this remains unknown, although several theories have been proposed. British ornithologist David Lack theorised that the genus was evolving toward laying a one-egg clutch. Experiments with egg substitution have shown that A-eggs can produce viable chicks that are only 7% lighter at the time of fledging. Physiologically, the first egg is smaller because it develops while the mother is still swimming at sea and thus has less energy to invest in it.
Recently, brooding royal and erect-crested penguins have been reported to tip the smaller eggs out as the second is laid.
Gestational diabetes (https://en.wikipedia.org/wiki/Gestational%20diabetes)

Gestational diabetes is a condition in which a woman without diabetes develops high blood sugar levels during pregnancy. Gestational diabetes generally results in few symptoms; however, obesity increases the rate of pre-eclampsia, cesarean sections, and fetal macrosomia, as well as of gestational diabetes itself. Babies born to individuals with poorly treated gestational diabetes are at increased risk of macrosomia, of having hypoglycemia after birth, and of jaundice. If untreated, diabetes can also result in stillbirth. Long term, children are at higher risk of being overweight and of developing type 2 diabetes.
Gestational diabetes can occur during pregnancy because of insulin resistance or reduced production of insulin. Risk factors include being overweight, previously having gestational diabetes, a family history of type 2 diabetes, and having polycystic ovarian syndrome. Diagnosis is by blood tests. For those at normal risk, screening is recommended between 24 and 28 weeks' gestation. For those at high risk, testing may occur at the first prenatal visit.
Maintenance of healthy weight and exercising before pregnancy assist in prevention. Gestational diabetes is treated with a diabetic diet, exercise, medication (such as metformin), and sometimes insulin injections. Most people manage blood sugar with diet and exercise. Blood sugar testing among those who are affected is often recommended four times a day. Breastfeeding is recommended as soon as possible after birth.
Gestational diabetes affects 3–9% of pregnancies, depending on the population studied. It is especially common during the third trimester. It affects 1% of those under the age of 20 and 13% of those over the age of 44. A number of ethnic groups including Asians, American Indians, Indigenous Australians, and Pacific Islanders are at higher risk. However, the variations in prevalence are also due to different screening strategies and diagnostic criteria being used. In 90% of cases, gestational diabetes resolves after the baby is born. Affected people, however, are at an increased risk of developing type 2 diabetes.
Classification
Gestational diabetes is formally defined as "any degree of glucose intolerance with onset or first recognition during pregnancy". This definition acknowledges the possibility that a woman may have previously undiagnosed diabetes mellitus, or may have developed diabetes coincidentally with pregnancy. Whether symptoms subside after pregnancy is also irrelevant to the diagnosis.
A woman is diagnosed with gestational diabetes when glucose intolerance continues beyond 24 to 28 weeks of gestation.
The White classification, named after Priscilla White, who pioneered research on the effect of diabetes types on perinatal outcome, is widely used to assess maternal and fetal risk. It distinguishes between gestational diabetes (type A) and pregestational diabetes (diabetes that existed prior to pregnancy). These two groups are further subdivided according to their associated risks and management.
The two subtypes of gestational diabetes under this classification system are:
Type A1: abnormal oral glucose tolerance test (OGTT), but normal blood glucose levels during fasting and two hours after meals; diet modification is sufficient to control glucose levels
Type A2: abnormal OGTT compounded by abnormal glucose levels during fasting or after meals; additional therapy with insulin or other medications is required
Diabetes which existed prior to pregnancy is also split up into several subtypes under this system:
Type B: onset at age 20 or older and duration of less than 10 years.
Type C: onset at age 10–19 or duration of 10–19 years.
Type D: onset before age 10 or duration greater than 20 years.
Type E: overt diabetes mellitus with calcified pelvic vessels.
Type F: diabetic nephropathy.
Type R: proliferative retinopathy.
Type RF: retinopathy and nephropathy.
Type H: ischemic heart disease.
Type T: prior kidney transplant.
An earlier age of onset or longer-standing disease carries greater risk, hence the distinctions among the first three subtypes.
Two other sets of criteria are available for the diagnosis of gestational diabetes, both based on blood-sugar levels; an illustrative code sketch comparing the two sets follows the lists below.
Criteria for diagnosis of gestational diabetes, using the 100 gram Glucose Tolerance Test, according to Carpenter and Coustan:
Fasting 95 mg/dl
1 hour 180 mg/dl
2 hours 155 mg/dl
3 hours 140 mg/dl
Criteria for diagnosis of gestational diabetes according to National Diabetes Data Group:
Fasting 105 mg/dl
1 hour 190 mg/dl
2 hours 165 mg/dl
3 hours 145 mg/dl
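As a rough illustration, the two threshold sets can be compared in code. The threshold values are those listed above; the rule that two or more values must meet or exceed their thresholds is a common convention for the 100 gram test and is an assumption of this sketch rather than something stated in the text, and the function and variable names are invented.

```python
# Illustrative sketch of the 100 g OGTT criteria listed above (values in mg/dl).
# Assumption (not stated in the text): diagnosis requires meeting or exceeding
# the threshold at two or more time points, a common convention for this test.
CARPENTER_COUSTAN = {"fasting": 95, "1h": 180, "2h": 155, "3h": 140}
NDDG = {"fasting": 105, "1h": 190, "2h": 165, "3h": 145}

def meets_gdm_criteria(results: dict, thresholds: dict, required: int = 2) -> bool:
    """results: measured plasma glucose values (mg/dl) keyed like the threshold dict."""
    exceeded = sum(results[t] >= limit for t, limit in thresholds.items() if t in results)
    return exceeded >= required

sample = {"fasting": 90, "1h": 185, "2h": 160, "3h": 130}
print(meets_gdm_criteria(sample, CARPENTER_COUSTAN))  # True: 1-hour and 2-hour values reached
print(meets_gdm_criteria(sample, NDDG))               # False: no value reaches its threshold
```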
A third set of criteria was endorsed by the Diabetes in Pregnancy Study Group India (DIPSI) and approved by the National Health Mission in its guidelines.
DIPSI (Diabetes in Pregnancy Study Group India) guidelines
The OGTT is performed in pregnant women by measuring plasma glucose 2 hours after ingesting 75 grams of glucose (monohydrate or anhydrous dextrose), irrespective of fasting status. The Indian guidelines (the DIPSI test) offer a simple way to diagnose gestational diabetes (GDM) and can be carried out quickly in low-resource settings, where many pregnant women attend antenatal check-ups in a non-fasting state. A single value of ≥140 mg/dl is diagnostic of gestational diabetes mellitus.
Guidelines for screening glucose intolerance at the appropriate gestational weeks: GDM can be predicted if the 2-hour postprandial blood glucose (PPBG) is ≥110 mg/dl at the 10th week. PPBG should be estimated at the 8th week, because if it exceeds 110 mg/dl at that point, a grace period of 2 weeks is available to bring it below 110 mg/dl by the 10th week with metformin 250 mg twice a day, in addition to medical nutritional therapy (MNT) and exercise.
Risk factors
Classical risk factors for developing gestational diabetes are:
Polycystic ovary syndrome
A previous diagnosis of gestational diabetes or prediabetes, impaired glucose tolerance, or impaired fasting glycaemia
A family history revealing a first-degree relative with type 2 diabetes
Maternal age – a woman's risk factor increases as she gets older (especially for women over 35 years of age).
Paternal age – one study found that a father's age over 55 years was associated with GDM
Ethnicity (those with higher risk factors include African-Americans, Afro-Caribbeans, Native Americans, Hispanics, Pacific Islanders, and people originating from South Asia)
Being overweight, obese or severely obese increases the risk by a factor of 2.1, 3.6 and 8.6, respectively.
A previous pregnancy which resulted in a child with a macrosomia (high birth weight: >90th centile or >4000 g (8 lbs 12.8 oz))
Previous poor obstetric history
Other genetic risk factors: at least 10 genes have polymorphisms associated with an increased risk of gestational diabetes, most notably TCF7L2. The MTNR1B gene is commonly associated with how the body handles insulin and glucose; when this gene does not function properly, it can lead to reduced insulin production and higher blood glucose levels.
In addition to this, statistics show a double risk of GDM in smokers. Some studies have looked at more controversial potential risk factors, such as short stature.
About 40–60% of women with GDM have no demonstrable risk factor; for this reason many advocate to screen all women. Typically, women with GDM exhibit no symptoms (another reason for universal screening), but some women may demonstrate increased thirst, increased urination, fatigue, nausea and vomiting, bladder infection, yeast infections and blurred vision.
Pregnant women with these risk factors may need to undergo an early screening in addition to the routine screening.
Pathophysiology
The precise mechanisms underlying gestational diabetes remain unknown. The hallmark of GDM is increased insulin resistance. Pregnancy hormones and other factors are thought to interfere with the action of insulin as it binds to the insulin receptor. The interference probably occurs at the level of the cell signaling pathway beyond the insulin receptor. Since insulin promotes the entry of glucose into most cells, insulin resistance prevents glucose from entering the cells properly. As a result, glucose remains in the bloodstream, where glucose levels rise. More insulin is needed to overcome this resistance; about 1.5–2.5 times more insulin is produced than in a normal pregnancy.
Insulin resistance is a normal phenomenon emerging in the second trimester of pregnancy, which in cases of GDM progresses thereafter to levels seen in a non-pregnant woman with type 2 diabetes. It is thought to secure glucose supply to the growing fetus. Women with GDM have an insulin resistance that they cannot compensate for with increased production in the β-cells of the pancreas. Placental hormones, and, to a lesser extent, increased fat deposits during pregnancy, seem to mediate insulin resistance during pregnancy. Cortisol and progesterone are the main culprits, but human placental lactogen, prolactin and estradiol contribute, too. Multivariate stepwise regression analysis reveals that, in combination with other placental hormones, leptin, tumor necrosis factor alpha, and resistin are involved in the decrease in insulin sensitivity occurring during pregnancy, with tumor necrosis factor alpha named as the strongest independent predictor of insulin sensitivity in pregnancy. An inverse correlation with the changes in insulin sensitivity from the time before conception through late gestation accounts for about half of the variance in the decrease in insulin sensitivity during gestation: in other words, low levels or alteration of TNF alpha factors corresponds with a greater chance of, or predisposition to, insulin resistance or sensitivity.
It is unclear why some women are unable to balance insulin needs and develop GDM; however, a number of explanations have been given, similar to those in type 2 diabetes: autoimmunity, single gene mutations, obesity, along with other mechanisms.
Though the clinical presentation of gestational diabetes is well characterized, the biochemical mechanism behind the disease is not well known. One proposed biochemical mechanism involves insulin-producing β-cell adaptation controlled by the HGF/c-MET signaling pathway. β-cell adaptation refers to the change that pancreatic islet cells undergo during pregnancy in response to maternal hormones in order to compensate for the increased physiological needs of mother and baby. These changes in the β-cells cause increased insulin secretion as a result of increased β-cell proliferation.
HGF/c-MET has also been implicated in β-cell regeneration, which suggests that HGF/c-MET may help increase β-cell mass in order to compensate for insulin needs during pregnancy. Recent studies support that loss of HGF/c-MET signaling results in aberrant β-cell adaptation.
c-MET is a receptor tyrosine kinase (RTK) that is activated by its ligand, hepatocyte growth factor (HGF), and is involved in the activation of several cellular processes. When HGF binds c-MET, the receptor homodimerizes and self-phosphorylates to form an SH2 recognition domain. The downstream pathways activated include common signaling molecules such as RAS and MAPK, which affect cell motility and cell cycle progression.
Studies have shown that HGF is an important signaling molecule in stress related situations where more insulin is needed. Pregnancy causes increased insulin resistance and so a higher insulin demand. The β-cells must compensate for this by either increasing insulin production or proliferating. If neither of the processes occur, then markers for gestational diabetes are observed. It has been observed that pregnancy increases HGF levels, showing a correlation that suggests a connection between the signaling pathway and increased insulin needs. In fact, when no signaling is present, gestational diabetes is more likely to occur.
The exact mechanism of HGF/c-MET-regulated β-cell adaptation is not yet known, but there are several hypotheses about how the signaling molecules contribute to insulin levels during pregnancy. c-MET may interact with FoxM1, a molecule important in the cell cycle, as FoxM1 levels decrease when c-MET is not present. Additionally, c-MET may interact with p27, as levels of that protein increase when c-MET is not present. Another hypothesis is that c-MET may control β-cell apoptosis, because a lack of c-MET causes increased cell death, but the signaling mechanisms have not been elucidated.
Although the mechanism of HGF/c-MET control of gestational diabetes is not yet well understood, there is a strong correlation between the signaling pathway and the inability to produce an adequate amount of insulin during pregnancy and thus it may be the target for future diabetic therapies.
Because glucose travels across the placenta through diffusion facilitated by the GLUT1 carrier, which is located in the syncytiotrophoblast on both the microvillus and basal membranes, these membranes may be the rate-limiting step in placental glucose transport. There is a two- to three-fold increase in the expression of syncytiotrophoblast glucose transporters with advancing gestation. Finally, the role of GLUT3/GLUT4 transport remains speculative. In untreated gestational diabetes, the fetus is exposed to consistently higher glucose levels, which leads to increased fetal levels of insulin (insulin itself cannot cross the placenta). The growth-stimulating effects of insulin can lead to excessive growth and a large body (macrosomia). After birth, the high-glucose environment disappears, leaving these newborns with ongoing high insulin production and susceptibility to low blood glucose levels (hypoglycemia).
Screening
A number of screening and diagnostic tests have been used to look for high levels of glucose in plasma or serum under defined circumstances. One method is a stepwise approach, in which a suspicious result on a screening test is followed by a diagnostic test. Alternatively, a more involved diagnostic test can be used directly at the first prenatal visit for a woman with a high-risk pregnancy (for example, in those with polycystic ovarian syndrome or acanthosis nigricans).
Non-challenge blood glucose tests involve measuring glucose levels in blood samples without challenging the subject with glucose solutions. A blood glucose level is determined when fasting, two hours after a meal, or simply at any random time. In contrast, challenge tests involve drinking a glucose solution and measuring glucose concentration thereafter in the blood; in diabetes, they tend to remain high. The glucose solution has a very sweet taste which some women find unpleasant; sometimes, therefore, artificial flavours are added. Some women may experience nausea during the test, and more so with higher glucose levels.
There is currently not enough research to show which way is best at diagnosing gestational diabetes. Routine screening of women with a glucose challenge test may find more women with gestational diabetes than only screening women with risk factors. Hemoglobin A1c (HbA1c) is not recommended for diagnosing gestational diabetes, as it is a less reliable marker of glycemia during pregnancy than oral glucose tolerance testing (OGTT).
Because women diagnosed with Gestational Diabetes (GDM) during pregnancy are at an increased risk for developing Type 2 Diabetes Mellitus after pregnancy, post pregnancy glucose tolerance testing is needed. Based on the recent meta-analysis conducted by the Patient-Centered Outcomes Research Institute, research has shown that post pregnancy testing reminders are associated with greater adherence to oral glucose tolerance testing up to 1 year postpartum.
Pathways
Opinions differ about optimal screening and diagnostic measures, in part due to differences in population risks, cost-effectiveness considerations, and lack of an evidence base to support large national screening programs. The most elaborate regimen entails a random blood glucose test during a booking visit, a screening glucose challenge test around 24–28 weeks' gestation, followed by an OGTT if the tests are outside normal limits. If there is a high suspicion, a woman may be tested earlier.
In the United States, most obstetricians prefer universal screening with a screening glucose challenge test. In the United Kingdom, obstetric units often rely on risk factors and a random blood glucose test. The American Diabetes Association and the Society of Obstetricians and Gynaecologists of Canada recommend routine screening unless the woman is low risk (this means the woman must be younger than 25 years and have a body mass index less than 27, with no personal, ethnic or family risk factors). The Canadian Diabetes Association and the American College of Obstetricians and Gynecologists recommend universal screening. The U.S. Preventive Services Task Force found there is insufficient evidence to recommend for or against routine screening, and a 2017 Cochrane review found that there is no evidence to determine which screening method is best for women and their babies.
Some pregnant women and care providers choose to forgo routine screening due to the absence of risk factors; however, this is not advised, because a large proportion of women develop gestational diabetes despite having no risk factors present, and because of the dangers to the mother and baby if gestational diabetes remains untreated.
Non-challenge blood glucose tests
When a plasma glucose level is found to be higher than 126 mg/dL (7.0 mmol/L) after fasting, or over 200 mg/dL (11.1 mmol/L) on any occasion, and if this is confirmed on a subsequent day, the diagnosis of GDM is made, and no further testing is required. These tests are typically performed at the first antenatal visit. They are simple to administer and inexpensive, but have a lower test performance compared to the other tests, with moderate sensitivity, low specificity and high false positive rates.
Screening glucose challenge test
The screening glucose challenge test (sometimes called the O'Sullivan test) is performed between 24 and 28 weeks, and can be seen as a simplified version of the oral glucose tolerance test (OGTT). No previous fasting is required for this screening test, in contrast to the OGTT. The O'Sullivan test involves drinking a solution containing 50 grams of glucose, and measuring blood levels one hour later.
If the cut-off point is set at 140 mg/dL (7.8 mmol/L), 80% of women with GDM will be detected. If this threshold for further testing is lowered to 130 mg/dL, 90% of GDM cases will be detected, but there will also be more women who will be subjected to a consequent OGTT unnecessarily.
Oral glucose tolerance test
A standardized oral glucose tolerance test (OGTT) should be done in the morning after an overnight fast of between 8 and 14 hours. During the three previous days the subject must have an unrestricted diet (containing at least 150 g carbohydrate per day) and unlimited physical activity. The subject should remain seated during the test and should not smoke throughout the test.
IADPSG (International Association of Diabetes and Pregnancy Study Groups) has developed diagnostic criteria for GDM, based on the results of adverse pregnancy outcomes in the Hyperglycemia and Adverse Pregnancy Outcomes (HAPO) study. These were recommended by WHO 2013.
According to these criteria, gestational diabetes mellitus should be diagnosed at any time in pregnancy if at least one of the following criteria is met, using a 75 g glucose OGTT (a minimal code sketch follows the list):
Fasting blood glucose level ≥92 mg/dL (5.1 mmol/L)
1 hour blood glucose level ≥180 mg/dL (10 mmol/L)
2 hour blood glucose level ≥153 mg/dL (8.5 mmol/L)
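A minimal sketch of this one-abnormal-value rule, using the 75 g OGTT thresholds listed above; the function name and dictionary keys are assumptions of the example.

```python
# Illustrative sketch of the WHO 2013 / IADPSG 75 g OGTT criteria listed above.
# Diagnosis is made if any single value meets or exceeds its threshold (mg/dL).
IADPSG_75G = {"fasting": 92, "1h": 180, "2h": 153}

def iadpsg_gdm(results_mg_dl: dict) -> bool:
    """True if any measured value meets or exceeds the corresponding threshold."""
    return any(results_mg_dl.get(t, 0) >= limit for t, limit in IADPSG_75G.items())

print(iadpsg_gdm({"fasting": 88, "1h": 181, "2h": 140}))  # True: the 1-hour value is >= 180
```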
Urinary glucose testing
Women with GDM may have high glucose levels in their urine (glucosuria). Although dipstick testing is widely practiced, it performs poorly, and discontinuing routine dipstick testing has not been shown to cause underdiagnosis where universal screening is performed. Increased glomerular filtration rates during pregnancy contribute to some 50% of women having glucose in their urine on dipstick tests at some point during their pregnancy. The sensitivity of glucosuria for GDM in the first two trimesters is only around 10% and the positive predictive value is around 20%.
Prevention
Vitamin D supplementation during pregnancy may help to prevent gestational diabetes. A 2015 review found that when done during pregnancy moderate physical exercise is effective for the prevention of gestational diabetes. A 2014 review however did not find a significant effect. It is uncertain if additional dietary advice interventions help to reduce the risk of gestational diabetes. However, data from the Nurses' Health Study shows that adherence to a healthy plant-based diet is associated with lower risk for GDM.
Diet and physical activity interventions designed to prevent excessive gestational weight gain reduce the rates of gestational diabetes. However, the impact of these interventions varies with the body-mass index of the person as well as with the region in which the studies were performed.
Moderate-quality evidence suggest that there is a reduced risk of gestational diabetes mellitus and caesarean section with combined diet and exercise interventions during pregnancy as well as reductions in gestational weight gain, compared with standard care.
A 2023 review found that a plant-based diet (including fruits, vegetables, whole grains, nuts and seeds, and tea) rich in phytochemicals lowers the risk of GDM. A Cochrane review, updated in 2023, stated that myo-inositol has a potential beneficial effect in improving insulin sensitivity, suggesting that it may be useful for women in preventing gestational diabetes.
It has been suggested that for women who have had gestational diabetes, diet, exercise, education, and lifestyle changes between pregnancies may lower their chances of having gestational diabetes again in future pregnancies. For women with a normal pre-pregnancy BMI, light to moderate exercise for 30-60 minutes three times a week during pregnancy can decrease the occurrence of GDM. Completing at least 600 MET-min/week of moderate-intensity exercise has been associated with at least a 25% reduction in the odds of developing GDM. When the effects of aerobic and resistance training were compared, no differences were found in fasting blood glucose levels, insulin utilization rate, or pregnancy outcomes; however, a greater improvement in the 2-hour postprandial blood glucose level was seen with resistance training. The resistance training group was also more compliant with their workout program than the aerobic group. Based on this, resistance training may be a better option for women with gestational diabetes, but doing both aerobic and resistance training would be optimal.
Management
Treatment of GDM with diet and insulin reduces health problems for mother and child. Treatment of GDM is also accompanied by more inductions of labour.
A repeat OGTT should be carried out 6 weeks after delivery, to confirm the diabetes has disappeared. Afterwards, regular screening for type 2 diabetes is advised.
Lifestyle interventions include exercise, diet advice, behavioural interventions, relaxation, self-monitoring glucose, and combined interventions. Women with gestational diabetes who receive lifestyle interventions seem to have less postpartum depression, and were more likely to reach their weight loss targets after giving birth, than women who had no intervention. Their babies are also less likely to be large for their gestational age, and have less percentage of fat when they are born. More research is needed to find out which lifestyle interventions are best. Some women with GDM use probiotics but it is very uncertain if there are any benefits in terms of blood glucose levels, high blood pressure disorders or induction of labour.
If a diabetic diet or G.I. Diet, exercise, and oral medication are inadequate to control glucose levels, insulin therapy may become necessary.
The development of macrosomia can be evaluated during pregnancy by using sonography. Women who use insulin, with a history of stillbirth, or with hypertension are managed like women with overt diabetes.
Researchers have found ways for pregnant women with gestational diabetes to reduce complications affecting their current health, long-term outcomes, and fetal health with the help of exercise. Laredo-Aguilera et al. and Dipla et al. both presented findings from systematic reviews and meta-analyses showing positive effects of resistance exercise or a combination of resistance and aerobic exercise. Aerobic and resistance training were found to help control glucose, HbA1c, and insulin levels in women with GDM. Not only is the mother's health affected, but also the fetus's: if GDM is not treated or worsens, the child may suffer from macrosomia, impaired intrauterine growth, obstetric trauma, hyperbilirubinemia, hypoglycemia, or even infection. Pregnant women with GDM who are overweight or obese are 2.14-3.56 times more likely to pass on these negative effects. Benefits of resistance and aerobic exercise for the mother include decreases in cramps, lower back pain, edema, depression, urinary incontinence, duration of labor, constipation, and the number of c-sections. These benefits can extend to the fetus through decreased body fat mass, improved stress tolerance, and advanced neurobehavioral maturation. The review by Laredo-Aguilera et al. drew on seven interventions from seven different countries; across all of them there were significant improvements in glucose concentration, reduced requirements for insulin injections, postprandial glucose control, and glycemic control. The review by Dipla et al. found that even a single exercise bout increases skeletal muscle glucose uptake, minimizing hyperglycemia. Regular exercise training has been found to promote mitochondrial biogenesis, improve oxidative capacity, enhance insulin sensitivity and vascular function, and reduce systemic inflammation in women with GDM. Women with GDM must provide enough glucose to the fetus but often become insulin resistant. Exercise was previously considered potentially dangerous for women during pregnancy, but multiple studies have since found otherwise. It is important for women with gestational diabetes to use their heart rate reserve to determine exercise intensity and to exercise at an RPE between 12 and 14.
Lifestyle
Counselling before pregnancy (for example, about preventive folic acid supplements) and multidisciplinary management are important for good pregnancy outcomes. Most women can manage their GDM with dietary changes and exercise. Self monitoring of blood glucose levels can guide therapy. Some women will need antidiabetic drugs, most commonly insulin therapy.
Any diet needs to provide sufficient calories for pregnancy, typically 2,000–2,500 kcal with the exclusion of simple carbohydrates. The main goal of dietary modifications is to avoid peaks in blood sugar levels. This can be done by spreading carbohydrate intake over meals and snacks throughout the day, and using slow-release carbohydrate sources—known as the G.I. Diet. Since insulin resistance is highest in mornings, breakfast carbohydrates need to be restricted more.
The Mediterranean diet may be associated with reduced incidence of gestational diabetes. However, there is not enough evidence to indicate if one type of dietary advice is better than another.
Though there is no specific structure for exercise programs for GDM, it is understood that constant exposure to a sedentary lifestyle and participation in less than 2,999 MET-min a week of physical activity is linked to a 10 times higher risk of developing GDM; conversely, partaking in more than 3,000 MET-min of any physical activity can reduce the risk of developing GDM. Light-intensity walking is an effective way to help control casual glucose level (CGL), but a minimum of 6,000 steps must be achieved daily for consistent effectiveness in controlling CGL. Nevertheless, there is no significant correlation between light-intensity walking and HbA1c, so regular moderate-intensity exercise is advised; aerobic exercise in particular has been shown to improve both fasting and postprandial blood glucose, insulin dosage, and insulin usage within the body. It is still contested which form of exercise or physical activity is best for pregnant women, yet some movement is better than no movement.
Although exercise does not reduce the risk of developing GDM, it does help reduce some of the risk associated with it. When it comes to exercise in pregnant women who have GDM there is a decrease in the risk of having a newborn with macrosomia, decrease in maternal weight gain and a decrease in c-sections. Although exercise is not the cure for GDM it does help pregnant women decrease any complication or risk factors that can arise from disease.
Self-monitoring can be accomplished using a handheld capillary blood glucose meter. Compliance with these glucometer systems can be low. There is not a lot of research into what target blood sugar levels should be for women with gestational diabetes, and the targets recommended to women vary around the world. Target ranges advised by the Australasian Diabetes in Pregnancy Society are as follows (a small code sketch checking readings against them follows the list):
fasting capillary blood glucose levels <5.5 mmol/L
1 hour postprandial capillary blood glucose levels <8.0 mmol/L
2 hour postprandial blood glucose levels <6.7 mmol/L
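A minimal sketch, assuming the ADIPS targets listed above; the function name and the keys used for the readings are invented for this example and do not come from any clinical guideline.

```python
# Illustrative sketch of the ADIPS self-monitoring targets listed above (mmol/L).
ADIPS_TARGETS_MMOL_L = {"fasting": 5.5, "1h_postprandial": 8.0, "2h_postprandial": 6.7}

def readings_within_target(readings: dict) -> dict:
    """Return, for each supplied reading, whether it is below the recommended target."""
    return {t: readings[t] < limit for t, limit in ADIPS_TARGETS_MMOL_L.items() if t in readings}

print(readings_within_target({"fasting": 5.1, "2h_postprandial": 7.2}))
# {'fasting': True, '2h_postprandial': False}
```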
Regular blood samples can be used to determine HbA1c levels, which give an idea of glucose control over a longer time period.
Research suggests a possible benefit of breastfeeding to reduce the risk of diabetes and related risks for both mother and child.
Medication
If monitoring reveals failing control of glucose levels with these measures, or if there is evidence of complications like excessive fetal growth, treatment with insulin might be necessary. This is most commonly fast-acting insulin given just before eating to blunt glucose rises after meals. Care needs to be taken to avoid low blood sugar levels due to excessive insulin. Insulin therapy can be normal or very tight; more injections can result in better control but requires more effort, and there is no consensus that it has large benefits. A 2016 Cochrane review (updated in 2023) concluded that quality evidence is not yet available to determine the best blood sugar range for improving health for pregnant women with GDM and their babies.
There is some evidence that certain medications by mouth might be safe in pregnancy, or at least, are less dangerous to the developing fetus than poorly controlled diabetes. When comparing which diabetes tablets (medication by mouth) work best and are safest, there is not enough quality research to support one medication over another. The medication metformin is better than glyburide. If blood glucose cannot be adequately controlled with a single agent, the combination of metformin and insulin may be better than insulin alone. Another review found good short term safety for both the mother and baby with metformin but unclear long term safety.
People may prefer metformin by mouth to insulin injections. Treatment of polycystic ovarian syndrome with metformin during pregnancy has been noted to decrease GDM levels.
Almost half of the women did not reach sufficient control with metformin alone and needed supplemental therapy with insulin; compared to those treated with insulin alone, they required less insulin, and they gained less weight. With no long-term studies into children of women treated with the drug, there remains a possibility of long-term complications from metformin therapy. Babies born to women treated with metformin have been found to develop less visceral fat, making them less prone to insulin resistance in later life.
Prognosis
Gestational diabetes generally resolves once the baby is born. Based on different studies, the chances of developing GDM in a second pregnancy, if a woman had GDM in her first pregnancy, are between 30 and 84%, depending on ethnic background. A second pregnancy within one year of the previous pregnancy has a large likelihood of GDM recurrence.
Women diagnosed with gestational diabetes have an increased risk of developing diabetes mellitus in the future. The risk is highest in women who needed insulin treatment, had antibodies associated with diabetes (such as antibodies against glutamate decarboxylase, islet cell antibodies, or insulinoma antigen-2), women with more than two previous pregnancies, and women who were obese (in order of importance). Women requiring insulin to manage gestational diabetes have a 50% risk of developing diabetes within the next five years. Depending on the population studied, the diagnostic criteria and the length of follow-up, the risk can vary enormously. The risk appears to be highest in the first 5 years, reaching a plateau thereafter. One of the longest studies followed a group of women from Boston, Massachusetts; half of them developed diabetes after 6 years, and more than 70% had diabetes after 28 years. In a retrospective study in Navajo women, the risk of diabetes after GDM was estimated to be 50 to 70% after 11 years. Another study found a risk of diabetes after GDM of more than 25% after 15 years. In populations with a low risk for type 2 diabetes, in lean subjects and in women with auto-antibodies, there is a higher rate of women developing type 1 diabetes (LADA).
Children of women with GDM have an increased risk for childhood and adult obesity and an increased risk of glucose intolerance and type 2 diabetes later in life. This risk relates to increased maternal glucose values. It is currently unclear how much genetic susceptibility and environmental factors contribute to this risk, and whether treatment of GDM can influence this outcome.
Relative benefits and harms of different oral anti-diabetic medications are not yet well understood as of 2017.
There are scarce statistical data on the risk of other conditions in women with GDM; in the Jerusalem Perinatal study, 410 out of 37,962 women were reported to have GDM, and there was a tendency towards more breast and pancreatic cancer, but more research is needed to confirm this finding.
Research is being conducted to develop a web-based clinical decision support system for GDM prediction using machine learning techniques. Results so far demonstrated great potential in clinical practicality for automatic GDM prognosis.
Complications
GDM poses a risk to mother and child. This risk is largely related to uncontrolled blood glucose levels and its consequences. The risk increases with higher blood glucose levels. Treatment resulting in better control of these levels can reduce some of the risks of GDM considerably.
Having GDM can add distress to a pregnancy and lead to mental health issues. Women with gestational diabetes experience increased anxiety, depression, and stress. Not only does it affect mental health during pregnancy, it also leads to an increased risk of postpartum depression; the risk is over four times greater than in a normal pregnancy. Physical activity has been found to decrease the risk of postpartum depression and is a form of therapy that can help reduce the stress of these mental health issues.
The two main risks GDM imposes on the baby are growth abnormalities and chemical imbalances after birth, which may require admission to a neonatal intensive care unit. Infants born to mothers with GDM are at risk of being both large for gestational age (macrosomic) in unmanaged GDM, and small for gestational age and Intrauterine growth retardation in managed GDM. Macrosomia in turn increases the risk of instrumental deliveries (e.g. forceps, ventouse and caesarean section) or problems during vaginal delivery (such as shoulder dystocia). Macrosomia may affect 12% of normal women compared to 20% of women with GDM. However, the evidence for each of these complications is not equally strong; in the Hyperglycemia and Adverse Pregnancy Outcome (HAPO) study for example, there was an increased risk for babies to be large but not small for gestational age in women with uncontrolled GDM. In a recent birth cohort study of 5150 deliveries, a research group active at the University of Helsinki and Helsinki University Hospital, Finland demonstrated that the mother's GDM is an independent factor that increases the risk of fetal hypoxia, during labour. The study was published in the Acta Diabetologica in June 2021. Another finding was that GDM increased the susceptibility of the fetus to intrapartum hypoxia, regardless of the size of the fetus. The risk of hypoxia and the resulting risk of poor condition in newborn infants was nearly 7-fold in the fetuses of mothers with GDM compared to the fetuses of non-diabetic mothers. Furthermore, according to the findings, the risk of needing to perform resuscitation on the newborn after birth was 10-fold.
"The risk of hypoxia and the resulting risk of poor condition in newborn infants was nearly seven-fold in the fetuses of mothers with gestational diabetes compared to the fetuses of non-diabetic mothers," says researcher Mikko Tarvonen. According to the findings, the risk of needing to perform resuscitation on the newborn was ten-fold. Research into complications for GDM is difficult because of the many confounding factors (such as obesity). Labelling a woman as having GDM may in itself increase the risk of having an unnecessary caesarean section.
Neonates born from women with consistently high blood sugar levels are also at an increased risk of low blood glucose (hypoglycemia), jaundice, high red blood cell mass (polycythemia) and low blood calcium (hypocalcemia) and magnesium (hypomagnesemia). Untreated GDM also interferes with maturation, causing dysmature babies prone to respiratory distress syndrome due to incomplete lung maturation and impaired surfactant synthesis.
Unlike pre-gestational diabetes, gestational diabetes has not been clearly shown to be an independent risk factor for birth defects. Birth defects usually originate sometime during the first trimester (before the 13th week) of pregnancy, whereas GDM gradually develops and is least pronounced during the first and early second trimester. Studies have shown that the offspring of women with GDM are at a higher risk for congenital malformations. A large case-control study found that gestational diabetes was linked with a limited group of birth defects, and that this association was generally limited to women with a higher body mass index (≥ 25 kg/m2). It is difficult to make sure that this is not partially due to the inclusion of women with pre-existent type 2 diabetes who were not diagnosed before pregnancy.
Because of conflicting studies, it is unclear at the moment whether women with GDM have a higher risk of preeclampsia. In the HAPO study, the risk of preeclampsia was between 13% and 37% higher, although not all possible confounding factors were corrected.
Epidemiology
The prevalence of GDM was 14.7%, 9.9%, and 14.4% in low-income countries (LIC), middle-income countries (MIC), and high-income countries (HIC), respectively, in 2021, according to the International Association of Diabetes and Pregnancy Study Groups' criteria.
As of 2021, according to the IDF Diabetes Atlas, the global prevalence of hyperglycemia in pregnancy (HIP) was 21.1 million people, accounting for 16.7% of births to women aged 20-49; 80.3% of these cases were due to GDM.
Urinal (https://en.wikipedia.org/wiki/Urinal)

A urinal is a sanitary plumbing fixture similar to a toilet, but for urination only. Urinals are often provided in men's public restrooms in Western countries (less so in Muslim countries). They are usually used in a standing position. Urinals can be equipped with manual flushing, automatic flushing, or without flushing, as is the case for waterless urinals. They can be arranged as single sanitary fixtures (with or without privacy walls), or in a trough design without privacy walls.
Urinals designed for females ("female urinals") also exist but are rare. It is possible for females to use stand-up urinals using a female urination device. The term "urinal" may also apply to a small building or other structure containing such fixtures. It can also refer to a small container in which urine can be collected for medical analysis, or for use where access to toilet facilities is not possible, such as in small aircraft, during extended stakeouts, or for the bedridden.
Description
A stand-up urinal can be used conveniently and appropriately by someone who has a penis or other adaptive means with which to urinate from a standing position. There is no age restriction, and urinals are commonly used by men and boys of all ages. Female urinals also exist but are not common.
In busy public toilets, urinals are installed for efficiency. Compared with urination in a general-purpose toilet, usage is faster and more sanitary because at the urinal there are no additional doors or locks to touch, and no seat to turn up. Consistent use of urinals also keeps the toilet stalls cleaner and more available for persons who need to defecate. A urinal takes less space, is simpler, and consumes less water per flush (or even no water at all) than a flush toilet. Large numbers of them are usually installed along a common supply pipe and drain. Urinals may also come in different heights, to accommodate tall and short users.
Public urinals usually have a plastic mesh guard, which may optionally contain a deodorizing block or "urinal cake". The mesh is intended to prevent solid objects (such as cigarette butts, feces, chewing gum, or paper) from being flushed and possibly causing a plumbing stoppage. In some restaurants, bars, and clubs, ice may be put in the urinals, serving some of the same purposes as the deodorizing block without dispensing odorous chemicals.
Arrangement
For purposes of space and economic practicality, urinals are not typically placed inside stalls. Unlike in female public toilets that do not have urinals, optimal resource efficiency in male restrooms therefore requires urinating in full visibility of other users. In recent years, it has become more common in some countries for dividers or partitions to be installed between urinals to eliminate any chance of incidental exposure during the process of urination.
Urinals in high-capacity public toilets are usually arranged in one or more rows directly opposite the door, so that users have their backs to people entering or standing outside. Often, one or two of the urinals, typically at one end of a long row, will be mounted lower than the others; they are meant for the disabled and other users who cannot reach the regular urinals. In facilities where people of various heights are present, such as schools, urinals that extend down to floor level may be used to allow anyone of any height to use any urinal.
Instead of individual fixtures, trough urinals may be installed. These designs can be used by a number of people simultaneously, but they do not allow for much privacy. They are often installed where there is a high peak demand, such as in schools, music festivals, theatrical events, sports stadiums, discos, dance clubs, and convention halls.
Urinals were once installed exclusively in commercial or institutional settings, but are also now available for private homes. They offer the advantages of substantial water savings in residences with many occupants, and reduction of "splash back", making cleaning easier.
Urinals with flushing
Most public urinals incorporate a water flushing system to rinse urine from the bowl of the device, to prevent foul odors. The flush can be triggered by one of several methods:
Manual handles
This type of flush might be regarded as standard in the United States. Each urinal is equipped with a button or short lever to activate the flush, with users expected to operate it as they leave. Such a directly controlled system is the most efficient, provided that patrons remember to use it. In practice this is far from certain, often because of reluctance to touch the handle, which is mounted too high to operate with a foot. Urinals with foot-activated flushing systems are sometimes found in high-traffic areas; these systems have a button set into the floor or a pedal on the wall at ankle height. The Americans with Disabilities Act limits how high the flush valve and the urinal rim may be mounted above the finished floor (AFF), and requires a rim that is tapered and elongated and protrudes from the wall by a minimum distance. This enables users in wheelchairs to straddle the lip of the urinal and urinate without having to "arc" the flow of urine upwards.
Timed flush
In Germany, the United Kingdom, France, the Republic of Ireland, Hong Kong and some parts of Sweden and Finland, manual flush handles are unusual. Instead, the traditional system is a timed flush that operates automatically at regular intervals. Groups of up to ten or more urinals will be connected to a single overhead cistern, which contains the timing mechanism. A constant drip-feed of water slowly fills the cistern until a tipping point is reached, when the valve opens (or a siphon begins to drain the cistern), and all the urinals in the group are flushed. Electronic controllers performing the same function are also used.
This system does not require any action from its users, but it is wasteful of water when toilets are used irregularly. However, in these countries users are so accustomed to the automatic system that attempts to install manual flushes to save water are generally unsuccessful. Users ignore them not through deliberate laziness or fear of infection, but because activating the flush is not habitual.
To help reduce water usage when public toilets are closed, some public toilets with timed flushing use an electric water valve connected to the room's light switch. When the building is in active use during the day and the lights are on, the timed flush operates normally. At night when the building is closed, the lights are turned off and the flushing action stops.
Door-regulated flush
This is an older method of water-saving automatic flushing, which only operates when the public toilet has been used. A push-button switch is mounted in the door frame, and triggers the flush valve for all urinals every time the door is opened. While it cannot detect the use of individual urinals, it provides reasonable flushing action without wasting excessive amounts of water when the urinals are not being used. This method requires a spring-operated automatic door closer, since the flush mechanism only operates when the door opens.
Alternatively, a flushing system connected to the door can count the number of users, and operate when the number of door opening events reaches a certain value. At night, the door never opens, so flushing never occurs.
Automatic flush
Electronic automatic flushes solve the problems of previous approaches, and are common in new installations. A passive infrared sensor identifies when the urinal has been used, by detecting when someone has stood in front of it and moved away, and then activates the flush. There usually is also a small override button, to allow optional manual flushing.
Automatic flush facilities can be retrofitted to existing systems. The handle-operated valves of a manual system can be replaced with a suitably designed self-contained electronic valve, often battery-powered to avoid the need to add cables. Older timed-flush installations may add a device that regulates the water flow to the cistern according to the overall activity detected in the room. This does not provide true per-fixture automatic flushing, but is simple and cheap to add because only one device is required for the whole system.
To prevent false-triggering of the automatic flush, most infrared detectors require that a presence be detected for at least five seconds, such as when a person is standing in front of it. This prevents a whole line of automatic flush units from triggering in succession if someone just walks past them. The automatic flush mechanism also typically waits for the presence to go out of sensor range before flushing. This reduces water usage, compared to a sensor that would trigger a continuous flushing action the whole time that a presence is detected.
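The trigger logic described above can be summarised as a small state machine. The following is a hypothetical sketch only: the sensor interface (read_presence, flush), the polling rate and the 5-second threshold are illustrative assumptions, not any manufacturer's actual firmware.

import time

MIN_PRESENCE_SECONDS = 5.0  # ignore people merely walking past the fixture

def run_flush_controller(read_presence, flush, poll_interval=0.1):
    presence_started = None
    qualified = False
    while True:
        present = read_presence()          # True while someone stands in front
        now = time.monotonic()
        if present:
            if presence_started is None:
                presence_started = now
            if now - presence_started >= MIN_PRESENCE_SECONDS:
                qualified = True           # long enough to count as a real use
        else:
            if qualified:
                flush()                    # flush only after the user steps away
            presence_started = None
            qualified = False
        time.sleep(poll_interval)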
Waterless urinals
Since about the 1990s urinals have been available on the market that use no water at all. These are called waterless urinals or flushless urinals.
The first waterless urinal was developed at the end of the 19th century by the German-Austrian Wilhelm Beetz using an oil-based syphon with a liquid that he called Urinol.
Waterless urinals can save a substantial volume of water per urinal per year; the exact figure depends on the flush volume of the water-flushed urinal used for comparison and on the number of uses per day. Published estimates typically assume that the urinal is used between 40 and 120 times per business day.
Waterless urinals allow the collection of undiluted pure urine which can be used as a fertilizer.
Odor control
Models of waterless urinals introduced by the Waterless Company in 1991 and others in 2001 by Falcon Waterfree Technologies and Sloan Valve Company, as well as Duravit, use a trap insert filled with a sealant liquid instead of water. The lighter-than-water sealant floats on top of the urine collected in the U-bend, preventing odors from being released into the air. The cartridge and sealant must be periodically replaced.
Waterless urinals may also use an outlet system that traps the odor, preventing the smell often present in toilet blocks. Another method to eliminate odor was introduced by Caroma, which installed a deodorizing block in their waterless urinal that was activated during use.
Odor control in waterless urinals is also achieved with simple one-way valves which are manufactured as a flat rubber tube (the tube opens when urine flows through) or with two silicone "curtain" pieces. The former is used in the waterless urinals by the company Keramag in Germany (model Centaurus) and the latter is marketed by the company Addicom in South Africa who called it the EcoSmellStop device.
Applications
Waterless urinals can be installed in high-traffic facilities, and in situations where providing a water supply may be difficult or where water conservation is desired.
Waterless urinals have become rather common in Germany since about 2009 and can be found at restaurants, cinemas, highway rest stops, train stations and so forth. It was estimated in 2009 that there were about 6 million urinals in Germany, about 100,000 of which were waterless at that time.
Due to high-level water restrictions during about 2005–2008, the city council of Brisbane, Australia, mandated conversion to waterless urinals. Flush urinals are nowadays rarely seen in Brisbane.
Installation and maintenance
The drain pipes from waterless urinals need to be installed correctly in terms of diameter, slope and pipe materials in order to prevent buildup of struvite ("urine stone") and calcium phosphate precipitates in the pipes, which would cause blockages and could require expensive repairs. Also, the undiluted urine is corrosive to metals (except for stainless steel), which is why plastic pipes are generally preferred for urine drain pipes.
Most waterless urinals do not prevent odorous staining on the surface of the urinals, and periodic cleaning of the fixture and its surrounds is still required. When maintained according to manufacturers' recommendations, well-designed waterless urinals do not emit any more odors than flushed urinals do. However, some odor-trapping devices work better than others in the longer term. Regular, thorough maintenance of the respective odor control device is needed for all types of waterless urinals, as per the manufacturer's recommendation.
Situation in the United States
US federal law has mandated no more than one gallon per flush since 1994, and the EPA estimates that the average urinal is flushed 20 times per day, which implies a substantial average water use per urinal per year (a rough calculation follows below). Mechanical traps are not allowed by US building codes but are allowed in many other countries.
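As an illustrative calculation only (the specific annual figure did not survive extraction), combining the two quoted numbers with an assumed 365 days of use per year gives:
\[ 1\ \text{gallon/flush} \times 20\ \text{flushes/day} \times 365\ \text{days/year} = 7{,}300\ \text{gallons/year} \]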
Plumbers' unions initially opposed waterless urinals, citing concerns about health and safety, which have been debunked by scientists who have studied the devices. Facing opposition to their attempts to have the devices allowed in plumbing codes, manufacturers devised a compromise. The Uniform Plumbing Code was modified to allow waterless urinals to be installed, provided that unneeded water lines were nevertheless run to the back of the urinals. This allows conventional water-flushing urinals to be retrofitted later, if waterless models were judged to be unsatisfactory over time.
In March 2006, the Associated Press reported that the plumbers' union in Philadelphia had become upset because developer Liberty Property Trust had decided to use waterless urinals in the Comcast Center. Many in the union believed that this would lead to less work for them. The developer cited the large annual water savings for the city as its deciding factor.
In February 2010, the headquarters of the California EPA removed waterless urinals that were installed in 2003 due to "hundreds of complaints", including odors and splashed urine on the floors. Officials blamed the failure of the project on incompatibility with the building's existing plumbing systems.
Street urinals
In some localities, urinals may be located on public sidewalks or in public areas such as parks. These urinals are often equipped with partitions for the sake of privacy, and some are fully enclosed structures. They may or may not be equipped with water flushing mechanisms.
The first 'pissoirs' were installed in Paris in the 1830s, and the idea gradually gained popularity in other European cities. From a peak in the 1930s when there were over 1,000 in Paris alone, historic urinals have gradually disappeared, in favour of facilities for both sexes. In the 21st century, public urination by men in some locations was again seen as a nuisance, and modern versions of street urinals have been installed.
The Netherlands has a number of strategically placed street urinals in various cities, intended to reduce public urination by drunken men. Amsterdam has the largest collection of historic urinals, with about 30 'Plaskrul' ('pisscurl'), a flushless urinal with a curved privacy screen, in the central city. In recent years, urinals that can be retracted into the ground during the day or between special events have been installed, in order to save space when they are not expected to be needed. When closed they look like a large manhole in a sidewalk. Similar retractable models, such as the model by the Dutch company Urilift, are also seen in the UK and other countries. At night when bars are open they rise out of the sidewalk; some time after the bars close, the urinals return to their manhole configuration so that they are unseen by people during the day.
In the Philippines, Marikina was the first city to install street urinals in the late 1990s. When Marikina Mayor Bayani Fernando was appointed chair of the Metropolitan Manila Development Authority, he installed street urinals in the rest of Metro Manila as well.
Special urinals
Urinals designed for females
In the Western world, females are generally taught to sit or squat while urinating. Many therefore do not know how – or even that it is possible – for a female to aim her urine stream as would be required to use a stand-up urinal. Thus, several different types of urinals have been designed for females that do not require the user to aim their urine stream. A typical user could thus theoretically approach such a urinal squatting backwards over it without necessarily trying to aim their stream.
Arts and interactive urinals
Kisses! is a controversial urinal designed by the female Dutch designer Meike van Schijndel. It is shaped like an open pair of red lips. In early March 2004, the National Organization for Women (NOW) took offense to the new urinals that Virgin Atlantic decided to install in the Virgin Atlantic clubhouse at John F Kennedy International Airport in New York City. After receiving many angry phone calls from female customers, Virgin Atlantic Vice President John Riordan called NOW to apologize. Protestors surmised a connection to oral sex or urolagnia, and based their complaints on the urinals being sexist. A McDonald's restaurant in the Netherlands removed them after a customer complained to the head office in the United States.
Interactive urinals have been developed in a number of countries, allowing users to entertain themselves during urination. One example is the Toylet, a video game system produced by the Japanese company Sega that allows users to play video games using their urine to control the on-screen action.
Makeshift urinals
During military operations, such as the Korean War, Vietnam War, or Operation Desert Storm, "piss tubes" were used as makeshift urinals. To make one, soldiers would affix an inverted disposable water bottle on one end of a rigid tube, burying the other end. Removing the base of the bottle made a funnel which would be left at the proper height. Deposited urine simply soaked into the ground; when the area became saturated, the device was relocated.
In vehicles
At one point, the aircraft manufacturer Airbus offered its customers the option of installing urinals in its A380 aircraft.
History
In the spring of 1830, the city government of Paris decided to install the first public urinals on the major boulevards. They were put in place by the summer, but in July of the same year, many were destroyed through their use as materials for street barricades during the French Revolution of 1830.
The urinals were re-introduced in Paris after 1834, when over 400 were installed by Claude-Philibert Barthelot, comte de Rambuteau, the Préfet of the Department of the Seine. Having a simple cylindrical shape, built of masonry, open on the street side, and ornately decorated on the other side as well as on the cap, they were popularly known as colonnes Rambuteau ('Rambuteau columns'). In response, Rambuteau suggested the name vespasienne, in reference to the 1st-century Roman emperor Titus Flavius Vespasianus, who placed a tax on urine collected from public toilets for use in tanning. This remains the usual term by which street urinals are known in the French-speaking world, although other terms are also in common use.
In Paris, the next version was a masonry column that allowed for the pasting of posters on the side facing the footpath, creating a tradition that continues to this day (as a Morris column, a column with an elaborate roof and without the urinal).
Cast iron urinals were developed in the United Kingdom, with the Scottish firm of Walter Macfarlane and Company casting urinals at their Saracen Foundry and erecting the first at Paisley Road, Glasgow, in October 1850. By the end of 1852, nearly 50 cast iron urinals had been installed in Glasgow, including designs with more than one stall. Unlike Rambuteau's columns, which were entirely open at the front, Macfarlane's one-man urinals were designed with spiral cast iron screens that allowed the user to be hidden from sight, and his multi-stall urinals were completely hidden within ornate, modular cast iron panels. Three manufacturers in Glasgow, Walter Macfarlane & Co., George Smith (Sun Foundry) and James Allan Snr & Son (Elmbank Foundry), supplied the majority of cast iron urinals across Britain and exported them around the world, including Australia and Argentina.
Back in Paris, cast iron urinals were introduced as part of Baron Haussmann's remodelling of the city. A large variety of designs were produced in subsequent decades, housing two to eight stalls and typically screening only the central portion of the user from public view, with the head and feet still visible. Screens were also added to Rambuteau columns. At the peak of their spread in the 1930s there were 1,230 in Paris, but by 1966 their number had decreased to 329. From 1980 they were systematically replaced with new technology: a unisex, enclosed, automatically self-cleaning unit called the Sanisette.
In Berlin, the first street urinals, built of wood, were erected in 1863. In order to provide a design as distinguished as in other cities, several architectural design competitions were organised in 1847, 1865 and 1877. The last design, proposed by a city councillor, was the one adopted in 1878, a cast-iron octagonal structure with seven stalls and a peaked roof, known locally as a Café Achteck ('Octagon Cafe'). In common with British designs, they provided complete enclosure, and were provided with interior lighting. Their number increased to 142 by 1920, but there are now only about a dozen remaining in use.
A similar design was adopted in Vienna, though simpler, smaller and hexagonal. They were equipped with a novel "oil system", patented by Wilhelm Beetz in 1882, where a type of oil was used to neutralise odours, dispensing with the necessity for plumbing. About 15 are still in use, and one has been restored and set up as a display in the Vienna Technical Museum.
In central Amsterdam, there are about 35 pee curls, which consist of a raised metal screen that curls in a spiral enclosing a single urinal stall, including some two-person examples with the same details but a simpler shape. Though the design first emerged in the 1870s, an updated design by Joan van der Mey dates from 1916. All the remaining examples were restored in 2008.
Street urinals of various sizes and designs, mostly in patterned cast iron, can still be found dotted across the UK, with a few in London, but especially in Birmingham and Bristol. A solitary example of Walter Macfarlane's one-man spiral urinal remains in Thorn Park, Plymouth. A number have been restored and relocated to the grounds of various open-air museums and heritage railway lines.
Rectangular , with elaborate patterned cast-iron panels, similar in design to some of the UK ones, were installed in the city of Sydney, Australia, in 1880 and Melbourne, Australia, in the period 1903–1918. Of at least 40 that were made, nine remain in place and in use on the streets in and around central Melbourne, and have been classified by the National Trust since 1998.
In recent years, temporary units with multiple unscreened urinals around a central column have been introduced in the UK. A temporary urinal for women called the 'Peeasy' is used in Switzerland.
Until the 1990s, street urinals were a common sight in Paris (France), and in the 1930s more than 1200 were in service. They were famous among foreign tourists. Parisians referred to them as vespasiennes, the name being derived from that of the Roman Emperor Vespasian, who, according to an anecdote, imposed a tax on urine. Beginning in the 1990s, the vespasiennes (renowned for their smell and lack of hygiene) were gradually replaced by Sanisettes. Today only one vespasienne remains in the city (on Boulevard Arago), and it is still regularly used. They still exist in other French cities and in other countries.
Society and culture
Examples of urinals in popular culture include:
Marcel Duchamp's Fountain (1917), which some have called the most influential modern artwork, is a urinal which Duchamp signed "R. Mutt".
Police in Nassau County, New York adopted talking urinals in an anti-drunk driving initiative. Using Wizmark, a talking urinal display screen, police can provide bars with free pre-programmed urinal messages urging patrons not to drink and drive.
Ernest Hemingway converted a urinal from Sloppy Joe's bar into a water fountain for his cats. The fountain remains a prominent feature at his former home in Key West, Florida, a popular tourist destination in the town.
Pissoir, retitled Urinal in some countries, was the first feature film directed by John Greyson. It was released in 1988 and takes place in a toilet.
Gabriel Chevallier's 1934 satirical novel Clochemerle deals with the ramifications of plans to install a new urinal in a French village.
Indiana Urinalysis (1988) is a documentary on the subject of urinals. Topics include "types of urinals, urinal etiquette, usage of urinal cakes, why urinals are always white, preference of urinal vs. toilet, and urinals for women, as well as a collection of urinal anecdotes." It received a Citation Award from the Indiana Film Society in 1990.
"Mystery of the Urinal Deuce" is an episode of the American adult animated sitcom South Park, in which the plot revolves around the elementary school's efforts to establish the identity of a person who defecated in a urinal.
Gallery of unusual or historical urinals
| Technology | Hydraulics and pneumatics | null |
698648 | https://en.wikipedia.org/wiki/Polarization%20density | Polarization density | In classical electromagnetism, polarization density (or electric polarization, or simply polarization) is the vector field that expresses the volumetric density of permanent or induced electric dipole moments in a dielectric material. When a dielectric is placed in an external electric field, its molecules gain electric dipole moment and the dielectric is said to be polarized.
Electric polarization of a given dielectric material sample is defined as the quotient of its electric dipole moment (a vector quantity, expressed in coulomb-meters (C·m) in SI units) and its volume (cubic meters).
Polarization density is denoted mathematically by P; in SI units, it is expressed in coulombs per square meter (C/m2).
Polarization density also describes how a material responds to an applied electric field as well as the way the material changes the electric field, and can be used to calculate the forces that result from those interactions. It can be compared to magnetization, which is the measure of the corresponding response of a material to a magnetic field in magnetism.
Similar to ferromagnets, which have a non-zero permanent magnetization even if no external magnetic field is applied, ferroelectric materials have a non-zero polarization in the absence of external electric field.
Definition
An external electric field applied to a dielectric material causes a displacement of bound charged elements.
A bound charge is a charge that is associated with an atom or molecule within a material. It is called "bound" because it is not free to move within the material like free charges. Positively charged elements are displaced in the direction of the field, and negatively charged elements are displaced opposite to the direction of the field. The molecules may remain neutral in charge, yet an electric dipole moment forms.
For a certain volume element in the material, which carries a dipole moment , we define the polarization density :
In general, the dipole moment changes from point to point within the dielectric. Hence, the polarization density of a dielectric inside an infinitesimal volume dV with an infinitesimal dipole moment is:
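Sketches of the two missing definitions, in the usual textbook notation (Δp is the dipole moment carried by a small volume ΔV, and dp that of an infinitesimal volume dV):
\[ \mathbf{P} = \frac{\Delta \mathbf{p}}{\Delta V}, \qquad \mathbf{P} = \frac{\mathrm{d}\mathbf{p}}{\mathrm{d}V} \]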
The net charge appearing as a result of polarization is called bound charge and denoted .
This definition of polarization density as a "dipole moment per unit volume" is widely adopted, though in some cases it can lead to ambiguities and paradoxes.
Other expressions
Let a volume be isolated inside the dielectric. Due to polarization the positive bound charge will be displaced a distance relative to the negative bound charge , giving rise to a dipole moment . Substitution of this expression in yields
Since the charge bounded in the volume is equal to the equation for becomes:
where is the density of the bound charge in the volume under consideration. It is clear from the definition above that the dipoles are overall neutral and thus is balanced by an equal density of opposite charges within the volume. Charges that are not balanced are part of the free charge discussed below.
Gauss's law for the field of P
For a given volume enclosed by a surface , the bound charge inside it is equal to the flux of through taken with the negative sign, or
Differential form
By the divergence theorem, Gauss's law for the field P can be stated in differential form as:
where is the divergence of the field P through a given surface containing the bound charge density .
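Sketches of the integral and differential statements in standard notation, with Q_b the bound charge enclosed by the surface S and ρ_b the bound charge density:
\[ Q_b = -\oint_S \mathbf{P}\cdot \mathrm{d}\mathbf{A}, \qquad \nabla\cdot\mathbf{P} = -\rho_b \]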
Relationship between the fields of P and E
Homogeneous, isotropic dielectrics
In a homogeneous, linear, non-dispersive and isotropic dielectric medium, the polarization is aligned with and proportional to the electric field E:
where is the electric constant, and is the electric susceptibility of the medium. Note that in this case simplifies to a scalar, although more generally it is a tensor. This is a particular case due to the isotropy of the dielectric.
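A sketch of the missing relation in standard SI notation, with ε₀ the electric constant and χ the (scalar) electric susceptibility:
\[ \mathbf{P} = \varepsilon_0 \chi \mathbf{E} \]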
Taking into account this relation between P and E, equation () becomes:
The expression in the integral is Gauss's law for the field which yields the total charge, both free and bound , in the volume enclosed by . Therefore,
which can be written in terms of free charge and bound charge densities (by considering the relationship between the charges, their volume charge densities and the given volume):
Since within a homogeneous dielectric there can be no free charges , by the last equation it follows that there is no bulk bound charge in the material . And since free charges can get as close to the dielectric as to its topmost surface, it follows that polarization only gives rise to surface bound charge density (denoted to avoid ambiguity with the volume bound charge density ).
may be related to by the following equation:
where is the normal vector to the surface pointing outwards. (see charge density for the rigorous proof)
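A sketch of the missing relation, with n̂ the outward unit normal:
\[ \sigma_b = \mathbf{P}\cdot\hat{\mathbf{n}} \]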
Anisotropic dielectrics
The class of dielectrics where the polarization density and the electric field are not in the same direction are known as anisotropic materials.
In such materials, the -th component of the polarization is related to the -th component of the electric field according to:
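A sketch of the missing relation in standard SI tensor notation, with χ_ij the susceptibility tensor:
\[ P_i = \varepsilon_0 \sum_j \chi_{ij} E_j \]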
This relation shows, for example, that a material can polarize in the x direction by applying a field in the z direction, and so on. The case of an anisotropic dielectric medium is described by the field of crystal optics.
As in most electromagnetism, this relation deals with macroscopic averages of the fields and dipole density, so that one has a continuum approximation of the dielectric materials that neglects atomic-scale behaviors. The polarizability of individual particles in the medium can be related to the average susceptibility and polarization density by the Clausius–Mossotti relation.
In general, the susceptibility is a function of the frequency of the applied field. When the field is an arbitrary function of time, the polarization is a convolution of the field with the time-domain susceptibility (the Fourier transform of the frequency-dependent susceptibility). This reflects the fact that the dipoles in the material cannot respond instantaneously to the applied field, and causality considerations lead to the Kramers–Kronig relations.
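A sketch of the corresponding time-domain relation for a linear medium, where χ(t − t′) denotes the time-domain response whose Fourier transform is the frequency-dependent susceptibility:
\[ \mathbf{P}(t) = \varepsilon_0 \int_{-\infty}^{t} \chi(t - t')\, \mathbf{E}(t')\, \mathrm{d}t' \]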
If the polarization P is not linearly proportional to the electric field , the medium is termed nonlinear and is described by the field of nonlinear optics. To a good approximation (for sufficiently weak fields, assuming no permanent dipole moments are present), P is usually given by a Taylor series in whose coefficients are the nonlinear susceptibilities:
where is the linear susceptibility, is the second-order susceptibility (describing phenomena such as the Pockels effect, optical rectification and second-harmonic generation), and is the third-order susceptibility (describing third-order effects such as the Kerr effect and electric field-induced optical rectification).
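A sketch of the missing expansion, written with explicit tensor indices in the usual nonlinear-optics convention:
\[ \frac{P_i}{\varepsilon_0} = \sum_j \chi^{(1)}_{ij} E_j + \sum_{jk} \chi^{(2)}_{ijk} E_j E_k + \sum_{jk\ell} \chi^{(3)}_{ijk\ell} E_j E_k E_\ell + \cdots \]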
In ferroelectric materials, there is no one-to-one correspondence between P and E at all because of hysteresis.
Polarization density in Maxwell's equations
The behavior of electric fields (, ), magnetic fields (, ), charge density () and current density () are described by Maxwell's equations in matter.
Relations between E, D and P
In terms of volume charge densities, the free charge density is given by
where is the total charge density. By considering the relationship of each of the terms of the above equation to the divergence of their corresponding fields (of the electric displacement field , and in that order), this can be written as:
This is known as the constitutive equation for electric fields. Here is the electric permittivity of empty space. In this equation, P is the (negative of the) field induced in the material when the "fixed" charges, the dipoles, shift in response to the total underlying field E, whereas D is the field due to the remaining charges, known as "free" charges.
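A sketch of the missing constitutive relation and the corresponding Gauss law for D, in standard SI form:
\[ \mathbf{D} = \varepsilon_0 \mathbf{E} + \mathbf{P}, \qquad \nabla\cdot\mathbf{D} = \rho_f \]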
In general, varies as a function of depending on the medium, as described later in the article. In many problems, it is more convenient to work with and the free charges than with and the total charge.
Therefore, by way of Green's theorem, a polarized medium can be split into four charge components, sketched below.
The bound volumetric charge density:
The bound surface charge density:
The free volumetric charge density:
The free surface charge density:
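A sketch of the four missing expressions in the standard notation used above, with n̂ the outward unit normal at the material's surface:
\[ \rho_b = -\nabla\cdot\mathbf{P}, \qquad \sigma_b = \mathbf{P}\cdot\hat{\mathbf{n}}, \qquad \rho_f = \nabla\cdot\mathbf{D}, \qquad \sigma_f = \mathbf{D}\cdot\hat{\mathbf{n}} \]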
Time-varying polarization density
When the polarization density changes with time, the time-dependent bound-charge density creates a polarization current density of
so that the total current density that enters Maxwell's equations is given by
where Jf is the free-charge current density, and the second term is the magnetization current density (also called the bound current density), a contribution from atomic-scale magnetic dipoles (when they are present).
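Sketches of the two missing current expressions, in standard notation (M is the magnetization):
\[ \mathbf{J}_p = \frac{\partial \mathbf{P}}{\partial t}, \qquad \mathbf{J} = \mathbf{J}_f + \nabla\times\mathbf{M} + \frac{\partial \mathbf{P}}{\partial t} \]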
Polarization ambiguity
Crystalline materials
In a simple approach the polarization inside a solid is not, in general, uniquely defined. Because a bulk solid is periodic, one must choose a unit cell in which to compute the polarization (see figure). In other words, two people, Alice and Bob, looking at the same solid, may calculate different values of P, and neither of them will be wrong. For example, if Alice chooses a unit cell with positive ions at the top and Bob chooses the unit cell with negative ions at the top, their computed P vectors will have opposite directions. Alice and Bob will agree on the microscopic electric field E in the solid, but disagree on the value of the displacement field .
Even though the value of P is not uniquely defined in a bulk solid, variations in P are uniquely defined. If the crystal is gradually changed from one structure to another, there will be a current inside each unit cell, due to the motion of nuclei and electrons. This current results in a macroscopic transfer of charge from one side of the crystal to the other, and therefore it can be measured with an ammeter (like any other current) when wires are attached to the opposite sides of the crystal. The time-integral of the current is proportional to the change in P. The current can be calculated in computer simulations (such as density functional theory); the formula for the integrated current turns out to be a type of Berry's phase.
The non-uniqueness of P is not problematic, because every measurable consequence of P is in fact a consequence of a continuous change in P. For example, when a material is put in an electric field E, which ramps up from zero to a finite value, the material's electronic and ionic positions slightly shift. This changes P, and the result is electric susceptibility (and hence permittivity). As another example, when some crystals are heated, their electronic and ionic positions slightly shift, changing P. The result is pyroelectricity. In all cases, the properties of interest are associated with a change in P.
In what is now called the modern theory of polarization, the polarization is defined as a difference. Any structure which has inversion symmetry has zero polarization; there is an identical distribution of positive and negative charges about an inversion center. If the material deforms, there can be a polarization due to the change in the charge distribution.
Amorphous materials
Another problem in the definition of P is related to the arbitrary choice of the "unit volume", or more precisely to the system's scale. For example, at the microscopic scale a plasma can be regarded as a gas of free charges, so P should be zero. By contrast, at a macroscopic scale the same plasma can be described as a continuous medium exhibiting a permittivity and thus a net polarization.
| Physical sciences | Electrostatics | Physics |
698759 | https://en.wikipedia.org/wiki/Propagator | Propagator | In quantum mechanics and quantum field theory, the propagator is a function that specifies the probability amplitude for a particle to travel from one place to another in a given period of time, or to travel with a certain energy and momentum. In Feynman diagrams, which serve to calculate the rate of collisions in quantum field theory, virtual particles contribute their propagator to the rate of the scattering event described by the respective diagram. Propagators may also be viewed as the inverse of the wave operator appropriate to the particle, and are, therefore, often called (causal) Green's functions (called "causal" to distinguish them from the elliptic Laplacian Green's function).
Non-relativistic propagators
In non-relativistic quantum mechanics, the propagator gives the probability amplitude for a particle to travel from one spatial point (x') at one time (t') to another spatial point (x) at a later time (t).
The Green's function G for the Schrödinger equation is a function
satisfying
where denotes the Hamiltonian, denotes the Dirac delta-function and is the Heaviside step function. The kernel of the above Schrödinger differential operator in the big parentheses is denoted by and called the propagator.
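The defining equation is missing; in one common convention (with H acting on the unprimed coordinate), it and the relation between G and the propagator K read:
\[ \left( i\hbar \frac{\partial}{\partial t} - H_x \right) G(x, t; x', t') = \delta(x - x')\,\delta(t - t'), \qquad G(x, t; x', t') = \frac{1}{i\hbar}\,\Theta(t - t')\, K(x, t; x', t') \]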
This propagator may also be written as the transition amplitude
where is the unitary time-evolution operator for the system taking states at time to states at time . Note the initial condition enforced by
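In standard bra–ket notation, a sketch of the missing expressions (with Û(t, t′) the time-evolution operator) is:
\[ K(x, t; x', t') = \langle x \,|\, \hat{U}(t, t') \,|\, x' \rangle, \qquad \lim_{t \to t'} K(x, t; x', t') = \delta(x - x') \]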
The propagator may also be found by using a path integral:
where denotes the Lagrangian and the boundary conditions are given by . The paths that are summed over move only forwards in time and are integrated with the differential following the path in time.
The propagator lets one find the wave function of a system, given an initial wave function and a time interval. The new wave function is given by
If only depends on the difference , this is a convolution of the initial wave function and the propagator.
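A sketch of the missing relation, in one spatial dimension:
\[ \psi(x, t) = \int_{-\infty}^{\infty} K(x, t; x', t')\, \psi(x', t')\, \mathrm{d}x' \]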
Examples
For a time-translationally invariant system, the propagator only depends on the time difference , so it may be rewritten as
The propagator of a one-dimensional free particle, obtainable from, e.g., the path integral, is then
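The standard closed form (the displayed result is missing), valid for t > t′ in the convention used above:
\[ K(x, t; x', t') = \sqrt{\frac{m}{2\pi i \hbar (t - t')}}\; \exp\!\left( \frac{i m (x - x')^2}{2 \hbar (t - t')} \right) \]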
Similarly, the propagator of a one-dimensional quantum harmonic oscillator is the Mehler kernel,
The latter may be obtained from the previous free-particle result upon making use of van Kortryk's SU(1,1) Lie-group identity,
valid for operators and satisfying the Heisenberg relation .
For the -dimensional case, the propagator can be simply obtained by the product
Relativistic propagators
In relativistic quantum mechanics and quantum field theory the propagators are Lorentz-invariant. They give the amplitude for a particle to travel between two spacetime events.
Scalar propagator
In quantum field theory, the theory of a free (or non-interacting) scalar field is a useful and simple example which serves to illustrate the concepts needed for more complicated theories. It describes spin-zero particles. There are a number of possible propagators for free scalar field theory. We now describe the most common ones.
Position space
The position space propagators are Green's functions for the Klein–Gordon equation. This means that they are functions satisfying
where
are two points in Minkowski spacetime,
is the d'Alembertian operator acting on the coordinates,
is the Dirac delta function.
(As typical in relativistic quantum field theory calculations, we use units where the speed of light and the reduced Planck constant are set to unity.)
We shall restrict attention to 4-dimensional Minkowski spacetime. We can perform a Fourier transform of the equation for the propagator, obtaining
This equation can be inverted in the sense of distributions, noting that the equation has the solution (see Sokhotski–Plemelj theorem)
with implying the limit to zero. Below, we discuss the right choice of the sign arising from causality requirements.
The solution is
where
is the 4-vector inner product.
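A sketch of the missing momentum-space representation, in the metric signature (+, −, −, −); the ±iε deformation (shown here as the Feynman choice +iε) and the overall sign depend on the conventions adopted for the defining equation:
\[ G(x, y) = \int \frac{\mathrm{d}^4 p}{(2\pi)^4}\; \frac{e^{-i p \cdot (x - y)}}{p^2 - m^2 + i\varepsilon} \]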
The different choices for how to deform the integration contour in the above expression lead to various forms for the propagator. The choice of contour is usually phrased in terms of the integral.
The integrand then has two poles at
so different choices of how to avoid these lead to different propagators.
Causal propagators
Retarded propagator
A contour going clockwise over both poles gives the causal retarded propagator. This is zero if is spacelike or is to the future of , so it is zero if .
This choice of contour is equivalent to calculating the limit,
Here
is the Heaviside step function,
is the proper time from to , and is a Bessel function of the first kind. The propagator is non-zero only if , i.e., causally precedes , which, for Minkowski spacetime, means
and
This expression can be related to the vacuum expectation value of the commutator of the free scalar field operator,
where
Advanced propagator
A contour going anti-clockwise under both poles gives the causal advanced propagator. This is zero if is spacelike or if is to the past of , so it is zero if .
This choice of contour is equivalent to calculating the limit
This expression can also be expressed in terms of the vacuum expectation value of the commutator of the free scalar field.
In this case,
Feynman propagator
A contour going under the left pole and over the right pole gives the Feynman propagator, introduced by Richard Feynman in 1948.
This choice of contour is equivalent to calculating the limit
Here, is a Hankel function and is a modified Bessel function.
This expression can be derived directly from the field theory as the vacuum expectation value of the time-ordered product of the free scalar field, that is, the product always taken such that the time ordering of the spacetime points is the same,
This expression is Lorentz invariant, as long as the field operators commute with one another when the points and are separated by a spacelike interval.
The usual derivation is to insert a complete set of single-particle momentum states between the fields with Lorentz covariant normalization, and then to show that the functions providing the causal time ordering may be obtained by a contour integral along the energy axis, if the integrand is as above (hence the infinitesimal imaginary part), to move the pole off the real line.
The propagator may also be derived using the path integral formulation of quantum theory.
Dirac propagator
Introduced by Paul Dirac in 1938.
Momentum space propagator
The Fourier transform of the position space propagators can be thought of as propagators in momentum space. These take a much simpler form than the position space propagators.
They are often written with an explicit term although this is understood to be a reminder about which integration contour is appropriate (see above). This term is included to incorporate boundary conditions and causality (see below).
For a 4-momentum the causal and Feynman propagators in momentum space are:
For purposes of Feynman diagram calculations, it is usually convenient to write these with an additional overall factor of (conventions vary).
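Sketches of the standard momentum-space forms, in signature (+, −, −, −) and up to the overall factor of i mentioned above:
\[ \tilde{G}_{\mathrm{ret}}(p) = \frac{1}{(p^0 + i\varepsilon)^2 - \vec{p}^{\,2} - m^2}, \qquad \tilde{G}_{\mathrm{adv}}(p) = \frac{1}{(p^0 - i\varepsilon)^2 - \vec{p}^{\,2} - m^2}, \qquad \tilde{G}_F(p) = \frac{1}{p^2 - m^2 + i\varepsilon} \]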
Faster than light?
The Feynman propagator has some properties that seem baffling at first. In particular, unlike the commutator, the propagator is nonzero outside of the light cone, though it falls off rapidly for spacelike intervals. Interpreted as an amplitude for particle motion, this translates to the virtual particle travelling faster than light. It is not immediately obvious how this can be reconciled with causality: can we use faster-than-light virtual particles to send faster-than-light messages?
The answer is no: while in classical mechanics the intervals along which particles and causal effects can travel are the same, this is no longer true in quantum field theory, where it is commutators that determine which operators can affect one another.
So what does the spacelike part of the propagator represent? In QFT the vacuum is an active participant, and particle numbers and field values are related by an uncertainty principle; field values are uncertain even for particle number zero. There is a nonzero probability amplitude to find a significant fluctuation in the vacuum value of the field if one measures it locally (or, to be more precise, if one measures an operator obtained by averaging the field over a small region). Furthermore, the dynamics of the fields tend to favor spatially correlated fluctuations to some extent. The nonzero time-ordered product for spacelike-separated fields then just measures the amplitude for a nonlocal correlation in these vacuum fluctuations, analogous to an EPR correlation. Indeed, the propagator is often called a two-point correlation function for the free field.
Since, by the postulates of quantum field theory, all observable operators commute with each other at spacelike separation, messages can no more be sent through these correlations than they can through any other EPR correlations; the correlations are in random variables.
Regarding virtual particles, the propagator at spacelike separation can be thought of as a means of calculating the amplitude for creating a virtual particle-antiparticle pair that eventually disappears into the vacuum, or for detecting a virtual pair emerging from the vacuum. In Feynman's language, such creation and annihilation processes are equivalent to a virtual particle wandering backward and forward through time, which can take it outside of the light cone. However, no signaling back in time is allowed.
Explanation using limits
This can be made clearer by writing the propagator in the following form for a massless particle:
This is the usual definition but normalised by a factor of . Then the rule is that one only takes the limit at the end of a calculation.
One sees that
and
Hence this means that a single massless particle will always stay on the light cone. It is also shown that the total probability for a photon at any time must be normalised by the reciprocal of the following factor:
We see that the parts outside the light cone usually are zero in the limit and only are important in Feynman diagrams.
Propagators in Feynman diagrams
The most common use of the propagator is in calculating probability amplitudes for particle interactions using Feynman diagrams. These calculations are usually carried out in momentum space. In general, the amplitude gets a factor of the propagator for every internal line, that is, every line that does not represent an incoming or outgoing particle in the initial or final state. It will also get a factor proportional to, and similar in form to, an interaction term in the theory's Lagrangian for every internal vertex where lines meet. These prescriptions are known as Feynman rules.
Internal lines correspond to virtual particles. Since the propagator does not vanish for combinations of energy and momentum disallowed by the classical equations of motion, we say that the virtual particles are allowed to be off shell. In fact, since the propagator is obtained by inverting the wave equation, in general, it will have singularities on shell.
The energy carried by the particle in the propagator can even be negative. This can be interpreted simply as the case in which, instead of a particle going one way, its antiparticle is going the other way, and therefore carrying an opposing flow of positive energy. The propagator encompasses both possibilities. It does mean that one has to be careful about minus signs for the case of fermions, whose propagators are not even functions in the energy and momentum (see below).
Virtual particles conserve energy and momentum. However, since they can be off shell, wherever the diagram contains a closed loop, the energies and momenta of the virtual particles participating in the loop will be partly unconstrained, since a change in a quantity for one particle in the loop can be balanced by an equal and opposite change in another. Therefore, every loop in a Feynman diagram requires an integral over a continuum of possible energies and momenta. In general, these integrals of products of propagators can diverge, a situation that must be handled by the process of renormalization.
Other theories
Spin
If the particle possesses spin then its propagator is in general somewhat more complicated, as it will involve the particle's spin or polarization indices. The differential equation satisfied by the propagator for a spin particle is given by
where is the unit matrix in four dimensions, and employing the Feynman slash notation. This is the Dirac equation for a delta function source in spacetime. Using the momentum representation,
the equation becomes
where on the right-hand side an integral representation of the four-dimensional delta function is used. Thus
By multiplying from the left with
(dropping unit matrices from the notation) and using properties of the gamma matrices,
the momentum-space propagator used in Feynman diagrams for a Dirac field representing the electron in quantum electrodynamics is found to have form
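A sketch of the missing momentum-space form, up to an overall factor of i that varies between texts; the second expression is the common shorthand discussed next:
\[ S_F(p) = \frac{\gamma^\mu p_\mu + m}{p^2 - m^2 + i\varepsilon} = \frac{1}{\gamma^\mu p_\mu - m + i\varepsilon} \]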
The infinitesimal imaginary term downstairs is a prescription for how to handle the poles in the complex energy plane. It automatically yields the Feynman contour of integration by shifting the poles appropriately. It is sometimes written
for short. It should be remembered that this expression is just shorthand notation for . "One over matrix" is otherwise nonsensical. In position space one has
This is related to the Feynman propagator by
where .
Spin 1
The propagator for a gauge boson in a gauge theory depends on the choice of convention to fix the gauge. For the gauge used by Feynman and Stueckelberg, the propagator for a photon is
The general form with gauge parameter , up to overall sign and the factor of , reads
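Sketches of the two missing expressions, in signature (+, −, −, −) and up to the overall sign and factor of i that the text leaves open; ξ is the gauge parameter:
\[ D^{\mathrm{Feynman}}_{\mu\nu}(p) = \frac{-g_{\mu\nu}}{p^2 + i\varepsilon}, \qquad D_{\mu\nu}(p) = \frac{-1}{p^2 + i\varepsilon} \left( g_{\mu\nu} - (1 - \xi)\, \frac{p_\mu p_\nu}{p^2} \right) \]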
The propagator for a massive vector field can be derived from the Stueckelberg Lagrangian. The general form with gauge parameter , up to overall sign and the factor of , reads
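A sketch of the missing Stueckelberg-type form for a massive vector field, under the same caveats about overall sign and factors of i:
\[ D_{\mu\nu}(p) = \frac{-1}{p^2 - m^2 + i\varepsilon} \left( g_{\mu\nu} - \frac{(1 - \xi)\, p_\mu p_\nu}{p^2 - \xi m^2} \right) \]
With this form, ξ → ∞ reproduces the unitary-gauge propagator, ξ = 1 the Feynman ('t Hooft) gauge, and ξ = 0 the Landau (Lorenz) gauge, matching the gauges named in the list below.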
With these general forms one obtains the propagators in unitary gauge for , the propagator in Feynman or 't Hooft gauge for and in Landau or Lorenz gauge for . There are also other notations where the gauge parameter is the inverse of , usually denoted (see gauges). The name of the propagator, however, refers to its final form and not necessarily to the value of the gauge parameter.
Unitary gauge:
Feynman ('t Hooft) gauge:
Landau (Lorenz) gauge:
Graviton propagator
The graviton propagator for Minkowski space in general relativity is
where is the number of spacetime dimensions, is the transverse and traceless spin-2 projection operator and is a spin-0 scalar multiplet.
The graviton propagator for (Anti) de Sitter space is
where is the Hubble constant. Note that upon taking the limit and , the AdS propagator reduces to the Minkowski propagator.
Related singular functions
The scalar propagators are Green's functions for the Klein–Gordon equation. There are related singular functions which are important in quantum field theory. These functions are most simply defined in terms of the vacuum expectation value of products of field operators.
Solutions to the Klein–Gordon equation
Pauli–Jordan function
The commutator of two scalar field operators defines the Pauli–Jordan function by
with
This satisfies
and is zero if .
Positive and negative frequency parts (cut propagators)
We can define the positive and negative frequency parts of , sometimes called cut propagators, in a relativistically invariant way.
This allows us to define the positive frequency part:
and the negative frequency part:
These satisfy
and
Auxiliary function
The anti-commutator of two scalar field operators defines function by
with
This satisfies
Green's functions for the Klein–Gordon equation
The retarded, advanced and Feynman propagators defined above are all Green's functions for the Klein–Gordon equation.
They are related to the singular functions by
where is the sign of .
| Physical sciences | Quantum mechanics | Physics |
10047006 | https://en.wikipedia.org/wiki/Unipedalism | Unipedalism | A uniped (from Latin uni- "one" and ped- "foot") is a person or creature with only one foot and one leg, as contrasted with a biped (two legs) and a quadruped (four legs). Moving using only one leg is known as unipedal movement. Many bivalve and nearly all gastropod molluscs have evolved only one foot. Through accidents (e.g., amputation) or birth abnormalities it is also possible for an animal, including a human, to end up with only a single leg.
In fiction and mythology
One major study of mythological unipeds is Teresa Pàroli (2009): "How many are the unipeds' feet? Their tracks in texts and sources", in Analecta Septentrionalia: Beiträge zur nordgermanischen Kultur- und Literaturgeschichte, ed. by Wilhelm Heizmann, Klaus Böldl and Heinrich Beck (Berlin/London/New York: De Gruyter), pp. 281–327.
In the Saga of Erik the Red, a native of Vinland who is described as being one-legged kills Thorvald, Erik the Red's son. In the children's fiction book They Came on Viking Ships by Jackie French, a uniped is a one-legged Norse mythical creature that lived in the south of Vinland during the time of the expedition of Freydís Eiríksdóttir.
The sciapod was another mythical one-legged humanoid.
In Japanese mythology and folklore, some yōkai such as the karakasa-obake and the ippon-datara have one leg.
In the Narnia book The Voyage of the Dawn Treader by C. S. Lewis, the heroes meet the "Dufflepuds". These are two-legged dwarfs who have been rendered one-legged by their master, a wizard. He did this to force them to use the water from the stream next to their food garden, rather than walking miles to get the water.
In Brazilian folklore, there is a mythical humanoid uniped called "Saci" who appears in several tales and is associated with dustdevils. Colombian folklore has a female version of this monster, the "Patasola".
In Mayan mythology, God K and his equivalents are represented with one leg. One of these equivalents is the K'iche' Maya storm deity Huracan, whose name means "one-leg".
In the Indian epic Mahabharata, there is mention of a southern Indian tribe of humans named 'Ekapada' (literally 'one-footed'), which Sahadeva conquers.
In Hindu culture, there is a form of the god Shiva known as Ekapada.
| Biology and health sciences | Ethology | Biology |
14067771 | https://en.wikipedia.org/wiki/Thalattosauria | Thalattosauria | Thalattosauria (Greek for "sea lizards") is an extinct order of marine reptiles that lived in the Middle to Late Triassic. Thalattosaurs were diverse in size and shape, and are divided into two superfamilies: Askeptosauroidea and Thalattosauroidea. Askeptosauroids were endemic to the Tethys Ocean, their fossils have been found in Europe and China, and they were likely semiaquatic fish eaters with straight snouts and decent terrestrial abilities. Thalattosauroids were more specialized for aquatic life and most had unusual downturned snouts and crushing dentition. Thalattosauroids lived along the coasts of both Panthalassa and the Tethys Ocean, and were most diverse in China and western North America. The largest species of thalattosaurs grew to over 4 meters (13 feet) in length, including a long, flattened tail utilized in underwater propulsion. Although thalattosaurs bore a superficial resemblance to lizards, their exact relationships are unresolved. They are widely accepted as diapsids, but experts have variously placed them on the reptile family tree among Lepidosauromorpha (squamates, rhynchocephalians and their relatives), Archosauromorpha (archosaurs and their relatives), ichthyosaurs, and/or other marine reptiles.
Description
Thalattosaurs have moderate adaptations to marine lifestyles, including long, paddle-like tails and slender bodies with more than 20 dorsal vertebrae. There are few unique traits of the postcranial skeleton shared by all thalattosaurs, but the skeleton is still useful for distinguishing between askeptosauroids and thalattosauroids. Askeptosauroids are characterized by elongated necks with short neural spines and at least 11 vertebrae, while thalattosauroids have shorter necks sometimes involving as few as four vertebrae. Thalattosauroids also have tall neural spines on their neck, back, and especially the tail vertebrae, increasing the surface area for swimming via lateral undulation. Thalattosauroids additionally possess short, wide limb bones poorly adapted for movement on land. In this superfamily, the humerus is widest near the shoulder, the femur is widest near the knee, the radius is reniform ("kidney-shaped"), and phalanges are long and plate-like. Askeptosauroids retain hourglass-shaped limb bones like land reptiles, but even they share specializations with thalattosauroids such as a short tibia and fibula, with the latter expanding near the ankle.
Skull
Thalattosaurs are diapsid reptiles, meaning that they have temporal fenestrae, two holes in the head behind the orbit (eye socket). However, many thalattosaurs have a vestigial upper temporal fenestra which is slit-like, and some have it fully closed up by surrounding bones. Thalattosaurs lack a quadratojugal bone, leaving the lower temporal fenestra open from below. They also lack postparietal and tabular bones, while the squamosal bone is small, the supratemporal bone is extensive, and the quadrate bone is large. When seen from above, the rear edge of the skull bears a large, triangular embayment that reaches further forwards than the quadrates.
Thalattosaurs have a rostrum (snout) significantly longer than the portion of the skull behind the eyes. A majority of this length is formed from the premaxillary bones, and the nares (nostril holes) are shifted back close to the eyes. The premaxillae stretch back very far and are incised into the frontal bones. This leads to an unusual trait that is characteristic of thalattosaurs, where the left and right nasal bones are separated from each other and restricted to a small portion of the snout near the nares. The lacrimal bone is typically lost or fused to the large crescent-shaped prefrontal bone in front of the orbit, mirroring the postfrontal bone which is usually fused to the three-pronged postorbital bone behind the orbit.
Askeptosauroidea have narrow, straight-edged snouts which are often elongated and filled with conical teeth. One askeptosauroid, Endennasaurus, is entirely toothless while another, Miodentosaurus, has a short, blunt snout. Most members of the second thalattosaur group, Thalattosauroidea, have more distinctive downturned snouts. Clarazia and Thalattosaurus both have snouts that taper into a narrow tip. Most of the snout is straight, but premaxillae at the tip are downturned. Xinpusaurus and Concavispina also have downturned premaxillae, but the end of the maxillae are sharply upturned, forming a notch in their skull. In Hescheleria (and potentially Nectosaurus and Paralonectes), the premaxillae are abruptly downturned at the end of the snout, nearly forming a right angle with the rest of the jaw. In these forms, the end of the snout is a toothy hook separated from the rest of the jaw by a space called a diastema. Thalattosauroids also have heterodont dentition, with pointed piercing teeth at the front of the snout and low crushing teeth further back. The exception to this rule is Gunakadeit, which has a straight snout and many slender teeth. Thalattosaurs often have a pronounced retroarticular process at the rear of the mandible. Thalattosauroids are more specialized than askeptosauroids in jaw anatomy, as they have evolved a large peak-like coronoid bone and an angular bone that extends far forwards along the lower edge of the jaw. Palatal dentition is extensive in thalattosauroids but absent in askeptosauroids.
Paleobiology
Thalattosaurs are only known from marine deposits, indicating that they were all primarily aquatic reptiles. The retracted nostrils and long, paddle-shaped tail are further evidence for aquatic habits. Thalattosauroids seemingly spent all of their time in the water, with short, wide limbs, poorly developed wrist and ankle bones, and tall vertebrae adapted for swimming via lateral undulation. Even so, they retained strong claws and functional digits which had not transformed into flippers, in contrast to ichthyosaurs and sauropterygians. Unlike these other marine reptiles, there is no evidence that thalattosaurs fully adapted to a pelagic life out in the open ocean, and instead they probably all lived in warm waters close to the coast. Askeptosauroids had stronger limbs more typical of terrestrial reptiles, indicating they would have been capable of moving around on land to some extent. They likely primarily used their tails when swimming, while thalattosauroids may have utilized their body and tail in conjunction.
Thalattosaurs had diverse diets, though they probably all involved marine animals in one way or another. Endennasaurus probably predated small animals like fish fry or small crustaceans due to its lack of teeth. Various thalattosauroids (like Thalattosaurus, Xinpusaurus, and Concavispina) had large fang-like teeth at the front of the mouth and thick button-like teeth at the back of the mouth. Based on Massare (1987)'s technique of correlating diet with tooth shape, the taller teeth were suited for a "crunching" diet, involving armored fish, large crustaceans, and thin-shelled ammonites. The low, robust teeth would have been useful for a "crushing" diet specialized in large molluscs or other thick-shelled prey. Gunakadeit's slender teeth correlated with the "Pierce II" guild of Massare (1987), indicating it likely fed on soft, fast-moving fish and squid. It also had a large hyoid apparatus which may have played a role in suction feeding. Thalattosaurs also fell prey to other marine reptiles: the torso of a ~4 meter (13 feet) long Xinpusaurus xingyiensis has been found within the body cavity of a 5-meter (16 feet) long skeleton of the predatory ichthyosaur Guizhouichthyosaurus. This is the oldest known predatory interaction between marine reptiles, and Xinpusaurus may also be the largest prey item preserved within another marine reptile.
Distribution
It is not certain where thalattosaurs originated. During the Triassic period, the Earth had one giant supercontinent, Pangaea, which was surrounded by the superocean Panthalassa. The eastern portion of Pangaea was incised by a massive tropical inland sea, the Tethys Ocean, which extended all the way from China to Western Europe. While thalattosauroids are known from worldwide Triassic marine deposits, askeptosauroids are only known from Tethyan deposits. Assuming Endennasaurus and Askeptosaurus were the most basal askeptosauroids, Askeptosauroidea would have originated in the Western Tethys Ocean, now the Alpine region of Europe. However, if Miodentosaurus is more basal, a Western Tethys (European) origin would be significantly less likely. Although the sister group to Thalattosauria is still debated, one possibility, the ichthyosauromorphs, seemingly evolved in the Eastern Tethys (China) during the Early Triassic or earlier.
The oldest known thalattosauroids (Thalattosaurus, Paralonectes, and Agkistrognathus of British Columbia's Sulphur Mountain Formation) lived in eastern Panthalassa, along what is now the western coast of North America. Müller (2005, 2007) argued that at least one branch of thalattosauroids had managed to spread worldwide early in their evolution. However, this is based on the hypothesis that Nectosaurus (from California), Xinpusaurus (from China), and an unnamed species from Austria formed a clade basal to other thalattosaurs, a classification scheme which contrasts with many other studies. The worldwide distribution of Thalattosauroidea is intriguing considering that thalattosaurs are considered to be poorly adapted for traversing open oceans, which would have been a necessity for spreading between the eastern coast of Panthalassa and the Tethys Ocean. Coastal "refuges" such as volcanic island arcs and guyots may have facilitated the ability of thalattosaurs to spread between ocean basins. Hescheleria-like forms were previously only reported from North America and Europe, but in 2021 a Hescheleria-like snout fragment was reported from China, indicating that they also had a widespread distribution. Trans-Panthalassa connections are also observed in other Triassic marine life such as pistosaurs and ammonites. Evidently thalattosaurs were capable of dispersing throughout major marine regions multiple times before the group's extinction, with thalattosauroids likely more prolific at spreading than askeptosauroids due to their greater aquatic adaptations.
Classification
Early hypotheses
When first named by Merriam in 1904, Thalattosauria was only known by the species Thalattosaurus alexandrae. Based primarily on the overall skull shape, it was hypothesized to have been close to the reptile order Rhynchocephalia, which includes Sphenodon (the living tuatara). Nevertheless, Thalattosaurus was recognized as distinct enough to be given its own order, and was tentatively grouped along with Rhynchocephalia in the group Diaptosauria, a collection of various "primitive" reptiles now known to be polyphyletic. Within Diaptosauria, thalattosaurs were also considered very closely related to choristoderes and "Proganosauria" (parareptiles). Comparisons were also made with Parasuchia (phytosaurs), Lacertilia (lizards), and Proterosuchus, but dismissed as incompatible with proposed evolutionary schemes.
Further discussion by Merriam (1905) considered a relationship with ichthyosaurs due to their similar ecology, but questioned why their skull and vertebral anatomy would diverge so widely if they had a close common ancestor. He proposed that potential similarities were best explained as convergent evolution. The possibility that thalattosaurs diverged from reptiles close to lizards (such as Paliguana) was described in more detail, with thalattosaurs serving as a short-lived early attempt for near-lizards to return to the sea, an evolutionary process later repeated more successfully when mosasaurs evolved from true lizards. Nevertheless, Merriam found no clear evidence that any previously known reptile group was directly ancestral to thalattosaurs or vice versa. They were probably descended from land-dwelling Permian reptiles, and not closely related to other marine reptile groups which first evolved in the Triassic. Later 20th-century workers typically placed thalattosaurs close to rhynchocephalians or squamates as part of the group now known as Lepidosauromorpha.
Modern classification and external relationships
The rising popularity of cladistics in the late 1980s had some effects on thalattosaur classification. Continued research has helped cement some aspects of reptile classification, such as how Sauria (a major clade of diapsids including all living reptiles) split in the Permian into two branches: Lepidosauromorpha (which leads to lizards, snakes, and the tuatara) and Archosauromorpha (which leads to crocodilians and dinosaurs, including birds). While many paleontologists still consider thalattosaurs probable lepidosauromorphs, a few studies (such as a phylogenetic analysis by Evans, 1988) have instead suggested that they may be on the archosauromorph branch of Sauria. Rieppel (1998)'s re-evaluation of the thalattosaur-like pachypleurosaur Hanosaurus argued that thalattosaurs have affinities with the aquatic reptile order Sauropterygia, which itself is aligned with turtles within an expansive interpretation of Lepidosauromorpha.
An analysis by Müller (2004) even considered thalattosaurs to belong just outside of Sauria. Unusually, thalattosaurs have a tendency to shift toward ichthyosaurs (in the group Ichthyosauromorpha) when certain basal saurians or near-saurians are excluded from the data set. Some analyses derived from Müller (2004) group thalattosaurs in a "marine superclade" with ichthyosauromorphs and sauropterygians, and sometimes with turtles, archosauromorphs, or lepidosauromorphs as well. For example, Simões et al. (2022) classify thalattosaurs as the sister group of the sauropterygians, with their clade being sister to the ichthyosauromorphs, and all three being basal archosauromorphs. However, cladograms generated by these analyses change in unpredictable ways through alterations to their methodology (such as including or excluding aquatic adaptations or switching between parsimony and Bayesian inference), leading some to have concerns over the validity of the "marine superclade". While thalattosaurs are almost certainly diapsids, the large degree of uncertainty surrounding their outgroup relations has led most modern paleontologists to classify them as Diapsida incertae sedis.
Internal relationships
One of the first phylogenetic analyses specifically focusing on thalattosaurs was part of Nicholls (1999)'s reevaluation of Thalattosaurus and Nectosaurus. She used a restricted definition of Thalattosauria which referred to a clade including all reptiles more closely related to Nectosaurus and Hescheleria than to Endennasaurus or Askeptosaurus. The more inclusive group including Askeptosaurus, Endennasaurus, and traditional thalattosaurs was given the name Thalattosauriformes.
However, most studies focusing on the group have preferred to retain a broader definition of Thalattosauria equivalent to Nicholls' Thalattosauriformes clade, including reptiles close to both Askeptosaurus and Thalattosaurus. In these studies, Thalattosauria is divided into two branches, one leading to relatives of Askeptosaurus and the other leading to relatives of Thalattosaurus. The clade containing reptiles closer to Thalattosaurus than to askeptosaurids is given the name Thalattosauroidea (and sometimes called Thalattosauridea). Meanwhile, the clade containing reptiles closer to askeptosaurids is termed Askeptosauroidea or Askeptosauridea.
Subsequent studies since Nicholls (1999) started to include more taxa, including newly described Chinese taxa such as Anshunsaurus and Xinpusaurus. However, uncertainty over Endennasaurus's thalattosaurian ancestry led to it being excluded from these analyses. After Müller et al. (2005) re-affirmed that Endennasaurus was closely related to Askeptosaurus, all thalattosaurs known at the time were finally combined into phylogenetic analyses. Studies by Rieppel, Liu, Cheng, Wu, and others continued to identify new Chinese taxa such as Miodentosaurus and various species of Anshunsaurus and Xinpusaurus, though homoplasy in these new taxa has led to little resolution in the structure of the two major branches of Thalattosauria. In an attempt to remedy this problem, new phylogenetic analyses were developed by Liu et al. (2013) during the description of Concavispina, and Druckenmiller et al. (2020) during the description of Gunakadeit.
The internal relationships of thalattosaurs are still considered tentative and inconclusive, although the fundamental structure of the group (a monophyletic Thalattosauria clade split into askeptosauroids and thalattosauroids) is very stable. Some paleontologists have attempted to divide thalattosaurs into families. One family, Askeptosauridae, is typically considered to include Askeptosaurus and Anshunsaurus, with a few studies also placing Miodentosaurus or Endennasaurus within it. Another family, Thalattosauridae, was originally used to group Thalattosaurus and Nectosaurus, was later redefined to exclude Nectosaurus, and later still encompassed practically all thalattosauroids. Many thalattosaur-focused paleontologists avoid using family names due to their inconsistent usage and questionable validity.
The cladogram presented here is based on the largest and most recent analysis of thalattosaur ingroup relations, Druckenmiller et al. (2020). It shows all thalattosaur genera except for the fragmentary Agkistrognathus.
List of genera
Other thalattosaurs include unnamed or indeterminate species from the Kössen Formation of Austria, the Sulphur Mountain and Pardonet Formations of British Columbia, the Natchez Pass Formation of Nevada, and the Vester Formation of Oregon. Blezingeria, a fragmentary marine reptile from the Muschelkalk of Germany, has also been considered a thalattosaur by some authors but this assignment is uncertain at best. Neosinasaurus, a poorly known reptile from the Xiaowa Formation of China, has also been considered a thalattosaur. Thalattosaurian fragments are known from the Spanish Muschelkalk. A previously unnamed specimen from Alaska was described as Gunakadeit in 2020.
| Biology and health sciences | Prehistoric marine reptiles | Animals |
1735128 | https://en.wikipedia.org/wiki/Disproportionation | Disproportionation | In chemistry, disproportionation, sometimes called dismutation, is a redox reaction in which one compound of intermediate oxidation state converts to two compounds, one of higher and one of lower oxidation state. The reverse of disproportionation, such as when a compound in an intermediate oxidation state is formed from precursors of lower and higher oxidation states, is called comproportionation, also known as symproportionation.
More generally, the term can be applied to any desymmetrizing reaction where two molecules of one type react to give one each of two different types:
This expanded definition is not limited to redox reactions, but also includes some molecular autoionization reactions, such as the self-ionization of water. In contrast, some authors use the term redistribution to refer to reactions of this type (in either direction) when only ligand exchange, but no redox, is involved, and distinguish such processes from disproportionation and comproportionation. For example, the Schlenk equilibrium
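2 RMgX ⇌ R2Mg + MgX2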
is an example of a redistribution reaction.
History
The first disproportionation reaction to be studied in detail was:
This was examined using tartrates by Johan Gadolin in 1788. In the Swedish version of his paper he called it .
Examples
Mercury(I) chloride disproportionates upon UV-irradiation:
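Hg2Cl2 → Hg + HgCl2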
Phosphorous acid disproportionates upon heating to 200°C to give phosphoric acid and phosphine:
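4 H3PO3 → 3 H3PO4 + PH3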
Desymmetrizing reactions are sometimes referred to as disproportionation, as illustrated by the thermal degradation of bicarbonate:
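2 HCO3− → CO3^2− + H2CO3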
The oxidation numbers remain constant in this acid-base reaction.
Another variant on disproportionation is radical disproportionation, in which two radicals form an alkene and an alkane.
2 CH3–CH2• → H2C=CH2 + H3C–CH3
Disproportionation of sulfur intermediates by microorganisms is widely observed in sediments.
Chlorine gas reacts with concentrated sodium hydroxide to form sodium chloride, sodium chlorate and water. The ionic equation for this reaction is as follows:
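3 Cl2 + 6 OH− → 5 Cl− + ClO3− + 3 H2O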
The chlorine reactant is in oxidation state 0. In the products, the chlorine in the Cl− ion has an oxidation number of −1, having been reduced, whereas the oxidation number of the chlorine in the chlorate ion (ClO3−) is +5, indicating that it has been oxidized.
Decomposition of numerous interhalogen compounds involve disproportionation. Bromine fluoride undergoes a disproportionation reaction to form bromine trifluoride and bromine in non-aqueous media:
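3 BrF → BrF3 + Br2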
The dismutation of superoxide free radical to hydrogen peroxide and oxygen, catalysed in living systems by the enzyme superoxide dismutase:
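2 O2•− + 2 H+ → H2O2 + O2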
The oxidation state of oxygen is −1/2 in the superoxide free radical anion, −1 in hydrogen peroxide and 0 in dioxygen.
In the Cannizzaro reaction, an aldehyde is converted into an alcohol and a carboxylic acid. In the related Tishchenko reaction, the organic redox reaction product is the corresponding ester. In the Kornblum–DeLaMare rearrangement, a peroxide is converted to a ketone and an alcohol.
The disproportionation of hydrogen peroxide into water and oxygen catalysed by either potassium iodide or the enzyme catalase:
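2 H2O2 → 2 H2O + O2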
In the Boudouard reaction, carbon monoxide disproportionates to carbon and carbon dioxide. The reaction is for example used in the HiPco method for producing carbon nanotubes; high-pressure carbon monoxide disproportionates when catalysed on the surface of an iron particle:
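2 CO → CO2 + C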
Nitrogen has oxidation state +4 in nitrogen dioxide, but when this compound reacts with water, it forms both nitric acid and nitrous acid, where nitrogen has oxidation states +5 and +3 respectively:
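2 NO2 + H2O → HNO3 + HNO2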
In hydrazoic acid and sodium azide, each of the three nitrogen atoms of these very energetic linear polyatomic species has an average oxidation state of −1/3. These unstable and highly toxic compounds will disproportionate in aqueous solution to form gaseous nitrogen (N2) and ammonium ions, or ammonia, depending on pH conditions, as can be conveniently verified by means of the Frost diagram for nitrogen:
Under acidic conditions, hydrazoic acid disproportionates as:
Under neutral, or basic, conditions, the azide anion disproportionates as:
Dithionite undergoes acid hydrolysis to thiosulfate and bisulfite:
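2 S2O4^2− + H2O → S2O3^2− + 2 HSO3−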
Dithionite also undergoes alkaline hydrolysis to sulfite and sulfide:
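3 S2O4^2− + 6 OH− → 5 SO3^2− + S^2− + 3 H2O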
Dithionate is prepared on a larger scale by oxidizing a cooled aqueous solution of sulfur dioxide with manganese dioxide:
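MnO2 + 2 SO2 → MnS2O6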
Polymer chemistry
In free-radical chain-growth polymerization, chain termination can occur by a disproportionation step in which a hydrogen atom is transferred from one growing chain molecule to another one, which produces two dead (non-growing) chains.
Chain—CH2–CHX• + Chain—CH2–CHX• → Chain—CH=CHX + Chain—CH2–CH2X
in which, Chain— represents the already formed polymer chain, and • indicates a reactive free radical.
Biochemistry
In 1937, Hans Adolf Krebs, who discovered the citric acid cycle bearing his name, confirmed the anaerobic dismutation of pyruvic acid into lactic acid, acetic acid and CO2 by certain bacteria according to the global reaction:
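2 CH3COCOOH + H2O → CH3CHOHCOOH + CH3COOH + CO2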
The dismutation of pyruvic acid in other small organic molecules (ethanol + CO2, or lactate and acetate, depending on the environmental conditions) is also an important step in fermentation reactions. Fermentation reactions can also be considered as disproportionation or dismutation biochemical reactions. Indeed, the donor and acceptor of electrons in the redox reactions supplying the chemical energy in these complex biochemical systems are the same organic molecules simultaneously acting as reductant or oxidant.
Another example of biochemical dismutation reaction is the disproportionation of acetaldehyde into ethanol and acetic acid.
While in respiration electrons are transferred from substrate (electron donor) to an electron acceptor, in fermentation part of the substrate molecule itself accepts the electrons. Fermentation is therefore a type of disproportionation, and does not involve an overall change in oxidation state of the substrate. Most of the fermentative substrates are organic molecules. However, a rare type of fermentation may also involve the disproportionation of inorganic sulfur compounds in certain sulfate-reducing bacteria.
Disproportionation of sulfur intermediates
Sulfur isotopes of sediments are often measured for studying environments in the Earth's past (paleoenvironment). Disproportionation of sulfur intermediates, being one of the processes affecting sulfur isotopes of sediments, has drawn attention from geoscientists for studying the redox conditions in the oceans in the past.
Sulfate-reducing bacteria fractionate sulfur isotopes as they take in sulfate and produce sulfide. Prior to the 2010s, it was thought that sulfate reduction could fractionate sulfur isotopes by up to 46 ‰, and that fractionation larger than 46 ‰ recorded in sediments must be due to disproportionation of sulfur intermediates in the sediment. This view has changed since the 2010s. As substrates for disproportionation are limited by the product of sulfate reduction, the isotopic effect of disproportionation should be less than 16 ‰ in most sedimentary settings.
Disproportionation can be carried out by microorganisms obligated to disproportionation or by microorganisms that can also carry out sulfate reduction. Common substrates for disproportionation include elemental sulfur (S0), thiosulfate (S2O3^2−) and sulfite (SO3^2−).
Claus reaction: a comproportionation reaction
The Claus reaction is an example of a comproportionation reaction (the inverse of disproportionation), involving hydrogen sulfide (H2S) and sulfur dioxide (SO2) to produce elemental sulfur and water as follows:
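2 H2S + SO2 → 3 S + 2 H2O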
The Claus reaction is one of the chemical reactions involved in the Claus process used for the desulfurization of gases in the oil refinery plants and leading to the formation of solid elemental sulfur (), which is easier to store, transport, reuse when possible, and dispose of.
| Physical sciences | Redox reactions | Chemistry |
1735240 | https://en.wikipedia.org/wiki/Dichlorodifluoromethane | Dichlorodifluoromethane | Dichlorodifluoromethane (R-12) is a colorless gas popularly known by the genericized brand name Freon (as Freon-12). It is a chlorofluorocarbon halomethane (CFC) used as a refrigerant and aerosol spray propellant. In compliance with the Montreal Protocol, its manufacture was banned in developed countries (non-article 5 countries) in 1996, and in developing countries (Article 5 countries) in 2010 out of concerns about its damaging effect on the ozone layer. Its only allowed usage is as a fire retardant in submarines and aircraft. It is soluble in many organic solvents. R-12 cylinders are colored white.
Preparation
It can be prepared by reacting carbon tetrachloride with hydrogen fluoride in the presence of a catalytic amount of antimony pentachloride:
CCl4 + 2HF → CCl2F2 + 2HCl
This reaction can also produce trichlorofluoromethane (CCl3F), chlorotrifluoromethane (CClF3) and tetrafluoromethane (CF4).
History
Charles F. Kettering, vice president of General Motors Research Corporation, was seeking a refrigerant replacement that would be colorless, odorless, tasteless, nontoxic, and nonflammable. He assembled a team that included Thomas Midgley Jr., Albert Leon Henne, and Robert McNary. From 1930 to 1935, they developed dichlorodifluoromethane (CCl2F2 or R12), trichlorofluoromethane (CCl3F or R11), chlorodifluoromethane (CHClF2 or R22), trichlorotrifluoroethane (CCl2FCClF2 or R113), and dichlorotetrafluoroethane (CClF2CClF2 or R114), through Kinetic Chemicals which was a joint venture between DuPont and General Motors.
Use as an aerosol
The use of chlorofluorocarbons as aerosols in medicine, such as USP-approved salbutamol, has been phased out by the U.S. Food and Drug Administration. A different propellant known as hydrofluoroalkane, or HFA, which was not known to harm the environment, was chosen to replace it. That said, it is still listed on the FDA's approved food additive list.
Environmental effects
R-12 has the highest ozone depletion potential among chlorocarbons due to the presence of 2 chlorine atoms in the molecule.
R-12 also has a very high global warming potential (GWP), with its 20-year, 100-year and 500-year GWP being 11,400, 11,200 and 5,100 times that of CO2.
Retrofitting
R-12 was used in most refrigeration and vehicle air conditioning applications prior to 1994 before being replaced by 1,1,1,2-tetrafluoroethane (R-134a), which has an insignificant ozone depletion potential. Automobile manufacturers began phasing in R-134a around 1993. When older units leak or require repair involving removal of the refrigerant, retrofitment to a refrigerant other than R-12 (most commonly R-134a) is required in some jurisdictions. The United States does not require such conversion. Retrofitment requires a system flush and a new filter/dryer or accumulator, and may also involve the installation of new seals and/or hoses made of materials compatible with the refrigerant being installed. Mineral oil used with R-12 is not compatible with R-134a. Some oils designed for conversion to R-134a are advertised as compatible with residual R-12 mineral oil. Illegal replacements for R-12 include highly flammable hydrocarbon blends such as HC-12a, the flammability of which has caused injuries and deaths.
Dangers
Aside from its environmental impacts, R12, like most chlorofluoroalkanes, forms phosgene gas when exposed to a naked flame.
Properties
Table of thermal and physical properties of saturated liquid refrigerant 12:
Gallery
| Physical sciences | Halocarbons | Chemistry |
1738148 | https://en.wikipedia.org/wiki/Limpet | Limpet | Limpets are a group of aquatic snails with a conical shell shape (patelliform) and a strong, muscular foot. This general category of conical shell is known as "patelliform" (dish-shaped). Existing within the class Gastropoda, limpets are a polyphyletic group (its members descending from different immediate ancestors).
All species of Patellogastropoda are limpets, with the Patellidae family in particular often referred to as "true limpets". Examples of other clades commonly referred to as limpets include the Vetigastropoda family Fissurellidae ("keyhole limpet"), which use a siphon to pump water over their gills, and the Siphonariidae ("false limpets"), which have a pneumostome for breathing air like the majority of terrestrial Gastropoda.
Description
The basic anatomy of a limpet consists of the usual molluscan organs and systems:
A nervous system centered around the paired cerebral, pedal, and pleural sets of ganglia. These ganglia create a ring around the limpet's esophagus called a circumesophageal nerve ring or nerve collar. Other nerves in the head/snout are the optic nerves which connect to the two eye spots located at the base of the cerebral tentacles (these eyespots, when present, are only able to sense light and darkness and do not provide any imagery), as well as the labial and buccal ganglia which are associated with feeding and controlling the animal's odontophore, the muscular cushion used to support the limpet's radula (a kind of tongue) that scrapes algae off the surrounding rock for nutrition. Behind these ganglia lie the pedal nerve cords which control the movement of the foot, and the visceral ganglion which in limpets has been torted during the course of evolution. This means, among other things, that the limpet's left osphradium and osphradial ganglion (an organ believed to be used to sense the time to produce gametes) are controlled by its right pleural ganglion and vice versa.
For most limpets, the circulatory system is based around a single triangular three-chambered heart consisting of an atrium, a ventricle, and a bulbous aorta. Blood enters the atrium via the circumpallial vein (after being oxygenated by the ring of gills located around the edge of the shell) and through a series of small vesicles that deliver more oxygenated blood from the nuchal cavity (the area above the head and neck). Many limpets still retain a ctenidium (sometimes two) in this nuchal chamber instead of the circumpallial gills as a means for exchanging oxygen and carbon dioxide with the surrounding water or air (many limpets can breathe air during periods of low tide, but those limpet species which never leave the water do not have this ability and will suffocate if deprived of water). Blood moves from the atrium into the ventricle and into the aorta where it is then pumped out to the various lacunar blood spaces / sinuses in the hemocoel. The odontophore may play a large role in assisting with blood circulation as well.
The two kidneys are very different in size and location. This is a result of torsion. The left kidney is diminutive and in most limpets is barely functional. The right kidney, however, has taken over the majority of blood filtration and often extends over and around the entire mantle of the animal in a thin, almost-invisible layer.
The digestive system is extensive and takes up a large part of the animal's body. Food (algae) is collected by the radula and odontophore and enters via the downward-facing mouth. It then moves through the esophagus and into the numerous loops of the intestines. The large digestive gland helps break down the microscopic plant material, and the long rectum helps compact used food which is then excreted through the anus located in the nuchal cavity. The anus of most molluscs and indeed many animals is located far from the head. In limpets and most gastropods, however, the evolutionary torsion which took place and allowed the gastropods to have a shell into which they could completely withdraw has caused the anus to be located near the head. Used food would quickly foul the nuchal cavity unless it was firmly compacted prior to being excreted. The torted condition of the limpets remains even though they no longer have a shell into which they can withdraw and even though the evolutionary advantages of torsion appear to therefore be negligible (some species of gastropod have subsequently de-torted and now have their anus located once again at the posterior end of the body; these groups no longer have a visceral twist to their nervous systems).
The gonad of a limpet is located beneath its digestive system just above its foot. It swells and eventually bursts, sending gametes into the right kidney which then releases them into the surrounding water on a regular schedule. Fertilized eggs hatch and the floating veliger larvae are free-swimming for a period before settling to the bottom and becoming an adult animal.
True limpets in the family Patellidae live on hard surfaces in the intertidal zone. Unlike barnacles (which are not molluscs but may resemble limpets in appearance) and mussels (which are bivalve molluscs that cement themselves to a substrate for their entire adult lives), limpets are capable of locomotion instead of being permanently attached to a single spot. However, when they need to resist strong wave action or other disturbances, limpets cling extremely firmly to the surfaces on which they live, using their muscular foot to apply suction combined with the effect of adhesive mucus. It often is very difficult to remove a true limpet from a rock without injuring or killing it.
All "true" limpets are marine. The most primitive group have one pair of gills, in others only a single gill remains, the lepetids do not have any gills at all, while the patellids have evolved secondary gills as they have lost the original pair. However, because the adaptive feature of a simple conical shell has repeatedly arisen independently in gastropod evolution, limpets from many different evolutionary lineages occur in widely different environments. Some saltwater limpets such as Trimusculidae breathe air, and some freshwater limpets are descendants of air-breathing land snails (e.g. the genus Ancylus) whose ancestors had a pallial cavity serving as a lung. In these small freshwater limpets, that "lung" underwent secondary adaptation to allow the absorption of dissolved oxygen from water.
Teeth
Function and formation
In order to obtain food, limpets rely on an organ called the radula, which contains iron-mineralized teeth. Although limpets contain over 100 rows of teeth, only the outermost 10 are used in feeding. These teeth form via matrix-mediated biomineralization, a cyclic process involving the delivery of iron minerals to reinforce a polymeric chitin matrix. Upon being fully mineralized, the teeth reposition themselves within the radula, allowing limpets to scrape off algae from rock surfaces. As limpet teeth wear out, they are subsequently degraded (occurring anywhere between 12 and 48 hours) and replaced with new teeth. Different limpet species exhibit different overall shapes of their teeth.
Growth and development
Development of limpet teeth occurs in conveyor-belt style, where teeth start growing at the back of the radula and move toward the front of this structure as they mature. The growth rate of the limpet's teeth is around 47 hours per row. Fully mature teeth are located in the scraping zone, the very front of the radula. The scraping zone is in contact with the substrate that the limpet feeds on. As a result, the fully mature teeth are subsequently worn down until they are discarded – at a rate equal to the growth rate. To counter this degradation, a new row of teeth begins to grow.
Biomineralization
The exact mechanism behind the biomineralization of limpet teeth is unknown. However, it is suggested that limpet teeth biomineralize using a dissolution-reprecipitation mechanism. Specifically, this mechanism is associated with the dissolution of iron stored in epithelial cells of the radula to create ferrihydrite ions. These ferrihydrite ions are transported through ion channels to the tooth surface. The build-up of enough ferrihydrite ions leads to nucleation, the rate of which can be altered via changing the pH at the site of nucleation. After one to two days, these ions are converted to goethite crystals. The unmineralized matrix consists of relatively well-ordered, densely packed arrays of chitin fibers, with only a few nanometers between adjacent fibers. The lack of space leads to the absence of pre-formed compartments within the matrix that control goethite crystal size and shape. Because of this, the main factor influencing goethite crystal growth is the chitin fibers of the matrix. Specifically, goethite crystals nucleate on these chitin fibers and push aside or engulf the chitin fibers as they grow, influencing their resulting orientation.
Strength
In the limpet teeth of Patella vulgata, Vickers hardness values are between 268 and 646 kg⋅m−1⋅s−2, while tensile strength values range between 3.0 and 6.5 GPa. As spider silk has a tensile strength of only up to 4.5 GPa, limpet teeth outperform spider silk as the strongest known biological material. These considerably high values exhibited by limpet teeth are due to the following factors:
The first factor is the nanometer length scale of goethite nanofibers in limpet teeth; at this length scale, materials become insensitive to flaws that would otherwise decrease failure strength. As a result, goethite nanofibers are able to maintain substantial failure strength despite the presence of defects.
The second factor is the small critical fiber length of the goethite fibers in limpet teeth. Critical fiber length is a parameter defining the length a fiber must have to transfer stresses from the matrix to the fibers themselves during external loading. Materials with a large critical fiber length (relative to the total fiber length) act as poor reinforcement fibers, meaning that most stresses are still loaded on the matrix. Materials with small critical fiber lengths (relative to the total fiber length) act as effective reinforcement fibers that are able to transfer stresses on the matrix to themselves. Goethite nanofibers express a critical fiber length of around 420 to 800 nm, which is well below their estimated fiber length of 3.1 μm. This suggests that the goethite nanofibers serve as effective reinforcement for the chitin matrix and significantly contribute to the load-bearing capabilities of limpet teeth. This is further supported by the large mineral volume fraction of elongated goethite nanofibers within limpet teeth, around 0.81.
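In fiber-composite mechanics, the critical fiber length is commonly estimated with the Kelly–Tyson relation l_c = (σ_f · d) / (2τ), where σ_f is the fiber tensile strength, d is the fiber diameter, and τ is the shear strength of the fiber–matrix interface; fibers substantially longer than l_c can be loaded up to their full strength rather than leaving the stress on the matrix.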
Applications of limpet teeth involve structural designs requiring high strength and hardness, such as biomaterials used in next-generation dental restorations.
Role in distributing stress
The structure, composition, and morphological shape of the teeth of the limpet allow for an even distribution of stress throughout the tooth. The teeth have a self-sharpening mechanism which allows for the teeth to be more highly functional for longer periods of time. Stress wears preferentially on the front surface of the cusp of the teeth, allowing the back surface to stay sharp and more effective.
There is evidence that different regions of the limpet teeth show different mechanical strengths. Measurements taken from the tip of the anterior edge of the tooth show that the teeth can exhibit an elastic modulus of around 140 GPa. Traveling down the anterior edge toward the anterior cusp of the teeth, however, the elastic modulus decreases, ending around 50 GPa at the edge of the teeth. The orientation of the goethite fibers correlates with this decrease in elastic modulus: towards the tip of the tooth the fibers are more aligned with each other, corresponding to a higher modulus, and vice versa.
The critical length of the goethite fibers is the reason the structural chitin matrix is so strongly reinforced. The critical length of goethite fibers has been estimated to be around 420 to 800 nm; comparison with the actual length of the fibers found in the teeth, around 3.1 μm, shows that the fibers are much longer than the critical length. This, paired with the orientation of the fibers, leads to effective stress distribution onto the goethite fibers rather than onto the weaker chitin matrix in the limpet teeth.
Causes of structure degradation
The overall structure of the limpet teeth is relatively stable within most natural conditions given the limpet's ability to produce new teeth at a similar rate to the degradation. Individual teeth are subjected to shear stresses as the tooth is dragged along the rock. Goethite as a mineral is a relatively soft iron based material, which increases the chance of physical damage to the structure. Limpet teeth and the radula have also been shown to experience greater levels of damage in CO2 acidified water.
Crystal structure
Goethite crystals form at the start of the tooth production cycle and remain a fundamental part of the tooth, with the intercrystal space filled with amorphous silica. The crystals exist in multiple morphologies, of which prisms with rhomb-shaped sections are the most frequent. The goethite crystals are stable and well formed for a biogenic crystal. The transport of the mineral to create the crystal structures has been suggested to be a dissolution-reprecipitation mechanism as of 2011. Limpet tooth structure is dependent upon the living depth of the specimen. While deep water limpets have been shown to have the same elemental composition as shallow water limpets, deep water limpets do not show crystalline phases of goethite.
Crystallization process
The initial event that takes place when the limpet creates a new row of teeth is the creation of the main macromolecular α-chitin component. The resulting organic matrix serves as framework for the crystallization of the teeth themselves. The first mineral to be deposited is goethite (α-FeOOH), a soft iron oxide which forms crystals parallel to the chitin fibers. The goethite, however, has varying crystal habits. The crystals arrange in various shapes and thicknesses throughout the chitin matrix. The varying formation of the chitin matrix has profound effects on the formation of the goethite crystals. The space in between the crystals and the chitin matrix is filled with amorphous hydrated silica (SiO2).
Characterizing composition
The most prominent metal by percent composition is iron in the form of goethite. Goethite has the chemical formula FeO(OH) and belongs to a group known as oxy-hydroxides. There exists amorphous silica between the goethite crystals; surrounding the goethite is a matrix of chitin. Chitin has a chemical formula of C8H13O5N. Other metals have been shown to be present with the relative percent compositions varying on geographic locations. The goethite has been reported to have a volume fraction of approximately 80%.
Regional dependency
Limpets from different locations were shown to have different elemental ratios within their teeth. Iron is consistently most abundant however other metals such as sodium, potassium, calcium, and copper were all shown to be present to varying degrees. The relative percentages of the elements have also been shown to differ from one geographic location to another. This demonstrates an environmental dependency of some kind; however the specific variables are currently undetermined.
Taxonomy
Gastropods that have limpet-like or patelliform shells are found in several different clades:
Clade Patellogastropoda, example Patellidae, the true limpets, all marine, in five living families and two fossil families
Clade Vetigastropoda, examples Fissurellidae, (the keyhole limpets and slit limpets), and Lepetelloidea, small deepwater limpets
Clade Neritimorpha, example Phenacolepadidae, small limpets related to nerites
Clade Heterobranchia, group Opisthobranchia, example Tylodinidae, the umbrella slugs with a limpet-shaped shell
Clade Heterobranchia, group Pulmonata, examples Siphonariidae, Latiidae, Trimusculidae, all air-breathing limpets
Other limpets
Marine
The hydrothermal vent limpets – Neomphaloidea and Lepetodriloidea
The hoof snails – Hipponix and other Hipponicidae
Slipper snails – Crepidula species, which are sometimes known as slipper limpets
Freshwater
The pulmonate river and lake limpets – Ancylidae
Some species of limpet live in fresh water, but these are the exception. Most marine limpets have gills, whereas all freshwater limpets and a few marine limpets have a mantle cavity adapted to breathe air and function as a lung (and in some cases again adapted to absorb oxygen from water). All these kinds of snail are only very distantly related.
Naming
The common name "limpet" also is applied to a number of not very closely related groups of sea snails and freshwater snails (aquatic gastropod mollusks). Thus the common name "limpet" has very little taxonomic significance in and of itself; the name is applied not only to true limpets (the Patellogastropoda), but also to all snails that have a simple, broadly conical shell, and either is not spirally coiled, or appears not to be coiled in the adult snail. In other words, the shell of all limpets is patelliform, which means the shell is shaped more or less like the shell of most true limpets. The term "false limpets" is used for some (but not all) of these other groups that have a conical shell.
Thus, the name limpet is used to describe various extremely diverse groups of gastropods that have independently evolved a shell of the same basic shape (see convergent evolution). And although the name "limpet" is given on the basis of a limpet-like or patelliform shell, the several groups of snails that have a shell of this type are not at all closely related to one another.
Ecology
Symbiosis
Limpets have mutualistic relationships with several other organisms. Clathromorphum, a type of algae, provides food to limpets, which clean the algae's surface and allow its persistence.
The rough keyhole limpet (Diodora aspera) is host to the scale worm copepod Anthessius nortoni, which bites predatory starfish to discourage them from eating the limpet.
Homescars
Limpets wander over the surface of the rocks during high tide and tend to return to their favourite spot by following a trail of mucus left whilst grazing. Over a period of time the edges of the limpet's shell wear a shallow hollow in the rock called a homescar. The homescar helps the limpet to stay attached to the rock and not to dry out during low tide periods.
Bio-erosion
Limpets are known to cause bio-erosion on sedimentary rocks by the formation of homescars and by ingesting tiny particles of rock through the action of feeding. In their October 2000 research paper Limpet erosion of chalk shore platforms in southeast England, C. Andrews and R. B. G. Williams estimated, from the amount of calcium carbonate in the faeces of captive limpets, that an adult limpet ingests around 4.9 g of chalk per year. They suggest that limpets are on average responsible for 12% of the chalk platform erosion in areas that they frequent, potentially rising to more than 35% in areas where the limpet population has reached its maximum.
In culture
Many species of limpet have historically been used, or are still used, by humans for food.
Limpet mines are a type of naval mine attached to a target by magnets. They are named after the tenacious grip of the limpet.
The humorous author Edward Lear wrote "Cheer up, as the limpet said to the weeping willow" in one of his letters. Simon Grindle wrote the 1964 illustrated children's book of nonsense poetry The Loving Limpet and Other Peculiarities, said to be "in the great tradition of Edward Lear and Lewis Carroll".
In his book South, Sir Ernest Shackleton relates the stories of his twenty-two men left behind on Elephant Island harvesting limpets from the icy waters on the shore of the Southern Ocean. Near the end of their four-month stay on the island, as their stocks of seal and penguin meat dwindled, they derived a major portion of their sustenance from limpets.
The light-hearted comedy film The Incredible Mr. Limpet is about a patriotic but weak American who desperately clings to the idea of joining the U.S. military to serve his country; by the end of the film, having been transformed into a fish, he is able to use his new body to save U.S. naval vessels from disaster. Although he becomes a fish rather than a snail, his name, Limpet, hints at his tenacity.
| Biology and health sciences | Gastropods | Animals |
1738488 | https://en.wikipedia.org/wiki/Brain%20coral | Brain coral | Brain coral is a common name given to various corals in the families Mussidae and Merulinidae, so called due to their generally spheroid shape and grooved surface which resembles a brain. Each head of coral is formed by a colony of genetically identical polyps which secrete a hard skeleton of calcium carbonate; this makes them important coral reef builders like other stony corals in the order Scleractinia.
Brain corals are found in shallow warm water coral reefs in all the world's oceans. They are part of the phylum Cnidaria, in a class called Anthozoa or "flower animals". The lifespan of the largest brain corals is 900 years. Colonies can grow as large as 1.8 m (6 ft) or more in height.
Brain corals extend their tentacles to catch food at night. During the day, they use their tentacles for protection by wrapping them over the grooves on their surface. The surface is hard and offers good protection against fish or hurricanes. Branching corals, such as staghorn corals, grow more rapidly, but are more vulnerable to storm damage.
Like other genera of corals, brain corals feed on small drifting animals, and also receive nutrients provided by the algae which live within their tissues. The behavior of one of the most common genera, Favia, is semiaggressive; it will sting other corals with its extended sweeper tentacles during the night.
The grooved surface of brain corals has been used by scientists to investigate methods of giving spherical wheels appropriate grip strength.
Genera
Barabattoia Yabe and Sugiyama, 1941
Bikiniastrea Wells, 1954
Caulastraea Dana, 1846 – candy cane coral
Colpophyllia Milne-Edwards and Haime, 1848 – boulder brain coral or large-grooved brain coral
Cyphastrea Milne-Edwards and Haime, 1848
Diploastrea Matthai, 1914 – diploastrea brain coral or honeycomb coral
Diploria Milne-Edwards and Haime, 1848 – grooved brain coral
Echinopora Lamarck, 1816
Erythrastrea Pichon, Scheer and Pillai, 1983
Favia Oken, 1815
Favites Link, 1807 – moon, pineapple, brain, closed brain, star, worm, or honeycomb coral
Goniastrea Milne-Edwards and Haime, 1848
Leptastrea Milne-Edwards and Haime, 1848
Leptoria Milne-Edwards and Haime, 1848 – great star coral
Manicina Ehrenberg, 1834
Montastraea de Blainville, 1830 – great star coral
Moseleya Quelch, 1884
Oulastrea Milne-Edwards and Haime, 1848
Oulophyllia Milne-Edwards and Haime, 1848
Parasimplastrea Sheppard, 1985
Platygyra Ehrenberg, 1834
Plesiastrea Milne-Edwards and Haime, 1848
Solenastrea Milne-Edwards and Haime, 1848
Gallery
| Biology and health sciences | Cnidarians | Animals |
3307045 | https://en.wikipedia.org/wiki/Pyramid%20%28geometry%29 | Pyramid (geometry) | In geometry, a pyramid is a polyhedron formed by connecting a polygonal base and a point, called the apex. Each base edge and apex form a triangle, called a lateral face. A pyramid is a conic solid with a polygonal base. Many types of pyramids can be distinguished by the shape of the base, such as those based on a regular polygon (regular pyramids), or by cutting off the apex (truncated pyramids). The pyramid can be generalized into higher dimensions, where it is known as a hyperpyramid. All pyramids are self-dual.
Etymology
The word "pyramid" derives from the ancient Greek term "πυραμίς" (pyramis), which referred to a pyramid-shaped structure and a type of wheat cake. The term is rooted in the Greek "πυρ" (pyr, 'fire') and "άμις" (amis, 'vessel'), highlighting the shape's pointed, flame-like appearance.
In Byzantine Greek, the term evolved to "πυραμίδα" (pyramída), continuing to denote pyramid structures. The Greek term "πυραμίς" was borrowed into Latin as "pyramis." The term "πυραμίδα" influenced the evolution of the word into "pyramid" in English and other languages.
Definition
A pyramid is a polyhedron that may be formed by connecting a polygonal base and a point, called the apex. Each base edge and apex form an isosceles triangle called a lateral face. The edges connecting the polygonal base's vertices to the apex are called lateral edges. Historically, the definition of a pyramid was described by many ancient mathematicians. Euclid in his Elements defined a pyramid as a solid figure, constructed from one plane to one point. The context of his definition was vague until Heron of Alexandria defined it as the figure formed by putting a point together with a polygonal base.
A prismatoid is defined as a polyhedron where its vertices lie on two parallel planes, with its lateral faces as triangles, trapezoids, and parallelograms. Pyramids are classified as prismatoid.
Classification and types
A right pyramid is a pyramid whose base is circumscribed about a circle and whose altitude meets the base at the circle's center; otherwise, it is oblique. A pyramid may be classified based on the regularity of its base. A pyramid with a regular polygon as the base is called a regular pyramid. A pyramid with an n-sided regular base has n + 1 vertices, n + 1 faces, and 2n edges. Such a pyramid has isosceles triangles as its faces, and its symmetry group is C_nv, a symmetry of order 2n: the pyramid is symmetrical as it is rotated around its axis of symmetry (a line passing through the apex and the base centroid), and it is mirror symmetric relative to any perpendicular plane passing through a bisector of the base. Examples are the square pyramid and the pentagonal pyramid, with four and five triangular faces on a square and pentagonal base, respectively; when their faces are regular and their edges are equal in length, they are classified as the first and second Johnson solids, and their symmetries are C_4v of order 8 and C_5v of order 10, respectively. A tetrahedron or triangular pyramid is an example that has four equilateral triangles, with all edges equal in length, and one of them considered as the base. Because the faces are regular, it is an example of a Platonic solid and a deltahedron, and it has tetrahedral symmetry. A pyramid whose base is a circle is known as a cone. Pyramids are self-dual, meaning their duals are the same, with vertices corresponding to edges and vice versa. Their skeleton may be represented as a wheel graph; that is, they can be depicted as a polygon whose vertices all connect to a vertex in the center, called the universal vertex.
A right pyramid may also have an irregular polygon as its base. Examples are the pyramids with a rectangle and a rhombus as their bases. These two pyramids have the symmetry C_2v, of order 4.
The type of pyramids can be derived in many ways. The base regularity of a pyramid's base may be classified based on the type of polygon: one example is the star pyramid in which its base is the regular star polygon. The truncated pyramid is a pyramid cut off by a plane; if the truncation plane is parallel to the base of a pyramid, it is called a frustum.
Mensuration
The surface area is the total area of each polyhedra's faces. In the case of a pyramid, its surface area is the sum of the area of triangles and the area of the polygonal base.
The volume of a pyramid is one-third of the product of the base's area and the height. The pyramid's height is defined as the length of the line segment between the apex and its orthogonal projection on the base. Given that B is the base's area and h is the height of a pyramid, the volume of the pyramid is:
V = (1/3)Bh
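For example, a pyramid with a square base of side length 4 (so B = 16) and height 6 has volume V = (1/3)(16)(6) = 32.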
The volume of a pyramid was recorded back in ancient Egypt, where the Egyptians calculated the volume of a square frustum, suggesting that they were acquainted with the volume of a square pyramid. A formula for the volume of a general pyramid was given by the Indian mathematician Aryabhata, who stated incorrectly in his Aryabhatiya that the volume of a pyramid is half the product of the base's area and the height.
Generalization
The hyperpyramid is the generalization of a pyramid to n-dimensional space. In the case of the ordinary pyramid, one connects all vertices of the base, a polygon in a plane, to a point outside the plane, which is the peak. The pyramid's height is the distance of the peak from the plane. This construction is generalized to n dimensions. The base becomes an (n − 1)-dimensional polytope in an (n − 1)-dimensional hyperplane. A point called the apex is located outside the hyperplane and is connected to all the vertices of the polytope, and the distance of the apex from the hyperplane is called the height.
The n-dimensional volume of an n-dimensional hyperpyramid can be computed as follows:
V_n = (A · h) / n
Here V_n denotes the n-dimensional volume of the hyperpyramid, A denotes the (n − 1)-dimensional volume of the base, and h denotes the height, that is, the distance between the apex and the (n − 1)-dimensional hyperplane containing the base.
| Mathematics | Three-dimensional space | null |
3308069 | https://en.wikipedia.org/wiki/Thai%20basil | Thai basil | Thai basil is a type of basil native to Southeast Asia that has been cultivated to provide distinctive traits. Widely used throughout Southeast Asia, its flavor, described as anise- and licorice-like and slightly spicy, is more stable under high or extended cooking temperatures than that of sweet basil. Thai basil has small, narrow leaves, purple stems, and pink-purple flowers.
Description
Thai basil is sturdy and compact, growing up to , and has shiny green, slightly serrated, narrow leaves with a sweet, anise-like scent and hints of licorice, along with a slight spiciness lacking in sweet basil. Thai basil has a purple stem, and like other plants in the mint family, the stem is square. Its leaves are opposite and decussate. As implied by its scientific name, Thai basil flowers in the form of a thyrse. The inflorescence is purple, and the flowers when open are pink.
Taxonomy and nomenclature
Sweet basil (Ocimum basilicum) has multiple cultivars — Thai basil, O. basilicum var. thyrsiflora, is one variety. Thai basil may sometimes be called chi neang vorng, anise basil or licorice basil, in reference to its anise- and licorice-like scent and taste, but it is different from the Western strains bearing these same names.
Occasionally, Thai basil may be called cinnamon basil, which is its literal name in Vietnamese, but cinnamon basil typically refers to a separate cultivar.
The genus name Ocimum is derived from the Greek word meaning "to smell", which is appropriate for most members of family Lamiaceae. With over 40 cultivars of basil, this abundance of flavors, aromas, and colors leads to confusion when identifying specific cultivars.
Three types of basil are commonly used in Thai cuisine.
Thai basil, or horapha (), is widely used throughout Southeast Asia and plays a prominent role in Vietnamese cuisine. It is the cultivar most often used for Asian cooking in Western kitchens.
Holy basil (O. tenuiflorum), mreah-pruv (), or kaphrao (), which has a spicy, peppery, clove-like taste, may be the basil Thai people love most. It is also known as Thai holy basil or by its Indian name, tulasi or tulsi; it is widely used in India for culinary, medicinal, and religious purposes.
Lemon basil (O. × citriodorum), or maenglak (), as its name implies, has undertones of lemon in scent and taste. Lemon basil is the least commonly used type of basil in Thailand. It is also known as Thai lemon basil, in contradistinction to Mrs. Burns' Lemon basil, another cultivar.
In Taiwan the Thai basil is called káu-chàn-thah (), which literally means "nine-storey pagoda".
Uses
Culinary
Thai basil is widely used in the cuisines of Southeast Asia, including Thai, Vietnamese, Lao, and Cambodian cuisines. Thai basil leaves are a frequent ingredient in Thai green and red curries, though in Thailand the basil used in drunken noodles and many chicken, pork, and seafood dishes is holy basil. In the West, however, such dishes typically contain Thai basil instead, which is much more readily available than holy basil. Thai basil is also an important ingredient in the very popular Taiwanese dish sanbeiji (three-cup chicken). Used as a condiment, a plate of raw Thai basil leaves is often served as an accompaniment to many Vietnamese dishes, such as phở (Southern style), bún bò Huế, or bánh xèo, so that each person can season to taste with the anise-flavored leaves.
Cultivation
Thai basil is a tender perennial but is typically grown as an annual. As a tropical plant, Thai basil is hardy only in very warm climates where there is no chance of frost. It is generally hardy to USDA plant hardiness zone 10. Thai basil, which can be grown from seed or cuttings, requires fertile, well-draining soil with a pH ranging from 6.5 to 7.5 and 6 to 8 hours of full sunlight per day. The flowers should be pinched to prevent the leaves from becoming bitter. Thai basil can be repeatedly harvested by taking a few leaves at a time and should be harvested periodically to encourage regrowth.
| Biology and health sciences | Herbs and spices | Plants |
3309189 | https://en.wikipedia.org/wiki/Oriental%20giant%20squirrel | Oriental giant squirrel | Oriental giant squirrels are cat-sized tree squirrels from the genus Ratufa in the subfamily Ratufinae. They are a distinctive element of the fauna of south and southeast Asia.
Species
There are four living species of oriental giant squirrels: the cream-coloured giant squirrel (Ratufa affinis), the black giant squirrel (Ratufa bicolor), the Indian giant squirrel (Ratufa indica), and the grizzled giant squirrel (Ratufa macroura).
In prehistoric times this lineage was more widespread. For example, animals very similar to Ratufa and possibly belonging to this genus, at least belonging to the Ratufinae, were part of the early Langhian (Middle Miocene, some 16–15.2 million years ago) Hambach fauna of Germany.
| Biology and health sciences | Rodents | Animals |
3311468 | https://en.wikipedia.org/wiki/List%20of%20copper%20alloys | List of copper alloys | Copper alloys are metal alloys that have copper as their principal component. They have high resistance against corrosion. Of the large number of different types, the best known traditional types are bronze, where tin is a significant addition, and brass, using zinc instead. Both of these are imprecise terms. Latten is a further term, mostly used for coins with a very high copper content. Today the term copper alloy tends to be substituted for all of these, especially by museums.
Copper deposits are abundant in most parts of the world (globally 70 parts per million), and it has therefore always been a relatively cheap metal. By contrast, tin is relatively rare (2 parts per million); in Europe and the Mediterranean region, even in prehistoric times, it had to be traded over considerable distances and was expensive, sometimes virtually unobtainable. Zinc is even more common at 75 parts per million, but is harder to extract from its ores. Bronze with the ideal percentage of tin was therefore expensive, and the proportion of tin was often reduced to save cost. The discovery and exploitation of the Bolivian tin belt in the 19th century made tin far cheaper, although forecasts for future supplies are less positive.
There are as many as 400 different copper and copper alloy compositions loosely grouped into the categories: copper, high copper alloy, brasses, bronzes, cupronickel, copper–nickel–zinc (nickel silver), leaded copper, and special alloys.
Composition
The similarity in external appearance of the various alloys, along with the different combinations of elements used when making each alloy, can lead to confusion when categorizing the different compositions. The following table lists the principal alloying element for four of the more common types used in modern industry, along with the name for each type. Historical types, such as those that characterize the Bronze Age, are vaguer as the mixtures were generally variable.
The following table outlines the chemical composition of various grades of copper alloys.
Brasses
A brass is an alloy of copper with zinc. Brasses are usually yellow in colour. The zinc content can vary from a few percent to about 40%; as long as it is kept under 15%, it does not markedly decrease the corrosion resistance of copper.
Brasses can be sensitive to selective leaching corrosion under certain conditions, when zinc is leached from the alloy (dezincification), leaving behind a spongy copper structure.
Nordic Gold
Bronzes
A bronze is an alloy of copper and other metals, most often tin, but also aluminium and silicon.
Aluminium bronzes are alloys of copper and aluminium. The content of aluminium ranges mostly between 5% and 11%. Iron, nickel, manganese and silicon are sometimes added. They have higher strength and corrosion resistance than other bronzes, especially in marine environment, and have low reactivity to sulphur compounds. Aluminium forms a thin passivation layer on the surface of the metal.
Bell metal
Brastil
Phosphor bronze
Nickel bronzes, e.g. nickel silver and cupronickel
Speculum metal
UNS C69100
Precious metal alloys
Copper is often alloyed with precious metals like gold (Au) and silver (Ag).
| Physical sciences | Copper alloys | Chemistry |
3311599 | https://en.wikipedia.org/wiki/Savoy%20cabbage | Savoy cabbage | Savoy cabbage (Brassica oleracea var. sabauda L. or Brassica oleracea Savoy Cabbage Group) is a variety or cultivar group of the plant species Brassica oleracea. Savoy cabbage is a winter vegetable and one of several cabbage varieties. It has crinkled, emerald green leaves, which are crunchy with a slightly elastic consistency on the palate.
Named after the Savoy region in France, it is also known as Milan cabbage () or Lombard cabbage (), after Milan and its Lombardy region in Italy. Known cultivars include 'Savoy King' (in the US), 'Tundra' (green with a firm, round heart) and 'Winter King' (with dark crumpled leaves).
Uses
Savoy cabbage maintains a firm texture when cooked, which is desired in some recipes. Savoy cabbage can be used in a variety of ways. It pairs well with white wine, apples, spices, horseradish and meat. It can be used for roulades, in stews and soups, such as borscht, as well as roasted plain and drizzled with olive oil. It can be used in preserved recipes such as kimchi or sauerkraut, and with strong and unusual seasonings such as juniper.
Signs of desirable quality include cabbage that is heavy for its size with leaves that are unblemished and have a bright, fresh look. Peak season for most cabbages in the Northern Hemisphere runs from November through April.
Fresh whole cabbage will keep in the refrigerator for one to six weeks depending on type and variety. Hard green, white or red cabbages will keep the longest, while the looser Savoy and Chinese varieties such as bok choy need to be consumed more quickly. When storing, keep the outer leaves intact and unwashed, since moisture hastens decay.
Savoy can be difficult to grow as it is vulnerable to caterpillars, pigeons, and club root disease. It does best in full sun, and is winter-hardy, able to tolerate the cold, frost, and snow.
Nutrition
Raw Savoy cabbage is 91% water, 6% carbohydrates, 2% protein, and contains negligible fat (table). In a reference amount of , it supplies 27 calories, and is a rich source (20% or more of the Daily Value, DV) of vitamin K (66% DV), vitamin C (37% DV), and folate (20% DV), with a moderate amount of vitamin B6 (15% DV). There are no other micronutrients in significant content (table).
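As a small worked illustration of how the percent Daily Value figures above are derived, the sketch below simply divides a nutrient amount by its reference Daily Value. The vitamin C amount and the 90 mg Daily Value used here are assumed illustrative numbers chosen to reproduce the 37% DV quoted in the text, not values taken from the article's (missing) nutrition table.

```python
# Illustrative %DV arithmetic: percent_dv = amount_in_serving / daily_value * 100.
# The numbers below are assumed for illustration only.

def percent_daily_value(amount: float, daily_value: float) -> float:
    """Percent of the Daily Value supplied by a given nutrient amount."""
    return 100.0 * amount / daily_value

vitamin_c_mg = 33.0        # assumed vitamin C content per reference amount (mg)
vitamin_c_dv_mg = 90.0     # assumed adult Daily Value for vitamin C (mg)
print(f"Vitamin C: {percent_daily_value(vitamin_c_mg, vitamin_c_dv_mg):.0f}% DV")  # ~37% DV
```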
| Biology and health sciences | Leafy vegetables | Plants |
11484716 | https://en.wikipedia.org/wiki/Soil%20vapor%20extraction | Soil vapor extraction | Soil vapor extraction (SVE) is a physical treatment process for in situ remediation of volatile contaminants in vadose zone (unsaturated) soils (EPA, 2012). SVE (also referred to as in situ soil venting or vacuum extraction) is based on mass transfer of contaminant from the solid (sorbed) and liquid (aqueous or non-aqueous) phases into the gas phase, with subsequent collection of the gas phase contamination at extraction wells. Extracted contaminant mass in the gas phase (and any condensed liquid phase) is treated in aboveground systems. In essence, SVE is the vadose zone equivalent of the pump-and-treat technology for groundwater remediation. SVE is particularly amenable to contaminants with higher Henry’s Law constants, including various chlorinated solvents and hydrocarbons. SVE is a well-demonstrated, mature remediation technology and has been identified by the U.S. Environmental Protection Agency (EPA) as a presumptive remedy.
SVE Configuration
The soil vapor extraction remediation technology uses vacuum blowers and extraction wells to induce gas flow through the subsurface, collecting contaminated soil vapor, which is subsequently treated aboveground. SVE systems can rely on gas inflow through natural routes or specific wells may be installed for gas inflow (forced or natural). The vacuum extraction of soil gas induces gas flow across a site, increasing the mass transfer driving force from aqueous (soil moisture), non-aqueous (pure phase), and solid (soil) phase into the gas phase. Air flow across a site is thus a key aspect, but soil moisture and subsurface heterogeneity (i.e., a mixture of low and high permeability materials) can result in less gas flow across some zones. In some situations, such as enhancement of monitored natural attenuation, a passive SVE system that relies on barometric pumping may be employed.
SVE has several advantages as a vadose zone remediation technology. The system can be implemented with standard wells and off-the-shelf equipment (blowers, instrumentation, vapor treatment, etc.). SVE can also be implemented with a minimum of site disturbance, primarily involving well installation and minimal aboveground equipment. Depending on the nature of the contamination and the subsurface geology, SVE has the potential to treat large soil volumes at reasonable costs.
The soil gas (vapor) that is extracted by the SVE system generally requires treatment prior to discharge back into the environment. The aboveground treatment is primarily for a gas stream, although condensation of liquid must be managed (and in some cases may specifically be desired). A variety of treatment techniques are available for aboveground treatment and include thermal destruction (e.g., direct flame thermal oxidation, catalytic oxidizers), adsorption (e.g., granular activated carbon, zeolites, polymers), biofiltration, non-thermal plasma destruction, photolytic/photocatalytic destruction, membrane separation, gas absorption, and vapor condensation. The most commonly applied aboveground treatment technologies are thermal oxidation and granular activated carbon adsorption. The selection of a particular aboveground treatment technology depends on the contaminant, concentrations in the offgas, throughput, and economic considerations.
SVE Effectiveness
The effectiveness of SVE, that is, the rate and degree of mass removal, depends on a number of factors that influence the transfer of contaminant mass into the gas phase. The effectiveness of SVE is a function of the contaminant properties (e.g., Henry’s Law constant, vapor pressure, boiling point, adsorption coefficient), temperature in the subsurface, vadose zone soil properties (e.g., soil grain size, soil moisture content, soil permeability, soil carbon content), subsurface heterogeneity, and the air flow driving force (applied pressure gradient). As an example, a residual quantity of a highly volatile contaminant (such as trichloroethene) in a homogeneous sand with high permeability and low carbon content (i.e., low/negligible adsorption) will be readily treated with SVE. In contrast, a heterogeneous vadose zone with one or more clay layers containing residual naphthalene would require a longer treatment time and/or SVE enhancements. SVE effectiveness issues include tailing and rebound, which result from contaminated zones with lower air flow (i.e., low permeability zones or zones of high moisture content) and/or lower volatility (or higher adsorption). Recent work at U.S. Department of Energy sites has investigated layering and low permeability zones in the subsurface and how they affect SVE operations.
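To make the role of the Henry's Law constant concrete, here is a minimal sketch of equilibrium partitioning between soil pore water and soil gas. The dimensionless constant for trichloroethene (roughly 0.4 near 25 °C) is an approximate literature value and the pore-water concentration is an assumed example; real SVE design also accounts for sorption, soil moisture, and non-equilibrium effects.

```python
# Illustrative sketch (not a design tool): equilibrium partitioning of a volatile
# contaminant between pore water and soil gas using a dimensionless Henry's Law
# constant, H_cc = C_gas / C_aq.  The TCE constant and the pore-water
# concentration below are assumed example figures.

def gas_phase_concentration(c_aq_mg_per_L: float, h_cc: float) -> float:
    """Equilibrium soil-gas concentration (mg per litre of gas) for a given
    pore-water concentration (mg per litre of water)."""
    return h_cc * c_aq_mg_per_L

c_aq = 5.0          # dissolved TCE in pore water, mg/L (assumed)
h_cc_tce = 0.4      # dimensionless Henry's constant for TCE near 25 deg C (approximate)
c_gas = gas_phase_concentration(c_aq, h_cc_tce)
print(f"Equilibrium soil-gas concentration: {c_gas:.2f} mg/L")
# A higher Henry's constant (a more volatile contaminant) or a warmer subsurface
# shifts more mass into the gas phase, which is why SVE favours such contaminants.
```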
Enhancement of SVE
Enhancements for improving the effectiveness of SVE can include directional drilling, pneumatic and hydraulic fracturing, and thermal enhancement (e.g., hot air or steam injection). Directional drilling and fracturing enhancements are generally intended to improve the gas flow through the subsurface, especially in lower permeability zones. Thermal enhancements such as hot air or steam injection increase the subsurface soil temperature, thereby improving the volatility of the contamination. In addition, injection of hot (dry) air can remove soil moisture and thus improve the gas permeability of the soil. Additional thermal technologies (such as electrical resistance heating, six-phase soil heating, radio-frequency heating, or thermal conduction heating) can be applied to the subsurface to heat the soil and volatilize/desorb contaminants, but these are generally viewed as separate technologies (versus a SVE enhancement) that may use vacuum extraction (or other methods) for collecting soil gas.
Design, Optimization, Performance Assessment, and Closure
On selection as a remedy, implementation of SVE involves the following elements: system design, operation, optimization, performance assessment, and closure. Several guidance documents provide information on these implementation aspects. EPA and U.S. Army Corps of Engineers (USACE) guidance documents establish an overall framework for design, operation, optimization, and closure of a SVE system. The Air Force Center for Engineering and the Environment (AFCEE) guidance presents actions and considerations for SVE system optimization, but has limited information related to approaches for SVE closure and meeting remediation goals. Guidance from the Pacific Northwest National Laboratory (PNNL) supplements these documents by discussing specific actions and decisions related to SVE optimization, transition, and/or closure.
Design and operation of a SVE system is relatively straightforward, with the major uncertainties having to do with subsurface geology/formation characteristics and the location of contamination. As time goes on, it is typical for a SVE system to exhibit a diminishing rate of contaminant extraction due to mass transfer limitations or removal of contaminant mass. Performance assessment is a key aspect to provide input for decisions about whether the system should be optimized, terminated, or transitioned to another technology to replace or augment SVE. Assessment of rebound and mass flux provide approaches to evaluate system performance and obtain information on which to base decisions.
Related Technologies
Several technologies are related to soil vapor extraction. As noted above, various soil-heating remediation technologies (e.g., electrical resistive heating, in situ vitrification) require a soil gas collection component, which may take the form of SVE and/or a surface barrier (i.e., hood). Bioventing is a related technology, the goal of which is to introduce additional oxygen (or possibly other reactive gases) into the subsurface to stimulate biological degradation of the contamination. In situ air sparging is a remediation technology for treating contamination in groundwater. Air is injected and "sparged" through the groundwater and then collected via soil vapor extraction wells.
| Technology | Environmental remediation | null |
7774494 | https://en.wikipedia.org/wiki/Simosuchus | Simosuchus | Simosuchus is an extinct genus of notosuchian crocodyliforms from the Late Cretaceous of Madagascar. It is named for its unusually short skull. Fully grown individuals were about in length. The type species is Simosuchus clarki, found from the Maevarano Formation in Mahajanga Province, although one isolated multicuspid tooth of this genus was discovered in Kallamedu Formation of India.
The teeth of S. clarki were shaped like maple leaves, which coupled with its short and deep snout suggests it was not a carnivore like most other crocodylomorphs. In fact, these features have led many palaeontologists to consider it a herbivore.
History
The first specimen of Simosuchus clarki, which served as the basis for its initial description in 2000, included a complete skull and lower jaw, the front of the postcranial skeleton, and parts of the posterior postcranial skeleton. Five more specimens were later described, representing the majority of the skeleton. Many isolated teeth have also been found in the Mahajanga Basin. Most remains of Simosuchus were found as part of the Mahajanga Basin Project, directed by the Université d'Antananarivo and Stony Brook University. Material was usually found in clays that were part of flow deposits in the Anembalemba Member of the Maevarano Formation.
Description
Simosuchus was small, about long based on the skeletons of mature individuals. In contrast to most other crocodyliforms, which have long, low skulls, Simosuchus has a distinctively short snout. The snout resembles that of a pug, giving the genus its name, which means "pug-nosed crocodile" in Greek. The shape of skulls differs considerably between specimens, with variation in ornamentation and bony projections. These differences may be indications of sexual dimorphism. The front (preorbital) portion of the skull is angled downwards. Simosuchus likely held its head so that the preorbital area was angled about 45° from horizontal. The teeth line the front of the jaws and are shaped like maple leaves. At the back of the skull, the occipital condyle (which articulates with the neck vertebrae) is downturned. Forty-five autapomorphies, or features unique to Simosuchus, can be found in the skull alone.
In most respects, the postcranial skeleton of Simosuchus resembles that of other terrestrial crocodyliforms. There are several differences, however, that have been used to distinguish it from related forms. The scapula is broad and tripartite (three-pronged). On its surface, there is a laterally directed prominence. The deltopectoral crest, a crest on the upper end of the humerus, is small. The glenohumeral condyle of the humerus, which connects to the pectoral girdle in the shoulder joint, has a distinctive rounded ellipsoid shape. The limbs are robust. The radius and ulna of the forearm fit tightly together. The front feet are small with large claws, and the back feet are also reduced in size. There is a small crest along the anterior edge of the femur. On the pelvis, the anterior process of the ischium is spur-like.
Most of the spinal column of Simosuchus is known. There are eight cervical vertebrae in the neck, at least fifteen dorsal vertebrae in the back, two sacral vertebrae at the hip, and no more than twenty caudal vertebrae in the tail. The number of vertebrae in the tail is less than that of most crocodyliforms, giving Simosuchus a very short tail.
Like other crocodyliforms, Simosuchus was covered in bony plates called osteoderms. These form shields over the back, underside, and tail. Unusually among crocodyliforms, Simosuchus also has osteoderms covering much of the limbs. Osteoderms covering the back, tail, and limbs are light and porous, while the osteoderms covering the belly are plate-like and have an inner structure resembling spongy diploë. Simosuchus has a tetraserial paravertebral shield over its back, meaning that there are four rows of tightly locking paramedial osteoderms (osteoderms to either side of the midline of the back). To either side of the shield, there are four rows of accessory parasagittal osteoderms. These accessory osteoderms tightly interlock with one another.
Classification
Simosuchus was first considered to be a basal member of the clade Notosuchia, and was often considered to be closely related to Uruguaysuchus from the Late Cretaceous of Uruguay and Malawisuchus from the Early Cretaceous of Malawi. Later phylogenetic studies have placed it closer to the genus Libycosuchus and in a more derived position than some other notosuchians such as Uruguaysuchus. In its initial description by Buckley et al. (2000), Simosuchus was placed in the family Notosuchidae. Its sister taxon was Uruguaysuchus, and the two were allied with Malawisuchus. These taxa were placed in Notosuchidae along with Libycosuchus and Notosuchus. Most of the following phylogenetic analyses resulted in a similar placement of Simosuchus and other genera within Notosuchia. Turner and Calvo (2005) also found a clade including Simosuchus, Uruguaysuchus, and Malawisuchus in their study.
The phylogenetic analysis of Carvalho et al. (2004), based on different character values than previous studies, produced a very different relationship among Simosuchus and other notosuchians. Simosuchus, along with Uruguaysuchus and Comahuesuchus, were placed outside Notosuchia. Simosuchus was found to be the sister taxon of the Chinese genus Chimaerasuchus in the family Chimaerasuchidae. Like Simosuchus, Chimaerasuchus has a short snout and was probably herbivorous. Both genera were placed outside Notosuchia in the larger clade Gondwanasuchia. Uruguaysuchus, previously considered to be a basal notosuchian and a close relative of Simosuchus, was placed in its own family, Uruguaysuchidae, also outside Notosuchia. Malawisuchus was found to be a member of Peirosauroidea, specifically a member of the family Itasuchidae.
The following cladogram is simplified after a comprehensive analysis of notosuchians focused on Simosuchus clarki, presented by Alan H. Turner and Joseph J. W. Sertich in 2010.
*Note: Based on a specimen that was reassigned from Peirosaurus.
Paleobiology
Simosuchus was probably a herbivore, with its complex dentition resembling that of herbivorous iguanids. Like other notosuchians, it was fully terrestrial, and the short tail would have had little use in swimming.
The osteoderm shield was inflexible, restricting lateral movement in Simosuchus as a possible adaptation to an entirely terrestrial lifestyle. Robust legs are also consistent with terrestrial locomotion. The deltopectoral crest on the humerus and the anterior crest on the femur served as attachment points for strong limb muscles. The hindlimbs of Simosuchus were semierect, unlike the fully erect posture of most other notosuchians.
Possible fossorial lifestyle
A fossorial, or burrowing, lifestyle for Simosuchus has been suggested in its initial description based on the robust limbs and short snout, which appears shovel-like, and the underslung lower jaw that would prevent friction when the animal opens its jaws during burrowing. There are also areas on the skull that may have attached to strong neck muscles that would have been well suited for burrowing.
In 2010, Kley and colleagues argued against this hypothesis on the basis of its morphology, which does not show adaptations for head-first burrowing; its relatively large, wide head would be disadvantageous and contrasts with modern head-first burrowers. They also pointed out that the snout is not shovel-like, since it lacks a "dorsoventrally narrowed cutting edge" and instead has a "relatively expansive anterior surface" that is almost completely flat. The lower jaw also would not be "shielded" to a significant extent from the reaction force during burrowing; even the shielded portion, the premaxillary teeth, which are crucial for feeding, would not have been able to endure the "extreme loading regimes associated with head-first burrowing." The upper jaw was also suggested to be poorly adapted for keeping the mouth shut against the reaction forces associated with burrowing. Its relatively large orbits were also inconsistent with a burrowing lifestyle, since large eyes risk being damaged in head-first burrowing.
In the same year, Sertich and Groenke noted that while the appendicular skeleton, including the scapula and the forelimb, does not show specific morphological adaptations for digging, this does not exclude the idea that Simosuchus was adept at burrowing, since many extant crocodylians that are able to burrow likewise show none of these specializations. Georgi and Krause, also in the same year, agreed that the burrowing hypothesis could not be confidently excluded and argued that it needs more extensive comparative and functional analyses. They also suggested that its relatively short neck would have been a potential advantage for moving through a dense medium such as a tunnel, and that it may on occasion have used its head to aid scratch-digging with its limbs.
Paleobiogeography
It is unknown how Simosuchus arrived in Madagascar. A similar crocodyliform, Araripesuchus tsangatsangana, is also known from the Maevarano Formation, but its relation to Simosuchus is unclear. It has been classified as both a notosuchian and a basal neosuchian in various phylogenetic analyses. Nearly all notosuchians are known from Gondwana, the southern supercontinent that existed throughout much of the Mesozoic and encompassed South America, Africa, India, Australia, and Antarctica. Libycosuchus, regarded as one of the closest relatives of Simosuchus, lived in Egypt. Unresolved relationships among notosuchians along with an incomplete fossil record have made it difficult to determine the biogeographic origins of Simosuchus.
| Biology and health sciences | Prehistoric crocodiles | Animals |
7780536 | https://en.wikipedia.org/wiki/White%20tiger | White tiger | The white tiger is a leucistic morph of the tiger. It is occasionally reported in the Indian wilderness. It has the typical black stripes of a tiger, but its coat is otherwise white or near-white, and it has blue eyes.
Variation
The white Bengal tigers are distinctive due to the color of their fur. The white fur is caused by a lack of the pigment pheomelanin, which is found in Bengal tigers with orange color fur. When compared to Bengal tigers, the white Bengal tigers tend to grow faster and become heavier than the orange Bengal tiger. They also tend to be somewhat bigger at birth, and as fully grown adults. White Bengal tigers are fully grown when they are 2–3 years of age. White male tigers reach weights of and can grow up to in length. As with all tigers, the white Bengal tiger's stripes are like fingerprints, with no two tigers having the same pattern. The stripes of the tiger are a pigmentation of the skin; if an individual were to be shaved, its distinctive coat pattern would still be visible.
For a white Bengal tiger to be born, both parents must carry the unusual gene for white colouring, which happens naturally only about once in 10,000 births. Dark-striped white individuals are well-documented in the Bengal tiger subspecies (Panthera tigris tigris) as well as having been reported historically in several other subspecies. Currently, several hundred white tigers are in captivity worldwide, with about one hundred being found in India. Their unique colouring has made them popular in entertainment showcasing exotic animals, and at zoos. Their rarity could be because the recessive allele is the result of a one-time mutation, or because white tigers lack adequate camouflage, reducing their ability to stalk prey or avoid other predators.
Genetics
A white tiger's pale coloration is due to the lack of the red and yellow pheomelanin pigments that normally produce the orange coloration. This had long been attributed to a mutation in the gene for the tyrosinase (TYR) enzyme. A knockout mutation in this gene results in albinism, the ability to make neither pheomelanin (red and yellow pigments) nor eumelanin (black and brown pigments), while a less severe mutation in the same gene in other mammals results in selective loss of pheomelanin, the so-called Chinchilla trait. The white phenotype in tigers had been attributed to such a Chinchilla mutation in tyrosinase, and in the past white tigers were sometimes referred to as 'partial albinos'. While whole genome sequencing determined that such a TYR mutation is responsible for the white lion leucistic variant, a normal TYR gene was found in both white tigers and snow leopards. Instead, in white tigers, a naturally-occurring point mutation in the SLC45A2 transport protein gene was found to underlie its pigmentation. The resultant single amino acid substitution introduces an alanine residue that protrudes into the transport protein's central passageway, apparently blocking it, and by a mechanism yet to be determined, this prevents pheomelanin expression in the fur. Mutations in the same gene are known to result in 'cream' coloration in horses, and play a role in the paler skin of humans of European descent. This is a recessive trait, meaning that it is seen only in individuals that are homozygous for this mutation, and that while the progeny of white tigers will all be white, white tigers can be also bred from colored Bengal tiger pairs in which each possesses a single copy of the unique mutation. Inbreeding promotes recessive traits and has been used as a strategy to produce white tigers in captivity, but this has also resulted in a range of other genetic defects.
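A minimal sketch of the inheritance arithmetic implied above: under an idealised Hardy-Weinberg assumption of random mating, the roughly one-in-10,000 white-birth frequency quoted earlier corresponds to a recessive allele frequency of about 1%, and a cross of two orange carriers yields white cubs about a quarter of the time. The random-mating assumption is an idealisation for illustration, not a claim about wild tiger populations.

```python
# Illustrative sketch of the recessive inheritance described above.
# Assumes the ~1 in 10,000 white-birth frequency quoted in the text and
# idealised random mating (Hardy-Weinberg); real populations are not panmictic.
import math

white_birth_freq = 1 / 10_000          # frequency of homozygous (white) births
q = math.sqrt(white_birth_freq)        # frequency of the recessive white allele
p = 1 - q
carrier_freq = 2 * p * q               # heterozygous (orange carrier) frequency

print(f"Recessive allele frequency q ~ {q:.3f}")          # about 0.01
print(f"Carrier (orange, one copy) frequency ~ {carrier_freq:.2%}")  # about 2%

# Mendelian cross of two orange carriers (Aa x Aa): one quarter of cubs are white (aa).
print("Expected white cubs from two carrier parents: 25%")
```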
The stripe color varies due to the influence and interaction of other genes. Another genetic characteristic makes the stripes of the tiger very pale; white tigers of this type are called snow-white or "pure white". White tigers, Siamese cats, and Himalayan rabbits have enzymes in their fur which react to temperature, causing them to grow darker in the cold. In the Bristol Zoo, a white tiger named Mohini was whiter than her relatives, who showed more cream tones. This may have been because she spent less time outdoors in the winter. Kailash Sankhala observed that white tigers were always whiter in Rewa State, even when they were born in New Delhi and returned there. "In spite of living in a dusty courtyard, they were always snow white." A weakened immune system is directly linked to reduced pigmentation in white tigers.
Stripeless tigers
An additional genetic condition can result in near-complete absence of stripes, making the tiger almost pure white. One such specimen was exhibited at Exeter Change in England in 1820, and described by Georges Cuvier as "A white variety of Tiger is sometimes seen, with the stripes very opaque, and not to be observed except in certain angles of light." Naturalist Richard Lydekker said that, "a white tiger, in which the fur was of a creamy tint, with the usual stripes faintly visible in certain parts, was exhibited at the old menagerie at Exeter Change about the year 1820." Hamilton Smith said, "A wholly white tiger, with the stripe-pattern visible only under reflected light, like the pattern of a white tabby cat, was exhibited in the Exeter Change Menagerie in 1820" and John George Wood stated that, "a creamy white, with the ordinary tigerine stripes so faintly marked that they were only visible in certain lights." Edwin Henry Landseer also drew this tigress in 1824.
The modern strain of snow white tigers came from repeated brother–sister matings of Bhim and Sumita at Cincinnati Zoo. The gene involved may have come from a Siberian tiger, their part-Siberian ancestor Tony. Continued inbreeding appears to have caused a recessive gene to become homozygous and produce the stripeless phenotype. About one fourth of Bhim and Sumita's offspring were stripeless. Their striped white offspring, which have been sold to zoos around the world, may also carry the gene for the stripeless trait. Because Tony's genome is present in many white tiger pedigrees, the gene may also be present in other captive white tigers. As a result, stripeless white tigers have appeared in zoos as far afield as the Czech Republic (Liberec), Spain and Mexico. Stage magicians Siegfried & Roy were the first to attempt to selectively breed for stripeless tigers; they owned snow-white Bengal tigers taken from Cincinnati Zoo (Tsumura, Mantra, Mirage and Akbar-Kabul) and Guadalajara, Mexico (Vishnu and Jahan), as well as a stripeless Siberian tiger called Apollo.
In 2004, a blue-eyed, stripeless white tiger was born in a wildlife refuge in Alicante, Spain. Its parents are normal orange Bengals. The cub was named "Artico" ("Arctic").
Defects
Outside of India, inbred white tigers have been prone to crossed eyes, a condition known as strabismus, due to incorrectly routed visual pathways in the brains of white tigers. When stressed or confused, all white tigers cross their eyes. Strabismus is associated with white tigers of mixed Bengal and Siberian ancestry. The only pure-Bengal white tiger reported to be cross-eyed was Mohini's daughter Rewati. Strabismus is directly linked to the white gene and is not a separate consequence of inbreeding.
The orange litter-mates of white tigers are not prone to strabismus. Siamese cats and albinos of every species which have been studied all exhibit the same visual pathway abnormality found in white tigers. Siamese cats are also sometimes cross-eyed, as are some albino ferrets. The visual pathway abnormality was first documented in white tigers in the brain of a white tiger called Moni after he died, although his eyes were of normal alignment. The abnormality is that there is a disruption in the optic chiasm. The examination of Moni's brain suggested the disruption is less severe in white tigers than it is in Siamese cats. Because of the visual pathway abnormality, by which some optic nerves are routed to the wrong side of the brain, white tigers have a problem with spatial orientation, and bump into things until they learn to compensate. Some tigers compensate by crossing their eyes. When the neurons pass from the retina to the brain and reach the optic chiasma, some cross and some do not, so that visual images are projected to the wrong hemisphere of the brain. White tigers cannot see as well as normal tigers and suffer from photophobia, like albinos.
Other genetic problems include shortened tendons of the forelegs, club foot, kidney problems, arched or crooked backbone and twisted neck. Reduced fertility and miscarriages, noted by "tiger man" Kailash Sankhala in pure-Bengal white tigers, were attributed to inbreeding depression. A condition known as "star-gazing" (the head and neck are raised almost straight up, as if the affected animal is gazing at the stars), which is associated with inbreeding in big cats, has also been reported in white tigers.
There was a male cross-eyed white tiger at the Pana'ewa Rainforest Zoo in Hawaii, which was donated to the zoo by Las Vegas magician Dirk Arthur. There is a picture of a white tiger which appears to be cross-eyed on just one side in the book Siegfried and Roy: Mastering the Impossible. A white tiger, named Scarlett O'Hara, who was Tony's sister, was cross-eyed only on the right side.
A male tiger named 'Cheytan', a son of Bhim and Sumita who was born at the Cincinnati Zoo, died at the San Antonio Zoo in 1992, from anaesthesia complications during root canal therapy. It appears that white tigers also react strangely to anaesthesia. The best drug for immobilizing a tiger is CI 744, but a few tigers, white ones in particular, undergo a re-sedation effect 24–36 hours later. This is due to their inability to produce normal tyrosinase, a trait they share with albinos, according to zoo veterinarian David Taylor. He treated a pair of white tigers from the Cincinnati Zoo at Fritz Wurm's safari park in Stukenbrock, Germany, for salmonella poisoning, which reacted strangely to the anaesthesia.
Mohini was checked for Chédiak–Higashi syndrome in 1960, but the results were inconclusive. This condition is similar to albino mutations and causes bluish lightening of the fur color, crossed eyes, and prolonged bleeding after surgery. Also, in the event of an injury, the blood is slow to coagulate. This condition has been observed in domestic cats, but there has never been a case of a white tiger having Chédiak–Higashi syndrome. There has been a single case of a white tiger having central retinal degeneration, reported from the Milwaukee County Zoo, which could be related to reduced pigmentation in the eye. The white tiger in question was a male named Mota on loan from the Cincinnati Zoo.
There is a myth that white tigers have an 80% infant mortality rate. However, the infant mortality rate for white tigers is no higher than it is for normal orange tigers bred in captivity. Cincinnati Zoo director Ed Maruska said:
"We have not experienced premature death among our white tigers. Forty-two animals born in our collection are still alive. Mohan, a large white tiger, died just short of his 20th birthday, an enviable age for a male of any subspecies, since most males live shorter captive lives. Premature deaths in other collections may be artifacts of captive environmental conditions...in 52 births we had four stillbirths, one of which was an unexplained loss. We lost two additional cubs from viral pneumonia, which is not excessive. Without data from non-inbred tiger lines, it is difficult to determine whether this number is high or low with any degree of accuracy." Ed Maruska also addressed the issue of deformities:
"Other than a case of hip dysplasia that occurred in a male white tiger, we have not encountered any other body deformities or any physiological or neurological disorders. Some of these reported maladies in mutant tigers in other collections may be a direct result of inbreeding or improper rearing management of tigers generally."
Inbreeding and outcrossing
Because of the extreme rarity of the white tiger allele in the wild, the breeding pool was limited to the small number of white tigers in captivity. According to Kailash Sankhala, the last white tiger ever seen in the wild was shot in 1958. Today there is a large number of white tigers in captivity. A white Amur tiger may have been born at Center Hill and has given rise to a strain of white Amur tigers. A man named Robert Baudy realized that his tigers had white genes when a tiger he sold to Marwell Zoo in England developed white spots, and bred them accordingly. The Lowry Park Zoo in Tampa Bay had four of these white Amur tigers, descended from Robert Baudy's stock.
It has also been possible to expand the white-gene pool by outcrossing white tigers with unrelated orange tigers and then using the cubs to produce more white tigers. The white tigers Ranjit, Bharat, Priya and Bhim were all outcrossed, in some instances to more than one tiger. Bharat was bred to an unrelated orange tiger named Jack from the San Francisco Zoo and had an orange daughter named Kanchana. Bharat and Priya were also bred with an unrelated orange tiger from Knoxville Zoo, and Ranjit was bred to this tiger's sister, also from Knoxville Zoo. Bhim fathered several litters with an unrelated orange tigress named Kimanthi at the Cincinnati Zoo.
The last descendants of Bristol Zoo's white tigers were a group of orange tigers from outcrosses which were bought by a Pakistani senator and shipped to Pakistan. Rajiv, Pretoria Zoo's white tiger, who was born in the Cincinnati Zoo, was also outcrossed and sired at least two litters of orange cubs at Pretoria Zoo. Outcrossing is not necessarily done with the intent of producing more white cubs by resuming inbreeding further down the line. Outcrossing is a way of bringing fresh blood into the white strain. The New Delhi Zoo loaned out white tigers to some of India's better zoos for outcrossing, and the government had to apply pressure to force zoos to return either the white tigers or their orange offspring.
Siegfried & Roy performed at least one outcross. In the mid-1980s they offered to work with the Indian government in the creation of a healthier strain of white tigers. The Indian government reportedly considered the offer; however, India had a moratorium on breeding white tigers after cubs were born at New Delhi Zoo with arched backs and clubbed feet, necessitating euthanasia. Siegfried & Roy have bred white tigers in collaboration with the Nashville Zoo.
To better preserve genetic diversity and avoid genetic defects, the Association of Zoos and Aquariums barred member zoos from intentionally breeding to produce white tigers, white lions, or king cheetahs in a white paper adopted by the board of directors in July 2011. The paper explains that selecting for or against any particular allele would result in a loss of genetic diversity. Instead, the alleles should be maintained at their natural frequencies. Inbreeding to produce abnormal appearances can also produce congenital defects that impact health and welfare. Sometimes the traits themselves can cause problems, such as albinism's visual and neural effects. Additionally, animals with an abnormal appearance do not serve as well as ambassadors for their species in the zoos' mission to educate the public.
| Biology and health sciences | Felines | Animals |
4490796 | https://en.wikipedia.org/wiki/Pharmacotherapy | Pharmacotherapy | Pharmacotherapy, also known as pharmacological therapy or drug therapy, is defined as medical treatment that utilizes one or more pharmaceutical drugs to improve ongoing symptoms (symptomatic relief), treat the underlying condition, or act as a prevention for other diseases (prophylaxis).
It can be distinguished from therapy using surgery (surgical therapy), radiation (radiation therapy), movement (physical therapy), or other modes. Among physicians, sometimes the term medical therapy refers specifically to pharmacotherapy as opposed to surgical or other therapy; for example, in oncology, medical oncology is thus distinguished from surgical oncology.
Today's pharmacological therapy has evolved from a long history of medication use, and it has changed most rapidly in the last century due to advancements in drug discovery. The therapy is administered and adjusted by healthcare professionals according to the evidence-based guidelines and the patient's health condition. Personalized medicine also plays a crucial role in pharmacological therapy. Personalized medicine, or precision medicine, takes account of the patient's genetic variation, liver function, kidney function, etc, to provide a tailor-made treatment for a patient. In pharmacological therapy, pharmacists will also consider medication compliance. Medication compliance, or medication adherence, is defined as the degree to which the patient follows the therapy that is recommended by the healthcare professionals.
History
From natural compounds to pharmaceutical drugs
The use of medicinal substances can be traced back to 4000 BC in the Sumer civilization. Healers at the time (called apothecaries), for example, understood the application of opium for pain relief. The history of natural remedies can also be found in other cultures, including traditional Chinese medicine in China and Ayurvedic medicine in India, which are still in use nowadays. Dioscorides, a 1st-century Greek surgeon, described more than six hundred animals, plants, and their derivatives in his medical botany, which remained the most influential pharmacopeia for fourteen hundred years. Besides substances derived from living organisms, metals, including copper, mercury, and antimony, were also used as medical therapies. They were said to cure various diseases during the late Renaissance. In 1657, tartar emetic, which is an antimony compound, was credited with curing Louis XIV of typhoid fever. The drug was also administered intravenously for the treatment of schistosomiasis in the 20th century. However, due to the concern over acute and chronic antimony poisoning, tartar emetic was gradually replaced as an antischistosomal agent after the advent of praziquantel.
Other than using natural products, humans also learned to compound medicine by themselves. The first pharmaceutical text was found on clay tablets from the Mesopotamians, who lived around 2100 BC. Later in the 2nd century AD, compounding was formally introduced by Galen as “a process of mixing two or more medicines to meet the individual needs of a patient”. Initially, compounding was only done by individual pharmacists, but in the post-World War II period, pharmaceutical manufacturers surged in number and took over the role of making medicine. Meanwhile, there was a marked increase in pharmaceutical research, which led to a growing number of new drugs. Most drug discovery milestones were made in the last hundred years, from antibiotics to biologics, contributing to the foundation of current pharmacological therapy.
Drug discovery
Most drugs were discovered by empirical means, including observation, accident, and trial and error. One famous example is the discovery of penicillin, the first antibiotic in the world. The substance was discovered by Alexander Fleming in 1928 after a combination of unanticipated events occurred in his laboratory during his summer vacation. The Penicillium mold on the petri dish was believed to secrete a substance (later named "penicillin") that inhibited bacterial growth. Large pharmaceutical companies then started to establish their microbiological departments and search for new antibiotics. The screening program for antimicrobial compounds also led to the discovery of drugs with other pharmacological properties, such as immunosuppressants like Cyclosporin A. The discovery of penicillin was thus a serendipitous (i.e. chance) discovery. Another, more advanced approach to drug discovery is rational drug design. The method is underpinned by an understanding of the biological targets of the drugs, including enzymes, receptors, and other proteins. In the late 19th century, Paul Ehrlich observed the selective affinity of dyes for different tissues and proposed the existence of chemoreceptors in our bodies. Receptors were believed to be the specific binding sites for drugs. The drug-receptor recognition was described as a lock-and-key interplay by Emil Fischer in the early 1890s. It was later found that the receptors can either be stimulated or inhibited by chemotherapeutic agents to attain the desired physiological response. Once the ligand interacting with the target macromolecule is identified, drug candidates can be designed and optimized based on the structure-activity relationship. Nowadays, artificial intelligence is employed in drug design to predict drug-protein interactions, drug activity, the 3D configuration of proteins, etc.
Evidence-based medicine
Evidence-based medicine is defined as deploying the best current scientific evidence that is available to give the best treatment and make the best decision effectively and efficiently. Clinical guidelines are developed based on scientific evidence; for example, the ACC/AHA guidelines (for cardiovascular diseases), the GOLD guidelines (for chronic obstructive pulmonary disease), the GINA guidelines (for asthma), etc. They convert and classify the evidence using a systematic method, aiming to provide care with quality. The guidelines cannot substitute clinical judgment, as they cannot meet all the circumstances. Healthcare professionals can use the clinical guidelines as references or evidence to support their clinical judgement when prescribing therapy to patients.
Example: Clinical Guideline for controlling blood pressure (hypertension)
Consider an Asian male patient who is 40 years old, has recently been diagnosed with high blood pressure (140/90), and has no other chronic diseases (comorbidities) such as type-2 diabetes, gout, or benign prostatic hyperplasia. His estimated 10-year risk of cardiovascular disease is 15%.
According to the NICE 2019 Hypertension guideline, the healthcare professional can consider starting anti-hypertensive therapy after a discussion with the patient. The first-line therapy will be either an angiotensin-converting enzyme inhibitor (ACEi) or an angiotensin receptor blocker (ARB, if the patient cannot tolerate an ACEi). If the patient's blood pressure is not well controlled, the healthcare professional can consider adding a calcium channel blocker (CCB) or a thiazide-like diuretic to the previous therapy, i.e., an ACEi or ARB combined with a CCB or a thiazide-like diuretic.
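A simplified, illustrative sketch of the stepped logic just described. It is not clinical advice and does not reproduce the NICE 2019 guideline; it only encodes the two steps mentioned in the text (start an ACEi, or an ARB if the ACEi is not tolerated, then add a CCB or thiazide-like diuretic if blood pressure remains uncontrolled).

```python
# Simplified, illustrative decision sketch of the stepped therapy described above.
# Not clinical advice and not a reproduction of the NICE 2019 guideline.

def suggest_step(current_therapy: list[str], tolerates_acei: bool, bp_controlled: bool) -> list[str]:
    """Return an illustrative next step for the example patient."""
    if not current_therapy:
        # Step 1: start an ACE inhibitor, or an ARB if the ACEi is not tolerated.
        return ["ACEi" if tolerates_acei else "ARB"]
    if not bp_controlled:
        # Step 2: add a calcium channel blocker or a thiazide-like diuretic.
        return current_therapy + ["CCB or thiazide-like diuretic"]
    return current_therapy  # blood pressure controlled: continue current therapy

print(suggest_step([], tolerates_acei=True, bp_controlled=False))        # ['ACEi']
print(suggest_step(["ACEi"], tolerates_acei=True, bp_controlled=False))  # ['ACEi', 'CCB or thiazide-like diuretic']
```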
Personalized medicine
Every patient has their own body condition, for example, kidney function, liver function, genetic variations, medical history, etc. These are all factors that healthcare professionals should consider before giving any pharmacological therapy. Most importantly, advancing technology in genetics gives more insight into the linkage between health and genes, and two areas of study are evolving in pharmacological therapy: pharmacogenetics and pharmacogenomics. Age also affects the pharmacokinetics and pharmacodynamics of drugs, and hence the efficacy of therapy, because ageing causes deterioration of organ function, such as liver and kidney function. Pharmacokinetics is the study of how the body absorbs, distributes, metabolizes, and eliminates a drug; pharmacodynamics is the study of a drug's effects on the body and their mechanisms.
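To make the pharmacokinetic terms concrete, below is a minimal one-compartment sketch with first-order elimination, C(t) = (dose/Vd) * exp(-k*t), where k = ln(2)/half-life. The dose, volume of distribution, and half-life are invented example values, not figures from the text.

```python
# Minimal one-compartment pharmacokinetic sketch with first-order elimination.
# All numbers are invented example values for illustration only.
import math

dose_mg = 500.0          # single IV bolus dose (assumed)
vd_L = 40.0              # volume of distribution (assumed)
half_life_h = 6.0        # elimination half-life (assumed)

k = math.log(2) / half_life_h      # first-order elimination rate constant (1/h)
c0 = dose_mg / vd_L                # initial plasma concentration (mg/L)

for t in (0, 6, 12, 24):           # hours after the dose
    c_t = c0 * math.exp(-k * t)
    print(f"t = {t:2d} h: C ~ {c_t:.2f} mg/L")
# Age-related decline in liver or kidney function lengthens the half-life,
# lowering k and slowing the fall of C(t), one reason dosing is individualised.
```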
Pharmacogenetics and pharmacogenomics
Pharmacogenetics is defined as the study of inherited genes causing different drug metabolisms that vary from each other, such as the rate of metabolism and metabolites. Pharmacogenomics is defined as the study of associating the drug response with one's gene. Both terms are similar in nature, so they are used interchangeably.
Multiple alleles can together change the response to a drug by encoding enzyme variants that behave differently from the normal form. The resulting metabolizer phenotypes include ultra-rapid metabolizers, intermediate metabolizers, poor metabolizers with little or no enzyme activity, etc. Genetic variations can also be matched to particular adverse drug reactions to prevent the patient from suffering unfavorable outcomes. In this way, genetic make-up can affect pharmacokinetics.
Example: Azathioprine Therapy
Azathioprine is an immunomodulator used, for instance, in inflammatory bowel disease. Its metabolites rely on two enzymes (TPMT and NUDT15) to be cleared during metabolism. If the patient carries low-activity variants of these enzymes, i.e., is a poor metabolizer, more toxic metabolites accumulate in the body, so the patient has a greater risk of the related side effects. That risk prompts an adjustment of the dosage or a switch to another drug.
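A hedged sketch of how a metabolizer phenotype might map to a dosing decision for a drug such as azathioprine. The phenotype labels follow the text, but the mapping and the suggested actions are placeholders for illustration, not clinical or guideline recommendations.

```python
# Illustrative sketch only: mapping a TPMT/NUDT15 metabolizer phenotype to a
# dosing decision.  The actions are placeholders, not clinical guidance; real
# decisions follow published pharmacogenomic guidelines and clinical judgement.

PHENOTYPE_TO_ACTION = {
    "normal metabolizer": "start the standard dose",
    "intermediate metabolizer": "start at a reduced dose and monitor closely",
    "poor metabolizer": "consider an alternative drug or a greatly reduced dose",
}

def dosing_recommendation(phenotype: str) -> str:
    """Return an illustrative action for a given metabolizer phenotype."""
    return PHENOTYPE_TO_ACTION.get(phenotype.lower(), "phenotype unknown: use standard monitoring")

print(dosing_recommendation("Poor metabolizer"))
```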
Example: Omalizumab Therapy
Omalizumab is a humanized monoclonal antibody for the treatment of various allergic diseases, including asthma, urticaria, and allergic rhinitis. It targets immunoglobulin E (IgE) in the human body, which plays an important role in allergic reactions. The efficacy of omalizumab may vary among patients. To identify responders to omalizumab, the levels of several biomarkers can be measured, including serum eosinophils, fractional exhaled nitric oxide, and serum IgE. For instance, patients with higher baseline eosinophil counts are likely to respond better to omalizumab therapy.
Medication compliance
Medication compliance is defined as the degree to which the patient follows the therapy that is recommended by healthcare professionals. There are direct and indirect methods to evaluate compliance. Direct methods are those in which healthcare professionals observe or measure the patient's drug-taking behavior directly. Indirect methods do not observe or measure the drug-taking behavior itself but use other sources of information to evaluate compliance.
Direct methods include measuring the concentration of the drug (or its corresponding metabolite), while indirect methods include pill counting and patient self-report. Direct methods are more time-consuming, more expensive, and more invasive, but more accurate; indirect methods are less accurate but easier to administer. If the patient fails to comply with treatment, for example by not taking the medication according to the instructions, the result is added risk and a poorer treatment outcome.
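As a worked example of the indirect pill-count method, the sketch below estimates an adherence percentage from dispensing and counting data. The 80% cut-off used to label a patient as adherent is an assumed, commonly used convention rather than a universal rule.

```python
# Illustrative pill-count adherence calculation (an indirect method).
# The 80% cut-off is an assumed convention, not a universal rule.

def adherence_percent(pills_dispensed: int, pills_remaining: int,
                      days_elapsed: int, doses_per_day: int) -> float:
    """Percentage of prescribed doses apparently taken over the period."""
    doses_taken = pills_dispensed - pills_remaining
    doses_prescribed = days_elapsed * doses_per_day
    return 100.0 * doses_taken / doses_prescribed

rate = adherence_percent(pills_dispensed=60, pills_remaining=18,
                         days_elapsed=30, doses_per_day=2)
print(f"Estimated adherence: {rate:.0f}%")   # 70%
print("adherent" if rate >= 80 else "non-adherent (by the assumed 80% threshold)")
```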
Example: Tuberculosis treatment
For tuberculosis patients, directly observed therapy (DOT) is still part of the treatment. It increases medication compliance, which helps to prevent treatment failure, relapse, and transmission in the community. Apart from traditional DOT, other methods have been proposed to try to increase medication compliance. Video-observed therapy (VOT) is one of them, with its own advantages and disadvantages: it reduces healthcare costs and the patient's travel costs, but it requires quality-control training, as it can be hard to confirm the patient's adherence.
Role of Pharmacists
Pharmacists are experts in pharmacotherapy and are responsible for ensuring the safe, appropriate, and economical use of pharmaceutical drugs. The skills required to function as a pharmacist require knowledge, training and experience in biomedical, pharmaceutical and clinical sciences. Pharmacology is the science that aims to continually improve pharmacotherapy. The pharmaceutical industry and academia use basic science, applied science, and translational science to create new pharmaceutical drugs.
As pharmacotherapy specialists, pharmacists have responsibility for direct patient care, often functioning as members of a multidisciplinary team and acting as the primary source of drug-related information for other healthcare professionals. A pharmacotherapy specialist is an individual who is specialized in administering and prescribing medication, and the role requires extensive academic knowledge in pharmacotherapy.
In the US, a pharmacist can gain Board Certification in the area of pharmacotherapy upon fulfilling eligibility requirements and passing a certification examination. While pharmacists provide valuable information about medications for patients and healthcare professionals, they are not typically considered covered pharmacotherapy providers by insurance companies.
| Biology and health sciences | Treatments | Health |
4492335 | https://en.wikipedia.org/wiki/Seagrass%20meadow | Seagrass meadow | A seagrass meadow or seagrass bed is an underwater ecosystem formed by seagrasses. Seagrasses are marine (saltwater) plants found in shallow coastal waters and in the brackish waters of estuaries. Seagrasses are flowering plants with stems and long green, grass-like leaves. They produce seeds and pollen and have roots and rhizomes which anchor them in seafloor sand.
Seagrasses form dense underwater meadows which are among the most productive ecosystems in the world. They provide habitats and food for a diversity of marine life comparable to that of coral reefs. This includes invertebrates like shrimp and crabs, cod and flatfish, marine mammals and birds. They provide refuges for endangered species such as seahorses, turtles, and dugongs. They function as nursery habitats for shrimps, scallops and many commercial fish species. Seagrass meadows provide coastal storm protection by the way their leaves absorb energy from waves as they hit the coast. They keep coastal waters healthy by absorbing bacteria and nutrients, and slow the speed of climate change by sequestering carbon dioxide into the sediment of the ocean floor.
Seagrasses evolved from land plants (angiosperms) that returned to the ocean about 100 million years ago. However, today seagrass meadows are being damaged by human activities such as pollution from land runoff, fishing boats that drag dredges or trawls across the meadows uprooting the grass, and overfishing, which unbalances the ecosystem. Seagrass meadows are currently being destroyed at a rate of about .
Background
Seagrasses are flowering plants (angiosperms) which grow in marine environments. They evolved from terrestrial plants which migrated back into the ocean about 75 to 100 million years ago. In the present day they occupy the sea bottom in shallow and sheltered coastal waters anchored in sand or mud bottoms.
There are four lineages of seagrasses containing relatively few species (all in a single order of monocotyledon). They occupy shallow environments on all continents except Antarctica: their distribution also extends to the High Seas, such as on the Mascarene Plateau.
Seagrasses are formed by a polyphyletic group of monocotyledons (order Alismatales), which recolonised marine environments about 80 million years ago. Seagrasses are habitat-forming species because they are a source of food and shelter for a wide variety of fish and invertebrates, and they perform relevant ecosystem services.
There are about 60 species of fully marine seagrasses belonging to four families (Posidoniaceae, Zosteraceae, Hydrocharitaceae and Cymodoceaceae), all in the order Alismatales (in the class of monocotyledons). Seagrass beds or meadows can be made up of either a single species (monospecific) or mixed. In temperate areas one or a few species usually dominate (like the eelgrass Zostera marina in the North Atlantic), whereas tropical beds are usually more diverse, with up to thirteen species recorded in the Philippines. Like all autotrophic plants, seagrasses photosynthesize in the submerged photic zone. Most species undergo submarine pollination and complete their life cycle underwater.
Seagrass meadows are found in depths up to about , depending on water quality and light availability. These seagrass meadows are highly productive habitats that provide many ecosystem services, including protecting the coast from storms and big waves, stabilising sediment, providing safe habitats for other species and encouraging biodiversity, enhancing water quality, and sequestering carbon and nutrients.
Seagrass meadows are sometimes called the prairies of the sea. They are diverse and productive ecosystems, sheltering and harbouring species from all phyla, such as juvenile and adult fish, epiphytic and free-living macroalgae and microalgae, mollusks, bristle worms, and nematodes. Few species were originally considered to feed directly on seagrass leaves (partly because of their low nutritional content), but scientific reviews and improved working methods have shown that seagrass herbivory is an important link in the food chain, feeding hundreds of species, including green turtles, dugongs, manatees, fish, geese, swans, sea urchins and crabs. Some fish species that visit or feed on seagrasses raise their young in adjacent mangroves or coral reefs.
Seagrass meadows are rich, biodiverse ecosystems that occur all over the globe, in both tropical and temperate seas. They contain complex food webs that provide trophic subsidy to species and habitats well beyond the extent of their distribution. Given the wide variety of food sources provided by this productive habitat, it is no surprise that seagrass meadows support an equally wide array of grazers and predators. However, despite its importance for sustaining biodiversity and many other ecosystem services, the global distribution of seagrass is a fraction of what was historically present. Recent estimates from where records exist indicate that at least 20% of the world's seagrass has been lost. Seagrasses also provide other services in the coastal zone, such as preventing coastal erosion, storing and trapping carbon, and filtering the water column.
The true ecosystem-level consequences of such decline, and the benefits that could be gained through habitat restoration, are poorly understood. Given the relatively high per-unit-area costs of marine habitat restoration, making the case for such work requires a thorough examination of the ecosystem service benefits of such new habitat creation.
Global distribution
Seagrass meadows are found in the shallow seas of the continental shelves of all continents except Antarctica. Continental shelves are underwater areas of land surrounding each continent, creating areas of relatively shallow water known as shelf seas. The grasses live in areas with soft sediment that are either intertidal (uncovered daily by seawater, as the tide goes in and out) or subtidal (always under the water). They prefer sheltered places, such as shallow bays, lagoons, and estuaries (sheltered areas where rivers flow into the sea), where waves are limited and light and nutrient levels are high.
Seagrasses can survive to maximum depths of about 60 metres. However, this depends on the availability of light, because, like plants on the land, seagrass meadows need sunlight if photosynthesis is to occur. Tides, wave action, water clarity, and low salinity (low amounts of salt in the water) control where seagrasses can live at their shallow edge nearest the shore; all of these things must be right for seagrass to survive and grow.
The current documented seagrass area is , but this is thought to underestimate the total area, since many regions with large seagrass meadows have not been thoroughly documented. The most common estimates are 300,000 to 600,000 km2, with up to 4,320,000 km2 of suitable seagrass habitat worldwide.
Ecosystem services
Seagrass meadows provide coastal zones with significant ecosystem goods and services. They enhance water quality by stabilizing heavy metals and other toxic pollutants, as well as cleansing the water of excess nutrients, and lowering acidity levels in coastal waters. Further, because seagrasses are underwater plants, they produce significant amounts of oxygen which oxygenate the water column. Their root systems also assist in oxygenating the sediment, providing hospitable environments for sediment-dwelling organisms. Additionally, the conservation of seagrass meadows contributes to 16 of the 17 UN Sustainable Development Goals.
Many epiphytes can grow on the leaf blades of seagrasses, and algae, diatoms and bacterial films can cover the surface. The grass is eaten by turtles, herbivorous parrotfish, surgeonfish, and sea urchins, while the leaf surface films are a food source for many small invertebrates.
Blue carbon
The meadows also account for more than 10% of the ocean's total carbon storage. Per hectare, they hold twice as much carbon dioxide as rain forests and can sequester about 27 million tons of CO2 annually. This ability to store carbon is important as atmospheric carbon levels continue to rise.
Blue carbon refers to carbon dioxide removed from the atmosphere by the world's coastal marine ecosystems, mostly mangroves, salt marshes, seagrasses and potentially macroalgae, through plant growth and the accumulation and burial of organic matter in the sediment.
Although seagrass meadows occupy only 0.1% of the area of the ocean floor, they account for 10–18% of the total oceanic carbon burial. Global seagrass meadows are currently estimated to store as much as 19.9 Pg (petagrams, or gigatonnes, i.e. billions of tonnes) of organic carbon. Carbon primarily accumulates in marine sediments, which are anoxic and thus continually preserve organic carbon over decadal to millennial time scales. High accumulation rates, low oxygen, low sediment conductivity and slower microbial decomposition rates all encourage carbon burial and carbon accumulation in these coastal sediments. Compared to terrestrial habitats, which lose carbon stocks as CO2 during decomposition or through disturbances such as fires or deforestation, marine carbon sinks can retain carbon for much longer periods. Carbon sequestration rates in seagrass meadows vary depending on the species, the characteristics of the sediment, and the depth of the habitat, but on average the carbon burial rate is about 140 g C m−2 yr−1.
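To illustrate how such a per-area rate scales up, the minimal sketch below converts the average burial rate quoted above into an annual total for a meadow of a given size; the 100 km2 meadow area is a hypothetical example, not a figure from the text.

```python
# Illustrative back-of-the-envelope calculation (hypothetical meadow area).
# Only the average burial rate of ~140 g C per m^2 per year comes from the text.

BURIAL_RATE_G_C_PER_M2_YR = 140          # average carbon burial rate (g C m-2 yr-1)
CO2_PER_C = 44.0 / 12.0                  # mass ratio of CO2 to elemental carbon

def annual_burial(area_km2: float) -> tuple[float, float]:
    """Return (tonnes C per year, tonnes CO2-equivalent per year) for a meadow."""
    area_m2 = area_km2 * 1e6
    grams_c = BURIAL_RATE_G_C_PER_M2_YR * area_m2
    tonnes_c = grams_c / 1e6
    return tonnes_c, tonnes_c * CO2_PER_C

# Example: a hypothetical 100 km^2 meadow
c, co2 = annual_burial(100)
print(f"~{c:,.0f} t C/yr buried, ~{co2:,.0f} t CO2-equivalent/yr")
```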
Coastal protection
Seagrasses are also ecosystem engineers, meaning they alter the ecosystem around them, adjusting their surroundings in both physical and chemical ways. The long blades of seagrasses slow the movement of water, which reduces wave energy and offers further protection against coastal erosion and storm surge. Many seagrass species produce an extensive underground network of roots and rhizomes which stabilizes sediment and reduces coastal erosion. Seagrasses are not only affected by water in motion; they also affect the surrounding currents, waves and turbulence.
Seagrasses help trap sediment particles transported by sea currents. The leaves, extending toward the sea surface, slow down the water currents. The slower current is not able to carry the particles of sediment, so the particles drop down and become part of the seafloor, eventually building it up. When seagrasses are not present, the sea current has no obstacles and carries the sediment particles away, lifting them and eroding the seafloor.
Seagrasses prevent erosion of the seafloor to the point that their presence can raise the seafloor. They contribute to coast protection by trapping rock debris transported by the sea. Seagrasses reduce erosion of the coast and protect houses and cities from both the force of the sea and from sea-level rise caused by global warming. They do this by softening the force of the waves with their leaves, and helping sediment transported in the seawater to accumulate on the seafloor. Seagrass leaves act as baffles in turbulent water that slow down water movement and encourage particulate matter to settle out. Seagrass meadows are one of the most effective barriers against erosion, because they trap sediment amongst their leaves.
Archaeologists have learned from seagrasses how to protect underwater archaeological sites, like a site in Denmark where dozens of ancient Roman and Viking shipwrecks have been discovered. The archaeologists use seagrass-like covers as sediment traps, to build up sediment so that it buries the ships. Burial creates low-oxygen conditions and keeps the wood from rotting.
Links to seabirds
Birds are an often-overlooked part of marine ecosystems: not only are they crucial to the health of marine ecosystems, but their populations are also supported by the productivity and biodiversity of marine and coastal ecosystems. The links of birds to specific habitat types such as seagrass meadows are largely not considered except in the context of direct herbivorous consumption by wildfowl, despite the fact that both bottom-up and top-down processes have been considered as pathways for the population maintenance of some coastal birds.
Given the long-term decline in the population of many coastal and seabirds, the known response of many seabird populations to fluctuations in their prey, and the need for compensatory restorative actions to enhance their populations, there is a need for understanding the role of key marine habitats such as seagrass in supporting coastal and seabirds.
Nursery habitats for fisheries
Seagrass meadows provide nursery habitats for many commercially important fish species. It is estimated that about half of the world's fisheries get their start in, and are supported by, seagrass habitats; if the seagrass habitats are lost, those fisheries are put at risk as well. According to a 2019 paper by Unsworth et al., the significant role seagrass meadows play in supporting fisheries productivity and food security across the globe is not adequately reflected in the decisions made by authorities with statutory responsibility for their management. They argue that: (1) seagrass meadows provide valuable nursery habitat to over one fifth of the world's largest 25 fisheries, including walleye pollock, the most landed species on the planet; (2) in complex small-scale fisheries around the world (poorly represented in fisheries statistics), there is evidence that many of those in proximity to seagrass are supported to a large degree by these habitats; and (3) intertidal fishing activity in seagrass is a global phenomenon, often directly supporting human livelihoods. According to the study, seagrasses should be recognized and managed to maintain and maximize their role in global fisheries production. In 2022, Jones et al. showed that seagrass-associated small-scale fisheries can provide a safety net for the poor, and are used more commonly than reef-associated fisheries across the Indo-Pacific. Nearly half the people interviewed in the study preferred fishing in seagrass, since its function as a nursery habitat could result in large and reliable catches of fish.
In the oceans, gleaning can be defined as fishing with basic gear, including bare hands, in shallow water no deeper than one can stand in. Invertebrate gleaning (walking) fisheries are common within intertidal seagrass meadows globally, contributing to the food supply of hundreds of millions of people, but understanding of these fisheries and their ecological drivers is extremely limited. A 2019 study by Nessa et al. analysed these fisheries using a combined social and ecological approach. Catches were dominated by bivalves, sea urchins and gastropods. The catch per unit effort (CPUE) across all sites varied from 0.05 to 3 kg per gleaner per hour, with the majority of fishers being women and children. Landings were of major significance for local food supply and livelihoods at all sites. Local ecological knowledge suggests these seagrass meadows are declining in line with other regional trends. Increasing seagrass density was significantly and positively correlated with the CPUE of invertebrate gleaning, highlighting the importance of conserving these threatened habitats.
Other services
Historically, seagrasses were collected as fertilizer for sandy soil. This was an important use in the Aveiro Lagoon, Portugal, where the plants collected were known as moliço. In the early 20th century, in France and, to a lesser extent, the Channel Islands, dried seagrasses were used as a mattress (paillasse) filling – such mattresses were in high demand by French forces during World War I. It was also used for bandages and other purposes.
In February 2017, researchers found that seagrass meadows may be able to remove various pathogens from seawater. On small islands without wastewater treatment facilities in central Indonesia, levels of pathogenic marine bacteria – such as Enterococcus – that affect humans, fish and invertebrates were reduced by 50 percent when seagrass meadows were present, compared with paired sites without seagrass, although absorbing these pathogens may come at a cost to the seagrasses' own survival.
Movement ecology
Understanding the movement ecology of seagrasses provides a way to assess the capacity of populations to recover from impacts associated with existing and future pressures. These include the (re)-colonization of altered or fragmented landscapes, and movement associated with climate change.
The marine environment acts as an abiotic dispersal vector, and its physical properties significantly influence movement, presenting both challenges and opportunities that differ from terrestrial environments. Typical flow speeds in the ocean are around 0.1 m s−1, generally one to two orders of magnitude weaker than typical atmospheric flows (1–10 m s−1), which can limit dispersal. However, as seawater density is approximately 1000 times greater than that of air, the momentum of a moving mass of water at the same speed is three orders of magnitude greater than in air. Therefore, drag forces acting on individuals (proportional to density) are also three orders of magnitude higher, enabling relatively larger-sized propagules to be mobilized. But most importantly, buoyancy forces (proportional to the density difference between seawater and the propagule) significantly reduce the effective weight of submerged propagules. Within seagrasses, propagules can weakly settle (negatively buoyant), remain effectively suspended in the interior of the water column (neutrally buoyant), or float at the surface (positively buoyant).
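A minimal numerical sketch of this scaling argument is given below: it compares quadratic drag on a small spherical propagule in seawater and in air at the same flow speed, and computes the buoyancy-reduced effective weight. The propagule diameter, propagule density and drag coefficient are illustrative assumptions, not values from the text.

```python
# Minimal sketch of the drag/buoyancy scaling discussed above; propagule size,
# propagule density and drag coefficient are illustrative assumptions.
import math

RHO_SEAWATER = 1025.0   # kg m-3
RHO_AIR = 1.2           # kg m-3
G = 9.81                # m s-2

def drag_force(fluid_density, speed, diameter, cd=0.5):
    """Quadratic drag on a sphere: F = 0.5 * rho * Cd * A * v^2."""
    area = math.pi * (diameter / 2) ** 2
    return 0.5 * fluid_density * cd * area * speed ** 2

def effective_weight(diameter, propagule_density, fluid_density):
    """Weight minus buoyancy for a spherical propagule."""
    volume = (4 / 3) * math.pi * (diameter / 2) ** 3
    return volume * (propagule_density - fluid_density) * G

d = 0.005  # assumed 5 mm propagule
print("drag in seawater at 0.1 m/s:", drag_force(RHO_SEAWATER, 0.1, d), "N")
print("drag in air at 0.1 m/s:     ", drag_force(RHO_AIR, 0.1, d), "N")
# The ratio is ~850, i.e. roughly three orders of magnitude, set by the density ratio.
print("effective submerged weight of a 1050 kg/m3 propagule:",
      effective_weight(d, 1050, RHO_SEAWATER), "N")
```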
With positive buoyancy (e.g. floating fruit), ocean surface currents freely move propagules, and dispersal distances are only limited by the viability time of the fruit, leading to exceptionally long single dispersal events (more than 100 km), which is rare for passive abiotic movement of terrestrial fruit and seeds.
There are a variety of biotic dispersal vectors for seagrasses, as they feed on or live in seagrass habitat. These include dugongs, manatees, turtles, waterfowl, fish and invertebrates. Each biotic vector has its own internal state, motion capacity, navigation capacity and external factors influencing its movement. These interact with plant movement ecology to determine the ultimate movement path of the plant.
For example, if a waterbird feeds on a seagrass containing fruit with seeds that are viable after defecation, then the bird has the potential to transport the seeds from one feeding ground to another. Therefore, the movement path of the bird determines the potential movement path of the seed. Particular traits of the animal, such as its digestive passage time, directly influence the plant's movement path.
Biogeochemistry
The primary nutrients determining seagrass growth are carbon (C), nitrogen (N), phosphorus (P), and light for photosynthesis. Nitrogen and phosphorus can be acquired from sediment pore water or from the water column, and seagrasses can take up N in both ammonium (NH4+) and nitrate (NO3−) form.
A number of studies from around the world have found that there is a wide range in the concentrations of C, N, and P in seagrasses depending on their species and environmental factors. For instance, plants collected from high-nutrient environments had lower C:N and C:P ratios than plants collected from low-nutrient environments. Seagrass stoichiometry does not follow the Redfield ratio commonly used as an indicator of nutrient availability for phytoplankton growth. In fact, a number of studies from around the world have found that the proportion of C:N:P in seagrasses can vary significantly depending on their species, nutrient availability, or other environmental factors. Depending on environmental conditions, seagrasses can be either P-limited or N-limited.
An early study of seagrass stoichiometry suggested that the Redfield-type balanced ratio between N and P for seagrasses is approximately 30:1. However, N and P concentrations are not strictly correlated, suggesting that seagrasses can adapt their nutrient uptake based on what is available in the environment. For example, seagrasses from meadows fertilized with bird excrement have shown a higher proportion of phosphate than those from unfertilized meadows. Alternatively, seagrasses in environments with higher loading rates and organic matter diagenesis are supplied with more P, leading to N-limitation. In Thalassia testudinum, P is the limiting nutrient. Its nutrient content ranges from 29.4 to 43.3% C, 0.88 to 3.96% N, and 0.048 to 0.243% P, which equates to a mean ratio of 24.6 C:N, 937.4 C:P, and 40.2 N:P. This information can also be used to characterize the nutrient availability of a bay or other water body (which is difficult to measure directly) by sampling the seagrasses living there.
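For readers wanting to see how such ratios are derived, the sketch below converts tissue composition (percent dry mass) into elemental ratios, assuming, as is conventional in the stoichiometry literature, that the reported ratios are molar. The example percentages are illustrative values chosen within the quoted ranges, not the study's actual means.

```python
# Sketch converting tissue composition (% dry mass) into molar C:N, C:P and N:P
# ratios. The example percentages are illustrative, not the study's reported means.

ATOMIC_MASS = {"C": 12.011, "N": 14.007, "P": 30.974}  # g per mole

def molar_ratios(pct_c, pct_n, pct_p):
    """Return (C:N, C:P, N:P) molar ratios from mass percentages."""
    mol_c = pct_c / ATOMIC_MASS["C"]
    mol_n = pct_n / ATOMIC_MASS["N"]
    mol_p = pct_p / ATOMIC_MASS["P"]
    return mol_c / mol_n, mol_c / mol_p, mol_n / mol_p

# Illustrative Thalassia-like composition (assumed values within the quoted ranges):
cn, cp, n_p = molar_ratios(pct_c=36.0, pct_n=1.7, pct_p=0.10)
print(f"C:N ~ {cn:.1f}, C:P ~ {cp:.0f}, N:P ~ {n_p:.1f}")
# The reported means (C:N ~ 24.6, C:P ~ 937, N:P ~ 40) are in this general ballpark.
```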
Light availability is another factor that can affect the nutrient stoichiometry of seagrasses. Nutrient limitation can only occur when photosynthetic energy causes grasses to grow faster than the influx of new nutrients. For example, low-light environments tend to have a lower C:N ratio. Conversely, high-N environments can have an indirect negative effect on seagrass growth by promoting the growth of algae that reduce the total amount of available light.
Nutrient variability in seagrasses can have potential implications for wastewater management in coastal environments. High amounts of anthropogenic nitrogen discharge could cause eutrophication in previously N-limited environments, leading to hypoxic conditions in the seagrass meadow and affecting the carrying capacity of that ecosystem.
A study of annual deposition of C, N, and P from Posidonia oceanica seagrass meadows in northeast Spain found that the meadow sequestered 198 g C m−2 yr−1, 13.4 g N m−2 yr−1, and 2.01 g P m−2 yr−1 into the sediment. Subsequent remineralization of carbon from the sediments due to respiration returned approximately 8% of the sequestered carbon, or 15.6 g C m−2 yr −1.
Threats
Seagrasses are in global decline, with some lost during recent decades. Seagrass loss has accelerated over the past few decades, from 0.9% per year prior to 1940 to 7% per year in 1990.
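The difference between those loss rates compounds quickly, as the illustrative sketch below shows; the 30-year horizon and the idea of projecting a constant rate forward are assumptions made purely for illustration.

```python
# Illustrative only: compounding the quoted annual loss rates to show how quickly
# the difference between 0.9 %/yr and 7 %/yr adds up over a few decades.

def remaining_fraction(annual_loss_pct, years):
    """Fraction of seagrass area left after compounding a constant annual loss."""
    return (1 - annual_loss_pct / 100) ** years

for rate in (0.9, 7.0):
    print(f"{rate}%/yr over 30 years -> {remaining_fraction(rate, 30):.0%} remaining")
```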
Natural disturbances, such as grazing, storms, ice-scouring and desiccation, are an inherent part of seagrass ecosystem dynamics. Seagrasses display a high degree of phenotypic plasticity, adapting rapidly to changing environmental conditions. Human activities, on the other hand, have caused significant disturbance and are accountable for the majority of the losses.
Seagrass can be damaged by direct mechanical destruction of habitat through fishing methods that rely on heavy nets dragged across the sea floor, putting this important ecosystem at serious risk. When motor boats are driven over shallow seagrass areas, the propeller blades can also damage the seagrass.
Seagrass habitats are threatened by coastal eutrophication, which is caused by excessive input of nutrients (nitrogen, phosphorus). This excessive input is directly toxic to seagrasses, but, most importantly, it stimulates the growth of epiphytic and free-floating macro- and micro-algae. Known as nuisance species, these macroalgae grow in filamentous and sheet-like forms, forming thick unattached mats over the seagrass and occurring as epiphytes on seagrass leaves. Eutrophication leads to algal blooms, which attenuate light in the water column and eventually lead to anoxic conditions for the seagrass and the organisms living in and around it. In addition to directly blocking light to the plant, benthic macroalgae have a low carbon/nitrogen content, so their decomposition stimulates bacterial activity, leading to sediment resuspension, an increase in water turbidity, and further light attenuation. When the seagrass does not receive enough sunlight, photosynthesis and primary production decline; decaying seagrass leaves and algae then fuel further algal blooms, creating a positive feedback loop. This can cause a shift from seagrass dominance to algal dominance and, ultimately, the eradication of the seagrass.
Accumulating evidence also suggests that overfishing of top predators (large predatory fish) could indirectly increase algal growth by reducing grazing control performed by mesograzers, such as crustaceans and gastropods, through a trophic cascade.
Increased seawater temperatures, increased sedimentation, and coastal development have also had a significant impact in the decline of seagrasses.
The most-used methods to protect and restore seagrass meadows include nutrient and pollution reduction, marine protected areas, and restoration using seagrass transplanting. Seagrass is not seen as resilient to the impacts of future environmental change.
Ocean deoxygenation
Globally, seagrass has been declining rapidly. Hypoxia resulting from eutrophication and ocean deoxygenation is one of the main underlying factors in these die-offs. Eutrophication causes enhanced nutrient enrichment, which can initially increase seagrass productivity, but continual nutrient enrichment in seagrass meadows can cause excessive growth of microalgae, epiphytes and phytoplankton, resulting in hypoxic conditions.
Seagrass is both a source and a sink for oxygen in the surrounding water column and sediments. At night, the oxygen pressure inside the seagrass is linearly related to the oxygen concentration in the water column, so low water-column oxygen concentrations often result in hypoxic seagrass tissues, which can eventually kill off the seagrass. Normally, seagrasses must supply oxygen to their below-ground tissue either through photosynthesis or by diffusing oxygen from the water column through the leaves to the rhizomes and roots. However, a change in this oxygen balance can often result in hypoxic seagrass tissues. Seagrass exposed to a hypoxic water column shows increased respiration, reduced rates of photosynthesis, smaller leaves, and a reduced number of leaves per shoot. This causes an insufficient supply of oxygen to the below-ground tissues for aerobic respiration, so the seagrass must rely on less-efficient anaerobic respiration. Seagrass die-offs create a positive feedback loop in which mortality events cause further death, as higher oxygen demands arise when dead plant material decomposes.
Because hypoxia increases the invasion of sulfides into seagrass, it negatively affects seagrass photosynthesis, metabolism and growth. Generally, seagrass is able to combat the sulfides by supplying enough oxygen to the roots. However, deoxygenation leaves the seagrass unable to supply this oxygen, thus killing it off. Deoxygenation reduces the diversity of organisms inhabiting seagrass beds by eliminating species that cannot tolerate low-oxygen conditions. Indirectly, the loss and degradation of seagrass threatens numerous species that rely on seagrass for either shelter or food. The loss of seagrass also affects the physical characteristics and resilience of seagrass ecosystems. Seagrass beds provide nursery grounds and habitat for many harvested commercial, recreational, and subsistence fish and shellfish. In many tropical regions, local people depend on seagrass-associated fisheries as a source of food and income.
Diminishing meadows
The storage of carbon is an essential ecosystem service as atmospheric carbon levels continue to rise. However, some climate change models suggest that some seagrasses will go extinct – Posidonia oceanica is expected to go extinct, or nearly so, by 2050.
The UNESCO World Heritage Site around the Balearic islands of Mallorca and Formentera includes about of Posidonia oceanica, which has global significance because of the amount of carbon dioxide it absorbs. However, the meadows are threatened by rising temperatures, which slow its growth, as well as by damage from anchors.
Restoration
Using propagules
Seagrass propagules are the materials by which seagrass propagates. Seagrasses pollinate by hydrophily, that is, their pollen disperses through the water. Both sexually and asexually produced propagules are important for this dispersal.
Species from the genera Amphibolis and Thalassodendron produce viviparous seedlings. Most others produce seeds, although their characteristics vary widely; some species produce seeds or fruit that are positively buoyant and have potential for long-distance dispersal (e.g., Enhalus, Posidonia, and Thalassia), while others produce seeds that are negatively buoyant with limited dispersal potential (e.g., Zostera and Halophila), although long-distance dispersal can still occur via transport of detached fragments carrying spathes (modified leaves which enclose the flower cluster), as in Zostera spp. Nearly all species are also capable of asexual reproduction through rhizome elongation or the production of asexual fragments (e.g., rhizome fragments, pseudoviviparous plantlets). Sexually derived propagules of some species lack the ability to become dormant (e.g., Amphibolis and Posidonia), while others can remain dormant for long periods. These differences in the biology and ecology of propagules strongly influence patterns of recruitment and dispersal, and the way they can be used effectively in restoration.
Seagrass restoration has primarily involved using asexual material (e.g., cuttings, rhizome fragments or cores) collected from donor meadows. Relatively few seagrass restoration efforts have used sexually derived propagules. The infrequent use of sexually derived propagules is probably in part due to the temporal and spatial variability of seed availability, as well as the perception that survival rates of seeds and seedlings are poor. Although survival rates are often low, recent reviews of seed-based research highlight that this is probably because of limited knowledge about availability and collection of quality seed, skills in seed handling and delivery, and suitability of restoration sites.
Methods for collecting and preparing propagules vary according to their characteristics and typically harness their natural dispersal mechanisms. For example, for viviparous taxa such as Amphibolis, recently detached seedlings can be collected by placing fibrous and weighted material, such as sand-filled hessian bags, which the seedlings' grappling structures attach to as they drift past. In this way thousands of seedlings can be captured in less than a square meter. Typically, sandbags are deployed in locations where restoration is required, and are not collected and re-deployed elsewhere.
For species which have seeds contained within spathes (e.g., Zostera spp.), these can be harvested using divers or mechanical harvesters. In Chesapeake Bay several million Zostera marina seeds have been collected each year during the peak reproductive season using a mechanical harvester. Seeds are extracted from spathes after harvesting, but the methods of extraction and delivery vary. For example, some methods involve keeping the spathes within large holding tanks where they eventually split open and release the (negatively buoyant) seeds, which are then collected from the tank bottom. The seeds are then placed in a flume to determine seed quality based on settling velocity, after which they are scattered by hand from boats over recipient habitats. Alternatively, using buoys anchored in place, Z. marina spathes can be suspended over restoration sites in mesh bags; the spathes release and deliver the seeds to the seafloor.
For species that release seeds from fruits that float (Posidonia spp., Halophila spp.), fruits can be detached from the parent plant by shaking; they then float to the surface, where they are collected in nets. Seeds are then extracted from the fruit via vigorous aeration and water movement from pumps at stable temperatures (25 °C) within tanks. The negatively buoyant seeds are then collected from the tank bottom and scattered by hand over recipient habitats. Other methods have been trialled with limited success, including direct planting of seeds by hand, injecting seeds using machinery, or planting and deploying seeds within hessian sandbags.
Restoration using seagrass propagules has so far produced low and variable outcomes, with more than 90% of propagules failing to survive. For propagules to be successfully incorporated within seagrass restoration programs, there will need to be a reduction in propagule wastage (which includes mortality, but also failure to germinate and dispersal away from the restoration site), to facilitate higher rates of germination and survival. A major barrier to the effective use of seeds in seagrass restoration is knowledge about seed quality. Seed quality includes aspects such as viability, size (which can confer energy reserves available for initial growth and establishment), damage to the seed coat or seedling, bacterial infection, genetic diversity and ecotype (which may influence a seed's ability to respond to the restoration environment). Nevertheless, the diversity of propagules and species used in restoration is increasing, and understanding of seagrass seed biology and ecology is advancing. To improve the chances of propagule establishment, a better understanding is needed of the steps that precede seed delivery to restoration sites, including seed quality, as well as the environmental and social barriers that influence survival and growth.
Other efforts
In various locations, communities are attempting to restore seagrass beds that were lost to human action, including in the US states of Virginia, Florida and Hawaii, as well as the United Kingdom. Such reintroductions have been shown to improve ecosystem services.
Dr. Fred Short of the University of New Hampshire developed a specialized transplant methodology known as "Transplanting Eelgrass Remotely with Frames" (TERF). This method involves using clusters of plants which are temporarily tied with degradable crepe paper onto a weighted frame of wire mesh. The method has already been tried out by Save The Bay.
In 2001, Steve Granger, from the University of Rhode Island Graduate School of Oceanography used a boat-pulled sled that is able to deposit seeds below the sediment surface. Together with colleague Mike Traber (who developed a Knox gelatin matrix to encase the seeds in), they conducted a test planting at Narragansett Bay. They were able to plant a area in less than two hours.
The Coastal Marine Ecosystems Research Centre of Central Queensland University has been growing seagrass for six years and has been producing seagrass seeds. It has been running trials in germination and sowing techniques.
| Physical sciences | Oceanography | Earth science |
1137568 | https://en.wikipedia.org/wiki/Artificial%20gravity | Artificial gravity | Artificial gravity is the creation of an inertial force that mimics the effects of a gravitational force, usually by rotation.
Artificial gravity, or rotational gravity, is thus the appearance of a centrifugal force in a rotating frame of reference (the transmission of centripetal acceleration via normal force in the non-rotating frame of reference), as opposed to the force experienced in linear acceleration, which by the equivalence principle is indistinguishable from gravity.
In a more general sense, "artificial gravity" may also refer to the effect of linear acceleration, e.g. by means of a rocket engine.
Rotational simulated gravity has been used in simulations to help astronauts train for extreme conditions.
Rotational simulated gravity has been proposed as a solution in human spaceflight to the adverse health effects caused by prolonged weightlessness.
However, there are no current practical outer space applications of artificial gravity for humans due to concerns about the size and cost of a spacecraft necessary to produce a useful centripetal force comparable to the gravitational field strength on Earth (g).
Scientists are concerned about the effect of such a system on the inner ear of the occupants. The concern is that using centripetal force to create artificial gravity will cause disturbances in the inner ear leading to nausea and disorientation. The adverse effects may prove intolerable for the occupants.
Centripetal force
In the context of a rotating space station, it is the radial force provided by the spacecraft's hull that acts as centripetal force. Thus, the "gravity" force felt by an object is the centrifugal force perceived in the rotating frame of reference as pointing "downwards" towards the hull.
By Newton's Third Law, the value of little g (the perceived "downward" acceleration) is equal in magnitude and opposite in direction to the centripetal acceleration. It was tested with satellites like Bion 3 (1975) and Bion 4 (1977); they both had centrifuges on board to put some specimens in an artificial gravity environment.
Differences from normal gravity
From the perspective of people rotating with the habitat, artificial gravity by rotation behaves similarly to normal gravity but with the following differences, which can be mitigated by increasing the radius of a space station.
Centrifugal force varies with distance: Unlike real gravity, the apparent force felt by observers in the habitat pushes radially outward from the axis, and the centrifugal force is directly proportional to the distance from the axis of the habitat. With a small radius of rotation, a standing person's head would feel significantly less gravity than their feet. Likewise, passengers who move in a space station experience changes in apparent weight in different parts of the body.
The Coriolis effect gives an apparent force that acts on objects that are moving relative to a rotating reference frame. This apparent force acts at right angles to the motion and the rotation axis and tends to curve the motion in the opposite sense to the habitat's spin. If an astronaut inside a rotating artificial gravity environment moves towards or away from the axis of rotation, they will feel a force pushing them in or against the direction of spin. These forces act on the semicircular canals of the inner ear and can cause dizziness. Lengthening the period of rotation (lower spin rate) reduces the Coriolis force and its effects. It is generally believed that at 2 rpm or less, no adverse effects from the Coriolis forces will occur, although humans have been shown to adapt to rates as high as 23 rpm.
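The relations discussed above are straightforward to evaluate numerically. The minimal sketch below computes the spin rate needed for 1 g at a given radius (from the apparent gravity omega^2 * r), the head-to-foot gravity gradient, and the Coriolis acceleration 2 * omega * v on someone moving radially; the 100 m station radius, 2 m body height and 1 m/s walking speed are illustrative assumptions.

```python
# Minimal sketch of rotational artificial gravity; station radius, body height
# and walking speed are illustrative assumptions, not values from the text.
import math

G0 = 9.81  # m s-2, Earth surface gravity

def spin_rate_for_gravity(radius_m, target_g=1.0):
    """Angular speed (rad/s) and rpm needed so that omega^2 * r = target_g * g0."""
    omega = math.sqrt(target_g * G0 / radius_m)
    return omega, omega * 60 / (2 * math.pi)

def apparent_gravity(radius_m, rpm):
    """Apparent gravity (in g) at a given radius for a given spin rate."""
    omega = rpm * 2 * math.pi / 60
    return omega ** 2 * radius_m / G0

def coriolis_acceleration(rpm, radial_speed):
    """Magnitude of the Coriolis acceleration 2 * omega * v (m s-2)."""
    omega = rpm * 2 * math.pi / 60
    return 2 * omega * radial_speed

omega, rpm = spin_rate_for_gravity(radius_m=100)          # assumed 100 m radius
print(f"1 g at 100 m radius needs ~{rpm:.1f} rpm")
print("gravity at the feet (100 m):", round(apparent_gravity(100, rpm), 3), "g")
print("gravity at the head (98 m): ", round(apparent_gravity(98, rpm), 3), "g")
print("Coriolis at 1 m/s radial walk:",
      round(coriolis_acceleration(rpm, 1.0), 3), "m/s^2")
```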
Changes in the rotation axis or rate of a spin would cause a disturbance in the artificial gravity field and stimulate the semicircular canals (refer to above). Any movement of mass within the station, including a movement of people, would shift the axis and could potentially cause a dangerous wobble. Thus, the rotation of a space station would need to be adequately stabilized, and any operations to deliberately change the rotation would need to be done slowly enough to be imperceptible. One possible solution to prevent the station from wobbling would be to use its liquid water supply as ballast which could be pumped between different sections of the station as required.
Human spaceflight
The Gemini 11 mission attempted in 1966 to produce artificial gravity by rotating the capsule around the Agena Target Vehicle to which it was attached by a 36-meter tether. They were able to generate a small amount of artificial gravity, about 0.00015 g, by firing their side thrusters to slowly rotate the combined craft like a slow-motion pair of bolas. The resultant force was too small to be felt by either astronaut, but objects were observed moving towards the "floor" of the capsule.
Health benefits
Artificial gravity has been suggested as a solution to various health risks associated with spaceflight. In 1964, the Soviet space program believed that a human could not survive more than 14 days in space for fear that the heart and blood vessels would be unable to adapt to the weightless conditions. This fear was eventually discovered to be unfounded as spaceflights have now lasted up to 437 consecutive days, with missions aboard the International Space Station commonly lasting 6 months. However, the question of human safety in space did launch an investigation into the physical effects of prolonged exposure to weightlessness. In June 1991, a Spacelab Life Sciences 1 flight performed 18 experiments on two men and two women over nine days. In an environment without gravity, it was concluded that the response of white blood cells and muscle mass decreased. Additionally, within the first 24 hours spent in a weightless environment, blood volume decreased by 10%. Long weightless periods can cause brain swelling and eyesight problems. Upon return to Earth, the effects of prolonged weightlessness continue to affect the human body as fluids pool back to the lower body, the heart rate rises, a drop in blood pressure occurs, and there is a reduced tolerance for exercise.
Artificial gravity, for its ability to mimic the behavior of gravity on the human body, has been suggested as one of the most encompassing manners of combating the physical effects inherent in weightless environments. Other measures that have been suggested as symptomatic treatments include exercise, diet, and Pingvin suits. However, criticism of those methods lies in the fact that they do not fully eliminate health problems and require a variety of solutions to address all issues. Artificial gravity, in contrast, would remove the weightlessness inherent in space travel. By implementing artificial gravity, space travelers would never have to experience weightlessness or the associated side effects. Especially in a modern-day six-month journey to Mars, exposure to artificial gravity is suggested in either a continuous or intermittent form to prevent extreme debilitation to the astronauts during travel.
Proposals
Several proposals have incorporated artificial gravity into their design:
Discovery II: a 2005 vehicle proposal capable of delivering a 172-metric-ton crew to Jupiter's orbit in 118 days. A very small portion of the 1,690-metric-ton craft would incorporate a centrifugal crew station.
Multi-Mission Space Exploration Vehicle (MMSEV): a 2011 NASA proposal for a long-duration crewed space transport vehicle; it included a rotational artificial gravity space habitat intended to promote crew health for a crew of up to six persons on missions of up to two years in duration. The torus-ring centrifuge would utilize both standard metal-frame and inflatable spacecraft structures and would provide 0.11 to 0.69 g if built with the diameter option.
ISS Centrifuge Demo: a 2011 NASA proposal for a demonstration project preparatory to the final design of the larger torus centrifuge space habitat for the Multi-Mission Space Exploration Vehicle. The structure would have an outside diameter of with a ring interior cross-section diameter of . It would provide 0.08 to 0.51 g partial gravity. This test and evaluation centrifuge would have the capability to become a Sleep Module for the ISS crew.
Mars Direct: A plan for a crewed Mars mission created by NASA engineers Robert Zubrin and David Baker in 1990, later expanded upon in Zubrin's 1996 book The Case for Mars. The "Mars Habitat Unit", which would carry astronauts to Mars to join the previously launched "Earth Return Vehicle", would have had artificial gravity generated during flight by tying the spent upper stage of the booster to the Habitat Unit, and setting them both rotating about a common axis.
The proposed Tempo3 mission rotates two halves of a spacecraft connected by a tether to test the feasibility of simulating gravity on a crewed mission to Mars.
The Mars Gravity Biosatellite was a proposed mission meant to study the effect of artificial gravity on mammals. An artificial gravity field of 0.38 g (equivalent to Mars's surface gravity) was to be produced by rotation (32 rpm, radius of ca. 30 cm). Fifteen mice would have orbited Earth (Low Earth orbit) for five weeks and then land alive. However, the program was canceled on 24 June 2009, due to a lack of funding and shifting priorities at NASA.
Vast Space is a private company that proposes to build the world's first artificial gravity space station using the rotating spacecraft concept.
A Mars gravity simulator could be built on the Moon to prepare for Mars missions. The surface gravity of Mars is somewhat more than twice that of the Moon. It has been proposed to build a large low-pressure bubble, and within it up to twenty higher-pressure rotating tori, all within a cave or lava tube. An analogous system could be built on Mars to prepare people to return to Earth, whose surface gravity is more than twice that of Mars.
Issues with implementation
Some of the reasons that artificial gravity remains unused today in spaceflight trace back to the problems inherent in implementation. One of the realistic methods of creating artificial gravity is the centrifugal effect caused by the centripetal force of the floor of a rotating structure pushing up on the person. In that model, however, issues arise in the size of the spacecraft. As expressed by John Page and Matthew Francis, the smaller a spacecraft (the shorter the radius of rotation), the more rapid the rotation that is required. As such, to simulate gravity, it would be better to utilize a larger spacecraft that rotates slowly.
These size requirements arise because parts of the body at different distances from the axis of rotation experience different forces. If parts of the body closer to the rotational axis experience a force significantly different from that on parts farther from the axis, this could have adverse effects. Additionally, questions remain as to the best way to initially set the rotating motion in place without disturbing the stability of the whole spacecraft's orbit. At the moment, there is not a ship massive enough to meet the rotation requirements, and the costs associated with building, maintaining, and launching such a craft are extensive.
In general, given the small number of negative health effects seen in today's typically shorter spaceflights, and the very large cost of researching a technology that is not yet truly needed, the development of artificial gravity technology has so far been stunted and sporadic.
As the length of typical spaceflights increases, so will the need for artificial gravity for passengers, along with the knowledge and resources available to create it. In short, it is likely only a question of time before conditions are suitable for completing the development of artificial gravity technology, which will almost certainly be required as average spaceflight durations grow.
In science fiction
Several science fiction novels, films, and series have featured artificial gravity production.
In the movie 2001: A Space Odyssey, a rotating centrifuge in the Discovery spacecraft provides artificial gravity.
In the 1999 television series Cowboy Bebop, a rotating ring in the Bebop spacecraft creates artificial gravity throughout the spacecraft.
In the novel The Martian, the Hermes spacecraft achieves artificial gravity by design; it employs a ringed structure, at whose periphery forces around 40% of Earth's gravity are experienced, similar to Mars' gravity.
In the novel Project Hail Mary by the same author, weight on the titular ship Hail Mary is provided initially by engine thrust, as the ship is capable of constant acceleration up to and is also able to separate, turn the crew compartment inwards, and rotate to produce while in orbit.
The movie Interstellar features a spacecraft called the Endurance that can rotate on its central axis to create artificial gravity, controlled by retro thrusters on the ship.
The 2021 film Stowaway features the upper stage of a launch vehicle connected by 450-meter long tethers to the ship's main hull, acting as a counterweight for inertia-based artificial gravity.
In the television series For All Mankind, the space hotel Polaris, later renamed Phoenix after being purchased and converted into a space vessel by Helios Aerospace for their own Mars mission, features a wheel-like structure controlled by thrusters to create artificial gravity, whilst a central axial hub operates in zero gravity as a docking station.
Linear acceleration
Linear acceleration is another method of generating artificial gravity, using the thrust of a spacecraft's engines to create the sensation of being under a gravitational pull. A spacecraft under constant acceleration in a straight line would give the appearance of a gravitational pull in the direction opposite to the acceleration, as the thrust from the engines would cause the spacecraft to "push" itself up into the objects and persons inside the vessel, thus creating the feeling of weight. This is because of Newton's third law: the weight one would feel standing in a linearly accelerating spacecraft would not be a true gravitational pull, but simply the reaction of oneself pushing against the craft's hull as it pushes back. Similarly, objects that would otherwise be free-floating within the spacecraft if it were not accelerating would "fall" towards the engines when it started accelerating, as a consequence of Newton's first law: the floating object would remain at rest while the spacecraft accelerated towards it, so that to an observer inside, the object would appear to be "falling".
To emulate artificial gravity on Earth, spacecraft using linear acceleration gravity may be built similar to a skyscraper, with its engines as the bottom "floor". If the spacecraft were to accelerate at the rate of 1 g—Earth's gravitational pull—the individuals inside would be pressed into the hull at the same force, and thus be able to walk and behave as if they were on Earth.
This form of artificial gravity is desirable because it could functionally create the illusion of a gravity field that is uniform and unidirectional throughout a spacecraft, without the need for large, spinning rings, whose fields may not be uniform, are not unidirectional with respect to the spacecraft, and require constant rotation. It would also have the advantage of relatively high speed: a spaceship accelerating at 1 g (9.8 m/s2) for the first half of the journey, and then decelerating for the other half, could reach Mars within a few days. Similarly, hypothetical space travel using constant 1 g acceleration for one year would reach relativistic speeds and allow for a round trip to the nearest star, Proxima Centauri. As such, low-impulse but long-term linear acceleration has been proposed for various interplanetary missions. For example, even heavy (100 ton) cargo payloads could be transported to Mars in and retain approximately 55 percent of the LEO vehicle mass upon arrival into a Mars orbit, providing a low-gravity gradient to the spacecraft during the entire journey.
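The "few days" figure can be checked with a simple non-relativistic estimate: accelerating for half the distance and decelerating for the other half gives a total trip time of t = 2 * sqrt(d / a). The Earth–Mars distance used below is an assumed close-approach value, not a figure from the text.

```python
# Rough non-relativistic estimate of a constant 1 g accelerate-then-decelerate trip.
# The Earth-Mars distance is an assumed close-approach figure.
import math

G0 = 9.81            # m s-2
AU = 1.496e11        # metres

def flip_and_burn_time(distance_m, accel=G0):
    """Total time for accelerate-half, decelerate-half travel: t = 2 * sqrt(d / a)."""
    return 2 * math.sqrt(distance_m / accel)

d_mars = 0.52 * AU   # assumed close-approach Earth-Mars distance
t = flip_and_burn_time(d_mars)
print(f"~{t / 86400:.1f} days to cover {d_mars / AU:.2f} AU at a constant 1 g")
```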
This form of gravity is not without challenges, however. At present, the only practical engines that can sustain an acceleration comparable to Earth's gravitational pull are chemical reaction rockets, which expel reaction mass to achieve thrust, so the acceleration could only last for as long as the vessel had fuel. The vessel would also need to accelerate constantly and at a constant rate to maintain the gravitational effect; it would thus have no gravity while stationary, and could experience significant swings in g-forces if it were to accelerate above or below 1 g. Further, for point-to-point journeys, such as Earth–Mars transits, vessels would need to accelerate constantly for half the journey, turn off their engines, perform a 180° flip, reactivate their engines, and then begin decelerating towards the target destination, requiring everything inside the vessel to experience weightlessness, and possibly be secured down, for the duration of the flip.
A propulsion system with a very high specific impulse (that is, good efficiency in the use of reaction mass that must be carried along and used for propulsion on the journey) could accelerate more slowly, producing useful levels of artificial gravity for long periods of time. A variety of electric propulsion systems provide examples. Two examples of this long-duration, low-thrust, high-impulse propulsion that have either been used practically on spacecraft or are planned for near-term in-space use are Hall-effect thrusters and Variable Specific Impulse Magnetoplasma Rockets (VASIMR). Both provide very high specific impulse but relatively low thrust compared to the more typical chemical reaction rockets. They are thus ideally suited for long-duration firings, which would provide limited, but long-term, milli-g levels of artificial gravity in spacecraft.
In a number of science fiction plots, acceleration is used to produce artificial gravity for interstellar spacecraft, propelled by as yet theoretical or hypothetical means.
This effect of linear acceleration is well understood, and is routinely used for 0 g cryogenic fluid management for post-launch (subsequent) in-space firings of upper stage rockets.
Roller coasters, especially launched roller coasters or those that rely on electromagnetic propulsion, can provide linear acceleration "gravity", and so can relatively high acceleration vehicles, such as sports cars. Linear acceleration can be used to provide air-time on roller coasters and other thrill rides.
Simulating lunar gravity
In January 2022, China was reported by the South China Morning Post to have built a small ( diameter) research facility to simulate low lunar gravity with the help of magnets. The facility was reportedly partly inspired by the work of Andre Geim (who later shared the 2010 Nobel Prize in Physics for his research on graphene) and Michael Berry, who both shared the Ig Nobel Prize in Physics in 2000 for the magnetic levitation of a frog.
Graviton control or generator
Speculative or fictional mechanisms
In science fiction, artificial gravity (or cancellation of gravity) or "paragravity" is sometimes present in spacecraft that are neither rotating nor accelerating. At present, there is no confirmed technique as such that can simulate gravity other than actual rotation or acceleration. There have been many claims over the years of such a device. Eugene Podkletnov, a Russian engineer, has claimed since the early 1990s to have made such a device consisting of a spinning superconductor producing a powerful "gravitomagnetic field." In 2006, a research group funded by ESA claimed to have created a similar device that demonstrated positive results for the production of gravitomagnetism, although it produced only 0.0001 g.
| Technology | Basics_6 | null |
1137672 | https://en.wikipedia.org/wiki/Canonical%20ensemble | Canonical ensemble | In statistical mechanics, a canonical ensemble is the statistical ensemble that represents the possible states of a mechanical system in thermal equilibrium with a heat bath at a fixed temperature. The system can exchange energy with the heat bath, so that the states of the system will differ in total energy.
The principal thermodynamic variable of the canonical ensemble, determining the probability distribution of states, is the absolute temperature (symbol: T). The ensemble typically also depends on mechanical variables such as the number of particles in the system (symbol: N) and the system's volume (symbol: V), each of which influence the nature of the system's internal states. An ensemble with these three parameters, which are assumed constant for the ensemble to be considered canonical, is sometimes called the NVT ensemble.
The canonical ensemble assigns a probability P to each distinct microstate given by the following exponential:
P = e^{(F − E)/(k_B T)},
where E is the total energy of the microstate and k_B is the Boltzmann constant.
The number F is the free energy (specifically, the Helmholtz free energy) and is a constant for a given ensemble (i.e., for specific N, V, T). The probabilities and F will, however, vary if different N, V, T are selected. The free energy F serves two roles: first, it provides a normalization factor for the probability distribution (the probabilities, over the complete set of microstates, must add up to one); second, many important ensemble averages can be directly calculated from the function F(N, V, T).
An alternative but equivalent formulation for the same concept writes the probability as
P = (1/Z) e^{−E/(k_B T)},
using the canonical partition function
Z = e^{−F/(k_B T)} = Σ_i e^{−E_i/(k_B T)},
rather than the free energy. The equations below (in terms of free energy) may be restated in terms of the canonical partition function by simple mathematical manipulations.
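A small numerical sketch may make the equivalence of the two formulations concrete; the toy energy levels and temperature below are arbitrary illustrative values, not drawn from any particular system.

```python
# Toy numerical check for a system with a few discrete energy levels
# (the levels and temperature are arbitrary illustrative values).
import math

K_B = 1.380649e-23        # J/K, Boltzmann constant
T = 300.0                 # K
energies = [0.0, 1e-21, 2e-21, 5e-21]   # J, assumed microstate energies

# Partition-function form: P_i = exp(-E_i / kT) / Z
Z = sum(math.exp(-E / (K_B * T)) for E in energies)
probs = [math.exp(-E / (K_B * T)) / Z for E in energies]

# Free-energy form: F = -kT ln Z, and P_i = exp((F - E_i)/kT) gives the same numbers
F = -K_B * T * math.log(Z)
probs_from_F = [math.exp((F - E) / (K_B * T)) for E in energies]

print("sum of probabilities:", sum(probs))            # ~1.0
print("both forms agree:",
      all(abs(a - b) < 1e-12 for a, b in zip(probs, probs_from_F)))
print("average energy <E> =", sum(p * E for p, E in zip(probs, energies)), "J")
```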
Historically, the canonical ensemble was first described by Boltzmann (who called it a holode) in 1884 in a relatively unknown paper. It was later reformulated and extensively investigated by Gibbs in 1902.
Applicability of canonical ensemble
The canonical ensemble is the ensemble that describes the possible states of a system that is in thermal equilibrium with a heat bath (the derivation of this fact can be found in Gibbs).
The canonical ensemble applies to systems of any size; while it is necessary to assume that the heat bath is very large (i.e., take a macroscopic limit), the system itself may be small or large.
The condition that the system is mechanically isolated is necessary in order to ensure it does not exchange energy with any external object besides the heat bath. In general, it is desirable to apply the canonical ensemble to systems that are in direct contact with the heat bath, since it is that contact that ensures the equilibrium. In practical situations, the use of the canonical ensemble is usually justified either 1) by assuming that the contact is mechanically weak, or 2) by incorporating a suitable part of the heat bath connection into the system under analysis, so that the connection's mechanical influence on the system is modeled within the system.
When the total energy is fixed but the internal state of the system is otherwise unknown, the appropriate description is not the canonical ensemble but the microcanonical ensemble. For systems where the particle number is variable (due to contact with a particle reservoir), the correct description is the grand canonical ensemble. In statistical physics textbooks for interacting particle systems the three ensembles are assumed to be thermodynamically equivalent: the fluctuations of macroscopic quantities around their average value become small and, as the number of particles tends to infinity, they tend to vanish. In the latter limit, called the thermodynamic limit, the average constraints effectively become hard constraints. The assumption of ensemble equivalence dates back to Gibbs and has been verified for some models of physical systems with short-range interactions and subject to a small number of macroscopic constraints. Despite the fact that many textbooks still convey the message that ensemble equivalence holds for all physical systems, over the last decades various examples of physical systems have been found for which breaking of ensemble equivalence occurs.
Properties
Free energy, ensemble averages, and exact differentials
The partial derivatives of the function F(N, V, T) give important canonical ensemble average quantities:
the average pressure is ⟨p⟩ = −∂F/∂V;
the Gibbs entropy is S = −∂F/∂T;
the partial derivative μ ≈ ∂F/∂N is approximately related to the chemical potential, although the concept of chemical equilibrium does not exactly apply to canonical ensembles of small systems;
and the average energy is ⟨E⟩ = F + ST.
Exact differential: From the above expressions, it can be seen that the function F has the exact differential dF = −S dT − ⟨p⟩ dV + μ dN.
First law of thermodynamics: Substituting the above relationship for ⟨E⟩ into the exact differential of F, an equation similar to the first law of thermodynamics is found, except with average signs on some of the quantities: d⟨E⟩ = T dS − ⟨p⟩ dV + μ dN.
Energy fluctuations: The energy in the system has uncertainty in the canonical ensemble. The variance of the energy is ⟨E²⟩ − ⟨E⟩² = k_B T² ∂⟨E⟩/∂T.
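The fluctuation relation can be verified numerically for the same kind of toy level structure used above: compute the variance of the energy directly from the Boltzmann weights and compare it with k_B T² times a finite-difference estimate of ∂⟨E⟩/∂T. The levels and temperature are again arbitrary illustrative values.

```python
# Numerical check of the fluctuation relation Var(E) = k_B * T^2 * d<E>/dT
# (energy levels and temperature are arbitrary illustrative values).
import math

K_B = 1.380649e-23
energies = [0.0, 1e-21, 2e-21, 5e-21]   # J, assumed levels

def average_energy(T):
    weights = [math.exp(-E / (K_B * T)) for E in energies]
    Z = sum(weights)
    return sum(w * E for w, E in zip(weights, energies)) / Z

def energy_variance(T):
    weights = [math.exp(-E / (K_B * T)) for E in energies]
    Z = sum(weights)
    e_avg = sum(w * E for w, E in zip(weights, energies)) / Z
    e2_avg = sum(w * E * E for w, E in zip(weights, energies)) / Z
    return e2_avg - e_avg ** 2

T, dT = 300.0, 1e-3
dE_dT = (average_energy(T + dT) - average_energy(T - dT)) / (2 * dT)  # ~ heat capacity
print("Var(E)            :", energy_variance(T))
print("k_B T^2 * d<E>/dT :", K_B * T ** 2 * dE_dT)   # should match closely
```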
Example ensembles
"We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing in not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities..." J. W. Gibbs (1903)
Boltzmann distribution (separable system)
If a system described by a canonical ensemble can be separated into independent parts (this happens if the different parts do not interact), and each of those parts has a fixed material composition, then each part can be seen as a system unto itself and is described by a canonical ensemble having the same temperature as the whole. Moreover, if the system is made up of multiple similar parts, then each part has exactly the same distribution as the other parts.
In this way, the canonical ensemble provides exactly the Boltzmann distribution (also known as Maxwell–Boltzmann statistics) for systems of any number of particles. In comparison, the justification of the Boltzmann distribution from the microcanonical ensemble only applies for systems with a large number of parts (that is, in the thermodynamic limit).
The Boltzmann distribution itself is one of the most important tools in applying statistical mechanics to real systems, as it massively simplifies the study of systems that can be separated into independent parts (e.g., particles in a gas, electromagnetic modes in a cavity, molecular bonds in a polymer).
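As a sketch of this factorization (assuming two hypothetical, non-interacting two-level parts and natural units), the joint canonical probabilities can be checked to equal products of single-part Boltzmann probabilities.

import itertools
import math

T = 1.5                      # temperature in natural units (assumption)
beta = 1.0 / T
part_levels = [0.0, 1.0]     # hypothetical energy levels of one independent part

# Single-part Boltzmann distribution.
Z_part = sum(math.exp(-beta * e) for e in part_levels)
p_part = {e: math.exp(-beta * e) / Z_part for e in part_levels}

# Canonical ensemble of the composite system: total energy is the sum of the parts.
states = list(itertools.product(part_levels, repeat=2))
Z_total = sum(math.exp(-beta * (e1 + e2)) for e1, e2 in states)

for e1, e2 in states:
    p_joint = math.exp(-beta * (e1 + e2)) / Z_total
    # The joint probability factorizes into the parts' Boltzmann probabilities.
    assert abs(p_joint - p_part[e1] * p_part[e2]) < 1e-12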
Ising model (strongly interacting system)
In a system composed of pieces that interact with each other, it is usually not possible to find a way to separate the system into independent subsystems as done in the Boltzmann distribution. In these systems it is necessary to resort to using the full expression of the canonical ensemble in order to describe the thermodynamics of the system when it is thermostatted to a heat bath. The canonical ensemble is generally the most straightforward framework for studies of statistical mechanics and even allows one to obtain exact solutions in some interacting model systems.
A classic example of this is the Ising model, which is a widely discussed toy model for the phenomena of ferromagnetism and of self-assembled monolayer formation, and is one of the simplest models that shows a phase transition. Lars Onsager famously calculated exactly the free energy of an infinite-sized square-lattice Ising model at zero magnetic field, in the canonical ensemble.
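A minimal sketch of such a direct evaluation, assuming a small one-dimensional Ising chain with free ends (a toy stand-in for the two-dimensional model solved by Onsager; coupling, field, and units are arbitrary assumptions):

import itertools
import math

def ising_energy(spins, J=1.0, h=0.0):
    # Nearest-neighbour 1D Ising chain with free ends and external field h.
    interaction = -J * sum(s1 * s2 for s1, s2 in zip(spins, spins[1:]))
    field = -h * sum(spins)
    return interaction + field

def canonical_free_energy(n_spins, T, k_B=1.0):
    # Exact F = -kT log Z by brute-force summation over all 2**n_spins states.
    beta = 1.0 / (k_B * T)
    Z = sum(math.exp(-beta * ising_energy(spins))
            for spins in itertools.product((-1, 1), repeat=n_spins))
    return -k_B * T * math.log(Z)

print(canonical_free_energy(n_spins=8, T=2.0))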
Precise expressions for the ensemble
The precise mathematical expression for a statistical ensemble depends on the kind of mechanics under consideration—quantum or classical—since the notion of a "microstate" is considerably different in these two cases. In quantum mechanics, the canonical ensemble affords a simple description since diagonalization provides a discrete set of microstates with specific energies. The classical mechanical case is more complex as it involves instead an integral over canonical phase space, and the size of microstates in phase space can be chosen somewhat arbitrarily.
Quantum mechanical
A statistical ensemble in quantum mechanics is represented by a density matrix, denoted by ρ̂. In basis-free notation, the canonical ensemble is the density matrix
ρ̂ = exp((F − Ĥ)/(k T)),
where Ĥ is the system's total energy operator (Hamiltonian), and exp() is the matrix exponential operator. The free energy F is determined by the probability normalization condition that the density matrix has a trace of one, Tr ρ̂ = 1:
e^(−F/(k T)) = Tr exp(−Ĥ/(k T)).
The canonical ensemble can alternatively be written in a simple form using bra–ket notation, if the system's energy eigenstates and energy eigenvalues are known. Given a complete basis of energy eigenstates |ψ_i⟩, indexed by i, the canonical ensemble is:
ρ̂ = Σ_i e^((F − E_i)/(k T)) |ψ_i⟩⟨ψ_i|,
where the E_i are the energy eigenvalues determined by Ĥ|ψ_i⟩ = E_i|ψ_i⟩. In other words, a set of microstates in quantum mechanics is given by a complete set of stationary states. The density matrix is diagonal in this basis, with the diagonal entries each directly giving a probability.
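A brief numerical sketch of this construction (the Hamiltonian matrix and units are arbitrary assumptions): the density matrix is assembled from the eigendecomposition, so it is diagonal in the energy eigenbasis and has unit trace.

import numpy as np

def canonical_density_matrix(H, T, k_B=1.0):
    # rho = exp(-H/kT) / Tr exp(-H/kT), built from the eigenbasis of H.
    energies, states = np.linalg.eigh(H)                     # E_i and |psi_i> (columns)
    weights = np.exp(-(energies - energies.min()) / (k_B * T))
    probs = weights / weights.sum()                          # Boltzmann probabilities
    return states @ np.diag(probs) @ states.conj().T         # sum_i p_i |psi_i><psi_i|

H = np.array([[0.0, 0.3],
              [0.3, 1.0]])          # hypothetical two-level Hamiltonian (assumption)
rho = canonical_density_matrix(H, T=0.5)
print(np.trace(rho))                # normalization: trace equals 1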
Classical mechanical
In classical mechanics, a statistical ensemble is instead represented by a joint probability density function in the system's phase space, ρ(p_1, … p_n, q_1, … q_n), where the p_1, … p_n and q_1, … q_n are the canonical coordinates (generalized momenta and generalized coordinates) of the system's internal degrees of freedom.
In a system of particles, the number of degrees of freedom n depends on the number of particles N in a way that depends on the physical situation. For a three-dimensional gas of monoatomic particles (not molecules), n = 3N. In diatomic gases there will also be rotational and vibrational degrees of freedom.
The probability density function for the canonical ensemble is
ρ = (1/(h^n C)) e^((F − E)/(k T)),
where
E is the energy of the system, a function of the phase (p_1, … q_n),
h is an arbitrary but predetermined constant with the units of energy × time, setting the extent of one microstate and providing correct dimensions to ρ,
C is an overcounting correction factor, often used for particle systems where identical particles are able to change place with each other,
F provides a normalizing factor and is also the characteristic state function, the free energy.
Again, the value of F is determined by demanding that ρ is a normalized probability density function:
e^(−F/(k T)) = ∫ (1/(h^n C)) e^(−E/(k T)) dp_1 … dq_n.
This integral is taken over the entire phase space.
In other words, a microstate in classical mechanics is a phase space region, and this region has volume h^n C. This means that each microstate spans a range of energy, however this range can be made arbitrarily narrow by choosing h to be very small. The phase space integral can be converted into a summation over microstates, once phase space has been finely divided to a sufficient degree.
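A short sketch of how the constant h and the phase-space integral enter in practice, for a single classical one-dimensional harmonic oscillator (an illustrative example with arbitrary units, not taken from the text above):

import math

def partition_function(T, m=1.0, w=1.0, h=1.0, k_B=1.0):
    # E(p, q) = p^2/(2m) + m w^2 q^2 / 2; the Gaussian integrals give
    # Z = (1/h) * integral exp(-E/kT) dp dq = 2 pi k T / (h w)  (m cancels).
    return 2.0 * math.pi * k_B * T / (h * w)

def helmholtz_free_energy(T, **kwargs):
    # F = -kT log Z is the normalizing characteristic state function.
    k_B = kwargs.get("k_B", 1.0)
    return -k_B * T * math.log(partition_function(T, **kwargs))

print(helmholtz_free_energy(2.0))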
| Physical sciences | Statistical mechanics | Physics |
1138578 | https://en.wikipedia.org/wiki/Crystal%20field%20theory | Crystal field theory | In molecular physics, crystal field theory (CFT) describes the breaking of degeneracies of electron orbital states, usually d or f orbitals, due to a static electric field produced by a surrounding charge distribution (anion neighbors). This theory has been used to describe various spectroscopies of transition metal coordination complexes, in particular optical spectra (colors). CFT successfully accounts for some magnetic properties, colors, hydration enthalpies, and spinel structures of transition metal complexes, but it does not attempt to describe bonding. CFT was developed by physicists Hans Bethe and John Hasbrouck van Vleck in the 1930s. CFT was subsequently combined with molecular orbital theory to form the more realistic and complex ligand field theory (LFT), which delivers insight into the process of chemical bonding in transition metal complexes. CFT can be complicated further by breaking assumptions made of relative metal and ligand orbital energies, requiring the use of inverted ligand field theory (ILFT) to better describe bonding.
Overview
According to crystal field theory, the interaction between a transition metal and ligands arises from the attraction between the positively charged metal cation and the negative charge on the non-bonding electrons of the ligand. The theory is developed by considering energy changes of the five degenerate d-orbitals upon being surrounded by an array of point charges consisting of the ligands. As a ligand approaches the metal ion, the electrons from the ligand will be closer to some of the d-orbitals and farther away from others, causing a loss of degeneracy. The electrons in the d-orbitals and those in the ligand repel each other due to repulsion between like charges. Thus the d-electrons closer to the ligands will have a higher energy than those further away which results in the d-orbitals splitting in energy. This splitting is affected by the following factors:
the nature of the metal ion.
the metal's oxidation state. A higher oxidation state leads to a larger splitting relative to the spherical field.
the arrangement of the ligands around the metal ion.
the coordination number of the metal (i.e. tetrahedral, octahedral...)
the nature of the ligands surrounding the metal ion. The stronger the effect of the ligands then the greater the difference between the high and low energy d groups.
The most common type of complex is octahedral, in which six ligands form the vertices of an octahedron around the metal ion. In octahedral symmetry the d-orbitals split into two sets with an energy difference, Δoct (the crystal-field splitting parameter, also commonly denoted by 10Dq for ten times the "differential of quanta") where the dxy, dxz and dyz orbitals will be lower in energy than the dz2 and dx2-y2, which will have higher energy, because the former group is farther from the ligands than the latter and therefore experiences less repulsion. The three lower-energy orbitals are collectively referred to as t2g, and the two higher-energy orbitals as eg. These labels are based on the theory of molecular symmetry: they are the names of irreducible representations of the octahedral point group, Oh (see the Oh character table). Typical orbital energy diagrams are given below in the section High-spin and low-spin.
Tetrahedral complexes are the second most common type; here four ligands form a tetrahedron around the metal ion. In a tetrahedral crystal field splitting, the d-orbitals again split into two groups, with an energy difference of Δtet. The lower energy orbitals will be dz2 and dx2-y2, and the higher energy orbitals will be dxy, dxz and dyz - opposite to the octahedral case. Furthermore, since the ligand electrons in tetrahedral symmetry are not oriented directly towards the d-orbitals, the energy splitting will be lower than in the octahedral case. Square planar and other complex geometries can also be described by CFT.
The size of the gap Δ between the two or more sets of orbitals depends on several factors, including the ligands and geometry of the complex. Some ligands always produce a small value of Δ, while others always give a large splitting. The reasons behind this can be explained by ligand field theory. The spectrochemical series is an empirically-derived list of ligands ordered by the size of the splitting Δ that they produce (small Δ to large Δ; see also this table):
I− < Br− < S2− < SCN− (S–bonded) < Cl− < NO3− < N3− < F− < OH− < C2O42− < H2O < NCS− (N–bonded) < CH3CN < py < NH3 < en < 2,2'-bipyridine < phen < NO2− < PPh3 < CN− < CO.
It is useful to note that the ligands producing the most splitting are those that can engage in metal to ligand back-bonding.
The oxidation state of the metal also contributes to the size of Δ between the high and low energy levels. As the oxidation state increases for a given metal, the magnitude of Δ increases. A V3+ complex will have a larger Δ than a V2+ complex for a given set of ligands, as the difference in charge density allows the ligands to be closer to a V3+ ion than to a V2+ ion. The smaller distance between the ligand and the metal ion results in a larger Δ, because the ligand and metal electrons are closer together and therefore repel more.
High-spin and low-spin
Ligands which cause a large splitting Δ of the d-orbitals are referred to as strong-field ligands, such as CN− and CO from the spectrochemical series. In complexes with these ligands, it is unfavourable to put electrons into the high energy orbitals. Therefore, the lower energy orbitals are completely filled before population of the upper sets starts according to the Aufbau principle. Complexes such as this are called "low spin". For example, NO2− is a strong-field ligand and produces a large Δ. The octahedral ion [Fe(NO2)6]3−, which has 5 d-electrons, would have the octahedral splitting diagram shown at right with all five electrons in the t2g level. This low spin state therefore does not follow Hund's rule.
Conversely, ligands (like I− and Br−) which cause a small splitting Δ of the d-orbitals are referred to as weak-field ligands. In this case, it is easier to put electrons into the higher energy set of orbitals than it is to put two into the same low-energy orbital, because two electrons in the same orbital repel each other. So, one electron is put into each of the five d-orbitals in accord with Hund's rule, and "high spin" complexes are formed before any pairing occurs. For example, Br− is a weak-field ligand and produces a small Δoct. So, the ion [FeBr6]3−, again with five d-electrons, would have an octahedral splitting diagram where all five orbitals are singly occupied.
In order for low spin splitting to occur, the energy cost of placing an electron into an already singly occupied orbital must be less than the cost of placing the additional electron into an eg orbital at an energy cost of Δ. As noted above, eg refers to the dz2 and dx2-y2 orbitals, which are higher in energy than the t2g in octahedral complexes. If the energy required to pair two electrons is greater than Δ, the energy cost of placing an electron in an eg orbital, high spin splitting occurs.
The crystal field splitting energy for tetrahedral metal complexes (four ligands) is referred to as Δtet, and is roughly equal to 4/9Δoct (for the same metal and same ligands). Therefore, the energy required to pair two electrons is typically higher than the energy required for placing electrons in the higher energy orbitals. Thus, tetrahedral complexes are usually high-spin.
The use of these splitting diagrams can aid in the prediction of magnetic properties of co-ordination compounds. A compound that has unpaired electrons in its splitting diagram will be paramagnetic and will be attracted by magnetic fields, while a compound that lacks unpaired electrons in its splitting diagram will be diamagnetic and will be weakly repelled by a magnetic field.
Stabilization energy
The crystal field stabilization energy (CFSE) is the stability that results from placing a transition metal ion in the crystal field generated by a set of ligands. It arises due to the fact that when the d-orbitals are split in a ligand field (as described above), some of them become lower in energy than before with respect to a spherical field known as the barycenter in which all five d-orbitals are degenerate. For example, in an octahedral case, the t2g set becomes lower in energy than the orbitals in the barycenter. As a result of this, if there are any electrons occupying these orbitals, the metal ion is more stable in the ligand field relative to the barycenter by an amount known as the CFSE. Conversely, the eg orbitals (in the octahedral case) are higher in energy than in the barycenter, so putting electrons in these reduces the amount of CFSE.
If the splitting of the d-orbitals in an octahedral field is Δoct, the three t2g orbitals are stabilized relative to the barycenter by 2/5 Δoct, and the eg orbitals are destabilized by 3/5 Δoct. As examples, consider the two d5 configurations shown further up the page. The low-spin (top) example has five electrons in the t2g orbitals, so the total CFSE is 5 x 2/5 Δoct = 2Δoct. In the high-spin (lower) example, the CFSE is (3 x 2/5 Δoct) - (2 x 3/5 Δoct) = 0 - in this case, the stabilization generated by the electrons in the lower orbitals is canceled out by the destabilizing effect of the electrons in the upper orbitals.
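A minimal sketch of this arithmetic (in units of Δoct; spin-pairing energy is not included):

def octahedral_cfse(n_t2g, n_eg):
    # Each t2g electron is stabilized by 2/5 Δoct relative to the barycenter,
    # each eg electron is destabilized by 3/5 Δoct.
    return n_t2g * (2.0 / 5.0) - n_eg * (3.0 / 5.0)

# The two d5 configurations discussed above:
print(octahedral_cfse(5, 0))   # low spin:  2.0 Δoct of stabilization
print(octahedral_cfse(3, 2))   # high spin: 0.0 (stabilization cancels out)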
Optical properties
The optical properties (details of absorption and emission spectra) of many coordination complexes can be explained by Crystal Field Theory. Often, however, the deeper colors of metal complexes arise from more intense charge-transfer excitations.
Geometries and splitting diagrams
| Physical sciences | Bond structure | Chemistry |
1139338 | https://en.wikipedia.org/wiki/Computable%20function | Computable function | Computable functions are the basic objects of study in computability theory. Computable functions are the formalized analogue of the intuitive notion of algorithms, in the sense that a function is computable if there exists an algorithm that can do the job of the function, i.e. given an input of the function domain it can return the corresponding output. Computable functions are used to discuss computability without referring to any concrete model of computation such as Turing machines or register machines. Any definition, however, must make reference to some specific model of computation but all valid definitions yield the same class of functions.
Particular models of computability that give rise to the set of computable functions are the Turing-computable functions and the general recursive functions.
According to the Church–Turing thesis, computable functions are exactly the functions that can be calculated using a mechanical (that is, automatic) calculation device given unlimited amounts of time and storage space. More precisely, every model of computation that has ever been imagined can compute only computable functions, and all computable functions can be computed by any of several models of computation that are apparently very different, such as Turing machines, register machines, lambda calculus and general recursive functions.
Before the precise definition of computable function, mathematicians often used the informal term effectively calculable. This term has since come to be identified with the computable functions. The effective computability of these functions does not imply that they can be efficiently computed (i.e. computed within a reasonable amount of time). In fact, for some effectively calculable functions it can be shown that any algorithm that computes them will be very inefficient in the sense that the running time of the algorithm increases exponentially (or even superexponentially) with the length of the input. The fields of feasible computability and computational complexity study functions that can be computed efficiently.
The Blum axioms can be used to define an abstract computational complexity theory on the set of computable functions. In computational complexity theory, the problem of determining the complexity of a computable function is known as a function problem.
Definition
Computability of a function is an informal notion. One way to describe it is to say that a function is computable if its value can be obtained by an effective procedure. With more rigor, a function f : N^k → N
is computable if and only if there is an effective procedure that, given any k-tuple x of natural numbers, will produce the value f(x). In agreement with this definition, the remainder of this article presumes that computable functions take finitely many natural numbers as arguments and produce a value which is a single natural number.
As counterparts to this informal description, there exist multiple formal, mathematical definitions. The class of computable functions can be defined in many equivalent models of computation, including
Turing machines
μ-recursive functions
Lambda calculus
Post machines (Post–Turing machines and tag machines).
Register machines
Although these models use different representations for the functions, their inputs, and their outputs, translations exist between any two models, and so every model describes essentially the same class of functions, giving rise to the opinion that formal computability is both natural and not too narrow. These functions are sometimes referred to as "recursive", to contrast with the informal term "computable", a distinction stemming from a 1934 discussion between Kleene and Gödel.
For example, one can formalize computable functions as μ-recursive functions, which are partial functions that take finite tuples of natural numbers and return a single natural number (just as above). They are the smallest class of partial functions that includes the constant, successor, and projection functions, and is closed under composition, primitive recursion, and the μ operator.
Equivalently, computable functions can be formalized as functions which can be calculated by an idealized computing agent such as a Turing machine or a register machine. Formally speaking, a partial function can be calculated if and only if there exists a computer program with the following properties:
If f(x) is defined, then the program will terminate on the input x with the value f(x) stored in the computer memory.
If f(x) is undefined, then the program never terminates on the input x (a sketch of such a program is given below).
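A small illustrative sketch of such a program (the particular search is a hypothetical choice, not drawn from the text): it halts with a value exactly when the value is defined, and otherwise runs forever.

def is_prime(m):
    # Trial-division primality test.
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def next_twin_prime(n):
    # Partial function in the above sense: the unbounded search returns the
    # least p > n with p and p + 2 both prime, and never halts if no such p
    # exists (whether one always exists is the open twin prime conjecture).
    p = n + 1
    while True:
        if is_prime(p) and is_prime(p + 2):
            return p
        p += 1

print(next_twin_prime(100))   # 101 (101 and 103 are both prime)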
Characteristics of computable functions
The basic characteristic of a computable function is that there must be a finite procedure (an algorithm) telling how to compute the function. The models of computation listed above give different interpretations of what a procedure is and how it is used, but these interpretations share many properties. The fact that these models give equivalent classes of computable functions stems from the fact that each model is capable of reading and mimicking a procedure for any of the other models, much as a compiler is able to read instructions in one computer language and emit instructions in another language.
Enderton [1977] gives the following characteristics of a procedure for computing a computable function; similar characterizations have been given by Turing [1936], Rogers [1967], and others.
"There must be exact instructions (i.e. a program), finite in length, for the procedure." Thus every computable function must have a finite program that completely describes how the function is to be computed. It is possible to compute the function by just following the instructions; no guessing or special insight is required.
"If the procedure is given a k-tuple x in the domain of f, then after a finite number of discrete steps the procedure must terminate and produce f(x)." Intuitively, the procedure proceeds step by step, with a specific rule to cover what to do at each step of the calculation. Only finitely many steps can be carried out before the value of the function is returned.
"If the procedure is given a k-tuple x which is not in the domain of f, then the procedure might go on forever, never halting. Or it might get stuck at some point (i.e., one of its instructions cannot be executed), but it must not pretend to produce a value for f at x." Thus if a value for f(x) is ever found, it must be the correct value. It is not necessary for the computing agent to distinguish correct outcomes from incorrect ones because the procedure is defined as correct if and only if it produces an outcome.
Enderton goes on to list several clarifications of these 3 requirements of the procedure for a computable function:
The procedure must theoretically work for arbitrarily large arguments. It is not assumed that the arguments are smaller than the number of atoms in the Earth, for example.
The procedure is required to halt after finitely many steps in order to produce an output, but it may take arbitrarily many steps before halting. No time limitation is assumed.
Although the procedure may use only a finite amount of storage space during a successful computation, there is no bound on the amount of space that is used. It is assumed that additional storage space can be given to the procedure whenever the procedure asks for it.
To summarise, based on this view a function is computable if its value can be obtained by some procedure satisfying the three requirements above.
The field of computational complexity studies functions with prescribed bounds on the time and/or space allowed in a successful computation.
Computable sets and relations
A set S of natural numbers is called computable (synonyms: recursive, decidable) if there is a computable, total function f such that for any natural number n, f(n) = 1 if n is in S and f(n) = 0 if n is not in S.
A set S of natural numbers is called computably enumerable (synonyms: recursively enumerable, semidecidable) if there is a computable function f such that for each number n, f(n) is defined if and only if n is in the set. Thus a set is computably enumerable if and only if it is the domain of some computable function. The word enumerable is used because the following are equivalent for a nonempty subset S of the natural numbers:
S is the domain of a computable function.
S is the range of a total computable function. If S is infinite then the function can be assumed to be injective.
If a set S is the range of a function f then the function can be viewed as an enumeration of S, because the list f(0), f(1), ... will include every element of S.
Because each finitary relation on the natural numbers can be identified with a corresponding set of finite sequences of natural numbers, the notions of computable relation and computably enumerable relation can be defined from their analogues for sets.
Formal languages
In computability theory in computer science, it is common to consider formal languages. An alphabet is an arbitrary set. A word on an alphabet is a finite sequence of symbols from the alphabet; the same symbol may be used more than once. For example, binary strings are exactly the words on the alphabet {0, 1}. A language is a subset of the collection of all words on a fixed alphabet. For example, the collection of all binary strings that contain exactly 3 ones is a language over the binary alphabet.
A key property of a formal language is the level of difficulty required to decide whether a given word is in the language. Some coding system must be developed to allow a computable function to take an arbitrary word in the language as input; this is usually considered routine. A language is called computable (synonyms: recursive, decidable) if there is a computable function f such that for each word w over the alphabet, f(w) = 1 if the word is in the language and f(w) = 0 if the word is not in the language. Thus a language is computable just in case there is a procedure that is able to correctly tell whether arbitrary words are in the language.
A language is computably enumerable (synonyms: recursively enumerable, semidecidable) if there is a computable function f such that f(w) is defined if and only if the word w is in the language. The term enumerable has the same etymology as in computably enumerable sets of natural numbers.
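A short sketch using the "exactly three ones" language mentioned above (for a decidable language the semi-decider is artificial, but it illustrates the two definitions):

def decides_three_ones(word):
    # Decider: a total computable function, 1 if the word is in the language, else 0.
    return 1 if word.count("1") == 3 else 0

def semi_decides_three_ones(word):
    # Semi-decider: defined (halts) exactly on words in the language,
    # and deliberately loops forever on every other word.
    if word.count("1") == 3:
        return 1
    while True:
        pass

print(decides_three_ones("010110"))   # 1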
Examples
The following functions are computable:
Each function with a finite domain; e.g., any finite sequence of natural numbers.
Each constant function f : N^k → N, f(n_1, ..., n_k) := n.
Addition f : N^2 → N, f(n_1, n_2) := n_1 + n_2.
The greatest common divisor of two numbers
A Bézout coefficient of two numbers
The smallest prime factor of a number
If f and g are computable, then so are: f + g, f * g, the composition f ∘ g if f is unary, max(f, g), min(f, g), and many more combinations (a few of the listed functions are sketched below).
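Minimal sketches of three of the listed functions (standard algorithms; the helper names are arbitrary):

def gcd(a, b):
    # Greatest common divisor by the Euclidean algorithm.
    while b:
        a, b = b, a % b
    return a

def bezout(a, b):
    # Extended Euclidean algorithm: returns (x, y) with a*x + b*y == gcd(a, b).
    x0, y0, x1, y1 = 1, 0, 0, 1
    while b:
        q, a, b = a // b, b, a % b
        x0, x1 = x1, x0 - q * x1
        y0, y1 = y1, y0 - q * y1
    return x0, y0

def smallest_prime_factor(n):
    # Smallest prime factor of n >= 2 by trial division.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n

print(gcd(240, 46), bezout(240, 46), smallest_prime_factor(91))   # 2 (-9, 47) 7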
The following examples illustrate that a function may be computable though it is not known which algorithm computes it.
The function f such that f(n) = 1 if there is a sequence of at least n consecutive fives in the decimal expansion of π, and f(n) = 0 otherwise, is computable. (The function f is either the constant 1 function, which is computable, or else there is a k such that f(n) = 1 if n < k and f(n) = 0 if n ≥ k. Every such function is computable. It is not known whether there are arbitrarily long runs of fives in the decimal expansion of π, so we don't know which of those functions is f. Nevertheless, we know that the function f must be computable.)
Each finite segment of an uncomputable sequence of natural numbers (such as the Busy Beaver function Σ) is computable. E.g., for each natural number n, there exists an algorithm that computes the finite sequence Σ(0), Σ(1), Σ(2), ..., Σ(n) — in contrast to the fact that there is no algorithm that computes the entire Σ-sequence, i.e. Σ(n) for all n. Thus, "Print 0, 1, 4, 6, 13" is a trivial algorithm to compute Σ(0), Σ(1), Σ(2), Σ(3), Σ(4); similarly, for any given value of n, such a trivial algorithm exists (even though it may never be known or produced by anyone) to compute Σ(0), Σ(1), Σ(2), ..., Σ(n).
Church–Turing thesis
The Church–Turing thesis states that any function computable from a procedure possessing the three properties listed above is a computable function. Because these three properties are not formally stated, the Church–Turing thesis cannot be proved. The following facts are often taken as evidence for the thesis:
Many equivalent models of computation are known, and they all give the same definition of computable function (or a weaker version, in some instances).
No stronger model of computation which is generally considered to be effectively calculable has been proposed.
The Church–Turing thesis is sometimes used in proofs to justify that a particular function is computable by giving a concrete description of a procedure for the computation. This is permitted because it is believed that all such uses of the thesis can be removed by the tedious process of writing a formal procedure for the function in some model of computation.
Provability
Given a function (or, similarly, a set), one may be interested not only if it is computable, but also whether this can be proven in a particular proof system (usually first order Peano arithmetic). A function that can be proven to be computable is called provably total.
The set of provably total functions is recursively enumerable: one can enumerate all the provably total functions by enumerating all their corresponding proofs, that prove their computability. This can be done by enumerating all the proofs of the proof system and ignoring irrelevant ones.
Relation to recursively defined functions
In a function defined by a recursive definition, each value is defined by a fixed first-order formula of other, previously defined values of the same function or other functions, which might be simply constants. A subset of these is the primitive recursive functions. Another example is the Ackermann function, which is recursively defined but not primitive recursive.
For definitions of this type to avoid circularity or infinite regress, it is necessary that recursive calls to the same function within a definition be to arguments that are smaller in some well-partial-order on the function's domain. For instance, for the Ackermann function A, whenever the definition of A(m, n) refers to A(p, q), then (p, q) < (m, n) with respect to the lexicographic order on pairs of natural numbers. In this case, and in the case of the primitive recursive functions, well-ordering is obvious, but some "refers-to" relations are nontrivial to prove as being well-orderings. Any function defined recursively in a well-ordered way is computable: each value can be computed by expanding a tree of recursive calls to the function, and this expansion must terminate after a finite number of calls, because otherwise Kőnig's lemma would lead to an infinite descending sequence of calls, violating the assumption of well-ordering.
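A short sketch of the Ackermann function mentioned above; every recursive call uses a lexicographically smaller argument pair, which is what guarantees termination:

def ackermann(m, n):
    # Computable and total, but not primitive recursive; each call below
    # is to a pair strictly smaller than (m, n) in the lexicographic order.
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))   # 9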
Total functions that are not provably total
In a sound proof system, every provably total function is indeed total, but the converse is not true: in every first-order proof system that is strong enough and sound (including Peano arithmetic), one can prove (in another proof system) the existence of total functions that cannot be proven total in the proof system.
If the total computable functions are enumerated via the Turing machines that produce them, then the above statement can be shown, if the proof system is sound, by a similar diagonalization argument to that used above, using the enumeration of provably total functions given earlier. One uses a Turing machine that enumerates the relevant proofs, and for every input n calls fn(n) (where fn is the n-th function by this enumeration) by invoking the Turing machine that computes it according to the n-th proof. Such a Turing machine is guaranteed to halt if the proof system is sound.
Uncomputable functions and unsolvable problems
Every computable function has a finite procedure giving explicit, unambiguous instructions on how to compute it. Furthermore, this procedure has to be encoded in the finite alphabet used by the computational model, so there are only countably many computable functions. For example, functions may be encoded using a string of bits (the alphabet {0, 1}).
The real numbers are uncountable so most real numbers are not computable. See computable number. The set of finitary functions on the natural numbers is uncountable so most are not computable. Concrete examples of such functions are Busy beaver, Kolmogorov complexity, or any function that outputs the digits of a noncomputable number, such as Chaitin's constant.
Similarly, most subsets of the natural numbers are not computable. The halting problem was the first such set to be constructed. The Entscheidungsproblem, proposed by David Hilbert, asked whether there is an effective procedure to determine which mathematical statements (coded as natural numbers) are true. Turing and Church independently showed in the 1930s that this set of natural numbers is not computable. According to the Church–Turing thesis, there is no effective procedure (with an algorithm) which can perform these computations.
Extensions of computability
Relative computability
The notion of computability of a function can be relativized to an arbitrary set of natural numbers A. A function f is defined to be computable in A (equivalently A-computable or computable relative to A) when it satisfies the definition of a computable function with modifications allowing access to A as an oracle. As with the concept of a computable function, relative computability can be given equivalent definitions in many different models of computation. This is commonly accomplished by supplementing the model of computation with an additional primitive operation which asks whether a given integer is a member of A. We can also talk about f being computable in g by identifying g with its graph.
Higher recursion theory
Hyperarithmetical theory studies those sets that can be computed from a computable ordinal number of iterates of the Turing jump of the empty set. This is equivalent to sets defined by both a universal and existential formula in the language of second order arithmetic and to some models of Hypercomputation. Even more general recursion theories have been studied, such as E-recursion theory in which any set can be used as an argument to an E-recursive function.
Hyper-computation
Although the Church–Turing thesis states that the computable functions include all functions with algorithms, it is possible to consider broader classes of functions that relax the requirements that algorithms must possess. The field of Hypercomputation studies models of computation that go beyond normal Turing computation.
| Mathematics | Discrete mathematics | null |
1139347 | https://en.wikipedia.org/wiki/Titanosauria | Titanosauria | Titanosaurs (or titanosaurians; members of the group Titanosauria) were a diverse group of sauropod dinosaurs, including genera from all seven continents. The titanosaurs were the last surviving group of long-necked sauropods, with taxa still thriving at the time of the extinction event at the end of the Cretaceous. This group includes some of the largest land animals known to have ever existed, such as Patagotitan, estimated at long with a weight of , and the comparably-sized Argentinosaurus and Puertasaurus from the same region.
The group's name alludes to the mythological Titans of ancient Greek mythology, via the type genus (now considered a nomen dubium) Titanosaurus. Together with the brachiosaurids and relatives, titanosaurs make up the larger sauropod clade Titanosauriformes. Titanosaurs have long been a poorly-known group, and the relationships between titanosaur species are still not well-understood.
Fossil record
Due to the near-global distribution of titanosaurs during the Cretaceous, titanosaur fossils have been found on every continent, including Antarctica. However, titanosaurs have the least complete fossil record of any major sauropodomorph group. No complete titanosaur skeletons are known, and many species are only known from a few bones. Titanosaur skulls are especially rare. Though fragmentary cranial remains are known for several titanosaur genera, nearly complete skulls have been described for only four: Nemegtosaurus, Rapetosaurus, Sarmientosaurus, and Tapuiasaurus. As is the case in most other sauropod groups, there are few titanosaur specimens with complete necks preserving all of the cervical vertebrae in sequence. Only three complete titanosaur necks are known: the holotype of Futalognkosaurus and two undescribed specimens from Argentina. A fourth specimen, of an unidentified titanosaur from Brazil, preserves a nearly complete neck, with only the atlas, the tiny vertebra forming the joint between the skull and neck, missing. Only five titanosaur specimens preserve complete, articulated hind feet. This incompleteness is especially significant for giant titanosaurs, which are generally known from disarticulated and fragmentary remains.
Titanosaurs are one of the few groups of dinosaurs for which fossil eggs are known. The fossil site of Auca Mahuevo preserves a titanosaur nesting ground. Some titanosaur eggs have been found containing fossil embryos, which even preserve fossil skin. These fossil embryos are among the few titanosaur specimens to preserve complete skulls.
Description
Titanosauria have the largest range of body size of any sauropod clade, and include both the largest known sauropods and some of the smallest. One of the largest titanosaurs, Patagotitan, had a body mass estimated to be , whereas one of the smallest, Magyarosaurus, had a body mass of approximately . Even relatively closely related titanosaurs could have very different body sizes, as the small rinconsaurs were closely related to the gigantic lognkosaurs. Fossils from perhaps the largest dinosaur ever found were discovered in 2021 in the Neuquén Province of northwest Patagonia, Argentina. It is believed that they are from a titanosaur. Some of the smallest titanosaurs, such as Magyarosaurus, inhabited Europe, which was largely made up of islands during the Cretaceous, and were likely island dwarfs. Another taxon of tiny titanosaurs, Ibirania, lived in a non-insular context in Upper Cretaceous Brazil, and is an example of nanism resulting from other ecological pressures.
Head and neck
The heads of titanosaurs are poorly known. However, several different cranial morphologies are apparent. In some species, such as Sarmientosaurus, the head resembled that of brachiosaurids. In others, such as Rapetosaurus and Nemegtosaurus, the head resembled that of diplodocids. In some titanosaurs, the skull was especially diplodocid-like due to square-shaped jaws; the titanosaur Antarctosaurus is especially similar to the rebbachisaurid Nigersaurus. Titanosaurs had small heads, even when compared with other sauropods. The head was also wide, similar to the heads of Camarasaurus and Brachiosaurus, though somewhat more elongated. Titanosaurian nostrils were large ("macronarian") and all had crests formed by the nasal bones. Their teeth were either somewhat spatulate (spoon-like) or like pegs or pencils, but were always very small.
Titanosaur necks were of average length for sauropods, and their tails were whip-like though not as long as in the diplodocids. While the pelvis was slimmer than some sauropods, the pectoral (chest) area was much wider, giving them a uniquely "wide-legged" stance. As a result, the fossilized trackways of titanosaurs are distinctly broader than other sauropods. Their forelimbs were also stocky, and often longer than their hind limbs. Unlike other sauropods, some titanosaurs had no digits, walking only on horseshoe-shaped "stumps" made up of the columnar metacarpal bones. Their vertebrae (back bones) were solid (not hollowed-out), which may be a reversal to more basal saurischian characteristics. Their spinal column was relatively flexible, likely making them more agile than other sauropods, though at the expense of rearing on their hind legs compared to the Diplodocoids. One of the most characteristic features shared by most titanosaurs were their procoelous caudal vertebrae, with ball-and-socket articulations between the vertebral centra.
Torso and limbs
The dorsal vertebrae of titanosaurs show multiple derived features among sauropods. Similarly to the Rebbachisauridae, titanosaurs lost the hyposphene-hypantrum articulations, a set of surfaces between vertebrae that prevent additional rotation of the bones. Andesaurus, one of the most basal titanosaurs, shows a normal hyposphene. The same area is reduced in Argentinosaurus to only two ridges, and is fully absent in taxa like Opisthocoelicaudia and Saltasaurus. Both Argentinosaurus and Epachthosaurus bear similar intermediate "hyposphenal ridges", which suggests they represent a more primitive form of dorsal vertebrae.
Sauropod hands already are highly derived from other dinosaurs, being reduced into columnar metacarpals and blocky phalanges with fewer claws. However, titanosaurs evolved the manus even further, completely losing the phalanges and heavily modifying the metacarpals. Argyrosaurus is the only titanosaur known to possess carpals. Other taxa like Epachthosaurus show a reduction of phalanges to one or two bones. Opisthocoelicaudia shows even more reduction of the hand than other titanosaurs, with both carpals and phalanges completely absent. However, Diamantinasaurus, while lacking carpals, preserves a manual formula of , including a thumb claw and phalanges on all other digits. This, coupled with the preservation of a single phalanx on digit IV of Epachthosaurus and potentially Opisthocoelicaudia (further study is necessary), shows that preservation biases may be responsible for the lack of hand phalanges in these taxa. This suggests that Alamosaurus, Neuquensaurus, Saltasaurus and Rapetosaurus - all known from imperfect or disarticulated remains previously associated with a lack of phalanges - may have had phalanges but lost them after death.
Titanosaurs have a poor fossil record of their pedes (feet), only being complete in five definitive titanosaurs. Among these, Notocolossus is the largest, and also has the most specialized pes: like all titanosaurs, its pes is composed of short, thick metatarsals of approximately the same lengths; however, metatarsals I and V are notably more robust than in other taxa.
Integument
From skin impressions found with fossils, it has been determined that the skin of many titanosaurs was armored with a small mosaic of small, bead-like scales surrounding larger scales. While most titanosaurs were very large animals, many were fairly average in size compared to other giant dinosaurs. Some island-dwelling dwarf titanosaurs, such as Magyarosaurus, were probably the result of allopatric speciation and insular dwarfism.
Some titanosaurs had osteoderms. Osteoderms were first confirmed in the genus Saltasaurus but are now known to have been present in a variety of titanosaurs within the clade Lithostrotia. The exact arrangement of osteoderms on the body of a titanosaur is not known, but some paleontologists consider it likely that the osteoderms were arranged in two parallel rows on the animal's back, an arrangement similar to the plates of stegosaurs. Several other arrangements have been proposed, such as a single row along the midline, and it is possible that different species had different arrangements. The osteoderms were certainly far more sparse than those of ankylosaurs, and did not completely cover the back in scutes. Because of their sparse arrangement, it was unlikely that they served a significant role in defense. However, they may have played an important role in nutrient storage for titanosaurs living in highly seasonal climates and for female titanosaurs laying eggs. Osteoderms were present on both large and small species, so they were not solely used by smaller species as protection against predators. New evidence published in 2021 suggests there were indeed some defensive purposes in titanosaur osteoderms; simulated bite marks from both baurusuchid crocodylomorphs and abelisaurids on titanosaurid osteoderms suggest they could be useful for protecting the animals in addition to functioning in mineral storage.
Classification
Titanosaurs are classified as sauropod dinosaurs. This highly diverse group forms the dominant clade of Cretaceous sauropods. Within Sauropoda, titanosaurs were once classified as close relatives of Diplodocidae due to their shared characteristic of narrow teeth, but this is now known to be the result of convergent evolution. Titanosaurs are now known to be most closely related to euhelopodids and brachiosaurids; together they form a clade named Titanosauriformes.
For much of the 20th century, most known species of titanosaurs were classified in the family Titanosauridae, which is no longer in widespread use. Titanosauria was first proposed in 1993 as a taxon to encompass titanosaurids and their close relatives. It has been phylogenetically defined as the clade composed of the most recent common ancestor of Saltasaurus and Andesaurus and all of its descendants. The relationships of species within Titanosauria remain largely unresolved, and it is considered one of the most poorly-understood areas of dinosaur classification. One of the few areas of agreement is that the majority of titanosaurs except Andesaurus and some other basal species form a clade called Lithostrotia, which some researchers consider equivalent to the deprecated Titanosauridae. Lithostrotians include titanosaurs such as Alamosaurus, Isisaurus, Malawisaurus, Rapetosaurus, and Saltasaurus.
Early history
Titanosaurus indicus was first named by British paleontologist Richard Lydekker in 1877, as a new taxon of dinosaur based on two caudals and a femur collected on different occasions at the same location in India. While it was later given a position as a sauropod within Cetiosauridae by Lydekker in 1888, he named the new sauropod family Titanosauridae for the genus in 1893, which included only Titanosaurus and Argyrosaurus, united by caudals, presacrals, a lack of pleurocoels and open chevrons. Following this, Austro-Hungarian paleontologist Franz Nopcsa reviewed reptile genera in 1928, and provided a short classification of Sauropoda, where he placed the Titanosaurinae (a reranking of Lydekker's Titanosauridae) in Morosauridae, and included the genera Titanosaurus, Hypselosaurus and Macrurosaurus because they all had strongly procoelous caudals. German paleontologist Friedrich von Huene provided a significant revision of Titanosauridae the following year in 1929, where he reviewed the dinosaurs of Cretaceous Argentina, and named multiple new genera. Huene included multiple species of Titanosaurus from India, England, France, Romania, Madagascar and Argentina, Hypselosaurus and Aepisaurus from France, Macrurosaurus from England, Alamosaurus from the United States, and Argyrosaurus, Antarctosaurus, and Laplatasaurus from Argentina. The material between them represented almost all regions of the skeleton, which showed they were derived sauropods that Huene interpreted as closest to Pleurocoelus among the various non-titanosaurid genera.
For his 1986 thesis, Argentinian paleontologist Jaime Powell described and classified many new genera of South American titanosaurs. Using the family Titanosauridae to include them all, he grouped the genera into Titanosaurinae, Saltasaurinae, Antarctosaurinae, Argyrosaurinae and Titanosauridae indet. Titanosaurinae included Titanosaurus and the new genus Aeolosaurus, united by multiple features of the caudal vertebrae; the new clade Saltasaurinae was created to include Saltasaurus and the new genus Neuquensaurus, united by very distinct dorsals, caudals, and ilia; the new clade Antarctosaurinae was created to include Antarctosaurus, distinguished by large size, a different form of braincase, more elongate girdle bones, and more robust limb bones; and Argyrosaurinae was created for Argyrosaurus, bearing a more robust forelimb and hand and more primitive dorsals. The new genus Epachthosaurus was named for a more basal titanosaurid classified as Titanosauridae indet. along with unnamed specimens, Clasmodosaurus and Campylodoniscus.
John Stanton McIntosh provided a synopsis of sauropod relationships in 1990, using Titanosauridae as the group to contain all taxa like previous authors. Opisthocoelicaudia was placed in Opisthocoelicaudiinae within Camarasauridae, following its original description and not later works, and Nemegtosaurus and Quaesitosaurus were placed within Dicraeosaurinae. Titanosauridae included many previously named genera, plus taxa like Tornieria and Janenschia. Saltasaurus included the species previously known as Titanosaurus australis and T. robustus, which were named Neuquensaurus by Powell in 1986. McIntosh provided a large diagnosis of the family: "dorsals with irregularly shaped pleurocoels and spines directed strongly backward; transverse processes directed dorsally as well as laterally, very robust in shoulder region; a second dorsosacral, its rib fused to ilium; caudals strongly procoelous with a prominent ball on distal end of centrum throughout tail; caudal arches on front half of centrum; sternal plates large; preacetabular process of ilium swept outward to become almost horizontal", but stressed that the relationships of titanosaurids to other sauropod groups couldn't be determined due to a lack of cranial material.
A brief review of putative titanosaurids from Europe was authored by Jean Le Loeuff in 1993, and covered the supposed genera known so far. The Barremian (middle Early Cretaceous) species Titanosaurus valdensis, named decades previously by Huene, was kept as the oldest of the titanosaurids and given the new genus name Iuticosaurus. The French taxon Aepisaurus was removed from the family and placed in undetermined Sauropoda. Macrurosaurus was considered a chimaera of titanosaurid and non-titanosaurid material because of the presence of both procoelous and amphicoelous caudals. Huene's species Titanosaurus lydekkeri was left as a nomen dubium, but left within Titanosauridae. Maastrichtian fossils from France and Spain were removed from Hypselosaurus and Titanosaurus, with Hypselosaurus being declared dubious like T. lydekkeri. The variety of Romanian fossils named as Magyarosaurus by Huene were also moved into the same species again, M. dacus as originally named by Nopcsa.
Titanosauria named
José Bonaparte and Rodolfo Coria in 1993 concluded that a new clade of derived sauropods was necessary because Argentinosaurus, Andesaurus and Epachthosaurus were distinct from Titanosauridae as they possessed hyposphene-hypantrum articulations, but were still very closely related to the titanosaurids. The taxa that possessed the articulations were united within the new family Andesauridae, and the two families were grouped together within the new clade Titanosauria. The titanosaurs were diagnosed by possessing small pleurocoels centered within an anteroposteriorly elongate depression and the presence of two well defined depressions on the posterior face of the neural arch. The entire group was compared favourably with cetiosaurids like Patagosaurus and Volkheimeria.
Overlooking the naming of Titanosauria, Paul Upchurch in 1995 named the clade Titanosauroidea, to include Opisthocoelicaudia and the more derived Titanosauridae (Malawisaurus, Alamosaurus and Saltasaurus). United by: caudals with anteriorly-shifted neural spines, extremely robust forearm bones, a prominent concavity on the ulna for articulation with the humerus, a laterally flared and flattened ilium, and a less robust pubis; Upchurch considered the clade sister taxon to Diplodocoidea, because of their shared dental anatomy, although he noted that peg-like teeth might have been independently evolved. This was followed up by Upchurch's 1998 study on sauropod phylogenetics, which additionally recovered Phuwiangosaurus and Andesaurus within Titanosauroidea and resolved Opisthocoelicaudia as the sister of Saltasaurus instead of the most basal titanosauroid. This result places Titanosauroidea in a group with Camarasaurus and Brachiosaurus, although Nemegtosauridae (Nemegtosaurus and Quaesitosaurus) was still classified as the basalmost family of diplodocoids. Upchurch chose to use Titanosauroidea as a replacement name for Titanosauria due to the recommended use of Linnean taxonomy and ranks.
In 1997, Leonardo Salgado et al. published a phylogenetic study on Titanosauriformes, including relationships within Titanosauria. They provided a definition for the clade of "including the most recent common ancestor of Andesaurus delgadoi and Titanosauridae and all of its descendants". Titanosauria resolved including the same two subclades as Bonaparte & Coria (1993), where Andesauridae was monotypic, only including the name genus, and Titanosauridae was all other titanosaurs. Titanosauria was additionally rediagnosed, with eye-shaped pleurocoels, forked infradiapophyseal , centro-parapophyseal laminae, procoelous anterior caudals, and a significantly longer pubis than ischium. Titanosauridae was less strongly defined because of the polytomy between Malawisaurus and Epachthosaurus, so some diagnostic features couldn't be resolved. Saltasaurinae was defined as the most recent ancestor of Neuquensaurus, Saltasaurus and its descendants, and diagnosed by short cervical , vertically compressed anterior caudals, and a posteriorly shifted anterior caudal neural spine.
Contributing additional work to the systematics of titanosaurs, Spanish paleontologist José Sanz et al. published an additional study in 1999, utilizing both the names Titanosauria and Titanosauroidea in displaying their results. Similar to Upchurch (1995), Sanz et al. recovered Opisthocoelicaudia as a titanosauroid outside Titanosauria, while Titanosauria was redefined to include only the taxa classified by their study. Eutitanosauria was proposed as a name for the titanosaurs more derived than Epachthosaurus, and noted the presence of osteoderms as a probable synapomorphy of this clade. Aeolosaurus, Alamosaurus, Ampelosaurus and Magyarosaurus were looked at using their character list, but were considered too incomplete to add to the final study.
Argentinian paleontologist Jaime Powell published his 1986 thesis in 2003, with revisions to bring his old work up to date, including the addition of more phylogenetics and the recognition of Titanosauria as a clade name. Using the datamatrix of Sanz et al. (1999) and modifying it to include additional taxa and some character changes, Powell found that titanosaurs formed mostly a single gradual radiation beginning with Epachthosaurus as the most basal titanosaur, and Ampelosaurus and Isisaurus as the most derived. Titanosauroidea (following Upchurch 1995), was distinguished by pre- and post-spinal laminae in anterior caudals, a laterally flared ilium, a lateral expansion of the upper femur, and strongly opisthocoelous posterior dorsals. Less inclusive, Titanosauria was diagnosed by horizontally facing dorsal , prominent procoelous anterior caudals, and a ridge on the sternal plates. Within Titanosauria, Eutitanosauria was characterized by the absence of a hyposphene-hypantrum, no femoral fourth trochanter, and osteoderms. A small clade of Alamosaurus, Lirainosaurus and the "Peirópolis titanosaur" (Trigonosaurus) was resolved, and diagnosed by only a rotation of the tibia so the proximal end is perpendicular to the distal end. More derived clades, while resolved, were only weakly supported, or characterized by reversions of diagnostic traits of larger groups (below and left).
Cladograms: Powell (2003); Curry-Rogers & Forster (2001).
Rapetosaurus was described in 2001 by Kristina Curry-Rogers and Catherine Forster, who additionally provided a new phylogenetic analysis of Titanosauriformes (above and right). Titanosauria was strongly supported, distinguished by up to 20 characters depending on unknown traits in basal taxa. Similarly, Saltasaurinae was characterised by up to 16 traits, and the clade of Rapetosaurus and related taxa possessed four unique features. Nemegtosaurus and Quaesitosaurus were resolved within Titanosauria for the first time, after being placed in Diplodocoidea by multiple other analyses, because Rapetosaurus provided the first significant titanosaur cranial material with associated postcrania. All three genera were resolved in a clade together, although Curry-Rogers & Forster noted that it was possible the group was only resolved because no other titanosaurs had comparable cranial material. Opisthocoelicaudia was also nested deeply in Saltasaurinae, though a further investigation of titanosaur interrelationships was proposed.
American paleontologist Jeff Wilson presented another revision of overall sauropod phylogeny in 2002, resolving strong support for most groups, and a similar result to Upchurch (1998) although with Euhelopus closest to titanosaurs instead of outside Neosauropoda. More internal clades were resolved for Titanosauria, with Nemegtosaurus and Rapetosaurus united within Nemegtosauridae, and Saltasauridae including two subfamilies, Opisthocoelicaudiinae and Saltasaurinae. Saltasauridae was defined as a node-stem triplet, where everything descended from the common ancestor of Opisthocoelicaudia and Saltasaurus was within Saltasauridae, and the subfamilies Saltasaurinae and Opisthocoelicaudiinae were for every taxon on one branch of the saltasaurid tree or the other.
Wilson and Paul Upchurch followed this study up in 2003 with a significant revision of the type genus Titanosaurus, and revisited all the material that had been assigned to the genus while reviewing titanosaur inter-relationships. Because they found Titanosaurus to be a dubious name, they proposed that Linnaean-named groups Titanosauridae and Titanosauroidea should be considered invalid as well. Wilson & Upchurch (2003) supported the definition of Salgado et al. (1997) for Titanosauria, since it was oldest and most similar to the original content of the group when named by Bonaparte & Coria (1993). Lithostrotia (Upchurch et al. 2004) was defined to be Malawisaurus and all more derived titanosaurs, and the clade Eutitanosauria (Sanz et al. 1999) was considered a possible synonym of Saltasauridae. Wilson & Upchurch (2003) presented a reduced cladogram of Titanosauria, including only the most commonly-analyzed taxa from previous studies, resulting in a tree similar to that of Wilson (2002) but with Rapetosaurus and Nemegtosaurus excluded and Epachthosaurus included. Alamosaurus and Opisthocoelicaudia were united within Opisthocoelicaudiinae, Neuquensaurus and Saltasaurus formed Saltasaurinae, and Isisaurus placed as the next most derived titanosaurid.
At the same time as Wilson & Upchurch's redescription of the species of Titanosaurus, Salgado (2003) addressed the potential invalidity of the family Titanosauridae and redefined the internal clades of Titanosauria. Titanosauria was defined as more inclusive than Titanosauroidea, contrasting with the earlier usage of Upchurch (1995) and Sanz et al. (1999), as all taxa in Somphospondyli closer to Saltasaurus than Euhelopus. In order to create additional stability, Salgado also defined Andesauroidea for only Andesaurus, as every titanosaur closer to that genus than Saltasaurus, and also its opposite, Titanosauroidea, as every titanosaur closer to Saltasaurus than Andesaurus. Next most inclusive, Salgado revitalised Titanosauridae to include everything descended from the ancestor of Epachthosaurus and Saltasaurus, and, to replace the node-stem triplet of Saltasauridae, defined the clades Epachthosaurinae and Eutitanosauria as all titanosaurids closer to Epachthosaurus than Saltasaurus and all those closer to Saltasaurus than Epachthosaurus, respectively. Saltasaurinae and Opisthocoelicaudiinae were retained with their original definitions, but Lithostrotia was considered a synonym of Titanosauridae, and Titanosaurinae was considered a paraphyletic group of unrelated titanosaurids.
Following the clade definitions proposed in previous Salgado studies, Bernardo González-Riga published two papers in 2003 describing new taxa in Titanosauria: Mendozasaurus, and Rinconsaurus (with Jorge O. Calvo). In both studies, the new taxa formed clades within Titanosauridae, although neither clade was named, and new diagnostic features were proposed for the family. For Mendozasaurus, the new genus grouped with Malawisaurus as basal within Titanosauridae, but because of the features of the caudal vertebrae in these basal taxa, González-Riga recommended revising the diagnosis of the family instead of changing its content. The caudals of Rinconsaurus also suggested that procoelous caudals were no longer diagnostic, because in the tail of Rinconsaurus the vertebrae regularly changed their articular surfaces, with procoelous caudals interspersed with amphicoelous, opisthocoelous and biconvex vertebrae. Rinconsaurus was then included in Aeolosaurini, a clade named the following year by Aldirene Franco-Rosas et al. containing everything closer to Aeolosaurus and Gondwanatitan than Saltasaurus or Opisthocoelicaudia. Only the three genera and various intermediate specimens were included in Aeolosaurini in their 2004 paper, with the tribe being considered to be within Saltasaurinae.
The second edition of The Dinosauria, published in 2004, included newly described titanosaurs and other taxa reidentified as titanosaurs. Written by Upchurch, Paul Barrett and Peter Dodson, a review of Sauropoda included a more expansive Titanosauria for sauropods more derived than brachiosaurids. Titanosauria, defined as everything closer to Saltasaurus than Brachiosaurus, included a very large variety of taxa, and the new clade Lithostrotia was named for a large number of more derived taxa, although Nemegtosauridae was placed in Diplodocoidea following earlier publications of Upchurch. Lithostrotia adopted the distinguishing feature of strongly procoelous caudals, previously used for Titanosauria.
New phylogenetic frameworks
In 2005, Curry-Rogers proposed a new phylogenetic analysis that focused on the inter-relationships of Titanosauria and included the most expansive character and taxon list of any study before it. 364 characters were selected from all previous phylogenetic analyses and scored across 29 probable titanosaurs, ranging from the Late Jurassic African Janenschia to the large variety of Late Cretaceous global genera. Proposing her analysis as the basis for a new phylogenetic framework of Titanosauria, Curry-Rogers recommended only using names for clades that were very strongly supported. In the strict consensus, every taxon more derived than Brachiosaurus was in an unresolved polytomy except for a clade of Rapetosaurus and Nemegtosaurus, and one of Saltasaurinae. Within the recommended results, she only named Titanosauria, Lithostrotia, Saltasauridae, Saltasaurinae and Opisthocoelicaudiinae, because support for other clades was weak (below and left).
Curry-Rogers (2005)
Carballido et al. (2017)
Another form of composite matrix was created by Calvo, González-Riga and Juan Porfiri in 2007, based upon multiple previous studies between 1997 and 2003. The final analysis included 15 titanosaurs and 65 characters, and the typical titanosaur subclades were resolved, with Titanosauridae being used over Lithostrotia following Salgado (2003), and the new clade Rinconsauria named for the clade of Rinconsaurus and Muyelensaurus. The new clade (defined as Rinconsaurus plus Muyelensaurus) was placed as the sister taxon of Aeolosaurini, which together grouped with Rapetosaurus as sister to Saltasauridae. In the same year, Calvo et al. published another paper, describing the basal titanosaur Futalognkosaurus. The only difference in the resulting phylogeny, based on the matrix of Calvo, González-Riga & Porfiri (2007), was the addition of Futalognkosaurus as the sister taxon to Mendozasaurus in a clade Calvo et al. named Lognkosauria, defined by the two genera classified within it. A very similar result was also recovered by González-Riga et al. in 2009 in a phylogenetic analysis based partially on that of Calvo et al. (2007), although Epachthosaurus was nested with Rapetosaurus outside the clades of aeolosaurines. Further updates and modifications were then made by Pablo Gallina & Apesteguía in 2011, with the additions of Ligabuesaurus, Antarctosaurus, Nemegtosaurus and Bonitasaura and character updates to match, bringing the total to 77 characters and 22 taxa. In significant contrast to the earlier results, the internal relationships of Titanosauria were rearranged. Malawisaurus nested with Andesaurus in a clade of the basalmost titanosaurs outside Titanosauroidea, where Lirainosaurus, instead of being the basal member of the saltasaur branch, was the basalmost titanosauroid. Lognkosauria moved to be within rinconsaurs, while Nemegtosauridae was resolved as the sister of Aeolosaurus and Gondwanatitan, and the rinconsaur-lognkosaur branch. Antarctosaurus was unstable, but placed in a polytomy with the lognkosaurs and rinconsaurs before being excluded. Saltasaurinae and its relationship with Opisthocoelicaudia remained the same.
Nemegtosauridae was additionally revised by Hussam Zaher et al. (2011) with the description of Tapuiasaurus, which nested closer to Rapetosaurus than Nemegtosaurus, with all three forming a clade of derived lithostrotians. Using the matrix of Wilson (2002), with the addition of a few cranial characters and of Diamantinasaurus, Tangvayosaurus and Phuwiangosaurus, the results remained the same as originally found by Wilson but with Diamantinasaurus sister to Saltasauridae and the other two genera as basal titanosaurs outside Lithostrotia, since Titanosauria, while undefined, was labelled to include all taxa closer to Saltasaurus than Euhelopus. Following a revision of the skull of Tapuiasaurus, Wilson et al. (2016) rescored the analysis of Zaher et al. and recovered similar results for everything but Nemegtosauridae, where the family dissolved, with a more basal Tapuiasaurus outside Lithostrotia and Nemegtosaurus outside Saltasauridae. While non-titanosaur phylogeny remained identical in every result, the topology within Titanosauria was very labile and prone to change with minor adjustments.
Also following the 2002 analysis of Wilson, José Carballido and colleagues published a redescription of Chubutisaurus in 2011, and utilized an updated Wilson matrix, expanded to 289 characters across 41 taxa, including 15 titanosaurs. The primary focus of the analysis was on the basal titanosauriform taxa, but Titanosauria was defined as the most recent common ancestor of Andesaurus delgadoi and Saltasaurus loricatus and all its descendants, although the only autapomorphy of the group recovered was the absence of a prominent ventral process on the scapula. This same matrix and basis of characters was further utilized and expanded for analyses on Tehuelchesaurus, Comahuesaurus and related rebbachisaurs, Europasaurus, and Padillasaurus, before being expanded upon once again in 2017 by Carballido et al. during the description of Patagotitan, to 405 characters and 87 taxa, including 28 titanosaurs (above and right). The definition of Titanosauria was preserved following Salgado et al. (1997) as Andesaurus plus Saltasaurus. Eutitanosauria (closer to Saltasaurus than Epachthosaurus) was resolved as a very inclusive clade composed of two distinct branches, one leading to the larger-bodied lognkosaurs and the other to the smaller-bodied saltasaurs. The lognkosaur branch of Eutitanosauria in turn split into a branch of lognkosaurs and one of Rinconsauria. Following Calvo, González-Riga and Porfiri (2007), Rinconsauria was defined as Muyelensaurus plus Rinconsaurus, and Lognkosauria was defined as Mendozasaurus plus Futalognkosaurus. Rinconsauria included taxa typically found within Aeolosaurini as well, so Aeolosaurini was redefined as Aeolosaurus rionegrinus plus Gondwanatitan to preserve the original restricted content; otherwise the entire rinconsaur-lognkosaur branch would be classified within Aeolosaurini. Lithostrotia, Saltasauridae and Saltasaurinae had their definitions preserved from earlier studies, and included their typical content.
Philip Mannion and colleagues redescribed Lusotitan in 2013, creating a new analysis of 279 characters drawn from significant previous analyses by Upchurch and Wilson, supplemented by other studies. 63 sauropods were included, focusing on non-titanosaurian sauropods, although 14 probable titanosaurs were included. Uniquely for Mannion et al., continuous characters were distinguished in one run of the matrix, which resolved almost all of Somphospondyli within Titanosauria because Andesaurus placed very basally within a large group of Andesauroidea. Titanosauroidea was tentatively retained as the opposite clade of titanosaurs, which included all other traditional titanosaurs, although it was noted that, because of the invalidity of Titanosaurus, Titanosauroidea should be considered an invalid name as well. While the original analysis did not focus on titanosaurs, it was utilised during the descriptions of Savannasaurus and Diamantinasaurus, Yongjinglong, an osteology of Mendozasaurus, and a redescription of Tendaguria. From these updates, an analysis of 548 characters and 124 taxa was published by Mannion et al. in 2019 for a redescription of Jiangshanosaurus and Dongyangosaurus, and additional revisions of Ruyangosaurus were made. No differentiation between continuous and discrete characters was made as in Mannion et al. (2013), but a large clade of Andesauroidea was still resolved with implied weights. Both redescribed Asian taxa, as well as Yongjinglong, previously considered derived titanosaurs related to Saltasauridae, were moved outside the clade.
In the description of Mansourasaurus, Sallam et al. (2017) published a phylogenetic analysis of Titanosauria including more taxa than any previous analysis of the clade. In an updated version of the analysis, with the taxon Mnyamawamtuka added, Gorscak & O'Connor (2019) recovered similar results, with slightly different relationships within small clades.
Paleobiology
Diet
Fossilized dung associated with Late Cretaceous titanosaurids from India has revealed phytoliths, silicified plant fragments, that offer clues to a broad, unselective plant diet. Besides the plant remains that might have been expected, such as cycads and conifers, discoveries published in 2005 revealed an unexpectedly wide range of monocotyledons, including palms and grasses (Poaceae), among them ancestors of rice and bamboo, which has given rise to speculation that herbivorous dinosaurs and grasses co-evolved.
Nesting
A large titanosaurid nesting ground was discovered in Auca Mahuevo, in Patagonia, Argentina and another colony has reportedly been discovered in Spain. Several hundred female saltasaurs dug holes with their back feet, laid eggs in clutches averaging around 25 eggs each, and buried the nests under dirt and vegetation. The small eggs, about in diameter, contained fossilised embryos, complete with skin impressions. The impressions showed that titanosaurs were covered in a mosaic armour of small bead-like scales. The huge number of individuals gives evidence of herd behavior, which, along with their armor, could have helped provide protection against large predators such as Abelisaurus.
Range
The titanosaurs were the last great group of sauropods, existing from about 136 to 66 million years ago, up until the Cretaceous–Paleogene extinction event, and were the dominant herbivores of their time. The fossil evidence suggests they replaced the other sauropods, such as the diplodocids and the brachiosaurids, which died out between the Late Jurassic and the mid-Cretaceous.
Titanosaurs were widespread. In December 2011, Argentine scientists announced that titanosaur fossils had been found in Antarctica—meaning that titanosaur fossils have been found on all continents. They are especially numerous in the southern continents (then part of the supercontinent of Gondwana). Australia had titanosaurs around 96 million years ago: fossils have been discovered in Queensland of a creature around long. Remains have also been discovered in New Zealand. One of the largest titanosaur footprints ever found was discovered in the Gobi Desert in 2016. Some of the oldest remains of this group were described by Ghilardi et al. (2016); found in the Valley of the Dinosaurs, Paraíba state, Brazil, they represent a 136-million-year-old subadult individual.
Paleopathology
Ibirania, a nanoid titanosaur from Brazil, suggests that individuals of various genera were susceptible to diseases such as osteomyelitis and parasite infestations. The specimen hails from the Late Cretaceous São José do Rio Preto Formation, Bauru Basin, and was described in the journal Cretaceous Research by Aureliano et al. (2021). Examination of the titanosaur's bones revealed what appear to be parasitic blood worms, similar to the prehistoric Paleoleishmania but 10–100 times larger, that seem to have caused the osteomyelitis. The fossil is the first known instance of an aggressive case of osteomyelitis being caused by blood worms in an extinct animal.
| Biology and health sciences | Sauropods | Animals |
1139981 | https://en.wikipedia.org/wiki/Hyperbolic%20angle | Hyperbolic angle | In geometry, hyperbolic angle is a real number determined by the area of the corresponding hyperbolic sector of xy = 1 in Quadrant I of the Cartesian plane. The hyperbolic angle parametrizes the unit hyperbola, which has hyperbolic functions as coordinates. In mathematics, hyperbolic angle is an invariant measure as it is preserved under hyperbolic rotation.
The hyperbola xy = 1 is rectangular with semi-major axis √2, analogous to the circular angle equaling the area of a circular sector in a circle with radius √2.
Hyperbolic angle is used as the independent variable for the hyperbolic functions sinh, cosh, and tanh, because these functions may be premised on hyperbolic analogies to the corresponding circular (trigonometric) functions by regarding a hyperbolic angle as defining a hyperbolic triangle.
The parameter thus becomes one of the most useful in the calculus of real variables.
Definition
Consider the rectangular hyperbola xy = 1, and (by convention) pay particular attention to the branch with x ≥ 1.
First define:
The hyperbolic angle in standard position is the angle at (0, 0) between the ray to (1, 1) and the ray to (x, 1/x), where x > 1.
The magnitude of this angle is the area of the corresponding hyperbolic sector, which turns out to be ln x.
Note that, because of the role played by the natural logarithm:
Unlike the circular angle, the hyperbolic angle is unbounded (because ln x is unbounded); this is related to the fact that the harmonic series is unbounded.
The formula for the magnitude of the angle suggests that, for 0 < x < 1, the hyperbolic angle should be negative. This reflects the fact that, as defined, the angle is directed.
Finally, extend the definition of hyperbolic angle to that subtended by any interval on the hyperbola. Suppose a and b are positive real numbers with a < b, so that (a, 1/a) and (b, 1/b) are points on the hyperbola and determine an interval on it. Then the squeeze mapping (x, y) ↦ (x/a, ay) maps this angle to the standard position angle between the rays to (1, 1) and (b/a, a/b). By the result of Gregoire de Saint-Vincent, the hyperbolic sectors determined by these angles have the same area, which is taken to be the magnitude of the angle. This magnitude is ln(b/a).
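As a quick illustration of the definition, the following sketch (not part of the original article; the function name, point count and sample values are arbitrary choices) numerically approximates the hyperbolic sector between (1, 1) and (a, 1/a) by a polygon and checks that its area matches ln a:

```python
# Minimal numerical sketch: the hyperbolic sector of xy = 1 between (1, 1) and
# (a, 1/a) is approximated by the polygon (0,0), (x_0, 1/x_0), ..., (x_n, 1/x_n)
# and its area, via the shoelace formula, is compared with ln(a).
import math

def hyperbolic_sector_area(a, n=200_000):
    xs = [1 + (a - 1) * i / n for i in range(n + 1)]
    ys = [1 / x for x in xs]
    # Shoelace sum; the two polygon edges through the origin contribute nothing.
    s = sum(xs[i] * ys[i + 1] - xs[i + 1] * ys[i] for i in range(n))
    return 0.5 * abs(s)

for a in (2.0, math.e, 10.0):
    print(f"a = {a:6.3f}  area ≈ {hyperbolic_sector_area(a):.5f}  ln(a) = {math.log(a):.5f}")
```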
Comparison with circular angle
A unit circle has a circular sector with an area half of the circular angle in radians. Analogously, a unit hyperbola has a hyperbolic sector with an area half of the hyperbolic angle.
There is also a projective resolution between circular and hyperbolic cases: both curves are conic sections, and hence are treated as projective ranges in projective geometry. Given an origin point on one of these ranges, other points correspond to angles. The idea of addition of angles, basic to science, corresponds to addition of points on one of these ranges as follows:
Circular angles can be characterized geometrically by the property that if two chords P0P1 and P0P2 subtend angles L1 and L2 at the centre of a circle, their sum L1 + L2 is the angle subtended by a chord P0Q, where P0Q is required to be parallel to P1P2.
The same construction can also be applied to the hyperbola. If P0 is taken to be the point (1, 1), P1 the point (x1, 1/x1), and P2 the point (x2, 1/x2), then the parallel condition requires that Q be the point (x1x2, 1/(x1x2)). It thus makes sense to define the hyperbolic angle from P0 to an arbitrary point on the curve as a logarithmic function of the point's value of x.
Whereas in Euclidean geometry moving steadily in an orthogonal direction to a ray from the origin traces out a circle, in a pseudo-Euclidean plane steadily moving orthogonally to a ray from the origin traces out a hyperbola. In Euclidean space, the multiple of a given angle traces equal distances around a circle while it traces exponential distances upon the hyperbolic line.
Both circular and hyperbolic angle provide instances of an invariant measure. Arcs with an angular magnitude on a circle generate a measure on certain measurable sets on the circle whose magnitude does not vary as the circle turns or rotates. For the hyperbola the turning is by squeeze mapping, and the hyperbolic angle magnitudes stay the same when the plane is squeezed by a mapping
(x, y) ↦ (rx, y/r), with r > 0.
Relation to the Minkowski line element
There is also a curious relation between the hyperbolic angle and the metric defined on Minkowski space. Just as two-dimensional Euclidean geometry defines its line element as
ds_e^2 = dx^2 + dy^2,
the line element on Minkowski space is
ds_m^2 = dy^2 - dx^2 (with the sign convention chosen so that the element is positive along the unit hyperbola below).
Consider a curve embedded in two-dimensional Euclidean space,
x = f(t), y = g(t),
where the parameter t is a real number that runs between a and b (a ≤ t < b). The arclength of this curve in Euclidean space is computed as
S = ∫_a^b sqrt((dx/dt)^2 + (dy/dt)^2) dt.
If x^2 + y^2 = 1 defines a unit circle, a single parameterized solution set to this equation is x = cos t and y = sin t. Letting t run from 0 to θ, computing the arclength gives S = θ. Now doing the same procedure, except replacing the Euclidean element with the Minkowski line element,
S = ∫_a^b sqrt((dy/dt)^2 - (dx/dt)^2) dt,
and defining a unit hyperbola as x^2 - y^2 = 1 with its corresponding parameterized solution set x = cosh t and y = sinh t, and by letting t run from 0 to η (the hyperbolic angle), we arrive at the result of S = η. Just as the circular angle is the length of a circular arc using the Euclidean metric, the hyperbolic angle is the length of a hyperbolic arc using the Minkowski metric.
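The following short numerical sketch (not from the article; the chord count and the sample angles are arbitrary) checks both arclength claims directly:

```python
# Euclidean length of the circular arc (cos t, sin t) for 0 <= t <= theta is
# theta; the Minkowski "length" of the hyperbolic arc (cosh t, sinh t),
# measured with ds_m^2 = dy^2 - dx^2, is the hyperbolic angle eta.
import math

def arc_length(x, y, t_end, minkowski=False, n=100_000):
    total = 0.0
    for i in range(n):
        t0, t1 = t_end * i / n, t_end * (i + 1) / n
        dx, dy = x(t1) - x(t0), y(t1) - y(t0)
        ds2 = (dy * dy - dx * dx) if minkowski else (dx * dx + dy * dy)
        total += math.sqrt(abs(ds2))
    return total

theta, eta = 1.2, 1.2
print(arc_length(math.cos, math.sin, theta))                  # ≈ theta
print(arc_length(math.cosh, math.sinh, eta, minkowski=True))  # ≈ eta
```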
History
The quadrature of the hyperbola is the evaluation of the area of a hyperbolic sector. It can be shown to be equal to the corresponding area against an asymptote. The quadrature was first accomplished by Gregoire de Saint-Vincent in 1647 in Opus geometricum quadrature circuli et sectionum coni. As expressed by a historian,
[He made the] quadrature of a hyperbola to its asymptotes, and showed that as the area increased in arithmetic series the abscissas increased in geometric series.
A. A. de Sarasa interpreted the quadrature as a logarithm and thus the geometrically defined natural logarithm (or "hyperbolic logarithm") is understood as the area under y = 1/x to the right of x = 1. As an example of a transcendental function, the logarithm is more familiar than its motivator, the hyperbolic angle. Nevertheless, the hyperbolic angle plays a role when the theorem of Saint-Vincent is advanced with squeeze mapping.
Circular trigonometry was extended to the hyperbola by Augustus De Morgan in his textbook Trigonometry and Double Algebra. In 1878 W.K. Clifford used the hyperbolic angle to parametrize a unit hyperbola, describing it as "quasi-harmonic motion".
In 1894 Alexander Macfarlane circulated his essay "The Imaginary of Algebra", which used hyperbolic angles to generate hyperbolic versors, in his book Papers on Space Analysis. The following year Bulletin of the American Mathematical Society published Mellen W. Haskell's outline of the hyperbolic functions.
When Ludwik Silberstein penned his popular 1914 textbook on the new theory of relativity, he used the rapidity concept based on hyperbolic angle a, where tanh a = v/c, the ratio of velocity v to the speed of light. He wrote:
It seems worth mentioning that to unit rapidity corresponds a huge velocity, amounting to 3/4 of the velocity of light; more accurately we have v = (.7616)c for a = 1.
[...] the rapidity a = 1, [...] consequently will represent the velocity .76 c which is a little above the velocity of light in water.
Silberstein also uses Lobachevsky's concept of angle of parallelism Π(a) to obtain cos Π(a) = v/c.
Imaginary circular angle
The hyperbolic angle is often presented as if it were an imaginary number, with cosh x = cos(ix) and sinh x = −i sin(ix), so that the hyperbolic functions cosh and sinh can be presented through the circular functions. But in the Euclidean plane we might alternately consider circular angle measures to be imaginary and hyperbolic angle measures to be real scalars, with cos θ = cosh(iθ) and sin θ = −i sinh(iθ).
These relationships can be understood in terms of the exponential function, which for a complex argument z can be broken into even and odd parts cosh z and sinh z respectively. Then
e^z = cosh z + sinh z,
or, if the argument is separated into real and imaginary parts z = x + iy, the exponential can be split into the product of scaling e^x and rotation e^(iy):
e^(x + iy) = e^x (cos y + i sin y).
As infinite series,
cosh x = 1 + x^2/2! + x^4/4! + ⋯, sinh x = x + x^3/3! + x^5/5! + ⋯,
cos x = 1 − x^2/2! + x^4/4! − ⋯, sin x = x − x^3/3! + x^5/5! − ⋯.
The infinite series for cosine is derived from cosh by turning it into an alternating series, and the series for sine comes from making sinh into an alternating series.
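A tiny numerical check of these identities (not part of the article; the sample values are arbitrary) can be done with Python's complex math module:

```python
# Verify cos(ix) = cosh(x), sin(ix) = i sinh(x), and the scaling-times-rotation
# decomposition of the exponential for a sample complex argument.
import cmath, math

x = 0.7
print(cmath.cos(1j * x), math.cosh(x))        # cos(ix) equals cosh(x)
print(cmath.sin(1j * x), 1j * math.sinh(x))   # sin(ix) equals i*sinh(x)
print(cmath.exp(1 + 2j), math.e * (math.cos(2) + 1j * math.sin(2)))
```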
| Mathematics | Non-Euclidean geometry | null |
213583 | https://en.wikipedia.org/wiki/Mount%20Wilson%20Observatory | Mount Wilson Observatory | The Mount Wilson Observatory (MWO) is an astronomical observatory in Los Angeles County, California, United States. The MWO is located on Mount Wilson, a peak in the San Gabriel Mountains near Pasadena, northeast of Los Angeles.
The observatory contains two historically important telescopes: the Hooker telescope, which was the largest aperture telescope in the world from its completion in 1917 to 1949, and the 60-inch telescope which was the largest operational telescope in the world when it was completed in 1908. It also contains the Snow solar telescope completed in 1905, the 60-foot solar tower completed in 1908, the 150-foot solar tower completed in 1912, and the CHARA array, built by Georgia State University, which became fully operational in 2004 and was the largest optical interferometer in the world at its completion.
Due to the inversion layer that traps warm air and smog over Los Angeles, Mount Wilson has steadier air than any other location in North America, making it ideal for astronomy and in particular for interferometry. The increasing light pollution due to the growth of greater Los Angeles has limited the ability of the observatory to engage in deep space astronomy, but it remains a productive center, with the CHARA array continuing important stellar research.
The initial efforts to mount a telescope on Mount Wilson were made in the 1880s by one of the founders of the University of Southern California, Edward Falles Spence, but he died without finishing the funding effort. The observatory was conceived and founded by George Ellery Hale, who had previously built the 1-meter telescope at the Yerkes Observatory, then the world's largest telescope. The Mount Wilson Solar Observatory was first funded by the Carnegie Institution of Washington in 1904, leasing the land from the owners of the Mount Wilson Hotel in 1904. Among the conditions of the lease was that it allow public access.
Solar telescopes
There are three solar telescopes at Mount Wilson Observatory. Just one of these telescopes, the 60-foot Solar Tower, is still used for solar research.
Snow Solar Telescope
The Snow Solar Telescope was the first telescope installed at the fledgling Mount Wilson Solar Observatory. It was the world's first permanently mounted solar telescope. Solar telescopes had previously been portable so they could be taken to solar eclipses around the world. The telescope was donated to Yerkes Observatory by Helen Snow of Chicago. George Ellery Hale, then director of Yerkes, had the telescope brought to Mount Wilson to put it into service as a proper scientific instrument. Its primary mirror with a focal length, coupled with a spectrograph, did groundbreaking work on the spectra of sunspots, doppler shift of the rotating solar disc and daily solar images in several wavelengths. Stellar research soon followed as the brightest stars could have their spectra recorded with very long exposures on glass plates. The Snow solar telescope is mostly used by undergraduate students who get hands-on training in solar physics and spectroscopy.
It was also used publicly for the May 9, 2016 transit of Mercury across the face of the Sun.
60-foot Solar Tower
The 60-foot Solar Tower soon built on the work started at the Snow telescope. At its completion in 1908, the vertical tower design of the 60-foot focal length solar telescope allowed much higher resolution of the solar image and spectrum than the Snow telescope could achieve. The higher resolution came from situating the optics higher above the ground, thereby avoiding the distortion caused by the heating of the ground by the Sun. On June 25, 1908, Hale recorded Zeeman splitting in the spectrum of a sunspot, showing for the first time that magnetic fields existed somewhere besides the Earth. A later discovery was the reversed polarity in sunspots of the new solar cycle of 1912. The success of the 60-foot Tower prompted Hale to pursue yet another, taller tower telescope. In the 1960s, Robert Leighton discovered that the Sun had a 5-minute oscillation, and the field of helioseismology was born. The 60-foot Tower is operated by the Department of Physics and Astronomy at the University of Southern California.
150-foot Solar Tower
The 150-foot focal length solar tower expanded on the earlier solar tower design with its tower-in-a-tower construction. (The tower is actually tall.) An inner tower supports the optics above, while an outer tower, which completely surrounds the inner tower, supports the dome and floors around the optics. This design allowed complete isolation of the optics from the effect of wind swaying the tower. Two mirrors feed sunlight to a lens which focuses light down at the ground floor. It was first completed in 1910, but unsatisfactory optics caused a two-year delay before a suitable doublet lens was installed. Research included solar rotation, sunspot polarities, daily sunspot drawings, and many magnetic field studies. The solar telescope would be the world's largest for 50 years until the McMath-Pierce Solar Telescope was completed at Kitt Peak in Arizona in 1962. In 1985, UCLA took over operation of the solar tower from the Carnegie Observatories after they decided to stop funding the observatory.
60-inch telescope
For the 60-inch telescope, George Ellery Hale received the mirror blank, cast by Saint-Gobain in France, in 1896 as a gift from his father, William Hale. It was a glass disk 19 cm thick and weighing 860 kg. However it was not until 1904 that Hale received funding from the Carnegie Institution to build an observatory. Grinding began in 1905 and took two years. The mounting and structure for the telescope were built in San Francisco and barely survived the 1906 earthquake. Transporting the pieces to the top of Mount Wilson was an enormous task. First light was December 8, 1908. It was, at the time, the largest operational telescope in the world. Lord Rosse's Leviathan of Parsonstown, a 72-inch (1.8-meter) telescope built in 1845, was, by the 1890s, out of commission.
Although slightly smaller than the Leviathan, the 60-inch had many advantages including a far better site, a glass mirror instead of speculum metal, and a precision mount which could accurately track any direction in the sky, so the 60-inch was a major advance.
The 60-inch telescope is a reflector telescope built for Newtonian, Cassegrain and coudé configurations. It is currently used in the bent Cassegrain configuration. It became one of the most productive and successful telescopes in astronomical history. Its design and light-gathering power allowed the pioneering of spectroscopic analysis, parallax measurements, nebula photography, and photometric photography. Though surpassed in size by the 100-inch Hooker telescope nine years later, the 60-inch telescope remained one of the largest in use for decades.
In 1992, the 60-inch telescope was fitted with an early adaptive optics system, the Atmospheric Compensation Experiment (ACE). The 69-channel system improved the potential resolving power of the telescope from 0.5–1.0 arcsec to 0.07 arcsec. ACE was developed by DARPA for the Strategic Defense Initiative system, and the National Science Foundation funded the civilian conversion.
The telescope is used for public outreach as the second largest telescope in the world devoted to the general public. Custom-made 10 cm eyepieces are fitted to its focus using the bent Cassegrain configuration to provide views of the Moon, planets, and deep-sky objects. Groups may book the telescope for an evening of observing.
100-inch Hooker telescope
The Hooker telescope located at Mount Wilson Observatory, California, was completed in 1917, and was the world's largest telescope until 1949. It is one of the most famous telescopes in observational astronomy of the 20th century. It was used by Edwin Hubble to make observations with which he produced two fundamental results which changed the scientific view of the Universe. Using observations he made in 1922–1923, Hubble was able to prove that the Universe extends beyond the Milky Way galaxy, and that several nebulae were millions of light-years away. He then showed that the universe was expanding.
Construction
Once the sixty-inch telescope project was well underway, Hale immediately set about creating a larger telescope. John D. Hooker provided crucial funding of $45,000 for the purchase and grinding of the mirror, while Andrew Carnegie provided funds to complete the telescope and dome. The Saint-Gobain factory was again chosen to cast a blank in 1906, which it completed in 1908. After considerable trouble over the blank (and potential replacements), the Hooker telescope was completed and saw "first light" on November 2, 1917. As with the sixty-inch telescope, the bearings are assisted by the use of mercury floats to support the 100 ton weight of the telescope.
In 1919 the Hooker telescope was equipped with a special attachment, a 6-meter optical astronomical interferometer developed by Albert A. Michelson, much larger than the one he had used to measure Jupiter's satellites. Michelson was able to use the equipment to determine the precise diameter of stars, such as Betelgeuse, the first time the size of a star had ever been measured. Henry Norris Russell developed his star classification system based on observations using the Hooker.
In 1935 the silver coating used since 1917 on the Hooker mirror was replaced with a more modern and longer lasting aluminum coating that reflected 50% more light than the older silver coating. The newer method of coating for the telescope mirrors was first tested on the older 1.5 meter mirror.
Edwin Hubble performed many critical calculations from work on the Hooker telescope. In 1923, Hubble discovered the first Cepheid variable in the spiral nebula of Andromeda using the 2.5-meter telescope. This discovery allowed him to calculate the distance to the spiral nebula of Andromeda and show that it was actually a galaxy outside the Milky Way. Hubble, assisted by Milton L. Humason, observed the magnitude of the redshift in many galaxies and published a paper in 1929 that showed the universe is expanding.
The Hooker's reign of three decades as the largest telescope came to an end when the Caltech-Carnegie consortium completed its Hale Telescope at Palomar Observatory, 144 km south, in San Diego County, California. The Hale Telescope saw first light in January 1949.
By the 1980s, the focus of astronomy research had turned to deep space observation, which required darker skies than could be found in the Los Angeles area, due to the ever-increasing problem of light pollution. In 1989, the Carnegie Institution, which ran the observatory, handed it over to the non-profit Mount Wilson Institute. At that time, the 2.5-meter telescope was deactivated, but it was restarted in 1992, and in 1995 it was outfitted with a visible light adaptive optics system; later, in 1997, it hosted the UnISIS laser guide star adaptive optics system.
As the use of the telescope for scientific work diminished again, a decision was made to convert it to use for visual observing. Because of the high position of the Cassegrain focus above the observing floor, a system of mirrors and lenses was developed to allow viewing from a position at the bottom of the telescope tube. With the conversion completed in 2014, the 2.5 meter telescope began its new life as the world's largest telescope dedicated to public use. Regularly scheduled observing began with the 2015 observing season.
The telescope has a resolving power of 0.05 arcsecond.
Interferometry
Astronomical interferometry has a rich history at Mount Wilson. No fewer than seven interferometers have been located here. The reason for this is that the extremely steady air over Mount Wilson is well suited to interferometry, the use of multiple viewing points to increase resolution enough to allow for the direct measurement of details such as star diameters.
20-foot Stellar Interferometer
The first of these interferometers was the 20-foot Stellar Interferometer. In 1919 the 100-inch Hooker telescope was equipped with a special attachment, a 20-foot optical astronomical interferometer developed by Albert A. Michelson and Francis G. Pease. It was attached to the end of the 100-inch telescope and used the telescope as a guiding platform to maintain alignment with the stars being studied. By December 1920, Michelson and Pease were able to use the equipment to determine the precise diameter of a star, the red giant Betelgeuse, the first time the angular size of a star had ever been measured. In the next year, Michelson and Pease measured the diameters of six more red giants before reaching the resolution limit of the 20-foot beam interferometer.
50-foot Stellar Interferometer
To expand on the work of the 20-foot interferometer, Pease, Michelson and George E. Hale designed a 50-foot interferometer which was installed at Mount Wilson Observatory in 1929. It successfully measured the diameter of Betelgeuse, but, other than beta Andromedae, could not measure any stars not already measured by the 20-foot interferometer.
Optical interferometry reached the limit of the available technology and it took about thirty years for faster computing, electronic detectors and lasers to make larger interferometers possible again.
Infrared Spatial Interferometer
The Infrared Spatial Interferometer (ISI), run by an arm of the University of California, Berkeley, is an array of three 1.65 meter telescopes operating in the mid-infrared. The telescopes are fully mobile and their current site on Mount Wilson allows for placements as far as 70 meters apart, giving the resolution of a telescope of that diameter. The signals are converted to radio frequencies through heterodyne circuits and then combined electronically using techniques copied from radio astronomy. The longest, 70-meter baseline provides a resolution of 0.003 arcsec at a wavelength of 11 micrometers. On July 9, 2003, ISI recorded the first closure phase aperture synthesis measurements in the mid infrared.
CHARA array
The Center for High Angular Resolution Astronomy (CHARA), built and operated by Georgia State University, is an interferometer formed from six 1 meter telescopes arranged along three axes with a maximum separation of 330 m. The light beams travel through vacuum pipes and are delayed and combined optically, requiring a building 100 meters long with movable mirrors on carts to keep the light in phase as the Earth rotates. CHARA began scientific use in 2002 and "routine operations" in early 2004. In the infrared, the integrated image can resolve down to 0.0005 arcseconds. Six telescopes are in regular use for scientific observations and as of late 2005 imaging results are routinely acquired. The array captured the first image of the surface of a main sequence star other than the Sun published in early 2007.
Other telescopes
A 61 cm telescope fitted with an infrared detector purchased from a military contractor was used by Eric Becklin in 1966 to determine the center of the Milky Way for the first time.
In 1968, the first large-area near-IR (2.2 μm) survey of the sky was conducted by Gerry Neugebauer and Robert B. Leighton using a 157 cm reflecting dish they had built in the early 1960s. Known as the Caltech Infrared Telescope, it operated in an unguided drift scanning mode using a lead(II) sulfide (PbS) photomultiplier read out on paper charts. The telescope is now on display at the Udvar-Hazy Center, part of the Smithsonian Air and Space Museum.
Events
Letters to the Mount Wilson Observatory are the subject of a permanent exhibition at the Museum of Jurassic Technology in Los Angeles, California. A small room is dedicated to a collection of unusual letters and theories received by the observatory circa 1915–1935. These letters were also collected in the book No One May Ever Have the Same Knowledge Again: Letters to Mt. Wilson Observatory 1915–1935 ().
The historic monument came under threat during the August 2009 California wildfires.
The English poet Alfred Noyes was present for the "first light" of the Hooker telescope on November 2, 1917. Noyes used this night as the setting in the opening of Watchers of the Sky, the first volume in his trilogy The Torchbearers, an epic poem about the history of science. According to his account of the night, the first object viewed in the telescope was Jupiter, and Noyes himself was the first to see one of the planet's moons through the telescope.
In September 2020, the observatory was evacuated due to the Bobcat Fire. Flames approached within of the observatory on September 15, but the observatory was declared safe on September 19.
In January 2025, the observatory was evacuated due to the Eaton Fire, which approached Mount Wilson on January 9.
Sunday Afternoon Concerts in the Dome
On one Sunday each month during the warmer months of the year, Mt. Wilson Observatory hosts a chamber music or jazz concert in the dome. The idea to use the dome as a venue for live music originated in 2017 from a conversation between Dan Kohne, a board member of the Mt. Wilson Institute, and Cécilia Tsan, an internationally recognized cellist. Tsan agreed that the acoustics in the dome were "extraordinary", comparable to such world-renowned venues as the Palais Garnier (Opéra de Paris) and the Coolidge Auditorium at the Library of Congress. Kohne and Tsan worked together to create the series, which has run every concert season except for a break during the Covid-19 pandemic. Given that the observatory is no longer able to do significant research due to light pollution, it receives no scientific funding; the concerts therefore provide a significant portion of the budget needed to maintain the observatory as an historic landmark, along with ticketed events such as public viewing nights.
In popular culture
The observatory was the primary setting of "Nothing Behind the Door," the first episode of the radio series Quiet, Please which originally aired June 8, 1947.
The observatory was a filming location in a space-themed episode of Check It Out! with Dr. Steve Brule.
| Technology | Ground-based observatories | null |
213614 | https://en.wikipedia.org/wiki/Fractional%20distillation | Fractional distillation | Fractional distillation is the separation of a mixture into its component parts, or fractions. Chemical compounds are separated by heating them to a temperature at which one or more fractions of the mixture will vaporize. It uses distillation to fractionate. Generally the component parts have boiling points that differ by less than 25 °C (45 °F) from each other under a pressure of one atmosphere. If the difference in boiling points is greater than 25 °C, a simple distillation is typically used.
A crude oil distillation unit uses fractional distillation in the process of refining crude oil.
History
The fractional distillation of organic substances played an important role in the 9th-century works attributed to the Islamic alchemist Jabir ibn Hayyan, as for example in the Kitāb al-Sabʿīn ('The Book of Seventy'), translated into Latin by Gerard of Cremona (c. 1114–1187) under the title Liber de septuaginta. The Jabirian experiments with fractional distillation of animal and vegetable substances, and to a lesser degree also of mineral substances, formed the main topic of the De anima in arte alchemiae, an originally Arabic work falsely attributed to Avicenna that was translated into Latin and would go on to form the most important alchemical source for Roger Bacon.
Laboratory setup
Fractional distillation in a laboratory makes use of common laboratory glassware and apparatuses, typically including a Bunsen burner, a round-bottomed flask and a condenser, as well as the single-purpose fractionating column.
As an example, consider the distillation of a mixture of water and ethanol. Ethanol boils at 78.4 °C while water boils at 100 °C. So, by heating the mixture, the most volatile component (ethanol) will concentrate to a greater degree in the vapor leaving the liquid. Some mixtures form azeotropes, where the mixture boils at a lower temperature than either component. In this example, a mixture of ethanol and water boils at 78.2 °C; the mixture is more volatile than pure ethanol. For this reason, ethanol cannot be completely purified by direct fractional distillation of ethanol–water mixtures.
The apparatus is assembled as in the diagram. (The diagram represents a batch apparatus as opposed to a continuous apparatus.) The mixture is put into the round-bottomed flask along with a few anti-bumping granules (or a Teflon-coated magnetic stirrer bar if using magnetic stirring), and the fractionating column is fitted into the top. The fractional distillation column is set up with the heat source at the bottom of the still pot. As the distance from the still pot increases, a temperature gradient is formed in the column; it is coolest at the top and hottest at the bottom. As the mixed vapor ascends the temperature gradient, some of the vapor condenses and revaporizes along the temperature gradient. Each time the vapor condenses and vaporizes, the composition of the more volatile component in the vapor increases. This distills the vapor along the length of the column, and eventually the vapor is composed solely of the more volatile component (or an azeotrope). The vapor condenses on the glass platforms, known as trays, inside the column, and runs back down into the liquid below, refluxing distillate. The efficiency in terms of the amount of heating and time required to get fractionation can be improved by insulating the outside of the column with an insulator such as wool, aluminum foil, or preferably a vacuum jacket. The hottest tray is at the bottom and the coolest is at the top. At steady-state conditions, the vapor and liquid on each tray are at equilibrium. The most volatile component of the mixture exits as a gas at the top of the column. The vapor at the top of the column then passes into the condenser, which cools it down until it liquefies. The separation is more pure with the addition of more trays (to a practical limitation of heat, flow, etc.). Initially, the condensate will be close to the azeotropic composition, but when much of the ethanol has been drawn off, the condensate becomes gradually richer in water. The process continues until all the ethanol boils out of the mixture. This point can be recognized by the sharp rise in temperature shown on the thermometer.
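The repeated condense-and-revaporize cycle can be illustrated with a small idealised model. The sketch below is not from the article: it assumes a hypothetical binary mixture with a constant relative volatility (real ethanol–water mixtures deviate from this and form an azeotrope, as noted above), and shows how each theoretical tray enriches the vapor in the more volatile component.

```python
# Idealised binary mixture with constant relative volatility ALPHA: each
# condensation/vaporisation step (one "theoretical tray") enriches the vapor.
ALPHA = 3.0          # assumed constant relative volatility (hypothetical value)

def vapor_fraction(x, alpha=ALPHA):
    """Mole fraction of the volatile component in the vapor in equilibrium
    with a liquid of mole fraction x (ideal, constant-alpha model)."""
    return alpha * x / (1 + (alpha - 1) * x)

x = 0.10             # starting liquid composition: 10% volatile component
for tray in range(1, 6):
    x = vapor_fraction(x)        # vapor from this tray condenses on the next
    print(f"after tray {tray}: {x:.3f}")
```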
The above explanation reflects the theoretical way fractionation works. Normal laboratory fractionation columns will be simple glass tubes (often vacuum-jacketed, and sometimes internally silvered) filled with a packing, often small glass helices of diameter. Such a column can be calibrated by the distillation of a known mixture system to quantify the column in terms of the number of theoretical trays. To improve fractionation the apparatus is set up to return condensate to the column by the use of some sort of reflux splitter (reflux wire, gago, magnetic swinging bucket, etc.) – a typical careful fractionation would employ a reflux ratio of around 4:1 (4 parts returned condensate to 1 part condensate take-off).
In laboratory distillation, several types of condensers are commonly found. The Liebig condenser is simply a straight tube within a water jacket and is the simplest (and relatively least expensive) form of condenser. The Graham condenser is a spiral tube within a water jacket, and the Allihn condenser has a series of large and small constrictions on the inside tube, each increasing the surface area upon which the vapor constituents may condense.
Alternate set-ups may use a multi-outlet distillation receiver flask (referred to as a "cow" or "pig") to connect three or four receiving flasks to the condenser. By turning the cow or pig, the distillates can be channeled into any chosen receiver. Because the receiver does not have to be removed and replaced during the distillation process, this type of apparatus is useful when distilling under an inert atmosphere for air-sensitive chemicals or at reduced pressure. A Perkin triangle is an alternative apparatus often used in these situations because it allows isolation of the receiver from the rest of the system, but does require removing and reattaching a single receiver for each fraction.
Vacuum distillation systems operate at reduced pressure, thereby lowering the boiling points of the materials. Anti-bumping granules, however, become ineffective at reduced pressures.
Industrial distillation
Fractional distillation is the most common form of separation technology used in petroleum refineries, petrochemical and chemical plants, natural gas processing and cryogenic air separation plants. In most cases, the distillation is operated at a continuous steady state. New feed is always being added to the distillation column and products are always being removed. Unless the process is disturbed due to changes in feed, heat, ambient temperature, or condensing, the amount of feed being added and the amount of product being removed are normally equal. This is known as continuous, steady-state fractional distillation.
Industrial distillation is typically performed in large, vertical cylindrical columns known as "distillation or fractionation towers" or "distillation columns" with diameters ranging from about and heights ranging from about or more. The distillation towers have liquid outlets at intervals up the column which allow for the withdrawal of different fractions or products having different boiling points or boiling ranges. By increasing the temperature of the product inside the columns, the different products are separated. The "lightest" products (those with the lowest boiling point) exit from the top of the columns and the "heaviest" products (those with the highest boiling point) exit from the bottom of the column.
For example, fractional distillation is used in oil refineries to separate crude oil into useful substances (or fractions) having different hydrocarbons of different boiling points. The crude oil fractions with higher boiling points:
have more carbon atoms
have higher molecular weights
are less branched-chain alkanes
are darker in color
are more viscous
are more difficult to ignite and to burn
Large-scale industrial towers use reflux to achieve a more complete separation of products. Reflux refers to the portion of the condensed overhead liquid product from a distillation or fractionation tower that is returned to the upper part of the tower as shown in the schematic diagram of a typical, large-scale industrial distillation tower. Inside the tower, the reflux liquid flowing downwards provides the cooling needed to condense the vapors flowing upwards, thereby increasing the effectiveness of the distillation tower. The more reflux is provided for a given number of theoretical plates, the better the tower's separation of lower boiling materials from higher boiling materials. Alternatively, the more reflux provided for a given desired separation, the fewer theoretical plates are required.
Fractional distillation is also used in air separation, producing liquid oxygen, liquid nitrogen, and highly concentrated argon. Distillation of chlorosilanes also enables the production of high-purity silicon for use as a semiconductor.
In industrial uses, sometimes a packing material is used in the column instead of trays, especially when low pressure drops across the column are required, as when operating under vacuum. This packing material can either be random dumped packing ( wide) such as Raschig rings or structured sheet metal. Typical manufacturers are Koch, Sulzer, and other companies. Liquids tend to wet the surface of the packing and the vapors pass across this wetted surface, where mass transfer takes place. Unlike conventional tray distillation, in which every tray represents a separate point of vapor–liquid equilibrium, the vapor–liquid equilibrium curve in a packed column is continuous. However, when modeling packed columns it is useful to compute a number of "theoretical plates" to denote the separation efficiency of the packed column relative to more traditional trays. Differently shaped packings have different surface areas and porosity. Both of these factors affect packing performance.
Design of industrial distillation columns
Design and operation of a distillation column depends on the feed and desired products. Given a simple, binary component feed, analytical methods such as the McCabe–Thiele method or the Fenske equation can be used. For a multi-component feed, simulation models are used both for design and operation.
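For the binary case, the Fenske equation mentioned above gives the minimum number of theoretical stages at total reflux. The snippet below is an illustrative sketch only; the purity targets and relative volatility are hypothetical values, not taken from the article.

```python
# Fenske equation: minimum theoretical stages at total reflux for a binary
# separation with an average relative volatility alpha_avg.
import math

def fenske_min_stages(x_distillate, x_bottoms, alpha_avg):
    """Minimum stages for the light component to go from mole fraction
    x_bottoms in the bottoms to x_distillate in the distillate."""
    ratio = (x_distillate / (1 - x_distillate)) * ((1 - x_bottoms) / x_bottoms)
    return math.log(ratio) / math.log(alpha_avg)

# Hypothetical example: 95% purity overhead, 5% in the bottoms, alpha = 2.5.
print(fenske_min_stages(0.95, 0.05, 2.5))   # ≈ 6.4 theoretical stages
```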
Moreover, the efficiencies of the vapor-liquid contact devices (referred to as plates or trays) used in distillation columns are typically lower than that of a theoretical efficient equilibrium stage. Hence, a distillation column needs more plates than the number of theoretical vapor-liquid equilibrium stages.
Reflux refers to the portion of the condensed overhead product that is returned to the tower. The reflux flowing downwards provides the cooling required for condensing the vapors flowing upwards. The reflux ratio, which is the ratio of the (internal) reflux to the overhead product, is inversely related to the theoretical number of stages required for efficient separation of the distillation products.
Fractional distillation towers or columns are designed to achieve the required separation efficiently. The design of fractionation columns is normally made in two steps; a process design, followed by a mechanical design. The purpose of the process design is to calculate the number of required theoretical stages and stream flows including the reflux ratio, heat reflux, and other heat duties. The purpose of the mechanical design, on the other hand, is to select the tower internals, column diameter, and height. In most cases, the mechanical design of fractionation towers is not straightforward. For the efficient selection of tower internals and the accurate calculation of column height and diameter, many factors must be taken into account. Some of the factors involved in design calculations include feed load size and properties and the type of distillation column used.
The two major types of distillation columns used are tray and packing columns. Packing columns are normally used for smaller towers and loads that are corrosive or temperature-sensitive or for vacuum service where pressure drop is important. Tray columns, on the other hand, are used for larger columns with high liquid loads. They first appeared on the scene in the 1820s. In most oil refinery operations, tray columns are mainly used for the separation of petroleum fractions at different stages of oil refining.
In the oil refining industry, the design and operation of fractionation towers is still largely accomplished on an empirical basis. The calculations involved in the design of petroleum fractionation columns require, in usual practice, the use of numerous charts, tables, and complex empirical equations. In recent years, however, a considerable amount of work has been done to develop efficient and reliable computer-aided design procedures for fractional distillation.
| Physical sciences | Phase separations | Chemistry |
213671 | https://en.wikipedia.org/wiki/Runge%E2%80%93Kutta%20methods | Runge–Kutta methods | In numerical analysis, the Runge–Kutta methods are a family of implicit and explicit iterative methods, which include the Euler method, used in temporal discretization for the approximate solutions of ordinary differential equations. These methods were developed around 1900 by the German mathematicians Carl Runge and Wilhelm Kutta.
The Runge–Kutta method
The most widely known member of the Runge–Kutta family is generally referred to as "RK4", the "classic Runge–Kutta method" or simply as "the Runge–Kutta method".
Let an initial value problem be specified as follows:
dy/dt = f(t, y),  y(t_0) = y_0.
Here y is an unknown function (scalar or vector) of time t, which we would like to approximate; we are told that dy/dt, the rate at which y changes, is a function of t and of y itself. At the initial time t_0 the corresponding y value is y_0. The function f and the initial conditions t_0, y_0 are given.
Now we pick a step-size h > 0 and define:
y_{n+1} = y_n + (h/6)(k_1 + 2k_2 + 2k_3 + k_4),
t_{n+1} = t_n + h,
for n = 0, 1, 2, 3, ..., using
k_1 = f(t_n, y_n),
k_2 = f(t_n + h/2, y_n + h k_1/2),
k_3 = f(t_n + h/2, y_n + h k_2/2),
k_4 = f(t_n + h, y_n + h k_3).
(Note: the above equations have different but equivalent definitions in different texts.)
Here y_{n+1} is the RK4 approximation of y(t_{n+1}), and the next value (y_{n+1}) is determined by the present value (y_n) plus the weighted average of four increments, where each increment is the product of the size of the interval, h, and an estimated slope specified by the function f on the right-hand side of the differential equation.
k_1 is the slope at the beginning of the interval, using y (Euler's method);
k_2 is the slope at the midpoint of the interval, using y and k_1;
k_3 is again the slope at the midpoint, but now using y and k_2;
k_4 is the slope at the end of the interval, using y and k_3.
In averaging the four slopes, greater weight is given to the slopes at the midpoint. If f is independent of y, so that the differential equation is equivalent to a simple integral, then RK4 is Simpson's rule.
The RK4 method is a fourth-order method, meaning that the local truncation error is on the order of h^5, while the total accumulated error is on the order of h^4.
In many practical applications the function f is independent of t (a so-called autonomous system, or time-invariant system, especially in physics), and the t_n increments are not computed at all and not passed to the function f, with only the final formula for t_{n+1} used.
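To make the update rule above concrete, the following is a minimal Python sketch of a single RK4 step and a simple driver; the function names and the sample equation are illustrative choices, not taken from the article.

```python
# Minimal sketch of the classic RK4 update described above.
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)

def solve(f, t0, y0, h, n):
    """Apply n RK4 steps of size h starting from (t0, y0)."""
    t, y = t0, y0
    values = [y0]
    for _ in range(n):
        y = rk4_step(f, t, y, h)
        t += h
        values.append(y)
    return values

# Example: dy/dt = y with y(0) = 1, whose exact solution is e^t.
approx = solve(lambda t, y: y, 0.0, 1.0, 0.1, 10)[-1]
print(approx, math.e)   # the RK4 value at t = 1 is very close to e ≈ 2.71828
```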
Explicit Runge–Kutta methods
The family of explicit Runge–Kutta methods is a generalization of the RK4 method mentioned above. It is given by
y_{n+1} = y_n + h ∑_{i=1}^{s} b_i k_i,
where
k_1 = f(t_n, y_n),
k_2 = f(t_n + c_2 h, y_n + h a_21 k_1),
k_3 = f(t_n + c_3 h, y_n + h (a_31 k_1 + a_32 k_2)),
⋮
k_s = f(t_n + c_s h, y_n + h (a_s1 k_1 + a_s2 k_2 + ⋯ + a_s,s-1 k_{s-1})).
(Note: the above equations may have different but equivalent definitions in some texts.)
To specify a particular method, one needs to provide the integer s (the number of stages), and the coefficients aij (for 1 ≤ j < i ≤ s), bi (for i = 1, 2, ..., s) and ci (for i = 2, 3, ..., s). The matrix [aij] is called the Runge–Kutta matrix, while the bi and ci are known as the weights and the nodes. These data are usually arranged in a mnemonic device, known as a Butcher tableau (after John C. Butcher):
{| style="text-align: center" cellspacing="0" cellpadding="3"
| style="border-right:1px solid;" |
|-
| style="border-right:1px solid;" | ||
|-
| style="border-right:1px solid;" | || ||
|-
| style="border-right:1px solid;" | || || ||
|-
| style="border-right:1px solid; border-bottom:1px solid;" |
| style="border-bottom:1px solid;" |
| style="border-bottom:1px solid;" |
| style="border-bottom:1px solid;" |
| style="border-bottom:1px solid;" | || style="border-bottom:1px solid;" |
|-
| style="border-right:1px solid;" | || || || || ||
|}
A Taylor series expansion shows that the Runge–Kutta method is consistent if and only if
b1 + b2 + ... + bs = 1.
There are also accompanying requirements if one requires the method to have a certain order p, meaning that the local truncation error is O(h^(p+1)). These can be derived from the definition of the truncation error itself. For example, a two-stage method has order 2 if b1 + b2 = 1, b2c2 = 1/2, and b2a21 = 1/2. Note that a popular condition for determining coefficients is
ai1 + ai2 + ... + ai,i−1 = ci for i = 2, ..., s.
This condition alone, however, is neither sufficient, nor necessary for consistency.
In general, if an explicit s-stage Runge–Kutta method has order p, then it can be proven that the number of stages must satisfy s ≥ p, and if p ≥ 5, then s ≥ p + 1.
However, it is not known whether these bounds are sharp in all cases. In some cases, it is proven that the bound cannot be achieved. For instance, Butcher proved that for p ≥ 7, there is no explicit method with p + 1 stages. Butcher also proved that for p ≥ 8, there is no explicit Runge–Kutta method with p + 2 stages. In general, however, it remains an open problem what the precise minimum number of stages is for an explicit Runge–Kutta method to have order p. Some values which are known are:
The provable bounds above then imply that we cannot find methods of orders up to six that require fewer stages than the methods we already know for these orders. The work of Butcher also proves that 7th and 8th order methods have a minimum of 9 and 11 stages, respectively. An example of an explicit method of order 6 with 7 stages can be found in Ref. Explicit methods of order 7 with 9 stages and explicit methods of order 8 with 11 stages are also known. See Refs. for a summary.
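To make the role of the Butcher tableau concrete, here is a hedged Python sketch of a single explicit Runge–Kutta step driven by arbitrary coefficient arrays A, b, c; the helper name is illustrative, and the coefficients shown are those of the classic RK4 tableau given in the Examples below.

```python
def explicit_rk_step(f, t, y, h, A, b, c):
    """One step of an explicit s-stage Runge-Kutta method defined by a
    Butcher tableau: A strictly lower triangular, weights b, nodes c."""
    s = len(b)
    k = []
    for i in range(s):
        # Only the already-computed slopes k_j with j < i enter an explicit stage.
        yi = y + h * sum(A[i][j] * k[j] for j in range(i))
        k.append(f(t + c[i] * h, yi))
    return y + h * sum(b[i] * k[i] for i in range(s))

# Classic RK4 coefficients (see the tableau in the Examples section below).
A = [[0, 0, 0, 0],
     [1/2, 0, 0, 0],
     [0, 1/2, 0, 0],
     [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 1/2, 1/2, 1]

# The consistency condition stated above: the weights must sum to one.
assert abs(sum(b) - 1.0) < 1e-12
```

Swapping in a different tableau (for example the 3/8-rule below) changes only the coefficient arrays, not the stepping code.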
Examples
The RK4 method falls in this framework. Its tableau is
{| style="text-align: center" cellspacing="0" cellpadding="3"
| style="border-right:1px solid;" | 0
|-
| style="border-right:1px solid;" | 1/2 || 1/2
|-
| style="border-right:1px solid;" | 1/2 || 0 || 1/2
|-
| style="border-right:1px solid; border-bottom:1px solid;" | 1 || style="border-bottom:1px solid;" | 0
| style="border-bottom:1px solid;" | 0 || style="border-bottom:1px solid;" | 1
| style="border-bottom:1px solid;" |
|-
| style="border-right:1px solid;" | || 1/6 || 1/3 || 1/3 || 1/6
|}
A slight variation of "the" Runge–Kutta method is also due to Kutta in 1901 and is called the 3/8-rule. The primary advantage this method has is that almost all of the error coefficients are smaller than in the popular method, but it requires slightly more FLOPs (floating-point operations) per time step. Its Butcher tableau is
{| style="text-align: center" cellspacing="0" cellpadding="3"
| style="border-right:1px solid;" | 0
|-
| style="border-right:1px solid;" | 1/3 || 1/3
|-
| style="border-right:1px solid;" | 2/3 || −1/3 || 1
|-
| style="border-right:1px solid; border-bottom:1px solid;" | 1 || style="border-bottom:1px solid;" | 1
| style="border-bottom:1px solid;" | −1 || style="border-bottom:1px solid;" | 1
| style="border-bottom:1px solid;" |
|-
| style="border-right:1px solid;" | || 1/8 || 3/8 || 3/8 || 1/8
|}
However, the simplest Runge–Kutta method is the (forward) Euler method, given by the formula yn+1 = yn + h f(tn, yn). This is the only consistent explicit Runge–Kutta method with one stage. The corresponding tableau is
{| style="text-align: center" cellspacing="0" cellpadding="3"
| width="10" style="border-right:1px solid; border-bottom:1px solid;" | 0
| width="10" style="border-bottom:1px solid;" |
|-
| style="border-right:1px solid;" | || 1
|}
Second-order methods with two stages
An example of a second-order method with two stages is provided by the explicit midpoint method:
The corresponding tableau is
{| style="text-align: center" cellspacing="0" cellpadding="3"
| style="border-right:1px solid;" | 0
|-
| style="border-right:1px solid; border-bottom:1px solid;" | 1/2 || style="border-bottom:1px solid;" | 1/2 || style="border-bottom:1px solid;" |
|-
| style="border-right:1px solid;" | || 0 || 1
|}
The midpoint method is not the only second-order Runge–Kutta method with two stages; there is a family of such methods, parameterized by α and given by the formula
yn+1 = yn + h ((1 − 1/(2α)) f(tn, yn) + (1/(2α)) f(tn + αh, yn + αh f(tn, yn))).
Its Butcher tableau is
{| style="text-align: center" cellspacing="0" cellpadding="8"
| style="border-right:1px solid;" | 0
|-
| style="border-right:1px solid; border-bottom:1px solid;" | || style="border-bottom:1px solid; text-align: center;" | || style="border-bottom:1px solid;" |
|-
| style="border-right:1px solid;" | || ||
|}
In this family, α = 1/2 gives the midpoint method, α = 1 is Heun's method, and α = 2/3 is Ralston's method.
Use
As an example, consider the two-stage second-order Runge–Kutta method with α = 2/3, also known as Ralston's method. It is given by the tableau
with the corresponding equations
This method is used to solve the initial-value problem
with step size h = 0.025, so the method needs to take four steps.
The method proceeds as follows:
The numerical solutions correspond to the underlined values.
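Because the tableau, equations, and initial-value problem of this worked example are not reproduced in the present text, the following Python sketch applies Ralston's two-stage method (α = 2/3) to an illustrative problem of its own, chosen only to show the mechanics of taking four steps with h = 0.025.

```python
def ralston_step(f, t, y, h):
    """One step of Ralston's second-order method (alpha = 2/3):
    y_{n+1} = y_n + h*(k1/4 + 3*k2/4), with k2 evaluated at t + 2h/3."""
    k1 = f(t, y)
    k2 = f(t + 2 * h / 3, y + 2 * h / 3 * k1)
    return y + h * (k1 / 4 + 3 * k2 / 4)

# Illustrative problem (not the one from the original example): dy/dt = y, y(0) = 1.
f = lambda t, y: y
t, y, h = 0.0, 1.0, 0.025
for _ in range(4):                     # four steps, as in the text
    y = ralston_step(f, t, y, h)
    t += h
print(t, y)                            # y should be close to exp(0.1) = 1.10517...
```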
Adaptive Runge–Kutta methods
Adaptive methods are designed to produce an estimate of the local truncation error of a single Runge–Kutta step. This is done by having two methods, one with order and one with order . These methods are interwoven, i.e., they have common intermediate steps. Thanks to this, estimating the error has little or negligible computational cost compared to a step with the higher-order method.
During the integration, the step size is adapted such that the estimated error stays below a user-defined threshold: if the error is too high, the step is repeated with a smaller step size; if the error is much smaller than the threshold, the step size is increased to save time. This results in an (almost) optimal step size, which saves computation time. Moreover, the user does not have to spend time on finding an appropriate step size.
The lower-order step is given by
y*n+1 = yn + h (b*1k1 + b*2k2 + ... + b*sks),
where the ki are the same as for the higher-order method. Then the error is
en+1 = yn+1 − y*n+1 = h ((b1 − b*1)k1 + (b2 − b*2)k2 + ... + (bs − b*s)ks),
which is O(h^p).
The Butcher tableau for this kind of method is extended to give the values of :
The Runge–Kutta–Fehlberg method has two methods of orders 5 and 4. Its extended Butcher tableau is:
However, the simplest adaptive Runge–Kutta method involves combining Heun's method, which is order 2, with the Euler method, which is order 1. Its extended Butcher tableau is:
Other adaptive Runge–Kutta methods are the Bogacki–Shampine method (orders 3 and 2), the Cash–Karp method and the Dormand–Prince method (both with orders 5 and 4).
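As a minimal sketch of the idea, the Python code below embeds Heun's method (order 2) with the Euler method (order 1), as in the simplest pair mentioned above, and uses the difference of the two estimates to grow or shrink the step size; the controller constants (safety factor 0.9, growth limits) are conventional choices, not values taken from the article.

```python
def heun_euler_step(f, t, y, h):
    """One embedded Heun(2)/Euler(1) step: returns the second-order estimate
    and an error estimate formed from the difference of the two methods."""
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    y_high = y + h * (k1 + k2) / 2        # Heun's method, order 2
    y_low = y + h * k1                    # Euler's method, order 1
    return y_high, abs(y_high - y_low)    # both methods share k1, so the
                                          # estimate costs almost nothing extra

def integrate_adaptive(f, t0, y0, t_end, h0, tol):
    """Advance from t0 to t_end, adapting h so that the estimated per-step
    error stays below tol (a simple, illustrative controller)."""
    t, y, h = t0, y0, h0
    while t < t_end:
        h = min(h, t_end - t)             # do not step past the end point
        y_new, err = heun_euler_step(f, t, y, h)
        if err <= tol:                    # accept the step
            t, y = t + h, y_new
        # Rescale h; the exponent 1/2 reflects the O(h^2) local error
        # of the order-1 method used as the error estimator.
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.5))
    return y

print(integrate_adaptive(lambda t, y: -y, 0.0, 1.0, 5.0, 0.1, 1e-4))
# should be close to exp(-5) = 6.74e-3
```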
Nonconfluent Runge–Kutta methods
A Runge–Kutta method is said to be nonconfluent if all the ci, i = 1, 2, ..., s, are distinct.
Runge–Kutta–Nyström methods
Runge–Kutta–Nyström methods are specialized Runge–Kutta methods that are optimized for second-order differential equations. A general Runge–Kutta–Nyström method for a second-order ODE system
with order is with the form
which forms a Butcher table with the form
Two fourth-order explicit RKN methods are given by the following Butcher tables:
These two schemes also have the symplectic-preserving properties when the original equation is derived from a conservative classical mechanical system, i.e. when
for some scalar function .
Implicit Runge–Kutta methods
All Runge–Kutta methods mentioned up to now are explicit methods. Explicit Runge–Kutta methods are generally unsuitable for the solution of stiff equations because their region of absolute stability is small; in particular, it is bounded.
This issue is especially important in the solution of partial differential equations.
The instability of explicit Runge–Kutta methods motivates the development of implicit methods. An implicit Runge–Kutta method has the form
yn+1 = yn + h (b1k1 + b2k2 + ... + bsks),
where
ki = f(tn + cih, yn + h (ai1k1 + ai2k2 + ... + aisks)) for i = 1, 2, ..., s.
The difference with an explicit method is that in an explicit method, the sum over j only goes up to i − 1. This also shows up in the Butcher tableau: the coefficient matrix of an explicit method is lower triangular. In an implicit method, the sum over j goes up to s and the coefficient matrix is not triangular, yielding a Butcher tableau of the form
See Adaptive Runge-Kutta methods above for the explanation of the row.
The consequence of this difference is that at every step, a system of algebraic equations has to be solved. This increases the computational cost considerably. If a method with s stages is used to solve a differential equation with m components, then the system of algebraic equations has ms components. This can be contrasted with implicit linear multistep methods (the other big family of methods for ODEs): an implicit s-step linear multistep method needs to solve a system of algebraic equations with only m components, so the size of the system does not increase as the number of steps increases.
Examples
The simplest example of an implicit Runge–Kutta method is the backward Euler method:
yn+1 = yn + h f(tn + h, yn+1).
The Butcher tableau for this is simply:
This Butcher tableau corresponds to the formulae
k1 = f(tn + h, yn + h k1),  yn+1 = yn + h k1,
which can be re-arranged to get the formula for the backward Euler method listed above.
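To illustrate the algebraic-equation solve that each implicit step requires, the following Python sketch (with illustrative names, for a scalar equation) carries out one backward Euler step by applying Newton's method to g(x) = x − yn − h f(tn + h, x) = 0, using a finite-difference approximation of g′.

```python
import math

def backward_euler_step(f, t, y, h, tol=1e-12, max_iter=50):
    """One backward Euler step for a scalar ODE dy/dt = f(t, y):
    solve g(x) = x - y - h*f(t + h, x) = 0 by Newton's method."""
    x = y + h * f(t, y)                  # explicit Euler predictor as start guess
    for _ in range(max_iter):
        g = x - y - h * f(t + h, x)
        eps = 1e-8 * max(1.0, abs(x))    # step for the finite-difference derivative
        dg = 1.0 - h * (f(t + h, x + eps) - f(t + h, x)) / eps
        x_new = x - g / dg
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x                             # best iterate if not fully converged

# Illustrative stiff problem dy/dt = -50*(y - cos(t)); with h = 0.1 the value
# h*lambda = -5 lies outside the bounded stability region of explicit RK4,
# but backward Euler remains stable.
stiff = lambda t_, y_: -50.0 * (y_ - math.cos(t_))
t, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = backward_euler_step(stiff, t, y, h)
    t += h
print(t, y)
```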
Another example for an implicit Runge–Kutta method is the trapezoidal rule. Its Butcher tableau is:
The trapezoidal rule is a collocation method (as discussed in that article). All collocation methods are implicit Runge–Kutta methods, but not all implicit Runge–Kutta methods are collocation methods.
The Gauss–Legendre methods form a family of collocation methods based on Gauss quadrature. A Gauss–Legendre method with s stages has order 2s (thus, methods with arbitrarily high order can be constructed). The method with two stages (and thus order four) has Butcher tableau:
Stability
The advantage of implicit Runge–Kutta methods over explicit ones is their greater stability, especially when applied to stiff equations. Consider the linear test equation y′ = λy. A Runge–Kutta method applied to this equation reduces to the iteration yn+1 = r(hλ) yn, with r given by
r(z) = 1 + z b^T (I − zA)^(−1) e,
where e stands for the vector of ones and b^T denotes the transpose of the weight vector. The function r is called the stability function. It follows from the formula that r is the quotient of two polynomials of degree s if the method has s stages. Explicit methods have a strictly lower triangular matrix A, which implies that det(I − zA) = 1 and that the stability function is a polynomial.
The numerical solution to the linear test equation decays to zero if | r(z) | < 1 with z = hλ. The set of such z is called the domain of absolute stability. In particular, the method is said to be A-stable if all z with Re(z) < 0 are in the domain of absolute stability. The stability function of an explicit Runge–Kutta method is a polynomial, so explicit Runge–Kutta methods can never be A-stable.
If the method has order p, then the stability function satisfies r(z) = exp(z) + O(z^(p+1)) as z → 0. Thus, it is of interest to study quotients of polynomials of given degrees that approximate the exponential function the best. These are known as Padé approximants. A Padé approximant with numerator of degree m and denominator of degree n is A-stable if and only if m ≤ n ≤ m + 2.
The Gauss–Legendre method with s stages has order 2s, so its stability function is the Padé approximant with m = n = s. It follows that the method is A-stable. This shows that A-stable Runge–Kutta can have arbitrarily high order. In contrast, the order of A-stable linear multistep methods cannot exceed two.
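The bounded stability region of explicit methods can be seen numerically. The Python sketch below evaluates the stability function r(z) = 1 + z b^T (I − zA)^(−1) e stated above for the classic RK4 coefficients (for which r is the polynomial 1 + z + z^2/2 + z^3/6 + z^4/24) and shows that |r(z)| exceeds 1 once z moves far enough into the left half-plane, so the method cannot be A-stable; the helper name is illustrative.

```python
import numpy as np

def stability_function(A, b, z):
    """r(z) = 1 + z * b^T (I - z*A)^(-1) e, with e the vector of ones."""
    A = np.asarray(A, dtype=complex)
    b = np.asarray(b, dtype=complex)
    e = np.ones(len(b), dtype=complex)
    return 1 + z * b @ np.linalg.solve(np.eye(len(b)) - z * A, e)

# Classic RK4 tableau; A is strictly lower triangular, so r(z) is a polynomial.
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]

for z in (-1.0, -2.0, -2.785, -3.0, -10.0):
    print(f"z = {z:7.3f}   |r(z)| = {abs(stability_function(A, b, z)):.4f}")
# |r(z)| stays at or below 1 only down to about z = -2.785 on the real axis;
# for more negative z it grows without bound, so RK4 is not A-stable.
```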
B-stability
The A-stability concept for the solution of differential equations is related to the linear autonomous equation y′ = λy. Dahlquist proposed the investigation of stability of numerical schemes when applied to nonlinear systems that satisfy a monotonicity condition. The corresponding concepts were defined as G-stability for multistep methods (and the related one-leg methods) and B-stability (Butcher, 1975) for Runge–Kutta methods. A Runge–Kutta method applied to the non-linear system y′ = f(y), which verifies ⟨f(y) − f(z), y − z⟩ ≤ 0, is called B-stable, if this condition implies ||yn+1 − zn+1|| ≤ ||yn − zn|| for two numerical solutions.
Let , and be three matrices defined by
A Runge–Kutta method is said to be algebraically stable if the matrices and are both non-negative definite. A sufficient condition for B-stability is: and are non-negative definite.
Derivation of the Runge–Kutta fourth-order method
In general a Runge–Kutta method of order can be written as:
where:
are increments obtained evaluating the derivatives of at the -th order.
We develop the derivation for the Runge–Kutta fourth-order method using the general formula with evaluated, as explained above, at the starting point, the midpoint and the end point of any interval ; thus, we choose:
and otherwise. We begin by defining the following quantities:
where and
If we define:
and for the previous relations we can show that the following equalities hold up to :
where:
is the total derivative of with respect to time.
If we now express the general formula using what we just derived we obtain:
and comparing this with the Taylor series of around :
we obtain a system of constraints on the coefficients:
which when solved gives as stated above.
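The constraint system can also be checked mechanically once candidate coefficients are in hand. The following Python sketch (illustrative, using exact rational arithmetic) verifies that the classic RK4 coefficients satisfy the eight standard order conditions through order four, one per rooted tree.

```python
from fractions import Fraction as F

# Classic RK4 tableau in exact rational arithmetic.
A = [[F(0), F(0), F(0), F(0)],
     [F(1, 2), F(0), F(0), F(0)],
     [F(0), F(1, 2), F(0), F(0)],
     [F(0), F(0), F(1), F(0)]]
b = [F(1, 6), F(1, 3), F(1, 3), F(1, 6)]
c = [F(0), F(1, 2), F(1, 2), F(1)]
s = len(b)

def Av(v):
    """Matrix-vector product A*v with rational entries."""
    return [sum(A[i][j] * v[j] for j in range(s)) for i in range(s)]

Ac = Av(c)                       # A c
c2 = [ci ** 2 for ci in c]       # componentwise c^2
Ac2 = Av(c2)                     # A c^2
AAc = Av(Ac)                     # A (A c)

# The eight order conditions for order 4 and their required values.
conditions = [
    (sum(b), F(1)),                                        # order 1
    (sum(b[i] * c[i] for i in range(s)), F(1, 2)),         # order 2
    (sum(b[i] * c2[i] for i in range(s)), F(1, 3)),        # order 3
    (sum(b[i] * Ac[i] for i in range(s)), F(1, 6)),
    (sum(b[i] * c[i] ** 3 for i in range(s)), F(1, 4)),    # order 4
    (sum(b[i] * c[i] * Ac[i] for i in range(s)), F(1, 8)),
    (sum(b[i] * Ac2[i] for i in range(s)), F(1, 12)),
    (sum(b[i] * AAc[i] for i in range(s)), F(1, 24)),
]
assert all(lhs == rhs for lhs, rhs in conditions)
print("All order conditions through order 4 are satisfied.")
```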
| Mathematics | Differential equations | null |
213739 | https://en.wikipedia.org/wiki/Dry%20dock | Dry dock | A dry dock (sometimes drydock or dry-dock) is a narrow basin or vessel that can be flooded to allow a load to be floated in, then drained to allow that load to come to rest on a dry platform. Dry docks are used for the construction, maintenance, and repair of ships, boats, and other watercraft.
History
China
The use of dry docks in China goes at least as far back as the 10th century A.D. In 1088, Song dynasty scientist and statesman Shen Kuo (1031–1095) wrote in his Dream Pool Essays:
Europe
Greco-Roman world
The Greek author Athenaeus of Naucratis (V 204c-d) reports something that may have been a dry dock in Ptolemaic Egypt in the reign of Ptolemy IV Philopator (221-204 BC) on the occasion of the launch of the enormous Tessarakonteres rowing ship. However a more recent survey by Goodchild and Forbes does not substantiate its existence.
It has been calculated that a dock for a vessel of such a size might have had a volume of 750,000 gallons of water.
Renaissance Europe
Before the 15th century, when the hull below the waterline needed attention, careening was practised: at high tide the vessel was floated over a beach of hard sand and allowed to rest on one side when the tide receded. An account of 1434 described how a site near Southampton with a bottom of soft mud was selected for the warship Grace Dieu, so that the hull would bed itself in and remain upright at low tide. A timber, brushwood and clay wall was then built up around the hull. The first early modern purpose-built European and oldest surviving dry dock still in use was commissioned by Henry VII of England at HMNB Portsmouth in 1495. This was a timber-lined excavation, with the seaward end closed off by a temporary revetted bank of rock and clay that had to be dug away by hand (an operation taking typically 29 days, working night and day to accord with the tides) to allow the passage of a ship. Emptying was by a pump, possibly in the form of a bucket-chain powered by horses. This dry dock currently holds First World War monitor HMS M33.
Possibly the earliest description of a floating dock comes from a small Italian book printed in Venice in 1560, called Descrittione dell'artifitiosa machina. In the booklet, an unknown author asks for the privilege of using a new method for the salvaging of a grounded ship and then proceeds to describe and illustrate his approach. The included woodcut shows a ship flanked by two large floating trestles, forming a roof above the vessel. The ship is pulled in an upright position by a number of ropes attached to the superstructure.
Modern era
Saint-Nazaire's Chantiers de l'Atlantique owns one of the biggest in the world: . The Alfredo da Silva Dry Dock in Almada, Portugal, was closed in 2000. The largest roofed dry dock is at the German Meyer Werft Shipyard in Papenburg, Germany; it is 504 m long, 125 m wide and stands 75 m tall.
Harland and Wolff Heavy Industries in Belfast, Northern Ireland, is the site of a large dry dock . The massive cranes are named after the Biblical figures Samson and Goliath.
Dry Dock 12 at Newport News Shipbuilding at is the largest dry dock in the United States. The largest floating-dock in North America is named The Vigorous. It is operated by Vigor Industries in Portland, OR, in the Swan Island industrial area along the Willamette River.
Types
Graving
A graving dock is the traditional form of dry dock. It is a narrow basin, usually made of earthen berms and concrete, closed by gates or a caisson. A vessel is floated in with the gates open, then the gates are closed and the water is pumped out, leaving the craft supported on blocks.
The keel blocks as well as the bilge block are placed on the floor of the dock in accordance with the "docking plan" of the ship. Routine use of dry docks is for the "graving" i.e. the cleaning, removal of barnacles and rust, and re-painting of ships' hulls.
Some fine-tuning of the ship's position can be done by divers while there is still some water left to manoeuvre the vessel. It is extremely important that supporting blocks conform to the structural members so that the ship is not damaged when its weight is supported by the blocks. Some anti-submarine warfare warships have sonar domes protruding beneath the hull, requiring the hull to be supported several metres above the bottom of the dry dock, or depressions built into the floor of the dock, to accommodate the protrusions.
Once the remainder of the water is pumped out, the ship can be freely inspected or serviced. When work on the ship is finished, the gates are opened to allow water in, and the ship is carefully refloated.
Modern graving docks are box-shaped, to accommodate newer, boxier ships, whereas old dry docks are often shaped like the ships expected to dock there. This shaping was advantageous because such a dock was easier to build, it was easier to side-support the ships, and less water had to be pumped away.
Dry docks used for building naval vessels may occasionally be built with a roof, to prevent spy satellites from taking pictures of the dry dock and any vessels that may be in it. During World War II, the German Kriegsmarine used fortified dry docks to protect its submarines from Allied air raids (see submarine pen).
An advantage of covered dry docks is that work can take place in any weather; this is frequently used by modern shipyards for construction especially of complex, high-value vessels like cruise ships, where delays would incur a high cost.
Floating
A floating dry dock is a type of pontoon for dry docking ships, possessing floodable buoyancy chambers and a U-shaped cross-section. The walls are used to give the dry dock stability when the floor or deck is below the surface of the water. When valves are opened, the chambers fill with water, causing the dry dock to float lower in the water. The deck becomes submerged and this allows a ship to be moved into position inside. When the water is pumped out of the chambers, the dry dock rises and the ship is lifted out of the water on the rising deck, allowing work to proceed on the ship's hull.
A large floating dry dock involves multiple rectangular sections. These sections can be combined to handle ships of various lengths, and the sections themselves can come in different dimensions. Each section contains its own equipment for emptying the ballast and to provide the required services, and the addition of a bow section can facilitate the towing of the dry dock once assembled. For smaller boats, one-piece floating dry docks can be constructed or converted out of an existing obsolete barge, potentially coming with their own bow and steering mechanism.
Shipyards operate floating dry docks as one method for hauling or docking vessels. Floating drydocks are important in locations where porous ground prevents the use of conventional drydocks, such as at the Royal Naval Dockyard on the limestone archipelago of Bermuda. Another advantage of floating dry docks is that they can be moved to wherever they are needed and can also be sold second-hand. During World War II, the U.S. Navy used such auxiliary floating drydocks extensively to provide maintenance in remote locations. Two examples of these were the 1,000-foot AFDB-1 and the 850-foot AFDB-3. The latter, an Advance Base Sectional Dock which saw action in Guam, was mothballed near Norfolk, Virginia, and was eventually towed to Portland, Maine, to become part of Bath Iron Works' repair facilities.
A downside of floating dry docks is that unscheduled sinkings and off-design dives may take place, as with the Russian dock PD-50 in 2018.
The "Hughes Mining Barge", or HMB-1, is a covered, floating drydock that is also submersible to support the secret transfer of a mechanical lifting device underneath the Glomar Explorer ship, as well as the development of the Sea Shadow stealth ship.
The Great Balance Dock, built in New York City in 1854, was the largest floating drydock in the world when it was launched. It was long and could lift 8,000 tons, accommodating the largest ships of its day.
Alternative dry dock systems
Apart from graving docks and floating dry docks, ships can also be dry docked and launched by:
Marine railway — For repair of larger ships up to about 3000 tons ship weight
Shiplift — For repair as well as for new-building. From 800 to 25000 ton ship-weight
Slipway, patent slip — For repair of smaller boats and the new-building launch of larger vessels
Other uses
Some dry docks are used during the construction of bridges, dams, and other large objects. For example, the dry dock on the artificial island of Neeltje-Jans was used for the construction of the Oosterscheldekering, a large dam in the Netherlands that consists of 65 concrete pillars weighing 18,000 tonnes each. The pillars were constructed in a drydock and towed to their final place on the seabed.
A dry dock may also be used for the prefabrication of the elements of an immersed tube tunnel, before they are floated into position, as was done with Boston's Silver Line.
Gallery
| Technology | Coastal infrastructure | null |
213968 | https://en.wikipedia.org/wiki/Relative%20atomic%20mass | Relative atomic mass | Relative atomic mass (symbol: A; sometimes abbreviated RAM or r.a.m.), also known by the deprecated synonym atomic weight, is a dimensionless physical quantity defined as the ratio of the average mass of atoms of a chemical element in a given sample to the atomic mass constant. The atomic mass constant (symbol: m) is defined as being 1/12 of the mass of a carbon-12 atom. Since both quantities in the ratio are masses, the resulting value is dimensionless. These definitions remain valid even after the 2019 revision of the SI.
For a single given sample, the relative atomic mass of a given element is the weighted arithmetic mean of the masses of the individual atoms (including all its isotopes) that are present in the sample. This quantity can vary significantly between samples because the sample's origin (and therefore its radioactive history or diffusion history) may have produced combinations of isotopic abundances in varying ratios. For example, due to a different mixture of stable carbon-12 and carbon-13 isotopes, a sample of elemental carbon from volcanic methane will have a different relative atomic mass than one collected from plant or animal tissues.
The more common, and more specific quantity known as standard atomic weight (A) is an application of the relative atomic mass values obtained from many different samples. It is sometimes interpreted as the expected range of the relative atomic mass values for the atoms of a given element from all terrestrial sources, with the various sources being taken from Earth. "Atomic weight" is often loosely and incorrectly used as a synonym for standard atomic weight (incorrectly because standard atomic weights are not from a single sample). Standard atomic weight is nevertheless the most widely published variant of relative atomic mass.
Additionally, the continued use of the term "atomic weight" (for any element) as opposed to "relative atomic mass" has attracted considerable controversy since at least the 1960s, mainly due to the technical difference between weight and mass in physics. Still, both terms are officially sanctioned by the IUPAC. The term "relative atomic mass" now seems to be replacing "atomic weight" as the preferred term, although the term "standard atomic weight" (as opposed to the more correct "standard relative atomic mass") continues to be used.
Definition
Relative atomic mass is determined by the average atomic mass, or the weighted mean of the atomic masses of all the atoms of a particular chemical element found in a particular sample, which is then compared to the atomic mass of carbon-12. This comparison is the quotient of the two weights, which makes the value dimensionless (having no unit). This quotient also explains the word relative: the sample mass value is considered relative to that of carbon-12.
It is a synonym for atomic weight, though it is not to be confused with relative isotopic mass. Relative atomic mass is also frequently used as a synonym for standard atomic weight and these quantities may have overlapping values if the relative atomic mass used is that for an element from Earth under defined conditions. However, relative atomic mass (atomic weight) is still technically distinct from standard atomic weight because of its application only to the atoms obtained from a single sample; it is also not restricted to terrestrial samples, whereas standard atomic weight averages multiple samples but only from terrestrial sources. Relative atomic mass is therefore a more general term that can more broadly refer to samples taken from non-terrestrial environments or highly specific terrestrial environments which may differ substantially from Earth-average or reflect different degrees of certainty (e.g., in number of significant figures) than those reflected in standard atomic weights.
Current definition
The prevailing IUPAC definitions (as taken from the "Gold Book") are:
atomic weight – See: relative atomic mass
and
relative atomic mass (atomic weight) – The ratio of the average mass of the atom to the unified atomic mass unit.
Here the "unified atomic mass unit" refers to 1/12 of the mass of an atom of C in its ground state.
The IUPAC definition of relative atomic mass is:
An atomic weight (relative atomic mass) of an element from a specified source is the ratio of the average mass per atom of the element to 1/12 of the mass of an atom of C.
The definition deliberately specifies "An atomic weight ...", as an element will have different relative atomic masses depending on the source. For example, boron from Turkey has a lower relative atomic mass than boron from California, because of its different isotopic composition. Nevertheless, given the cost and difficulty of isotope analysis, it is common practice to instead substitute the tabulated values of standard atomic weights, which are ubiquitous in chemical laboratories and which are revised biennially by the IUPAC's Commission on Isotopic Abundances and Atomic Weights (CIAAW).
Historical usage
Older (pre-1961) historical relative scales based on the atomic mass unit (symbol: a.m.u. or amu) used either the oxygen-16 relative isotopic mass or else the oxygen relative atomic mass (i.e., atomic weight) for reference. See the article on the history of the modern unified atomic mass unit for the resolution of these problems.
Standard atomic weight
The IUPAC commission CIAAW maintains an expectation-interval value for relative atomic mass (or atomic weight) on Earth named standard atomic weight. Standard atomic weight requires the sources be terrestrial, natural, and stable with regard to radioactivity. Also, there are requirements for the research process. For 84 stable elements, CIAAW has determined this standard atomic weight. These values are widely published and referred to loosely as 'the' atomic weight of elements for real-life substances like pharmaceuticals and commercial trade.
Also, CIAAW has published abridged (rounded) values and simplified values (for when the Earthly sources vary systematically).
Other measures of the mass of atoms
Atomic mass (ma) is the mass of a single atom. It defines the mass of a specific isotope, which is an input value for the determination of the relative atomic mass. An example for three silicon isotopes is given below. A convenient unit of mass for atomic mass is the dalton (Da), which is also called the unified atomic mass unit (u).
The relative isotopic mass is the ratio of the mass of a single atom to the atomic mass constant (). This ratio is dimensionless.
Determination of relative atomic mass
Modern relative atomic masses (a term specific to a given element sample) are calculated from measured values of atomic mass (for each nuclide) and isotopic composition of a sample. Highly accurate atomic masses are available for virtually all non-radioactive nuclides, but isotopic compositions are both harder to measure to high precision and more subject to variation between samples. For this reason, the relative atomic masses of the 22 mononuclidic elements (which are the same as the isotopic masses for each of the single naturally occurring nuclides of these elements) are known to especially high accuracy. For example, there is an uncertainty of only one part in 38 million for the relative atomic mass of fluorine, a precision which is greater than the current best value for the Avogadro constant (one part in 20 million).
The calculation is exemplified for silicon, whose relative atomic mass is especially important in metrology. Silicon exists in nature as a mixture of three isotopes: Si, Si and Si. The atomic masses of these nuclides are known to a precision of one part in 14 billion for Si and about one part in one billion for the others. However, the range of natural abundance for the isotopes is such that the standard abundance can only be given to about ±0.001% (see table).
The calculation is as follows:
A(Si) = m(28Si) × x(28Si) + m(29Si) × x(29Si) + m(30Si) × x(30Si),
the weighted mean of the three isotopes, where m denotes the atomic mass of each nuclide and x its fractional abundance in the sample.
The estimation of the uncertainty is complicated, especially as the sample distribution is not necessarily symmetrical: the IUPAC standard relative atomic masses are quoted with estimated symmetrical uncertainties, and the value for silicon is 28.0855(3). The relative standard uncertainty in this value is about 1 × 10^−5, or 10 ppm.
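As a small illustration of the weighted-mean calculation described above, the Python sketch below uses rounded, approximate isotopic masses and abundances for silicon (the authoritative figures are maintained by CIAAW); the numbers are included only to show the arithmetic, not to reproduce the article's exact values.

```python
# (relative isotopic mass in Da, fractional abundance) -- approximate, rounded values
silicon_isotopes = [
    (27.976927, 0.92223),   # 28Si
    (28.976495, 0.04685),   # 29Si
    (29.973770, 0.03092),   # 30Si
]

total_abundance = sum(x for _, x in silicon_isotopes)
relative_atomic_mass = sum(m * x for m, x in silicon_isotopes) / total_abundance

print(f"A_r(Si) ~= {relative_atomic_mass:.4f}")   # close to the tabulated 28.0855
```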
Apart from this uncertainty by measurement, some elements have variation over sources. That is, different sources (ocean water, rocks) have a different radioactive history and so different isotopic composition. To reflect this natural variability, the IUPAC made the decision in 2010 to list the standard relative atomic masses of 10 elements as an interval rather than a fixed number.
| Physical sciences | Periodic table | Chemistry |
214067 | https://en.wikipedia.org/wiki/Virgo%20Cluster | Virgo Cluster | The Virgo Cluster is a cluster of galaxies whose center is 53.8 ± 0.3 Mly (16.5 ± 0.1 Mpc) away in the constellation Virgo. Comprising approximately 1,300 (and possibly up to 2,000) member galaxies, the cluster forms the heart of the larger Virgo Supercluster, of which the Local Group (containing the Milky Way galaxy) is a member. The Local Group actually experiences the mass of the Virgo Supercluster as the Virgocentric flow. It is estimated that the Virgo Cluster's mass is 1.2 out to 8 degrees of the cluster's center or a radius of about 2.2 Mpc.
Many of the brighter galaxies in this cluster, including the giant elliptical galaxy Messier 87, were discovered in the late 1770s and early 1780s and subsequently included in Charles Messier's catalogue of non-cometary fuzzy objects. Described by Messier as nebulae without stars, their true nature was not recognized until the 1920s.
The cluster extends across approximately 8 degrees centered in the constellation Virgo. Some of its most prominent members can be seen with binoculars and small telescopes, while a 6-inch telescope will reveal about 160 of the cluster's galaxies on a clear night. Its brightest member is the elliptical galaxy Messier 49.
Characteristics
The cluster is a fairly heterogeneous mixture of spiral and elliptical galaxies. It is believed that the spiral galaxies of the cluster are distributed in an oblong prolate filament, approximately four times as long as it is wide, stretching along the line of sight from the Milky Way. The elliptical galaxies are more centrally concentrated than the spiral galaxies.
The cluster is an aggregate of at least three separate subclumps: Virgo A, centered on M87, a second centered on the galaxy M86, and Virgo B, centered on M49, with some authors including a Virgo C subcluster, centered on the galaxy M60 as well as a Low Velocity Cloud (LVC) subclump, centered on the large spiral galaxy NGC 4216. The giant elliptical galaxy M87 contains a supermassive black hole, whose event horizon was observed by the Event Horizon Telescope Collaboration in 2019.
Virgo A is the dominant subclump; its mass of approximately 10^14 solar masses is roughly ten times larger than that of the other two subclumps. It contains a mixture of elliptical, lenticular, and spiral galaxies which are generally gas-poor.
The three subgroups are in the process of merging to form a larger single cluster, and are surrounded by other smaller galaxy clouds, mostly composed of spiral galaxies, known as N Cloud, S Cloud, and Virgo E that are in the process of infalling to merge with them, plus other farther isolated galaxies and galaxy groups (like the galaxy cloud Coma I) that are also attracted by the gravity of Virgo to merge with it in the future. This strongly suggests the Virgo cluster is a dynamically young cluster that is still forming.
Nearby aggregations known as M Cloud, W Cloud, and W' Cloud seem to be background systems independent of the main cluster.
The large mass of the cluster is indicated by the high peculiar velocities of many of its galaxies, sometimes as high as 1,600 km/s with respect to the cluster's center.
The Virgo cluster lies within the Virgo Supercluster, and its gravitational effect slows down the nearby galaxies. The large mass of the cluster has the effect of slowing down the recession of the Local Group from the cluster by approximately ten percent.
Molecular gas in the Virgo Cluster has been swept away, which prevents nearby galaxies from forming new stars. The cause of this has long been a mystery in astrophysics; it is thought to result from the extreme environment of the Virgo Cluster.
Intracluster medium
As with many other rich galaxy clusters, Virgo's intracluster medium is filled with a hot, rarefied plasma at temperatures of 30 million kelvins that emits X-rays. Within the intracluster medium (ICM) are found a large number of intergalactic stars (up to 10% of the stars in the cluster), including some planetary nebulae. It is theorized that these were expelled from their home galaxies by interactions with other galaxies. The ICM also contains some globular clusters, possibly stripped off dwarf galaxies, and even at least one star formation region.
Galaxies
Below is a table of bright or notable objects in the cluster and their subcluster. In some cases a galaxy may be considered to be in a different subcluster by other researchers.
Column 1: The name of the galaxy.
Column 2: The right ascension for epoch 2000.
Column 3: The declination for epoch 2000.
Column 4: The blue apparent magnitude of the galaxy.
Column 5: The galaxy type: E=Elliptical, S0=Lenticular, Sa,Sb,Sc,Sd=Spiral, SBa,SBb,SBc,SBd=Barred spiral, Sm,SBm,Irr=Irregular.
Column 6: The angular diameter of the galaxy (arcminutes).
Column 7: The diameter of the galaxy (thousands of light years).
Column 8: The recessional velocity (km/s) of the galaxy relative to the cosmic microwave background.
Column 9: Subcluster where the galaxy is located.
Fainter galaxies within the cluster are usually known by their numbers in the Virgo Cluster Catalog, particularly members of the numerous dwarf galaxy population.
| Physical sciences | Other notable objects | null |
214077 | https://en.wikipedia.org/wiki/Messier%2087 | Messier 87 | Messier 87 (also known as Virgo A or NGC 4486, generally abbreviated to M87) is a supergiant elliptical galaxy in the constellation Virgo that contains several trillion stars. One of the largest and most massive galaxies in the local universe, it has a large population of globular clusters—about 15,000 compared with the 150–200 orbiting the Milky Way—and a jet of energetic plasma that originates at the core and extends at least , traveling at a relativistic speed. It is one of the brightest radio sources in the sky and a popular target for both amateur and professional astronomers.
The French astronomer Charles Messier discovered M87 in 1781, and cataloged it as a nebula. M87 is about from Earth and is the second-brightest galaxy within the northern Virgo Cluster, having many satellite galaxies. Unlike a disk-shaped spiral galaxy, M87 has no distinctive dust lanes. Instead, it has an almost featureless, ellipsoidal shape typical of most giant elliptical galaxies, diminishing in luminosity with distance from the center. Forming around one-sixth of its mass, M87's stars have a nearly spherically symmetric distribution. Their population density decreases with increasing distance from the core. It has an active supermassive black hole at its core, which forms the primary component of an active galactic nucleus. The black hole was imaged using data collected in 2017 by the Event Horizon Telescope (EHT), with a final, processed image released on 10 April 2019. In March 2021, the EHT Collaboration presented, for the first time, a polarized-based image of the black hole which may help better reveal the forces giving rise to quasars.
The galaxy is a strong source of multi-wavelength radiation, particularly radio waves. It has an isophotal diameter of , with a diffuse galactic envelope that extends to a radius of about , where it is truncated—possibly by an encounter with another galaxy. Its interstellar medium consists of diffuse gas enriched by elements emitted from evolved stars.
Observation history
In 1781, the French astronomer Charles Messier published a catalogue of 103 objects that had a nebulous appearance as part of a list intended to identify objects that might otherwise be confused with comets. In subsequent use, each catalogue entry was prefixed with an "M". Thus, M87 was the eighty-seventh object listed in Messier's catalogue. During the 1880s, the object was included as NGC 4486 in the New General Catalogue of nebulae and star clusters assembled by the Danish-Irish astronomer John Dreyer, which he based primarily on the observations of the English astronomer John Herschel.
In 1918, the American astronomer Heber Curtis of Lick Observatory noted M87's lack of a spiral structure and observed a "curious straight ray ... apparently connected with the nucleus by a thin line of matter." The ray appeared brightest near the galactic center. The following year, supernova SN 1919A within M87 reached a peak photographic magnitude of 11.5, although this event was not reported until photographic plates were examined by the Russian astronomer Innokentii A. Balanowski in 1922.
Identification as a galaxy
In 1922, the American astronomer Edwin Hubble categorized M87 as one of the brighter globular nebulae, as it lacked any spiral structure, but like spiral nebulae, appeared to belong to the family of non-galactic nebulae. In 1926 he produced a new categorization, distinguishing extragalactic from galactic nebulae, the former being independent star systems. M87 was classified as a type of elliptical extragalactic nebula with no apparent elongation (class E0).
In 1931, Hubble described M87 as a member of the Virgo Cluster, and gave a provisional estimate of from Earth. It was then the only known elliptical nebula for which individual stars could be resolved, although it was pointed out that globular clusters would be indistinguishable from individual stars at such distances. In his 1936 The Realm of the Nebulae, Hubble examines the terminology of the day; some astronomers labeled extragalactic nebulae as external galaxies on the basis that they were stellar systems at far distances from our own galaxy, while others preferred the conventional term extragalactic nebulae, as galaxy was at that time a synonym for the Milky Way. M87 continued to be labelled as an extragalactic nebula at least until 1954.
Modern research
In 1947, a prominent radio source, Virgo A, was identified with errors in its measured position that overlapped the location of M87. The source was confirmed to be M87 by 1953, and the linear relativistic jet emerging from the core of the galaxy was suggested as the cause. This jet extended from the core at a position angle of 260° to an angular distance of 20″ with an angular width of 2″. In 1969–1970, a strong component of the radio emission was found to closely align with the optical source of the jet.
In 1966, the United States Naval Research Laboratory's Aerobee 150 rocket identified Virgo X-1, the first X-ray source in Virgo. The Aerobee rocket launched from White Sands Missile Range on 7 July 1967 yielded further evidence that the source of Virgo X-1 was the radio galaxy M87. Subsequent X-ray observations by the HEAO 1 and Einstein Observatory showed a complex source that included the active galactic nucleus of M87. However, there is little central concentration of the X-ray emission.
M87 has been an important testing ground for techniques that measure the masses of central supermassive black holes in galaxies. In 1978, stellar-dynamical modeling of the mass distribution in M87 gave evidence for a central mass of five billion solar masses. After the installation of the COSTAR corrective-optics module in the Hubble Space Telescope in 1993, the Hubble Faint Object Spectrograph (FOS) was used to measure the rotation velocity of the ionized gas disk at the center of M87, as an "early release observation" designed to test the scientific performance of the post-repair Hubble instruments. The FOS data indicated a central black hole mass of 2.4 billion , with 30% uncertainty. Globular clusters within M87 have been used to calibrate metallicity relations as well.
M87 was observed by the Event Horizon Telescope (EHT) during much of 2017. The event horizon of the black hole at the center was directly imaged by the EHT, and the result was revealed at a press conference on 10 April 2019, yielding the first image of a black hole's shadow.
Visibility
M87 is near a high declination limit of the Virgo constellation, abutting Coma Berenices. It lies along the line between the stars Epsilon Virginis and Denebola (Beta Leonis). The galaxy can be observed using a small telescope with a aperture, extending across an angular area of at a surface brightness of 12.9, with a very bright, 45 arcsecond core. Viewing the jet is a challenge without the aid of photography. Before 1991, the Ukrainian-American astronomer Otto Struve was the only person known to have seen the jet visually, using the Hooker telescope. In more recent years it has been observed in larger amateur telescopes under excellent conditions.
Properties
In the modified Hubble sequence galaxy morphological classification scheme of the French astronomer Gérard de Vaucouleurs, M87 is categorized as an E0p galaxy. "E0" designates an elliptical galaxy that displays no flattening—that is, it appears spherical. A "p" suffix indicates a peculiar galaxy that does not fit cleanly into the classification scheme; in this case, the peculiarity is the presence of the jet emerging from the core. In the Yerkes (Morgan) scheme, M87 is classified as a type-cD galaxy. A D galaxy has an elliptical-like nucleus surrounded by an extensive, dustless, diffuse envelope. A D type supergiant is called a cD galaxy.
The distance to M87 has been estimated using several independent techniques. These include measurement of the luminosity of planetary nebulae, comparison with nearby galaxies whose distance is estimated using standard candles such as cepheid variables, the linear size distribution of globular clusters, and the tip of the red-giant branch method using individually resolved red giant stars. These measurements are consistent with each other, and their weighted average yields a distance estimate of .
M87 is one of the most massive galaxies in the local Universe. Its diameter is estimated at 132,000 light-years, which is approximately 51% larger than that of the Milky Way. As an elliptical galaxy, the galaxy is a spheroid rather than a flattened disc, accounting for the substantially larger mass of M87. Within a radius of , the mass is times the mass of the Sun, which is double the mass of the Milky Way galaxy. As with other galaxies, only a fraction of this mass is in the form of stars: M87 has an estimated mass to luminosity ratio of ; that is, only about one part in six of the galaxy's mass is in the form of stars that radiate energy. This ratio varies from 5 to 30, approximately in proportion to in the region of from the core. The total mass of M87 may be 200 times that of the Milky Way.
The galaxy experiences an infall of gas at the rate of two to three solar masses per year, most of which may be accreted onto the core region. The extended stellar envelope of this galaxy reaches a radius of about , compared with about for the Milky Way. Beyond that distance the outer edge of the galaxy has been truncated by some means; possibly by an earlier encounter with another galaxy. There is evidence of linear streams of stars to the northwest of the galaxy, which may have been created by tidal stripping of orbiting galaxies or by small satellite galaxies falling in toward M87. Moreover, a filament of hot, ionized gas in the northeastern outer part of the galaxy may be the remnant of a small, gas-rich galaxy that was disrupted by M87 and could be feeding its active nucleus. M87 is estimated to have at least 50 satellite galaxies, including NGC 4486B and NGC 4478.
The spectrum of the nuclear region of M87 shows the emission lines of various ions, including hydrogen (HI, HII), helium (HeI), oxygen (OI, OII, OIII), nitrogen (NI), magnesium (MgII), and sulfur (SII). The line intensities for weakly ionized atoms (such as neutral atomic oxygen, OI) are stronger than those of strongly ionized atoms (such as doubly ionized oxygen, OIII). A galactic nucleus with such spectral properties is termed a LINER, for "low-ionization nuclear emission-line region". The mechanism and source of weak-line-dominated ionization in LINERs and M87 are under debate. Possible causes include shock-induced excitation in the outer parts of the disk or photoionization in the inner region powered by the jet.
Elliptical galaxies such as M87 are believed to form as the result of one or more mergers of smaller galaxies. They generally contain relatively little cold interstellar gas (in comparison with spiral galaxies) and they are populated mostly by old stars, with little or no ongoing star formation. M87's elliptical shape is maintained by the random orbital motions of its constituent stars, in contrast to the more orderly rotational motions found in a spiral galaxy such as the Milky Way. Using the Very Large Telescope to study the motions of about 300 planetary nebulae, astronomers have determined that M87 absorbed a medium-sized star-forming spiral galaxy over the last billion years. This has resulted in the addition of some younger, bluer stars to M87. The distinctive spectral properties of the planetary nebulae allowed astronomers to discover a chevron-like structure in M87's halo which was produced by the incomplete phase-space mixing of a disrupted galaxy.
Components
Supermassive black hole M87*
The core of the galaxy contains a supermassive black hole (SMBH), designated M87*, whose mass is billions of times that of the Earth's Sun; estimates had ranged from to , surpassed by in 2016. In April 2019, the Event Horizon Telescope collaboration released measurements of the black hole's mass as . This is one of the highest known masses for such an object. A rotating disk of ionized gas surrounds the black hole, and is roughly perpendicular to the relativistic jet. The disk rotates at velocities of up to roughly and spans a maximum diameter of . By comparison, Pluto averages from the Sun. Gas accretes onto the black hole at an estimated rate of one solar mass every ten years (about 90 Earth masses per day). The Schwarzschild radius of the black hole is . The diameter of the ring of emission, as seen from Earth, is 42 μas (microarcsecond). By comparison, the diameter of the core of M87 is 45" (as, arcsecond), and the size of M87 is 7.2' x 6.8' (am, arcminute).
A 2010 paper suggested that the black hole may be displaced from the galactic center by about . This was claimed to be in the opposite direction of the known jet, indicating acceleration of the black hole by it. Another suggestion was that the offset occurred during the merger of two supermassive black holes. However, a 2011 study did not find any statistically significant displacement, and a 2018 study of high-resolution images of M87 concluded that the apparent spatial offset was caused by temporal variations in the jet's brightness rather than a physical displacement of the black hole from the galaxy's center.
This black hole is the first to be imaged. Data to produce the image were taken in April 2017, the image was produced during 2018 and was published on 10 April 2019. The image shows the shadow of the black hole, surrounded by an asymmetric emission ring with a diameter of . The shadow radius is 2.6 times that of the black hole's Schwarzschild radius. The asymmetry in the brightness of the ring is due to relativistic beaming, whereby material moving towards the observer at relativistic velocities appears brighter. The visible material around the black hole rotates mostly clockwise with respect to the observer, which due to the direction of the axis of rotation causes the bottom part of the emission region to have a component of velocity toward the observer. The rotation parameter was estimated at , corresponding to a rotation speed
This is consistent with the value of
obtained with the outflow method.
After the black hole had been imaged, it was named Pōwehi, a Hawaiian word meaning "the adorned fathomless dark creation", taken from the ancient creation chant Kumulipo.
On 24 March 2021, the Event Horizon Telescope collaboration revealed an unprecedented unique view of the M87 black hole shadow: how it looks in polarized light. Polarization is a powerful tool which allows astronomers to probe physics behind the image in more detail. Light polarization informs us about the strength and orientation of magnetic fields in the ring of light around the black hole shadow. Knowing those is essential to understand how M87's supermassive black hole is launching jets of magnetized plasma, which expand at relativistic speeds beyond the M87 galaxy.
On 14 April 2021, astronomers further reported that the M87 black hole and its surroundings were studied during Event Horizon Telescope 2017 observing run also by many multi-wavelength observatories from around the world.
In April 2023, a team developed a new principal-component interferometric modeling (PRIMO) technique to produce sharper image reconstructions from EHT data. They applied this to the original EHT observations of the M87 black hole, yielding a crisper final image and allowing closer testing of the alignment of observations to theory.
Jet
The relativistic jet of matter emerging from the core extends at least from the nucleus and consists of matter ejected from a supermassive black hole. The jet is highly collimated, appearing constrained to an angle of 60° within of the core, to about 16° at , and to 6–7° at . Its base has the diameter of and is probably powered by a prograde accretion disk around the spinning supermassive black hole. The German-American astronomer Walter Baade found that light from the jet was plane polarized, which suggests that the energy is generated by the acceleration of electrons moving at relativistic velocities in a magnetic field. The total energy of these electrons is estimated at ergs ( or ). This is roughly times the energy produced in the entire Milky Way in one second, which is estimated at The jet is surrounded by a lower-velocity non-relativistic component. There is evidence of a counter jet, but it remains unseen from the Earth due to relativistic beaming. The jet is precessing, causing the outflow to form a helical pattern out to . Lobes of expelled matter extend out to .
In pictures taken by the Hubble Space Telescope in 1999, the motion of M87's jet was measured at four to six times the speed of light. This phenomenon, called superluminal motion, is an illusion caused by the relativistic velocity of the jet. The time interval between any two light pulses emitted by the jet is, as registered by the observer, less than the actual interval due to the relativistic speed of the jet moving in the direction of the observer. This results in perceived faster-than-light speeds, though the jet itself has a velocity of only 80–85% the speed of light. Detection of such motion is used to support the theory that quasars, BL Lacertae objects and radio galaxies may all be the same phenomenon, known as active galaxies, viewed from different perspectives. It is proposed that the nucleus of M87 is a BL Lacertae object (of lower luminosity than its surrounds) seen from a relatively large angle. Flux variations, characteristic of the BL Lacertae objects, have been observed in M87.
Observations indicate that the rate at which material is ejected from the supermassive black hole is variable. These variations produce pressure waves in the hot gas surrounding M87. The Chandra X-ray Observatory has detected loops and rings in the gas. Their distribution suggests that minor eruptions occur every few million years. One of the rings, caused by a major eruption, is a shock wave in diameter around the black hole. Other features observed include narrow X-ray-emitting filaments up to long, and a large cavity in the hot gas caused by a major eruption 70 million years ago. The regular eruptions prevent a huge reservoir of gas from cooling and forming stars, implying that M87's evolution may have been seriously affected, preventing it from becoming a large spiral galaxy.
M87 is a very strong source of gamma rays, the most energetic rays of the electromagnetic spectrum. Gamma rays emitted by M87 have been observed since the late 1990s. In 2006, using the High Energy Stereoscopic System Cherenkov telescopes, scientists measured the variations of the gamma ray flux coming from M87, and found that the flux changes over a matter of days. This short period indicates that the most likely source of the gamma rays is a supermassive black hole. In general, the smaller the diameter of the emission source, the faster the variation in flux.
A knot of matter in the jet (designated HST-1), about from the core, has been tracked by the Hubble Space Telescope and the Chandra X-ray Observatory. By 2006, the X-ray intensity of this knot had increased by a factor of 50 over a four-year period, while the X-ray emission has since been decaying in a variable manner.
The interaction of relativistic jets of plasma emanating from the core with the surrounding medium gives rise to radio lobes in active galaxies. The lobes occur in pairs and are often symmetrical. The two radio lobes of M87 together span about 80 kiloparsecs; the inner parts, extending up to 2 kiloparsecs, emit strongly at radio wavelengths. Two flows of material emerge from this region, one aligned with the jet itself and the other in the opposite direction. The flows are asymmetrical and deformed, implying that they encounter a dense intra-cluster medium. At greater distances, both flows diffuse into two lobes. The lobes are surrounded by a fainter halo of radio-emitting gas.
Interstellar medium
The space between the stars in M87 is filled with a diffuse interstellar medium of gas that has been chemically enriched by the elements ejected from stars as they passed beyond their main sequence lifetime. Carbon and nitrogen are continuously supplied by stars of intermediate mass as they pass through the asymptotic giant branch. The heavier elements from oxygen to iron are produced largely by supernova explosions within the galaxy. Of the heavy elements, about 60% were produced by core-collapse supernovae, while the remainder came from type Ia supernovae.
The distribution of oxygen is roughly uniform throughout, at about half of the solar value (i.e., oxygen abundance in the Sun), while iron distribution peaks near the center where it approaches the solar iron value. Since oxygen is produced mainly by core-collapse supernovae, which occur during the early stages of galaxies, and mostly in outer star-forming regions, the distribution of these elements suggests an early enrichment of the interstellar medium from core-collapse supernovae and a continuous contribution from type Ia supernovae throughout the history of M87. The contribution of elements from these sources was much lower than in the Milky Way.
Examination of M87 at far infrared wavelengths shows an excess emission at wavelengths longer than 25 μm. Normally, this may be an indication of thermal emission by warm dust. In the case of M87, the emission can be fully explained by synchrotron radiation from the jet; within the galaxy, silicate grains are expected to survive for no more than 46 million years because of the X-ray emission from the core. This dust may be destroyed by the hostile environment or expelled from the galaxy. The combined mass of dust in M87 is no more than 70,000 times the mass of the Sun. By comparison, the Milky Way's dust equals about a hundred million solar masses.
Although M87 is an elliptical galaxy and therefore lacks the dust lanes of a spiral galaxy, optical filaments have been observed in it, which arise from gas falling towards the core. Emission probably comes from shock-induced excitation as the falling gas streams encounter X-rays from the core region. These filaments have an estimated mass of about 10,000 . Surrounding the galaxy is an extended corona with hot, low-density gas.
Globular clusters
M87 has an abnormally large population of globular clusters. A 2006 survey out to an angular distance of 25′ from the core estimates that there are clusters in orbit around M87, compared with 150–200 in and around the Milky Way. The clusters are similar in size distribution to those of the Milky Way, most having an effective radius of 1 to 6 parsecs. The size of the M87 clusters gradually increases with distance from the galactic center. Within a radius of the core, the cluster metallicity—the abundance of elements other than hydrogen and helium—is about half the abundance in the Sun. Outside this radius, metallicity steadily declines as the cluster distance from the core increases. Clusters with low metallicity are somewhat larger than metal-rich clusters. In 2014, HVGC-1, the first hypervelocity globular cluster, was discovered escaping from M87 at 2,300 km/s. The escape of the cluster with such a high velocity was speculated to have been the result of a close encounter with, and subsequent gravitational kick from, a supermassive black hole binary.
Almost a hundred ultra-compact dwarfs have been identified in M87. They resemble globular clusters but have a diameter of or more, much larger than the maximum of globular clusters. It is unclear whether they are dwarf galaxies captured by M87 or a new class of massive globular cluster.
Environment
M87 is near (or at) the center of the Virgo Cluster, a closely compacted structure of about 2,000 galaxies. This forms the core of the larger Virgo Supercluster, of which the Local Group (including the Milky Way) is an outlying member. It is organized into at least three distinct subsystems associated with the three large galaxies M87, M49 and M86, with the core subgroup including M87 (Virgo A) and M49 (Virgo B). There is a preponderance of elliptical and S0 galaxies around M87. A chain of elliptical galaxies roughly aligns with the jet. In terms of mass, M87 is likely the largest galaxy in the cluster and, coupled with its central position, appears to be moving very little relative to the cluster as a whole. It is defined in one study as the cluster center. The cluster has a sparse gaseous medium that emits X-rays and decreases in temperature toward the middle. The combined mass of the cluster is estimated to be 0.15 to
Measurements of the motion of intracluster planetary nebulae between M87 and M86 suggest that the two galaxies are moving toward each other and that this may be their first encounter. M87 may have interacted with M84, as evidenced by the truncation of M87's outer halo by tidal interactions. The truncated halo may also have been caused by contraction due to an unseen mass falling into M87 from the rest of the cluster, which may be the hypothesized dark matter. A third possibility is that the halo's formation was truncated by early feedback from the active galactic nucleus.
| Physical sciences | Notable galaxies | null |
214096 | https://en.wikipedia.org/wiki/Active%20optics | Active optics | Active optics is a technology, developed in the 1980s, used with reflecting telescopes to actively shape a telescope's mirrors and so prevent deformation due to external influences such as wind, temperature, and mechanical stress. Without active optics, the construction of 8-metre-class telescopes would not be possible, nor would telescopes with segmented mirrors be feasible.
This method is used by, among others, the Nordic Optical Telescope, the New Technology Telescope, the Telescopio Nazionale Galileo and the Keck telescopes, as well as all of the largest telescopes built since the mid-1990s.
Active optics is not to be confused with adaptive optics, which operates at a shorter timescale and corrects atmospheric distortions.
In astronomy
Most modern telescopes are reflectors, with the primary element being a very large mirror. Historically, primary mirrors were quite thick in order to maintain the correct surface figure in spite of forces tending to deform it, like wind and the mirror's own weight. This limited their maximum diameter to 5 or 6 metres (200 or 230 inches), such as Palomar Observatory's Hale Telescope.
A new generation of telescopes built since the 1980s uses thin, lighter-weight mirrors instead. They are too thin to maintain the correct shape rigidly on their own, so an array of actuators is attached to the rear side of the mirror. The actuators apply variable forces to the mirror body to keep the reflecting surface in the correct shape as the telescope is repositioned. The telescope may also be segmented into multiple smaller mirrors, which reduces the sagging due to weight that occurs in large, monolithic mirrors.
The combination of actuators, an image-quality detector, and a computer that controls the actuators to obtain the best possible image is called active optics.
The name active optics means that the system keeps a mirror (usually the primary) in its optimal shape against environmental forces such as wind, sag, thermal expansion, and telescope axis deformation. Active optics compensates for distorting forces that change relatively slowly, roughly on timescales of seconds, so the telescope is actively kept in its optimal shape.
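As a rough illustration of such a slow correction loop (a toy sketch with random numbers, not the control software of any real telescope), the computer can repeatedly solve for actuator commands that null the measured shape error and apply them with a low gain:

```python
# Minimal active-optics-style loop: an image-quality sensor measures low-order
# shape errors, a least-squares solve maps them to actuator commands, and a
# small gain per cycle gives a slow, stable correction.  All matrices and
# error values here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_modes, n_actuators = 5, 12
# Influence matrix: how much each actuator moves each measured mode.
# In a real system this is calibrated; here it is random for illustration.
influence = rng.normal(size=(n_modes, n_actuators))

shape_error = np.array([0.8, -0.5, 0.3, 0.2, -0.1])   # initial mode errors (arbitrary units)
commands = np.zeros(n_actuators)
gain = 0.3                                             # low gain -> slow, stable loop

for cycle in range(20):
    measured = shape_error + influence @ commands      # what the sensor sees
    # Least-squares actuator update that would null the measured error
    update, *_ = np.linalg.lstsq(influence, -measured, rcond=None)
    commands += gain * update
    print(f"cycle {cycle:2d}: rms shape error = {np.sqrt(np.mean(measured**2)):.4f}")
```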
Comparison with adaptive optics
Active optics should not be confused with adaptive optics, which operates on a much shorter timescale to compensate for atmospheric effects rather than for mirror deformation. The influences that active optics compensates for (temperature, gravity) are intrinsically slower (around 1 Hz) and have a larger amplitude in aberration. Adaptive optics, on the other hand, corrects for atmospheric distortions that affect the image at 100–1000 Hz (the Greenwood frequency, depending on wavelength and weather conditions). These corrections need to be much faster, but they also have smaller amplitude. Because of this, adaptive optics uses smaller corrective mirrors. This used to be a separate mirror not integrated in the telescope's light path, but nowadays it can be the second, third or fourth mirror in a telescope.
Other applications
Complicated laser set-ups and interferometers can also be actively stabilized.
A small part of the beam leaks through the beam-steering mirrors; a four-quadrant diode is used to measure the position of the laser beam, and a second diode in the focal plane behind a lens is used to measure its direction. The system can be sped up or made more noise-immune by using a PID controller. For pulsed lasers, the controller should be locked to the repetition rate. For low-repetition-rate lasers, a continuous (non-pulsed) pilot beam can be used to allow stabilization bandwidths of up to 10 kHz (against vibrations, air turbulence, and acoustic noise).
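A minimal sketch of such a loop (the plant model, disturbance, and gains are illustrative assumptions, not parameters of any real setup) might look like this:

```python
# Toy beam-stabilization loop: a quadrant photodiode reads the beam's offset,
# a PID controller computes a correction, and a steering mirror is nudged.
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, error, dt=1.0):       # dt in units of one control cycle
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=0.5, ki=0.05, kd=0.1)         # illustrative gains
mirror_angle = 0.0
drift = 0.5                                # slow disturbance pushing the beam off-centre

for step in range(50):
    beam_offset = drift + mirror_angle     # what the four-quadrant diode measures
    error = -beam_offset                   # setpoint: beam centred on the diode
    mirror_angle += pid.update(error)      # nudge the steering mirror
    if step % 10 == 0:
        print(f"step {step:2d}: beam offset = {beam_offset:+.4f}")
```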
Sometimes Fabry–Pérot interferometers have to be adjusted in length to pass a given wavelength. To do this, the reflected light is extracted by means of a Faraday rotator and a polarizer. Small changes of the incident wavelength, generated by an acousto-optic modulator, or interference with a fraction of the incoming radiation, reveal whether the Fabry–Pérot cavity is too long or too short.
Long optical cavities are very sensitive to mirror alignment, and a control circuit can be used to keep the transmitted power at its peak. One possibility is to apply small periodic rotations to one end mirror: if this rotation is centered on the optimum position, no power oscillation occurs at the rotation frequency. Any beam-pointing oscillation can be removed using the beam-steering mechanism mentioned above.
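The idea behind the small mirror rotations can be sketched as a dither (lock-in) measurement: model the transmitted power as peaking at the optimal angle, modulate the angle, and demodulate the power at the dither frequency. The parabolic power model, dither amplitude, and angles below are assumed toy values:

```python
# Toy dither/lock-in sketch: the demodulated signal vanishes at the optimum
# alignment and changes sign on either side, so it can serve as an error signal.
import math

def transmitted_power(angle, optimum=0.3):
    return 1.0 - (angle - optimum) ** 2       # toy model, peak power 1.0 at the optimum

def dither_error_signal(center_angle, amplitude=0.01, samples=200):
    """Demodulate the transmitted power at the dither frequency."""
    acc = 0.0
    for k in range(samples):
        phase = 2 * math.pi * k / samples
        angle = center_angle + amplitude * math.sin(phase)
        acc += transmitted_power(angle) * math.sin(phase)
    return 2 * acc / samples

for center in (0.0, 0.2, 0.3, 0.4):
    print(f"mirror at {center:.1f}: error signal = {dither_error_signal(center):+.4f}")
# Positive below the optimum, negative above it, and ~0 at 0.3: feeding this
# back to the mirror keeps the cavity aligned at peak power.
```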
X-ray active optics, using actively deformable grazing incidence mirrors, are also being investigated.
| Technology | Telescope | null |
214102 | https://en.wikipedia.org/wiki/Spherical%20aberration | Spherical aberration | In optics, spherical aberration (SA) is a type of aberration found in optical systems that have elements with spherical surfaces. This phenomenon commonly affects lenses and curved mirrors, as these components are often shaped in a spherical manner for ease of manufacturing. Light rays that strike a spherical surface off-centre are refracted or reflected more or less than those that strike close to the centre. This deviation reduces the quality of images produced by optical systems. The effect of spherical aberration was first identified in the 11th century by Ibn al-Haytham who discussed it in his work Kitāb al-Manāẓir.
Overview
A spherical lens has an aplanatic point (i.e., no spherical aberration) only at a lateral distance from the optical axis that equals the radius of the spherical surface divided by the index of refraction of the lens material.
Spherical aberration makes the focus of telescopes and other instruments less than ideal. This is an important effect, because spherical shapes are much easier to produce than aspherical ones. In many cases, it is cheaper to use multiple spherical elements to compensate for spherical aberration than it is to use a single aspheric lens.
"Positive" spherical aberration means rays near the outer edge of a lens are bent more than would be predicted for an ideal lens. "Negative" spherical aberration means such rays are bent less than would be predicted.
The effect is proportional to the fourth power of the diameter and inversely proportional to the third power of the focal length, so it is much more pronounced at short focal ratios, i.e., "fast" lenses.
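A short worked example of this scaling (units and values are arbitrary; only the stated proportionality is used):

```python
# Relative figure of merit following the stated scaling: grows as D**4,
# shrinks as f**3, i.e. D**4 / f**3 in arbitrary relative units.
def aberration_scale(diameter, focal_length):
    return diameter ** 4 / focal_length ** 3

# Same 100 mm aperture at f/8 versus f/2: shortening the focal length by a
# factor of 4 makes the aberration 4**3 = 64 times worse.
slow = aberration_scale(100, 800)
fast = aberration_scale(100, 200)
print(f"the f/2 lens is {fast / slow:.0f}x worse than the f/8 lens")   # -> 64x
```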
Correction
In lens systems, aberrations can be minimized using combinations of convex and concave lenses, or by using aspheric lenses or aplanatic lenses.
Lens systems with aberration correction are usually designed by numerical ray tracing. For simple designs, one can sometimes analytically calculate parameters that minimize spherical aberration. For example, in a design consisting of a single lens with spherical surfaces and a given object distance o, image distance i, and refractive index n, one can minimize spherical aberration by adjusting the radii of curvature R1 and R2 of the front and back surfaces of the lens such that
(R1 + R2) / (R1 − R2) = [2(n² − 1) / (n + 2)] · [(i + o) / (i − o)], where the signs of the radii follow the Cartesian sign convention.
For small telescopes using spherical mirrors with focal ratios shorter than , light from a distant point source (such as a star) is not all focused at the same point. Particularly, light striking the inner part of the mirror focuses farther from the mirror than light striking the outer part. As a result, the image cannot be focused as sharply as if the aberration were not present. Because of spherical aberration, telescopes with focal ratio less than are usually made with non-spherical mirrors or with correcting lenses.
Spherical aberration can be eliminated by making lenses with an aspheric surface. Descartes showed that lenses whose surfaces are well-chosen Cartesian ovals (revolved around the central symmetry axis) can perfectly image light from a point on the axis or from infinity in the direction of the axis. Such a design yields completely aberration-free focusing of light from a distant source.
In 2018, Rafael G. González-Acuña and Héctor A. Chaparro-Romo, graduate students at the National Autonomous University of Mexico and the Monterrey Institute of Technology and Higher Education in Mexico, found a closed formula for a lens surface that eliminates spherical aberration. Their equation can be applied to specify a shape for one surface of a lens, where the other surface has any given shape.
Estimation of the aberrated spot diameter
Many ways to estimate the diameter of the focused spot due to spherical aberration are based on ray optics. Ray optics, however, does not consider that light is an electromagnetic wave. Therefore, the results can be wrong due to interference effects arising from the wave nature of light.
Coddington notation
A rather simple formalism based on ray optics, which holds for thin lenses only, is the Coddington notation. In the following, n is the lens' refractive index, o is the object distance, i is the image distance, h is the distance from the optical axis at which the outermost ray enters the lens, R1 is the radius of the first lens surface, R2 is the radius of the second lens surface, and f is the lens' focal length. The distance h can be understood as half of the clear aperture.
By using the Coddington factors for shape, s, and position, p,
one can write the longitudinal spherical aberration as
If the focal length f is very much larger than the longitudinal spherical aberration LSA, then the transverse spherical aberration, TSA, which corresponds to the diameter of the focal spot, is given by
| Physical sciences | Optics | Physics |
214116 | https://en.wikipedia.org/wiki/Cubic%20inch | Cubic inch | The cubic inch (symbol in³) is a unit of volume in the Imperial and United States customary systems of units. It is the volume of a cube with each of its three dimensions (length, width, and height) being one inch long, and is equivalent to 1/231 of a US gallon.
The cubic inch and the cubic foot are used as units of volume in the United States, although the common SI units of volume, the liter, milliliter, and cubic meter, are also used, especially in manufacturing and high technology. One cubic inch is approximately 16.387 milliliters.
One cubic foot is equal to exactly 1,728 cubic inches because 12³ = 1,728.
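These relationships follow directly from the definitions (1 inch = 2.54 cm exactly; 1 US gallon = 231 cubic inches), as the short worked example below shows:

```python
# Exact unit relationships for the cubic inch.
CM_PER_INCH = 2.54                               # exact by definition
CUIN_TO_LITRES = (CM_PER_INCH ** 3) / 1000       # 0.016387064 L exactly
CUIN_PER_GALLON = 231                            # US gallon, exact
CUIN_PER_CUFT = 12 ** 3                          # 1,728, exact

print(f"1 cubic inch = {CUIN_TO_LITRES * 1000:.6f} mL")
print(f"1 cubic inch = {1 / CUIN_PER_GALLON:.6f} US gallon")
print(f"1 cubic foot = {CUIN_PER_CUFT} cubic inches "
      f"= {CUIN_PER_CUFT * CUIN_TO_LITRES:.3f} litres")
```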
Notation conventions
The following abbreviations have been used to denote the cubic inch: cubic in, cu inch, cu in, cui, cu. in.
The IEEE standard symbol is: in³
In internal combustion engines, the following abbreviations are used to denote cubic inch displacement: c.i.d., cid, CID, c.i., ci
Equivalence with other units of volume
One cubic inch (assuming an international inch) is equal to:
(1 cu ft equals 1,728 cu in)
Roughly 1 tablespoon (1.0 U.S. gallon = 256 U.S. tablespoons = 231 cubic inches)
About
About
About 0.06926407 American/English cups
About
About
About
About (1.0 gallon equals 231 cu in exactly [3 in × 7 in × 11 in])
About of crude oil (1.0 barrel equals 42 gallons, by definition, or 9,702 cu in)
Exactly (1.0 liter is about )
Exactly or cubic centimeters (which in turn is approximately )
Exactly ( is about )
Uses of the cubic inch
Electrical box volume
The cubic inch was established decades ago in the National Electrical Code as the conventional unit in North America for measuring the volume of electrical boxes. Because of the extensive export of electrical equipment to other countries, some usage of the non-SI unit can be found outside North America.
Engine displacement
North America
The cubic inch was formerly used by the automotive industry and aircraft industry in North America (through the early 1980s) to express the nominal engine displacement for the engines of new automobiles, trucks, aircraft, etc. The cubic inch is still used for this purpose in classic car collecting. The auto industry now uses liters for this purpose, while reciprocating engines used in commercial aircraft often have model numbers based on the cubic inch displacement. The fifth-generation Ford Mustang has a Boss 302 version that reflects this heritage, with a five-liter (302 cubic inch) engine similar to the original Boss. Chevrolet has also revived this usage on its 427 Corvette. Dodge has a "Challenger 392" (a conversion from its 6.4 liter V8 engine).
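For reference, the cubic-inch designations mentioned above convert to litres as follows (a simple worked example using 1 cubic inch = 0.016387064 L):

```python
# Converting classic cubic-inch displacement designations to litres.
CUIN_TO_LITRES = 0.016387064

for cid in (302, 392, 427):
    print(f"{cid} cu in = {cid * CUIN_TO_LITRES:.1f} L")
# 302 cu in -> 4.9 L, 392 cu in -> 6.4 L, 427 cu in -> 7.0 L
```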
| Physical sciences | Volume | Basics and measurement |
214209 | https://en.wikipedia.org/wiki/Strategic%20bomber | Strategic bomber | A strategic bomber is a medium- to long-range penetration bomber aircraft designed to drop large amounts of air-to-ground weaponry onto a distant target for the purposes of debilitating the enemy's capacity to wage war. Unlike tactical bombers, penetrators, fighter-bombers, and attack aircraft, which are used in air interdiction operations to attack enemy combatants and military equipment, strategic bombers are designed to fly into enemy territory to destroy strategic targets (e.g., infrastructure, logistics, military installations, factories, etc.). In addition to strategic bombing, strategic bombers can be used for tactical missions. There are currently only three countries that operate strategic bombers: the United States, Russia and China.
The modern strategic bomber role appeared after strategic bombing was widely employed, and atomic bombs were first used during World War II. Nuclear strike missions (i.e., delivering nuclear-armed missiles or bombs) can potentially be carried out by most modern fighter-bombers and strike fighters, even at intercontinental range, with the use of aerial refueling, so any nation possessing this combination of equipment and techniques theoretically has such capability. The primary delivery aircraft for a modern strategic bombing mission need not necessarily be a heavy bomber type, and any modern aircraft capable of nuclear strikes at long range is equally able to carry out tactical missions with conventional weapons. An example is France's Mirage IV, a small strategic bomber replaced in service by the ASMP-equipped Mirage 2000N fighter-bomber and Rafale multirole fighter.
History
First and Second World Wars
The first strategic bombing efforts took place during World War I (1914–18), by the Russians with their Sikorsky Ilya Muromets bomber (the first heavy four-engine aircraft), and by the Germans using Zeppelins or long-range multi-engine Gotha aircraft. Zeppelins reached England on bombing raids by 1916, forcing the British to create extensive defense systems including some of the first anti-aircraft guns which were often used with searchlights to highlight the enemy machines overhead. Late in the war, American fliers under the command of Brig. Gen. Billy Mitchell were developing multi-aircraft "mass" bombing missions behind German lines, although the Armistice ended full realization of what was being planned.
Study of strategic bombing continued in the interwar years. Many books and articles predicted a fearful prospect for any future war, paced by political fears such as those expressed by British Prime Minister Stanley Baldwin who told the House of Commons early in the 1930s that "the bomber will always get through" no matter what defensive systems were undertaken. It was widely believed by the late 1930s that strategic "terror" bombing of cities in any war would quickly result in devastating losses and might decide a conflict in a matter of days or weeks. But theory far exceeded what most air forces could actually put into the air. Germany focused on short-range tactical bombers. Britain's Royal Air Force began developing four-engine long-range bombers only in the late 1930s. The U.S. Army Air Corps (Army Air Forces as of mid-1941) was severely limited by small budgets in the late 1930s, and only barely saved the B-17 bomber that would soon be vital. The equally important B-24 first flew in 1939. Both aircraft would constitute the bulk of the bomber force for USAAF strategic bombing in Europe and Allied day bomber units more generally.
At the start of World War II, so-called "strategic" bombing was initially carried out by medium bomber aircraft which were typically twin-engined, armed with several defensive guns, but only possessed limited bomb-carrying capacity and range. Both Britain and the US were developing larger two- and four-engined designs, which began to replace or supplement the smaller aircraft by 1941–42. After American entry into the war in December 1941, the U.S. 8th Air Force began to develop a daylight bombing capacity using improved B-17 and B-24 four-engine aircraft. In order to assemble the formations to carry out these bombing campaigns, assembly ships were used to quickly form defensive combat boxes. The RAF concentrated its efforts on night bombing. But neither force was able to develop adequate bombsights or tactics to allow for often-bragged "pinpoint" accuracy. The post-war U.S. Strategic Bombing Survey studies supported the overall notion of strategic bombing, but underlined many of its shortcomings as well. Attempts to create pioneering examples of "smart bombs" resulted in the Azon ordnance, deployed in the European Theater and CBI Theater from B-24s.
Following the untimely death of the top German advocate for strategic bombing, General Walther Wever, in early June 1936, the focus of Nazi Germany's Luftwaffe bomber forces, the so-named Kampfgeschwader (bomber wings), became battlefield support of the German Army as part of the general Blitzkrieg form of warfare, carried out with both medium bombers such as the Heinkel He 111 and Schnellbombers such as the Junkers Ju 88A. Support for the pre-war Ural bomber project dwindled after Wever's death. The only German design that could closely match the Allied bomber forces' aircraft, the Heinkel He 177A, originated in early November 1937 and was deployed in its initial form in 1941–42. Hampered by an RLM requirement, not rescinded until September 1942, that it also perform medium-angle dive bombing, it proved unable to perform either function properly, and the powerplant selection and particular powerplant installation on the 30-meter-wingspan Greif led to endless problems with engine fires. The trans-Atlantic-ranged Amerika Bomber program, started in March 1942, sought to remedy the Luftwaffe's lack of a long-range bomber, but it led only to three Messerschmitt-built and two Junkers-built prototypes ever flying, and to no operational "heavy bombers" for strategic use by the Third Reich beyond the roughly one thousand He 177s built.
By the end of the Second World War in 1945, the "heavy" bomber, epitomized by the British Avro Lancaster and American Boeing B-29 Superfortress used in the Pacific Theater, showed what could be accomplished by area bombing of Japan's cities and the often small and dispersed factories within them. Under Major General Curtis LeMay, the U.S. 20th Air Force, based in the Mariana Islands, undertook low-level incendiary bombing missions, results of which were soon measured in the number of square miles destroyed. The air raids on Japan had withered the nation's ability to continue fighting, although the Japanese government delayed surrender until atomic bombs were dropped on Hiroshima and Nagasaki in August 1945.
The Cold War and its aftermath
During the Cold War, the United States and United Kingdom on one side and the Soviet Union on the other kept strategic bombers ready to take off on short notice as part of the deterrent strategy of mutually assured destruction (MAD). Most strategic bombers of the two superpowers were designed to deliver nuclear weapons. For a time, some squadrons of Boeing B-52 Stratofortress bombers were kept in the air around the clock, orbiting some distance away from their fail-safe points near the Soviet border.
The British produced three different "V bombers" for the Royal Air Force which were designed and designated to be able to deliver British-made nuclear bombs to targets in European Russia. These bombers would have been able to reach and destroy cities such as Kiev or Moscow before American strategic bombers could. While they were never used against the Soviet Union or its allies, two V bomber types, the Avro Vulcan and the Handley Page Victor, were used in the Falklands War towards the end of their operational lives.
The Soviet Union produced hundreds of unlicensed copies of the American Boeing B-29 Superfortress, which the Soviet Air Forces called the Tupolev Tu-4. The Soviets later developed the jet-powered Tupolev Tu-16 "Badger".
The People's Republic of China produced a version of Tupolev Tu-16 on license from the Soviet Union in the 1960s, which they named the Xian H-6.
During the 1960s France produced its Dassault Mirage IV nuclear-armed bomber for the French Air Force as a part of its independent nuclear strike force, the Force de Frappe, using French-made bombers and IRBMs to deliver French-made nuclear weapons. Mirage IVs served until mid-1996 in the bomber role, and to 2005 as reconnaissance aircraft.
The French Republic limited its strategic armaments to a squadron of four nuclear-powered ballistic missile submarines, with 16 SLBM tubes apiece. France also maintains an active force of supersonic fighter-bombers carrying ASMP stand-off nuclear missiles, with Mach 3 speed and a range of 500 kilometers. These missiles can be delivered by the Dassault Mirage 2000N and Rafale fighter-bombers; the Rafale is also capable of refueling others in flight using a buddy refueling pod.
Newer strategic bombers such as the Rockwell B-1 Lancer, the Tupolev Tu-160, and the Northrop Grumman B-2 Spirit designs incorporate various levels of stealth technology in an effort to avoid detection, especially by radar networks. Despite these advances earlier strategic bombers, for example the B-52 last manufactured in 1962 and the Tupolev Tu-95, remain in service and can also deploy the latest air-launched cruise missiles and other "stand-off" or precision guided weapons such as the JASSM and the JDAM.
The Russian Air Force's new Tu-160M2 strategic bombers are expected to be delivered on a regular basis over the course of 10 to 20 years. The Tu-95 and Tu-160 bombers will be periodically updated, as was done during the 1990s with the Tu-22M bombers.
Strategic bombers of the Cold War were primarily armed with nuclear weapons. During the post-1940s Indochina Wars, and also since the end of the Cold War, modern bombers originally intended for strategic use have been employed exclusively with non-nuclear, high-explosive weapons. During the Vietnam War, Operation Menu, Operation Freedom Deal, the Gulf War, military action in Afghanistan, and the 2003 invasion of Iraq, American B-52s and B-1s were mostly employed in tactical roles. During the Soviet–Afghan War of 1979–88, Soviet Air Forces Tu-22Ms carried out several mass air raids in various regions of Afghanistan.
Notable strategic bombers
Nomenclature
The bombers listed below either saw widespread use or represented a shift in long-range bomber design; the figure in parentheses is the maximum bomb load. In practice, the bomb load carried depends on factors such as the distance to the target and the individual type, size or weight of the bombs used.
Nomenclature for size classification of aircraft types used in strategic bombing varies, particularly since the time of World War II due to sequential technological advancements and changes in aerial warfare strategy and tactics. The B-29, for example was a benchmark aircraft of the heavy bomber type at end of World War II due to its size, range and load carrying ability; as the Cold War began, it became an intercontinental range strategic bomber with the development of new techniques, such as aerial refueling (which also greatly extended the range of other medium- to long-range bombers, fighter-bombers and attack aircraft).
During the 1950s the U.S. Strategic Air Command also briefly brought back the outdated term "medium bomber" to distinguish its Boeing B-47 Stratojets from the somewhat larger contemporary Boeing B-52 Stratofortress "heavy bombers" in bombardment wings; older B-29 and B-50 heavy bombers were also redesignated as "medium" during this period. SAC's nomenclature here was purely semantic and bureaucratic, however, as both the B-47 and B-52 strategic bombers were much larger and had far greater performance and load-carrying ability than any of the World War II-era heavy or medium bombers.
Other aircraft such as the twin-jet US FB-111, Douglas A-3 Skywarrior and France's Dassault Mirage IV had nominal warloads of less than , and were significantly smaller in size and gross weight compared with their strategic bomber contemporaries, based on which they might be classified as medium bombers. In the nuclear strike role, France would replace its Mirage IVs beginning in the late 1980s with the even smaller, single-engine Mirage 2000N fighter-bomber, a further example of advancing technologies and changing tactics in military aviation and aircraft design. France's newer twin-engine Dassault Rafale multirole fighter also has nuclear strike capability.
World War I
Caproni Ca.1
Caproni Ca.3 ()
Gotha G.IV ()
Zeppelin Staaken R.VI ()
Zeppelin (about )
Handley Page Type O ()
Handley Page V/1500 ()
Sikorsky Ilya Muromets ()
Interwar/World War II
Boeing YB-9 (prototype strategic bomber, inspiration for the B-17)
Martin B-10 (succeeded by the B-17 and B-24, and therefore theoretically the first strategic bomber of the USAAF of its time, despite being a medium bomber)
Boeing B-17 Flying Fortress () (theoretical maximum: )
Consolidated B-24 Liberator ()
Boeing B-29 Superfortress () (maximum of (2 Grand Slams))
Consolidated B-32 Dominator ()
Handley Page Halifax ()
Avro Lancaster ()
Short Stirling ()
Farman F.220 ()
Heinkel He 177 ()
Petlyakov Pe-8 ()
Piaggio P.108 ()
Cold War
Weapons loads can include nuclear-armed missiles as well as aerial bombs
Reciprocating/Turbine engine
North American AJ Savage ()
Lockheed P-2 Neptune – small number converted as carrier-launched nuclear-armed bombers which would have to ditch/recover at land bases
Boeing B-50 Superfortress ()
Convair B-36 Peacemaker ()
Tupolev Tu-4 – reverse-engineered version of B-29 Superfortress
Tupolev Tu-95 ()
Avro Lincoln ()
Jet engine
North American B-45 Tornado ()
Boeing B-47 Stratojet ()
Douglas A-3 Skywarrior – nuclear-armed, carrier-based
Boeing B-52 Stratofortress ()
Myasishchev M-4 ()
Tupolev Tu-16 ()
Vickers Valiant ()
Avro Vulcan ()
Handley Page Victor ()
Xian H-6 ()
Supersonic
Convair B-58 Hustler ()
General Dynamics FB-111A – strategic bomber version of the F-111 swing wing strike aircraft
North American A-5 Vigilante – nuclear-armed, carrier-based (only deployed for a brief period in strategic nuclear strike role for which it was originally designed before transitioning to reconnaissance role)
Rockwell B-1 Lancer ( – use of external hardpoints restricted by START I)
Dassault Mirage IV ()
Tupolev Tu-22M Backfire ()
Tupolev Tu-160 Blackjack ()
Tupolev Tu-22 Blinder ()
Dassault Mirage 2000N
others designed and built which did not enter operational service:
North American XB-70 Valkyrie
Myasishchev M-50 Bounder
Sukhoi T-4 Sotka
BAC TSR-2
Post Cold War
Boeing B-52 Stratofortress ()
Northrop Grumman B-2 Spirit ()
Rockwell B-1 Lancer ()
Tupolev Tu-22M ()
Tupolev Tu-95 ()
Tupolev Tu-160 ()
Xian H-6 ()
Future
Xian H-20. An under-development stealth bomber by China. Planned to be deployed in 2025.
Northrop Grumman B-21 Raider. An under-development stealth bomber by the United States, with a goal of supplanting the current Rockwell B-1 Lancer and Northrop Grumman B-2 Spirit.
Tupolev PAK DA. An under-development stealth bomber by Russia, with a goal of supplanting a portion or all of the current Tupolev Tu-95. Planned to be deployed in 2027.
| Technology | Military aviation | null |
214241 | https://en.wikipedia.org/wiki/Terpenoid | Terpenoid | The terpenoids, also known as isoprenoids, are a class of naturally occurring organic chemicals derived from the 5-carbon compound isoprene and its derivatives called terpenes, diterpenes, etc. While sometimes used interchangeably with "terpenes", terpenoids contain additional functional groups, usually containing oxygen. When combined with the hydrocarbon terpenes, terpenoids comprise about 80,000 compounds. They are the largest class of plant secondary metabolites, representing about 60% of known natural products. Many terpenoids have substantial pharmacological bioactivity and are therefore of interest to medicinal chemists.
Plant terpenoids are used for their aromatic qualities and play a role in traditional herbal remedies. Terpenoids contribute to the scent of eucalyptus, the flavors of cinnamon, cloves, and ginger, the yellow color in sunflowers, and the red color in tomatoes. Well-known terpenoids include citral, menthol, camphor, salvinorin A in the plant Salvia divinorum, ginkgolide and bilobalide found in Ginkgo biloba and the cannabinoids found in cannabis. The provitamin beta carotene is a terpene derivative called a carotenoid.
The steroids and sterols in animals are biologically produced from terpenoid precursors. Sometimes terpenoids are added to proteins, e.g., to enhance their attachment to the cell membrane; this is known as isoprenylation. Terpenoids play a role in plant defense as prophylaxis against pathogens and attractants for the predators of herbivores.
Structure and classification
Terpenoids are modified terpenes, wherein methyl groups have been moved or removed, or oxygen atoms added. Some authors use the term "terpene" more broadly, to include the terpenoids. Just like terpenes, the terpenoids can be classified according to the number of isoprene units that comprise the parent terpene:
Terpenoids can also be classified according to the type and number of cyclic structures they contain: linear, acyclic, monocyclic, bicyclic, tricyclic, tetracyclic, pentacyclic, or macrocyclic. The Salkowski test can be used to identify the presence of terpenoids.
Biosynthesis
Terpenoids, at least those containing an alcohol functional group, often arise by hydrolysis of carbocationic intermediates produced from geranyl pyrophosphate. Analogously hydrolysis of intermediates from farnesyl pyrophosphate gives sesquiterpenoids, and hydrolysis of intermediates from geranylgeranyl pyrophosphate gives diterpenoids, etc.
Impact on aerosols
In air, terpenoids are converted into various species, such as aldehydes, hydroperoxides, organic nitrates, and epoxides by short-lived free radicals (like the hydroxyl radical) and to a lesser extent by ozone. These new species can dissolve into water droplets and contribute to aerosol and haze formation. Secondary organic aerosols formed from this pathway may have atmospheric impacts.
As an example, the Blue Ridge Mountains in the U.S. and the Blue Mountains of New South Wales in Australia are noted for having a bluish color when seen from a distance. Trees put the "blue" in Blue Ridge through the terpenoids they release into the atmosphere.
| Physical sciences | Terpenes and terpenoids | Chemistry |
214242 | https://en.wikipedia.org/wiki/Terpene | Terpene | Terpenes () are a class of natural products consisting of compounds with the formula (C5H8)n for n ≥ 2. Terpenes are major biosynthetic building blocks. Comprising more than 30,000 compounds, these unsaturated hydrocarbons are produced predominantly by plants, particularly conifers. In plants, terpenes and terpenoids are important mediators of ecological interactions, while some insects use some terpenes as a form of defense. Other functions of terpenoids include cell growth modulation and plant elongation, light harvesting and photoprotection, and membrane permeability and fluidity control.
Terpenes are classified by the number of carbons: monoterpenes (C10), sesquiterpenes (C15), diterpenes (C20), as examples. The terpene alpha-pinene is a major component of the common solvent, turpentine.
The one terpene that has major applications is natural rubber (i.e., polyisoprene). The possibility that other terpenes could be used as precursors to produce synthetic polymers has been investigated. Many terpenes have been shown to have pharmacological effects. Terpenes are also components of some traditional medicines, such as aromatherapy, and as active ingredients of pesticides in agriculture.
History and terminology
The term terpene was coined in 1866 by the German chemist August Kekulé to denote all hydrocarbons having the empirical formula C10H16, of which camphene was one. Previously, many hydrocarbons having the empirical formula C10H16 had been called "camphene", but many other hydrocarbons of the same composition had different names. Kekulé coined the term "terpene" in order to reduce the confusion. The name "terpene" is a shortened form of "terpentine", an obsolete spelling of "turpentine".
Although sometimes used interchangeably with "terpenes", terpenoids (or isoprenoids) are modified terpenes that contain additional functional groups, usually oxygen-containing. The terms terpenes and terpenoids are often used interchangeably, however. Furthermore, terpenes are produced from terpenoids and many terpenoids are produced from terpenes. Both have strong and often pleasant odors, which may protect their hosts or attract pollinators. The number of terpenes and terpenoids is estimated at 55,000 chemical entities.
The 1939 Nobel Prize in Chemistry was awarded to Leopold Ružička "for his work on polymethylenes and higher terpenes", "including the first chemical synthesis of male sex hormones."
Biological function
Terpenes are major biosynthetic building blocks. Steroids, for example, are derivatives of the triterpene squalene. Terpenes and terpenoids are also the primary constituents of the essential oils of many types of plants and flowers. In plants, terpenes and terpenoids are important mediators of ecological interactions. For example, they play a role in plant defense against herbivory, disease resistance, attraction of mutualists such as pollinators, as well as potentially plant-plant communication. They appear to play roles as antifeedants. Other functions of terpenoids include cell growth modulation and plant elongation, light harvesting and photoprotection, and membrane permeability and fluidity control.
Higher amounts of terpenes are released by trees in warmer weather, where they may function as a natural mechanism of cloud seeding. The clouds reflect sunlight, allowing the forest to regulate its temperature.
Some insects use some terpenes as a form of defense. For example, termites of the subfamily Nasutitermitinae ward off predatory insects through the use of a specialized mechanism called a fontanellar gun, which ejects a resinous mixture of terpenes.
Applications
The one terpene that has major applications is natural rubber (i.e., polyisoprene). The possibility that other terpenes could be used as precursors to produce synthetic polymers has been investigated as an alternative to the use of petroleum-based feedstocks. However, few of these applications have been commercialized. Many other terpenes, however, have smaller scale commercial and industrial applications. For example, turpentine, a mixture of terpenes (e.g., pinene), obtained from the distillation of pine tree resin, is used as an organic solvent and as a chemical feedstock (mainly for the production of other terpenoids). Rosin, another by-product of conifer tree resin, is widely used as an ingredient in a variety of industrial products, such as inks, varnishes and adhesives. Rosin is also used by violinists (and players of similar bowed instruments) to increase friction on the bow hair, by ballet dancers on the soles of their shoes to maintain traction on the floor, by gymnasts to keep their grips while performing, and by baseball pitchers to improve their control of the baseball. Terpenes are widely used as fragrances and flavors in consumer products such as perfumes, cosmetics and cleaning products, as well as food and drink products. For example, the aroma and flavor of hops comes, in part, from sesquiterpenes (mainly α-humulene and β-caryophyllene), which affect beer quality. Some form hydroperoxides that are valued as catalysts in the production of polymers.
Many terpenes have been shown to have pharmacological effects, although most studies are from laboratory research, and clinical research in humans is preliminary. Terpenes are also components of some traditional medicines, such as aromatherapy.
Reflecting their defensive role in plants, terpenes are used as active ingredients of pesticides in agriculture.
Physical and chemical properties
Terpenes are colorless, although impure samples are often yellow. Boiling points scale with molecular size: monoterpenes, sesquiterpenes, and diterpenes boil at roughly 110, 160, and 220 °C respectively. Being highly non-polar, they are insoluble in water. Being hydrocarbons, they are highly flammable and have low specific gravity (they float on water). They are tactilely light oils, considerably less viscous than familiar vegetable oils such as corn oil (28 cP), with viscosities ranging from 1 cP (similar to water) to 6 cP. Terpenes are local irritants and can cause gastrointestinal disturbances if ingested.
Terpenoids (mono-, sesqui-, di-, etc.) have similar physical properties but tend to be more polar and hence slightly more soluble in water and somewhat less volatile than their terpene analogues. Highly polar derivatives of terpenoids are the glycosides, which are linked to sugars. These are water-soluble solids.
Biosynthesis
Isoprene as the building block
Conceptually derived from isoprenes, the structures and formulas of terpenes follow the biogenetic isoprene rule or the C5 rule, as described in 1953 by Leopold Ružička and colleagues. The C5 isoprene units are provided in the form of dimethylallyl pyrophosphate (DMAPP) and isopentenyl pyrophosphate (IPP). DMAPP and IPP are structural isomers to each other. This pair of building blocks are produced by two distinct metabolic pathways: the mevalonate (MVA) pathway and the non-mevalonate (MEP) pathway. These two pathways are mutually exclusive in most organisms, except for some bacteria and land plants. In general, most archaea and eukaryotes use the MVA pathway, while bacteria mostly have the MEP pathway. IPP and DMAPP are final products of both MVA and MEP pathways and the relative abundance of these two isoprene units is enzymatically regulated in host organisms.
Mevalonate pathway
This pathway condenses three molecules of acetyl-CoA.
The mevalonate (MVA) pathway is distributed in all three domains of life; archaea, bacteria and eukaryotes. The MVA pathway is universally distributed in archaea and non-photosynthetic eukaryotes, while the pathway is sparse in bacteria. In photosynthetic eukaryotes, some species possess the MVA pathway, while others have the MEP pathway or both MVA and MEP pathways. This is due to the acquisition of the MEP pathway by a common ancestor of Archaeplastida (algae + land plants) through the endosymbiosis of ancestral cyanobacteria that possessed the MEP pathway. The MVA and MEP pathways were selectively lost in individual photosynthetic lineages.
Also, the archaeal MVA pathway is not completely homologous to the eukaryotic MVA pathway. Instead, the eukaryotic MVA pathway is closer to the bacterial MVA pathway.
Non-mevalonate pathway
The non-mevalonate pathway or the 2-C-methyl-D-erythritol 4-phosphate (MEP) pathway starts with pyruvate and glyceraldehyde 3-phosphate (G3P) as the carbon source.
C5 IPP and C5 DMAPP are the end-products in either pathway and are the precursors of terpenoids with various carbon numbers (typically C5 to C40), side chains of (bacterio)chlorophylls, hemes and quinones. Synthesis of all higher terpenoids proceeds via formation of geranyl pyrophosphate (GPP), farnesyl pyrophosphate (FPP), and geranylgeranyl pyrophosphate (GGPP).
Geranyl pyrophosphate phase and beyond
In both MVA and MEP pathways, IPP is isomerized to DMAPP by the enzyme isopentenyl pyrophosphate isomerase. IPP and DMAPP condense to give geranyl pyrophosphate, the precursor to monoterpenes and monoterpenoids.
Geranyl pyrophosphate is also converted to farnesyl pyrophosphate and geranylgeranyl pyrophosphate, respectively the C15 and C20 precursors to sesquiterpenes and diterpenes (as well as sesquiterpenoids and diterpenoids). Biosynthesis is mediated by terpene synthase.
Terpenes to terpenoids
The genomes of many plant species contain genes that encode terpenoid synthase enzymes imparting terpenes with their basic structure, and cytochrome P450s that modify this basic structure.
Structure
Terpenes can be visualized as the result of linking isoprene (C5H8) units "head to tail" to form chains and rings. A few terpenes are linked “tail to tail”, and larger branched terpenes may be linked “tail to mid”.
Formula
Strictly speaking all monoterpenes have the same chemical formula C10H16. Similarly all sesquiterpenes and diterpenes have formulas of C15H24 and C20H32 respectively. The structural diversity of mono-, sesqui-, and diterpenes is a consequence of isomerism.
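A small sketch of this pattern (class names as used elsewhere in this article; a terpene built from n isoprene units has the formula C5nH8n):

```python
# Terpene class names and molecular formulas by number of isoprene (C5H8) units.
CLASS_NAMES = {1: "hemiterpene", 2: "monoterpene", 3: "sesquiterpene",
               4: "diterpene", 5: "sesterterpene", 6: "triterpene",
               7: "sesquarterpene", 8: "tetraterpene"}

def terpene_formula(isoprene_units):
    """Molecular formula C(5n)H(8n) for a terpene of n isoprene units."""
    return f"C{5 * isoprene_units}H{8 * isoprene_units}"

for n in (2, 3, 4, 6, 8):
    print(f"{CLASS_NAMES[n]:>14}: {terpene_formula(n)}")
# monoterpene: C10H16, sesquiterpene: C15H24, diterpene: C20H32,
# triterpene: C30H48, tetraterpene: C40H64
```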
Chirality
Terpenes and terpenoids are usually chiral. Chiral compounds can exist as non-superposable mirror images, which exhibit distinct physical properties such as odor or toxicity.
Unsaturation
Most terpenes and terpenoids feature C=C groups, i.e. they exhibit unsaturation. Since they carry no functional groups aside from their unsaturation, terpenes are structurally distinctive. The unsaturation is associated with di- and trisubstituted alkenes. Di- and trisubstituted alkenes resist polymerization (low ceiling temperatures) but are susceptible to acid-induced carbocation formation.
Classification
Terpenes may be classified by the number of isoprene units in the molecule; a prefix in the name indicates the number of isoprene pairs needed to assemble the molecule. Commonly, terpenes contain 2, 3, 4 or 6 isoprene units; the tetraterpenes (8 isoprene units) form a separate class of compounds called carotenoids; the others are rare.
The basic unit isoprene itself is a hemiterpene. It may form oxygen-containing derivatives such as prenol and isovaleric acid analogous to terpenoids.
Monoterpenes consist of two isoprene units and have the molecular formula C10H16. Examples of monoterpenes and monoterpenoids include geraniol, terpineol (present in lilacs), limonene (present in citrus fruits), myrcene (present in hops), linalool (present in lavender), hinokitiol (present in cypress trees) or pinene (present in pine trees). Iridoids derive from monoterpenes. Examples of iridoids include aucubin and catalpol.
Sesquiterpenes consist of three isoprene units and have the molecular formula C15H24. Examples of sesquiterpenes and sesquiterpenoids include humulene, farnesenes, farnesol, geosmin. (The sesqui- prefix means one and a half.)
Diterpenes are composed of four isoprene units and have the molecular formula C20H32. They derive from geranylgeranyl pyrophosphate. Examples of diterpenes and diterpenoids are cafestol, kahweol, cembrene and taxadiene (precursor of taxol). Diterpenes also form the basis for biologically important compounds such as retinol, retinal, and phytol.
Sesterterpenes, terpenes having 25 carbons and five isoprene units, are rare relative to the other sizes. (The sester- prefix means two and a half.) An example of a sesterterpenoid is geranylfarnesol.
Triterpenes consist of six isoprene units and have the molecular formula C30H48. The linear triterpene squalene, the major constituent of shark liver oil, is derived from the reductive coupling of two molecules of farnesyl pyrophosphate. Squalene is then processed biosynthetically to generate either lanosterol or cycloartenol, the structural precursors to all the steroids.
Sesquarterpenes are composed of seven isoprene units and have the molecular formula C35H56. Sesquarterpenes are typically microbial in their origin. Examples of sesquarterpenoids are ferrugicadiol and tetraprenylcurcumene.
Tetraterpenes contain eight isoprene units and have the molecular formula C40H64. Biologically important tetraterpenoids include the acyclic lycopene, the monocyclic gamma-carotene, and the bicyclic alpha- and beta-carotenes.
Polyterpenes consist of long chains of many isoprene units. Natural rubber consists of polyisoprene in which the double bonds are cis. Some plants produce a polyisoprene with trans double bonds, known as gutta-percha.
Norisoprenoids, characterized by the shortening of a chain or ring through the removal of a methylene group, or by the substitution of one or more methyl side chains by hydrogen atoms. These include the C13-norisoprenoid 3-oxo-α-ionol, present in Muscat of Alexandria leaves, and 7,8-dihydroionone derivatives found in Shiraz leaves (both grapes of the species Vitis vinifera) or in wine (responsible for some of the spice notes in Chardonnay); such norisoprenoids can be produced by fungal peroxidases or glycosidases.
Industrial syntheses
While terpenes and terpenoids occur widely, their extraction from natural sources is often problematic. Consequently, they are produced by chemical synthesis, usually from petrochemicals. In one route, acetone and acetylene are condensed to give 2-Methylbut-3-yn-2-ol, which is extended with acetoacetic ester to give geranyl alcohol. Others are prepared from those terpenes and terpenoids that are readily isolated in quantity, say from the paper and tall oil industries. For example, α-pinene, which is readily obtainable from natural sources, is converted to citronellal and camphor. Citronellal is also converted to rose oxide and menthol.
| Physical sciences | Terpenes and terpenoids | Chemistry |
214294 | https://en.wikipedia.org/wiki/Hepatitis%20D | Hepatitis D | Hepatitis D is a type of viral hepatitis caused by the hepatitis delta virus (HDV). HDV is one of five known hepatitis viruses: A, B, C, D, and E. HDV is considered to be a satellite (a type of subviral agent) because it can propagate only in the presence of the hepatitis B virus (HBV). Transmission of HDV can occur either via simultaneous infection with HBV (coinfection) or superimposed on chronic hepatitis B or hepatitis B carrier state (superinfection).
HDV infecting a person with chronic hepatitis B (superinfection) is considered the most serious type of viral hepatitis due to its severity of complications. These complications include a greater likelihood of experiencing liver failure in acute infections and a rapid progression to liver cirrhosis, with an increased risk of developing liver cancer in chronic infections. In combination with hepatitis B virus, hepatitis D has the highest fatality rate of all the hepatitis infections, at 20%. A recent estimate from 2020 suggests that currently 48 million people are infected with this virus.
Virology
Structure and genome
The hepatitis delta viruses, or HDV, are eight species of negative-sense single-stranded RNA viruses (or virus-like particles) classified together as the genus Deltavirus, within the realm Ribozyviria. The HDV virion is a small, spherical, enveloped particle with a 36 nm diameter; its viral envelope contains host phospholipids, as well as three proteins taken from the hepatitis B virus—the large, medium, and small hepatitis B surface antigens. This assembly surrounds an inner ribonucleoprotein (RNP) particle, which contains the genome surrounded by about 200 molecules of hepatitis D antigen (HDAg) for each genome. The central region of HDAg has been shown to bind RNA. Several interactions are also mediated by a coiled-coil region at the N terminus of HDAg.
The HDV genome is negative sense, single-stranded, closed circular RNA; with a genome of approximately 1700 nucleotides, HDV is the smallest "virus" known to infect animals. It has been proposed that HDV may have originated from a class of plant pathogens called viroids, which are much smaller than viruses. Its genome is unique among animal viruses because of its high GC nucleotide content. Its nucleotide sequence is about 70% self-complementary, allowing the genome to form a partially double-stranded, rod-like RNA structure. HDV strains are highly divergent; fusions of different strains exist, and sequences have been deposited in public databases using different start sites for the circular viral RNA. This resulted in considerable confusion over the molecular classification of the virus, a situation that was recently resolved with the adoption of a proposed reference genome and a uniform classification system.
Life cycle
Like hepatitis B, HDV gains entry into liver cells via the sodium taurocholate cotransporting polypeptide (NTCP) bile transporter. HDV recognizes its receptor via the N-terminal domain of the large hepatitis B surface antigen, HBsAg. Mapping by mutagenesis of this domain has shown that amino acid residues 9–15 make up the receptor-binding site. After entering the hepatocyte, the virus is uncoated and the nucleocapsid translocated to the nucleus due to a signal in HDAg. Since the HDV genome does not code for an RNA polymerase to replicate the virus' genome, the virus makes use of the host cellular RNA polymerases. HDV was initially thought to use just RNA polymerase II, but RNA polymerases I and III have since also been shown to be involved in HDV replication.
Normally RNA polymerase II utilizes DNA as a template and produces mRNA. Consequently, if HDV indeed utilizes RNA polymerase II during replication, it would be the only known animal pathogen capable of using a DNA-dependent polymerase as an RNA-dependent polymerase.
The RNA polymerases treat the RNA genome as double-stranded DNA due to its folded, rod-like structure. Three forms of RNA are made: circular genomic RNA, circular complementary antigenomic RNA, and a linear polyadenylated antigenomic RNA, which is the mRNA containing the open reading frame for the HDAg. Synthesis of antigenomic RNA occurs in the nucleolus, mediated by RNA polymerase I, whereas synthesis of genomic RNA takes place in the nucleoplasm, mediated by RNA polymerase II. HDV RNA is synthesized first as linear RNA that contains many copies of the genome. The genomic and antigenomic RNA contain a sequence of 85 nucleotides, the hepatitis delta virus ribozyme, which self-cleaves the linear RNA into monomers. These monomers are then ligated to form circular RNA.
Delta antigens
A significant difference between viroids and HDV is that, while viroids produce no proteins, HDV is known to produce one protein, namely HDAg. It comes in two forms: a 27 kDa large-HDAg and a 24 kDa small-HDAg. The N-termini of the two forms are identical; they differ in that the large-HDAg carries 19 additional amino acids at its C-terminus. Both isoforms are produced from the same reading frame, which contains a UAG stop codon at codon 196 that normally yields only the small-HDAg. However, editing by the cellular enzyme adenosine deaminase acting on RNA (ADAR) changes the stop codon to UGG, allowing the large-HDAg to be produced. Despite having 90% identical sequences, these two proteins play diverging roles during the course of an infection. HDAg-S is produced in the early stages of an infection and enters the nucleus and supports viral replication. HDAg-L, in contrast, is produced during the later stages of an infection, acts as an inhibitor of viral replication, and is required for assembly of viral particles. Thus RNA editing by the cellular enzymes is critical to the virus' life cycle because it regulates the balance between viral replication and virion assembly.
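The effect of this single editing event can be illustrated with a toy translation sketch (the mRNA sequence and the partial codon table below are hypothetical, chosen only to show the read-through; they are not the real HDAg sequence):

```python
# Toy illustration of ADAR-style editing: changing a UAG stop codon to UGG
# (tryptophan) lets the ribosome read through and produce a C-terminally
# extended protein, analogous to small-HDAg versus large-HDAg.
CODON_TABLE = {   # only the codons appearing in the toy mRNA below
    "AUG": "M", "GGC": "G", "AAA": "K", "UAG": "*",   # * = stop
    "UGG": "W", "CUU": "L", "GAA": "E",
}

def translate(mrna):
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = CODON_TABLE[mrna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

toy_mrna = "AUGGGCAAAUAGCUUGAA"              # hypothetical ORF ending in a UAG stop
edited   = toy_mrna.replace("UAG", "UGG")    # the edited stop codon now reads Trp

print("unedited:", translate(toy_mrna))      # -> MGK     (short form)
print("edited:  ", translate(edited))        # -> MGKWLE  (read-through, longer form)
```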
Antigenic loop infectivity
The HDV envelope protein has three of the HBV surface proteins anchored to it. The S region of the genome is most commonly expressed and its main function is to assemble subviral particles. HDV antigen proteins combine with the viral genome to form a ribonucleoprotein (RNP) which when enveloped with the subviral particles can form viral-like particles that are almost identical to mature HDV, but they are not infectious. Researchers had concluded that the determinant of infectivity of HDV was within the N-terminal pre-S1 domain of the large protein (L). It was found to be a mediator in binding to the cellular receptor. Researchers Georges Abou Jaoudé and Camille Sureau published an article in 2005 that studied the role of the antigenic loop, found in HDV envelope proteins, in the infectivity of the virus. The antigenic loop, like the N-terminal pre-S1 domain of the large protein, is exposed at the virion surface. Jaoudé and Sureau's study provided evidence that the antigenic loop may be an important factor in HDV entry into the host cell and by mutating parts of the antigenic loop, the infectivity of HDV may be minimized.
Transmission
The routes of transmission of hepatitis D are similar to those for hepatitis B. Infection is largely restricted to persons at high risk of hepatitis B infection, particularly injecting drug users and persons receiving clotting factor concentrates. Worldwide more than 15 million people are co-infected. HDV is rare in most developed countries, and is mostly associated with intravenous drug use. However, HDV is much more common in the immediate Mediterranean region, sub-Saharan Africa, the Middle East, and the northern part of South America. In all, about 20 million people may be infected with HDV.
People at risk
As noted above, patients with hepatitis B are at risk of hepatitis D infection. The risk of hepatitis D infection is higher in people who inject drugs, people with hemophilia, hemodialysis patients, and those who have sexual contact with infected persons.
Prevention
Vaccination against hepatitis B protects against hepatitis D viral infection as hepatitis D requires hepatitis B viral infection to be present in order to infect and replicate in people. Universal vaccination against hepatitis B virus is recommended by the World Health Organization. The hepatitis B vaccine is routinely given soon after birth (usually within 24 hours) to protect against hepatitis B and D viral infection.
Latex or polyurethane condoms have been shown to prevent the transmission of hepatitis B, and most likely hepatitis D viral infection.
Women who are pregnant or trying to become pregnant should undergo testing for HBV to determine whether they carry the virus, which allows prevention strategies to be implemented during the birth of the child. The CDC recommends that all women who are pregnant be tested for hepatitis B viral infection and that all infants of women with HBV infection be given hepatitis B immune globulin (HBIG) and the hepatitis B vaccine within 12 hours of birth to prevent transmission of the virus from mother to child.
Those who get tattoos or body piercings should do so using sterile equipment to prevent the transmission of hepatitis B and D via infected bodily fluids. Hepatitis B and D can also be transmitted from contaminated needles, so those who inject drugs should seek help to stop drug use or use sterile needles and avoid sharing needles with others. Those with hepatitis B or D should also not share razors or other personal care items which may have been contaminated by potentially infectious bodily fluids.
Diagnosis
Screening for hepatitis D requires testing for anti-HDV antibodies, which indicate past exposure to the virus or current infection. If anti-HDV antibodies are present, then active HDV infection is confirmed by measuring hepatitis D RNA levels. Testing for HDV is only indicated in those who are hepatitis B surface antigen positive (those who have had previous or active infection with hepatitis B), as HDV requires hepatitis B viral infection to infect people. Non-invasive measures of liver fibrosis, such as the biomarker-based FibroTest, or non-invasive liver imaging such as transient elastography (also known as the FibroScan), have not been validated as quantitative measures of liver fibrosis in those with chronic hepatitis D infection. In those in whom liver fibrosis or cirrhosis is suspected, a liver biopsy is usually needed.
Treatment
Current established treatments for chronic hepatitis D include conventional or pegylated interferon alpha therapy. Evidence suggests that pegylated interferon alpha is effective in reducing the viral load and the effect of the disease while the drug is given, but the benefit generally stops if the drug is discontinued. The efficacy of this treatment does not usually exceed about 20%, and late relapse after therapy has been reported.
In May 2020, the Committee for Medicinal Products for Human Use of the European Medicines Agency approved the antiviral Hepcludex (bulevirtide) to treat hepatitis D. Bulevirtide binds and inactivates the sodium/bile acid cotransporter, blocking hepatitis D virus (as well as hepatitis B virus) from entering hepatocytes. Bulevirtide may be given with pegylated interferon alpha as the two are thought to have a synergistic effect, leading to greater treatment response rates.
In patients with HDV-related compensated cirrhosis and clinically significant portal hypertension, treatment with bulevirtide was safe and well tolerated and led to a significant improvement in biochemical variables and liver function parameters.
Other treatments for hepatitis D which are currently under development include pegylated interferon lambda (λ), which binds to receptors on the hepatocyte surface leading to an intracellular signaling cascade via the JAK-STAT signaling pathway and activation of anti-viral cell mediated immunity. The prenylation inhibitor lonafarnib prevents hepatitis D viral particle assembly by inhibiting the farnesylation of the L-HDAg. REP2139-Ca is a nucleic acid polymer that prevents the release of hepatitis B surface antigen (which is required for assembly of hepatitis D viral particles).
Prognosis
Superinfections, in which hepatitis D viral infection occurs in someone who has chronic hepatitis B (as opposed to co-infection, in which a person is infected with hepatitis B and D simultaneously), are more likely to progress to chronic hepatitis D and are associated with a worse prognosis. 90% of cases of chronic hepatitis D infection are thought to be due to superinfection in those already with hepatitis B. Hepatitis B and D co-infection is likely to lead to acute hepatitis, but is usually self limited with regards to the hepatitis D infection. Chronic hepatitis B and D is associated with a worse prognosis than chronic hepatitis B alone. Infection with both viruses is characterized by a poor prognosis with 75% of those with chronic hepatitis D developing liver cirrhosis within 15 years and a much higher risk of developing liver cancer. Persistent HDV viremia is the most important risk factor for disease progression in those with co-infection or superinfection. Other factors that are responsible for a poor prognosis in chronic hepatitis D include male sex, older age at time of infection, alcohol use, diabetes, obesity and immunodeficiency.
Epidemiology
HDV is prevalent worldwide. However, the prevalence is decreasing in many higher income countries due to hepatitis B vaccination programs (although rates remain high in some groups such as those who inject drugs or immigrants from HDV endemic regions). Infection with HDV is a major medical scourge in low income regions of the globe in which HBV prevalence remains high. Currently the Amazon basin and low income regions of Asia and Africa have high rates of HDV, owing to concurrently high rates of HBV. Globally, five percent of those with chronic hepatitis B infection also have hepatitis D and 12.5% of people with HIV are also co-infected with hepatitis D.
History
Hepatitis D virus was first reported in 1977 as a nuclear antigen in patients infected with HBV who had severe liver disease. This nuclear antigen was then thought to be a hepatitis B antigen and was called the delta antigen. Subsequent experiments in chimpanzees showed that the hepatitis delta antigen (HDAg) was a structural part of a pathogen that required HBV infection to produce a complete viral particle. The entire genome was cloned and sequenced in 1986. It was subsequently placed in its own genus: Deltavirus.
Lábrea fever
Lábrea fever is a lethal tropical infection discovered in the 1950s in the city of Lábrea, in the Brazilian Amazon basin, where it occurs mostly in the area south of the Amazon River, in the states of Acre, Amazonas, and Rondônia. The disease has also been diagnosed in Colombia and Peru. It is now known to be a coinfection or superinfection of hepatitis B (HBV) with hepatitis D.
Lábrea fever has a sudden onset, with jaundice, anorexia (lack of appetite), hematemesis (vomiting of blood), headache, fever and severe prostration. Death occurs by acute liver failure (ALF). In the last phase, neurological symptoms such as agitation, delirium, convulsions and hemorrhagic coma commonly appear. These symptoms arise from a fulminant hepatitis which may kill in less than a week, and which characteristically affects children and young adults, and more males than females. It is accompanied also by an encephalitis in many cases. The disease is highly lethal: in a study carried out in 1986 at Boca do Acre, also in the Amazon, 39 patients out of 44 died in the acute phase of the disease. Survivors may develop chronic disease.
The discovery of the association between the delta virus and HBV was mainly the work of Gilberta Bensabath, of the Instituto Evandro Chagas in Belém, state of Pará, and her collaborators.
Infected patients show extensive destruction of liver tissue, with steatosis of a particular type (microsteatosis, characterized by small fat droplets inside the cells), and infiltration of large numbers of inflammatory cells called morula cells, composed mainly of macrophages containing delta virus antigens.
In the 1987 Boca do Acre study, scientists did an epidemiological survey and reported delta virus infection in 24% of asymptomatic HBV carriers, 29% of acute nonfulminant hepatitis B cases, 74% of fulminant hepatitis B cases, and 100% of chronic hepatitis B cases. The delta virus seems to be endemic in the Amazon region.
Evolution
Three genotypes (I–III) were originally described. Genotype I has been isolated in Europe, North America, Africa and parts of Asia. Genotype II has been found in Japan, Taiwan, and Yakutia (Russia). Genotype III has been found exclusively in South America (Peru, Colombia, and Venezuela). Some genomes from Taiwan and the Okinawa islands have been difficult to type but have been placed in genotype 2. However, it is now known that there are at least 8 genotypes of this virus (HDV-1 to HDV-8). Phylogenetic studies suggest an African origin for this pathogen.
An analysis of 36 strains of genotype 3 estimated that the most recent common ancestor of these strains originated around 1930. This genotype spread exponentially from the early 1950s to the 1970s in South America. The substitution rate was estimated to be 1.07×10−3 substitutions per site per year. Another study found an overall evolution rate of 3.18×10−3 substitutions per site per year. The mutation rate varied with position: the hypervariable region evolved faster (4.55×10−3 substitutions per site per year) than the hepatitis delta antigen coding region (2.60×10−3 substitutions per site per year) and the autocatalytic region (1.11×10−3 substitutions per site per year). A third study suggested a mutation rate between 9.5×10−4 and 1.2×10−3 substitutions per site per year.
Genotypes, with the exception of type 1, appear to be restricted to certain geographical areas: HDV-2 (previously HDV-IIa) is found in Japan, Taiwan and Yakutia; HDV-4 (previously HDV-IIb) in Japan and Taiwan; HDV-3 in the Amazonian region; HDV-5, HDV-6, HDV-7 and HDV-8 in Africa. Genotype 8 has also been isolated from South America. This genotype is usually only found in Africa and may have been imported into South America during the slave trade.
HDV-specific CD8+ T cells can control the virus, but HDV has been found to mutate to escape detection by CD8+ T cells.
Related species
A few other viruses with similarity to HDV have been described in species other than humans. Unlike HDV, none of them depend on a Hepadnaviridae (HBV family) virus to replicate. These agents have a rod-like structure, a delta antigen, and a ribozyme. HDV and all such relatives are classified in their own realm, Ribozyviria, by the International Committee on Taxonomy of Viruses.
Pavo (constellation)
Pavo is a constellation in the southern sky whose name is Latin for "peacock". Pavo first appeared on a 35-cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Petrus Plancius and Jodocus Hondius and was depicted in Johann Bayer's star atlas Uranometria of 1603, and was likely conceived by Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman. French explorer and astronomer Nicolas-Louis de Lacaille gave its stars Bayer designations in 1756. The constellations Pavo, Grus, Phoenix and Tucana are collectively known as the "Southern Birds".
The constellation's brightest member, Alpha Pavonis, is also known as Peacock and appears as a 1.91-magnitude blue-white star, but is actually a spectroscopic binary. Delta Pavonis is a nearby Sun-like star some 19.9 light-years distant. Six of the star systems in Pavo have been found to host planets, including HD 181433 with a super-Earth, and HD 172555 with evidence of a major interplanetary collision in the past few thousand years. The constellation contains NGC 6752, the fourth-brightest globular cluster in the sky, and the spiral galaxy NGC 6744, which closely resembles the Milky Way but is twice as large. Pavo displays an annual meteor shower known as the Delta Pavonids, whose radiant is near the star δ Pav.
History and mythology
History of the modern constellation
Pavo was one of the twelve constellations established by Petrus Plancius from the observations of the southern sky by explorers Pieter Dirkszoon Keyser and Frederick de Houtman, who had sailed on the first Dutch trading expedition, known as the Eerste Schipvaart, to the East Indies. It first appeared on a 35 cm (14 in) diameter celestial globe published in 1598 in Amsterdam by Plancius with Jodocus Hondius. The first depiction of this constellation in a celestial atlas was in German cartographer Johann Bayer's Uranometria of 1603. De Houtman included it in his southern star catalogue the same year under the Dutch name De Pauww ("The Peacock"). Pavo and the nearby constellations Phoenix, Grus, and Tucana are collectively called the "Southern Birds".
The peacock in Greek mythology
According to Mark Chartrand, former executive director of the National Space Institute, Plancius may not have been the first to designate this group of stars as a peacock: "In Greek myth the stars that are now the Peacock were Argos [or Argus], builder of the ship Argo. He was changed by the goddess Juno into a peacock and placed in the sky along with his ship." Indeed, the peacock "symboliz[ed] the starry firmament" for the Greeks, and the goddess Hera was believed to drive through the heavens in a chariot drawn by peacocks.
The peacock and the "Argus" nomenclature are also prominent in a different myth, in which Io, a beautiful princess of Argos, was lusted after by Zeus (Jupiter). Zeus changed Io into a heifer to deceive his wife (and sister) Hera and couple with her. Hera saw through Zeus's scheme and asked for the heifer as a gift. Zeus, unable to refuse such a reasonable request, reluctantly gave the heifer to Hera, who promptly banished Io and arranged for Argus Panoptes, a creature with one hundred eyes, to guard the now-pregnant Io from Zeus. Meanwhile, Zeus entreated Hermes to save Io; Hermes used music to lull Argus Panoptes to sleep, then slew him. Hera adorned the tail of a peacock—her favorite bird—with Argus's eyes in his honor.
As recounted in Ovid's Metamorphoses, the death of Argus Panoptes also contains an explicit celestial reference: "Argus lay dead; so many eyes, so bright quenched, and all hundred shrouded in one night. Saturnia [Hera] retrieved those eyes to set in place among the feathers of her bird [the peacock, Pavo] and filled his tail with starry jewels."
It is uncertain whether the Dutch astronomers had the Greek mythos in mind when creating Pavo but, in keeping with other constellations introduced by Plancius through Keyser and de Houtman, the "peacock" in the new constellation likely referred to the green peacock, which the explorers would have encountered in the East Indies, rather than the blue peacock known to the ancient Greeks.
Equivalents in other cultures
The Wardaman people of the Northern Territory in Australia saw the stars of Pavo and the neighbouring constellation Ara as flying foxes.
Characteristics
Pavo is bordered by Telescopium to the north, Apus and Ara to the west, Octans to the south, and Indus to the east and northeast. Covering 378 square degrees, it ranks 44th of the 88 modern constellations in size and covers 0.916% of the night sky. The three-letter abbreviation for the constellation, as adopted by the International Astronomical Union in 1922, is "Pav". The official constellation boundaries, as set by Belgian astronomer Eugène Delporte in 1930, are defined by a polygon of 10 segments. In the equatorial coordinate system, the right ascension coordinates of these borders lie between and , while the declination coordinates are between −56.59° and −74.98°. As one of the deep southern constellations, it remains below the horizon at latitudes north of the 30th parallel in the Northern Hemisphere, and is circumpolar at latitudes south of the 50th parallel in the Southern Hemisphere.
Features
Stars
Although he depicted Pavo on his chart, Bayer did not assign its stars Bayer designations. French explorer and astronomer Nicolas-Louis de Lacaille labelled them Alpha to Omega in 1756, but omitted Psi and Xi, and labelled two pairs of stars close together Mu and Phi Pavonis. In 1879, American astronomer Benjamin Gould designated a star Xi Pavonis as he felt its brightness warranted a name, but dropped Chi Pavonis due to its faintness.
Lying near the constellation's northern border with Telescopium is Alpha Pavonis, the brightest star in Pavo. Its proper name—Peacock—is an English translation of the constellation's name. It was assigned by the British HM Nautical Almanac Office in the late 1930s; the Royal Air Force insisted that all bright stars must have names, the star hitherto having lacked a proper name. Alpha has an apparent (or visual) magnitude of 1.91 and spectral type B2IV. It is a spectroscopic binary system, one estimate placing the distance between the pair of stars as 0.21 astronomical units (AU), or half the distance between Mercury and the Sun. The two stars rotate around each other in a mere 11 days and 18 hours. The star system is located around 180 light years away from Earth.
With an apparent magnitude of 3.43, Beta Pavonis is the second-brightest star in the constellation. A white giant of spectral class A7III, it is an aging star that has used up the hydrogen fuel at its core and has expanded and cooled after moving off the main sequence. It lies 135 light years away from the Solar System.
Lying a few degrees west of Beta is Delta Pavonis, a nearby Sun-like but more evolved star; this is a yellow subgiant of spectral type G8IV and apparent magnitude 3.56 that is only 19.9 light years distant from Earth. East of Beta and at the constellation's eastern border with Indus is Gamma Pavonis, a fainter, solar-type star 30 light years from Earth with a magnitude of 4.22 and stellar class F9V. Other nearby stars in Pavo are much fainter: SCR 1845-6357 (the nearest star in Pavo) is a binary system with an apparent magnitude of 17.4 consisting of a red dwarf and brown dwarf companion lying around 12.6 light years distant, while Gliese 693 is a red dwarf of magnitude 10.78 lying 19 light years away.
Pavo contains several variable stars of note. Lambda Pavonis is a bright irregular variable ranging between magnitudes 3.4 and 4.4; this variation can be observed with the unaided eye. Classed as a Gamma Cassiopeiae variable or shell star, it is of spectral type B2II-IIIe and lies around 1430 light years distant from Earth. Kappa Pavonis is a W Virginis variable—a subclass of Type II Cepheid. It ranges from magnitude 3.91 to 4.78 over 9 days and is a yellow-white supergiant pulsating between spectral classes F5I-II and G5I-II. NU and V Pavonis are pulsating semiregular variable red giant stars. NU has a spectral type M6III and ranges from magnitude 4.9 to 5.3, while V Pavonis ranges from magnitude 6.3 to 8.2 over two periods of 225.4 and 3735 days concurrently. V is a carbon star of spectral type C6,4(Nb) with a prominent red hue.
Located in the west of the constellation and depicting the peacock's tail are Eta and Xi Pavonis. At apparent magnitude 3.6, Eta is a luminous orange giant of spectral type K2II some 350 light years distant from Earth. Xi Pavonis is a multiple star system visible in small telescopes as a brighter orange star and fainter white companion. Located around 470 light years from Earth, the system has a magnitude of 4.38. AR Pavonis is a faint but well-studied eclipsing binary composed of a red giant and smaller hotter star some 18000 light years from Earth. It has some features of a cataclysmic variable, the smaller component most likely having an accretion disc. The visual magnitude ranges from 7.4 to 13.6 over 605 days.
In November 2018, the 8th-magnitude star HD 186302 was identified as the second known solar sibling. It is a particularly Sun-like star, with the same G2 spectral type, virtually the same mass, and a spectrum revealing an identical metallicity.
Planetary systems and debris disks
Six stars with planetary systems have been found. Three planets have been discovered in the system of the orange star HD 181433, an inner super-Earth with an orbital period of 9.4 days and two outer gas giants with periods of 2.6 and 6 years respectively. HD 196050 and HD 175167 are yellow G-class Sun-like stars, while HD 190984 is an F-class main sequence star slightly larger and hotter than the Sun; all three are accompanied by a gas giant companion. HD 172555 is a young white A-type main sequence star, two planets of which appear to have had a major collision in the past few thousand years. Spectrographic evidence of large amounts of silicon dioxide gas indicates the smaller of the two, which had been at least the size of Earth's moon, was destroyed, and the larger, which was at least the size of Mercury, was severely damaged. Evidence of the collision was detected by NASA's Spitzer Space Telescope. In the south of the constellation, Epsilon Pavonis is a 3.95-magnitude white main sequence star of spectral type A0Va located around 105 light years distant from Earth. It appears to be surrounded by a narrow ring of dust at a distance of 107 AU.
Deep-sky objects
The deep-sky objects in Pavo include NGC 6752, the fourth-brightest globular cluster in the sky, after Omega Centauri, 47 Tucanae and Messier 22. An estimated 100 light years across, it is thought to contain 100,000 stars. Barely visible behind the cluster is a dwarf spheroidal galaxy known as Bedin I. Lying three degrees to the south is NGC 6744, a spiral galaxy around 30 million light years away from Earth that resembles the Milky Way, but is twice its diameter. A type 1c supernova was discovered in the galaxy in 2005; known as SN2005at, it peaked at magnitude 16.8. The dwarf galaxy IC 4662 lies 10 arcminutes northeast of Eta Pavonis, and is of magnitude 11.62. Located only 8 million light years away, it has several regions of high star formation. The 14th-magnitude galaxy IC 4965 lies 1.7 degrees west of Alpha Pavonis, and is a central member of the Shapley Supercluster. The galactic-wind-bearing galaxy NGC 6810 and the interacting galaxies NGC 6872/IC 4970 lie 87 and 212 million light-years away from Earth respectively.
Meteor showers
Pavo is the radiant of two annual meteor showers: the Delta Pavonids and August Pavonids. Appearing from 21 March to 8 April and generally peaking around 5 and 6 April, Delta Pavonids are thought to be associated with comet Grigg-Mellish. The shower was discovered by Michael Buhagiar from Perth, Australia, who observed meteors on six occasions between 1969 and 1980. The August Pavonids peak around 31 August and are thought to be associated with the Halley-type Comet Levy (P/1991 L3).
Penduline tit
The penduline tits constitute the family Remizidae, of small passerine birds related to the true tits. All but the verdin make elaborate bag nests hanging from trees (whence "penduline", hanging), usually over water.
Characteristics
Penduline tits are tiny passerines, ranging from 7.5 to 11 cm in length, that resemble the true tits (Paridae) but have finer bills with more needle-like points. Their wings are short and rounded and their short tails are notched (except the stub-tailed tit). The penduline tits' typical plumage colors are pale grays and yellows and white, though the European penduline tit has black and chestnut markings and some species have bright yellow or red.
Distribution and habitat
The penduline tits live in Eurasia, Africa, and North America. The genus Remiz is almost exclusively Palearctic, ranging discontinuously from Portugal and the tip of northern Morocco through to Siberia and Japan. The largest genus, Anthoscopus, is found in sub-Saharan Africa from the Sahel through to South Africa. The verdin lives in arid parts of the southwestern U.S. and northern Mexico.
Several species of penduline tit are migratory, although this behaviour is only shown in species found in Asia and Europe; African species and the verdin are apparently sedentary. The Eurasian penduline tit is migratory over parts of its range, with birds in northern Europe moving south in the winter but birds in southern Europe remaining close to their breeding areas. In contrast the Chinese penduline tit is fully migratory and undertakes long-distance migrations.
Most live in open country with trees or bushes, ranging from desert to marsh to woodland, but the forest penduline tit lives in rain forest. They spend most of the year in small flocks.
Behaviour
Insects form the larger part of the diet of the penduline tits, and they are active foragers. Their long conical bill is used to probe into cracks and prise open holes in order to obtain prey. Nectar, seeds and fruits may also be taken seasonally. Their foraging behaviour is reminiscent of the true tits (Paridae), foraging upside-down on small branches, manoeuvring branches and leaves with their feet in order to inspect them, and clasping large prey items with one foot while dismembering them.
The common name of the family reflects the tendency of most species to construct elaborate pear-shaped nests. These nests are woven from spiderweb, wool and animal hair and soft plant materials, which are suspended from twigs and branches in trees. The nests of the African genus Anthoscopus are even more elaborate than the Eurasian Remiz, incorporating a false entrance above the true entrance which leads to a false chamber. The true nesting chamber is accessed by the parent opening a hidden flap, entering and then closing the flap shut again, the two sides sealing with sticky spider webs.
The verdin builds a domed nest out of thorny twigs. In some penduline tit species the eggs are white, sometimes with red spots. The verdin lays blue-green eggs with red spots. Incubation lasts about 13 or 14 days, and the nestlings fledge at about 18 days. In penduline tits, a higher incidence of extra-pair paternity results in lower rates of male care, suggesting that extra-pair offspring devalue parental care.
Systematics
The family Remizidae was introduced in 1891 (as Remizeae) by the French ornithologist Léon Olphe-Galliard. Sometimes, these birds are included as the subfamily Remizinae in the tit family Paridae. Which taxonomic arrangement is preferred is primarily a matter of taste; that these families are close relatives is well established by now. If they are considered a separate family, the sultan tit and the yellow-browed tit would possibly need to be excluded from the Paridae. The placement of the tit-hylia within this family is particularly controversial, it having variously been placed with the sunbirds, waxbills, honeyeaters and most recently close to the green hylia. It is placed in the family Cettiidae.
The family contains 11 species in 3 genera:
Wagtail
Wagtails are a group of passerine birds that form the genus Motacilla in the family Motacillidae. The common name and genus name are derived from their characteristic tail pumping behaviour. Together with the pipits and longclaws they form the family Motacillidae.
The forest wagtail belongs to the monotypic genus Dendronanthus which is closely related to Motacilla and sometimes included therein.
The willie wagtail (Rhipidura leucophrys) of Australia is not a true wagtail; it was named as such by early settlers from England from its superficial similarity in colour and behaviour to the pied wagtail, but belongs to an unrelated genus of birds known as fantails.
Taxonomy
The genus Motacilla was described by the Swedish naturalist Carl Linnaeus in 1758 in the tenth edition of his Systema Naturae. The type species is the white wagtail. Motacilla is the Latin name for the pied wagtail; although it is actually a diminutive of motare, "to move about", from medieval times it was misunderstood as containing cilla, "tail".
At first glance, the wagtails appear to be divided into a yellow-bellied group and a white-bellied one, or one where the upper head is black and another where it is usually grey, but may be olive, yellow, or other colours. However, these are not evolutionary lineages; change of belly colour and increase of melanin have occurred independently several times in the wagtails, and the colour patterns which actually indicate relationships are more subtle.
mtDNA cytochrome b and NADH dehydrogenase subunit 2 sequence data (Voelker, 2002) is of limited use: the suspicion that there is a superspecies of probably three white-bellied, black-throated wagtails is confirmed. Also, there is another superspecies in sub-Saharan Africa, three white-throated species with a black breast-band. The remaining five species are highly variable morphologically and their relationships with each other and with the two clades have not yet been satisfactorily explained.
The origin of the genus appears to be in the general area of Eastern Siberia/Mongolia. Wagtails spread rapidly across Eurasia and dispersed to Africa in the Zanclean (Early Pliocene), where the sub-Saharan lineage was later isolated. The African pied wagtail (and possibly the Mekong wagtail) diverged prior to the massive radiation of the white-bellied black-throated and most yellow-bellied forms, all of which took place during the late Piacenzian (early Late Pliocene), c. 3 mya.
Three species are poly- or paraphyletic in the present taxonomical arrangement, and either subspecies need to be reassigned and/or species split up. The western yellow wagtail (AKA blue-headed wagtail, yellow wagtail, and many other names) especially has always been a taxonomical nightmare, with ten currently accepted subspecies and many more invalid ones. The two remaining "monochrome" species, the Mekong and African pied wagtails, may be closely related, or a most striking example of convergent evolution.
Prehistoric wagtails known from fossils are Motacilla humata and Motacilla major.
Characteristics
Wagtails are slender, often colourful, ground-feeding insectivores of open country in the Old World. Species of wagtail breed in Africa, Europe and Asia, some of which are fully or partially migratory. Two species also breed in western Alaska, and wintering birds may reach Australia.
They are ground nesters, often in rock crevices on steep banks or walls, laying (3–)4–6(–8) speckled eggs at a time. Among their most conspicuous behaviours is a near constant tail wagging, a trait that has given the birds their common name. In spite of the ubiquity of the behaviour and observations of it, the reasons for it are poorly understood. It has been suggested that it may flush up prey, or that it may signal submissiveness to other wagtails. Recent studies have suggested instead that it is a signal of vigilance that may help to deter potential predators.
Species list
The genus contains thirteen species.
Tower
A tower is a tall structure, taller than it is wide, often by a significant factor. Towers are distinguished from masts by their lack of guy-wires and are therefore, along with tall buildings, self-supporting structures.
Towers are specifically distinguished from buildings in that they are built not to be habitable but to serve other functions using the height of the tower. For example, the height of a clock tower improves the visibility of the clock, and the height of a tower in a fortified building such as a castle increases the visibility of the surroundings for defensive purposes. Towers may also be built for observation, leisure, or telecommunication purposes. A tower can stand alone or be supported by adjacent buildings, or it may be a feature on top of a larger structure or building.
Etymology
Old English torr is from Latin turris via Old French tor. The Latin term together with Greek τύρσις was loaned from a pre-Indo-European Mediterranean language, connected with the Illyrian toponym Βου-δοργίς. With the Lydian toponyms Τύρρα, Τύρσα, it has been connected with the ethnonym Τυρρήνιοι as well as with Tusci (from *Turs-ci), the Greek and Latin names for the Etruscans (Kretschmer Glotta 22, 110ff.)
History
Towers have been used by humankind since prehistoric times. The oldest known may be the circular stone tower in walls of Neolithic Jericho (8000 BC). Some of the earliest towers were ziggurats, which existed in Sumerian architecture since the 4th millennium BC. The most famous ziggurats include the Sumerian Ziggurat of Ur, built in the 3rd millennium BC, and the Etemenanki, one of the most famous examples of Babylonian architecture.
Some of the earliest surviving examples are the broch structures in northern Scotland, which are conical tower houses. These and other examples from Phoenician and Roman cultures emphasised the use of a tower in fortification and sentinel roles. For example, the name of the Moroccan city of Mogador, founded in the first millennium BC, is derived from the Phoenician word for watchtower ('migdol'). The Romans utilised octagonal towers as elements of Diocletian's Palace in Croatia, which monument dates to approximately 300 AD, while the Servian Walls (4th century BC) and the Aurelian Walls (3rd century AD) featured square ones. The Chinese used towers as integrated elements of the Great Wall of China in 210 BC during the Qin dynasty. Towers were also an important element of castles.
Other well known towers include the Leaning Tower of Pisa in Pisa, Italy built from 1173 until 1372, the Two Towers in Bologna, Italy built from 1109 until 1119 and the Towers of Pavia (25 survive), built between 11th and 13th century. The Himalayan Towers are stone towers located chiefly in Tibet built approximately 14th to 15th century.
Mechanics
Up to a certain height, a tower can be built with a supporting structure whose sides are parallel. Above that height, however, the compressive strength of the material is exceeded under the structure's own weight, and the tower will fail. This can be avoided if the tower's support structure tapers with height.
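As a rough illustrative calculation (a sketch using assumed, representative material values rather than data for any particular tower), the self-weight limit of a column with constant cross-section follows from setting the base stress, density × g × height, equal to the compressive strength:

```python
# Rough estimate of the self-weight height limit for a prismatic (non-tapering) column.
# The material values below are assumed, representative handbook figures.

G = 9.81  # gravitational acceleration, m/s^2

materials = {
    # name: (compressive strength in Pa, density in kg/m^3)
    "brick masonry": (20e6, 1900),
    "concrete": (40e6, 2400),
    "structural steel": (250e6, 7850),
}

for name, (strength, density) in materials.items():
    # The base stress of a constant-cross-section column is density * g * height,
    # so failure occurs when that stress reaches the compressive strength.
    h_max = strength / (density * G)
    print(f"{name}: ~{h_max:.0f} m before compressive failure")
```

Real towers encounter wind loading and buckling well before this ceiling, which is why the compressive limit is only the first of the limits described here.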
A second limit is that of buckling—the structure requires sufficient stiffness to avoid breaking under the loads it faces, especially those due to winds. Many very tall towers have their support structures at the periphery of the building, which greatly increases the overall stiffness.
A third limit is dynamic; a tower is subject to varying winds, vortex shedding, seismic disturbances etc. These are often dealt with through a combination of simple strength and stiffness, as well as in some cases tuned mass dampers to damp out movements. Varying or tapering the outer aspect of the tower with height avoids vibrations due to vortex shedding occurring along the entire building simultaneously.
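The vortex-shedding effect can be sketched with the Strouhal relation f = St·U/D (illustrative values only; the Strouhal number of about 0.2 is a typical assumption for bluff, roughly circular sections): because the shedding frequency depends on the local diameter, a tapered tower is excited at different frequencies at different heights instead of coherently at a single frequency.

```python
# Illustrative vortex-shedding frequencies for a tapering tower.
# The Strouhal number, wind speed, and diameters are assumed values.

STROUHAL = 0.2  # typical value for a bluff circular section

def shedding_frequency(wind_speed_ms: float, diameter_m: float) -> float:
    """Approximate vortex-shedding frequency f = St * U / D."""
    return STROUHAL * wind_speed_ms / diameter_m

wind = 20.0  # m/s, an assumed design wind speed
for diameter in (10.0, 8.0, 6.0):  # local diameters from base to top of a tapering tower
    f = shedding_frequency(wind, diameter)
    print(f"diameter {diameter:4.1f} m -> shedding at ~{f:.2f} Hz")
# The excitation frequency varies along the height, so no single structural mode
# is driven coherently over the entire tower at once.
```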
Functions
Although not correctly defined as towers, many modern high-rise buildings (in particular skyscrapers) have 'tower' in their name or are colloquially called 'towers'. Skyscrapers are more properly classified as 'buildings'. In the United Kingdom, tall domestic buildings are referred to as tower blocks. In the United States, the original World Trade Center had the nickname the Twin Towers, a name shared with the Petronas Twin Towers in Kuala Lumpur. In addition, some of the structures listed below do not follow the strict criteria used at List of tallest towers.
Strategic advantages
The tower throughout history has provided its users with an advantage in surveying defensive positions and obtaining a better view of the surrounding areas, including battlefields. They were constructed on defensive walls, or rolled near a target (see siege tower). Today, strategic-use towers are still used at prisons, military camps, and defensive perimeters.
Potential energy
By using gravity to move objects or substances downward, a tower can be used to store items or liquids like a storage silo or a water tower, or aim an object into the earth such as a drilling tower. Ski-jump ramps use the same idea, and in the absence of a natural mountain slope or hill, can be human-made.
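A small worked example (with assumed heights, not figures for any particular installation) shows how the stored height of a water tower translates directly into hydrostatic pressure:

```python
# Hydrostatic pressure delivered by an elevated water tank.
# The heights below are assumed, illustrative values.

RHO_WATER = 1000.0  # density of water, kg/m^3
G = 9.81            # gravitational acceleration, m/s^2

def pressure_from_height(height_m: float) -> float:
    """Pressure (Pa) at the base of a water column of the given height."""
    return RHO_WATER * G * height_m

for height in (20, 40, 60):
    pressure = pressure_from_height(height)
    print(f"{height} m of head -> {pressure / 1e5:.1f} bar at ground level")
# Roughly 1 bar of pressure is gained for every 10 m of elevation, so a 40 m tower
# supplies about 4 bar without any pumping at the point of use.
```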
Communication enhancement
In history, simple towers like lighthouses, bell towers, clock towers, signal towers and minarets were used to communicate information over greater distances. In more recent years, radio masts and cell phone towers facilitate communication by expanding the range of the transmitter. The CN Tower in Toronto, Ontario, Canada was built as a communications tower, with the capability to act as both a transmitter and repeater.
Transportation support
Towers can also be used to support bridges, and can reach heights that rival some of the tallest buildings above-water. Their use is most prevalent in suspension bridges and cable-stayed bridges. The use of the pylon, a simple tower structure, has also helped to build railroad bridges, mass-transit systems, and harbors.
Control towers are used to give visibility to help direct aviation traffic.
Other
To access tall or high objects: launch tower, service tower, service structure, scaffold, tower crane
To access atmospheric conditions aloft: wind turbine, meteorological measurement tower, tower telescope, solar power station
To lift high tension cables for electrical power distribution transmission tower
To take advantage of the temperature gradient inherent in a height differential: cooling tower
To expel and disperse potentially harmful gases and particulates into the atmosphere: chimney
To protect from exposure: BREN Tower, lightning rod tower
For industrial production: shot tower
For surveying: Survey tower
To drop objects: Drop tube (drop tower), bomb tower, diving platform
To test height-intensive applications: elevator test tower
To improve structural integrity: thyristor tower
To mimic towers or provide height for training purposes: fire tower, parachute tower
As art: Shukhov Tower
For recreation: rock climbing tower
As a symbol: Tower of Babel, The Tower (Tarot card), church tower
The term "tower" is also sometimes used to refer to firefighting equipment with an extremely tall ladder designed for use in firefighting/rescue operations involving high-rise buildings.
Gallery
Transmission electron microscopy
Transmission electron microscopy (TEM) is a microscopy technique in which a beam of electrons is transmitted through a specimen to form an image. The specimen is most often an ultrathin section less than 100 nm thick or a suspension on a grid. An image is formed from the interaction of the electrons with the sample as the beam is transmitted through the specimen. The image is then magnified and focused onto an imaging device, such as a fluorescent screen, a layer of photographic film, or a detector such as a scintillator attached to a charge-coupled device or a direct electron detector.
Transmission electron microscopes are capable of imaging at a significantly higher resolution than light microscopes, owing to the smaller de Broglie wavelength of electrons. This enables the instrument to capture fine detail—even as small as a single column of atoms, which is thousands of times smaller than a resolvable object seen in a light microscope. Transmission electron microscopy is a major analytical method in the physical, chemical and biological sciences. TEMs find application in cancer research, virology, and materials science as well as pollution, nanotechnology and semiconductor research, but also in other fields such as paleontology and palynology.
TEM instruments have multiple operating modes including conventional imaging, scanning TEM imaging (STEM), diffraction, spectroscopy, and combinations of these. Even within conventional imaging, there are many fundamentally different ways that contrast is produced, called "image contrast mechanisms". Contrast can arise from position-to-position differences in the thickness or density ("mass-thickness contrast"), atomic number ("Z contrast", referring to the common abbreviation Z for atomic number), crystal structure or orientation ("crystallographic contrast" or "diffraction contrast"), the slight quantum-mechanical phase shifts that individual atoms produce in electrons that pass through them ("phase contrast"), the energy lost by electrons on passing through the sample ("spectrum imaging") and more. Each mechanism tells the user a different kind of information, depending not only on the contrast mechanism but on how the microscope is used—the settings of lenses, apertures, and detectors. What this means is that a TEM is capable of returning an extraordinary variety of nanometre- and atomic-resolution information, in ideal cases revealing not only where all the atoms are but what kinds of atoms they are and how they are bonded to each other. For this reason TEM is regarded as an essential tool for nanoscience in both biological and materials fields.
The first TEM was demonstrated by Max Knoll and Ernst Ruska in 1931, with this group developing the first TEM with resolution greater than that of light in 1933 and the first commercial TEM in 1939. In 1986, Ruska was awarded the Nobel Prize in physics for the development of transmission electron microscopy.
History
Initial development
In 1873, Ernst Abbe proposed that the ability to resolve detail in an object was limited approximately by the wavelength of the light used in imaging or a few hundred nanometres for visible light microscopes. Developments in ultraviolet (UV) microscopes, led by Köhler and Rohr, increased resolving power by a factor of two. However this required expensive quartz optics, due to the absorption of UV by glass. It was believed that obtaining an image with sub-micrometre information was not possible due to this wavelength constraint.
In 1858, Plücker observed the deflection of "cathode rays" (electrons) by magnetic fields. This effect was used by Ferdinand Braun in 1897 to build simple cathode-ray oscilloscope (CRO) measuring devices. In 1891, Eduard Riecke noticed that the cathode rays could be focused by magnetic fields, allowing for simple electromagnetic lens designs. In 1926, Hans Busch published work extending this theory and showed that the lens maker's equation could, with appropriate assumptions, be applied to electrons.
In 1928, at the Technische Hochschule in Charlottenburg (now Technische Universität Berlin), Adolf Matthias, Professor of High Voltage Technology and Electrical Installations, appointed Max Knoll to lead a team of researchers to advance the CRO design. The team consisted of several PhD students including Ernst Ruska and Bodo von Borries. The research team worked on lens design and CRO column placement, to optimize parameters to construct better CROs, and make electron optical components to generate low magnification (nearly 1:1) images. In 1931, the group successfully generated magnified images of mesh grids placed over the anode aperture. The device used two magnetic lenses to achieve higher magnifications, arguably creating the first electron microscope. In that same year, Reinhold Rudenberg, the scientific director of the Siemens company, patented an electrostatic lens electron microscope.
Improving resolution
At the time, electrons were understood to be charged particles of matter; the wave nature of electrons was not fully realized until the PhD thesis of Louis de Broglie in 1924. Knoll's research group was unaware of this publication until 1932, when they realized that the de Broglie wavelength of electrons was many orders of magnitude smaller than that for light, theoretically allowing for imaging at atomic scales. (Even for electrons with a kinetic energy of just 1 electronvolt the wavelength is already as short as 1.18 nm.) In April 1932, Ruska suggested the construction of a new electron microscope for direct imaging of specimens inserted into the microscope, rather than simple mesh grids or images of apertures. With this device successful diffraction and normal imaging of an aluminium sheet was achieved. However the magnification achievable was lower than with light microscopy. Magnifications higher than those available with a light microscope were achieved in September 1933 with images of cotton fibers quickly acquired before being damaged by the electron beam.
At this time, interest in the electron microscope had increased, with other groups, such as that of Paul Anderson and Kenneth Fitzsimmons of Washington State University and that of Albert Prebus and James Hillier at the University of Toronto, who constructed the first TEMs in North America in 1935 and 1938, respectively, continually advancing TEM design.
Research continued on the electron microscope at Siemens in 1936, where the aim of the research was the development and improvement of TEM imaging properties, particularly with regard to biological specimens. At this time electron microscopes were being fabricated for specific groups, such as the "EM1" device used at the UK National Physical Laboratory. In 1939, the first commercial electron microscope was installed in the Physics department of IG Farben-Werke. Further work on the electron microscope was hampered by the destruction of a new laboratory constructed at Siemens by an air raid, as well as the death of two of the researchers, Heinz Müller and Friedrich Krause, during World War II.
Further research
After World War II, Ruska resumed work at Siemens, where he continued to develop the electron microscope, producing the first microscope with 100k magnification. The fundamental structure of this microscope design, with multi-stage beam preparation optics, is still used in modern microscopes. The worldwide electron microscopy community advanced with electron microscopes being manufactured in Manchester UK, the USA (RCA), Germany (Siemens) and Japan (JEOL). The first international conference in electron microscopy was in Delft in 1949, with more than one hundred attendees. Later conferences included the "First" international conference in Paris, 1950 and then in London in 1954.
With the development of TEM, the associated technique of scanning transmission electron microscopy (STEM) was re-investigated and remained undeveloped until the 1970s, with Albert Crewe at the University of Chicago developing the field emission gun and adding a high quality objective lens to create the modern STEM. Using this design, Crewe demonstrated the ability to image atoms using annular dark-field imaging. Crewe and coworkers at the University of Chicago developed the cold field electron emission source and built a STEM able to visualize single heavy atoms on thin carbon substrates.
Background
Electrons
Theoretically, the maximum resolution, d, that one can obtain with a light microscope is limited by the wavelength of the photons (λ) and the numerical aperture NA of the system.
This limit is given by d = λ / (2n sin α) = λ / (2NA), where n is the index of refraction of the medium in which the lens is working and α is the maximum half-angle of the cone of light that can enter the lens (see numerical aperture). Early twentieth century scientists theorized ways of getting around the limitations of the relatively large wavelength of visible light (wavelengths of 400–700 nanometres) by using electrons. Like all matter, electrons have both wave and particle properties (matter wave), and their wave-like properties mean that a beam of electrons can be focused and diffracted much like light can. The wavelength of electrons is related to their kinetic energy via the de Broglie equation, which says that the wavelength is inversely proportional to the momentum. Taking into account relativistic effects (as in a TEM an electron's velocity is a substantial fraction of the speed of light, c) the wavelength is
λ = h / √(2 m0 E (1 + E / (2 m0 c²))), where h is the Planck constant, m0 is the rest mass of an electron and E is the kinetic energy of the accelerated electron.
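To put numbers on this relationship, the following sketch evaluates the relativistic electron wavelength at accelerating voltages typical of TEM guns (the specific voltages are chosen for illustration; the constants are standard SI values):

```python
import math

# Standard physical constants (SI units)
H = 6.62607015e-34          # Planck constant, J*s
M0 = 9.1093837015e-31       # electron rest mass, kg
C = 2.99792458e8            # speed of light, m/s
E_CHARGE = 1.602176634e-19  # elementary charge, C

def electron_wavelength(voltage_volts: float) -> float:
    """Relativistic de Broglie wavelength (m) of an electron accelerated through a potential."""
    e_kin = E_CHARGE * voltage_volts  # kinetic energy in joules
    return H / math.sqrt(2 * M0 * e_kin * (1 + e_kin / (2 * M0 * C**2)))

for kilovolts in (100, 200, 300):
    wavelength = electron_wavelength(kilovolts * 1e3)
    print(f"{kilovolts} kV: {wavelength * 1e12:.2f} pm")
# About 3.7 pm at 100 kV and 2.0 pm at 300 kV, several orders of magnitude below
# the roughly 200 nm diffraction limit of a visible-light microscope.
```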
Electron source
From the top down, the TEM consists of an emission source or cathode, which may be a tungsten filament, a lanthanum hexaboride (LaB6) single crystal or a field emission gun. The gun is connected to a high voltage source (typically ~100–300 kV) and emits electrons either by thermionic or field electron emission into the vacuum. In the case of a thermionic source, the electron source is mounted in a Wehnelt cylinder to provide preliminary focus of the emitted electrons into a beam while also stabilizing the current using a passive feedback circuit. A field emission source uses instead electrostatic electrodes called an extractor, a suppressor, and a gun lens, with different voltages on each, to control the electric field shape and intensity near the sharp tip. The combination of the cathode and these first electrostatic lens elements is collectively called the "electron gun". After it leaves the gun, the beam is typically accelerated until it reaches its final voltage and enters the next part of the microscope: the condenser lens system. These upper lenses of the TEM then further focus the electron beam to the desired size and location on the sample.
Manipulation of the electron beam is performed using two physical effects. The interaction of electrons with a magnetic field will cause electrons to move according to the left hand rule, thus allowing electromagnets to manipulate the electron beam. Additionally, electrostatic fields can cause the electrons to be deflected through a constant angle. Coupling of two deflections in opposing directions with a small intermediate gap allows for the formation of a shift in the beam path, allowing for beam shifting.
Optics
The lenses of a TEM are what gives it its flexibility of operating modes and ability to focus beams down to the atomic scale and magnify them to get an image. A lens is usually made of a solenoid coil nearly surrounded by ferromagnetic materials designed to concentrate the coil's magnetic field into a precise, confined shape. When an electron enters and leaves this magnetic field, it spirals around the curved magnetic field lines in a way that acts very much as an ordinary glass lens does for light—it is a converging lens. But, unlike a glass lens, a magnetic lens can very easily change its focusing power by adjusting the current passing through the coils.
Equally important to the lenses are the apertures. These are circular holes in thin strips of heavy metal. Some are fixed in size and position and play important roles in limiting x-ray generation and improving the vacuum performance. Others can be freely switched among several different sizes and have their positions adjusted. Variable apertures after the sample allow the user to select the range of spatial positions or electron scattering angles to be used in the formation of an image or a diffraction pattern.
The electron-optical system also includes deflectors and stigmators, usually made of small electromagnets. The deflectors allow the position and angle of the beam at the sample position to be independently controlled and also ensure that the beams remain near the low-aberration centers of every lens in the lens stacks. The stigmators compensate for slight imperfections and aberrations that cause astigmatism—a lens having a different focal strength in different directions.
Typically a TEM consists of three stages of lensing. The stages are the condenser lenses, the objective lenses, and the projector lenses. The condenser lenses are responsible for primary beam formation, while the objective lenses focus the beam that comes through the sample itself (in STEM scanning mode, there are also objective lenses above the sample to make the incident electron beam convergent). The projector lenses are used to expand the beam onto the phosphor screen or other imaging device, such as film. The magnification of the TEM is due to the ratio of the distances between the specimen and the objective lens' image plane. TEM optical configurations differ significantly with implementation, with manufacturers using custom lens configurations, such as in spherical aberration corrected instruments, or TEMs using energy filtering to correct electron chromatic aberration.
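As a simplified sketch of how the overall magnification arises (idealized thin-lens stages with invented distances, not the optics of any real instrument), the total magnification is the product of the magnifications of the successive lens stages:

```python
# Idealized illustration: total TEM magnification as a product of stage magnifications.
# All distances below are invented, illustrative values.

def stage_magnification(image_distance_mm: float, object_distance_mm: float) -> float:
    """For an idealized thin lens, |M| = image distance / object distance."""
    return image_distance_mm / object_distance_mm

objective = stage_magnification(image_distance_mm=100.0, object_distance_mm=2.0)      # ~50x
intermediate = stage_magnification(image_distance_mm=200.0, object_distance_mm=10.0)  # ~20x
projector = stage_magnification(image_distance_mm=400.0, object_distance_mm=1.0)      # ~400x

total = objective * intermediate * projector
print(f"Total magnification: ~{total:,.0f}x")  # ~400,000x for these example values
```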
Reciprocity
The optical reciprocity theorem, or principle of Helmholtz reciprocity, generally holds true for elastically scattered electrons, as is often the case under standard TEM operating conditions. The theorem states that the wave amplitude at some point B as a result of electron point source A would be the same as the amplitude at A due to an equivalent point source placed at B. Simply stated, the wave function for electrons focused through any series of optical components that includes only scalar (i.e. not magnetic) fields will be exactly equivalent if the electron source and observation point are reversed.
Reciprocity is used to understand scanning transmission electron microscopy (STEM) in the familiar context of TEM, and to obtain and interpret images using STEM.
Display and detectors
The key factors when considering electron detection include detective quantum efficiency (DQE), point spread function (PSF), modulation transfer function (MTF), pixel size and array size, noise, data readout speed, and radiation hardness.
Imaging systems in a TEM consist of a phosphor screen, which may be made of fine (10–100 μm) particulate zinc sulfide, for direct observation by the operator, and an image recording system such as photographic film, doped YAG screen coupled CCDs, or another digital detector. Typically these devices can be removed or inserted into the beam path as required. (Photographic film is no longer used.) The first report of using a charge-coupled device (CCD) detector for TEM was in 1982, but the technology did not find widespread use until the late 1990s/early 2000s. Monolithic active-pixel sensors (MAPSs) were also used in TEM. CMOS detectors, which are faster and more resistant to radiation damage than CCDs, have been used for TEM since 2005. In the early 2010s, further development of CMOS technology allowed for the detection of single electron counts ("counting mode"). These direct electron detectors are available from Gatan, FEI, Quantum Detectors and Direct Electron.
Components
A TEM is composed of several components, which include a vacuum system in which the electrons travel, an electron emission source for generation of the electron stream, a series of electromagnetic lenses, as well as electrostatic plates. The latter two allow the operator to guide and manipulate the beam as required. Also required is a device to allow the insertion into, motion within, and removal of specimens from the beam path. Imaging devices are subsequently used to create an image from the electrons that exit the system.
Vacuum system
To increase the mean free path of electrons between collisions with gas molecules, a standard TEM is evacuated to low pressures, typically on the order of 10−4 Pa. The need for this is twofold: first, to allow the voltage difference between the cathode and the ground to be sustained without generating an arc, and second, to reduce the collision frequency of electrons with gas atoms to negligible levels—this effect is characterized by the mean free path. TEM components such as specimen holders and film cartridges must be routinely inserted or replaced, requiring a system with the ability to re-evacuate on a regular basis. As such, TEMs are equipped with multiple pumping systems and airlocks and are not permanently vacuum sealed.
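A back-of-the-envelope kinetic-theory estimate (a sketch with assumed values; it treats the residual gas as nitrogen at room temperature and uses the molecule–molecule mean free path as a rough proxy for beam–gas collisions) shows why such low pressures are required:

```python
import math

# Kinetic-theory mean free path of residual gas molecules at various pressures.
# Gas diameter and temperature are assumed values (roughly nitrogen at room temperature).

K_B = 1.380649e-23  # Boltzmann constant, J/K
D_GAS = 3.7e-10     # assumed kinetic diameter of N2, m
T = 293.0           # assumed temperature, K

def mean_free_path(pressure_pa: float) -> float:
    """Mean free path (m) from kinetic theory: lambda = kT / (sqrt(2) * pi * d^2 * p)."""
    return K_B * T / (math.sqrt(2) * math.pi * D_GAS**2 * pressure_pa)

for pressure in (1e5, 1e-1, 1e-4, 1e-7):
    print(f"{pressure:8.0e} Pa -> mean free path ~ {mean_free_path(pressure):.3g} m")
# At atmospheric pressure the mean free path is only tens of nanometres; at 1e-4 Pa it is
# tens of metres, much longer than the electron column, so beam-gas collisions become rare.
```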
The vacuum system for evacuating a TEM to an operating pressure level consists of several stages. Initially, a low or roughing vacuum is achieved with either a rotary vane pump or diaphragm pumps, which set a pressure low enough to allow the operation of a turbo-molecular or diffusion pump that establishes the high vacuum level necessary for operation. So that the low-vacuum pump does not need to run continuously while the turbo-molecular pumps operate continually, the vacuum side of a low-pressure pump may be connected to chambers which accommodate the exhaust gases from the turbo-molecular pump. Sections of the TEM may be isolated by the use of pressure-limiting apertures to allow for different vacuum levels in specific areas, such as a higher vacuum of 10−4 to 10−7 Pa or better in the electron gun in high-resolution or field-emission TEMs.
High-voltage TEMs require ultra-high vacuums in the range of 10−7 to 10−9 Pa to prevent the generation of an electrical arc, particularly at the TEM cathode. As such, for higher-voltage TEMs a third vacuum system may operate, with the gun isolated from the main chamber either by gate valves or by a differential pumping aperture – a small hole that prevents the diffusion of gas molecules into the higher-vacuum gun area faster than they can be pumped out. For these very low pressures, either an ion pump or a getter material is used.
Poor vacuum in a TEM can cause several problems, ranging from the deposition of gas onto the specimen while it is being viewed, in a process known as electron-beam-induced deposition, to more severe cathode damage caused by electrical discharge. The use of a cold trap to adsorb sublimated gases in the vicinity of the specimen largely eliminates vacuum problems that are caused by specimen sublimation.
Specimen stage
TEM specimen stage designs include airlocks to allow for insertion of the specimen holder into the vacuum with minimal loss of vacuum in other areas of the microscope. The specimen holders hold a standard size of sample grid or self-supporting specimen. Standard TEM grid sizes are 3.05 mm in diameter, with a thickness and mesh size ranging from a few to 100 μm. The sample is placed onto the meshed area, which has a diameter of approximately 2.5 mm. Usual grid materials are copper, molybdenum, gold or platinum. This grid is placed into the sample holder, which is paired with the specimen stage. A wide variety of designs of stages and holders exist, depending upon the type of experiment being performed. In addition to 3.05 mm grids, 2.3 mm grids are sometimes, though rarely, used. These grids were particularly used in the mineral sciences, where a large degree of tilt can be required and where specimen material may be extremely rare. Electron-transparent specimens have a thickness usually less than 100 nm, but this value depends on the accelerating voltage.
Once inserted into a TEM, the sample has to be manipulated to bring the region of interest into the beam, for example in a specific orientation for single-grain diffraction. To accommodate this, the TEM stage allows movement of the sample in the XY plane, Z height adjustment, and commonly a single tilt direction parallel to the axis of side-entry holders. Sample rotation may be available on specialized diffraction holders and stages. Some modern TEMs provide two orthogonal tilt angles of movement with specialized holder designs called double-tilt sample holders. Some stage designs, such as the top-entry or vertical-insertion stages once common for high-resolution TEM studies, may only have X–Y translation available. The design criteria of TEM stages are complex, owing to the simultaneous requirements of mechanical and electron-optical constraints, and specialized models are available for different methods.
A TEM stage is required to hold a specimen and to be manipulated to bring the region of interest into the path of the electron beam. As the TEM can operate over a wide range of magnifications, the stage must simultaneously be highly resistant to mechanical drift, with drift requirements as low as a few nm/minute, while being able to move several μm/minute, with repositioning accuracy on the order of nanometres. Earlier designs of TEM accomplished this with a complex set of mechanical downgearing devices, allowing the operator to finely control the motion of the stage by several rotating rods. Modern devices may use electrical stage designs, using screw gearing in concert with stepper motors, providing the operator with a computer-based stage input such as a joystick or trackball.
Two main designs for stages in a TEM exist, the side-entry and the top-entry version. Each design must accommodate the matching holder to allow for specimen insertion without either damaging delicate TEM optics or allowing gas into TEM systems under vacuum.
The most common is the side entry holder, where the specimen is placed near the tip of a long metal (brass or stainless steel) rod, with the specimen placed flat in a small bore. Along the rod are several polymer vacuum rings to allow for the formation of a vacuum seal of sufficient quality, when inserted into the stage. The stage is thus designed to accommodate the rod, placing the sample either in between or near the objective lens, dependent upon the objective design. When inserted into the stage, the side entry holder has its tip contained within the TEM vacuum, and the base is presented to atmosphere, the airlock formed by the vacuum rings.
Insertion procedures for side-entry TEM holders typically involve the rotation of the sample to trigger micro switches that initiate evacuation of the airlock before the sample is inserted into the TEM column.
The second design is the top-entry holder, which consists of a cartridge that is several cm long with a bore drilled down the cartridge axis. The specimen is loaded into the bore, possibly using a small screw ring to hold the sample in place. This cartridge is inserted into an airlock with the bore perpendicular to the TEM optic axis. When sealed, the airlock is manipulated to push the cartridge such that the cartridge falls into place, where the bore hole becomes aligned with the beam axis, such that the beam travels down the cartridge bore and into the specimen. Such designs are typically unable to be tilted without blocking the beam path or interfering with the objective lens.
Electron gun
The electron gun is formed from several components: the filament, a biasing circuit, a Wehnelt cap, and an extraction anode. By connecting the filament to the negative terminal of the power supply, electrons can be "pumped" from the electron gun to the anode plate and the TEM column, thus completing the circuit. The gun is designed to create a beam of electrons exiting from the assembly at some given angle, known as the gun divergence semi-angle, α. By constructing the Wehnelt cylinder such that it has a higher negative charge than the filament itself, electrons that exit the filament in a diverging manner are, under proper operation, forced into a converging pattern, the minimum size of which is the gun crossover diameter.
The thermionic emission current density, J, can be related to the work function of the emitting material via Richardson's law,

J = A T² exp(−Φ / kT),

where A is Richardson's constant, Φ is the work function, k is the Boltzmann constant and T is the temperature of the material.
This equation shows that in order to achieve sufficient current density it is necessary to heat the emitter, taking care not to cause damage by application of excessive heat. For this reason, materials with either a high melting point, such as tungsten, or a low work function, such as lanthanum hexaboride (LaB6), are required for the gun filament. Furthermore, both lanthanum hexaboride and tungsten thermionic sources must be heated in order to achieve thermionic emission, which can be done by the use of a small resistive strip. To prevent thermal shock, there is often a delay enforced in the application of current to the tip, so that thermal gradients do not damage the filament; the delay is usually a few seconds for LaB6, and significantly shorter for tungsten.
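As an illustration of why low-work-function emitters such as LaB6 can be run at lower temperatures than tungsten, the sketch below evaluates Richardson's law in Python. The work functions and operating temperatures are typical textbook values and should be treated as assumptions; A is taken as the ideal Richardson constant of about 1.2 × 10^6 A m^-2 K^-2, whereas real emitters have material-specific prefactors.

import math

A = 1.20173e6         # ideal Richardson constant, A m^-2 K^-2 (real emitters differ)
K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def richardson_current_density(work_function_ev, temperature_k):
    """Thermionic emission current density J = A * T^2 * exp(-phi / (k*T))."""
    return A * temperature_k**2 * math.exp(-work_function_ev / (K_B_EV * temperature_k))

# Assumed, typical values: tungsten (phi ~ 4.5 eV at ~2700 K) vs LaB6 (phi ~ 2.7 eV at ~1800 K)
for name, phi, temp in (("W", 4.5, 2700.0), ("LaB6", 2.7, 1800.0)):
    print(f"{name:5s}: J ~ {richardson_current_density(phi, temp):.3g} A/m^2")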
Electron lens
Electron lenses are designed to act in a manner emulating that of an optical lens, by focusing parallel electrons at some constant focal distance. Electron lenses may operate electrostatically or magnetically. The majority of electron lenses for TEM use electromagnetic coils to generate a convex lens. The field produced for the lens must be radially symmetrical, as deviation from the radial symmetry of the magnetic lens causes aberrations such as astigmatism and worsens spherical and chromatic aberration. Electron lenses are manufactured from iron, iron–cobalt or nickel–cobalt alloys, such as permalloy. These are selected for their magnetic properties, such as magnetic saturation, hysteresis and permeability.
The components of the lens include the yoke, the magnetic coil, the poles, the polepiece, and the external control circuitry. The pole piece must be manufactured in a very symmetrical manner, as this provides the boundary conditions for the magnetic field that forms the lens. Imperfections in the manufacture of the pole piece can induce severe distortions in the magnetic field symmetry, which in turn induce distortions that will ultimately limit the lens's ability to reproduce the object plane. The exact dimensions of the gap, the pole-piece internal diameter and taper, as well as the overall design of the lens, are often determined by finite element analysis of the magnetic field, whilst considering the thermal and electrical constraints of the design.
The coils which produce the magnetic field are located within the lens yoke. The coils can contain a variable current, but typically use high voltages, and therefore require significant insulation in order to prevent short-circuiting the lens components. Thermal distributors are placed to ensure the extraction of the heat generated by the energy lost to resistance of the coil windings. The windings may be water-cooled, using a chilled water supply in order to facilitate the removal of the high thermal duty.
Apertures
Apertures are annular metallic plates that exclude electrons that are further than a fixed distance from the optic axis. These consist of a small metallic disc that is sufficiently thick to prevent electrons from passing through the disc, whilst permitting axial electrons. This selection of central electrons in a TEM causes two effects simultaneously: firstly, apertures decrease the beam intensity as electrons are filtered from the beam, which may be desired in the case of beam-sensitive samples. Secondly, this filtering removes electrons that are scattered to high angles, which may be due to unwanted processes such as spherical or chromatic aberration, or due to diffraction from interaction within the sample.
Apertures are either a fixed aperture within the column, such as at the condenser lens, or are a movable aperture, which can be inserted or withdrawn from the beam path, or moved in the plane perpendicular to the beam path. Aperture assemblies are mechanical devices which allow for the selection of different aperture sizes, which may be used by the operator to trade off intensity and the filtering effect of the aperture. Aperture assemblies are often equipped with micrometers to move the aperture, required during optical calibration.
Imaging methods
Imaging methods in TEM use the information contained in the electron waves exiting from the sample to form an image. The projector lenses allow for the correct positioning of this electron wave distribution onto the viewing system. The observed intensity, I, of the image, assuming a sufficiently high quality of imaging device, can be approximated as proportional to the time-averaged squared absolute value of the amplitude of the electron wave function, I(x, y) ∝ ⟨|Ψ(x, y)|²⟩, where Ψ denotes the wave that forms the exit beam.
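A minimal numerical sketch of this relation is shown below: a complex exit wave is represented on a grid and the recorded image is taken as the squared modulus of that wave. The wave itself (a plane wave with a weak, spatially varying phase shift) is an arbitrary assumption used purely to illustrate that the detector records |Ψ|², not the phase directly.

import numpy as np

# Illustrative exit wave: unit-amplitude plane wave with a weak phase modulation,
# standing in for a thin "pure phase object" (all values are arbitrary assumptions).
x = np.linspace(-1.0, 1.0, 256)
xx, yy = np.meshgrid(x, x)
phase = 0.05 * np.exp(-(xx**2 + yy**2) / 0.1)   # weak, localized phase shift in radians
exit_wave = np.exp(1j * phase)                   # Psi(x, y), |Psi| = 1 everywhere

image = np.abs(exit_wave) ** 2                   # what an ideal detector records
print(image.min(), image.max())                  # ~1.0 everywhere: phase alone gives no contrast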
Different imaging methods therefore attempt to modify the electron waves exiting the sample in a way that provides information about the sample, or about the beam itself. From the previous equation, it can be deduced that the observed image depends not only on the amplitude of the beam, but also on the phase of the electrons, although phase effects may often be ignored at lower magnifications. Higher-resolution imaging requires thinner samples and higher energies of incident electrons, which means that the sample can no longer be considered to be absorbing electrons (i.e., via a Beer's law effect). Instead, the sample can be modeled as an object that does not change the amplitude of the incoming electron wave function, but instead modifies its phase; in this model, the sample is known as a pure phase object. For sufficiently thin specimens, phase effects dominate the image, complicating analysis of the observed intensities. To improve the contrast in the image, the TEM may be operated at a slight defocus to enhance contrast, owing to convolution by the contrast transfer function of the TEM, which would normally decrease contrast if the sample were not a weak phase object.
The figure on the right shows the two basic operation modes of TEM – imaging and diffraction modes. In both cases the specimen is illuminated with a parallel beam, formed by electron-beam shaping with the system of condenser lenses and the condenser aperture. After interaction with the sample, two types of electrons exist at the exit surface of the specimen – unscattered electrons (which correspond to the bright central beam in the diffraction pattern) and scattered electrons (which change their trajectories due to interaction with the material).
In imaging mode, the objective aperture is inserted in the back focal plane (BFP) of the objective lens (where diffraction spots are formed). If the objective aperture is used to select only the central beam, the transmitted electrons are passed through the aperture while all others are blocked, and a bright-field image (BF image) is obtained. If the signal from a diffracted beam is selected instead, a dark-field image (DF image) is obtained. The selected signal is magnified and projected on a screen (or on a camera) with the help of intermediate and projector lenses. An image of the sample is thus obtained.
In diffraction mode, a selected-area aperture may be used to determine more precisely the specimen area from which the signal will be displayed. By changing the strength of the current to the intermediate lens, the diffraction pattern is projected on a screen. Diffraction is a very powerful tool for unit-cell reconstruction and crystal orientation determination.
Contrast formation
The contrast between two adjacent areas in a TEM image can be defined as the difference in the electron densities in the image plane. Due to the scattering of the incident beam by the sample, the amplitude and phase of the electron wave change, which results in amplitude contrast and phase contrast, respectively. Most images have both contrast components.
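One common way to quantify this difference (the exact convention is an assumption here, since several definitions are in use) is the fractional difference in intensity between a feature and its surroundings, C = (I₂ − I₁) / I₁. A minimal sketch:

def fractional_contrast(i_feature, i_background):
    """Fractional image contrast C = (I2 - I1) / I1 between a feature and its background."""
    return (i_feature - i_background) / i_background

# Hypothetical mean intensities measured in two adjacent image regions
print(fractional_contrast(i_feature=80.0, i_background=100.0))  # -0.2, i.e. the feature is 20% darker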
Amplitude contrast is obtained due to removal of some electrons before the image plane. During their interaction with the specimen, some of the electrons will be lost due to absorption, or due to scattering at very high angles beyond the physical limitation of the microscope, or will be blocked by the objective aperture. While the first two losses are due to the specimen and the microscope construction, the objective aperture can be used by the operator to enhance the contrast.
The figure on the right shows a TEM image (a) and the corresponding diffraction pattern (b) of a polycrystalline Pt film taken without an objective aperture. In order to enhance the contrast in the TEM image, the number of scattered beams visible in the diffraction pattern should be reduced. This can be done by selecting a certain area in the back focal plane, such as only the central beam, a specific diffracted beam (angle), or combinations of such beams. By intentionally selecting an objective aperture which only permits the non-diffracted beam to pass beyond the back focal plane (and onto the image plane), one creates a bright-field (BF) image (c), whereas if the central, non-diffracted beam is blocked, one may obtain dark-field (DF) images such as those shown in (d–e). The DF images (d–e) were obtained by selecting the diffracted beams indicated with circles in the diffraction pattern (b), using an aperture at the back focal plane. Grains from which electrons are scattered into these diffraction spots appear brighter. More details about diffraction contrast formation are given below.
There are two types of amplitude contrast – mass–thickness contrast and diffraction contrast. Consider first mass–thickness contrast. When the beam illuminates two neighbouring areas of low mass (or thickness) and high mass (or thickness), the heavier region scatters electrons at larger angles. These strongly scattered electrons are blocked in BF TEM mode by the objective aperture. As a result, heavier regions appear darker in BF images (have lower intensity). Mass–thickness contrast is most important for non-crystalline, amorphous materials.
Diffraction contrast occurs due to a specific crystallographic orientation of a grain. In such a case the crystal is oriented such that there is a high probability of diffraction. Diffraction contrast provides information on the orientation of the crystals in a polycrystalline sample, as well as other information such as defects. Note that where diffraction contrast exists, the contrast cannot be interpreted as being due to mass or thickness variations.
Diffraction contrast
Samples can exhibit diffraction contrast, whereby the electron beam undergoes diffraction which in the case of a crystalline sample, disperses electrons into discrete locations in the back focal plane. By the placement of apertures in the back focal plane, i.e. the objective aperture, the desired reciprocal lattice vectors can be selected (or excluded), thus only parts of the sample that are causing the electrons to scatter to the selected reflections will end up projected onto the imaging apparatus.
If the reflections that are selected do not include the unscattered beam (which will appear at the focal point of the lens), then the image will appear dark wherever no sample scattering to the selected peak is present; a region without a specimen will therefore also appear dark. This is known as a dark-field image.
Modern TEMs are often equipped with specimen holders that allow the user to tilt the specimen to a range of angles in order to obtain specific diffraction conditions, and apertures placed above the specimen allow the user to select electrons that would otherwise be diffracted in a particular direction from entering the specimen.
Applications for this method include the identification of lattice defects in crystals. By carefully selecting the orientation of the sample, it is possible not only to determine the position of defects but also to determine the type of defect present. If the sample is oriented so that one particular plane is only slightly tilted away from the strongest diffracting angle (known as the Bragg angle), any distortion of the crystal plane that locally tilts the plane to the Bragg angle will produce particularly strong contrast variations. However, defects that produce only displacements of atoms that do not tilt the crystal towards the Bragg angle (i.e. displacements parallel to the crystal plane) will produce weaker contrast.
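For orientation, the Bragg angle involved is very small for high-energy electrons. The sketch below computes it from Bragg's law, λ = 2d sin θ, using the relativistic electron wavelength; the 200 kV accelerating voltage and 0.2 nm lattice spacing are illustrative assumptions.

import math

def electron_wavelength_m(acc_voltage_v):
    """Relativistically corrected de Broglie wavelength of an electron accelerated through V volts."""
    h = 6.62607015e-34      # Planck constant, J s
    m0 = 9.1093837015e-31   # electron rest mass, kg
    e = 1.602176634e-19     # elementary charge, C
    c = 299792458.0         # speed of light, m/s
    return h / math.sqrt(2 * m0 * e * acc_voltage_v * (1 + e * acc_voltage_v / (2 * m0 * c**2)))

# Assumed example: 200 kV beam, lattice spacing d = 0.2 nm
lam = electron_wavelength_m(200e3)
d = 0.2e-9
theta = math.asin(lam / (2 * d))
print(f"wavelength = {lam*1e12:.2f} pm, Bragg angle = {math.degrees(theta):.3f} deg ({theta*1e3:.2f} mrad)")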
Phase contrast
Crystal structure can also be investigated by high-resolution transmission electron microscopy (HRTEM), also known as phase contrast imaging. When using a field emission source and a specimen of uniform thickness, the images are formed due to differences in the phase of the electron waves, caused by the specimen interaction. Image formation is given by the complex modulus of the incoming electron waves. As such, the image depends on more than the number of electrons hitting the screen, making direct interpretation of phase contrast images more complex. However, this effect can be used to advantage, as it can be manipulated to provide more information about the sample, such as in complex phase retrieval techniques.
Diffraction
As previously stated, by adjusting the magnetic lenses such that the back focal plane of the lens, rather than the imaging plane, is placed on the imaging apparatus, a diffraction pattern can be generated. For thin crystalline samples, this produces an image that consists of a pattern of dots in the case of a single crystal, or a series of rings in the case of a polycrystalline or amorphous solid material. For the single-crystal case the diffraction pattern is dependent upon the orientation of the specimen and the structure of the sample illuminated by the electron beam. This image provides the investigator with information about the space-group symmetries in the crystal and the crystal's orientation relative to the beam path. This is typically done without using any information other than the positions at which the diffraction spots appear and the observed image symmetries.
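In practice, spot positions are often converted to lattice spacings with the small-angle camera equation, r·d ≈ L·λ, where r is the measured distance of a spot from the central beam, L is the camera length and λ the electron wavelength. The Python sketch below applies this relation; the camera length, the 200 kV wavelength and the spot distances are illustrative assumptions, not measurements.

# Small-angle camera equation: r * d ~ L * lambda, so d ~ L * lambda / r.
# Camera length, voltage and spot radii below are illustrative assumptions.
WAVELENGTH_M = 2.51e-12     # relativistic electron wavelength at an assumed 200 kV
CAMERA_LENGTH_M = 1.0       # assumed effective camera length

def d_spacing_m(spot_radius_m):
    """Lattice spacing corresponding to a diffraction spot at radius r from the central beam."""
    return CAMERA_LENGTH_M * WAVELENGTH_M / spot_radius_m

for r_mm in (11.0, 12.7, 18.0):            # hypothetical measured spot distances, mm
    print(f"r = {r_mm:5.1f} mm -> d ~ {d_spacing_m(r_mm * 1e-3) * 1e9:.3f} nm")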
Diffraction patterns can have a large dynamic range, and for crystalline samples may have intensities greater than those recordable by a CCD. As such, TEMs may still be equipped with film cartridges for the purpose of obtaining these images, as film is a single-use detector.
Analysis of diffraction patterns beyond point-position can be complex, as the image is sensitive to a number of factors such as specimen thickness and orientation, objective lens defocus, and spherical and chromatic aberration. Although quantitative interpretation of the contrast shown in lattice images is possible, it is inherently complicated and can require extensive computer simulation and analysis, such as electron multislice analysis.
More complex behavior in the diffraction plane is also possible, with phenomena such as Kikuchi lines arising from multiple diffraction within the crystalline lattice. In convergent beam electron diffraction (CBED) where a non-parallel, i.e. converging, electron wavefront is produced by concentrating the electron beam into a fine probe at the sample surface, the interaction of the convergent beam can provide information beyond structural data such as sample thickness.
Electron energy loss spectroscopy (EELS)
In the advanced technique of electron energy loss spectroscopy (EELS), available on appropriately equipped TEMs, electrons can be separated into a spectrum based upon their velocity (which is closely related to their kinetic energy, and thus to the energy lost from the beam), using magnetic-sector-based devices known as EEL spectrometers. These devices allow for the selection of particular energy values, which can be associated with the way the electron has interacted with the sample. For example, different elements in a sample result in different electron energies in the beam after the sample. This normally results in chromatic aberration – however this effect can, for example, be used to generate an image which provides information on elemental composition, based upon the atomic transitions during electron–electron interaction.
EELS spectrometers can often be operated in both spectroscopic and imaging modes, allowing for isolation or rejection of elastically scattered beams. As inelastic scattering in many images includes information that may not be of interest to the investigator, thus reducing observable signals of interest, EELS imaging can be used to enhance contrast in observed images, including both bright-field and diffraction images, by rejecting unwanted components.
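When elemental information is extracted from an EELS spectrum, the slowly decaying background beneath an ionization edge is commonly modelled as a power law, I(E) ≈ A·E^(−r), fitted in a window just before the edge and extrapolated under it. The sketch below shows that standard procedure in numpy; the spectrum, fitting window and edge position are synthetic, illustrative assumptions rather than real data.

import numpy as np

# Synthetic spectrum: power-law background plus a small step-like "edge" at 532 eV (illustrative).
energy = np.linspace(300.0, 700.0, 400)                      # energy-loss axis, eV
background = 1e6 * energy ** -3.0
edge = np.where(energy >= 532.0, 150.0 * (energy / 532.0) ** -2.0, 0.0)
spectrum = background + edge

# Fit I = A * E^-r on a pre-edge window (here 450-520 eV) by linear regression in log-log space.
pre = (energy > 450.0) & (energy < 520.0)
slope, intercept = np.polyfit(np.log(energy[pre]), np.log(spectrum[pre]), 1)
fitted_background = np.exp(intercept) * energy ** slope

# Integrate the background-subtracted signal over a post-edge window to estimate the edge intensity.
post = (energy >= 532.0) & (energy < 600.0)
edge_counts = np.trapz((spectrum - fitted_background)[post], energy[post])
print(f"fitted exponent r = {-slope:.2f}, integrated edge signal ~ {edge_counts:.1f} counts*eV")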
Three-dimensional imaging
As TEM specimen holders typically allow for the rotation of a sample by a desired angle, multiple views of the same specimen can be obtained by tilting the sample in increments about an axis perpendicular to the beam. By taking multiple images of a single TEM sample at differing angles, typically in 1° increments, a set of images known as a "tilt series" can be collected. This methodology was proposed in the 1970s by Walter Hoppe. Under purely absorption-contrast conditions, this set of images can be used to construct a three-dimensional representation of the sample.
The reconstruction is accomplished in a two-step process. First, images are aligned to account for errors in the positioning of the sample; such errors can occur due to vibration or mechanical drift. Alignment methods use image registration algorithms, such as autocorrelation methods, to correct these errors. Secondly, using a reconstruction algorithm such as filtered back projection, the aligned image slices can be transformed from a set of two-dimensional images, Ij(x, y), to a single three-dimensional image, I′(x, y, z). This three-dimensional image is of particular interest when morphological information is required; further study can be undertaken using computer algorithms, such as isosurfaces and data slicing, to analyse the data.
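A minimal sketch of the two steps, using widely available Python libraries, is given below. It registers two hypothetical projection images by phase cross-correlation and then reconstructs a single slice from a simulated tilt series by filtered back projection. The phantom, the tilt range and the library choices (scikit-image, scipy) are illustrative assumptions; real tomography packages apply many additional corrections.

import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale
from skimage.registration import phase_cross_correlation

# Step 1 (alignment): estimate and correct the drift between two projections.
reference = rescale(shepp_logan_phantom(), 0.25)            # stand-in for a recorded projection
drifted = nd_shift(reference, (3.0, -2.0))                  # same view, displaced by mechanical drift
measured_shift, _, _ = phase_cross_correlation(reference, drifted)
aligned = nd_shift(drifted, measured_shift)                 # apply the returned correction to register it
print("correction to register the drifted image (rows, cols):", measured_shift)

# Step 2 (reconstruction): filtered back projection of a simulated tilt series.
# A full +/-90 degree range is assumed here; a real TEM tilt series has a missing wedge.
angles = np.arange(-90.0, 90.0, 1.0)
sinogram = radon(reference, theta=angles)                   # simulated tilt series of one slice
reconstruction = iradon(sinogram, theta=angles, filter_name="ramp")
print("reconstructed slice shape:", reconstruction.shape)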
As TEM samples cannot typically be viewed at a full 180° rotation, the observed images typically suffer from a "missing wedge" of data, which, when using Fourier-based back projection methods, decreases the range of resolvable frequencies in the three-dimensional reconstruction. Mechanical refinements, such as multi-axis tilting (two tilt series of the same specimen made in orthogonal directions) and conical tomography (where the specimen is first tilted to a given fixed angle and then imaged at equal angular rotational increments through one complete rotation in the plane of the specimen grid), can be used to limit the impact of the missing data on the observed specimen morphology. Using focused ion beam milling, a new technique has been proposed which uses pillar-shaped specimens and a dedicated on-axis tomography holder to perform a 180° rotation of the sample inside the pole piece of the objective lens in the TEM. Using such arrangements, quantitative electron tomography without the missing wedge is possible. In addition, numerical techniques exist which can improve the collected data.
All the above-mentioned methods involve recording tilt series of a given specimen field. This inevitably results in the summation of a high dose of reactive electrons through the sample and the accompanying destruction of fine detail during recording. The technique of low-dose (minimal-dose) imaging is therefore regularly applied to mitigate this effect. Low-dose imaging is performed by deflecting illumination and imaging regions simultaneously away from the optical axis to image an adjacent region to the area to be recorded (the high-dose region). This area is maintained centered during tilting and refocused before recording. During recording the deflections are removed so that the area of interest is exposed to the electron beam only for the duration required for imaging. An improvement of this technique (for objects resting on a sloping substrate film) is to have two symmetrical off-axis regions for focusing followed by setting focus to the average of the two high-dose focus values before recording the low-dose area of interest.
Non-tomographic variants on this method, referred to as single particle analysis, use images of multiple (hopefully) identical objects at different orientations to produce the image data required for three-dimensional reconstruction. If the objects do not have significant preferred orientations, this method does not suffer from the missing data wedge (or cone) which accompany tomographic methods nor does it incur excessive radiation dosage, however it assumes that the different objects imaged can be treated as if the 3D data generated from them arose from a single stable object.
Sample preparation
Sample preparation in TEM can be a complex procedure. TEM specimens should be less than 100 nanometres thick for a conventional TEM. Unlike neutron or X-ray radiation, the electrons in the beam interact readily with the sample, an effect that increases roughly with atomic number squared (Z²). High-quality samples will have a thickness that is comparable to the mean free path of the electrons that travel through the samples, which may be only a few tens of nanometres. Preparation of TEM specimens is specific to the material under analysis and the type of information to be obtained from the specimen.
Materials that have dimensions small enough to be electron transparent, such as powdered substances, small organisms, viruses, or nanotubes, can be quickly prepared by the deposition of a dilute sample containing the specimen onto films on support grids. Biological specimens may be embedded in resin to withstand the high vacuum in the sample chamber and to enable cutting tissue into electron-transparent thin sections. The biological sample can be stained using either a negative staining material such as uranyl acetate for bacteria and viruses, or, in the case of embedded sections, the specimen may be stained with heavy metals, including osmium tetroxide. Alternatively, samples may be held at liquid nitrogen temperatures after embedding in vitreous ice. In materials science and metallurgy the specimens can usually withstand the high vacuum, but still must be prepared as a thin foil, or etched so some portion of the specimen is thin enough for the beam to penetrate. Constraints on the thickness of the material may be limited by the scattering cross-section of the atoms of which the material is composed.
Tissue sectioning
Before sectioning, biological tissue is often embedded in an epoxy resin block and first trimmed using a razor blade into a trapezoidal block face. Thick sections are then cut from the block face. The thick sections are crudely stained with toluidine blue and examined for specimen and block orientation before thin sectioning.
Biological tissue is then thinned to less than 100 nm on an ultramicrotome. The resin block is fractured as it passes over a glass or diamond knife edge. This method is used to obtain thin, minimally deformed samples that allow for the observation of tissue ultrastructure. Inorganic samples, such as aluminium, may also be embedded in resins and ultrathin sectioned in this way, using either coated glass, sapphire or larger angle diamond knives. To prevent charge build-up at the sample surface when viewing in the TEM, tissue samples need to be coated with a thin layer of conducting material, such as carbon.
Sample staining
TEM samples of biological tissues need high atomic number stains to enhance contrast. The stain absorbs the beam electrons or scatters part of the electron beam which otherwise is projected onto the imaging system. Compounds of heavy metals such as osmium, lead, uranium or gold (in immunogold labelling) may be used prior to TEM observation to selectively deposit electron dense atoms in or on the sample in desired cellular or protein region. This process requires an understanding of how heavy metals bind to specific biological tissues and cellular structures.
Another form of sample staining is negative stain, where a larger amount of heavy metal stain is applied to the sample. The result is a sample with a dark background and the topological surface of the sample appearing lighter. Negative stain electron microscopy can be ideal for visualizing or forming 3D topological reconstructions of large proteins or macromolecular complexes (> 150 kDa). For smaller proteins, negative stain can be used as a screening step to find ideal sample concentration for cryogenic electron microscopy.
Mechanical milling
Mechanical polishing is also used to prepare samples for imaging in the TEM. Polishing needs to be done to a high quality, to ensure constant sample thickness across the region of interest. A diamond or cubic boron nitride polishing compound may be used in the final stages of polishing to remove any scratches that may cause contrast fluctuations due to varying sample thickness. Even after careful mechanical milling, additional fine methods such as ion etching may be required to perform final-stage thinning.
Chemical etching
Certain samples may be prepared by chemical etching, particularly metallic specimens. These samples are thinned using a chemical etchant, such as an acid, to prepare the sample for TEM observation. Devices to control the thinning process may allow the operator to control either the voltage or current passing through the specimen, and may include systems to detect when the sample has been thinned to a sufficient level of optical transparency.
Ion etching
Ion etching is a sputtering process that can remove very fine quantities of material. This is used to perform a finishing polish of specimens polished by other means. Ion etching uses an inert gas passed through an electric field to generate a plasma stream that is directed to the sample surface. Acceleration energies for gases such as argon are typically a few kilovolts. The sample may be rotated to promote even polishing of the sample surface. The sputtering rate of such methods is on the order of tens of micrometres per hour, limiting the method to only extremely fine polishing.
Ion etching with argon gas has recently been shown to be able to thin magnetic tunnel junction (MTJ) stack structures down to a specific layer, which has then been atomically resolved. TEM images taken in plan view rather than cross-section reveal that the MgO layer within MTJs contains a large number of grain boundaries that may be diminishing the properties of devices.
Ion milling (FIB)
More recently, focused ion beam (FIB) methods have been used to prepare samples. FIB is a relatively new technique to prepare thin samples for TEM examination from larger specimens. Because FIB can be used to micro-machine samples very precisely, it is possible to mill very thin membranes from a specific area of interest in a sample, such as a semiconductor or metal. Unlike inert gas ion sputtering, FIB makes use of significantly more energetic gallium ions and may alter the composition or structure of the material through gallium implantation.
Nanowire assisted transfer
For a minimal introduction of stress and bending to transmission electron microscopy (TEM) samples (lamellae, thin films, and other mechanically and beam sensitive samples), when transferring inside a focused ion beam (FIB), flexible metallic nanowires can be attached to a typically rigid micromanipulator.
The main advantages of this method include a significant reduction of sample preparation time (quick welding and cutting of nanowire at low beam current), and minimization of stress-induced bending, Pt contamination, and ion beam damage.
This technique is particularly suitable for in situ electron microscopy sample preparation.
Replication
Samples may also be replicated using cellulose acetate film, the film subsequently coated with a heavy metal such as platinum, the original film dissolved away, and the replica imaged on the TEM. Variations of the replica technique are used for both materials and biological samples. In materials science a common use is for examining the fresh fracture surface of metal alloys.
Modifications
The capabilities of the TEM can be further extended by additional stages and detectors, sometimes incorporated on the same microscope.
Scanning TEM
A TEM can be modified into a scanning transmission electron microscope (STEM) by the addition of a system that rasters a convergent beam across the sample to form the image, when combined with suitable detectors. Scanning coils are used to deflect the beam, such as by an electrostatic shift of the beam, where the beam is then collected using a current detector such as a Faraday cup, which acts as a direct electron counter. By correlating the electron count to the position of the scanning beam (known as the "probe"), the transmitted component of the beam may be measured. The non-transmitted components may be obtained either by beam tilting or by the use of annular dark field detectors.
Fundamentally, TEM and STEM are linked via Helmholtz reciprocity. A STEM is a TEM in which the electron source and observation point have been switched relative to the direction of travel of the electron beam. See the ray diagrams in the figure on the right. The STEM instrument effectively relies on the same optical set-up as a TEM, but operates by flipping the direction of travel of the electrons (or reversing time) during operation of a TEM. Rather than using an aperture to control detected electrons, as in TEM, a STEM uses various detectors with collection angles that may be adjusted depending on which electrons the user wants to capture.
Low-voltage electron microscope
A low-voltage electron microscope (LVEM) is operated at a relatively low electron accelerating voltage, between 5 and 25 kV. Some of these can be a combination of SEM, TEM and STEM in a single compact instrument. Low voltage increases image contrast, which is especially important for biological specimens. This increase in contrast significantly reduces, or even eliminates, the need to stain. Resolutions of a few nm are possible in TEM, SEM and STEM modes. The low energy of the electron beam means that permanent magnets can be used as lenses, and thus a miniature column that does not require cooling can be used.
Cryo-TEM
Cryogenic transmission electron microscopy (Cryo-TEM) uses a TEM with a specimen holder capable of maintaining the specimen at liquid nitrogen or liquid helium temperatures. This allows imaging of specimens prepared in vitreous ice, the preferred preparation technique for imaging individual molecules or macromolecular assemblies, imaging of vitrified solid–electrolyte interfaces, and imaging of materials that are volatile in high vacuum at room temperature, such as sulfur.
Environmental/in-situ TEM
In-situ experiments may also be conducted in TEM using differentially pumped sample chambers, or specialized holders. Types of in-situ experiments include studying nanomaterials, biological specimens, chemical reactions of molecules, liquid-phase electron microscopy, and material deformation testing.
High temperature in-situ TEM
Many phase transformations occur during heating. Additionally, coarsening and grain growth, along with other diffusion-related processes occur more rapidly at elevated temperatures, where kinetics are improved, allowing for the observation of related phenomena under transmission electron microscopy within reasonable time scales. This also allows for the observation of phenomena that occur at elevated temperatures and disappear or are not uniformly preserved in ex-situ samples.
High-temperature TEM introduces various additional challenges which must be addressed in the mechanics of high-temperature holders, including but not limited to drift correction, temperature measurement, and the decreased spatial resolution that accompanies more complex holders.
Sample drift in the TEM is linearly proportional to the temperature differential between the room and the holder. With temperatures as high as 1500 °C in modern holders, samples may experience significant drift and vertical displacement (bulging), requiring continuous focus or stage adjustments and inducing resolution loss and mechanical drift. Individual labs and manufacturers have developed software coupled with advanced cooling systems to correct for thermal drift based on the predicted temperature in the sample chamber. These systems often take from 30 minutes to many hours for sample shifts to stabilize. While significant progress has been made, no universal TEM attachment has been developed to account for drift at elevated temperatures.
An additional challenge of many of these specialized holders is knowing the local sample temperature. Many high-temperature holders utilize a tungsten filament to locally heat the sample. Ambiguity in the temperature of furnace heaters (W wire) with thermocouples arises from the thermal contact between the furnace and the TEM grid, complicated by temperature gradients along the sample caused by the varying thermal conductivity of different samples and grid materials. For different holders, both commercial and lab-made, different methods of temperature calibration are available. Manufacturers such as Gatan use IR pyrometry to measure temperature gradients over the entire sample. Another calibration method is Raman spectroscopy, which measures the local temperature of Si powder on electron-transparent windows and quantitatively calibrates the IR pyrometry; such measurements have shown accuracy within 5%. Research laboratories have also performed their own calibrations on commercial holders. Researchers at NIST utilized Raman spectroscopy to map the temperature profile of a sample on a TEM grid and achieve very precise measurements. Similarly, a research group in Germany utilized X-ray diffraction to measure slight shifts in lattice spacing caused by changes in temperature, in order to back-calculate the exact temperature in the holder; this process required careful calibration and exact TEM optics. Other examples include the use of EELS to measure local temperature via the change of gas density, and resistivity changes.
Optimal resolution in a TEM is achieved when spherical aberrations are corrected at the objective lens. However, due to the geometry of most TEMs, inserting large in-situ holders requires the user to compromise the objective lens and accept increased spherical aberration. There is therefore a trade-off between the width of the pole-piece gap and spatial resolution below 0.1 nm. Research groups at various institutions have tried to overcome spherical aberrations through the use of monochromators to achieve 0.05 nm resolution with a 5 mm pole-piece gap.
In-situ mechanical TEM
The high resolution of TEM allows for monitoring the sample in question on a length scale ranging from hundreds of nanometres down to several ångströms. This allows for the visualization of both elastic and plastic deformation via strain fields, as well as the motion of crystallographic defects such as dislocations, and lattice distortions. By simultaneously observing deformation phenomena and measuring the mechanical response in situ, it is possible to connect nano-mechanical testing information to models that describe both the subtlety and complexity of how materials respond to stress and strain. The material properties and data accuracy obtained from such nano-mechanical tests are largely determined by the mechanical straining holder being used. Current straining holders have the ability to perform tensile tests, nano-indentation, compression tests, shear tests and bending tests on materials.
Classical mechanical holders
One of the pioneers of classical holders was Heinz G.F. Wilsdorf, who conducted a tensile test inside a TEM in 1958. In a typical experiment, electron-transparent TEM samples are cut to shape and glued to a deformable grid. Advances in micromanipulators have also enabled the tensile testing of nanowires and thin films. The deformable grid attaches to the classical tensile holder, which stretches the sample using a long rigid shaft attached to a worm gear box actuated by an electric motor located in a housing outside the TEM. Typically, straining displacement rates range from 10 nm/s to 10 μm/s. Custom-made holders expanding simple straining actuation have enabled bending tests using a bending holder and shear tests using a shear sample holder. The typical measured sample properties in these experiments are yield strength, elastic modulus, shear modulus, tensile strength, bending strength, and shear strength. In order to study the temperature-dependent mechanical properties of TEM samples, the holder can be cooled through a cold finger connected to a liquid nitrogen reservoir. For high-temperature experiments, the TEM sample can also be heated through a miniaturized furnace or a laser that can typically reach 1000 °C.
Nano-indentation holders
Nano-indentation holders perform a hardness test on the material in question by pressing a hard tip into a polished flat surface and measuring the applied force and the resulting displacement of the TEM sample through a change in capacitance between a reference plate and a movable electrostatic plate attached to the tip. The typical measured sample properties are hardness and elastic modulus. Although nano-indentation has been possible since the early 1980s, its investigation using a TEM was first reported in 2001, where an aluminum sample deposited on a silicon wedge was investigated. For nano-indentation experiments, TEM samples are typically shaped as wedges using a tripod polisher or an H-bar window, or as micro- or nanopillars using a focused ion beam, to create enough space for a tip to be pressed at the desired electron-transparent location. The indenter tips are typically flat-punch, pyramidal, or wedge-shaped, elongated in the z-direction. Pyramidal tips offer high precision on the order of 10 nm but suffer from sample slip, while wedge indenters have greater contact to prevent slipping but require finite element analysis to model the transmitted stress, since the high contact area with the TEM sample makes this almost a compression test.
Micro-electro-mechanical systems (MEMS)
Micro-electro-mechanical systems (MEMS) based holders provide a cheap and customizable platform for conducting mechanical tests on samples that were previously difficult to work with, such as micropillars, nanowires, and thin films. Passive MEMS are used as simple push-to-pull devices for in-situ mechanical tests. Typically, a nano-indentation holder is used to apply a pushing force at the indentation site. Using a geometry of arms, this pushing force translates into a pulling force on a pair of tensile pads to which the sample is attached. Thus, a compression applied on the outside of the MEMS translates to a tension in the central gap where the TEM sample is located. The resulting force–displacement curve needs to be corrected by performing the same test on an empty MEMS without the TEM sample, to account for the stiffness of the empty MEMS (see the sketch below). The dimensions and stiffness of the MEMS can be modified to perform tensile tests on different-sized samples with different loads. To smooth the actuation process, active MEMS have been developed with built-in actuators and sensors. These devices work by applying a stress using electrical power and measuring strain using capacitance variations. Electrostatically actuated MEMS have also been developed to accommodate very low applied forces in the 1–100 nN range.
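The stiffness correction mentioned above can be illustrated with a short calculation: if the empty device behaves as a linear spring in parallel with the sample, the sample force is the total measured force minus the empty-device contribution at the same displacement. The Python sketch below applies that subtraction; the linear-spring assumption, the stiffness value and the synthetic force–displacement data are all illustrative.

import numpy as np

# Synthetic data (illustrative): displacement in micrometres, forces in micronewtons.
displacement_um = np.linspace(0.0, 2.0, 21)
force_with_sample_un = 12.0 * displacement_um + 3.0 * displacement_um**2   # device + sample
force_empty_un = 12.0 * displacement_um                                    # empty push-to-pull device

# Empty-device stiffness from a linear fit, then subtract its contribution point by point.
k_empty = np.polyfit(displacement_um, force_empty_un, 1)[0]                # ~12 uN/um here
force_sample_un = force_with_sample_un - k_empty * displacement_um

print(f"empty-device stiffness ~ {k_empty:.1f} uN/um")
print("sample force at 2 um displacement ~", round(force_sample_un[-1], 1), "uN")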
Much of the current research focuses on developing sample holders that can perform mechanical tests while creating an environmental stimulus, such as temperature changes, variable strain rates, and different gas environments. In addition, the emergence of high-resolution detectors is allowing researchers to monitor dislocation motion and interactions with other defects, pushing the limits of sub-nanometre strain measurements. In-situ mechanical TEM measurements are routinely coupled with other standard TEM techniques, such as EELS and XEDS, to reach a comprehensive understanding of the sample structure and properties.
Aberration corrected TEM
Modern research TEMs may include aberration correctors, to reduce the amount of distortion in the image. Incident beam monochromators may also be used which reduce the energy spread of the incident electron beam to less than 0.15 eV. Major aberration corrected TEM manufacturers include JEOL, Hitachi High-technologies, FEI Company, and NION.
Ultrafast and dynamic TEM
It is possible to reach temporal resolution far beyond the readout rate of electron detectors with the use of pulsed electrons. Pulses can be produced either by modifying the electron source to enable laser-triggered photoemission or by installing an ultrafast beam blanker. This approach is termed ultrafast transmission electron microscopy when stroboscopic pump–probe illumination is used: an image is formed by the accumulation of many ultrashort electron pulses (typically of hundreds of femtoseconds) with a fixed time delay between the arrival of the electron pulse and the sample excitation. On the other hand, the use of single or short sequences of electron pulses with a sufficient number of electrons to form an image from each pulse is called dynamic transmission electron microscopy. Temporal resolution down to hundreds of femtoseconds and spatial resolution comparable to that available with a Schottky field emission source are possible in ultrafast TEM. Using the photon-gating approach, the temporal resolution of ultrafast electron microscopy reaches 30 fs, allowing the imaging of ultrafast atomic and electron dynamics of matter. However, the technique can only image reversible processes that can be reproducibly triggered millions of times. Dynamic TEM can resolve irreversible processes down to tens of nanoseconds and tens of nanometres.
The technique was pioneered in the early 2000s in laboratories in Germany (Technische Universität Berlin) and in the USA (Caltech and Lawrence Livermore National Laboratory). Ultrafast TEM and dynamic TEM have made possible real-time investigation of numerous physical and chemical phenomena at the nanoscale.
An interesting variant of the ultrafast transmission electron microscopy technique is photon-induced near-field electron microscopy (PINEM), which is based on the inelastic coupling between electrons and photons in the presence of a surface or a nanostructure. This method allows one to investigate time-varying nanoscale electromagnetic fields in an electron microscope, as well as to dynamically shape the wave properties of the electron beam.
Limitations
There are a number of drawbacks to the TEM technique. Many materials require extensive sample preparation to produce a sample thin enough to be electron transparent, which makes TEM analysis a relatively time-consuming process with a low throughput of samples. The structure of the sample may also be changed during the preparation process. Also, the field of view is relatively small, raising the possibility that the region analyzed may not be characteristic of the whole sample. There is potential for the sample to be damaged by the electron beam, particularly in the case of biological materials.
Resolution limits
The limit of resolution obtainable in a TEM may be described in several ways, and is typically referred to as the information limit of the microscope. One commonly used value is a cut-off value of the contrast transfer function, a function that is usually quoted in the frequency domain to define the reproduction of spatial frequencies of objects in the object plane by the microscope optics. A cut-off frequency, qmax, for the transfer function may be approximated with the following equation, where Cs is the spherical aberration coefficient and λ is the electron wavelength:

1/qmax = 0.66 (Cs λ³)^(1/4)
For a 200 kV microscope, with partly corrected spherical aberrations ("to the third order") and a Cs value of 1 μm, a theoretical cut-off value might be 1/qmax = 42 pm. The same microscope without a corrector would have Cs = 0.5 mm and thus a 200 pm cut-off. The spherical aberrations are suppressed to the third or fifth order in the "aberration-corrected" microscopes. Their resolution is however limited by electron source geometry and brightness and chromatic aberrations in the objective lens system.
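The quoted numbers can be reproduced with a few lines of Python; the wavelength helper repeats the standard relativistic formula, and the 0.66 prefactor in the cut-off expression follows the approximation given above, so the values should be read as illustrative estimates rather than instrument specifications.

import math

def electron_wavelength_m(acc_voltage_v):
    """Relativistically corrected de Broglie wavelength of an electron accelerated through V volts."""
    h, m0, e, c = 6.62607015e-34, 9.1093837015e-31, 1.602176634e-19, 299792458.0
    return h / math.sqrt(2 * m0 * e * acc_voltage_v * (1 + e * acc_voltage_v / (2 * m0 * c**2)))

def information_cutoff_m(cs_m, acc_voltage_v):
    """Approximate real-space cut-off 1/qmax = 0.66 * (Cs * lambda^3)^(1/4)."""
    lam = electron_wavelength_m(acc_voltage_v)
    return 0.66 * (cs_m * lam**3) ** 0.25

for label, cs in (("corrected, Cs = 1 um", 1e-6), ("uncorrected, Cs = 0.5 mm", 0.5e-3)):
    print(f"{label}: 1/qmax ~ {information_cutoff_m(cs, 200e3) * 1e12:.0f} pm")  # ~42 pm and ~200 pm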
The frequency domain representation of the contrast transfer function may often have an oscillatory nature, which can be tuned by adjusting the focal value of the objective lens. This oscillatory nature implies that some spatial frequencies are faithfully imaged by the microscope, whilst others are suppressed. By combining multiple images with different spatial frequencies, the use of techniques such as focal series reconstruction can be used to improve the resolution of the TEM in a limited manner. The contrast transfer function can, to some extent, be experimentally approximated through techniques such as Fourier transforming images of amorphous material, such as amorphous carbon.
More recently, advances in aberration corrector design have been able to reduce spherical aberrations and to achieve resolution below 0.5 ångströms (50 pm) at magnifications above 50 million times. Improved resolution allows for the imaging of lighter atoms that scatter electrons less efficiently, such as lithium atoms in lithium battery materials. The ability to determine the position of atoms within materials has made the HRTEM an indispensable tool for nanotechnology research and development in many fields, including heterogeneous catalysis and the development of semiconductor devices for electronics and photonics.
Epstein–Barr virus
The Epstein–Barr virus (EBV), formally called Human gammaherpesvirus 4, is one of the nine known human herpesvirus types in the herpes family, and is one of the most common viruses in humans. EBV is a double-stranded DNA virus. Epstein–Barr virus (EBV) is the first identified oncogenic virus, or a virus that can cause cancer. EBV establishes permanent infection in humans. It causes infectious mononucleosis and is also tightly linked to many malignant diseases (cancers). Various vaccine formulations underwent testing in different animals or in humans. However, none of them were able to prevent EBV infection and no vaccine has been approved to date.
Infectious mononucleosis ("mono" or "glandular fever"), a disease caused by the virus, is characterized by extreme fatigue, fever, sore throat, and swollen lymph nodes. The virus is also associated with various non-malignant, premalignant, and malignant Epstein–Barr virus-associated lymphoproliferative diseases such as Burkitt lymphoma, hemophagocytic lymphohistiocytosis, and Hodgkin's lymphoma; non-lymphoid malignancies such as gastric cancer and nasopharyngeal carcinoma; and conditions associated with human immunodeficiency virus such as hairy leukoplakia and central nervous system lymphomas. The virus is also associated with the childhood disorders of Alice in Wonderland syndrome and acute cerebellar ataxia and, by some evidence, higher risks of developing certain autoimmune diseases, especially dermatomyositis, systemic lupus erythematosus, rheumatoid arthritis, and Sjögren's syndrome. About 200,000 cancer cases globally per year are thought to be attributable to EBV. In 2022, a large study (population of 10 million over 20 years) suggested EBV as the leading cause of multiple sclerosis, with a recent EBV infection causing an increase in the risk of developing multiple sclerosis.
Infection with EBV occurs by the oral transfer of saliva and genital secretions. Most people become infected with EBV and gain adaptive immunity. In the United States, about half of all five-year-old children and about 90% of adults have evidence of previous infection. Infants become susceptible to EBV as soon as maternal antibody protection disappears. Many children who become infected with EBV display no symptoms or the symptoms are indistinguishable from the other mild, brief illnesses of childhood. When infection occurs during adolescence or young adulthood, it causes infectious mononucleosis 35 to 50% of the time.
EBV infects B cells of the immune system and epithelial cells. Once EBV's initial lytic infection is brought under control, EBV latency persists in the individual's B cells for the rest of their life.
Virology
Structure and genome
The virus is about 122–180 nm in diameter and is composed of a double helix of deoxyribonucleic acid (DNA) which contains about 172,000 base pairs encoding 85 genes. The DNA is surrounded by a protein nucleocapsid, which is surrounded by a tegument made of protein, which in turn is surrounded by an envelope containing both lipids and surface projections of glycoproteins, which are essential to infection of the host cell. In July 2020, a team of researchers reported the first complete atomic model of the nucleocapsid of the virus. This "first complete atomic model [includes] the icosahedral capsid, the capsid-associated tegument complex (CATC) and the dodecameric portal—the viral genome translocation apparatus."
Tropism
The term viral tropism refers to which cell types EBV infects. EBV can infect different cell types, including B cells and epithelial cells.
The viral three-part glycoprotein complexes of gHgLgp42 mediate B cell membrane fusion, while the two-part complexes of gHgL mediate epithelial cell membrane fusion. EBV virions made in B cells have low numbers of gHgLgp42 complexes, because these three-part complexes interact with human-leukocyte-antigen (HLA) class II molecules present in B cells in the endoplasmic reticulum and are degraded. In contrast, EBV virions from epithelial cells are rich in the three-part complexes because these cells do not normally contain HLA class II molecules. As a consequence, EBV made from B cells is more infectious to epithelial cells, and EBV made from epithelial cells is more infectious to B cells. Viruses lacking the gp42 portion are able to bind to human B cells but are unable to infect them.
Replication cycle
Entry to the cell
EBV can infect both B cells and epithelial cells. The mechanisms for entering these two cells are different.
To enter B cells, viral glycoprotein gp350 binds to cellular receptor CD21 (also known as CR2). Then, viral glycoprotein gp42 interacts with cellular MHC class II molecules. This triggers fusion of the viral envelope with the cell membrane, allowing EBV to enter the B cell. Human CD35, also known as complement receptor 1 (CR1), is an additional attachment factor for gp350 / 220, and can provide a route for entry of EBV into CD21-negative cells, including immature B-cells. EBV infection downregulates expression of CD35.
To enter epithelial cells, viral protein BMRF-2 interacts with cellular β1 integrins. Then, viral protein gH/gL interacts with cellular αvβ6/αvβ8 integrins. This triggers fusion of the viral envelope with the epithelial cell membrane, allowing EBV to enter the epithelial cell. Unlike B-cell entry, epithelial-cell entry is actually impeded by viral glycoprotein gp42.
Once EBV enters the cell, the viral capsid dissolves and the viral genome is transported to the cell nucleus.
Lytic replication
The lytic cycle, or productive infection, results in the production of infectious virions. EBV can undergo lytic replication in both B cells and epithelial cells. In B cells, lytic replication normally only takes place after reactivation from latency. In epithelial cells, lytic replication often directly follows viral entry.
For lytic replication to occur, the viral genome must be linear. The latent EBV genome is circular, so it must linearize in the process of lytic reactivation. During lytic replication, viral DNA polymerase is responsible for copying the viral genome. This contrasts with latency, in which host-cell DNA polymerase copies the viral genome.
Lytic gene products are produced in three consecutive stages: immediate-early, early, and late.
Immediate-early lytic gene products act as transactivators, enhancing the expression of later lytic genes. Immediate-early lytic gene products include BZLF1 (also known as Zta or EB1, which encodes the protein ZEBRA) and BRLF1 (which encodes the protein Rta).
Early lytic gene products have many more functions, such as replication, metabolism, and blockade of antigen processing. Early lytic gene products include BNLF2.
Finally, late lytic gene products tend to be proteins with structural roles, such as VCA, which forms the viral capsid. Other late lytic gene products, such as BCRF1, help EBV evade the immune system.
EGCG, a polyphenol in green tea, has been shown in one study to inhibit EBV spontaneous lytic infection at the DNA, gene transcription, and protein levels in a time- and dose-dependent manner; it suppresses the expression of the EBV lytic genes Zta and Rta and of the early antigen complex EA-D (which is induced by Rta), whereas the highly stable EBNA-1 gene, expressed across all stages of EBV infection, is unaffected. Experiments with specific pathway inhibitors suggest that the Ras/MEK/MAPK pathway contributes to EBV lytic infection through BZLF1 and the PI3-K pathway through BRLF1; inhibition of the latter completely abrogates the ability of a BRLF1 adenovirus vector to induce the lytic form of EBV infection. Additionally, the activation of some genes but not others is being studied to determine how to induce immune destruction of latently infected B cells by use of either TPA or sodium butyrate.
Latency
Unlike lytic replication, latency does not result in production of virions.
Instead, the EBV genome resides in the cell nucleus as circular DNA, an episome, and is copied by host-cell DNA polymerase. It persists in the individual's B cells. Epigenetic changes, such as DNA methylation and changes to cellular chromatin constituents, suppress the majority of the viral genes in latently infected cells. Only a portion of EBV's genes are expressed, and these support the latent state of the virus.
Latent EBV expresses its genes in one of three patterns, known as latency programs. EBV can latently persist within B cells and epithelial cells, but different latency programs are possible in the two types of cell.
EBV can exhibit one of three latency programs: Latency I, Latency II, or Latency III. Each latency program leads to the production of a limited, distinct set of viral proteins and viral RNAs.
Also, a program is postulated in which all viral protein expression is shut off (Latency 0).
Within B cells, all three latency programs are possible. EBV latency within B cells usually progresses from Latency III to Latency II to Latency I. Each stage of latency uniquely influences B cell behavior. Upon infecting a resting naïve B cell, EBV enters Latency III. The set of proteins and RNAs produced in Latency III transforms the B cell into a proliferating blast (also known as B cell activation). Later, the virus restricts its gene expression and enters Latency II. The more limited set of proteins and RNAs produced in Latency II induces the B cell to differentiate into a memory B cell. Finally, EBV restricts gene expression even further and enters Latency I. Expression of EBNA-1 allows the EBV genome to replicate when the memory B cell divides.
Within epithelial cells, only Latency II is possible.
In primary infection, EBV replicates in oropharyngeal epithelial cells and establishes Latency III, II, and I infections in B lymphocytes. EBV latent infection of B lymphocytes is necessary for virus persistence, subsequent replication in epithelial cells, and release of infectious virus into saliva. EBV Latency III and II infections of B lymphocytes, Latency II infection of oral epithelial cells, and Latency II infection of NK or T cells can result in malignancies, marked by uniform EBV genome presence and gene expression.
Reactivation
Latent EBV in B cells can be reactivated to switch to lytic replication. This is known to happen in vivo, but what triggers it is not known precisely. In vitro, latent EBV in B cells can be reactivated by stimulating the B cell receptor, so it is likely reactivation in vivo takes place after latently infected B cells respond to unrelated infections.
Transformation of B lymphocytes
EBV infection of B lymphocytes leads to "immortalization" of these cells, meaning that the virus causes them to continue dividing indefinitely. Normally, cells have a limited lifespan and eventually die, but when EBV infects B lymphocytes, it alters their behavior, making them "immortal" in the sense that they can keep dividing and surviving much longer than usual. This allows the virus to persist in the body for the individual's lifetime.
When EBV infects B cells in vitro, lymphoblastoid cell lines eventually emerge that are capable of indefinite growth. The growth transformation of these cell lines is the consequence of viral protein expression.
EBNA-2, EBNA-3C, and LMP-1 are essential for transformation, whereas EBNA-LP and the EBERs are not.
Following natural infection with EBV, the virus is thought to execute some or all of its repertoire of gene expression programs to establish a persistent infection. Given the initial absence of host immunity, the lytic cycle produces large numbers of virions to infect other (presumably) B-lymphocytes within the host.
The latent programs reprogram and subvert infected B-lymphocytes to proliferate and bring infected cells to the sites at which the virus presumably persists. Eventually, when host immunity develops, the virus persists by turning off most (or possibly all) of its genes and only occasionally reactivates and produces progeny virions. A balance is eventually struck between occasional viral reactivation and host immune surveillance removing cells that activate viral gene expression. EBV's manipulation of the host cell's epigenetics can alter the cell's gene expression and leave oncogenic phenotypes; this modification increases the host's likelihood of developing EBV-related cancer. EBV-related cancers are unusual in that they frequently carry epigenetic changes but relatively few mutations.
The site of persistence of EBV may be bone marrow. EBV-positive patients who have had their own bone marrow replaced with bone marrow from an EBV-negative donor are found to be EBV-negative after transplantation.
Latent antigens
All EBV nuclear proteins are produced by alternative splicing of a transcript starting at either the Cp or Wp promoters at the left end of the genome (in the conventional nomenclature). The genes are ordered EBNA-LP/EBNA-2/EBNA-3A/EBNA-3B/EBNA-3C/EBNA-1 within the genome.
The initiation codon of the EBNA-LP coding region is created by an alternate splice of the nuclear protein transcript. In the absence of this initiation codon, EBNA-2/EBNA-3A/EBNA-3B/EBNA-3C/EBNA-1 will be expressed depending on which of these genes is alternatively spliced into the transcript.
Protein/genes
Subtypes of EBV
EBV can be divided into two major types, EBV type 1 and EBV type 2. These two subtypes have different EBNA-3 genes. As a result, the two subtypes differ in their transforming capabilities and reactivation ability. Type 1 is dominant throughout most of the world, but the two types are equally prevalent in Africa. One can distinguish EBV type 1 from EBV type 2 by cutting the viral genome with a restriction enzyme and comparing the resulting digestion patterns by gel electrophoresis.
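As an illustration of the comparison just described, the following Python sketch "digests" two sequences at a placeholder recognition site and compares the resulting fragment-length patterns. The sequences, the six-base site, and the function names are hypothetical; a real analysis would use the full EBV genome and the specific cut position of the chosen enzyme.

```python
def digest(sequence, site):
    """Cut a linear DNA sequence at every occurrence of `site` and return
    the fragment lengths (cut position simplified to the site's start)."""
    fragments, prev = [], 0
    pos = sequence.find(site)
    while pos != -1:
        fragments.append(pos - prev)
        prev = pos
        pos = sequence.find(site, pos + 1)
    fragments.append(len(sequence) - prev)
    return [f for f in fragments if f > 0]

def same_pattern(seq_a, seq_b, site):
    """Two isolates are indistinguishable by this digest if their sorted
    fragment-length lists (the gel band pattern) match."""
    return sorted(digest(seq_a, site)) == sorted(digest(seq_b, site))

# Toy example with placeholder sequences and a placeholder recognition site.
isolate_1 = "GAATTCAAAAGAATTCCCCCGGGG"
isolate_2 = "GAATTCAAAAAAAAAAGAATTCCC"
print(digest(isolate_1, "GAATTC"))                    # [10, 14]
print(same_pattern(isolate_1, isolate_2, "GAATTC"))   # False
```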
Detection
Epstein–Barr virus-encoded small RNAs (EBERs) are by far the most abundant EBV products transcribed in cells infected by EBV. They are commonly used as targets for the detection of EBV in histological tissues.
Clinically, the most common way to detect the presence of EBV is the enzyme-linked immunosorbent assay (ELISA). Antibodies (IgM and IgG) to proteins encoded by the EBV DNA are detected. Direct detection of EBV genome presence via polymerase chain reaction (PCR) is seldom done, as this method says nothing about the immune system's reaction to the virus. EBV viral load does not correlate well with clinical symptoms of infection.
Role in disease
| Biology and health sciences | Specific viruses | Health |
214571 | https://en.wikipedia.org/wiki/Catabolism | Catabolism | Catabolism is the set of metabolic pathways that breaks down molecules into smaller units that are either oxidized to release energy or used in other anabolic reactions. Catabolism breaks down large molecules (such as polysaccharides, lipids, nucleic acids, and proteins) into smaller units (such as monosaccharides, fatty acids, nucleotides, and amino acids, respectively). Catabolism is the breaking-down aspect of metabolism, whereas anabolism is the building-up aspect.
Cells use the monomers released from breaking down polymers to either construct new polymer molecules or degrade the monomers further to simple waste products, releasing energy. Cellular wastes include lactic acid, acetic acid, carbon dioxide, ammonia, and urea. The formation of these wastes is usually an oxidation process involving a release of chemical free energy, some of which is lost as heat, but the rest of which is used to drive the synthesis of adenosine triphosphate (ATP). This molecule acts as a way for the cell to transfer the energy released by catabolism to the energy-requiring reactions that make up anabolism.
Catabolism is a destructive metabolism and anabolism is a constructive metabolism. Catabolism, therefore, provides the chemical energy necessary for the maintenance and growth of cells. Examples of catabolic processes include glycolysis, the citric acid cycle, the breakdown of muscle protein in order to use amino acids as substrates for gluconeogenesis, the breakdown of fat in adipose tissue to fatty acids, and oxidative deamination of neurotransmitters by monoamine oxidase.
Catabolic hormones
There are many signals that control catabolism. Most of the known signals are hormones and the molecules involved in metabolism itself. Endocrinologists have traditionally classified many of the hormones as anabolic or catabolic, depending on which part of metabolism they stimulate. The so-called classic catabolic hormones known since the early 20th century are cortisol, glucagon, and adrenaline (and other catecholamines). In recent decades, many more hormones with at least some catabolic effects have been discovered, including cytokines, orexin (known as hypocretin), and melatonin.
Etymology
The word catabolism comes from Neo-Latin, with roots from the Greek κάτω (kato, "downward") and βάλλειν (ballein, "to throw").
| Biology and health sciences | Metabolic processes | Biology |
214645 | https://en.wikipedia.org/wiki/Varnish | Varnish | Varnish is a clear transparent hard protective coating or film. It is not to be confused with wood stain. It usually has a yellowish shade due to the manufacturing process and materials used, but it may also be pigmented as desired. It is sold commercially in various shades.
Varnish is primarily used as a wood finish where, stained or not, the distinctive tones and grains in the wood are intended to be visible. Varnish finishes are naturally glossy, but satin/semi-gloss and flat sheens are available.
History
The word "varnish" comes from Mediaeval Latin vernix, meaning odorous resin, perhaps derived from Middle Greek berōnikón or beroníkē, meaning amber or amber-colored glass. A false etymology traces the word to the Greek Berenice, the ancient name of modern Benghazi in Libya, where the first varnishes in the Mediterranean area were supposedly used and where resins from the trees of now-vanished forests were sold.
Early varnishes were developed by mixing resin—pine pitch, for example—with a solvent and applying them with a brush to get the golden and hardened effect one sees in today's varnishes. Varnishing was a technique well known in ancient Egypt.
Varnishing is also recorded in the history of East and South Asia; in India, China and Japan, the practice of lacquer work, a species of varnish application, was known at a very early date. Tang-era Chinese chemical experimentation produced varnishes for clothes and weapons, with complex formulas applied to the silk clothes of underwater divers and a cream designated for polishing bronze mirrors, among other preparations.
Safety
Varnishes and drying oils are flammable in their liquid state, and materials used to apply them may spontaneously combust, so many product containers list safety precautions for storage and disposal. Many varnishes contain plant-derived oils (e.g. linseed oil), synthetic oils (e.g. polyurethanes) or resins as their binder in combination with organic solvents. All drying oils, certain alkyds (including paints), and many polyurethanes produce heat (an exothermic reaction) during the curing process. Thus, oil-soaked rags and paper can smolder and ignite into flames, even several hours after use, if proper precautions are not taken. Therefore, many manufacturers list proper disposal practices for rags and other items used to apply the finish, such as disposal in a water-filled container.
Components of varnish
Varnish is traditionally a combination of a drying oil, a resin, and a thinner or solvent plus a metal drier to accelerate the drying. However, different types of varnish have different components. After being applied, the film-forming substances in varnishes either harden directly, as soon as the solvent has fully evaporated, or harden after evaporation of the solvent through curing processes, primarily chemical reaction between oils and oxygen from the air (autoxidation) and chemical reactions between components of the varnish.
Resin varnishes dry by evaporation of the solvent and harden quickly on drying. Acrylic and waterborne varnishes dry by evaporation of the water but will experience an extended curing period for evaporation of organic solvents absorbed on the latex particles, and possibly chemical curing of the particles. Oil, polyurethane, and epoxy varnishes remain liquid even after evaporation of the solvent but quickly begin to cure, undergoing successive stages from liquid or syrupy, to tacky or sticky, to dry gummy, to dry to the touch, to hard.
Environmental factors such as heat and humidity play a large role in the drying and curing times of varnishes. In classic varnish the cure rate depends on the type of oil used and, to some extent, on the ratio of oil to resin. The drying and curing time of all varnishes may be sped up by exposure to an energy source such as sunlight, ultraviolet light, or heat.
Drying oil
There are many different types of drying oils, including linseed oil, tung oil, and walnut oil. These contain high levels of polyunsaturated fatty acids. Drying oils cure through an exothermic reaction between the polyunsaturated portion of the oil and oxygen from the air. Originally, the term "varnish" referred to finishes that were made entirely of resin dissolved in suitable solvents, either ethanol (alcohol) or turpentine. The advantage to finishes in previous centuries was that resin varnishes had a very rapid cure rate compared to oils; in most cases they are cured practically as soon as the solvent has fully evaporated. By contrast, untreated or "raw" oils may take weeks or months to cure, depending on ambient temperature and other environmental factors. In modern terms, boiled or partially polymerized drying oils with added siccatives or dryers (chemical catalysts) have cure times of less than 24 hours. However, certain non-toxic by-products of the curing process are emitted from the oil film even after it is dry to the touch and over a considerable period of time. It has long been a tradition to combine drying oils with resins to obtain favourable features of both substances.
Resin
Many different kinds of resins may be used to create a varnish. Natural resins used for varnish include amber, kauri gum, dammar, copal, rosin (colophony or pine resin), sandarac, balsam, elemi, mastic, and shellac. Varnish may also be created from synthetic resins such as acrylic, alkyd, or polyurethane. A varnish formula might not contain any added resins at all since drying oils can produce a varnish effect by themselves.
Solvent
Originally, turpentine or alcohol was used to dissolve the resin and thin the drying oils. The invention of petroleum distillates has led to turpentine substitutes such as white spirit, paint thinner, and mineral spirit. Modern synthetic varnishes may be formulated with water instead of hydrocarbon solvents.
Types
Violin
Violin varnishing is a multi-step process involving some or all of the following: primer, sealer, ground, color coats, and clear topcoat. Some systems use a drying oil varnish as described below, while others use spirit varnish made of resin(s) dissolved in alcohol. Touchup in repair or restoration is only done with solvent based varnish.
Drying oil such as walnut oil or linseed oil may be used in combination with amber, copal, rosin or other resins. Traditionally the oil is prepared by cooking or exposure to air and sunlight, but modern stand oil is prepared by heating oil at high temperature without oxygen. The refined resin is sometimes available as a translucent solid and is then "run" by cooking or melting it in a pot over heat without solvents. The thickened oil and prepared resin are then cooked together and thinned with turpentine (away from open flame) into a brushable solution. The ingredients and processes of violin varnish are very diverse, with some highly regarded old examples showing defects (e.g. cracking, crazing) associated with incompatible varnish components.
Some violin finishing systems use vernice bianca (egg white and gum arabic) as a sealer or ground. There is also evidence that finely powdered minerals, possibly volcanic ash, were used in some grounds. Some violins made in the late 18th century used ox blood to create a very deep-red coloration. Today this varnish has faded to a very warm, dark orange.
Resin
Most resin or gum varnishes consist of a natural, plant- or insect-derived substance dissolved in a solvent, called spirit varnish or solvent varnish. The solvent may be alcohol, turpentine, or petroleum-based. Some resins are soluble in both alcohol and turpentine. Generally, petroleum solvents, i.e. mineral spirits or paint thinner, can substitute for turpentine. The resins include amber, dammar, copal, rosin, sandarac, elemi, benzoin, mastic, balsam, shellac, and a multitude of lacquers.
Synthetic resins such as phenolic resin may be employed as a secondary component in certain varnishes and paints.
Over centuries, many recipes were developed which involved the combination of resins, oils, and other ingredients such as certain waxes. These were believed to impart special tonal qualities to musical instruments and thus were sometimes carefully guarded secrets. The interaction of different ingredients is difficult to predict or reproduce, so expert finishers were often prized professionals.
Shellac
Shellac is a very widely used single-component resin varnish that is alcohol-soluble. It is not used for outdoor surfaces or where it will come into repeated contact with water, such as around a sink or bathtub. The source of shellac resin is a brittle or flaky secretion of the female lac insect, Kerria lacca, found in the forests of Assam and Thailand and harvested from the bark of the trees where she deposits it to provide a sticky hold on the trunk. Shellac is the basis of French polish, which for centuries has been the preferred finish for fine furniture. Specified "dewaxed" shellac has been processed to remove the waxy substances from original shellac and can be used as a primer and sanding-sealer substrate for other finishes such as polyurethanes, alkyds, oils, and acrylics.
Prepared shellac is typically available in "clear" and "amber" (or "orange") varieties, generally as "three-pound cut" or three pounds dry shellac to one US gallon of alcohol. Other natural color shades such as ruby and yellow are available from specialty pigment or woodworker's supply outlets. Dry shellac is available as refined flakes, "sticklac," "button lac," or "seedlac." "White pigmented" shellac primer paint is widely available in retail outlets, billed as a fast-drying interior primer "problem solver", in that it adheres to a variety of surfaces and seals off odors and smoke stains. Shellac clean-up may be done either with pure alcohol or with ammonia cleansers.
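The "pound cut" figures above lend themselves to simple arithmetic. The Python sketch below assumes only the standard definition of a cut as pounds of dry shellac per US gallon of alcohol; the function names and the example quantities are illustrative, not a recipe from any particular manufacturer.

```python
def alcohol_for_cut(shellac_lb, cut_lb_per_gal):
    """US gallons of alcohol needed to dissolve `shellac_lb` pounds of dry
    shellac at the requested cut (pounds of shellac per gallon of alcohol)."""
    return shellac_lb / cut_lb_per_gal

def extra_alcohol_to_thin(shellac_lb, current_cut, target_cut):
    """Additional gallons of alcohol needed to thin an existing batch from
    `current_cut` down to a lighter `target_cut`."""
    return shellac_lb / target_cut - shellac_lb / current_cut

# Example: 3 lb of flakes mixed as a three-pound cut, later thinned to a
# one-pound cut for use as a sanding sealer.
print(alcohol_for_cut(3, 3))           # 1.0 gallon of alcohol to start
print(extra_alcohol_to_thin(3, 3, 1))  # 2.0 additional gallons to thin
```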
Alkyd
Modern commercially produced varnishes employ some form of alkyd for producing a protective film. Alkyds have good solvent, moisture and UV light resistance. Alkyds are chemically modified vegetable oils which operate well in a wide range of conditions and can be engineered to speed up the cure rate and thus harden faster. Usually this is by the use of metal salt driers such as cobalt salts. Better (and more expensive) exterior varnishes employ alkyds made from high performance oils and contain UV-absorbers; this improves gloss-retention and extends the lifetime of the finish. Various resins may also be combined with alkyds as part of the formula for typical "oil" varnishes that are commercially available.
Spar varnish
Spar varnish (also called marine varnish or yacht varnish) was originally intended for use on ship or boat spars, to protect the timber from the effects of sea and weather. Spars bend under the load of their sails. The primary requirements were water resistance and also elasticity, so as to remain adhering as the spars flexed. Elasticity was a pre-condition for weatherproofing too, as a finish that cracked would then allow water through, even if the remaining film was impermeable. Appearance and gloss was of relatively low value. Modified tung oil and phenolic resins are often used.
When first developed, no varnishes had good UV-resistance. Even after more modern synthetic resins did become resistant, a true spar varnish maintained its elasticity above other virtues, even if this required a compromise in its UV-resistance. Spar varnishes are thus not necessarily the best choice for outdoor woodwork that does not need to bend in service.
Despite this, the widespread perception of "marine products" as "tough" led to domestic outdoor varnishes being branded as "Spar varnish" and sold on the virtue of their weather- and UV-resistance. These claims may be more or less realistic, depending on individual products. Only relatively recently have spar varnishes been available that can offer both effective elasticity and UV-resistance.
Drying oils
Drying oils, such as linseed and tung oil, are not true varnishes though often in modern terms they accomplish the same thing.
Polyurethane
Polyurethane varnishes are typically hard, abrasion-resistant, and durable coatings. They are popular for hardwood floors but are considered by some wood finishers to be difficult or unsuitable for finishing furniture or other detailed pieces. Polyurethanes are comparable in hardness to certain alkyds but generally form a tougher film. Compared to simple oil or shellac varnishes, polyurethane varnish forms a harder, decidedly tougher and more waterproof film. However, a thick film of ordinary polyurethane may de-laminate if subjected to heat or shock, fracturing the film and leaving white patches. This tendency increases with long exposure to sunlight or when it is applied over soft woods like pine. This is also in part due to polyurethane's lesser penetration into the wood. Various priming techniques are employed to overcome this problem, including the use of certain oil varnishes, specified "dewaxed" shellac, clear penetrating epoxy sealer, or "oil-modified" polyurethane designed for the purpose. Polyurethane varnish may also lack the "hand-rubbed" lustre of drying oils such as linseed or tung oil; in contrast, however, it is capable of a much faster and higher film build, accomplishing in two coats what may require multiple applications of oil. Polyurethane may also be applied over a straight oil finish, but because of the relatively slow curing time of oils, the emission of certain chemical byproducts, and the need for exposure to oxygen from the air, care must be taken that the oils are sufficiently cured to accept the polyurethane. One of the disadvantages of a polyurethane-based varnish is the tendency to yellow over time. This is because the hydroxyl groups of a regular alkyd are reacted with TDI to produce a urethane-alkyd. This introduces a high degree of aromaticity and hence tendency to yellow.
Unlike drying oils and alkyds which cure after evaporation of the solvent and upon reaction with oxygen from the air, true polyurethane coatings cure after evaporation of the solvent and then either by a variety of reactions of chemicals within the original mix, or by reaction with moisture from the air. Certain polyurethane products are "hybrids" and combine different aspects of their parent components. "Oil-modified" polyurethanes, whether water-borne or solvent-borne, are currently the most widely used wood floor finishes.
Exterior use of polyurethane varnish may be problematic due to its heightened susceptibility to deterioration through ultra-violet light exposure. All clear or translucent varnishes, and indeed all film-polymer coatings (e.g. paint, stain, epoxy, synthetic plastic, etc.) are susceptible to this damage in varying degrees. Pigments in paints and stains protect against UV damage. UV-absorbers are added to polyurethane and other varnishes (e.g. spar varnish) to work against UV damage but are decreasingly effective over the course of 2–4 years, depending on the quantity and quality of UV-absorbers added, as well as the severity and duration of sun exposure. Water exposure, humidity, temperature extremes, and other environmental factors affect all finishes.
Lacquer
The word lacquer refers to quick-drying, solvent-based varnishes or paints. Although their names may be similarly derived, lacquer is not the same as shellac and is not dissolved in alcohol. Lacquer is dissolved in lacquer thinner, which is a highly flammable solvent typically containing butyl acetate and xylene or toluene. Lacquer is typically sprayed on, within a spray booth that evacuates overspray and minimizes the risk of combustion.
The rule of thumb is that a clear wood finish formulated to be sprayed is a lacquer, but if it is formulated to be brushed on then it is a varnish. Thus, by far most pieces of wooden furniture are lacquered.
Lacquer may be considered different from varnish because it can be re-dissolved later by a solvent (such as the one it was dissolved in when it was applied) and does not chemically change to a solid like other varnishes.
Acrylic
Acrylic resin varnishes are typically water-borne varnishes with the lowest refractive index of all finishes and high transparency. They resist yellowing. Acrylics have the advantage of water clean-up and lack of solvent fumes, but typically do not penetrate into wood as well as oils. They sometimes lack the brushability and self-leveling qualities of solvent-based varnishes. Generally they have good UV-resistance.
In the art world, varnishes offer dust-resistance and a harder surface than bare paint – they sometimes have the benefit of ultraviolet light resistors, which help protect artwork from fading in exposure to light. Acrylic varnish should be applied using an isolation coat (a permanent, protective barrier between the painting and the varnish, preferably a soft, glossy gel medium) to make varnish removal and overall conservation easier. Acrylic varnishes used for such a final removable art protection layer are typically mineral-spirit–based acrylic, rather than water-based.
Two-part epoxy
Various epoxy resin systems have been formulated as varnishes or floor finishes whereby two components are mixed directly before application. Sometimes, the two parts are of equal volume and referred to as 1:1 but not always, as 2:1, 3:1, 4:1 and even 5:1 mixing ratios are commercially available. The individual components are usually referred to as Part A and Part B. All two-part epoxies have a pot-life or working time during which the mixed material can be used. Usually the pot-life is a matter of a few hours or less, but this is highly temperature dependent. Both water-borne and solvent-based epoxies are used. Epoxies do have a tendency to yellow over a fairly short period of time.
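Because the ratios above are stated as Part A to Part B by volume, proportioning is straightforward arithmetic. The sketch below is illustrative only; the names are placeholders, and the correct ratio for any real product must come from that product's instructions.

```python
def part_b_volume(part_a_ml, ratio_a, ratio_b):
    """Volume of Part B (hardener/catalyst) to mix with `part_a_ml` of
    Part A for a mixing ratio of ratio_a : ratio_b by volume."""
    return part_a_ml * ratio_b / ratio_a

# Example: a product specified as 4:1 by volume, mixing 200 ml of Part A.
print(part_b_volume(200, 4, 1))  # 50.0 ml of Part B
```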
Conversion
Conversion varnish is used when a fast-curing, tough, hard finish is desired, such as for kitchen cabinets and office furniture. It comes in two parts: a resin and an acid catalyst. The resin is a blend of an amino resin and an alkyd. The acid catalyst is added right before application in a set ratio determined by the manufacturer. Most produce minimal yellowing. There are, however, two downsides to this finish. The first is that as the finish cures, it gives off formaldehyde, which is toxic and carcinogenic. The second is that the finish can crack or craze if too many coats are applied.
| Technology | Artist's and drafting tools | null |
214701 | https://en.wikipedia.org/wiki/Water%20purification | Water purification | Water purification is the process of removing undesirable chemicals, biological contaminants, suspended solids, and gases from water. The goal is to produce water that is fit for specific purposes. Most water is purified and disinfected for human consumption (drinking water), but water purification may also be carried out for a variety of other purposes, including medical, pharmacological, chemical, and industrial applications. The history of water purification includes a wide variety of methods. The methods used include physical processes such as filtration, sedimentation, and distillation; biological processes such as slow sand filters or biologically active carbon; chemical processes such as flocculation and chlorination; and the use of electromagnetic radiation such as ultraviolet light.
Water purification can reduce the concentration of particulate matter including suspended particles, parasites, bacteria, algae, viruses, and fungi as well as reduce the concentration of a range of dissolved and particulate matter.
The standards for drinking water quality are typically set by governments or by international standards. These standards usually include minimum and maximum concentrations of contaminants, depending on the intended use of the water.
A visual inspection cannot determine if water is of appropriate quality. Simple procedures such as boiling or the use of a household activated carbon filter are not sufficient for treating all possible contaminants that may be present in water from an unknown source. Even natural spring water—considered safe for all practical purposes in the 19th century—must now be tested before determining what kind of treatment, if any, is needed. Chemical and microbiological analysis, while expensive, are the only way to obtain the information necessary for deciding on the appropriate method of purification.
Sources of water
Groundwater: The water emerging from some deep ground water may have fallen as rain many tens, hundreds, or thousands of years ago. Soil and rock layers naturally filter the ground water to a high degree of clarity and often, it does not require additional treatment besides adding chlorine or chloramines as secondary disinfectants. Such water may emerge as springs, artesian springs, or may be extracted from boreholes or wells. Deep ground water is generally of very high bacteriological quality (i.e., pathogenic bacteria or the pathogenic protozoa are typically absent), but the water may be rich in dissolved solids, especially carbonates and sulfates of calcium and magnesium. Depending on the strata through which the water has flowed, other ions may also be present including chloride, and bicarbonate. There may be a requirement to reduce the iron or manganese content of this water to make it acceptable for drinking, cooking, and laundry use. Primary disinfection may also be required. Where groundwater recharge is practised (a process in which river water is injected into an aquifer to store the water in times of plenty so that it is available in times of drought), the groundwater may require additional treatment depending on applicable state and federal regulations.
Upland lakes and reservoirs: Typically located in the headwaters of river systems, upland reservoirs are usually sited above any human habitation and may be surrounded by a protective zone to restrict the opportunities for contamination. Bacteria and pathogen levels are usually low, but some bacteria, protozoa or algae will be present. Where uplands are forested or peaty, humic acids can colour the water. Many upland sources have low pH, which requires adjustment.
Rivers, canals and low land reservoirs: Low land surface waters will have a significant bacterial load and may also contain algae, suspended solids and a variety of dissolved constituents.
Atmospheric water generation is a new technology that can provide high quality drinking water by extracting water from the air by cooling the air and thus condensing water vapour.
Rainwater harvesting or fog collection, which collect water from the atmosphere, can be used especially in areas with significant dry seasons and in areas which experience fog even when there is little rain.
Desalination of seawater by distillation or reverse osmosis.
Surface water: Freshwater bodies that are open to the atmosphere and are not designated as groundwater are termed surface waters.
Treatment
Goals
The goals of the treatment are to remove unwanted constituents in the water and to make it safe to drink or fit for a specific purpose in industry or medical applications. Widely varied techniques are available to remove contaminants like fine solids, micro-organisms and some dissolved inorganic and organic materials, or environmental persistent pharmaceutical pollutants. The choice of method will depend on the quality of the water being treated, the cost of the treatment process and the quality standards expected of the processed water.
The processes below are the ones commonly used in water purification plants. Some or most may not be used depending on the scale of the plant and quality of the raw (source) water.
Pretreatment
Pumping and containment – The majority of water must be pumped from its source or directed into pipes or holding tanks. To avoid adding contaminants to the water, this physical infrastructure must be made from appropriate materials and constructed so that accidental contamination does not occur.
Screening (see also screen filter) – The first step in purifying surface water is to remove large debris such as sticks, leaves, rubbish and other large particles which may interfere with subsequent purification steps. Most deep groundwater does not need screening before other purification steps.
Storage – Water from rivers may also be stored in bankside reservoirs for periods between a few days and many months to allow natural biological purification to take place. This is especially important if treatment is by slow sand filters. Storage reservoirs also provide a buffer against short periods of drought or to allow water supply to be maintained during transitory pollution incidents in the source river.
Pre-chlorination – In many plants the incoming water was chlorinated to minimise the growth of fouling organisms on the pipe-work and tanks. Because of the potential adverse quality effects (see chlorine below), this has largely been discontinued.
pH adjustment
Pure water has a pH close to 7 (neither alkaline nor acidic). Sea water can have pH values that range from 7.5 to 8.4 (moderately alkaline). Fresh water can have widely ranging pH values depending on the geology of the drainage basin or aquifer and the influence of contaminant inputs (acid rain). If the water is acidic (lower than 7), lime, soda ash, or sodium hydroxide can be added to raise the pH during water purification processes. Lime addition increases the calcium ion concentration, thus raising the water hardness. For highly acidic waters, forced draft degasifiers can be an effective way to raise the pH, by stripping dissolved carbon dioxide from the water. Making the water alkaline helps coagulation and flocculation processes work effectively and also helps to minimise the risk of lead being dissolved from lead pipes and from lead solder in pipe fittings. Sufficient alkalinity also reduces the corrosiveness of water to iron pipes. Acid (carbonic acid, hydrochloric acid or sulfuric acid) may be added to alkaline waters in some circumstances to lower the pH. Alkaline water (above pH 7.0) does not necessarily mean that lead or copper from the plumbing system will not be dissolved into the water. The ability of water to precipitate calcium carbonate to protect metal surfaces and reduce the likelihood of toxic metals being dissolved in water is a function of pH, mineral content, temperature, alkalinity and calcium concentration.
Coagulation and flocculation
One of the first steps in most conventional water purification processes is the addition of chemicals to assist in the removal of particles suspended in water. Particles can be inorganic such as clay and silt or organic such as algae, bacteria, viruses, protozoa and natural organic matter. Inorganic and organic particles contribute to the turbidity and colour of water.
The addition of inorganic coagulants such as aluminium sulfate (or alum) or iron (III) salts such as iron(III) chloride cause several simultaneous chemical and physical interactions on and among the particles. Within seconds, negative charges on the particles are neutralised by inorganic coagulants. Also within seconds, metal hydroxide precipitates of the iron and aluminium ions begin to form. These precipitates combine into larger particles under natural processes such as Brownian motion and through induced mixing which is sometimes referred to as flocculation. Amorphous metal hydroxides are known as "floc". Large, amorphous aluminium and iron (III) hydroxides adsorb and enmesh particles in suspension and facilitate the removal of particles by subsequent processes of sedimentation and filtration.
Aluminum hydroxides are formed within a fairly narrow pH range, typically 5.5 to about 7.7. Iron (III) hydroxides can form over a larger pH range, including pH levels lower than are effective for alum, typically 5.0 to 8.5.
In the literature, there is much debate and confusion over the usage of the terms coagulation and flocculation: Where does coagulation end and flocculation begin? In water purification plants, there is usually a high energy, rapid mix unit process (detention time in seconds) whereby the coagulant chemicals are added followed by flocculation basins (detention times range from 15 to 45 minutes) where low energy inputs turn large paddles or other gentle mixing devices to enhance the formation of floc. In fact, coagulation and flocculation processes are ongoing once the metal salt coagulants are added.
Organic polymers were developed in the 1960s as aids to coagulants and, in some cases, as replacements for the inorganic metal salt coagulants. Synthetic organic polymers are high molecular weight compounds that carry negative, positive or neutral charges. When organic polymers are added to water with particulates, the high molecular weight compounds adsorb onto particle surfaces and through interparticle bridging coalesce with other particles to form floc. PolyDADMAC is a popular cationic (positively charged) organic polymer used in water purification plants.
Sedimentation
Waters exiting the flocculation basin may enter the sedimentation basin, also called a clarifier or settling basin. It is a large tank with low water velocities, allowing floc to settle to the bottom. The sedimentation basin is best located close to the flocculation basin so the transit between the two processes does not permit settlement or floc break up. Sedimentation basins may be rectangular, where water flows from end to end, or circular where flow is from the centre outward. Sedimentation basin outflow is typically over a weir so only a thin top layer of water—that furthest from the sludge—exits.
In 1904, Allen Hazen showed that the efficiency of a sedimentation process was a function of the particle settling velocity, the flow through the tank and the surface area of the tank. Sedimentation tanks are typically designed within a range of overflow rates of 0.5 to 1.0 gallons per minute per square foot (or 1250 to 2500 litres per square meter per hour). In general, sedimentation basin efficiency is not a function of detention time or depth of the basin, although basin depth must be sufficient so that water currents do not disturb the sludge and settled particle interactions are promoted. As particle concentrations in the settled water increase near the sludge surface on the bottom of the tank, settling velocities can increase due to collisions and agglomeration of particles. Typical detention times for sedimentation vary from 1.5 to 4 hours and basin depths vary from 10 to 15 feet (3 to 4.5 meters).
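Hazen's result reduces, in the idealised model, to a surface-loading (overflow-rate) calculation: particles whose settling velocity exceeds the overflow rate are removed regardless of basin depth. The Python sketch below sizes a basin from a design flow and an overflow rate taken from the range quoted above; the figures and function names are illustrative and not a design recommendation.

```python
def required_surface_area_m2(flow_m3_per_h, overflow_rate_l_per_m2_h):
    """Plan area needed so that the surface loading (flow divided by area)
    does not exceed the chosen overflow rate."""
    return flow_m3_per_h * 1000.0 / overflow_rate_l_per_m2_h

def detention_time_h(flow_m3_per_h, surface_area_m2, depth_m):
    """Hydraulic detention time = basin volume / flow."""
    return surface_area_m2 * depth_m / flow_m3_per_h

# Example: 500 m3/h design flow, 1800 L/m2/h overflow rate, 4 m deep basin.
area = required_surface_area_m2(500, 1800)
print(round(area, 1))                              # about 277.8 m2
print(round(detention_time_h(500, area, 4), 2))    # about 2.22 h, within 1.5-4 h
```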
Lamella clarifiers, inclined flat plates or tubes can be added to traditional sedimentation basins to improve particle removal performance. Inclined plates and tubes drastically increase the surface area available for particles to be removed in concert with Hazen's original theory. The amount of ground surface area occupied by a sedimentation basin with inclined plates or tubes can be far smaller than a conventional sedimentation basin.
Sludge storage and removal
As particles settle to the bottom of a sedimentation basin, a layer of sludge is formed on the floor of the tank which must be removed and treated. The amount of sludge generated is significant, often 3 to 5 per cent of the total volume of water to be treated. The cost of treating and disposing of the sludge can impact the operating cost of a water treatment plant. The sedimentation basin may be equipped with mechanical cleaning devices that continually clean its bottom, or the basin can be periodically taken out of service and cleaned manually.
Floc blanket clarifiers
A subcategory of sedimentation is the removal of particulates by entrapment in a layer of suspended floc as the water is forced upward. The major advantage of floc blanket clarifiers is that they occupy a smaller footprint than conventional sedimentation. The disadvantages are that particle removal efficiency can be highly variable depending on changes in influent water quality and influent water flow rate.
Dissolved air flotation
When particles to be removed do not settle out of solution easily, dissolved air flotation (DAF) is often used. After coagulation and flocculation processes, water flows to DAF tanks where air diffusers on the tank bottom create fine bubbles that attach to the floc resulting in a floating mass of concentrated floc. The floating floc blanket is removed from the surface and clarified water is withdrawn from the bottom of the DAF tank.
Water supplies that are particularly vulnerable to unicellular algae blooms and supplies with low turbidity and high colour often employ DAF.
Filtration
After separating most floc, the water is filtered as the final step to remove remaining suspended particles and unsettled floc.
Rapid sand filters
The most common type of filter is a rapid sand filter. Water moves vertically downward through sand which often has a layer of activated carbon or anthracite coal above the sand. The top layer removes organic compounds, which contribute to taste and odour. The space between sand particles is larger than the smallest suspended particles, so simple filtration is not enough. Most particles pass through surface layers but are trapped in pore spaces or adhere to sand particles. Effective filtration extends into the depth of the filter. This property of the filter is key to its operation: if the top layer of sand were to block all the particles, the filter would quickly clog.
To clean the filter, water is passed quickly upward through the filter, opposite the normal direction (called backflushing or backwashing) to remove embedded or unwanted particles. Prior to this step, compressed air may be blown up through the bottom of the filter to break up the compacted filter media to aid the backwashing process; this is known as air scouring. This contaminated water can be disposed of, along with the sludge from the sedimentation basin, or it can be recycled by mixing with the raw water entering the plant although this is often considered poor practice since it re-introduces an elevated concentration of bacteria into the raw water.
Some water treatment plants employ pressure filters. These work on the same principle as rapid gravity filters, differing in that the filter medium is enclosed in a steel vessel and the water is forced through it under pressure.
Advantages:
Filters out much smaller particles than paper and sand filters can.
Filters out virtually all particles larger than their specified pore sizes.
They are quite thin and so liquids flow through them fairly rapidly.
They are reasonably strong and so can withstand pressure differences across them of typically 2–5 atmospheres.
They can be cleaned (back flushed) and reused.
Slow sand filters
Slow sand filters may be used where there is sufficient land and space, as the water flows very slowly through the filters. These filters rely on biological treatment processes for their action rather than physical filtration. They are carefully constructed using graded layers of sand, with the coarsest sand, along with some gravel, at the bottom and the finest sand at the top. Drains at the base convey treated water away for disinfection. Filtration depends on the development of a thin biological layer, called the zoogleal layer or Schmutzdecke, on the surface of the filter. An effective slow sand filter may remain in service for many weeks or even months, if the pretreatment is well designed, and produces water with a very low available nutrient level which physical methods of treatment rarely achieve. Very low nutrient levels allow water to be safely sent through distribution systems with very low disinfectant levels, thereby reducing consumer irritation over offensive levels of chlorine and chlorine by-products. Slow sand filters are not backwashed; they are maintained by having the top layer of sand scraped off when the flow is eventually obstructed by biological growth.
Bank filtration
In bank filtration, natural sediments in a riverbank are used to provide the first stage of contaminant filtration. While typically not clean enough to be used directly for drinking water, the water gained from the associated extraction wells is much less problematic than river water taken directly from the river.
Membrane filtration
Membrane filters are widely used for filtering both drinking water and sewage. For drinking water, membrane filters can remove virtually all particles larger than 0.2 μm, including Giardia and Cryptosporidium. Membrane filters are an effective form of tertiary treatment when it is desired to reuse the water for industry, for limited domestic purposes, or before discharging the water into a river that is used by towns further downstream. They are widely used in industry, particularly for beverage preparation (including bottled water). However, no filtration can remove substances that are actually dissolved in the water such as phosphates, nitrates and heavy metal ions.
Removal of ions and other dissolved substances
Ultrafiltration membranes are polymer membranes with chemically formed microscopic pores that can be used to filter out dissolved substances, avoiding the use of coagulants. The type of membrane media determines how much pressure is needed to drive the water through and what sizes of micro-organisms can be filtered out.
Ion exchange: Ion-exchange systems use ion-exchange resin- or zeolite-packed columns to replace unwanted ions. The most common case is water softening consisting of removal of Ca2+ and Mg2+ ions replacing them with benign (soap friendly) Na+ or K+ ions. Ion-exchange resins are also used to remove toxic ions such as nitrite, lead, mercury, arsenic and many others.
Precipitative softening: Water rich in hardness (calcium and magnesium ions) is treated with lime (calcium oxide) and/or soda-ash (sodium carbonate) to precipitate calcium carbonate out of solution utilising the common-ion effect.
Electrodeionization: Water is passed between a positive electrode and a negative electrode. Ion-exchange membranes allow only positive ions to migrate from the treated water toward the negative electrode and only negative ions toward the positive electrode. High purity deionised water is produced continuously, similar to ion-exchange treatment. Complete removal of ions from water is possible if the right conditions are met. The water is normally pre-treated with a reverse osmosis unit to remove non-ionic organic contaminants, and with gas transfer membranes to remove carbon dioxide. A water recovery of 99% is possible if the concentrate stream is fed to the RO inlet.
Disinfection
Disinfection is accomplished both by filtering out harmful micro-organisms and by adding disinfectant chemicals. Water is disinfected to kill any pathogens which pass through the filters and to provide a residual dose of disinfectant to kill or inactivate potentially harmful micro-organisms in the storage and distribution systems. Possible pathogens include viruses, bacteria, including Salmonella, Cholera, Campylobacter and Shigella, and protozoa, including Giardia lamblia and other cryptosporidia. After the introduction of any chemical disinfecting agent, the water is usually held in temporary storage – often called a contact tank or clear well – to allow the disinfecting action to complete.
Chlorine disinfection
The most common disinfection method involves some form of chlorine or its compounds such as chloramine or chlorine dioxide. Chlorine is a strong oxidant that rapidly kills many harmful micro-organisms. Because chlorine is a toxic gas, there is a danger of a release associated with its use. This problem is avoided by the use of sodium hypochlorite, which is a relatively inexpensive solution used in household bleach that releases free chlorine when dissolved in water. Chlorine solutions can be generated on site by electrolyzing common salt solutions. A solid form, calcium hypochlorite, releases chlorine on contact with water. Handling the solid, however, requires more routine human contact through opening bags and pouring than the use of gas cylinders or bleach, which are more easily automated. The generation of liquid sodium hypochlorite is inexpensive and also safer than the use of gas or solid chlorine. Chlorine levels up to 4 milligrams per litre (4 parts per million) are considered safe in drinking water.
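As a worked illustration of the dose figures above, the following Python sketch estimates how much sodium hypochlorite solution is needed to reach a target free-chlorine concentration in a batch of water. The 5% available-chlorine strength and the rough 1 g/ml density are assumptions for illustration; actual bleach strengths vary, and chlorine demand (chlorine consumed by organics in the water) is ignored.

```python
def hypochlorite_volume_ml(water_volume_l, target_mg_per_l,
                           available_chlorine_percent=5.0):
    """Millilitres of hypochlorite solution needed to dose a batch of water
    to the target chlorine concentration, ignoring chlorine demand.
    Assumes solution density of roughly 1 g/ml, so a 5% solution carries
    about 50 mg of available chlorine per ml."""
    chlorine_needed_mg = water_volume_l * target_mg_per_l
    mg_per_ml = available_chlorine_percent * 10.0
    return chlorine_needed_mg / mg_per_ml

# Example: dosing a 10,000 L tank to 1 mg/L (well under the 4 mg/L guideline).
print(hypochlorite_volume_ml(10_000, 1.0))  # 200.0 ml of 5% solution
```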
All forms of chlorine are widely used, despite their respective drawbacks. One drawback is that chlorine from any source reacts with natural organic compounds in the water to form potentially harmful chemical by-products. These by-products, trihalomethanes (THMs) and haloacetic acids (HAAs), are both carcinogenic in large quantities and are regulated by the United States Environmental Protection Agency (EPA) and the Drinking Water Inspectorate in the UK. The formation of THMs and haloacetic acids may be minimised by the effective removal of as many organics from the water as possible prior to chlorine addition. Although chlorine is effective in killing bacteria, it has limited effectiveness against pathogenic protozoa that form cysts in water such as Giardia lamblia and Cryptosporidium.
Chlorine dioxide disinfection
Chlorine dioxide is a faster-acting disinfectant than elemental chlorine. It is relatively rarely used because in some circumstances it may create excessive amounts of chlorite, which is a by-product regulated to low allowable levels in the United States. Chlorine dioxide can be supplied as an aqueous solution and added to water to avoid gas handling problems; chlorine dioxide gas accumulations may spontaneously detonate.
Chloramination
The use of chloramine is becoming more common as a disinfectant. Although chloramine is not as strong an oxidant as free chlorine, it provides a longer-lasting residual because of its lower redox potential. It also does not readily form THMs or haloacetic acids (disinfection byproducts).
It is possible to convert chlorine to chloramine by adding ammonia to the water after adding chlorine. The chlorine and ammonia react to form chloramine. Water distribution systems disinfected with chloramines may experience nitrification, as ammonia is a nutrient for bacterial growth, with nitrates being generated as a by-product.
Ozone disinfection
Ozone is an unstable molecule which readily gives up one atom of oxygen providing a powerful oxidising agent which is toxic to most waterborne organisms. It is a very strong, broad spectrum disinfectant that is widely used in Europe and in a few municipalities in the United States and Canada. Ozone disinfection, or ozonation, is an effective method to inactivate harmful protozoa that form cysts. It also works well against almost all other pathogens. Ozone is made by passing oxygen through ultraviolet light or a "cold" electrical discharge. To use ozone as a disinfectant, it must be created on-site and added to the water by bubble contact. Some of the advantages of ozone include the production of fewer dangerous by-products and the absence of taste and odour problems (in comparison to chlorination). No residual ozone is left in the water. In the absence of a residual disinfectant in the water, chlorine or chloramine may be added throughout a distribution system to remove any potential pathogens in the distribution piping.
Ozone has been used in drinking water plants since 1906, when the first industrial ozonation plant was built in Nice, France. The U.S. Food and Drug Administration has accepted ozone as being safe, and it is applied as an anti-microbiological agent for the treatment, storage, and processing of foods. However, although fewer by-products are formed by ozonation, it has been discovered that ozone reacts with bromide ions in water to produce concentrations of the suspected carcinogen bromate. Bromide can be found in fresh water supplies in sufficient concentrations to produce (after ozonation) more than 10 parts per billion (ppb) of bromate, the maximum contaminant level established by the USEPA. Ozone disinfection is also energy intensive.
Ultraviolet disinfection
Ultraviolet light (UV) is very effective at inactivating cysts, in low turbidity water. UV light's disinfection effectiveness decreases as turbidity increases, a result of the absorption, scattering, and shadowing caused by the suspended solids. The main disadvantage to the use of UV radiation is that, like ozone treatment, it leaves no residual disinfectant in the water; therefore, it is sometimes necessary to add a residual disinfectant after the primary disinfection process. This is often done through the addition of chloramines, discussed above as a primary disinfectant. When used in this manner, chloramines provide an effective residual disinfectant with very few of the negative effects of chlorination.
Over 2 million people in 28 developing countries use Solar Disinfection for daily drinking water treatment.
Ionizing radiation
Like UV, ionizing radiation (X-rays, gamma rays, and electron beams) has been used to sterilise water.
Bromination and iodination
Bromine and iodine can also be used as disinfectants. However, chlorine in water is over three times more effective as a disinfectant against Escherichia coli than an equivalent concentration of bromine, and over six times more effective than an equivalent concentration of iodine. Iodine is commonly used for portable water purification, and bromine is common as a swimming pool disinfectant.
Portable water purification
Portable water purification devices and methods are available for disinfection and treatment in emergencies or in remote locations. Disinfection is the primary goal, since aesthetic considerations such as taste, odour, appearance, and trace chemical contamination do not affect the short-term safety of drinking water.
Additional treatment options
Water fluoridation: in many areas fluoride is added to water with the goal of preventing tooth decay. Fluoride is usually added after the disinfection process. In the U.S., fluoridation is usually accomplished by the addition of hexafluorosilicic acid, which decomposes in water, yielding fluoride ions.
Water conditioning: This is a method of reducing the effects of hard water. In water systems subject to heating, hardness salts can be deposited as the decomposition of bicarbonate ions creates carbonate ions that precipitate out of solution. Water with high concentrations of hardness salts can be treated with soda ash (sodium carbonate), which precipitates out the excess salts through the common-ion effect, producing calcium carbonate of very high purity; a stoichiometric dosing sketch follows this list of options. The precipitated calcium carbonate is traditionally sold to the manufacturers of toothpaste. Several other industrial and residential water treatment methods are claimed (without general scientific acceptance) to reduce the effects of hard water by applying magnetic or electrical fields.
Plumbosolvency reduction: In areas with naturally acidic waters of low conductivity (e.g. surface runoff in upland areas of igneous rock), the water may be capable of dissolving lead from any lead pipes that it is carried in. The addition of small quantities of phosphate ion and a slight increase in pH both assist in greatly reducing plumbosolvency by creating insoluble lead salts on the inner surfaces of the pipes.
Radium removal: Some groundwater sources contain radium, a radioactive chemical element. Typical sources include many groundwater sources north of the Illinois River in Illinois, United States of America. Radium can be removed by ion exchange or by water conditioning. The backflush or sludge that is produced is, however, a low-level radioactive waste.
Fluoride removal: Although fluoride is added to water in many areas, some areas of the world have excessive levels of natural fluoride in the source water. Excessive levels can be toxic or cause undesirable cosmetic effects such as staining of teeth. Fluoride levels can be reduced by treatment with activated alumina or bone char filter media.
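For the water-conditioning option above, the soda ash dose follows from simple stoichiometry: one mole of sodium carbonate removes one mole of calcium as calcium carbonate. The Python sketch below is an illustrative simplification (it ignores magnesium and carbonate hardness, which are handled differently), and the figures are assumed examples rather than values from this article.

    # Minimal sketch: Na2CO3 + Ca(2+) -> CaCO3 (precipitate) + 2 Na(+)
    M_NA2CO3 = 106.0   # g/mol
    M_CACO3 = 100.0    # g/mol; hardness is conventionally reported "as CaCO3"

    def soda_ash_dose_mg_per_l(calcium_hardness_as_caco3_mg_per_l):
        return calcium_hardness_as_caco3_mg_per_l * M_NA2CO3 / M_CACO3

    print(soda_ash_dose_mg_per_l(120.0))   # ~127 mg/L soda ash to remove 120 mg/L hardness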
Other water purification techniques
Other popular methods for purifying water, especially for local private supplies, are listed below. In some countries some of these methods are used for large-scale municipal supplies. Particularly important are distillation (desalination of seawater) and reverse osmosis.
Thermal
Bringing water to its boiling point (about 100 °C or 212 °F at sea level) is the oldest and most effective method, since it eliminates most microbes causing intestinal disease, but it cannot remove chemical toxins or impurities. For human health, complete sterilisation of water is not required, since heat-resistant microbes do not affect the intestines. The traditional advice of boiling water for ten minutes is mainly for additional safety, since microbes start expiring at temperatures greater than . Though the boiling point decreases with increasing altitude, it is not enough to affect disinfection. In areas where the water is "hard" (that is, containing significant dissolved calcium salts), boiling decomposes the bicarbonate ions, resulting in partial precipitation as calcium carbonate. This is the "fur" that builds up on kettle elements, etc., in hard water areas. With the exception of calcium, boiling does not remove solutes with a higher boiling point than water, and in fact increases their concentration (because some water is lost as vapour). Boiling does not leave a residual disinfectant in the water, so water that is boiled and then stored for any length of time may acquire new pathogens.
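The claim that altitude does not meaningfully affect disinfection can be checked with a rough calculation. The Python sketch below combines a simple exponential pressure profile with the Antoine equation for water; the constants are standard textbook values, and the whole calculation is an illustrative approximation rather than anything from this article.

    # Minimal sketch (approximate): boiling point of water versus altitude.
    import math

    def pressure_pa(alt_m):
        # Isothermal barometric profile with a ~8,400 m scale height.
        return 101325.0 * math.exp(-alt_m / 8400.0)

    def boiling_point_c(p_pa):
        # Invert the Antoine equation log10(P_mmHg) = A - B / (C + T_C) for water.
        A, B, C = 8.07131, 1730.63, 233.426
        return B / (A - math.log10(p_pa / 133.322)) - C

    for alt in (0, 1500, 3000, 4500):
        print(alt, round(boiling_point_c(pressure_pa(alt)), 1))
    # Roughly 100, 95, 90 and 86 deg C: still hot enough to kill enteric pathogens.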
Adsorption
Granular activated carbon is a form of activated carbon with a high surface area that adsorbs many compounds, including many toxic ones. Passing water through activated carbon is common in municipal systems with organic contamination or taste and odour problems, and many household water filters and fish tanks use activated carbon filters to purify water. Household filters for drinking water sometimes contain silver as metallic silver nanoparticles; silver nanoparticles are an effective anti-bacterial material and can decompose toxic halo-organic compounds such as pesticides into non-toxic organic products. If water is held in the carbon block for longer periods, microorganisms can grow inside, which results in fouling and contamination, so filtered water should be used soon after it is filtered, as the small number of remaining microbes may proliferate over time. In general, these home filters remove over 90% of the chlorine in a glass of treated water. The filters must be periodically replaced, otherwise the bacterial content of the water may actually increase due to the growth of bacteria within the filter unit.
Distillation
Distillation involves boiling water to produce water vapour. The vapour contacts a cool surface where it condenses as a liquid. Because the solutes are not normally vaporised, they remain in the boiling solution. Even distillation does not completely purify water, because of contaminants with similar boiling points and droplets of unvapourised liquid carried with the steam. However, 99.9% pure water can be obtained by distillation.
Direct contact membrane distillation (DCMD) passes heated seawater along the surface of a hydrophobic polymer membrane. Evaporated water passes from the hot side through pores in the membrane forming a stream of cold pure water on the other side. The difference in vapour pressure between the hot and cold side helps to push water molecules through.
Reverse osmosis
Reverse osmosis involves applying mechanical pressure to force water through a semi-permeable membrane; contaminants are retained on the pressurised side of the membrane. Reverse osmosis is theoretically the most thorough method of large-scale water purification available, although perfect semi-permeable membranes are difficult to create. Unless membranes are well maintained, algae and other life forms can colonise them.
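The pressure needed is set by the osmotic pressure of the feed water, which the van 't Hoff relation estimates from the dissolved salt concentration. The Python sketch below is an idealised illustration with assumed concentrations, not values from this article.

    # Minimal sketch: van 't Hoff osmotic pressure, pi = i * M * R * T.
    R = 0.083145   # L*bar/(mol*K)

    def osmotic_pressure_bar(molarity, ions_per_formula=2, temp_k=298.15):
        return ions_per_formula * molarity * R * temp_k

    print(round(osmotic_pressure_bar(0.05), 1))   # brackish water (~0.05 M NaCl): ~2.5 bar
    print(round(osmotic_pressure_bar(0.6), 1))    # seawater (~0.6 M NaCl): ~30 bar
    # The applied pressure must exceed these values before any permeate flows,
    # which is why seawater plants operate at pressures well above the osmotic pressure.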
Crystallization
Carbon dioxide or other low molecular weight gas can be mixed with contaminated water at high pressure and low temperature to exothermically form gas hydrate crystals. Hydrate may be separated by centrifuge or sedimentation. Water can be released from the hydrate crystals by heating.
In situ oxidation
In situ chemical oxidation (ISCO) is an advanced oxidation process. It is used for soil and/or groundwater remediation to reduce the concentrations of targeted contaminants. ISCO is accomplished by injecting or otherwise introducing oxidizers into the contaminated medium (soil or groundwater) to destroy contaminants. It can be used to remediate a variety of organic compounds, including some that are resistant to natural degradation.
Bioremediation
Bioremediation uses microorganisms to remove waste products from a contaminated area. Since 1991, bioremediation has been a suggested tactic for removing impurities such as alkanes, perchlorates, and metals. Bioremediation has seen success with perchlorates, which are highly soluble and therefore difficult to remove by other means. Example applications of Dechloromonas agitata strain CKB include field studies conducted in Maryland and the US Southwest.
Hydrogen peroxide
Hydrogen peroxide (H2O2) is a common disinfectant that can purify water. It is typically produced at chemical plants and transported to the contaminated water. An alternative approach employs a gold-palladium catalyst to synthesize hydrogen peroxide from ambient hydrogen and oxygen at the use site. The latter was reported to be faster and about 10^7 times more potent at killing Escherichia coli than commercial hydrogen peroxide, and over 10^8 times more effective than chlorine. The catalytic reaction also produces reactive oxygen species (ROS) that bind and degrade other compounds.
Safety and controversies
In April 2007, the water supply of Spencer, Massachusetts in the United States of America, became contaminated with excess sodium hydroxide (lye) when its treatment equipment malfunctioned.
Many municipalities have moved from free chlorine to chloramine as a disinfection agent. However, chloramine appears to be a corrosive agent in some water systems. Chloramine can dissolve the "protective" film inside older service lines, leading to the leaching of lead into residential spigots. This can result in harmful exposure, including elevated blood lead levels. Lead is a known neurotoxin.
Demineralized water
Distillation removes all minerals from water, and the membrane methods of reverse osmosis and nanofiltration remove most to all minerals. This results in demineralised water, which is not considered ideal drinking water. The World Health Organization has investigated the health effects of demineralised water since 1980. Experiments in humans found that demineralised water increased diuresis and the elimination of electrolytes, with decreased blood serum potassium concentration. Magnesium, calcium, and other minerals in water can help to protect against nutritional deficiency. Demineralised water may also increase the risk from toxic metals, because it more readily leaches materials such as lead and cadmium from piping, a process otherwise inhibited by dissolved minerals such as calcium and magnesium. Low-mineral water has been implicated in specific cases of lead poisoning in infants, when lead from pipes leached at especially high rates into the water. Recommendations for magnesium have been put at a minimum of 10 mg/L with 20–30 mg/L optimum; for calcium, a 20 mg/L minimum and a 40–80 mg/L optimum; and for total water hardness (adding magnesium and calcium), 2 to 4 mmol/L. At water hardness above 5 mmol/L, higher incidences of gallstones, kidney stones, urinary stones, arthrosis, and arthropathies have been observed. Additionally, desalination processes can increase the risk of bacterial contamination.
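The calcium and magnesium figures above can be checked against the 2–4 mmol/L hardness band with a short conversion, since total hardness in mmol/L is simply the molar sum of the two ions. The Python sketch below uses mid-range values from the recommendations quoted above; the molar masses are standard.

    # Minimal sketch: total hardness (mmol/L) from calcium and magnesium (mg/L).
    M_CA, M_MG = 40.08, 24.31   # g/mol

    def hardness_mmol_per_l(ca_mg_per_l, mg_mg_per_l):
        return ca_mg_per_l / M_CA + mg_mg_per_l / M_MG

    # Mid-range of the optima quoted above: ~60 mg/L calcium, ~25 mg/L magnesium.
    print(round(hardness_mmol_per_l(60, 25), 2))   # ~2.53 mmol/L, inside the 2-4 band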
Manufacturers of home water distillers claim the opposite—that minerals in water are the cause of many diseases, and that most beneficial minerals come from food, not water.
History
The first experiments into water filtration were made in the 17th century. Sir Francis Bacon attempted to desalinate sea water by passing the flow through a sand filter. Although his experiment did not succeed, it marked the beginning of a new interest in the field. The fathers of microscopy, Antonie van Leeuwenhoek and Robert Hooke, used the newly invented microscope to observe for the first time small material particles that lay suspended in the water, laying the groundwork for the future understanding of waterborne pathogens.
Sand filter
The first documented use of sand filters to purify the water supply dates to 1804, when the owner of a bleachery in Paisley, Scotland, John Gibb, installed an experimental filter, selling his unwanted surplus to the public. This method was refined in the following two decades by engineers working for private water companies, and it culminated in the first treated public water supply in the world, installed by engineer James Simpson for the Chelsea Waterworks Company in London in 1829. This installation provided filtered water for every resident of the area, and the network design was widely copied throughout the United Kingdom in the ensuing decades.
The practice of water treatment soon became mainstream and common, and the virtues of the system were made starkly apparent after the investigations of the physician John Snow during the 1854 Broad Street cholera outbreak. Snow was sceptical of the then-dominant miasma theory that stated that diseases were caused by noxious "bad airs". Although the germ theory of disease had not yet been developed, Snow's observations led him to discount the prevailing theory. His 1855 essay On the Mode of Communication of Cholera conclusively demonstrated the role of the water supply in spreading the cholera epidemic in Soho, with the use of a dot distribution map and statistical proof to illustrate the connection between the quality of the water source and cholera cases. His data convinced the local council to disable the water pump, which promptly ended the outbreak.
The Metropolis Water Act introduced the regulation of the water supply companies in London, including minimum standards of water quality for the first time. The Act "made provision for securing the supply to the Metropolis of pure and wholesome water", and required that all water be "effectually filtered" from 31 December 1855. This was followed up with legislation for the mandatory inspection of water quality, including comprehensive chemical analyses, in 1858. This legislation set a worldwide precedent for similar state public health interventions across Europe. The Metropolitan Commission of Sewers was formed at the same time, water filtration was adopted throughout the country, and new water intakes on the Thames were established above Teddington Lock. Automatic pressure filters, where the water is forced under pressure through the filtration system, were innovated in 1899 in England.
Water chlorination
John Snow was the first to successfully use chlorine to disinfect the water supply in Soho that had helped spread the cholera outbreak. William Soper also used chlorinated lime to treat the sewage produced by typhoid patients in 1879.
In a paper published in 1894, Moritz Traube formally proposed the addition of chloride of lime (calcium hypochlorite) to water to render it "germ-free." Two other investigators confirmed Traube's findings and published their papers in 1895. Early attempts at implementing water chlorination at a water treatment plant were made in 1893 in Hamburg, Germany and in 1897 the city of Maidstone, England was the first to have its entire water supply treated with chlorine.
Permanent water chlorination began in 1905, when a faulty slow sand filter and a contaminated water supply led to a serious typhoid fever epidemic in Lincoln, England. Alexander Cruickshank Houston used chlorination of the water to stem the epidemic. His installation fed a concentrated solution of chloride of lime to the water being treated. The chlorination of the water supply helped stop the epidemic and as a precaution, the chlorination was continued until 1911 when a new water supply was instituted.
The first continuous use of chlorine in the United States for disinfection took place in 1908 at Boonton Reservoir (on the Rockaway River), which served as the supply for Jersey City, New Jersey. Chlorination was achieved by controlled additions of dilute solutions of chloride of lime (calcium hypochlorite) at doses of 0.2 to 0.35 ppm. The treatment process was conceived by John L. Leal, and the chlorination plant was designed by George Warren Fuller. Over the next few years, chlorine disinfection using chloride of lime was rapidly adopted in drinking water systems around the world.
The technique of purification of drinking water by use of compressed liquefied chlorine gas was developed by a British officer in the Indian Medical Service, Vincent B. Nesfield, in 1903. According to his own account:
U.S. Army Major Carl Rogers Darnall, Professor of Chemistry at the Army Medical School, gave the first practical demonstration of this in 1910. Shortly thereafter, Major William J. L. Lyster of the Army Medical Department used a solution of calcium hypochlorite in a linen bag to treat water. For many decades, Lyster's method remained the standard for U.S. ground forces in the field and in camps, implemented in the form of the familiar Lyster Bag (also spelled Lister Bag). The bag was made of canvas and could hold 36 gallons of water. It was porous and held up by ropes, purifying water with the help of calcium hypochlorite solution. Each bag had a faucet attached, which was used to flush water for testing, as well as dispensing for use. This became the basis for present day systems of municipal water purification.
Global
According to a 2007 World Health Organization (WHO) report, 1.1 billion people lack access to an improved drinking water supply; 88% of the 4 billion annual cases of diarrheal disease are attributed to unsafe water and inadequate sanitation and hygiene, while 1.8 million people die from diarrhoeal disease each year. The WHO estimates that 94% of these diarrhoeal disease cases are preventable through modifications to the environment, including access to safe water. Simple techniques for treating water at home, such as chlorination, filters, and solar disinfection, and for storing it in safe containers could save a huge number of lives each year. Reducing deaths from waterborne diseases is a major public health goal in developing countries.
The global water purification market is worth 22 billion dollars. Home water filters and purifiers are common in India.
| Technology | Food, water and health | null |
214755 | https://en.wikipedia.org/wiki/Rolling%20stock | Rolling stock | The term rolling stock in the rail transport industry refers to railway vehicles, including both powered and unpowered vehicles: for example, locomotives, freight and passenger cars (or coaches), and non-revenue cars. Passenger vehicles can be un-powered, or self-propelled, single or multiple units.
In North America, Australia and other countries, the term consist ( ) is used to refer to the rolling stock in a train.
In the United States, the term rolling stock has been expanded from the older broadly defined "trains" to include wheeled vehicles used by businesses on roadways.
The word stock in the term is used in a sense of inventory. Rolling stock is considered to be a liquid asset, or close to it, since the value of the vehicle can be readily estimated and then shipped to the buyer without much cost or delay. The term contrasts with fixed stock (infrastructure), which is a collective term for the track, signals, stations, other buildings, electric wires, etc., necessary to operate a railway.
Gallery
| Technology | Rail and cable transport | null |
214761 | https://en.wikipedia.org/wiki/Coach%20%28bus%29 | Coach (bus) | A coach (also known as a motorcoach or coach bus) is a type of bus built for longer distance service, in contrast to transit buses that are typically used for shorter journeys within a single metropolitan region. Often used for touring, intercity, and international bus service, coaches are also used for private charter for various purposes.
Deriving the name from horse-drawn carriages and stagecoaches that carried passengers, luggage, and mail, modern motor coaches are almost always high-floor buses, with separate luggage hold mounted below the passenger compartment. In contrast to transit buses, motor coaches typically feature forward-facing seating, with no provision for standing. Other accommodations may include onboard restrooms, televisions, and overhead luggage space.
The name used for this type of bus varies between countries. In the United States they are officially designated as motorcoach ("a bus designed with an elevated passenger deck located over a baggage compartment") as well as being referred to as coach bus. In the United Kingdom, Europe, Australia and many other countries they are called coach. In Japan they are called Highway Bus, while those operating airport services are called Airport Limousine or Limousine Bus.
History
Background
Horse-drawn chariots and carriages ("coaches") were used by the wealthy and powerful where the roads were of a high enough standard from possibly 3000 BC. In Hungary, during the reign of King Matthias Corvinus in the 15th century, the wheelwrights of Kocs began to build a horse-drawn vehicle with steel-spring suspension. This "cart of Kocs" as the Hungarians called it () soon became popular all over Europe. The imperial post service employed the first horse-drawn mail coaches in Europe since Roman times in 1650, and as they started in the town of Kocs, the use of these mail coaches gave rise to the term "coach". Stagecoaches (drawn by horses) were used for transport between cities from about 1500 in Great Britain until displaced by the arrival of the railways.
One of the earliest motorized vehicles was the charabanc, which was used for short journeys and excursions until the early years of the 20th century. The first "motor coaches" were purchased in the early 20th century by operators of those horse-drawn vehicles, such as Royal Blue Coach Services, which purchased its first charabanc in 1913 and was running 72 coaches by 1926.
Features
Coaches are designed for comfort, as passengers are on board for significant periods of time on long journeys, or because the hirer desires a high standard of comfort on shorter trips.
They can vary considerably in quality: some higher-specification coaches feature luxury seats and refreshments, while others may only have the bare essentials such as non-reclining highback seats and an underfloor baggage compartment. Coaches typically have only a single, narrow door, but some may have two doors - it can be a tradeoff between faster passenger boarding/alighting times and having 2-4 extra seats.
Seats are normally in a configuration of 2 seats either side of a central aisle - or in premium coaches there can be 2 seats on one side of the aisle and 1 seat on the other side (for example, "excellent" and "premium" class Intercity/Express buses in South Korea), or even 1 seat either side of the aisle in luxury coaches. Other seating layouts can be found on coaches built for a specific purpose (e.g. some overnight buses in Japan have 3 single seats with 2 narrow aisles).
Some characteristics include:
Air conditioning is installed on virtually every modern coach.
Comfortable highback seats - covered in cloth/fabric or leatherette (or even leather in some luxury coaches) - that recline and can include a fold-out tray table and/or beverage holder, and armrests.
Luggage racks above the seats for storing carry-on bags.
Luggage compartment, accessed from outside the vehicle, under the main floor (or sometimes at the rear)
Personal reading light and adjustable air conditioning outlet above the seat
On-board restrooms fitted with chemical toilets, hand basins and soap or hand sanitizer
On some coaches, on-board entertainment including movies may be shown to passengers
On-board refreshment service provided by an attendant or a vending machine
Wheelchair accessible. This is generally provided using a wheelchair lift, although some coaches are in a partial (or full) double-deck layout able to accommodate wheelchair(s) and optionally a small number of regular seats on the lower level, with access by a portable or foldout ramp, with the remainder of the lower level normally used for luggage storage and a toilet.
Curtains or blinds, useful on overnight services or to block harsh sunlight
Onboard AC power, USB charging ports and Wi-Fi access
Seat belt for safety
Manufacture
Coaches, like buses, may be fully built by integrated manufacturers, or a separate chassis consisting of only an engine, wheels and basic frame may be delivered to a coachwork factory for a body to be added. A few coaches are built with monocoque bodies without a chassis frame. Integrated manufacturers (most of whom also supply chassis) include Autosan, Scania, Fuso, and Alexander Dennis. Major coachwork providers (some of whom can build their own chassis) include Van Hool, Neoplan, Marcopolo, Irizar, MCI, Prevost, Volvo, Denning Manufacturing in Australia and Designline in New Zealand.
Regulations
In some European countries following the 1958 type certification treaty, coach (that is vehicle of type M2 or M3) type certification is regulated by regulation number 107 from the UNECE. In the U.S., commercial drivers of motorcoaches are regulated by the Federal Motor Carrier Safety Administration (FMCSA).
Drivers of buses and coaches require a commercial driver's license (a higher class of license than is required to drive a car). Many states and countries also require bus and coach drivers to obtain an additional certification to carry paying passengers; for example, the United Kingdom and the European Union require a Driver Certificate of Professional Competence (Driver CPC).
Seat belts for drivers and all passengers are now legally required in many countries, for example the United Kingdom. Refer to: Overview of Seat Belt Legislation by Country.
Image gallery
Modern coaches
A representative selection of vehicles currently (or recently) in use in different parts of the world.
Vintage coaches
A selection of vehicles in use in different parts of the world in the past.
| Technology | Motorized road transport | null |
214764 | https://en.wikipedia.org/wiki/Stagecoach | Stagecoach | A stagecoach (also: stage coach, stage, road coach, ) is a four-wheeled public transport coach used to carry paying passengers and light packages on journeys long enough to need a change of horses. It is strongly sprung and generally drawn by four horses although some versions are drawn by six horses.
Commonly used before steam-powered rail transport was available, a stagecoach made long scheduled trips using stage stations or posts where the stagecoach's horses would be replaced by fresh horses. The business of running stagecoaches or the act of journeying in them was known as staging.
Some familiar images of the stagecoach are that of a Royal Mail coach passing through a turnpike gate, a Dickensian passenger coach covered in snow pulling up at a coaching inn, a highwayman demanding a coach to "stand and deliver" and a Wells Fargo stagecoach arriving at or leaving an American frontier town. The yard of ale drinking glass is associated by legend with stagecoach drivers, though it was mainly used for drinking feats and special toasts.
Description
The stagecoach was a closed four-wheeled vehicle drawn by horses or hard-going mules. It was regularly used as a public conveyance on an established route usually to a regular schedule. Spent horses were replaced with fresh horses at stage stations, posts, or relays. In addition to the stage driver or coachman who guided the vehicle, a shotgun messenger armed with a coach gun might travel as a guard beside him. Thus, the origin of the phrase "riding shotgun".
The American mud wagon was an earlier, smaller, and cruder vehicle, being mostly open-sided with minimal protection from weather, causing passengers to risk being mud-splashed. A canvas-topped stage wagon was used for freight and passengers, and it had a lower center of gravity, making it harder to overturn.
Speed
Until the late 18th century, stagecoaches traveled at an average speed of about , with the average daily mileage traversed approximately . With road improvements and the development of steel springs, speeds increased. By 1836 the scheduled coach left London at 19:30, travelled through the night (without lights) and arrived in Liverpool at 16:50 the next day, a distance of about , doubling the overall average speed to about , including stops to change horses.
History
Origins
The first crude depiction of a coach was in an English manuscript from the 13th century. The first recorded stagecoach route in Britain started in 1610 and ran from Edinburgh to Leith. This was followed by a steady proliferation of other routes around the island. By the mid 17th century, a basic stagecoach infrastructure had been put in place. A string of coaching inns operated as stopping points for travellers on the route between London and Liverpool. The stagecoach would depart every Monday and Thursday and took roughly ten days to make the journey during the summer months. Stagecoaches also became widely adopted for travel in and around London by mid-century and generally travelled at a few miles per hour. Shakespeare's first plays were performed at coaching inns such as The George Inn, Southwark.
By the end of the 17th century stagecoach routes ran up and down the three main roads in England. The London-York route was advertised in 1698:
The novelty of this method of transport excited much controversy at the time. One pamphleteer denounced the stagecoach as a "great evil [...] mischievous to trade and destructive to the public health". Another writer, however, argued that:
The speed of travel remained constant until the mid-18th century. Reforms of the turnpike trusts, new methods of road building and the improved construction of coaches led to a sustained rise in the comfort and speed of the average journey - from an average journey length of 2 days for the Cambridge-London route in 1750 to a length of under 7 hours in 1820.
Robert Hooke helped in the construction of some of the first spring-suspended coaches in the 1660s and spoked wheels with iron rim brakes were introduced, improving the characteristics of the coach.
In 1754, a Manchester-based company began a new service called the "Flying Coach". It was advertised with the following announcement - "However incredible it may appear, this coach will actually (barring accidents) arrive in London in four days and a half after leaving Manchester." A similar service was begun from Liverpool three years later, using coaches with steel spring suspension. This coach took an unprecedented three days to reach London with an average speed of .
Royal Mail stagecoaches
Even more dramatic improvements were made by John Palmer at the British Post Office. The postal delivery service in Britain had existed in the same form for about 150 years—from its introduction in 1635, mounted carriers had ridden between "posts" where the postmaster would remove the letters for the local area before handing the remaining letters and any additions to the next rider. The riders were frequent targets for robbers, and the system was inefficient.
Palmer made much use of the "flying" stagecoach services between cities in the course of his business, and noted that it seemed far more efficient than the system of mail delivery then in operation. His travel from Bath to London took a single day to the mail's three days. It occurred to him that this stagecoach service could be developed into a national mail delivery service, so in 1782 he suggested to the Post Office in London that they take up the idea. He met resistance from officials who believed that the existing system could not be improved, but eventually the Chancellor of the Exchequer, William Pitt, allowed him to carry out an experimental run between Bristol and London. Under the old system the journey had taken up to 38 hours. The stagecoach, funded by Palmer, left Bristol at 4 pm on 2 August 1784 and arrived in London just 16 hours later.
Impressed by the trial run, Pitt authorised the creation of new routes. Within the month the service had been extended from London to Norwich, Nottingham, Liverpool and Manchester, and by the end of 1785 services to the following major towns and cities of England and Wales had also been linked: Leeds, Dover, Portsmouth, Poole, Exeter, Gloucester, Worcester, Holyhead and Carlisle. A service to Edinburgh was added the next year, and Palmer was rewarded by being made Surveyor and Comptroller General of the Post Office. By 1797 there were forty-two routes.
Improved coach design
The period from 1800 to 1830 saw great improvements in the design of coaches, most notably by John Besant in 1792 and 1795. His coach had a greatly improved turning capacity and braking system, and a novel feature that prevented the wheels from falling off while the coach was in motion. Besant, with his partner John Vidler, enjoyed a monopoly on the supply of stagecoaches to the Royal Mail and a virtual monopoly on their upkeep and servicing for the following few decades.
Steel springs had been used in suspensions for vehicles since 1695. Coachbuilder Obadiah Elliott obtained a patent covering the use of elliptic springs, which were not his invention. His patent lasted 14 years, delaying development, because Elliott allowed no one else to license and use his patent. Elliott mounted each wheel with two durable elliptic steel leaf springs on each side, and the body of the carriage was fixed directly to the springs attached to the axles. After the expiry of his patent most British horse carriages were equipped with elliptic springs; wooden springs in the case of light one-horse vehicles to avoid taxation, and steel springs in larger vehicles.
Improved roads
Improvements in road construction were also made at this time, most importantly the widespread implementation of Macadam roads up and down the country. The speed of coaches in this period rose from around (including stops for provisioning) to and greatly increased the level of mobility in the country, both for people and for mail. Each route had an average of four coaches operating on it at one time - two in service (one in each direction) and a further two spares in case of a breakdown en route. Joseph Ballard described the stagecoach service between Manchester and Liverpool in 1815 as having price competition between coaches, with timely service and clean accommodations at inns.
Taxation and regulation
Stagecoaches in Victorian Britain were heavily taxed on the number of passenger seats. If more passengers were carried than the licence allowed there were penalties to pay. The lawyer Stanley Harris (1816–1897) writes in his books Old Coaching Days and The Coaching Age that he knew of informers ready to report any breach of regulations to the authorities. This could be overloading of passengers in excess of the licence or minor matters such as luggage too high on the roof. They did this in return for a portion of any fines imposed, sometimes as much as half. The tax paid on passenger seats was a major expense for coach operators. Harris gives an example of the tax payable on the London to Newcastle coach route (278 miles). Annual tax amounted to £2,529 for 15 passengers per coach (4 inside and 11 outside). Annual tolls were £2,537. The hire of the four coach vehicles needed was £1,274. The 250 horses needed for this service also needed to be paid for. Operators could reduce their tax burden by one seventh by operating a six-day-a-week service instead of a seven-day service.
Decline and evolution
The development of railways in the 1830s spelled the end for stagecoaches and mail coaches. The first rail delivery between Liverpool and Manchester took place on 11 November 1830. By the early 1840s most London-based coaches had been withdrawn from service.
Some stagecoaches remained in use for commercial or recreational purposes. They came to be known as road coaches and were used by their enterprising (or nostalgic) owners to provide scheduled passenger services where rail had not yet reached and also on certain routes at certain times of the year for the pleasure of an (often amateur) coachman and his daring passengers.
Competitive display and sport
Although coaching had largely vanished as rail penetrated the countryside, the 1860s did see the start of a coaching revival, spurred on by the popularity of Four-in-hand driving as a sporting pursuit (the Four-In-Hand Driving Club was founded in 1856 and the Coaching Club in 1871).
New stagecoaches, often known as Park Drags, began to be built to order. Some owners would parade their vehicles and magnificently dressed passengers in fashionable locations. Other owners would take more enthusiastic suitably-dressed passengers and indulge in competitive driving. Very similar in design to stagecoaches, these vehicles were lighter and sportier.
These owners were (often very expert) amateur gentlemen-coachmen, occasionally gentlewomen. A professional coachman might accompany them to avert disaster. Professionals called these vehicles 'butterflies'. They only appeared in summer.
Spread elsewhere
Australia
Cobb & Co was established in Melbourne in 1853 and grew to service Australia's mainland eastern states and South Australia.
Continental Europe
The diligence, a solidly built stagecoach with four or more horses, was the French vehicle for public conveyance, with minor varieties in Germany such as the Stellwagen and Eilwagen.
The diligence from Le Havre to Paris was described by a fastidious English visitor of 1803 with a thoroughness that distinguished it from its English contemporary, the stage coach.
A more uncouth clumsy machine can scarcely be imagined. In the front is a cabriolet fixed to the body of the coach, for the accommodation of three passengers, who are protected from the rain above, by the projecting roof of the coach, and in front by two heavy curtains of leather, well oiled, and smelling somewhat offensively, fastened to the roof. The inside, which is capacious, and lofty, and will hold six people in great comfort is lined with leather padded, and surrounded with little pockets, in which travellers deposit their bread, snuff, night caps, and pocket handkerchiefs, which generally enjoy each others company, in the same delicate depository. From the roof depends a large net work which is generally crouded with hats, swords, and band boxes, the whole is convenient, and when all parties are seated and arranged, the accommodations are by no means unpleasant.
Upon the roof, on the outside, is the imperial, which is generally filled with six or seven persons more, and a heap of luggage, which latter also occupies the basket, and generally presents a pile, half as high again as the coach, which is secured by ropes and chains, tightened by a large iron windlass, which also constitutes another appendage of this moving mass. The body of the carriage rests upon large thongs of leather, fastened to heavy blocks of wood, instead of springs, and the whole is drawn by seven horses.
The English visitor noted the small, sturdy Norman horses "running away with our cumbrous machine, at the rate of six or seven miles an hour". At this speed stagecoaches could compete with canal boats, but they were rendered obsolete in Europe wherever the rail network expanded in the 19th century. Where the rail network did not reach, the diligence was not fully superseded until the arrival of the autobus.
In France, between 1765 and 1780, the turgotines, big mail coaches named for their originator, Louis XVI's economist minister Turgot, and improved roads, where a coach could travel at full gallop across levels, combined with more staging posts at shorter intervals, cut the time required to travel across the country sometimes by half.
New Zealand
A Cobb & Co (Australia) proprietor arrived in New Zealand on 4 October 1861, thus beginning Cobb & Co (New Zealand) stagecoach operation.
United States
Beginning in the 18th century, crude wagons began to be used to carry passengers between cities and towns, first within New England by 1744, then between New York and Philadelphia by 1756. Travel time on this latter run was reduced from three days to two in 1766 with an improved coach called the Flying Machine. The first mail coaches appeared in the later 18th century, carrying passengers and the mails and replacing the earlier post riders on the main roads. Coachmen carried letters, packages, and money, often transacting business or delivering messages for their customers. By 1829 Boston was the hub of 77 stagecoach lines; by 1832 there were 106. Coaches with iron or steel springs were uncomfortable and had short useful lives. Two men in Concord, New Hampshire, developed what became a popular solution: they built their first Concord stagecoach in 1827, employing long leather straps under the body which gave a swinging motion.
Describing a journey he took in 1861, in his 1872 book Roughing It, Mark Twain wrote that the Concord stage was like "an imposing cradle on wheels". Around twenty years later, in 1880, John Plesent Gray recorded his journey from Tucson to Tombstone on J.D. Kinnear's mail and express line. The horses were changed three times on the trip, which was normally completed in 17 hours.
The stagecoach lines in the U.S. were operated by private companies. Their most profitable contracts were with the U.S. Mail and were hotly contested. The Pony Express, which began operations in 1860, is often called the first fast mail service from the Missouri River to the Pacific Coast, but the Overland Mail Company had begun a twice-weekly mail service from Missouri to San Francisco in September 1858. Transcontinental stage-coaching ended with the completion of the transcontinental railroad in 1869.
Southern Africa
The railway network in South Africa was extended from Mafeking through Bechuanaland and reached Bulawayo in 1897. Prior to its arrival, a network of stagecoach routes existed.
Ottoman Palestine
Stagecoaches, often known by the French name "Diligence" - a smaller model with room for six passengers and a bigger one for ten, drawn by two horses (in the city, on the plain or on a good road) or three (on intercity and elevated roads) - were the main means of public transportation in Ottoman Palestine between the middle of the 19th century and the beginning of the 20th century.
The first stagecoaches were brought to Palestine by the German religious group known as the "Templers" who operated a public transportation service between their colonies in the country as early as 1867. Stagecoach development in Palestine was greatly facilitated by the 1869 visit of Austrian Emperor Franz Joseph I. For this distinguished guest, the road between Jaffa and Jerusalem was greatly improved, making possible the passage of carriages. Stagecoaches were a great improvement over the earlier means of transport used in the country, such as riding horses, donkeys or camels, or light carts drawn by donkeys.
When the stagecoach ran into a difficult ascent or mud, the passengers were required to get off and help push the carriage. The trip between Jaffa and Jerusalem by stagecoach lasted about 14 hours spread over a day and a half, including a night stop at Bab al-Wad (Shaar HaGai); the trip in the opposite, downhill direction took 12 hours.
The stagecoaches belonged to private owners, and the wagoners were mostly hired, although sometimes the wagoner was also the owner of the wagon. The license to operate the stagecoaches was granted by the government to private individuals in the cities and to the colony committees in the early Zionist colonies. The license holders paid a special tax for this right and could employ subcontractors and hired wagons.
The stagecoaches linked Jerusalem with Jaffa, Hebron and Nablus, the Zionist colonies with Jaffa, Haifa with Acre and Nazareth. They were also used for urban and suburban transportation in the Haifa region.
The colony of Rehovot is known to have promulgated detailed regulations for stagecoach operation, soon after its foundation in 1890, which were greatly extended in 1911. Fares were fixed, ranging between 1.10 Grush for traveling to the nearby village of Wadi Hanin and 5.00 Grush for traveling from Rehovot to Jaffa. The stagecoach was required to work six times a week (except for the Shabbat) and to carry free of charge the mails and medicines of the Rehovot pharmacy.
While railways started being constructed in Palestine in the last years of the 19th century, stagecoaches were still a major means of public transport until the outbreak of the First World War, and in peripheral areas were still used in the early years of British Mandatory rule.
In popular culture
Stories that prominently involve a stagecoach include:
Winds of the Wasteland, a 1936 film starring John Wayne
Wells Fargo, a 1937 film starring Joel McCrea
Stagecoach, a 1939 film starring John Wayne
Arizona Bound, a 1941 film starring Buck Jones
Stagecoach to Denver, a 1946 film starring Allan Lane
Black Bart, a 1948 film starring Dan Duryea
A Ticket to Tomahawk, a 1950 musical comedy starring Dan Dailey in which stagecoach interests try to stop establishment of rail service.
Riding Shotgun, a 1954 film starring Randolph Scott
Dakota Incident, a 1956 film starring Dale Robertson
Gunsight Ridge, a 1957 film starring Joel McCrea
The Tall T, a 1957 film starring Randolph Scott
Westbound, 1959 film starring Randolph Scott
The Magnificent Seven, a 1960 film starring Yul Brynner
Stagecoach West, a 1960–1961 television series starring Wayne Rogers and Robert Bray
Stage to Thunder Rock, a 1964 film starring Barry Sullivan
Stagecoach, a 1966 film starring Bing Crosby
Hombre, a 1967 film starring Paul Newman
The Stagecoach, a 1968 comic book by Goscinny and Morris
Dusty's Trail, a 1973 television series starring Bob Denver
Five Mile Creek, a 1983–1985 television series featuring Nicole Kidman
Stagecoach, a 1986 film starring Kris Kristofferson
Maverick, a 1994 film starring Mel Gibson
The Hateful Eight, a 2015 film by Quentin Tarantino
Part of the plot of Doctor Dolittle's Circus is set in a stagecoach, where the animal-loving Doctor Dolittle is traveling along with a female seal, disguised as a woman, whom he is helping to escape from the circus. The Doctor is mistaken for a notorious highwayman.
Selling stagecoaches to the fence in Emerald Ranch is a common method of making money in Red Dead Redemption 2, as well as a fast travel transportation method.
| Technology | Animal-powered transport | null |
214934 | https://en.wikipedia.org/wiki/Stonemasonry | Stonemasonry | Stonemasonry or stonecraft is the creation of buildings, structures, and sculpture using stone as the primary material. Stonemasonry is the craft of shaping and arranging stones, often together with mortar and even the ancient lime mortar, to wall or cover formed structures.
The basic tools, methods and skills of the banker mason have existed as a trade for thousands of years. It is one of the oldest activities and professions in human history. Many of the long-lasting, ancient shelters, temples, monuments, artifacts, fortifications, roads, bridges, and entire cities were built of stone. Famous works of stonemasonry include Göbekli Tepe, the Egyptian pyramids, the Taj Mahal, Cusco's Incan Wall, Taqwesan, Easter Island's statues, Angkor Wat, Borobudur, Tihuanaco, Tenochtitlan, Persepolis, the Parthenon, Stonehenge, the Great Wall of China, the Mesoamerican pyramids, Chartres Cathedral, and the Stari Most.
While stone was important traditionally, it fell out of use in the modern era, in favor of brick and steel-reinforced concrete. This is despite the advantages of stone over concrete. Those advantages include:
many types of stone are stronger than concrete in compression;
stone uses much less energy to produce, and hence its production emits less carbon dioxide than either brick or concrete;
stone is widely considered aesthetically pleasing, while concrete is often painted or clad.
Modern stonemasonry is in the process of reinventing itself for automation, modern load-bearing stone construction, innovative reinforcement techniques, and integration with other sustainable materials, like engineered wood.
Stonemasonry disciplines
A quarryman splits or cuts rock in the quarry, and extracts the resulting blocks of stone. The cut or split pieces are collected and transported away from the extraction surface for further refinement.
A sawyer stonemason cuts these stone blocks into dimension stone, to the required size, with saws. The resulting block, if ordered for a specific component, is known as sawn six sides (SSS). A sawyer mason is similar to a banker mason (see below) in that they work with rough pieces of stone and shape them according to certain standards. The main difference between a sawyer mason and a banker mason is the size of the stone they work with – a sawyer mason typically works with much larger pieces and uses diamond-coated tools. Sawyer masons may work in quarries or be found in tile or flooring stores, possessing a range of specific skills, such as examining grain patterns to determine cleavage, creating smaller stones from larger pieces, and carving precise outlines and drilling holes using various tools like chisels.
A banker mason, sometimes referred to as a bank mason, is workshop-based, and specializes in working the stones into the shapes required by a building's design, which are set out on templets and a bed mould. A banker mason uses various hand and power tools to cut, carve, and shape stone. They can produce anything from stones with simple chamfers to tracery windows, detailed mouldings and the more classical architectural building masonry. When working a stone from a sawn block, the mason ensures that the stone is bedded in the right way, so the finished work sits in the building in the same orientation as it was formed on the ground. Occasionally, though, some stones need to be oriented differently for the application; this includes voussoirs, jambs, copings, and cornices. The stone's size and shape are usually predetermined by builders or other parties involved in a project, and the banker mason works according to a brief or a set of designs provided for that project. Once the stone has been crafted to the required specifications, it is transported to the construction site or another location for use in a building or other structure.
A fixer mason specializes in the fixing of stones onto buildings, using lifting tackle and traditional lime mortars and grouts. Sometimes modern cements, mastics, and epoxy resins are used, usually on specialist applications such as stone cladding. Metal fixings, from simple dowels and cramps to specialized single-application fixings, are also used. The precise tolerances necessarily make this a highly skilled job. A fixer mason is responsible for traveling to a job site to fit and lay pre-prepared stone or cladding for buildings.
A memorial mason or monumental mason carves gravestones and inscriptions.
A carver mason crosses the line from craft to art, using artistic ability to carve stone into foliage, figures, animals or abstract designs. Carver masons are the artists of stonemasonry, responsible for creating designs and patterns from stone, as well as on stones; this work can include stone sculptures of figures or animals, or other projects of a similar nature. Throughout history, carver masons have been renowned for their exceptional skills in crafting beautiful pieces from stone.
Classical stonemasonry techniques
Stone has been used in construction for thousands of years, in many contexts. Listed below are six types of classical stonemasonry techniques, some of which still see widespread use.
Ashlar masonry. Stone masonry using dressed (cut) stones is known as ashlar masonry.
Trabeated systems. One of the oldest forms of stone construction uses a lintel (beam) laid across stone posts or columns. This method predates Stonehenge, and refined versions were used by the Egyptians, Persians, Greeks, and Romans.
Arch masonry. Also called "arcuated" construction, in contrast with trabeated systems; the two are ancient methods for creating a void below a stone span (either a lintel or an arch).
Rubble masonry. Masonry using rough, undressed stone; the term is antonymous to "ashlar". Rubble can serve as infill in an ashlar wall, as an ingredient of cyclopean concrete, and in other contexts.
Dry stone. Stone walls built without mortar, using the shape of the stones, compression, and friction for stability. This technique encompasses cyclopean masonry and other mortar-less methods, but is conventionally used to describe agricultural walls used to mark boundaries, contain livestock, and retain soil.
Cyclopean masonry. Dry wall construction using massive boulders that may have been shaped to fit together. Polygonal masonry is a subtype of cyclopean masonry where the boulders are shaped into polygonal profiles.
Modern stonemasonry systems
In the modern era, stone has been largely relegated to a cosmetic element of buildings, often used as decorative cladding on steel-reinforced concrete. This is despite its wide historical use in large compressive structures: 50-m bridges and colosseums in Roman times, ~65-m tall cathedrals since the Middle Ages, and 12-story apartment buildings built in the 1690s.
Modern stonemasonry techniques
Stone veneer is used as a protective and decorative covering for interior or exterior walls and surfaces. The veneer is typically 1 in (25.4 mm) thick and must weigh less than 15 lb per square foot (73 kg/m²) so that no additional structural supports are required; a simple weight check is sketched after this list. The structural wall is put up first, and thin, flat stones are mortared onto the face of the wall. Metal tabs in the structural wall are mortared between the stones to tie everything together, to prevent the stonework from separating from the wall.
Massive precut stone. Also known as "prefabricated stone", "massive stone", "pre-sized stone", or "pré-taille" stone; it is described in detail in its own section below.
Post-tensioned stone. A high-performance composite construction material consisting of stone held in compression with tension elements. The tension elements can be connected to the outside of the stone, but more typically tendons are threaded internally through a duct formed from aligned drilled holes; a simple prestress calculation is sketched after this list.
Pre-tensioned stone. Using an epoxy shear connector, early experiments have shown that it is possible to pre-tension stone, maintaining the tendon under tension while the liquid epoxy is injected and allowed to set.
Digital stereotomy. Using CAD and computer models of load, modern designers are able to cut complex vaults, arches, and other arrangements of precisely cut ashlars. Antecedents to this discipline include curved vaults as well as flat vaults, such as the concentric flat-arch vault and the Abeille flat vault. Using digital design and machining, such ashlars can be shaped into complex compressive structures. Leaders in this area include Giuseppe Fallacara and Philippe Block.
Trabeated stone exoskeleton. In the modern era, post-and-lintel construction was adapted for use as a stone exoskeleton in the design of 15 Clerkenwell Close. The stone exoskeleton method is a variant of the massive precut stone method: the ends of the posts and lintels are precisely precut offsite prior to assembly by crane. At least two more trabeated stone exoskeleton high-rise buildings are underway, one in London and another in Bristol.
Stone bricks. Small stone ashlars that are cut by the quarry to brick sizing to allow their use in standardized brick-laying workflows. Cost is similar to clay composite bricks, but with greatly reduced carbon emissions. As stone does not change size like fired clay bricks, brick-sized stone ashlars do not require expansion joints.
Cyclopean concrete. This method uses a combination of cyclopean masonry and rubble masonry: boulders and/or rubble are placed in a form (or in a ditch), and concrete is poured on top to bind the stones together before removing the form. Variations of this include Frank Lloyd Wright's 'desert masonry' and Institut Balear de l'Habitatge's cyclopean concrete blocks, which are cast in a large slab and precisely sawn for use as prefabricated masonry in the massive precut stone system.
Slipform stonemasonry is a variation of Cyclopean concrete stone-wall construction that uses formwork to contain the rocks and mortar while keeping the walls straight. Short forms, up to two feet tall, are placed on both sides of the wall to serve as a guide for the stonework. Stones are placed inside the forms with the good faces against the formwork. Concrete is poured behind the rocks. Rebar is added for strength, to make a wall that is approximately half reinforced concrete and half stonework. The wall can be faced with stone on one side or both sides.
Formwork stone. "Pierre banchée" in French. Uses stone tiles or ashlars as shuttering for pouring concrete. These are left in place after the concrete sets. This is the inverse procedure to stone cladding, where the stone tiles are attached to the concrete after the temporary shuttering has been removed. Developed by Fernand Pouillon to accelerate construction. Formwork stone is distinct from cyclopean concrete in that the former uses rectilinear tiles, while the latter uses boulders and/or cobblestone.
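For the stone veneer limit quoted in the list above, a quick check shows why roughly 1 in is the practical panel thickness: at typical stone densities the weight approaches the 15 lb/ft² (about 73 kg/m²) threshold. The Python sketch below uses assumed handbook densities, not figures from this article.

    # Minimal sketch (assumed densities): does a 1 in veneer stay under ~73 kg/m2?
    LIMIT_KG_M2 = 15 * 0.45359237 / 0.09290304   # 15 lb/ft2 expressed in kg/m2 (~73.2)

    def panel_weight_kg_m2(density_kg_m3, thickness_m):
        return density_kg_m3 * thickness_m

    for name, rho in (("granite", 2700.0), ("limestone", 2500.0)):
        w = panel_weight_kg_m2(rho, 0.0254)   # 1 in (25.4 mm) thick
        print(name, round(w), w <= LIMIT_KG_M2)   # both stay just under the limit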
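For the post-tensioned stone entry above, the governing idea is that the axial prestress must keep every fibre of the stone in compression once bending stresses are added. The Python sketch below applies elementary beam theory to a made-up example; all dimensions, loads and the approach itself are illustrative assumptions, not a design method from this article.

    # Minimal sketch (illustrative only): tendon force keeping a stone beam in compression.
    b, h, span = 0.30, 0.45, 4.0   # width, depth, span (m) - assumed values
    w = 5000.0                     # uniform load including self-weight (N/m) - assumed
    A = b * h                      # cross-sectional area (m^2)
    I = b * h**3 / 12              # second moment of area (m^4)
    M = w * span**2 / 8            # midspan moment for a simply supported beam (N*m)
    sigma_bend = M * (h / 2) / I   # extreme-fibre bending stress (Pa)
    P_required = sigma_bend * A    # centroidal prestress that cancels the bending tension
    print(round(sigma_bend / 1e6, 2), "MPa bending stress")
    print(round(P_required / 1e3), "kN tendon force needed")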
Massive precut stone
Massive precut stone, also known as "prefabricated" or "pre-sized" stone, is a modern method of building with load-bearing stone. Precut stone is a DFMA construction method that uses large machine-cut stone blocks with precisely defined dimensions to rapidly assemble buildings in which stone is used as a major or the primary load-bearing material.
Massive precut stone construction was originally developed in the postwar period by Fernand Pouillon, who referred to the method as "pierre de taille" or "pré-taille" stone. It became possible through innovations by Pouillon and Paul Marcerou, a masonry engineer at a quarry in Fontvieille, who adapted high-precision saws from the timber industry to quarrying and stone sawing.
The key technique of massive precut (MP) stone is to cut stone ashlars to precise dimensions that match the architect's plan such that the stones can be dropped into place by crane for rapid construction. The blocks may be numbered so that the masons can follow the plan procedurally. The use of massive blocks reduces costs by minimizing sawing and fixer-masonry costs. The use of a crane reduces labor, accelerates construction, and allows the masons to precisely and quickly position the blocks.
Design features of massive precut stone
MP stone is defined by three design attributes.
Load-bearing stone. This distinguishes it from cosmetic precut stone, which is used for cladding and decoration. Historically, load-bearing stone has been the most durable construction method.
Massive block sizes. The heuristic definition of 'massive stone' is a block that is too heavy to be lifted by hand. Using massive dimensions has three critical benefits: (1) it minimizes cuts, which lowers cost and shortens production time; (2) it increases the thermal mass of the walls, helping to regulate temperature in the building; and (3) it makes use of crane construction, thereby lowering manual labor, shortening assembly time, and reducing mortar use and cost.
Precise offsite dimension cuts. Precutting can be done at the quarry, or at a masonry workshop by sawyer and banker masons. The precision amounts to a form of prefabrication, such that the masons do not have to make adjustments onsite, and construction is an assembly process. Precise ashlar interfaces also reduce the amount of mortar required.
Types of massive precut stone construction
Massive precut ashlars. Blocks cut precisely on four to six sides, used to assemble walls, lintels over windows and doors, and in flat arches.
End-shaped massive precut posts and lintels. Quarry-finished blocks with precisely shaped ends for assembly into post-and-lintel frameworks.
Massive cyclopean concrete blocks. Developed by IBAVI in Mallorca, rough stones are placed in a mold and saturated with concrete. The concrete is sawn into massive ashlars for crane assembly. Enables reuse of rough plum stones from traditional stone masonry.
Benefits of massive precut stone construction
MP stone construction has advantages over conventional masonry and concrete construction.
Build speed. The use of precisely cut and numbered ashlars, combined with crane-assisted assembly, significantly reduces construction time compared to traditional stone-masonry techniques. Compared to concrete construction, MP stone is faster as there is no setting wait time.
Cost reduction. Compared to brick masonry or smaller ashlars, larger stone blocks minimize sawing and fixer-masonry costs, reducing the overall expense of constructing a building.
Labor Efficiency. The use of cranes and a well-organized construction plan reduces the labor required, lowering costs and reducing the wait time for skilled mason availability.
Design efficiency. The precision of machine-cut ashlars allows for more modularity in design, enabling architects to create building designs quickly.
Durability. Buildings constructed using massive precut stone maintain the inherent durability and longevity of stone construction, offering long-lasting and low-maintenance structures. At least one study of a modern 20-storey unreinforced MP-stone tower has suggested this method has good seismic resilience: "With regard to the current Algerian seismic design regulation, the results obtained in terms of time period, frequency, storey drifts and displacements showed that the… (Diar Es Saada massive precut stone)… tower can be considered as an earthquake-resistant building fulfilling the required structural safety conditions."
Aesthetics. Compared to concrete and other materials, massive precut stone construction yields visually striking and distinctive buildings that showcase the natural beauty of stone.
Environmental benefits. The use of a material with lower embodied carbon contributes to a more sustainable building process, minimizing the environmental impact. Load-bearing stone construction emits around one-tenth the carbon of a comparable concrete building. As 80% of energy use is non-grid fossil fuel, and construction is responsible for 8% of carbon emissions, replacing coal-burning concrete production with lower-energy dimension-stone production could have a substantial impact on net-zero goals.
Reusability. When a building has reached the end of its usefulness, massive rectilinear ashlars are easily reused in new construction.
Thermal Performance. As with traditional stone construction, massive precut stone buildings benefit from excellent thermal mass, helping to regulate indoor temperatures and reduce energy consumption for heating and cooling.
History of massive precut stone
Fernand Pouillon pioneered the use of massive precut stone in modern architecture. During the post-war period, his innovative approach to stone construction led to the development of numerous noteworthy projects, with a particular focus on housing. Throughout his long career, Pouillon played a crucial role in the development and popularization of massive precut stone construction techniques. His pioneering work laid the foundation for subsequent architects to build upon and innovate, leading to the resurgence and expansion of this construction method in the 2020s, with the rise of interest in low-carbon durable construction.
Tensioned stone
Post-tensioned stone is a high-performance composite construction material: stone held in compression with tension elements. The tension elements can be connected to the outside of the stone, but are more typically tendons threaded internally through a duct formed from aligned drilled holes.
Post-tensioned stone ("PT stone") could consist of a single piece, but drill limitations and other considerations mean it is typically an assembly of multiple components with mortar between pieces. PT stone has been used in both vertical columns (posts), and in horizontal beams (lintels). It has also been used in more unusual engineering applications: arch stabilization, flexible foot bridges, and cantilevered sculptures.
Tensioned stone has a close affiliation with massive precut stone as two central techniques of modern stonemasonry.
Rationale
"Post-tensioned stone increases the failure load of stone in bending, but also the stiffness of a structure by reducing joint cracking. This method of construction is widely used for concrete structures, but the advantages of using similar techniques with stone are only just being realised".
Stone has great compressive strength, so it is ideal in compressive structures like stone arches.
However, it has relatively weak flexural strength (compared to steel or wood), so in isolation it cannot be safely used in wide spans under tension.
For concrete, this problem was solved long ago: in addition to conventional tensile reinforcement, engineers developed prestressed concrete methods starting around 1888. Such tension-reinforced concrete combines the compressive strength of concrete with the tensile strength of pre-stressed steel, giving a combined strength much greater than either component alone, and has been in wide use for decades. As with concrete, post-tensioning maintains stone in compression, thereby increasing its strength.
Post-tensioning is achieved with steel tendons either threaded through ducts within the stone elements or run along their surfaces. Once the stone components are in place, the tendons are tensioned using hydraulic jacks, and the force is transferred to the stone through anchorages located at the ends of the tendons. The tensioning process imparts a compressive force to the stone, which improves its capacity to resist tensile stresses that could otherwise cause cracking or failure.
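The effect can be sketched with a simplified, textbook-style relation for a concentrically prestressed member (an illustrative idealization, not a formula given above): with prestressing force \(P\), cross-sectional area \(A\), bending moment \(M\), extreme-fibre distance \(c\), and second moment of area \(I\), the stress at the extreme fibres is

$$\sigma \;=\; -\frac{P}{A} \;\pm\; \frac{M\,c}{I}$$

(taking compression as negative), so the assembly remains entirely in compression, and the joints stay closed, as long as the precompression \(P/A\) is at least as large as the bending tension \(Mc/I\).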
Energy use and carbon emissions
Stone is 'natural precast concrete', so it only needs to be cut (and strength tested) and post-tensioned prior to use in construction. Compared to concrete and steel, post-tensioned stone production has dramatically lower energy costs, with concomitantly lower carbon emissions.
History of stonemasonry
Stonemasonry is one of the earliest trades in civilization's history. During the time of the Neolithic Revolution and domestication of non-human animals, people learned how to use fire to create quicklime, plasters, and mortars. They used these to fashion homes for themselves with mud, straw or stone, and masonry was born.
The Ancients relied heavily on the stonemason to build the most impressive and long-lasting monuments to their civilizations. The Egyptians built their pyramids, the civilizations of Central America had their step pyramids, the Persians their palaces, the Greeks their temples, and the Romans their public works and wonders (see Roman architecture). People of the Indus Valley Civilization, such as at Dholavira, built entire cities characterized by stone architecture. Among the famous ancient stonemasons is Sophroniscus, the father of Socrates, who was a stone-cutter.
Castle building was an entire industry for the medieval stonemasons. When the Western Roman Empire fell, building in dressed stone decreased in much of Western Europe, and there was a resulting increase in timber-based construction. Stonework experienced a resurgence in the 9th and 10th centuries in Europe, and by the 12th-century religious fervour resulted in the construction of thousands of impressive churches and cathedrals in stone across Western Europe.
Medieval stonemasons' skills were in high demand, and guild membership gave rise to three classes of stonemasons: apprentices, journeymen, and master masons. Apprentices were indentured to their masters as the price of their training, journeymen were qualified craftsmen who were paid by the day, and master masons were freemen who could travel as they wished to work on their patrons' projects, operate as self-employed craftsmen, and train apprentices. During the Renaissance, the stonemasons' guild admitted members who were not stonemasons, and it eventually evolved into the Society of Freemasonry: fraternal groups which observe the traditional culture of stonemasons but are not typically involved in modern construction projects.
A medieval stonemason would often carve a personal symbol onto their block to differentiate their work from that of other stonemasons. This also provided a simple ‘quality assurance’ system.
The Renaissance saw stonemasonry return to the prominence and sophistication of the Classical age. The rise of the humanist philosophy gave people the ambition to create marvelous works of art. The centre stage for the Renaissance would prove to be Italy, where Italian city-states such as Florence erected great structures, including the Florence Cathedral, the Fountain of Neptune, and the Laurentian Library, which was planned and built by Michelangelo Buonarroti, a famous sculptor of the Renaissance.
When Europeans settled the Americas, they brought the stonemasonry techniques of their respective homelands with them. Settlers used what materials were available, and in some areas stone was the material of choice. In the first waves, building mimicked that of Europe; it was eventually replaced by unique local architecture.
In the 20th century, stonemasonry saw its most radical changes in the way the work is accomplished. Until the first half of the century, most heavy work was executed by draft animals or human muscle power. With the arrival of the internal combustion engine, many of these hard aspects of the trade were made simpler and easier. Cranes and forklifts have made moving and laying heavy stones relatively easy for stonemasons. Motor-powered mortar mixers have saved much time and energy as well. Compressed-air powered tools have made working of stone less time-intensive. Petrol and electric-powered abrasive saws can cut through stone much faster and with more precision than chiseling alone. Cemented carbide-tipped chisels can stand up to much more abuse than the steel and iron chisels made by blacksmiths of old.
Tools
Stonemasons use a wide variety of tools to handle and shape stone blocks (ashlar) and slabs into finished articles. The basic tools for shaping the stone are a mallet, chisels, and a metal straight edge. With these one can make a flat surface – the basis of all stonemasonry.
Chisels come in a variety of sizes and shapes, dependent upon the function for which they are being used and have many different names depending on locality. There are different chisels for different materials and sizes of material being worked, for removing large amounts of material and for putting a fine finish on the stone. A drove chisel is used for smoothing off roughly finished stones.
Mixing mortar is normally done today with mortar mixers which usually use a rotating drum or rotating paddles to mix the mortar.
The masonry trowel is used for the application of the mortar between and around the stones as they are set into place. Filling in the gaps (joints) with mortar is referred to as pointing. Pointing in smaller joints can be accomplished using tuck pointers, pointing trowels, and margin trowels, among other tools.
A mason's hammer has a long, thin head and is called a punch hammer. It is used with a chisel or splitter for a variety of purposes.
A walling hammer (catchy hammer) can be used in place of a hammer and chisel or pincher to produce rubble or pinnings or snecks.
Stonemasons use a lewis together with a crane or block and tackle to hoist building stones into place.
Today, power tools such as compressed-air chisels, abrasive spinners, and angle grinders are much used: these save time and money, but are hazardous and require just as much skill as the hand tools they augment. Many of the basic tools of stonemasonry have nevertheless remained virtually the same for thousands of years; the common sizes and shapes of chisels that can be bought today are virtually unchanged from those found at the pyramids of Giza.
Training
Traditionally medieval stonemasons served a seven-year apprenticeship. A similar system still operates today.
A modern apprenticeship lasts three years. It combines on-site learning through personal experience and the experience of the tradesmen with college work, where apprentices are given an overall grounding in the building, hewing, and theory work involved in masonry. In some areas, colleges offer courses which teach not only the manual skills but also related fields such as drafting and blueprint reading or construction conservation. Electronic stonemasonry training resources complement traditional delivery techniques, and hands-on workshops are another good way to learn the trade. Those wishing to become stonemasons should have little problem working at heights, possess reasonable hand-eye coordination, be moderately physically fit, and have basic mathematical ability. Most of these attributes can be developed while learning.
The modern stonemason undergoes comprehensive training, both in the classroom and in the working environment. Hands-on skill is complemented by an intimate knowledge of each stone type, its applications and best uses, and how to work and fix each stone in place. A mason may be skilled and competent to carry out one or all of the various branches of stonemasonry. In some areas the trend is towards specialization, in other areas towards adaptability.
Types of stone
Stonemasons use all types of natural stone: igneous, metamorphic, and sedimentary; some also use artificial stone.
Igneous stones
Granite is one of the hardest stones, and requires such different techniques to sedimentary stones that it is virtually a separate trade. With great persistence, simple mouldings can and have been carved from granite, for example in many Cornish churches and in the city of Aberdeen. Generally, however, it is used for purposes that require its strength and durability, such as kerbstones, countertops, flooring, and breakwaters.
Igneous stone ranges from very soft rocks such as pumice and scoria to somewhat harder rocks such as tuff to the hardest rocks such as granite and basalt.
Metamorphic
Marble is a fine, easily worked stone that comes in various colours, but mainly white. It has traditionally been used for carving statues and for the facings of many Byzantine and Italian Renaissance buildings. Prominent Greek sculptors such as Antenor (6th century BC), Phidias and Critias (5th century BC), and Praxiteles (4th century BC) used mainly the marble of the islands of Paros and Thassos, and the whitest and brightest of all (although not the finest), Pentelic marble. Their work was preceded by older sculptors from Mesopotamia and Egypt, but the Greeks were unmatched in plasticity and realistic representation, whether of gods (Apollo, Aphrodite, Hermes, Zeus, etc.) or humans (Pythagoras, Socrates, Plato, Phryne, etc.). The famous Acropolis of Athens is said to have been constructed using Pentelic marble. The traditional home of the marble industry is the area around Carrara in Italy, from which a bright, fine, whitish marble is extracted in vast quantities.
Slate is a popular choice of stone for memorials and inscriptions, as its fine grain and hardness means it leaves details very sharp. Its tendency to split into thin plates has also made it a popular roofing material.
Sedimentary
Many of the world's most famous buildings have been built of sedimentary stone, from Durham Cathedral to St Peter's in Rome. There are two main types of sedimentary stone used in masonry work: limestones and sandstones. Examples of limestones include Bath and Portland stone. Yorkstone and Sydney sandstone are the most commonly used sandstones.
| Technology | Building materials | null |
214938 | https://en.wikipedia.org/wiki/Leptin | Leptin | Leptin (from Greek λεπτός leptos, "thin" or "light" or "small"), also known as obese protein, is a protein hormone predominantly made by adipocytes (cells of adipose tissue). Its primary role is likely to regulate long-term energy balance.
As one of the major signals of energy status, leptin levels influence appetite, satiety, and motivated behaviors oriented toward the maintenance of energy reserves (e.g., feeding, foraging behaviors).
The amount of circulating leptin correlates with the amount of energy reserves, mainly triglycerides stored in adipose tissue. High leptin levels are interpreted by the brain as a signal that energy reserves are high, whereas low leptin levels indicate that energy reserves are low, prompting the organism to adapt to starvation through a variety of metabolic, endocrine, neurobiochemical, and behavioral changes.
Leptin is coded for by the LEP gene. Leptin receptors are expressed by a variety of brain and peripheral cell types. These include cell receptors in the arcuate and ventromedial nuclei, as well as other parts of the hypothalamus and dopaminergic neurons of the ventral tegmental area, consequently mediating feeding.
Although regulation of fat stores is deemed to be the primary function of leptin, it also plays a role in other physiological processes, as evidenced by its many sites of synthesis other than fat cells, and the many cell types beyond hypothalamic cells that have leptin receptors. Many of these additional functions are yet to be fully defined.
In obesity, a decreased sensitivity to leptin occurs (similar to insulin resistance in type 2 diabetes), resulting in an inability to detect satiety despite high energy stores and high levels of leptin.
Effects
Predominantly, the "energy expenditure hormone" leptin is made by adipose cells, and is thus labeled fat cell-specific. In the context of its effects, the short describing words central, direct, and primary are not used interchangeably. In regard to the hormone leptin, central vs peripheral refers to the hypothalamic portion of the brain vs non-hypothalamic location of action of leptin; direct vs indirect refers to whether there is no intermediary, or there is an intermediary in the mode of action of leptin; and primary vs secondary is an arbitrary description of a particular function of leptin.
Location of action: The central location of action (effect) of the fat cell-specific hormone leptin is the hypothalamus, a part of the brain, which is a part of the central nervous system. Non-hypothalamic targets of leptin are referred to as peripheral targets. There is a different relative importance of central and peripheral leptin interactions under different physiologic states, and variations among species.
Mode of action: Leptin acts directly on leptin receptors in the cell membrane of different types of cells in the human body in particular, and in vertebrates in general. The leptin receptor is found on a wide range of cell types. It is a single-transmembrane-domain type I cytokine receptor, a special class of cytokine receptors. Further, leptin interacts with other hormones and energy regulators, indirectly mediating the effects of insulin, glucagon, insulin-like growth factor, growth hormone, glucocorticoids, cytokines, and metabolites.
Function: The primary function of the hormone leptin is the regulation of adipose tissue mass through central, hypothalamus-mediated effects on hunger, food energy use, physical exercise, and energy balance. Outside the brain, in the periphery of the body, leptin's secondary functions are: modulation of energy expenditure, modulation between fetal and maternal metabolism, and roles as a permissive factor in puberty, an activator of immune cells, an activator of beta islet cells, and a growth factor.
Central nervous system
In vertebrates, the nervous system consists of two main parts, the central nervous system (CNS) and the peripheral nervous system (PNS). The primary effect of leptins is in the hypothalamus, a part of the central nervous system. Leptin receptors are expressed not only in the hypothalamus but also in other brain regions, particularly in the hippocampus. Thus some leptin receptors in the brain are classified as central (hypothalamic) and some as peripheral (non-hypothalamic).
As currently understood, the general effects of leptin in the central nervous system are:
Deficiency of leptin has been shown to alter brain proteins and neuronal functions of obese mice, changes that can be restored by leptin injection
Leptin receptor signaling in the hippocampus enhances learning and memory; treatment with leptin has been shown to enhance learning and memory in animal models
In humans, low circulating plasma leptin has been associated with cognitive changes associated with anorexia, depression, and Alzheimer's Disease
Studies in transgenic mouse models of Alzheimer's disease have shown that chronic administration of leptin can ameliorate brain pathology and improve cognitive performance, by reducing β-amyloid and hyperphosphorylated tau, two hallmarks of Alzheimer's pathology.
Generally, leptin is thought to enter the brain at the choroid plexus, where the intense expression of a form of leptin receptor molecule could act as a transport mechanism.
Increased levels of melatonin cause a downregulation of leptin; however, melatonin also appears to increase leptin levels in the presence of insulin, thereby decreasing appetite during sleep. Partial sleep deprivation has also been associated with decreased leptin levels.
Mice with type 1 diabetes treated with leptin, or with leptin plus insulin, had better metabolic profiles than those treated with insulin alone: blood sugar fluctuated less, cholesterol levels decreased, and less body fat formed.
Hypothalamus
Leptin acts on receptors in the lateral hypothalamus to inhibit hunger and the medial hypothalamus to stimulate satiety.
In the lateral hypothalamus, leptin inhibits hunger by
counteracting the effects of neuropeptide Y, a potent hunger promoter secreted by cells in the gut and in the hypothalamus
counteracting the effects of anandamide, another potent hunger promoter that binds to the same receptors as THC
In the medial hypothalamus, leptin stimulates satiety by
promoting the synthesis of α-MSH, a hunger suppressant
Thus, a lesion in the lateral hypothalamus causes anorexia (due to a lack of hunger signals) and a lesion in the medial hypothalamus causes excessive hunger (due to a lack of satiety signals).
This appetite inhibition is long-term, in contrast to the rapid inhibition of hunger by cholecystokinin (CCK) and the slower suppression of hunger between meals mediated by PYY3-36. The absence of leptin (or its receptor) leads to uncontrolled hunger and resulting obesity. Fasting or following a very-low-calorie diet lowers leptin levels.
Leptin levels change more when food intake decreases than when it increases. The dynamics of leptin due to an acute change in energy balance may be related to appetite and eventually, to food intake rather than fat stores.
Leptin controls food intake and energy expenditure by acting on receptors in the mediobasal hypothalamus.
Leptin binds to neuropeptide Y (NPY) neurons in the arcuate nucleus in such a way as to decrease the activity of these neurons. Leptin signals to the hypothalamus which produces a feeling of satiety. Moreover, leptin signals may make it easier for people to resist the temptation of foods high in calories.
Leptin receptor activation inhibits neuropeptide Y and agouti-related peptide (AgRP), and activates α-melanocyte-stimulating hormone (α-MSH). The NPY neurons are a key element in the regulation of hunger; small doses of NPY injected into the brains of experimental animals stimulate feeding, while selective destruction of the NPY neurons in mice causes them to become anorexic. Conversely, α-MSH is an important mediator of satiety, and differences in the gene for the α-MSH receptor are linked to obesity in humans.
Leptin interacts with six types of receptors (Ob-Ra–Ob-Rf, or LepRa-LepRf), which in turn are encoded by a single gene, LEPR. Ob-Rb is the only receptor isoform that can signal intracellularly via the JAK-STAT and MAPK signal transduction pathways, and is present in hypothalamic nuclei.
Once leptin has bound to the Ob-Rb receptor, it activates STAT3, which is phosphorylated and travels to the nucleus to effect changes in gene expression, one of the main effects being the down-regulation of the expression of endocannabinoids, which are responsible for increasing hunger. In response to leptin, receptor neurons have been shown to remodel themselves, changing the number and types of synapses that fire onto them.
Circulatory system
The role of leptin and leptin receptors in modulating T cell activity and the innate immune system was shown in experiments with mice. Leptin modulates the immune response to atherosclerosis, of which obesity is a predisposing factor and exercise a mitigating one.
Exogenous leptin can promote angiogenesis by increasing vascular endothelial growth factor levels.
Hyperleptinemia produced by infusion or adenoviral gene transfer decreases blood pressure in rats.
Leptin microinjections into the nucleus of the solitary tract (NTS) have been shown to elicit sympathoexcitatory responses, and potentiate the cardiovascular responses to activation of the chemoreflex.
Fetal lung
In fetal lung, leptin is induced in the alveolar interstitial fibroblasts ("lipofibroblasts") by the action of PTHrP secreted by formative alveolar epithelium (endoderm) under moderate stretch. The leptin from the mesenchyme, in turn, acts back on the epithelium at the leptin receptor carried in the alveolar type II pneumocytes and induces surfactant expression, which is one of the main functions of these type II pneumocytes.
Reproductive system
Ovulatory cycle
In mice, and to a lesser extent in humans, leptin is required for male and female fertility. Ovulatory cycles in females are linked to energy balance (positive or negative depending on whether a female is losing or gaining weight) and energy flux (how much energy is consumed and expended) much more than energy status (fat levels). When energy balance is highly negative (meaning the woman is starving) or energy flux is very high (meaning the woman is exercising at extreme levels, but still consuming enough calories), the ovarian cycle stops and females stop menstruating. Only if a female has an extremely low body fat percentage does energy status affect menstruation. Leptin levels outside an ideal range may have a negative effect on egg quality and outcome during in vitro fertilization. Leptin is involved in reproduction by stimulating gonadotropin-releasing hormone from the hypothalamus.
Pregnancy
The placenta produces leptin. Leptin levels rise during pregnancy and fall after childbirth. Leptin is also expressed in fetal membranes and the uterine tissue. Uterine contractions are inhibited by leptin. Leptin plays a role in hyperemesis gravidarum (severe morning sickness of pregnancy), in polycystic ovary syndrome, and hypothalamic leptin is implicated in bone growth in mice.
Lactation
Immunoreactive leptin has been found in human breast milk; and leptin from mother's milk has been found in the blood of suckling infant animals.
Puberty
Leptin, along with kisspeptin, controls the onset of puberty. High levels of leptin, as usually observed in obese females, can trigger a neuroendocrine cascade resulting in early menarche. This may eventually lead to shorter stature, as oestrogen secretion starts at menarche and causes early closure of the epiphyses.
Bone
Leptin's role in regulating bone mass was identified in 2000. Leptin can affect bone metabolism via direct signalling from the brain. Leptin decreases cancellous bone, but increases cortical bone. This "cortical-cancellous dichotomy" may represent a mechanism for enlarging bone size, and thus bone resistance, to cope with increased body weight.
Bone metabolism can be regulated by central sympathetic outflow, since sympathetic pathways innervate bone tissue. A number of brain-signalling molecules (neuropeptides and neurotransmitters) have been found in bone, including adrenaline, noradrenaline, serotonin, calcitonin gene-related peptide, vasoactive intestinal peptide, and neuropeptide Y. Leptin binds to its receptors in the hypothalamus, where it acts through the sympathetic nervous system to regulate bone metabolism. Leptin may also act directly on bone metabolism via a balance between energy intake and the IGF-I pathway. There is a potential for treatment of diseases of bone formation - such as impaired fracture healing - with leptin.
Immune system
Factors that acutely affect leptin levels are also factors that influence other markers of inflammation, e.g., testosterone, sleep, emotional stress, caloric restriction, and body fat levels. While it is well-established that leptin is involved in the regulation of the inflammatory response, it has been further theorized that leptin's role as an inflammatory marker is to respond specifically to adipose-derived inflammatory cytokines.
In terms of both structure and function, leptin resembles IL-6 and is a member of the cytokine superfamily. Circulating leptin seems to affect the HPA axis, suggesting a role for leptin in stress response. Elevated leptin concentrations are associated with elevated white blood cell counts in both men and women.
Similar to what is observed in chronic inflammation, chronically elevated leptin levels are associated with obesity, overeating, and inflammation-related diseases, including hypertension, metabolic syndrome, and cardiovascular disease. While leptin is associated with body fat mass, the size of individual fat cells, and overeating, it is not affected by exercise (for comparison, IL-6 is released in response to muscular contractions). Thus, it is speculated that leptin responds specifically to adipose-derived inflammation. Leptin is a pro-angiogenic, pro-inflammatory, and mitogenic factor, the actions of which are reinforced through crosstalk with IL-1 family cytokines in cancer. High leptin levels have been also demonstrated in patients with COVID-19 pneumonia.
Taken as such, increases in leptin levels (in response to caloric intake) function as an acute pro-inflammatory response mechanism to prevent excessive cellular stress induced by overeating. When high caloric intake overtaxes the ability of fat cells to grow larger or increase in number in step with caloric intake, the ensuing stress response leads to inflammation at the cellular level and ectopic fat storage, i.e., the unhealthy storage of body fat within internal organs, arteries, and/or muscle. The insulin increase in response to the caloric load provokes a dose-dependent rise in leptin, an effect potentiated by high cortisol levels. (This insulin-leptin relationship is notably similar to insulin's effect on the increase of IL-6 gene expression and secretion from preadipocytes in a time- and dose-dependent manner.) Furthermore, plasma leptin concentrations have been observed to gradually increase when acipimox is administered to prevent lipolysis, concurrent hypocaloric dieting and weight loss notwithstanding. Such findings appear to demonstrate high caloric loads in excess of storage rate capacities of fat cells lead to stress responses that induce an increase in leptin, which then operates as an adipose-derived inflammation stopgap signaling for the cessation of food intake so as to prevent adipose-derived inflammation from reaching elevated levels. This response may then protect against the harmful process of ectopic fat storage, which perhaps explains the connection between chronically elevated leptin levels and ectopic fat storage in obese individuals.
Leptin increases the production of leukocytes via actions on the hematopoietic niche, a pathway that is more active in sedentary mice and humans when compared to individuals who are physically active.
Location of gene and structure of hormone
The Ob(Lep) gene (Ob for obese, Lep for leptin) is located on chromosome 7 in humans. Human leptin is a 16-kDa protein of 167 amino acids.
Mutations
A human mutant leptin was first described in 1997, and six additional mutations were subsequently described. All of those affected were from Eastern countries, and all had variants of leptin not detected by the standard immunoreactive technique, so their leptin levels were low or undetectable. The eighth and most recently described mutation, reported in January 2015 in a child of Turkish parents, is unique in that it is detected by the standard immunoreactive technique and leptin levels are elevated, but the leptin does not activate the leptin receptor, so the patient has functional leptin deficiency. These eight mutations all cause extreme obesity in infancy, with hyperphagia.
Nonsense
A nonsense mutation in the leptin gene that results in a stop codon and lack of leptin production was first observed in mice. In the mouse gene, arginine-105 is encoded by CGA and only requires one nucleotide change to create the stop codon TGA. The corresponding amino acid in humans is encoded by the sequence CGG and would require two nucleotides to be changed to produce a stop codon, which is much less likely to happen.
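The codon arithmetic above can be checked with a short, purely illustrative Python sketch (the helper function, codon spellings, and output shown are assumptions for illustration rather than material from the source):

```python
# Illustrative sketch: count how many single-nucleotide substitutions separate the
# arginine-105 codon from the nearest stop codon, to show why the mouse codon (CGA)
# mutates to a stop codon more readily than the human codon (CGG).

def substitutions_needed(codon: str, target: str) -> int:
    """Hamming distance: number of positions at which two equal-length codons differ."""
    return sum(1 for a, b in zip(codon, target) if a != b)

STOP_CODONS = ["TGA", "TAG", "TAA"]  # standard stop codons, coding-strand (DNA) notation

for species, arg105_codon in [("mouse", "CGA"), ("human", "CGG")]:
    # Find the stop codon reachable with the fewest base changes
    closest = min(STOP_CODONS, key=lambda stop: substitutions_needed(arg105_codon, stop))
    changes = substitutions_needed(arg105_codon, closest)
    print(f"{species}: {arg105_codon} -> {closest} needs {changes} substitution(s)")

# Expected output under these assumptions:
#   mouse: CGA -> TGA needs 1 substitution(s)
#   human: CGG -> TGA needs 2 substitution(s)
```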
Frameshift
A recessive frameshift mutation resulting in a reduction of leptin has been observed in two consanguineous children with juvenile obesity. A 2001 study of 13 people with a heterozygous frameshift mutation known as delta-G133 found that they had lower blood leptin levels than controls. There was an increased rate of obesity in these individuals, with 76% having a BMI of more than 30 compared to 26% in the control group.
Polymorphisms
A Human Genome Epidemiology (HuGE) review in 2004 looked at studies of the connection between genetic mutations affecting leptin regulation and obesity. The authors reviewed a common polymorphism in the leptin gene (A19G; frequency 0.46), three mutations in the leptin receptor gene (Q223R, K109R, and K656N), and two mutations in the PPARG gene (P12A and C161T). They found no association between any of the polymorphisms and obesity.
A 2006 study found a link between the common LEP-2548 G/A genotype and morbid obesity in Taiwanese aborigines, but a 2014 meta-analysis did not; however, this polymorphism has been associated with weight gain in patients taking antipsychotics.
The LEP-2548 G/A polymorphism has been linked with an increased risk of prostate cancer, gestational diabetes, and osteoporosis.
Other rare polymorphisms have been found, but their associations with obesity are not consistent.
Transversion
A single case of a homozygous transversion mutation of the gene encoding leptin was reported in January 2015. It leads to functional leptin deficiency with high leptin levels in circulation. The transversion (c.298G → T) changed aspartic acid to tyrosine at position 100 (p.D100Y). The mutant leptin could neither bind to nor activate the leptin receptor in vitro, nor in leptin-deficient mice in vivo. It was found in a two-year-old boy with extreme obesity and recurrent ear and pulmonary infections. Treatment with metreleptin led to "rapid change in eating behavior, a reduction in daily energy intake, and substantial weight loss."
Sites of synthesis
Leptin is produced primarily in the adipocytes of white adipose tissue. It also is produced by brown adipose tissue, placenta (syncytiotrophoblasts), ovaries, skeletal muscle, stomach (the lower part of the fundic glands), mammary epithelial cells, bone marrow, gastric chief cells, and P/D1 cells.
Blood levels
Leptin circulates in blood in free form and bound to proteins.
Physiologic variation
Leptin levels vary exponentially, not linearly, with fat mass. Leptin levels in blood are higher between midnight and early morning, perhaps suppressing appetite during the night. The diurnal rhythm of blood leptin levels may be modified by meal-timing.
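Written symbolically (a hedged restatement of the sentence above, not a fitted model from the source), an exponential dependence on fat mass \(FM\) means

$$\text{leptin} \;\approx\; A\,e^{k\,FM} \quad\Longleftrightarrow\quad \ln(\text{leptin}) \;\approx\; \ln A + k\,FM,$$

so equal increments of fat mass multiply circulating leptin rather than adding a fixed amount.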
In specific conditions
In humans, many instances are seen where leptin dissociates from the strict role of communicating nutritional status between body and brain and no longer correlates with body fat levels:
Leptin plays a critical role in the adaptive response to starvation.
Leptin level is decreased after short-term fasting (24–72 hours), even when changes in fat mass are not observed.
Serum level of leptin is reduced by sleep deprivation.
Leptin levels are paradoxically increased in obesity.
Leptin level is increased by emotional stress.
Leptin level is chronically reduced by physical exercise training.
Leptin level is decreased by increases in testosterone levels and increased by increases in estrogen levels.
Leptin level is increased by insulin.
Leptin release is increased by dexamethasone.
In obese patients with obstructive sleep apnea, leptin level is increased, but decreased after the administration of continuous positive airway pressure. In non-obese individuals, however, restful sleep (i.e., 8–12 hours of unbroken sleep) can increase leptin to normal levels.
In mutations
All known leptin mutations except one are associated with low to undetectable immunoreactive leptin blood levels. The exception is a mutant leptin reported in January 2015 that is not functional but is detected by standard immunoreactive methods. It was found in a massively obese two-year-old boy who had high levels of circulating leptin that had no effect on leptin receptors, so he was functionally leptin-deficient.
Role in disease
Obesity
Although leptin reduces appetite as a circulating signal, obese individuals generally exhibit a higher circulating concentration of leptin than normal-weight individuals due to their higher percentage of body fat. These people show resistance to leptin, similar to resistance to insulin in type 2 diabetes, with the elevated levels failing to control hunger and modulate their weight. A number of explanations have been proposed for this. An important contributor to leptin resistance is altered leptin receptor signalling, particularly in the arcuate nucleus; however, deficiency of, or major changes to, the leptin receptor itself are not thought to be a major cause. Triglycerides crossing the blood-brain barrier (BBB) can induce leptin and insulin resistance in the hypothalamus. Triglycerides can also impair leptin transport across the BBB.
Studies of leptin levels in cerebrospinal fluid (CSF) provide evidence for a reduction in leptin crossing the BBB and reaching obesity-relevant targets, such as the hypothalamus, in obese people. In humans it has been observed that the ratio of leptin in the CSF to that in the blood is lower in obese people than in people of normal weight. The reason for this may be high levels of triglycerides affecting the transport of leptin across the BBB, or the leptin transporter becoming saturated. Although deficits in the transfer of leptin from the plasma to the CSF are seen in obese people, they are still found to have 30% more leptin in their CSF than lean individuals. These higher CSF levels fail to prevent their obesity. Since the amount and quality of leptin receptors in the hypothalamus appear to be normal in the majority of obese humans (as judged from leptin-mRNA studies), it is likely that the leptin resistance in these individuals is due to a post-leptin-receptor deficit, similar to the post-insulin-receptor defect seen in type 2 diabetes.
When leptin binds to the leptin receptor, it activates a number of pathways. Leptin resistance may be caused by defects in one or more parts of this process, particularly the JAK/STAT pathway. Mice with a mutation in the leptin receptor gene that prevents the activation of STAT3 are obese and exhibit hyperphagia. The PI3K pathway may also be involved in leptin resistance, as has been demonstrated in mice by artificial blocking of PI3K signalling. The PI3K pathway is also activated by the insulin receptor and is therefore an important area where leptin and insulin act together as part of energy homeostasis. The insulin-PI3K pathway can cause POMC neurons to become insensitive to leptin through hyperpolarization.
Leptin is known to interact with amylin, a hormone involved in gastric emptying and creating a feeling of fullness. When both leptin and amylin were given to obese, leptin-resistant rats, sustained weight loss was seen. Due to its apparent ability to reverse leptin resistance, amylin has been suggested as possible therapy for obesity.
It has been suggested that the main role of leptin is to act as a starvation signal when levels are low, to help maintain fat stores for survival during times of starvation, rather than a satiety signal to prevent overeating. Leptin levels signal when an animal has enough stored energy to spend it in pursuits besides acquiring food. This would mean that leptin resistance in obese people is a normal part of mammalian physiology and possibly, could confer a survival advantage. Leptin resistance (in combination with insulin resistance and weight gain) is seen in rats after they are given unlimited access to palatable, energy-dense foods. This effect is reversed when the animals are put back on a low-energy diet. This also may have an evolutionary advantage: allowing energy to be stored efficiently when food is plentiful would be advantageous in populations where food frequently may be scarce.
A fad diet, the Rosedale diet is based on ideas about how leptin might affect weight. It is based on unsound science and marketed with unevidenced claims of health benefits.
Role in osteoarthritis with obesity
Obesity and osteoarthritis
Osteoarthritis and obesity are closely linked. Obesity is one of the most important preventable factors for the development of osteoarthritis.
Originally, the relationship between osteoarthritis and obesity was considered to be exclusively biomechanical, with the excess weight causing the joint to wear down more quickly. However, it is now recognised that there is also a metabolic component, which explains why obesity is a risk factor for osteoarthritis not only in weight-bearing joints (for example, the knees) but also in joints that do not bear weight (for example, the hands). Consequently, it has been shown that decreasing body fat lessens osteoarthritis to a greater extent than weight loss per se. This metabolic component is related to the release of pro-inflammatory systemic factors by adipose tissue, which are frequently and critically associated with the development of osteoarthritis.
Thus, the deregulated production of adipokines and inflammatory mediators, hyperlipidaemia, and increased systemic oxidative stress are conditions frequently associated with obesity that can favour joint degeneration. Furthermore, many regulatory factors have been implicated in the development, maintenance, and function both of adipose tissue and of cartilage and other joint tissues. Alterations in these factors may be an additional link between obesity and osteoarthritis.
Leptin and osteoarthritis
Adipocytes interact with other cells through producing and secreting a variety of signalling molecules, including the cell signalling proteins known as adipokines. Certain adipokines can be considered as hormones, as they regulate the functions of organs at a distance, and several of them have been specifically involved in the physiopathology of joint diseases. In particular, there is one, leptin, which has been the focus of attention for research in recent years.
The circulating leptin levels are positively correlated with body mass index (BMI), and more specifically with fat mass, and obese individuals have higher leptin levels in their blood circulation than non-obese individuals. In obese individuals, however, the increased circulating leptin levels fail to induce the expected responses (that is, reduced food intake and loss of body weight do not occur), as there is resistance to leptin. In addition to regulating energy homeostasis, leptin carries out roles in other physiological functions such as neuroendocrine communication, reproduction, angiogenesis, and bone formation. More recently, leptin has been recognised as a cytokine-like factor with pleiotropic actions in the immune response and inflammation. For example, leptin can be found in the synovial fluid in correlation with the body mass index, and leptin receptors are expressed in cartilage, where leptin mediates and modulates many inflammatory responses that can damage cartilage and other joint tissues. Leptin has thus emerged as a candidate to link obesity and osteoarthritis and as an apparent target for nutritional treatment of osteoarthritis.
As in the plasma, the leptin levels in the synovial fluid are positively correlated with BMI. The leptin in synovial fluid is synthesised at least partially in the joint and may originate in part from the circulation. Leptin has been shown to be produced by chondrocytes, as well as by other tissues in the joint, including the synovial tissue, osteophytes, the meniscus, and bone. The infrapatellar fat pad, located extrasynovially within the knee joint adjacent to the synovial membrane and cartilage, has recently been recognised as an important source of leptin, as well as of other adipokines and mediators that contribute to the pathogenesis of osteoarthritis.
The risk of suffering osteoarthritis can be decreased with weight loss. This reduction in risk is related in part to the decreased load on the joint, but also to the decrease in fat mass, in central adipose tissue, and in the low-level systemic inflammation associated with obesity.
This growing evidence points to leptin as a cartilage degradation factor in the pathogenesis of osteoarthritis and as a potential biomarker of disease progression, which suggests that leptin, as well as its regulation and signalling mechanisms, could be a new and promising target in the treatment of osteoarthritis, especially in obese patients.
Obese individuals are predisposed to developing osteoarthritis, not only due to the excess mechanical load but also due to the excess expression of soluble factors, that is, leptin and pro-inflammatory cytokines, which contribute to joint inflammation and cartilage destruction. As such, obese individuals are in an altered state due to a metabolic insufficiency, which requires specific nutritional treatment capable of normalising leptin production and reducing systemic low-level inflammation, in order to reduce the harmful impact of these systemic mediators on joint health.
There are nutritional supplements and pharmacological agents capable of targeting these factors and improving both conditions.
Therapeutic use
Leptin
Leptin was approved in the United States in 2014 for use in congenital leptin deficiency and generalized lipodystrophy.
Analog metreleptin
An analog of human leptin, metreleptin (trade names Myalept, Myalepta), was first approved in Japan in 2013, in the United States in February 2014, and in Europe in 2018. In the US it is indicated as a treatment for complications of leptin deficiency, and for the diabetes and hypertriglyceridemia associated with congenital or acquired generalized lipodystrophy. In Europe, according to the EMA, metreleptin should be used in addition to diet to treat lipodystrophy, in which patients have loss of fatty tissue under the skin and a build-up of fat elsewhere in the body, such as in the liver and muscles. The medicine is used in adults and children above the age of 2 years with generalised lipodystrophy (Berardinelli-Seip syndrome and Lawrence syndrome), and in adults and children above the age of 12 years with partial lipodystrophy (including Barraquer-Simons syndrome), when standard treatments have failed.
The National Health Service in England will commission metreleptin treatment for all with congenital leptin deficiency regardless of age beginning on April 1, 2019.
Research
Leptin is currently being evaluated as a potential target for the treatment of anorexia nervosa. It is hypothesized that the gradual loss of body fat mass, and more specifically the ensuing low leptin levels, escalate the preexisting drive for thinness into an obsessive-compulsive-like and addictive-like state. It was shown that short-term metreleptin treatment of patients with anorexia nervosa had a rapid onset of beneficial cognitive, emotional, and behavioral effects. Among other things, depression, drive for activity, repetitive thoughts of food, inner restlessness, and weight phobia decreased rapidly. Whether metreleptin (or another leptin analogue) is a suitable treatment for anorexia nervosa is currently unknown. Potential side effects are weight loss and the development of anti-metreleptin antibodies.
History
Leptin was discovered by Jeffrey Friedman in 1994, after several decades of research on obese mouse models conducted by other institutions since 1950.
Identification of the encoding gene
In 1949, a non-obese mouse colony being studied at the Jackson Laboratory produced a strain of obese offspring, suggesting that a mutation had occurred in a hormone regulating hunger and energy expenditure. Mice homozygous for the so-called ob mutation (ob/ob) ate voraciously and were massively obese. In the 1960s, a second mutation causing obesity and a similar phenotype was identified by Douglas Coleman, also at the Jackson Laboratory, and was named diabetes (db), as both ob/ob and db/db were obese. In 1990 Rudolph Leibel and Jeffrey M. Friedman reported mapping of the db gene.
Consistent with Coleman's and Leibel's hypothesis, several subsequent studies from Leibel's and Friedman's labs and other groups confirmed that the ob gene encoded a novel hormone that circulated in blood and that could suppress food intake and body weight in ob and wild type mice, but not in db mice.
In 1994, Friedman's laboratory reported the identification of the gene. In 1995, Jose F. Caro's laboratory provided evidence that the mutations in the mouse ob gene did not occur in humans. Furthermore, since ob gene expression was increased, not decreased, in human obesity, it suggested resistance to leptin to be a possibility. At the suggestion of Roger Guillemin, Friedman named this new hormone "leptin" from the Greek lepto meaning thin. Leptin was the first fat cell-derived hormone (adipokine) to be discovered.
Subsequent studies in 1995 confirmed that the db gene encodes the leptin receptor, and that it is expressed in the hypothalamus, a region of the brain known to regulate the sensation of hunger and body weight.
Recognition of scientific advances
Coleman and Friedman have been awarded numerous prizes acknowledging their roles in the discovery of leptin, including the Gairdner Foundation International Award (2005), the Shaw Prize (2009), the Lasker Award, the BBVA Foundation Frontiers of Knowledge Award, and the King Faisal International Prize. Leibel has not received the same level of recognition for the discovery because he was omitted as a co-author of a scientific paper published by Friedman that reported the discovery of the gene. The various theories surrounding Friedman's omission of Leibel and others as co-authors of this paper have been presented in a number of publications, including Ellen Ruppel Shell's 2002 book The Hungry Gene.
The discovery of leptin also is documented in a series of books including Fat: Fighting the Obesity Epidemic by Robert Pool, The Hungry Gene by Ellen Ruppel Shell, and Rethinking Thin: The New Science of Weight Loss and the Myths and Realities of Dieting by Gina Kolata. Fat: Fighting the Obesity Epidemic and Rethinking Thin: The New Science of Weight Loss and the Myths and Realities of Dieting review the work in the Friedman laboratory that led to the cloning of the ob gene, while The Hungry Gene draws attention to the contributions of Leibel.
| Biology and health sciences | Animal hormones | Biology |
215027 | https://en.wikipedia.org/wiki/Eastern%20grey%20kangaroo | Eastern grey kangaroo | The eastern grey kangaroo (Macropus giganteus) is a marsupial found in the eastern third of Australia, with a population of several million. It is also known as the great grey kangaroo and the forester kangaroo. Although a big eastern grey male can typically weigh up to and have a length of well over , the scientific name, Macropus giganteus (gigantic large-foot), is misleading: the red kangaroo of the semi-arid inland is larger, weighing up to .
Taxonomy
The eastern grey kangaroo was described by George Shaw in 1790 as Macropus giganteus.
Subspecies
While two subspecies were recognised by Mammal Species of the World (MSW), there is some dispute as to the validity of this division, and the subspecies are not recognised by the Australian Mammal Society, the IUCN, or the American Society of Mammalogists, which produces the successor of the MSW. Albert Sherbourne Le Souef created the Tasmanian subspecies in 1923, based on coat colour. In 1972 Kirsch and Poole published a paper supporting the concept of separate species for the eastern and western greys, but casting doubt on the subspecies M. g. tasmaniensis, as
...these subspecies, or species as Le Souef (1923) would have it in the case of the Tasmanian forester, are based on so little study of so little material that we do not believe that they should be recognized as such at this time.
A later study published in 2003 was also critical of the division, stating that
Phylogenetic comparisons between M. g. giganteus and M. g. tasmaniensis indicated that the current taxonomic status of these subspecies should be revised as there was a lack of genetic differentiation between the populations sampled.
The subspecies recognised by Mammal Species of the World were:
Macropus giganteus giganteus – found in eastern and central Queensland, Victoria, New South Wales and southeastern South Australia
Macropus giganteus tasmaniensis (commonly known as the forester kangaroo) – endemic to Tasmania
Description
The eastern grey kangaroo is the second largest and heaviest living marsupial and native land mammal in Australia. An adult male will commonly weigh around whereas females commonly weigh around . They have a powerful tail that is over long in adult males. Large males of this species are more heavily built and muscled than the lankier red kangaroo and can occasionally exceed normal dimensions. One of these, shot in eastern Tasmania weighed , with a total length from nose to tail (possibly along the curves). The largest known specimen, examined by Lydekker, had a weight of and measured along the curves. When the skin of this specimen was measured it had a "flat" length of .
The eastern grey is easy to recognise: its soft grey coat is distinctive, and it is usually found in moister, more fertile areas than the red. Red kangaroos, though sometimes grey-blue in colour, have a totally different face than eastern grey kangaroos. Red kangaroos have distinctive markings in black and white beside their muzzles and along the sides of their face. Eastern grey kangaroos do not have these markings, and their eyes seem large and wide open.
Where their ranges overlap, it is much more difficult to distinguish between eastern grey and western grey kangaroos, which are closely related. They have a very similar body and facial structure, and their muzzles are fully covered with fine hair (though that is not obvious at a distance, their noses do look noticeably different from the noses of reds and wallaroos). The eastern grey's colouration is a light-coloured grey or brownish-grey, with a lighter silver or cream, sometimes nearly white, belly. The western grey is a dark dusty brown colour, with more contrast especially around the head. Indigenous Australian names include iyirrbir (Uw Oykangand and Uw Olkola) and kucha (Pakanh). The highest ever recorded speed of any kangaroo was set by a large female eastern grey kangaroo.
Distribution and habitat
Although the red is better known, the eastern grey is the kangaroo most often encountered in Australia, due to its adaptability. Few Australians visit the arid interior of the continent, while many live in and around the major cities of the southern and eastern coast, from where it is usually only a short drive to the remaining pockets of near-city bushland where kangaroos can be found without much difficulty. The eastern grey prefers open grassland with areas of bush for daytime shelter and mainly inhabits the wetter parts of Australia. It also inhabits coastal areas, woodlands, sub-tropical forests, mountain forests, and inland scrubs.
Behaviour
Like all kangaroos, it is mainly nocturnal and crepuscular, and is mostly seen early in the morning, or as the light starts to fade in the evening. In the middle of the day, kangaroos rest in the cover of the woodlands and eat there but then come out in the open to feed on the grasslands in large numbers. The eastern grey kangaroo is predominantly a grazer, eating a wide variety of grasses, whereas some other species (e.g. the red kangaroo) include significant amounts of shrubs in their diet.
Eastern grey kangaroos are gregarious and form open-membership groups. The groups contain an average of three individuals. Smaller groups join to graze in preferred foraging areas, and to rest in large groups around the middle of the day. They exist in a dominance hierarchy and the dominant individuals gain access to better sources of food and areas of shade. However, kangaroos are not territorial.
Eastern grey kangaroos adjust their behaviour in relation to the risk of predation with reproductive females, individuals on the periphery of the group and individuals in groups far from cover being the most vigilant. Vigilance in individual kangaroos does not seem to significantly decrease when the size of the group increases. However, there is a tendency for the proportion of individuals on the periphery of the group to decline as group size increases. The open membership of the group allows more kangaroos to join and thus provide more buffers against predators.
Reproduction
Eastern grey kangaroos are polygynous, which means that one male mates with multiple females. Males engage in considerable intraspecific competition for mates, including male–male fights to establish dominance. When a dominant male finds a female in estrus, he will court the female and eventually they copulate. After copulation, the male will guard the female from other males. The whole process, from courtship to the end of mate-guarding, takes roughly an hour.
Females may form strong kinship bonds with their relatives. Females with living female relatives have a greater chance of reproducing. Most kangaroo births occur during the summer. Eastern grey kangaroos are obligate breeders in that they can reproduce in only one kind of habitat.
The female eastern grey kangaroo is usually permanently pregnant except on the day she gives birth; however, she has the ability to freeze the development of an embryo until the previous joey is able to leave the pouch. This is known as embryonic diapause, and will occur in times of drought and in areas with poor food sources. The composition of the milk produced by the mother varies according to the needs of the joey. Since lactation is energetically very expensive, females that are lactating typically change some of their foraging habits. Some females will forage faster so they can tend to their joeys more, while others forage more aggressively so they can eat as much as possible. In addition, the mother is able to produce two different kinds of milk simultaneously for the newborn and the older joey still in the pouch.
Unusually, during a dry period, males will not produce sperm, and females will conceive only if there has been enough rain to produce a large quantity of green vegetation. Females take care of the young without any assistance from the males. Female kangaroos with a joey often feed alone, separating themselves from the rest of the kangaroos to reduce the risk of predation. The joeys are heavily reliant on their mothers for about 550 days, which is when they are weaned. Females sexually mature between 17 and 28 months, while males mature at around 25 months.
Status
It is popularly thought, but not confirmed by evidence, that kangaroo populations have increased significantly since the European colonisation of Australia because of the increased areas of grassland (as distinct from forest), the reduction in dingo numbers, and the availability of artificial watering holes. The estimated population of the species Australia-wide in 2010 was 11.4 million. In some parts of Australia the eastern grey is so numerous that it causes overgrazing, and some individual populations have been culled (see, for example, the Eden Park Kangaroo Cull). Despite the commercial harvest and some culls, the eastern grey remains common and widespread. Eastern greys are common in suburban and rural areas, where they have been observed to form larger groups than in natural areas. The species still covers the entire range it occupied when Europeans arrived in Australia in 1788, and it often comes into conflict with agriculture as it uses the more fertile districts that now carry crops or exotic pasture grasses, which kangaroos readily eat. Kangaroo meat has also been considered as a replacement for beef in recent years, as kangaroos' soft feet are preferable to hooves in erosion-prone areas.
| Biology and health sciences | Diprotodontia | Animals |
215038 | https://en.wikipedia.org/wiki/Enhancer%20%28genetics%29 | Enhancer (genetics) | In genetics, an enhancer is a short (50–1500 bp) region of DNA that can be bound by proteins (activators) to increase the likelihood that transcription of a particular gene will occur. These proteins are usually referred to as transcription factors. Enhancers are cis-acting. They can be located up to 1 Mbp (1,000,000 bp) away from the gene, upstream or downstream from the start site. There are hundreds of thousands of enhancers in the human genome. They are found in both prokaryotes and eukaryotes. Active enhancers typically get transcribed as enhancer or regulatory non-coding RNA, whose expression levels correlate with mRNA levels of target genes.
The first discovery of a eukaryotic enhancer was in the immunoglobulin heavy chain gene in 1983. This enhancer, located in the large intron, provided an explanation for the transcriptional activation of rearranged Vh gene promoters while unrearranged Vh promoters remained inactive. Lately, enhancers have been shown to be involved in certain medical conditions, for example, myelosuppression. Since 2022, scientists have used artificial intelligence to design synthetic enhancers and applied them in animal systems, first in a cell line, and one year later also in vivo.
Locations
In eukaryotic cells the structure of the chromatin complex of DNA is folded in a way that functionally mimics the supercoiled state characteristic of prokaryotic DNA, so although the enhancer DNA may be far from the gene in a linear way, it is spatially close to the promoter and gene. This allows it to interact with the general transcription factors and RNA polymerase II. The same mechanism holds true for silencers in the eukaryotic genome. Silencers are antagonists of enhancers that, when bound by their cognate transcription factors, called repressors, repress the transcription of the gene. Silencers and enhancers may be in close proximity to each other, or may even be in the same region, differentiated only by the transcription factor the region binds.
An enhancer may be located upstream or downstream of the gene it regulates. Furthermore, an enhancer does not need to be located near the transcription initiation site to affect transcription, as some have been found located several hundred thousand base pairs upstream or downstream of the start site. Enhancers do not act on the promoter region itself, but are bound by activator proteins, as first shown by in vivo competition experiments. Subsequently, molecular studies showed direct interactions with transcription factors and cofactors, including the mediator complex, which recruits polymerase II and the general transcription factors which then begin transcribing the genes. Enhancers can also be found within introns. An enhancer's orientation may even be reversed without affecting its function; additionally, an enhancer may be excised and inserted elsewhere in the chromosome and still affect gene transcription. That is one reason why intronic polymorphisms may have effects even though they are not translated. Enhancers can also be found in the exonic region of an unrelated gene, and they may act on genes on another chromosome.
Enhancers are bound by p300-CBP and their location can be predicted by ChIP-seq against this family of coactivators.
Role in gene expression
Gene expression in mammals is regulated by many cis-regulatory elements, including core promoters and promoter-proximal elements that are located near the transcription start sites of genes. Core promoters are sufficient to direct transcription initiation, but generally have low basal activity. Other important cis-regulatory modules are localized in DNA regions that are distant from the transcription start sites. These include enhancers, silencers, insulators and tethering elements. Among this constellation of elements, enhancers and their associated transcription factors have a leading role in the regulation of gene expression. An enhancer localized in a DNA region distant from the promoter of a gene can have a very large effect on gene expression, with some genes undergoing up to 100-fold increased expression due to an activated enhancer.
Enhancers are regions of the genome that are major gene-regulatory elements. Enhancers control cell-type-specific gene expression programs, most often by looping through long distances to come in physical proximity with the promoters of their target genes. While there are hundreds of thousands of enhancer DNA regions, for a particular type of tissue only specific enhancers are brought into proximity with the promoters that they regulate. In a study of brain cortical neurons, 24,937 loops were found, bringing enhancers to their target promoters. Multiple enhancers, each often at tens or hundreds of thousands of nucleotides distant from their target genes, loop to their target gene promoters and can coordinate with each other to control the expression of their common target gene.
The schematic illustration in this section shows an enhancer looping around to come into close physical proximity with the promoter of a target gene. The loop is stabilized by a dimer of a connector protein (e.g. dimer of CTCF or YY1), with one member of the dimer anchored to its binding motif on the enhancer and the other member anchored to its binding motif on the promoter (represented by the red zigzags in the illustration). Several cell function specific transcription factors (there are about 1,600 transcription factors in a human cell) generally bind to specific motifs on an enhancer and a small combination of these enhancer-bound transcription factors, when brought close to a promoter by a DNA loop, govern level of transcription of the target gene. Mediator (a complex usually consisting of about 26 proteins in an interacting structure) communicates regulatory signals from enhancer DNA-bound transcription factors directly to the RNA polymerase II (pol II) enzyme bound to the promoter.
Enhancers, when active, are generally transcribed from both strands of DNA with RNA polymerases acting in two different directions, producing two Enhancer RNAs (eRNAs) as illustrated in the Figure. Like mRNAs, these eRNAs are usually protected by their 5′ cap. An inactive enhancer may be bound by an inactive transcription factor. Phosphorylation of the transcription factor may activate it and that activated transcription factor may then activate the enhancer to which it is bound (see small red star representing phosphorylation of transcription factor bound to enhancer in the illustration). An activated enhancer begins transcription of its RNA before activating transcription of messenger RNA from its target gene.
Theories
There are two different theories on the information processing that occurs on enhancers:
Enhanceosomes – rely on highly cooperative, coordinated action and can be disabled by single point mutations that move or remove the binding sites of individual proteins.
Flexible billboards – less integrative, multiple proteins independently regulate gene expression and their sum is read in by the basal transcriptional machinery.
Examples in the human genome
HACNS1
HACNS1 (also known as CENTG2 and located in the Human Accelerated Region 2) is a gene enhancer "that may have contributed to the evolution of the uniquely opposable human thumb, and possibly also modifications in the ankle or foot that allow humans to walk on two legs". Evidence to date shows that of the 110,000 gene enhancer sequences identified in the human genome, HACNS1 has undergone the most change during the evolution of humans following the split with the ancestors of chimpanzees.
GADD45G
An enhancer near the gene GADD45g has been described that may regulate brain growth in chimpanzees and other mammals, but not in humans. The GADD45G regulator in mice and chimps is active in regions of the brain where cells that form the cortex, ventral forebrain, and thalamus are located and may suppress further neurogenesis. Loss of the GADD45G enhancer in humans may contribute to an increase of certain neuronal populations and to forebrain expansion in humans.
In developmental biology
The development, differentiation and growth of cells and tissues require precisely regulated patterns of gene expression. Enhancers work as cis-regulatory elements to mediate both spatial and temporal control of development by turning on transcription in specific cells and/or repressing it in other cells. Thus, the particular combination of transcription factors and other DNA-binding proteins in a developing tissue controls which genes will be expressed in that tissue. Enhancers allow the same gene to be used in diverse processes in space and time.
Identification and characterization
Traditionally, enhancers were identified by enhancer trap techniques using a reporter gene or by comparative sequence analysis and computational genomics. In genetically tractable models such as the fruit fly Drosophila melanogaster, for example, a reporter construct such as the lacZ gene can be randomly integrated into the genome using a P element transposon. If the reporter gene integrates near an enhancer, its expression will reflect the expression pattern driven by that enhancer. Thus, staining the flies for LacZ expression or activity and cloning the sequence surrounding the integration site allows the identification of the enhancer sequence.
The development of genomic and epigenomic technologies, however, has dramatically changed the outlook for cis-regulatory module (CRM) discovery. Next-generation sequencing (NGS) methods now enable high-throughput functional CRM discovery assays, and the vastly increasing amounts of available data, including large-scale libraries of transcription factor-binding site (TFBS) motifs, collections of annotated, validated CRMs, and extensive epigenetic data across many cell types, are making accurate computational CRM discovery an attainable goal. One example of an NGS-based approach, DNase-seq, has enabled the identification of nucleosome-depleted, or open chromatin, regions, which can contain CRMs. More recently, techniques such as ATAC-seq have been developed that require less starting material. Nucleosome-depleted regions can be identified in vivo through expression of Dam methylase, allowing for greater control of cell-type specific enhancer identification.
Computational methods include comparative genomics, clustering of known or predicted TF-binding sites, and supervised machine-learning approaches trained on known CRMs.
All of these methods have proven effective for CRM discovery, but each has its own considerations and limitations, and each is subject to a greater or lesser number of false-positive identifications.
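To make the supervised machine-learning strategy concrete, the minimal Python sketch below trains a toy classifier that separates known CRM sequences from background sequences using simple k-mer counts as features. The training sequences, labels and model choice are hypothetical placeholders rather than any published pipeline; real CRM-discovery tools use much richer features such as TFBS motif scores and epigenomic signals.

from itertools import product
from sklearn.linear_model import LogisticRegression

def kmer_counts(seq, k=4):
    # Count occurrences of every possible k-mer in the sequence.
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    index = {km: i for i, km in enumerate(kmers)}
    vec = [0] * len(kmers)
    for i in range(len(seq) - k + 1):
        sub = seq[i:i + k]
        if sub in index:          # skip k-mers containing ambiguous bases
            vec[index[sub]] += 1
    return vec

# Hypothetical training data: 1 = validated CRM, 0 = background sequence.
train_seqs = ["ACGTGCAGTTACGTAGCTAG", "TTTTAAAACCCCGGGGTTAA"]   # placeholders
train_labels = [1, 0]

X = [kmer_counts(s) for s in train_seqs]
model = LogisticRegression(max_iter=1000).fit(X, train_labels)

# Score a new candidate region for CRM-like sequence composition.
candidate = "ACGTGCAGTTACGTAGCTAA"
print(model.predict_proba([kmer_counts(candidate)])[0][1])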
In the comparative genomics approach, sequence conservation of non-coding regions can be indicative of enhancers. Sequences from multiple species are aligned, and conserved regions are identified computationally. Identified sequences can then be attached to a reporter gene such as green fluorescent protein or lacZ to determine the in vivo pattern of gene expression produced by the enhancer when injected into an embryo. mRNA expression of the reporter can be visualized by in situ hybridization, which provides a more direct measure of enhancer activity, since it is not subject to the complexities of translation and protein folding. Although much evidence has pointed to sequence conservation for critical developmental enhancers, other work has shown that the function of enhancers can be conserved with little or no primary sequence conservation. For example, the RET enhancers in humans have very little sequence conservation with those in zebrafish, yet both species' sequences produce nearly identical patterns of reporter gene expression in zebrafish. Similarly, in highly diverged insects (separated by around 350 million years), similar expression patterns of several key genes were found to be regulated through similarly constituted CRMs, although these CRMs do not show any appreciable sequence conservation detectable by standard sequence alignment methods such as BLAST.
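The conservation step of the comparative genomics approach can be sketched in a few lines of Python: given a small, pre-computed multiple alignment of orthologous non-coding sequences, slide a window along the alignment and report windows whose columns are largely identical. The alignment, window size and threshold below are illustrative assumptions, not values taken from the studies cited above.

def window_conservation(aligned_seqs, window=20, threshold=0.8):
    # aligned_seqs: equal-length rows of a multiple alignment (gaps as '-').
    length = len(aligned_seqs[0])
    # A column is "conserved" if every sequence has the same non-gap base.
    conserved = [
        len(set(col)) == 1 and col[0] != "-"
        for col in zip(*aligned_seqs)
    ]
    hits = []
    for start in range(0, length - window + 1):
        score = sum(conserved[start:start + window]) / window
        if score >= threshold:
            hits.append((start, start + window, score))
    return hits

# Hypothetical aligned non-coding sequences from three species (placeholders).
alignment = [
    "ACGTGCAGTTACGTAGCTAGACGT",
    "ACGTGCAGTTACGTAGCTAGACGA",
    "ACGTGCAGTTACGTAGCTAGAGGT",
]
print(window_conservation(alignment, window=10, threshold=0.9))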
In segmentation of insects
The enhancers determining early segmentation in Drosophila melanogaster embryos are among the best characterized developmental enhancers. In the early fly embryo, the gap gene transcription factors are responsible for activating and repressing a number of segmentation genes, such as the pair rule genes. The gap genes are expressed in blocks along the anterior-posterior axis of the fly along with other maternal effect transcription factors, thus creating zones within which different combinations of transcription factors are expressed. The pair-rule genes are separated from one another by non-expressing cells. Moreover, the stripes of expression for different pair-rule genes are offset by a few cell diameters from one another. Thus, unique combinations of pair-rule gene expression create spatial domains along the anterior-posterior axis to set up each of the 14 individual segments. The 480 bp enhancer responsible for driving the sharp stripe two of the pair-rule gene even-skipped (eve) has been well-characterized. The enhancer contains 12 different binding sites for maternal and gap gene transcription factors. Activating and repressing sites overlap in sequence. Eve is only expressed in a narrow stripe of cells that contain high concentrations of the activators and low concentration of the repressors for this enhancer sequence. Other enhancer regions drive eve expression in 6 other stripes in the embryo.
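A deliberately simplified way to express the combinatorial logic described above is a Boolean toy model in which the stripe 2 element is "on" only where both activators (Bicoid and Hunchback) are abundant and both repressors (Giant and Krüppel) are scarce. The threshold and the concentration values below are invented for illustration; they do not reproduce the real quantitative behaviour of the enhancer.

def eve_stripe2_toy(bicoid, hunchback, giant, kruppel, threshold=0.5):
    # Highly simplified Boolean logic: the enhancer fires only where both
    # activators are above threshold and both repressors are below it.
    activators_present = bicoid > threshold and hunchback > threshold
    repressors_absent = giant < threshold and kruppel < threshold
    return activators_present and repressors_absent

# Hypothetical relative concentrations at three positions along the embryo axis.
positions = {
    "anterior":  dict(bicoid=0.9, hunchback=0.8, giant=0.9, kruppel=0.1),
    "stripe 2":  dict(bicoid=0.8, hunchback=0.7, giant=0.2, kruppel=0.1),
    "posterior": dict(bicoid=0.1, hunchback=0.2, giant=0.1, kruppel=0.8),
}
for name, levels in positions.items():
    print(name, eve_stripe2_toy(**levels))   # only "stripe 2" prints True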
In vertebrate patterning
Establishing body axes is a critical step in animal development. During mouse embryonic development, Nodal, a transforming growth factor-beta superfamily ligand, is a key gene involved in patterning both the anterior-posterior axis and the left-right axis of the early embryo. The Nodal gene contains two enhancers: the Proximal Epiblast Enhancer (PEE) and the Asymmetric Enhancer (ASE). The PEE is upstream of the Nodal gene and drives Nodal expression in the portion of the primitive streak that will differentiate into the node (also referred to as the primitive node). The PEE turns on Nodal expression in response to a combination of Wnt signaling plus a second, unknown signal; thus, a member of the LEF/TCF transcription factor family likely binds to a TCF binding site in the cells in the node. Diffusion of Nodal away from the node forms a gradient which then patterns the extending anterior-posterior axis of the embryo. The ASE is an intronic enhancer bound by the forkhead domain transcription factor FoxH1. Early in development, FoxH1-driven Nodal expression establishes the visceral endoderm. Later in development, FoxH1 binding to the ASE drives Nodal expression on the left side of the lateral plate mesoderm, thus establishing left-right asymmetry necessary for asymmetric organ development in the mesoderm.
Establishing three germ layers during gastrulation is another critical step in animal development. Each of the three germ layers has unique patterns of gene expression that promote their differentiation and development. The endoderm is specified early in development by Gata4 expression, and Gata4 goes on to direct gut morphogenesis later. Gata4 expression is controlled in the early embryo by an intronic enhancer that binds another forkhead domain transcription factor, FoxA2. Initially the enhancer drives broad gene expression throughout the embryo, but the expression quickly becomes restricted to the endoderm, suggesting that other repressors may be involved in its restriction. Late in development, the same enhancer restricts expression to the tissues that will become the stomach and pancreas. An additional enhancer is responsible for maintaining Gata4 expression in the endoderm during the intermediate stages of gut development.
Multiple enhancers promote developmental robustness
Some genes involved in critical developmental processes contain multiple enhancers of overlapping function. Secondary enhancers, or "shadow enhancers", may be found many kilobases away from the primary enhancer ("primary" usually refers to the first enhancer discovered, which is often closer to the gene it regulates). On its own, each enhancer drives nearly identical patterns of gene expression. Are the two enhancers truly redundant? Recent work has shown that multiple enhancers allow fruit flies to survive environmental perturbations, such as an increase in temperature. When raised at an elevated temperature, a single enhancer sometimes fails to drive the complete pattern of expression, whereas the presence of both enhancers permits normal gene expression.
Evolution of developmental mechanisms
One theme of research in evolutionary developmental biology ("evo-devo") is investigating the role of enhancers and other cis-regulatory elements in producing morphological changes via developmental differences between species.
Stickleback Pitx1
Recent work has investigated the role of enhancers in morphological changes in threespine stickleback fish. Sticklebacks exist in both marine and freshwater environments, but sticklebacks in many freshwater populations have completely lost their pelvic fins (appendages homologous to the posterior limb of tetrapods). Pitx1 is a homeobox gene involved in posterior limb development in vertebrates. Preliminary genetic analyses indicated that changes in the expression of this gene were responsible for pelvic reduction in sticklebacks. Fish expressing only the freshwater allele of Pitx1 do not have pelvic spines, whereas fish expressing a marine allele retain pelvic spines. A more thorough characterization showed that a 500 base pair enhancer sequence is responsible for turning on Pitx1 expression in the posterior fin bud. This enhancer is located near a chromosomal fragile site—a sequence of DNA that is likely to be broken and thus more likely to be mutated as a result of imprecise DNA repair. This fragile site has caused repeated, independent losses of the enhancer responsible for driving Pitx1 expression in the pelvic spines in isolated freshwater populations, and without this enhancer, freshwater fish fail to develop pelvic spines.
In Drosophila wing pattern evolution
Pigmentation patterns provide one of the most striking and easily scored differences between different species of animals. Pigmentation of the Drosophila wing has proven to be a particularly amenable system for studying the development of complex pigmentation phenotypes. The Drosophila guttifera wing has 12 dark pigmentation spots and 4 lighter gray intervein patches. Pigment spots arise from expression of the yellow gene, whose product produces black melanin. Recent work has shown that two enhancers in the yellow gene produce gene expression in precisely this pattern – the vein spot enhancer drives reporter gene expression in the 12 spots, and the intervein shade enhancer drives reporter expression in the 4 distinct patches. These two enhancers are responsive to the Wnt signaling pathway, which is activated by wingless expression at all of the pigmented locations. Thus, in the evolution of the complex pigmentation phenotype, the yellow pigment gene evolved enhancers responsive to the wingless signal and wingless expression evolved at new locations to produce novel wing patterns.
In inflammation and cancer
Each cell typically contains several hundred enhancers of a special class that stretch over DNA sequences many kilobases long, called "super-enhancers". These enhancers contain a large number of binding sites for sequence-specific, inducible transcription factors, and regulate expression of genes involved in cell differentiation. During inflammation, the transcription factor NF-κB facilitates remodeling of chromatin in a manner that selectively redistributes cofactors from high-occupancy enhancers, thereby repressing the genes involved in maintaining cellular identity whose expression they enhance; at the same time, this NF-κB-driven remodeling and redistribution activates other enhancers that guide changes in cellular function through inflammation. As a result, inflammation reprograms cells, altering their interactions with the rest of the tissue and with the immune system. In cancer, proteins that control NF-κB activity are dysregulated, permitting malignant cells to decrease their dependence on interactions with local tissue and hindering their surveillance by the immune system.
Designing enhancers in synthetic biology
Synthetic regulatory elements such as enhancers promise to be a powerful tool to direct gene products to particular cell types in order to treat disease by activating beneficial genes or by halting aberrant cell states.
Since 2022, artificial intelligence and transfer learning strategies have led to a better understanding of the features of regulatory DNA sequences, the prediction, and the design of synthetic enhancers.
Building on work in cell culture, synthetic enhancers were successfully applied to entire living organisms in 2023. Using deep neural networks, scientists simulated the evolution of DNA sequences to analyze the emergence of features that underlie enhancer function. This allowed the design and production of a range of functioning synthetic enhancers for different cell types of the fruit fly brain. A second approach trained artificial intelligence models on single-cell DNA accessibility data and transferred the learned models towards the prediction of enhancers for selected tissues in the fruit fly embryo. These enhancer prediction models were used to design synthetic enhancers for the nervous system, brain, muscle, epidermis and gut.
| Biology and health sciences | Molecular biology | Biology |
215199 | https://en.wikipedia.org/wiki/Otitis%20media | Otitis media | Otitis media is a group of inflammatory diseases of the middle ear. One of the two main types is acute otitis media (AOM), an infection of rapid onset that usually presents with ear pain. In young children this may result in pulling at the ear, increased crying, and poor sleep. Decreased eating and a fever may also be present.
The other main type is otitis media with effusion (OME), typically not associated with symptoms, although occasionally a feeling of fullness is described; it is defined as the presence of non-infectious fluid in the middle ear which may persist for weeks or months often after an episode of acute otitis media. Chronic suppurative otitis media (CSOM) is middle ear inflammation that results in a perforated tympanic membrane with discharge from the ear for more than six weeks. It may be a complication of acute otitis media. Pain is rarely present.
All three types of otitis media may be associated with hearing loss. If children with hearing loss due to OME do not learn sign language, it may affect their ability to learn.
The cause of AOM is related to childhood anatomy and immune function. Either bacteria or viruses may be involved. Risk factors include exposure to smoke, use of pacifiers, and attending daycare. It occurs more commonly among indigenous Australians and those who have cleft lip and palate or Down syndrome. OME frequently occurs following AOM and may be related to viral upper respiratory infections, irritants such as smoke, or allergies. Looking at the eardrum is important for making the correct diagnosis. Signs of AOM include bulging or a lack of movement of the tympanic membrane from a puff of air. New discharge not related to otitis externa also indicates the diagnosis.
A number of measures decrease the risk of otitis media including pneumococcal and influenza vaccination, breastfeeding, and avoiding tobacco smoke. The use of pain medications for AOM is important. This may include paracetamol (acetaminophen), ibuprofen, benzocaine ear drops, or opioids. In AOM, antibiotics may speed recovery but may result in side effects. Antibiotics are often recommended in those with severe disease or under two years old. In those with less severe disease they may only be recommended in those who do not improve after two or three days. The initial antibiotic of choice is typically amoxicillin. In those with frequent infections, surgical placement of tympanostomy tubes may decrease recurrence. In children with otitis media with effusion antibiotics may increase resolution of symptoms, but may cause diarrhoea, vomiting and skin rash.
Worldwide AOM affects about 11% of people a year (about 325 to 710 million cases). Half the cases involve children less than five years of age and it is more common among males. Of those affected about 4.8% or 31 million develop chronic suppurative otitis media. The total number of people with CSOM is estimated at 65–330 million people. Before the age of ten OME affects about 80% of children at some point. Otitis media resulted in 3,200 deaths in 2015 – down from 4,900 deaths in 1990.
Signs and symptoms
The primary symptom of acute otitis media is ear pain; other possible symptoms include fever, reduced hearing during periods of illness, tenderness on touch of the skin above the ear, purulent discharge from the ears, irritability, ear blocking sensation and diarrhea (in infants). Since an episode of otitis media is usually precipitated by an upper respiratory tract infection (URTI), there are often accompanying symptoms like a cough and nasal discharge. One might also experience a feeling of fullness in the ear.
Discharge from the ear can be caused by acute otitis media with perforation of the eardrum, chronic suppurative otitis media, tympanostomy tube otorrhea, or acute otitis externa. Trauma, such as a basilar skull fracture, can also lead to cerebrospinal fluid otorrhea (discharge of CSF from the ear) due to cerebral spinal drainage from the brain and its covering (meninges).
Causes
The common cause of all forms of otitis media is dysfunction of the Eustachian tube. This is usually due to inflammation of the mucous membranes in the nasopharynx, which can be caused by a viral upper respiratory tract infection (URTI), strep throat, or possibly by allergies.
By reflux or aspiration of unwanted secretions from the nasopharynx into the normally sterile middle-ear space, the fluid may then become infected – usually with bacteria. The virus that caused the initial upper respiratory infection can itself be identified as the pathogen causing the infection.
Diagnosis
As its typical symptoms overlap with those of other conditions, such as acute external otitis, symptoms alone are not sufficient to predict whether acute otitis media is present; they have to be complemented by visualization of the tympanic membrane. Examiners may use a pneumatic otoscope with a rubber bulb attached to assess the mobility of the tympanic membrane. Other methods to diagnose otitis media include tympanometry, reflectometry, and hearing tests.
In more severe cases, such as those with associated hearing loss or high fever, audiometry, tympanogram, temporal bone CT and MRI can be used to assess for associated complications, such as mastoid effusion, subperiosteal abscess formation, bony destruction, venous thrombosis or meningitis.
Acute otitis media in children with moderate to severe bulging of the tympanic membrane or new onset of otorrhea (drainage) is not due to external otitis. Also, the diagnosis may be made in children who have mild bulging of the ear drum and recent onset of ear pain (less than 48 hours) or intense erythema (redness) of the ear drum. To confirm the diagnosis, middle-ear effusion and inflammation of the eardrum (called myringitis or tympanitis) have to be identified; signs of these are fullness, bulging, cloudiness and redness of the eardrum. It is important to attempt to differentiate between acute otitis media and otitis media with effusion (OME), as antibiotics are not recommended for OME. It has been suggested that bulging of the tympanic membrane is the best sign to differentiate AOM from OME, with a bulging of the membrane suggesting AOM rather than OME.
Viral otitis may result in blisters on the external side of the tympanic membrane, which is called bullous myringitis (myringa being Latin for "eardrum"). However, sometimes even examination of the eardrum may not be able to confirm the diagnosis, especially if the canal is small. If wax in the ear canal obscures a clear view of the eardrum it should be removed using a blunt cerumen curette or a wire loop. An upset young child's crying can cause the eardrum to look inflamed due to distension of the small blood vessels on it, mimicking the redness associated with otitis media.
Acute otitis media
The most common bacteria isolated from the middle ear in AOM are Streptococcus pneumoniae, Haemophilus influenzae, Moraxella catarrhalis, and Staphylococcus aureus.
Otitis media with effusion
Otitis media with effusion (OME), also known as serous otitis media (SOM) or secretory otitis media (SOM), and colloquially referred to as 'glue ear', is fluid accumulation that can occur in the middle ear and mastoid air cells due to negative pressure produced by dysfunction of the Eustachian tube. This can be associated with a viral upper respiratory infection (URI) or bacterial infection such as otitis media. An effusion can cause conductive hearing loss if it interferes with the transmission to the vestibulocochlear nerve complex of the vibrations of the middle ear bones created by sound waves.
Early-onset OME is associated with feeding of infants while lying down, early entry into group child care, parental smoking, lack or too short a period of breastfeeding, and greater amounts of time spent in group child care, particularly those with a large number of children. These risk factors increase the incidence and duration of OME during the first two years of life.
Chronic suppurative otitis media
Chronic suppurative otitis media (CSOM) is a long-term middle ear inflammation causing persistent ear discharge due to a perforated eardrum. It often follows an unresolved upper respiratory infection leading to acute otitis media. Prolonged inflammation leads to middle ear swelling, ulceration, perforation, and attempts at repair with granulation tissue and polyps. This can worsen discharge and inflammation, potentially developing into CSOM, often associated with cholesteatoma. Symptoms may include ear discharge or pus seen only on examination. Hearing loss is common. Risk factors include poor eustachian tube function, recurrent ear infections, crowded living, daycare attendance, and certain craniofacial malformations.
Worldwide approximately 11% of the human population is affected by AOM every year, or 709 million cases. About 4.4% of the population develop CSOM.
According to the World Health Organization, CSOM is a primary cause of hearing loss in children. Adults with recurrent episodes of CSOM have a higher risk of developing permanent conductive and sensorineural hearing loss.
In Britain, 0.9% of children and 0.5% of adults have CSOM, with no difference between the sexes. The incidence of CSOM across the world varies dramatically where high income countries have a relatively low prevalence while in low income countries the prevalence may be up to three times as great. Each year 21,000 people worldwide die due to complications of CSOM.
Adhesive otitis media
Adhesive otitis media occurs when a thin retracted ear drum becomes sucked into the middle-ear space and stuck (i.e., adherent) to the ossicles and other bones of the middle ear.
Prevention
AOM is far less common in breastfed infants than in formula-fed infants, and the greatest protection is associated with exclusive breastfeeding (no formula use) for the first six months of life. A longer duration of breastfeeding is correlated with a longer protective effect.
Pneumococcal conjugate vaccines (PCV) in early infancy decrease the risk of acute otitis media in healthy infants. PCV is recommended for all children, and, if implemented broadly, PCV would have a significant public health benefit. Influenza vaccination in children appears to reduce rates of AOM by 4% and the use of antibiotics by 11% over 6 months. However, the vaccine resulted in increased adverse-effects such as fever and runny nose. The small reduction in AOM may not justify the side effects and inconvenience of influenza vaccination every year for this purpose alone. PCV does not appear to decrease the risk of otitis media when given to high-risk infants or for older children who have previously experienced otitis media.
Risk factors such as season, allergy predisposition and presence of older siblings are known to be determinants of recurrent otitis media and persistent middle-ear effusions (MEE). History of recurrence, environmental exposure to tobacco smoke, use of daycare, and lack of breastfeeding have all been associated with increased risk of development, recurrence, and persistent MEE. Pacifier use has been associated with more frequent episodes of AOM.
Long-term antibiotics, while they decrease rates of infection during treatment, have an unknown effect on long-term outcomes such as hearing loss. This method of prevention has been associated with emergence of undesirable antibiotic-resistant otitic bacteria.
There is moderate evidence that the sugar substitute xylitol may reduce infection rates in healthy children who go to daycare.
Evidence does not support zinc supplementation as an effort to reduce otitis rates except maybe in those with severe malnutrition such as marasmus.
Probiotics do not show evidence of preventing acute otitis media in children.
Management
Oral and topical pain killers are the mainstay for the treatment of pain caused by otitis media. Oral agents include ibuprofen, paracetamol (acetaminophen), and opiates. A 2023 review found evidence for the effectiveness of single or combinations of oral pain relief in acute otitis media is limited. Topical agents shown to be effective include antipyrine and benzocaine ear drops.
A 2008 review found reason to not recommend decongestants and antihistamines, either nasal or oral, due to the lack of benefit and concerns regarding side effects, but this review was withdrawn from publication for being outdated. Half of cases of ear pain in children resolve without treatment in three days and 90% resolve in seven or eight days. The use of steroids is not supported by the evidence for acute otitis media.
Antibiotics
Use of antibiotics for acute otitis media has benefits and harms. As over 82% of acute episodes settle without treatment, about 20 children must be treated to prevent one case of ear pain, 33 children to prevent one perforation, and 11 children to prevent one opposite-side ear infection. For every 14 children treated with antibiotics, one child has an episode of vomiting, diarrhea or a rash. Analgesics may relieve pain, if present. For people requiring surgery to treat otitis media with effusion, preventative antibiotics may not help reduce the risk of post-surgical complications.
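The treat-to-benefit figures quoted above follow from a simple piece of arithmetic: the number needed to treat (NNT) is the reciprocal of the absolute risk reduction, the difference in event rates between untreated and treated children. The Python sketch below illustrates the calculation with made-up event rates chosen only so that the result matches an NNT of about 20 for ear pain; it is not taken from any specific trial.

def number_needed_to_treat(control_event_rate, treated_event_rate):
    # NNT = 1 / absolute risk reduction (ARR).
    arr = control_event_rate - treated_event_rate
    if arr <= 0:
        raise ValueError("Treatment shows no benefit for this outcome.")
    return 1.0 / arr

# Illustrative (hypothetical) rates: if 18% of untreated children still have
# ear pain at follow-up versus 13% of treated children, ARR = 0.05 and NNT = 20.
print(round(number_needed_to_treat(0.18, 0.13)))   # -> 20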
For bilateral acute otitis media in infants younger than 24 months, there is evidence that the benefits of antibiotics outweigh the harms. A 2015 Cochrane review concluded that watchful waiting is the preferred approach for children over six months with non severe acute otitis media.
Most children older than 6 months of age who have acute otitis media do not benefit from treatment with antibiotics. If antibiotics are used, a narrow-spectrum antibiotic like amoxicillin is generally recommended, as broad-spectrum antibiotics may be associated with more adverse events. If there is resistance or use of amoxicillin in the last 30 days then amoxicillin-clavulanate or another penicillin derivative plus beta lactamase inhibitor is recommended. Taking amoxicillin once a day may be as effective as twice or three times a day. While antibiotic courses of less than seven days have fewer side effects, courses of more than seven days appear to be more effective. If there is no improvement after 2–3 days of treatment a change in therapy may be considered. Azithromycin appears to have fewer side effects than either high-dose amoxicillin or amoxicillin/clavulanate.
Tympanostomy tube
Tympanostomy tubes (also called "grommets") are recommended with three or more episodes of acute otitis media in 6 months or four or more in a year, with at least one episode in the preceding 6 months. There is tentative evidence that children with recurrent acute otitis media (AOM) who receive tubes have a modest improvement in the number of further AOM episodes (around one fewer episode at six months and less of an improvement at 12 months following the tubes being inserted).
Evidence does not support an effect on long-term hearing or language development. A common complication of having a tympanostomy tube is otorrhea, which is a discharge from the ear. The risk of persistent tympanic membrane perforation after children have grommets inserted may be low. It is still uncertain whether or not grommets are more effective than a course of antibiotics.
Oral antibiotics should not be used to treat uncomplicated acute tympanostomy tube otorrhea. They are not sufficient for the bacteria that cause this condition and have side effects including increased risk of opportunistic infection. In contrast, topical antibiotic eardrops are useful.
Otitis media with effusion
The decision to treat is usually made after a combination of physical exam and laboratory diagnosis, with additional testing including audiometry, tympanogram, temporal bone CT and MRI. Decongestants, glucocorticoids, and topical antibiotics are generally not effective as treatment for non-infectious, or serous, causes of mastoid effusion. Moreover, the use of antihistamines and decongestants is not recommended in children with OME. In less severe cases or those without significant hearing impairment, the effusion can resolve spontaneously or with more conservative measures such as autoinflation. In more severe cases, tympanostomy tubes can be inserted, possibly with adjuvant adenoidectomy, as adenoidectomy shows a significant benefit for the resolution of middle ear effusion in children with OME.
Chronic suppurative otitis media
Topical antibiotics are of uncertain benefit as of 2020. Some evidence suggests that topical antibiotics may be useful either alone or with antibiotics by mouth. Antiseptics are of unclear effect. Topical antibiotics (quinolones) are probably better at resolving ear discharge than antiseptics.
Alternative medicine
Complementary and alternative medicine is not recommended for otitis media with effusion because there is no evidence of benefit. Homeopathic treatments have not been proven to be effective for acute otitis media in a study with children. An osteopathic manipulation technique called the Galbreath technique was evaluated in one randomized controlled clinical trial; one reviewer concluded that it was promising, but a 2010 evidence report found the evidence inconclusive.
Outcomes
Complications of acute otitis media include perforation of the ear drum, infection of the mastoid space behind the ear (mastoiditis) and, more rarely, intracranial complications such as bacterial meningitis, brain abscess, or dural sinus thrombosis. It is estimated that each year 21,000 people die due to complications of otitis media.
Membrane rupture
In severe or untreated cases, the tympanic membrane may perforate, allowing the pus in the middle-ear space to drain into the ear canal. If there is enough, this drainage may be obvious. Even though the perforation of the tympanic membrane suggests a highly painful and traumatic process, it is almost always associated with a dramatic relief of pressure and pain. In a simple case of acute otitis media in an otherwise healthy person, the body's defenses are likely to resolve the infection and the ear drum nearly always heals.
An option for severe acute otitis media in which analgesics are not controlling ear pain is to perform a tympanocentesis, i.e., needle aspiration through the tympanic membrane to relieve the ear pain and to identify the causative organism(s).
Hearing loss
Children with recurrent episodes of acute otitis media and those with otitis media with effusion or chronic suppurative otitis media have higher risks of developing conductive and sensorineural hearing loss. Globally approximately 141 million people have mild hearing loss due to otitis media (2.1% of the population). This is more common in males (2.3%) than females (1.8%).
This hearing loss is mainly due to fluid in the middle ear or rupture of the tympanic membrane. Prolonged duration of otitis media is associated with ossicular complications and, together with persistent tympanic membrane perforation, contributes to the severity of the disease and hearing loss. When a cholesteatoma or granulation tissue is present in the middle ear, the degree of hearing loss and ossicular destruction is even greater.
Periods of conductive hearing loss from otitis media may have a detrimental effect on speech development in children. Some studies have linked otitis media to learning problems, attention disorders, and problems with social adaptation. Furthermore, it has been demonstrated that individuals with otitis media have more depression/anxiety-related disorders compared to individuals with normal hearing. Once the infections resolve and hearing thresholds return to normal, childhood otitis media may still cause minor and irreversible damage to the middle ear and cochlea. More research on the importance of screening all children under 4 years old for otitis media with effusion needs to be performed.
Epidemiology
Acute otitis media is very common in childhood. It is the most common condition for which medical care is provided in children under five years of age in the US. Acute otitis media affects 11% of people each year (709 million cases) with half occurring in those below five years. Chronic suppurative otitis media develops in about 5% of these cases, or 31 million people, with 22.6% of cases occurring annually under the age of five years. Otitis media resulted in 2,400 deaths in 2013, down from 4,900 deaths in 1990.
Australian Aboriginals experience a high level of conductive hearing loss largely due to the massive incidence of middle ear disease among the young in Aboriginal communities. Aboriginal children experience middle ear disease for two and a half years on average during childhood compared with three months for non indigenous children. If untreated it can leave a permanent legacy of hearing loss. The higher incidence of deafness in turn contributes to poor social, educational and emotional outcomes for the children concerned. Such children as they grow into adults are also more likely to experience employment difficulties and find themselves caught up in the criminal justice system. Research in 2012 revealed that nine out of ten Aboriginal prison inmates in the Northern Territory suffer from significant hearing loss.
Andrew Butcher speculates that the lack of fricatives and the unusual segmental inventories of Australian languages may be due to the very high presence of otitis media ear infections and resulting hearing loss in their populations. People with hearing loss often have trouble distinguishing different vowels and hearing fricatives and voicing contrasts. Australian Aboriginal languages thus seem to show similarities to the speech of people with hearing loss, and avoid those sounds and distinctions which are difficult for people with early childhood hearing loss to perceive. At the same time, Australian languages make full use of those distinctions, namely place of articulation distinctions, which people with otitis media-caused hearing loss can perceive more easily. This hypothesis has been challenged on historical, comparative, statistical, and medical grounds.
Etymology
The term otitis media is composed of otitis, Ancient Greek for "inflammation of the ear", and media, Latin for "middle".
| Biology and health sciences | Infectious diseases by site | Health |
215248 | https://en.wikipedia.org/wiki/Nephron | Nephron | The nephron is the minute or microscopic structural and functional unit of the kidney. It is composed of a renal corpuscle and a renal tubule. The renal corpuscle consists of a tuft of capillaries called a glomerulus and a cup-shaped structure called Bowman's capsule. The renal tubule extends from the capsule. The capsule and tubule are connected and are composed of epithelial cells with a lumen. A healthy adult has 1 to 1.5 million nephrons in each kidney. Blood is filtered as it passes through three layers: the endothelial cells of the capillary wall, its basement membrane, and between the podocyte foot processes of the lining of the capsule. The tubule has adjacent peritubular capillaries that run between the descending and ascending portions of the tubule. As the fluid from the capsule flows down into the tubule, it is processed by the epithelial cells lining the tubule: water is reabsorbed and substances are exchanged (some are added, others are removed); first with the interstitial fluid outside the tubules, and then into the plasma in the adjacent peritubular capillaries through the endothelial cells lining that capillary. This process regulates the volume of body fluid as well as levels of many body substances. At the end of the tubule, the remaining fluid—urine—exits: it is composed of water, metabolic waste, and toxins.
The interior of Bowman's capsule, called Bowman's space, collects the filtrate from the filtering capillaries of the glomerular tuft, which also contains mesangial cells supporting these capillaries. These components function as the filtration unit and make up the renal corpuscle. The filtering structure (glomerular filtration barrier) has three layers composed of endothelial cells, a basement membrane, and podocyte foot processes. The tubule has five anatomically and functionally different parts: the proximal tubule, which has a convoluted section called the proximal convoluted tubule followed by a straight section (proximal straight tubule); the loop of Henle, which has two parts, the descending loop of Henle ("descending loop") and the ascending loop of Henle ("ascending loop"); the distal convoluted tubule ("distal loop"); the connecting tubule; and, as the last part of the nephron, the collecting ducts. Nephrons have two lengths with different urine-concentrating capacities: long juxtamedullary nephrons and short cortical nephrons.
The four mechanisms used to create and process the filtrate (the result of which is to convert blood to urine) are filtration, reabsorption, secretion and excretion. Filtration or ultrafiltration occurs in the glomerulus and is largely passive: it is dependent on the intracapillary blood pressure. About one-fifth of the plasma is filtered as the blood passes through the glomerular capillaries; four-fifths continues into the peritubular capillaries. Normally the only components of the blood that are not filtered into Bowman's capsule are blood proteins, red blood cells, white blood cells and platelets. Over 150 liters of fluid enter the glomeruli of an adult every day: 99% of the water in that filtrate is reabsorbed. Reabsorption occurs in the renal tubules and is either passive, due to diffusion, or active, due to pumping against a concentration gradient. Secretion also occurs in the tubules and collecting duct and is active. Substances reabsorbed include: water, sodium chloride, glucose, amino acids, lactate, magnesium, calcium phosphate, uric acid, and bicarbonate. Substances secreted include urea, creatinine, potassium, hydrogen, and uric acid. Some of the hormones which signal the tubules to alter the reabsorption or secretion rate, and thereby maintain homeostasis, include (along with the substance affected) antidiuretic hormone (water), aldosterone (sodium, potassium), parathyroid hormone (calcium, phosphate), atrial natriuretic peptide (sodium) and brain natriuretic peptide (sodium). A countercurrent system in the renal medulla provides the mechanism for generating a hypertonic interstitium, which allows the recovery of solute-free water from within the nephron and returning it to the venous vasculature when appropriate.
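The filtration and reabsorption figures above imply a simple daily water budget, sketched below in Python with rounded, illustrative numbers (roughly 150 L of filtrate per day, about 99% of the water reabsorbed, and a filtration fraction of about one-fifth); the values are for illustration, not measurements from any particular study.

# Rough daily water balance across the nephrons of an adult (illustrative values).
filtrate_per_day_l = 150.0      # total glomerular filtrate, litres per day
fraction_reabsorbed = 0.99      # ~99% of filtered water is returned to the blood

urine_output_l = filtrate_per_day_l * (1.0 - fraction_reabsorbed)
print(f"Approximate daily urine output: {urine_output_l:.1f} L")   # ~1.5 L

# Filtration fraction: about one-fifth of the plasma entering the glomerulus is
# filtered; the remaining four-fifths continues into the peritubular capillaries.
renal_plasma_flow_l_per_day = filtrate_per_day_l / 0.20
print(f"Implied renal plasma flow: {renal_plasma_flow_l_per_day:.0f} L/day")   # ~750 L/day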
Some diseases of the nephron predominantly affect either the glomeruli or the tubules. Glomerular diseases include diabetic nephropathy, glomerulonephritis and IgA nephropathy; renal tubular diseases include acute tubular necrosis and polycystic kidney disease.
Structure
The nephron is the functional unit of the kidney. This means that each separate nephron is where the main work of the kidney is performed.
A nephron is made of two parts:
a renal corpuscle, which is the initial filtering component, and
a renal tubule that processes and carries away the filtered fluid.
Renal corpuscle
The renal corpuscle is the site of the filtration of blood plasma. The renal corpuscle consists of the glomerulus, and the glomerular capsule or Bowman's capsule.
The renal corpuscle has two poles: a vascular pole and a tubular pole, also called the urinary pole. The arterioles of the renal circulation enter and leave the glomerulus at the vascular pole, while the glomerular filtrate leaves Bowman's capsule and enters the renal tubule at the urinary pole.
Glomerulus
The glomerulus is the network, known as a tuft, of filtering capillaries located at the vascular pole of the renal corpuscle in Bowman's capsule. Each glomerulus receives its blood supply from an afferent arteriole of the renal circulation. The glomerular blood pressure provides the driving force for water and solutes to be filtered out of the blood plasma, and into the interior of Bowman's capsule, called Bowman's space.
Only about a fifth of the plasma is filtered in the glomerulus. The rest passes into an efferent arteriole. The diameter of the efferent arteriole is smaller than that of the afferent, and this difference increases the hydrostatic pressure in the glomerulus.
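Because filtration here is pressure-driven, the driving force can be summarised as a Starling-type balance. The pressures in the sketch below are assumed, textbook-style values used only for illustration; they are not taken from this article.

```python
# Net filtration pressure across the glomerular filtration barrier,
# using assumed, textbook-style pressures (mmHg) purely for illustration.

glomerular_capillary_pressure = 55   # hydrostatic pressure pushing plasma out of the capillaries
bowman_space_pressure = 15           # filtrate pressure in Bowman's space, opposing filtration
plasma_oncotic_pressure = 30         # protein osmotic pressure drawing fluid back into the blood

net_filtration_pressure = (glomerular_capillary_pressure
                           - bowman_space_pressure
                           - plasma_oncotic_pressure)
print(f"Net filtration pressure: {net_filtration_pressure} mmHg outward")  # 10 mmHg
```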
Bowman's capsule
The Bowman's capsule, also called the glomerular capsule, surrounds the glomerulus. It is composed of a visceral inner layer formed by specialized cells called podocytes, and a parietal outer layer composed of simple squamous epithelium. Fluids from blood in the glomerulus are ultrafiltered through several layers, resulting in what is known as the filtrate.
The filtrate next moves to the renal tubule, where it is further processed to form urine. The different stages of this fluid are collectively known as the tubular fluid.
Renal tubule
The renal tubule is a continuous and long pipe-like structure containing the tubular fluid filtered through the glomerulus. The filtrate passing through the renal tubule ultimately ends at the collecting duct system.
The components of the renal tubule are:
Proximal convoluted tubule: lies in the cortex and is lined by simple cuboidal epithelium with brush borders, which greatly increase the surface area for absorption.
Loop of Henle: lies in the medulla and is U-shaped (similar to a hair-pin)
Descending limb of loop of Henle: a single segment of uniform thickness
Ascending limb of loop of Henle: two segments of differing thickness (proximal portion lined by simple squamous epithelium is called the thin ascending limb of loop of Henle, distal portion lined by simple cuboidal epithelium is called the thick ascending limb of loop of Henle).
Distal convoluted tubule: lies in the cortex
Collecting tubule
The epithelial cells that form these nephron segments can be distinguished by the shapes of their actin cytoskeleton visualized by confocal microscopy of fluorescent phalloidin.
Blood from the efferent arteriole, containing everything that was not filtered out in the glomerulus, moves into the peritubular capillaries, tiny blood vessels that surround the loop of Henle and the proximal and distal tubules, where the tubular fluid flows. Substances are then reabsorbed from the tubular fluid back into the bloodstream.
The peritubular capillaries then recombine to form an efferent venule, which combines with efferent venules from other nephrons into the renal vein, and rejoins the main bloodstream.
Length difference
Cortical nephrons (the majority of nephrons) start high in the cortex and have a short loop of Henle which does not penetrate deeply into the medulla. Cortical nephrons can be subdivided into superficial cortical nephrons and midcortical nephrons.
Juxtamedullary nephrons start low in the cortex near the medulla and have a long loop of Henle which penetrates deeply into the renal medulla: only they have their loop of Henle surrounded by the vasa recta. These long loops of Henle and their associated vasa recta create a hyperosmolar gradient that allows for the generation of concentrated urine. The hairpin bend of these loops penetrates as far as the inner zone of the medulla.
Juxtamedullary nephrons are found only in birds and mammals, and have a specific location: medullary refers to the renal medulla, while juxta (Latin: near) refers to the relative position of the renal corpuscle of this nephron - near the medulla, but still in the cortex. In other words, a juxtamedullary nephron is a nephron whose renal corpuscle is near the medulla, and whose proximal convoluted tubule and its associated loop of Henle occur deeper in the medulla than the other type of nephron, the cortical nephron.
The juxtamedullary nephrons comprise only about 15% of the nephrons in the human kidney. However, it is this type of nephron which is most often depicted in illustrations of nephrons.
In humans, cortical nephrons have their renal corpuscles in the outer two thirds of the cortex, whereas juxtamedullary nephrons have their corpuscles in the inner third of the cortex.
Functions
The nephron uses four mechanisms to convert blood into urine: filtration, reabsorption, secretion, and excretion. These apply to numerous substances. The structure and function of the epithelial cells lining the lumen change along the course of the nephron, and its segments are named by their location, reflecting their different functions.
Proximal tubule
The proximal tubule, as a part of the nephron, can be divided into an initial convoluted portion and a following straight (descending) portion. Fluid in the filtrate entering the proximal convoluted tubule is reabsorbed into the peritubular capillaries, including more than half of the filtered salt and water and virtually all of the filtered organic solutes (primarily glucose and amino acids).
Loop of Henle
The loop of Henle is a U-shaped tube that extends from the proximal tubule. It consists of a descending limb and an ascending limb. It begins in the cortex, receiving filtrate from the proximal convoluted tubule, extends into the medulla as the descending limb, and then returns to the cortex as the ascending limb to empty into the distal convoluted tubule. The primary role of the loop of Henle is to enable an organism to produce concentrated urine, not by increasing the tubular concentration, but by rendering the interstitial fluid hypertonic.
Considerable differences aid in distinguishing the descending and ascending limbs of the loop of Henle. The descending limb is permeable to water and noticeably less permeable to salt, and thus only indirectly contributes to the concentration of the interstitium. As the filtrate descends deeper into the hypertonic interstitium of the renal medulla, water flows freely out of the descending limb by osmosis until the tonicity of the filtrate and interstitium equilibrate. The hypertonicity of the medulla (and therefore concentration of urine) is determined in part by the size of the loops of Henle.
Unlike the descending limb, the thick ascending limb is impermeable to water, a critical feature of the countercurrent exchange mechanism employed by the loop. The ascending limb actively pumps sodium out of the filtrate, generating the hypertonic interstitium that drives countercurrent exchange. In passing through the ascending limb, the filtrate grows hypotonic since it has lost much of its sodium content. This hypotonic filtrate is passed to the distal convoluted tubule in the renal cortex.
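A deliberately simplified, classroom-style simulation can illustrate how repeating this pump step alongside tubular flow multiplies a modest gradient into a steep corticomedullary gradient. In the sketch below, the segment count, osmolarities, 200 mOsm "single effect" and cycle count are all assumed illustrative values; washout by the vasa recta is not modelled, so the gradient would keep growing if the loop ran longer.

```python
# Toy countercurrent-multiplier model of the loop of Henle (illustrative only).

N = 6                  # depth levels from cortex (index 0) to the hairpin tip (index N-1)
INFLOW = 300.0         # osmolarity (mOsm/L) of isotonic fluid entering the descending limb
SINGLE_EFFECT = 200.0  # gradient the ascending-limb pumps maintain at each depth

descending = [INFLOW] * N
ascending = [INFLOW] * N
interstitium = [INFLOW] * N

for _ in range(10):  # each cycle: pump, equilibrate, then advance the fluid one segment
    # 1. "Single effect": the water-impermeable ascending limb pumps solute into the
    #    interstitium until the two differ by SINGLE_EFFECT at each depth.
    for i in range(N):
        mean = (ascending[i] + interstitium[i]) / 2
        interstitium[i] = mean + SINGLE_EFFECT / 2
        ascending[i] = mean - SINGLE_EFFECT / 2
    # 2. The water-permeable descending limb loses water by osmosis and equilibrates
    #    with the surrounding interstitium at each depth.
    for i in range(N):
        descending[i] = interstitium[i]
    # 3. Axial flow: fluid advances one segment down the descending limb, rounds the
    #    hairpin into the ascending limb, and fresh isotonic fluid enters at the top.
    tip = descending[-1]
    for i in range(N - 1, 0, -1):
        descending[i] = descending[i - 1]
    descending[0] = INFLOW
    for i in range(N - 1):
        ascending[i] = ascending[i + 1]
    ascending[-1] = tip

print("Interstitial osmolarity, cortex -> medullary tip:", [round(x) for x in interstitium])
print("Fluid leaving the ascending limb:", round(ascending[0]), "mOsm/L (hypotonic)")
```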
Distal convoluted tubule
The distal convoluted tubule has a different structure and function to that of the proximal convoluted tubule. Cells lining the tubule have numerous mitochondria to produce enough energy (ATP) for active transport to take place. Much of the ion transport taking place in the distal convoluted tubule is regulated by the endocrine system. In the presence of parathyroid hormone, the distal convoluted tubule reabsorbs more calcium and secretes more phosphate. When aldosterone is present, more sodium is reabsorbed and more potassium secreted. Ammonia is also absorbed during the selective reabsorption. Atrial natriuretic peptide causes the distal convoluted tubule to secrete more sodium.
Connecting tubule
The connecting tubule is part of the distal nephron and is the final segment of the tubule before it enters the collecting duct system. Water, some salts, and nitrogenous wastes such as urea and creatinine pass from it into the collecting tubule.
Collecting duct system
Each distal convoluted tubule delivers its filtrate to a system of collecting ducts, the first segment of which is the connecting tubule. The collecting duct system begins in the renal cortex and extends deep into the medulla. As the urine travels down the collecting duct system, it passes by the medullary interstitium which has a high sodium concentration as a result of the loop of Henle's countercurrent multiplier system.
Because it has a different origin during the development of the urinary and reproductive organs than the rest of the nephron, the collecting duct is sometimes not considered a part of the nephron. Instead of originating from the metanephrogenic blastema, the collecting duct originates from the ureteric bud.
Though the collecting duct is normally impermeable to water, it becomes permeable in the presence of antidiuretic hormone (ADH). ADH affects the function of aquaporins, resulting in the reabsorption of water molecules as the tubular fluid passes through the collecting duct. Aquaporins are membrane proteins that selectively conduct water molecules while preventing the passage of ions and other solutes. As much as three-quarters of the water in this fluid can be reabsorbed by osmosis as it passes through the collecting duct. The levels of ADH thus determine whether urine will be concentrated or diluted. An increase in ADH is an indication of dehydration, while water sufficiency results in a decrease in ADH, allowing for diluted urine.
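As a rough illustration of how ADH sets the final urine volume, the sketch below applies the "up to three-quarters" figure to an assumed daily volume of fluid reaching the collecting duct; that volume is a hypothetical round number, not a value from this article.

```python
# Rough illustration of how ADH changes final urine volume.
# The daily volume reaching the collecting duct is an assumed, hypothetical number.

fluid_reaching_collecting_duct_l = 8.0   # assumed volume of tubular fluid per day
max_fraction_reabsorbed_with_adh = 0.75  # up to three-quarters reclaimed when ADH is high

urine_with_high_adh = fluid_reaching_collecting_duct_l * (1 - max_fraction_reabsorbed_with_adh)
urine_with_low_adh = fluid_reaching_collecting_duct_l   # duct stays largely impermeable to water

print(f"Concentrated urine (high ADH): about {urine_with_high_adh:.0f} L/day")  # ~2 L
print(f"Dilute urine (low ADH): up to about {urine_with_low_adh:.0f} L/day")    # ~8 L
```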
Lower portions of the collecting duct are also permeable to urea, allowing some of it to enter the medulla, thus maintaining the medulla's high solute concentration (which is very important for the nephron).
Urine leaves the medullary collecting ducts through the renal papillae, emptying into the renal calyces, the renal pelvis, and finally into the urinary bladder via the ureter.
Juxtaglomerular apparatus
The juxtaglomerular apparatus (JGA) is a specialized region associated with the nephron, but separate from it. It produces and secretes into the circulation the enzyme renin (angiotensinogenase), which cleaves angiotensinogen to yield the ten-amino-acid peptide angiotensin I. Angiotensin I is then converted to angiotensin II, a potent vasoconstrictor, by the removal of two amino acids, a step accomplished by angiotensin-converting enzyme (ACE). This sequence of events is referred to as the renin–angiotensin system (RAS) or renin–angiotensin–aldosterone system (RAAS). The JGA is located between the thick ascending limb and the afferent arteriole. It contains three components: the macula densa, juxtaglomerular cells, and extraglomerular mesangial cells.
Clinical significance
Patients in early stages of chronic kidney disease show an approximate 50% reduction in the number of nephrons, comparable to the nephron loss that occurs with aging (between ages 18–29 and 70–75).
Diseases of the nephron predominantly affect either the glomeruli or the tubules. Glomerular diseases include diabetic nephropathy, glomerulonephritis and IgA nephropathy; renal tubular diseases include acute tubular necrosis, renal tubular acidosis, and polycystic kidney disease.
| Biology and health sciences | Urinary system | Biology |
215397 | https://en.wikipedia.org/wiki/Pipit | Pipit | The pipits are a cosmopolitan genus, Anthus, of small passerine birds with medium to long tails. Along with the wagtails and longclaws, the pipits make up the family Motacillidae. The genus is widespread, occurring across most of the world, except the driest deserts, rainforest and the mainland of Antarctica.
They are slender, often drab, ground-feeding insectivores of open country. Like their relatives in the family, the pipits are monogamous and territorial. Pipits are ground nesters, laying up to six speckled eggs.
Taxonomy and systematics
The genus Anthus was introduced in 1805 by German naturalist Johann Matthäus Bechstein. The type species was later designated as the meadow pipit.
The generic name Anthus is the Latin word for a small bird of grasslands mentioned by Pliny the Elder.
Molecular studies of the pipits suggested that the genus arose in East Asia around seven million years ago (Mya), during the Miocene, and that the genus had spread to the Americas, Africa, and Europe between 5 and 6 Mya. Speciation rates were high during the Pliocene (5.3 to 2.6 Mya), but slowed down during the Pleistocene. Repeated dispersal between continents seems to have been important in generating new species in Eurasia, Africa, and North America, rather than species arising by radiation once a continent was reached. In South America, however, vicariance appears to have played an important role in speciation.
Extant species
The genus has more than 40 species, making it the largest genus in terms of numbers in its family. The exact species limits of the genus are still a matter of some debate, with some checklists recognising only 34 species. For example, the Australasian pipit, A. novaeseelandiae, which is currently treated as nine subspecies found in New Zealand, Australia, and New Guinea, once also included Richard's pipit and the paddyfield pipit of Asia, and the African pipit of Africa. It has also been suggested that the Australian and New Zealand populations be split, or even that New Zealand's subspecies found on its outlying Subantarctic Islands be split from the mainland species. In part the taxonomic difficulties arise from the extreme similarities in appearance across the genus.
The family has an additional species, the golden pipit, Tmetothylacus tenellus, which belongs to a distinct, monotypic genus. This species is apparently intermediate in appearance between the pipits and the longclaws, and is probably more closely related to the longclaws. One species, the yellow-breasted pipit, is sometimes split out into a genus Hemimacronyx, which is considered to be intermediate between the longclaws and pipits. The split was originally proposed based on morphological features, but it has also found support based upon genetic analysis.
Formerly, some authorities placed the Kakamega greenbul (nominate) in this genus (as Anthus kakamegae).
Description
The pipits are generally highly conservative in appearance. They are generally in length, although the smallest species, the short-tailed pipit, is only . In weight, they range from . The largest species may be the alpine pipit. Like all members of the family, they are slender, short-necked birds with long tails and long, slender legs with elongated (in some cases very elongated) hind claws. The length of the hind claw varies with the habits of the species: more arboreal species have shorter, more curved hind claws than the more terrestrial species. The bills are generally long, slender, and pointed. In both size and plumage, few differences are seen between the sexes. One unusual feature of the pipits, which they share with the rest of their family, but not the rest of the passerines, is that the tertials on the wing entirely cover the primary flight feathers. This is thought to be a feature to protect the primaries, which are important to flight, from the sun, which causes the feathers to fade and become brittle if not protected.
The plumage of the pipits is generally drab and brown, buff, or faded white. The undersides are usually lighter than the upperparts, and a variable amount of barring and streaking is seen on the back, wings, and breast. The drab, mottled-brown colours provide some camouflage against the soil and stones on which they are generally found. A few species have slightly more colourful breeding plumages; for example, the rosy pipit has greenish edges on the wing feathers. The yellow-breasted pipit, if it is retained in this genus, is quite atypical in having bright yellow plumage on the throat, breast, and belly.
Pipits are morphologically similar to some larks, but the two groups are quite distantly related; the lark family Alaudidae is part of the superfamily Sylvioidea, rather than the Passeroidea, where the pipits are placed. Morphological differences between the two groups of birds are, in fact, plentiful. Anatomical differences include a differently structured syrinx, differences in the structure of the tarsi, and in many lark genera, the presence of a distinct 10th primary, a fourth tertial, and feathers at least partially covering the nostrils. Bill shape differs between larks and pipits, with larks having an evenly sloping culmen, whereas most pipits have a small hump over the nostrils, and lark bills are generally heavier, reflecting differences in diet. Differences occur in the feather tracts of the two groups; while many larks have crests, no pipit does; pipits have only one prominent row of , whereas larks have two.
Distribution and habitat
The pipits have a cosmopolitan distribution, occurring across most of the world's land surface. They are the only genus in their family to occur widely in the Americas (two species of wagtails marginally occur in Alaska, as well). Three species of pipits occur in North America, and seven species occur in South America. The remaining species are spread throughout Eurasia, Africa, and Australia, along with two species restricted to islands in the Atlantic. Some six species occur on more than one continent.
As might be expected from a genus with such a wide distribution, the pipits are found in an equally wide range of habitats. They occur in most types of open habitat, although they are absent from the very driest deserts. They are mostly associated with some kind of grassland, from sea-level to alpine tundra. The rock pipit and South Georgia pipit are found in the rocks and cliffs of the seashore, whereas several species are restricted (for part of the year in some cases) to alpine areas. The family also ranges from the northern tundra and the subantarctic islands of New Zealand and the South Georgia group to the tropics. They are absent from tropical rainforest, but a few species are associated with open woodland, for example the wood pipit of southern Africa, which is found in open woodland savanna and miombo woodland.
The pipits range from entirely sedentary to entirely migratory. Insular species such as Berthelot's pipit, which is endemic to Madeira and the Canary Islands, are entirely sedentary, as are some species in warmer areas like the Nilgiri pipit. Other species are partly nomadic during the nonbreeding season, like the long-legged pipit of central Africa or the ochre-breasted pipit of South America. These seasonal movements are in response to conditions in the environment, and are poorly understood and unpredictable. Longer, more regular migrations between discrete breeding and wintering grounds are undertaken by several species. The tree pipit, which breeds in Europe and northern Asia, winters in Asia and sub-Saharan Africa, a pattern of long-distance migration shared with other northerly species. Species may also be partly migratory, with northern populations being migratory but more temperate populations being resident (such as the meadow pipit in Europe). The distances involved do not have to be that long; the mountain pipit of southern Africa breeds in the Drakensberg of South Africa and migrates north only as far as Angola and Zambia. Migration is usually undertaken in groups and may happen both during the day and at night. Some variation happens in this, for example, Sprague's pipit of North America apparently only migrates by day.
Behaviour and ecology
The pipits are active terrestrial birds that usually spend most of their time on the ground. They will fly in order to display during breeding, while migrating and dispersing, and also when flushed by danger. A few species make use of trees, perching in them and flying to them when disturbed. Low shrubs, rocks and termite nests may also be used as vantage points. Like their relatives the wagtails, pipits engage in tail-wagging. The way in which a pipit does this can provide clues to its identity in otherwise similar looking species. Upland pipits, for example, flick their tails quite quickly, as opposed to olive-backed pipits which wag their tails more gently. In general pipits move their tails quite slowly. The buff-bellied pipit wags its tail both up and down and from side to side. The exact function of tail-wagging is unclear; in the related wagtails it is thought to be a signal to predators of vigilance.
Feeding
The diet of the pipits is dominated by small invertebrates. Insects are the most important prey items; the types taken include flies and their larvae, beetles, grasshoppers and crickets, true bugs, mantids, ants, aphids and particularly the larvae and adults of moths and butterflies. Other invertebrates taken include spiders and, rarely, worms and scorpions. Pipits are generally catholic in their diet, its composition apparently reflecting the local abundance of prey (and varying with the season). The diet consumed by adults may differ from that of the young birds; for example, adult tree pipits take large numbers of beetles but do not feed many to their chicks. Species feeding on the seashore are reported to feed on marine crustaceans and molluscs. A few species have been reported to feed on small fish, beating them after capture in the manner of a kingfisher. Rock pipits have also been observed feeding on fish dropped by puffins. These fish, which include sand eels and rocklings, were dropped by puffins being harassed by gulls. A few species are also reported to consume berries and seeds.
Species list
The genus contains 46 species:
Richard's pipit (Anthus richardi)
Paddyfield pipit (Anthus rufulus)
Australian pipit (Anthus australis)
New Zealand pipit (Anthus novaeseelandiae)
African pipit (Anthus cinnamomeus)
Mountain pipit (Anthus hoeschi)
Blyth's pipit (Anthus godlewskii)
Tawny pipit (Anthus campestris)
Long-billed pipit (Anthus similis)
Nicholson's pipit (Anthus nicholsoni)
Wood pipit (Anthus nyassae)
Buffy pipit (Anthus vaalensis)
Plain-backed pipit (Anthus leucophrys)
Long-legged pipit (Anthus pallidiventris)
Meadow pipit (Anthus pratensis)
Tree pipit (Anthus trivialis)
Olive-backed pipit (Anthus hodgsoni)
Pechora pipit (Anthus gustavi)
Rosy pipit (Anthus roseatus)
Red-throated pipit (Anthus cervinus)
American pipit (Anthus rubescens)
Siberian pipit (Anthus japonicus)
Water pipit (Anthus spinoletta)
European rock pipit (Anthus petrosus)
Nilgiri pipit (Anthus nilghiriensis)
Upland pipit (Anthus sylvanus)
Berthelot's pipit (Anthus berthelotii)
Striped pipit (Anthus lineiventris)
African rock pipit (Anthus crenatus)
Short-tailed pipit (Anthus brachyurus)
Bushveld pipit (Anthus caffer)
Sokoke pipit (Anthus sokokensis)
Malindi pipit (Anthus melindae)
Yellow-breasted pipit (Anthus chloris)
Alpine pipit (Anthus gutturalis)
Madanga (Anthus ruficollis)
Sprague's pipit (Anthus spragueii)
Yellowish pipit (Anthus chii)
Peruvian pipit (Anthus peruvianus)
Short-billed pipit (Anthus furcatus)
Puna pipit (Anthus brevirostris)
Pampas pipit (Anthus chacoensis)
Correndera pipit (Anthus correndera)
South Georgia pipit (Anthus antarcticus)
Ochre-breasted pipit (Anthus nattereri)
Hellmayr's pipit (Anthus hellmayri)
Paramo pipit (Anthus bogotensis)
| Biology and health sciences | Passerida | Animals |