Rice cooker
https://en.wikipedia.org/wiki/Rice%20cooker

A rice cooker or rice steamer is an automated kitchen appliance designed to boil or steam rice. It consists of a heat source, a cooking bowl, and a thermostat. The thermostat measures the temperature of the cooking bowl and controls the heat. Complex, high-tech rice cookers may have more sensors and other components, and may be multipurpose.
The term rice cooker formerly applied to non-automated dedicated rice-cooking utensils, which have an ancient history (a ceramic rice steamer dated to 1250 BC is on display in the British Museum). It now applies mostly to automated cookers. Electric rice cookers were developed in Japan, where they are known as suihanki (炊飯器, literally, "boil-rice-device").
History
The NJ-N1, developed by Mitsubishi Electric in 1923, was the first electric rice cooker and a direct ancestor of today's automatic electric rice cookers. At that time, electricity was not widely available in ordinary households, and the cooker was intended for use on ships. It was a simple device that heated with an attached heating coil, without any automation.
In the 1940s and early 1950s, Mitsubishi Electric, Matsushita Electric (now Panasonic), and Sony introduced electric rice cookers for home use with built-in heating coils, but these too lacked automation; they were not well received and sold poorly.
The ER-4, introduced by Toshiba on December 10, 1955 (or 1956), was the world's first automatic electric rice cooker for home use. It was developed by Toshiba's Shogo Yamada beginning in 1951 and completed in 1955 thanks to a breakthrough invention by Yoshitada Minami (ja), president of a Toshiba partner company. Development was a continuous process of trial and error. Research showed that rice cooks best at boiling temperature (100 °C, 212 °F) for 20 minutes, so in theory rice should cook well if an automatic timer turns off the cooker 20 minutes after the water in the pot has boiled. However, the time it takes for the water to boil varies with the ambient temperature, the heat output of the pot, and the amount of rice and water, so in the prototype stage the rice was sometimes overcooked and burnt, while at other times it was undercooked and left with a hard core. The revolutionary idea that solved this problem was a double-layered structure for the pot. The outer pot was filled with a glass of water and heated; after about 20 minutes this water would evaporate, the temperature would rise rapidly, and the thermostat would detect the rise and switch off the heat.
The initial launch price was 3,200 yen, about one-third of the average college graduate's starting monthly salary. At its launch, 700 units were produced, but they did not sell well.
The company then conducted sales promotions using the sales networks of electric power companies, held sales demonstrations, and sold an automatic timer that could turn on the rice cooker at any time, and the product's popularity exploded. By 1960, four years after its introduction, the automatic electric rice cooker was in use in about half of all Japanese households.
The success of Toshiba's automatic electric rice cookers sparked a "manufacturing war", and Matsushita Electric entered the field as early as 1956 with the EC-36. The EC-36 was a cheaper product that used a single pot, reducing the amount of metal used and making it more competitive in terms of sales.
Later, the automatic electric rice cooker was well received in Asian countries and around the world under the name "Automatic Rice Cooker". It also had a great social impact, giving housewives more time and, by some accounts, accelerating the women's liberation movement. Helen Macnaughtan, on the other hand, argued that the invention, though it freed women from a menial kitchen task and allowed some women to work part-time, was not a great victory for women's liberation, because the time saved was simply devoted to other household tasks.
In 1972, a rice cooker with a heat-retention function was introduced, and in 1979 came an electronic rice cooker equipped with a microcomputer that could also manage the soaking of rice after washing and the heat level. In 1988, rice cookers with electromagnetic induction heating were introduced, which provided more powerful heating.
Principle of operation
A basic rice cooker has a main body (pot), an inner cooking container which holds the rice, an electric heating element, and a thermostat.
The bowl is filled with rice and water and heated at full power; the water reaches and stays at boiling point (100 °C, 212 °F). When the water has all been absorbed, the temperature can rise above boiling point, which trips the thermostat. Some cookers switch to low-power "warming" mode, keeping the rice at a safe temperature of approximately 65 °C (150 °F); simpler models switch off; the rice has entered the resting phase.
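The thermostat behaviour described above can be sketched as a toy control function; the temperatures and mode names below are illustrative assumptions, not specifications of any particular cooker:

```python
# Toy sketch of a basic rice cooker's thermostat logic (assumed values).
BOILING_C = 100.0  # water pins the bowl near this point while any remains
WARM_C = 65.0      # typical "keep warm" holding temperature

def cooker_mode(bowl_temp_c, water_left):
    """Return the heating mode implied by the bowl temperature.

    While liquid water remains, the bowl cannot exceed boiling point and
    the cooker heats at full power. Once the water is absorbed, the
    temperature rises past boiling, tripping the thermostat into the
    low-power warming mode (simpler models would switch off instead).
    """
    if water_left or bowl_temp_c <= BOILING_C:
        return "cook"   # full power
    return "warm"       # thermostat tripped: hold near WARM_C
```

The key point the sketch captures is that the trigger is not a timer but the temperature excursion above boiling that only happens once the free water is gone.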
More advanced cookers may use fuzzy logic for more detailed temperature control, induction rather than resistive heating, a steaming tray for other foods, and even the ability to rinse the rice.
Rice types and rice cookers
Brown rice generally needs longer cooking times than white rice, unless it is broken or flour-blasted (which perforates the bran).
Many models feature an ability to cook sticky rice or porridge as an added value. Most can be used as steamers. Some can be used as slow cookers. Some other models can bake bread or in some cases have an added function to maintain temperatures suitable for fermentation of bread dough or yogurt. Multi-purpose devices with rice cooking capability are not necessarily called "rice cookers", but typically "multi-cookers".
A rice cooker, or slow cooker, can be used in conjunction with a temperature probe and an external thermostat to cook food at a stable low temperature ("sous-vide").
Other uses
Steam rice cookers have been shown to be effective for decontamination of face masks.
Computer simulation
https://en.wikipedia.org/wiki/Computer%20simulation

Computer simulation is the running of a mathematical model on a computer, the model being designed to represent the behaviour of, or the outcome of, a real-world or physical system. The reliability of some mathematical models can be determined by comparing their results to the real-world outcomes they aim to predict. Computer simulations have become a useful tool for the mathematical modeling of many natural systems in physics (computational physics), astrophysics, climatology, chemistry, biology and manufacturing, as well as human systems in economics, psychology, social science, health care and engineering. Simulation of a system is represented as the running of the system's model. It can be used to explore and gain new insights into new technology and to estimate the performance of systems too complex for analytical solutions.
Computer simulations are realized by running computer programs that can be either small, running almost instantly on small devices, or large-scale programs that run for hours or days on network-based groups of computers. The scale of events being simulated by computer simulations has far exceeded anything possible (or perhaps even imaginable) using traditional paper-and-pencil mathematical modeling. In 1997, a desert-battle simulation of one force invading another involved the modeling of 66,239 tanks, trucks and other vehicles on simulated terrain around Kuwait, using multiple supercomputers in the DoD High Performance Computer Modernization Program.
Other examples include a 1-billion-atom model of material deformation; a 2.64-million-atom model of the complex protein-producing organelle of all living organisms, the ribosome, in 2005;
a complete simulation of the life cycle of Mycoplasma genitalium in 2012; and the Blue Brain project at EPFL (Switzerland), begun in May 2005 to create the first computer simulation of the entire human brain, right down to the molecular level.
Because of the computational cost of simulation, computer experiments are used to perform inference such as uncertainty quantification.
Simulation versus model
A model consists of the equations used to capture the behavior of a system. By contrast, a computer simulation is the actual running of the program that performs the algorithms which solve those equations, often in an approximate manner. Simulation, therefore, is the process of running a model. Thus one would not "build a simulation"; instead, one would "build a model (or a simulator)", and then either "run the model" or, equivalently, "run a simulation".
History
Computer simulation developed hand-in-hand with the rapid growth of the computer, following its first large-scale deployment during the Manhattan Project in World War II to model the process of nuclear detonation. It was a simulation of 12 hard spheres using a Monte Carlo algorithm. Computer simulation is often used as an adjunct to, or substitute for, modeling systems for which simple closed form analytic solutions are not possible. There are many types of computer simulations; their common feature is the attempt to generate a sample of representative scenarios for a model in which a complete enumeration of all possible states of the model would be prohibitive or impossible.
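The hard-sphere study mentioned above used Monte Carlo sampling. As a self-contained illustration of the general technique (not of that historical code), the classic Monte Carlo estimate of π draws random points in the unit square and counts how many fall inside the quarter circle:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Monte Carlo estimate of pi.

    The area of a quarter of the unit circle is pi/4, so the fraction
    of uniform points in [0,1)^2 landing inside it, times 4, converges
    to pi as the sample count grows.
    """
    rng = random.Random(seed)  # seeded for reproducible runs
    inside = 0
    for _ in range(n_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n_samples
```

This captures the common feature noted above: rather than enumerating all states, the method generates a sample of representative scenarios whose statistics approximate the answer.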
Data preparation
The external data requirements of simulations and models vary widely. For some, the input might be just a few numbers (for example, simulation of a waveform of AC electricity on a wire), while others might require terabytes of information (such as weather and climate models).
Input sources also vary widely:
Sensors and other physical devices connected to the model;
Control surfaces used to direct the progress of the simulation in some way;
Current or historical data entered by hand;
Values extracted as a by-product from other processes;
Values output for the purpose by other simulations, models, or processes.
Lastly, the time at which data is available varies:
"invariant" data is often built into the model code, either because the value is truly invariant (e.g., the value of π) or because the designers consider the value to be invariant for all cases of interest;
data can be entered into the simulation when it starts up, for example by reading one or more files, or by reading data from a preprocessor;
data can be provided during the simulation run, for example by a sensor network.
Because of this variety, and because diverse simulation systems have many common elements, there are a large number of specialized simulation languages. The best-known may be Simula. There are now many others.
Systems that accept data from external sources must be careful to know what they are receiving. While it is easy for computers to read values from text or binary files, it is much harder to know the accuracy of those values (compared to measurement resolution and precision). Accuracy is often expressed as "error bars": a minimum and maximum deviation from the value, defining the range within which the true value is expected to lie. Because digital computer arithmetic is not exact, rounding and truncation errors compound this error, so it is useful to perform an "error analysis" to confirm that the values output by the simulation are still usefully accurate.
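A concrete instance of such rounding error, using only the Python standard library: the double-precision value closest to 0.1 is not exactly 0.1, so naive summation drifts, while `math.fsum` performs compensated summation that tracks the rounding error.

```python
import math

# Ten copies of 0.1 should sum to 1.0, but each float 0.1 is slightly
# larger than one tenth, and naive left-to-right addition rounds at
# every step; math.fsum accumulates the lost low-order bits instead.
naive = sum([0.1] * 10)        # not exactly 1.0
exact = math.fsum([0.1] * 10)  # exactly 1.0
```

In a long-running simulation, millions of such tiny errors can compound, which is precisely why the error analysis above is recommended.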
Types
Models used for computer simulations can be classified according to several independent pairs of attributes, including:
Stochastic or deterministic (and as a special case of deterministic, chaotic) – see external links below for examples of stochastic vs. deterministic simulations
Steady-state or dynamic
Continuous or discrete (and as an important special case of discrete, discrete event or DE models)
Dynamic system simulation, e.g. electric systems, hydraulic systems or multi-body mechanical systems (described primarily by DAEs), or dynamic simulation of field problems, e.g. CFD or FEM simulations (described by PDEs).
Local or distributed.
Another way of categorizing models is to look at the underlying data structures. For time-stepped simulations, there are two main classes:
Simulations which store their data in regular grids and require only next-neighbor access are called stencil codes. Many CFD applications belong to this category.
If the underlying graph is not a regular grid, the model may belong to the meshfree method class.
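As an illustrative sketch of the first class (grid size, boundary conditions and the coefficient `alpha` are assumptions chosen for the example), a stencil code for the 1D heat equation stores its state in a regular grid and updates each cell from its immediate neighbors only:

```python
def heat_step(u, alpha=0.25):
    """One explicit finite-difference step for u_t = u_xx on a 1D grid.

    Each interior cell is updated from itself and its two next
    neighbors (the "stencil"); the boundary cells are held fixed.
    alpha = dt/dx^2 must stay <= 0.5 for this scheme to be stable.
    """
    new = u[:]
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new

# A hot spike in the middle of a cold rod diffuses outward over time.
u = [0.0] * 5
u[2] = 1.0
for _ in range(10):
    u = heat_step(u)
```

The next-neighbor access pattern is what makes stencil codes easy to parallelize over regular grids, which is why so many CFD applications fall into this category.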
For steady-state simulations, equations define the relationships between elements of the modeled system and attempt to find a state in which the system is in equilibrium. Such models are often used in simulating physical systems, as a simpler modeling case before dynamic simulation is attempted.
Dynamic simulations attempt to capture changes in a system in response to (usually changing) input signals.
Stochastic models use random number generators to model chance or random events;
A discrete event simulation (DES) manages events in time. Most computer, logic-test and fault-tree simulations are of this type. In this type of simulation, the simulator maintains a queue of events sorted by the simulated time they should occur. The simulator reads the queue and triggers new events as each event is processed. It is not important to execute the simulation in real time. It is often more important to be able to access the data produced by the simulation and to discover logic defects in the design or the sequence of events.
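A minimal event-queue simulator along the lines just described might look like this; the `Simulator` class and the event names are illustrative, not drawn from any real DES package:

```python
import heapq

class Simulator:
    """Minimal discrete-event simulator.

    Events live in a priority queue ordered by simulated time;
    processing an event may schedule further events, and simulated
    time jumps directly from one event to the next (no real-time
    pacing, as noted above).
    """
    def __init__(self):
        self.queue = []   # heap of (time, event_name) pairs
        self.now = 0.0
        self.log = []     # processed events, for later inspection

    def schedule(self, delay, name):
        heapq.heappush(self.queue, (self.now + delay, name))

    def run(self, until=float("inf")):
        while self.queue and self.queue[0][0] <= until:
            self.now, name = heapq.heappop(self.queue)
            self.log.append((self.now, name))
            if name == "arrival":            # example rule: each arrival
                self.schedule(2.0, "departure")  # triggers a later departure

sim = Simulator()
sim.schedule(1.0, "arrival")
sim.schedule(3.0, "arrival")
sim.run()
```

The `log` attribute reflects the point made above: the value of a DES usually lies in examining the recorded sequence of events, not in when they execute on the wall clock.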
A continuous dynamic simulation performs numerical solution of differential-algebraic equations or differential equations (either partial or ordinary). Periodically, the simulation program solves all the equations and uses the numbers to change the state and output of the simulation. Applications include flight simulators, construction and management simulation games, chemical process modeling, and simulations of electrical circuits. Originally, these kinds of simulations were actually implemented on analog computers, where the differential equations could be represented directly by various electrical components such as op-amps. By the late 1980s, however, most "analog" simulations were run on conventional digital computers that emulate the behavior of an analog computer.
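The periodic solve-and-update loop can be sketched with explicit Euler integration of a single ODE, dy/dt = -y, whose exact solution is e^(-t); the step size and interval are assumptions for the example:

```python
import math

def simulate_decay(y0=1.0, dt=0.001, t_end=1.0):
    """Continuous dynamic simulation of dy/dt = -y by explicit Euler.

    At each step the program "solves the equation" (evaluates the
    derivative) and uses the number to advance the state, exactly the
    cycle described for continuous simulators above.
    """
    y, t = y0, 0.0
    while t < t_end - 1e-12:
        y += dt * (-y)   # Euler update: y_new = y + dt * f(y)
        t += dt
    return y

approx = simulate_decay()
exact = math.exp(-1.0)   # analytic solution at t = 1
```

Real continuous simulators use more sophisticated integrators (and handle coupled differential-algebraic systems), but the state-advance loop has the same shape.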
A special type of discrete simulation that does not rely on a model with an underlying equation, but can nonetheless be represented formally, is agent-based simulation. In agent-based simulation, the individual entities (such as molecules, cells, trees or consumers) in the model are represented directly (rather than by their density or concentration) and possess an internal state and set of behaviors or rules that determine how the agent's state is updated from one time-step to the next.
Distributed models run on a network of interconnected computers, possibly through the Internet. Simulations dispersed across multiple host computers like this are often referred to as "distributed simulations". There are several standards for distributed simulation, including Aggregate Level Simulation Protocol (ALSP), Distributed Interactive Simulation (DIS), the High Level Architecture (simulation) (HLA) and the Test and Training Enabling Architecture (TENA).
Visualization
Formerly, the output data from a computer simulation was sometimes presented in a table or a matrix showing how data were affected by numerous changes in the simulation parameters. The use of the matrix format was related to traditional use of the matrix concept in mathematical models. However, psychologists and others noted that humans could quickly perceive trends by looking at graphs or even moving-images or motion-pictures generated from the data, as displayed by computer-generated-imagery (CGI) animation. Although observers could not necessarily read out numbers or quote math formulas, from observing a moving weather chart they might be able to predict events (and "see that rain was headed their way") much faster than by scanning tables of rain-cloud coordinates. Such intense graphical displays, which transcended the world of numbers and formulae, sometimes also led to output that lacked a coordinate grid or omitted timestamps, as if straying too far from numeric data displays. Today, weather forecasting models tend to balance the view of moving rain/snow clouds against a map that uses numeric coordinates and numeric timestamps of events.
Similarly, CGI computer simulations of CAT scans can simulate how a tumor might shrink or change during an extended period of medical treatment, presenting the passage of time as a spinning view of the visible human head, as the tumor changes.
Other applications of CGI computer simulations are being developed to graphically display large amounts of data, in motion, as changes occur during a simulation run.
In science
Generic examples of types of computer simulations in science, which are derived from an underlying mathematical description:
a numerical simulation of differential equations that cannot be solved analytically, theories that involve continuous systems such as phenomena in physical cosmology, fluid dynamics (e.g., climate models, roadway noise models, roadway air dispersion models), continuum mechanics and chemical kinetics fall into this category.
a stochastic simulation, typically used for discrete systems where events occur probabilistically and which cannot be described directly with differential equations (this is a discrete simulation in the above sense). Phenomena in this category include genetic drift, biochemical or gene regulatory networks with small numbers of molecules. (see also: Monte Carlo method).
multiparticle simulation of the response of nanomaterials at multiple scales to an applied force for the purpose of modeling their thermoelastic and thermodynamic properties. Techniques used for such simulations are Molecular dynamics, Molecular mechanics, Monte Carlo method, and Multiscale Green's function.
Specific examples of computer simulations include:
statistical simulations based upon an agglomeration of a large number of input profiles, such as the forecasting of equilibrium temperature of receiving waters, allowing the gamut of meteorological data to be input for a specific locale. This technique was developed for thermal pollution forecasting.
agent based simulation has been used effectively in ecology, where it is often called "individual based modeling" and is used in situations for which individual variability in the agents cannot be neglected, such as population dynamics of salmon and trout (most purely mathematical models assume all trout behave identically).
time stepped dynamic model. In hydrology there are several such hydrology transport models such as the SWMM and DSSAM Models developed by the U.S. Environmental Protection Agency for river water quality forecasting.
computer simulations have also been used to formally model theories of human cognition and performance, e.g., ACT-R.
computer simulation using molecular modeling for drug discovery.
computer simulation to model viral infection in mammalian cells.
computer simulation for studying the selective sensitivity of bonds by mechanochemistry during grinding of organic molecules.
Computational fluid dynamics simulations are used to simulate the behaviour of flowing air, water and other fluids. One-, two- and three-dimensional models are used. A one-dimensional model might simulate the effects of water hammer in a pipe. A two-dimensional model might be used to simulate the drag forces on the cross-section of an aeroplane wing. A three-dimensional simulation might estimate the heating and cooling requirements of a large building.
An understanding of statistical thermodynamic molecular theory is fundamental to the appreciation of molecular solutions. Development of the Potential Distribution Theorem (PDT) allows this complex subject to be simplified to down-to-earth presentations of molecular theory.
Notable, and sometimes controversial, computer simulations used in science include: Donella Meadows' World3 used in the Limits to Growth, James Lovelock's Daisyworld and Thomas Ray's Tierra.
In social sciences, computer simulation is an integral component of the five angles of analysis fostered by the data percolation methodology, which also includes qualitative and quantitative methods, reviews of the literature (including scholarly literature), and interviews with experts, and which forms an extension of data triangulation. As with any other scientific method, replication is an important part of computational modeling.
In practical contexts
Computer simulations are used in a wide variety of practical contexts, such as:
analysis of air pollutant dispersion using atmospheric dispersion modeling
as a possible humane alternative to live animal testing, with respect to animal rights
design of complex systems such as aircraft and also logistics systems.
design of noise barriers to effect roadway noise mitigation
modeling of application performance
flight simulators to train pilots
weather forecasting
forecasting of risk
simulation of electrical circuits
Power system simulation
simulation of other computers (emulation)
forecasting of prices on financial markets (for example Adaptive Modeler)
behavior of structures (such as buildings and industrial parts) under stress and other conditions
design of industrial processes, such as chemical processing plants
strategic management and organizational studies
reservoir simulation in petroleum engineering to model the subsurface reservoir
process engineering simulation tools.
robot simulators for the design of robots and robot control algorithms
urban simulation models that simulate dynamic patterns of urban development and responses to urban land use and transportation policies.
traffic engineering to plan or redesign parts of the street network, from single junctions through entire cities to a national highway network, as well as transportation system planning, design and operations. See a more detailed article on Simulation in Transportation.
modeling car crashes to test safety mechanisms in new vehicle models.
crop-soil systems in agriculture, via dedicated software frameworks (e.g. BioMA, OMS3, APSIM)
The reliability of, and the trust people put in, computer simulations depend on the validity of the simulation model; therefore, verification and validation are of crucial importance in the development of computer simulations. Another important aspect is reproducibility of the results, meaning that a simulation model should not provide a different answer for each execution. Although this might seem obvious, it is a special point of attention in stochastic simulations, where the random numbers should in fact be pseudo-random numbers generated from a fixed seed. An exception to reproducibility are human-in-the-loop simulations such as flight simulations and computer games. Here a human is part of the simulation and thus influences the outcome in a way that is hard, if not impossible, to reproduce exactly.
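The seeding discipline that makes stochastic runs reproducible can be shown in a few lines; the function and seed values are illustrative:

```python
import random

def noisy_measurement(seed):
    """A toy stochastic simulation output.

    Using a local generator with a fixed seed means every execution
    draws the identical pseudo-random stream, so the "simulation"
    returns the same answer each time it is run.
    """
    rng = random.Random(seed)
    return sum(rng.gauss(0.0, 1.0) for _ in range(100))

run1 = noisy_measurement(seed=42)
run2 = noisy_measurement(seed=42)  # identical to run1
```

Logging the seed alongside the results is what turns a stochastic experiment into a reproducible one.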
Vehicle manufacturers make use of computer simulation to test safety features in new designs. By building a copy of the car in a physics simulation environment, they can save the hundreds of thousands of dollars that would otherwise be required to build and test a unique prototype. Engineers can step through the simulation milliseconds at a time to determine the exact stresses being put upon each section of the prototype.
Computer graphics can be used to display the results of a computer simulation. Animations can be used to experience a simulation in real-time, e.g., in training simulations. In some cases animations may also be useful in faster than real-time or even slower than real-time modes. For example, faster than real-time animations can be useful in visualizing the buildup of queues in the simulation of humans evacuating a building. Furthermore, simulation results are often aggregated into static images using various ways of scientific visualization.
In debugging, simulating a program execution under test (rather than executing natively) can detect far more errors than the hardware itself can detect and, at the same time, log useful debugging information such as instruction trace, memory alterations and instruction counts. This technique can also detect buffer overflow and similar "hard to detect" errors as well as produce performance information and tuning data.
Pitfalls
Although sometimes ignored in computer simulations, it is very important to perform a sensitivity analysis to ensure that the accuracy of the results is properly understood. For example, the probabilistic risk analysis of factors determining the success of an oilfield exploration program involves combining samples from a variety of statistical distributions using the Monte Carlo method. If, for instance, one of the key parameters (e.g., the net ratio of oil-bearing strata) is known to only one significant figure, then the result of the simulation might not be more precise than one significant figure, although it might (misleadingly) be presented as having four significant figures.
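A toy version of this significant-figures pitfall (all numbers are invented for illustration): if the net ratio is only known to be roughly 0.3, re-running the simulation across its plausible one-significant-figure range reveals an output spread far larger than the extra printed digits suggest.

```python
import random

def simulate_yield(net_ratio, seed=0, n=10_000):
    """Toy Monte Carlo model whose output scales with an uncertain input.

    The other factors are sampled from an assumed uniform distribution;
    net_ratio multiplies the whole result, so its uncertainty passes
    straight through to the answer.
    """
    rng = random.Random(seed)
    return net_ratio * sum(rng.uniform(0.9, 1.1) for _ in range(n)) / n

low = simulate_yield(0.25)   # lower edge of "about 0.3 to one figure"
high = simulate_yield(0.35)  # upper edge of the same range
spread = high - low          # the honest uncertainty of the result
```

Reporting `simulate_yield(0.3)` to four decimal places would be exactly the misleading precision the paragraph above warns against; the sensitivity sweep shows only one figure is meaningful.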
Environmental protection
https://en.wikipedia.org/wiki/Environmental%20protection

Environmental protection, or environment protection, is the practice of protecting the natural environment by individuals, groups and governments. Its objectives are to conserve natural resources and the existing natural environment and, where it is possible, to repair damage and reverse trends.
Due to the pressures of overconsumption, population growth and technology, the biophysical environment is being degraded, sometimes permanently. This has been recognized, and governments have begun placing restraints on activities that cause environmental degradation. Since the 1960s, environmental movements have created more awareness of the multiple environmental problems. There is disagreement on the extent of the environmental impact of human activity, so protection measures are occasionally debated.
Approaches to environmental protection
Voluntary environmental agreements
In industrial countries, voluntary environmental agreements often provide a platform for companies to be recognized for moving beyond minimum regulatory standards, and thus support the development of best environmental practice. For instance, in India, the Environment Improvement Trust (EIT) has been working for environmental and forest protection since 1998. In developing regions, such as Latin America, these agreements are more commonly used to remedy significant levels of non-compliance with mandatory regulation.
Ecosystems approach
An ecosystems approach to resource management and environmental protection aims to consider the complex interrelationships of an entire ecosystem in decision-making, rather than simply responding to specific issues and challenges. Ideally, decision-making under such an approach would be a collaborative process of planning that involves a broad range of stakeholders across all relevant governmental departments, as well as industry representatives, environmental groups, and community representatives. This approach ideally supports a better exchange of information, the development of conflict-resolution strategies, and improved regional conservation. Religions also play an important role in the conservation of the environment: for example, the Catholic Church's Compendium on its social teaching states that "environmental protection cannot be assured solely on the basis of financial calculations of costs and benefits. The environment is one of those goods that cannot be adequately safeguarded or promoted by market forces."
International environmental agreements
Many of the earth's resources are especially vulnerable because they are influenced by human impacts across different countries. As a result of this, many attempts are made by countries to develop agreements that are signed by multiple governments to prevent damage or manage the impacts of human activity on natural resources. This can include agreements that impact factors such as climate, oceans, rivers and air pollution. These international environmental agreements are sometimes legally binding documents that have legal implications when they are not followed and, at other times, are more agreements in principle or are for use as codes of conduct. These agreements have a long history with some multinational agreements being in place from as early as 1910 in Europe, America and Africa.
Many of the international technical agencies formed after 1945 addressed environmental themes. By the late 1960s, a growing environmental movement called for coordinated and institutionalized international cooperation. The landmark United Nations Conference on the Human Environment was held in Stockholm in 1972, establishing the concept of a right to a healthy environment. It was followed by the creation of the United Nations Environment Programme later that year. Some of the most well-known international agreements include the Kyoto Protocol of 1997 and the Paris Agreement of 2015.
On 8 October 2021, the UN Human Rights Council passed a resolution recognizing access to a healthy and sustainable environment as a universal right. In the resolution 48/13, the Council called on States around the world to work together, and with other partners, to implement the newly recognized right.
On 28 July 2022, the United Nations General Assembly voted to declare the ability to live in "a clean, healthy and sustainable environment" a universal human right.
Government
Discussion concerning environmental protection often focuses on the role of government, legislation, and law enforcement. However, in its broadest sense, environmental protection may be seen to be the responsibility of all the people and not simply that of government. Decisions that impact the environment will ideally involve a broad range of stakeholders including industry, indigenous groups, environmental group and community representatives. Gradually, environmental decision-making processes are evolving to reflect this broad base of stakeholders and are becoming more collaborative in many countries.
Africa
Tanzania
Tanzania is acknowledged as having some of the greatest biodiversity of any African country. Almost 40% of the land has been set aside in a network of protected areas, including several national parks. Concerns for the natural environment include damage to ecosystems and loss of habitat resulting from population growth, expansion of subsistence agriculture, pollution, timber extraction and the significant use of timber as fuel.
Environmental protection in Tanzania began during the German occupation of East Africa (1884–1919), when colonial conservation laws for the protection of game and forests were enacted, placing restrictions upon traditional indigenous activities such as hunting, firewood collecting, and cattle grazing. In 1948, Serengeti was officially established as the first national park for wild cats in East Africa. Since 1983, there has been a more broad-reaching effort to manage environmental issues at a national level, through the establishment of the National Environment Management Council (NEMC) and the development of an environmental act.
The Division of Environment is the main government body that oversees protection. It does this through the formulation of policy, coordinating and monitoring environmental issues, environmental planning, and policy-oriented environmental research. The National Environment Management Council (NEMC) is an institution that was initiated when the National Environment Management Act was first introduced in 1983. The council's role is to advise governments and the international community on a range of environmental issues. The NEMC has the following purposes: provide technical advice; coordinate technical activities; develop enforcement guidelines and procedures; assess, monitor and evaluate activities that impact the environment; promote and assist environmental information and communication; and seek advancement of scientific knowledge.
The National Environment Policy of 1997 acts as a framework for environmental decision making in Tanzania. The policy objectives are to achieve the following:
Ensure sustainable and equitable use of resources without degrading the environment or risking health or safety.
Prevent and control degradation of land, water, vegetation and air.
Conserve and enhance natural and man-made heritage, including biological diversity of unique ecosystems.
Improve condition and productivity of degraded areas.
Raise awareness and understanding of the link between environment and development.
Promote individual and community participation.
Promote international cooperation.
Use ecofriendly resources.
Tanzania is a signatory to a significant number of international conventions, including the Rio Declaration on Environment and Development (1992) and the Convention on Biological Diversity (1996). The Environmental Management Act, 2004, is the first comprehensive legal and institutional framework to guide environmental-management decisions. The policy tools that form part of the act include environmental-impact assessments, strategic environmental assessments, and taxation on pollution for specific industries and products. The effectiveness of this act will only become clear over time as concerns regarding its implementation become apparent; historically, there has been a lack of capacity to enforce environmental laws and a lack of working tools to bring environmental-protection objectives into practice.
Asia
China
Formal environmental protection in China was first stimulated by the 1972 United Nations Conference on the Human Environment held in Stockholm, Sweden. Following this, China began establishing environmental protection agencies and putting controls on some of its industrial waste. China was one of the first developing countries to implement a sustainable development strategy. In 1983 the State Council announced that environmental protection would be one of China's basic national policies, and in 1984 the National Environmental Protection Agency (NEPA) was established. Following severe flooding of the Yangtze River basin in 1998, NEPA was upgraded to the State Environmental Protection Agency (SEPA), meaning that environmental protection was now being implemented at a ministerial level. In 2008, SEPA became known by its current name of Ministry of Environmental Protection of the People's Republic of China (MEP).
Environmental pollution and ecological degradation have resulted in economic losses for China. In 1995, economic losses (mainly from air pollution) were calculated at 7.7% of China's GDP. This grew to 10.3% by 2002, when the economic loss from water pollution (6.1%) began to exceed that caused by air pollution. China has been one of the top-performing countries in terms of GDP growth (9.64% in the past ten years). However, this high economic growth has put immense pressure on its environment, and the environmental challenges that China faces are greater than those of most countries. In 2021 it was noted that China was the world's largest greenhouse gas emitter, while also facing additional environmental challenges including illegal logging, wildlife trafficking, plastic waste, ocean pollution, environment-related mismanagement, unregulated fishing, and the consequences associated with being the world's largest mercury polluter. All these factors contribute to climate change and habitat loss. In 2022 China was ranked 160th out of 180 countries on the Environmental Performance Index due to poor air quality and high GHG emissions.
Ecological and environmental degradation in China have health-related impacts; for example, if current pollution levels continue, Chinese citizens will lose an estimated 3.6 billion life years in total. Another issue is that non-communicable diseases, which cause at least 80% of China's 10.3 million annual deaths, are worsened by air pollution.
China has taken initiatives to increase its protection of the environment and combat environmental degradation:
China's investment in renewable energy grew 18% in 2007 to $15.6 billion, accounting for ~10% of the global investment in this area;
In 2008, spending on the environment was 1.49% of GDP, up 3.4 times from 2000;
The discharge of COD (chemical oxygen demand) and SO2 (sulfur dioxide) decreased by 6.61% and 8.95% respectively in 2008 compared with 2005;
China's protected nature reserves have increased substantially. In 1978 there were only 34 compared with 2,538 in 2010. The protected nature reserve system now occupies 15.5% of the country; this is higher than the world average.
Rapid growth in GDP has been China's main goal during the past three decades with a dominant development model of inefficient resource use and high pollution to achieve high GDP. For China to develop sustainably, environmental protection should be treated as an integral part of its economic policies.
Quote from Shengxian Zhou, head of MEP (2009): "Good economic policy is good environmental policy, and the nature of the environmental problem is the economic structure, production form, and development model."
Since around 2010 China appears to be placing a greater emphasis on environmental and ecological protection. For example, former General Secretary Hu Jintao's report at the 2012 Party Congress added a section focusing on party policy on ecological issues.
Xi Jinping's report at the 19th CPC National Congress in 2017 noted recent progress in ecological and environmental conservation and restoration, the importance of ecologically sustainable development and global ecological security, and the need to provide ecological goods to meet people's growing demands. Most importantly, Xi Jinping suggested clearly identifiable methods to meet the ecological demands of the country. Some of the solutions he notes are the development and facilitation of ecological corridors, biodiversity protection networks, redlines for protecting ecosystems, and market-based mechanisms for ecological compensation, in addition to afforestation, greater crop rotation, recycling, waste reduction, stricter pollution standards, and greener production and technology. The report at the 19th CPC National Congress is not simply Xi Jinping's personal thinking; it is the product of a long process of compromise and negotiation among competing party officials and leaders.
Additionally, the Third Plenum of the CCP in 2013 included a manifesto that placed extreme emphasis on reforming management of the environment, promising to create greater transparency of those polluting, and placing environmental criteria above GDP growth for local official evaluations.
Reform has not come cheap for China. In 2016, it was noted that in response to pollution and oversupply, China laid off around six million workers in state-owned enterprises and spent $23 billion to cover layoffs specifically for coal and steel companies between 2016 and 2019. While expensive, other benefits of environmental protection have been noticed beyond impacting citizens' health. For example, in the long run, environmental protection has been found to generally improve job quality of migrant workers by reducing their work intensity, while increasing social security and job quality.
Different local governments in China implement different approaches to ecological protection, sometimes with negative consequences for citizens. For example, a prefecture in Shanxi province banned coal-burning by villagers, with potential detention or steep fines for violations. Although the government provided free gas heaters, villagers often could not afford to run them. In Wuhan, automated surveillance video is used to catch illegal fishing, and in some cities failing to recycle results in negative social-credit points. It is unclear in some of these instances whether citizens have any routes for recourse.
Reporting in 2023 found that the Chinese Communist Party's recent war on pollution has already brought substantial and measurable impacts: China's particulate pollution levels have dropped 42% from 2013 levels, increasing citizens' average life expectancy by an estimated 2.2 years.
India
The Constitution of India has a number of provisions demarcating the responsibility of the Central and State governments towards environmental protection. The state's responsibility with regard to environmental protection has been laid down under Article 48-A of the constitution, which states that "The states shall endeavor to protect and improve the environment and to safeguard the forest and wildlife of the country".
Environmental protection has been made a fundamental duty of every citizen of India under Article 51-A (g) of the constitution which says "It shall be the duty of every citizen of India to protect and improve the natural environment including forests, lakes, rivers, and wildlife and to have compassion for living creatures".
Article 21 of the constitution is a fundamental right, which states that "No person shall be deprived of his life or personal liberty except according to the procedure established by law".
Middle East
Middle Eastern countries have become part of joint Islamic environmental action, which was initiated in 2002 in Jeddah. Under the Islamic Educational, Scientific and Cultural Organization, member states attend the Islamic Environment Ministers Conference every two years, focusing on the importance of environmental protection and sustainable development. Arab countries are also awarded for the best environmental management in the Islamic world.
In August 2019, the Sultanate of Oman won the award for 2018–19 in Saudi Arabia, citing its project "Verifying the Age and Growth of Spotted Small Spots in the Northwest Coast of the Sea of Oman".
Russia
In Russia, environmental protection is considered an integral part of national safety. The Federal Ministry of Natural Resources and Ecology is the authorized state body tasked with managing environmental protection. However, numerous environmental issues remain in Russia.
Europe
European Union
Environmental protection became an important task for the institutions of the European Community after ratification of the Maastricht Treaty on European Union by all member states. The EU is active in the field of environmental policy, issuing directives such as those on environmental impact assessment and on citizens' access to environmental information in the member states.
Ireland
The Environmental Protection Agency, Ireland (EPA) has a wide range of functions to protect the environment, with its primary responsibilities including:
Environmental licensing
Enforcement of environmental law
Environmental planning, education, and guidance
Monitoring, analyzing and reporting on the environment
Regulating Ireland's greenhouse gas emissions
Environmental research development
Strategic environmental assessment
Waste management
Radiological protection
Switzerland
Environmental protection in Switzerland is based mainly on measures to be taken against global warming. Pollution in Switzerland is caused mainly by vehicles and by littering by tourists.
Latin America
The United Nations Environment Programme (UNEP) has identified 17 megadiverse countries. The list includes six Latin American countries: Brazil, Colombia, Ecuador, Mexico, Peru and Venezuela. Mexico and Brazil stand out among the rest because they have the largest area, population and number of species. These countries represent a major concern for environmental protection because they have high rates of deforestation, ecosystems loss, pollution, and population growth.
Brazil
Brazil has the largest amount of the world's tropical forests, 4,105,401 km2 (48.1% of Brazil), concentrated in the Amazon region. Brazil is home to vast biological diversity, first among the megadiverse countries of the world, having between 15% and 20% of the 1.5 million globally described species.
The organization in charge of environment protection is the Brazilian Ministry of the Environment (in Portuguese: Ministério do Meio Ambiente, MMA). It was first created in the year 1973 with the name Special Secretariat for the Environment (Secretaria Especial de Meio Ambiente), changing names several times, and adopting the final name in the year 1999. The Ministry is responsible for addressing the following issues:
A national policy for the environment and for water resources;
A policy for the preservation, conservation and sustainable use of ecosystems, biodiversity, and forests;
Proposing strategies, mechanisms, economic and social instruments for improving environmental quality, and sustainable use of natural resources;
Policies for integrating production and the environment;
Environmental policies and programs for the Legal Amazon;
Ecological and economic territorial zoning.
In 2011, protected areas of the Amazon covered 2,197,485 km2 (an area larger than Greenland), with conservation units, like national parks, accounting for just over half (50.6%) and indigenous territories representing the remaining 49.4%.
Mexico
With over 200,000 different species, Mexico is home to 10–12% of the world's biodiversity, ranking first in reptile biodiversity and second in mammals—one estimate indicates that over 50% of all animal and plant species live in Mexico.
The history of environmental policy in Mexico started in the 1940s with the enactment of the Law of Conservation of Soil and Water (in Spanish: Ley de Conservación de Suelo y Agua). Three decades later, at the beginning of the 1970s, the Law to Prevent and Control Environmental Pollution was created (Ley para Prevenir y Controlar la Contaminación Ambiental).
The year 1972 saw the first direct response from the federal government to imminent health effects from environmental issues: it established the administrative organization of the Undersecretariat for the Improvement of the Environment (Subsecretaría para el Mejoramiento del Ambiente) within the Department of Health and Welfare.
The Secretariat of Environment and Natural Resources (Secretaría del Medio Ambiente y Recursos Naturales, SEMARNAT) is Mexico's environment ministry. The Ministry is responsible for addressing the following issues:
Promote the protection, restoration, and conservation of ecosystems, natural resources, goods, and environmental services and facilitate their use and sustainable development.
Develop and implement a national policy on natural resources
Promote environmental management within the national territory, in coordination with all levels of government and the private sector.
Evaluate and issue determinations on environmental impact statements for development projects, to prevent ecological damage
Implement national policies on climate change and protection of the ozone layer.
Direct work and studies on national meteorological, climatological, hydrological, and geohydrological systems, and participate in international conventions on these subjects.
Regulate and monitor the conservation of waterways
In November 2000 there were 127 protected areas; currently there are 174, covering an area of 25,384,818 hectares, increasing federally protected areas from 8.6% to 12.85% of its land area.
Oceania
Australia
In 2008, there were 98,487,116 ha of terrestrial protected areas, covering 12.8% of Australia's land area. The 2002 figures of 10.1% of terrestrial area and 64,615,554 ha of protected marine area were found to poorly represent about half of Australia's 85 bioregions.
Environmental protection in Australia could be seen as starting with the formation of the first national park, Royal National Park, in 1879. More progressive environmental protection had its start in the 1960s and 1970s with major international programs such as the United Nations Conference on the Human Environment in 1972, the Environment Committee of the OECD in 1970, and the United Nations Environment Programme of 1972. These events laid the foundations by increasing public awareness and support for regulation. State environmental legislation was irregular and deficient until the Australian Environment Council (AEC) and Council of Nature Conservation Ministers (CONCOM) were established in 1972 and 1974, creating a forum to assist in coordinating environmental and conservation policies between states and neighbouring countries. These councils have since been replaced by the Australian and New Zealand Environment and Conservation Council (ANZECC) in 1991 and finally the Environment Protection and Heritage Council (EPHC) in 2001.
At a national level, the Environment Protection and Biodiversity Conservation Act 1999 is the primary environmental protection legislation for the Commonwealth of Australia. It concerns matters of national and international environmental significance regarding flora, fauna, ecological communities and cultural heritage. It also has jurisdiction over any activity conducted by the Commonwealth, or affecting it, that has significant environmental impact.
The act covers eight main areas:
National Heritage Sites
World Heritage Sites
Ramsar wetlands
Nationally endangered or threatened species and ecological communities
Nuclear activities and actions
Great Barrier Reef Marine Park
Migratory species
Commonwealth marine areas
There are several Commonwealth protected lands due to partnerships with traditional native owners, such as Kakadu National Park, extraordinary biodiversity such as Christmas Island National Park, or managed cooperatively due to cross-state location, such as the Australian Alps National Parks and Reserves.
At a state level, the bulk of environmental protection issues are left to the responsibility of the state or territory. Each state in Australia has its own environmental protection legislation and corresponding agencies. Their jurisdiction is similar and covers point source pollution, such as from industry or commercial activities, land/water use, and waste management. Most protected lands are managed by states and territories with state legislative acts creating different degrees and definitions of protected areas such as wilderness, national land and marine parks, state forests, and conservation areas. States also create regulation to limit and provide general protection from air, water, and sound pollution.
At a local level, each city or regional council has responsibility over issues not covered by state or national legislation. This includes non-point source, or diffuse pollution, such as sediment pollution from construction sites.
Australia ranks second on the UN 2010 Human Development Index and has one of the lowest debt-to-GDP ratios among developed economies. This could be seen as coming at the cost of the environment, with Australia being the world leader in coal exportation and in species extinctions. Some have been motivated to proclaim that it is Australia's responsibility to set an example of environmental reform for the rest of the world to follow.
New Zealand
At a national level, the Ministry for the Environment is responsible for environmental policy and the Department of Conservation addresses conservation issues. At a regional level the regional councils administer the legislation and address regional environmental issues.
United States
Since 1970, the United States Environmental Protection Agency (EPA) has been working to protect the environment and human health.
The Environmental Protection Agency (EPA) is an independent executive agency of the United States federal government tasked with environmental protection matters.
All US states have their own state-level departments of environmental protection, which may issue regulations more stringent than the federal ones.
In January 2010, EPA Administrator Lisa P. Jackson published via the official EPA blog her "Seven Priorities for EPA's Future", which were (in the order originally listed):
Taking action on climate change
Improving air quality
Assuring the safety of chemicals
Cleaning up [US] communities
Protecting America's waters
Expanding the conversation on environmentalism and working for environmental justice
Building strong state and tribal partnerships
It is unclear whether these still represent the agency's active priorities, as Jackson departed in February 2013 and the page has not been updated in the interim.
In literature
There are numerous works of literature that contain the themes of environmental protection, but some have been fundamental to its evolution. Several pieces, such as A Sand County Almanac by Aldo Leopold, "The Tragedy of the Commons" by Garrett Hardin, and Silent Spring by Rachel Carson, have become classics due to their far-reaching influences. The conservationist and Nobel laureate Wangari Muta Maathai devoted her 2010 book Replenishing the Earth to the Green Belt Movement and the vital importance of trees in protecting the environment.
The subject of environmental protection is present in fiction as well as non-fictional literature. Books such as Antarctica and Blockade have environmental protection as subjects whereas The Lorax has become a popular metaphor for environmental protection. "The Limits of Trooghaft" by Desmond Stewart is a short story that provides insight into human attitudes towards animals. Another book called The Martian Chronicles by Ray Bradbury investigates issues such as bombs, wars, government control, and what effects these can have on the environment.
Semi-automatic firearm
A semi-automatic firearm, also called a self-loading or autoloading firearm (fully automatic and selective-fire firearms are also variations on self-loading firearms), is a repeating firearm whose action mechanism automatically loads a following round of cartridge into the chamber and prepares it for subsequent firing, but requires the shooter to manually actuate the trigger in order to discharge each shot. Typically, the weapon's action uses the excess energy released during the preceding shot (in the form of recoil or of high-pressure gas expanding within the bore) to unlock and move the bolt, extract and eject the spent cartridge case from the chamber, re-cock the firing mechanism, and load a new cartridge into the firing chamber, all without input from the user. To fire again, however, the user must release the trigger and allow it to "reset" before pulling it again to fire the next round. As a result, each trigger pull discharges only a single round from a semi-automatic weapon, as opposed to a fully automatic weapon, which shoots continuously as long as ammunition remains and the trigger is kept depressed.
Ferdinand Ritter von Mannlicher produced the first successful design for a semi-automatic rifle in 1885, and by the early 20th century, many manufacturers had introduced semi-automatic shotguns, rifles and pistols.
In military use, the semi-automatic M1911 handgun was adopted by the United States Army in 1911, and subsequently by many other nations. Semi-automatic rifles did not see widespread military adoption until just prior to World War II, the M1 Garand being a notable example. Modern service rifles such as the M4 carbine are often selective-fire, capable of semi-automatic and automatic or burst-fire operation. Civilian variants such as the AR-15 are generally semi-automatic only.
Early history (1885–1945)
The first successful design for a semi-automatic rifle is attributed to Austria-born gunsmith Ferdinand Ritter von Mannlicher, who unveiled the design in 1885. The Model 85 was followed by the equally innovative Mannlicher Models 91, 93 and 95 semi-automatic rifles. Although Mannlicher earned his reputation with his bolt-action rifle designs, he also produced a few semi-automatic pistols, including the Steyr Mannlicher M1894, which employed an unusual blow-forward action and held five rounds of 6.5mm ammunition that were fed into the M1894 by a stripper clip.
Semi-automatic shotgun
In 1902, American gunsmith John Moses Browning developed the first successful semi-automatic shotgun, the Browning Auto-5, which was first manufactured by Fabrique Nationale de Herstal and sold in America under the Browning name. The Auto-5 relied on long recoil operation; this design remained the dominant form in semi-automatic shotguns for approximately 50 years. Production of the Auto-5 ended in 1998.
Blowback semi-automatic
In 1903 and 1905, the Winchester Repeating Arms Company introduced the first semi-automatic rimfire and centerfire rifles designed especially for the civilian market. The Winchester Model 1903 and Winchester Model 1905 operated on the principle of blowback in order to function semi-automatically. Designed entirely by T. C. Johnson, the Model 1903 achieved commercial success and continued to be manufactured until 1932 when the Winchester Model 63 replaced it.
By the early 20th century, several manufacturers had introduced semi-automatic .22 sporting rifles, including Winchester, Remington, Fabrique Nationale and Savage Arms, all using the direct blow-back system of operation. Winchester introduced a medium caliber semi-automatic sporting rifle, the Model 1907 as an upgrade to the Model 1905, utilizing a blowback system of operation, in calibers such as .351 Winchester. Both the Models of 1905 and 1907 saw limited military and police use.
Notable early semi-automatic rifles
In 1906, Remington Arms introduced the Remington Auto-loading Repeating Rifle. Remington advertised this rifle, renamed the "Model 8" in 1911, as a sporting rifle. This is a locked-breech, long recoil action designed by John Browning. The rifle was offered in .25, .30, .32, and .35 caliber models, and gained popularity among civilians as well as some law enforcement officials who appreciated the combination of a semi-automatic action and relatively powerful rifle cartridges. The Model 81 superseded the Model 8 in 1936 and was offered in .300 Savage as well as the original Remington calibers.
The first semi-automatic rifle adopted and widely issued by a major military power (France) was the Fusil Automatique Modele 1917. This is a locked-breech, gas-operated action that is very similar in its mechanical principles to the future M1 Garand in the United States. The M1917 was fielded during the latter stages of World War I but it did not receive a favorable reception. However, its shortened and improved version, the Model 1918, was much more favourably received during the Moroccan Rif War from 1920 to 1926. The Lebel bolt-action rifle remained the standard French infantry rifle until replaced in 1936 by the MAS-36 despite the various semi-automatic rifles designed between 1918 and 1935.
Other nations experimented with self-loading rifles between the two World Wars, including the United Kingdom, which had intended to replace the bolt-action Lee–Enfield with a self-loader, possibly chambered for sub-caliber ammunition, but discarded that plan as the Second World War became imminent and the emphasis shifted from replacing every rifle with a new design to speeding up re-armament with existing weapons. The Soviet Union and Nazi Germany would both issue successful self-loading and selective-fire rifles on a large scale during the course of the war, but not in sufficient numbers to replace their standard bolt-action rifles.
Notable gas-operated rifles
In 1937, the American M1 Garand was the first semi-automatic rifle to replace its nation's bolt-action rifle as the standard-issue infantry weapon. The gas-operated M1 Garand was developed by Canadian-born John Garand for the U.S. government at the Springfield Armory in Springfield, Massachusetts. After years of research and testing, the first production model of the M1 Garand was unveiled in 1937. During World War II, the M1 Garand gave American infantrymen an advantage over their opponents, most of whom were issued slower firing bolt-action rifles.
The Soviet AVS-36, SVT-38 and SVT-40 (originally intended to replace the Mosin-Nagant as their standard service rifle), as well as the German Gewehr 43, were semi-automatic gas-operated rifles issued during World War II. In practice, they did not replace the bolt-action rifle as a standard infantry weapon.
Another gas-operated semi-automatic rifle developed toward the end of World War II was the SKS. Designed by Sergei Gavrilovich Simonov in 1945, it came equipped with a bayonet and could be loaded with ten rounds, using a stripper clip. However, the SKS was quickly replaced by the AK-47, produced at around the same time, but with a 30-round magazine, and select fire capability. The SKS was the first widely issued weapon to use the 7.62×39mm cartridge.
Types
There are semi-automatic pistols, rifles, and shotguns designed and made as semi-automatic only. Selective-fire firearms are capable of both full automatic and semi-automatic modes.
Semi-automatic refers to a firearm that uses the force of recoil or gas to eject the empty case and load a fresh cartridge into the firing chamber for the next shot and which allows repeat shots solely through the action of pulling the trigger. A double-action revolver also requires only a trigger pull for each round that is fired but is not considered semi-automatic since the manual action of pulling the trigger is what advances the cylinder, not the energy of the preceding shot.
Fully automatic compared to semi-automatic
The usage of the term automatic may vary according to context. Gun specialists point out that the word automatic is sometimes misunderstood to mean fully automatic fire when used to refer to a self-loading, semi-automatic firearm not capable of fully automatic fire. In this case, automatic refers to the loading mechanism, not the firing capability. To avoid confusion, it is common to refer to such firearms as an "autoloader" in reference to their loading mechanism.
The term "automatic pistol" almost exclusively refers to a semi-automatic (i.e. not fully automatic) pistol (fully automatic pistols are usually referred to as machine pistols). With handguns, the term "automatic" is commonly used to distinguish semi-automatic pistols from revolvers. The term "auto-loader" may also be used to describe a semi-automatic handgun. However, to avoid confusion, the term "automatic rifle" is generally, conventionally, and best restricted to a rifle capable of fully automatic fire. Both uses of the term "automatic" can be found; the exact meaning must be determined from context.
Auto-loading
The mechanism of semi-automatic (or autoloading) firearms is usually what is known as a closed-bolt firing system. In a closed-bolt system, a round must first be chambered manually before the weapon can fire. When the trigger is pulled, only the hammer and firing pin move, striking and firing the cartridge. The bolt then recoils far enough rearward to extract and load a new cartridge from the magazine into the firearm's chamber, ready to fire again once the trigger is pulled.
An open-bolt mechanism is a common characteristic of fully automatic firearms. With this system, pulling the trigger releases the bolt from a cocked, rearward position, pushing a cartridge from the magazine into the chamber, firing the gun. The bolt retracts to the rearward position, ready to strip the next cartridge from the magazine. The open-bolt system is often used in submachine guns and other weapons with a high rate of fire. It is rarely used in semi-automatic-only firearms, which can fire only one shot with each pull of the trigger. The closed-bolt system is generally more accurate, as the centre of gravity changes relatively little at the moment the trigger is pulled.
With fully automatic weapons, the open-bolt operation allows air to circulate, cooling the barrel. With semi-automatic firearms, the closed-bolt operation is preferred, as overheating is not as critical, and accuracy is preferred. Some select-fire military weapons use an open bolt in fully automatic mode and a closed bolt when semi-automatic is selected.
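The trigger-reset behavior described above can be sketched as a small state machine. This is an illustrative model, not drawn from any firearms reference: the `Firearm` class, its attribute names, and the cycle counts are all invented for the example; the only point it demonstrates is that semi-automatic fire yields one shot per trigger pull while automatic fire continues as long as the trigger is held and ammunition remains.

```python
# Toy state-machine model of semi-automatic vs. fully automatic trigger logic.
# All names here are illustrative, not from any real firearms API.

class Firearm:
    def __init__(self, rounds, full_auto=False):
        self.rounds = rounds          # cartridges remaining in the magazine
        self.full_auto = full_auto
        self.trigger_reset = True     # trigger must reset before the next shot

    def hold_trigger(self, cycles):
        """Return the number of shots fired while the trigger is held for `cycles` action cycles."""
        shots = 0
        for _ in range(cycles):
            if self.rounds == 0:
                break
            if self.full_auto or self.trigger_reset:
                self.rounds -= 1              # action self-loads the next round
                shots += 1
                self.trigger_reset = False    # consumed until the trigger is released
        return shots

    def release_trigger(self):
        self.trigger_reset = True

semi = Firearm(rounds=10)
auto = Firearm(rounds=10, full_auto=True)
print(semi.hold_trigger(5))  # 1: one shot per pull, however long the trigger is held
print(auto.hold_trigger(5))  # 5: fires every cycle while the trigger is held
semi.release_trigger()
print(semi.hold_trigger(5))  # 1: fires again only after the trigger has reset
```

The key line is the `self.full_auto or self.trigger_reset` check: a semi-automatic mechanism consumes the reset with each shot, so holding the trigger longer changes nothing.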
Legal status
Many jurisdictions regulate some or all semi-automatic firearms differently than other types.
Various types of semi-automatic weapons were restricted for civilian use in New Zealand after the 2019 Christchurch mosque shootings, in Australia after the 1996 Port Arthur massacre, and in Norway after the 2011 Utøya shooting. In the United States, the 1994–2004 Federal Assault Weapons Ban prohibited semi-automatic weapons with certain additional characteristics. As of 2023, several U.S. states still restrict similar types of semi-automatic weapons.
Examples
| Technology | Mechanisms_2 | null |
375826 | https://en.wikipedia.org/wiki/Mohorovi%C4%8Di%C4%87%20discontinuity | Mohorovičić discontinuity | The Mohorovičić discontinuity, usually called the Moho discontinuity, Moho boundary, or just the Moho, is the boundary between the crust and the mantle of Earth. It is defined by the distinct change in velocity of seismic waves as they pass through changing densities of rock.
The Moho lies almost entirely within the lithosphere (the hard outer layer of the Earth, including the crust). Only beneath mid-ocean ridges does it define the lithosphere–asthenosphere boundary (the depth at which the mantle becomes significantly ductile). The Mohorovičić discontinuity is below the ocean floor, and beneath typical continental crusts, with an average of .
Named after the pioneering Croatian seismologist Andrija Mohorovičić, the Moho separates both the oceanic crust and continental crust from the underlying mantle. The Mohorovičić discontinuity was first identified in 1909 by Mohorovičić, when he observed that seismograms from shallow-focus earthquakes had two sets of P-waves and S-waves, one set that followed a direct path near the Earth's surface and the other refracted by a high-velocity medium.
Nature and seismology
The Moho marks the transition in composition between the Earth's crust and the lithospheric mantle. Immediately above the Moho, the velocities of primary seismic waves (P-waves) are consistent with those through basalt (6.7–7.2 km/s), and below they are similar to those through peridotite or dunite (7.6–8.6 km/s). This increase of approximately 1 km/s corresponds to a distinct change in material as the waves pass through the Earth, and is commonly accepted as the lower limit of the Earth's crust. The Moho is characterized by a transition zone of up to 500 meters. Ancient Moho zones are exposed above-ground in numerous ophiolites around the world.
Beginning in the 1980s, geologists became aware that the Moho does not always coincide with the crust-mantle boundary defined by composition. Xenoliths (lower crust and upper mantle rock brought to the surface by volcanic eruptions) and seismic-reflection data showed that, away from continental cratons, the transition between crust and mantle is marked by basaltic intrusions and may be up to 20 km thick. The Moho may lie well below the crust-mantle boundary and care must be used in interpreting the structure of the crust from seismic data alone.
Serpentinization of mantle rock below slowly spreading mid-ocean ridges can also increase the depth to the Moho, since serpentinization lowers seismic wave velocities.
History
Croatian seismologist Andrija Mohorovičić is credited with discovering and defining the Moho. In 1909, he was examining data from a local earthquake in Zagreb when he observed two distinct sets of P-waves and S-waves propagating out from the focus of the earthquake. Mohorovičić knew that waves caused by earthquakes travel at velocities proportional to the density of the material carrying them. As a result of this information, he theorized that the second set of waves could only be caused by a sharp transition in density in the Earth's crust, which could account for such a dramatic change in wave velocity. Using velocity data from the earthquake, he was able to calculate the depth of the Moho to be approximately 54 km, which was supported by subsequent seismological studies.
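Mohorovičić's depth estimate can be illustrated with the standard two-layer refraction model: beyond a crossover distance x_c, the wave refracted along the high-velocity layer arrives before the direct wave, and the interface depth is h = (x_c/2)·sqrt((v2−v1)/(v2+v1)). A sketch with illustrative velocities (not Mohorovičić's actual data):

```python
import math

def moho_depth_from_crossover(x_cross_km, v_crust, v_mantle):
    """Two-layer refraction: depth to the velocity interface from the crossover
    distance at which direct and refracted (head-wave) arrivals coincide.
    h = (x_c / 2) * sqrt((v2 - v1) / (v2 + v1))."""
    return (x_cross_km / 2) * math.sqrt((v_mantle - v_crust) / (v_mantle + v_crust))

# Illustrative values: crustal P-wave ~6.3 km/s, mantle ~8.0 km/s,
# crossover observed ~300 km from the epicentre.
depth = moho_depth_from_crossover(300, 6.3, 8.0)
print(f"{depth:.1f} km")  # prints "51.7 km" -- comparable to Mohorovičić's ~54 km
```

The key point is that only travel times measured at the surface are needed: the two apparent velocities and the crossover distance together fix the depth of the discontinuity.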
The Moho has played a large role in the fields of geology and earth science for well over a century. By observing the Moho's refractive nature and how it affects the speed of P-waves, scientists were able to theorize about the earth's composition. These early studies gave rise to modern seismology.
In the early 1960s, Project Mohole was an attempt to drill to the Moho from deep-ocean regions. After initial success in establishing deep-ocean drilling, the project suffered from political and scientific opposition, mismanagement, and cost overruns, and it was cancelled in 1966.
Exploration
Reaching the discontinuity by drilling remains an important scientific objective. Soviet scientists at the Kola Superdeep Borehole pursued the goal from 1970 until 1992. They reached a depth of , the world's deepest hole, before abandoning the project. One proposal considers a rock-melting radionuclide-powered capsule with a heavy tungsten needle that can propel itself down to the Moho discontinuity and explore Earth's interior near it and in the upper mantle. The Japanese project Chikyu Hakken ("Earth Discovery") also aims to explore in this general area with the drilling ship, Chikyū, built for the Integrated Ocean Drilling Program (IODP).
Plans called for the drill-ship JOIDES Resolution to sail from Colombo in Sri Lanka in late 2015 and to head for the Atlantis Bank, a promising location in the southwestern Indian Ocean on the Southwest Indian Ridge, to attempt to drill an initial bore hole to a depth of approximately 1.5 kilometres.
The attempt did not even reach 1.3 km, but researchers hope to further their investigations at a later date.
| Physical sciences | Geology: General | Earth science |
13784601 | https://en.wikipedia.org/wiki/Service%20animal | Service animal | Service animals are working animals that have been trained to perform tasks that assist disabled people. Service animals may also be referred to as assistance animals or helper animals depending on the country and the animal's function. Dogs are the most common service animals, having assisted people since at least 1927.
Definitions of service animal vary. Laws and policies may define the term expansively, but they often do not include or specially accommodate emotional support animals, comfort animals, or therapy dogs.
Regulations regarding service animals vary by region. For example, in Japan, regulations outline standards of training and certification for service animals. In the United States, service animals are generally allowed in areas of public accommodation, even where pets are generally forbidden.
Definitions
A service animal is an animal that has been trained to assist a disabled person. The animal needs to be individually trained to do tasks that directly relate to the handler's disability, which goes beyond the ordinary training that a pet receives and the non-individualized training that a therapy dog receives.
The international assistance animal community has categorized three types of assistance animals:
Guide animals, which guide the blind;
Hearing animals, which signal the hearing impaired; and
Service animals, which do work for persons with disabilities other than blindness or deafness.
In the United States, the term service animal encompasses all three of the above types (guide dog, hearing animal, service dog). The Americans with Disabilities Act defines the term as "dogs that are individually trained to do work or perform tasks for people with disabilities". Additionally, the Air Carrier Access Act breaks down the term service animal into emotional support animals and other service animals. Airlines are permitted to require different and more extensive documentation for ESAs than for other service animals.
Role of a service animal
People who qualify for a service animal may have a range of physical and/or mental disabilities.
A guide animal is an animal specifically trained to assist a visually impaired person to navigate in public. These animals may be trained to open doors, recognize traffic signals, guide their owners safely across public streets, and navigate through crowds of people. A mobility animal may perform similar services for a person with physical disabilities, as well as assisting with balance or falling issues, or fetching dropped or needed items. Some of them are trained to pull wheelchairs.

Hearing animals are trained to assist hearing-impaired or deaf persons. These animals may be trained to respond to doorbells or a ringing phone or to tug their owners toward a person who is speaking to them.

Psychiatric animals can be trained to provide deep-pressure therapy by lying on top of a person who may be experiencing PTSD flashbacks, overstimulation, or acute anxiety. They may be trained to interrupt harmful behaviors (e.g., skin picking). Similarly, autism animals have been recently introduced to recognize and respond to the needs of people with autism spectrum disorder; some persons with ASD state that they are more comfortable interacting with animals than with human caregivers due to issues regarding eye contact, touch, and socialization.

Medical emergency animals can assist in medical emergencies and perform such services as clearing an area in the event of a seizure, fetching medication or other necessary items, or alerting others in the event of a medical episode; some may even be trained to call emergency services through use of a telephone with specially designed oversized buttons. Service animals may also be trained to alert persons to the presence of an allergen.
Difference from emotional support animals and pets
Service animals also provide companionship and emotional support for owners who might otherwise be isolated due to disability; however, providing companionship and emotional support is not a trained task that qualifies an animal as a service animal.
In the US, it is illegal to bring an animal to non-pet friendly places simply because it provides companionship or emotional support. Additionally, claiming that an emotional support animal or a pet is a service animal is illegal.
Limitations
Service animals should not be taken into every place, especially if there are bona fide safety issues. Some activities may be unsafe for the dog (e.g., a roller coaster, which will not have appropriate safety belts for the dog), and, in other situations, the dog's presence may cause a safety problem (e.g., by introducing contaminants into a sterile room in a hospital, or if the combined weight of the dog and its handler exceeds the weight limits for a piece of equipment).
Even if service animals in general are accepted, an individual service animal could be excluded because of its own behavior or situation. For example, in the US, individual service dogs have legally been excluded from some places for not being properly controlled by their handlers (e.g., for growling at staff or interfering with other patrons), because the handler was unable to care for the dog (e.g., while the handler was unable to take the dog out to urinate for an extended period of time), for having a contagious disease, and for urinating or defecating in inappropriate places.
In some places, service animals in training have the same rights to enter a place as a fully trained and working service animal, and in other places, they do not.
Acquisition
Service animals may be acquired from an organization that trains them, or may be purchased as a puppy (e.g., from a dog breeder) and then trained later. Assistance Dogs International and Animal Assisted Intervention International organize international networks of service dog non-profits.
Trained service animals tend to be expensive, with costs running into the tens of thousands of dollars. In some cases, even though money is paid, the service animal is not being bought by the user, but merely leased.
Training
Training a service dog may take two years. The training is intensive; 120 hours of training over six months (about five hours per week) is considered a minimal level of training.
The training for a service dog is more individualized than the training for a therapy dog, because the service dog supports only a single individual, and therapy dogs work with a variety of people.
The training may be done by a non-profit organization, by an individual or small business, or by the owner. For legal recognition, some countries require licensed trainers. For example, service animals in Japan are only legally recognized if they are certified by designated agencies.
Access by region
In many countries, guide dogs, other types of assistance dogs, and in some cases miniature horses, are protected by law, and therefore may accompany their handlers in most places that are open to the public, even if local regulations or rules would deny access to non-service animals. Laws and regulations vary by jurisdiction.
Japan
In Japan, the Act on Assistance Dogs for Physically Disabled Persons was issued in 2002. The stated goal of this act was to improve the quality of "assistance dogs for physically disabled persons" and expand the use of public facilities by physically disabled people.
Assistance dogs are classified as either guide dogs, hearing dogs, or service dogs. Public transportation, public facilities, offices of public organisation, and private businesses of 50 or more people are required to accept certified assistance dogs. Only certified assistance dogs are required to be accommodated. They must display a sign with their certification number, and the dog's health records and proof of certification must be provided upon demand. Private housing and private businesses with less than 50 people are encouraged but not required to accept assistance dogs.
Visitors whose assistance animals were self-trained or trained by an organization not approved by the Japanese government are legally considered ordinary pets while in Japan.
United States
In the United States, the Americans with Disabilities Act of 1990 prohibits any business, government agency, or other organization that provides access to the general public from barring service dogs. However, religious organizations are not required to provide such access. Current federal regulations define service animal for ADA purposes to exclude all species of animals other than domestic dogs and miniature horses.
Other laws also apply. The US Air Carrier Access Act permits trained service animals to travel with disabled people on commercial airplanes. The Fair Housing Act requires housing providers to permit service animals as well as comfort animals and emotional support animals, without species restrictions, in housing.
The revised Americans with Disabilities Act requirements are as follows: "Beginning on March 15, 2011, only dogs are recognized as service animals under titles II and III of the ADA. A service animal is a dog that is individually trained to do work or perform tasks for a person with a disability. Generally, title II and title III entities must permit service animals to accompany people with disabilities in all areas where members of the public are allowed to go. In addition to the provisions about service dogs, the Department's revised ADA regulations have a new, separate provision about miniature horses that have been individually trained to do work or perform tasks for people with disabilities. (Miniature horses generally range in height from 24 inches to 34 inches measured to the shoulders and generally weigh between 70 and 100 pounds.) Entities covered by the ADA must modify their policies to permit miniature horses where reasonable. The regulations set out four assessment factors to assist entities in determining whether miniature horses can be accommodated in their facility. The assessment factors are whether:
the miniature horse is housebroken;
the miniature horse is under the owner's control;
the facility can accommodate the miniature horse's type, size, and weight; and
the miniature horse's presence will not compromise legitimate safety requirements necessary for safe operation of the facility."
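The four assessment factors quoted above reduce to a simple conjunction: a facility must accommodate the miniature horse only when all four hold. A minimal sketch (the function name is hypothetical, not part of any official tool):

```python
def can_accommodate_miniature_horse(housebroken: bool,
                                    under_owner_control: bool,
                                    facility_can_accommodate: bool,
                                    safety_requirements_met: bool) -> bool:
    """Returns True only when all four ADA assessment factors are satisfied:
    housebroken, under the owner's control, the facility can accommodate the
    horse's type/size/weight, and legitimate safety requirements are met."""
    return all([housebroken, under_owner_control,
                facility_can_accommodate, safety_requirements_met])
```

For example, a horse that is housebroken and controlled but too large for the facility's equipment would fail the third factor, and the entity would not be required to admit it.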
The ADA states that a service animal may be removed from the premises if the dog is out of the handler's control or is not housebroken. Service animals are to be kept under control by wearing a leash, harness, or tether unless it would interfere with the animal's ability to perform its tasks. Housebroken means the service animal is adequately trained to urinate and defecate in appropriate places (e.g., outdoors, or on paper for a paper-trained animal).
However, businesses may exclude service animals when the animals' presence or behavior "fundamentally alters" the nature of the goods, services, programs, or activities provided to the public. This could include exclusion from certain areas of zoos where a dog's presence could disrupt the animals' behavior or where there is open access to the animals, or if a service dog's alert behavior is barking, its behavior could be considered fundamentally altering the service provided by a movie theater. In a medical setting, service animals are normally permitted in patient exam rooms but excluded from operating rooms and other sterile environments.
Staff are legally allowed to ask the following questions about service animals: (1) "Is the dog a service animal required because of a disability?" and (2) "What work or task has this animal been trained to perform?" Staff cannot request documentation, ask about the handler's disability, or require the animal to perform its tasks.
Other rules relating to service dogs outlined by the ADA:
Staff can neither deny service to a handler because other patrons have allergies or a fear of dogs, nor deny service to people with allergies or psychiatric conditions because a service dog is present; instead, all disabled people must be accommodated (for example, by having the allergic person use a different room from the person with a service dog).
Staff cannot charge handlers extra fees because of a service animal
Hotels must provide handlers the ability to reserve any room, not just rooms deemed "pet-friendly"
Staff are not responsible for supervising a service animal
Dogs may be of any breed, though certain breeds, such as German Shepherds, Labrador Retrievers, and Golden Retrievers, are more popular.
Americans with Disability Act
The ADA (Americans with Disabilities Act of 1990) in the United States defines a service animal as "a dog that is individually trained to do work or perform tasks for an individual with a disability." Emotional support animals do not qualify as service animals under the ADA.
Since taking effect in 1990, the ADA has prohibited discrimination against disabled individuals. Although disabled individuals were already protected in housing under earlier fair housing legislation, the ADA focuses solely on discrimination based on disability. Its scope is vast: it prohibits discrimination not only by housing authorities but also in areas such as employment, transportation, and education.
Different authorities ensure that the ADA is followed throughout the United States. For example, the U.S. Equal Employment Opportunity Commission (EEOC) ensures that there is no discrimination against disabled employees, while the Department of Transportation makes sure that public vehicles and related services are accessible to disabled individuals.
Federal Fair Housing Act
In 1988, the Federal Fair Housing Amendment Act banned discrimination against individuals based on their disability. Landlords are obliged to approve reasonable accommodation requests for disabled tenants so that disabled tenants can enjoy the dwelling as much as non-disabled tenants. Reasonable accommodations can include living with an assistance animal. The assistance animal can be a trained service animal or emotional support animal.
Regardless of the breed, type, size, or weight of these assistance animals, the landlord must allow them in the housing, even under a no-pet policy. Under the Fair Housing Act, landlords also cannot charge extra fees for allowing either trained service animals or emotional support animals in rental housing.
Other regions
In most South American countries and Mexico, guide dog access depends solely upon the goodwill of the owner or manager. In more tourist-heavy areas, guide dogs are generally welcomed without problems. In Brazil, however, a 2006 federal decree requires allowance of guide dogs in all public and open to public places. The Federal District Metro has developed a program which trains guide dogs to ride it.
In Europe, the situation varies by location. Some countries have laws that govern the entire country and sometimes the decision is left up to the respective regions.
In Australia, the Disability Discrimination Act 1992 protects all assistance dog handlers. Current laws may not ensure that assistance dog users can always have their service animals present in all situations. Each state and territory has its own laws, which mainly pertain to guide dogs. Queensland has introduced the Guide Hearing and Assistance Dog Act 2009 that covers all certified assistance dogs.
In Canada, guide dogs along with other service animals are allowed anywhere that the general public is allowed, as long as the owner is in control of them. Fines for denying a service animal access can be up to $3000 in Alberta, Canada. There are separate laws for service dogs in Alberta, British Columbia, Nova Scotia, and Ontario.
In South Korea, it is illegal to deny access to guide dogs in any areas that are open to the public. Violators are fined up to 2 million won.
Animals for individual assistance
Many service animals may be trained to perform tasks to help their disabled partners live independent lives.
Service dogs
Service dogs are the most common type of service animal. Dogs can assist with a wide range of both physical and mental disabilities. A mobility assistance dog helps with movement; this may be a large dog that can provide physical support or help propel a wheelchair, or a dog that has been trained to do specific small tasks, such as pushing a door open. A guide dog helps blind people walk safely. A hearing dog alerts deaf people to important sounds, such as a ringing alarm.
Some dogs are medical response dogs that sense a medical issue and alert the handler. Seizure sensing dogs can be trained to sense epileptic seizures in their partner. A diabetes alert dog senses when a person's blood sugar level is dangerously low or high. A psychiatric assistance dog may be trained to calm someone who is upset, help the handler leave an overwhelming situation, or signal specific events (e.g., to interrupt a habitual behavior, such as picking at the skin when anxious). Autism assistance dogs help individuals with autism.
Miniature horse
A miniature horse can be trained as a guide horse for a blind person, to pull wheelchairs, or to provide support for persons with Parkinson's disease.
A full-grown miniature horse can vary from 26" to 38". There are two main registering organisations. The American Miniature Horse Association limits height to 34" whereas the American Miniature Horse Registry has a division for horses 34" to 38".
Miniature horses have a number of advantages as service animals. They may be chosen by people whose religion considers dogs to be unclean, or who have serious allergies to or phobias of dogs. Miniature horses have average lifespans of 30–35 years (longer than those of both service dogs and monkeys) and require at least six months to a year of training, done only by professional trainers. The training period is usually longer for a miniature horse than for a dog, partly because horses are easily spooked by loud noises.
Guide horse users report they typically are immediately recognised as a working service animal, whereas a dog may be mistaken for a pet. Miniature horses have been praised for their excellent range of vision (350 degrees), good memories, calm nature, focused demeanor, and good cost-effectiveness. They are cost-effective primarily because their long working life, of about 20 years, is so much longer than for other animals, so the longer training period is balanced by the longer service time. They are particularly well suited to guiding people with no or low vision.
Helper monkey
A helper monkey was a type of assistance animal trained to help people with quadriplegia, severe spinal cord injuries, or other mobility impairments, similar to a mobility assistance dog. Starting in the 1970s and up through 2020, the Boston-based organization Helping Hands trained Capuchin monkeys to perform manual tasks such as grasping items, operating knobs and switches, and turning the pages of a book.
In 2010, the U.S. federal government revised its definition of service animal under the Americans with Disabilities Act (ADA). Non-human primates are no longer recognised as service animals under the ADA. The American Veterinary Medical Association does not support the use of non-human primates as assistance animals because of animal welfare concerns, the potential for serious injury to people, and risks that primates may transfer dangerous diseases to humans. The organization that trained monkey helpers, Helping Hands, rebranded in 2023, becoming Envisioning Access and turning its focus to assistive technologies.
| Technology | Agriculture, labor and economy | null |
13785655 | https://en.wikipedia.org/wiki/Magnetic%20anomaly | Magnetic anomaly | In geophysics, a magnetic anomaly is a local variation in the Earth's magnetic field resulting from variations in the chemistry or magnetism of the rocks. Mapping of variation over an area is valuable in detecting structures obscured by overlying material. The magnetic variation (geomagnetic reversals) in successive bands of ocean floor parallel with mid-ocean ridges was important evidence for seafloor spreading, a concept central to the theory of plate tectonics.
Measurement
Magnetic anomalies are generally a small fraction of the magnetic field. The total field ranges from 25,000 to 65,000 nanoteslas (nT). To measure anomalies, magnetometers need a sensitivity of 10 nT or less. There are three main types of magnetometer used to measure magnetic anomalies:
The fluxgate magnetometer was developed during World War II to detect submarines. It measures the component along a particular axis of the sensor, so it needs to be oriented. On land, it is often oriented vertically, while in aircraft, ships and satellites it is usually oriented so the axis is in the direction of the field. It measures the magnetic field continuously, but drifts over time. One way to correct for drift is to take repeated measurements at the same place during the survey.
The proton precession magnetometer measures the strength of the field but not its direction, so it does not need to be oriented. Each measurement takes a second or more. It is used in most ground surveys except for boreholes and high-resolution gradiometer surveys.
Optically pumped magnetometers, which use alkali gases (most commonly rubidium and caesium) have high sample rates and sensitivities of 0.001 nT or less, but are more expensive than the other types of magnetometers. They are used on satellites and in most aeromagnetic surveys.
Data acquisition
Ground-based
In ground-based surveys, measurements are made at a series of stations, typically 15 to 60 m apart. Usually a proton precession magnetometer is used and it is often mounted on a pole. Raising the magnetometer reduces the influence of small ferrous objects that were discarded by humans. To further reduce unwanted signals, the surveyors do not carry metallic objects such as keys, knives or compasses, and objects such as motor vehicles, railway lines, and barbed wire fences are avoided. If some such contaminant is overlooked, it may show up as a sharp spike in the anomaly, so such features are treated with suspicion. The main application for ground-based surveys is the detailed search for minerals.
Aeromagnetic
Airborne magnetic surveys are often used in oil surveys to provide preliminary information for seismic surveys. In some countries such as Canada, government agencies have made systematic surveys of large areas. The survey generally involves making a series of parallel runs at a constant height and with intervals of anywhere from a hundred meters to several kilometers. These are crossed by occasional tie lines, perpendicular to the main survey, to check for errors. The plane is a source of magnetism, so sensors are either mounted on a boom (as in the figure) or towed behind on a cable. Aeromagnetic surveys have a lower spatial resolution than ground surveys, but this can be an advantage for a regional survey of deeper rocks.
Shipborne
In shipborne surveys, a magnetometer is towed a few hundred meters behind a ship in a device called a fish. The sensor is kept at a constant depth of about 15 m. Otherwise, the procedure is similar to that used in aeromagnetic surveys.
Spacecraft
Sputnik 3 in 1958 was the first spacecraft to carry a magnetometer. In the autumn of 1979, Magsat was launched and jointly operated by NASA and USGS until the spring of 1980. It had a caesium vapor scalar magnetometer and a fluxgate vector magnetometer. CHAMP, a German satellite, made precise gravity and magnetic measurements from 2001 to 2010. A Danish satellite, Ørsted, was launched in 1999 and is still in operation, while the Swarm mission of the European Space Agency involves a "constellation" of three satellites that were launched in November, 2013.
Data reduction
There are two main corrections needed for magnetic measurements. The first is removing short-term variations in the field from external sources, such as diurnal variations, which have a period of 24 hours and magnitudes of up to 30 nT and probably result from the action of the solar wind on the ionosphere. In addition, magnetic storms can have peak magnitudes of 1000 nT and can last for several days. Their contribution can be measured by returning to a base station repeatedly or by having another magnetometer that periodically measures the field at a fixed location.
Second, since the anomaly is the local contribution to the magnetic field, the main geomagnetic field must be subtracted from it. The International Geomagnetic Reference Field is usually used for this purpose. This is a large-scale, time-averaged mathematical model of the Earth's field based on measurements from satellites, magnetic observatories and other surveys.
Some corrections that are needed for gravity anomalies are less important for magnetic anomalies. For example, the vertical gradient of the magnetic field is 0.03 nT/m or less, so an elevation correction is generally not needed.
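Both corrections amount to simple subtractions from the observed field: the temporal drift estimated at a fixed base station is removed first, then the main-field value from a reference model such as the IGRF. A minimal sketch (the function and parameter names are hypothetical; the IGRF value is assumed to come from a published model):

```python
def magnetic_anomaly(observed_nT, base_now_nT, base_reference_nT, igrf_nT):
    """Anomaly = observation minus temporal drift minus the main (IGRF) field.
    Temporal drift is estimated from a magnetometer at a fixed base station:
    its current reading minus its reference (quiet-time) reading."""
    temporal_drift = base_now_nT - base_reference_nT
    return observed_nT - temporal_drift - igrf_nT

# Illustrative numbers: 48,230 nT observed at a survey station; the base
# station reads 10 nT above its reference value (diurnal variation); the
# IGRF main field at this location is 48,200 nT.
print(magnetic_anomaly(48230.0, 48010.0, 48000.0, 48200.0))  # prints 20.0 (nT)
```

The residual 20 nT is the local anomaly attributed to crustal rocks, which is why magnetometers need sensitivities of 10 nT or better to resolve it.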
Interpretation
Theoretical background
The magnetization in the surveyed rock is the vector sum of the induced and remanent magnetizations: M = M_i + M_r.
The induced magnetization M_i of many minerals is the product of the ambient magnetic field H and their magnetic susceptibility χ: M_i = χH.
Some susceptibilities are given in the table.
Minerals that are diamagnetic or paramagnetic have only an induced magnetization. Ferromagnetic minerals such as magnetite can also carry a remanent magnetization, or remanence. This remanence can last for millions of years, so it may be in a completely different direction from the present Earth's field. If a remanence is present, it is difficult to separate from the induced magnetization unless samples of the rock are measured. The ratio of the magnitudes, Q = M_r/M_i, is called the Koenigsberger ratio.
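These two relations are straightforward to compute. A small sketch with illustrative, basalt-like values (the specific numbers are assumptions for the example, not measurements):

```python
def induced_magnetization(chi, ambient_field):
    """M_i = chi * H: dimensionless SI susceptibility times ambient field (A/m)."""
    return chi * ambient_field

def koenigsberger_ratio(remanent, induced):
    """Q = M_r / M_i; Q > 1 means remanence dominates the total magnetization."""
    return remanent / induced

# Illustrative values: chi ~ 0.01 (SI), H ~ 40 A/m (roughly a 50,000 nT field),
# remanent magnetization 2 A/m.
m_i = induced_magnetization(0.01, 40.0)   # 0.4 A/m
q = koenigsberger_ratio(2.0, m_i)
print(q)  # prints 5.0 -> remanence-dominated rock
```

A high Q, as in this example, is typical of fresh oceanic basalts, which is one reason their remanence (rather than induced magnetization) controls the sea-floor stripe anomalies discussed below.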
Magnetic anomaly modeling
Interpretation of magnetic anomalies is usually done by matching observed and modeled values of the anomalous magnetic field. An algorithm developed by Talwani and Heirtzler (1964), and further elaborated by Kravchinsky et al. (2019), treats both induced and remanent magnetizations as vectors and allows theoretical estimation of the remanent magnetization from the existing apparent polar wander paths for different tectonic units or continents.
Applications
Ocean floor stripes
Magnetic surveys over the oceans have revealed a characteristic pattern of anomalies around mid-ocean ridges. They involve a series of positive and negative anomalies in the intensity of the magnetic field, forming stripes running parallel to each ridge. They are often symmetric about the axis of the ridge. The stripes are generally tens of kilometers wide, and the anomalies are a few hundred nanoteslas. The source of these anomalies is primarily permanent magnetization carried by titanomagnetite minerals in basalt and gabbros. They are magnetized when ocean crust is formed at the ridge. As magma rises to the surface and cools, the rock acquires a thermoremanent magnetization in the direction of the field. Then the rock is carried away from the ridge by the motions of the tectonic plates. Every few hundred thousand years, the direction of the magnetic field reverses. Thus, the pattern of stripes is a global phenomenon and can be used to calculate the velocity of seafloor spreading.
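As a back-of-the-envelope illustration of that spreading-rate calculation (the stripe width and reversal interval are assumed round numbers within the ranges quoted above):

```python
# Half-spreading rate from magnetic stripe geometry (illustrative numbers).
stripe_width_km = 30.0           # "tens of kilometers wide" (assumed)
reversal_interval_yr = 500_000   # "every few hundred thousand years" (assumed)

# Each stripe records one polarity interval, so crust moved one stripe width
# away from the ridge per interval; convert km/yr to the conventional cm/yr.
half_rate_cm_per_yr = stripe_width_km * 1e5 / reversal_interval_yr
print(half_rate_cm_per_yr)  # 6.0
```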
In fiction
In the Space Odyssey series by Arthur C. Clarke, a series of monoliths are left by extraterrestrials for humans to find. One near the crater Tycho is found by its unnaturally powerful magnetic field and named Tycho Magnetic Anomaly 1 (TMA-1). One orbiting Jupiter is named TMA-2, and one in the Olduvai Gorge is found in 2513 and retroactively named TMA-0 because it was first encountered by primitive humans.
Ext4 (https://en.wikipedia.org/wiki/Ext4)
ext4 (fourth extended filesystem) is a journaling file system for Linux, developed as the successor to ext3.
ext4 was initially a series of backward-compatible extensions to ext3, many of them originally developed by Cluster File Systems for the Lustre file system between 2003 and 2006, meant to extend storage limits and add other performance improvements. However, other Linux kernel developers opposed accepting extensions to ext3 for stability reasons, and proposed to fork the source code of ext3, rename it as ext4, and perform all the development there, without affecting existing ext3 users. This proposal was accepted, and on 28 June 2006, Theodore Ts'o, the ext3 maintainer, announced the new plan of development for ext4.
A preliminary development version of ext4 was included in version 2.6.19 of the Linux kernel. On 11 October 2008, the patches that mark ext4 as stable code were merged in the Linux 2.6.28 source code repositories, denoting the end of the development phase and recommending ext4 adoption. Kernel 2.6.28, containing the ext4 filesystem, was finally released on 25 December 2008. On 15 January 2010, Google announced that it would upgrade its storage infrastructure from ext2 to ext4. On 14 December 2010, Google also announced it would use ext4, instead of YAFFS, on Android 2.3.
Adoption
ext4 is the default file system for many Linux distributions including Debian and Ubuntu.
Features
Large file system
The ext4 filesystem can support volumes with sizes in theory up to 64 zebibyte (ZiB) and single files with sizes up to 16 tebibytes (TiB) with the standard 4 KiB block size, and volumes with sizes up to 1 yobibyte (YiB) with 64 KiB clusters, though a limitation in the extent format makes 1 exbibyte (EiB) the practical limit. The maximum file, directory, and filesystem size limits grow at least proportionately with the filesystem block size up to the maximum 64 KiB block size available on ARM and PowerPC/Power ISA CPUs.
Extents
Extents replace the traditional block mapping scheme used by ext2 and ext3. An extent is a range of contiguous physical blocks, improving large-file performance and reducing fragmentation. A single extent in ext4 can map up to 128 MiB of contiguous space with a 4 KiB block size. There can be four extents stored directly in the inode. When there are more than four extents to a file, the rest of the extents are indexed in a tree.
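The 128 MiB figure follows from the on-disk extent record, whose 16-bit length field leaves 15 usable bits (2^15 blocks) for an initialized extent; a quick check:

```python
# Maximum span of a single ext4 extent with 4 KiB blocks.
block_size = 4096
max_blocks_per_extent = 2 ** 15          # 15 usable bits in the length field
extent_bytes = max_blocks_per_extent * block_size

print(extent_bytes // 2 ** 20)   # 128 (MiB)

# Four extents fit directly in the inode, so a fully contiguous file of up
# to 4 * 128 MiB can be mapped without allocating an extent tree.
print(4 * extent_bytes // 2 ** 20)  # 512 (MiB)
```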
Backward compatibility
ext4 is backward-compatible with ext3 and ext2, making it possible to mount ext3 and ext2 filesystems as ext4. This can slightly improve performance, because certain new features of the ext4 implementation, such as the new block allocation algorithm, can also be used with ext3 and ext2 without affecting the on-disk format.
ext3 is only partially forward-compatible with ext4: an ext4 filesystem will not mount as ext3 out of the box unless certain new features are disabled when it is created, such as ^extent, ^flex_bg, ^huge_file, ^uninit_bg, ^dir_nlink, and ^extra_isize.
Persistent pre-allocation
ext4 can pre-allocate on-disk space for a file. On most file systems, this would be done by writing zeroes to the file at creation. In ext4 (and some other file systems such as XFS) the fallocate() system call can be used instead. The allocated space is guaranteed and likely contiguous. This is useful for media streaming and databases.
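A minimal sketch of persistent pre-allocation from user space, via the posix_fallocate() wrapper in Python's standard library (the 1 MiB size and the temporary file are arbitrary; on ext4 the space is reserved by the in-kernel allocator rather than by writing zeroes):

```python
import os
import tempfile

# Reserve 1 MiB of on-disk space for a freshly created file without
# writing any data to it.
fd, path = tempfile.mkstemp()
try:
    os.posix_fallocate(fd, 0, 1 << 20)   # guarantee space for bytes 0..1 MiB
    size = os.fstat(fd).st_size          # the file size reflects the allocation
finally:
    os.close(fd)
    os.unlink(path)

print(size)  # 1048576
```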
Delayed allocation
ext4 uses a performance technique called allocate-on-flush, also known as delayed allocation. That is, ext4 delays block allocation until data is flushed to disk; in contrast, some file systems allocate blocks immediately, even when the data goes into a write cache. Delayed allocation improves performance and reduces fragmentation by effectively allocating larger amounts of data at a time.
Unlimited number of subdirectories
ext4 does not limit the number of subdirectories in a single directory, except by the inherent size limit of the directory itself. (In ext3 a directory can have at most 32,000 subdirectories.) To allow for larger directories and continued performance, ext4 in Linux 2.6.23 and later turns on HTree indices (a specialized version of a B-tree) by default. This allows directories of up to approximately 10–12 million entries in the 2-level HTree index, with a 2 GB directory size limit for a 4 KiB block size, depending on the filename length. In Linux 4.12 and later the large_dir feature enabled a 3-level HTree and directory sizes over 2 GB, allowing approximately 6 billion entries in a single directory.
Journal checksums
ext4 uses checksums in the journal to improve reliability, since the journal is one of the most heavily used parts of the disk. This feature has a side benefit: it can safely avoid a disk I/O wait during journaling, improving performance slightly. Journal checksumming was inspired by a research article from the University of Wisconsin titled "IRON File Systems", with modifications to the implementation of compound transactions performed by the IRON file system (originally proposed by Sam Naghshineh at the Red Hat summit).
Metadata checksumming
Support for metadata checksumming has been included since Linux kernel 3.5, released in 2012.
Faster file-system checking
In ext4 unallocated block groups and sections of the inode table are marked as such. This enables e2fsck to skip them entirely and greatly reduces the time it takes to check the file system. Linux 2.6.24 implements this feature.
Multiblock allocator
When ext3 appends to a file, it calls the block allocator, once for each block. Consequently, if there are multiple concurrent writers, files can easily become fragmented on disk. However, ext4 uses delayed allocation, which allows it to buffer data and allocate groups of blocks. Consequently, the multiblock allocator can make better choices about allocating files contiguously on disk. The multiblock allocator can also be used when files are opened in O_DIRECT mode. This feature does not affect the disk format.
Improved timestamps
As computers become faster in general, and as Linux becomes used more for mission-critical applications, the granularity of second-based timestamps becomes insufficient. To solve this, ext4 provides timestamps measured in nanoseconds. In addition, 2 bits of the expanded timestamp field are added to the most significant bits of the seconds field of the timestamps to defer the year 2038 problem for an additional 408 years.
ext4 also adds support for time-of-creation timestamps. But, as Theodore Ts'o points out, while it is easy to add an extra creation-date field in the inode (thus technically enabling support for these timestamps in ext4), it is more difficult to modify or add the necessary system calls, like stat() (which would probably require a new version) and the various libraries that depend on them (like glibc). These changes will require coordination of many projects. Therefore, the creation date stored by ext4 is currently only available to user programs on Linux via the statx() API.
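The nanosecond granularity is visible from ordinary user space; for example, Python's os.stat() result exposes it through the *_ns fields (the standard library has no statx() wrapper, so the creation time itself is not retrievable this way):

```python
import os
import tempfile

# Create a file and read its modification time at nanosecond resolution.
fd, path = tempfile.mkstemp()
os.close(fd)
st = os.stat(path)
os.unlink(path)

# st_mtime is a float of seconds; st_mtime_ns is an exact integer count of
# nanoseconds since the epoch, preserving the full stored resolution.
print(st.st_mtime)
print(st.st_mtime_ns)
```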
Project quotas
Support for project quotas was added in Linux kernel 4.4 on 8 Jan 2016. This feature allows assigning disk quota limits to a particular project ID. The project ID of a file is a 32-bit number stored on each file and is inherited by all files and subdirectories created beneath a parent directory with an assigned project ID. This allows assigning quota limits to a particular subdirectory tree independent of file access permissions on the file, such as user and project quotas that are dependent on the UID and GID. While this is similar to a directory quota, the main difference is that the same project ID can be assigned to multiple top-level directories and is not strictly hierarchical.
Transparent encryption
Support for transparent encryption was added in Linux kernel 4.1 in June 2015.
Lazy initialization
The lazyinit feature defers initialization of inode tables to a background kernel thread after the filesystem is mounted, speeding up the creation of a new ext4 file system. It has been available since 2010, in Linux kernel version 2.6.37.
Write barriers
ext4 enables write barriers by default. A barrier ensures that file system metadata is correctly written and ordered on disk, even when write caches lose power. This comes at a performance cost, especially for applications that use fsync heavily or create and delete many small files. For disks with a battery-backed write cache, disabling barriers (mount option "barrier=0") may safely improve performance.
Limitations
In 2008, the principal developer of the ext3 and ext4 file systems, Theodore Ts'o, stated that although ext4 has improved features, it is not a major advance, it uses old technology, and is a stop-gap. Ts'o believes that Btrfs is the better direction because "it offers improvements in scalability, reliability, and ease of management". Btrfs also has "a number of the same design ideas that reiser3/4 had". However, ext4 has continued to gain new features such as file encryption and metadata checksums.
The ext4 file system does not honor the "secure deletion" file attribute, which is supposed to cause overwriting of files upon deletion. A patch to implement secure deletion was proposed in 2011, but did not solve the problem of sensitive data ending up in the file-system journal.
Delayed allocation and potential data loss
Because delayed allocation changes the behavior that programmers have been relying on with ext3, the feature poses some additional risk of data loss in cases where the system crashes or loses power before all of the data has been written to disk. Due to this, ext4 in kernel versions 2.6.30 and later automatically handles these cases as ext3 does.
The typical scenario in which this might occur is a program replacing the contents of a file without forcing a write to the disk with fsync. There are two common ways of replacing the contents of a file on Unix systems:
fd = open("file", O_WRONLY | O_TRUNC); write(fd, data, size); close(fd);
In this case, the existing file is truncated at the time of open (due to the O_TRUNC flag), then the new data is written out. Since the write can take some time, there is a window during which contents can be lost even with ext3, but it is usually very small. However, because ext4 can delay writing file data for a long time, this window is much larger.
There are several problems that can arise:
If the write does not succeed (which may be due to error conditions in the writing program, or due to external conditions such as a full disk), then both the original version and the new version of the file will be lost, and the file may be corrupted because only a part of it has been written.
If other processes access the file while it is being written, they see a corrupted version.
If other processes have the file open and do not expect its contents to change, those processes may crash. One notable example is a shared library file which is mapped into running programs.
Because of these issues, often the following idiom is preferred over the one above:
fd = open("file.new", O_WRONLY | O_CREAT | O_TRUNC, 0666); write(fd, data, size); close(fd); rename("file.new", "file");
A new temporary file ("file.new") is created, which initially contains the new contents. Then the new file is renamed over the old one. Replacing files by the rename() call is guaranteed to be atomic by POSIX standards – i.e. either the old file remains, or it's overwritten with the new one. Because the ext3 default "ordered" journaling mode guarantees file data is written out on disk before metadata, this technique guarantees that either the old or the new file contents will persist on disk. ext4's delayed allocation breaks this expectation, because the file write can be delayed for a long time, and the rename is usually carried out before new file contents reach the disk.
Using fsync() more often to reduce the risk for ext4 could lead to performance penalties on ext3 filesystems mounted with the data=ordered flag (the default on most Linux distributions). Given that both file systems will be in use for some time, this complicates matters for end-user application developers. In response, ext4 in Linux kernels 2.6.30 and newer detects the occurrence of these common cases and forces the files to be allocated immediately. For a small cost in performance, this provides semantics similar to ext3's ordered mode and increases the chance that either version of the file will survive the crash. This new behavior is enabled by default, but can be disabled with the "noauto_da_alloc" mount option.
The new patches have become part of the mainline kernel 2.6.30, but various distributions chose to backport them to 2.6.28 or 2.6.29.
These patches do not completely prevent potential data loss, and do not help at all with new files. The only way to be safe is to write and use software that calls fsync() when it needs to. Performance problems can be minimized by making crucial disk writes that need fsync() less frequent.
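The fsync()-based discipline recommended above can be sketched as a small helper (a sketch, not the kernel's own behavior; the ".new" suffix follows the same convention as the idiom shown earlier):

```python
import os
import tempfile

def safe_replace(path, data):
    """Replace `path` with `data` so that either the old or the new
    contents survive a crash: write a temporary file, fsync it BEFORE
    renaming, then rename atomically over the original."""
    tmp = path + ".new"
    fd = os.open(tmp, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)              # force the new contents to disk first
    finally:
        os.close(fd)
    os.rename(tmp, path)          # atomic replacement guaranteed by POSIX

d = tempfile.mkdtemp()
target = os.path.join(d, "file")
safe_replace(target, b"old contents\n")
safe_replace(target, b"new contents\n")
with open(target, "rb") as f:
    contents = f.read()
print(contents)
```

For full durability of the rename itself, the containing directory can also be fsync()'d; that step is omitted here for brevity.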
Implementation
The Linux kernel's Virtual File System (VFS) is a subsystem, or layer, inside the kernel. It is the result of an attempt to integrate multiple file systems into an orderly single structure. The key idea, which dates back to pioneering work done by Sun Microsystems employees in 1986, is to abstract out the part of the file system that is common to all file systems and put that code in a separate layer that calls the underlying concrete file systems to actually manage the data.
All system calls related to files (or pseudo files) are directed to the Linux kernel Virtual File System for initial processing. These calls, coming from user processes, are the standard POSIX calls, such as open, read, write, lseek, etc.
Interoperability
Although designed for and primarily used with Linux, an ext4 file system can be accessed from other operating systems via interoperability tools.
Windows provides access via its Windows Subsystem for Linux (WSL) technology. Specifically, the second major version, WSL 2, is the first version with ext4 support. It was first released in Windows 10 Insider Preview Build 20211. WSL 2 requires Windows 10 version 1903 or higher, with build 18362 or higher, for x64 systems, and version 2004 or higher, with build 19041 or higher, for ARM64 systems.
Paragon Software offers commercial products that provide full read/write access for ext2/3/4 Linux File Systems for Windows and extFS for Mac.
The free software ext4fuse provides limited (read-only) support.
Tetrasodium EDTA (https://en.wikipedia.org/wiki/Tetrasodium%20EDTA)
Tetrasodium EDTA is the salt resulting from the neutralization of ethylenediaminetetraacetic acid with four equivalents of sodium hydroxide (or an equivalent sodium base). It is a white solid that is highly soluble in water. Commercial samples are often hydrated, e.g. Na4EDTA·4H2O. The properties of solutions produced from the anhydrous and hydrated forms are the same, provided they are at the same pH.
It is used as a source of the chelating agent EDTA4-. A 1% aqueous solution has a pH of approximately 11.3. When dissolved in neutral water, it converts partially to H2EDTA2-. Ethylenediaminetetraacetic acid is produced commercially via the intermediacy of tetrasodium EDTA.
Products
The substance is also known as Dissolvine E-39. A salt of edetic acid, it has been known since at least 1954 and is sometimes used as a chelating agent.
Procter and Gamble is the assignee on about 5% of USPTO patents mentioning the substance. It is used most notably in cosmetics and in hair- and skin-care products.
The substance has been used to aid in formulation of a removal product for rust, corrosion, and scale from ferrous metal, copper, brass, and other surfaces.
At a concentration of 6%, it is the main active ingredient in some types of engine coolant system flushes.
Sea krait (https://en.wikipedia.org/wiki/Sea%20krait)
Sea kraits are a genus of venomous snakes, Laticauda (subfamily Laticaudinae). They are semiaquatic, and retain the wide ventral scales typical of terrestrial snakes for moving on land, but also have paddle-shaped tails for swimming. Unlike fully aquatic ovoviviparous sea snakes, sea kraits are oviparous and must come to land to digest prey and lay eggs. They also have an independent evolutionary origin in aquatic habitats, with sea kraits diverging earlier from other Australasian elapids. Thus, sea kraits and sea snakes are an example of convergent evolution into aquatic habitats within the Hydrophiinae snakes. Sea kraits are also often confused with land kraits (genus Bungarus), which are not aquatic.
Description
Sea kraits are semiaquatic, so they have morphological adaptations to both land and sea. Laticauda species show traits intermediate between those of sea snakes and terrestrial elapids. They have a vertically flattened, paddle-shaped tail (similar to sea snakes) and laterally positioned nostrils and broad, laterally expanded ventral scales (similar to terrestrial elapids). Their body has a striped pattern, their nasal scales are separated by internasal scales, and the maxillary bone extends forwards beyond the palatine bone. Members of Laticauda can grow to considerable lengths.
Distribution
Laticauda species are found throughout the South and Southeast Asian islands, spreading from India in the west, north as far as Japan, and southeast to Fiji. They occasionally wander south to the eastern coasts of Australia and New Zealand (Laticauda colubrina being the most common example in New Zealand); however, no locally breeding populations are known to exist in these areas. Sea kraits typically live in the littoral zone of coastal waters and are semi-terrestrial, spending time ashore and in shallow waters, as well as around coral reefs.
Diet
Laticauda species feed in the ocean, mostly eating moray and conger eels, and some squid, crabs, and fish. They have never been observed feeding on land.
Behavior
Laticauda species are often active at night, which is when they prefer to hunt. Though they possess highly toxic venom, these snakes are usually shy and reclusive, and in New Caledonia, where they are called tricot rayé ("striped sweater"), children play with them. Bites are rare, but must be treated immediately. Bites are more likely to occur under low light conditions (night), and when the snake is roughly handled (e.g. grabbed "hard") while in the water, or having been abruptly taken from the water. When these snakes are on land, bites are extremely rare. Black-banded sea kraits, numbering in the hundreds, form hunting alliances with yellow goatfish and bluefin trevally, flushing potential prey from narrow crannies in a reef the same way some moray eels do.
Sea kraits are capable of diving up to 80 m deep in a single hunting trip. They also have a very large hunting range: at least 615 km2 and perhaps up to 1,660 km2 of surface area for the blue-lipped sea krait, and 1,380 km2 and potentially up to 4,500 km2 for the New Caledonian sea krait. They have a remarkable ability to climb up the vertical rocks of their coastal limestone habitats.
Breeding
Laticauda females are oviparous, and they return to land to mate and lay eggs. Several males form a mating ball around the female, twitching their bodies in what is termed "caudocephalic waves". Though these animals can occur in high densities in suitable locations, nests of eggs are very rarely encountered, suggesting specific nesting conditions need to be met.
Species and taxonomy
Eight species are currently recognised as being valid.
Laticauda colubrina – yellow-lipped sea krait
Laticauda crockeri – Crocker's sea snake
Laticauda frontalis
Laticauda guineai – Guinea's sea krait
Laticauda laticaudata – blue-lipped sea krait
Laticauda saintgironsi – New Caledonian sea krait
Laticauda schistorhyncha – katuali or Niue sea krait
Laticauda semifasciata – black-banded sea krait
The species L. schistorhyncha and L. semifasciata have been placed in the genus Pseudolaticauda by some authors.
Nota bene: A binomial authority in parentheses indicates that the species was originally described in a genus other than Laticauda.
Parasites
Sea snakes can have parasitic ticks, occasionally with heavy infestations.
δ13C (https://en.wikipedia.org/wiki/%CE%9413C)
In geochemistry, paleoclimatology, and paleoceanography δ13C (pronounced "delta thirteen c") is an isotopic signature, a measure of the ratio of the two stable isotopes of carbon—13C and 12C—reported in parts per thousand (per mil, ‰). The measure is also widely used in archaeology for the reconstruction of past diets, particularly to see if marine foods or certain types of plants were consumed.
The definition is, in per mille:
δ13C = [ (13C/12C)_sample / (13C/12C)_standard − 1 ] × 1000 ‰
where the standard is an established reference material.
δ13C varies in time as a function of productivity, the signature of the inorganic source, organic carbon burial, and vegetation type. Biological processes preferentially take up the lower-mass isotope through kinetic fractionation. However, some abiotic processes do the same; for example, methane from hydrothermal vents can be depleted by up to 50%.
Reference standard
The standard established for carbon-13 work was the Pee Dee Belemnite (PDB), based on a Cretaceous marine fossil, Belemnitella americana, from the Peedee Formation in South Carolina. This material had an anomalously high 13C:12C ratio (0.0112372), and was established as the δ13C value of zero.
Since the original PDB specimen is no longer available, its 13C:12C ratio can be back-calculated from the widely measured carbonate standard NBS-19, which has a δ13C value of +1.95‰ relative to it.
Note that the back-calculated value differs from the widely used PDB 13C:12C ratio of 0.0112372 quoted by isotope-forensics and environmental scientists; the discrepancy has been attributed to a sign error in the interconversion between standards, although this attribution is not well sourced. Use of the PDB standard gives most natural material a negative δ13C. A material with a ratio of 0.010743, for example, would have a δ13C value of (0.010743/0.0112372 − 1) × 1000 ≈ −44‰.
The standards are used for verifying the accuracy of mass spectroscopy; as isotope studies became more common, the demand for the standard exhausted the supply. Other standards calibrated to the same ratio, including one known as VPDB (for "Vienna PDB"), have replaced the original.
The 13C:12C ratio for VPDB, which the International Atomic Energy Agency (IAEA) defines as a δ13C value of zero is 0.01123720.
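The definition and the worked example above can be checked numerically; a small sketch using the VPDB ratio quoted in the text:

```python
# delta-13-C in per mil from an absolute 13C/12C ratio, relative to the
# (V)PDB reference ratio of 0.0112372 given above.
R_VPDB = 0.0112372

def delta13C(ratio, standard=R_VPDB):
    return (ratio / standard - 1.0) * 1000.0

print(delta13C(R_VPDB))           # 0.0 by construction
print(round(delta13C(0.010743)))  # -44, the worked example from the text
```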
Causes of δ13C variations
Methane has a very light δ13C signature: biogenic methane of −60‰, thermogenic methane −40‰. The release of large amounts of methane clathrate can affect global δ13C values, as at the Paleocene–Eocene Thermal Maximum.
More commonly, the ratio is affected by variations in primary productivity and organic burial. Organisms preferentially take up light 12C, and have a δ13C signature of about −25‰, depending on their metabolic pathway. Therefore, an increase in δ13C in marine fossils is indicative of an increase in the abundance of vegetation.
An increase in primary productivity causes a corresponding rise in δ13C values as more 12C is locked up in plants. This signal is also a function of the amount of carbon burial; when organic carbon is buried, more 12C is locked out of the system in sediments than the background ratio.
Geologic significance of δ13C excursions
C3 and C4 plants have different signatures, allowing the abundance of C4 grasses to be detected through time in the δ13C record. Whereas C4 plants have a δ13C of −16 to −10‰, C3 plants have a δ13C of −33 to −24‰.
Positive and negative excursions
Positive δ13C excursions are interpreted as an increase in burial of organic carbon in sedimentary rocks following either a spike in primary productivity, a drop in decomposition under anoxic ocean conditions or both. For example, the evolution of large land plants in the late Devonian led to increased organic carbon burial and consequently a rise in δ13C.
Negative δ13C anomalies, which are thought to represent a decrease in primary productivity and release of plant-based carbon, often mark mass extinctions.
Lacustrine environments
Other important applications of δ13C involve understanding its signatures in soft sediments, especially in lacustrine environments. The signal depends on the system from which it is extracted (open system, closed system, etc.). Temporal variations in δ13C in organic matter are influenced by diverse internal and external processes:
Changes in the Dominant Source of Dissolved Inorganic Carbon: In stratified lakes, the accumulation of 13C-depleted carbon in deep water is common as sinking and degrading phytoplankton cells contribute to this pool. Recirculating this water to the surface can lead to a significant decrease in δ13C. Prolonged stratification enriches the dissolved inorganic carbon (DIC) pool in the epilimnion with 13C. Long-term variations in factors affecting upwelling intensity or depth, such as windiness, water temperature, or salinity-related stratification, manifest as shifts between more negative and more positive δ13C values.
Changes in Productivity/Eutrophication: Increased productivity accelerates the transfer of organic matter with negative δ13C values to the hypolimnion, affecting the δ13C of residual epilimnetic DIC. This impact, combined with mixing effects, results in variations in the δ13C signal.
Changes in Metabolic Pathways for Carbon Fixation: Major changes in lake alkalinity influence benthic and planktonic primary production. Shifts in the dominant source of DIC for photosynthesis, driven by pH changes, can lead to trends toward more positive δ13C, particularly in lakes dominated by autochthonous organic matter and exhibiting evidence of high alkalinity.
Changes in Availability of Dissolved CO2: Cool water can dissolve higher concentrations of CO2 than warmer water, affecting δ13C in organic matter during cooling events. Changes in atmospheric CO2 concentrations also influence δ13C, with lower pCO2 during glacial periods affecting isotopic discrimination in plants using dissolved CO2.
Changes in Dominant Vegetation Within the Watershed: Shifts in watershed vegetation, especially transitions between C3 and C4 photosynthetic pathways, significantly alter the carbon isotopic composition in lake sediments. These changes can be indicative of broader paleoclimatic shifts.
Diagenetic Trends: Diagenetic processes, such as the loss of reactive components like amino acids, result in sustained shifts in δ13C in organic matter. Marsh sediments, rich in carbon, exhibit shifts towards more negative δ13C in bulk organic matter. These diagenetic trends should be considered when interpreting isotopic changes accompanying major Total Organic Carbon (TOC) changes or methanogenesis.
Understanding these processes is crucial for interpreting δ13C variations in lake sediments and reconstructing paleoenvironmental conditions.
Major excursion events
Lomagundi-Jatuli event (2,300–2,080 Ma) Paleoproterozoic - Positive excursion
Shunga-Francevillian event (2,080 Ma) Paleoproterozoic - Negative excursion
Shuram-Wonoka excursion (570–551 Ma) Neoproterozoic - Negative excursion
Steptoean positive carbon isotope excursion (494.6-492 Ma) Paleozoic - Positive excursion
Ireviken event (433.4 Ma) Paleozoic - Positive excursion
Mulde event (427 Ma) Paleozoic - Positive excursion
Lau event (424 Ma) Paleozoic - Positive excursion
Cenomanian-Turonian boundary event (93.9 Ma) Mesozoic - Positive excursion
Paleocene–Eocene Thermal Maximum (55.5 Ma) Cenozoic - Negative excursion
Perentie (https://en.wikipedia.org/wiki/Perentie)
The perentie (Varanus giganteus) is a species of monitor lizard. It is one of the largest living lizards on Earth, after the Komodo dragon, Asian water monitor, and crocodile monitor. Found west of the Great Dividing Range in the arid areas of Australia, it is rarely seen because of its shyness and the remoteness of much of its range from human habitation. The species is considered a least-concern species according to the International Union for Conservation of Nature.
Its status in many Aboriginal cultures is evident in the totemic relationships, and part of the Ngiṉṯaka dreaming, as well as bush tucker. It was a favoured food item among desert Aboriginal tribes, and the fat was used for medicinal and ceremonial purposes.
Taxonomy
British zoologist John Edward Gray described the perentie in 1845 as Hydrosaurus giganteus, calling it the "gigantic water lizard". George Albert Boulenger moved it to the genus Varanus.
Within the monitor genus Varanus, it belongs to the subgenus Varanus. Its closest relatives belong to a lineage that gave rise to the sand goanna and the Argus monitor.
Description
Perenties are the largest living species of lizard in Australia and the fourth-largest extant lizard species, exceeded in size only by the Komodo dragon, Asian water monitor, and crocodile monitor. However, perenties are very lean for large monitors, making them significantly less bulky than the rock monitor at a similar size.
Venom
In late 2005, University of Melbourne researchers discovered that all monitors may be somewhat venomous. Previously, bites inflicted by monitors were thought to be prone to infection because of bacteria in their mouths, but the researchers showed that the immediate effects are caused by mild envenomation. Bites on the hand by Komodo dragons (V. komodoensis), perenties (V. giganteus), lace monitors (V. varius), and spotted tree monitors (V. scalaris) have been observed to cause swelling within minutes, localised disruption of blood clotting, and shooting pain up to the elbow, which can often last for several hours.
University of Washington biologist Kenneth V. Kardong and toxicologists Scott A. Weinstein and Tamara L. Smith have argued that the suggestion of venom glands "... has had the effect of underestimating the variety of complex roles played by oral secretions in the biology of reptiles, produced a very narrow view of oral secretions and resulted in misinterpretation of reptilian evolution". According to the scientists "... reptilian oral secretions contribute to many biological roles other than to quickly dispatch prey". They concluded, "Calling all in this clade venomous implies an overall potential danger that does not exist, misleads in the assessment of medical risks, and confuses the biological assessment of squamate biochemical systems".
Distribution and habitat
Perenties are found in the arid desert areas of Western Australia, South Australia, the Northern Territory, and Queensland. Their habitats consist of rocky outcroppings and gorges, with hard-packed soil and loose stones.
Behaviour and ecology
Perenties generally avoid human contact and often retreat before they are seen. Being able diggers, they can excavate a burrow for shelter in only minutes. Their long claws enable them to climb trees easily. They often stand on their back legs and tails to gain a better view of the surrounding terrain. This behavior, known as "tripoding", is quite common in monitor species. Perenties are fast sprinters and can run using either all four legs or just their hind legs.
Typical of most goannas, the perentie either freezes (lying flat on the ground, and remaining very still until the danger has passed) or runs if detected. If cornered, this powerful carnivore stands its ground and uses its arsenal of claws, teeth, and whip-like tail to defend itself. It can inflate its throat and hiss as a defensive or aggressive display and can strike at opponents with its muscular tail. It may also lunge forward with an open mouth, either as a bluff or attack. The bite of a perentie can do much damage, not only from the teeth but also because of the oral secretions.
Feeding
Perenties are apex predators and have no natural predators within their range.
They are highly active carnivores that feed mostly on reptiles and small mammals, and less commonly on birds such as diamond doves. They hunt live prey, but also scavenge carrion. Reptilian prey consists mostly of lizards (such as skinks and agamids) and, more rarely, snakes, but this species also displays a notable example of intraguild predation: it eats an unusually large number of other monitor lizard species, such as ridge-tailed monitors, black-headed monitors, Gould's monitors, and even Argus monitors. Perenties are also cannibalistic, with individuals recorded killing and eating smaller members of their own species. Other lizard prey includes central bearded dragons and long-nosed water dragons. Coastal and island individuals often eat large numbers of sea turtle eggs and hatchlings, and hide under vehicles to ambush scavenging gulls. Mammalian prey includes bats, young kangaroos, other small marsupials, and rodents. They have also occasionally been seen foraging for food in shallow water. Using their powerful forelimbs and claws, they are able to kill kangaroos and dismember those too large to be swallowed whole. Although adults feed predominantly on vertebrate prey, young perenties eat mostly arthropods, especially grasshoppers and centipedes.
Prey is typically swallowed whole, but if the food item is too large, chunks are ripped off for ease of consumption.
Breeding
The perentie can lay its eggs in termite mounds or the soil.
Cupressus gigantea

Cupressus gigantea, the Tibetan cypress, is a species of conifer in the family Cupressaceae in Asia. C. gigantea was previously classified as a subspecies of Cupressus torulosa because of their similar morphological characteristics and close distributions, but has since been genetically distinguished as a separate species.
Distribution
It is endemic to southeastern Tibet, China, on the Qinghai-Tibetan Plateau, particularly in the dry valleys of the Nyang and Yarlung Tsangpo rivers. Cupressus gigantea is the biggest of all Cupressus species.
King cypress
The biggest known specimen is the famous King Cypress: about 50 meters high and 5.8 meters in diameter, with a calculated age of 2,600 years.
Satellite television

Satellite television is a service that delivers television programming to viewers by relaying it from a communications satellite orbiting the Earth directly to the viewer's location. The signals are received via an outdoor parabolic antenna, commonly referred to as a satellite dish, and a low-noise block downconverter.
A satellite receiver decodes the desired television program for viewing on a television set. Receivers can be external set-top boxes or built-in television tuners. Satellite television provides a wide range of channels and services, and is often the only television available in remote geographic areas without terrestrial television or cable television service. Some transmissions and channels are unencrypted and therefore free-to-air, while many other channels are transmitted with encryption; different receivers are required for the two types. Free-to-view channels are encrypted but not charged for, while pay television requires the viewer to subscribe and pay a monthly fee to receive the programming.
In modern systems, signals are relayed from a communications satellite on X band (8–12 GHz) or Ku band (12–18 GHz) frequencies, requiring only a small dish less than a meter in diameter. The first satellite TV systems were a now-obsolete type known as television receive-only. These systems received weaker analog signals transmitted in the C-band (4–8 GHz) from FSS-type satellites, requiring the use of large 2–3-meter dishes. Consequently, these systems were nicknamed "big dish" systems, and were more expensive and less popular. Early systems used analog signals, but modern ones use digital signals, which allow transmission of the modern television standard, high-definition television, due to the significantly improved spectral efficiency of digital broadcasting. As of 2022, Star One D2 from Brazil is the only remaining satellite broadcasting analog signals.
Technology
The satellites used for broadcasting television are usually in a geostationary orbit above the earth's equator. The advantage of this orbit is that the satellite's orbital period equals the rotation rate of the Earth, so the satellite appears at a fixed position in the sky. Thus the satellite dish antenna which receives the signal can be aimed permanently at the location of the satellite and does not have to track a moving satellite. A few systems instead use a highly elliptical orbit with inclination of +/−63.4 degrees and an orbital period of about twelve hours, known as a Molniya orbit.
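As a sanity check on the orbit described above, the geostationary altitude can be derived from Kepler's third law. A minimal sketch, using standard physical constants (assumed, not taken from the text):

```python
import math

# Standard constants (assumptions, not from the article)
MU_EARTH = 3.986004418e14    # Earth's gravitational parameter, m^3/s^2
R_EQUATOR = 6_378_137        # Earth's equatorial radius, m
T_SIDEREAL = 86_164.1        # sidereal day, s (one full Earth rotation)

# Kepler's third law: T = 2*pi*sqrt(a^3/mu)  =>  a = (mu * (T/2pi)^2)^(1/3)
a = (MU_EARTH * (T_SIDEREAL / (2 * math.pi)) ** 2) ** (1 / 3)
altitude_km = (a - R_EQUATOR) / 1000

print(f"Geostationary altitude: {altitude_km:,.0f} km")  # ~35,786 km
```

Matching the orbital period to the sidereal (not the 24-hour solar) day is what keeps the satellite fixed over one longitude.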
Satellite television, like other communications relayed by satellite, starts with a transmitting antenna located at an uplink facility. Uplink satellite dishes are very large, as much as 9 to 12 meters (30 to 40 feet) in diameter. The increased diameter results in more accurate aiming and increased signal strength at the satellite. The uplink dish is pointed toward a specific satellite and the uplinked signals are transmitted within a specific frequency range, so as to be received by one of the transponders tuned to that frequency range aboard that satellite. The transponder re-transmits the signals back to Earth at a different frequency (a process known as translation, used to avoid interference with the uplink signal), typically in the 10.7-12.7 GHz band, but some still transmit in the C-band (4–8 GHz), Ku-band (12–18 GHz), or both. The leg of the signal path from the satellite to the receiving Earth station is called the downlink.
A typical satellite has up to 32 Ku-band or 24 C-band transponders, or more for Ku/C hybrid satellites. Typical transponders each have a bandwidth between 27 and 50 MHz. Each geostationary C-band satellite needs to be spaced 2° longitude from the next satellite to avoid interference; for Ku the spacing can be 1°. This means that there is an upper limit of 360/2 = 180 geostationary C-band satellites or 360/1 = 360 geostationary Ku-band satellites. C-band transmission is susceptible to terrestrial interference while Ku-band transmission is affected by rain (as water is an excellent absorber of microwaves at this particular frequency). The latter is even more adversely affected by ice crystals in thunder clouds. On occasion, sun outage will occur when the sun lines up directly behind the geostationary satellite to which the receiving antenna is pointed.
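The spacing rules and transponder figures above are simple arithmetic; a short sketch (the 36 MHz transponder bandwidth is an assumed mid-range value within the 27–50 MHz range quoted):

```python
# Orbital-slot arithmetic from the spacing rules above
ORBIT_DEGREES = 360
c_band_spacing_deg = 2.0   # minimum C-band spacing
ku_band_spacing_deg = 1.0  # minimum Ku-band spacing

max_c_band_sats = ORBIT_DEGREES / c_band_spacing_deg
max_ku_band_sats = ORBIT_DEGREES / ku_band_spacing_deg
print(f"Max C-band slots:  {max_c_band_sats:.0f}")   # 180
print(f"Max Ku-band slots: {max_ku_band_sats:.0f}")  # 360

# Aggregate bandwidth for a typical satellite (transponder count from the
# text; 36 MHz per transponder is an assumed typical value)
transponders = 24
bandwidth_mhz = 36
aggregate_mhz = transponders * bandwidth_mhz
print(f"Aggregate bandwidth: {aggregate_mhz} MHz")  # 864 MHz
```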
The downlink satellite signal, quite weak after traveling the great distance (see path loss), is collected with a parabolic receiving dish, which reflects the weak signal to the dish's focal point. Mounted on brackets at the dish's focal point is a device called a feedhorn or collector. The feedhorn is a section of waveguide with a flared front-end that gathers the signals at or near the focal point and conducts them to a probe or pickup connected to a low-noise block downconverter (LNB). The LNB amplifies the signals and downconverts them to a lower block of intermediate frequencies (IF), usually in the L-band.
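The path loss mentioned above can be quantified with the standard free-space path-loss formula; a sketch with illustrative geostationary Ku-band values:

```python
import math

def fspl_db(distance_km: float, freq_ghz: float) -> float:
    """Free-space path loss in dB (standard formula for km and GHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45

# Geostationary downlink at Ku-band (illustrative values)
loss = fspl_db(35_786, 12.0)
print(f"Path loss: {loss:.1f} dB")  # ≈ 205 dB
```

A loss of roughly 205 dB is why the dish must concentrate the signal onto a low-noise amplifier before any cable run.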
The original C-band satellite television systems used a low-noise amplifier (LNA) connected to the feedhorn at the focal point of the dish. The amplified signal, still at the higher microwave frequencies, had to be fed via very expensive low-loss 50-ohm impedance gas filled hardline coaxial cable with relatively complex N-connectors to an indoor receiver or, in other designs, a downconverter (a mixer and a voltage-tuned oscillator with some filter circuitry) for downconversion to an intermediate frequency. The channel selection was controlled typically by a voltage tuned oscillator with the tuning voltage being fed via a separate cable to the headend, but this design evolved.
Designs for microstrip-based converters for amateur radio frequencies were adapted for the 4 GHz C-band. Central to these designs was the concept of block downconversion of a range of frequencies to a lower, more easily handled IF.
The advantages of using an LNB are that cheaper cable can be used to connect the indoor receiver to the satellite television dish and LNB, and that the technology for handling the signal at L-band and UHF was far cheaper than that for handling it at C-band frequencies. The shift from the hardline and N-connectors of the early C-band systems to the cheaper and simpler 75-ohm cable and F-connectors allowed early satellite television receivers to use what were, in reality, modified UHF television tuners, which selected the satellite television channel for downconversion to a lower intermediate frequency centered on 70 MHz, where it was demodulated. This shift allowed the satellite television DTH industry to change from a largely hobbyist one, where only small numbers of systems costing thousands of US dollars were built, to a far more commercial one of mass production.
In the United States, service providers use the intermediate frequency ranges of 950–2150 MHz to carry the signal from the LNBF at the dish down to the receiver. This allows for the transmission of UHF signals along the same span of coaxial wire at the same time. In some applications (DirecTV AU9-S and AT-9), ranges of the lower B-band and 2250–3000 MHz, are used. Newer LNBFs in use by DirecTV, called SWM (Single Wire Multiswitch), are used to implement single cable distribution and use a wider frequency range of 2–2150 MHz.
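Block downconversion is simply IF = RF − LO. The sketch below uses the local-oscillator frequencies of a typical European "universal" Ku-band LNB (an assumption — the US LNBFs described above use different frequency plans) to show how Ku-band transponders land inside the 950–2150 MHz cable-friendly range:

```python
# Block downconversion: IF = RF - LO. LO values are those of a typical
# European "universal" Ku-band LNB (an assumption, for illustration).
LO_LOW_GHZ = 9.75    # low band (roughly 10.7-11.7 GHz)
LO_HIGH_GHZ = 10.60  # high band (roughly 11.7-12.75 GHz)

def downconvert(rf_ghz: float) -> float:
    """Return the intermediate frequency in MHz for a Ku-band carrier."""
    lo = LO_LOW_GHZ if rf_ghz < 11.7 else LO_HIGH_GHZ
    return (rf_ghz - lo) * 1000

for rf in (10.714, 11.778, 12.692):
    if_mhz = downconvert(rf)
    assert 950 <= if_mhz <= 2150, "IF must fall inside the cable IF band"
    print(f"{rf:.3f} GHz -> IF {if_mhz:.0f} MHz")
```

Splitting the Ku range into two bands with two oscillators is what keeps every transponder's IF inside the range ordinary coaxial cable can carry.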
The satellite receiver or set-top box demodulates and converts the signals to the desired form (outputs for television, audio, data, etc.). Often, the receiver includes the capability to selectively unscramble or decrypt the received signal to provide premium services to some subscribers; the receiver is then called an integrated receiver/decoder or IRD. Low-loss cable (e.g. RG-6, RG-11, etc.) is used to connect the receiver to the LNBF or LNB. RG-59 is not recommended for this application as it is not technically designed to carry frequencies above 950 MHz, but may work in some circumstances, depending on the quality of the coaxial wire, signal levels, cable length, etc.
A practical problem relating to home satellite reception is that an LNB can basically only handle a single receiver. This is because the LNB is translating two different circular polarizations (right-hand and left-hand) and, in the case of the Ku band, two different frequency bands (lower and upper) to the same frequency range on the cable. Depending on which frequency and polarization a transponder is using, the satellite receiver has to switch the LNB into one of four different modes in order to receive a specific "channel". This is handled by the receiver using the DiSEqC protocol to control the LNB mode. If several satellite receivers are to be attached to a single dish, a so-called multiswitch has to be used in conjunction with a special type of LNB. There are also LNBs available with a multiswitch already integrated. This problem becomes more complicated when several receivers are to use several dishes (or several LNBs mounted in a single dish) pointing to different satellites.
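The four LNB modes can be modeled as a polarization/band pair. The sketch below uses the classic 13/18 V and 22 kHz-tone signalling of universal LNBs (shown with linear polarization naming; systems using right-/left-hand circular polarization map to the same two states) — a simplified model, not a DiSEqC implementation:

```python
# Simplified model of the four LNB modes: 13/18 V selects the polarisation,
# a 22 kHz tone selects the band. Not an actual DiSEqC implementation.
def lnb_mode(polarisation: str, band: str) -> dict:
    assert polarisation in ("vertical", "horizontal")
    assert band in ("low", "high")
    return {
        "supply_volts": 13 if polarisation == "vertical" else 18,
        "tone_22khz": band == "high",
    }

# Enumerate the four modes the receiver must switch between
for pol in ("vertical", "horizontal"):
    for band in ("low", "high"):
        print(pol, band, lnb_mode(pol, band))
```

Because the control signal travels down the same cable as the IF, one cable can only hold one mode at a time — which is exactly why a second receiver needs a multiswitch.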
A common solution for consumers wanting to access multiple satellites is to deploy a single dish with a single LNB and to rotate the dish using an electric motor. The axis of rotation has to be set up in the north–south direction and, depending on the geographical location of the dish, have a specific vertical tilt. Set up properly the motorized dish when turned will sweep across all possible positions for satellites lined up along the geostationary orbit directly above the equator. The dish will then be capable of receiving any geostationary satellite that is visible at the specific location, i.e. that is above the horizon. The DiSEqC protocol has been extended to encompass commands for steering dish rotors.
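For a motorized (or fixed) dish, the pointing angles toward a geostationary satellite follow from spherical geometry. A rough sketch, assuming a spherical Earth and ignoring refraction (the Berlin/Astra values are illustrative):

```python
import math

R_GEO = 42_164.0  # geostationary orbital radius, km
R_E = 6_378.0     # Earth equatorial radius, km

def look_angles(site_lat_deg, site_lon_deg, sat_lon_deg):
    """Azimuth/elevation toward a geostationary satellite (spherical sketch)."""
    lat = math.radians(site_lat_deg)
    dlon = math.radians(sat_lon_deg - site_lon_deg)
    cos_gamma = math.cos(lat) * math.cos(dlon)
    gamma = math.acos(cos_gamma)  # central angle to the sub-satellite point
    elevation = math.degrees(
        math.atan2(cos_gamma - R_E / R_GEO, math.sin(gamma)))
    azimuth = math.degrees(math.atan2(math.tan(dlon), math.sin(lat)))
    if site_lat_deg >= 0:
        azimuth = 180 + azimuth  # northern hemisphere: dish faces south
    return azimuth % 360, elevation

# Hypothetical site: Berlin (52.5 N, 13.4 E) aimed at Astra 19.2 E
az, el = look_angles(52.5, 13.4, 19.2)
print(f"azimuth {az:.1f} deg, elevation {el:.1f} deg")
```

The motorized-dish setup in the paragraph above effectively sweeps the satellite longitude parameter while the mount geometry keeps the azimuth/elevation pair on the geostationary arc.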
There are five major components in a satellite television system: the programming source, the broadcast center, the satellite, the satellite dish, and the receiver. The uplink facility transmits the signal to the satellite over a narrow beam of microwaves, typically in the C-band frequency range because of its resistance to rain fade. On the receiving side, carrying the signal into the residence at its original Ku-band microwave frequency would require an expensive waveguide, a metal pipe for carrying radio waves; having the LNB translate the block of frequencies down to the L-band at the dish allows cheap coaxial cable to be used instead.
The set-top box selects the channel desired by the user by filtering that channel from the multiple channels received from the satellite, converts the signal to a lower intermediate frequency, decrypts the encrypted signal, demodulates the radio signal and sends the resulting video signal to the television through a cable. To decrypt the signal the receiver box must be "activated" by the satellite company. If the customer fails to pay their monthly bill the box is "deactivated" by a signal from the company, and the system will not work until the company reactivates it. Some receivers are capable of decrypting the received signal itself. These receivers are called integrated receiver/decoders or IRDs.
Analog television which was distributed via satellite was usually sent scrambled or unscrambled in NTSC, PAL, or SECAM television broadcast standards. The analog signal is frequency modulated and is converted from an FM signal to what is referred to as baseband. This baseband comprises the video signal and the audio subcarrier(s). The audio subcarrier is further demodulated to provide a raw audio signal.
Later signals were digitized television signals or multiplex of signals, typically QPSK. In general, digital television, including that transmitted via satellites, is based on open standards such as MPEG and DVB-S/DVB-S2 or ISDB-S.
The conditional access encryption/scrambling methods include NDS, BISS, Conax, Digicipher, Irdeto, Cryptoworks, DG Crypt, Beta digital, SECA Mediaguard, Logiways, Nagravision, PowerVu, Viaccess, Videocipher, and VideoGuard. Many conditional access systems have been compromised.
Sun outage
An event called sun outage occurs when the sun lines up directly behind the satellite in the field of view of the receiving satellite dish. This happens for about a 10-minute period daily around midday, twice every year for a two-week period in the spring and fall around the equinox. During this period, the sun is within the main lobe of the dish's reception pattern, so the strong microwave noise emitted by the sun on the same frequencies used by the satellite's transponders drowns out reception.
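The roughly ten-minute outage can be estimated from the dish's beamwidth and the sun's apparent drift rate. A back-of-the-envelope sketch, assuming the common 70·λ/D beamwidth approximation and an illustrative dish size:

```python
# Rough sun-outage duration: the sun drifts ~0.25 deg/min across the sky;
# the outage lasts while it sits inside the dish's beam plus the ~0.53 deg
# solar disc. Beamwidth ~ 70 * lambda / D is an approximation.
C = 299_792_458  # speed of light, m/s

def outage_minutes(dish_diameter_m: float, freq_ghz: float) -> float:
    wavelength = C / (freq_ghz * 1e9)
    beamwidth_deg = 70 * wavelength / dish_diameter_m
    return (beamwidth_deg + 0.53) / 0.25

# Illustrative 60 cm Ku-band dish
print(f"{outage_minutes(0.6, 12.0):.1f} min")
```

A larger dish has a narrower beam, so — somewhat counterintuitively — big uplink dishes suffer shorter outages than small consumer dishes.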
Uses
Direct-to-home and direct broadcast satellite
Direct-to-home (DTH) can refer either to the communications satellites themselves that deliver the service or to the actual television service. Most satellite television customers in developed television markets get their programming through a direct broadcast satellite (DBS) provider. Signals are transmitted in the Ku band (12 to 18 GHz) and are completely digital, giving high picture and stereo sound quality.
Programming for satellite television channels comes from multiple sources and may include live studio feeds. The broadcast center assembles and packages programming into channels for transmission and, where necessary, encrypts the channels. The signal is then sent to the uplink where it is transmitted to the satellite. With some broadcast centers, the studios, administration and up-link are all part of the same campus. The satellite then translates and broadcasts the channels.
Most systems use the DVB-S standard for transmission. With pay television services, the data stream is encrypted and requires proprietary reception equipment. While the underlying reception technology is similar, the pay television technology is proprietary, often consisting of a conditional-access module and smart card. This measure assures satellite television providers that only authorized, paying subscribers have access to pay television content but at the same time can allow free-to-air channels to be viewed even by the people with standard equipment available in the market.
Some countries operate satellite television services which can be received for free, without paying a subscription fee. This is called free-to-air satellite television. Germany is likely the leader in free-to-air with approximately 250 digital channels (including 83 HDTV channels and various regional channels) broadcast from the Astra 19.2°E satellite constellation. These are not marketed as a DBS service, but are received in approximately 18 million homes, as well as in any home using the Sky Deutschland commercial DBS system. All German analogue satellite broadcasts ceased on 30 April 2012.
The United Kingdom has approximately 160 digital channels (including the regional variations of BBC channels, ITV channels, Channel 4 and Channel 5) that are broadcast without encryption from the Astra 28.2°E satellite constellation, and receivable on any DVB-S receiver (a DVB-S2 receiver is required for certain high definition television services). Most of these channels are included within the Sky EPG, and an increasing number within the Freesat EPG.
India's national broadcaster, Doordarshan, promotes a free-to-air DBS package as "DD Free Dish", which is provided as in-fill for the country's terrestrial transmission network. It is broadcast from GSAT-15 at 93.5°E and contains about 80 FTA channels.
While originally launched as backhaul for their digital terrestrial television service, a large number of French channels are free-to-air on satellites at 5°W, and have recently been announced as being official in-fill for the DTT network.
In North America (United States, Canada and Mexico) there are over 80 FTA digital channels available on Galaxy 19 (with the majority being ethnic or religious in nature). Other FTA satellites include AMC-4, AMC-6, Galaxy 18, and Satmex 5. A company called GloryStar promotes FTA religious broadcasters on Galaxy 19.
Satellite TV has seen a decline in consumers since the 2010s due to the cord-cutting trend where people are shifting towards internet-based streaming television and free over-the-air television.
Television receive-only
The term television receive-only, or TVRO, arose during the early days of satellite television reception to differentiate it from commercial satellite television uplink and downlink operations (transmit and receive). This was the primary method of satellite television transmission before the industry shifted with the launch of higher-powered DBS satellites in the early 1990s, which transmitted their signals on Ku-band frequencies. Satellite television channels at that time were intended to be used by cable television networks rather than received by home viewers. Early satellite television receiver systems were largely constructed by hobbyists and engineers. These early TVRO systems operated mainly on the C-band frequencies, and the dishes required were large, typically over 3 meters (10 ft) in diameter. Consequently, TVRO is often referred to as "big dish" or "Big Ugly Dish" (BUD) satellite television.
TVRO systems were designed to receive analog and digital satellite feeds of both television and audio from both C-band and Ku-band transponders on FSS-type satellites. The higher-frequency Ku-band systems tend to resemble DBS systems and can use a smaller dish antenna because of the higher-power transmissions and greater antenna gain. TVRO systems tend to use larger satellite dish antennas, since it is more likely that the owner of a TVRO system has a C-band-only setup rather than a Ku-band-only setup. Additional receiver boxes allow for different types of digital satellite signal reception, such as DVB/MPEG-2 and 4DTV.
The narrow beam width of a normal parabolic satellite antenna means it can only receive signals from a single satellite at a time. The Simulsat and the Vertex-RSI TORUS are quasi-parabolic satellite earth-station antennas capable of receiving transmissions from 35 or more C- and Ku-band satellites simultaneously.
History
Early history
In 1945 British science fiction writer Arthur C. Clarke proposed a worldwide communications system which would function by means of three satellites equally spaced apart in Earth orbit. This was published in the October 1945 issue of the Wireless World magazine and won him the Franklin Institute's Stuart Ballantine Medal in 1963.
Satellite-relayed communication was achieved early in the space age: the first relay test was conducted by Pioneer 1, and the first radio broadcast was made by SCORE at the end of 1958, the year after Sputnik 1 became the first satellite in history.
First satellite relayed broadcasts
The first public satellite television signals from Europe to North America were relayed via the Telstar satellite over the Atlantic Ocean on 23 July 1962, although a test broadcast had taken place almost two weeks earlier, on 11 July. The signals were received and broadcast in North American and European countries and watched by over 100 million people. Launched in 1962, the Relay 1 satellite was the first satellite to transmit television signals from the US to Japan. The first geosynchronous communication satellite, Syncom 2, was launched on 26 July 1963. The subsequent first geostationary satellite, Syncom 3, orbiting near the International Date Line, was used to telecast the 1964 Olympic Games from Tokyo to the United States.
The world's first commercial communications satellite, called Intelsat I and nicknamed "Early Bird", was launched into geosynchronous orbit on April 6, 1965. The first national network of television satellites, called Orbita, was created by the Soviet Union in October 1967, and was based on the principle of using the highly elliptical Molniya satellite for rebroadcasting and delivering of television signals to ground downlink stations.
Development of the direct satellite TV industry
The first domestic satellite to carry television transmissions was Canada's geostationary Anik 1, which was launched on 9 November 1972.
ATS-6, the world's first experimental educational and direct broadcast satellite (DBS), was launched on 30 May 1974. It transmitted at 860 MHz using wideband FM modulation and had two sound channels. The transmissions were focused on the Indian subcontinent but experimenters were able to receive the signal in Western Europe using home constructed equipment that drew on UHF television design techniques already in use.
The first in a series of Soviet geostationary satellites to carry direct-to-home television, Ekran 1, was launched on 26 October 1976. It used a 714 MHz UHF downlink frequency so that the transmissions could be received with existing UHF television technology rather than microwave technology.
The satellite television industry developed in the US from the cable television industry as communication satellites were being used to distribute television programming to remote cable television headends. Home Box Office (HBO), Turner Broadcasting System (TBS), and Christian Broadcasting Network (CBN, later The Family Channel) were among the first to use satellite television to deliver programming. Taylor Howard of San Andreas, California, became the first person to receive C-band satellite signals with his home-built system in 1976.
In the US, PBS, a non-profit public broadcasting service, began to distribute its television programming by satellite in 1978.
In 1979, Soviet engineers developed the Moskva (or Moscow) system of broadcasting and delivering of TV signals via satellites. They launched the Gorizont communication satellites later that same year. These satellites used geostationary orbits. They were equipped with powerful on-board transponders, so the size of receiving parabolic antennas of downlink stations was reduced to 4 and 2.5 metres. On October 18, 1979, the Federal Communications Commission (FCC) began allowing people to have home satellite earth stations without a federal government license. The front cover of the 1979 Neiman-Marcus Christmas catalogue featured the first home satellite TV stations on sale for $36,500. The dishes were nearly in diameter and were remote controlled. The price went down by half soon after that, but there were only eight more channels. The Society for Private and Commercial Earth Stations (SPACE), an organisation which represented consumers and satellite TV system owners, was established in 1980.
Early satellite television systems were not very popular due to their expense and large dish size. The satellite television dishes of the systems in the late 1970s and early 1980s were in diameter, made of fibreglass or solid aluminum or steel, and in the United States cost more than $5,000, sometimes as much as $10,000. Programming sent from ground stations was relayed from eighteen satellites in geostationary orbit located above the Earth.
TVRO/C-band satellite era, 1980–1986
By 1980, satellite television was well established in the US and Europe. On 26 April 1982, the first satellite channel in the UK, Satellite Television Ltd. (later Sky One), was launched. Its signals were transmitted from the ESA's Orbital Test Satellites. Between 1981 and 1985, TVRO systems' sales rates increased as prices fell. Advances in receiver technology and the use of gallium arsenide FET technology enabled the use of smaller dishes. Five hundred thousand systems, some costing as little as $2000, were sold in the US in 1984. Dishes pointing to one satellite were even cheaper. People in areas without local broadcast stations or cable television service could obtain good-quality reception with no monthly fees. The large dishes were a subject of much consternation, as many people considered them eyesores, and in the US most condominiums, neighborhoods, and other homeowner associations tightly restricted their use, except in areas where such restrictions were illegal. These restrictions were altered in 1986 when the Federal Communications Commission ruled all of them illegal. A municipality could require a property owner to relocate the dish if it violated other zoning restrictions, such as a setback requirement, but could not outlaw their use. The necessity of these restrictions would slowly decline as the dishes got smaller.
Originally, all channels were broadcast in the clear because the equipment necessary to receive the programming was too expensive for consumers. With the growing number of TVRO systems, the program providers and broadcasters had to scramble their signals and develop subscription systems.
In October 1984, the U.S. Congress passed the Cable Communications Policy Act of 1984, which gave those using TVRO systems the right to receive signals for free unless they were scrambled, and required those who did scramble to make their signals available for a reasonable fee. Since cable channels could prevent reception by big dishes, other companies had an incentive to offer competition. In January 1986, HBO began using the now-obsolete VideoCipher II system to encrypt their channels. Other channels used less secure television encryption systems. The scrambling of HBO was met with much protest from owners of big-dish systems, most of whom had no other option at the time for receiving such channels and who argued that clear signals from cable channels would become difficult to receive. Eventually HBO allowed dish owners to subscribe directly to their service for $12.95 per month, a price equal to or higher than what cable subscribers were paying, and required a descrambler to be purchased for $395. This led to the attack on HBO's transponder on the Galaxy 1 satellite by John R. MacDougall in April 1986. One by one, all commercial channels followed HBO's lead and began scrambling their channels. The Satellite Broadcasting and Communications Association (SBCA) was founded on December 2, 1986, as the result of a merger between SPACE and the Direct Broadcast Satellite Association (DBSA).
Videocipher II used analog scrambling on its video signal and Data Encryption Standard–based encryption on its audio signal. VideoCipher II was defeated, and there was a black market for descrambler devices which were initially sold as "test" devices.
1987 to present
By 1987, nine channels were scrambled, but 99 others were available free-to-air. While HBO initially charged a monthly fee of $19.95, soon it became possible to unscramble all channels for $200 a year. Dish sales went down from 600,000 in 1985 to 350,000 in 1986, but pay television services were seeing dishes as something positive since some people would never have cable service, and the industry was starting to recover as a result. Scrambling also led to the development of pay-per-view events. On November 1, 1988, NBC began scrambling its C-band signal but left its Ku band signal unencrypted in order for affiliates to not lose viewers who could not see their advertising. Most of the two million satellite dish users in the United States still used C-band. ABC and CBS were considering scrambling, though CBS was reluctant due to the number of people unable to receive local network affiliates. The piracy on satellite television networks in the US led to the introduction of the Cable Television Consumer Protection and Competition Act of 1992. This legislation enabled anyone caught engaging in signal theft to be fined up to $50,000 and to be sentenced to a maximum of two years in prison. A repeat offender can be fined up to $100,000 and be imprisoned for up to five years.
Satellite television had also developed in Europe but it initially used low power communication satellites and it required dish sizes of over 1.7 metres. On 11 December 1988, however, Luxembourg launched Astra 1A, the first satellite to provide medium power satellite coverage to Western Europe. This was one of the first medium-powered satellites, transmitting signals in Ku band and allowing reception with small dishes (90 cm). The launch of Astra beat the winner of the UK's state Direct Broadcast Satellite licence holder, British Satellite Broadcasting, to the market.
Commercial satellite broadcasts have existed in Japan since 1992, led by NHK, which is influential in the development of regulations and has access to government funding for research. Its entry into the market was protected by the Ministry of Posts and Telecommunications (MPT), resulting in the WOWOW channel, which is encrypted and can be accessed from NHK dishes with a decoder.
In the US in the early 1990s, four large cable companies launched PrimeStar, a direct broadcasting company using medium power satellites. The relatively strong transmissions allowed the use of smaller (90 cm) dishes. Its popularity declined with the 1994 launch of the Hughes DirecTV and Dish Network satellite television systems.
Digital satellite broadcasts began in 1994 in the United States through DirecTV using the DSS format. They were launched (with the DVB-S standard) in South Africa, Middle East, North Africa and Asia-Pacific in 1994 and 1995, and in 1996 and 1997 in European countries including France, Germany, Spain, Portugal, Italy and the Netherlands, as well as Japan, North America and Latin America. Digital DVB-S broadcasts in the United Kingdom and Ireland started in 1998. Japan started broadcasting with the ISDB-S standard in 2000.
On March 4, 1996, EchoStar introduced Digital Sky Highway (Dish Network) using the EchoStar 1 satellite. EchoStar launched a second satellite in September 1996 to increase the number of channels available on Dish Network to 170. These systems provided better pictures and stereo sound on 150–200 video and audio channels, and allowed small dishes to be used. This greatly reduced the popularity of TVRO systems. In the mid-1990s, channels began moving their broadcasts to digital television transmission using the DigiCipher conditional access system.
In addition to encryption, the widespread availability, in the US, of DBS services such as PrimeStar and DirecTV had been reducing the popularity of TVRO systems since the early 1990s. Signals from DBS satellites (operating in the more recent Ku band) are higher in both frequency and power (due to improvements in the solar panels and energy efficiency of modern satellites) and therefore require much smaller dishes than C-band, and the digital modulation methods now used require less signal strength at the receiver than analog modulation methods. Each satellite also can carry up to 32 transponders in the Ku band, but only 24 in the C band, and several digital subchannels can be multiplexed (MCPC) or carried separately (SCPC) on a single transponder. Advances in noise reduction due to improved microwave technology and semiconductor materials have also had an effect. However, one consequence of the higher frequencies used for DBS services is rain fade where viewers lose signal during a heavy downpour. C-band satellite television signals are less prone to rain fade.
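A rough sense of why higher frequencies permit smaller dishes comes from the standard parabolic-antenna gain formula G = η(πD/λ)². The sketch below is illustrative only (the 60% aperture efficiency and the 4 GHz and 12 GHz downlink frequencies are assumptions, not figures from this article); it shows the same 3 m dish gaining about 9.5 dB more at Ku band, so a smaller Ku-band dish can match the gain a larger C-band dish needs:

```python
import math

def dish_gain_dbi(diameter_m, freq_hz, efficiency=0.6):
    """Approximate parabolic-dish gain: G = efficiency * (pi * D / wavelength)^2."""
    wavelength = 3e8 / freq_hz
    g = efficiency * (math.pi * diameter_m / wavelength) ** 2
    return 10 * math.log10(g)

# The same 3 m dish at a C-band downlink (~4 GHz) vs a Ku-band one (~12 GHz):
print(round(dish_gain_dbi(3.0, 4e9), 1))   # 39.8 dBi
print(round(dish_gain_dbi(3.0, 12e9), 1))  # 49.3 dBi: ~9.5 dB more at Ku band
```

Since gain scales with (D/λ)², holding gain constant means the diameter scales with wavelength, i.e. inversely with frequency; the higher transmit power of DBS satellites shrinks the required dish further.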
In a return to the older but proven technology of satellite communication, the current DBS-based satellite providers in the US (Dish Network and DirecTV) now use additional capacity on the Ku-band transponders of existing FSS-class satellites, in addition to the capacity of their own fleets of DBS satellites in orbit. This provides more channel capacity for their systems, as required by the increasing number of high-definition and simulcast local station channels. To receive the channels carried on these Ku-band FSS transponders, both DirecTV and Dish Network issue to their subscribers dishes twice the diameter (36") of the 18" (and, for the Dish Network "Dish500", 20") dishes the services used initially. These dishes are equipped with two circular-polarized LNBFs (for reception of the provider's two native DBS satellites, one per LNBF) and one standard linear-polarized LNB for reception of channels from an FSS-type satellite. The newer DBS/FSS-hybrid dishes, marketed by DirecTV and Dish Network as the "SlimLine" and "SuperDish" models respectively, are now the standard for both providers; the original 18"/20" single- or dual-LNBF dishes are either obsolete or used only for program packages, separate channels, or services broadcast solely over the providers' DBS satellites.
On 29 November 1999 US President Bill Clinton signed the Satellite Home Viewer Improvement Act (SHVIA). The act allowed Americans to receive local broadcast signals via direct broadcast satellite systems for the first time.
Legal
The 1963 Radio Regulations of the International Telecommunication Union (ITU) defined a "broadcasting satellite service" as a "space service in which signals transmitted or retransmitted by space stations, or transmitted by reflection from objects in orbit around the Earth, are intended for direct reception by the general public."
In the 1970s some states grew concerned that external broadcasting could alter the cultural or political identity of a state, leading to the New World Information and Communication Order (NWICO) proposal. However, satellite broadcasts cannot be restricted on a per-state basis due to the limitations of the technology. Around the time the MacBride report was released, satellite broadcasting was being discussed at the UN Committee on the Peaceful Uses of Outer Space (COPUOS), where most of the members supported prior-consent restrictions for broadcasting in their territories, but some argued this would violate freedom of information. The parties were unable to reach a consensus, and in 1982 submitted UNGA Res 37/92 ("DBS Principles") to the UN General Assembly, which adopted it by a majority vote; however, most states capable of DBS voted against it. The "DBS Principles" resolution is generally regarded as ineffective.
| Technology | Media and communication | null |
4342970 | https://en.wikipedia.org/wiki/Surface%20%28mathematics%29 | Surface (mathematics) | In mathematics, a surface is a mathematical model of the common concept of a surface. It is a generalization of a plane, but, unlike a plane, it may be curved; this is analogous to a curve generalizing a straight line.
There are several more precise definitions, depending on the context and the mathematical tools that are used for the study. The simplest mathematical surfaces are planes and spheres in the Euclidean 3-space. Typically, in algebraic geometry, a surface may cross itself (and may have other singularities), while, in topology and differential geometry, it may not.
A surface is a topological space of dimension two; this means that a moving point on a surface may move in two directions (it has two degrees of freedom). In other words, around almost every point, there is a coordinate patch on which a two-dimensional coordinate system is defined. For example, the surface of the Earth resembles (ideally) a sphere, and latitude and longitude provide two-dimensional coordinates on it (except at the poles and along the 180th meridian).
Definitions
Often, a surface is defined by equations that are satisfied by the coordinates of its points. This is the case of the graph of a continuous function of two variables. The set of the zeros of a function of three variables is a surface, which is called an implicit surface. If the defining three-variate function is a polynomial, the surface is an algebraic surface. For example, the unit sphere is an algebraic surface, as it may be defined by the implicit equation x^2 + y^2 + z^2 - 1 = 0.
A surface may also be defined as the image, in some space of dimension at least 3, of a continuous function of two variables (some further conditions are required to ensure that the image is not a curve). In this case, one says that one has a parametric surface, which is parametrized by these two variables, called parameters. For example, the unit sphere may be parametrized by the Euler angles, also called longitude t and latitude u, by x = cos(t) cos(u), y = sin(t) cos(u), z = sin(u).
Parametric equations of surfaces are often irregular at some points. For example, all but two points of the unit sphere are the image, by the above parametrization, of exactly one pair of Euler angles (modulo 2π). For the remaining two points (the north and south poles), the latitude is ±π/2, and the longitude may take any value. Also, there are surfaces for which there cannot exist a single parametrization that covers the whole surface. Therefore, one often considers surfaces which are parametrized by several parametric equations, whose images cover the surface. This is formalized by the concept of manifold: in the context of manifolds, typically in topology and differential geometry, a surface is a manifold of dimension two; this means that a surface is a topological space such that every point has a neighborhood which is homeomorphic to an open subset of the Euclidean plane (see Surface (topology) and Surface (differential geometry)). This allows defining surfaces in spaces of dimension higher than three, and even abstract surfaces, which are not contained in any other space. On the other hand, this excludes surfaces that have singularities, such as the vertex of a conical surface or points where a surface crosses itself.
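The irregularity at the poles can be checked numerically. The sketch below assumes the standard sphere parametrization S(t, u) = (cos t cos u, sin t cos u, sin u); it approximates the two partial-derivative vectors by finite differences and tests whether their cross product vanishes, which happens exactly where the Jacobian drops below rank two:

```python
import math

def sphere(t, u):
    # Assumed standard parametrization: t = longitude, u = latitude
    return (math.cos(t) * math.cos(u), math.sin(t) * math.cos(u), math.sin(u))

def partials(t, u, h=1e-6):
    """Finite-difference partial derivatives of the parametrization."""
    p = sphere(t, u)
    dt = tuple((a - b) / h for a, b in zip(sphere(t + h, u), p))
    du = tuple((a - b) / h for a, b in zip(sphere(t, u + h), p))
    return dt, du

def cross_norm(a, b):
    c = (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
    return math.sqrt(sum(x * x for x in c))

print(cross_norm(*partials(1.0, 0.5)) > 0.1)           # True: rank two, regular
print(cross_norm(*partials(1.0, math.pi / 2)) < 1e-3)  # True: rank drops at the pole
```

For this parametrization the cross-product norm works out to |cos u|, which is nonzero everywhere except at the poles u = ±π/2.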
In classical geometry, a surface is generally defined as a locus of a point or a line. For example, a sphere is the locus of a point which is at a given distance of a fixed point, called the center; a conical surface is the locus of a line passing through a fixed point and crossing a curve; a surface of revolution is the locus of a curve rotating around a line. A ruled surface is the locus of a moving line satisfying some constraints; in modern terminology, a ruled surface is a surface, which is a union of lines.
Terminology
There are several kinds of surfaces that are considered in mathematics. An unambiguous terminology is thus necessary to distinguish them when needed. A topological surface is a surface that is a manifold of dimension two (see below). A differentiable surface is a surface that is a differentiable manifold (see below). Every differentiable surface is a topological surface, but the converse is false.
A "surface" is often implicitly supposed to be contained in a Euclidean space of dimension 3, typically R^3. A surface that is contained in a projective space is called a projective surface (see below). A surface that is not supposed to be included in another space is called an abstract surface.
Examples
The graph of a continuous function of two variables, defined over a connected open subset of R^2, is a topological surface. If the function is differentiable, the graph is a differentiable surface.
A plane is both an algebraic surface and a differentiable surface. It is also a ruled surface and a surface of revolution.
A circular cylinder (that is, the locus of a line crossing a circle and parallel to a given direction) is an algebraic surface and a differentiable surface.
A circular cone (locus of a line crossing a circle, and passing through a fixed point, the apex, which is outside the plane of the circle) is an algebraic surface which is not a differentiable surface. If one removes the apex, the remainder of the cone is the union of two differentiable surfaces.
The surface of a polyhedron is a topological surface, which is neither a differentiable surface nor an algebraic surface.
A hyperbolic paraboloid (the graph of the function ) is a differentiable surface and an algebraic surface. It is also a ruled surface, and, for this reason, is often used in architecture.
A two-sheet hyperboloid is an algebraic surface and the union of two non-intersecting differentiable surfaces.
Parametric surface
A parametric surface is the image of an open subset of the Euclidean plane (typically R^2) by a continuous function, in a topological space, generally a Euclidean space of dimension at least three. Usually the function is supposed to be continuously differentiable, and this will always be the case in this article.
Specifically, a parametric surface in R^3 is given by three functions of two variables u and v, called parameters
As the image of such a function may be a curve (for example, if the three functions are constant with respect to one of the parameters), a further condition is required, generally that, for almost all values of the parameters, the Jacobian matrix
has rank two. Here "almost all" means that the values of the parameters where the rank is two contain a dense open subset of the range of the parametrization. For surfaces in a space of higher dimension, the condition is the same, except for the number of columns of the Jacobian matrix.
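The need for the rank condition can be illustrated with a degenerate example: if all three coordinate functions ignore one parameter, the image collapses to a curve. A minimal numerical sketch (the functions below are hypothetical, chosen only for illustration):

```python
import math

def degenerate(u, v):
    # All three coordinate functions ignore v, so the image is only a circle.
    return (math.cos(u), math.sin(u), 0.0)

def jacobian(f, u, v, h=1e-6):
    """Finite-difference rows of the Jacobian: d/du and d/dv."""
    p = f(u, v)
    du = [(a - b) / h for a, b in zip(f(u + h, v), p)]
    dv = [(a - b) / h for a, b in zip(f(u, v + h), p)]
    return du, dv

du, dv = jacobian(degenerate, 0.7, 1.3)
# The d/dv row is identically zero, so the rank never reaches two:
print(all(abs(x) < 1e-9 for x in dv))  # True
```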
Tangent plane and normal vector
A point where the above Jacobian matrix has rank two is called regular, or, more properly, the parametrization is called regular at that point.
The tangent plane at a regular point is the unique plane passing through the point and having a direction parallel to the two row vectors of the Jacobian matrix. The tangent plane is an affine concept, because its definition is independent of the choice of a metric. In other words, any affine transformation maps the tangent plane to the surface at a point to the tangent plane to the image of the surface at the image of the point.
The normal line at a point of a surface is the unique line passing through the point and perpendicular to the tangent plane; the normal vector is a vector which is parallel to the normal.
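As a small sketch (using the unit-sphere parametrization (cos t cos u, sin t cos u, sin u) as an assumed example), the normal vector can be computed as the cross product of the two partial-derivative vectors. At t = u = 0 the hand-computed partials are (0, 1, 0) and (0, 0, 1), and the resulting normal is radial, as expected for a sphere:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

S_t = (0.0, 1.0, 0.0)  # dS/dt of the sphere parametrization at t = u = 0
S_u = (0.0, 0.0, 1.0)  # dS/du at the same point

# The normal vector at the point (1, 0, 0) is radial, as expected for a sphere:
print(cross(S_t, S_u))  # (1.0, 0.0, 0.0)
```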
For other differential invariants of surfaces, in the neighborhood of a point, see Differential geometry of surfaces.
Irregular point and singular point
A point of a parametric surface which is not regular is irregular. There are several kinds of irregular points.
It may occur that an irregular point becomes regular, if one changes the parametrization. This is the case of the poles in the parametrization of the unit sphere by Euler angles: it suffices to permute the role of the different coordinate axes for changing the poles.
On the other hand, consider the circular cone of parametric equation x = v cos(u), y = v sin(u), z = v.
The apex of the cone is the origin (0, 0, 0), and is obtained for v = 0. It is an irregular point that remains irregular, whichever parametrization is chosen (otherwise, there would exist a unique tangent plane). Such an irregular point, where the tangent plane is undefined, is said to be singular.
There is another kind of singular point: the self-crossing points, that is, the points where the surface crosses itself. In other words, these are the points which are obtained for (at least) two different values of the parameters.
Graph of a bivariate function
Let f(x, y) be a function of two real variables. Its graph is a parametric surface, parametrized as x = u, y = v, z = f(u, v).
Every point of this surface is regular, as the first two columns of the Jacobian matrix form the identity matrix of rank two.
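This can be made concrete with a short sketch (the choice f(u, v) = u·v is an illustrative assumption):

```python
def normal(fu, fv):
    """Normal vector (-f_u, -f_v, 1) of the graph z = f(u, v)."""
    return (-fu, -fv, 1.0)

# The Jacobian of (u, v) -> (u, v, f(u, v)) has rows (1, 0, f_u) and (0, 1, f_v);
# its first two columns form the identity, so every point is regular.
# Example with f(u, v) = u*v, so f_u = v and f_v = u; at the point (2, -3, -6):
print(normal(-3.0, 2.0))  # (3.0, -2.0, 1.0)
```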
Rational surface
A rational surface is a surface that may be parametrized by rational functions of two variables. That is, if f_i(u, v) are, for i = 0, 1, 2, 3, polynomials in two indeterminates, then the parametric surface, defined by x = f_1(u, v)/f_0(u, v), y = f_2(u, v)/f_0(u, v), z = f_3(u, v)/f_0(u, v),
is a rational surface.
A rational surface is an algebraic surface, but most algebraic surfaces are not rational.
Implicit surface
An implicit surface in a Euclidean space (or, more generally, in an affine space) of dimension 3 is the set of the common zeros of a differentiable function of three variables: f(x, y, z) = 0.
Implicit means that the equation defines implicitly one of the variables as a function of the other variables. This is made more exact by the implicit function theorem: if f(x_0, y_0, z_0) = 0, and the partial derivative in z of f is not zero at (x_0, y_0, z_0), then there exists a differentiable function φ(x, y) such that f(x, y, φ(x, y)) = 0
in a neighbourhood of (x_0, y_0). In other words, the implicit surface is the graph of a function near a point of the surface where the partial derivative in z is nonzero. An implicit surface has thus, locally, a parametric representation, except at the points of the surface where the three partial derivatives are zero.
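For example, for the unit sphere the local graph representation can be written down explicitly; the sketch below checks that the graph z = sqrt(1 − x² − y²) (valid on the upper hemisphere, where the partial derivative in z is nonzero) does lie on the surface:

```python
import math

def f(x, y, z):
    """The unit sphere as an implicit surface: f = 0."""
    return x*x + y*y + z*z - 1.0

def phi(x, y):
    # Local graph z = phi(x, y), valid where df/dz = 2z > 0 (upper hemisphere)
    return math.sqrt(1.0 - x*x - y*y)

x, y = 0.3, 0.4
print(abs(f(x, y, phi(x, y))) < 1e-12)  # True: the graph lies on the surface
```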
Regular points and tangent plane
A point of the surface where at least one partial derivative of f is nonzero is called regular. At such a point, the tangent plane and the direction of the normal are well defined, and may be deduced, with the implicit function theorem, from the definition given above. The direction of the normal is the gradient, that is, the vector (∂f/∂x, ∂f/∂y, ∂f/∂z).
The tangent plane at a regular point (x_0, y_0, z_0) is defined by its implicit equation ∂f/∂x(x_0, y_0, z_0) (x − x_0) + ∂f/∂y(x_0, y_0, z_0) (y − y_0) + ∂f/∂z(x_0, y_0, z_0) (z − z_0) = 0.
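A short sketch for the unit sphere (f = x² + y² + z² − 1) at the north pole, an illustrative regular point, where the gradient is (0, 0, 2) and the tangent plane is z = 1:

```python
def grad(x, y, z):
    """Gradient of f(x, y, z) = x^2 + y^2 + z^2 - 1."""
    return (2*x, 2*y, 2*z)

p0 = (0.0, 0.0, 1.0)  # the north pole, a regular point
g = grad(*p0)         # (0.0, 0.0, 2.0): the normal direction

def on_tangent_plane(p):
    # Implicit equation of the tangent plane: grad f(p0) . (p - p0) = 0
    return sum(gi * (pi - qi) for gi, pi, qi in zip(g, p, p0)) == 0

print(on_tangent_plane((5.0, -3.0, 1.0)))  # True: the plane is z = 1
print(on_tangent_plane((0.0, 0.0, 0.0)))   # False: the origin is not on it
```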
Singular point
A singular point of an implicit surface (in R^3) is a point of the surface where the implicit equation holds and the three partial derivatives of its defining function are all zero. Therefore, the singular points are the solutions of a system of four equations in three indeterminates. As most such systems have no solution, many surfaces do not have any singular point. A surface with no singular point is called regular or non-singular.
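For instance, for the cone f = x² + y² − z² the four equations can be written out by hand; the sketch below verifies that, among sample points, only the apex satisfies all of them:

```python
def is_singular(x, y, z):
    """Check the four equations f = 0, f_x = 0, f_y = 0, f_z = 0
    for the cone f = x^2 + y^2 - z^2 (partial derivatives computed by hand)."""
    f = x*x + y*y - z*z
    return f == 0 and 2*x == 0 and 2*y == 0 and -2*z == 0

print(is_singular(0, 0, 0))  # True: the apex is the singular point
print(is_singular(1, 0, 1))  # False: on the cone, but the gradient is nonzero
```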
The study of surfaces near their singular points and the classification of the singular points is singularity theory. A singular point is isolated if there is no other singular point in a neighborhood of it. Otherwise, the singular points may form a curve. This is in particular the case for self-crossing surfaces.
Algebraic surface
Originally, an algebraic surface was a surface which may be defined by an implicit equation f(x, y, z) = 0,
where f is a polynomial in three indeterminates, with real coefficients.
The concept has been extended in several directions, by defining surfaces over arbitrary fields, and by considering surfaces in spaces of arbitrary dimension or in projective spaces. Abstract algebraic surfaces, which are not explicitly embedded in another space, are also considered.
Surfaces over arbitrary fields
Polynomials with coefficients in any field are accepted for defining an algebraic surface.
However, the field of coefficients of a polynomial is not well defined, as, for example, a polynomial with rational coefficients may also be considered as a polynomial with real or complex coefficients. Therefore, the concept of point of the surface has been generalized in the following way.
Given a polynomial f, let k be the smallest field containing the coefficients, and K be an algebraically closed extension of k, of infinite transcendence degree. Then a point of the surface is an element of K^3 which is a solution of the equation f(x, y, z) = 0.
If the polynomial has real coefficients, the field K is the complex field, and a point of the surface that belongs to R^3 (a usual point) is called a real point. A point that belongs to k^3 is called rational over k, or simply a rational point, if k is the field of rational numbers.
Projective surface
A projective surface in a projective space of dimension three is the set of points whose homogeneous coordinates are zeros of a single homogeneous polynomial in four variables. More generally, a projective surface is a subset of a projective space, which is a projective variety of dimension two.
Projective surfaces are strongly related to affine surfaces (that is, ordinary algebraic surfaces). One passes from a projective surface to the corresponding affine surface by setting to one some coordinate or indeterminate of the defining polynomials (usually the last one). Conversely, one passes from an affine surface to its associated projective surface (called projective completion) by homogenizing the defining polynomial (in case of surfaces in a space of dimension three), or by homogenizing all polynomials of the defining ideal (for surfaces in a space of higher dimension).
In higher dimensional spaces
One cannot define the concept of an algebraic surface in a space of dimension higher than three without a general definition of an algebraic variety and of the dimension of an algebraic variety. In fact, an algebraic surface is an algebraic variety of dimension two.
More precisely, an algebraic surface in a space of dimension n is the set of the common zeros of at least n − 2 polynomials, but these polynomials must satisfy further conditions that may be not immediate to verify. Firstly, the polynomials must not define a variety or an algebraic set of higher dimension, which is typically the case if one of the polynomials is in the ideal generated by the others. Generally, n − 2 polynomials define an algebraic set of dimension two or higher. If the dimension is two, the algebraic set may have several irreducible components. If there is only one component, the n − 2 polynomials define a surface, which is a complete intersection. If there are several components, then one needs further polynomials for selecting a specific component.
Most authors consider as an algebraic surface only algebraic varieties of dimension two, but some also consider as surfaces all algebraic sets whose irreducible components have the dimension two.
In the case of surfaces in a space of dimension three, every surface is a complete intersection, and a surface is defined by a single polynomial, which is irreducible or not, depending on whether non-irreducible algebraic sets of dimension two are considered as surfaces or not.
Topological surface
In topology, a surface is generally defined as a manifold of dimension two. This means that a topological surface is a topological space such that every point has a neighborhood that is homeomorphic to an open subset of a Euclidean plane.
Every topological surface is homeomorphic to a polyhedral surface such that all facets are triangles. The combinatorial study of such arrangements of triangles (or, more generally, of higher-dimensional simplexes) is the starting object of algebraic topology. This allows the characterization of the properties of surfaces in terms of purely algebraic invariants, such as the genus and homology groups.
The homeomorphism classes of surfaces have been completely described (see Surface (topology)).
Differentiable surface
Fractal surface
In computer graphics
| Mathematics | Three-dimensional space | null |
1082841 | https://en.wikipedia.org/wiki/Fictitious%20force | Fictitious force | A fictitious force is a force that appears to act on a mass whose motion is described using a non-inertial frame of reference, such as a linearly accelerating or rotating reference frame.
Fictitious forces are invoked to maintain the validity, and thus the use, of Newton's second law of motion in frames of reference which are not inertial.
Measurable examples of fictitious forces
Passengers in a vehicle accelerating in the forward direction may perceive that they are acted upon by a force moving them in the direction of the backrests of their seats, for instance. An example in a rotating reference frame is the impression of a force that seems to move objects outward, toward the rim of a centrifuge or carousel.
The fictitious force called a pseudo force might also be referred to as a body force. It is due to an object's inertia when the reference frame does not move inertially any more, but begins to accelerate relative to the free object. In terms of the example of the passenger vehicle, a pseudo force seems to be active just before the body touches the backrest of the seat in the car: a person leaning forward in the car first moves slightly backward relative to the already accelerating car, before touching the backrest. The motion in this short period merely seems to be the result of a force on the person; i.e., it is a pseudo force. A pseudo force does not arise from any physical interaction between two objects, such as electromagnetism or contact forces. It is just a consequence of the acceleration a of the physical object to which the non-inertial reference frame is connected, i.e. the vehicle in this case. From the viewpoint of the accelerating frame, the inert object appears to accelerate, apparently requiring a "force" for this to have happened.
As stated by Iro:
The pseudo force on an object arises as an imaginary influence when the frame of reference used to describe the object's motion is accelerating compared to a non-accelerating frame. The pseudo force "explains," within Newton's second-law mechanics, why an object does not appear to follow Newton's second law and "floats freely" as if weightless. As a frame may accelerate in any arbitrary way, so may pseudo forces be as arbitrary (but only in direct response to the acceleration of the frame). An example of a pseudo force as defined by Iro is the Coriolis force, perhaps better called the Coriolis effect. The gravitational force would also be a fictitious force (pseudo force) in a field model in which particles distort spacetime due to their mass, such as in the theory of general relativity.
Assuming Newton's second law in the form F = ma, fictitious forces are always proportional to the mass m.
The fictitious force that has been called an inertial force is also referred to as a d'Alembert force, or sometimes as a pseudo force. D'Alembert's principle is just another way of formulating Newton's second law of motion. It defines an inertial force as the negative of the product of mass times acceleration, just for the sake of easier calculations.
(A d'Alembert force is not to be confused with a contact force arising from the physical interaction between two objects, which is the subject of Newton's third law – 'action is reaction'. In terms of the example of the passenger vehicle above, a contact force emerges when the body of the passenger touches the backrest of the seat in the car. It is present for as long as the car is accelerated.)
Four fictitious forces have been defined for frames accelerated in commonly occurring ways:
one caused by any acceleration relative to the origin in a straight line (rectilinear acceleration);
two involving rotation: centrifugal force and Coriolis effect
and a fourth, called the Euler force, caused by a variable rate of rotation, should that occur.
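Under the usual conventions these four contributions combine into one formula, F_fict = −m[a_frame + ω × (ω × r) + 2ω × v + α × r], with r and v measured in the non-inertial frame. The sketch below (function names are illustrative, not from any particular library) evaluates it for a mass at rest in a uniformly rotating frame, where only the centrifugal term survives:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def fictitious_force(m, a_frame, omega, alpha, r, v):
    """Sum of the four fictitious forces on a mass m at position r and
    velocity v, both measured in the non-inertial frame:
      rectilinear: -m * a_frame          centrifugal: -m * w x (w x r)
      Coriolis:    -2m * w x v           Euler:       -m * alpha x r
    """
    cf = cross(omega, cross(omega, r))
    co = cross(omega, v)
    eu = cross(alpha, r)
    return tuple(-m * (a + c + 2*o + e)
                 for a, c, o, e in zip(a_frame, cf, co, eu))

# A 1 kg mass at rest, 1 m from the axis of a frame rotating at 2 rad/s:
F = fictitious_force(1.0, (0, 0, 0), (0, 0, 2.0), (0, 0, 0), (1.0, 0, 0), (0, 0, 0))
print(F[0])  # 4.0 -- the outward centrifugal force m * w^2 * r = 4 N
```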
Background
The role of fictitious forces in Newtonian mechanics is described by Tonnelat:
Fictitious forces arise in classical mechanics and special relativity in all non-inertial frames.
Inertial frames are privileged over non-inertial frames because they do not have physics whose causes are outside of the system, while non-inertial frames do. Fictitious forces, or physics whose cause is outside of the system, are no longer necessary in general relativity, since these physics are explained with the geodesics of spacetime: "The field of all possible space-time null geodesics or photon paths unifies the absolute local non-rotation standard throughout space-time.".
On Earth
The surface of the Earth is a rotating reference frame. To solve classical mechanics problems exactly in an Earthbound reference frame, three fictitious forces must be introduced: the Coriolis force, the centrifugal force (described below) and the Euler force. The Euler force is typically ignored because the variations in the angular velocity of the rotating surface of the Earth are usually insignificant. Both of the other fictitious forces are weak compared to most typical forces in everyday life, but they can be detected under careful conditions.
For example, Léon Foucault used his Foucault pendulum to show that the Coriolis force results from the Earth's rotation. If the Earth were to rotate twenty times faster (making each day only ~72 minutes long), people could easily get the impression that such fictitious forces were pulling on them, as on a spinning carousel. People in temperate and tropical latitudes would, in fact, need to hold on, in order to avoid being launched into orbit by the centrifugal force.
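The sizes involved can be estimated with a back-of-the-envelope calculation (the sidereal day length and equatorial radius below are approximate assumed values): the centrifugal acceleration at the equator is ω²R, a fraction of a percent of g today, but it grows with the square of the rotation rate:

```python
import math

T = 86164.0    # sidereal day, s (approximate)
R = 6.378e6    # equatorial radius, m (approximate)
omega = 2 * math.pi / T

a_centrifugal = omega**2 * R
print(round(a_centrifugal, 4))  # 0.0339 m/s^2, roughly 0.3% of g

# At twenty times the rotation rate, the acceleration grows by a factor of 400:
print(round((20 * omega)**2 * R, 1))  # 13.6 m/s^2, exceeding g = 9.81 m/s^2
```

At twenty times the present rotation rate the centrifugal acceleration at the equator would exceed g, consistent with the picture of objects being flung off.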
When moving along the equator in a ship heading in an easterly direction, objects appear to be slightly lighter than on the way back. This phenomenon has been observed and is called the Eötvös effect.
Detection of non-inertial reference frame
Observers inside a closed box that is moving with a constant velocity cannot detect their own motion; however, observers within an accelerating reference frame can detect that they are in a non-inertial reference frame from the fictitious forces that arise. For example, for straight-line acceleration Vladimir Arnold presents the following theorem:
Other accelerations also give rise to fictitious forces, as described mathematically below. The physical explanation of motions in an inertial frame is the simplest possible, requiring no fictitious forces: fictitious forces are zero, providing a means to distinguish inertial frames from others.
An example of the detection of a non-inertial, rotating reference frame is the precession of a Foucault pendulum. In the non-inertial frame of the Earth, the fictitious Coriolis force is necessary to explain observations. In an inertial frame outside the Earth, no such fictitious force is necessary.
Example concerning circular motion
The effect of a fictitious force also arises when a car takes a bend. Observed from a non-inertial frame of reference attached to the car, a fictitious force called the centrifugal force appears. As the car enters a left turn, a suitcase on the left rear seat slides across to the right rear seat and continues until it contacts the closed right door. This phase of the motion is attributed to the fictitious centrifugal force: it may seem that some force must be responsible for the movement, but in fact it arises from the inertia of the suitcase, which is (still) a 'free object' within an already accelerating frame of reference.
Once the suitcase is in contact with the closed door of the car, contact forces come into play. The car's centripetal force is now transmitted to the suitcase as well, and Newton's third law applies, with the centripetal force as the action and the so-called reactive centrifugal force as the reaction. The reactive centrifugal force is likewise due to the inertia of the suitcase; now, however, the inertia manifests as resistance to a change in its state of motion.
Suppose that a few miles further on, the car circles a roundabout at constant speed, lap after lap. The occupants then feel as if they are being pushed toward the outside of the vehicle by the (reactive) centrifugal force, away from the centre of the turn.
The situation can be viewed from inertial as well as from non-inertial frames.
From the viewpoint of an inertial reference frame stationary with respect to the road, the car is accelerating toward the centre of the circle. It is accelerating because the direction of its velocity is changing, even though the car has constant speed. This inward acceleration is called centripetal acceleration; it requires a centripetal force to maintain the circular motion. This force is exerted by the ground on the wheels, in this case through the friction between the wheels and the road. The car is accelerating due to this unbalanced force, which causes it to move in a circle.
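The inward acceleration in this example follows from a = v²/r; a minimal sketch with illustrative numbers:

```python
def centripetal_acceleration(speed_ms, radius_m):
    """Inward acceleration needed to hold a body on a circle of the
    given radius at the given constant speed: a = v^2 / r."""
    return speed_ms ** 2 / radius_m

# A car rounding a 25 m bend at 10 m/s (36 km/h) needs an inward
# acceleration of 4.0 m/s^2, about 0.41 g, supplied by tyre friction.
a = centripetal_acceleration(10.0, 25.0)
print(a, a / 9.80665)
```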
Gravimetry

Gravimetry is the measurement of the strength of a gravitational field. Gravimetry may be used when either the magnitude of a gravitational field or the properties of matter responsible for its creation are of interest. The study of gravity changes belongs to geodynamics.
Units of measurement
Gravity is usually measured in units of acceleration. In the SI system of units, the standard unit of acceleration is metres per second squared (m/s²). Other units include the cgs gal (sometimes known as a galileo, in either case with symbol Gal), which equals 1 centimetre per second squared, and the g (gn), equal to 9.80665 m/s². The value of gn is defined to be close to the acceleration due to gravity at the Earth's surface, although the actual acceleration varies slightly by location.
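For reference, the unit relationships above can be expressed in a few lines (constant names are illustrative):

```python
# Unit conversions used throughout gravimetry.
GAL = 0.01            # 1 Gal = 1 cm/s^2 = 0.01 m/s^2
MGAL = 1e-3 * GAL     # 1 mGal = 1e-5 m/s^2
STANDARD_G = 9.80665  # m/s^2, the defined value of g_n

def ms2_to_mgal(a_ms2):
    """Express an acceleration given in m/s^2 in milligals."""
    return a_ms2 / MGAL

# Standard gravity expressed in gals and milligals:
print(STANDARD_G / GAL)         # ≈ 980.665 Gal
print(ms2_to_mgal(STANDARD_G))  # ≈ 980665 mGal
```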
Gravimeters
A gravimeter is an instrument used to measure gravitational acceleration. Every mass has an associated gravitational potential. The gradient of this potential is a force. A gravimeter measures this gravitational force.
For a small body, general relativity predicts gravitational effects indistinguishable from the effects of acceleration by the equivalence principle. Thus, gravimeters can be regarded as special-purpose accelerometers. Many weighing scales may be regarded as simple gravimeters. In one common form, a spring is used to counteract the force of gravity pulling on an object. The change in length of the spring may be calibrated to the force required to balance the gravitational pull. The resulting measurement may be made in units of force (such as the newton); however, gravimeters display their measurements in units of gals (cm/s²), or in parts per million, parts per billion, or parts per trillion of the average vertical acceleration with respect to the Earth.
Though similar in design to other accelerometers, gravimeters are typically designed to be much more sensitive. Their first uses were to measure the changes in gravity caused by the varying densities and distribution of masses inside the Earth, and by temporal tidal variations in the shape and distribution of mass in the oceans, atmosphere and solid Earth.
The resolution of gravimeters can be increased by averaging samples over longer periods. Fundamental characteristics of gravimeters are the accuracy of a single measurement (a single sample) and the sampling rate.
Besides precision, stability is also an important property for a gravimeter as it allows the monitoring of gravity changes. These changes can be the result of mass displacements inside the Earth, or of vertical movements of the Earth's crust on which measurements are being made.
The first gravimeters were vertical accelerometers, specialized for measuring the constant downward acceleration of gravity on the Earth's surface. The Earth's vertical gravity varies from place to place over its surface by about ±0.5%. It varies by about (nanometers per second squared) at any location because of the changing positions of the Sun and Moon relative to the Earth.
The majority of modern gravimeters use specially designed metal or quartz zero-length springs to support the test mass. The special property of these springs is that the natural resonant period of oscillation of the spring–mass system can be made very long, approaching a thousand seconds. This detunes the test mass from most local vibration and mechanical noise, increasing the sensitivity and utility of the gravimeter. Quartz and metal springs are chosen for different reasons; quartz springs are less affected by magnetic and electric fields while metal springs have a much lower drift due to elongation over time. The test mass is sealed in an air-tight container so that tiny changes of barometric pressure from blowing wind and other weather do not change the buoyancy of the test mass in air. Spring gravimeters are, in practice, relative instruments that measure the difference in gravity between different locations. A relative instrument also requires calibration by comparing instrument readings taken at locations with known absolute values of gravity.
Absolute gravimeters provide such measurements by determining the gravitational acceleration of a test mass in a vacuum. A test mass is allowed to fall freely inside a vacuum chamber and its position is measured with a laser interferometer and timed with an atomic clock. The laser wavelength is known to ±0.025 ppb and the clock is stable to ±0.03 ppb. Care must be taken to minimize the effects of perturbing forces such as residual air resistance (even in a vacuum), vibration, and magnetic forces. Such instruments are capable of an accuracy of about 2 ppb or 0.002 mGal and reference their measurement to atomic standards of length and time. Their primary use is for calibrating relative instruments, monitoring crustal deformation, and in geophysical studies requiring high accuracy and stability. However, absolute instruments are somewhat larger and significantly more expensive than relative spring gravimeters and are thus relatively rare.
Relative gravimeters compare gravity from one place to another. They are designed to subtract the average vertical gravity automatically. They can be calibrated at a location where the gravity is known accurately and then transported to where gravity is to be measured, or they can be calibrated in absolute units at their operating location.
Applications
Researchers use more sophisticated gravimeters when precise measurements are needed. When measuring Earth's gravitational field, measurements are made to the precision of microgals to find density variations in the rocks making up the Earth. Several types of gravimeters exist for making these measurements, including some that are essentially refined versions of the spring scale described above. These measurements are used to quantify gravity anomalies.
Gravimeters can detect vibrations and gravity changes from human activities. Depending on the interests of the researcher or operator, this might be counteracted by integral vibration isolation and signal processing.
Gravimeters have been designed to mount in vehicles, including aircraft (note the field of aerogravity), ships and submarines. These special gravimeters isolate acceleration from the vehicle's movement and subtract it from measurements. The acceleration of the vehicles is often hundreds or thousands of times stronger than the changes in gravity being measured.
The Lunar Surface Gravimeter was deployed on the surface of the Moon during the 1972 Apollo 17 mission but did not work due to a design error. A second device carried on the same mission, the Lunar Traverse Gravimeter, functioned as anticipated.
Gravimeters are used for petroleum and mineral prospecting, seismology, geodesy, geophysical surveys and other geophysical research, and for metrology. Their fundamental purpose is to map the gravity field in space and time.
Most current work is Earth-based, with a few satellites around Earth, but gravimeters are also applicable to the Moon, Sun, planets, asteroids, stars, galaxies and other bodies. Gravitational wave experiments monitor the changes with time in the gravitational potential itself, rather than the gradient of the potential that the gravimeter is tracking. This distinction is somewhat arbitrary. The subsystems of the gravitational radiation experiments are very sensitive to changes in the gradient of the potential. The local gravity signals on Earth that interfere with gravitational wave experiments are disparagingly referred to as "Newtonian noise", since Newtonian gravity calculations are sufficient to characterize many of the local (earth-based) signals.
Commercial absolute gravimeters
Gravimeters for measuring the Earth's gravity as precisely as possible are getting smaller and more portable. A common type measures the acceleration of small masses free falling in a vacuum, when the accelerometer is firmly attached to the ground. The mass includes a retroreflector and terminates one arm of a Michelson interferometer. By counting and timing the interference fringes, the acceleration of the mass can be measured. A more recent development is a "rise and fall" version that tosses the mass upward and measures both upward and downward motion. This allows cancellation of some measurement errors; however, "rise and fall" gravimeters are not yet in common use. Absolute gravimeters are used in the calibration of relative gravimeters, surveying for gravity anomalies (voids), and for establishing the vertical control network.
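The free-fall measurement amounts to fitting a parabola to timed positions of the dropped mass. A simplified sketch with synthetic data (real instruments fit interferometer fringe times and apply many corrections):

```python
import numpy as np

def g_from_free_fall(times_s, positions_m):
    """Recover gravitational acceleration from timed positions of a
    freely falling test mass by a least-squares fit to
    x(t) = x0 + v0*t + 0.5*g*t^2."""
    # polyfit returns coefficients highest degree first: [0.5*g, v0, x0]
    coeffs = np.polyfit(times_s, positions_m, 2)
    return 2.0 * coeffs[0]

# Synthetic drop with g = 9.8123 m/s^2 over a 0.2 s fall:
t = np.linspace(0.0, 0.2, 50)
x = 0.5 * 9.8123 * t ** 2
print(g_from_free_fall(t, x))  # recovers ≈ 9.8123
```

In a real absolute gravimeter the position samples come from counting interference fringes, so the length scale is referenced to the laser wavelength and the time scale to the atomic clock.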
Atom interferometric and atomic fountain methods are used for precise measurement of the Earth's gravity, and atomic clocks and purpose-built instruments can use time dilation (also called general relativistic) measurements to track changes in the gravitational potential and gravitational acceleration on the Earth.
The term "absolute" does not convey the instrument's stability, sensitivity, accuracy, ease of use, or bandwidth. The words "absolute" and "relative" should not be used when more specific characteristics can be given.
Relative gravimeters
The most common gravimeters are spring-based. They are used in gravity surveys over large areas for establishing the figure of the geoid over those areas. They are basically a weight on a spring, and by measuring the amount by which the weight stretches the spring, local gravity can be measured. However, the strength of the spring must be calibrated by placing the instrument in a location with a known gravitational acceleration.
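The calibration step can be sketched as follows, assuming an idealized linear spring where m·g = k·x, so that gravity is proportional to the spring extension (real instruments are far more elaborate; function and variable names are illustrative):

```python
def calibrate_and_measure(known_g, extension_at_known_site, extension_at_survey_site):
    """Relative spring-gravimeter sketch: for a mass on a linear spring,
    m*g = k*x, so g is proportional to the extension x. Calibrating at a
    site of known gravity fixes the proportionality constant k/m."""
    g_per_metre = known_g / extension_at_known_site  # effective k/m
    return g_per_metre * extension_at_survey_site

# With a 0.1 m baseline extension, an extra ~1e-6 m of stretch at the
# survey site corresponds to a gravity increase of order 10 mGal.
g_survey = calibrate_and_measure(9.80665, 0.100000, 0.100001)
print(g_survey - 9.80665)
```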
The current standard for sensitive gravimeters are the superconducting gravimeters, which operate by suspending a superconducting niobium sphere in an extremely stable magnetic field; the current required to generate the magnetic field that suspends the niobium sphere is proportional to the strength of the Earth's gravitational acceleration. The superconducting gravimeter achieves sensitivities of one nanogal, approximately one trillionth (10^−12) of the Earth's surface gravity. In a demonstration of the sensitivity of the superconducting gravimeter, Virtanen (2006) describes how an instrument at Metsähovi, Finland, detected the gradual increase in surface gravity as workmen cleared snow from its laboratory roof.
The largest component of the signal recorded by a superconducting gravimeter is the tidal gravity of the Sun and Moon acting at the station. This is roughly (nanometers per second squared) at most locations. "SGs", as they are called, can detect and characterize Earth tides, changes in the density of the atmosphere, the effect of changes in the shape of the surface of the ocean, the effect of the atmosphere's pressure on the Earth, changes in the rate of rotation of the Earth, oscillations of the Earth's core, distant and nearby seismic events, and more.
Many broadband three-axis seismometers in common use are sensitive enough to track the Sun and Moon. When operated to report acceleration, they are useful gravimeters. Because they have three axes, it is possible to solve for their position and orientation, by either tracking the arrival time and pattern of seismic waves from earthquakes, or by referencing them to the Sun and Moon tidal gravity.
Recently, the SGs, and broadband three-axis seismometers operated in gravimeter mode, have begun to detect and characterize the small gravity signals from earthquakes. These signals arrive at the gravimeter at the speed of light, so they have the potential to improve earthquake early-warning methods. There is some activity to design purpose-built gravimeters of sufficient sensitivity and bandwidth to detect these prompt gravity signals, not just from magnitude 7+ events but also from the smaller, much more frequent ones.
Newer designs include MEMS gravimeters and atom gravimeters. MEMS gravimeters offer the potential for low-cost arrays of sensors; they are currently variations on spring-type accelerometers in which the motions of a tiny cantilever or test mass are tracked to report acceleration. Much of the research focuses on different methods of detecting the position and movement of these small masses. In atom gravimeters, the test mass is a collection of atoms.
For a given restoring force, the central frequency of the instrument is often given by

ω = √(k/m) (in radians per second)

The term for the "force constant" changes if the restoring force is electrostatic, magnetostatic, electromagnetic, optical, microwave, acoustic, or any of dozens of different ways to keep the mass stationary. The "force constant" is just the coefficient of the displacement term in the equation of motion:

F(x, t) = m·a + b·v + k·x + higher derivatives of the restoring force

where m is the mass, a the acceleration, b the viscosity, v the velocity, k the force constant and x the displacement, and F is the external force as a function of position and time. F is the force being measured, and F/m is the acceleration.
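As a sketch of the frequency relation above, the following computes the natural period T = 2π/ω = 2π·√(m/k) for a mass on a linear restoring force; the specific k and m values are illustrative:

```python
import math

def natural_period(force_constant, mass):
    """Period T = 2*pi/omega with omega = sqrt(k/m) for a mass on a
    linear restoring force; gravimeter suspensions are tuned so T is long."""
    omega = math.sqrt(force_constant / mass)  # rad/s
    return 2 * math.pi / omega

# An ordinary lab spring (k = 10 N/m, m = 0.1 kg) has a sub-second period,
# while a zero-length gravimeter suspension behaves like an extremely soft
# spring, pushing the period toward ~1000 s (second call, by construction).
print(natural_period(10.0, 0.1))                        # ≈ 0.63 s
print(natural_period(0.1 * (2 * math.pi / 1000) ** 2, 0.1))  # 1000 s
```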
Precise GPS stations can be operated as gravimeters since they are increasingly measuring three-axis positions over time, which, when differentiated twice, give an acceleration signal.
The satellite-borne gravimeters GOCE and GRACE operated mostly in gravity-gradiometer mode. They yielded detailed information about the Earth's time-varying gravity field. The spherical-harmonic gravitational potential models are slowly improving in both spatial and temporal resolution. Taking the gradient of the potentials gives estimates of local acceleration, which are what gravimeter arrays measure. The superconducting gravimeter network has been used to ground-truth the satellite potentials. This should eventually improve both the satellite and Earth-based methods and their intercomparison.
Transportable relative gravimeters also exist; they employ an extremely stable inertial platform to compensate for the masking effects of motion and vibration, a difficult engineering feat. The first transportable relative gravimeters were, reportedly, a secret military technology developed in the 1950–1960s as a navigational aid for nuclear submarines. Subsequently in the 1980s, transportable relative gravimeters were reverse engineered by the civilian sector for use on ship, then in air and finally satellite-borne gravity surveys.
Microgravimetry
Microgravimetry is an important branch developed on the foundation of classical gravimetry. Microgravity investigations are carried out to solve various problems of engineering geology, mainly the location of voids and their monitoring. Very detailed measurements of high accuracy can indicate voids of any origin, provided their size and depth are large enough to produce a gravity effect stronger than the noise level of the relevant gravity signal.
History
The modern gravimeter was developed by Lucien LaCoste and Arnold Romberg in 1936.
They also invented most subsequent refinements, including the ship-mounted gravimeter, in 1965, temperature-resistant instruments for deep boreholes, and lightweight hand-carried instruments. Most of their designs remain in use with refinements in data collection and data processing.
Satellite gravimetry
Currently, the static and time-variable Earth's gravity field parameters are determined using modern satellite missions, such as GOCE, CHAMP, Swarm, GRACE and GRACE-FO. The lowest-degree parameters, including the Earth's oblateness and geocenter motion, are best determined from satellite laser ranging.
Large-scale gravity anomalies can be detected from space, as a by-product of satellite gravity missions, e.g., GOCE. These satellite missions aim at the recovery of a detailed gravity field model of the Earth, typically presented in the form of a spherical-harmonic expansion of the Earth's gravitational potential, but alternative presentations, such as maps of geoid undulations or gravity anomalies, are also produced.
The Gravity Recovery and Climate Experiment (GRACE) consisted of two satellites that detected gravitational changes across the Earth. These changes can also be presented as temporal variations of gravity anomalies. The Gravity Recovery and Interior Laboratory (GRAIL) likewise consisted of two spacecraft, which orbited the Moon for about a year before being deliberately deorbited in December 2012.
Candlepower

Candlepower (abbreviated as cp or CP) is a unit of measurement for luminous intensity. It expresses levels of light intensity relative to the light emitted by a candle of specific size and constituents. The historical candlepower is equal to 0.981 candelas. In modern usage, candlepower is sometimes used as a synonym for candela.
History
The term candlepower was originally defined in the United Kingdom, by the Metropolitan Gas Act 1860, as the light produced by a pure spermaceti candle that weighs one sixth of a pound and burns at a rate of 120 grains per hour. Spermaceti is a material from the heads of sperm whales, and was once used to make high-quality candles.
At the time the UK established candlepower as a unit, the French standard of light was based on the illumination from a Carcel burner, which defined the illumination that emanates from a lamp burning pure colza oil (obtained from the seed of the plant Brassica campestris) at a defined rate. Ten standard candles equaled about one Carcel burner.
In 1909, several agencies met to establish an international standard. The meeting was attended by representatives of the Laboratoire Central de l'Electricité (France), the National Physical Laboratory (UK), the Bureau of Standards (United States), and the Physikalisch-Technische Reichsanstalt (Germany). The majority redefined the candle in terms of an electric lamp with a carbon filament. The Germans, however, dissented and decided to use a definition equal to 9/10 of the output of a Hefner lamp.
In 1921, the Commission Internationale de l'Eclairage (International Commission for Illumination, commonly referred to as the CIE) redefined the international candle again in terms of a carbon filament incandescent lamp.
In 1937, the international candle was redefined again, this time in terms of the luminous intensity of a blackbody at the freezing point of liquid platinum, which was set at 58.9 international candles per square centimetre.
In 1948, the SI unit candela replaced candlepower. One candlepower unit is about 0.981 candela. In general modern use, a candlepower now equates directly (1:1) to the number of candelas, an implicit increase from its old value.
Calibration of lamps
To measure the candlepower of a lamp, a person judged by eye the relative brightness of adjacent surfaces—one illuminated only by a standard lamp (or candle) and the other only by the lamp under test. They adjusted the distance of one of the lamps until the two surfaces appeared to be of equal brightness. Then they calculated the candlepower of the lamp under test from the two distances and the inverse square law.
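The comparison reduces to the inverse square law: at equal apparent brightness, each lamp's intensity divided by its distance squared is equal. A minimal sketch, treating the standard's rating and the two balance distances as hypothetical inputs:

```python
def candlepower_of_test_lamp(cp_standard, d_standard, d_test):
    """Photometric-bench sketch: if the two screens look equally bright,
    cp_test / d_test^2 = cp_standard / d_standard^2, so
    cp_test = cp_standard * (d_test / d_standard)^2."""
    return cp_standard * (d_test / d_standard) ** 2

# A test lamp that balances a 1-candlepower standard when placed three
# times as far from its screen is rated at 9 candlepower.
print(candlepower_of_test_lamp(1.0, 1.0, 3.0))  # 9.0
```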
Modern use
"Candlepower" is largely an obsolete term. However, people still sometimes use it to describe the luminous intensity of high powered flashlights and spotlights. Narrow-beamed lights of all sorts can have very high candlepower specifications, because candlepower measures the intensity of the light on a target, rather than the total amount of light it emits. A given lamp has a higher candlepower rating if its light is more tightly focused.
Candlepower is still used today in law. For example, it is presently used in the California Vehicle Code to define the legal requirements for headlamps and other lamps, including accessory lamps.
Only a few artificial light sources, such as military photoflash bombs, have the very high candlepower ratings characteristic of narrow-beamed spotlights but, simultaneously, a wide unfocused distribution of light.
Semi-empirical mass formula

In nuclear physics, the semi-empirical mass formula (SEMF) (sometimes also called the Weizsäcker formula, Bethe–Weizsäcker formula, or Bethe–Weizsäcker mass formula to distinguish it from the Bethe–Weizsäcker process) is used to approximate the mass of an atomic nucleus from its number of protons and neutrons. As the name suggests, it is based partly on theory and partly on empirical measurements. The formula represents the liquid-drop model proposed by George Gamow, which can account for most of the terms in the formula and gives rough estimates for the values of the coefficients. It was first formulated in 1935 by German physicist Carl Friedrich von Weizsäcker, and although refinements have been made to the coefficients over the years, the structure of the formula remains the same today.
The formula gives a good approximation for atomic masses and thereby other effects. However, it fails to explain the existence of lines of greater binding energy at certain numbers of protons and neutrons. These numbers, known as magic numbers, are the foundation of the nuclear shell model.
Liquid-drop model
The liquid-drop model was first proposed by George Gamow and further developed by Niels Bohr, John Archibald Wheeler and Lise Meitner. It treats the nucleus as a drop of incompressible fluid of very high density, held together by the nuclear force (a residual effect of the strong force); in this respect the nucleus resembles a spherical drop of liquid. While a crude model, the liquid-drop model accounts for the spherical shape of most nuclei and makes a rough prediction of binding energy.
The corresponding mass formula is defined purely in terms of the numbers of protons and neutrons it contains. The original Weizsäcker formula defines five terms:
Volume energy, when an assembly of nucleons of the same size is packed together into the smallest volume, each interior nucleon has a certain number of other nucleons in contact with it. So, this nuclear energy is proportional to the volume.
Surface energy corrects for the previous assumption made that every nucleon interacts with the same number of other nucleons. This term is negative and proportional to the surface area, and is therefore roughly equivalent to liquid surface tension.
Coulomb energy, the potential energy from each pair of protons. As this is a repelling force, the binding energy is reduced.
Asymmetry energy (also called Pauli energy), which accounts for the Pauli exclusion principle. Unequal numbers of neutrons and protons imply filling higher energy levels for one type of particle, while leaving lower energy levels vacant for the other type.
Pairing energy, which accounts for the tendency of proton pairs and neutron pairs to occur. An even number of particles is more stable than an odd number due to spin coupling.
Formula
The mass of an atomic nucleus, for N neutrons, Z protons, and therefore A = N + Z nucleons, is given by

m = Z·mp + N·mn − EB/c²

where mn and mp are the rest mass of a neutron and a proton respectively, and EB is the binding energy of the nucleus. The semi-empirical mass formula states that the binding energy is

EB = aV·A − aS·A^(2/3) − aC·Z(Z − 1)/A^(1/3) − aA·(N − Z)²/A ± δ(A, Z)

The δ(A, Z) term is either zero or ±δ0, depending on the parity of N and Z, where δ0 = aP·A^(kP) for some exponent kP. Note that as A = N + Z, the numerator of the asymmetry term can be rewritten as (A − 2Z)².
Each of the terms in this formula has a theoretical basis. The coefficients aV, aS, aC, aA, and aP are determined empirically; while they may be derived from experiment, they are typically derived from a least-squares fit to contemporary data. While typically expressed by its basic five terms, further terms exist to explain additional phenomena. Akin to how changing a polynomial fit will change its coefficients, the interplay between these coefficients as new phenomena are introduced is complex; some terms influence each other, whereas the aP term is largely independent.
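As a rough numerical illustration, the five terms can be evaluated directly. The coefficient values below are one commonly quoted least-squares fit in MeV; they are illustrative, and published fits differ in detail:

```python
import math

# Illustrative SEMF coefficients in MeV (one commonly quoted fit;
# exact values vary by source and fitting procedure).
A_V, A_S, A_C, A_A, A_P = 15.75, 17.8, 0.711, 23.7, 11.18

def binding_energy(Z, N):
    """Semi-empirical binding energy E_B (MeV) for Z protons, N neutrons,
    using delta_0 = A_P / sqrt(A) for the pairing term."""
    A = Z + N
    pairing = 0.0
    if Z % 2 == 0 and N % 2 == 0:
        pairing = +A_P / math.sqrt(A)   # even-even: extra binding
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -A_P / math.sqrt(A)   # odd-odd: reduced binding
    return (A_V * A
            - A_S * A ** (2 / 3)
            - A_C * Z * (Z - 1) / A ** (1 / 3)
            - A_A * (N - Z) ** 2 / A
            + pairing)

# Iron-56 (Z=26, N=30): binding energy per nucleon comes out near the
# observed ~8.8 MeV peak of the binding-energy curve.
print(binding_energy(26, 30) / 56)
```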
Volume term
The term aV·A is known as the volume term. The volume of the nucleus is proportional to A, so this term is proportional to the volume, hence the name.
The basis for this term is the strong nuclear force. The strong force affects both protons and neutrons, and as expected, this term is independent of Z. Because the number of pairs that can be taken from A particles is A(A − 1)/2, one might expect a term proportional to A². However, the strong force has a very limited range, and a given nucleon may only interact strongly with its nearest neighbors and next-nearest neighbors. Therefore, the number of pairs of particles that actually interact is roughly proportional to A, giving the volume term its form.
The coefficient aV is smaller than the binding energy possessed by the nucleons with respect to their neighbors (Eb), which is of the order of 40 MeV. This is because the larger the number of nucleons in the nucleus, the larger their kinetic energy is, due to the Pauli exclusion principle. If one treats the nucleus as a Fermi ball of A nucleons, with equal numbers of protons and neutrons, then the total kinetic energy is (3/5)·A·εF, with εF the Fermi energy, which is estimated as 38 MeV. Thus the expected value of aV in this model is aV ≈ Eb − (3/5)·εF ≈ 17 MeV, not far from the measured value.
Surface term
The term aS·A^(2/3) is known as the surface term. This term, also based on the strong force, is a correction to the volume term.
The volume term suggests that each nucleon interacts with a constant number of nucleons, independent of A. While this is very nearly true for nucleons deep within the nucleus, those nucleons on the surface of the nucleus have fewer nearest neighbors, justifying this correction. This can also be thought of as a surface-tension term, and indeed a similar mechanism creates surface tension in liquids.
If the volume of the nucleus is proportional to A, then the radius should be proportional to A^(1/3) and the surface area to A^(2/3). This explains why the surface term is proportional to A^(2/3). It can also be deduced that aS should have a similar order of magnitude to aV.
Coulomb term
The term aC·Z(Z − 1)/A^(1/3), or aC·Z²/A^(1/3), is known as the Coulomb or electrostatic term.
The basis for this term is the electrostatic repulsion between protons. To a very rough approximation, the nucleus can be considered a sphere of uniform charge density. The potential energy of such a charge distribution can be shown to be

E = (3/5)·(1/(4πε0))·(Q²/R)

where Q is the total charge, and R is the radius of the sphere. The value of aC can be approximately calculated by using this equation to calculate the potential energy, using an empirical nuclear radius of R ≈ r0·A^(1/3) and Q = Ze. However, because electrostatic repulsion will only exist for more than one proton, Z² becomes Z(Z − 1):

E = aC·Z(Z − 1)/A^(1/3)

where now the electrostatic Coulomb constant aC is

aC = (3/5)·e²/(4πε0·r0)
Using the fine-structure constant, we can rewrite the value of aC as

aC = (3/5)·(α·ħc/r0) = (3/5)·(λ̄p/r0)·α·mp·c²

where α is the fine-structure constant, and r0·A^(1/3) is the radius of a nucleus, giving r0 to be approximately 1.25 femtometers. λ̄p is the proton reduced Compton wavelength, and mp is the proton mass. This gives aC an approximate theoretical value of 0.691 MeV, not far from the measured value.
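That estimate can be checked in a couple of lines; the constants below are standard values, with r0 = 1.25 fm as in the text:

```python
HBAR_C = 197.3269804    # MeV·fm
ALPHA = 1 / 137.035999  # fine-structure constant
R0 = 1.25               # fm, empirical nuclear radius parameter

# Rough theoretical Coulomb coefficient: a_C = (3/5) * alpha * hbar*c / r0
a_C = 0.6 * ALPHA * HBAR_C / R0
print(a_C)  # ≈ 0.691 MeV, close to fitted values around 0.7 MeV
```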
Asymmetry term
The term aA·(N − Z)²/A is known as the asymmetry term (or Pauli term).
The theoretical justification for this term is more complex. The Pauli exclusion principle states that no two identical fermions can occupy exactly the same quantum state in an atom. At a given energy level, there are only finitely many quantum states available for particles. What this means in the nucleus is that as more particles are "added", these particles must occupy higher energy levels, increasing the total energy of the nucleus (and decreasing the binding energy). Note that this effect is not based on any of the fundamental forces (gravitational, electromagnetic, etc.), only the Pauli exclusion principle.
Protons and neutrons, being distinct types of particles, occupy different quantum states. One can think of two different "pools" of states, one for protons and one for neutrons. Now, for example, if there are significantly more neutrons than protons in a nucleus, some of the neutrons will be higher in energy than the available states in the proton pool. If we could move some particles from the neutron pool to the proton pool, in other words, change some neutrons into protons, we would significantly decrease the energy. The imbalance between the number of protons and neutrons causes the energy to be higher than it needs to be, for a given number of nucleons. This is the basis for the asymmetry term.
The actual form of the asymmetry term can again be derived by modeling the nucleus as a Fermi ball of protons and neutrons. Its total kinetic energy is

Ek = (3/5)·(Z·εF,p + N·εF,n)

where εF,p and εF,n are the Fermi energies of the protons and neutrons. Since these are proportional to (Z/A)^(2/3) and (N/A)^(2/3) respectively, one gets

Ek = C·(Z^(5/3) + N^(5/3))/A^(2/3)

for some constant C.
The leading terms in the expansion in the difference N − Z are then

Ek = (C/2^(2/3))·(A + (5/9)·(N − Z)²/A + …)

At the zeroth order in the expansion the kinetic energy is just the overall Fermi energy εF multiplied by (3/5)·A. Thus we get

Ek = (3/5)·εF·A + (1/3)·εF·(N − Z)²/A + …

The first term contributes to the volume term in the semi-empirical mass formula, and the second term is minus the asymmetry term (remember, the kinetic energy contributes to the total binding energy with a negative sign).
εF is 38 MeV, so calculating aA from the equation above gives only half the measured value. The discrepancy is explained by our model not being accurate: nucleons in fact interact with each other and are not spread evenly across the nucleus. For example, in the shell model, a proton and a neutron with overlapping wavefunctions will have a greater strong interaction between them and stronger binding energy. This makes it energetically favourable (i.e. having lower energy) for protons and neutrons to have the same quantum numbers (other than isospin), and thus increases the energy cost of asymmetry between them.
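A quick numeric check of this discrepancy, taking εF = 38 MeV from the text and a typical fitted aA of roughly 23.7 MeV (an assumed illustrative value):

```python
E_FERMI = 38.0  # MeV, Fermi energy of nuclear matter used in the text

# Non-interacting Fermi-gas estimate of the asymmetry coefficient:
# from the expansion above, a_A ≈ eps_F / 3.
a_A_fermi_gas = E_FERMI / 3
a_A_fitted = 23.7  # MeV, a typical fitted value (illustrative)

# The free-gas model recovers only about half the fitted value; the rest
# is attributed to nucleon-nucleon interactions.
print(a_A_fermi_gas, a_A_fermi_gas / a_A_fitted)
```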
One can also understand the asymmetry term intuitively as follows. It should depend on the absolute difference |N − Z|, and the form (N − Z)² is simple and differentiable, which is important for certain applications of the formula. In addition, small differences between Z and N do not have a high energy cost. The A in the denominator reflects the fact that a given difference |N − Z| is less significant for larger values of A.
Pairing term
The term $\delta(A, Z)$ is known as the pairing term (possibly also known as the pairwise interaction). This term captures the effect of spin coupling. It is given by
$\delta(A, Z) = \begin{cases} +\delta_0 & \text{for even } Z, N \\ 0 & \text{for odd } A \\ -\delta_0 & \text{for odd } Z, N \end{cases}$
where $\delta_0$ is found empirically to have a value of about 1000 keV, slowly decreasing with mass number A. The binding energy may be increased by converting one of the odd protons or neutrons into a neutron or proton, so the odd nucleon can form a pair with its odd neighbour, forming an even Z, N. The pairs have overlapping wave functions and sit very close together with a bond stronger than any other configuration. When the pairing term is substituted into the binding energy equation, for even Z, N, the pairing term adds binding energy, and for odd Z, N the pairing term removes binding energy.
The dependence on mass number is commonly parametrized as
$\delta_0 = a_P A^{k_P}.$
The value of the exponent kP is determined from experimental binding-energy data. In the past its value was often assumed to be −3/4, but modern experimental data indicate that a value of −1/2 is nearer the mark:
$\delta_0 = a_P A^{-1/2},$
so that the pairing contribution is $\pm a_P/\sqrt{A}$ for even-even and odd-odd nuclei respectively, and zero for odd $A$.
Due to the Pauli exclusion principle the nucleus would have a lower energy if the number of protons with spin up were equal to the number of protons with spin down. This is also true for neutrons. Only if both Z and N are even can both protons and neutrons have equal numbers of spin-up and spin-down particles. This is a similar effect to the asymmetry term.
The factor $A^{-1/2}$ is not easily explained theoretically. The Fermi-ball calculation we have used above, based on the liquid-drop model but neglecting interactions, will give an $A^{-1}$ dependence, as in the asymmetry term. This means that the actual effect for large nuclei will be larger than expected by that model. This should be explained by the interactions between nucleons. For example, in the shell model, two protons with the same quantum numbers (other than spin) will have completely overlapping wavefunctions and will thus have greater strong interaction between them and stronger binding energy. This makes it energetically favourable (i.e. having lower energy) for protons to form pairs of opposite spin. The same is true for neutrons.
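A minimal sketch of the pairing term with the modern exponent kP = −1/2. The coefficient value aP ≈ 12 MeV used here is an assumption (a commonly quoted fitted value; published fits differ by a few MeV):

```python
# Pairing term delta(A, Z) with delta0 = a_P / sqrt(A).
A_P = 12.0  # MeV; assumed fitted value, varies between published fits

def pairing_term(Z, N):
    """+delta0 for even-even nuclei, 0 for odd A, -delta0 for odd-odd."""
    A = Z + N
    delta0 = A_P / A ** 0.5
    if Z % 2 == 0 and N % 2 == 0:
        return +delta0   # even-even: extra binding
    if Z % 2 == 1 and N % 2 == 1:
        return -delta0   # odd-odd: reduced binding
    return 0.0           # odd A: one unpaired nucleon either way

print(pairing_term(2, 2))   # 4He, even-even: +12 / sqrt(4) = +6.0 MeV
print(pairing_term(3, 4))   # 7Li, odd A: 0.0
print(pairing_term(3, 3))   # 6Li, odd-odd: -12 / sqrt(6), about -4.9 MeV
```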
Calculating coefficients
The coefficients are calculated by fitting to experimentally measured masses of nuclei. Their values can vary depending on how they are fitted to the data and which unit is used to express the mass. Several examples are shown below.
The formula does not consider the internal shell structure of the nucleus.
The semi-empirical mass formula therefore provides a good fit to heavier nuclei, and a poor fit to very light nuclei, especially 4He. For light nuclei, it is usually better to use a model that takes this shell structure into account.
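To illustrate that fit quality, here is a sketch of the full semi-empirical formula in Python. The coefficient set (in MeV) is one plausible, commonly quoted choice assumed for illustration; fitted values vary by a few percent between references:

```python
# Semi-empirical mass formula: B = volume - surface - Coulomb - asymmetry + pairing.
aV, aS, aC, aA, aP = 15.8, 18.3, 0.714, 23.2, 12.0  # MeV; assumed fit values

def semf_binding_energy(Z, N):
    """Binding energy in MeV for a nucleus with Z protons and N neutrons."""
    A = Z + N
    if Z % 2 == 0 and N % 2 == 0:
        pairing = aP / A ** 0.5      # even-even
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -aP / A ** 0.5     # odd-odd
    else:
        pairing = 0.0                # odd A
    return (aV * A - aS * A ** (2 / 3)
            - aC * Z * (Z - 1) / A ** (1 / 3)
            - aA * (N - Z) ** 2 / A + pairing)

# 56Fe (measured binding energy ~492.3 MeV): the formula lands within ~1%.
print(semf_binding_energy(26, 30))
# 4He (measured ~28.3 MeV): the formula misses by over 20%, as the text notes.
print(semf_binding_energy(2, 2))
```

The heavy nucleus comes out within a fraction of a percent, while the doubly magic 4He is badly underbound, in line with the remark above about light nuclei and shell structure.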
Examples of consequences of the formula
By maximizing $E_b(A, Z)$ with respect to Z, one would find the best neutron–proton ratio N/Z for a given atomic weight A. We get
$\frac{N}{Z} \approx 1 + \frac{a_C}{2 a_A} A^{2/3}.$
This is roughly 1 for light nuclei, but for heavy nuclei the ratio grows in good agreement with experiment.
By substituting the above value of Z back into $E_b$, one obtains the binding energy as a function of the atomic weight, $E_b(A)$.
Maximizing with respect to A gives the nucleus which is most strongly bound, i.e. most stable. The value we get is A = 63 (copper), close to the measured values of A = 62 (nickel) and A = 58 (iron).
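Both consequences can be checked numerically. The sketch below assumes one plausible coefficient set (in MeV; fitted values vary between references), finds the Z that maximizes binding energy at fixed A, and then scans for the most strongly bound mass number:

```python
# Consequences of the formula: best Z for given A, and the most stable A.
aV, aS, aC, aA, aP = 15.8, 18.3, 0.714, 23.2, 12.0  # MeV; assumed fit values

def binding_energy(Z, N):
    """Semi-empirical binding energy in MeV."""
    A = Z + N
    if Z % 2 == 0 and N % 2 == 0:
        pairing = aP / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:
        pairing = -aP / A ** 0.5
    else:
        pairing = 0.0
    return (aV * A - aS * A ** (2 / 3)
            - aC * Z * (Z - 1) / A ** (1 / 3)
            - aA * (N - Z) ** 2 / A + pairing)

def best_Z(A):
    """Proton number maximizing the binding energy for fixed mass number A."""
    return max(range(1, A), key=lambda Z: binding_energy(Z, A - Z))

# Heavy nuclei need N > Z: for A = 208 the optimum is Z = 82 (lead),
# consistent with N/Z ~ 1 + (aC / 2 aA) A^(2/3).
print(best_Z(208))

# Most strongly bound nucleus: maximize binding energy per nucleon over A.
best_A = max(range(10, 120),
             key=lambda A: binding_energy(best_Z(A), A - best_Z(A)) / A)
print(best_A)  # falls in the iron-nickel region, near the values quoted above
```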
The liquid-drop model also allows the computation of fission barriers for nuclei, which determine the stability of a nucleus against spontaneous fission. It was originally speculated that elements beyond atomic number 104 could not exist, as they would undergo fission with very short half-lives, though this formula did not consider stabilizing effects of closed nuclear shells. A modified formula considering shell effects reproduces known data and the predicted island of stability (in which fission barriers and half-lives are expected to increase, reaching a maximum at the shell closures), though also suggests a possible limit to existence of superheavy nuclei beyond Z = 120 and N = 184.
1086573 | https://en.wikipedia.org/wiki/Ostracoderm | Ostracoderm | Ostracoderms are the armored jawless fish of the Paleozoic Era. The term does not often appear in classifications today because it is paraphyletic (excluding jawed fishes and possibly the cyclostomes if anaspids are closer to them) and thus does not correspond to one evolutionary lineage. However, the term is still used as an informal way of loosely grouping together the armored jawless fishes.
An innovation of ostracoderms was the use of gills not for feeding, but exclusively for respiration. Earlier chordates with gill precursors used them for both respiration and feeding. Ostracoderms had separate pharyngeal gill pouches along the side of the head, which were permanently open with no protective operculum. Unlike invertebrates that use ciliated motion to move food, ostracoderms used their muscular pharynx to create a suction that pulled small and slow-moving prey into their mouths.
Swiss anatomist Louis Agassiz received some fossils of bony armored fish from Scotland in the 1830s. He had difficulty classifying them, as they did not resemble any living creature. He compared them at first with extant armored fish such as catfish and sturgeon, but later realized that they lacked movable jaws. Hence, he classified them in 1844 as a new group, named "ostracoderms" to mean 'shell-skinned' (from Greek + ).
Ostracoderms have heads covered with a bony shield. They are among the earliest creatures with bony heads. The microscopic layers of that shield appear to evolutionary biologists, "like they are composed of little tooth-like structures." Neil Shubin writes: "Cut the bone of the [ostracoderm] skull open…pop it under a microscope and…you find virtually the same structure as in our teeth. There is a layer of enamel and even a layer of pulp. The whole shield is made up of thousands of small teeth fused together. This bony skull--one of the earliest in the fossil record--is made entirely of little teeth. Teeth originally arose to bite creatures (see Conodonts); later a version of teeth was used in a new way to protect them."
Ostracoderms existed in two major groups, the more primitive heterostracans and the cephalaspids. The cephalaspids were more advanced than the heterostracans in that they had lateral stabilizers for more control of their swimming.
It was long assumed that pteraspidomorphs and thelodonts were the only ostracoderms with paired nostrils, while the other groups have just a single median nostril. It has since been revealed that even if galeaspidans have just one external opening, it has two internal nasal organs.
After the appearance of jawed fish (placoderms, acanthodians, sharks, etc.) about 420 million years ago, most ostracoderm species underwent a decline, and the last ostracoderms became extinct at the end of the Devonian period. More recent research indicates that fish with jaws had far less to do with the extinction of the ostracoderms than previously assumed, as they coexisted without noticeable decline for about 30 million years.
The Subclass Ostracodermi has been placed in the division Agnatha along with the extant Subclass Cyclostomata, which includes lampreys and hagfishes.
Major groups
1087031 | https://en.wikipedia.org/wiki/Gerenuk | Gerenuk | The gerenuk (Litocranius walleri), also known as the giraffe gazelle, is a long-necked, medium-sized antelope found in parts of East Africa. The sole member of the genus Litocranius, the gerenuk was first described by the naturalist Victor Brooke in 1879. It is characterised by its long, slender neck and limbs. The antelope is tall, and weighs between . Two types of colouration are clearly visible on the smooth coat: the reddish brown back or the "saddle", and the lighter flanks, fawn to buff. The horns, present only on males, are lyre-shaped. Curving backward then slightly forward, these measure .
Taxonomy and phylogeny
The gerenuk was first described by Victor Brooke in 1879 on the basis of three male specimens procured on "the mainland of Africa, north of the island of Zanzibar". Brooke used the scientific name Gazella walleri, on the request of Gerald Waller (who provided the specimens) to name it after his deceased brother. The type locality was later corrected by John Kirk, who originally obtained the specimens on the "coast near the River Juba in southern Somalia" before giving them to Waller. In 1886, Franz Friedrich Kohl proposed a new genus for the gerenuk, Litocranius. The common name derives from the Somali name for the animal (gáránúug); the first recorded use of the name dates back to 1895. It is also known as the "giraffe gazelle" due to its similarity to the giraffe.
Two subspecies have been proposed, but these are considered to be independent species by some authors.
L. w. sclateri (Northern gerenuk or Sclater's gazelle) Neumann, 1899: Its range extends from northwestern Somalia (Berbera District) westward to touch the Ethiopian border and Djibouti.
L. w. walleri (Southern gerenuk or Waller's gazelle) (Brooke, 1879): Its range extends through northeastern Tanzania through Kenya to Galcaio (Somalia). The range lies north of the Shebelle River and near Juba River.
In 1997 Colin Groves proposed that Litocranius is a sister taxon of the similarly long-necked dibatag (Ammodorcas clarkei), but withdrew this proposal in 2000. A 1999 phylogenetic study based on cytochrome b and cytochrome c oxidase subunit III analysis showed that the tribe Antilopini, to which the gerenuk belongs, is monophyletic. In 2013, Eva Verena Bärmann and colleagues (of the University of Cambridge) revised the phylogeny of the tribe on the basis of nuclear and mitochondrial gene analysis. The cladogram prepared by them (given below) showed that the springbok (Antidorcas marsupialis) forms a clade with the gerenuk; this clade is sister to the saiga (Saiga tatarica, tribe Saigini) and the genera Antilope (blackbuck), Eudorcas, Gazella and Nanger (of Antilopini).
Description
The gerenuk is a notably tall, slender antelope that resembles gazelles. It is characterised by its long, slender neck and limbs, the flat, wedge-like head and the large, round eyes. Males are nearly tall, and the shorter females ; the head-and-body length is typically between . Males weigh between ; females are lighter, weighing . The species is sexually dimorphic. The tail, that ends in a black tuft, measures .
Two types of colouration are clearly visible on the smooth coat: the reddish brown dorsal parts (the back or the "saddle"), and the lighter flanks, fawn to buff. The underbelly and insides of the legs are cream in colour. The eyes and the mouth are surrounded by white fur. Females have a dark patch on the crown. The horns, present only on males, are lyre-like (S-shaped). Curving backward then slightly forward, these measure .
The gerenuk resembles the dibatag, with which it is sympatric in eastern and central Somalia and southeastern Ethiopia. Both are brachyodonts and share several facial and cranial features, along with a two-tone colouration of the coat and strong thick horns (only in males). However, some features distinguish the dibatag from the gerenuk, including major morphological differences in horns, horn cores, tail, postorbital area and basioccipital processes. The gerenuk has a longer, heavier neck and a shorter tail. A finer point of difference is the absence of an inward-curving lobe in the lower edge of the ear (near its tip) in the gerenuk. The subspecies of the gerenuk are similar in colouration; the southern gerenuk is the smaller of the two. The gerenuk's growth stages span from four months to 2.5 years: at four months, shoulder height is about two-thirds that of an adult female; at six months, about three-quarters; at eight months, the horn tips become clearly visible (about 1 cm long); at one year, shoulder height is nearly equal to an adult female's, though the body is more lightly built, and the horns are slightly less than half ear-length before beginning to curve; at two years, the horns are about 1.5 times ear length and the second curve becomes noticeable, with the tips turning forwards; and at two and a half years, the double curve of the horns is nearly complete.
Ecology and behavior
The gerenuk is a diurnal animal (active mainly during the day), though it typically stands or rests in shade during the noon. Foraging and feeding is the major activity throughout the day; females appear to spend longer time in feeding. The gerenuk may expose itself to rain, probably to cool its body. The social structure consists of small herds of two to six members. Herds typically comprise members of a single sex, though female herds additionally have juveniles. Some males lead a solitary life.
Fighting and travel are uncommon, possibly as a strategy to save energy for foraging. Both sexes maintain home ranges, which might overlap. Those of males are scent-marked with preorbital gland secretions and guarded; hence these may be termed territories. The sedentary tendency of the antelope appears to increase with age.
Diet
Primarily a browser, the gerenuk feeds on foliage of bushes as well as trees, shoots, herbs, flowers and fruits. It can reach higher branches and twigs better than other gazelles and antelopes by standing erect on its hind legs and elongating its neck; this helps it browse well above the ground. Acacia species are eaten whenever available, while evergreen vegetation forms the diet during droughts. The pointed mouth assists in extracting leaves from thorny vegetation. The gerenuk does not drink water regularly. Major predators of the antelope include African wild dogs, cheetahs, hyenas, lions and leopards.
Reproduction
Gerenuk reproduce throughout the year. Females reach sexual maturity at around one year, and males reach sexual maturity at 1.5 years, although in the wild they may only be successful after acquiring a territory (perhaps 3.5 years). The gestation period is about seven months. They are born one at a time, weighing about at birth. Offspring were produced through artificial insemination for the first time in 2010 at White Oak Conservation in Yulee, Florida. Four female calves were born, and one of the four was later inseminated successfully by White Oak and SEZARC (South-East Zoo Alliance for Reproduction & Conservation), creating a second generation of calves born from artificial insemination. Gerenuk can live thirteen years or more in captivity, and at least eight years in the wild.
3186459 | https://en.wikipedia.org/wiki/Hypertensive%20heart%20disease | Hypertensive heart disease | Hypertensive heart disease includes a number of complications of high blood pressure that affect the heart. While there are several definitions of hypertensive heart disease in the medical literature, the term is most widely used in the context of the International Classification of Diseases (ICD) coding categories. The definition includes heart failure and other cardiac complications of hypertension when a causal relationship between the heart disease and hypertension is stated or implied on the death certificate. In 2013 hypertensive heart disease resulted in 1.07 million deaths as compared with 630,000 deaths in 1990.
According to ICD-10, hypertensive heart disease (I11), and its subcategories: hypertensive heart disease with heart failure (I11.0) and hypertensive heart disease without heart failure (I11.9) are distinguished from chronic rheumatic heart diseases (I05-I09), other forms of heart disease (I30-I52) and ischemic heart diseases (I20-I25). However, since high blood pressure is a risk factor for atherosclerosis and ischemic heart disease, death rates from hypertensive heart disease provide an incomplete measure of the burden of disease due to high blood pressure.
Signs and symptoms
The symptoms and signs of hypertensive heart disease will depend on whether or not it is accompanied by heart failure. In the absence of heart failure, hypertension, with or without enlargement of the heart (left ventricular hypertrophy), is usually symptomless.
Symptoms, signs and consequences of congestive heart failure can include:
Fatigue
Irregular pulse or palpitations
Swelling of feet and ankles
Weight gain
Nausea
Shortness of breath
Difficulty sleeping flat in bed (orthopnea)
Bloating and abdominal pain
Greater need to urinate at night
An enlarged heart (cardiomegaly)
Left ventricular hypertrophy and left ventricular remodeling
Diminished coronary flow reserve and silent myocardial ischemia
Coronary heart disease and accelerated atherosclerosis
Heart failure with normal left ventricular ejection fraction (HFNEF), often termed diastolic heart failure
Atrial fibrillation, other cardiac arrhythmias, or sudden cardiac death
Heart failure can develop insidiously over time or patients can present acutely with acute heart failure or acute decompensated heart failure and pulmonary edema due to sudden failure of pump function of the heart. Sudden failure can be precipitated by a variety of causes, including myocardial ischemia, marked increases in blood pressure, or cardiac arrhythmias.
Diagnosis
Differential diagnosis
Other conditions can share features with hypertensive heart disease and need to be considered in the differential diagnosis. For example:
Coronary artery disease or ischemic heart diseases due to atherosclerosis
Hypertrophic cardiomyopathy
Left ventricular hypertrophy in athletes
Congestive heart failure or heart failure with normal ejection fraction due to other causes
Atrial fibrillation or other disorders of cardiac rhythm due to other causes
Sleep apnea
Prevention
Because high blood pressure usually causes no symptoms, people can have the condition without knowing it. Diagnosing high blood pressure early can help prevent heart disease, stroke, eye problems, and chronic kidney disease.
The risk of cardiovascular disease and death can be reduced by lifestyle modifications, including dietary advice, promotion of weight loss and regular aerobic exercise, moderation of alcohol intake and cessation of smoking. Drug treatment may also be needed to control the hypertension and reduce the risk of cardiovascular disease, manage the heart failure, or control cardiac arrhythmias. Patients with hypertensive heart disease should avoid taking over-the-counter nonsteroidal anti-inflammatory drugs (NSAIDs), cough suppressants, and decongestants containing sympathomimetics, unless otherwise advised by their physician, as these can exacerbate hypertension and heart failure.
Blood pressure goals
According to JNC 7, BP goals should be as follows:
Less than 140/90 mm Hg in patients with uncomplicated hypertension
Less than 130/85 mm Hg in patients with diabetes and those with renal disease with less than 1 g/24-hour proteinuria
Less than 125/75 mm Hg in patients with renal disease and more than 1 g/24-hour proteinuria
Treatment
The medical care of patients with hypertensive heart disease falls under two categories:
Treatment of hypertension
Prevention (and, if present, treatment) of heart failure or other cardiovascular disease
Epidemiology
Hypertension or high blood pressure affects at least 26.4% of the world's population. Hypertensive heart disease is only one of several diseases attributable to high blood pressure. Other diseases caused by high blood pressure include ischemic heart disease, stroke, peripheral arterial disease, aneurysms and kidney disease. Hypertension increases the risk of heart failure by two or three-fold and probably accounts for about 25% of all cases of heart failure. In addition, hypertension precedes heart failure in 90% of cases, and the majority of heart failure in the elderly may be attributable to hypertension. Hypertensive heart disease was estimated to be responsible for 1.0 million deaths worldwide in 2004 (or approximately 1.7% of all deaths globally), and was ranked 13th in the leading global causes of death for all ages. A world map shows the estimated disability-adjusted life years per 100,000 inhabitants lost due to hypertensive heart disease in 2004.
Sex differences
There are more women than men with hypertension, and, although men develop hypertension earlier in life, hypertension in women is less well controlled. The consequences of high blood pressure in women are a major public health problem and hypertension is a more important contributory factor in heart attacks in women than men. Until recently women have been under-represented in clinical trials in hypertension and heart failure. Nevertheless, there is some evidence that the effectiveness of antihypertensive drugs differs between men and women and that treatment for heart failure may be less effective in women.
Ethnic differences
Studies in the US indicate that a disproportionate number of African Americans have hypertension compared with non-Hispanic whites and Mexican Americans, and that they have a greater burden of hypertensive heart disease. Heart failure is more common in people of African American ethnicity, mortality from heart failure is also consistently higher than in white patients, and it develops at an earlier age. Recent data suggests that rates of hypertension are increasing more rapidly in African Americans than other ethnic groups. The excess of high blood pressure and its consequences in African Americans is likely to contribute to their shorter life expectancy compared with white Americans.
3187689 | https://en.wikipedia.org/wiki/Detritus | Detritus | In biology, detritus is organic matter made up of the decomposing remains of organisms and plants, and also of feces. Detritus usually hosts communities of microorganisms that colonize and decompose (remineralise) it. Such microorganisms may be decomposers, detritivores, or coprophages.
In terrestrial ecosystems detritus is present as plant litter and other organic matter that is intermixed with soil, known as soil organic matter. The detritus of aquatic ecosystems is organic substances suspended in the water and accumulated in depositions on the floor of the body of water; when this floor is a seabed, such a deposition is called marine snow.
Theory
The remains of decaying plants or animals, or their tissue parts, and feces gradually lose their form due to physical processes and the action of decomposers, including grazers, bacteria, and fungi. Decomposition, the process by which organic matter is decomposed, occurs in several phases. Micro- and macro-organisms that feed on it rapidly consume and absorb materials such as proteins, lipids, and sugars that are low in molecular weight, while other compounds such as complex carbohydrates are decomposed more slowly. The decomposing microorganisms degrade the organic materials so as to gain the resources they require for their survival and reproduction. Accordingly, simultaneous to microorganisms' decomposition of the materials of dead plants and animals is their assimilation of decomposed compounds to construct more of their biomass (i.e., to grow their own bodies). When microorganisms die, fine organic particles are produced. If small animals (that normally feed on microorganisms) eat these particles, the particles collect inside the intestines of the consumers, and change shape into large pellets of dung. As a result of this process, most of the materials of dead organisms disappear and are not visible and recognizable in any form, but are present in the form of a combination of fine organic particles and the organisms that used them as nutrients. This combination is detritus.
In ecosystems on land, detritus is deposited on the surface of the ground, taking forms such as the humic soil beneath a layer of fallen leaves. In aquatic ecosystems, most detritus is suspended in water, and gradually settles. In particular, many different types of material are collected together by currents, and much material settles in slowly flowing areas.
A large amount of detritus is used as a source of nutrition for animals. In particular, many bottom-feeding animals (benthos) living in mud flats feed in this way. However, since excreta are materials which other animals do not need, whatever energy value they might have, they are often unbalanced as a source of nutrients, and are not suitable as a source of nutrition on their own. However, there are many microorganisms which multiply in natural environments. These microorganisms do not simply absorb nutrients from these particles, but also shape their own bodies so that they can take the resources they lack from the area around them, and this allows them to make use of excreta as a source of nutrients. In practical terms, the most important constituents of detritus are complex carbohydrates, which are persistent (difficult to break down), and the microorganisms which multiply using these absorb carbon from the detritus, and materials such as nitrogen and phosphorus from the water in their environment to synthesise the components of their own cells.
A characteristic type of food chain called the detritus cycle takes place involving detritus feeders (detritivores), detritus and the microorganisms that multiply on it. For example, mud flats are inhabited by many univalves which are detritus feeders. When these detritus feeders take in detritus with microorganisms multiplying on it, they mainly break down and absorb the microorganisms, which are rich in proteins, and excrete the detritus, which is mostly complex carbohydrates, having hardly broken it down at all. At first, this dung is a poor source of nutrition, and so univalves pay no attention to it, but after several days, microorganisms begin to multiply on it again, its nutritional balance improves, and so they eat it again. Through this process of eating the detritus many times over and harvesting the microorganisms from it, the detritus thins out, becomes fractured and becomes easier for the microorganisms to use, and so the complex carbohydrates are also steadily broken down and disappear over time.
What is left behind by the detritivores is then further broken down and recycled by decomposers, such as bacteria and fungi.
This detritus cycle plays a large part in the so-called purification process, whereby organic materials carried in by rivers is broken down and disappears, and an extremely important part in the breeding and growth of marine resources. In ecosystems on land, far more essential material is broken down as dead material passing through the detritus chain than is broken down by being eaten by animals in a living state. In both land and aquatic ecosystems, the role played by detritus is too large to ignore.
Aquatic ecosystems
In contrast to land ecosystems, dead materials and excreta in aquatic ecosystems are typically transported by water flow; finer particles tend to be transported farther or suspended longer. In freshwater bodies organic material from plants can form a silt known as mulm or humus on the bottom. This material, sometimes called undissolved organic carbon, breaks down into dissolved organic carbon and can bond to heavy metal ions via chelation. It can also break down into colored dissolved organic matter such as tannin, a specific form of tannic acid.
In saltwater bodies, organic material breaks down and forms a marine snow. This example of detritus commonly consists of organic materials such as dead phytoplankton and zooplankton, the outer walls of diatoms and coccolithophores, dead skin and scales of fish, and fecal pellets. This material will slowly sink to the seafloor, where it makes up the majority of sediment in some areas. Once settled, the material will not only contribute to sediments but will help to feed different species of detritivore, organisms which feed on detritus, such as annelid worms and sea cucumbers, to name a few. The exact composition of this detritus varies based on location and time of year, as it is very closely tied to primary production.
Terrestrial ecosystems
Detritus occurs in a variety of terrestrial habitats including forest, chaparral and grassland. In forests, the detritus is typically dominated by leaf, twig, and bacteria litter as measured by biomass dominance. This plant litter provides important cover for seedling protection as well as cover for a variety of arthropods, reptiles and amphibians. Some insect larvae feed on the detritus. Fungi and bacteria continue the decomposition process after grazers have consumed larger elements of the organic materials, and animal trampling has assisted in mechanically breaking down organic matter. At the later stages of decomposition, mesophilic micro-organisms decompose residual detritus, generating heat from exothermic processes; such heat generation is associated with the well known phenomenon of the elevated temperature of composting.
Consumers
There is an extremely large number of detritus feeders in water. After all, a large quantity of material is carried in by water currents. Even if an organism stays in a fixed position, as long as it has a system for filtering water, it will be able to obtain enough food to get by. Many immobile organisms survive in this way, using developed gills or tentacles to filter the water to take in food, a process known as filter feeding.
Another more widely used method of feeding, which also incorporates filter feeding, is a system where an organism secretes mucus to catch the detritus in lumps, and then carries these to its mouth using an area of cilia.
Many organisms, including sea slugs and serpent stars, scoop up the detritus which has settled on the water bed. Bivalves which live inside the water bed do not simply suck in water through their tubes, but also extend them to fish for detritus on the surface of the bed.
Producers
In contrast, from the point of view of organisms using photosynthesis such as plants and plankton, detritus reduces the transparency of the water and gets in the way of this process. Given that these organisms also require a supply of nutrient salts, in other words fertilizer, for photosynthesis, their relationship with detritus is a complex one.
In land ecosystems, the waste products of plants and animals collect mainly on the ground (or on the surfaces of trees), and as decomposition proceeds, plants are supplied with fertilizer in the form of inorganic salts. In water ecosystems, relatively little waste collects on the water bed, and so the progress of decomposition in water takes a more important role. Investigating the level of inorganic salts in sea ecosystems shows that unless there is an especially large supply, the quantity increases from winter to spring—but is normally extremely low in summer. As such, the quantity of seaweed present reaches a peak in early summer and then decreases. The thinking is that organisms like plants grow quickly in warm periods and thus the quantity of inorganic salts is not enough to keep up with the demand. In other words, during winter, plant-like organisms are inactive and collect fertilizer, but if the temperature rises to some extent they will use this up in a very short period.
It is not entirely true that their productivity falls during the warmest periods. Organisms such as dinoflagellate have mobility, the ability to take in solid food, and the ability to photosynthesise. This type of micro-organism can take in substances such as detritus to grow, without waiting for it to be broken down into fertilizer.
Aquariums
In recent years, the word detritus has also come to be used with aquariums (the word "aquarium" is a general term for any installation for keeping aquatic animals).
When animals such as fish are kept in an aquarium, they produce substances such as excreta, mucus, and dead skin cast off during moulting. These substances naturally generate detritus, which is continually broken down by microorganisms.
Modern marine aquariums often use the Berlin Method, which combines two elements. The first is a protein skimmer, a device that produces air bubbles to which detritus adheres, carrying it out of the tank before it decomposes. The second is live rock, a highly porous type of natural rock on which many benthos and bacteria live (hermatypic coral that has been dead for some time is often used); the detritus-feeding benthos and micro-organisms it hosts sustain a detritus cycle. The Monaco system has also been implemented: an anaerobic layer is created in the tank to denitrify the organic compounds and other nitrogen compounds in it, so that the decomposition process continues until water, carbon dioxide, and nitrogen are produced.
Initially, as the name suggests, filtration systems in water tanks often worked by using a physical filter to remove foreign substances from the water. The standard method for maintaining water quality then became converting the ammonia in excreta, which is highly toxic, into far less harmful nitrates; the combination of detritus feeders, detritus, and micro-organisms has now brought aquarium technology to a still higher level.
| Physical sciences | Soil science | Earth science |
13794597 | https://en.wikipedia.org/wiki/Service%20%28motor%20vehicle%29 | Service (motor vehicle) | A motor vehicle service or tune-up is a series of maintenance procedures carried out at a set time interval or after the vehicle has traveled a certain distance. The service intervals are specified by the vehicle manufacturer in a service schedule and some modern cars display the due date for the next service electronically on the instrument panel. A tune-up should not be confused with engine tuning, which is the modifying of an engine to perform better than the original specification, rather than using maintenance to keep the engine running as it should.
Common tasks involved in maintaining a vehicle
Inspection - vehicle components are visually inspected for wear or any leaks. A diagnostic is performed to identify any electrical components reporting a failure or a part operating outside of normal conditions.
Replacement - because certain lubricants break down over time due to heat and wear, manufacturers recommend replacing them at set intervals. Parts that are close to their expected failure are also replaced, to avoid a failure while the vehicle is in operation.
Adjustments - as vehicle components wear, they may need adjustment over time. Example: parking brake cable.
The completed services are usually recorded in a service book or digital service record upon completion of each service. A digital service record is an online record of a vehicle's maintenance history. A complete service history usually adds to the resale value of a vehicle.
Difference between major and full service: a major service is more comprehensive than a full service; although it covers all the same checks that a full service does, a major service will be more detailed and will include more replacements of wearable parts, such as pollen filters, and changing brake fluid if required.
As a guideline, minor car services are carried out every , and major car services every – or every twelve months, whichever comes first.
Scheduling
The actual schedule of car maintenance varies depending on the year, make, and model of a car, its driving conditions, and driver behavior. Carmakers recommend either a so-called extreme or an ideal service schedule, based on operating parameters such as:
the number of trips and distance traveled per trip per day
extreme hot or cold climate conditions
mountainous, dusty, or de-iced roads
heavy stop-and-go vs. long-distance cruising
towing a trailer or other heavy load
Service advisers in dealerships and independent shops recommend service intervals, which often fall between the ideal and extreme service schedules.
In addition, drivers may be penalized for not regularly servicing their cars. For example, in many U.S. states, a car has to pass a safety inspection test every year or two years to remain legal, and drivers can incur fines for continuing to drive a car that has failed.
Common maintenance
Maintenance tasks commonly carried out during a motor vehicle service include:
Change the engine oil
Replace the oil filter
Replace the air filter
Replace the fuel filter
Replace the cabin or a/c filter
Replace the spark plugs
Check level and refill brake fluid/clutch fluid
Check brake pads/liners and brake discs/drums, and replace if worn out
Check level and refill windshield washer fluid
Check coolant hoses
Check the charging systems
Check the battery
Check level and refill power steering fluid
Check level and refill automatic/manual transmission fluid
Check suspension components (shocks/struts, etc.)
Check steering components (inner/outer tie rods)
Grease and lubricate components
Inspect and replace the timing belt or timing chain if needed
Check condition of the tires
Rotate tires
Check for proper operation of all lights, wipers, etc.
Check for any error codes in the ECU and take corrective action.
Use a scan tool to read trouble codes.
Mechanical parts whose condition may cause the car to break down or prove unsafe for the road are also noted and advised upon.
In the United Kingdom, a few parts that are not inspected in the MOT test are inspected and advised upon during a service inspection, including the clutch, gearbox, car battery, and engine components (inspections that go beyond the MOT).
| Technology | Concepts of ground transport | null |
1655082 | https://en.wikipedia.org/wiki/Moons%20of%20Mars | Moons of Mars | The two moons of Mars are Phobos and Deimos. They are irregular in shape. Both were discovered by American astronomer Asaph Hall in August 1877 and are named after the Greek mythological twin characters Phobos (fear and panic) and Deimos (terror and dread) who accompanied their father Ares (Mars in Roman mythology, hence the name of the planet) into battle.
Compared to Earth's Moon, Phobos and Deimos are small. Phobos has a diameter of 22.2 km (13.8 mi) and a mass of 1.08×10^16 kg, while Deimos measures 12.6 km (7.8 mi) across, with a mass of 1.5×10^15 kg. Phobos orbits closer to Mars, with a semi-major axis of and an orbital period of 7.66 hours; while Deimos orbits farther with a semi-major axis of and an orbital period of 30.35 hours.
Two major hypotheses have emerged as to the origin of the moons: The first suggests that they originated from Mars itself, perhaps from a giant impact event suggested to have created the Martian dichotomy and the Borealis Basin. The second suggests that they are captured asteroids. Both hypotheses are compatible with current data, though upcoming sample return missions may be able to distinguish which hypothesis is correct.
History
Early speculation
Speculation about the existence of moons of Mars began once the moons of Jupiter were discovered. When Galileo Galilei (1564–1642), to conceal a report that he had observed two bumps on the sides of Saturn (later discovered to be its rings), used the anagram smaismrmilmepoetaleumibunenugttauiras for Altissimum planetam tergeminum observavi ("I have observed the most distant planet to have a triple form"), Johannes Kepler (1571–1630) misinterpreted it to mean Salve umbistineum geminatum Martia proles (Hello, furious twins, sons of Mars).
Perhaps inspired by Kepler (and quoting Kepler's third law of planetary motion), Jonathan Swift's satire Gulliver's Travels (1726) refers to two moons in Part 3, Chapter 3 (the "Voyage to Laputa"), in which Laputa's astronomers are described as having discovered two satellites of Mars orbiting at distances of 3 and 5 Martian diameters with periods of 10 and 21.5 hours. Phobos and Deimos (both found in 1877, more than a century after Swift's novel) have actual orbital distances of 1.4 and 3.5 Martian diameters, and their respective orbital periods are 7.66 and 30.35 hours. In the 20th century, V. G. Perminov, a spacecraft designer of early Soviet Mars and Venus spacecraft, speculated Swift found and deciphered records that Martians left on Earth. However, the view of most astronomers is that Swift was simply employing a common argument of the time, that as the inner planets Venus and Mercury had no satellites, Earth had one and Jupiter had four (known at the time), that Mars by analogy must have two. Furthermore, as they had not yet been discovered, it was reasoned that they must be small and close to Mars. This would lead Swift to make a roughly accurate estimate of their orbital distances and revolution periods. In addition, Swift could have been helped in his calculations by his friend, the mathematician John Arbuthnot.
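The coincidence can be checked against Kepler's third law (period squared proportional to semi-major axis cubed) using the periods quoted above. A quick sketch in Java, assuming the commonly cited semi-major axes of roughly 9,376 km for Phobos and 23,463 km for Deimos (values that do not appear in the text):

```java
public class KeplerCheck {
    // Semi-major axes in kilometres (assumed standard values).
    static final double A_PHOBOS = 9376.0;
    static final double A_DEIMOS = 23463.0;
    // Orbital periods in hours, from the text.
    static final double T_PHOBOS = 7.66;
    static final double T_DEIMOS = 30.35;

    // Kepler's third law: T^2 is proportional to a^3, so the period ratio
    // T_D / T_P should equal (a_D / a_P)^(3/2).
    static double predictedRatio() {
        return Math.pow(A_DEIMOS / A_PHOBOS, 1.5);
    }

    static double observedRatio() {
        return T_DEIMOS / T_PHOBOS;
    }

    public static void main(String[] args) {
        // Both ratios come out near 3.96, as Kepler's law requires.
        System.out.printf("predicted %.2f, observed %.2f%n",
                predictedRatio(), observedRatio());
    }
}
```

Swift's fictional values (3 and 5 diameters, 10 and 21.5 hours) satisfy the same law only roughly, which fits the view that his figures were an educated guess rather than a calculation.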
Voltaire's 1752 short story "Micromégas", about an alien visitor to Earth, also refers to two moons of Mars. Voltaire was presumably influenced by Swift. In recognition of these literary references, two craters on Deimos are named Swift and Voltaire, while on Phobos there is one named regio, Laputa Regio, and one named planitia, Lagado Planitia, both of which are named after places in Gulliver's Travels (the fictional Laputa, a flying island, and Lagado, imaginary capital of the fictional nation Balnibarbi). Many of the craters on Phobos are also named after characters in Gulliver's Travels.
Discovery
Asaph Hall discovered Deimos on 12 August 1877 at about 07:48 UTC and Phobos on 18 August 1877, at the US Naval Observatory (the Old Naval Observatory in Foggy Bottom) in Washington, D.C., at about 09:14 GMT (contemporary sources, using the pre-1925 astronomical convention that began the day at noon, give the time of discovery as 11 August 14:40 and 17 August 16:06 Washington mean time respectively). At the time, he was deliberately searching for Martian moons. Hall had previously seen what appeared to be a Martian moon on 10 August, but due to bad weather, he could not definitively identify them until later.
Hall recorded his discovery of Phobos in his notebook.
The telescope used for the discovery was the 26-inch (66 cm) refractor (telescope with a lens) then located at Foggy Bottom. In 1893 the lens was remounted and put in a new dome, where it remains into the 21st century.
The names, originally spelled Phobus and Deimus, respectively, were suggested by Henry Madan (1838–1901), Science Master of Eton, from Book XV of the Iliad, where Ares summons Fear and Fright. The granddaughter of Henry Madan's brother Falconer Madan was Venetia Burney, who first suggested the name of Pluto.
Mars moon hoax
In 1959, Walter Scott Houston perpetrated a celebrated April Fool's hoax in the April edition of the Great Plains Observer, claiming that "Dr. Arthur Hayall of the University of the Sierras reports that the moons of Mars are actually artificial satellites". Both Dr. Hayall and the University of the Sierras were fictitious. The hoax gained worldwide attention when Houston's claim was repeated in earnest by a Soviet scientist, Iosif Shklovsky, who, based on a later-disproven density estimate, suggested Phobos was a hollow metal shell.
Recent surveys
Searches have been conducted for additional satellites. In 2003, Scott S. Sheppard and David C. Jewitt surveyed nearly the entire Hill sphere of Mars for irregular satellites. However, scattered light from Mars prevented them from searching the inner few arcminutes where the satellites Phobos and Deimos reside. No new satellites were found to an apparent limiting red magnitude of 23.5, which corresponds to radii of about 0.09 km using an albedo of 0.07.
Characteristics
If viewed from Mars's surface near its equator, a full Phobos would look about one-third as big as a full moon on Earth. It has an angular diameter of between 8' (rising) and 12' (overhead). Due to its close orbit, it would look smaller when the observer is further away from the Martian equator until it completely sinks below the horizon as the observer travels closer to the poles; thus Phobos is not visible from Mars's polar ice caps. Deimos would look more like a bright star or planet (only slightly bigger than how Venus looks from Earth) for an observer on Mars. It has an angular diameter of about 2'. The Sun's angular diameter as seen from Mars, by contrast, is about 21'. Thus there are no total solar eclipses on Mars as the moons are far too small to completely cover the Sun. On the other hand, total lunar eclipses of Phobos happen almost every night.
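The angular sizes quoted above follow from simple geometry: a body of diameter d at distance D subtends an angle of 2·atan(d / 2D). A sketch in Java, assuming orbital radii of about 9,376 km (Phobos) and 23,463 km (Deimos) and a Martian radius of about 3,390 km (none of these figures appear in the text):

```java
public class AngularSize {
    // Convert an angle in radians to arcminutes.
    static double toArcmin(double rad) {
        return Math.toDegrees(rad) * 60.0;
    }

    // Apparent angular diameter (in arcminutes) of a body of diameter d
    // seen from distance dist, both in kilometres.
    static double angularDiameter(double d, double dist) {
        return toArcmin(2.0 * Math.atan(d / (2.0 * dist)));
    }

    public static void main(String[] args) {
        double marsRadius = 3390.0;  // assumed value, km
        // Distance from an equatorial observer to a moon directly overhead
        // is the orbital radius minus the planet's radius.
        double phobos = angularDiameter(22.2, 9376.0 - marsRadius);
        double deimos = angularDiameter(12.6, 23463.0 - marsRadius);
        System.out.printf("Phobos overhead: %.1f', Deimos overhead: %.1f'%n",
                phobos, deimos);
    }
}
```

This reproduces the roughly 12' (Phobos overhead) and 2' (Deimos) figures given in the text; at the horizon, the extra distance to Phobos accounts for its smaller 8' rising size.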
The motions of Phobos and Deimos would appear very different from that of Earth's Moon. Speedy Phobos rises in the west, sets in the east, and rises again in just eleven hours, while Deimos, being only just outside synchronous orbit, rises as expected in the east but very slowly. Despite its 30-hour orbit, it takes 2.7 days to set in the west as it slowly falls behind the rotation of Mars.
Both moons are tidally locked, always presenting the same face towards Mars. Since Phobos orbits Mars faster than the planet itself rotates, tidal forces are slowly but steadily decreasing its orbital radius. At some point in the future, when it falls within the Roche limit, Phobos will be broken up by these tidal forces and either crash into Mars or form a ring. Several strings of craters on the Martian surface, inclined further from the equator the older they are, suggest that there may have been other small moons that suffered the fate expected of Phobos, and that the Martian crust as a whole shifted between these events. Deimos, on the other hand, is far enough away that its orbit is being slowly boosted instead, akin to Earth's Moon.
Orbital details
March 5, 2024: NASA released images of transits of the moon Deimos, the moon Phobos and the planet Mercury as viewed by the Perseverance rover on the planet Mars.
Origin
The origin of the Martian moons is still controversial. Phobos and Deimos both have much in common with carbonaceous C-type asteroids, with spectra, albedo, and density very similar to those of C- or D-type asteroids. Based on their similarity, one hypothesis is that both moons may be captured main-belt asteroids. Both moons have very circular orbits which lie almost exactly in Mars's equatorial plane, and hence a capture origin requires a mechanism for circularizing the initially highly eccentric orbit, and adjusting its inclination into the equatorial plane, most probably by a combination of atmospheric drag and tidal forces, although it is not clear that sufficient time is available for this to occur for Deimos. Capture also requires dissipation of energy. The current atmosphere of Mars is too thin to capture a Phobos-sized object by atmospheric braking. Geoffrey Landis has pointed out that the capture could have occurred if the original body was a binary asteroid that separated under tidal forces.
Phobos could be a second-generation Solar System object that coalesced in orbit after Mars formed, rather than forming concurrently out of the same birth cloud as Mars.
Another hypothesis is that Mars was once surrounded by many Phobos- and Deimos-sized bodies, perhaps ejected into orbit around it by a collision with a large planetesimal. The high porosity of the interior of Phobos (based on the density of 1.88 g/cm3, voids are estimated to comprise 25 to 35 percent of Phobos' volume) is inconsistent with an asteroidal origin. Observations of Phobos in the thermal infrared suggest a composition containing mainly phyllosilicates, which are well known from the surface of Mars. The spectra are distinct from those of all classes of chondrite meteorites, again pointing away from an asteroidal origin. Both sets of findings support an origin of Phobos from material ejected by an impact on Mars that reaccreted in Martian orbit, similar to the prevailing theory for the origin of Earth's moon.
The moons of Mars may have started with a huge collision with a protoplanet one third the mass of Mars that formed a ring around Mars. The inner part of the ring formed a large moon. Gravitational interactions between this moon and the outer ring formed Phobos and Deimos. Later, the large moon crashed into Mars, but the two small moons remained in orbit. This theory agrees with the fine-grained surface of the moons and their high porosity. The outer disk would create fine-grained material. Simulations suggest the object colliding with Mars had to be within the size range of Ceres and Vesta because a larger impact would have created a more massive disc and moons that would have prevented the survival of tiny moons like Phobos and Deimos.
Most recently, Amirhossein Bagheri and his colleagues from ETH Zurich and the US Naval Observatory proposed a new hypothesis on the origin of the moons. By analyzing seismic and orbital data from the Mars InSight mission and other missions, they proposed that the moons were born from the disruption of a common parent body around 1 to 2.7 billion years ago. The common progenitor of Phobos and Deimos was most probably hit by another object and shattered to form Phobos and Deimos. However, a recent paper suggests it is unlikely that Phobos and Deimos split directly from a single ancestral moon: N-body simulations show that this scenario should result in an impact between the two moons, producing a debris ring within about 10^4 years.
Another suggestion is that Mars was hit by an object from beyond the orbit of Saturn or Neptune, about 3% the mass of the planet and consisting of at least 30% and up to 70% water ice. This would create a disc around the planet with large amounts of water that cooled it down and changed the chemical composition of the rocks, likely producing a type of minerals called phyllosilicates.
Exploration
Past attempts and proposals
While many Martian probes provided images and other data about Phobos and Deimos, only a few were dedicated to these satellites and intended to perform a flyby of, or landing on, their surfaces.
Two probes under the Soviet Phobos program were successfully launched in 1988, but neither conducted the intended jumping landings on Phobos and Deimos due to failures (although Phobos 2 successfully photographed Phobos). The post-Soviet Russian Fobos-Grunt probe was intended to be the first sample return mission from Phobos, but a rocket failure left it stranded in Earth orbit in 2011. Efforts to reactivate the craft were unsuccessful, and it fell back to Earth in an uncontrolled re-entry on 15 January 2012, over the Pacific Ocean, west of Chile.
In 1997 and 1998, the Aladdin mission was selected as a finalist in the NASA Discovery Program. The plan was to visit both Phobos and Deimos, and launch projectiles at the satellites. The probe would collect the ejecta as it performed a slow flyby. These samples would be returned to Earth for study three years later. Ultimately, NASA rejected this proposal in favor of MESSENGER, a probe to Mercury.
In 2007, the European Space Agency and EADS Astrium proposed and developed a mission to Phobos in 2016 with a lander and sample return, but this mission was never flown. The Canadian Space Agency has been considering the Phobos Reconnaissance and International Mars Exploration (PRIME) mission to Phobos, with an orbiter and lander, since 2007. Since 2013, NASA has developed the Phobos Surveyor mission concept with an orbiter and a small rover. NASA's PADME mission was designed to conduct multiple flybys of the Martian moons, but was not chosen for development. NASA also assessed OSIRIS-REx II, a concept mission for a sample return from Phobos. Another sample-return mission from Deimos, called Gulliver, has been conceptualized.
Current proposals
JAXA plans to launch Martian Moons eXploration (MMX) mission in 2026 to bring back the first samples from Phobos. The spacecraft will enter orbit around Mars, then transfer to Phobos, and land once or twice and gather sand-like regolith particles using a simple pneumatic system. The lander mission aims to retrieve a minimum of samples. The spacecraft will then take off from Phobos and make several flybys of the smaller moon Deimos before sending the Return Module back to Earth, arriving in July 2029.
Gallery
| Physical sciences | Solar System | Astronomy |
1655191 | https://en.wikipedia.org/wiki/Task%20%28computing%29 | Task (computing) | In computing, a task is a unit of execution or a unit of work. The term is ambiguous; precise alternative terms include process, light-weight process, thread (for execution), step, request, or query (for work). In the adjacent diagram, there are queues of incoming work to do and outgoing completed work, and a thread pool of threads to perform this work. Either the work units themselves or the threads that perform the work can be referred to as "tasks", and these can be referred to respectively as requests/responses/threads, incoming tasks/completed tasks/threads (as illustrated), or requests/responses/tasks.
Terminology
In the sense of "unit of execution", in some operating systems, a task is synonymous with a process, and in others with a thread. In non-interactive execution (batch processing), a task is a unit of execution within a job, with the task itself typically a process. The term "multitasking" primarily refers to the processing sense – multiple tasks executing at the same time – but has nuances of the work sense of multiple tasks being performed at the same time.
In the sense of "unit of work", in a job (meaning "one-off piece of work") a task can correspond to a single step (the step itself, not the execution thereof), while in batch processing individual tasks can correspond to a single step of processing a single item in a batch, or to a single step of processing all items in the batch. In online systems, tasks most commonly correspond to a single request (in request–response architectures) or a query (in information retrieval), either a single stage of handling, or the whole system-wide handling.
Examples
In the Java programming language, these two concepts (unit of work and unit of execution) are conflated when working directly with threads, but clearly distinguished in the Executors framework:
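In the Executors framework, the distinction is explicit: a Runnable or Callable is the unit of work (what to do), while the ExecutorService owns the threads, the units of execution, that carry it out. A minimal sketch:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class TaskDemo {
    public static int runTask() {
        // The unit of work: a Callable describing *what* is to be done.
        Callable<Integer> work = () -> 6 * 7;

        // The units of execution: a pool of threads that perform the work.
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            // Submitting decouples the work from whichever thread runs it.
            Future<Integer> result = pool.submit(work);
            return result.get();
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) {
        System.out.println(runTask());
    }
}
```

Either the Callable or the pool's worker threads might colloquially be called "tasks", which is exactly the ambiguity the article describes.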
IBM terminology
IBM's use of the term has been influential, though it also underlines the term's ambiguity: in IBM terminology, "task" has dozens of specific meanings, including:
A unit of work representing one of the steps in a process.
A unit of work to be accomplished by a device or process.
A process and the procedures that run the process.
A set of actions designed to achieve a particular result. A task is performed on a set of targets on a specific schedule.
A unit of computation. In a parallel job, two or more concurrent tasks work together through message passing and shared memory. Although it is common to allocate one task per physical or logical processor, the terms "task" and "processor" are not interchangeable.
An activity that has business value, is initiated by a user, and is performed by software.
In z/OS specifically, it is defined precisely as:
"In a multiprogramming or multiprocessing environment, one or more sequences of instructions treated by a control program as an element of work to be accomplished by a computer."
The term task in OS/360 through z/OS is roughly equivalent to light-weight process; the tasks in a job step share an address space. However, in MVS/ESA through z/OS, a task or Service Request Block (SRB) may have access to other address spaces via its access list.
Linux kernel
The term task is used in the Linux kernel (at least since v2.6.13, up to and including v4.8) to refer to a unit of execution, which may share various system resources with other tasks on the system. Depending on the level of sharing, the task may be regarded as a conventional thread or process. Tasks are brought into existence using the clone() system call, where a user can specify the desired level of resource sharing.
History
The term task for a part of a job dates to multiprogramming in the early 1960s; examples of this usage appear as early as 1961.
The term was popularized with the introduction of OS/360 (announced 1964), which featured Multiprogramming with a Fixed number of Tasks (MFT) and Multiprogramming with a Variable number of Tasks (MVT). In this case tasks were identified with light-weight processes, a job consisted of a number of tasks, and, later, tasks could have sub-tasks (in modern terminology, child processes).
Today the term "task" is used very ambiguously. For example, the Windows Task Manager manages (running) processes, while the Windows Task Scheduler schedules programs to execute in the future (traditionally the role of a job scheduler) and uses the .job extension. By contrast, the term "task queue" is commonly used in the sense of "units of work".
| Technology | Operating systems | null |
1655200 | https://en.wikipedia.org/wiki/Shading%20language | Shading language | A shading language is a graphics programming language adapted to programming shader effects. Shading languages usually consist of special data types like "vector", "matrix", "color" and "normal".
Offline rendering
Shading languages used in offline rendering tend to be close to natural language, so that no special knowledge of programming is required. Offline rendering aims to produce maximum-quality images, at the cost of greater time and compute than real-time rendering.
RenderMan Shading Language
The RenderMan Shading Language (RSL or SL, for short), defined in the RenderMan Interface Specification, is a common shading language for production-quality rendering. It is also one of the first shading languages ever implemented.
It defines six major shader types:
Light source shaders compute the color of light emitted from a point on a light source to a point on a target surface.
Surface shaders model the color and position of points on an object's surface, based on incoming light and the object's physical properties.
Displacement shaders manipulate surface geometry independent of color.
Deformation shaders transform the entire space. Only one RenderMan implementation, the AIR renderer by SiTex Graphics, implemented this shader type, supporting only a single linear transformation applied to the space.
Volume shaders manipulate the color of light as it passes through a volume. They create effects such as fog.
Imager shaders describe a color transformation to final pixel values. This is like an image filter, except the imager shader operates on data prior to quantization. Such data have more dynamic range and color resolution than can be displayed on a typical output device.
Houdini VEX Shading Language
Houdini VEX (Vector Expressions) shading language (often abbreviated to "VEX") is closely modeled after RenderMan. However, its integration into a complete 3D package means that the shader writer can access the information inside the shader, a feature that is not usually available in a rendering context. The language differences between RSL and VEX are mainly syntactic, in addition to differences in the names of several shadeops.
Gelato Shading Language
Gelato's shading language, like Houdini's VEX, is closely modeled after RenderMan. The differences between Gelato Shading Language and RSL are mainly syntactical — Gelato uses semicolons instead of commas to separate arguments in function definitions and a few shadeops have different names and parameters.
Open Shading Language
Open Shading Language (OSL) was developed by Sony Pictures Imageworks for use in its Autodesk Arnold Renderer. It is also used by Blender's Cycles render engine. OSL's surface and volume shaders define how surfaces or volumes scatter light in a way that allows for importance sampling; thus, it is well suited for physically-based renderers that support ray tracing and global illumination.
Real-time rendering
Shading languages for real-time rendering are now widespread. They provide both higher hardware abstraction and a more flexible programming model than previous paradigms, which hardcoded transformation and shading equations. They deliver more control and richer content with less overhead.
Shaders that are designed to be executed directly on the GPU became useful for high-throughput general processing because of their stream programming model; this led to the development of compute shaders running on similar hardware (see also: GPGPU).
Historically, a few such languages dominated the market; they are described below.
ARB assembly language
The OpenGL Architecture Review Board established the ARB assembly language in 2002 as a standard low-level instruction set for programmable graphics processors.
High-level OpenGL shading languages often compile to ARB assembly for loading and execution. Unlike high-level shading languages, ARB assembly does not support control flow or branching. However, it continues to be used when cross-GPU portability is required.
OpenGL shading language
Also known as GLSL or glslang, this standardized shading language is meant to be used with OpenGL.
The language unifies vertex and fragment processing in a single instruction set, allowing conditional loops and branches. GLSL was preceded by the ARB assembly language.
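As an illustration of GLSL's C-like syntax, built-in vector types, and support for branching, a minimal fragment shader (hypothetical, for illustration only) that optionally converts a texture sample to grayscale:

```glsl
#version 330 core

uniform sampler2D tex;    // input texture
uniform bool grayscale;   // branch condition set by the application

in vec2 uv;               // interpolated texture coordinate
out vec4 fragColor;       // resulting pixel colour

void main() {
    vec4 c = texture(tex, uv);
    if (grayscale) {
        // Simple luminance conversion using standard Rec. 709 weights.
        float y = dot(c.rgb, vec3(0.2126, 0.7152, 0.0722));
        c = vec4(vec3(y), c.a);
    }
    fragColor = c;
}
```

The `if` branch and the `vec2`/`vec3`/`vec4` types show the conditional control flow and special data types the surrounding text describes.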
Cg programming language
The Cg language, developed by Nvidia, was designed for easy and efficient production pipeline integration. It features API independence and comes with many free tools to improve asset management. Development of Cg was stopped in 2012, and the language is now deprecated.
DirectX Shader Assembly Language
The shader assembly language in Direct3D 8 and 9 is the main programming language for vertex and pixel shaders in Shader Model 1.0/1.1, 2.0, and 3.0. It is a direct representation of the intermediate shader bytecode which is passed to the graphics driver for execution.
The shader assembly language cannot be directly used to program unified Shader Model 4.0, 4.1, 5.0, and 5.1, although it retains its function as a representation of the intermediate bytecode for debug purposes.
DirectX High-Level Shader Language
The High-Level Shading Language (HLSL) is a C-style shader language for DirectX 9 and higher and Xbox game consoles. It is related to Nvidia's Cg, but is only supported by DirectX and Xbox. HLSL programs are compiled into bytecode equivalent of DirectX shader assembly language.
HLSL was introduced as an optional alternative to the shader assembly language in Direct3D 9, but became a requirement in Direct3D 10 and higher, where the shader assembly language is deprecated.
Adobe Pixel Bender and Adobe Graphics Assembly Language
Adobe Systems added Pixel Bender as part of the Adobe Flash 10 API. Pixel Bender could only process pixel but not 3D-vertex data. Flash 11 introduced an entirely new 3D API called Stage3D, which uses its own shading language called Adobe Graphics Assembly Language (AGAL), which offers full 3D acceleration support. GPU acceleration for Pixel Bender was removed in Flash 11.8.
AGAL is a low-level but platform-independent shading language, which can be compiled, for example, to GLSL.
PlayStation Shader Language
Sony announced PlayStation Shader Language (PSSL) as a shading language similar to Cg/HLSL, but specific to the PlayStation 4. PSSL is said to be largely compatible with the HLSL shader language from DirectX 12, but with additional features for the PS4 and PS5 platforms.
Metal Shading Language
Apple has created a low-level graphics API, called Metal, which runs on most Macs made since 2012, iPhones since the 5S, and iPads since the iPad Air. Metal has its own shading language called Metal Shading Language (MSL), which is based on C++14 and implemented using clang and LLVM. MSL unifies vertex, fragment and compute processing.
WebGPU Shading Language
WebGPU Shading Language (WGSL) is the shader language for WebGPU. That is, an application using the WebGPU API uses WGSL to express the programs, known as shaders, that run on the GPU.
Translation
To port shaders from one shading language to another, a few approaches are used:
Define a common interface. For example, Cg/HLSL, GLSL, and MSL all implement C preprocessor macros, so it is possible to wrap all the different operations into a common interface. Valve's Source 2 and NVIDIA's FXAA 3.11 do this.
Translate one language to the other. For example, DirectX bytecode can be partially converted to GLSL via HLSLcc, and several tools for converting GLSL to HLSL such as ANGLE and HLSL2GLSL exist.
Define an intermediate language. SPIR-V is designed partially for this purpose. It can be generated from HLSL or GLSL, and be decompiled into HLSL, GLSL, or MSL.
Family medicine

Family medicine is a medical specialty within primary care that provides continuing and comprehensive health care for the individual and family across all ages, genders, diseases, and parts of the body. The specialist, who is usually a primary care physician, is named a family physician. It is often referred to as general practice and a practitioner as a general practitioner. Historically, their role was once performed by any doctor with qualifications from a medical school and who works in the community. However, since the 1950s, family medicine / general practice has become a specialty in its own right, with specific training requirements tailored to each country. The names of the specialty emphasize its holistic nature and/or its roots in the family. It is based on knowledge of the patient in the context of the family and the community, focusing on disease prevention and health promotion. According to the World Organization of Family Doctors (WONCA), the aim of family medicine is "promoting personal, comprehensive and continuing care for the individual in the context of the family and the community". The issues of values underlying this practice are usually known as primary care ethics.
Scope of practice
Family physicians in the United States must hold either an M.D. or a D.O. degree. Physicians who specialize in family medicine must successfully complete an accredited three- or four-year family medicine residency in the United States in addition to their medical degree. They are then eligible to sit for a board certification examination, which is now required by most hospitals and health plans. The American Board of Family Medicine requires its diplomates to maintain certification through an ongoing process of continuing medical education, medical knowledge review, patient care oversight through chart audits, practice-based learning through quality improvement projects, and retaking the board certification examination every 7 to 10 years. The American Osteopathic Board of Family Physicians requires its diplomates to maintain certification and undergo recertification every 8 years.
Physicians certified in family medicine in Canada are certified through the College of Family Physicians of Canada, after two years of additional education. Continuing education is also a requirement for maintenance of certification.
The term "family medicine" or "family physician" is used in the United States, Mexico, South America, many European and Asian countries. In Sweden, certification in family medicine requires five years working with a tutor, after the medical degree. In India, those who want to specialize in family medicine must complete a three-year family medicine residency, after their medical degree (MBBS). They are awarded either a D.N.B. or an M.D. in family medicine. Similar systems exist in other countries.
General practice is the term used in many other nations, such as the United Kingdom, Australia, New Zealand, and South Africa. Such services are provided by general practitioners. The term primary care in the UK may also include services provided by community pharmacy, optometrist, dental surgery and community hearing care providers. The balance of care between primary care and secondary care - which usually refers to hospital-based services - varies from place to place, and with time. In many countries there are initiatives to move services out of hospitals into the community, in the expectation that this will save money and be more convenient.
Family physicians deliver a range of acute, chronic, and preventive medical care services. In addition to diagnosing and treating illness, they also provide preventive care, including routine checkups, health-risk assessments, immunization and screening tests, and personalized counselling on maintaining a healthy lifestyle. Family physicians also manage chronic illness, often coordinating care provided by other sub-specialists. Family doctors also practice safety-netting, which involves follow-up assessments for uncertain diagnoses associated with symptoms that could be innocuous but may also be a sign of serious illness. Many American family physicians deliver babies and provide prenatal care. In the U.S., family physicians treat more patients with back pain than any other physician sub-specialist, and about as many as orthopedists and neurosurgeons combined.
Family medicine and family physicians play a vital role in the healthcare system of a country. In the U.S. for example, nearly one in four of all office visits are made to family physicians. That is 208 million office visits each year — nearly 83 million more than the next largest medical specialty. Today, family physicians provide more care for America's underserved and rural populations than any other medical specialty.
In Canada
Education and training
In Canada, aspiring family physicians are expected to complete a residency in family medicine from an accredited university after obtaining their Doctor of Medicine degree. Although the residency usually has a duration of two years, graduates may apply to complete a third year, leading to a certification from the College of Family Physicians of Canada in disciplines such as emergency medicine, palliative care, care of the elderly, sports and exercise medicine, and women's health, amongst others.
In some institutions, such as McGill University in Montreal, graduates from family medicine residency programs are eligible to complete a master's degree and a Doctor of Philosophy (Ph.D.) in family medicine, which predominantly consists of a research-oriented program.
In the United States
History of medical family practice
Concern for family health and medicine in the United States existed as far back as the early 1930s and 40s. The American public health advocate Bailey Barton Burritt was labeled "the father of the family health movement" by The New York Times in 1944.
Following World War II, two main concerns shaped the advent of family medicine. First, medical specialties and subspecialties increased in popularity, having an adverse effect on the number of physicians in general practice. At the same time, many medical advances were being made and there was concern within the "general practitioner" or "GP" population that four years of medical school plus a one-year internship was no longer adequate preparation for the breadth of medical knowledge required of the profession. Many of these doctors wanted to see a residency program added to their training; this would not only give them additional training, knowledge, and prestige but would allow for board certification, which was increasingly required to gain hospital privileges. In February 1969, family medicine (then known as family practice) was recognized as a distinct specialty in the U.S. It was the twentieth specialty to be recognized.
Education and training
Family physicians complete an undergraduate degree, medical school, and three more years of specialized medical residency training in family medicine. Their residency training includes rotations in internal medicine, pediatrics, obstetrics-gynecology, psychiatry, surgery, emergency medicine, and geriatrics, in addition to electives in a wide range of other disciplines. Residents also must provide care for a panel of continuity patients in an outpatient "model practice" for the entire period of residency. The specialty focuses on treating the whole person, acknowledging the effects of all outside influences, through all stages of life. Family physicians will see anyone with any problem, but are experts in common problems. Many family physicians deliver babies in addition to taking care of patients of all ages.
In order to become board certified, family physicians must complete a residency in family medicine, possess a full and unrestricted medical license, and take a written cognitive examination. Between 2003 and 2009, the process for maintenance of board certification in family medicine was changed (as it was for all other American specialty boards) to a series of yearly tests on differing areas. The American Board of Family Medicine, as well as other specialty boards, requires additional participation in continuous learning and self-assessment to enhance clinical knowledge, expertise and skills. The Board has created a program called the "Maintenance of Certification Program for Family Physicians" (MC-FP), which requires family physicians to continuously demonstrate proficiency in four areas of clinical practice: professionalism, self-assessment/lifelong learning, cognitive expertise, and performance in practice. Three hundred hours of continuing medical education within the prior six years is also required to be eligible to sit for the exam.
Family physicians may pursue fellowships in several fields, including adolescent medicine, geriatric medicine, sports medicine, sleep medicine, hospital medicine and hospice and palliative medicine. The American Board of Family Medicine and the American Osteopathic Board of Family Medicine both offer Certificates of Added Qualifications (CAQs) in each of these topics.
Shortage of family physicians
Many sources cite a shortage of family physicians (and also other primary care providers, i.e. internists, pediatricians, and general practitioners). The per capita supply of primary care physicians has increased about 1 percent per year since 1998. A recent decrease in the number of M.D. graduates pursuing a residency in primary care has been offset by the number of D.O. graduates and graduates of international medical schools (IMGs) who enter primary care residencies. Still, projections indicate that by 2020 the demand for family physicians will exceed their supply.
The number of students entering family medicine residency training has fallen from a high of 3,293 in 1998 to 1,172 in 2008, according to National Residency Matching Program data. Fifty-five family medicine residency programs have closed since 2000, while only 28 programs have opened.
In 2006, when the nation had 100,431 family physicians, a workforce report by the American Academy of Family Physicians indicated the United States would need 139,531 family physicians by 2020 to meet the need for primary medical care. To reach that figure 4,439 family physicians must complete their residencies each year, but currently, the nation is attracting only half the number of future family physicians that will be needed.
To address this shortage, leading family medicine organizations launched an initiative in 2018 to ensure that by 2030, 25% of combined US allopathic and osteopathic medical school seniors select family medicine as their specialty. The initiative is termed the "25 x 2030 Student Choice Collaborative", and the following eight family medicine organizations have committed resources to reaching this goal:
American Academy of Family Physicians
American Academy of Family Physicians Foundation
American Board of Family Medicine
American College of Osteopathic Family Physicians
Association of Departments of Family Medicine
Association of Family Medicine Residency Directors
North American Primary Care Research Group
Society of Teachers of Family Medicine
The waning interest in family medicine in the U.S. is likely due to several factors, including the lesser prestige associated with the specialty, the lower pay, the limited number of ACGME-approved fellowship opportunities, and the increasingly frustrating practice environment. Salaries for family physicians in the United States are lower than the average for physicians, at about $234,000. However, when faced with debt from medical school, most medical students opt for the higher-paying specialties. Potential ways to increase the number of medical students entering family practice include providing relief from medical education debt through loan-repayment programs and restructuring fee-for-service reimbursement for health care services. Family physicians are trained to manage acute and chronic health issues for an individual simultaneously, yet their appointment slots may average only ten minutes.
In addition to facing a shortage of personnel, physicians in family medicine experience some of the highest rates of burnout among medical specialties, at 47 percent.
Current practice
Most family physicians in the US practice in solo or small-group private practices or as hospital employees in practices of similar sizes owned by hospitals. However, the specialty is broad and allows for a variety of career options including education, emergency medicine or urgent care, inpatient medicine, international or wilderness medicine, public health, sports medicine, and research. Others choose to practice as consultants to various medical institutions, including insurance companies.
United Kingdom
History of general practice services
The pattern of services in the UK was largely established by the National Insurance Act 1911, which created the list system that grew out of the friendly societies across the country. Every patient was entitled to be on the list, or panel, of a general practitioner. In 1911 this applied only to those who paid National Insurance contributions. In 1938, 43% of the adult population was covered by a panel doctor. When the National Health Service was established in 1948 this was extended to the whole population. The practice was responsible for the patient record, which was kept in a "Lloyd George envelope" and was transferred if necessary when the patient changed practice. In the UK, unlike many other countries, patients do not normally have direct access to hospital consultants, and the GP controls access to secondary care.
Practices were generally small, often single handed, operating from the doctor's home and often with the doctor's wife acting as a receptionist. When the NHS was established in 1948 there were plans for the building of health centres, but few were built.
In 1953, general practitioners were estimated to be making between 12 and 30 home visits each day and seeing between 15 and 50 patients in their surgeries.
Current practice
Today, the services are provided under the General Medical Services Contract, which is regularly revised.
599 GP practices closed between 2010–11 and 2014–15, while 91 opened and average practice list size increased from 6,610 to 7,171. In 2016 there were 7,613 practices in England, 958 in Scotland, 454 in Wales and 349 in Northern Ireland. There were 7,435 practices in England and the average practice list size in June 2017 was 7,860. There were 1.35 million patients over 85. There has been a great deal of consolidation into larger practices, especially in England. Lakeside Healthcare was the largest practice in England in 2014, with 62 partners and more than 100,000 patients. Maintaining general practices in isolated communities has become very challenging, and calls on very different skills and behaviour from that required in large practices where there is increasing specialization. By 1 October 2018, 47 GP practices in England had a list size of 30,000 or more and the average list size had reached 8,420. In 2019 the average number of registered patients per GP in England rose by 56 over the 2018 figure, to 2,087.
The British Medical Association conducted a survey of GP premises in 2019. About half of the 1,011 respondents thought their surgeries were not suitable for present needs, and 78% said they would not be able to handle expected future demand.
Under the pressure of the coronavirus epidemic in 2020, general practice shifted very quickly to remote working, something which had been progressing very slowly up to that point. At the Hurley Group, Clare Gerada reported that "99% of all our work is now online", using a digital triage system linked to the patient's electronic patient record which processes up to 3,000 consultations per hour. Video calling is used to "see" patients when needed.
In 2019 according to NHS England, almost 90% of salaried GPs were working part-time.
England
The GP Forward View, published by NHS England in 2016, promised a £2.4 billion (14%) real-terms increase in the budget for general practice. Jeremy Hunt pledged to increase the number of doctors working in general practice by 5,000. There were 3,250 trainee places available in 2017. The GP Career Plus scheme is intended to retain GPs aged over 55 in the profession by providing flexible roles such as providing cover, carrying out specific work such as managing long-term conditions, or doing home visits. In July Simon Stevens announced a programme designed to recruit around 2,000 GPs from the EU and possibly New Zealand and Australia. According to NHS Improvement, a 1% deterioration in access to general practice can produce a 10% deterioration in emergency department figures.
GPs are increasingly employing pharmacists to manage the increasingly complex medication regimes of an aging population. In 2017 more than 1,061 practices were employing pharmacists, following the rollout of NHS England's Clinical Pharmacists in General Practice programme. There are also moves to employ care navigators, sometimes an enhanced role for a receptionist, to direct patients to different services such as pharmacy and physiotherapy if a doctor is not needed. In September 2017 270 trained care navigators covering 64,000 patients had been employed across Wakefield. It was estimated that they had saved 930 GP hours over a 10-month trial.
Four NHS trusts: Northumbria Healthcare NHS Foundation Trust; Yeovil District Hospital NHS Foundation Trust; Royal Wolverhampton NHS Trust; and Southern Health NHS Foundation Trust have taken over multiple GP practices in the interests of integration.
GP Federations have become popular among English general practitioners.
Consultations
According to the Local Government Association 57 million GP consultations in England in 2015 were for minor conditions and illnesses, 5.2 million of them for blocked noses. According to the King's Fund between 2014 and 2017 the number of telephone and face-to-face contacts between patients and GPs rose by 7.5% although GP numbers have stagnated. The mean consultation length in the UK has increased steadily over time from around 5 minutes in the 1950s to around 9.22 minutes in 2013–2014. This is shorter than the mean consultation length in a number of other developed countries around the world.
The proportion of patients in England waiting longer than seven days to see a GP rose from 12.8% in 2012 to 20% in 2017. There were 307 million GP appointments, about a million each working day, with more on Mondays, in the year from November 2017. 40% got a same-day appointment. 2.8 million patients, 10.3%, in October 2018, compared to 9.4% in November 2017, did not see the doctor until at least 21 days after they had booked their appointment, and 1.4 million waited for more than 28 days. More than a million people each month failed to turn up for their appointment.
Commercial providers are rare in the UK but a private GP service was established at Poole Road Medical Centre in Bournemouth in 2017 where patients can pay to skip waiting lists to see a doctor.
GP at Hand, an online service using Babylon Health's app, was launched in November 2017 by the Lillie Road Health Centre, a conventional GP practice in west London. It recruited 7000 new patients in its first month, of which 89.6% were between 20 and 45 years old. The service was widely criticized by GPs for cherry picking. Patients with long term medical conditions or who might need home visits were actively discouraged from joining the service. Richard Vautrey warned that it risked 'undermining the quality and continuity of care and further fragmenting the service provided to the public'.
The COVID-19 pandemic in the United Kingdom led to a sudden move to remote working. In March 2020 the proportion of telephone appointments increased by over 600%.
Patient satisfaction
85% of patients rated their overall experience of primary care as good in 2016, but practices run by limited companies operating on APMS contracts (a small minority) performed worse on four out of five key indicators: frequency of consulting a preferred doctor, ability to get a convenient appointment, rating of doctor communication skills, ease of contacting the practice by telephone, and overall experience.
Northern Ireland
There have been particularly acute problems in general practice in Northern Ireland as it has proved very difficult to recruit doctors in rural practices. The British Medical Association collected undated resignation letters in 2017 from GPs who threatened to leave the NHS and charge consultation fees. They demanded increased funding, more recruitment and improved computer systems.
A new GP contract was announced in June 2018 by the Northern Ireland Department of Health. It included funding for practice-based pharmacists, an extra £1 million for increased indemnity costs, £1.8 million because of population growth, and £1.5 million for premises upgrades.
Ireland
In Ireland there are about 2,500 General Practitioners working in group practices, primary care centres, single practices and health centres.
Australia
General Practice services in Australia are funded under the Medicare Benefits Scheme (MBS) which is a public health insurance scheme. Australians need a referral from the GP to be able to access specialist care. Most general practitioners work in a general practitioner practice (GPP) with other GPs supported by practice nurses and administrative staff. There is a move to incorporate other health professionals such as pharmacists in to general practice to provide an integrated multidisciplinary healthcare team to deliver primary care.
India
Family medicine (FM) came to be recognized as a medical specialty in India only in the late 1990s. According to the National Health Policy – 2002, there is an acute shortage of specialists in family medicine. As family physicians play a very important role in providing affordable and universal health care to people, the Government of India is now promoting the practice of family medicine by introducing post-graduate training through DNB (Diplomate of National Board) programs.
There is a severe shortage of postgraduate training seats, creating considerable hardship and a career bottleneck for newly qualified doctors graduating from medical school. Family medicine training seats should ideally fill this gap and allow more doctors to pursue family medicine careers. However, the uptake, awareness and development of this specialty remain slow.
Although family medicine is sometimes called general practice, they are not identical in India. A medical graduate who has successfully completed the Bachelor of Medicine, Bachelor of Surgery (MBBS), course and has been registered with Indian Medical Council or any state medical council is considered a general practitioner. A family physician, however, is a primary care physician who has completed specialist training in the discipline of family medicine.
The Medical Council of India requires three-year residency for family medicine specialty, leading to the award of Doctor of Medicine (MD) in Family Medicine or Diplomate of National Board (DNB) in Family Medicine.
The National Board of Examinations conducts family medicine residency programmes at the teaching hospitals that it accredits. On successful completion of a three-year residency, candidates are awarded Diplomate of National Board (Family Medicine). The curriculum of DNB (FM) comprises: (1) medicine and allied sciences; (2) surgery and allied sciences; (3) maternal and child health; (4) basic sciences and community health. During their three-year residency, candidates receive integrated inpatient and outpatient learning. They also receive field training at community health centres and clinics.
The Medical Council of India permits accredited medical colleges (medical schools) to conduct a similar residency programme in family medicine. On successful completion of three-year residency, candidates are awarded Doctor of Medicine (Family Medicine). A few of the AIIMS institutes have also started a course called MD in community and family medicine in recent years. Even though there is an acute shortage of qualified family physicians in India, further progress has been slow.
The Indian Medical Association's College of General Practitioners offers a one-year Diploma in Family Medicine (DFM), a distance education programme of the Postgraduate Institute of Medicine, University of Colombo, Sri Lanka, for doctors with a minimum of five years of experience in general practice. Since the Medical Council of India requires a three-year residency for the family medicine specialty, these diplomas are not recognized qualifications in India.
As India's need for primary and secondary levels of health care is enormous, medical educators have called for systemic changes to include family medicine in the undergraduate medical curriculum. Some projects like "Buzurgo Ka Humsafar" aid in the growing need for primary care by conducting social awareness workshops and adult vaccination camps.
Recently, the residency-trained family physicians have formed the Academy of Family Physicians of India (AFPI). AFPI is the academic association of family physicians with formal full-time residency training (DNB Family Medicine) in family medicine. Currently there are about two hundred family medicine residency training sites accredited by the National Board of Examinations India, providing around 700 training posts annually. However, various issues such as academic acceptance, accreditation, curriculum development, uniform training standards, faculty development, and research in primary care need urgent attention for family medicine to flourish as an academic specialty in India. The government of India declared family medicine a focus area of human resource development in the health sector in the National Health Policy 2002. There is ongoing discussion about employing multi-skilled doctors with the DNB family medicine qualification in specialist posts in the NRHM (National Rural Health Mission).
Three possible models of how family physicians will practise their specialty in India might evolve, namely (1) private practice, (2) practising at primary care clinics/hospitals, (3) practising as consultants at secondary/tertiary care hospitals.
British model
A group of 15 doctors based in Birmingham have set up a social enterprise company - Pathfinder Healthcare - which plans to build eight primary health centres in India on the British model of general practice. According to Dr Niti Pall, primary health care is very poorly developed in India. These centres will be run commercially. Patients will be charged ₹200 to 300 for an initial consultation, and prescribed only generic drugs, dispensed from attached pharmacies.
Japan
Family medicine was first recognized as a specialty in Japan in 2015, and there are currently approximately 500 certified family doctors. The Japanese government has made a commitment to increase the number of family doctors in an effort to improve the cost-effectiveness and quality of primary care in light of increasing health care costs. The Japan Primary Care Association (JPCA) is currently the largest academic association of family doctors in Japan. The JPCA family medicine training scheme consists of a three-year programme following the two-year internship. The Japanese Medical Specialty Board defines the standard of the specialty training programme for board-certified family doctors. Japan has a free-access healthcare system, meaning patients can bypass primary care services. In addition to family medicine specialists, Japan also has roughly 100,000 organ-specialist primary care clinics. The doctors working in these clinics do not typically have formal training in family medicine. In 2012, the mean consultation length in a family medicine clinic was 10.2 minutes. A literature review has recently been published detailing the context, structure, process, and outcome of family medicine in Japan.
Rift lake

A rift lake is a lake formed as a result of subsidence related to movement on faults within a rift zone, an area of extensional tectonics in the continental crust. They are often found within rift valleys and may be very deep. Rift lakes may be bounded by large steep cliffs along the fault margins.
Examples
Lake Baikal, in Siberia
Lake Balaton, in Hungary
The Dead Sea, on the border of Israel, Palestine and Jordan, a pull-apart basin, formed along the Dead Sea Transform.
Ebi Lake in China at the Dzungarian Gate on the border with Kazakhstan
Lake Elsinore, in the Elsinore Trough in Southern California
Lake Hazar, in Turkey
Lake Idaho, a Pliocene rift lake in Idaho
Lake Khuvsgul, northern Mongolia
Limagne, an infilled Paleogene rift lake in France
Lake Lockatong, a rift lake of Triassic age, formed in the Newark Basin.
Lake Malawi, part of East African Rift
The Orcadian Basin, in northern Scotland, had rift lakes that formed during the Middle Devonian.
Rift Valley lakes, eastern Africa
Lake Tanganyika, part of Albertine Rift
Lake Vostok, in Antarctica, may have formed in a rift setting
Þingvallavatn, in Iceland
Hand spinning

Spinning is an ancient textile art in which plant, animal or synthetic fibres are drawn out and twisted together to form yarn. For thousands of years, fibre was spun by hand using simple tools, the spindle and distaff. After the introduction of the spinning wheel in the 13th century, the output of individual spinners increased dramatically. Mass production later arose in the 18th century with the beginnings of the Industrial Revolution. Hand-spinning remains a popular handicraft.
Characteristics of spun yarn vary according to the material used, fibre length and alignment, quantity of fibre used, and degree of twist.
History
The origins of spinning fibre to make string or yarn are lost in time, but archaeological evidence in the form of representations of string skirts has been dated to the Upper Paleolithic era, some 20,000 years ago. There has also been a recent discovery of plied cord spun by Neanderthals, dating back 41,000–52,000 years. In the earliest type of spinning, tufts of animal hair or plant fibre are rolled down the thigh with the hand, and additional tufts are added as needed until the desired length of spun fibre is achieved. An advanced technique of thigh-spinning while simultaneously plying two singles is still used today in several cultures, such as in Chilkat weaving and Ravenstail weaving. In earlier practice of thigh-spinning, the fibre might be fastened to a stone which was twirled round until the yarn was sufficiently twisted, whereupon it was wound upon the stone and the process repeated over and over.
The next method of spinning yarn is with the spindle, a straight stick eight to twelve inches long on which the yarn is wound after twisting. At first the stick had a cleft or split in the top in which the thread was fixed. Later, a hook of bone was added to the upper end. The bunch of wool or plant fibres is held in the left hand. With the right hand the fibres are drawn out several inches and the end fastened securely in the slit or hook on the top of the spindle. A whirling motion is given to the spindle on the thigh or any convenient part of the body. The twisted yarn is then wound on to the upper part of the spindle. Another bunch of fibres is drawn out, the spindle is given another twirl, the yarn is wound on the spindle, and so on.
The distaff was used for holding the bunch of wool, flax, or other fibres. It was a short stick, on one end of which was loosely wound the raw material. The other end of the distaff was held in the hand, under the arm or thrust in the girdle of the spinner. When held thus, one hand was left free for drawing out the fibres.
A spindle containing a quantity of yarn rotates more easily and steadily, and continues spinning longer, than an empty one; hence, the next improvement was the addition of a weight called a spindle whorl at the bottom of the spindle. These whorls are discs of wood, stone, clay, or metal with a hole in the centre for the spindle, which keep the spindle steady and promote its rotation. Spindle whorls appeared in the Neolithic era. They allowed the spinner to slowly lower, or drop, the spindle as it was spinning, thus allowing a greater quantity of yarn to be made before it had to be wound onto the spindle, hence the name "drop spindle," which is now most commonly used for the hand spindle with whorl attached. The Scottish drop spindle is called fairsaid, farsadh, or dealgan.
Spinning wheel
The spinning wheel was possibly invented in the Islamic world by 1030. It later spread to China by 1090, and then spread from the Islamic world to Europe and India by the 13th century.
In medieval times, poor families had such a need for yarn to make their own cloth and clothes that practically all girls and unmarried women would keep busy spinning, and "spinster" became synonymous with an unmarried woman. Subsequent improvements to spinning wheels, and then mechanical methods, made hand-spinning increasingly uneconomic, but as late as the twentieth century hand-spinning remained widespread in poor countries; in conscious rejection of international industrialization, Gandhi was a notable practitioner. The hand spinning movement that he initiated as part of the Indian freedom struggle made the handwoven cloth known as "Khadi", made from handspun cotton yarn, world-famous. Women spinners of cotton yarn still work to produce handspun yarn for the weaving of Khadi in Ponduru, a village in South India.
A great wheel (also called a wool wheel, high wheel or walking wheel) is advantageous when using the long-draw technique to spin wool or cotton because the high ratio between the large wheel and the whorl (sheave) enables the spinner to turn the bobbin faster, thus significantly speeding up production.
A Saxony wheel (also called a flax wheel) or an upright wheel (also called a castle wheel) is invaluable when spinning flax to make linen. The ends of flax fibres tend to stick out from the thread unless wetted while being spun, so the spinner usually keeps a bowl of water handy when spinning flax. On these types of wheels both hands are free as the wheel is turned with a treadle rather than by hand, so the spinner can use one hand to draft the fibres and the other to wet them. These wheels can also be used to spin wool or cotton.
Industrial Revolution
Powered spinning, originally done by water or steam power but now done by electricity, is vastly faster than hand-spinning.
The spinning jenny, a multi-spool spinning wheel invented c. 1764 by James Hargreaves, dramatically reduced the amount of work needed to produce yarn of high consistency, with a single worker able to work eight or more spools at once. At roughly the same time, Richard Arkwright and a team of craftsmen developed the spinning frame, which produced a stronger thread than the spinning jenny. Too large to be operated by hand, a spinning frame powered by a waterwheel became the water frame.
In 1779, Samuel Crompton combined elements of the spinning jenny and water frame to create the spinning mule. This produced a stronger thread, and was suitable for mechanisation on a grand scale. A later development, from 1828/29, was ring spinning.
In the 20th century, new techniques including open-end (rotor) spinning were invented to produce yarns at rates in excess of 40 meters per second.
Characteristics of spun yarns
Materials
Yarn can be, and is, spun from a wide variety of materials, including natural fibres such as animal, plant, and mineral fibres, and synthetic fibres.
Twist and ply
The direction in which the yarn is spun is called twist. Yarns are characterized as S-twist or Z-twist according to the direction of spinning (see diagram). Tightness of twist is measured in TPI (twists per inch or turns per inch).
Two or more spun yarns may be twisted together, or plied, to form a thicker yarn. Generally, handspun singles are spun with a Z-twist, and plying is done with an S-twist. This cultural preference differs in some areas but is surprisingly widespread. It is important, however, to spin the singles in one direction and then ply them together in the opposite direction—in this way, the opposite-direction plying keeps the spun yarn from untwisting itself.
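The reason opposite-direction plying stabilizes a yarn can be sketched with a toy twist-accounting model (the function name and the turn-for-turn assumption are illustrative only, not a standard spinning formula):

```python
def residual_single_twist(singles_tpi: float, opposing_ply_tpi: float) -> float:
    """Toy model: plying in the direction opposite to the singles'
    twist removes twist from each single, roughly turn for turn.
    Pass a negative value to model plying in the SAME direction."""
    return singles_tpi - opposing_ply_tpi

# Z-spun singles at 10 TPI, plied with 4 TPI of S-twist:
# each single keeps about 6 TPI, and the ply twist locks the
# strands together so the yarn cannot untwist itself.
print(residual_single_twist(10, 4))   # 6

# Plying in the same direction would instead add twist,
# giving an overtwisted, kinking yarn:
print(residual_single_twist(10, -4))  # 14
```

Real twist transfer depends on fibre, diameter, and tension, but the sketch shows why a balanced yarn needs the two twists to run in opposite directions.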
Plying methods
Yarns can be made of two, three, four, or more plies, or may be used as singles without plying. Two-ply yarn can also be plied from both ends of one long strand of singles using a centre-pull ball, where one end feeds from within a ball of yarn while the other end feeds from the outside. "Andean plying", in which the single is first wound around one hand in a specific manner that allows unwinding both ends at once without tangling, is another way to ply smaller amounts of yarn. The name comes from a method used by Andean spinners to manage and splice unevenly matched singles being plied from multiple spindles. "Navajo plying", a.k.a. "chain-plying" is another method of producing a three-ply yarn, in which one strand of singles is looped around itself in a manner similar to crochet and the resulting three parallel strands twisted together. This method is often used to keep colours together on singles dyed in sequential colours. Cabled yarns are usually four-ply yarns made by plying two strands of two-ply yarn together in the direction opposite to the plying direction for the two-ply yarns.
Contemporary hand spinning
Hand-spinning is still an important skill in many traditional societies. Hobby or small scale artisan spinners spin their own yarn to control specific yarn qualities and produce yarn that is not widely available commercially. Sometimes these yarns are made available to non-spinners online and in local yarn stores. Handspinners also may spin for self-sufficiency, a sense of accomplishment, or a sense of connection to history and the land. In addition, they may take up spinning for its meditative qualities.
In the recent past, many new spinners have joined this ancient craft, innovating and creating new techniques. From using new dyeing methods before spinning, to mixing in novelty elements (Christmas garland, eccentric beads, money, etc.) that would not normally be found in traditional yarns, to creating and employing new techniques like coiling, the craft is constantly evolving and shifting.
To make various yarns, besides adding novelty elements, spinners can vary all the same things as in a machined yarn, i.e., the fibre, the preparation, the colour, the spinning technique, the direction of the twist, etc. A common misconception is that yarn spun from rolags may not be as strong, but the strength of a yarn is actually based on the length of hair fibre and the degree of twist. When working with shorter hairs, such as from llama or angora rabbit, the spinner may choose to integrate longer fibres, such as mohair, to prevent yarn breakage. Yarns made of shorter fibres are also given more twist than yarns of longer fibres, and are generally spun with the short draw technique.
The fibre can be dyed at any time, but is often dyed before carding or after the yarn has been spun.
Wool may be spun before or after washing, although excessive amounts of lanolin may make spinning difficult, especially when using a drop spindle. Careless washing may cause felting; when this happens prior to spinning, it often leaves the wool fibre unusable. In washing wool, the key things to avoid are excessive agitation and rapid temperature changes from hot to cold. Generally, washing is done lock by lock in warm water with dish soap.
Education
There are a number of guilds and educational institutions which offer certificate programs in handspinning. The Handweavers Guild of America (HGA) offers a Certificate of Excellence in Handspinning. Olds College in Alberta, Canada offers a Master Spinner program both on campus and by distance education. The Ontario Handweavers & Spinners offer both a Spinning Certificate and a Master Spinning Certificate. These programs feature in-depth examinations of handspinning topics, as well as extensive assignments and skill evaluations.
Techniques
A tightly spun wool yarn made from fibre with a long staple length is called worsted. It is hand spun from combed top, and the fibres all lie in the same direction as the yarn. A woollen yarn, in contrast, is hand spun from a rolag or other carded fibre (roving, batts), where the fibres are not as strictly aligned to the yarn created. The woollen yarn, thus, captures much more air, and makes for a softer and generally bulkier yarn. There are two main techniques to create these different yarns: short draw creates worsted yarns, and long draw creates woollen yarns. Often a spinner will spin using a combination of both techniques and thus make a semi-worsted yarn.
Short draw spinning is used to create worsted yarns. It is spun from combed roving, sliver or wool top. The spinner keeps his/her hands very close to each other. The fibres are held, fanned out, in one hand, and the other hand pulls a small number from the mass. The twist is kept between the second hand and the wheel. There is never any twist between the two hands.
Long draw is spun from a carded rolag. The rolag is spun without much stretching of the fibres from the cylindrical configuration. This is done by allowing twist into a short section of the rolag, and then pulling back, without letting the rolag change position in one's hands, until the yarn is the desired thickness. The twist will concentrate in the thinnest part of the roving; thus, when the yarn is pulled, the thicker sections with less twist will tend to thin out. Once the yarn is the desired thickness, enough twist is added to make the yarn strong. Then the yarn is wound onto the bobbin, and the process starts again.
Spinning in the grease
Handspinners are split, when spinning wool, as to whether it is better to spin it "in the grease" (with lanolin still in) or after it has been washed. More traditional spinners are more willing to spin in the grease, as it is less work to wash the wool once it is in yarn form. Spinners who spin very fine yarn may also prefer to spin in the grease, as it can allow them to spin finer yarns with greater ease. Spinning in the grease covers the spinner's hands in lanolin, which softens the skin.
Spinning in the grease works best if the fleece is newly sheared. After several months, the lanolin becomes sticky, which makes the wool harder to spin using the short-draw technique, and almost impossible to spin using the long-draw technique. In general, spinners who use the long-draw technique do not spin in the grease.
Such spinners generally buy their fibres pre-washed and carded, in the form of roving, sliver, or batts. This means less work for the spinners, as they do not have to wash out the lanolin. Spinners then have available predyed fibre, or blends of fibres, which are hard to create when the wool is still in the grease. As machine carders cannot card wool in the grease, pre-carded yarn generally is not spun in the grease. Some spinners use spray-on lanolin-like products to get the same feel of spinning in the grease with carded fibre.
Sociality
Sociality is the degree to which individuals in an animal population tend to associate in social groups (gregariousness) and form cooperative societies.
Sociality is a survival response to evolutionary pressures. For example, when a mother wasp stays near her larvae in the nest, parasites are less likely to eat the larvae. Biologists suspect that pressures from parasites and other predators selected this behavior in wasps of the family Vespidae.
This wasp behaviour evidences the most fundamental characteristic of animal sociality: parental investment. Parental investment is any expenditure of resources (time, energy, social capital) to benefit one's offspring. Parental investment detracts from a parent's capacity to invest in future reproduction and aid to kin (including other offspring). An animal that cares for its young but shows no other sociality traits is said to be subsocial.
An animal that exhibits a high degree of sociality is called a social animal. The highest degree of sociality recognized by sociobiologists is eusociality. A eusocial taxon is one that exhibits overlapping adult generations, reproductive division of labor, cooperative care of young, and—in the most refined cases—a biological caste system.
One characteristic of social animals is a relatively high degree of cognitive ability. Social mammal predators such as the spotted hyena and lion have been found to be better than non-social predators such as the leopard and tiger at solving problems that require innovation.
Presociality
Solitary animals such as the jaguar do not associate except for courtship and mating. If an animal taxon shows a degree of sociality beyond courtship and mating, but lacks any of the characteristics of eusociality, it is said to be presocial. Although presocial species are much more common than eusocial species, eusocial species have disproportionately large populations.
The entomologist Charles D. Michener published a classification system for presociality in 1969, building on the earlier work of Suzanne Batra (who coined the words eusocial and quasisocial in 1966). Michener used these terms in his study of bees, but also saw a need for additional classifications: subsocial, communal, and semisocial. In his use of these words, he did not generalize beyond insects. E. O. Wilson later refined Batra's definition of quasisocial.
Subsociality
Subsociality is common in the animal kingdom. In subsocial taxa, parents care for their young for some length of time. Even if the period of care is very short, the animal is still described as subsocial. If adult animals associate with other adults, they are not called subsocial, but are ranked in some other classification according to their social behaviours. If occasionally associating or nesting with other adults is a taxon's most social behaviour, then members of those populations are said to be solitary but social. See Wilson (1971) for definitions and further sub-classes of varieties of subsociality. Choe & Crespi (1997) and Costa (2006) give readable overviews.
Subsociality is widely distributed among the winged insects, and has evolved independently many times. Insect groups that contain at least some subsocial species are shown in bold italics on a phylogenetic tree of the Neoptera (note that many non-subsocial groups are omitted).
Solitary but social
Solitary-but-social animals forage separately, but some individuals sleep in the same location or share nests. The home ranges of females usually overlap, whereas those of males do not. Males usually do not associate with other males, and male offspring are usually evicted upon maturity. In cassowaries, however, this pattern is reversed. Among primates, this form of social organization is most common among the nocturnal strepsirrhine species and tarsiers. Solitary-but-social species include mouse lemurs, lorises, and orangutans.
Some individual cetaceans adopt a solitary but social behavior, that is, they live apart from their own species but interact with humans. This behavior has been observed in species including bottlenose dolphin, common dolphin, striped dolphin, beluga, Risso's dolphin, and orca. Notable individuals include Pelorus Jack (1888–1912), Tião (1994–1995), and Fungie (1983–2020). At least 32 solitary-sociable dolphins were recorded between 2008 and 2019.
Parasociality
Sociobiologists place communal, quasisocial, and semisocial animals into a meta-class: the parasocial. The two commonalities of parasocial taxa are the exhibition of parental investment, and socialization in a single, cooperative dwelling.
Communal, quasisocial, and semisocial groups differ in a few ways. In a communal group, adults cohabit in a single nest site, but they each care for their own young. Quasisocial animals cohabit, but they also share the responsibilities of brood care. (This has been observed in some Hymenoptera and spider taxa, as well as in some other invertebrates.) A semisocial population has the features of communal and quasisocial populations, but they also have a biological caste system that delegates labor according to whether or not an individual is able to reproduce.
Beyond parasociality is eusociality. Eusocial insect societies have all the characteristics of a semisocial one, except overlapping generations of adults cohabit and share in the care of young. This means that more than one adult generation is alive at the same time, and that the older generations also care for the newest offspring.
Eusociality
Eusocial societies have overlapping adult generations, cooperative care of young, and division of reproductive labor. Organisms in a species that are born with physical characteristics specific to a caste, which never change throughout their lives, exemplify the highest acknowledged degree of sociality. Eusociality has evolved in several orders of insects. Common examples of eusociality are from Hymenoptera (ants, bees, sawflies, and wasps) and Blattodea (infraorder Isoptera, termites), but some Coleoptera (such as the beetle Austroplatypus incompertus), Hemiptera (bugs such as Pemphigus spyrothecae), and Thysanoptera (thrips) are described as eusocial. Eusocial species that lack this criterion of morphological caste differentiation are said to be primitively eusocial.
Two potential examples of primitively eusocial mammals are the naked mole-rat and the Damaraland mole-rat (Heterocephalus glaber and Fukomys damarensis, respectively). Both species are diploid and highly inbred, and they aid in raising their siblings and relatives, all of whom are born from a single reproductive queen; they usually live in harsh or limiting environments. A study conducted by O'Riain and Faulkes in 2008 suggests that, due to regular inbreeding avoidance, mole rats sometimes outbreed and establish new colonies when resources are sufficient.
Eusociality has arisen among some crustaceans that live in groups in a restricted area. Synalpheus regalis are snapping shrimp that rely on fortress defense. They live in groups of closely related individuals, amidst tropical reefs and sponges. Each group has one breeding female; she is protected by a large number of male defenders who are armed with enlarged snapping claws. As with other eusocial societies, there is a single shared living space for the colony members, and the non-breeding members act to defend it.
Human eusociality
E. O. Wilson and Bert Hölldobler controversially claimed in 2005 that humans exhibit sufficient sociality to be counted as a eusocial species, and that this enabled them to enjoy spectacular ecological success and dominance over ecological competitors.
Fractional crystallization (chemistry)
In chemistry, fractional crystallization is a stage-wise separation technique that relies on the liquid–solid phase change. This technique fractionates via differences in crystallization temperature and enables the purification of multi-component mixtures, as long as none of the constituents can act as solvents to the others. Due to the high selectivity of the solid–liquid equilibrium, very high purities can be achieved for the selected component.
Principle of separation
The crystallization process starts with the partial freezing of the initial liquid mixture by slowly decreasing its temperature. The frozen solid phase has a different composition from the remaining liquid. This is the fundamental physical principle behind the melt fractionating process, and it is quite comparable to distillation, which operates between the liquid and gas phases.
The crystals will grow on a cooled surface or alternatively as a suspension in the liquid. The heat released by the solidification process is withdrawn through a cooling surface or via the liquid. In theory, 100% of the product could be solidified and recovered. In practice, various strategies such as partial melting of the solid fraction (sweating) need to be applied in order to reach high purity levels.
Advantages
Fractional crystallization has various advantages over other separation technologies. First, it makes the purification of close-boiling components possible, allowing very high purities even for challenging mixtures. Furthermore, because of the lower operating temperature, the thermal stress applied to the product is very low; this is particularly relevant for products that would otherwise oligomerize or degrade. Next, fractional crystallization is usually an inherently safe technology, because it operates at low pressures and low temperatures. It also does not use any solvents and is emission-free. Finally, since the latent heat of solidification is 3–6× lower than the heat of evaporation, the energy consumption is much lower than that of distillation.
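The energy argument in the last point can be illustrated with round numbers. Water's latent heats are used here purely as a stand-in (they are not from the source; water's fusion/evaporation ratio of about 6.8 sits just above the 3–6× range quoted for typical melts):

```python
# Illustrative comparison of phase-change energies, using water as a
# stand-in substance (values are handbook approximations in kJ/kg).
H_FUSION_KJ_PER_KG = 334        # latent heat of solidification/fusion
H_EVAPORATION_KJ_PER_KG = 2257  # latent heat of evaporation

mass_kg = 1000  # process 1 tonne of product

e_crystallization = mass_kg * H_FUSION_KJ_PER_KG     # kJ to freeze it
e_distillation = mass_kg * H_EVAPORATION_KJ_PER_KG   # kJ to evaporate it

print(f"crystallization: {e_crystallization} kJ")
print(f"distillation:    {e_distillation} kJ")
print(f"ratio: {e_distillation / e_crystallization:.1f}x")  # 6.8x
```

Real plants also spend energy on cooling, pumping, and remelting, but the order-of-magnitude gap in latent heat is what drives the efficiency claim.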
Process steps
Fractional crystallization involves several key steps:
Crystallization: This is the initial phase where the material to be purified is cooled. As it cools, high-purity crystals begin to form on the cooling surface. The purity is achieved because the impurities tend to remain in the liquid phase rather than being incorporated into the crystal structure.
Draining: After the formation of the crystals, the next step is to remove the residual liquid that contains a higher concentration of impurities. This process of draining helps to separate the pure crystals from the impure liquid.
Sweating: This phase is a controlled partial melting process. It further purifies the product by melting only a small portion of the crystal. The melting causes the impurities trapped within or between the crystal structures to be released and separated.
Total Melting: In the final step, the remaining crystallized material, which is now the purified product, is completely melted. This total melting facilitates the removal of the pure substance from the crystallization equipment and prepares it for downstream processing.
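The stage-wise purification described above can be sketched with a toy impurity-balance model. The distribution coefficient k and all numbers below are illustrative assumptions, not process data; real impurity rejection depends on growth rate, impurity type, and the sweating step:

```python
# Toy model: assume each crystallization stage carries a constant
# fraction k of the liquid's impurities into the crystal layer,
# so each stage rejects a fraction (1 - k) of the impurities.
def impurity_after_stages(feed_impurity: float, k: float, stages: int) -> float:
    impurity = feed_impurity
    for _ in range(stages):
        impurity *= k
    return impurity

# A 95% pure feed (5% impurity) with an assumed k = 0.1 per stage:
print(f"{impurity_after_stages(0.05, 0.1, 1):.4f}")  # 0.0050 -> 99.50% pure
print(f"{impurity_after_stages(0.05, 0.1, 2):.4f}")  # 0.0005 -> 99.95% pure
```

This multiplicative behaviour is why operating multiple stages (with draining and sweating between them) can push purity from the high-90s into the 99.9+% range.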
Crystallizers
There are three different fractional crystallization technologies available:
Falling-film
In the falling-film crystallizer, crystals grow from a melt that forms a thin film along the inside of cooled tubes. A concurrent cooling medium flows on the outside of these tubes. This arrangement allows for reproducible and high transfer rates of heat, facilitating the growth of crystals from the falling film of melt. The solid–liquid separation of the resulting slurry can be accomplished using a wash column or a centrifuge. This technology is more complex than others but offers the advantage of high separation efficiency and very high purities. A typical feed has concentrations between 90–99%, which is purified up to 99.99 wt.-% or greater. For example, glacial acrylic acid, optical grade bisphenol-A and battery grade ethylene carbonate can be purified to their highest grade using a falling-film crystallizer.
Static
The static crystallizer allows crystals to grow from a stagnant melt, making it a versatile and robust technology. It can purify highly challenging products, including those with the most challenging properties, such as high viscosities and high or low melting points. Examples of applications include isopulegol, phosphoric acid, waxes and paraffins, anthracene/carbazole, and even satellite-grade hydrazine.
Suspension
In suspension crystallization, crystals are generated on a cooling surface and then scraped off to continue growing in size within a stirred vessel, in suspension or slurry. The solid–liquid separation is performed either through a wash column or a centrifuge. This method is more complex to operate, but offers the advantage of a high separation efficiency, which translates to considerable energy savings. Examples of applications include paraxylene, halogenated aromatics, and also aqueous feeds.
Linckia laevigata
Linckia laevigata (sometimes called the "blue Linckia" or blue star) is a species of sea star found in the shallow waters of the tropical Indo-Pacific.
Description and characteristics
The variation ("polymorphism", in this case a "color morph") most commonly found is pure blue, dark blue, or light blue, although observers find aqua, purple, or orange variations throughout the ocean. These sea stars may grow up to in diameter, with rounded tips at each of the arms; some individuals may bear lighter or darker spots along each of their arms. Individual specimens are typically firm in texture, possessing the slightly tubular, elongated arms common to most other members of the family Ophidiasteridae, and usually possessing short, yellowish tube feet.
An inhabitant of coral reefs and sea grass beds, this species is relatively common and is typically found in sparse density throughout its range. Blue stars live subtidally, or sometimes intertidally, on fine (sand) or hard substrata and move relatively slowly (mean locomotion rate of 8.1 cm/min).
The genus Linckia, as is true of other species of starfish, is recognized by scientists as being possessed of remarkable regenerative capabilities, and endowed with powers of defensive autotomy against predators: Although not yet documented, L. laevigata may be able to reproduce asexually, as does the related species Linckia multifora (another denizen of tropical seas, but of differing coloration, i.e., pink or reddish mottled with white and yellow, which has been observed reproducing asexually in captivity). Linckia multifora produces 'comets', or separated arms, from the mother individual; these offspring proceed to grow four tiny stubs of arms ready for growth to maturity. L. laevigata is apparently not an exception to this behavior, as many individuals observed in nature are missing arms or, on occasion, in the comet form.
Some species of other reef inhabitants prey on this species of sea star. Various pufferfishes, Charonia species (triton shells), harlequin shrimp, and even some sea anemones have been observed to eat whole or parts of the sea stars. The Blue Linckia is also prone to parasitization by a species of the parasitic gastropod Thyca crystallina. Commensal associations sometimes play part on this echinoderm's life; animals such as Periclimenes shrimp are sometimes found commensally on the oral or aboral surface of the animal, picking up mucus and detritus.
This sea star is fairly popular with marine aquarium hobbyists, where it requires proper, slow acclimatization before entering the tank system, and an adequate food source similar to that found in its natural habitat. Generally thought of as a detritivore, many sources maintain that this species will indefinitely graze throughout the aquarium for organic films or sedentary, low-growing organisms such as sponges and algae. In the marine aquarium hobby, they have been seen to consume Asterina starfish, which are commonly introduced into such aquaria on the ubiquitous "live rock" used in such settings. In 2021, pictures surfaced on Reddit of a Linckia eating an Asterina; it takes roughly 45 minutes to fully devour the starfish, making the species a worthy pest control depending on how abundant the food source is. Depending on such factors as the conditions of shipping, acclimatization, and water quality, this species has been kept in captivity with variable success. This species has yet to be bred in captivity for sustainable harvest.
This species has long been a staple of the sea-shell trade, which involves marketing dried sea star tests (skeletons) for curios or decoration. Some regions of their habitat have seen significant population decrease due to the continuous harvesting by the sea-shell and tourism industries.
Climate change in Japan
Climate change is an urgent and significant issue affecting Japan. In recent years, the country has observed notable changes in its climate patterns, with rising temperatures serving as a prominent indicator of this phenomenon. As an archipelago situated in northeastern Asia, Japan is particularly vulnerable to the impacts of climate change due to its diverse geography and exposure to various weather systems. The nation experiences a broad range of climates, spanning from the frigid winters of Hokkaido to the subtropical climates of Okinawa. Changes in temperature patterns have the potential to disrupt ecosystems, impact agricultural productivity, modify water resources, and pose significant challenges to infrastructure and human settlements.
The Japanese government is increasingly enacting climate change policy in response. The government has been criticised for lacking a credible plan to reach its pledged net-zero greenhouse gas emissions by 2050. As a signatory of the Kyoto Protocol, and host of the 1997 conference which created it, Japan is under treaty obligations to reduce its carbon dioxide emissions and to take other steps related to curbing climate change.
Greenhouse gas emissions
Japan is responsible for 2.6% of global GHG emissions. The average rate of emissions per person in Japan is almost double the global average. Emissions have been slightly reduced since 2013, and a net-zero emissions target is set for 2050.
Japan has pledged to become carbon neutral by 2050. In 2019, Japan emitted 1,212 Mt CO2eq. Per capita emissions were 9.31 tonnes in 2017, when Japan was the fifth-largest producer of carbon emissions. Japan's greenhouse gas emissions are over 2% of the annual world total, partly because coal supplies over 30% of its electricity. Coal-fired power stations were still being constructed in 2021; some may become stranded assets.
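The per-capita figure can be sanity-checked against the national total (the population value below is a rough approximation, not from the source):

```python
# Rough per-capita cross-check of the emissions figures above.
total_emissions_mt = 1212        # Mt CO2-eq, Japan, 2019
population = 126_000_000         # approximate population of Japan, 2019

# Convert megatonnes to tonnes, then divide by population.
per_capita_t = total_emissions_mt * 1_000_000 / population
print(f"{per_capita_t:.1f} t CO2-eq per person")  # 9.6 t CO2-eq per person
```

The result is consistent with the cited 9.31 tonnes for 2017, given the slightly different year and the approximate population figure.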
Calculations in 2021 show that to give the world a 50% chance of avoiding a temperature rise of 2 degrees or more, Japan should increase its climate commitments by 49%. For a 95% chance, it should increase the commitments by 151%. For a 50% chance of staying below 1.5 degrees, Japan should increase its commitments by 229%. A March 2021 analysis by Climate Action Tracker said that Japan should reduce greenhouse gas emissions so that by 2030 the emissions are 60% below 2013 levels; this would support a goal of limiting warming to 1.5 °C.
Furthermore, Japan has witnessed a decrease in its annual emissions, with a 5.3% reduction in industrial emissions due to decreased steel production. Residential emissions fell by 1.4%, while vehicle emissions rose by 3.9%. Despite these changes, Japan still heavily relies on fossil fuels, which constitute about 70% of its power generation. In terms of renewable energy, Japan aims for 10 gigawatts of offshore wind capacity by 2030 but is currently projected to only reach 4.4 gigawatts.
Transportation
The transportation sector accounts for 20% of Japan's total emissions. Within the sector, it is mainly oil that is used; the sector currently relies on fossil fuels and is projected to continue doing so for a while. One challenge in decarbonizing transportation is the cost of the technologies required for the transformation. Emissions within the sector have been decreasing since 2001 due to improved fuel efficiency of cars and population decline.
Energy supply and fossil fuels
The energy supply consists mainly of fossil fuels, which made up 88% of the total primary energy supply in 2019: oil (38%), coal (27%) and gas (23%). The 2011 Fukushima disaster led to an increase in Japan's dependence on fossil fuels. The country's energy supply has been affected by the phasing out of nuclear power, with only 4% of the supply coming from nuclear sources in 2019 compared to 15% in 2010. Fossil fuels are mostly imported, and the high dependence on non-renewable sources makes it difficult to reach a carbon-neutral society. Only 8% of Japan's total primary energy supply comes from renewable sources, although this share has doubled since 1990.
Industrial emissions
Although Japan is a developed country, it retains a large presence of energy-intensive industries (such as steel and cement production) compared to other developed economies. Its energy consumption is comparable to that of emerging economies like China, India, and Brazil. Japan's domestic industrial emissions account for approximately 967.4 million tons of CO2 annually. Among industries, the iron and steel sector has the highest emissions, at around 111.9 million tons of CO2.
Current emissions overview
According to data released by the Ministry of Environment, Japan's total greenhouse gas emissions for the fiscal year ending in March 2023 declined by 2.3%, amounting to 1.085 billion metric tons of CO2 equivalent. This reduction marks a 23% decrease compared to the levels recorded in 2013. Despite this progress, Japan is yet to meet its ambitious target of a 46% reduction by 2030. The primary contributor to this decrease was the industrial sector, which saw a 5.3% drop in emissions, largely due to a decrease in steel production and a corresponding reduction in power demand. In addition, residential emissions decreased by 1.4%. However, not all sectors showed a decline; emissions from the transportation sector, for example, increased by 3.9%.
On the renewable energy front, Japan has set a target to achieve 10 gigawatts of offshore wind power by 2030. Yet, projections suggest that Japan is on pace to reach only 4.4 gigawatts by the end of the decade, indicating significant challenges ahead in meeting its renewable energy goals.
Impacts on the natural environment
Temperature and weather changes
Temperature
Climate change has affected Japan drastically. Temperature and rainfall increased rapidly in the years leading up to 2020. This has resulted in immature rice grains and in oranges whose flesh separates from the skin because of growth disrupted by unseasonable weather. Many corals in the seas around Japan have died due to rising sea temperatures and ocean acidification. Tiger mosquitoes, which transmit dengue fever, have been found further north than before.
Earth Simulator calculations project the increase in daily mean temperature in Japan during the period 2071 to 2100. Compared with 1971 to 2000, the temperature will increase by 3.0 °C in Scenario B1 and 4.2 °C in A1B. Similarly, the daily maximum temperature in Japan will increase by 3.1 °C in B1 and 4.4 °C in A1B. Summer precipitation in Japan will increase steadily due to global warming (annual average precipitation will increase by 17% in Scenario B1 and by 19% in Scenario A1B during 2071–2100 compared with 1971–2000).
Projections of temperature for Japan differ depending on the scenario. In a worst-case scenario for 2100, in which GHG emissions do not decline, an increase of almost 6 °C in winter and almost 5 °C in summer is expected compared with 1900. If a drastic reduction in emissions occurs, the increases will instead be almost 2 °C and 1.5 °C respectively by 2100.
Precipitation
Precipitation in Japan varies between 1,000 mm and 2,500 mm annually, leading, depending on the year, either to flooding or to a lack of water for sectors such as agriculture. The effects of climate change on precipitation are harder to predict than those on temperature under any scenario. Extreme rainfall events are becoming more frequent, while total annual precipitation seems to decrease.
Extreme weather events
Climate change will affect more than parameters such as temperature and precipitation. Extreme events such as heat waves, droughts, tsunamis, storm surges, and typhoons also appear to have become more frequent. More frequent and longer-lasting natural disasters are likely to affect Japan's energy, agricultural and tourism sectors.
Sea level rise
Global warming has led to worldwide sea level rise due to the melting of glaciers and ice sheets.
The southern and eastern coastal parts of Japan have a high probability of being affected by phenomena such as tsunamis and storms.
Water resources
Water resources are highly dependent on the country's rates of precipitation and evapotranspiration. Rising temperatures in Japan increasingly affect both of these water cycle processes, reducing the availability of water resources. The effects of climate change on water availability in Japan include:
Less snow and ice cover, which will eventually mean more droughts. Japan has experienced droughts before, and in areas that depend on snowmelt for water supply, a decrease in river discharge is expected.
Increased runoff, expected under low and medium emission scenarios, causing soil erosion, transport of pollutants, and flood risks.
Alteration of groundwater storage, which affects infrastructure, causes contamination, and increases salination due to sea level rise.
A decrease in water resources could cause problems for sectors like agriculture, which would have to find different cultivation methods to manage water, especially in the scenario of severe droughts.
Ecosystems
Changes in temperature and precipitation patterns and sea level rise are among the potential effects of climate change, leading to changes in the distribution and abundance of plant and animal species. Listed below are ways in which ecosystems in Japan may be affected:
Changes in species distribution: As temperatures increase, species are shifting their ranges to higher latitudes or elevations in search of cooler conditions. This can disrupt the balance of ecosystems and lead to the loss of species that are unable to adapt.
Changes in phenology: Climate change is causing shifts in the timing of seasonal events such as flowering, migration, and hibernation. These changes can affect the timing of interactions between species, such as pollination or predator-prey relationships.
Changes in forest ecosystems: Climate change is leading to changes in the growth, productivity, and composition of forests in Japan, depending on the tree species. For example, primeval forest ecosystems are already affected because of climate change. Changes in temperature and precipitation patterns can affect the timing and intensity of forest fires which eventually could lead to a loss of biodiversity and an increase in emissions.
Impacts on marine ecosystems: Rising sea temperatures and ocean acidification are affecting marine ecosystems in Japan, leading to changes in the distribution and abundance of species and altering food webs. This can have an impact on the fishing industry, which is an important source of livelihood for Japanese communities.
Overall, climate change is having significant impacts on Japan's ecosystems, and these impacts are likely to continue and even accelerate in the future. Japan must take steps to mitigate and adapt to these impacts to protect its biodiversity and the services that ecosystems provide.
Biodiversity
Japan is a biodiverse region with over 90,000 recognized species, of which more than 30% of amphibians, reptiles, and freshwater and marine species, and more than 20% of mammals and plants are threatened with extinction. Ecological changes are increasingly reported, and climate change is recognized as a major threat to biodiversity. Phenological and distributional records show that ecological changes are occurring in response to climate change in Japan.
On average, the phenology of numerous animal species has been delayed, leading to shifts in species interactions. Rapid range expansions have been observed for insects and corals, while future projections indicate rapid shifts of plants toward higher elevations and significant losses of climatically suitable areas for high-altitude species. The impacts of climate change on Japanese species are not always consistent with the observations and projections previously reported in other regions. There is a need for further investigations in other less-known regions to improve understanding of regional impacts, which can be facilitated by utilizing locally available data and publications, especially in non-English speaking countries.
Coral reefs
The warming of the world's oceans over the past few decades has had a significant impact on coastal ecosystems, particularly on coral reefs found in tropical and subtropical regions. The potential future outcome of global warming in Sekisei Lagoon could lead to extreme heating and mass bleaching, which would have synergistic effects with local stressors.
In 2015–2016, coral bleaching occurred on a large scale due to elevated sea temperatures, and the Ryukyu Islands' coral reefs experienced extreme thermal stress and extensive bleaching in the summer of 2016. This bleaching affected about 90% of the coral in Sekisei Lagoon. Analysis indicated that the decline in corallivores and herbivores' density matched the decrease in coral cover after mass bleaching, while changes in species richness were not correlated with coral cover change. Short-term declines in corallivores were common in the Great Barrier Reef after the 2016 mass bleaching, and at Ishigaki Island and other sites during the 1998 bleaching event. The response of herbivores varied from place to place. All potential stocks, including fisheries production, aquarium fish production, recreational diving, and seaweed control by herbivores, decreased following the bleaching. In January 2017 the Japanese environment ministry said that 70% of the Sekisei lagoon in Okinawa, Japan's biggest coral reef, had been killed by coral bleaching.
These findings suggest that severe bleaching and extreme heating were the main causes of the loss of fish diversity and associated potential stocks of ecosystem services in Sekisei Lagoon.
Impacts on people
Climate change is expected to affect various parts of Japan's population. In the economic sphere, it will affect agriculture, urbanization, and energy, while in the health sphere it will increase mortality and exposure to heatwaves, among other impacts.
Agriculture
Changing climatic conditions, with rising temperatures, decreasing rainfall, and intensifying heat waves, droughts and other extreme phenomena, affect food production. These conditions tend to decrease crop yields and quality. One response to the increase in temperature is shifting crop zones to higher elevations, where ideal climatic conditions for growth can still be found. With rising temperatures, the length of the vegetative period may change and phenological phases may appear earlier.
Studies have shown that climate change is already having a significant impact on rice agriculture through an increase in extreme events such as heat waves and dry spells. These changes are a serious concern for growers, may make the crop production system more vulnerable, and pose a threat to national food security. There is a direct relationship between rice production and temperature: when the degree of climate change is large, production decreases. Yield reductions have been reported in specific areas and in extremely hot years.
Irrigation demand could be increased by higher temperatures due to higher plant evapotranspiration. The expansion of irrigated areas could become a threat to water resources, in terms of quantity and quality, if demand and cereal production increase.
Urbanization
Japan is one of the most urbanized countries in the world, with 91.8% of its population concentrated in urban areas as of 2020. This trend is expected to continue, with the urbanization rate reaching almost 95% by 2050.
The elderly are especially vulnerable to the impacts of heat waves and according to data from the Euro-Mediterranean Center on Climate Change, by 2035, approximately 38% of the population will be over the age of 65. High levels of air pollution have been found to increase the effects of urban heat. In 2017, nearly 77% of the total population was exposed to air pollution levels above WHO thresholds.
Coastal flooding
According to the Euro-Mediterranean Center on Climate Change, due to its geography, high rates of soil sealing and dense urbanization along the Japanese coastline, the country is vulnerable to extreme rainfall and coastal flooding, particularly on the more populated island of Honshu. Japan is subject to the regular arrival of typhoons.
In 2018, torrential rains caused flash floods and landslides, resulting in more than 200 deaths, the evacuation of 2.3 million people, and more than US$7 billion in damages. The Euro-Mediterranean Center on Climate Change notes that rising sea levels, wave heights and the frequency of typhoons are expected to increase damage to human settlements. Flood risk will grow in the future, with the depth of flooding in Tokyo increasing by 170% by 2050 and damages to real estate and infrastructure rising by 220% to 240%.
Energy
According to the Euro-Mediterranean Center on Climate Change, the Japanese energy system has been significantly affected by severe flooding from heavy precipitation and typhoons. In September and October 2019, typhoons Faxai and Hagibis caused power outages affecting 10 million households in Japan. Because temperatures are rising faster than the global average and heat waves are becoming more frequent, demand for cooling has been increasing in the country.
The trend for heating needs is somewhat opposite to that of cooling needs. There will be significant decreases in heating needs across the country, with the largest decrease in Hokkaido and a moderate decrease in the southern islands. On the other hand, cooling needs will increase considerably in the southern islands of Shikoku and Kyushu, while only a slight increase is expected in Hokkaido and elevated areas of Honshu.
Health
The climate and weather patterns in Japan have undergone changes that have led to an increase in mean temperature. As a result, vulnerable populations such as the elderly are at high risk due to the intensity of heat waves and heat stress. The rise in temperatures is anticipated to enable the transmission of diseases throughout Japan, including vector-borne illnesses like dengue, which tend to thrive in warmer climates.
Heatwaves and heat stress
Mortality and morbidity would increase in the country and may even double in eastern and northern Japan due to higher average temperatures and an increase in the frequency and duration of heat waves.
Japan is experiencing an increasing trend in deaths from heat-related illnesses. Between 1968 and 1994, 2,326 deaths from heat stroke were recorded, 589 of them in 1994 alone, when a severe heat wave caused temperatures to exceed 38 °C. In the abnormally hot summer of 2018, there were 95,137 emergency patients with heat stroke symptoms, of whom 160 died; 50% were over 65 years of age. That trend could continue in the absence of adaptation measures to address climate change.
Labor
The impact of global warming is twofold as it affects both labor supply and productivity. As climate change progresses, a reduction in both labor supply and productivity is expected to occur in most regions of the world, particularly in tropical areas. According to the study by Dasgupta et al. (2021), under a 3.0 °C warming scenario, it is projected that future climate change will lead to a reduction of 18 percentage points in global total labor for low-exposure sectors and a reduction of 24.8 percentage points for high-exposure sectors. In Japan, under a low emissions scenario, the total labor force is estimated to decrease by 0.88%, whereas, under a medium emissions scenario, it is expected to decline by 2.2%.
Disease
The effects of climate change are expected to widen the geographic range and environmental conditions suitable for various vector-borne infectious diseases, including dengue. The likelihood of dengue transmission is amplified by rising temperatures, as the development and proliferation of mosquitoes are substantially impacted by factors such as temperature, precipitation, and humidity. The risks associated with transmission suitability due to climate change have intensified over time, and if the planet continues to warm, more than 1.3 billion individuals could face temperatures conducive to Zika transmission by the year 2050.
The dengue outbreak that occurred in Japan in 2014 suggests that the environmental conditions necessary for its transmission may be increasing. The Asian tiger mosquito, which has adapted well to urban environments, is a significant factor in these changing dynamics. According to the CMCC (2022), if emissions continue at a moderate level, 84.7% of the population could face transmission-suitable mean temperatures for dengue by 2050, and under a high emissions scenario, 81.8% could be at risk. In the case of Zika, 80.7% of the population could be at risk by 2050 under a medium emissions scenario, while 82.7% could be at risk under a high emissions scenario.
Japan was previously affected by malaria, and although it is no longer considered endemic, the mosquitoes responsible for its transmission still exist. According to projections, by 2050, 40.4% of the Japanese population could be at risk of malaria under a low-emissions scenario, while 42.5% could be at risk under a high-emissions scenario.
Research suggests that a general rise of 10 μg/m3 in daily PM2.5 concentrations in Japan is linked to a 1.3% increase in total non-accidental mortality. Projections indicate that by 2060, there could be 779 deaths per year per million people in Japan due to outdoor air pollution, which is an increase from 468 deaths in 2010.
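The reported dose-response association can be turned into a back-of-the-envelope calculation. In this sketch, only the 1.3% increase per 10 μg/m3 rise in daily PM2.5 comes from the text above; the function name and the 25 μg/m3 example are hypothetical, and the calculation assumes the association scales linearly:

```python
# Back-of-the-envelope: scale non-accidental mortality by the reported
# 1.3% increase per 10 ug/m3 rise in daily PM2.5 (linear assumption).
def excess_mortality_factor(delta_pm25_ugm3: float) -> float:
    """Multiplicative change in non-accidental mortality for a given
    rise in daily PM2.5, using the 1.3% / 10 ug/m3 association."""
    return 1.0 + 0.013 * (delta_pm25_ugm3 / 10.0)

# A 10 ug/m3 rise implies a factor of 1.013, i.e. +1.3%.
print(round(excess_mortality_factor(10.0), 3))   # 1.013
# A hypothetical 25 ug/m3 rise implies roughly +3.25%.
print(round(excess_mortality_factor(25.0), 4))   # 1.0325
```

Note that the linear scaling is a simplifying assumption; epidemiological studies often model such associations as log-linear relative risks, which differ slightly at larger increments.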
Mitigation and adaptation
Adaptation
In terms of adaptation measures for agriculture and water resources, efforts should focus on the management and renovation of irrigation facilities, as well as anticipating the transplanting of crops in the hottest periods and developing crop varieties resistant to projected increases in temperatures.
In terms of adaptation measures for mortality and morbidity due to higher average temperatures and the increase in the frequency and duration of heat waves, different studies have suggested that lifestyle changes such as the widespread use of air conditioners may represent an important adaptation to the risk of heat stress emergencies.
Japan adopted its National Plan for Adaptation to the Impacts of Climate Change in 2015, which contains specific measures for various sectors such as Agriculture, Forestry, and Fisheries, Water Resources, Natural Ecosystems, Natural Disasters and Coastal Areas, Human Health, Industrial and Economic Activity, as well as the Life of Citizenry and Urban Life.
Energy transition
In terms of energy, in 2020, Japan made a commitment to achieve full decarbonization by 2050, but it is still dedicated to reducing emissions by 26% by 2030. As a result, fossil fuels will continue to be relevant and potentially vulnerable for the next few years, while carbon-free sources such as renewables and residual nuclear energy are expected to become more dominant and potentially face their own vulnerabilities in the second half of the century.
Japan's overall performance in the Energy Transition indicator is in line with the G20 country average. The country has shown high performance in the Efficiency and Electrification domains, which has been driving the transformation of the energy sector. There is still room for improvement in terms of increasing the installed capacity of renewables and reducing the use of fossil fuels. By making progress in these areas, Japan could also decrease the level of urban air pollution and reduce emissions per capita, leading to further improvements in the emissions indicator.
Building upon its existing environmental initiatives, Japan is considering a revised climate target aimed at further reducing its greenhouse gas emissions. The government plans to achieve a reduction of 66% in emissions from 2013 levels by the fiscal year 2035. This ambitious target is part of a comprehensive strategy intended to adjust the country's energy mix by 2040, designed to provide businesses with a predictable framework for future investment and to ensure compliance with international environmental standards set by the Paris Agreement. The intermediate target for 2030 has been established at a 46% reduction in emissions. Additionally, the strategy includes a significant enhancement of nuclear power's role in the national energy portfolio, aiming to increase its share from less than 10% currently to up to 22%. This shift is seen as a key component in accelerating Japan's transition towards more sustainable energy sources.
Policies and legislation
As a party to the Paris Agreement, Japan was the first nation to release a new national climate plan by 2020, as required under the 2015 agreement. However, this new plan included no major changes from the 2013 national climate plan, which aimed to reduce emissions by 26% from 2013 rates. This lack of aggressive action from the fifth largest polluter in the world led the World Resources Institute to describe the plan as "putting the world on a more dangerous trajectory." Similarly, the head of the World Wildlife Fund Japan climate and energy group, Naoyuki Yamagishi, described the plan as "completely the wrong signal."
In 2018, Japan established its Strategic Energy Plan, with goals set for 2030. The plan aimed to reduce coal use from 32 to 26 percent, to increase renewables from 17 to 22–24 percent, and to increase nuclear from 6 to 20–22 percent of the energy production mix. As part of this goal, Japan announced a goal of shutting down 100 old, low-efficiency coal-fired plants out of its 140 coal fired power plants. As of 2020, 114 of Japan's 140 coal-fired plants are deemed old and inefficient. Twenty-six are considered high-efficiency, and 16 new high-efficiency plants are currently under construction. Funding of overseas coal power ended in 2021. The Japanese government said that they would try to be carbon neutral as soon as possible in the second half of the century. The official goal of the Japanese government is to be net zero in 2050.
The Cool Biz campaign introduced under former Prime Minister of Japan Junichiro Koizumi was targeted at reducing energy use through the reduction of air conditioning use in government offices.
Carbon price
Since 2012 the country has levied a "Tax for Climate Change Mitigation" on petroleum, coal and natural gas, based on the nominal tonnes of carbon they emit when burned. In addition, Tokyo has operated a local carbon emissions trading system since 2010, in which carbon permits are valued at approximately US$50.
In December 2009, nine industry groupings opposed a carbon tax at the opening day of the COP-15 Copenhagen climate conference stating, "Japan should not consider a carbon tax as it would damage the economy which is already among the world's most energy-efficient." The industry groupings represented the oil, cement, paper, chemical, gas, electric power, auto manufacturing and electronics, and information technology sectors.
Japan launched a carbon credit market on Oct. 11, 2023, with a carbon levy expected in 2028.
Municipality level
Local governments, both prefectures and municipalities, are responsible for creating their own climate change adaptation plans under the Climate Change Adaptation Act, which came into force in December 2018. They are also tasked with creating Local Climate Change Adaptation Centers to study climate change adaptation, which can be established in partnership with research institutes, universities, or other appropriate local institutions. By 2021, 22 of the 47 prefectures and 30 of the 1,741 municipalities had established plans, while 23 prefectures and 2 municipalities had established research centers. While local governments can create joint plans and centers under the legislation, by 2021 none had done so.
Japan's capital Tokyo is preparing to force industry to make big cuts in greenhouse gases, taking the lead in a country struggling to meet its Kyoto Protocol obligations. Tokyo's outspoken governor, Shintaro Ishihara, decided to go it alone and create Japan's first emissions cap system, reducing greenhouse gas emissions by a total of 25% by 2020 from the 2000 level.
International cooperation
Japan created the Kyoto Protocol Target Achievement Plan to lay out the measures required to meet its 6% reduction commitment under the Kyoto Protocol. It was first established as an outcome of the evaluation of the Climate Change Policy Program carried out in 2004. The main branches of the plan are ensuring the compatibility of the environment and the economy, promoting technology, raising public awareness, utilizing policy measures, and ensuring international collaboration.
Heterosis

Heterosis, hybrid vigor, or outbreeding enhancement is the improved or increased function of any biological quality in a hybrid offspring. An offspring is heterotic if its traits are enhanced as a result of mixing the genetic contributions of its parents. The heterotic offspring often has traits that are more than the simple addition of the parents' traits, and can be explained by Mendelian or non-Mendelian inheritance. Typical heterotic/hybrid traits of interest in agriculture are higher yield, quicker maturity, stability, drought tolerance, etc.
Definitions
In proposing the term heterosis to replace the older term heterozygosis, G.H. Shull aimed to avoid limiting the term to the effects that can be explained by heterozygosity in Mendelian inheritance.
Heterosis is often discussed as the opposite of inbreeding depression, although differences in these two concepts can be seen in evolutionary considerations such as the role of genetic variation or the effects of genetic drift in small populations on these concepts. Inbreeding depression occurs when related parents have children with traits that negatively influence their fitness largely due to homozygosity. In such instances, outcrossing should result in heterosis.
Not all outcrosses result in heterosis. For example, when a hybrid inherits traits from its parents that are not fully compatible, fitness can be reduced. This is a form of outbreeding depression, the effects of which are similar to inbreeding depression.
Genetic and epigenetic bases
Since the early 1900s, two competing genetic hypotheses, not necessarily mutually exclusive, have been developed to explain hybrid vigor. More recently, an epigenetic component of hybrid vigor has also been established.
Dominance and overdominance
When a population is small or inbred, it tends to lose genetic diversity. Inbreeding depression is the loss of fitness due to loss of genetic diversity. Inbred strains tend to be homozygous for recessive alleles that are mildly harmful (or produce a trait that is undesirable from the standpoint of the breeder). Heterosis or hybrid vigor, on the other hand, is the tendency of outbred strains to exceed both inbred parents in fitness.
Selective breeding of plants and animals, including hybridization, began long before there was an understanding of underlying scientific principles. In the early 20th century, after Mendel's laws came to be understood and accepted, geneticists undertook to explain the superior vigor of many plant hybrids. Two competing hypotheses, which are not mutually exclusive, were developed:
Dominance hypothesis. The dominance hypothesis attributes the superiority of hybrids to the suppression of undesirable recessive alleles from one parent by dominant alleles from the other. It attributes the poor performance of inbred strains to loss of genetic diversity, with the strains becoming purely homozygous at many loci. The dominance hypothesis was first expressed in 1908 by the geneticist Charles Davenport. Under the dominance hypothesis, deleterious alleles are expected to be maintained in a random-mating population at a selection–mutation balance that would depend on the rate of mutation, the effect of the alleles and the degree to which alleles are expressed in heterozygotes.
Overdominance hypothesis. Certain combinations of alleles that can be obtained by crossing two inbred strains are advantageous in the heterozygote. The overdominance hypothesis attributes the heterozygote advantage to the survival of many alleles that are recessive and harmful in homozygotes. It attributes the poor performance of inbred strains to a high percentage of these harmful recessives. The overdominance hypothesis was developed independently by Edward M. East (1908) and George Shull (1908). Genetic variation at an overdominant locus is expected to be maintained by balancing selection. The high fitness of heterozygous genotypes favours the persistence of an allelic polymorphism in the population. This hypothesis is commonly invoked to explain the persistence of some alleles (most famously the Sickle cell trait allele) that are harmful in homozygotes. In normal circumstances, such harmful alleles would be removed from a population through the process of natural selection. Like the dominance hypothesis, it attributes the poor performance of inbred strains to expression of such harmful recessive alleles.
Dominance and overdominance have different consequences for the gene expression profile of the individuals. If overdominance is the main cause for the fitness advantages of heterosis, then there should be an over-expression of certain genes in the heterozygous offspring compared to the homozygous parents. On the other hand, if dominance is the cause, fewer genes should be under-expressed in the heterozygous offspring compared to the parents. Furthermore, for any given gene, the expression should be comparable to the one observed in the fitter of the two parents. In any case, outcross matings provide the benefit of masking deleterious recessive alleles in progeny. This benefit has been proposed to be a major factor in the maintenance of sexual reproduction among eukaryotes, as summarized in the article Evolution of sexual reproduction.
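The dominance hypothesis above can be illustrated with a minimal numerical sketch. The loci, genotypes, and selection coefficient here are hypothetical, chosen only to show the masking effect; this is not a model of overdominance:

```python
# Minimal sketch of the dominance hypothesis: two inbred lines are each
# homozygous for a different deleterious recessive; their F1 hybrid is
# heterozygous at both loci, so dominant wild-type alleles mask both
# recessives and the hybrid's fitness exceeds that of either parent.
S = 0.2  # hypothetical selection coefficient against a homozygous recessive

def fitness(genotype):
    """Multiplicative fitness: each locus homozygous for the deleterious
    recessive ("aa") costs a factor (1 - S); heterozygotes ("Aa") pay no
    cost because the recessive is fully masked by dominance."""
    w = 1.0
    for locus in genotype:
        if locus == "aa":
            w *= 1.0 - S
    return w

parent1 = ["aa", "AA"]   # inbred line 1: deleterious recessive fixed at locus 1
parent2 = ["AA", "aa"]   # inbred line 2: deleterious recessive fixed at locus 2
hybrid  = ["Aa", "Aa"]   # F1 cross: heterozygous at both loci

print(fitness(parent1))  # 0.8  (pays the cost at locus 1)
print(fitness(parent2))  # 0.8  (pays the cost at locus 2)
print(fitness(hybrid))   # 1.0  (both recessives masked -> hybrid vigor)
```

Under this model the hybrid outperforms both parents purely through the masking of deleterious recessives, without any heterozygote advantage at a single locus, which is the distinction the dominance hypothesis draws against overdominance.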
Historical retrospective
Which of the two mechanisms is the "main" reason for heterosis has been a scientific controversy in the field of genetics. Population geneticist James Crow (1916–2012) believed, in his younger days, that overdominance was a major contributor to hybrid vigor. In 1998 he published a retrospective review of the developing science. According to Crow, the demonstration of several cases of heterozygote advantage in Drosophila and other organisms first caused great enthusiasm for the overdominance theory among scientists studying plant hybridization. But overdominance implies that yields on an inbred strain should decrease as inbred strains are selected for the performance of their hybrid crosses, as the proportion of harmful recessives in the inbred population rises. Over the years, experimentation in plant genetics has shown that the reverse occurs: yields increase in both the inbred strains and the hybrids, suggesting that dominance alone may be adequate to explain the superior yield of hybrids. Only a few conclusive cases of overdominance have been reported in all of genetics. Since the 1980s, as experimental evidence has mounted, the dominance theory has made a comeback.
Crow wrote:
The current view ... is that the dominance hypothesis is the major explanation of inbreeding decline and [of] the high yield of hybrids. There is little statistical evidence for contributions from overdominance and epistasis. But whether the best hybrids are getting an extra boost from overdominance or favorable epistatic contributions remains an open question.
Epigenetics
An epigenetic contribution to heterosis has been established in plants, and it has also been reported in animals. MicroRNAs (miRNAs), discovered in 1993, are a class of non-coding small RNAs which repress the translation of messenger RNAs (mRNAs) or cause degradation of mRNAs. In hybrid plants, most miRNAs have non-additive expression (it might be higher or lower than the levels in the parents). This suggests that the small RNAs are involved in the growth, vigor and adaptation of hybrids.
'Heterosis without hybridity' effects on plant size have been demonstrated in genetically isogenic F1 triploid (autopolyploid) plants, where paternal genome excess F1 triploids display positive heterosis, whereas maternal genome excess F1s display negative heterosis effects. Such findings demonstrate that heterosis effects, with a genome dosage-dependent epigenetic basis, can be generated in F1 offspring that are genetically isogenic (i.e. harbour no heterozygosity). It has been shown that hybrid vigor in an allopolyploid hybrid of two Arabidopsis species was due to epigenetic control in the upstream regions of two genes, which caused major downstream alteration in chlorophyll and starch accumulation. The mechanism involves acetylation or methylation of specific amino acids in histone H3, a protein closely associated with DNA, which can either activate or repress associated genes.
Specific mechanisms
Major histocompatibility complex in animals
One example of where particular genes may be important in vertebrate animals for heterosis is the major histocompatibility complex (MHC). Vertebrates inherit several copies of both MHC class I and MHC class II from each parent, which are used in antigen presentation as part of the adaptive immune system. Each different copy of the genes is able to bind and present a different set of potential peptides to T-lymphocytes. These genes are highly polymorphic throughout populations, but are more similar in smaller, more closely related populations. Breeding between more genetically distant individuals decreases the chance of inheriting two alleles that are the same or similar, allowing a more diverse range of peptides to be presented. This, therefore, increases the chance that any particular pathogen will be recognised, and means that more antigenic proteins on any pathogen are likely to be recognised, giving a greater range of T-cell activation, so a greater response. This also means that the immunity acquired to the pathogen is against a greater range of antigens, meaning that the pathogen must mutate more before immunity is lost. Thus, hybrids are less likely to succumb to pathogenic disease and are more capable of fighting off infection. This heightened immune reactivity may, however, also contribute to autoimmune diseases.
Plants
Crosses between inbreds from different heterotic groups result in vigorous F1 hybrids with significantly more heterosis than F1 hybrids from inbreds within the same heterotic group or pattern. Heterotic groups are created by plant breeders to classify inbred lines, and can be progressively improved by reciprocal recurrent selection.
Heterosis is used to increase yields, uniformity, and vigor. Hybrid breeding methods are used in maize, sorghum, rice, sugar beet, onion, spinach, sunflowers, and broccoli, and to create more psychoactive cannabis.
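Breeders commonly quantify these yield gains as mid-parent and better-parent heterosis percentages. The sketch below shows both calculations; the function names and yield figures are illustrative assumptions, not data from any trial:

```python
def mid_parent_heterosis(p1, p2, f1):
    """Percent advantage of the F1 hybrid over the mean of its parents."""
    mid_parent = (p1 + p2) / 2
    return 100 * (f1 - mid_parent) / mid_parent

def better_parent_heterosis(p1, p2, f1):
    """Percent advantage over the better parent (heterobeltiosis)."""
    better = max(p1, p2)
    return 100 * (f1 - better) / better

# Hypothetical grain yields in tonnes per hectare
p1_yield, p2_yield, f1_yield = 6.0, 8.0, 9.1
print(round(mid_parent_heterosis(p1_yield, p2_yield, f1_yield), 1))     # 30.0
print(round(better_parent_heterosis(p1_yield, p2_yield, f1_yield), 2))  # 13.75
```

An F1 can show large mid-parent heterosis while offering a much smaller advantage over its better parent, which is why both figures are usually reported.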
Corn (maize)
Nearly all field corn (maize) grown in most developed nations exhibits heterosis. Modern corn hybrids substantially outyield conventional cultivars and respond better to fertilizer.
Corn heterosis was famously demonstrated in the early 20th century by George H. Shull and Edward M. East after hybrid corn was invented by Dr. William James Beal of Michigan State University based on work begun in 1879 at the urging of Charles Darwin. Dr. Beal's work led to the first published account of a field experiment demonstrating hybrid vigor in corn, by Eugene Davenport and Perry Holden in 1881. These various pioneers of botany and related fields showed that crosses of inbred lines made from a Southern dent and a Northern flint, respectively, exhibited substantial heterosis and outyielded conventional cultivars of that era. However, at that time such hybrids could not be economically made on a large scale for use by farmers. Donald F. Jones at the Connecticut Agricultural Experiment Station, New Haven invented the first practical method of producing a high-yielding hybrid maize in 1914–1917. Jones' method produced a double-cross hybrid, which requires two crossing steps working from four distinct original inbred lines. Later work by corn breeders produced inbred lines with sufficient vigor for practical production of a commercial hybrid in a single step, the single-cross hybrids. Single-cross hybrids are made from just two original parent inbreds. They are generally more vigorous and also more uniform than the earlier double-cross hybrids. The process of creating these hybrids often involves detasseling.
Temperate maize hybrids are derived from two main heterotic groups: 'Iowa Stiff Stalk Synthetic', and nonstiff stalk.
Rice (Oryza sativa)
Hybrid rice is cultivated in many countries, including China, India, Vietnam, and the Philippines. Compared to inbred lines, hybrids produce approximately 20% greater yield, and they comprise 45% of the rice-planting area in China. Rice production in China has risen enormously owing to the heavy use of hybrid rice. Chinese efforts have generated a super hybrid rice strain ('LYP9') with a production capability of around 15 tons per hectare. In India, too, several varieties have shown high vigor, including 'RH-10' and 'Suruchi 5401'.
Since rice is a self-pollinating species, it requires the use of male-sterile lines to generate hybrids from separate lineages. The most common way of achieving this is using lines with genetic male-sterility, as manual emasculation is not practical for large-scale hybridization. The first generation of hybrid rice was developed in the 1970s. It relies on three lines: a cytoplasmic male sterile (CMS) line, a maintainer line, and a restorer line. The second generation was widely adopted in the 1990s. Instead of a CMS line, it uses an environment-sensitive genic male sterile (EGMS) line, which can have its sterility reversed based on light or temperature. This removes the need for a maintainer, making the hybridization and breeding process more efficient (albeit still high-maintenance). Second generation lines show a yield increase of 5–10% over first generation lines. The third and current generation uses a nuclear male sterile (NMS) line. Third generation lines carry a recessive sterility gene, and their cultivation is less constrained by maintainer lines and environmental conditions. Additionally, transgenes are only present in the maintainer, so hybrid plants can benefit from hybrid vigor without requiring special oversight.
Animals
Hybrid livestock
The concept of heterosis is also applied in the production of commercial livestock. In cattle, crosses between Black Angus and Hereford produce a cross known as a "Black Baldy". In swine, "blue butts" are produced by the cross of Hampshire and Yorkshire. Other, more exotic hybrids (two different species, so genetically more dissimilar), such as "beefalo" which are hybrids of cattle and bison, are also used for specialty markets.
Poultry
Within poultry, sex-linked genes have been used to create hybrids in which males and females can be sorted at one day old by color. Specific genes used for this are genes for barring and wing feather growth. Crosses of this sort create what are sold as Black Sex-links, Red Sex-links, and various other crosses that are known by trade names.
Commercial broilers are produced by crossing different strains of White Rocks and White Cornish, the Cornish providing a large frame and the Rocks providing the fast rate of gain. The hybrid vigor produced allows the production of uniform birds at a marketable carcass weight at 6–9 weeks of age.
Likewise, hybrids between different strains of White Leghorn are used to produce laying flocks that provide the majority of white eggs for sale in the United States.
Dogs
In 2013, a study found that mixed breeds live on average 1.2 years longer than pure breeds.
John Scott and John L. Fuller performed a detailed study of purebred Cocker Spaniels, purebred Basenjis, and hybrids between them.
They found that hybrids ran faster than either parent, perhaps due to heterosis. Other characteristics, such as basal heart rate, did not show any heterosis—the dog's basal heart rate was close to the average of its parents—perhaps due to the additive effects of multiple genes.
Sometimes people working on a dog-breeding program find no useful heterosis.
All this said, studies do not provide definitive proof of hybrid vigor in dogs. This is largely due to the unknown heritage of most mixed breed dogs used. Results vary wildly, with some studies showing benefit and others finding the mixed breed dogs to be more prone to genetic conditions.
Birds
In 2014, a study undertaken by the Centre for Integrative Ecology at Deakin University in Geelong, Victoria, concluded that intrasubspecific hybrids between the subspecies Platycercus elegans flaveolus and P. e. elegans of the crimson rosella (P. elegans) were more likely to fight off diseases than their pure counterparts.
Humans
Human beings are all extremely genetically similar to one another. Michael Mingroni has proposed heterosis, in the form of hybrid vigor associated with historical reductions of the levels of inbreeding, as an explanation of the Flynn effect, the steady rise in IQ test scores around the world during the 20th century, though a review of nine studies found that there is no evidence to suggest inbreeding has an effect on IQ.
Controversy
The term heterosis often causes confusion and even controversy, particularly in selective breeding of domestic animals, because it is sometimes (incorrectly) claimed that all crossbred plants and animals are "genetically superior" to their parents, due to heterosis. But two problems exist with this claim:
according to an article published in the journal Genome Biology, "genetic superiority" is an ill-defined term and not generally accepted terminology within the scientific field of genetics. A related term, fitness, is well defined, but it can rarely be directly measured. Instead, scientists use objective, measurable quantities, such as the number of seeds a plant produces, the germination rate of a seed, or the percentage of organisms that survive to reproductive age. From this perspective, crossbred plants and animals exhibiting heterosis may have "superior" traits, but this does not necessarily equate to any evidence of outright "genetic superiority". Use of the term "superiority" is commonplace for example in crop breeding, where it is well understood to mean a better-yielding, more robust plant for agriculture. Such a plant may yield better on a farm, but would likely struggle to survive in the wild, making this use open to misinterpretation. In human genetics any question of "genetic superiority" is even more problematic due to the historical and political implications of any such claim. Some may even go as far as to describe it as a questionable value judgement in the realm of politics, not science.
not all hybrids exhibit heterosis (see outbreeding depression).
An example of the ambiguous value judgements imposed on hybrids and hybrid vigor is the mule. While mules are almost always infertile, they are valued for a combination of hardiness and temperament that is different from either of their horse or donkey parents. While these qualities may make them "superior" for particular uses by humans, the infertility issue implies that these animals would most likely become extinct without the intervention of humans through animal husbandry, making them "inferior" in terms of natural selection.
Machine press
A forming press, commonly shortened to press, is a machine tool that changes the shape of a work-piece by the application of pressure. The operator of a forming press is known as a press-tool setter, often shortened to tool-setter.
Presses can be classified according to:
their mechanism: hydraulic, mechanical, pneumatic;
their function: forging presses, stamping presses, press brakes, punch presses, etc.;
their structure: knuckle-joint press, screw press, expeller press;
their controllability: conventional vs. servo-presses.
Shop Press
A shop press typically consists of a simple rectangular frame, often fabricated from C-channel or tubing, containing a bottle jack or hydraulic cylinder that applies pressure via a ram to a work-piece. It is often used for general-purpose forming work in auto mechanic shops, machine shops, garages, basement shops, etc. Typical shop presses are capable of applying between 1 and 30 tons of force, depending on size and construction. Lighter-duty versions are often called arbor presses.
A shop press is commonly used to press interference fit parts together, such as gears onto shafts or bearings into housings.
Other presses by application
A press brake is a special type of machine press that bends sheet metal into shape. A good example of the type of work a press brake can do is the back-plate of a computer case. Other examples include brackets, frame pieces and electronic enclosures. Some press brakes have CNC controls and can form parts with accuracy to a fraction of a millimeter. Bending forces can range up to 3,000 tons.
A punch press is used to form holes.
A screw press is also known as a fly press.
A stamping press is a machine press used to shape or cut metal by deforming it with a die. It generally consists of a press frame, a bolster plate, and a ram.
Capping presses form caps from rolls of aluminium foil at up to 660 per minute.
An example of specialized press control: the servo-press
A servomechanism press, also known as a servo press or an electro-press, is a press driven by an AC servo motor. The torque produced is converted to a linear force via a ball screw. Pressure and position are controlled through a load cell and an encoder. The main advantage of a servo press is its low energy consumption: only 10–20% of that of other press machines.
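As a rough illustration of the torque-to-force conversion, a ball screw's linear output can be estimated from motor torque with the standard power-balance relation F = 2πηT / lead. The motor torque, screw lead, and efficiency below are hypothetical values, not figures for any particular machine:

```python
import math

def ball_screw_force(torque_nm, lead_m, efficiency=0.9):
    """Linear force (N) produced when a ball screw of the given lead
    converts motor torque, using F = 2*pi*eta*T / lead."""
    return 2 * math.pi * efficiency * torque_nm / lead_m

# Hypothetical servo motor: 10 N·m of torque driving a 10 mm lead screw
force_n = ball_screw_force(torque_nm=10.0, lead_m=0.010)
print(round(force_n / 1000, 1), "kN")  # 5.7 kN
```

The inverse relation explains why servo presses pair high-torque motors with fine-lead screws: halving the lead roughly doubles the available force at the ram.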
In stamping, what matters is the energy delivered to the work-piece, not merely the tonnage the machine can exert. Until recently, the only way to increase tonnage between the die and work-piece on a mechanical press was to build bigger machines with bigger motors.
Types of presses
The press style used depends directly on the end product. Press types are straight-side, BG (back geared), geared, gap, OBI (open back inclinable) and OBS (open back stationary). Hydraulic and mechanical presses are classified by the frame the moving elements are mounted on. The most common are the gap-frame, also known as C-frame, and the straight-side press. A straight-side press has vertical columns on either side of the machine and eliminates angular deflection. A C-frame allows easy access to the die area on three sides and requires less floor space. A type of gap-frame press, the OBI pivots the frame for easier scrap or part discharge. The OBS uses timed air blasts, devices, or a conveyor for scrap or part discharge.
History
Historically, metal was shaped by hand using a hammer. Later, larger hammers were constructed to press more metal at once, or to press thicker materials. Often a smith would employ a helper or apprentice to swing the hammer while the smith concentrated on positioning the work-piece. Drop hammers and trip hammers utilize a mechanism to lift the hammer, which then falls by gravity onto the work.
In the mid-19th century, manual and rotary-cam hammers began to be replaced in industry by the steam hammer, first described in 1784 by James Watt, the British inventor and mechanical engineer who also contributed to the earliest steam engines and condensers, but not built until 1840 by British inventor James Nasmyth. By the late 19th century, steam hammers had increased greatly in size; in 1891 the Bethlehem Iron Company made an enhancement allowing a steam hammer to deliver a 125-ton blow.
Most modern machine presses typically use a combination of electric motors and hydraulics to achieve the necessary pressure. Along with the evolution of presses came the evolution of the dies used within them.
Safety
Machine presses can be hazardous, so safety measures must always be taken. Bi-manual controls (controls the use of which requires both hands to be on the buttons to operate) are a very good way to prevent accidents, as are light curtains that keep the machine from working if the operator is in range of the die.
Bisection method
In mathematics, the bisection method is a root-finding method that applies to any continuous function for which one knows two values with opposite signs. The method consists of repeatedly bisecting the interval defined by these values and then selecting the subinterval in which the function changes sign, and therefore must contain a root. It is a very simple and robust method, but it is also relatively slow. Because of this, it is often used to obtain a rough approximation to a solution which is then used as a starting point for more rapidly converging methods. The method is also called the interval halving method, the binary search method, or the dichotomy method.
For polynomials, more elaborate methods exist for testing the existence of a root in an interval (Descartes' rule of signs, Sturm's theorem, Budan's theorem). They allow extending the bisection method into efficient algorithms for finding all real roots of a polynomial; see Real-root isolation.
The method
The method is applicable for numerically solving the equation f(x) = 0 for the real variable x, where f is a continuous function defined on an interval [a, b] and where f(a) and f(b) have opposite signs. In this case a and b are said to bracket a root since, by the intermediate value theorem, the continuous function f must have at least one root in the interval (a, b).
At each step the method divides the interval in two parts/halves by computing the midpoint c = (a+b) / 2 of the interval and the value of the function f(c) at that point. If c itself is a root then the process has succeeded and stops. Otherwise, there are now only two possibilities: either f(a) and f(c) have opposite signs and bracket a root, or f(c) and f(b) have opposite signs and bracket a root. The method selects the subinterval that is guaranteed to be a bracket as the new interval to be used in the next step. In this way an interval that contains a zero of f is reduced in width by 50% at each step. The process is continued until the interval is sufficiently small.
Explicitly, if f(c)=0 then c may be taken as the solution and the process stops. Otherwise, if f(a) and f(c) have opposite signs, then the method sets c as the new value for b, and if f(b) and f(c) have opposite signs then the method sets c as the new a. In both cases, the new f(a) and f(b) have opposite signs, so the method is applicable to this smaller interval.
Iteration tasks
The input for the method is a continuous function f, an interval [a, b], and the function values f(a) and f(b). The function values are of opposite sign (there is at least one zero crossing within the interval). Each iteration performs these steps:
Calculate c, the midpoint of the interval, c = (a + b)/2.
Calculate the function value at the midpoint, f(c).
If convergence is satisfactory (that is, c − a is sufficiently small, or |f(c)| is sufficiently small), return c and stop iterating.
Examine the sign of f(c) and replace either (a, f(a)) or (b, f(b)) with (c, f(c)) so that there is a zero crossing within the new interval.
When implementing the method on a computer, there can be problems with finite precision, so there are often additional convergence tests or limits to the number of iterations. Although f is continuous, finite precision may preclude a function value ever being zero. For example, consider f(x) = x − π; there is no floating-point value approximating π that gives exactly zero. Additionally, the difference between a and b is limited by the floating-point precision; i.e., as the difference between a and b decreases, at some point the midpoint of [a, b] will be numerically identical to (within floating-point precision of) either a or b.
Algorithm
The method may be written in pseudocode as follows:
input: Function f,
endpoint values a, b,
tolerance TOL,
maximum iterations NMAX
conditions: a < b,
either f(a) < 0 and f(b) > 0 or f(a) > 0 and f(b) < 0
output: value which differs from a root of f(x) = 0 by less than TOL
N ← 1
while N ≤ NMAX do // limit iterations to prevent infinite loop
c ← (a + b)/2 // new midpoint
if f(c) = 0 or (b – a)/2 < TOL then // solution found
Output(c)
Stop
end if
N ← N + 1 // increment step counter
if sign(f(c)) = sign(f(a)) then a ← c else b ← c // new interval
end while
Output("Method failed.") // max number of steps exceeded
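The pseudocode above translates almost line-for-line into a runnable function. The sketch below follows the same steps; the test polynomial at the end is an illustrative choice, not part of the algorithm:

```python
def bisect(f, a, b, tol=1e-6, nmax=100):
    """Find a root of f in [a, b] by repeated interval halving.

    f(a) and f(b) must have opposite signs."""
    fa = f(a)
    fb = f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(nmax):                 # limit iterations to prevent infinite loop
        c = (a + b) / 2                   # new midpoint
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:  # solution found
            return c
        if (fc > 0) == (fa > 0):          # f(c) has the sign of f(a): root is in [c, b]
            a, fa = c, fc
        else:                             # otherwise the root is in [a, c]
            b, fb = c, fc
    raise RuntimeError("Method failed: maximum iterations exceeded")

# Illustrative use on f(x) = x^3 - x - 2, bracketed by [1, 2]
root = bisect(lambda x: x**3 - x - 2, 1.0, 2.0)
print(round(root, 4))  # 1.5214
```

In practice, library routines such as scipy.optimize.bisect implement the same loop with more careful termination tests for floating-point edge cases.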
Example: Finding the root of a polynomial
Suppose that the bisection method is used to find a root of the polynomial
f(x) = x³ − x − 2.
First, two numbers a and b have to be found such that f(a) and f(b) have opposite signs. For the above function, a = 1 and b = 2 satisfy this criterion, as
f(1) = 1³ − 1 − 2 = −2 < 0
and
f(2) = 2³ − 2 − 2 = 4 > 0.
Because the function is continuous, there must be a root within the interval [1, 2].
In the first iteration, the end points of the interval which brackets the root are a1 = 1 and b1 = 2, so the midpoint is
c1 = (1 + 2)/2 = 1.5.
The function value at the midpoint is f(c1) = 1.5³ − 1.5 − 2 = −0.125. Because f(c1) is negative, a1 is replaced with c1 = 1.5 for the next iteration to ensure that f(a2) and f(b2) have opposite signs. As this continues, the interval between a and b will become increasingly smaller, converging on the root of the function.
After 13 iterations, it becomes apparent that there is a convergence to about 1.521: a root for the polynomial.
Analysis
The method is guaranteed to converge to a root of f if f is a continuous function on the interval [a, b] and f(a) and f(b) have opposite signs. The absolute error is halved at each step so the method converges linearly. Specifically, if c1 = (a + b)/2 is the midpoint of the initial interval, and cn is the midpoint of the interval in the nth step, then the difference between cn and a solution c is bounded by
|cn − c| ≤ (b − a)/2^n.
This formula can be used to determine, in advance, an upper bound on the number of iterations that the bisection method needs to converge to a root to within a certain tolerance.
The number n of iterations needed to achieve a required tolerance ε (that is, an error guaranteed to be at most ε), is bounded by
n ≤ ⌈log2(ε0/ε)⌉,
where ε0 = b − a is the initial bracket size and ε is the required bracket size. The main motivation to use the bisection method is that over the set of continuous functions, no other method can guarantee to produce an estimate cn to the solution c that in the worst case has a smaller absolute error after the same number of iterations. This is also true under several common assumptions on function f and the behaviour of the function in the neighbourhood of the root.
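This a-priori iteration bound is easy to evaluate in advance; a minimal sketch:

```python
import math

def bisection_iterations(bracket, tol):
    """Iterations needed to shrink an initial bracket of width `bracket`
    down to width `tol`: n = ceil(log2(bracket / tol))."""
    return math.ceil(math.log2(bracket / tol))

print(bisection_iterations(1.0, 1e-6))  # 20 iterations for six decimal digits
```

Because the bound grows only logarithmically, each additional decimal digit of accuracy costs a fixed number (about 3.3) of extra iterations.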
However, despite the bisection method being optimal with respect to worst-case performance under absolute error criteria, it is sub-optimal with respect to average performance under standard assumptions, as well as asymptotic performance. Popular alternatives to the bisection method, such as the secant method, Ridders' method or Brent's method (amongst others), typically perform better since they trade off worst-case performance to achieve higher orders of convergence to the root. Moreover, a strict improvement to the bisection method can be achieved with a higher order of convergence without trading off worst-case performance with the ITP method.
Generalization to higher dimensions
The bisection method has been generalized to multi-dimensional functions. Such methods are called generalized bisection methods.
Methods based on degree computation
Some of these methods are based on computing the topological degree, which for a bounded region Ω and a differentiable function f is defined as a sum over its roots:
deg(f, Ω) = Σ_{y ∈ f⁻¹(0)} sgn(det Jf(y)),
where Jf(y) is the Jacobian matrix of f at y, 0 = (0, 0, ..., 0), and sgn
is the sign function. In order for a root to exist, it is sufficient that deg(f, Ω) ≠ 0, and this can be verified using a surface integral over the boundary of Ω.
Characteristic bisection method
The characteristic bisection method uses only the signs of a function in different points. Let f be a function from Rd to Rd, for some integer d ≥ 2. A characteristic polyhedron (also called an admissible polygon) of f is a polytope in Rd, having 2d vertices, such that in each vertex v, the combination of signs of f(v) is unique and the topological degree of f on its interior is not zero (a necessary criterion to ensure the existence of a root). For example, for d=2, a characteristic polyhedron of f is a quadrilateral with vertices (say) A,B,C,D, such that:
sgn f(A) = (−1, −1), that is, f1(A)<0, f2(A)<0.
sgn f(B) = (−1, +1), that is, f1(B)<0, f2(B)>0.
sgn f(C) = (+1, −1), that is, f1(C)>0, f2(C)<0.
sgn f(D) = (+1, +1), that is, f1(D)>0, f2(D)>0.
A proper edge of a characteristic polygon is an edge between a pair of vertices, such that the sign vector differs by only a single sign. In the above example, the proper edges of the characteristic quadrilateral are AB, AC, BD and CD. A diagonal is a pair of vertices, such that the sign vector differs by all d signs. In the above example, the diagonals are AD and BC.
At each iteration, the algorithm picks a proper edge of the polyhedron (say, AB), and computes the signs of f in its mid-point (say, M). Then it proceeds as follows:
If sgn f(M) = sgn f(A), then A is replaced by M, and we get a smaller characteristic polyhedron.
If sgn f(M) = sgn f(B), then B is replaced by M, and we get a smaller characteristic polyhedron.
Else, we pick a new proper edge and try again.
Suppose the diameter (= length of longest proper edge) of the original characteristic polyhedron is D. Then, at least ⌈log2(D/ε)⌉ bisections of edges are required so that the diameter of the remaining polygon will be at most ε. If the topological degree of the initial polyhedron is not zero, then there is a procedure that can choose an edge such that the next polyhedron also has nonzero degree.
Sai (weapon)
The sai (Japanese: 釵; Chinese: 鐵尺) is a pointed melee weapon from Okinawa. It was historically utilized in martial arts such as Okinawan kobudō and southern Chinese martial arts, and has been absorbed into the curriculum of many modern martial arts. Although similar weapons can be found in other parts of Asia, the sai is the Okinawan take on the basic concept and should not be confused with the other weapons. The sai is primarily used for stabbing, striking, parrying and disarming opponents. It consists of a pointed metal main prong that projects from a one-handed handle, two shorter metal side prongs, which project from the opposite sides of the base of the main prong and point in the same direction as it, and a blunt metal pommel fixed to the bottom end of the handle. The sai came to international attention when Okinawan kobudō and karate reached international popularity in the mid-20th century.
History
Before the creation of the sai in Okinawa, similar weapons were already being used in other Asian countries including India, Thailand, China, Vietnam, Malaysia, and Indonesia. The basic concept of this kind of weapon was brought to Okinawa over time from one or several of these places. However, the sai is the Okinawan take on this weapon concept, and should not be mixed with the other similar weapons.
Some sources theorize that this weapon concept may be based on the Indian trisula, an ancient Hindu-Buddhist symbol that may have spread along with Hinduism and Buddhism into South-East Asia. The word trisula itself can refer to either a long or short-handled trident.
In Okinawa the sai was used by the domestic police (ufuchiku) to arrest criminals and for crowd control. Use of the sai in Okinawan kobudō was approved in 1668 by Moto Chohei, an Okinawan prince.
Japan had a similar weapon, the jitte, which was originally used as a blunt weapon by guards in the Shogun's palace, and was subsequently issued to senior officials as a badge of office. Edo period examples of the jitte typically have only a single hook. The relationship between the sai and jitte is unclear.
Parts (in Okinawan)
Monouchi, the metal main prong of the sai, that is either round or faceted.
Saki, the sharp point of the main prong.
Yoku, the two shorter metal side prongs of the sai, which usually point in the same direction as the main prong, with the exception of the manji sai developed by Taira Shinken, which has the direction of one of the side prongs reversed, causing the weapon to be reminiscent of a swastika (manji).
Tsume, the sharp point of the two side prongs.
Moto, the center point between the two side prongs.
Tsuka, the one-handed handle of the sai, which is usually wrapped with different materials or given different treatments to add more grip to it.
Tsukagashira, the blunt metal pommel of the sai.
Technique
The sai is a weapon typically wielded in pairs, with one in each hand. In modern Okinawan Kobudo, five kata (choreographed patterns of movements in martial arts) are commonly taught, including two kihon kata.
The utility of the sai is given away by its distinctive trident-like shape. It is a weapon primarily used for fast stabbing and striking, but being very versatile, it has many other uses as well. These include a variety of blocks, parries and captures against attackers from all directions and height levels. Use of the sharp points, the main prong and the pommel is emphasized, as well as rapid grip changes for multiple fast stabs and strikes.
One commonly depicted technique in sai kata is to use one of the sai's side prongs to entrap an opponent's weapon and then disarm them of it. Some variations of the sai have the two side prongs pointing inwards towards the main prong to facilitate this maneuver. While this does not completely immobilize the attacker, it encumbers them in close quarters.
Because there is no morphological plural in Japanese, the word "sai" refers to either a single weapon or multiple. Nicho sai refers to a kata that uses two sai, while sancho sai kata refers to kata using three sai.
Midazolam
Midazolam, sold under the brand name Versed among others, is a benzodiazepine medication used for anesthesia, premedication before surgical anesthesia, and procedural sedation, and to treat severe agitation. It induces sleepiness, decreases anxiety, and causes anterograde amnesia.
The drug does not cause an individual to become unconscious, merely to be sedated. It is also useful for the treatment of prolonged (lasting over five minutes) seizures. Midazolam can be given by mouth, intravenously, by injection into a muscle, by spraying into the nose, or through the cheek. When given intravenously, it typically begins working within five minutes; when injected into a muscle, it can take fifteen minutes to begin working; when taken orally, it can take 10–20 minutes to begin working.
Side effects can include a decrease in efforts to breathe, low blood pressure, and sleepiness. Tolerance to its effects and withdrawal syndrome may occur following long-term use. Paradoxical effects, such as increased activity, can occur especially in children and older people. There is evidence of risk when used during pregnancy but no evidence of harm with a single dose during breastfeeding.
Midazolam was patented in 1974 and came into medical use in 1982. It is on the World Health Organization's List of Essential Medicines. Midazolam is available as a generic medication. In many countries, it is a controlled substance.
Medical uses
Seizures
Midazolam is sometimes used for the acute management of prolonged seizures. Long-term use for the management of epilepsy is not recommended due to the significant risk of tolerance (which renders midazolam and other benzodiazepines ineffective) and the significant side effect of sedation. A benefit of midazolam is that in children it can be given in the cheek or in the nose for acute seizures, including status epilepticus.
Drawbacks include a high degree of breakthrough seizures—due to the short half-life of midazolam—in over 50% of people treated, as well as treatment failure in 14–18% of people with refractory status epilepticus. Tolerance develops rapidly to the anticonvulsant effect, and the dose may need to be increased by several times to maintain anticonvulsant therapeutic effects. With prolonged use, tolerance and tachyphylaxis can occur and the elimination half-life may increase, up to days. Buccal and intranasal midazolam may be both easier to administer and more effective than rectally administered diazepam in the emergency control of seizures.
Procedural sedation
Intravenous midazolam is indicated for procedural sedation (often in combination with an opioid, such as fentanyl), preoperative sedation, for the induction of general anesthesia, and for sedation of people who are ventilated in critical care units. Midazolam is superior to diazepam in impairing memory of endoscopy procedures, but propofol has a quicker recovery time and a better memory-impairing effect. It is the most popular benzodiazepine in the intensive care unit (ICU) because of its short elimination half-life, combined with its water solubility and its suitability for continuous infusion. However, for long-term sedation, lorazepam is preferred due to its long duration of action, and propofol has advantages over midazolam when used in the ICU for sedation, such as shorter weaning time and earlier tracheal extubation.
Midazolam is sometimes used in neonatal intensive care units. When used, additional caution is required in newborns; midazolam should not be used for longer than 72 hours due to risks of tachyphylaxis, and the possibility of development of a benzodiazepine withdrawal syndrome, as well as neurological complications. Bolus injections should be avoided due to the increased risk of cardiovascular depression, as well as neurological complications.
Sedation using midazolam can be used to relieve anxiety and manage behaviour in children undergoing dental treatment.
Agitation
Midazolam, in combination with an antipsychotic drug, is indicated for the acute management of schizophrenia when it is associated with aggressive or out-of-control behaviour.
End of life care
In the final stages of end-of-life care, midazolam is routinely used at low doses via subcutaneous injection to help with agitation, restlessness or anxiety in the last hours or days of life. At higher doses during the last weeks of life, midazolam is considered a first line agent in palliative continuous deep sedation therapy when it is necessary to alleviate intolerable suffering not responsive to other treatments, but the need for this is rare.
Administration
Routes of administration of midazolam can be oral, intranasal, buccal, intravenous, and intramuscular.
Contraindications
Benzodiazepines require special precaution if used in the elderly, during pregnancy, in children, in alcohol- or other drug-dependent individuals or those with comorbid psychiatric disorders. Additional caution is required in critically ill patients, as accumulation of midazolam and its active metabolites may occur. Kidney or liver impairments may slow down the elimination of midazolam leading to prolonged and enhanced effects.
Side effects
Side effects of midazolam in the elderly are discussed below. People experiencing amnesia as a side effect of midazolam are generally unaware that their memory is impaired, unless they had previously known it to be a side effect.
Long-term use of benzodiazepines has been associated with long-lasting deficits in memory, with only partial recovery six months after stopping benzodiazepines. It is unclear whether full recovery occurs after longer periods of abstinence. Benzodiazepines can cause or worsen depression. Paradoxical excitement occasionally occurs with benzodiazepines, including a worsening of seizures. Children, elderly individuals, those with a history of excessive alcohol use, and individuals with a history of aggressive behavior or anger are at increased risk of paradoxical effects. Paradoxical reactions are particularly associated with intravenous administration. After nighttime administration of midazolam, residual 'hangover' effects, such as sleepiness and impaired psychomotor and cognitive functions, may persist into the next day. This may impair the ability of users to drive safely and may increase the risk of falls and hip fractures. Sedation, respiratory depression, and hypotension due to a reduction in systemic vascular resistance, as well as an increase in heart rate, can occur. If intravenous midazolam is given too quickly, hypotension may occur. A "midazolam infusion syndrome" may result from high doses; it is characterised by delayed arousal hours to days after discontinuation of midazolam and may lead to an increase in the length of ventilatory support needed.
In rare susceptible individuals, midazolam has been known to cause a paradoxical reaction, a well-documented complication with benzodiazepines. When this occurs, the individual may experience anxiety, involuntary movements, aggressive or violent behavior, uncontrollable crying or verbalization, and other similar effects. This seems to be related to the altered state of consciousness or disinhibition produced by the drug. Paradoxical behavior is often not recalled by the patient due to the amnesia-producing properties of the drug. In extreme situations, flumazenil can be administered to inhibit or reverse the effects of midazolam. Antipsychotic medications, such as haloperidol, have also been used for this purpose.
Midazolam is known to cause respiratory depression. In healthy humans, 0.15 mg/kg of midazolam may cause respiratory depression, which is postulated to be a central nervous system (CNS) effect. When midazolam is administered in combination with fentanyl, the incidence of hypoxemia or apnea becomes more likely.
Although the incidence of respiratory depression/arrest is low (0.1–0.5%) when midazolam is administered alone at normal doses, the concomitant use with CNS acting drugs, mainly analgesic opiates, may increase the possibility of hypotension, respiratory depression, respiratory arrest, and death, even at therapeutic doses. Potential drug interactions involving at least one CNS depressant were observed for 84% of midazolam users who were subsequently required to receive the benzodiazepine antagonist flumazenil. Therefore, efforts directed toward monitoring drug interactions and preventing injuries from midazolam administration are expected to have a substantial impact on the safe use of this drug.
Pregnancy and breastfeeding
Midazolam, when taken during the third trimester of pregnancy, may cause risk to the neonate, including benzodiazepine withdrawal syndrome, with possible symptoms including hypotonia, apnoeic spells, cyanosis, and impaired metabolic responses to cold stress. Symptoms of hypotonia and the neonatal benzodiazepine withdrawal syndrome have been reported to persist from hours to months after birth. Other neonatal withdrawal symptoms include hyperexcitability, tremor, and gastrointestinal upset (diarrhea or vomiting). Breastfeeding by mothers using midazolam is not recommended.
Elderly
Additional caution is required in the elderly, as they are more sensitive to the pharmacological effects of benzodiazepines, metabolise them more slowly, and are more prone to adverse effects, including drowsiness, amnesia (especially anterograde amnesia), ataxia, hangover effects, confusion, and falls.
Tolerance, dependence, and withdrawal
A benzodiazepine dependence occurs in about one-third of individuals who are treated with benzodiazepines for longer than 4 weeks, which typically results in tolerance and benzodiazepine withdrawal syndrome when the dose is reduced too rapidly. Midazolam infusions may induce tolerance and a withdrawal syndrome in a matter of days. The risk factors for dependence include dependent personality, use of a benzodiazepine that is short-acting, high potency and long-term use of benzodiazepines. Withdrawal symptoms from midazolam can range from insomnia and anxiety to seizures and psychosis. Withdrawal symptoms can sometimes resemble a person's underlying condition. Gradual reduction of midazolam after regular use can minimise withdrawal and rebound effects. Tolerance and the resultant withdrawal syndrome may be due to receptor down-regulation and GABAA receptor alterations in gene expression, which causes long-term changes in the function of the GABAergic neuronal system.
Chronic users of benzodiazepine medication who are given midazolam experience reduced therapeutic effects of midazolam, due to tolerance to benzodiazepines. Prolonged infusions with midazolam results in the development of tolerance; if midazolam is given for a few days or more a withdrawal syndrome can occur. Therefore, preventing a withdrawal syndrome requires that a prolonged infusion be gradually withdrawn, and sometimes, continued tapering of dose with an oral long-acting benzodiazepine such as clorazepate dipotassium. When signs of tolerance to midazolam occur during intensive care unit sedation the addition of an opioid or propofol is recommended. Withdrawal symptoms can include irritability, abnormal reflexes, tremors, clonus, hypertonicity, delirium and seizures, nausea, vomiting, diarrhea, tachycardia, hypertension, and tachypnea.
Overdose
A midazolam overdose is considered a medical emergency and generally requires the immediate attention of medical personnel. Benzodiazepine overdose in healthy individuals is rarely life-threatening with proper medical support; however, the toxicity of benzodiazepines increases when they are combined with other CNS depressants such as alcohol, opioids, or tricyclic antidepressants. The toxicity of benzodiazepine overdose and the risk of death are also increased in the elderly and those with obstructive pulmonary disease or when used intravenously. Treatment is supportive; activated charcoal can be used within an hour of the overdose. The antidote for an overdose of midazolam (or any other benzodiazepine) is flumazenil. While effective in reversing the effects of benzodiazepines it is not used in most cases as it may trigger seizures in mixed overdoses and benzodiazepine dependent individuals.
Symptoms of midazolam overdose can include:
Ataxia
Dysarthria
Nystagmus
Slurred speech
Somnolence (difficulty staying awake)
Mental confusion
Hypotension
Respiratory arrest
Vasomotor collapse
Impaired motor functions
Impaired reflexes
Impaired coordination
Impaired balance
Dizziness
Coma
Death
Detection in body fluids
Concentrations of midazolam or its major metabolite, 1-hydroxymidazolam glucuronide, may be measured in plasma, serum, or whole blood to monitor for safety in those receiving the drug therapeutically, to confirm a diagnosis of poisoning in hospitalized patients, or to assist in a forensic investigation of a case of fatal overdosage. Patients with renal dysfunction may exhibit prolongation of elimination half-life for both the parent drug and its active metabolite, with accumulation of these two substances in the bloodstream and the appearance of adverse depressant effects.
Interactions
Protease inhibitors, nefazodone, sertraline, grapefruit, fluoxetine, erythromycin, diltiazem, and clarithromycin inhibit the metabolism of midazolam, leading to a prolonged action. St John's wort, rifapentine, rifampin, rifabutin, and phenytoin enhance the metabolism of midazolam, leading to a reduced action. Sedating antidepressants, antiepileptic drugs such as phenobarbital, phenytoin and carbamazepine, sedative antihistamines, opioids, antipsychotics, and alcohol enhance the sedative effects of midazolam. Midazolam is metabolized almost completely by cytochrome P450 3A4 (CYP3A4). Atorvastatin administration along with midazolam results in a reduced elimination rate of midazolam. St John's wort decreases the blood levels of midazolam. Grapefruit juice reduces intestinal CYP3A4 activity, resulting in less metabolism and higher plasma concentrations.
Pharmacology
Midazolam is a short-acting benzodiazepine in adults, with an elimination half-life of 1.5–2.5 hours. In the elderly, as well as in young children and adolescents, the elimination half-life is longer. Midazolam is metabolised into an active metabolite, alpha-hydroxymidazolam. Age-related deficits and renal and liver status affect the pharmacokinetics of midazolam as well as its active metabolite. However, the active metabolite of midazolam is minor and contributes only 10% of the biological activity of midazolam. Midazolam is poorly absorbed orally, with only 50% of the drug reaching the bloodstream. Midazolam is metabolised by cytochrome P450 (CYP) enzymes and by glucuronide conjugation; oxidation is the major metabolic pathway in human liver microsomes (HLM), in which the half-life (t1/2) of midazolam is 3.3 minutes. The therapeutic as well as adverse effects of midazolam are due to its effects on GABAA receptors; midazolam does not activate GABAA receptors directly but, as with other benzodiazepines, it enhances the effect of the endogenous neurotransmitter GABA on the GABAA receptors (increasing the frequency of Cl− channel opening), resulting in neural inhibition. Almost all of midazolam's properties can be explained by the actions of benzodiazepines on GABAA receptors. This results in the following pharmacological properties: sedation, induction of sleep, reduction in anxiety, anterograde amnesia, muscle relaxation, and anticonvulsant effects.
History
Midazolam is among about 35 benzodiazepines currently used medically, and was synthesized in 1975 by Walser and Fryer at Hoffmann-LaRoche, Inc in the United States. Owing to its water solubility, it was found to be less likely to cause thrombophlebitis than similar drugs. The anticonvulsant properties of midazolam were studied in the late 1970s, but not until the 1990s did it emerge as an effective treatment for convulsive status epilepticus. It is the most commonly used benzodiazepine in anesthetic medicine. In acute medicine, midazolam has become more popular than other benzodiazepines, such as lorazepam and diazepam, because it is shorter lasting, is more potent, and causes less pain at the injection site. Midazolam is also becoming increasingly popular in veterinary medicine due to its water solubility. In 2018, it was revealed that the CIA had considered using midazolam as a "truth serum" on terrorist suspects in project "Medication".
Society and culture
Cost
Midazolam is available as a generic medication.
Availability
Midazolam is available in the United States as a syrup or as an injectable solution.
Dormicum brand midazolam is marketed by Roche as white, oval, 7.5 mg tablets in boxes of two or three blister strips of 10 tablets, and as blue, oval, 15 mg tablets in boxes of two (Dormonid 3x) blister strips of 10 tablets. The tablets are imprinted with "Roche" on one side and the dose of the tablet on the other side. Dormicum is also available as 1, 3, and 10 mL ampoules at a concentration of 5 mg/mL. Another manufacturer, Novell Pharmaceutical Laboratories, makes it available as Miloz in 3 and 5 mL ampoules. Midazolam is the only water-soluble benzodiazepine available; it becomes soluble when the injectable solution is buffered to a pH of 2.9–3.7. Another maker is Roxane Laboratories, whose product is an oral solution, midazolam HCl syrup, 2 mg/mL, a clear, red to purplish-red, cherry-flavored syrup. Midazolam can be administered intramuscularly, intravenously, intrathecally, intranasally, buccally, or orally.
Legal status
In the Netherlands, midazolam is a List II drug of the Opium Law.
Midazolam is a Schedule IV drug under the Convention on Psychotropic Substances. In the United Kingdom, midazolam is a Schedule 3/Class C controlled drug. In the United States, midazolam (DEA number 2884) is on the Schedule IV list of the Controlled Substances Act as a non-narcotic agent with low potential for abuse.
Marketing authorization
In 2011, the European Medicines Agency (EMA) granted a marketing authorisation for a buccal application form of midazolam, sold under the brand name Buccolam. Buccolam was initially approved for the treatment of prolonged, acute, convulsive seizures in people from three months to less than 18 years of age. This is the first application of a paediatric-use marketing authorisation by the EMA.
Use in executions
The drug has been introduced for use in executions by lethal injection in certain jurisdictions in the United States in combination with other drugs. It was introduced to replace pentobarbital after the latter's manufacturer disallowed that drug's use for executions. Midazolam acts as a sedative, resulting in the prisoner being in a state of deep anesthesia comparable to that experienced during surgery. One or more other drugs are usually used to stop the prisoner's heart, rendering them medically dead.
Midazolam has been used as part of a three-drug cocktail with vecuronium bromide and potassium chloride in Florida and Oklahoma prisons and has also been used along with hydromorphone in a two-drug protocol in Ohio and Arizona.
Notable incidents
The state of Florida used midazolam to execute William Frederick Happ in October 2013.
The state of Ohio used midazolam in the execution of Dennis McGuire in January 2014; he was heavily anesthetized and unconscious within four minutes of the start of the midazolam administration, but more than 20 minutes passed after that point before he was declared medically dead. Controversy arose after he was observed gasping and appeared to choke during that time, according to reporters who were allowed to be present, leading to questions about the dosing and timing of the drug administration, as well as the choice of drugs.
The usage of midazolam in executions became controversial after condemned inmate Clayton Lockett apparently regained consciousness and started speaking midway through his 2014 execution when the state of Oklahoma attempted to execute him with an untested three-drug lethal injection combination including 100mg of midazolam. Prison officials reportedly discussed taking him to a hospital before he was pronounced dead of a heart attack 40 minutes after the execution began. An observing doctor stated that Lockett's vein had ruptured. It is not clear whether his death was caused by one or more of the drugs or by a problem in the administration procedure, nor is it clear what quantities of vecuronium bromide and potassium chloride were released to his system before the execution was cancelled.
According to news reports, the execution of Ronald Bert Smith in the state of Alabama on 8 December 2016 allegedly went awry due to the fact he displayed movement soon after midazolam was injected, although prison staff confirmed twice that he was still unconscious before injecting the two fatal drugs. This controversy again stirred concern among the public regarding the effectiveness of the drug in question.
In October 2016, the state of Ohio announced that it would resume executions in January 2017, using a formulation of midazolam, vecuronium bromide, and potassium chloride, but this was blocked by a federal judge. On 26 July 2017, Ronald Phillips was executed with a three-drug cocktail including midazolam after the Supreme Court refused to grant a stay. Prior to this, the last execution in Ohio had been that of Dennis McGuire. Murderer Gary Otte's lawyers unsuccessfully challenged his Ohio execution, arguing that midazolam might not protect him from serious pain when the other drugs are administered. He died without incident in about 14 minutes on 13 September 2017.
In April 2017, the state of Arkansas carried out a double-execution, of Jack Harold Jones, 52, and Marcel Williams, 46. Arkansas attempted to execute eight people before its supply of midazolam expired on 30 April 2017. Two of them were granted a stay of execution, and another, Ledell Lee, 51, was executed on 20 April 2017.
In October 2021, the state of Oklahoma executed inmate John Marion Grant, 60, using midazolam as part of its three-drug cocktail hours after the U.S. Supreme Court ruled to lift a stay of execution for Oklahoma death row inmates. The execution was the state's first since 2015. Witnesses to the execution said that when the first drug, midazolam, began to flow at 4:09 p.m., Grant started convulsing about two dozen times and vomited. Grant continued breathing, and a member of the execution team wiped the vomit off his face. At 4:15 p.m., officials said Grant was unconscious, and he was pronounced dead at 4:21 p.m.
Legal challenges
In Glossip v. Gross, attorneys for three Oklahoma inmates argued that midazolam could not achieve the level of unconsciousness required for surgery, meaning severe pain and suffering was likely. They argued that midazolam was cruel and unusual punishment and thus contrary to the Eighth Amendment to the United States Constitution. In June 2015, the U.S. Supreme Court ruled that they had failed to prove that midazolam was cruel and unusual when compared to known, available alternatives.
The state of Nevada is also known to use midazolam in execution procedures. In July 2018, one of the manufacturers accused state officials of obtaining the medication under false pretences. This incident was the first time a drug company successfully, though temporarily, halted an execution. A previous attempt in 2017 to halt an execution in the state of Arizona by another drug manufacturer was not successful.
Regula falsi

In mathematics, the regula falsi, method of false position, or false position method is a very old method for solving an equation with one unknown; this method, in modified form, is still in use. In simple terms, the method is the trial and error technique of using test ("false") values for the variable and then adjusting the test value according to the outcome. This is sometimes also referred to as "guess and check". Versions of the method predate the advent of algebra and the use of equations.
As an example, consider problem 26 in the Rhind papyrus, which asks for a solution of (written in modern notation) the equation x + x/4 = 15. This is solved by false position. First, guess that x = 4 to obtain, on the left, 4 + 4/4 = 5. This guess is a good choice since it produces an integer value. However, 4 is not the solution of the original equation, as it gives a value (5) which is three times too small. To compensate, multiply x (currently set to 4) by 3 and substitute again to get 12 + 12/4 = 15, verifying that the solution is x = 12.
Modern versions of the technique employ systematic ways of choosing new test values and are concerned with the questions of whether or not an approximation to a solution can be obtained, and if it can, how fast can the approximation be found.
Two historical types
Two basic types of false position method can be distinguished historically, simple false position and double false position.
Simple false position is aimed at solving problems involving direct proportion. Such problems can be written algebraically in the form: determine x such that

ax = b,

if a and b are known. The method begins by using a test input value x′ and finding the corresponding output value b′ by multiplication: ax′ = b′. The correct answer is then found by proportional adjustment, x = (b / b′) x′.
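The proportional adjustment can be sketched in a few lines of Python (the function and names here are illustrative, not part of any historical source), using Rhind papyrus problem 26, x + x/4 = 15, as the test case:

```python
def simple_false_position(f, target, guess):
    """Solve f(x) = target for a directly proportional f
    by scaling a single test ("false") value."""
    trial = f(guess)               # output b' produced by the false value x'
    return guess * target / trial  # proportional adjustment x = (b / b') x'

# Rhind papyrus problem 26: x + x/4 = 15. The guess x = 4 yields 5,
# which is three times too small, so the answer is 4 * (15 / 5) = 12.
x = simple_false_position(lambda v: v + v / 4, 15, 4)
```

Since f here is f(x) = (5/4)x, the problem is one of direct proportion, so the single adjustment is exact.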
Double false position is aimed at solving more difficult problems that can be written algebraically in the form: determine x such that

f(x) = 0,

if it is known that

f(x1) = b1, f(x2) = b2.

Double false position is mathematically equivalent to linear interpolation. By using a pair of test inputs and the corresponding pair of outputs, the result of this algorithm, given by

x = (b1 x2 − b2 x1) / (b1 − b2),
would be memorized and carried out by rote. Indeed, the rule as given by Robert Recorde in his Ground of Artes (c. 1542) is:
Gesse at this woorke as happe doth leade.
By chaunce to truthe you may procede.
And firste woorke by the question,
Although no truthe therein be don.
Suche falsehode is so good a grounde,
That truth by it will soone be founde.
From many bate to many mo,
From to fewe take to fewe also.
With to much ioyne to fewe againe,
To to fewe adde to manye plaine.
In crossewaies multiplye contrary kinde,
All truthe by falsehode for to fynde.
For an affine linear function,

f(x) = ax + c,

double false position provides the exact solution, while for a nonlinear function f it provides an approximation that can be successively improved by iteration.
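As a sketch (function and names are illustrative), the classical double false position formula applied to an affine function recovers its root exactly:

```python
def double_false_position(f, x1, x2):
    """Classical double false position: evaluate f at two test
    inputs and return the root of the line through the two points."""
    b1, b2 = f(x1), f(x2)
    return (b1 * x2 - b2 * x1) / (b1 - b2)

# Affine example f(x) = 2x - 6, whose root is x = 3; any pair of
# distinct test inputs gives the exact answer.
root = double_false_position(lambda x: 2 * x - 6, 0.0, 10.0)
```

For a nonlinear f the same call returns only an approximation, which repeated application can refine.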
History
The simple false position technique is found in cuneiform tablets from ancient Babylonian mathematics, and in papyri from ancient Egyptian mathematics.
Double false position arose in late antiquity as a purely arithmetical algorithm. In the ancient Chinese mathematical text called The Nine Chapters on the Mathematical Art (九章算術), dated from 200 BC to AD 100, most of Chapter 7 was devoted to the algorithm. There, the procedure was justified by concrete arithmetical arguments, then applied creatively to a wide variety of story problems, including one involving what we would call secant lines on a conic section. A more typical example is this "joint purchase" problem involving an "excess and deficit" condition:
Now an item is purchased jointly; everyone contributes 8 [coins], the excess is 3; everyone contributes 7, the deficit is 4. Tell: The number of people, the item price, what is each? Answer: 7 people, item price 53.
Between the 9th and 10th centuries, the Egyptian mathematician Abu Kamil wrote a now-lost treatise on the use of double false position, known as the Book of the Two Errors (Kitāb al-khaṭāʾayn). The oldest surviving writing on double false position from the Middle East is that of Qusta ibn Luqa (10th century), an Arab mathematician from Baalbek, Lebanon. He justified the technique by a formal, Euclidean-style geometric proof. Within the tradition of medieval Muslim mathematics, double false position was known as hisāb al-khaṭāʾayn ("reckoning by two errors"). It was used for centuries to solve practical problems such as commercial and juridical questions (estate partitions according to rules of Quranic inheritance), as well as purely recreational problems. The algorithm was often memorized with the aid of mnemonics, such as a verse attributed to Ibn al-Yasamin and balance-scale diagrams explained by al-Hassar and Ibn al-Banna, all three being mathematicians of Moroccan origin.
Leonardo of Pisa (Fibonacci) devoted Chapter 13 of his book Liber Abaci (AD 1202) to explaining and demonstrating the uses of double false position, terming the method regulis elchatayn after the al-khaṭāʾayn method that he had learned from Arab sources. In 1494, Pacioli used the term el cataym in his book Summa de arithmetica, probably taking the term from Fibonacci. Other European writers would follow Pacioli and sometimes provided a translation into Latin or the vernacular. For instance, Tartaglia translates the Latinized version of Pacioli's term into the vernacular "false positions" in 1556. Pacioli's term nearly disappeared in the 16th century European works and the technique went by various names such as "Rule of False", "Rule of Position" and "Rule of False Position". Regula Falsi appears as the Latinized version of Rule of False as early as 1690.
Several 16th century European authors felt the need to apologize for the name of the method in a science that seeks to find the truth. For instance, in 1568 Humphrey Baker says:
Numerical analysis
The method of false position provides an exact solution for linear functions, but more direct algebraic techniques have supplanted its use for these functions. However, in numerical analysis, double false position became a root-finding algorithm used in iterative numerical approximation techniques.
Many equations, including most of the more complicated ones, can be solved only by iterative numerical approximation. This consists of trial and error, in which various values of the unknown quantity are tried. That trial-and-error may be guided by calculating, at each step of the procedure, a new estimate for the solution. There are many ways to arrive at a calculated-estimate and regula falsi provides one of these.
Given an equation, move all of its terms to one side so that it has the form f(x) = 0, where f is some function of the unknown variable x. A value c that satisfies this equation, that is, f(c) = 0, is called a root or zero of the function f and is a solution of the original equation. If f is a continuous function and there exist two points a0 and b0 such that f(a0) and f(b0) are of opposite signs, then, by the intermediate value theorem, the function f has a root in the interval (a0, b0).
There are many root-finding algorithms that can be used to obtain approximations to such a root. One of the most common is Newton's method, but it can fail to find a root under certain circumstances and may be computationally costly, since it requires computing the function's derivative. Other methods are needed, and one general class is the two-point bracketing methods. These methods proceed by producing a sequence of shrinking intervals [a_k, b_k], at the kth step, such that (a_k, b_k) contains a root of f.
Two-point bracketing methods
These methods start with two x-values, initially found by trial and error, at which f has opposite signs. Under the continuity assumption, a root of f is guaranteed to lie between these two values, that is to say, these values "bracket" the root. A point strictly between these two values is then selected and used to create a smaller interval that still brackets a root. If c is the point selected, then the smaller interval goes from c to the endpoint where f has the sign opposite that of f(c). In the improbable case that f(c) = 0, a root has been found and the algorithm stops. Otherwise, the procedure is repeated as often as necessary to obtain an approximation to the root to any desired accuracy.
The point selected in any current interval can be thought of as an estimate of the solution. The different variations of this method involve different ways of calculating this solution estimate.
Preserving the bracketing and ensuring that the solution estimates lie in the interior of the bracketing intervals guarantees that the solution estimates will converge toward the solution, a guarantee not available with other root finding methods such as Newton's method or the secant method.
The simplest variation, called the bisection method, calculates the solution estimate as the midpoint of the bracketing interval. That is, if at step k the current bracketing interval is [a_k, b_k], then the new solution estimate c_k is obtained by

c_k = (a_k + b_k) / 2.

This ensures that c_k is between a_k and b_k, thereby guaranteeing convergence toward the solution.
Since the bracketing interval's length is halved at each step, the bisection method's error is, on average, halved with each iteration. Hence, every three iterations the method gains approximately a factor of 2³ = 8, i.e. roughly a decimal place, in accuracy.
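The bisection rule just described can be sketched in Python (an illustrative implementation, not from the source):

```python
def bisect(f, a, b, tol=1e-10, max_iter=200):
    """Bisection: repeatedly halve a bracketing interval [a, b],
    where f(a) and f(b) have opposite signs, to pin down a root."""
    fa = f(a)
    assert fa * f(b) < 0, "endpoints must bracket a root"
    for _ in range(max_iter):
        c = (a + b) / 2            # midpoint estimate
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c
        if fa * fc > 0:            # root lies in [c, b]
            a, fa = c, fc
        else:                      # root lies in [a, c]
            b = c
    return (a + b) / 2

# The positive root of x^2 - 2, i.e. sqrt(2).
r = bisect(lambda x: x * x - 2, 0.0, 2.0)
```

Each pass keeps whichever half-interval still has a sign change at its endpoints, so the bracket is preserved and the error bound halves every iteration.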
The regula falsi (false position) method
The convergence rate of the bisection method could possibly be improved by using a different solution estimate.
The regula falsi method calculates the new solution estimate as the x-intercept of the line segment joining the endpoints of the function on the current bracketing interval. Essentially, the root is being approximated by replacing the actual function by a line segment on the bracketing interval and then using the classical double false position formula on that line segment.
More precisely, suppose that in the k-th iteration the bracketing interval is (a_k, b_k). Construct the line through the points (a_k, f(a_k)) and (b_k, f(b_k)), as illustrated. This line is a secant or chord of the graph of the function f. In point-slope form, its equation is given by

    y - f(b_k) = (f(b_k) - f(a_k)) / (b_k - a_k) * (x - b_k).

Now choose c_k to be the x-intercept of this line, that is, the value of x for which y = 0, and substitute these values to obtain

    f(b_k) + (f(b_k) - f(a_k)) / (b_k - a_k) * (c_k - b_k) = 0.

Solving this equation for c_k gives:

    c_k = b_k - f(b_k) * (b_k - a_k) / (f(b_k) - f(a_k)) = (a_k f(b_k) - b_k f(a_k)) / (f(b_k) - f(a_k)).

This last symmetrical form has a computational advantage:
As a solution is approached, a_k and b_k will be very close together, and nearly always of the same sign. Such a subtraction can lose significant digits. Because f(a_k) and f(b_k) are always of opposite sign, the "subtraction" in the numerator of the improved formula is effectively an addition (as is the subtraction in the denominator too).
At iteration number k, the number c_k is calculated as above and then, if f(a_k) and f(c_k) have the same sign, set a_{k+1} = c_k and b_{k+1} = b_k; otherwise set a_{k+1} = a_k and b_{k+1} = c_k. This process is repeated until the root is approximated sufficiently well.
The above formula is also used in the secant method, but the secant method always retains the last two computed points, and so, while it is slightly faster, it does not preserve bracketing and may not converge.
The fact that regula falsi always converges, and has versions that do well at avoiding slowdowns, makes it a good choice when speed is needed. However, its rate of convergence can drop below that of the bisection method.
Analysis
Since the initial end-points a_0 and b_0 are chosen such that f(a_0) and f(b_0) are of opposite signs, at each step, one of the end-points will get closer to a root of f. If the second derivative of f is of constant sign (so there is no inflection point) in the interval, then one endpoint (the one where f also has the same sign as the second derivative) will remain fixed for all subsequent iterations while the converging endpoint becomes updated. As a result, unlike the bisection method, the width of the bracket does not tend to zero (unless the zero is at an inflection point around which sign(f) = -sign(f'')). As a consequence, the linear approximation to f, which is used to pick the false position, does not improve as rapidly as possible.
One example of this phenomenon is the function

    f(x) = 2x^3 - 4x^2 + 3x

on the initial bracket [-1, 1]. The left end, -1, is never replaced (it does not change at first, and after the first three iterations, f'' is negative on the interval) and thus the width of the bracket never falls below 1. Hence, the right endpoint approaches 0 at a linear rate (the number of accurate digits grows linearly, with a rate of convergence of 2/3).
For discontinuous functions, this method can only be expected to find a point where the function changes sign (for example at x = 0 for 1/x or the sign function). In addition to sign changes, it is also possible for the method to converge to a point where the limit of the function is zero, even if the function is undefined (or has another value) at that point.
It is mathematically possible with discontinuous functions for the method to fail to converge to a zero limit or sign change, but this is not a problem in practice, since it would require an infinite sequence of coincidences for both endpoints to get stuck converging to discontinuities where the sign does not change.
The method of bisection avoids this hypothetical convergence problem.
Improvements in regula falsi
Though regula falsi always converges, usually considerably faster than bisection, there are situations that can slow its convergence – sometimes to a prohibitive degree. That problem isn't unique to regula falsi: Other than bisection, all of the numerical equation-solving methods can have a slow-convergence or no-convergence problem under some conditions. Sometimes, Newton's method and the secant method diverge instead of converging – and often do so under the same conditions that slow regula falsi's convergence.
Still, regula falsi is one of the best methods, and even in its original un-improved version it would often be the best choice; for example, when Newton's method isn't used because the derivative is prohibitively time-consuming to evaluate, or when Newton's method and successive substitution have failed to converge.
Regula falsi's slowdown mode is easy to detect: the same end-point is retained twice in a row. The problem is easily remedied by picking instead a modified false position, chosen to avoid slowdowns due to those relatively unusual unfavorable situations. A number of such improvements to regula falsi have been proposed; two of them, the Illinois algorithm and the Anderson–Björck algorithm, are described below.
The Illinois algorithm
The Illinois algorithm halves the y-value of the retained end point in the next estimate computation when the new y-value (that is, f(c_k)) has the same sign as the previous one (f(c_{k-1})), meaning that the end point of the previous step will be retained. Hence:

    c_{k+1} = (1/2 f(a_k) b_k - f(b_k) a_k) / (1/2 f(a_k) - f(b_k))

or

    c_{k+1} = (f(a_k) b_k - 1/2 f(b_k) a_k) / (f(a_k) - 1/2 f(b_k)),

down-weighting one of the endpoint values to force the next c_k to occur on that side of the function. The factor 1/2 used above looks arbitrary, but it guarantees superlinear convergence (asymptotically, the algorithm will perform two regular steps after any modified step, and has order of convergence 1.442). There are other ways to pick the rescaling which give even better superlinear convergence rates.
The above adjustment to regula falsi is called the Illinois algorithm by some scholars. Ford (1995) summarizes and analyzes this and other similar superlinear variants of the method of false position.
Anderson–Björck algorithm
Suppose that in the k-th iteration the bracketing interval is [a_k, b_k] and that the functional value of the new calculated estimate c_k has the same sign as f(b_k). In this case, the new bracketing interval is [a_{k+1}, b_{k+1}] = [a_k, c_k] and the left-hand endpoint has been retained.

(So far, that's the same as ordinary regula falsi and the Illinois algorithm.)

But, whereas the Illinois algorithm would multiply f(a_k) by 1/2, the Anderson–Björck algorithm multiplies it by m, where m has one of the two following values:

    m = 1 - f(c_k) / f(b_k)   if that value is positive,
    m = 1/2                   otherwise.
For simple roots, Anderson–Björck performs very well in practice.
ITP method
Given k1 > 0, k2 in [1, 1 + phi) and n0 >= 0, where phi is the golden ratio (1 + sqrt(5))/2, in each iteration the ITP method calculates the point x_ITP following three steps:

[Interpolation Step] Calculate the bisection and the regula falsi points: x_1/2 = (a + b)/2 and x_f = (f(b) a - f(a) b) / (f(b) - f(a));

[Truncation Step] Perturb the estimator towards the center: x_t = x_f + sigma * delta, where sigma = sign(x_1/2 - x_f) and delta = min(k1 |b - a|^k2, |x_1/2 - x_f|);

[Projection Step] Project the estimator to the minmax interval: x_ITP = x_t if |x_t - x_1/2| <= rho_k, and x_ITP = x_1/2 - sigma * rho_k otherwise, where rho_k = eps * 2^(n_1/2 + n0 - k) - (b - a)/2 and n_1/2 = ceil(log2((b_0 - a_0)/(2 eps))).
The value of the function at this point is queried, and the interval is then reduced to bracket the root by keeping the sub-interval with function values of opposite sign on each end. This three-step procedure guarantees that the estimate enjoys the minmax properties of the bisection method as well as the superlinear convergence of the secant method, and it is observed to outperform both bisection and interpolation-based methods on smooth and non-smooth functions.
Practical considerations
When solving one equation, or just a few, using a computer, the bisection method is an adequate choice. Although bisection isn't as fast as the other methods—when they're at their best and don't have a problem—bisection nevertheless is guaranteed to converge at a useful rate, roughly halving the error with each iteration – gaining roughly a decimal place of accuracy with every 3 iterations.
For manual calculation by calculator, one tends to want to use faster methods, and they usually, but not always, converge faster than bisection. But a computer, even using bisection, will solve an equation to the desired accuracy so rapidly that there's no need to try to save time by using a less reliable method; and every method is less reliable than bisection.
An exception would be if the computer program had to solve equations very many times during its run. Then the time saved by the faster methods could be significant.
Then, a program could start with Newton's method, and, if Newton's isn't converging, switch to regula falsi, maybe in one of its improved versions, such as the Illinois or Anderson–Björck versions. Or, if even that isn't converging as well as bisection would, switch to bisection, which always converges at a useful, if not spectacular, rate.
When the change in has become very small, and is also changing very little, then Newton's method most likely will not run into trouble, and will converge. So, under those favorable conditions, one could switch to Newton's method if one wanted the error to be very small and wanted very fast convergence.
Example: Growth of a bulrush
In chapter 7 of The Nine Chapters, a root finding problem can be translated to modern language as follows:
Excess And Deficit Problem #11:
A bulrush grew 3 units on its first day. At the end of each day, the plant is observed to have grown by 1/2 of the previous day's growth.
A club-rush grew 1 unit on its first day. At the end of each day, the plant has grown by 2 times as much as the previous day's growth.
Find the time [in fractional days] that the club-rush becomes as tall as the bulrush.
Answer: 2 6/13 days; the height is 4 11/13 units.
Explanation:
Suppose it is day 2. The club-rush is shorter than the bulrush by 1.5 units.
Suppose it is day 3. The club-rush is taller than the bulrush by 1.75 units. ∎
To understand this, we shall model the heights of the plants on day n (n = 1, 2, 3, ...) after a geometric series.

Bulrush

    3 + 3/2 + 3/4 + ... + 3/2^(n-1)

Club-rush

    1 + 2 + 4 + ... + 2^(n-1)

For the sake of better notation, treat the day number n as a continuous variable x. Rewriting the plant height series in terms of x and invoking the geometric sum formula gives heights of 6(1 - 2^(-x)) for the bulrush and 2^x - 1 for the club-rush.
Now, use regula falsi to find the root of

    f(x) = (2^x - 1) - 6(1 - 2^(-x)) = 2^x + 6 * 2^(-x) - 7.

Set x_1 = 2 and compute f(x_1), which equals -3/2 (the "deficit").

Set x_2 = 3 and compute f(x_2), which equals 7/4 (the "excess").

Estimated root (1st iteration):

    c_1 = (x_1 f(x_2) - x_2 f(x_1)) / (f(x_2) - f(x_1)) = (2 * 7/4 + 3 * 3/2) / (7/4 + 3/2) = 32/13 = 2 6/13 days.
Example code
This example program, written in the C programming language, is an example of the Illinois algorithm.

To find the positive number x where cos(x) = x^3, the equation is transformed into the root-finding form f(x) = cos(x) - x^3 = 0.
#include <stdio.h>
#include <math.h>

double f(double x) {
    return cos(x) - x*x*x;
}

/* a,b: endpoints of an interval where we search
   e:   half of upper bound for relative error
   m:   maximal number of iterations
*/
double FalsiMethod(double (*f)(double), double a, double b, double e, int m) {
    double c, fc;
    int n, side = 0;
    /* starting values at endpoints of interval */
    double fa = f(a);
    double fb = f(b);

    for (n = 0; n < m; n++) {
        c = (fa * b - fb * a) / (fa - fb);
        if (fabs(b - a) < e * fabs(b + a))
            break;
        fc = f(c);

        if (fc * fb > 0) {
            /* fc and fb have same sign, copy c to b */
            b = c; fb = fc;
            if (side == -1)
                fa /= 2;
            side = -1;
        } else if (fa * fc > 0) {
            /* fc and fa have same sign, copy c to a */
            a = c; fa = fc;
            if (side == +1)
                fb /= 2;
            side = +1;
        } else {
            /* fa * fc and fb * fc are non-positive: f(c) looks like zero */
            break;
        }
    }
    return c;
}

int main(void) {
    printf("%0.15f\n", FalsiMethod(&f, 0, 1, 5E-15, 100));
    return 0;
}
After running this code, the final answer is approximately
0.865474033101614.
Agrobacterium tumefaciens

Agrobacterium tumefaciens (also known as Rhizobium radiobacter) is the causal agent of crown gall disease (the formation of tumours) in over 140 species of eudicots. It is a rod-shaped, Gram-negative soil bacterium. Symptoms are caused by the insertion of a small segment of DNA (known as T-DNA, for 'transfer DNA', not to be confused with tRNA that transfers amino acids during protein synthesis), from a plasmid into the plant cell, which is incorporated at a semi-random location into the plant genome. Plant genomes can be engineered by use of Agrobacterium for the delivery of sequences hosted in T-DNA binary vectors.
Agrobacterium tumefaciens is an Alphaproteobacterium of the family Rhizobiaceae, which includes the nitrogen-fixing legume symbionts. Unlike the nitrogen-fixing symbionts, tumor-producing Agrobacterium species are pathogenic and do not benefit the plant. The wide variety of plants affected by Agrobacterium makes it of great concern to the agriculture industry.
Economically, A. tumefaciens is a serious pathogen of walnuts, grape vines, stone fruits, nut trees, sugar beets, horse radish, and rhubarb, and the persistent nature of the tumors or galls caused by the disease make it particularly harmful for perennial crops.
Agrobacterium tumefaciens grows optimally at 28 °C. The doubling time can range from 2.5 to 4 h depending on the media, culture format, and level of aeration. At temperatures above 30 °C, A. tumefaciens begins to experience heat shock, which is likely to result in errors in cell division.
Conjugation
To be virulent, the bacterium contains a tumour-inducing plasmid (Ti plasmid or pTi) 200 kbp long, which contains the T-DNA and all the genes necessary to transfer it to the plant cell. Many strains of A. tumefaciens do not contain a pTi.
Since the Ti plasmid is essential to cause disease, prepenetration events in the rhizosphere occur to promote bacterial conjugation - exchange of plasmids amongst bacteria. In the presence of opines, A. tumefaciens produces a diffusible conjugation signal called N-(3-oxo-octanoyl)-L-homoserine lactone (3OC8HSL) or the Agrobacterium autoinducer. This activates the transcription factor TraR, positively regulating the transcription of genes required for conjugation.
Infection methods
Agrobacterium tumefaciens infects the plant through its Ti plasmid. The Ti plasmid integrates a segment of its DNA, known as T-DNA, into the chromosomal DNA of its host plant cells. A. tumefaciens has flagella that allow it to swim through the soil towards photoassimilates that accumulate in the rhizosphere around roots. Some strains may chemotactically move towards chemical exudates from plants, such as acetosyringone and sugars, which indicate the presence of a wound in the plant through which the bacteria may enter. Phenolic compounds are recognised by the VirA protein, a transmembrane protein encoded in the virA gene on the Ti plasmid. Sugars are recognised by the chvE protein, a chromosomal gene-encoded protein located in the periplasmic space.
At least 25 vir genes on the Ti plasmid are necessary for tumor induction. In addition to their perception role, virA and chvE induce other vir genes. The VirA protein has autokinase activity: it phosphorylates itself on a histidine residue. Then the VirA protein phosphorylates the VirG protein on its aspartate residue. The virG protein is a cytoplasmic protein produced from the virG Ti plasmid gene. It is a transcription factor, inducing the transcription of the vir operons. The ChvE protein regulates the second mechanism of the vir genes' activation. It increases VirA protein sensitivity to phenolic compounds.
Attachment is a two-step process. Following an initial weak and reversible attachment, the bacteria synthesize cellulose fibrils that anchor them to the wounded plant cell to which they were attracted. Four main genes are involved in this process: chvA, chvB, pscA, and att. The products of the first three genes apparently are involved in the actual synthesis of the cellulose fibrils. These fibrils also anchor the bacteria to each other, helping to form a microcolony.
VirC, the most important virulence protein, is necessary for the recombination step of illegitimate recombination. It selects the section of the DNA in the host plant that will be replaced, and it cuts into this strand of DNA.
After production of cellulose fibrils, a calcium-dependent outer membrane protein called rhicadhesin is produced, which also aids in sticking the bacteria to the cell wall. Homologues of this protein can be found in other rhizobia. Currently, there are several reports on standardisation of protocol for the Agrobacterium-mediated transformation. The effect of different parameters such as infection time, acetosyringone, DTT, and cysteine have been studied in soybean (Glycine max).
Possible plant compounds that initiate Agrobacterium to infect plant cells:
Acetosyringone and other phenolic compounds
alpha-Hydroxyacetosyringone
Catechol
Ferulic acid
Gallic acid
p-Hydroxybenzoic acid
Protocatechuic acid
Pyrogallic acid
Resorcylic acid
Sinapinic acid
Syringic acid
Vanillin
Formation of the T-pilus
To transfer T-DNA into a plant cell, A. tumefaciens uses a type IV secretion mechanism, involving the production of a T-pilus. When acetosyringone and other substances are detected, a signal transduction event activates the expression of 11 genes within the VirB operon which are responsible for the formation of the T-pilus.
The pro-pilin is formed first. This is a polypeptide of 121 amino acids which requires processing by the removal of 47 residues to form a T-pilus subunit. The subunit was thought to be circularized by the formation of a peptide bond between the two ends of the polypeptide. However, high-resolution structure of the T-pilus revealed no cyclization of the pilin, with the overall organization of the pilin subunits being highly similar to those of other conjugative pili, such as F-pilus.
Products of the other VirB genes are used to transfer the subunits across the plasma membrane. Yeast two-hybrid studies provide evidence that VirB6, VirB7, VirB8, VirB9 and VirB10 may all encode components of the transporter. An ATPase for the active transport of the subunits would also be required.
Transfer of T-DNA into the plant cell
The T-DNA must be cut out of the circular plasmid. This is typically done by the Vir genes within the helper plasmid. A VirD1/D2 complex nicks the DNA at the left and right border sequences. The VirD2 protein is covalently attached to the 5' end. VirD2 contains a motif that leads to the nucleoprotein complex being targeted to the type IV secretion system (T4SS). The structure of the T-pilus showed that the central channel of the pilus is too narrow to allow the transfer of the folded VirD2, suggesting that VirD2 must be partially unfolded during the conjugation process.
In the cytoplasm of the recipient cell, the T-DNA complex becomes coated with VirE2 proteins, which are exported through the T4SS independently from the T-DNA complex.
Nuclear localization signals, or NLSs, located on the VirE2 and VirD2, are recognised by the importin alpha protein, which then associates with importin beta and the nuclear pore complex to transfer the T-DNA into the nucleus. VIP1 also appears to be an important protein in the process, possibly acting as an adapter to bring the VirE2 to the importin. Once inside the nucleus, VIP2 may target the T-DNA to areas of chromatin that are being actively transcribed, so that the T-DNA can integrate into the host genome.
Genes in the T-DNA
Hormones
To cause gall formation, the T-DNA encodes genes for the production of auxin or indole-3-acetic acid via the IAM pathway. This biosynthetic pathway is not used in many plants for the production of auxin, so it means the plant has no molecular means of regulating it and auxin will be produced constitutively. Genes for the production of cytokinins are also expressed. This stimulates cell proliferation and gall formation.
Opines
The T-DNA contains genes for encoding enzymes that cause the plant to create specialized amino acid derivatives which the bacteria can metabolize, called opines. Opines are a class of chemicals that serve as a source of nitrogen for A. tumefaciens, but not for most other organisms. The specific type of opine produced by A. tumefaciens C58 infected plants is nopaline.
Two nopaline type Ti plasmids, pTi-SAKURA and pTiC58, were fully sequenced. "A. fabrum" C58, the first fully sequenced pathovar, was first isolated from a cherry tree crown gall. The genome was simultaneously sequenced by Goodner et al. and Wood et al. in 2001. The genome of A. tumefaciens C58 consists of a circular chromosome, two plasmids, and a linear chromosome. The presence of a covalently bonded circular chromosome is common to Bacteria, with few exceptions. However, the presence of both a single circular chromosome and single linear chromosome is unique to a group in this genus. The two plasmids are pTiC58, responsible for the processes involved in virulence, and pAtC58, once dubbed the "cryptic" plasmid.
The pAtC58 plasmid has been shown to be involved in the metabolism of opines and to conjugate with other bacteria in the absence of the pTiC58 plasmid. If the Ti plasmid is removed, the tumor growth that is the means of classifying this species of bacteria does not occur.
Biotechnological uses
The Asilomar Conference in 1975 established widespread agreement that recombinant techniques were insufficiently understood and needed to be tightly controlled. The DNA transmission capabilities of Agrobacterium have been vastly explored in biotechnology as a means of inserting foreign genes into plants. Shortly after the Asilomar Conference, Marc Van Montagu and Jeff Schell discovered the gene transfer mechanism between Agrobacterium and plants, which resulted in the development of methods to alter the bacterium into an efficient delivery system for genetic engineering in plants. The plasmid T-DNA that is transferred to the plant is an ideal vehicle for genetic engineering. This is done by cloning a desired gene sequence into T-DNA binary vectors that will be used to deliver a sequence of interest into eukaryotic cells. This process has been performed using the firefly luciferase gene to produce glowing plants. This luminescence has been a useful device in the study of plant chloroplast function and as a reporter gene. It is also possible to transform Arabidopsis thaliana by dipping flowers into a broth of Agrobacterium: the seed produced will be transgenic. Under laboratory conditions, T-DNA has also been transferred to human cells, demonstrating the diversity of insertion application.
The mechanism by which Agrobacterium inserts materials into the host cell is by a type IV secretion system which is very similar to mechanisms used by pathogens to insert materials (usually proteins) into human cells by type III secretion. It also employs a type of signaling conserved in many Gram-negative bacteria called quorum sensing. This makes Agrobacterium an important topic of medical research, as well.
Natural genetic transformation
Natural genetic transformation in bacteria is a sexual process involving the transfer of DNA from one cell to another through the intervening medium, and the integration of the donor sequence into the recipient genome by homologous recombination. A. tumefaciens can undergo natural transformation in soil without any specific physical or chemical treatment.
Disease cycle
Agrobacterium tumefaciens overwinters in infested soils. Agrobacterium species live predominantly saprophytic lifestyles, so it's common even for plant-parasitic species of this genus to survive in the soil for lengthy periods of time, even without host plant presence. When there is a host plant present, however, the bacteria enter the plant tissue via recent wounds or natural openings of roots or stems near the ground. These wounds may be caused by cultural practices, grafting, insects, etc. Once the bacteria have entered the plant, they occur intercellularly and stimulate surrounding tissue to proliferate due to cell transformation. Agrobacterium performs this control by inserting the plasmid T-DNA into the plant's genome. See above for more details about the process of plasmid DNA insertion into the host genome. Excess growth of the plant tissue leads to gall formation on the stem and roots. These tumors exert significant pressure on the surrounding plant tissue, which causes this tissue to become crushed and/or distorted. The crushed vessels lead to reduced water flow in the xylem. Young tumors are soft and therefore vulnerable to secondary invasion by insects and saprophytic microorganisms. This secondary invasion causes the breakdown of the peripheral cell layers as well as tumor discoloration due to decay. Breakdown of the soft tissue leads to release of the Agrobacterium tumefaciens into the soil, allowing it to restart the disease process with a new host plant.
Disease management
Crown gall disease caused by Agrobacterium tumefaciens can be controlled by using various methods. The best way to control this disease is to take preventative measures, such as sterilizing pruning tools, so as to avoid infecting new plants. Performing mandatory inspections of nursery stock and rejecting infected plants, as well as not planting susceptible plants in infected fields, are also valuable practices. Avoiding wounding the crowns/roots of the plants during cultivation is important for preventing disease. Horticultural techniques in which multiple plants are joined to grow as one, such as budding and grafting, lead to plant wounds. Wounds are the primary location of bacterial entry into the host plant. Therefore, it is advisable to perform these techniques during times of the year when Agrobacteria are not active. Control of root-chewing insects is also helpful to reduce levels of infection, since these insects cause wounds (aka bacterial entryways) in the plant roots. It is recommended that infected plant material be burned rather than placed in a compost pile due to the bacteria's ability to live in the soil for many years.
Biological control methods are also utilized in managing this disease. During the 1970s and 1980s, a common practice for treating germinated seeds, seedlings, and rootstock was to soak them in a suspension of K84. K84 is a strain of Rhizobium rhizogenes (formerly classified under A. radiobacter, but later reclassified) which is a species related to A. tumefaciens but is not pathogenic. K84 produces a bacteriocin (agrocin 84) which is an antibiotic specific against related bacteria, including A. tumefaciens. This method, which was successful at controlling the disease on a commercial scale, had the risk of K84 transferring its resistance gene to the pathogenic Agrobacteria. Thus, in the 1990s, a deletion mutant strain based on K84, known as K1026, was created. This strain is just as successful in controlling crown gall as K84 without the caveat of resistance gene transfer.
Environment
Host, environment, and pathogen are extremely important concepts in regards to plant pathology. Agrobacteria have the widest host range of any plant pathogen, so the main factor to take into consideration in the case of crown gall is environment. There are various conditions and factors that make for a conducive environment for A. tumefaciens when infecting its various hosts. The bacterium can't penetrate the host plant without an entry point such as a wound. Factors leading to wounds in plants include cultural practices, grafting, freezing injury, growth cracks, soil insects, and other animals in the environment causing damage to the plant. Consequently, in exceptionally harsh winters, it is common to have an increased incidence of crown gall due to the weather-related damage. Along with this, there are methods of mediating infection of the host plant. For example, nematodes can act as a vector to introduce Agrobacterium into plant roots. More specifically, the root parasitic nematodes damage the plant cell, creating a wound for the bacteria to enter through. Finally, temperature is a factor when considering A. tumefaciens infection. The optimal temperature for crown gall formation due to this bacterium is 22 °C because of the thermosensitivity of T-DNA transfer. Tumor formation is significantly reduced at higher temperature conditions.
Dryland farming

Dryland farming and dry farming encompass specific agricultural techniques for the non-irrigated cultivation of crops. Dryland farming is associated with drylands, areas characterized by a cool wet season (which charges the soil with virtually all the moisture that the crops will receive prior to harvest) followed by a warm dry season. They are also associated with arid conditions, areas prone to drought and those having scarce water resources.
Process
Dryland farming has evolved as a set of techniques and management practices to adapt to limited availability of water, as in the Western US and other regions affected by climate change for crops such as tomato and maize.
In marginal regions, a farmer should be financially able to survive occasional crop failures, perhaps for several years in succession. Survival as a dryland farmer requires careful husbandry of the moisture available for the crop and aggressive management of expenses to minimize losses in poor years. Dryland farming involves the constant assessing of the amount of moisture present or lacking for any given crop cycle and planning accordingly. Dryland farmers know that to be financially successful they have to be aggressive during the good years in order to offset the dry years.
Dryland farming is dependent on natural rainfall, which can leave the ground vulnerable to dust storms, particularly if poor farming techniques are used or if the storms strike at a particularly vulnerable time. The fact that a fallow period must be included in the crop rotation means that fields cannot always be protected by a cover crop, which might otherwise offer protection against erosion.
Some of the theories of dryland farming developed in the late 19th and early 20th centuries claimed to be scientific but were in reality pseudoscientific and did not stand up to empirical testing. For example, it was alleged that tillage would seal in moisture, but such "dust mulching" ideas are based on what people imagine should happen, or have been told, rather than what testing actually confirms. In actuality, it has been shown that tillage increases water losses to evaporation. The book Bad Land: An American Romance explores the effects that this had on people who were encouraged to homestead in an area with little rainfall; most smallholders failed after working miserably to cling on.
Dry farming depends on making the best use of the "bank" of soil moisture that was created by winter rainfall. Some dry farming practices include:
Wider than normal spacing, to provide a larger bank of moisture for each plant.
Controlled traffic.
Minimal tilling of land.
Strict weed control, to ensure that weeds do not consume soil moisture needed by the cultivated plants.
Cultivation of soil to produce a "dust mulch", thought to prevent the loss of water through capillary action. This practice is controversial, and is not universally advocated.
Selection of crops and cultivars suited for dry farming practices.
Locations
Dry farming may be practiced in areas that have significant annual rainfall during a wet season, often in the winter. Crops are cultivated during the subsequent dry season, using practices that make use of the stored moisture in the soil. California, Colorado, Kansas, South Dakota, North Dakota, Montana, Nebraska, Oklahoma, Oregon, Washington, and Wyoming, in the United States, are a few states where dry farming is practiced for a variety of crops.
Dryland farming is used in the Great Plains, the Palouse plateau of Eastern Washington, and other arid regions of North America such as in the Southwestern United States and Mexico (see Agriculture in the Southwestern United States and Agriculture in the prehistoric Southwest), the Middle East and in other grain growing regions such as the steppes of Eurasia and Argentina. Dryland farming was introduced to southern Russia and Ukraine by Ukrainian Mennonites under the influence of Johann Cornies, making the region the breadbasket of Europe. In Australia, it is widely practiced in all states but the Northern Territory.
Crops
The choice of crop is influenced by the timing of the predominant rainfall in relation to the seasons. For example, winter wheat is more suited to regions with higher winter rainfall while areas with summer wet seasons may be more suited to summer growing crops such as sorghum, sunflowers or cotton. Dry farmed crops may include grapes, tomatoes, pumpkins, beans, and other summer crops.
Dryland grain crops include wheat, corn, millet, rye, and other grasses that produce grains. These crops grow using the winter water stored in the soil, rather than depending on rainfall during the growing season.
Successful dryland farming is possible with as little as of precipitation a year; higher rainfall increases the variety of crops.
Other considerations
Capturing and conservation of moisture: In regions such as Eastern Washington, the average annual precipitation available to a dryland farm may be as little as . In the Horse Heaven Hills in central Washington, wheat farming has been productive purportedly on an average annual rainfall approaching 6 inches. Consequently, moisture must be captured until the crop can utilize it. Techniques include summer fallow rotation (in which one crop is grown on two seasons' precipitation, leaving standing stubble and crop residue to trap snow), and preventing runoff by terracing fields. "Terracing" is also practiced by farmers on a smaller scale by laying out the direction of furrows to slow water runoff downhill, usually by plowing along either contours or keylines. Moisture can be conserved by eliminating weeds and leaving crop residue to shade the soil.
Effective use of available moisture: Once moisture is available for the crop to use, it must be used as effectively as possible. Seed planting depth and timing are carefully considered to place the seed at a depth at which sufficient moisture exists, or where it will exist when seasonal precipitation falls. Farmers tend to use crop varieties which are drought-tolerant and heat-stress tolerant (even lower-yielding varieties). Thus the likelihood of a successful crop is hedged if seasonal precipitation fails.
Soil conservation: The nature of dryland farming makes it particularly susceptible to erosion, especially wind erosion. Some techniques for conserving soil moisture (such as frequent tillage to kill weeds) are at odds with techniques for conserving topsoil. Since healthy topsoil is critical to sustainable agriculture, in particular within arid areas, its preservation is generally considered the most important long-term goal of a dryland farming operation. Erosion control techniques such as windbreaks, reduced tillage or no-till, spreading straw (or other mulch on particularly susceptible ground), and strip farming are used to minimize topsoil loss.
Weedling: Weedling is the process of turning weeds over 90 degrees during tillage, exposing their roots, to prevent soil erosion by wind and desertification. At the same time, direct absorption of nutrients and moisture from the weeds provides a suitable environment for the biodiversity of soil organisms to flourish.
Control of input costs: Dryland farming is practiced in regions inherently marginal for non-irrigated agriculture. Because of this, there is an increased risk of crop failure and poor yields which may occur in a dry year (regardless of money or effort expended). Dryland farmers must evaluate the potential yield of a crop constantly throughout the growing season and be prepared to decrease inputs to the crop such as fertilizer and weed control if it appears that it is likely to have a poor yield due to insufficient moisture. Conversely, in years when moisture is abundant, farmers may increase their input efforts and budget to maximize yields and to offset poor harvests.
Arid-zone agriculture
As an area of research and development, arid-zone agriculture, or desert agriculture, includes studies of how to increase the agricultural productivity of lands dominated by lack of freshwater, an abundance of heat and sunlight, and usually one or more of: Extreme winter cold, short rainy season, saline soil or water, strong dry winds, poor soil structure, over-grazing, limited technological development, poverty, or political instability.
The two basic approaches are:
View the given environmental and socioeconomic characteristics as negative obstacles to be overcome.
View as many as possible of them as positive resources to be used.
Pterophyllum
Pterophyllum is a small genus of freshwater fish from the family Cichlidae known to most aquarists as angelfish. All Pterophyllum species originate from the Amazon Basin, Orinoco Basin and various rivers in the Guiana Shield in tropical South America. The three species of Pterophyllum are unusually shaped for cichlids, being greatly laterally compressed, with round bodies and elongated triangular dorsal and anal fins. This body shape allows them to hide among roots and plants, often on a vertical surface. Naturally occurring angelfish are frequently striped transversely, colouration which provides additional camouflage. Angelfish are ambush predators and prey on small fish and macroinvertebrates. All Pterophyllum species form monogamous pairs. Eggs are generally laid on a submerged log or a flattened leaf. As is the case for other cichlids, brood care is highly developed.
Pterophyllum should not be confused with marine angelfish, perciform fish found on shallow ocean reefs.
Species
The currently recognized species in this genus are:
History
The freshwater angelfish (P. scalare) was described in 1824 by F. Schultze. Pterophyllum is derived from the Greek πτερον, pteron (fin/sail) and φυλλον, phyllon (leaf).
In 1906, J. Pellegrin described P. altum. In 1963, P. leopoldi was described by J. P. Gosse. Undescribed species may still exist in the Amazon Basin. New species of fish are discovered with increasing frequency, and, like P. scalare and P. leopoldi, the differences may be subtle. Scientific notations describe the P. leopoldi as having 29–35 scales in a lateral row and straight predorsal contour, whereas, P. scalare is described as having 35–45 scales in a lateral row and a notched predorsal contour. P. leopoldi shows the same coloration as P. scalare, but a faint stripe shows between the eye stripe and the first complete body stripe and a third incomplete body stripe exists between the two main (complete) body stripes that extends three-fourths the length of the body. P. scalare's body does not show the stripe between the eye stripe and first complete body stripe at all, and the third stripe between the two main body stripes rarely extends downward more than a half inch, if even present. P. leopoldi fry develop three to eight body stripes, with all but one to five fading away as they mature, whereas P. scalare only has two in true wild form throughout life.
Angelfish were bred in captivity for at least 30 years prior to P. leopoldi being described.
In the aquarium
Angelfish are one of the most commonly kept freshwater aquarium fish, as well as the most commonly kept cichlid. They are praised for their unique shape, color, and behavior. It was not until the late 1920s to early 1930s that the angelfish was bred in captivity in the United States.
Species
The most commonly kept species in the aquarium is Pterophyllum scalare. Most of the individuals in the aquarium trade are captive-bred. Sometimes, captive-bred Pterophyllum altum is available. Pterophyllum leopoldi is the hardest to find in the trade.
Care
Angelfish are kept in a warm aquarium, ideally around 80 °F (27 °C), with soft, acidic (pH < 6.5) water. Though angelfish are members of the cichlid family, they are generally peaceful when not mating; however, they may still feed on very small species of fish. Suitable tank mates include catfishes of the families Doradidae and Callichthyidae, which have their own armor for protection.
Breeding
P. scalare is relatively easy to breed in the aquarium, although one of the results of generations of inbreeding is that many breeds have almost completely lost their rearing instincts, resulting in the tendency of the parents to eat their young. In addition, it is very difficult to accurately identify the sex of any individual until it is nearly ready to breed.
Angelfish pairs form long-term relationships where each individual will protect the other from threats and potential suitors. Upon the death or removal of one of the mated pair, breeders have experienced the total refusal of the remaining mate to pair up with any other angelfish and successfully breed with subsequent mates.
Depending upon aquarium conditions, P. scalare reaches sexual maturity at the age of six to 12 months or more. In situations where the eggs are removed from the aquarium immediately after spawning, the pair is capable of spawning every seven to 10 days. Around the age of three years, spawning frequency decreases and eventually ceases.
When the pair is ready to spawn, they choose an appropriate medium upon which to lay the eggs, and spend one or two days picking off detritus and algae from the surface. This medium may be a broad-leaf plant in the aquarium, a flat surface such as a piece of slate placed vertically in the aquarium, a length of pipe, or even the glass sides of the aquarium. The female deposits a line of eggs on the spawning substrate, followed by the male, which fertilizes the eggs. This process is repeated until a total of 100 to more than 1,200 eggs are laid, depending on the size and health of the female fish. As both parents care for the offspring throughout development, the pair takes turns maintaining a high rate of water circulation around the eggs by swimming very close to the eggs and fanning them with their pectoral fins. In a few days, the eggs hatch and the fry remain attached to the spawning substrate. During this period, the fry survive by consuming the remnants of their yolk sacs. At one week, the fry detach and become free-swimming. Successful parents keep close watch on the eggs until then. At the free-swimming stage, the fry can be fed suitably sized live food.
P. altum is notably difficult to breed in an aquarium environment.
Lifespan
Freshwater Angelfish with quality genetics are known to live approximately 12 years in captivity, if the ideal living conditions are provided. In the wild they are thought to live as long as 15 years if unthreatened by their numerous natural predators.
Compatibility with other fish
In pet stores, the freshwater angelfish is typically placed in the semiaggressive category. Some tetras and barbs are compatible with angelfish, but ones small enough to fit in the mouth of the angelfish may be eaten. Generous portions of food should be available so the angelfish do not get hungry and turn on their tank mates.
P. scalare and P. altum are described to be peaceful but territorial. While freshwater angelfish are often recommended for community aquaria, it has been reported that fin-nippers such as Tiger barb often target their long fins, and that freshwater angelfish become aggressive towards their companions as they grow. It is thus recommended that freshwater angelfish be kept instead in single-species aquaria.
Common angelfish diseases
Ich (White Spot Disease)
Ich, also known as "White Spot Disease," is caused by the parasitic protozoan Ichthyophthirius multifiliis. Fish infected with Ich exhibit small, white, grain-like spots on their body, fins, and gills. These spots are cysts where the parasites reside. Infected fish often display signs of discomfort, frequently scratching against objects in the aquarium. The primary cause of an Ich outbreak is usually stress, which can result from factors such as poor water quality, sudden temperature changes, or the introduction of new fish without proper quarantine. To treat Ich, increasing the aquarium's temperature gradually to 78–86 °F (25–30 °C) for a few days can speed up the parasite's life cycle. Simultaneously, using commercially available Ich treatments, based on copper or formalin, can be effective in eradicating the disease.
Fin Rot
Fin Rot is a common bacterial infection affecting the fins of aquarium fish. It is characterized by the fraying, discoloration, and gradual degradation of the fish's fins, giving them a ragged appearance. If left untreated, the condition can progress from the fins to the body, leading to a more severe form known as body rot. The primary causes of fin rot are poor water quality, overcrowding, and physical damage, all of which stress the fish and make them more susceptible to infections. In terms of treatment, the first step is to improve water quality by conducting regular water changes, removing waste, and ensuring proper filtration.
Swim Bladder Disease
Swim Bladder Disease refers to a collection of issues affecting a fish's swim bladder, the organ responsible for buoyancy. Fish afflicted with this condition may struggle to maintain their position in the water, often floating upside-down, sinking to the bottom, or swimming at unusual angles. The causes can be diverse, ranging from overfeeding, constipation, and rapid water temperature changes to physical injury and bacterial infections. To treat swim bladder disease, it's advised to first fast the fish for 24–48 hours, followed by feeding them a diet of cooked, skinned peas, which can help alleviate constipation. If the condition is thought to be due to a bacterial infection, antibiotic treatments can be considered.
Bacterial Infections
Bacterial Infections in aquarium fish can manifest in various ways and are caused by harmful bacteria proliferating within the tank. Symptoms can range from visible ulcers, sores, and red streaks on the fish's body to bloating, erratic swimming, and a loss of appetite. The primary triggers for bacterial infections often include poor water quality, overcrowded tanks, stress, and injuries. Overfeeding, which leads to excess waste, can also contribute to bacterial blooms. To treat bacterial infections, the first step is always to improve water conditions by conducting regular water changes, enhancing filtration, and removing any decaying organic matter. Specialized antibacterial medications, available at pet and aquarium stores, can be administered based on the specific type of bacterial infection. In severe or persistent cases, isolating the affected fish in a quarantine tank during treatment is recommended.
Aquarium varieties
Most strains of angelfish available in the fishkeeping hobby are the result of many decades of selective breeding. For the most part, the original crosses of wild angelfish were not recorded and confusion between the various species of Pterophyllum, especially P. scalare and P. leopoldi, is common. This makes the origins of "domestic angelfish" unclear. Domestic strains are most likely a collection of genes resulting from more than one species of wild angelfish, combined with the selection of mutations in domesticated lines over the last 60 or more years. The result of this is a domestic angelfish that is a true hybrid, with little more than a superficial resemblance to wild Pterophyllum species.
Much of the research into the known genetics of P. scalare is the result of the research of Dr. Joanne Norton, who published a series of 18 articles in Freshwater and Marine Aquarium Magazine. The genome of P. scalare was first sequenced and assembled by Indeever Madireddy, a high school student in October 2022.
Silver (+/+): The silver angelfish most commonly resembles the wild form of angelfish, and is also referred to as "wild-type". It is not, however, caught in the wild and is considered domestic. The fish has a silver body with red eyes, and three vertical black stripes that can fade or darken depending on the mood of the fish.
Gold (g/g): The genetic trait for the gold angelfish is recessive, and causes a light golden body with a darker yellow or orange color on the crown of the fish. It does not have the vertical black stripes or the red eye seen in the wild angelfish.
Zebra (Z/+ or Z/Z): The zebra phenotype results in four to six vertical stripes on the fish that in other ways resembles a silver angelfish. It is a dominant mutation that exists at the same locus as the stripeless gene.
Black lace (D/+) or zebra lace (D/+ - Z/+): A silver or zebra with one copy of the dark gene results in very attractive lacing in the fins, considered by some to be the most attractive of all angelfish varieties.
Smokey (Sm/+): A variety with a dark brownish-grey back half and dark dorsal and anal fins.
Chocolate (Sm/Sm): Homozygous for smokey, with more of the dark pattern; sometimes only the head is silver.
Halfblack (h/h): Silver with a black rear portion, halfblack can express along with some other color genes, but not all. The pattern may not develop or express if the fish are in stressful conditions.
Sunset blushing (g/g S/S): The sunset blushing has two genes of gold and two genes of stripeless. The upper half of the fish exhibits orange on the best specimens. The body is mostly white in color, and the fins are clear. The amount of orange showing on the fish can vary. On some, the body is a pinkish or tangerine color. The term blushing comes from the clear gill plates found on juveniles, with pinkish gills underneath.
Koi (Gm/Gm S/S) or (Gm/g S/S): The koi has a double or single gene of gold marble with a double gene of stripeless. Their expression of orange varies with stress levels. The black marbling varies from 5%-40% coverage.
Leopard (Sm/Sm Z/Z) or (Sm/Sm Z/+): Leopards are very popular fish when young, having spots over most of their bodies. Most of these spots grow closer together as adults, so they look like chocolates with dots.
Blue blushing (S/S): This wild-type angelfish has two stripeless genes. The body is actually grey with a bluish tint under the right light spectrum. An iridescent pigment develops as they age. This iridescence usually appears blue under most lighting.
Silver gold marble (Gm/+): A silver angel with a single gold marble gene, this is a co-dominant expression.
Ghost (S/+): Heterozygous for stripeless results in a mostly silver fish with just a stripe through the eye and tail. Sometimes, portions of the body stripes will express.
Gold marble (Gm/g or Gm/Gm): Depending on whether the Gold Marble is single or double dose, the marbling will range from 5% to 40% coverage.
Marble (M/+ or M/M or M/g or M/Gm): Marble expresses with much more black pattern than gold marble. The marbling varies from 50% to 95%.
Black hybrid (D/g or D/Gm): A cross of black with a gold, the result is black hybrids, a very vigorous black that may look brassy when young. This cross does not breed true.
Pearlscale (p/p): Pearlscale is a scale mutation, also called the "diamond" angelfish in some regions due to the gem-like iridescence on its scales. The scales have a wrinkled, wavy look that reflects light to create a sparkling effect. Pearl develops slowly, starting at around 9 weeks of age. It can be inhibited by stressful conditions. It is recessive, requiring both parents to contribute the allele.
Black ghost (D/+ - S/+): Similar to a ghost, it has a darker appearance due to the dark gene, and very similar to a black lace without complete stripes. Ghosts generally have more iridescence than normal.
Albino (a/a): Albino removes dark pigments in most varieties. Some, like albino marble still have a little black remaining on a percentage of the fish. The eye pupils are pink as in all albino animals. The surrounding iris can be red or yellow depending on the variety.
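As a hedged aside, the single-locus crosses implied by this nomenclature (for example the recessive gold gene, g/g) can be sketched with a toy Punnett-square computation. The function name and the tuple representation are invented for illustration and are not part of the article.

```python
from collections import Counter
from itertools import product

def cross(parent1, parent2):
    """Toy single-locus Punnett-square cross.

    Each parent is a 2-tuple of alleles, e.g. ('g', '+') for a gold carrier.
    Returns offspring genotype frequencies, with alleles order-normalized
    so that ('g', '+') and ('+', 'g') count as the same genotype.
    """
    counts = Counter(tuple(sorted(pair)) for pair in product(parent1, parent2))
    total = sum(counts.values())
    return {geno: n / total for geno, n in counts.items()}

# Silver (+/+) x gold (g/g): every offspring is a g/+ carrier that looks silver.
f1 = cross(('+', '+'), ('g', 'g'))

# Carrier x carrier: 1/4 gold (g/g), 1/2 carriers (g/+), 1/4 true silver (+/+).
f2 = cross(('g', '+'), ('g', '+'))
```

This reproduces the classic Mendelian ratios for a single recessive trait; dominant genes such as Z or M work the same way, except that heterozygotes already show the phenotype.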
Exponential integral
In mathematics, the exponential integral Ei is a special function on the complex plane.
It is defined as one particular definite integral of the ratio between an exponential function and its argument.
Definitions
For real non-zero values of x, the exponential integral Ei(x) is defined as
$$\operatorname{Ei}(x) = -\int_{-x}^{\infty} \frac{e^{-t}}{t}\,dt = \int_{-\infty}^{x} \frac{e^{t}}{t}\,dt,$$
where the integral is understood as a Cauchy principal value for $x > 0$, since the integrand has a pole at $t = 0$.
Properties
Several properties of the exponential integral below, in certain cases, allow one to avoid its explicit evaluation through the definition above.
Convergent series
For real or complex arguments off the negative real axis, $E_1$ can be expressed as
$$E_1(z) = -\gamma - \ln z + \sum_{k=1}^{\infty} \frac{(-1)^{k+1} z^k}{k \cdot k!},$$
where $\gamma$ is the Euler–Mascheroni constant. The sum converges for all complex $z$, and we take the usual value of the complex logarithm having a branch cut along the negative real axis.
This formula can be used to compute $E_1(x)$ with floating point operations for real $x$ between 0 and 2.5. For $x > 2.5$, the result is inaccurate due to cancellation.
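As a hedged illustration of the convergent series (a sketch with invented names, the Euler–Mascheroni constant hard-coded to double precision), $E_1$ can be evaluated for moderate positive arguments as follows:

```python
import math

def E1_series(x, terms=60):
    """Convergent series for the exponential integral E1(x), x > 0:
    E1(x) = -gamma - ln(x) + sum_{k>=1} (-1)^(k+1) x^k / (k * k!).
    Accurate for moderate x; for large x the alternating terms cancel badly.
    """
    gamma = 0.5772156649015329  # Euler-Mascheroni constant
    s = 0.0
    term = 1.0  # running factor x^k / k!
    for k in range(1, terms + 1):
        term *= x / k                      # now equals x^k / k!
        s += (-1) ** (k + 1) * term / k    # add (-1)^(k+1) x^k / (k * k!)
    return -gamma - math.log(x) + s

print(E1_series(1.0))  # ~ 0.21938393439552026
```

The loop builds $x^k/k!$ incrementally instead of calling a factorial, which avoids overflow for the term sizes involved here.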
A faster converging series was found by Ramanujan:
Asymptotic (divergent) series
Unfortunately, the convergence of the series above is slow for arguments of larger modulus. For example, more than 40 terms are required to get an answer correct to three significant figures for . However, for positive values of x, there is a divergent series approximation that can be obtained by integrating by parts:
$$E_1(x) = \frac{e^{-x}}{x} \sum_{n=0}^{N-1} \frac{(-1)^n\, n!}{x^n}.$$
The relative error of the approximation above is plotted on the figure to the right for various values of , the number of terms in the truncated sum ( in red, in pink).
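To see the optimal-truncation behaviour numerically, the following sketch (names invented; the convergent series of the previous subsection is re-implemented here as a reference) computes the error of the truncated asymptotic sum for a moderate argument, showing that the error first shrinks and then grows as more terms are added:

```python
import math

def E1_ref(x, terms=120):
    # Reference value via the convergent series (adequate for x = 10).
    gamma = 0.5772156649015329
    s, term = 0.0, 1.0
    for k in range(1, terms + 1):
        term *= x / k
        s += (-1) ** (k + 1) * term / k
    return -gamma - math.log(x) + s

def E1_asymptotic(x, n):
    # Truncated divergent expansion: e^-x/x * sum_{k=0}^{n-1} (-1)^k k!/x^k
    s, term = 0.0, 1.0
    for k in range(n):
        s += term
        term *= -(k + 1) / x   # next term (-1)^(k+1) (k+1)! / x^(k+1)
    return math.exp(-x) / x * s

x = 10.0
errors = [abs(E1_asymptotic(x, n) - E1_ref(x)) for n in range(1, 25)]
# The error shrinks until n is roughly x, then the divergent tail takes over.
best_n = 1 + errors.index(min(errors))
```

With `x = 10`, the minimum error lands near `n = 10` terms, in line with the rule that the optimal truncation point of this expansion is approximately at the argument's modulus.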
Asymptotics beyond all orders
Using integration by parts, we can obtain an explicit formula. For any fixed , the absolute value of the error term decreases, then increases. The minimum occurs at , at which point . This bound is said to be "asymptotics beyond all orders".
Exponential and logarithmic behavior: bracketing
From the two series suggested in previous subsections, it follows that $E_1$ behaves like a negative exponential for large values of the argument and like a logarithm for small values. For positive real values of the argument, $E_1$ can be bracketed by elementary functions as follows:
$$\tfrac{1}{2}\, e^{-x} \ln\!\left(1 + \tfrac{2}{x}\right) < E_1(x) < e^{-x} \ln\!\left(1 + \tfrac{1}{x}\right), \qquad x > 0.$$
The left-hand side of this inequality is shown in the graph to the left in blue; the central part is shown in black and the right-hand side is shown in red.
Definition by Ein
Both $E_1$ and $\operatorname{Ei}$ can be written more simply using the entire function $\operatorname{Ein}$ defined as
$$\operatorname{Ein}(z) = \int_0^z \frac{1 - e^{-t}}{t}\,dt = \sum_{k=1}^{\infty} \frac{(-1)^{k+1} z^k}{k \cdot k!}$$
(note that this is just the alternating series in the above definition of $E_1$). Then we have
$$E_1(z) = -\gamma - \ln z + \operatorname{Ein}(z), \qquad \operatorname{Ei}(x) = \gamma + \ln x - \operatorname{Ein}(-x) \quad (x > 0).$$
The function is related to the exponential generating function of the harmonic numbers:
Relation with other functions
Kummer's equation
$$z\frac{d^2 w}{dz^2} + (b - z)\frac{dw}{dz} - a w = 0$$
is usually solved by the confluent hypergeometric functions $M(a,b,z)$ and $U(a,b,z)$. But when $a = 0$ and $b = 1$, that is,
$$z\frac{d^2 w}{dz^2} + (1 - z)\frac{dw}{dz} = 0,$$
we have
$$M(0, 1, z) = U(0, 1, z) = 1$$
for all z. A second solution is then given by E1(−z). In fact,
with the derivative evaluated at $a = 0$. Another connection with the confluent hypergeometric functions is that E1 is an exponential times the function U(1,1,z):
$$E_1(z) = e^{-z}\, U(1, 1, z).$$
The exponential integral is closely related to the logarithmic integral function li(x) by the formula
$$\operatorname{li}(e^x) = \operatorname{Ei}(x)$$
for non-zero real values of $x$.
Generalization
The exponential integral may also be generalized to
$$E_n(x) = \int_1^{\infty} \frac{e^{-xt}}{t^n}\,dt,$$
which can be written as a special case of the upper incomplete gamma function:
$$E_n(x) = x^{n-1}\,\Gamma(1 - n, x).$$
The generalized form is sometimes called the Misra function $\varphi_m(t)$, defined as
$$\varphi_m(t) = E_{-m}(t).$$
Many properties of this generalized form can be found in the NIST Digital Library of Mathematical Functions.
Including a logarithm defines the generalized integro-exponential function
Derivatives
The derivatives of the generalised functions $E_n$ can be calculated by means of the formula
$$\frac{d E_n(x)}{dx} = -E_{n-1}(x) \qquad (n \ge 1).$$
Note that the function $E_0$ is easy to evaluate (making this recursion useful), since it is just $e^{-x}/x$.
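A minimal sketch of how this recursion can be used in practice, assuming the classical upward recurrence $E_{n+1}(x) = \bigl(e^{-x} - x\,E_n(x)\bigr)/n$ obtained by integration by parts (a standard identity not stated above; function names invented):

```python
import math

def E1(x, terms=80):
    # Convergent series for E1 (x > 0), used as the base case.
    gamma = 0.5772156649015329
    s, term = 0.0, 1.0
    for k in range(1, terms + 1):
        term *= x / k
        s += (-1) ** (k + 1) * term / k
    return -gamma - math.log(x) + s

def En(n, x):
    """E_n(x) for integer n >= 1 via the upward recurrence
    E_{m+1}(x) = (e^{-x} - x*E_m(x)) / m, starting from E_1.
    Note: the upward direction loses accuracy when n is much larger than x."""
    e = E1(x)
    for m in range(1, n):
        e = (math.exp(-x) - x * e) / m
    return e

# E_2(1) = e^{-1} - E_1(1)
print(En(2, 1.0))  # ~ 0.1484955
```

For large n relative to x, a stable implementation would instead use a continued fraction or downward recurrence; the sketch above only illustrates the relationship between successive orders.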
Exponential integral of imaginary argument
If the argument is purely imaginary, it has a nonnegative real part, so we can use the formula
$$E_1(z) = \int_1^{\infty} \frac{e^{-tz}}{t}\,dt \qquad (\operatorname{Re} z \ge 0)$$
to get a relation with the trigonometric integrals $\operatorname{Si}$ and $\operatorname{Ci}$:
$$E_1(ix) = i\left[-\tfrac{\pi}{2} + \operatorname{Si}(x)\right] - \operatorname{Ci}(x) \qquad (x > 0).$$
The real and imaginary parts of are plotted in the figure to the right with black and red curves.
Approximations
There have been a number of approximations for the exponential integral function. These include:
The Swamee and Ohija approximation where
The Allen and Hastings approximation where
The continued fraction expansion
The approximation of Barry et al. where: with being the Euler–Mascheroni constant.
Inverse function of the exponential integral
We can express the inverse function of the exponential integral in power series form:
where is the Ramanujan–Soldner constant and is a polynomial sequence defined by the following recurrence relation:
For , and we have the formula :
Applications
Time-dependent heat transfer
Nonequilibrium groundwater flow in the Theis solution (called a well function)
Radiative transfer in stellar and planetary atmospheres
Radial diffusivity equation for transient or unsteady state flow with line sources and sinks
Solutions to the neutron transport equation in simplified 1-D geometries
Visual acuity
Visual acuity (VA) commonly refers to the clarity of vision, but technically rates an animal's ability to recognize small details with precision. Visual acuity depends on optical and neural factors. Optical factors of the eye influence the sharpness of an image on its retina. Neural factors include the health and functioning of the retina, of the neural pathways to the brain, and of the interpretative faculty of the brain.
The most commonly referred-to visual acuity is distance acuity or far acuity (e.g., "20/20 vision"), which describes someone's ability to recognize small details at a far distance. This ability is compromised in people with myopia, also known as short-sightedness or near-sightedness. Another visual acuity is near acuity, which describes someone's ability to recognize small details at a near distance. This ability is compromised in people with hyperopia, also known as long-sightedness or far-sightedness.
A common optical cause of low visual acuity is refractive error (ametropia): errors in how the light is refracted in the eye. Causes of refractive errors include aberrations in the shape of the eye or the cornea, and reduced ability of the lens to focus light. When the combined refractive power of the cornea and lens is too high for the length of the eye, the retinal image will be in focus in front of the retina and out of focus on the retina, yielding myopia. A similar poorly focused retinal image happens when the combined refractive power of the cornea and lens is too low for the length of the eye except that the focused image is behind the retina, yielding hyperopia. Normal refractive power is referred to as emmetropia. Other optical causes of low visual acuity include astigmatism, in which contours of a particular orientation are blurred, and more complex corneal irregularities.
Refractive errors can mostly be corrected by optical means (such as eyeglasses, contact lenses, and refractive surgery). For example, in the case of myopia, the correction is to reduce the power of the eye's refraction by a so-called minus lens.
Neural factors that limit acuity are located in the retina, in the pathways to the brain, or in the brain. Examples of conditions affecting the retina include detached retina and macular degeneration. Examples of conditions affecting the brain include amblyopia (caused by the visual brain not having developed properly in early childhood) and by brain damage, such as from traumatic brain injury or stroke. When optical factors are corrected for, acuity can be considered a measure of neural functioning.
Visual acuity is typically measured while fixating, i.e. as a measure of central (or foveal) vision, for the reason that it is highest in the very center. However, acuity in peripheral vision can be of equal importance in everyday life. Acuity declines towards the periphery, first steeply and then more gradually, in an inverse-linear fashion (i.e. the decline follows approximately a hyperbola). Relative acuity follows E2/(E2+E), where E is eccentricity in degrees of visual angle, and E2 is a constant of approximately 2 degrees. At 2 degrees eccentricity, for example, acuity is half the foveal value.
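The inverse-linear decline can be sketched directly. This is a toy example; the function name is invented, while the default E2 = 2 degrees is taken from the text above:

```python
def relative_acuity(E, E2=2.0):
    """Relative visual acuity at eccentricity E (degrees of visual angle),
    normalized to 1.0 at the fovea, following the inverse-linear law
    acuity(E) = E2 / (E2 + E)."""
    return E2 / (E2 + E)

print(relative_acuity(0))   # 1.0 at the fovea
print(relative_acuity(2))   # 0.5, i.e. half the foveal value at 2 degrees
print(relative_acuity(10))  # ~ 0.17 well into the periphery
```

The hyperbola shape means acuity falls by half within the central 2 degrees but then declines much more slowly further out.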
Visual acuity is a measure of how well small details are resolved in the very center of the visual field; it therefore does not indicate how larger patterns are recognized. Visual acuity alone thus cannot determine the overall quality of visual function.
Definition
Visual acuity is a measure of the spatial resolution of the visual processing system. VA, as it is sometimes referred to by optical professionals, is tested by requiring the person whose vision is being tested to identify so-called optotypes – stylized letters, Landolt rings, pediatric symbols, symbols for the illiterate, standardized Cyrillic letters in the Golovin–Sivtsev table, or other patterns – on a printed chart (or some other means) from a set viewing distance. Optotypes are represented as black symbols against a white background (i.e. at maximum contrast). The distance between the person's eyes and the testing chart is set so as to approximate "optical infinity" in the way the lens attempts to focus (far acuity), or at a defined reading distance (near acuity).
A reference value above which visual acuity is considered normal is called 6/6 vision, the US customary equivalent of which is 20/20 vision: At 6 metres or 20 feet, a human eye with that performance is able to separate contours that are approximately 1.75 mm apart. Vision of 6/12 corresponds to lower performance, while vision of 6/3 to better performance. Normal individuals have an acuity of 6/4 or better (depending on age and other factors).
In the expression 6/x vision, the numerator (6) is the distance in metres between the subject and the chart and the denominator (x) the distance at which a person with 6/6 acuity would discern the same optotype. Thus, 6/12 means that a person with 6/6 vision would discern the same optotype from 12 metres away (i.e. at twice the distance). This is equivalent to saying that with 6/12 vision, the person possesses half the spatial resolution and needs twice the size to discern the optotype.
A simple and efficient way to state acuity is by converting the fraction to a decimal: 6/6 then corresponds to an acuity (or a Visus) of 1.0 (see Expression below), while 6/3 corresponds to 2.0, which is often attained by well-corrected healthy young subjects with binocular vision. Stating acuity as a decimal number is the standard in European countries, as required by the European norm (EN ISO 8596, previously DIN 58220).
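As a minimal sketch (helper names invented; logMAR is a standard clinical scale not otherwise discussed here), the Snellen-fraction-to-decimal conversion works out as:

```python
import math

def snellen_to_decimal(numerator, denominator):
    """Convert a Snellen fraction (e.g. 6/12 or 20/40) to decimal acuity."""
    return numerator / denominator

def decimal_to_logmar(decimal_acuity):
    """logMAR is the base-10 log of the minimum angle of resolution in
    arc minutes; decimal acuity 1.0 corresponds to logMAR 0.0."""
    return -math.log10(decimal_acuity)

print(snellen_to_decimal(6, 6))          # 1.0
print(snellen_to_decimal(6, 12))         # 0.5, same as 20/40
print(round(decimal_to_logmar(0.5), 3))  # 0.301
```

Because the Snellen fraction is a ratio of distances, 6/12 and 20/40 give the same decimal acuity, which is why the metric and US customary notations are interchangeable.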
The precise distance at which acuity is measured is not important as long as it is sufficiently far away and the size of the optotype on the retina is the same. That size is specified as a visual angle, which is the angle, at the eye, under which the optotype appears. For 6/6 = 1.0 acuity, the size of a letter on the Snellen chart or Landolt C chart is a visual angle of 5 arc minutes (1 arc min = 1/60 of a degree), which is a 43 point font at 20 feet. By the design of a typical optotype (like a Snellen E or a Landolt C), the critical gap that needs to be resolved is 1/5 this value, i.e., 1 arc min. The latter is the value used in the international definition of visual acuity:
Acuity is a measure of visual performance and does not relate to the eyeglass prescription required to correct vision. Instead, an eye exam seeks to find the prescription that will provide the best corrected visual performance achievable. The resulting acuity may be greater or less than 6/6 = 1.0. Indeed, a subject diagnosed as having 6/6 vision will often actually have higher visual acuity because, once this standard is attained, the subject is considered to have normal (in the sense of undisturbed) vision and smaller optotypes are not tested. Subjects with 6/6 vision or "better" (20/15, 20/10, etc.) may still benefit from an eyeglass correction for other problems related to the visual system, such as hyperopia, ocular injuries, or presbyopia.
Measurement
Visual acuity is measured by a psychophysical procedure and as such relates the physical characteristics of a stimulus to a subject's percept and their resulting responses. Measurement can be taken by using an eye chart invented by Ferdinand Monoyer, by optical instruments, or by computerized tests like the FrACT.
Care must be taken that viewing conditions correspond to the standard, such as correct illumination of the room and the eye chart, correct viewing distance, enough time for responding, error allowance, and so forth. In European countries, these conditions are standardized by the European norm (EN ISO 8596, previously DIN 58220).
History
Physiology
Daylight vision (i.e. photopic vision) is subserved by cone receptor cells which have high spatial density (in the central fovea) and allow high acuity of 6/6 or better. In low light (i.e., scotopic vision), cones do not have sufficient sensitivity and vision is subserved by rods. Spatial resolution is then much lower. This is due to spatial summation of rods, i.e. a number of rods merge into a bipolar cell, in turn connecting to a ganglion cell, and the resulting unit for resolution is large, and acuity small. There are no rods in the very center of the visual field (the foveola), and highest performance in low light is achieved in near peripheral vision.
The maximum angular resolution of the human eye is 28 arc seconds or 0.47 arc minutes; this gives an angular resolution of 0.008 degrees, and at a distance of 1 km corresponds to 136 mm. This is equal to 0.94 arc minutes per line pair (one white and one black line), or 0.016 degrees. For a pixel pair (one white and one black pixel) this gives a pixel density of 128 pixels per degree (PPD).
6/6 vision is defined as the ability to resolve two points of light separated by a visual angle of one minute of arc, corresponding to 60 PPD, or about 290–350 pixels per inch for a display on a device held 250 to 300 mm from the eye.
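The arithmetic linking angular resolution, pixels per degree, and pixels per inch can be sketched numerically. This is an illustrative sketch (the helper names are made up, not from any standard library):

```python
import math

def pixels_per_degree(arcmin_per_pixel):
    """Pixels per degree for a given angular pixel pitch in arc minutes."""
    return 60.0 / arcmin_per_pixel

def ppi_at_distance(ppd, distance_mm):
    """Approximate pixels per inch needed so that one pixel subtends
    1/ppd of a degree at the given viewing distance."""
    pixel_mm = distance_mm * math.tan(math.radians(1.0 / ppd))
    return 25.4 / pixel_mm

print(pixels_per_degree(1.0))           # 60 PPD for 1 arc minute per pixel
print(round(ppi_at_distance(60, 300)))  # ≈ 291 ppi at 300 mm
print(round(ppi_at_distance(60, 250)))  # ≈ 349 ppi at 250 mm
```

The 290–350 ppi range quoted above falls directly out of varying the viewing distance between 250 and 300 mm.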
Thus, visual acuity, or resolving power (in daylight, central vision), is the property of cones.
To resolve detail, the eye's optical system has to project a focused image on the fovea, a region inside the macula having the highest density of cone photoreceptor cells (the only kind of photoreceptors existing in the fovea's very center of 300 μm diameter), thus having the highest resolution and best color vision. Acuity and color vision, despite being mediated by the same cells, are different physiologic functions that do not interrelate except by position. Acuity and color vision can be affected independently.
The grain of a photographic mosaic has just as limited resolving power as the "grain" of the retinal mosaic. To see two points as separate, the receptors they stimulate must be separated by at least one unstimulated receptor between them. The maximum resolution is thus 30 seconds of arc, corresponding to the foveal cone diameter, or the angle it subtends at the nodal point of the eye. For a "local sign" to be obtained from each cone, as mosaic-like vision requires, the signal must travel from a single cone through a dedicated chain of one bipolar, one ganglion, and one lateral geniculate cell. A key factor in obtaining detailed vision, however, is inhibition. This is mediated by neurons such as the amacrine and horizontal cells, which functionally suppress the spread or convergence of signals. This tendency toward one-to-one transmission of signals is promoted when the center of a receptive field is brighter than its surround, which triggers the inhibition leading to one-to-one wiring. Such wiring is the exception rather than the rule, however, as cones may connect to both midget and flat (diffuse) bipolars, and amacrine and horizontal cells can merge signals just as readily as inhibit them.
Light travels from the fixation object to the fovea through an imaginary path called the visual axis. The eye's tissues and structures that are in the visual axis (and also the tissues adjacent to it) affect the quality of the image. These structures are: tear film, cornea, anterior chamber, pupil, lens, vitreous, and finally the retina. The posterior part of the retina, called the retinal pigment epithelium (RPE), is responsible for, among many other things, absorbing light that crosses the retina so it cannot bounce to other parts of the retina. In many vertebrates, such as cats, where high visual acuity is not a priority, there is a reflecting tapetum layer that gives the photoreceptors a "second chance" to absorb the light, thus improving the ability to see in the dark. This is what causes an animal's eyes to seemingly glow in the dark when a light is shone on them. The RPE also has a vital function of recycling the chemicals used by the rods and cones in photon detection. If the RPE is damaged and does not clean up this shed material, blindness can result.
As in a photographic lens, visual acuity is affected by the size of the pupil. Optical aberrations of the eye that decrease visual acuity are at a maximum when the pupil is largest (about 8 mm), which occurs in low-light conditions. When the pupil is small (1–2 mm), image sharpness may be limited by diffraction of light by the pupil (see diffraction limit). Between these extremes is the pupil diameter that is generally best for visual acuity in normal, healthy eyes; this tends to be around 3 or 4 mm.
If the optics of the eye were otherwise perfect, theoretically, acuity would be limited by pupil diffraction, which would be a diffraction-limited acuity of 0.4 minutes of arc (minarc) or 6/2.6 acuity. The smallest cone cells in the fovea have sizes corresponding to 0.4 minarc of the visual field, which also places a lower limit on acuity. The optimal acuity of 0.4 minarc or 6/2.6 can be demonstrated using a laser interferometer that bypasses any defects in the eye's optics and projects a pattern of dark and light bands directly on the retina. Laser interferometers are now used routinely in patients with optical problems, such as cataracts, to assess the health of the retina before subjecting them to surgery.
The visual cortex is the part of the cerebral cortex, located in the posterior part of the brain (the occipital lobe), that is responsible for processing visual stimuli. The central 10° of field (approximately the extension of the macula) is represented by at least 60% of the visual cortex. Many of these neurons are believed to be involved directly in visual acuity processing.
Proper development of normal visual acuity depends on a human or an animal having normal visual input when it is very young. Any visual deprivation, that is, anything interfering with such input over a prolonged period of time, such as a cataract, severe eye turn or strabismus, anisometropia (unequal refractive error between the two eyes), or covering or patching the eye during medical treatment, will usually result in a severe and permanent decrease in visual acuity and pattern recognition in the affected eye if not treated early in life, a condition known as amblyopia. The decreased acuity is reflected in various abnormalities in cell properties in the visual cortex. These changes include a marked decrease in the number of cells connected to the affected eye as well as cells connected to both eyes in cortical area V1, resulting in a loss of stereopsis, i.e. depth perception by binocular vision (colloquially: "3D vision"). The period of time over which an animal is highly sensitive to such visual deprivation is referred to as the critical period.
The eye is connected to the visual cortex by the optic nerve coming out of the back of the eye. The two optic nerves come together behind the eyes at the optic chiasm, where about half of the fibers from each eye cross over to the opposite side and join fibers from the other eye representing the corresponding visual field, the combined nerve fibers from both eyes forming the optic tract. This ultimately forms the physiological basis of binocular vision. The tracts project to a relay station in the midbrain called the lateral geniculate nucleus, part of the thalamus, and then to the visual cortex along a collection of nerve fibers called the optic radiation.
Any pathological process in the visual system, even in older humans beyond the critical period, will often cause decreases in visual acuity. Thus measuring visual acuity is a simple test for assessing the health of the eyes, the visual brain, or the pathway to the brain. Any relatively sudden decrease in visual acuity is always a cause for concern. Common causes of decreases in visual acuity are cataracts and scarred corneas, which affect the optical path; diseases that affect the retina, such as macular degeneration and diabetes; diseases affecting the optic pathway to the brain, such as tumors and multiple sclerosis; and diseases affecting the visual cortex, such as tumors and strokes.
Though the resolving power depends on the size and packing density of the photoreceptors, the neural system must interpret the receptors' information. As determined from single-cell experiments on the cat and primate, different ganglion cells in the retina are tuned to different spatial frequencies, so some ganglion cells at each location have better acuity than others. Ultimately, however, it appears that the size of a patch of cortical tissue in visual area V1 that processes a given location in the visual field (a concept known as cortical magnification) is equally important in determining visual acuity. In particular, that size is largest in the fovea's center, and decreases with increasing distance from there.
Optical aspects
Besides the neural connections of the receptors, the optical system is an equally key player in retinal resolution. In the ideal eye, the image of a diffraction grating can subtend 0.5 micrometre on the retina. This is certainly not the case in practice, however, and furthermore the pupil can cause diffraction of the light. Thus, black lines on a grating will be mixed with the intervening white lines to make a gray appearance. Optical defects (such as uncorrected myopia) can make this worse, but suitable lenses can help. Images (such as gratings) can be sharpened by lateral inhibition, i.e., more highly excited cells inhibiting the less excited cells. A similar process occurs with chromatic aberration, in which the color fringes around black-and-white objects are inhibited in the same way.
Expression
Visual acuity is often measured according to the size of letters viewed on a Snellen chart or the size of other symbols, such as Landolt Cs or the E Chart.
In some countries, acuity is expressed as a vulgar fraction, and in some as a decimal number. Using the metre as a unit of measurement, (fractional) visual acuity is expressed relative to 6/6. Otherwise, using the foot, visual acuity is expressed relative to 20/20. For all practical purposes, 20/20 vision is equivalent to 6/6. In the decimal system, acuity is defined as the reciprocal value of the size of the gap (measured in arc minutes) of the smallest Landolt C, the orientation of which can be reliably identified. A value of 1.0 is equal to 6/6.
LogMAR is another commonly used scale, expressed as the (decadic) logarithm of the minimum angle of resolution (MAR), which is the reciprocal of the acuity number. The LogMAR scale converts the geometric sequence of a traditional chart to a linear scale. It measures visual acuity loss: positive values indicate vision loss, while negative values denote normal or better visual acuity. This scale is commonly used clinically and in research because the lines are of equal length and so it forms a continuous scale with equally spaced intervals between points, unlike Snellen charts, which have different numbers of letters on each line.
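The conversions between these scales follow directly from the definitions above. A minimal sketch (the helper function names are illustrative, not from any clinical library):

```python
import math

def snellen_to_decimal(numerator, denominator):
    """Decimal acuity from a Snellen fraction, e.g. 6/12 -> 0.5."""
    return numerator / denominator

def decimal_to_logmar(decimal_acuity):
    """logMAR = log10(MAR), where MAR in arc minutes is the
    reciprocal of the decimal acuity."""
    return math.log10(1.0 / decimal_acuity)

print(snellen_to_decimal(6, 6))          # 1.0
print(decimal_to_logmar(1.0))            # 0.0 for 6/6
print(round(decimal_to_logmar(0.5), 2))  # 0.3 for 6/12: loss is positive
```

As the text notes, vision better than 6/6 comes out as a negative logMAR value, e.g. 6/5 gives log10(5/6) ≈ −0.08.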
A visual acuity of 6/6 is frequently described as meaning that a person can see detail from 6 metres away the same as a person with "normal" eyesight would see from 6 metres. If a person has a visual acuity of 6/12, they are said to see detail from 6 metres away the same as a person with "normal" eyesight would see it from 12 metres away.
The definition of 6/6 is somewhat arbitrary, since human eyes typically have higher acuity, as Tscherning writes, "We have found also that the best eyes have a visual acuity which approaches 2, and we can be almost certain that if, with a good illumination, the acuity is only equal to 1, the eye presents defects sufficiently pronounced to be easily established." Most observers may have a binocular acuity superior to 6/6; the limit of acuity in the unaided human eye is around 6/3–6/2.4 (20/10–20/8), although 6/3 was the highest score recorded in a study of some US professional athletes. Some birds of prey, such as hawks, are believed to have an acuity of around 20/2; in this respect, their vision is much better than human eyesight.
When visual acuity is below the largest optotype on the chart, the reading distance is reduced until the patient can read it. Once the patient is able to read the chart, the letter size and test distance are noted. If the patient is unable to read the chart at any distance, vision is graded with progressively coarser measures, such as the ability to count fingers.
Legal definitions
Various countries have defined statutory limits for poor visual acuity that qualifies as a disability. For example, in Australia, the Social Security Act defines blindness as:
In the US, the relevant federal statute defines blindness as follows:
A person's visual acuity is registered documenting the following: whether the test was for distant or near vision, the eye(s) evaluated and whether corrective lenses (i.e. glasses or contact lenses) were used:
Distance from the chart
D (distant) for the evaluation done at 6 metres (20 ft).
N (near) for the evaluation done at 40 cm (15.7 in).
Eye evaluated
OD (Latin oculus dexter) for the right eye.
OS (Latin oculus sinister) for the left eye.
OU (Latin oculi uterque) for both eyes.
Usage of spectacles during the test
cc (Latin cum correctore) with correctors.
sc (Latin sine correctore) without correctors.
Pinhole occluder
The abbreviation PH is followed by the visual acuity as measured with a pinhole occluder, which temporarily corrects for refractive errors such as myopia or astigmatism.
PHNI means no improvement of visual acuity using a pinhole occluder.
So, distant visual acuity of 6/10 and 6/8 with pinhole in the right eye will be: DscOD 6/10 PH 6/8. Distant visual acuity of count fingers and 6/17 with pinhole in the left eye will be: DscOS CF PH 6/17. Near visual acuity of 6/8 with pinhole remaining at 6/8 in both eyes with spectacles will be: NccOU 6/8 PH 6/8.
"Dynamic visual acuity" defines the ability of the eye to visually discern fine detail in a moving object.
Measurement considerations
Visual acuity measurement involves more than being able to see the optotypes. The patient should be cooperative, understand the optotypes, be able to communicate with the physician, and many more factors. If any of these factors is missing, then the measurement will not represent the patient's real visual acuity.
Visual acuity is a subjective test meaning that if the patient is unwilling or unable to cooperate, the test cannot be done. A patient who is sleepy, intoxicated, or has any disease that can alter their consciousness or mental status, may not achieve their maximum possible acuity.
Patients who are illiterate in the language whose letters and/or numbers appear on the chart will be registered as having very low visual acuity if this is not known. Some patients will not tell the examiner that they do not know the optotypes, unless asked directly about it. Brain damage can result in a patient not being able to recognize printed letters, or being unable to spell them.
A motor inability can make a person respond incorrectly to the optotype shown and negatively affect the visual acuity measurement.
Variables such as pupil size, background adaptation luminance, duration of presentation, type of optotype used, interaction effects from adjacent visual contours (or "crowding") can all affect visual acuity measurement.
Testing in children
The newborn's visual acuity is approximately 6/133, developing to 6/6 well after the age of six months in most children, according to a study published in 2009.
The measurement of visual acuity in infants, pre-verbal children and special populations (for instance, disabled individuals) is not always possible with a letter chart. For these populations, specialised testing is necessary. As a basic examination step, one must check whether visual stimuli can be fixated, centered and followed.
More formal testing uses preferential looking techniques with Teller acuity cards (presented by a technician from behind a window in the wall) to check whether the child is more visually attentive to a random presentation of vertical or horizontal gratings on one side than to a blank page on the other side. The bars become progressively finer or closer together, and the endpoint is noted when the child, seated in its adult carer's lap, shows no preference between the two sides.
Another popular technique is electro-physiologic testing using visual evoked (cortical) potentials (VEPs or VECPs), which can be used to estimate visual acuity in doubtful cases and expected severe vision loss cases like Leber's congenital amaurosis.
VEP testing of acuity is somewhat similar to preferential looking in using a series of black and white stripes (sine wave gratings) or checkerboard patterns (which produce larger responses than stripes). Behavioral responses are not required and brain waves created by the presentation of the patterns are recorded instead. The patterns become finer and finer until the evoked brain wave just disappears, which is considered to be the endpoint measure of visual acuity. In adults and older, verbal children capable of paying attention and following instructions, the endpoint provided by the VEP corresponds very well to the psychophysical measure in the standard measurement (i.e. the perceptual endpoint determined by asking the subject when they can no longer see the pattern). There is an assumption that this correspondence also applies to much younger children and infants, though this does not necessarily have to be the case. Studies do show the evoked brain waves, as well as derived acuities, are very adult-like by one year of age.
For reasons not totally understood, until a child is several years old, visual acuities from behavioral preferential looking techniques typically lag behind those determined using the VEP, a direct physiological measure of early visual processing in the brain. Possibly it takes longer for more complex behavioral and attentional responses, involving brain areas not directly involved in processing vision, to mature. Thus the visual brain may detect the presence of a finer pattern (reflected in the evoked brain wave), but the "behavioral brain" of a small child may not find it salient enough to pay special attention to.
A simple but less-used technique is checking oculomotor responses with an optokinetic nystagmus drum, where the subject is placed inside the drum and surrounded by rotating black and white stripes. This creates involuntary abrupt eye movements (nystagmus) as the brain attempts to track the moving stripes. There is a good correspondence between the optokinetic and usual eye-chart acuities in adults. A potentially serious problem with this technique is that the process is reflexive and mediated in the low-level brain stem, not in the visual cortex. Thus someone can have a normal optokinetic response and yet be cortically blind with no conscious visual sensation.
"Normal" visual acuity
Visual acuity depends upon how accurately light is focused on the retina, the integrity of the eye's neural elements, and the interpretative faculty of the brain. "Normal" visual acuity (in central, i.e. foveal vision) is frequently considered to be what was defined by Herman Snellen as the ability to recognize an optotype when it subtended 5 minutes of arc, that is Snellen's chart 6/6-metre, 20/20 feet, 1.00 decimal or 0.0 logMAR. In young humans, the average visual acuity of a healthy, emmetropic eye (or ametropic eye with correction) is approximately 6/5 to 6/4, so it is inaccurate to refer to 6/6 visual acuity as "perfect" vision. On the contrary, Tscherning writes, "We have found also that the best eyes have a visual acuity which approaches 2, and we can be almost certain that if, with a good illumination, the acuity is only equal to 1, the eye presents defects sufficiently pronounced to be easily established."
6/6 is the visual acuity needed to discriminate two contours separated by 1 arc minute – 1.75 mm at 6 metres. This is because a 6/6 letter, E for example, has three limbs and two spaces in between them, giving 5 different detailed areas. The ability to resolve this therefore requires 1/5 of the letter's total size, which in this case would be 1 minute of arc (visual angle). The significance of the 6/6 standard can best be thought of as the lower limit of normal, or as a screening cutoff. When used as a screening test, subjects that reach this level need no further investigation, even though the average visual acuity with a healthy visual system is typically better.
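The geometry in this paragraph is easy to verify numerically: the linear size subtended by a visual angle is the distance times the tangent of that angle. A short sketch (the helper name is illustrative):

```python
import math

def detail_size_mm(distance_m, arcmin=1.0):
    """Linear size, in mm, subtended by a visual angle given in arc
    minutes at a given viewing distance in metres."""
    return distance_m * 1000 * math.tan(math.radians(arcmin / 60.0))

gap = detail_size_mm(6.0)  # one limb or gap of a 6/6 letter at 6 m
print(round(gap, 2))       # ≈ 1.75 mm, matching the figure in the text
print(round(5 * gap, 1))   # a whole 6/6 letter (5 arc minutes) ≈ 8.7 mm
```

The factor of five is exactly the three limbs plus two gaps of a Snellen E described above.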
Some people may have other visual problems, such as severe visual field defects, color blindness, reduced contrast, mild amblyopia, cerebral visual impairments, inability to track fast-moving objects, or one of many other visual impairments and still have "normal" visual acuity. Thus, "normal" visual acuity does not imply normal vision. The reason visual acuity is very widely used is that it is easily measured, its reduction (after correction) often indicates some disturbance, and that it often corresponds with the normal daily activities a person can handle, and evaluates their impairment to do them (even though there is heavy debate over that relationship).
Other measures
Normally, visual acuity refers to the ability to resolve two separated points or lines, but there are other measures of the ability of the visual system to discern spatial differences.
Vernier acuity measures the ability to align two line segments. Humans can do this with remarkable accuracy, a performance classed as hyperacuity. Under optimal conditions of good illumination, high contrast, and long line segments, the limit to vernier acuity is about 8 arc seconds or 0.13 arc minutes, compared to about 0.6 arc minutes (6/4) for normal visual acuity or the 0.4 arc minute diameter of a foveal cone. Because the limit of vernier acuity is well below that imposed on regular visual acuity by the "retinal grain" or size of the foveal cones, it is thought to be a process of the visual cortex rather than the retina. Supporting this idea, vernier acuity seems to correspond very closely to (and may have the same underlying mechanism as) the ability to discern very slight differences in the orientations of two lines, where orientation is known to be processed in the visual cortex.
The smallest detectable visual angle produced by a single fine dark line against a uniformly illuminated background is also much less than foveal cone size or regular visual acuity. In this case, under optimal conditions, the limit is about 0.5 arc seconds or only about 2% of the diameter of a foveal cone. This produces a contrast of about 1% with the illumination of surrounding cones. The mechanism of detection is the ability to detect such small differences in contrast or illumination, and does not depend on the angular width of the bar, which cannot be discerned. Thus as the line gets finer, it appears to get fainter but not thinner.
Stereoscopic acuity is the ability to detect differences in depth with the two eyes. For more complex targets, stereoacuity is similar to normal monocular visual acuity, or around 0.6–1.0 arc minutes, but for much simpler targets, such as vertical rods, may be as low as only 2 arc seconds. Although stereoacuity normally corresponds very well with monocular acuity, it may be very poor, or absent, even in subjects with normal monocular acuities. Such individuals typically have abnormal visual development when they are very young, such as an alternating strabismus, or eye turn, where both eyes rarely, or never, point in the same direction and therefore do not function together.
Another test of visual acuity (EVTS/OptimEyes) uses targets which change in size, contrast, and viewing time. This test was developed by Daniel M. Laby and colleagues and uses item response theory to calculate a vision performance score (core score). This specific test of visual function has been shown to correlate with professional sports performance.
Motion acuity
The eye has acuity limits for detecting motion. Forward motion is limited by the subtended angular velocity detection threshold (SAVT), and horizontal and vertical motion acuity are limited by lateral motion thresholds. The lateral motion limit is generally below the looming motion limit, and for an object of a given size, lateral motion becomes the more salient of the two once the observer moves sufficiently far away from the path of travel. Below these thresholds, subjective constancy is experienced in accordance with Stevens' power law and the Weber–Fechner law.
Subtended angular velocity detection threshold (SAVT)
There is a specific acuity limit in detecting an approaching object's looming motion, regarded as the subtended angular velocity detection threshold (SAVT) limit of visual acuity. It has a practical value of about 0.0275 rad/s. For a person with SAVT limit θ̇, the looming motion of a directly approaching object of size S, moving at velocity v, is not detectable until its distance D satisfies

D = √(Sv/θ̇ − S²/4),
where the S²/4 term is omitted for small objects relative to great distances by the small-angle approximation.
To exceed the SAVT, an object of size S moving at velocity v must be closer than this distance D; beyond it, subjective constancy is experienced. The SAVT can be measured from the distance D at which a looming object is first detected:

θ̇ = Sv/(D² + S²/4),
where the S²/4 term is again omitted for small objects relative to great distances by the small-angle approximation.
The SAVT has the same kind of importance to driving safety and sports as the static limit. The formula is derived by taking the derivative of the visual angle with respect to distance and multiplying by the velocity to obtain the time rate of visual expansion (dθ/dt).
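As a numerical sketch of the looming-detection relation described above (the vehicle size and speed below are hypothetical example values, not from the source):

```python
import math

SAVT = 0.0275  # rad/s, the practical looming-detection threshold cited in the text

def looming_detection_distance(size_m, speed_ms, threshold=SAVT):
    """Distance at which a directly approaching object's angular expansion
    rate first exceeds the threshold: D = sqrt(S*v/threshold - S^2/4).
    The S^2/4 term is negligible for small, distant objects."""
    return math.sqrt(size_m * speed_ms / threshold - size_m**2 / 4.0)

def expansion_rate(size_m, speed_ms, distance_m):
    """Time rate of visual expansion: d(theta)/dt = S*v / (D^2 + S^2/4)."""
    return size_m * speed_ms / (distance_m**2 + size_m**2 / 4.0)

# A 1.8 m wide vehicle approaching head-on at 25 m/s (hypothetical numbers):
d = looming_detection_distance(1.8, 25.0)
print(round(d, 1))                             # ≈ 40.4 m
print(round(expansion_rate(1.8, 25.0, d), 4))  # ≈ 0.0275 rad/s at that distance
```

The second print confirms that the expansion rate at the computed distance equals the threshold, as the derivation requires.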
Lateral motion
There are acuity limits for horizontal and vertical motion as well. They can be measured and defined by the threshold detection of movement of an object traveling at orthogonal distance x and velocity v, orthogonal to the direction of view, from a set-back distance B, with the formula

θ̇ = vB/(B² + x²).
Because the tangent of the subtended angle is the ratio of the orthogonal distance to the set-back distance, the angular time rate (rad/s) of lateral motion is simply the derivative of the inverse tangent multiplied by the velocity, θ̇ = vB/(B² + x²). In application this means that an orthogonally traveling object will not be discernible as moving until it has reached the distance

x = √(vB/θ̇ − B²),
where the threshold θ̇ for lateral motion is generally ≥ 0.0087 rad/s, with probable dependence on deviation from the fovea and movement orientation; velocity is in terms of the distance units, and zero distance is straight ahead. Far object distances, close set-backs, and low velocities generally lower the salience of lateral motion. Detection with close or null set-back can be accomplished through the pure scale changes of looming motion.
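The lateral-motion relation can be sketched the same way (the pedestrian speed and set-back below are hypothetical example values):

```python
import math

LATERAL_THRESHOLD = 0.0087  # rad/s, the typical lateral-motion threshold cited in the text

def lateral_detection_distance(speed_ms, setback_m, threshold=LATERAL_THRESHOLD):
    """Along-path distance from straight ahead at which an orthogonally
    moving object first appears to move: x = sqrt(v*B/threshold - B^2)."""
    return math.sqrt(speed_ms * setback_m / threshold - setback_m**2)

def lateral_rate(speed_ms, setback_m, x_m):
    """Angular time rate of the bearing angle: v*B / (B^2 + x^2)."""
    return speed_ms * setback_m / (setback_m**2 + x_m**2)

# A pedestrian crossing at 2 m/s, observed with a 10 m set-back (hypothetical):
x = lateral_detection_distance(2.0, 10.0)
print(round(x, 1))                           # ≈ 46.9 m
print(round(lateral_rate(2.0, 10.0, x), 4))  # ≈ 0.0087 rad/s at that distance
```

Consistent with the text, increasing the set-back or lowering the velocity shrinks the detection distance, lowering the salience of the lateral motion.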
Radial motion
The motion acuity limit affects radial motion in accordance with its definition: the ratio of the velocity v to the radius r must exceed the threshold θ̇, i.e. v/r ≥ θ̇.
Radial motion is encountered in clinical and research environments, in dome theaters, and in virtual-reality headsets.
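The radial criterion reduces to a single ratio test; a minimal sketch reusing the lateral-motion threshold as the detection limit (the numeric examples are hypothetical):

```python
RADIAL_THRESHOLD = 0.0087  # rad/s, taken here to match the lateral-motion threshold

def radial_motion_detectable(tangential_speed_ms, radius_m,
                             threshold=RADIAL_THRESHOLD):
    """Circular motion is detectable when the angular velocity v/r
    exceeds the motion acuity threshold."""
    return tangential_speed_ms / radius_m >= threshold

print(radial_motion_detectable(0.5, 10.0))   # True: 0.05 rad/s exceeds the limit
print(radial_motion_detectable(0.05, 10.0))  # False: 0.005 rad/s is below it
```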
Chromite

Chromite is a crystalline mineral composed primarily of iron(II) oxide and chromium(III) oxide compounds. It can be represented by the chemical formula FeCr2O4. It is an oxide mineral belonging to the spinel group. The element magnesium can substitute for iron in variable amounts, as it forms a solid solution with magnesiochromite (MgCr2O4). Substitution of the element aluminium can also occur, leading to hercynite (FeAl2O4). Chromite today is mined particularly to make stainless steel through the production of ferrochrome (FeCr), an iron-chromium alloy.
Chromite grains are commonly found in large mafic igneous intrusions such as the Bushveld in South Africa and India. Chromite is iron-black in color with a metallic luster, a dark brown streak and a hardness on the Mohs scale of 5.5.
Properties
Chromite minerals are mainly found in mafic-ultramafic igneous intrusions and are also sometimes found in metamorphic rocks. The chromite minerals occur in layered formations that can be hundreds of kilometres long and a few metres thick. Chromite is also common in iron meteorites, where it forms in association with silicates and troilite minerals.
Crystal structure
The chemical composition of chromite can be represented as FeCr2O4, with the iron in the +2 oxidation state and the chromium in the +3 oxidation state. Chromite, when presented as an ore, or in massive form, forms as fine granular aggregates. The structure of the ore can be seen as platy, with breakages along planes of weakness. Chromite can also be examined in thin section, where the grains appear disseminated, with crystals that are euhedral to subhedral.
Chromite contains Mg, ferrous iron [Fe(II)], Al and trace amounts of Ti. Chromite can change into different minerals based on the amounts of each element in the mineral. Chromite is a part of the spinel group, which means that it is able to form a complete solid solution series with other members in the same group. These include minerals such as chenmingite (FeCr2O4), xieite (FeCr2O4), magnesiochromite (MgCr2O4) and magnetite (Fe2+Fe3+2O4). Chenmingite and xieite are polymorphs of chromite while magnesiochromite and magnetite are isostructural with chromite.
Crystal size and morphology
Chromite occurs as massive and granular crystals and very rarely as octahedral crystals. Twinning for this mineral occurs on the {111} plane, as described by the spinel law.
Chromite grains are generally small in size. However, grains up to 3 cm have been found. These large grains crystallized from the liquid of a meteorite body containing low amounts of chromium and oxygen, and are associated with the stable, supersaturated conditions in that body.
Reactions
Chromite is an important mineral for determining the conditions under which rocks form. It can react with various gases such as CO and CO2. The reaction between these gases and the solid chromite grains reduces the chromite and allows the formation of iron and chromium alloys. Metal carbides may also form from the interaction of chromite with the gases.
Chromite forms early in the crystallization process, which makes it resistant to the alteration effects of the high temperatures and pressures of the metamorphic series; it is able to progress through the series unaltered. Other, less resistant minerals alter in this series to minerals such as serpentine, biotite and garnet.
Distribution of deposits
Chromite is found as orthocumulate lenses in peridotite from the Earth's mantle. It also occurs in layered, ultramafic intrusive rocks. In addition, it is found in metamorphic rocks such as some serpentinites. Ore deposits of chromite form as early magmatic differentiates. It is commonly associated with olivine, magnetite, serpentine and corundum. The vast Bushveld Igneous Complex of South Africa is a large layered mafic to ultramafic igneous body with some layers consisting of 90% chromite, forming the rare rock type chromitite (cf. chromite the mineral and chromitite, a rock containing chromite). The Stillwater Igneous Complex in Montana also contains significant chromite.
Chromite suitable for commercial mining is found in just a handful of very substantial deposits. There are 2 main types of chromite deposits: stratiform deposits and podiform deposits. Stratiform deposits in layered intrusions are the main source of chromite resources and are found in South Africa, Canada, Finland, and Madagascar. Chromite resources from podiform deposits are mainly found in Kazakhstan, Turkey, and Albania. Zimbabwe is the only country that contains notable chromite reserves in both stratiform and podiform deposits.
Stratiform deposits
Stratiform deposits are formed as large sheet-like bodies, usually in layered mafic to ultramafic igneous complexes. This type of deposit accounts for 98% of worldwide chromite reserves.
Stratiform deposits are typically Precambrian in age and are found in cratons. The mafic to ultramafic igneous provinces in which these deposits formed were likely intruded into continental crust, which may have contained granites or gneisses. The shapes of these intrusions are described as tabular or funnel-shaped. The tabular intrusions were emplaced as sills, with parallel layering. Examples of these tabular intrusions can be seen in the Stillwater Igneous Complex and Bird River. The funnel-shaped intrusions dip towards the center of the intrusion, giving the layers a syncline form. Examples of this type of intrusion can be seen in the Bushveld Igneous Complex and the Great Dyke.
Chromite occurs in stratiform deposits as multiple layers of chromitite. These layers range in thickness from 1 cm to 1 m and can extend laterally for up to 70 km. Chromitite is the main rock in these layers: 50–95% of it is chromite, with the remainder composed of olivine, orthopyroxene, plagioclase, clinopyroxene, and the various alteration products of these minerals. The presence of brown mica indicates water in the magma.
Podiform deposits
Podiform deposits occur within ophiolite sequences, whose stratigraphy comprises deep-ocean sediments, pillow lavas, sheeted dykes, gabbros and ultramafic tectonites.
These deposits are found in ultramafic rocks, most notably in tectonites, and their abundance increases towards the top of the tectonite unit.
Podiform deposits are irregular in shape; "pod" is the term geologists use to express the uncertain morphology of these deposits. Their foliation is parallel to that of the host rock, and they are described as discordant, subconcordant or concordant. Chromite in podiform deposits forms anhedral grains. Ores in this type of deposit have a nodular texture, consisting of loosely packed nodules 5–20 mm in size. Other minerals seen in podiform deposits are olivine, orthopyroxene, clinopyroxene, pargasite, Na-mica, albite, and jadeite.
Health and environmental impacts
Chromium extracted from chromite is used on a large scale in many industries, including metallurgy, electroplating, paints, tanning, and paper production. Environmental contamination with hexavalent chromium is a major health and environmental concern. Chromium is most stable in its trivalent (Cr(III)) form, seen in stable compounds such as natural ores. Cr(III) is an essential nutrient, required for lipid and glucose metabolism in animals and humans. In contrast, the second most stable form, hexavalent chromium (Cr(VI)), is generally produced through human activity and rarely seen in nature (as in crocoite), and is a highly toxic carcinogen that may kill animals and humans if ingested in large doses.
Health effects
Chromite ore is mined mainly for the production of ferrochrome, and is crushed and processed into a chromite concentrate with a high chromium-to-iron ratio. Heating chromite concentrate with a reductant such as coal or coke in a high-temperature furnace produces ferrochrome, a ferroalloy of chromium and iron. Both ferrochrome and chromite concentrate can pose various health risks, so defined control approaches and distinct mitigation techniques are important for protecting human health.
When chromite ore is exposed to surface conditions, weathering and oxidation can occur. Chromium in chromite is mostly present in the trivalent state, Cr(III). When chromite ore is exposed to aboveground conditions, Cr(III) can be converted to Cr(VI), the hexavalent state of chromium. Cr(VI) is produced from Cr(III) during milling or grinding of the ore, with the amount formed depending on the moisture of the milling process and the atmosphere in which the milling takes place: wet milling in a non-oxygenated atmosphere produces less Cr(VI), while dry milling in an oxygen-rich atmosphere produces more.
Production of ferrochrome emits pollutants into the air such as nitrogen oxides, carbon oxides and sulfur oxides, as well as dust particulates with high concentrations of heavy metals such as chromium, zinc, lead, nickel and cadmium. During high-temperature smelting of chromite ore to produce ferrochrome, Cr(III) is converted to Cr(VI). As with chromite ore, ferrochrome is milled, and Cr(VI) is therefore introduced into the dust produced. This creates health risks through inhalation and through leaching of toxins into the environment. Humans are exposed to chromium through ingestion, skin contact, and inhalation. Both Cr(III) and Cr(VI) accumulate in the tissues of humans and animals, and excretion of chromium from the body tends to be very slow, so elevated concentrations can still be found in human tissues decades after exposure.
Environmental effects
Chromite mining and chromium and ferrochrome production can have toxic effects on the environment, although chromite mining remains necessary for the production of economic commodities.
Chromium enters the water column through the leaching of soils, direct discharge from industrial activities, and the weathering of chromium-bearing rocks. The route of chromium uptake in plants is still unclear; because chromium is a nonessential element, plants have no distinct uptake mechanism for it that is independent of chromium speciation. Plant studies have shown that toxic effects of chromium on plants include wilting, narrow leaves, delayed or reduced growth, decreased chlorophyll production, damage to root membranes, small root systems, and death, among others. Because chromium is structurally similar to some essential elements, it can also impact the mineral nutrition of plants.
Industrial activities and production pollute and contaminate sediment, water, soil, and air with chromium. Hexavalent chromium harms soil ecology by decreasing the presence, function, and diversity of soil micro-organisms. Chromium concentrations in soil vary depending on the composition of the sediments and rocks from which the soil formed, and the chromium present in soil is a mixture of Cr(VI) and Cr(III). Some forms of chromium, such as Cr(VI), are able to pass into the cells of organisms. Dust particles from industrial operations and industrial wastewater contaminate and pollute surface water, groundwater, and soils.
In aquatic environments, chromium can undergo dissolution, sorption, precipitation, oxidation, reduction, and desorption. In aquatic ecosystems chromium bioaccumulates in invertebrates, aquatic plants, fish, and algae. Its toxic effects vary with the sex, size, and developmental stage of an organism, as well as with water temperature, alkalinity, salinity, pH, and the presence of other contaminants.
Applications
Chromite can be used as a refractory material because it has a high heat stability. The chromium extracted from chromite is used in chrome plating and alloying for production of corrosion resistant superalloys, nichrome, and stainless steel. Chromium is used as a pigment for glass, glazes, and paint, and as an oxidizing agent for tanning leather. It is also sometimes used as a gemstone.
Usually known as chrome, chromium is an essential industrial metal: it is hard and resistant to corrosion. It is used in nonferrous alloys, the production of stainless steel, leather-processing chemicals, and the creation of pigments. Stainless steel usually contains about 18 percent chromium; the chromium forms a protective surface layer that makes the steel resistant to corrosion.
Most shiny car trim is chromium plated. Superalloys that contain chromium allow jet engines to run under high stress, in a chemically oxidizing environment, and in high-temperature situations.
Porcelain tile pigmentation
Porcelain tiles are produced in many different colours and pigmentations. The usual source of colour in fast-fired porcelain tiles is synthetic black pigment, which is fairly expensive. Natural chromite offers an inexpensive inorganic alternative to this pigment, and its introduction does not substantially alter the microstructure or mechanical properties of the tiles.
Pinus longaeva
Pinus longaeva (commonly referred to as the Great Basin bristlecone pine, intermountain bristlecone pine, or western bristlecone pine) is a long-living species of bristlecone pine tree found in the higher mountains of California, Nevada, and Utah. Methuselah is a Great Basin bristlecone pine that is years old and has been credited as the oldest known living non-clonal organism on Earth. To protect it, the exact location of this tree is kept secret. In 1987, the bristlecone pine was designated one of Nevada's state trees.
Description
It is a medium-size tree, reaching tall and with a trunk diameter of up to . The bark is bright orange-yellow, thin and scaly at the base of the trunk. The needles are in fascicles of five, stout, long, deep green to blue-green on the outer face, with stomata confined to a bright white band on the inner surfaces. The leaves show the longest persistence of any plant, with some remaining green for 45 years (Ewers & Schmid 1981).
The cones are ovoid-cylindrical, long and broad when closed, green or purple at first, ripening orange-buff when 16 months old, with numerous thin, fragile scales, each scale with a bristle-like spine long. The cones open to broad when mature, releasing the seeds immediately after opening. The seeds are long, with a wing; they are mostly dispersed by the wind, but some are also dispersed by Clark's nutcrackers.
These ancient trees have a gnarled and stunted appearance, especially those found at high altitudes, and have reddish-brown bark with deep fissures. As the tree ages, much of its vascular cambium layer may die. In very old specimens, often only a narrow strip of living tissue connects the roots to a handful of live branches.
The Great Basin bristlecone pine differs from the Rocky Mountain bristlecone pine in that the needles of the former always have two uninterrupted resin canals, so it lacks the characteristic small white resin flecks appearing on the needles of the latter. The Great Basin bristlecone pine differs from the foxtail pine because the cone bristles of the former are over long, and the cones have a more rounded (not conic) base. The green pine needles give the twisted branches a bottle-brush appearance. The name 'bristlecone pine' refers to the dark purple female cones that bear incurved prickles on their surface.
Distribution and ecology
The species occurs in Utah, Nevada and eastern California. In California, it is restricted to the White Mountains, the Inyo Mountains, and the Panamint Range, in Mono and Inyo counties. In Nevada, it is found in most of the higher ranges of the Basin and Range from the Spring Mountains near Las Vegas north to the Ruby Mountains, and in Utah, northeast to South Tent in the Wasatch Range. Because many of the sites this species occupies are inaccessible, information on its locations and abundance is incomplete, and further survey work is needed. Environmental niche modelling has been used to better map the distribution of Great Basin bristlecone pine using topographic and spectral variables calculated with a geographic information system (GIS).
The tree grows in large open stands, unlike the related foxtail pine, which sometimes forms dense forests. Pinus longaeva generally does not form a closed canopy, usually covering only 15–50% of the ground. Pinus longaeva shares habitats with a number of other conifer species, including the ponderosa pine, the white fir and, notably, the limber pine, a similarly long-lived high-elevation species. The tree is a "vigorous" primary succession species, growing quickly on new open ground. It is a "poor competitor" in good soils, however, and the species does best in harsh terrain. Pinus longaeva is often the dominant species on high-elevation dolomite soils, where few plants can grow.
Bristlecone pines are protected in a number of areas owned by the United States federal government, such as the Ancient Bristlecone Pine Forest in the White Mountains of California and the Great Basin National Park in Nevada. These areas prohibit the cutting or gathering of wood.
Clark's nutcrackers may play a role in seed distribution for P. longaeva, though direct observations of the birds foraging on these seeds have not been reported. The nutcrackers use conifer seeds as a food resource, burying many in the ground for later use; some of these cached seeds are never retrieved and are able to grow into new plants. Such trees often exhibit a "multi-trunk" growth form, arising from several seeds germinating at the same time. The prevalence of multi-trunk P. longaeva individuals in areas where Clark's nutcrackers are present has been used as evidence that the birds disperse P. longaeva seeds.
An introduced fungal disease known as white pine blister rust (Cronartium ribicola) is believed to affect some individuals. The species was placed on the IUCN Red List and listed as "Vulnerable", or threatened, in 1998. In 2011, however, a population survey found the population of Pinus longaeva to be stable, with no known subpopulations decreasing in size. White pine blister rust was found to have a negligible effect on the population. As a result, the species was moved to "Least Concern".
Fire ecology
The tree is extremely susceptible to fire and is damaged by even low-intensity burns. The resinous bark ignites quickly, and a crown fire will almost certainly kill the tree. However, populations of Pinus longaeva are known to be extremely resilient, and as a primary succession species, the tree is believed to reestablish itself quickly after a fire. That said, large-scale fires are extremely uncommon where the species grows and are not a major factor in its long-term viability. Historically, Pinus longaeva stands experienced low- to high-severity fires, and fuel structures changed considerably across elevational gradients. In low-elevation, mixed-species stands, fuels are often heavy and in close proximity to anthropogenic ignition sources, yet at high elevations near treeline, Pinus longaeva typically grows on limestone outcroppings that provide little or no surface fuel to propagate a wildfire. Warmer temperatures will likely increase the duration of the fire season, and thus the frequency of fire in Pinus longaeva systems at low and mid elevations could increase where stands are typically denser and surface fuel is greatest. While rare, wildfires such as the Carpenter 1 Fire in southern Nevada (July 2013) and the Phillips Fire in Great Basin National Park (September 2000), which started in lower-elevation fuel types and moved through the crowns of trees with the aid of extreme fire weather, could become more likely.
Age
A specimen located in the White Mountains of California was measured by Tom Harlan, a researcher with the Laboratory of Tree-Ring Research, to be 5,062 years old as of 2010. This would make it the oldest known non-clonal tree in the world. Harlan kept the identity of the specimen secret and died in 2013; neither the tree nor the core he studied has since been located, so the tree's age, and even its existence, cannot be confirmed.
The confirmed oldest tree of this species, "Methuselah", is also located in the Ancient Bristlecone Pine Forest of the White Mountains. Methuselah is years old, as measured by annual ring count on a small core taken with an increment borer. Its exact location is kept secret.
Among the White Mountain specimens, the oldest trees are found on north-facing slopes, averaging 2,000 years old, compared with an average of 1,000 years on the southern slopes. The climate and the durability of their wood can preserve them long after death, with dead trees as old as 7,000 years persisting next to live ones.
Leptospirosis
Leptospirosis is a blood infection caused by the bacterium Leptospira that can infect humans, dogs, rodents and many other wild and domesticated animals. Signs and symptoms can range from none to mild (headaches, muscle pains, and fevers) to severe (bleeding in the lungs or meningitis). Weil's disease ( ), the acute, severe form of leptospirosis, causes the infected individual to become jaundiced (skin and eyes become yellow), develop kidney failure, and bleed. Bleeding from the lungs associated with leptospirosis is known as severe pulmonary haemorrhage syndrome.
More than ten genetic types of Leptospira cause disease in humans. Both wild and domestic animals can spread the disease, most commonly rodents. The bacteria are spread to humans through animal urine or feces, or through water or soil contaminated with animal urine and feces, coming into contact with the eyes, mouth, nose or breaks in the skin. In developing countries, the disease occurs most commonly in pest controllers, farmers and low-income people who live in areas with poor sanitation. In developed countries, it occurs during heavy downpours and is a risk to pest controllers, sewage workers and those involved in outdoor activities in warm and wet areas. Diagnosis is typically by testing for antibodies against the bacteria or finding bacterial DNA in the blood.
Efforts to prevent the disease include protective equipment to block contact when working with potentially infected animals, washing after contact, and reducing rodents in areas where people live and work. The antibiotic doxycycline is effective in preventing leptospirosis infection. Human vaccines are of limited usefulness; vaccines for other animals are more widely available. Treatment when infected is with antibiotics such as doxycycline, penicillin, or ceftriaxone. The overall risk of death is 5–10%. However, when the lungs are involved, the risk of death increases to the range of 50–70%.
It is estimated that one million severe cases of leptospirosis in humans occur every year, causing about 58,900 deaths. The disease is most common in tropical areas of the world but may occur anywhere. Outbreaks may arise after heavy rainfall. The disease was first described by physician Adolf Weil in 1886 in Germany. Infected animals may have no, mild or severe symptoms. These may vary by the type of animal. In some animals Leptospira live in the reproductive tract, leading to transmission during mating.
Signs and symptoms
The symptoms of leptospirosis usually appear one to two weeks after infection, but the incubation period can be as long as a month. The illness is biphasic in a majority of symptomatic cases. Symptoms of the first phase (acute or leptospiremic phase) last five to seven days. In the second phase (immune phase), the symptoms resolve as antibodies against the bacteria are produced. Additional symptoms develop in the second phase. The phases of illness may not be distinct, especially in patients with severe illness. 90% of those infected experience mild symptoms while 10% experience severe leptospirosis.
Leptospiral infection in humans causes a range of symptoms, though some infected persons may have none. The disease begins suddenly with fever accompanied by chills, intense headache, severe muscle aches and abdominal pain. The headache brought on by leptospirosis causes throbbing pain and is characteristically located in the temporal or frontal regions on both sides of the head. The person may also have pain behind the eyes and sensitivity to light. Muscle pain usually involves the calf muscles and the lower back. The most characteristic feature of leptospirosis is conjunctival suffusion (conjunctivitis without exudate), which is rarely found in other febrile illnesses. Other characteristic eye findings include subconjunctival bleeding and jaundice. A rash is rarely found in leptospirosis; when one is present, alternative diagnoses such as dengue fever and chikungunya fever should be considered. A dry cough is observed in 20–57% of people with leptospirosis and can mislead a doctor into diagnosing the disease as a respiratory illness. Additionally, gastrointestinal symptoms such as nausea, vomiting, abdominal pain, and diarrhoea frequently occur, and vomiting and diarrhoea may contribute to dehydration. The abdominal pain can be due to acalculous cholecystitis or inflammation of the pancreas. Rarely, the lymph nodes, liver, and spleen may be enlarged and palpable.
There is then a resolution of symptoms lasting one to three days. The immune phase starts after this and can last from four to 30 days, during which complications ranging from brain to kidney involvement can arise. The hallmark of the second phase is inflammation of the membranes covering the brain. Signs and symptoms of meningitis include severe headache and neck stiffness. Kidney involvement is associated with reduced or absent urine output.
The classic form of severe leptospirosis, known as Weil's disease, is characterised by liver damage (causing jaundice), kidney failure, and bleeding, which happens in 5–10% of those infected. Lung and brain damage can also occur. Those with signs of inflammation of the membranes covering the brain and of the brain itself may have an altered level of consciousness. A variety of neurological problems, such as paralysis of half of the body, inflammation of a whole horizontal section of the spinal cord, and Guillain–Barré syndrome, can occur as complications. Signs of bleeding, such as petechiae, ecchymoses, nose bleeding, blackish stools due to bleeding in the stomach, vomiting blood and bleeding from the lungs, can also be found. Prolongation of prothrombin time in coagulation testing is associated with severe bleeding manifestations; however, low platelet count is not. Pulmonary haemorrhage is alveolar haemorrhage (bleeding into the alveoli of the lungs) leading to massive coughing up of blood and causing acute respiratory distress syndrome, where the risk of death is more than 50%. Rarely, inflammation of the heart muscles, inflammation of membranes covering the heart, abnormalities in the heart's natural pacemaker and abnormal heart rhythms may occur.
Cause
Bacteria
Leptospirosis is caused by spirochaete bacteria that belong to the genus Leptospira, which are aerobic, right-handed helical, and 6–20 micrometers long. Like Gram-negative bacteria, Leptospira have an outer membrane studded with lipopolysaccharide (LPS) on the surface, an inner membrane and a layer of peptidoglycan cell wall. However, unlike Gram-negative bacteria, the peptidoglycan layer in Leptospira lies closer to the inner than the outer membrane. This results in a fluid outer membrane loosely associated with the cell wall. In addition, Leptospira have a flagellum located in the periplasm, associated with corkscrew style movement. Chemoreceptors at the poles of the bacteria sense various substrates and change the direction of its movement. The bacteria are traditionally visualised using dark-field microscopy without staining.
A total of 66 species of Leptospira have been identified. Based on their genomic sequences, they are divided into two clades and four subclades: P1, P2, S1, and S2. The 19 members of the P1 subclade include the 8 species that can cause severe disease in humans: L. alexanderi, L. borgpetersenii, L. interrogans, L. kirschneri, L. mayottensis, L. noguchii, L. santarosai, and L. weilii. The P2 subclade comprises 21 species that may cause mild disease in humans. The remaining 26 species make up the S1 and S2 subclades, which include "saprophytes" known to consume decaying matter (saprotrophic nutrition). Pathogenic Leptospira do not multiply in the environment. Leptospira require high humidity for survival but can remain alive in environments such as stagnant water or contaminated soil. The bacteria can be killed by temperatures of and can be inactivated by 70% ethanol, 1% sodium hypochlorite, formaldehyde, detergents and acids.
Leptospira are also classified based on their serovar. The diverse sugar composition of the lipopolysaccharide on the surface of the bacteria is responsible for the antigenic difference between serovars. About 300 pathogenic serovars of Leptospira are recognised. Antigenically related serovars (belonging to the same serogroup) may belong to different species because of horizontal gene transfer of LPS biosynthetic genes between different species. Currently, the cross agglutination absorption test and DNA-DNA hybridisation are used to classify Leptospira species but are time-consuming. Therefore, total genomic sequencing could potentially replace these two methods as the new gold standard of classifying Leptospira species.
Transmission
The bacteria can be found in ponds, rivers, puddles, sewers, agricultural fields, and moist soil. Pathogenic Leptospira have been found in the form of aquatic biofilms, which may aid survival in the environment.
The number of cases of leptospirosis is directly related to the amount of rainfall, making the disease seasonal in temperate climates and year-round in tropical climates. The risk of contracting leptospirosis depends upon the risk of disease carriage in the community and the frequency of exposure. In rural areas, farming and animal husbandry are the major risk factors for contracting leptospirosis. Poor housing and inadequate sanitation also increase the risk of infection. In tropical and semi-tropical areas, the disease often becomes widespread after heavy rains or after flooding.
Leptospira are found mostly in mammals. However, reptiles and cold-blooded animals such as frogs, snakes, turtles, and toads have been shown to carry the infection. Whether there are reservoirs of human infection is unknown. Rats, mice, and moles are important primary hosts, but other mammals including dogs, deer, rabbits, hedgehogs, cows, sheep, swine, raccoons, opossums, and skunks can also carry the disease. In Africa, a number of wildlife hosts have been identified as carriers, including the banded mongoose, Egyptian fox, Rusa deer, and shrews. There are various mechanisms by which animals can infect each other. Dogs may lick the urine of an infected animal off the grass or soil, or drink from an infected puddle. House-bound domestic dogs have contracted leptospirosis, apparently from licking the urine of infected mice in the house. Leptospirosis can also be transmitted via the semen of infected animals. Infected animals may shed the bacteria in their urine continuously for years.
Humans are the accidental host of Leptospira. Humans become infected through contact with water or moist soil containing urine or feces from infected animals. The bacteria enter through cuts, abrasions, ingestion of contaminated food, or contact with the mucous membranes of the body (e.g. mouth, nose, and eyes). Occupations at risk of contracting leptospirosis include farmers, fishermen, garbage collectors, and sewage workers. The disease is also related to adventure tourism and recreational activities. It is common among water-sports enthusiasts in specific areas, including triathlons, water rafting, canoeing and swimming, as prolonged immersion in water promotes the entry of the bacteria. However, Leptospira are unlikely to penetrate intact skin. The disease is not known to spread between humans, and bacterial dissemination during the recovery period is extremely rare in humans. Once humans are infected, bacterial shedding from the kidneys usually persists for up to 60 days.
Rarely, leptospirosis can be transmitted through an organ transplant. Infection through the placenta during pregnancy is also possible, and can cause miscarriage and infection in infants. Leptospirosis transmission through eating the raw meat of wild animals has also been reported (e.g. in psychiatric patients with allotriophagy).
Pathogenesis
When animals ingest the bacteria, they circulate in the bloodstream, then lodge themselves into the kidneys through the glomerular or peritubular capillaries. The bacteria then pass into the lumens of the renal tubules and colonise the brush border and proximal convoluted tubule. This causes the continuous shedding of bacteria in the urine without the animal experiencing significant ill effects. This relationship between the animal and the bacteria is known as a commensal relationship, and the animal is known as a reservoir host.
Humans are the accidental host of Leptospira. The pathogenesis of leptospirosis remains poorly understood despite research efforts. The bacteria enter the human body through a breach in the skin or the mucous membrane, then into the bloodstream. The bacteria later attach to the endothelial cells of the blood vessels and extracellular matrix (a complex network of proteins and carbohydrates present between cells). The bacteria use their flagella to move between cell layers. They bind to cells such as fibroblasts, macrophages, endothelial cells, and kidney epithelial cells. They also bind to several human proteins such as complement proteins, thrombin, fibrinogen, and plasminogen using surface leptospiral immunoglobulin-like (Lig) proteins such as LigB and LipL32, whose genes are found in all pathogenic species.
Through the innate immune system, endothelial cells of the capillaries in the human body are activated by the presence of these bacteria. The endothelial cells produce cytokines and antimicrobial peptides against the bacteria. These products regulate the coagulation cascade and the movements of white blood cells. Human macrophages are able to engulf Leptospira; however, Leptospira can reside and proliferate in the cytoplasmic matrix after being ingested by macrophages. Those with severe leptospirosis can experience high levels of cytokines such as interleukin 6, tumor necrosis factor alpha (TNF-α), and interleukin 10. The high level of cytokines causes life-threatening sepsis-like symptoms rather than helping to fight the infection. Those at high risk of sepsis during a leptospirosis infection have been found to have the HLA-DQ6 genotype, possibly due to superantigen activation, which damages bodily organs.
Leptospira LPS only activates toll-like receptor 2 (TLR2) in monocytes in humans. The lipid A molecule of the bacteria is not recognised by human TLR4 receptors. Therefore, the lack of Leptospira recognition by TLR4 receptors probably contributes to the leptospirosis disease process in humans.
Although the human body has various mechanisms to fight the bacteria, Leptospira is well adapted to the inflammatory conditions it creates. In the bloodstream, it can activate host plasminogen to become plasmin, which breaks down the extracellular matrix and degrades fibrin clots and complement proteins (C3b and C5) to avoid opsonisation. It can also recruit complement regulators such as Factor H, C4b-binding protein, factor H-like binding protein, and vitronectin to prevent the activation of the membrane attack complex on its surface. It also secretes proteases that degrade complement proteins such as C3. It can bind to thrombin, which decreases fibrin formation; reduced fibrin formation increases the risk of bleeding. Leptospira also secretes sphingomyelinase and haemolysin, which target red blood cells.
Leptospira spreads rapidly to all organs through the bloodstream. They mainly affect the liver. They invade spaces between hepatocytes, causing apoptosis. The damaged hepatocytes and hepatocyte intercellular junctions cause bile leakage into the bloodstream, causing elevated levels of bilirubin, resulting in jaundice. Congested liver sinusoids and perisinusoidal spaces have been reported. Meanwhile, in the lungs, petechiae or frank bleeding can be found at the alveolar septum and spaces between alveoli. Leptospira secretes toxins that cause mild to severe kidney failure or interstitial nephritis. The kidney failure can recover completely or lead to atrophy and fibrosis. Rarely, inflammation of the heart muscles, coronary arteries, and aorta are found.
Diagnosis
Laboratory tests
For those who are infected, a complete blood count may show a high white cell count and a low platelet count. When a low haemoglobin count is present together with a low white cell count and thrombocytopenia, bone marrow suppression should be considered. Erythrocyte sedimentation rate and C-reactive protein may also be elevated.
The kidneys are commonly involved in leptospirosis. Blood urea and creatinine levels will be elevated. Leptospirosis increases potassium excretion in urine, which leads to a low potassium level and a low sodium level in the blood. Urinalysis may reveal the presence of protein, white blood cells, and microscopic haematuria. Because the bacteria settle in the kidneys, urine cultures will be positive for leptospirosis from the second week of illness until about 30 days after infection.
For those with liver involvement, transaminases and direct bilirubin are elevated in liver function tests. The Icterohaemorrhagiae serogroup is associated with jaundice and elevated bilirubin levels; haemolytic anaemia contributes to the jaundice. A feature of leptospirosis is acute haemolytic anaemia and conjugated hyperbilirubinaemia, especially in patients with glucose-6-phosphate dehydrogenase deficiency. Abnormal serum amylase and lipase levels (associated with pancreatitis) are found in those who are admitted to hospital due to leptospirosis. Impaired kidney function with creatinine clearance less than 50 ml/min is associated with elevated pancreatic enzymes.
For those with severe headaches who show signs of meningitis, a lumbar puncture can be attempted. If infected, cerebrospinal fluid (CSF) examination shows a lymphocytic predominance with a cell count of about 500/mm3, protein between 50 and 100 mg/dL, and normal glucose levels. These findings are consistent with aseptic meningitis.
Serological tests
Rapid detection of Leptospira can be done by quantifying the IgM antibodies using an enzyme-linked immunosorbent assay (ELISA). Typically, L. biflexa antigen is used to detect the IgM antibodies. This test can quickly determine the diagnosis and help in early treatment. However, the test specificity depends upon the type of antigen used and the presence of antibodies from previous infections. The presence of other diseases such as Epstein–Barr virus infection, viral hepatitis, and cytomegalovirus infection can cause false-positive results. Other rapid screening tests have been developed such as dipsticks, latex, and slide agglutination tests.
The microscopic agglutination test (MAT) is the reference test for the diagnosis of leptospirosis. In MAT, serial dilutions of patient sera are mixed with different serovars of Leptospira. The mixture is then examined under a dark-field microscope for agglutination, and the highest dilution at which 50% agglutination occurs is the result. MAT titres of 1:100 to 1:800 are diagnostic of leptospirosis. A fourfold or greater rise in titre between two sera, taken at symptom onset and again three to ten days later, confirms the diagnosis. During the acute phase of the disease, MAT is not specific in detecting a serotype of Leptospira because of cross-reactivity between the serovars. In the convalescent phase, MAT is more specific in detecting the serovar types. MAT requires a panel of live antigens and is labour-intensive.
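The titre arithmetic above is simple enough to sketch in code. This is an illustrative, hypothetical helper (the function names are invented here), not a clinical tool:

```python
# Hypothetical sketch of the MAT titre rules described above; not a
# clinical tool. A titre is expressed as the reciprocal of the highest
# serum dilution showing at least 50% agglutination (1:400 -> 400).

def is_fourfold_rise(acute_titre: int, convalescent_titre: int) -> bool:
    """A fourfold or greater rise between paired sera confirms diagnosis."""
    return convalescent_titre >= 4 * acute_titre

def single_titre_diagnostic(titre: int) -> bool:
    """Single titres of 1:100 to 1:800 are described as diagnostic."""
    return 100 <= titre <= 800

print(is_fourfold_rise(100, 400))    # 1:100 -> 1:400 is a fourfold rise: True
print(single_titre_diagnostic(50))   # 1:50 is below the diagnostic range: False
```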
Molecular tests
Leptospiral DNA can be amplified by polymerase chain reaction (PCR) from serum, urine, aqueous humour, CSF, and autopsy specimens. During the first few days of infection, it detects the presence of bacteria faster than MAT, without waiting for the appearance of antibodies. Because PCR detects the presence of leptospiral DNA in the blood, it remains useful even after the bacteria have been killed by antibiotics.
Imaging
In those who have lung involvement, a chest X-ray may demonstrate diffuse alveolar opacities.
Diagnostic criteria
In 1982, the World Health Organization (WHO) proposed Faine's criteria for the diagnosis of leptospirosis. It consists of three parts: A (clinical findings), B (epidemiological factors), and C (lab findings and bacteriological data). Since the original Faine's criteria only included culture and MAT in part C, which are difficult and complex to perform, the modified Faine's criteria were proposed in 2004 to include ELISA and slide agglutination tests, which are easier to perform. In 2012, the modified Faine's criteria (with amendment) were proposed to include shortness of breath and coughing up blood in the diagnosis. In 2013, India recommended the modified Faine's criteria for the diagnosis of leptospirosis.
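As a rough illustration of how such a score combines its parts, here is a minimal sketch. The thresholds used (a presumptive diagnosis when A + B ≥ 26, or A + B + C ≥ 25) are drawn from common descriptions of the modified criteria and are assumptions here, not figures from the text above; the WHO documents give the authoritative point values.

```python
# Illustrative sketch only: the threshold values below are assumptions
# based on common descriptions of the modified Faine's criteria, not
# authoritative figures. Part scores A, B, and C would each be summed
# from weighted clinical, epidemiological, and laboratory items.

def faine_presumptive(part_a: int, part_b: int, part_c: int) -> bool:
    return (part_a + part_b >= 26) or (part_a + part_b + part_c >= 25)

print(faine_presumptive(20, 10, 0))  # A + B = 30 meets the threshold: True
print(faine_presumptive(10, 5, 4))   # 19 in total falls short: False
```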
Prevention
Rates of leptospirosis can be reduced by improving housing, infrastructure, and sanitation standards. Rodent abatement efforts and flood mitigation projects can also help to prevent it. Proper use of personal protective equipment (PPE) by people who have a high risk of occupational exposure can prevent leptospirosis infections in most cases.
There is no human vaccine suitable for worldwide use. Only a few countries, such as Cuba, Japan, France, and China, have approved inactivated vaccines with limited protective effects. Side effects such as nausea and injection-site redness and swelling have been reported after vaccination. Since the immunity induced by one Leptospira serovar is only protective against that specific serovar, trivalent vaccines have been developed. They do not confer long-lasting immunity to humans or animals. Vaccines for other animals are more widely available.
Doxycycline is given once a week as a prophylaxis and is effective in reducing the rate of leptospirosis infections amongst high-risk individuals in flood-prone areas. In one study, it reduced the number of leptospirosis cases in military personnel undergoing exercises in the jungles. In another study, it reduced the number of symptomatic cases after exposure to leptospirosis under heavy rainfall in endemic areas.
Leptospirosis from environmental sources such as contaminated waterways, soil, sewers, and agricultural fields can be mitigated by disinfection with effective microorganisms, which are mixed into bokashi mud balls and applied to infected waterways and sewers.
Treatment
Most leptospiral cases resolve spontaneously. Early initiation of antibiotics may prevent the progression to severe disease. Therefore, in resource-limited settings, antibiotics can be started once leptospirosis is suspected after history taking and examination.
For mild leptospirosis, antibiotic recommendations such as doxycycline, azithromycin, ampicillin, and amoxicillin were based solely on in vitro testing. In 2001, the WHO recommended oral doxycycline (2 mg/kg up to 100 mg every 12 hours) for five to seven days for those with mild leptospirosis. Tetracycline, ampicillin, and amoxicillin can also be used in such cases. However, in areas where both rickettsia and leptospirosis are endemic, azithromycin and doxycycline are the drugs of choice. Doxycycline is not used in cases where the patient suffers from liver damage as it has been linked to hepatotoxicity.
Based on a 1988 study, intravenous (IV) benzylpenicillin (also known as penicillin G) is recommended for the treatment of severe leptospirosis. Intravenous benzylpenicillin (30 mg/kg up to 1.2 g every six hours) is used for five to seven days. Amoxicillin, ampicillin, and erythromycin may also be used for severe cases. Ceftriaxone (1 g IV every 24 hours for seven days) is also effective for severe leptospirosis. Cefotaxime (1 g IV every six hours for seven days) and doxycycline (200 mg initially followed by 100 mg IV every 12 hours for seven days) are as effective as benzylpenicillin (1.5 million units IV every six hours for seven days). There is thus no evidence of a difference in mortality when benzylpenicillin is compared with ceftriaxone or cefotaxime. Another study, conducted in 2007, also showed no difference in efficacy between doxycycline (200 mg initially followed by 100 mg orally every 12 hours for seven days) and azithromycin (2 g on day one followed by 1 g daily for two more days) for suspected leptospirosis. There was no difference in the resolution of fever, and azithromycin was better tolerated than doxycycline.
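Dose expressions such as "30 mg/kg up to 1.2 g" describe a weight-based dose with an absolute cap; the arithmetic is simply the minimum of the two. A minimal sketch, illustrative only and not dosing advice:

```python
# Weight-based dose with an absolute cap, as in "30 mg/kg up to 1.2 g".
# Illustrative arithmetic only -- not dosing advice.

def capped_dose_mg(weight_kg: float, mg_per_kg: float, cap_mg: float) -> float:
    return min(weight_kg * mg_per_kg, cap_mg)

print(capped_dose_mg(30, 30, 1200))  # 900.0 -- below the 1.2 g cap
print(capped_dose_mg(70, 30, 1200))  # 1200.0 -- capped at 1.2 g
```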
Outpatients are given doxycycline or azithromycin. Doxycycline can shorten the duration of leptospirosis by two days, improve symptoms, and prevent the shedding of organisms in their urine. Azithromycin and amoxicillin are given to pregnant women and children. Rarely, a Jarisch–Herxheimer reaction can develop in the first few hours after antibiotic administration. However, according to a meta-analysis done in 2012, the benefit of antibiotics in the treatment of leptospirosis was unclear although the use of antibiotics may reduce the duration of illness by two to four days. Another meta-analysis done in 2013 reached a similar conclusion.
For those with severe leptospirosis, including high-output kidney dysfunction with potassium wasting, intravenous hydration and potassium supplements can prevent dehydration and hypokalemia. When acute kidney failure occurs, early initiation of haemodialysis or peritoneal dialysis can help to improve survival. For those with respiratory failure, tracheal intubation with low tidal volume improves survival rates.
Corticosteroids have been proposed to suppress inflammation in leptospirosis because Leptospira infection can induce the release of chemical signals which promote inflammation of blood vessels in the lungs. However, there is insufficient evidence to determine whether the use of corticosteroids is beneficial.
Prognosis
The overall risk of death for leptospirosis is 5–10%. For those with jaundice, the case fatality can increase up to 15%. For those infected who present with confusion and neurological signs, there is a high risk of death. Other factors that increase the risk of death include reduced urine output, age more than 36 years, and respiratory failure. With proper care, most of those infected will recover completely. Those with acute kidney failure may develop persistent mild kidney impairment after they recover. In those with severe lung involvement, the risk of death is 50–70%. Thirty percent of people with acute leptospirosis complained of long-lasting symptoms characterised by weakness, muscle pain, and headaches.
Eye complications
Eye problems can occur in 10% of those who have recovered from leptospirosis, anywhere from two weeks to a few years post-infection, most commonly around six months after the infection. The delay is due to the immune privilege of the eye, which protects it from immunological damage during the initial phase of leptospiral infection. These complications can range from mild anterior uveitis to severe panuveitis (which involves all three vascular layers of the eye). The uveitis more commonly occurs in young to middle-aged males and those working in agricultural farming. In up to 80% of those infected, Leptospira DNA can be found in the aqueous humour of the eye. Eye problems usually have a good prognosis following treatment, or they are self-limiting. Anterior uveitis requires only topical steroids and mydriatics (agents that dilate the pupil), while panuveitis requires periocular corticosteroids. Leptospiral uveitis is characterised by hypopyon, rapidly maturing cataract, free-floating vitreous membranes, disc hyperaemia, and retinal vasculitis.
Epidemiology
It is estimated that one million severe cases of leptospirosis occur annually, with 58,900 deaths. Severe cases account for 5–15% of all leptospirosis cases. Leptospirosis is found in both urban and rural areas in tropical, subtropical, and temperate regions. The global health burden for leptospirosis can be measured by disability-adjusted life year (DALY). The score is 42 per 100,000 people per year, which is more than other diseases such as rabies and filariasis.
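The two figures quoted above imply a case fatality among severe cases of roughly 5.9%, consistent with the 5–10% overall risk of death given under Prognosis; a quick check:

```python
# Cross-check of the figures quoted above: deaths among the estimated
# severe cases gives the implied case fatality ratio.
deaths = 58_900
severe_cases = 1_000_000
cfr = deaths / severe_cases
print(f"{cfr:.1%}")  # 5.9%
```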
The disease is observed persistently in parts of Asia, Oceania, the Caribbean, Latin America and Africa. Antarctica is the only place not affected by leptospirosis. In the United States, there were 100 to 150 leptospirosis cases annually. In 1994, leptospirosis ceased to be a notifiable disease in the United States except in 36 states/territories where it is prevalent such as Hawaii, Texas, California, and Puerto Rico. About 50% of the reported cases occurred in Puerto Rico. In January 2013, leptospirosis was reinstated as a nationally notifiable disease in the United States. Research on epidemiology of leptospirosis in high-risk groups and risk factors is limited in India.
The global rates of leptospirosis have been underestimated because most affected countries lack notification, or notification is not mandatory. Distinguishing the clinical signs of leptospirosis from those of other diseases, and the lack of laboratory diagnostic services, are other problems. The socioeconomic status of much of the world's population is closely tied to malnutrition; the consequent lack of micronutrients may lead to an increased risk of infection and death due to leptospirosis. Micronutrients such as iron, calcium, and magnesium represent important areas for future research.
History
The disease was first described by Adolf Weil in 1886 when he reported an "acute infectious disease with enlargement of spleen, jaundice, and nephritis." Before Weil's description, the disease was known as "rice field jaundice" in ancient Chinese texts, and as "autumn fever", "seven-day fever", and "nanukayami fever" in Japan; in Europe and Australia, the disease was associated with certain occupations and given names such as "cane-cutter's disease", "swine-herd's disease", and "Schlammfieber" (mud fever). It has been known historically as "black jaundice", or as "dairy farm fever" in New Zealand. Leptospirosis was postulated as the cause of an epidemic among Native Americans along the coast of what is now New England during 1616–1619. The disease was most likely brought to the New World by Europeans.
Leptospira was first observed in 1907, in a post-mortem kidney tissue slice, by Arthur Stimson using a silver deposition staining technique. He called the organism Spirochaeta interrogans because the bacteria resembled a question mark. In 1908, a Japanese research group led by Ryukichi Inada and Yutaka Ito first identified this bacterium as the causative agent of leptospirosis, and in 1916 noted its presence in rats. Japanese coal mine workers frequently contracted leptospirosis. In Japan, the organism was named Spirochaeta icterohaemorrhagiae. The Japanese group also conducted the first leptospiral immunisation studies in guinea pigs. They demonstrated that by injecting infected guinea pigs with sera from convalescent humans or goats, passive immunity could be provided to the guinea pigs. In 1917, the Japanese group identified rats as the carriers of leptospirosis. Unaware of the Japanese group's work, two German groups independently and almost simultaneously published their first demonstrations of transmitting leptospiral infection in guinea pigs in October 1915. They named the organism Spirochaeta nodosa and Spirochaeta icterogenes respectively.
Leptospirosis was subsequently recognised as a disease of all mammalian species. In 1933, Dutch workers reported the isolation of Leptospira canicola which specifically infects dogs. In 1940, the strain that specifically infects cattle was first reported in Russia. In 1942, soldiers at Fort Bragg, North Carolina, were recorded to have an infectious disease which caused a rash over their shinbones. This disease was later known to be caused by leptospirosis. By the 1950s, the number of serovars that infected various mammals had expanded significantly. In the 1980s, leptospirosis was recognised as a veterinary disease of major economic importance.
In 1982, there were about 200 serovars of Leptospira available for classification. The International Committee on Systematic Bacteriology's subcommittee on taxonomy of Leptospira proposed classifying these serovars into two big groups: L. interrogans containing pathogenic serovars and L. biflexa containing saprophytic serovars. In 1979, the leptospiral family of Leptospiraceae was proposed. In the same year, Leptospira illini was reclassified as the new genus Leptonema. In 2002, "Lepthangamushi syndrome" was coined to describe a series of overlapping symptoms of leptospirosis with Hantavirus hemorrhagic fever with renal syndrome, and scrub typhus caused by Orientia tsutsugamushi. In 2005, Leptospira parva was classified as Turneriella. With DNA-DNA hybridisation technology, L. interrogans was divided into seven species. More Leptospira species have been discovered since then. The WHO established the Leptospirosis Burden Epidemiology Reference Group (LERG) to review the latest disease epidemiological data of leptospirosis, formulate a disease transmission model, and identify gaps in knowledge and research. The first meeting was convened in 2009. In 2011, LERG estimated that the global yearly rate of leptospirosis is five to 14 cases per 100,000 population.
Other animals
Infected animals can have no, mild, or severe symptoms; the presenting symptoms may vary by the type of animal. In some animals, the bacteria live in the reproductive tract, leading to transmission during mating.
Infected animals present with clinical features similar to those in humans. Clinical signs can appear within 5–15 days in dogs. The incubation period can be prolonged in cats. Leptospirosis can cause abortions after 2–12 weeks of infection in cattle, and after 1–4 weeks in pigs. The illness tends to be milder in reservoir hosts. The most commonly affected organs are the kidneys, liver, and reproductive system, but other organs can be affected. In dogs, the acute clinical signs include fever, loss of appetite, shivering, muscle pain, weakness, and urinary symptoms. Vomiting, diarrhea, and abdominal pain may also occur. Petechiae and ecchymoses may be seen on mucous membranes. Bleeding from the lungs may also be seen in dogs. In chronic presentations, the affected dog may have no symptoms. In animals that have died of leptospirosis, the kidneys may be swollen, with grey and white spots, mottling, or scarring. The liver may be enlarged, with areas of cell death. Petechiae and ecchymoses may be found in various organs. Inflammation of the blood vessels, the heart, and the meningeal layers covering the brain and spinal cord, as well as uveitis, are also possible. Equine recurrent uveitis (ERU) is the most common disease associated with Leptospira infection in horses in North America and may lead to blindness. ERU is an autoimmune disease involving antibodies against the Leptospira proteins LruA and LruB cross-reacting with eye proteins. Live Leptospira can be recovered from the aqueous or vitreous fluid of many horses with Leptospira-associated ERU. The risk of death or disability in infected animals varies depending upon the species and age of the animal. In adult pigs and cattle, reproductive signs are the most common manifestation of leptospirosis. Up to 40% of cows may have a spontaneous abortion. Younger animals usually develop more severe disease. About 80% of dogs can survive with treatment, but the survival rate is reduced if the lungs are involved.
ELISA and microscopic agglutination tests are most commonly used to diagnose leptospirosis in animals. The bacteria can be detected in blood, urine, and milk, or in liver, kidney, or other tissue samples, by using immunofluorescence, immunohistochemical, or polymerase chain reaction techniques. Silver staining or immunogold silver staining is used to detect Leptospira in tissue sections. The organisms stain poorly with Gram stain. Dark-field microscopy can be used to detect Leptospira in body fluids, but it is neither sensitive nor specific in detecting the organism. A positive culture for leptospirosis is definitive, but availability is limited, and culture can take 13–26 weeks to yield a result, limiting its utility. Paired acute and convalescent samples are preferred for serological diagnosis of leptospirosis in animals. A positive serological sample from an aborted fetus is also diagnostic of leptospirosis.
Various antibiotics such as doxycycline, penicillins, dihydrostreptomycin, and streptomycin have been used to treat leptospirosis in animals. Fluid therapy, blood transfusion, and respiratory support may be required in severe disease. For horses with ERU, the primary treatment is with anti-inflammatory drugs.
Leptospirosis vaccines are available for animals such as pigs, dogs, cattle, sheep, and goats. Vaccines for cattle usually contain Leptospira serovars Hardjo and Pomona; vaccines for dogs usually contain serovars Icterohaemorrhagiae and Canicola. Vaccines containing multiple serovars do not work as well for cattle as vaccines containing a single serovar, yet the multivalent vaccines continue to be sold. Isolation of infected animals and prophylactic antibiotics are also effective in preventing leptospirosis transmission between animals. Environmental control and sanitation also reduce transmission rates.
| Biology and health sciences | Bacterial infections | Health |
650327 | https://en.wikipedia.org/wiki/Morning | Morning | Morning is the period from sunrise to noon. It is preceded by the twilight period of dawn. There are no exact times for when morning begins (also true of evening and night) because it can vary according to one's lifestyle, latitude, and the hours of daylight at each time of year. However, morning strictly ends at noon, when afternoon starts.
Morning precedes afternoon, evening, and night in the sequence of a day. Originally, the term referred to sunrise.
Etymology
The Modern English words "morning" and "tomorrow" developed through a series of Middle English forms. English, unlike some other languages, has separate terms for "morning" and "tomorrow", despite their common root. Other languages, like Dutch, Scots and German, may use a single word to signify both "morning" and "tomorrow".
Significance
Cultural implications
Morning prayer is a common practice in several religions. The morning period includes specific phases of the Liturgy of the Hours of Christianity.
Some languages that use the time of day in greeting have a special greeting for morning, such as the English good morning. The appropriate time to use such greetings, such as whether they may be used between midnight and dawn, depends on the culture's or speaker's concept of morning. The use of good morning is ambiguous, usually depending on when the person woke up. As a general rule, the greeting is normally used from 3:00 a.m. to around noon.
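The rule of thumb above is easy to encode. In this toy sketch only the morning window (3:00 a.m. to around noon) comes from the text; the afternoon and evening cut-offs are illustrative assumptions:

```python
# Toy encoding of the greeting rule above. Only the "good morning"
# window (3:00 a.m. to around noon) comes from the text; the afternoon
# and evening cut-offs are illustrative assumptions.

def greeting(hour: int) -> str:
    if 3 <= hour < 12:
        return "good morning"
    if 12 <= hour < 18:
        return "good afternoon"
    return "good evening"

print(greeting(9))   # good morning
print(greeting(14))  # good afternoon
```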
Many people greet someone with the shortened 'morning' rather than 'good morning'. It is used as a greeting, never a farewell, unlike 'good night', which is used as the latter. To show respect, one can add the addressee's last name after the salutation: "Good morning, Mr. Smith."

For some, the word morning may refer to the period immediately following waking up, irrespective of the current time of day. This modern sense of morning is due largely to the worldwide spread of electricity, and the independence from natural light sources.
Astronomy
When a star first appears in the east just prior to sunrise, it is referred to as a heliacal rising. Despite the less favorable lighting conditions for optical astronomy, dawn and morning can be useful for observing objects orbiting close to the Sun. Morning (and evening) serves as the optimum time period for viewing the inferior planets Venus and Mercury. Venus and sometimes Mercury may be referred to as a morning star when they appear in the east prior to sunrise. It is a popular time to hunt for comets, as their tails grow more prominent as these objects draw closer to the Sun. The morning (and evening) twilight is used to search for near-Earth asteroids that orbit inside the orbit of the Earth. In mid-latitudes, the mornings near the autumnal equinox are a favorable time period for viewing the zodiacal light.
Genetics
For people, the morning may be a period of enhanced or reduced energy and productivity. The ability of a person to wake up effectively in the morning may be influenced by a gene called "Period 3". This gene comes in two forms, a "long" and a "short" variant, and seems to affect the person's preference for mornings or evenings. People who carry the long variant are over-represented among morning people, while those carrying the short variant tend to prefer evenings.
| Physical sciences | Celestial mechanics | Astronomy |
4343813 | https://en.wikipedia.org/wiki/Portunidae | Portunidae | Portunidae is a family of crabs which contains the swimming crabs. Its members include many well-known shoreline crabs, such as the blue crab (Callinectes sapidus) and velvet crab (Necora puber). Two genera in the family are contrastingly named Scylla and Charybdis; the former contains the economically important species black crab (Scylla serrata) and Scylla paramamosain.
Description
Portunid crabs are characterised by the flattening of the fifth pair of legs into broad paddles, which are used for swimming. This ability, together with their strong, sharp claws, allows many species to be fast and aggressive predators.
Taxonomy
Swimming crabs reach their greatest species diversity in the Pacific and Indian Oceans. The following species are recognized in the family Portunidae:
Extinct genera are marked with an obelisk.
Achelouinae Spiridonov, 2020
Achelous De Haan, 1833
Caphyrinae Guérin, 1832
Caphyra Guérin, 1832
Coelocarcinus Edmondson, 1930
Lissocarcinus Adams & White, 1849
†Mioxaiva Müller, 1978
Carupinae Paulson, 1875
Carupa Dana, 1851
Catoptrus A. Milne-Edwards, 1870
†Euronectes Karasawa, Schweitzer & Feldmann, 2008
Kume Naruse & Ng, 2012
Laleonectes Manning & Chace, 1990
Libystes A. Milne-Edwards, 1867
Pele Ng, 2011
†Rakosia Müller, 1984
Richerellus Crosnier, 2003
Coelocarcininae Števćić, 1991
Coelocarcinus Edmondson, 1930
Lupocyclinae Alcock, 1899
Lupocycloporus Alcock, 1899
Lupocyclus Adams & White, 1849
Necronectinae Glaessner, 1928
†Necronectes A. Milne-Edwards, 1881
Scylla De Haan, 1833
Podophthalminae Dana, 1851
Euphylax Stimpson, 1860
†Paraeuphylax Varela & Schweitzer, 2011
†Phenophthalmus Feldmann, Schweitzer & Encinas, 2010
Podophthalmus Lamarck, 1801
†Psygmophthalmus Schweitzer, Iturralde-Vinent, Hetler & Velez-Juarbe, 2006
†Sandomingia Rathbun, 1919
†Saratunus Collins, Lee & Noad, 2003
†Viaophthalmus Karasawa, Schweitzer & Feldmann, 2008
Vojmirophthalmus Števčić, 2011
Portuninae Rafinesque, 1815
†Acanthoportunus Schweitzer & Feldmann, 2002
Alionectes Koch, Spiridonov & Ďuriš, 2022
Allomonomia Koch, Spiridonov & Ďuriš, 2022
†Archaeoportunus Artal, Ossó & Domínguez, 2021
Arenaeus Dana, 1851
Atoportunus Ng & Takeda, 2003
Callinectes Stimpson, 1860
Carupella Lenz in Lenz & Strunck, 1914
Cavoportunus Nguyen & Ng, 2010
†Colneptunus Lőrenthey in Lőrenthey & Beurlen, 1929
Cycloachelous Ward, 1942
Eodemus Koch, Spiridonov & Ďuriš, 2022
Incultus Koch, Spiridonov & Ďuriš, 2022
Monomia Gistel, 1848
Portunus Weber, 1795
†Rathbunites Schweitzer, Dworschak & Martin, 2011
Sanquerus Manning, 1989
Trionectes Koch, Spiridonov & Ďuriš, 2022
Xiphonectes A. Milne-Edwards, 1873
Thalamitinae Paulson, 1875
Caphyra Guérin, 1832
Charybdis De Haan, 1833
Cronius Stimpson, 1860
†Eocharybdis Beschin, Busulini, De Angeli & Tessier, 2002
Gonioinfradens Leene, 1938
Goniosupradens Davie, 2002
†Lessinithalamita De Angeli & Ceccon, 2015
Lissocarcinus Adams & White, 1849
†Mioxaiva Müller, 1979
Thalamita Latreille, 1829
Thalamitoides A. Milne-Edwards, 1869
Zygita
incertae sedis
†Neptocarcinus Lörenthey, 1897
†Pseudoachelous Portell & Collins, 2004
| Biology and health sciences | Crabs and hermit crabs | Animals |
3191861 | https://en.wikipedia.org/wiki/Abdomen | Abdomen | The abdomen (colloquially called the belly, tummy, midriff, tucky, or stomach) is the front part of the torso between the thorax (chest) and pelvis in humans and in other vertebrates. The area occupied by the abdomen is called the abdominal cavity. In arthropods, it is the posterior tagma of the body; it follows the thorax or cephalothorax.
In humans, the abdomen stretches from the thorax at the thoracic diaphragm to the pelvis at the pelvic brim. The pelvic brim stretches from the lumbosacral joint (the intervertebral disc between L5 and S1) to the pubic symphysis and is the edge of the pelvic inlet. The space above this inlet and under the thoracic diaphragm is termed the abdominal cavity. The boundary of the abdominal cavity is the abdominal wall in the front and the peritoneal surface at the rear.
In vertebrates, the abdomen is a large body cavity enclosed by the abdominal muscles, at the front and to the sides, and by part of the vertebral column at the back. Lower ribs can also enclose ventral and lateral walls. The abdominal cavity is continuous with, and above, the pelvic cavity. It is attached to the thoracic cavity by the diaphragm. Structures such as the aorta, inferior vena cava and esophagus pass through the diaphragm. Both the abdominal and pelvic cavities are lined by a serous membrane known as the parietal peritoneum. This membrane is continuous with the visceral peritoneum lining the organs. The abdomen in vertebrates contains a number of organs belonging to, for instance, the digestive system, urinary system, and muscular system.
Contents
The abdominal cavity contains most organs of the digestive system, including the stomach, the small intestine, and the colon with its attached appendix. Other digestive organs are known as the accessory digestive organs and include the liver, its attached gallbladder, and the pancreas, and these communicate with the rest of the system via various ducts. The spleen, and organs of the urinary system including the kidneys, and adrenal glands also lie within the abdomen, along with many blood vessels including the aorta and inferior vena cava. The urinary bladder, uterus, fallopian tubes, and ovaries may be seen as either abdominal organs or as pelvic organs. Finally, the abdomen contains an extensive membrane called the peritoneum. A fold of peritoneum may completely cover certain organs, whereas it may cover only one side of organs that usually lie closer to the abdominal wall. This is called the retroperitoneum, and the kidneys and ureters are known as retroperitoneal organs.
Muscles
There are three layers of muscles in the abdominal wall. They are, from the outside to the inside: external oblique, internal oblique, and transverse abdominal. These three layers extend between the vertebral column, the lower ribs, the iliac crest, and the pubis of the hip. All of their fibers merge towards the midline and surround the rectus abdominis in a sheath before joining up on the opposite side at the linea alba. Strength is gained by the criss-crossing of fibers: the external oblique runs downward and forward, the internal oblique upward and forward, and the transverse abdominal horizontally forward.
The transverse abdominal muscle is flat and triangular, with its fibers running horizontally. It lies between the internal oblique and the underlying transverse fascia. It originates from the inguinal ligament, costal cartilages 7–12, the iliac crest, and the thoracolumbar fascia, and inserts into the conjoint tendon, xiphoid process, linea alba, and pubic crest.
The rectus abdominis muscles are long and flat. The muscle is crossed by three fibrous bands called the tendinous intersections. The rectus abdominis is enclosed in a thick sheath, formed as described above by fibers from each of the three muscles of the lateral abdominal wall. They originate at the pubis bone, run up the abdomen on either side of the linea alba, and insert into the cartilages of the fifth, sixth, and seventh ribs. In the region of the groin, the inguinal canal is a passage through the layers. This gap is where the testes can drop through the wall and where the fibrous cord from the uterus in the female runs. This is also where weakness can form and cause inguinal hernias.
The pyramidalis muscle is small and triangular. It is located in the lower abdomen in front of the rectus abdominis. It originates at the pubic bone and is inserted into the linea alba halfway up to the navel.
Function
Functionally, the human abdomen is where most of the digestive tract is placed and so most of the absorption and digestion of food occurs here. The alimentary tract in the abdomen consists of the lower esophagus, the stomach, the duodenum, the jejunum, ileum, the cecum and the appendix, the ascending, transverse and descending colons, the sigmoid colon and the rectum. Other vital organs inside the abdomen include the liver, the kidneys, the pancreas and the spleen.
The abdominal wall is split into the posterior (back), lateral (sides), and anterior (front) walls.
Movement, breathing and other functions
The abdominal muscles have different important functions. They assist as muscles of exhalation in the breathing process during forceful exhalation. Moreover, these muscles serve as protection for the inner organs. Furthermore, together with the back muscles they provide postural support and are important in defining the form. When the glottis is closed and the thorax and pelvis are fixed, they are integral in coughing, urination, defecation, childbirth, vomiting, and singing. When the pelvis is fixed, they can initiate the movement of the trunk in a forward motion. They also prevent hyperextension. When the thorax is fixed, they can pull up the pelvis, and finally, they can bend the vertebral column sideways and assist in the trunk's rotation.
Posture
The transverse abdominis muscle is the deepest muscle; therefore, it cannot be touched from the outside. It can greatly affect the body's posture. The internal obliques are also deep and also affect body posture. Both of them are involved in rotation and lateral flexion of the spine and are used to bend and support the spine from the front. The external obliques are more superficial and are also involved in rotation and lateral flexion of the spine. They also stabilize the spine when upright. The rectus abdominis muscle is not the most superficial abdominal muscle; the tendinous sheath extending from the external obliques covers it. The rectus abdominis is the muscle that very fit people develop into "six-pack" abs, though there are five vertical sections on each side. The two bottom sections are just above the pubic bone and usually not visible. The rectus abdominis' function is to bend one's back forward (flexion). The main work of the abdominal muscles is to bend the spine forward when contracting concentrically.
Society and culture
Social and cultural perceptions of the outward appearance of the abdomen have varying significance around the world. Depending on the type of society, excess weight can be perceived as an indicator of wealth and prestige due to excess food, or as a sign of poor health due to lack of exercise. In many cultures, bare abdomens are distinctly sexualized and perceived similarly to breast cleavage.
Exercise
Because the abdominal muscles are key elements of spinal support and contributors to good posture, it is important to exercise them properly together with the back muscles; when these muscles are weak or overly tight, they can suffer painful spasms and injuries. When properly exercised, abdominal muscles contribute to improved posture and balance, reduce the likelihood of back pain episodes, reduce the severity of back pain, protect against injury, help avoid some back surgeries, and help with the healing of back problems, or after spine surgery. When strengthened, the abdominal muscles also provide flexibility. The abdominal muscles can be worked by strength and fitness exercises, and through practicing disciplines of general body strength such as Pilates, yoga, tai chi, and jogging.
Clinical significance
Abdominal obesity is a condition where abdominal fat, or visceral fat, has built up excessively between the abdominal organs. This is associated with a higher risk of heart disease, asthma and type 2 diabetes.
Abdominal trauma is an injury to the abdomen and can involve damage to the abdominal organs. There is an associated risk of severe blood loss and infection. Injury to the lower chest can cause injuries to the spleen and liver.
A scaphoid abdomen is when the abdomen is sucked inwards. In a newborn, it may represent a diaphragmatic hernia. In general, it is indicative of malnutrition.
Disease
Many gastrointestinal diseases affect the abdominal organs. These include stomach disease, liver disease, pancreatic disease, gallbladder and bile duct disease; intestinal diseases include enteritis, coeliac disease, diverticulitis, and irritable bowel syndrome.
Examination
Different medical procedures can be used to examine the organs of the gastrointestinal tract. These include endoscopy, colonoscopy, sigmoidoscopy, enteroscopy, oesophagogastroduodenoscopy and virtual colonoscopy. There are also a number of medical imaging techniques that can be used. Surface landmarks are important in the examination of the abdomen.
Surface landmarks
In the mid-line, a slight furrow extends from the xiphoid process above to the pubic symphysis below, representing the linea alba in the abdominal wall. At about its midpoint sits the umbilicus or navel. The rectus abdominis on each side of the linea alba stands out in muscular people. The outline of these muscles is interrupted by three or more transverse depressions indicating the tendinous intersections. There is usually one near the xiphoid process, one at the navel, and one in between. It is the combination of the linea alba and the tendinous intersections which forms the abdominal "six-pack" sought after by many people.
The upper lateral limit of the abdomen is the subcostal margin (at or near the subcostal plane) formed by the cartilages of the false ribs (8, 9, 10) joining one another. The lower lateral limit is the anterior crest of the ilium and Poupart's ligament, which runs from the anterior superior spine of the ilium to the spine of the pubis. These lower limits are marked by visible grooves. Just above the pubic spines on either side are the external abdominal rings, which are openings in the muscular wall of the abdomen through which the spermatic cord emerges in the male, and through which an inguinal hernia may rupture.
One method by which the location of the abdominal contents can be appreciated is to draw three horizontal and two vertical lines.
Horizontal lines
The highest of the former is the transpyloric line of C. Addison, which is situated halfway between the suprasternal notch and the top of the pubic symphysis, and often cuts the pyloric opening of the stomach an inch to the right of the mid-line. The hilum of each kidney is a little below it, while its left end approximately touches the lower limit of the spleen. It corresponds to the first lumbar vertebra behind.
The second line is the subcostal line, drawn from the lowest point of the subcostal arch (tenth rib). It corresponds to the upper part of the third lumbar vertebra, and it is an inch or so above the umbilicus. It indicates roughly the transverse colon, the lower ends of the kidneys, and the upper limit of the transverse (3rd) part of the duodenum.
The third line is called the intertubercular line, and runs across between the two rough tubercles, which can be felt on the outer lip of the crest of the ilium about from the anterior superior spine. This line corresponds to the body of the fifth lumbar vertebra, and passes through or just above the ileo-caecal valve, where the small intestine joins the large intestine.
Vertical lines
The two vertical or mid-Poupart lines are drawn from the point midway between the anterior superior spine and the pubic symphysis on each side, vertically upward to the costal margin.
The right one is the most valuable, as the ileo-caecal valve is situated where it cuts the intertubercular line. The orifice of the appendix lies an inch lower, at McBurney's point. In its upper part, the vertical line meets the transpyloric line at the lower margin of the ribs, usually the ninth, and here the gallbladder is situated.
The left mid-Poupart line corresponds in its upper three-quarters to the inner edge of the descending colon.
The right subcostal margin corresponds to the lower limit of the liver, while the right nipple is about half an inch above its upper limit.
Quadrants and regions
The abdomen can be divided into quadrants or regions to describe the location of an organ or structure. Classically, quadrants are described as the left upper, left lower, right upper, and right lower. Quadrants are also often used in describing the site of abdominal pain.
The abdomen can also be divided into nine regions.
These terms stem from "hypo" meaning "below" and "epi" meaning "above", while "chondron" means "cartilage" (in this case, the cartilage of the rib) and "gaster" means "stomach". The reversal of "left" and "right" is intentional, because the anatomical designations reflect the patient's own right and left.
The "right iliac fossa" (RIF) is a common site of pain and tenderness in patients who have appendicitis. The fossa is named for the underlying iliac fossa of the hip bone, and thus is somewhat imprecise: most of the anatomical structures that produce pain and tenderness in this region do not in fact lie in the concavity of the ilium. However, the term is in common usage.
Across animal phyla and classes
Chordata
Mammals
Abdominal organs can be highly specialized in some mammals. For example, the stomach of ruminants (a suborder of mammals that includes cattle and sheep) is divided into four chambers – rumen, reticulum, omasum and abomasum.
Arthropoda
In arthropods, the abdomen is built up of a series of upper plates known as tergites and lower plates known as sternites, the whole being held together by a tough yet stretchable membrane.
Insects
In insects, the abdomen contains the insect's digestive tract and reproductive organs. It consists of eleven segments in most orders of insects, though the eleventh segment is absent in the adults of most higher orders. The number of segments varies from species to species, with the number visible reduced to only seven in the common honey bee. In the Collembola (springtails), the abdomen has only six segments.
The abdomen is sometimes highly modified. In Apocrita (bees, ants and wasps), the first segment of the abdomen is fused to the thorax and is called the propodeum. In ants, the second segment forms the narrow petiole. Some ants have an additional postpetiole segment, and the remaining segments form the bulbous gaster. The petiole and gaster (abdominal segments 2 and onward) are collectively called the metasoma.
Unlike other arthropods, insects possess no legs on the abdomen in adult form, though the Protura do have rudimentary leg-like appendages on the first three abdominal segments, and Archaeognatha possess small, articulated "styli" which are sometimes considered to be rudimentary appendages. Many larval insects including the Lepidoptera and the Symphyta (sawflies) have fleshy appendages called prolegs on their abdominal segments (as well as their more familiar thoracic legs), which allow them to grip onto the edges of plant leaves as they walk around.
Arachnida
In arachnids (spiders, scorpions and relatives), the term "abdomen" is used interchangeably with "opisthosoma" ("hind body"), which is the body section posterior to that bearing the legs and head (the prosoma or cephalothorax).
| Biology and health sciences | Animal: General | null |
3192578 | https://en.wikipedia.org/wiki/Potamotrygonidae | Potamotrygonidae | River stingrays or freshwater stingrays are Neotropical freshwater fishes of the family Potamotrygonidae in the order Myliobatiformes, one of the four orders of batoids, cartilaginous fishes related to sharks. They are found in rivers in tropical and subtropical South America (freshwater stingrays in Africa, Asia and Australia are in another family, Dasyatidae). A single marine genus, Styracura, of the tropical West Atlantic and East Pacific is also part of Potamotrygonidae. They are generally brownish, greyish or black, often with a mottled, speckled or spotted pattern, have disc widths ranging from and venomous tail stingers. River stingrays feed on a wide range of smaller animals and the females give birth to live young. There are more than 35 species in five genera.
Distribution and habitat
They are native to tropical and subtropical northern, central and eastern South America, living in rivers that drain into the Caribbean, and into the Atlantic as far south as the Río de la Plata in Argentina. A few generalist species are widespread, but most are more restricted and typically native to a single river basin. The greatest species richness can be found in the Amazon, especially the Rio Negro, Tapajós, and Tocantins basins (each home to 8–10 species). The range of several species is limited by waterfalls.
Freshwaters inhabited by members of Potamotrygonidae vary extensively, ranging from lacustrine to fast-flowing rivers, in blackwater, whitewater and clearwater, and on bottoms ranging from sandy to rocky. In at least some species juveniles tend to occur in shallower waters than adults. Most species are strictly freshwater, but a few may range into brackish estuarine habitats in salinities up to at least 12.4‰.
In 2016, two fully marine species formerly included in Himantura were found to belong in Potamotrygonidae, and moved to their own genus Styracura. These are S. schmardae from the tropical West Atlantic, including the Caribbean, and S. pacifica from the tropical East Pacific, including the Galápagos.
Potamotrygonidae are the only family of rays mostly restricted to fresh water habitats. While there are true freshwater species in the family Dasyatidae, for example Urogymnus polylepis, the majority of species in this family are saltwater fish.
Characteristics
River stingrays are almost circular in shape, and range in size from Potamotrygon wallacei, which reaches in disc width, to the chupare stingray (S. schmardae), which grows up to in disc width. The latter is one of only two marine species in this family (the other is S. pacifica). The largest freshwater species in this family are the discus ray (Paratrygon aiereba) and the short-tailed river stingray (Potamotrygon brachyura), which grow up to in disc width. At up to , the short-tailed river stingray is by far the heaviest freshwater member of the family, and among strictly freshwater South American fish it is matched in weight only by the arapaima (Arapaima) and the piraíba catfish (Brachyplatystoma filamentosum). In each species in the family Potamotrygonidae, females reach a larger size than males.
The upper surface is covered with denticles (sharp tooth-like scales). Most species are brownish or greyish and often have distinctive spotted or mottled patterns, but a few species are largely blackish with contrasting pale spots. Juveniles often differ, in some species greatly, in colour and pattern from the adults.
Behavior
Feeding
Members of Potamotrygonidae are predators and feed on a wide range of animals such as insects, worms, molluscs, crustaceans and fish (even spiny catfish). Plant material is sometimes found in their stomachs, but is likely ingested by mistake. The exact diet varies with species; some are generalist predators and others are specialists. For example, Potamotrygon leopoldi mainly feeds on freshwater snails and crabs, although captives easily adapt to a generalist diet. The largest species such as Paratrygon are top predators in their habitat. The jaw joints of stingrays are "loose", allowing them to chew their food in a manner similar to mammals. The family includes both species that are diurnal and species that are nocturnal.
Breeding
Like other elasmobranchs, male freshwater stingrays are easily recognized by their pair of claspers, modifications of the pelvic fins used when mating. Mating occurs in a ventral-to-ventral position and the females give birth to live young. While still in the mother's uterus, the developing embryo feeds on histotroph, a secretion produced by trophonemata glands. Depending on the exact species, the gestation period is 3 to 12 months and there are between 1 and 21 young in each litter. The breeding cycle is generally related to flood levels.
Relationship with humans
Sting
Like other stingrays, members of the family Potamotrygonidae have a venomous stinger on the tail (although it is harmless and vestigial, or even absent, in Heliotrygon). There are generally one or two stingers, and they are periodically shed and replaced. They are among the most feared freshwater fishes in the Neotropical region because of the injuries they can cause. In Colombia alone, more than 2,000 injuries are reported per year. Freshwater stingrays are generally non-aggressive, and the stingers are used strictly in self-defense. As a consequence, injuries typically occur when bathers step on them (injuries to feet or lower legs) or fishers catch them (injuries to hands or arms). In addition to the pain caused by the barbed stinger itself and the venom, bacterial infection of the wound is common and may account for a greater part of the long-term problems in sting victims than the venom itself. The stings are typically highly painful and are occasionally fatal to humans, especially among people living in rural areas who seek professional medical help only when the symptoms have become severe. In general, relatively little is known about the composition of the venom in freshwater stingrays, but it appears to differ (at least in some species) from that of marine stingrays. There are possibly also significant differences between the venoms of the various Potamotrygonidae species. Because of the potential danger they represent, some locals strongly dislike freshwater stingrays and may kill them on sight. A study at the Butantan Institute, São Paulo, Brazil, revealed that the composition of freshwater stingray venom varies with sex and age, even between individuals of the same species. Each time the environment changes, the feeding of the stingray changes, leading to changes in the composition of toxins and their toxicological effects. There is no specific antidote or treatment for freshwater stingray venom.
Symptomatology
Accidents occur when the rays are stepped on or when the fins are touched; the defensive behavior consists of turning the body, moving the tail, and driving the stinger into the victim. Generally, stingers are inserted into the feet and heels of bathers and the hands of fishermen. Initial symptoms include severe pain, erythema and edema; necrosis then occurs, which results in sagging tissue in the affected area and forms a deep ulcer that develops slowly. Systemic complications include nausea, vomiting, salivation, sweating, respiratory depression, muscle fasciculation and seizures. If the stinger is torn during penetration of the skin, it can break, leaving dentin fragments retained in the wound. The stinger can cause laceration, which may result in secondary infection, usually caused by Pseudomonas and Staphylococcus. If the stinger reaches internal organs, the injury can be fatal.
As food
Freshwater stingrays are often caught by hook-and-line and as bycatch in trawls. In the Amazon, Paratrygon and certain Potamotrygon are the most caught species, with the former being the most sought after. In the Río de la Plata region, the meat of P. brachyura is particularly prized, and locally the species is called raya fina (fine ray). Freshwater rays weighing less than are generally discarded, but have a low survival rate. Their meat is mainly consumed locally, but is also exported to Japan and South Korea. From 2005 to 2010, the reported capture in the Brazilian states of Amazonas and Pará ranged between per year. In contrast, some fishers believe they can be used only for traditional medicine, incorrectly thinking that the meat (not just the tail region around the stinger) is toxic.
In captivity
Freshwater stingrays are often kept in aquariums, but require a very large tank and will eat small tank mates. Although generally non-aggressive, their venomous stinger represents a risk and on occasion aquarists have been stung. The ease of keeping varies significantly: Some such as Potamotrygon motoro are considered relatively hardy in a captive setting, while others such as Paratrygon aiereba, Plesiotrygon nana and Potamotrygon tigrina are much more difficult to maintain.
Several species are commonly bred in captivity, especially at East and Southeast Asian fish farms, which produce thousands of offspring each year. The more serious captive breeding efforts began only in the late 1990s, when Brazil introduced restrictions on the export of wild-caught individuals. Some captive farms produce hybrids (both intentionally, to get offspring with new patterns, and unintentionally, because of a lack of males), but this practice is generally discouraged. In several US states there are regulations in place that limit the keeping of freshwater stingrays.
Conservation
The status of most species is relatively poorly known, but overall it is suspected that river stingrays are declining due to capture (for food and the aquarium industry) and habitat loss (mainly due to dams and pollution from mining).
Zoos and public aquariums in Europe and North America have initiated programs, including studbooks, for several Potamotrygonidae species.
Dams
Dams represent a risk to some species, but others may benefit from them. For example, the Guaíra Falls disappeared after the completion of the Itaipu Dam, allowing Potamotrygon amandae (formerly misidentified as P. motoro) and P. falkneri to spread into the upper Paraná basin. When the Tucuruí Dam was completed, there was an increase in potential prey animals, allowing the population of P. henlei to increase. In contrast, dams threaten some species such as P. magdalenae by isolating populations and preventing gene flow, and others such as P. brachyura generally avoid lentic habitats, including the reservoirs created by river impoundment.
Fishing and capture
In addition to the large numbers caught for food (hundreds of tons per year in the Brazilian Amazon alone), many are killed because of the risk their stings represent to locals and tourists. In the Amazon, it has been estimated that many thousands of river stingrays are removed from certain areas to minimize the risk to ecotourism. Such removal is unregulated by the authorities, as it is not considered fishing in the traditional sense.
Initially Brazil completely banned all exports of wild-caught freshwater stingrays for the aquarium trade, but it has since introduced quotas for some species. From 2010 to 2015, between 4,600 and 5,700 individuals of six species (the vast majority were P. leopoldi and P. wallacei; the latter formerly referred to as P. cf. histrix) were legally exported from Brazil per year. The income generated from these exports is important to several small fishing communities. Other primary exporters of wild-caught freshwater stingrays are Colombia and Peru. A level of illegal export also occurs, and to curb this, Paratrygon aiereba (in Colombia) and several Potamotrygon species (in Brazil and Colombia) have been included on CITES Appendix III. It has been suggested that all members of the family should be included on Appendix III, with Paratrygon and a few Potamotrygon species on Appendix II.
Taxonomy and species
The taxonomy of the river stingrays is complex and undescribed species remain. The two species of Styracura were only moved to this family in 2016. Among the freshwater species, Heliotrygon and Paratrygon are sister genera, and Plesiotrygon and Potamotrygon are sister genera.
Subfamily Styracurinae
Genus Styracura Carvalho, Loboda & da Silva, 2016
Styracura pacifica (Beebe & Tee-Van, 1941) (Pacific chupare)
Styracura schmardae (Werner, 1904) (Chupare stingray)
Subfamily Potamotrygoninae
Genus Heliotrygon Carvalho & Lovejoy, 2011
Heliotrygon gomesi Carvalho & Lovejoy, 2011 (Gomes's round ray)
Heliotrygon rosai Carvalho & Lovejoy, 2011 (Rosa's round ray)
Genus Paratrygon A. H. A. Duméril, 1865
Paratrygon aiereba J. P. Müller & Henle, 1841 (Discus ray)
Paratrygon orinocensis Loboda, Lasso, Rosa & De Carvalho, 2021
Paratrygon parvaspina Loboda, Lasso, Rosa & De Carvalho, 2021
Genus Plesiotrygon Rosa, Castello & Thorson, 1987
Plesiotrygon iwamae Rosa, Castello & Thorson, 1987 (Long-tailed river stingray)
Plesiotrygon nana Carvalho & Ragno, 2011 (Black-tailed antenna ray)
Genus Potamotrygon Garman, 1877
Potamotrygon adamastor J. P. Fontenelle & M.R. de Carvalho, 2017
Potamotrygon albimaculata M. R. de Carvalho, 2016 (Itaituba river stingray, Tapajós river stingray)
Potamotrygon amandae Loboda & M. R. de Carvalho, 2013
Potamotrygon amazona J. P. Fontenelle & M.R. de Carvalho, 2017
Potamotrygon boesemani Rosa, M. R. de Carvalho & Almeida Wanderley, 2008 (Boeseman's river stingray, emperor ray)
Potamotrygon brachyura (Günther, 1880) (Short-tailed river stingray)
Potamotrygon constellata (Vaillant, 1880) (Thorny river stingray)
Potamotrygon falkneri Castex & Maciel, 1963 (Largespot river stingray)
Potamotrygon garmani J. P. Fontenelle & M.R. de Carvalho, 2017
Potamotrygon henlei (Castelnau, 1855) (Bigtooth river stingray)
Potamotrygon humerosa Garman, 1913
Potamotrygon histrix (J. P. Müller & Henle, 1834) (Porcupine river stingray)
Potamotrygon jabuti M. R. de Carvalho, 2016 (Pearl river stingray)
Potamotrygon leopoldi Castex & Castello, 1970 (White-blotched river stingray)
Potamotrygon limai Fontenelle, J. P. C. B. da Silva & M. R. de Carvalho, 2014
Potamotrygon magdalenae (A. H. A. Duméril, 1865) (Magdalena river stingray)
Potamotrygon marinae Deynat, 2006
Potamotrygon marquesi Silva & Loboda, 2019
Potamotrygon motoro (J. P. Müller & Henle, 1841) (Ocellate river stingray)
Potamotrygon ocellata (Engelhardt, 1912) (Red-blotched river stingray)
Potamotrygon orbignyi (Castelnau, 1855) (Smoothback river stingray)
Potamotrygon pantanensis Loboda & M. R. de Carvalho, 2013
Potamotrygon rex Loboda & M. R. de Carvalho, 2016 (Great river stingray)
Potamotrygon schroederi Fernández-Yépez, 1958 (Rosette river stingray)
Potamotrygon schuhmacheri Castex, 1964 (Parana River stingray)
Potamotrygon scobina Garman, 1913 (Raspy river stingray)
Potamotrygon signata Garman, 1913 (Parnaiba River stingray)
Potamotrygon tatianae J. P. C. B. da Silva & M. R. de Carvalho, 2011
Potamotrygon tigrina M. R. de Carvalho, Sabaj Pérez & Lovejoy, 2011 (Tiger ray)
Potamotrygon wallacei M. R. de Carvalho, R. S. Rosa & M. L. G. Araújo, 2016 (Cururu ray)
Potamotrygon yepezi Castex & Castello, 1970 (Maracaibo River stingray)
| Biology and health sciences | Batoidea | Animals |
3193012 | https://en.wikipedia.org/wiki/Interacting%20galaxy | Interacting galaxy | Interacting galaxies (colliding galaxies) are galaxies whose gravitational fields result in a disturbance of one another. An example of a minor interaction is a satellite galaxy disturbing the primary galaxy's spiral arms. An example of a major interaction is a galactic collision, which may lead to a galaxy merger.
Satellite interaction
A giant galaxy interacting with its satellites is common. A satellite's gravity could attract one of the primary's spiral arms. Alternatively, the secondary satellite can dive into the primary galaxy, as in the Sagittarius Dwarf Elliptical Galaxy diving into the Milky Way. That can possibly trigger a small amount of star formation. Such orphaned clusters of stars were sometimes referred to as "blue blobs" before they were recognized as stars.
Galaxy collision
Colliding galaxies are common during galaxy evolution. The extremely tenuous distribution of matter in galaxies means these are not collisions in the traditional sense of the word, but rather gravitational interactions.
Colliding may lead to merging if two galaxies collide and do not have enough momentum to continue traveling after the collision. In that case, they fall back into each other and eventually merge into one galaxy after many passes through each other. As with other galaxy collisions, the merging of two galaxies may create a starburst region of new stars. If one of the colliding galaxies is much larger than the other, it will remain largely intact after the merger: the larger galaxy will look much the same, while the smaller galaxy will be stripped apart and become part of the larger galaxy. When galaxies pass through each other without merging, they largely retain their material and shape after the pass.
Galactic collisions are now frequently simulated on computers, which use realistic physics principles, including the simulation of gravitational forces, gas dissipation phenomena, star formation, and feedback. Dynamical friction slows the relative motion of galaxy pairs, which may possibly merge at some point, according to the initial relative energy of the orbits. A library of simulated galaxy collisions can be found at the Paris Observatory website GALMER.
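As a toy illustration of the gravity-only core of such simulations, the sketch below integrates two point-mass "galaxy cores" with a leapfrog (kick-drift-kick) scheme. All masses, positions, and velocities are illustrative values chosen for this example, not taken from any published model; real collision codes follow many particles and add gas dissipation, star formation, and feedback on top of the gravity solver.

```python
import math

G = 1.0  # gravitational constant in normalized code units

def accelerations(pos, masses):
    """Mutual gravitational accelerations for two point masses in 2D."""
    (x1, y1), (x2, y2) = pos
    dx, dy = x2 - x1, y2 - y1
    r3 = (dx * dx + dy * dy) ** 1.5
    return [
        (G * masses[1] * dx / r3, G * masses[1] * dy / r3),
        (-G * masses[0] * dx / r3, -G * masses[0] * dy / r3),
    ]

def total_energy(pos, vel, masses):
    """Kinetic plus gravitational potential energy (a conserved quantity)."""
    (x1, y1), (x2, y2) = pos
    r = math.hypot(x2 - x1, y2 - y1)
    kinetic = sum(0.5 * m * (vx * vx + vy * vy)
                  for m, (vx, vy) in zip(masses, vel))
    return kinetic - G * masses[0] * masses[1] / r

def leapfrog(pos, vel, masses, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, so energy drift stays bounded."""
    acc = accelerations(pos, masses)
    for _ in range(steps):
        vel = [(vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)
               for (vx, vy), (ax, ay) in zip(vel, acc)]
        pos = [(x + dt * vx, y + dt * vy)
               for (x, y), (vx, vy) in zip(pos, vel)]
        acc = accelerations(pos, masses)
        vel = [(vx + 0.5 * dt * ax, vy + 0.5 * dt * ay)
               for (vx, vy), (ax, ay) in zip(vel, acc)]
    return pos, vel

masses = [1.0, 0.5]                  # an unequal "galaxy" pair
pos = [(-1.0, 0.0), (2.0, 0.0)]
vel = [(0.0, 0.2), (0.0, -0.4)]      # chosen so total momentum is zero
e0 = total_energy(pos, vel, masses)
pos, vel = leapfrog(pos, vel, masses, dt=0.001, steps=5000)
```

The symplectic integrator is the design point here: unlike plain Euler stepping, leapfrog keeps the total energy bounded over many orbits, which is why variants of it are standard in production N-body codes.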
Gallery
Galactic cannibalism
Galactic cannibalism is a common phenomenon. It refers to the process in which a large galaxy, through tidal gravitational interactions with a companion, merges with that companion. The most common result of the gravitational merger between two or more galaxies is a larger irregular galaxy, but elliptical galaxies may also result.
It has been suggested that galactic cannibalism is currently occurring between the Milky Way and the Large and Small Magellanic Clouds. Streams of gravitationally-attracted hydrogen arcing from these dwarf galaxies to the Milky Way are taken as evidence for the theory.
Galaxy harassment
Galaxy harassment is a type of interaction between a low-luminosity galaxy and a brighter one that takes place within rich galaxy clusters, such as Virgo and Coma, where galaxies are moving at high relative speeds and suffering frequent encounters with other systems of the cluster due to the high galactic density.
According to computer simulations, the interactions convert the affected galaxy disks into disturbed barred spiral galaxies and produce starbursts, followed by, if more encounters occur, loss of angular momentum and heating of their gas. The result would be the conversion of (late-type) low-luminosity spiral galaxies into dwarf spheroidals and dwarf ellipticals.
Evidence for the hypothesis had been claimed by studying early-type dwarf galaxies in the Virgo Cluster and finding structures, such as disks and spiral arms, which suggest they are former disc systems transformed by the above-mentioned interactions. The existence of similar structures in isolated early-type dwarf galaxies, such as LEDA 2108986, has undermined this hypothesis.
Notable examples
Andromeda–Milky Way collision
Astronomers have estimated that the Milky Way Galaxy will collide with the Andromeda Galaxy in about 4.5 billion years. Some think the two spiral galaxies will eventually merge to become an elliptical galaxy, or perhaps a large disc galaxy, and that gravitational interactions during the merger will fling various celestial bodies outward, evicting them from the resulting galaxy.
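A back-of-envelope check makes the timescale plausible. The figures below (a separation of roughly 2.5 million light-years and an approach speed of roughly 110 km/s) are approximate round numbers assumed for illustration, not the source of the 4.5-billion-year estimate; dividing distance by speed gives a naive linear infall time, and gravitational acceleration during the approach is one reason detailed models arrive at a shorter figure.

```python
# Naive linear estimate of the Milky Way-Andromeda approach time.
# Both input values are illustrative approximations.
LIGHT_YEAR_M = 9.4607e15          # metres per light-year
SECONDS_PER_YEAR = 3.156e7

distance_m = 2.5e6 * LIGHT_YEAR_M   # ~2.5 million light-years to M31
approach_speed = 110e3              # ~110 km/s, in m/s

naive_time_yr = distance_m / approach_speed / SECONDS_PER_YEAR
print(f"naive approach time: {naive_time_yr / 1e9:.1f} billion years")
# → naive approach time: 6.8 billion years
```

The naive answer overshoots the modeled 4.5 billion years because it ignores the acceleration of the two galaxies toward each other as they close in.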
| Physical sciences | Basics_2 | Astronomy |
3196565 | https://en.wikipedia.org/wiki/Fish%20hatchery | Fish hatchery | A fish hatchery is a place for artificial breeding, hatching, and rearing through the early life stages of animals—finfish and shellfish in particular. Hatcheries produce larval and juvenile fish, shellfish, and crustaceans, primarily to support the aquaculture industry where they are transferred to on-growing systems, such as fish farms, to reach harvest size. Some species that are commonly raised in hatcheries include Pacific oysters, shrimp, Indian prawns, salmon, tilapia and scallops.
The value of global aquaculture farming was estimated at US$98.4 billion in 2008, with China significantly dominating the market; however, the value of aquaculture hatchery and nursery production has yet to be estimated. Additional hatchery production for small-scale domestic use, which is particularly prevalent in South-East Asia, or for conservation programmes, has also yet to be quantified.
There is much interest in supplementing exploited stocks of fish by releasing juveniles that may be wild-caught and reared in nurseries before transplanting, or produced solely within a hatchery. Culture of finfish larvae has been used extensively in the United States in stock-enhancement efforts to replenish natural populations. The U.S. Fish and Wildlife Service has established a National Fish Hatchery System to support the conservation of native fish species.
Purpose
Hatcheries produce larval and juvenile fish and shellfish for transferral to aquaculture facilities where they are ‘on-grown’ to reach harvest size. Hatchery production confers three main benefits to the industry:
1. Out of season production
Consistent supply of fish from aquaculture facilities is an important market requirement. Broodstock conditioning can extend the natural spawning season and thus the supply of juveniles to farms. Supply can be further guaranteed by sourcing from hatcheries in the opposite hemisphere i.e. with opposite seasons.
2. Genetic improvement
Genetic modification is conducted in some hatcheries to improve the quality and yield of farmed species. Artificial fertilisation facilitates selective breeding programs which aim to improve production characteristics such as growth rate, disease resistance, survival, colour, increased fecundity and/or lower age of maturation. Genetic improvement can be mediated by selective breeding, via hybridization, or other genetic manipulation techniques.
3. Reduce dependence on wild-caught juveniles
In 2008 aquaculture accounted for 46% of total food fish supply, around 115 million tonnes. Although wild caught juveniles are still utilised in the industry, concerns over sustainability of extracting juveniles, and the variable timing and magnitude of natural spawning events, make hatchery production an attractive alternative to support the growing demands of aquaculture.
Production steps
Broodstock
Broodstock conditioning is the process of bringing adults into spawning condition by promoting the development of gonads. Broodstock conditioning can also extend spawning beyond natural spawning periods, or for production of species reared outside their natural geographic range with different environmental conditions. Some hatcheries collect wild adults and then bring them in for conditioning whilst others maintain a permanent breeding stock.
Conditioning is achieved by holding broodstock in flow-through tanks at optimal conditions for light, temperature, salinity, flow rate and food availability (optimal levels are species specific).
Another important aspect of broodstock conditioning is ensuring the production of high quality eggs to improve growth and survival of larvae by optimising the health and welfare of broodstock individuals. Egg quality is often determined by the nutritional condition of the mother. High levels of lipid reserves in particular are required to improve larval survival rates.
Spawning
Natural spawning can occur in hatcheries during the regular spawning season; however, where more control over spawning time is required, spawning of mature animals can be induced by a variety of methods. Some of the more common methods are:
Manual stripping: For shellfish, gonads are generally removed and gametes are extracted or washed free. Fish can be manually stripped of eggs and sperm by stroking the anaesthetised fish under the pectoral fins towards the anus causing gametes to freely flow out.
Environmental manipulation: Thermal shock, where cool water is alternated with warmer water in flow-through tanks, can induce spawning. Alternatively, if environmental cues that stimulate natural spawning are known, these can be mimicked in the tank, e.g. changing salinity to simulate migratory behaviour. Many individuals can be induced to spawn this way; however, this increases the likelihood of uncontrolled fertilisation occurring.
Chemical injection: A number of chemicals can be used to induce spawning with various hormones being the most commonly used.
Fertilisation
Prior to fertilisation, eggs can be gently washed to remove wastes and bacteria that may contaminate cultures. Promoting cross-fertilisation between a large number of individuals is necessary to retain genetic diversity in hatchery produced stock. Batches of eggs are kept separate, fertilised with sperm obtained from several males and allowed to stand for an hour or two before samples are analyzed under a microscope to ensure high rates of fertilisation and to estimate numbers to be transferred to larval rearing tanks.
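The microscope-sampling step above amounts to simple proportion arithmetic. As an illustration (the function name and all figures are hypothetical, not from any hatchery protocol), a minimal sketch:

```python
# Hypothetical sketch of the sampling arithmetic: count fertilised eggs in a
# small subsample under the microscope, then scale that rate up to estimate
# how many viable eggs will be transferred to the larval rearing tanks.

def estimate_batch(sample_fertilised, sample_total, batch_total):
    """Return (fertilisation rate, estimated viable eggs) for a batch."""
    rate = sample_fertilised / sample_total
    return rate, round(rate * batch_total)

# e.g. 92 of 100 sampled eggs fertilised, in a batch of 250,000 eggs:
rate, viable = estimate_batch(92, 100, 250_000)
# rate = 0.92, viable = 230000
```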
Larvae
Rearing larvae through the early life stages is conducted in nurseries, which for fish culture are generally closely associated with hatcheries, whilst it is common for shellfish nurseries to exist separately. Nursery culture of larvae, to rear juveniles to a size suitable for transferral to on-growing facilities, can be performed in a variety of different systems, which may be entirely land-based, or larvae may later be transferred to sea-based rearing systems, which reduce the need to supply feed. Juvenile survival is dependent on very high quality water conditions.
Feeding is an important component of the rearing process. Although many species are able to grow on maternal reserves alone (lecithotrophy), most commercially produced species require feeding to optimise survival, growth, yield and juvenile quality. Nutritional requirements are species specific and also vary with larval stage. Carnivorous fish are commonly fed with live prey; rotifers are usually offered to early larvae due to their small size, progressing to larger Artemia nauplii or zooplankton. Producing live feed on-site, or buying it in, is one of the biggest costs for hatchery facilities, as it is a labour-intensive process. The development of artificial feeds aims to reduce the costs involved in live feed production and increase the consistency of nutrition; however, decreased growth and survival have been found with these alternatives.
Settlement of shellfish
The hatchery production of shellfish also involves a crucial settling phase in which free-swimming larvae settle out of the water onto a substrate and, if suitable conditions are found, undergo metamorphosis. Once metamorphosis has taken place the juveniles are generally known as spat; it is this phase which is then transported to on-growing facilities. Settlement behaviour is governed by a range of cues including substrate type, water flow, temperature, and the presence of chemical cues indicating the presence of adults or a food source. Hatchery facilities therefore need to understand these cues to induce settlement, and also to be able to substitute artificial substrates that allow for easy handling and transportation with minimal mortality.
Hatchery design
Hatchery designs are highly flexible and are tailored to the requirements of site, species produced, geographic location, funding and personal preferences. Many hatchery facilities are small and coupled to larger on-growing operations, whilst others may produce juveniles solely for sale. Very small-scale hatcheries are often utilized in subsistence farming to supply families or communities particularly in south-east Asia. A small-scale hatchery unit consists of larval rearing tanks, filters, live food production tanks and a flow through water supply.
A generalized commercial scale hatchery would contain a broodstock holding and spawning area, feed culture facility, larval culture area, juvenile culture area, pump facilities, laboratory, quarantine area, and offices and bathrooms.
Expense
Labour is generally the largest cost in hatchery production making up more than 50% of total costs. Hatcheries are a business and thus economic viability and scale of production are vital considerations. The cost of production for stock-enhancement programmes is further complicated by the difficulty of assessing the benefits to wild populations from restocking activities.
Issues
Genetic
Hatchery facilities present three main problems in the field of genetics. The first is that maintenance of a small number of broodstock can cause inbreeding and potentially lead to inbreeding depression, thus affecting the success of the facility. Secondly, hatchery-reared juveniles, even from a fairly large broodstock, can have greatly reduced genetic diversity compared to wild populations (the situation is comparable to the founder effect). Fish that escape from farms, or are released for restocking purposes, may then adversely affect the genetics and viability of wild populations. This is of particular concern where escaped fish have been selectively bred or are otherwise genetically modified. The third key issue is that genetic modification of food items is highly undesirable for many people. See Genetically modified food controversies.
Fish farms
Other arguments that surround fish farms such as the supplementation of feed from wild caught species, the prevalence of disease, fish welfare issues and potential effects on the environment are also issues for hatchery facilities.
| Technology | Buildings and infrastructure | null |
9774228 | https://en.wikipedia.org/wiki/Flatbed%20truck | Flatbed truck | A flatbed truck (or flatbed lorry in British English) is a type of truck the bodywork of which is just an entirely flat, level 'bed' with no sides or roof.
This allows for quick and easy loading of goods, and consequently they are used to transport heavy loads that are not delicate or vulnerable to rain, and also for abnormal loads that require more space than is available on a closed body. Flatbed trucks can be either articulated or rigid.
Road trucks
A flatbed has a solid bed, usually of wooden planks. There is no roof and no fixed sides. To retain the load there are often low sides which may be hinged down for loading, as a 'drop-side' truck. A 'stake truck' has no sides but has steel upright stanchions, which may be removable, again used to retain the load.
Loads are retained by being manually tied down with ropes. The bed of a flatbed truck has tie-down hooks around its edge, and techniques such as a trucker's hitch are used to tighten the ropes. Weather protection is optionally provided by manually 'sheeting' the load with a tarpaulin, held down by ropes. These manual techniques are slow and require some care and skill. There is also the risk that an improperly secured load may be shed in transit, often leading to accidents or road blockages, and little theft protection for such a load. The slowness of loading in this way led to the development of more efficient truck designs with enclosed bodies.
Some improvement was made with the general replacement of ropes by flat webbing straps, tightened with a ratchet. These reduced the skill required for 'roping up' and improved the control of tension, leading to fewer shed loads.
Decline of flatbeds
Flatbeds became rare in the 1980s as the majority of road freight changed to either containers or pallet loads carried on larger and more efficient trucks, optimised for quicker loading by fork-lift trucks. Containers are carried on specialised semi-trailers with twistlocks in the corners to retain the container. Pallet loads are carried in either box bodies, loaded through rear doors, or curtain-sided bodies loaded through the sides. Both of these protect loads from the weather and can be quickly loaded with standard loads, but are more restrictive for single bulky loads, loaded by crane. The haulage and logistics business also changed around the same time as a greater proportion became more regular in nature, such as standard daily loads of equally-sized boxes from a distribution centre to a supermarket, rather than the unpredictable ad hoc nature of earlier road transport.
Flatbeds are still in use, but are now used for more specialised cargoes, such as constructional steelwork or lighter abnormal loads, such as machinery, lumber loads/dry wall or any load that requires use of a forklift without the use of a loading dock.
Low loaders, for construction machinery and heavy plant vehicles, are not considered as flatbeds. Neither are abnormal load carriers for heavy haulage.
Configuration/design in US trailers
In North America, the length is commonly , and the width is either (including rub rails and stake pockets on the sides, which are generally placed every ). Some older trailers still in service are only , or shorter if used in sets of doubles or triples (often used to haul hay). Some longer lengths and combination setups, far too long for most roadways, can only be legally driven on turnpike/toll roads. Body and frame can be one of three general designs: the heaviest and sturdiest is all steel (usually with wood planks); the ever-popular combo has a steel frame and aluminium bed (these often have wood portions for nailing down dunnage boards); and all aluminium is the lightest, allowing more cargo to be carried legally without overweight permits. Very light and very expensive to purchase, all-aluminium trailers are slippery when wet, flex more and are easily damaged. They also have a natural upwards bend so that when loaded they straighten out to be flatter, rather than sagging in the middle under a load.
Another popular type of flatbed trailer is the step deck (or drop deck), with a deck approximately 2 feet lower and low-profile wheels to accommodate taller loads without hitting low bridges or tunnels. These stepdecks can come with loading ramps to allow vehicles to roll on and off the back from ground level. Shorter trailers used for local jobs, such as landscaping and building material delivery within urban areas, can have a "hitchhiker" type forklift truck attached to the back so that the driver alone can deliver and unload pallet/skid items. A bulkhead or "headache rack" is sometimes attached to the front of either a straight or a stepdeck trailer for load securement at the front of the deck. If long pipes, steel or lumber come loose in a hard braking incident, these racks protect the operator and cab/sleeper in one of two ways, in theory. If attached to the trailer, they bend while blocking the forward motion of the loosened cargo, causing the long load to go above the cab and driver.
If attached to the frame behind the cab or sleeper of the tractor, they in theory protect the back of the cab from impact; if unable to stop the load coming through, they cause the cab to be knocked off the frame rather than letting the load impale the cab and kill or seriously injure the driver. 48- and 53-foot lengths usually have two axles spread out to over apart at the rear (a "California spread") in order to allow more weight on the rear of the deck (40,000 lb instead of 34,000 lb for a tandem-axle design).
The so-called Cali spread was originally designed to comply with bridge weight formulas in that state but has since been adopted in most other parts of the country. These spread axles require a far wider turning radius, and if the tractor/trailer combination is turned too sharply, the front-axle tires of the trailer may damage the road or parking-lot surface, pop a tire off the rim, or both. Some trailers can lift or lower the front axle independently to mitigate this risk. The driver may not be able to use this feature when the trailer is loaded, but if the deck is empty the driver can lower the front axle to bring the rear axle off the ground, significantly decreasing the turning radius of the rig for easy maneuvering in tight spaces, or reducing tire wear during empty/deadhead miles of travel.
Under the deck of the trailer can be attached racks for spare tires, dunnage boards or tire chains, as well as various tool/storage boxes. On one side (or often both sides, for alternating pull-on strap tension) are usually sliding (but sometimes fixed) winches to ratchet down 4-inch straps for load securement. On most 48-foot trailers, these straps/winches may not be placed over a tire: when air pressure releases from the suspension system when parked, the deck lowers and will likely pop a trailer tire. Some trailers have an air scale.
When the driver learns through experience how to interpret the scale properly, combined with knowledge of how much the rig weighs when empty, they can work out how much cargo can safely and legally be loaded onto the trailer. With varying loads of cargo, the driver can estimate the gross total weight and whether they are legal, to avoid a ticket (80,000 pounds without a permit in most states, but slightly lower in others). Some decks have pop-up chain systems which have a higher WLL (working load limit) than attaching chains to either the stake pockets/spools or the frame.
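The gross-weight arithmetic described above can be sketched as follows; the 80,000 lb limit and the 34,000/40,000 lb rear-group figures come from the text, while the empty-rig weight and function names are illustrative assumptions:

```python
# Sketch of the legality check a driver performs: empty rig weight plus
# cargo must stay under the gross limit (80,000 lb without a permit in
# most states, per the text). Names and sample figures are illustrative.

GROSS_LIMIT_LB = 80_000

def max_legal_payload(empty_rig_weight_lb, gross_limit_lb=GROSS_LIMIT_LB):
    """Cargo weight that can be added before exceeding the gross limit."""
    return gross_limit_lb - empty_rig_weight_lb

def rear_group_limit_lb(california_spread):
    """Rear axle-group allowance: 40,000 lb for a California spread
    versus 34,000 lb for a tandem design, per the text."""
    return 40_000 if california_spread else 34_000

# A rig weighing 33,000 lb empty can legally carry up to 47,000 lb of cargo:
payload = max_legal_payload(33_000)
```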
Other decks of trailers can have sliding detachable accessories which greatly diversify the options of where and how to place a chain hook for securement. Besides axles which raise/lower as needed, some spread-axle trailers can slide one or both axles forward or back to create a tandem setup in specific situations, when necessary to comply with weight-distribution requirements. Certain amounts of front and rear cargo overhang are allowed (as well as overhang to one or both sides of the trailer), marked with flags, banners or flashing lights to warn following drivers of the danger of impalement if they follow too closely and the truck suddenly stops. In extreme cases, permit loads require an escort vehicle in the front, rear, or both for oversize/over-dimension cargo or equipment.
Tow trucks
Some vehicle recovery tow trucks have flat beds and are able to winch a recovered vehicle entirely on board. They can then drive the vehicle away for repair without needing to tow it. This allows a faster journey, does not require a driver in the vehicle being towed, and allows a damaged vehicle to be recovered when it cannot be towed. As these flat beds usually slope gradually to the rear, unlike the level bed of a cargo flatbed, they are known as 'beavertails'. Some tow truck beds are demountable and may be lowered behind the truck for easy loading, then both bed and load winched back aboard as one.
Railway flatbeds
Railways also employ flatbed trucks in engineering trains and freight trains. In Britain and the Commonwealth the term bogie flat is often applied to a bogie flatbed truck. Although less common, flatbed railway trucks on rigid frames and axles are sometimes used, with both 4-wheel and 6-wheel versions being extant. In British English, the term 'truck' most commonly relates to railway vehicles, with the word 'lorry' more commonly applied to road vehicles.
| Technology | Motorized road transport | null |
9779915 | https://en.wikipedia.org/wiki/Helicidae | Helicidae | Helicidae is a large, diverse family of western Palaearctic, medium to large-sized, air-breathing land snails, sometimes called the "typical snails." It includes some of the largest European land snails, several species are common in anthropogenic habitats, and some became invasive on other continents. A number of species in this family are valued as food items, including Cornu aspersum (formerly Helix aspersa; "petit gris") the brown or garden snail, and Helix pomatia (the "escargot"). The biologies of these two species in particular have been thoroughly studied and documented.
Shell description
The shells are usually flattened or depressed conical. Globular shells are found in the genera Helix, Maltzanella, Lindholmia, Cornu, Cantareus, Eremina, and Idiomella. One species, Cylindrus obtusus, has a cylindrical shell. In some genera, especially in Cepaea, the shells are brightly colored and patterned.
Anatomy
Helicidae typically have a ribbed jaw, bursa copulatrix with a diverticulum, and one dart sac accompanied by a pair of (usually) branched, tubular mucous glands inserting at the base of the dart sac.
Genetics
In this family, the number of haploid chromosomes lies between 22 and 30.
In the " project, four species (Cepaea nemoralis, Cepaea hortensis, Cornu aspersum, and Arianta arbustorum) are scheduled for whole genome sequencing and assembly ().
Distribution
The core distribution of helicids extends from the Caucasus through Turkey and Europe to North Africa. However, some genera or species live beyond these limits. Helicids occur on Cape Verde (Eremina), the Canary Islands (Theba, Hemicycla) and the Madeira Archipelago (Lampadia, Idiomela). Levantina extends far south in western Arabia, and Eremina desertella is distributed as far south as Sudan, Eritrea and Puntland in Somalia. Cepaea hortensis lives in Iceland and in a small area in eastern Canada. Some species, notably Cornu aspersum and Theba pisana, have been introduced and become established in numerous different areas worldwide.
Taxonomy
The family Helicidae contains 3 subfamilies (according to molecular phylogenetic analyses):
Subfamily Helicinae Rafinesque, 1815
Genital system anatomy (does not apply to all species, as derived states are found in some of them): mucous glands divided into 2 or more branches, love dart with four blades (vanes) along its length, two penial papillae/verges.
Tribe Allognathini Westerlund, 1903
Allognathus
Cepaea Held, 1838
Hemicycla
Iberus
Idiomela T. Cockerell, 1921
Lampadia
Tribe Helicini Rafinesque, 1815
Aristena Psonis, Vardinoyannis & Poulakakis, 2022
Amanica Nordsieck, 2017
Caucasotachea Boettger, 1909
Codringtonia Kobelt, 1898
Helix Linnaeus, 1758 - type genus
Isaurica Kobelt, 1901
Levantina Kobelt, 1871
Lindholmia Hesse, 1918
Maltzanella Hesse, 1917
Neocrassa Subai, 2005
Tribe Thebini Wenz, 1923
A 2022 phylogenetic analysis proposed that all groups of the Maghreb radiation belonged to a single tribe, Thebini, without support for a separate Otalini tribe. The same study proposed a new tribe, Maculariini trib. nov. containing the genus Macularia due to the wide geographic disjunction between the western Alpine Macularia and the primarily Maghrebian Thebini tribe.
Cantareus Risso, 1826
Cornu Born, 1778
Eobania P. Hesse, 1913
Eremina Pfeiffer, 1855
Gyrostomella P. Hesse, 1911
Loxana Pallary, 1899
Massylaea Möllendorff, 1898
Otala Schumacher, 1817
Rossmaessleria P. Hesse, 1907
Theba Risso, 1826
Tribe Maculariini Neiber, Korábek, Glaubrecht & Hausdorf, 2021
Macularia Albers, 1850
Subfamily Murellinae Hesse, 1918
Genital system anatomy (does not apply to all species, as derived states are found in some of them): mucous glands weakly branched or undivided, love dart with four blades along its length, one penial papilla.
Distributed in Sardinia, Corsica, the Apennine Peninsula and Sicily.
Marmorana W. Hartmann, 1844
Tacheocampylaea
Tyrrheniberus
Subfamily Ariantinae Mörch, 1864
Genital system anatomy: mucous glands divided into 2 branches or undivided, love dart with two blades on the tip, one penial papilla.
Arianta Turton, 1831
Campylaea H. Beck, 1837
Campylaeopsis A.J. Wagner, 1914
Cattania Brusina, 1904
Causa Schileyko, 1971
Chilostoma Fitzinger, 1833
Corneola Held, 1838
Cylindrus Fitzinger, 1833
Delphinatia P. Hesse, 1931
Dinarica Kobelt, 1902
Drobacia Brusina, 1904
Faustina Kobelt, 1904
Helicigona A. Férussac, 1821
Isognomostoma Fitzinger, 1833
Josephinella F. Haas, 1936
Kollarix Groenenberg, Subai & E. Gittenberger, 2016
Kosicia Brusina, 1904
Liburnica Kobelt, 1904
Pseudotrizona Groenenberg, Subai & E. Gittenberger, 2016
Thiessea Kobelt, 1904
Vidovicia Brusina, 1904
Pseudochloritis C. R. Boettger, 1909
Mesodontopsis Pilsbry, 1895
Metacampylaea Pilsbry, 1895
Paradrobacia H. Nordsieck, 2014
Pseudoklikia H. Nordsieck, 2018
Incertae sedis
†Megalotachea Pfeffer, 1930
| Biology and health sciences | Gastropods | Animals |
1088870 | https://en.wikipedia.org/wiki/Output%20device | Output device | An output device is any piece of computer hardware that converts information or data into a human-perceptible form or, historically, into a physical machine-readable form for use with other non-computerized equipment. It can be text, graphics, tactile, audio, or video. Examples include monitors, printers and sound cards.
In an industrial setting, output devices also include "printers" for paper tape and punched cards, especially where the tape or cards are subsequently used to control industrial equipment that is not fully computerized, such as an industrial loom with electrical robotics.
Visual
A display device is the most common form of output device; it presents output visually on a computer screen. The output appears temporarily on the screen and can easily be altered or erased.
With all-in-one PCs, notebook computers, handheld PCs and other devices, the term display screen is used for the display device. Display devices are also used in home entertainment systems, mobile systems, cameras and video game systems.
Display devices form images by illuminating a desired configuration of pixels. Raster display devices organize these pixels in a 2-dimensional matrix of rows and columns. The whole matrix is redrawn many times within a second, typically at 60, 75, 120 or 144 Hz on consumer devices.
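As a minimal sketch of the raster model just described (the dimensions and refresh rate here are illustrative, not tied to any particular device):

```python
# A raster display modelled as a 2-D matrix of pixels, redrawn many
# times per second. All figures are illustrative.

WIDTH, HEIGHT = 640, 480     # columns x rows
REFRESH_HZ = 60              # redraws per second

# Framebuffer: one intensity value per pixel, organised as rows of columns.
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]

# "Illuminating a desired configuration of pixels": light one pixel fully.
framebuffer[100][200] = 255

# Time available to redraw the whole matrix once:
frame_time_ms = 1000 / REFRESH_HZ    # about 16.7 ms at 60 Hz
```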
Interface
The interface between a computer's CPU and the display is a graphics processing unit (GPU). This processor forms images in a framebuffer. When the image is to be sent to the display, the GPU passes it through a video display controller to generate a video signal, which is then sent over a display interface such as HDMI, VGA, or DVI.
GPUs can be divided into discrete and integrated units, the former being an external unit and the latter of which is included within a CPU die. Discrete graphics cards are almost always connected to the host through the PCI Express bus, while older graphics cards may have used AGP or PCI. Some mobile computers support an external graphics card through Thunderbolt (via PCIe).
Form factors
Monitor
A monitor is a standalone display commonly used with a desktop computer, or in conjunction with a laptop as an external display. The monitor is connected to the host through a display cable, such as HDMI, DisplayPort or VGA.
Older monitors use CRT technology, while modern monitors are typically flat panel displays using a plethora of technologies such as TFT-LCD, LED, OLED, and more.
Internal display
Almost all mobile devices incorporate an internal display. These internal displays are connected to the computer through an internal display interface such as LVDS or eDP. The chief advantage of these displays is their portability.
Terminal
Prior to the development of modern pixel-oriented displays, computer terminals were used, composed of a character-oriented display device known as a VDU and a computer keyboard.
These terminals were often monochromatic, and could only display text. Rudimentary graphics could be displayed through the use of ASCII art along with box-drawing characters. Teleprinters were the precursors to these devices.
Projector
A projector is a display that projects the computer image onto a surface through the use of a high power lamp. These displays are seen in use to show slideshow presentations or in movie screenings.
Technologies
Display technologies can be classified based on working principle, lighting (or lack thereof), pixel layout, and more.
Cathode-ray tube (CRT) CRT screens produce an image using an electron gun inside a vacuum tube, which fires electrons at a phosphor-coated screen to light up points on it in order to display images.
Liquid crystal display (LCD) An LCD is a display technology employing the use of liquid crystals to form images.
Thin-film transistor (TFT) A TFT refers to the thin layer of transistors used with an LCD.
LED-backlit LCD An LCD display which uses LEDs as a backlight. Prior to LED-based backlighting, cold cathode fluorescent (CCFL) tubes were used. By contrast, a true LED display uses an array of LEDs to form the image itself.
Organic Light Emitting Diode (OLED) Unlike an LED display, an OLED display does not use a backlight.
Electronic paper (e-ink) An e-ink display uses encapsulated pigment to form an image resembling printed paper, commonly used in e-book readers.
Color output
Monochromatic display
A monochrome display is a type of CRT that was common in the early days of computing, from the 1960s through the 1980s, before color monitors became popular.
They are still widely used in applications such as computerized cash register systems. Green screen was the common name for a monochrome monitor using a green "P1" phosphor screen.
Colored display
Color monitors, sometimes called RGB monitors, accept three separate signals (red, green, and blue), unlike a monochromatic display which accepts one. Color monitors implement the RGB color model by using three different phosphors that appear red, green, and blue when activated. By placing the phosphors directly next to each other, and activating them with different intensities, color monitors can create an unlimited number of colors. In practice, however, the real number of colors that any monitor can display is controlled by the video adapter.
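The RGB model described above can be sketched in a few lines; packing the three channel intensities into a 24-bit value mirrors common practice, though the function name is illustrative:

```python
# Each colour is three channel intensities (red, green, blue), commonly
# stored as 0-255 each and packed into one 24-bit pixel value.

def rgb_to_pixel(r, g, b):
    """Pack three 8-bit channel intensities into a 24-bit pixel value."""
    for channel in (r, g, b):
        if not 0 <= channel <= 255:
            raise ValueError("channel intensity out of range")
    return (r << 16) | (g << 8) | b

white = rgb_to_pixel(255, 255, 255)   # all three phosphors at full: 0xFFFFFF
red = rgb_to_pixel(255, 0, 0)         # only the red phosphor: 0xFF0000
```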
Auditory
A speaker is an output device that produces sound through an oscillating transducer called a driver. The equivalent input device is a microphone.
Speakers are plugged into a computer's sound card via a variety of interfaces, such as a phone connector for analog audio or SPDIF for digital audio. While speakers can be connected through cables, wireless speakers connect to the host device through radio technologies such as Bluetooth.
Speakers are most often used in pairs, which allows the speaker system to produce positional audio. When more than one pair is used, it is referred to as surround sound.
Certain models of computer include a built-in speaker, which may sacrifice audio quality in favor of size. For example, the built-in speaker of a smartphone allows users to listen to media without attaching an external speaker.
Interface
The interface between an auditory output device and a computer is the sound card. Sound cards may be included on a computer's motherboard, installed as an expansion card, or as a desktop unit.
The sound card may offer either an analog or digital output. In the latter case, output is often transmitted using SPDIF as either an electrical signal or an optical interface known as TOSLINK. Digital outputs are then decoded by an AV receiver.
In the case of wireless audio, the computer merely transmits a radio signal, and responsibility of decoding and output is shifted to the speaker.
Form factors
Computer speakers
While speakers can be used for any purpose, there are computer speakers which are built for computer use. These speakers are designed to sit on a desk, and as such, cannot be as large as conventional speakers.
Computer speakers may be powered via USB, and are most often connected through a 3.5mm phone connector.
PC speaker
The PC speaker is a simple loudspeaker built into IBM PC compatible computers. Unlike a speaker used with a sound card, the PC speaker is only meant to be driven with square waves, producing simple sounds such as beeps.
Modern computers utilize a piezoelectric buzzer or a small speaker as the PC speaker.
PC speakers are used during Power-on self-test to identify errors during the computer's boot process, without needing a video output device to be present and functional.
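The square-wave output described above is easy to illustrate: the speaker is only ever fully on or fully off, so a tone is just a rapid on/off toggle. A minimal sketch (the sample rate and frequency are illustrative):

```python
# Generate square-wave samples like those driving a PC speaker:
# 1 = speaker cone pushed out, 0 = at rest; the toggle rate sets the pitch.

SAMPLE_RATE = 8000   # samples per second (illustrative)

def square_wave(freq_hz, n_samples):
    """Return n_samples of a square wave at freq_hz."""
    period = SAMPLE_RATE / freq_hz            # samples per full cycle
    return [1 if (i % period) < period / 2 else 0 for i in range(n_samples)]

# A 1000 Hz beep at 8000 samples/s toggles every 4 samples:
beep = square_wave(1000, 16)
# beep == [1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
```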
Studio monitor
A studio monitor is a speaker used in a studio environment. These speakers are optimized for accuracy: a monitor produces a flat (linear) frequency response which neither emphasizes nor de-emphasizes particular frequencies.
Headphones
Headphones, earphones, and earpieces are a kind of speaker which is supported either on the user's head or in the user's ear.
Unlike a speaker, headphones are not meant to be audible to people nearby, which suits them for use in public, in offices, or in other quiet environments.
Noise-cancelling headphones are built with ambient noise reduction capabilities which may employ active noise cancelling.
Technology
Loudspeakers are composed of several components within an enclosure, such as several drivers, active amplifiers, crossovers, and other electronics. Multiple drivers are used to reproduce the full frequency range of human hearing, with tweeters producing high pitches and woofers producing low pitches. Full-range speakers use only one driver to produce as much of a frequency response as possible.
While Hi-Fi speakers attempt to produce high quality sound, computer speakers may compromise on these aspects due to their limited size and the need to be inexpensive; as a result, they often use full-range drivers.
Tactile
Braille display
A refreshable braille display outputs braille characters through the use of pins raised out of holes on its surface. It is ordinarily used by visually impaired individuals, typically driven by screen-reader software, as a tactile alternative to visual output.
Haptic technology
Haptic technology involves the use of vibration and other motion to induce a sense of touch. Haptic technology was introduced in the late 1990s for use in game controllers, to provide tactile feedback while a user is playing a video game. Haptic feedback has seen further uses in the automotive field, aircraft simulation systems, and brain-computer interfaces.
In mobile devices, Apple added haptic technology to various devices, marketed as 3D Touch and Force Touch. In this form, several devices could sense the amount of force exerted on their touchscreens, while MacBooks could sense two levels of force on their touchpads, producing a haptic sensation in response.
Printing devices
Printer
A printer is a device that outputs data to be put on a physical item, usually a piece of paper. Printers operate by transferring ink onto this medium in the form of the image received from the host.
Early printers could only print text, but later developments allowed printing of graphics. Modern printers can receive data in multiple forms like vector graphics, as an image, a program written in a page description language, or a string of characters.
Multiple types of printers exist:
Inkjet printers An inkjet printer ejects tiny droplets of ink onto the printing medium via a series of nozzles on a printing head.
Laser printers A laser printer uses a laser to selectively discharge a charged photosensitive drum, marking the points where toner will stick before it is transferred and fused onto the medium.
Thermal printers A printer which selectively heats a roll of thermally sensitive paper, darkening it to form the image. Most often seen in retail stores to print receipts.
Dot matrix printers A printer which uses impact to transfer ink from a ribbon to the medium.
Plotter
A plotter is a type of printer used to print vector graphics. Instead of drawing pixels onto the printing medium, the plotter draws lines, which may be done with a writing implement such as a pencil or pen.
Teleprinter
A teleprinter or teletypewriter (TTY) is a type of printer that is meant for sending and receiving messages. Before displays were used to display data visually, early computers would only have a teleprinter for use to access the system console. As the operator would enter commands into its keyboard, the teleprinter would output the results onto a piece of paper. The teleprinter would ultimately be succeeded by a computer terminal, which had a display instead of a printer.
Headless operation
A computer can still function without an output device, as is commonly done with servers, where the primary interaction is typically over a data network. A number of protocols exist over serial ports or LAN cables to determine operational status, and to gain control over low-level configuration from a remote location without having a local display device. If the server is configured with a video output, it is often possible to connect a temporary display device for maintenance or administration purposes while the server continues to operate normally; sometimes several servers are multiplexed to a single display device through a KVM switch or equivalent.
Some methods to use remote systems are:
Remote access The computer's console can be accessed through a network connection such as the Internet, using protocols such as telnet or SSH.
Remote desktop Allows a graphical user interface to be accessed through remote access even without a monitor.
KVM switch Multiple computers are connected to a single display device which can be switched between computers.
Serial port A serial console can be connected to access the device's console.
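The remote-access idea above can be sketched as a toy text protocol over TCP: a headless server answers a status query from an administrator's machine with no local display involved. The "STATUS" command and the replies are made up for illustration; they are not part of any real protocol such as telnet or SSH.

```python
# Sketch of headless management over a network: a server exposes a tiny,
# made-up text protocol on a TCP port, and a client queries it remotely.
import socket
import threading

def handle_one(srv):
    """Accept a single connection and answer one command."""
    conn, _ = srv.accept()
    cmd = conn.recv(1024).decode().strip()
    reply = "OK: running" if cmd == "STATUS" else "ERR: unknown command"
    conn.sendall(reply.encode())
    conn.close()

def query(port, cmd):
    """Connect to the server and send one command, returning its reply."""
    cli = socket.create_connection(("127.0.0.1", port))
    cli.sendall(cmd.encode())
    reply = cli.recv(1024).decode()
    cli.close()
    return reply

# Bind before starting the handler thread so the client cannot race it.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))       # port 0: let the OS pick a free port
port = srv.getsockname()[1]
srv.listen(1)

t = threading.Thread(target=handle_one, args=(srv,))
t.start()
answer = query(port, "STATUS")
t.join()
srv.close()
```

Real systems use established protocols (SSH, IPMI, serial consoles) instead of an ad-hoc exchange like this, but the structure — a listening service and a remote client, with no display attached — is the same.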
Miller index

Miller indices form a notation system in crystallography for lattice planes in crystal (Bravais) lattices.
In particular, a family of lattice planes of a given (direct) Bravais lattice is determined by three integers h, k, and ℓ, the Miller indices. They are written (hkℓ), and denote the family of (parallel) lattice planes (of the given Bravais lattice) orthogonal to g = hb1 + kb2 + ℓb3, where b1, b2, b3 are the basis or primitive translation vectors of the reciprocal lattice for the given Bravais lattice. (Note that the plane is not always orthogonal to the linear combination of direct or original lattice vectors ha1 + ka2 + ℓa3 because the direct lattice vectors need not be mutually orthogonal.) This is based on the fact that a reciprocal lattice vector g (the vector indicating a reciprocal lattice point from the reciprocal lattice origin) is the wavevector of a plane wave in the Fourier series of a spatial function (e.g., electronic density function) whose periodicity follows the original Bravais lattice, so wavefronts of the plane wave are coincident with parallel lattice planes of the original lattice.

Since a measured scattering vector in X-ray crystallography, Δk = k_out − k_in, with k_out as the outgoing (scattered from a crystal lattice) X-ray wavevector and k_in as the incoming (toward the crystal lattice) X-ray wavevector, is equal to a reciprocal lattice vector g as stated by the Laue equations, the measured scattered X-ray peak at each measured scattering vector is marked by Miller indices. By convention, negative integers are written with a bar, as in 3̄ for −3. The integers are usually written in lowest terms, i.e. their greatest common divisor should be 1.

Miller indices are also used to designate reflections in X-ray crystallography. In this case the integers are not necessarily in lowest terms, and can be thought of as corresponding to planes spaced such that the reflections from adjacent planes would have a phase difference of exactly one wavelength (2π), regardless of whether there are atoms on all these planes or not.
There are also several related notations:
the notation {hkℓ} denotes the set of all planes that are equivalent to (hkℓ) by the symmetry of the lattice.
In the context of crystal directions (not planes), the corresponding notations are:
[hkℓ], with square instead of round brackets, denotes a direction in the basis of the direct lattice vectors instead of the reciprocal lattice; and
similarly, the notation ⟨hkℓ⟩ denotes the set of all directions that are equivalent to [hkℓ] by symmetry.
Note that for Laue–Bragg interferences, hkℓ lacks any bracketing when designating a reflection.
Miller indices were introduced in 1839 by the British mineralogist William Hallowes Miller, although an almost identical system (Weiss parameters) had already been used by German mineralogist Christian Samuel Weiss since 1817. The method was also historically known as the Millerian system, and the indices as Millerian, although this is now rare.
The Miller indices are defined with respect to any choice of unit cell and not only with respect to primitive basis vectors, as is sometimes stated.
Definition
There are two equivalent ways to define the meaning of the Miller indices: via a point in the reciprocal lattice, or as the inverse intercepts along the lattice vectors. Both definitions are given below. In either case, one needs to choose the three lattice vectors a1, a2, and a3 that define the unit cell (note that the conventional unit cell may be larger than the primitive cell of the Bravais lattice, as the examples below illustrate). Given these, the three primitive reciprocal lattice vectors are also determined (denoted b1, b2, and b3).
Then, given the three Miller indices h, k, ℓ, (hkℓ) denotes planes orthogonal to the reciprocal lattice vector:

g = hb1 + kb2 + ℓb3
That is, (hkℓ) simply indicates a normal to the planes in the basis of the primitive reciprocal lattice vectors. Because the coordinates are integers, this normal is itself always a reciprocal lattice vector. The requirement of lowest terms means that it is the shortest reciprocal lattice vector in the given direction.
Equivalently, (hkℓ) denotes a plane that intercepts the three points a1/h, a2/k, and a3/ℓ, or some multiple thereof. That is, the Miller indices are proportional to the inverses of the intercepts of the plane, in the basis of the lattice vectors. If one of the indices is zero, it means that the planes do not intersect that axis (the intercept is "at infinity").
Considering only (hkℓ) planes intersecting one or more lattice points (the lattice planes), the perpendicular distance d between adjacent lattice planes is related to the (shortest) reciprocal lattice vector orthogonal to the planes by the formula: d = 2π/|g|.
The related notation [hkℓ] denotes the direction ha1 + ka2 + ℓa3.
That is, it uses the direct lattice basis instead of the reciprocal lattice. Note that [hkℓ] is not generally normal to the (hkℓ) planes, except in a cubic lattice as described below.
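The reciprocal basis underlying these definitions can be computed directly from the direct lattice vectors. The following is a minimal pure-Python sketch, using the physics convention with a factor of 2π (i.e. b1 = 2π(a2 × a3)/(a1 · a2 × a3), and cyclically); the function names are illustrative, not from the source.

```python
# Sketch: reciprocal lattice vectors from direct lattice vectors, using
# b1 = 2*pi*(a2 x a3)/V, b2 = 2*pi*(a3 x a1)/V, b3 = 2*pi*(a1 x a2)/V,
# where V = a1 . (a2 x a3) is the (signed) cell volume.
import math

def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def reciprocal_basis(a1, a2, a3):
    vol = dot(a1, cross(a2, a3))
    b1 = tuple(2 * math.pi * c / vol for c in cross(a2, a3))
    b2 = tuple(2 * math.pi * c / vol for c in cross(a3, a1))
    b3 = tuple(2 * math.pi * c / vol for c in cross(a1, a2))
    return b1, b2, b3

# Simple cubic cell with lattice constant a = 2: the reciprocal basis is
# orthogonal with length 2*pi/a = pi.
b1, b2, b3 = reciprocal_basis((2, 0, 0), (0, 2, 0), (0, 0, 2))
```

The construction guarantees ai · bj = 2π δij, which is exactly what makes g = hb1 + kb2 + ℓb3 normal to the (hkℓ) planes.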
Case of cubic structures
For the special case of simple cubic crystals, the lattice vectors are orthogonal and of equal length (usually denoted a), as are those of the reciprocal lattice. Thus, in this common case, the Miller indices (hkℓ) and [hkℓ] both simply denote normals/directions in Cartesian coordinates.
For cubic crystals with lattice constant a, the spacing d between adjacent (hkℓ) lattice planes is (from above)

d = a / √(h² + k² + ℓ²).
Because of the symmetry of cubic crystals, it is possible to change the place and sign of the integers and have equivalent directions and planes:
Indices in angle brackets such as ⟨100⟩ denote a family of directions which are equivalent due to symmetry operations, such as [100], [010], [001] or the negative of any of those directions.
Indices in curly brackets or braces such as {100} denote a family of plane normals which are equivalent due to symmetry operations, much the way angle brackets denote a family of directions.
For face-centered cubic and body-centered cubic lattices, the primitive lattice vectors are not orthogonal. However, in these cases the Miller indices are conventionally defined relative to the lattice vectors of the cubic supercell and hence are again simply the Cartesian directions.
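The cubic spacing formula is easy to check numerically. This is a minimal sketch; the lattice constant a = 1 and the function name are illustrative choices, not values from the source.

```python
# Sketch: interplanar spacing for a cubic lattice,
#   d = a / sqrt(h^2 + k^2 + l^2),
# evaluated for a = 1 as an illustration.
import math

def d_cubic(a, h, k, l):
    return a / math.sqrt(h*h + k*k + l*l)

d100 = d_cubic(1.0, 1, 0, 0)   # equals a
d110 = d_cubic(1.0, 1, 1, 0)   # equals a / sqrt(2)
```

Note how higher indices give smaller spacings, consistent with the lowest-terms requirement selecting the widest-spaced family of planes in a given direction.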
Case of hexagonal and rhombohedral structures
With hexagonal and rhombohedral lattice systems, it is possible to use the Bravais–Miller system, which uses four indices (h k i ℓ) that obey the constraint
h + k + i = 0.
Here h, k and ℓ are identical to the corresponding Miller indices, and i is a redundant index.
This four-index scheme for labeling planes in a hexagonal lattice makes permutation symmetries apparent. For example, the similarity between (110) ≡ (112̄0) and (12̄0) ≡ (12̄10) is more obvious when the redundant index is shown.
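Because i is fully determined by the constraint h + k + i = 0, converting from three-index Miller to four-index Bravais–Miller plane indices is a one-line computation; the function name below is illustrative.

```python
# Sketch: three-index Miller plane indices (h k l) to four-index
# Bravais-Miller indices (h k i l) for a hexagonal lattice, where the
# redundant index satisfies h + k + i = 0, i.e. i = -(h + k).

def to_bravais_miller(h, k, l):
    return (h, k, -(h + k), l)

plane = to_bravais_miller(1, 1, 0)
```

Going the other way, one simply drops i; nothing is lost, since i carries no independent information.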
In the figure at right, the (001) plane has a 3-fold symmetry: it remains unchanged by a rotation of 1/3 turn (2π/3 rad, 120°). The [100], [010] and the [1̄1̄0] directions are really similar. If S is the intercept of the plane with the [1̄1̄0] axis, then
i = 1/S.
There are also ad hoc schemes (e.g. in the transmission electron microscopy literature) for indexing hexagonal lattice vectors (rather than reciprocal lattice vectors or planes) with four indices. However they do not operate by similarly adding a redundant index to the regular three-index set.
For example, the reciprocal lattice vector (hkℓ) as suggested above can be written in terms of reciprocal lattice vectors as hb1 + kb2 + ℓb3. For hexagonal crystals this may be expressed in terms of direct-lattice basis-vectors a1, a2 and a3 as

g = (2/(3a²))(2h + k)a1 + (2/(3a²))(h + 2k)a2 + (1/c²)ℓa3

Hence zone indices of the direction perpendicular to plane (hkℓ) are, in suitably normalized triplet form, simply [2h + k, h + 2k, ℓ(3/2)(a/c)²]. When four indices are used for the zone normal to plane (hkℓ), however, the literature often uses [h, k, −(h + k), ℓ(3/2)(a/c)²] instead. As can be seen, four-index zone indices in square or angle brackets sometimes mix a single direct-lattice index on the right with reciprocal-lattice indices (normally in round or curly brackets) on the left.
And, note that hexagonal interplanar distances take the form

1/d² = (4/3)(h² + hk + k²)/a² + ℓ²/c²
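The hexagonal interplanar spacing relation 1/d² = (4/3)(h² + hk + k²)/a² + ℓ²/c² can be sketched directly; the lattice parameters used below are arbitrary illustrative values.

```python
# Sketch: interplanar spacing for a hexagonal lattice with parameters a, c:
#   1/d^2 = (4/3) * (h^2 + h*k + k^2) / a^2 + l^2 / c^2
import math

def d_hexagonal(a, c, h, k, l):
    inv_d2 = (4.0 / 3.0) * (h*h + h*k + k*k) / (a*a) + (l*l) / (c*c)
    return 1.0 / math.sqrt(inv_d2)

d_basal = d_hexagonal(1.0, 2.0, 0, 0, 1)   # basal (001) planes: d = c
```

As a sanity check, the (001) family depends only on c, and any family with ℓ = 0 depends only on a, matching the geometry of the hexagonal cell.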
Crystallographic planes and directions
Crystallographic directions are lines linking nodes (atoms, ions or molecules) of a crystal. Similarly, crystallographic planes are planes linking nodes. Some directions and planes have a higher density of nodes; these dense planes have an influence on the behavior of the crystal:
optical properties: in condensed matter, light "jumps" from one atom to the other via Rayleigh scattering; the velocity of light thus varies with direction, depending on whether the atoms are close together or far apart; this gives rise to birefringence
adsorption and reactivity: adsorption and chemical reactions can occur at atoms or molecules on crystal surfaces, these phenomena are thus sensitive to the density of nodes;
surface tension: the condensation of a material means that the atoms, ions or molecules are more stable if they are surrounded by other similar species; the surface tension of an interface thus varies according to the density on the surface
Pores and crystallites tend to have straight grain boundaries following dense planes
cleavage
dislocations (plastic deformation)
the dislocation core tends to spread on dense planes (the elastic perturbation is "diluted"); this reduces the friction (Peierls–Nabarro force), the sliding occurs more frequently on dense planes;
the perturbation carried by the dislocation (Burgers vector) is along a dense direction: the shift of one node in a dense direction is a lesser distortion;
the dislocation line tends to follow a dense direction, the dislocation line is often a straight line, a dislocation loop is often a polygon.
For all these reasons, it is important to determine the planes and thus to have a notation system.
Integer versus irrational Miller indices: Lattice planes and quasicrystals
Miller indices are by definition integers, and this constraint is physically significant. To understand this, suppose that we allow a plane (abc) where the Miller "indices" a, b and c (defined as above) are not necessarily integers.
If a, b and c have rational ratios, then the same family of planes can be written in terms of integer indices (hkℓ) by scaling a, b and c appropriately: divide by the largest of the three numbers, and then multiply by the least common denominator. Thus, integer Miller indices implicitly include indices with all rational ratios. The reason why planes where the components (in the reciprocal-lattice basis) have rational ratios are of special interest is that these are the lattice planes: they are the only planes whose intersections with the crystal are 2d-periodic.
For a plane (abc) where a, b and c have irrational ratios, on the other hand, the intersection of the plane with the crystal is not periodic. It forms an aperiodic pattern known as a quasicrystal. This construction corresponds precisely to the standard "cut-and-project" method of defining a quasicrystal, using a plane with irrational-ratio Miller indices. (Although many quasicrystals, such as the Penrose tiling, are formed by "cuts" of periodic lattices in more than three dimensions, involving the intersection of more than one such hyperplane.)
Telogen effluvium

Telogen effluvium is a scalp disorder characterized by the thinning or shedding of hair resulting from the early entry of hairs into the telogen phase (the resting phase of the hair follicle). It is in this phase that telogen hairs begin to shed at an increased rate; normally the approximate rate of hair loss (having no effect on one's appearance) is 125 hairs per day.
There are 5 potential alterations in the hair cycle that could lead to this shedding: immediate anagen release, delayed anagen release, short anagen syndrome, immediate telogen release, and delayed telogen release.
Immediate anagen release occurs when follicles leave anagen and are stimulated to enter telogen prematurely. The effects become visible 2–3 months later with increased telogen effluvium.
Delayed anagen release, most commonly associated with pregnancy, involves the prolongation of anagen under the effect of pregnancy hormones, resulting in delayed but synchronous and heavy postpartum hair shedding.
Short anagen syndrome is characterized by an idiopathic and persistent telogen hair shedding, as well as the inability to grow hair long. This is a result of the shortening of the duration of anagen, meaning a greater number of telogen hairs at any given time, and is responsible for the majority of chronic TE cases.
Immediate telogen release generally occurs with drug-induced shortening of telogen leading to the premature reentrance of follicles to anagen, which causes a massive release of club (telogen) hairs. Drugs such as minoxidil can precipitate immediate telogen release.
Delayed telogen release involves a prolonged telogen phase followed by a delayed transition to anagen. This occurs in animals with synchronous hair cycles that shed their hair or winter coats seasonally. This is also sometimes responsible for seasonal hair loss in humans.
Emotional or physiological stress may result in an alteration of the normal hair cycle and cause the disorder, with potential causes including eating disorders, crash diets, pregnancy and childbirth, chronic illness, major surgery, anemia, severe emotional disorders, hypothyroidism, and drugs.
Diagnostic tests, which may be performed to verify the diagnosis, include a trichogram, trichoscopy and biopsy. Effluvium can present with similar appearance to alopecia totalis, with further distinction by clinical course, microscopic examination of plucked follicles, or biopsy of the scalp. Histology would show telogen hair follicles in the dermis with minimal inflammation in effluvium, and dense peribulbar lymphocytic infiltrate in alopecia totalis.
Vitamin D levels may also play a role in the normal hair cycle.
Many new cosmetic treatments have been reported, including Stemoxydine, Nioxin, minoxidil, and a leave-on technology combination: caffeine, niacinamide, panthenol, dimethicone, and an acrylate polymer (CNPDA). This treatment has been shown to increase the diameter of existing, individual scalp hair fibres by 2–5 μm, yielding a significant increase of approximately 10% in the cross-sectional area of each hair. Additionally, CNPDA-thickened hairs demonstrate the altered mechanical properties of thicker fibres: increased suppleness/pliability and increased ability to withstand force without breaking.
Software prototyping

Software prototyping is the activity of creating prototypes of software applications, i.e., incomplete versions of the software program being developed. It is an activity that can occur in software development and is comparable to prototyping as known from other fields, such as mechanical engineering or manufacturing.
A prototype typically simulates only a few aspects of, and may be completely different from, the final product.
Prototyping has several benefits: the software designer and implementer can get valuable feedback from the users early in the project. The client and the contractor can compare if the software made matches the software specification, according to which the software program is built. It also allows the software engineer some insight into the accuracy of initial project estimates and whether the deadlines and milestones proposed can be successfully met. The degree of completeness and the techniques used in prototyping have been in development and debate since its proposal in the early 1970s.
Overview
The purpose of a prototype is to allow users of the software to evaluate developers' proposals for the design of the eventual product by actually trying them out, rather than having to interpret and evaluate the design based on descriptions. Software prototyping provides an understanding of the software's functions and potential threats or issues. Prototyping can also be used by end users to describe and prove requirements that have not been considered, and that can be a key factor in the commercial relationship between developers and their clients. Interaction design in particular makes heavy use of prototyping with that goal.
This process is in contrast with the 1960s and 1970s monolithic development cycle of building the entire program first and then working out any inconsistencies between design and implementation, which led to higher software costs and poor estimates of time and cost. The monolithic approach has been dubbed the "Slaying the (software) Dragon" technique, since it assumes that the software designer and developer is a single hero who has to slay the entire dragon alone. Prototyping can also avoid the great expense and difficulty of having to change a finished software product.
The practice of prototyping is one of the points Frederick P. Brooks makes in his 1975 book The Mythical Man-Month and his 10-year anniversary article "No Silver Bullet".
An early example of large-scale software prototyping was the implementation of NYU's Ada/ED translator for the Ada programming language. It was implemented in SETL with the intent of producing an executable semantic model for the Ada language, emphasizing clarity of design and user interface over speed and efficiency. The NYU Ada/ED system was the first validated Ada implementation, certified on April 11, 1983.
Outline
The process of prototyping involves the following steps:
Identify basic requirements
Determine basic requirements including the input and output information desired. Details, such as security, can typically be ignored.
Develop initial prototype
The initial prototype is developed that includes only user interfaces. (See Horizontal Prototype, below)
Review
The customers, including end-users, examine the prototype and provide feedback on potential additions or changes.
Revise and enhance the prototype
Using the feedback both the specifications and the prototype can be improved. Negotiation about what is within the scope of the contract/product may be necessary. If changes are introduced then a repeat of steps #3 and #4 may be needed.
Dimensions
Nielsen summarizes the various dimensions of prototypes in his book Usability Engineering:
Horizontal prototype
A common term for a user interface prototype is the horizontal prototype. It provides a broad view of an entire system or subsystem, focusing on user interaction more than low-level system functionality, such as database access. Horizontal prototypes are useful for:
Confirmation of user interface requirements and system scope,
Demonstration version of the system to obtain buy-in from the business,
Develop preliminary estimates of development time, cost and effort.
Vertical prototype
A vertical prototype is an enhanced complete elaboration of a single subsystem or function. It is useful for obtaining detailed requirements for a given function, with the following benefits:
Refinement of database design,
Obtain information on data volumes and system interface needs, for network sizing and performance engineering,
Clarify complex requirements by drilling down to actual system functionality.
Types
Software prototyping has many variants. However, all of the methods are in some way based on two major forms of prototyping: throwaway prototyping and evolutionary prototyping.
Throwaway prototyping
Also called close-ended prototyping. Throwaway or rapid prototyping refers to the creation of a model that will eventually be discarded rather than becoming part of the final delivered software. After preliminary requirements gathering is accomplished, a simple working model of the system is constructed to visually show the users what their requirements may look like when they are implemented into a finished system.
It is also a form of rapid prototyping.
Rapid prototyping involves creating a working model of various parts of the system at a very early stage, after a relatively short investigation. The method used in building it is usually quite informal, the most important factor being the speed with which the model is provided. The model then becomes the starting point from which users can re-examine their expectations and clarify their requirements. When this goal has been achieved, the prototype model is 'thrown away', and the system is formally developed based on the identified requirements.
The most obvious reason for using throwaway prototyping is that it can be done quickly. If the users can get quick feedback on their requirements, they may be able to refine them early in the development of the software. Making changes early in the development lifecycle is extremely cost effective since there is nothing at that point to redo. If a project is changed after a considerable amount of work has been done then small changes could require large efforts to implement since software systems have many dependencies. Speed is crucial in implementing a throwaway prototype, since with a limited budget of time and money little can be expended on a prototype that will be discarded.
Another strength of throwaway prototyping is its ability to construct interfaces that the users can test. The user interface is what the user sees as the system, and by seeing it in front of them, it is much easier to grasp how the system will function.
…it is asserted that revolutionary rapid prototyping is a more effective manner in which to deal with user requirements-related issues, and therefore a greater enhancement to software productivity overall. Requirements can be identified, simulated, and tested far more quickly and cheaply when issues of evolvability, maintainability, and software structure are ignored. This, in turn, leads to the accurate specification of requirements, and the subsequent construction of a valid and usable system from the user's perspective, via conventional software development models.
Prototypes can be classified according to the fidelity with which they resemble the actual product in terms of appearance, interaction and timing. One method of creating a low fidelity throwaway prototype is paper prototyping. The prototype is implemented using paper and pencil, and thus mimics the function of the actual product, but does not look at all like it. Another method to easily build high fidelity throwaway prototypes is to use a GUI Builder and create a click dummy, a prototype that looks like the goal system, but does not provide any functionality.
The usage of storyboards, animatics or drawings is not exactly the same as throwaway prototyping, but certainly falls within the same family. These are non-functional implementations but show how the system will look.
Summary: In this approach the prototype is constructed with the idea that it will be discarded and the final system will be built from scratch. The steps in this approach are:
Write preliminary requirements
Design the prototype
User experiences/uses the prototype, specifies new requirements
Repeat if necessary
Write the final requirements
Evolutionary prototyping
Evolutionary prototyping (also known as breadboard prototyping) is quite different from throwaway prototyping. The main goal when using evolutionary prototyping is to build a very robust prototype in a structured manner and constantly refine it. The reason for this approach is that the evolutionary prototype, when built, forms the heart of the new system, and the improvements and further requirements will then be built on it.
When developing a system using evolutionary prototyping, the system is continually refined and rebuilt.
"…evolutionary prototyping acknowledges that we do not understand all the requirements and builds only those that are well understood."
This technique allows the development team to add features, or make changes that couldn't be conceived during the requirements and design phase.
For a system to be useful, it must evolve through use in its intended operational environment. A product is never "done;" it is always maturing as the usage environment changes…we often try to define a system using our most familiar frame of reference—where we are now. We make assumptions about the way business will be conducted and the technology base on which the business will be implemented. A plan is enacted to develop the capability, and, sooner or later, something resembling the envisioned system is delivered.
Evolutionary prototypes have an advantage over throwaway prototypes in that they are functional systems. Although they may not have all the features the users have planned, they may be used on an interim basis until the final system is delivered.
"It is not unusual within a prototyping environment for the user to put an initial prototype to practical use while waiting for a more developed version…The user may decide that a 'flawed' system is better than no system at all."
In evolutionary prototyping, developers can focus themselves to develop parts of the system that they understand instead of working on developing a whole system.
To minimize risk, the developer does not implement poorly understood features. The partial system is sent to customer sites. As users work with the system, they detect opportunities for new features and give requests for these features to developers. Developers then take these enhancement requests along with their own and use sound configuration-management practices to change the software-requirements specification, update the design, recode and retest.
Incremental prototyping
The final product is built as separate prototypes. At the end, the separate prototypes are merged into an overall design. Incremental prototyping reduces the time gap between the user and the software developer.
Extreme prototyping
Extreme prototyping as a development process is used especially for developing web applications. Basically, it breaks down web development into three phases, each one based on the preceding one. The first phase is a static prototype that consists mainly of HTML pages. In the second phase, the screens are programmed and fully functional using a simulated services layer. In the third phase, the services are implemented.
"The process is called Extreme Prototyping to draw attention to the second phase of the process, where a fully functional UI is developed with very little regard to the services other than their contract."
Advantages
There are many advantages to using prototyping in software development – some tangible, some abstract.
Reduced time and costs: Prototyping can improve the quality of requirements and specifications provided to developers. Because changes cost exponentially more to implement as they are detected later in development, the early determination of what the user really wants can result in faster and less expensive software.
Improved and increased user involvement: Prototyping requires user involvement and allows them to see and interact with a prototype allowing them to provide better and more complete feedback and specifications. The presence of the prototype being examined by the user prevents many misunderstandings and miscommunications that occur when each side believes the other understands what they said. Since users know the problem domain better than anyone on the development team does, increased interaction can result in a final product that has greater tangible and intangible quality. The final product is more likely to satisfy the user's desire for look, feel and performance.
Disadvantages
Using, or perhaps misusing, prototyping can also have disadvantages.
Insufficient analysis: The focus on a limited prototype can distract developers from properly analyzing the complete project. This can lead to overlooking better solutions, preparation of incomplete specifications or the conversion of limited prototypes into poorly engineered final projects that are hard to maintain. Further, since a prototype is limited in functionality it may not scale well if the prototype is used as the basis of a final deliverable, which may not be noticed if developers are too focused on building a prototype as a model.
User confusion of prototype and finished system: Users can begin to think that a prototype, intended to be thrown away, is actually a final system that merely needs to be finished or polished. (They are, for example, often unaware of the effort needed to add error-checking and security features which a prototype may not have.) This can lead them to expect the prototype to accurately model the performance of the final system when this is not the intent of the developers. Users can also become attached to features that were included in a prototype for consideration and then removed from the specification for a final system. If users are able to require all proposed features be included in the final system this can lead to conflict.
Developer misunderstanding of user objectives: Developers may assume that users share their objectives (e.g. to deliver core functionality on time and within budget), without understanding wider commercial issues. For example, user representatives attending Enterprise software (e.g. PeopleSoft) events may have seen demonstrations of "transaction auditing" (where changes are logged and displayed in a difference grid view) without being told that this feature demands additional coding and often requires more hardware to handle extra database accesses. Users might believe they can demand auditing on every field, whereas developers might think this is feature creep because they have made assumptions about the extent of user requirements. If the developer has committed delivery before the user requirements were reviewed, developers are between a rock and a hard place, particularly if user management derives some advantage from their failure to implement requirements.
Developer attachment to prototype: Developers can also become attached to prototypes they have spent a great deal of effort producing; this can lead to problems, such as attempting to convert a limited prototype into a final system when it does not have an appropriate underlying architecture. (This may suggest that throwaway prototyping, rather than evolutionary prototyping, should be used.)
Excessive development time of the prototype: A key property of prototyping is the fact that it is supposed to be done quickly. If the developers lose sight of this fact, they may well try to develop a prototype that is too complex. When the prototype is thrown away, the precisely developed requirements that it provides may not yield a sufficient increase in productivity to make up for the time spent developing the prototype. Users can become stuck in debates over details of the prototype, holding up the development team and delaying the final product.
Expense of implementing prototyping: the start-up costs for building a development team focused on prototyping may be high. Many companies have development methodologies in place, and changing them can mean retraining, retooling, or both. Many companies tend to just begin prototyping without retraining their workers as much as they should.
A common problem with adopting prototyping technology is high expectations for productivity with insufficient effort behind the learning curve. In addition to training for the use of a prototyping technique, there is an often overlooked need for developing corporate and project specific underlying structure to support the technology. When this underlying structure is omitted, lower productivity can often result.
Applicability
It has been argued that prototyping, in some form or another, should be used all the time. However, prototyping is most beneficial in systems that will have many interactions with the users.
It has been found that prototyping is very effective in the analysis and design of on-line systems, especially for transaction processing, where the use of screen dialogs is much more in evidence. The greater the interaction between the computer and the user, the greater the benefit that can be obtained from building a quick system and letting the user play with it.
Systems with little user interaction, such as batch processing or systems that mostly do calculations, benefit little from prototyping. Sometimes, the coding needed to perform the system functions may be too intensive and the potential gains that prototyping could provide are too small.
Prototyping is especially good for designing good human–computer interfaces. "One of the most productive uses of rapid prototyping to date has been as a tool for iterative user requirements engineering and human–computer interface design."
Dynamic systems development method
Dynamic Systems Development Method (DSDM) is a framework for delivering business solutions that relies heavily upon prototyping as a core technique, and is itself ISO 9001 approved. It expands upon the commonly understood definition of a prototype. According to DSDM the prototype may be a diagram, a business process, or even a system placed into production. DSDM prototypes are intended to be incremental, evolving from simple forms into more comprehensive ones.
DSDM prototypes can sometimes be throwaway or evolutionary. Evolutionary prototypes may be evolved horizontally (breadth then depth) or vertically (each section is built in detail with additional iterations detailing subsequent sections). Evolutionary prototypes can eventually evolve into final systems.
The four categories of prototypes as recommended by DSDM are:
Business prototypes – used to design and demonstrate the business processes being automated.
Usability prototypes – used to define, refine, and demonstrate user interface design usability, accessibility, look and feel.
Performance and capacity prototypes – used to define, demonstrate, and predict how systems will perform under peak loads as well as to demonstrate and evaluate other non-functional aspects of the system (transaction rates, data storage volume, response time, etc.)
Capability/technique prototypes – used to develop, demonstrate, and evaluate a design approach or concept.
The DSDM lifecycle of a prototype is to:
Identify prototype
Agree to a plan
Create the prototype
Review the prototype
Operational prototyping
Operational prototyping was proposed by Alan Davis as a way to integrate throwaway and evolutionary prototyping with conventional system development. "It offers the best of both the quick-and-dirty and conventional-development worlds in a sensible manner. Designers develop only well-understood features in building the evolutionary baseline, while using throwaway prototyping to experiment with the poorly understood features."
Davis' belief is that to try to "retrofit quality onto a rapid prototype" is not the correct method when trying to combine the two approaches. His idea is to engage in an evolutionary prototyping methodology and rapidly prototype the features of the system after each evolution.
The specific methodology follows these steps:
An evolutionary prototype is constructed and made into a baseline using conventional development strategies, specifying and implementing only the requirements that are well understood.
Copies of the baseline are sent to multiple customer sites along with a trained prototyper.
At each site, the prototyper watches the user at the system.
Whenever the user encounters a problem or thinks of a new feature or requirement, the prototyper logs it. This frees the user from having to record the problem, and allows them to continue working.
After the user session is over, the prototyper constructs a throwaway prototype on top of the baseline system.
The user now uses the new system and evaluates it. If the new changes are not effective, the prototyper removes them.
If the user likes the changes, the prototyper writes feature-enhancement requests and forwards them to the development team.
The development team, with the change requests in hand from all the sites, then produces a new evolutionary prototype using conventional methods.
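The layering described in these steps can be sketched in Python (the class and command names are hypothetical, purely illustrative): experimental features live in a throwaway layer on top of the untouched baseline, so they can be removed cleanly (step 6) or promoted to the next evolutionary baseline (steps 7 and 8).

```python
class Baseline:
    """The evolutionary baseline: well-understood features only."""
    def handle(self, command: str) -> str:
        if command == "report":
            return "monthly report"
        raise KeyError(command)

class ThrowawayLayer:
    """Throwaway prototype built on top of the baseline at one site."""
    def __init__(self, baseline: Baseline):
        self.baseline = baseline
        self.experiments = {}

    def add_experiment(self, command, fn):
        # step 5: prototyper quickly adds a feature the user asked for
        self.experiments[command] = fn

    def remove_experiment(self, command):
        # step 6: ineffective changes are simply discarded
        self.experiments.pop(command, None)

    def handle(self, command: str) -> str:
        if command in self.experiments:
            return self.experiments[command]()
        return self.baseline.handle(command)   # baseline path untouched

site = ThrowawayLayer(Baseline())
site.add_experiment("export", lambda: "CSV export (experimental)")
print(site.handle("export"))   # experimental feature
print(site.handle("report"))   # baseline feature still works
```

Because the baseline is never modified in place, shipping a clean copy of it to the next round of sites (step 2) is always possible regardless of which experiments survive.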
Obviously, a key to this method is to have well trained prototypers available to go to the user sites. The operational prototyping methodology has many benefits in systems that are complex and have few known requirements in advance.
Evolutionary systems development
Evolutionary Systems Development is a class of methodologies that attempt to formally implement evolutionary prototyping. One particular type, called Systemscraft, is described by John Crinnion in his book Evolutionary Systems Development.
Systemscraft was designed as a 'prototype' methodology that should be modified and adapted to fit the specific environment in which it was implemented.
Systemscraft was not designed as a rigid 'cookbook' approach to the development process. It is now generally recognised[sic] that a good methodology should be flexible enough to be adjustable to suit all kinds of environment and situation...
The basis of Systemscraft, not unlike evolutionary prototyping, is to create a working system from the initial requirements and build upon it in a series of revisions. Systemscraft places heavy emphasis on traditional analysis being used throughout the development of the system.
Evolutionary rapid development
Evolutionary Rapid Development (ERD) was developed by the Software Productivity Consortium, a technology development and integration agent for the Information Technology Office of the Defense Advanced Research Projects Agency (DARPA).
Fundamental to ERD is the concept of composing software systems based on the reuse of components, the use of software templates and on an architectural template. Continuous evolution of system capabilities in rapid response to changing user needs and technology is highlighted by the evolvable architecture, representing a class of solutions. The process focuses on the use of small artisan-based teams integrating software and systems engineering disciplines working multiple, often parallel short-duration timeboxes with frequent customer interaction.
Key to the success of ERD-based projects is the parallel exploratory analysis and development of features, infrastructures, and components, together with the adoption of leading-edge technologies, enabling quick reaction to changes in technologies, the marketplace, or customer requirements.
To elicit customer/user input, frequent scheduled and ad hoc/impromptu meetings with the stakeholders are held. Demonstrations of system capabilities are held to solicit feedback before design/implementation decisions are solidified. Frequent releases (e.g., betas) are made available for use to provide insight into how the system could better support user and customer needs. This assures that the system evolves to satisfy existing user needs.
The design framework for the system is based on using existing published or de facto standards. The system is organized to allow for evolving a set of capabilities that includes considerations for performance, capacities, and functionality. The architecture is defined in terms of abstract interfaces that encapsulate the services and their implementation (e.g., COTS applications). The architecture serves as a template to be used for guiding development of more than a single instance of the system. It allows for multiple application components to be used to implement the services. A core set of functionality not likely to change is also identified and established.
The ERD process is structured to use demonstrated functionality rather than paper products as a way for stakeholders to communicate their needs and expectations. Central to this goal of rapid delivery is the use of the "timebox" method. Timeboxes are fixed periods of time in which specific tasks (e.g., developing a set of functionality) must be performed. Rather than allowing time to expand to satisfy some vague set of goals, the time is fixed (both in terms of calendar weeks and person-hours) and a set of goals is defined that realistically can be achieved within these constraints. To keep development from degenerating into a "random walk," long-range plans are defined to guide the iterations. These plans provide a vision for the overall system and set boundaries (e.g., constraints) for the project. Each iteration within the process is conducted in the context of these long-range plans.
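The timebox idea above can be made concrete with a small Python sketch (the dates, hours, and goal names are hypothetical): the calendar length and person-hour budget are fixed up front, and goals are selected to fit the box rather than the box being stretched to fit the goals.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Timebox:
    start: date
    weeks: int             # fixed in calendar time...
    person_hours: int      # ...and in effort; neither is extended
    goals: list = field(default_factory=list)

    @property
    def end(self) -> date:
        return self.start + timedelta(weeks=self.weeks)

    def plan(self, candidates: dict) -> list:
        """Select goals to fit the fixed budget (candidates: goal -> hours)."""
        budget = self.person_hours
        for goal, hours in candidates.items():
            if hours <= budget:          # goals bend to the box, not vice versa
                self.goals.append(goal)
                budget -= hours
        return self.goals

tb = Timebox(start=date(2024, 3, 4), weeks=3, person_hours=240)
tb.plan({"login screen": 120, "search": 100, "full reporting": 200})
print(tb.goals)   # "full reporting" does not fit and is deferred to a later iteration
```

The long-range plans the text mentions would correspond to the ordering of the candidate goals here: iterations pick from a vision-driven backlog rather than wandering in a "random walk."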
Once an architecture is established, software is integrated and tested on a daily basis. This allows the team to assess progress objectively and identify potential problems quickly. Since small amounts of the system are integrated at one time, diagnosing and removing the defect is rapid. User demonstrations can be held at short notice since the system is generally ready to exercise at all times.
Tools
Efficiently using prototyping requires that an organization have the proper tools and a staff trained to use those tools. Tools used in prototyping can vary from individual tools, such as 4th generation programming languages used for rapid prototyping, to complex integrated CASE tools. 4th generation visual programming languages like Visual Basic and ColdFusion are frequently used since they are cheap, well known and relatively easy and fast to use. CASE tools supporting requirements analysis, like the Requirements Engineering Environment (see below), are often developed or selected by the military or large organizations. Object-oriented tools, like LYMB from the GE Research and Development Center, are also being developed. Users may prototype elements of an application themselves in a spreadsheet.
As web-based applications continue to grow in popularity, so too, have the tools for prototyping such applications. Frameworks such as Bootstrap, Foundation, and AngularJS provide the tools necessary to quickly structure a proof of concept. These frameworks typically consist of a set of controls, interactions, and design guidelines that enable developers to quickly prototype web applications.
Screen generators, design tools, and software factories
Screen-generating programs are also commonly used; they enable prototypers to show users systems that do not function but demonstrate what the screens may look like. Developing human–computer interfaces can sometimes be the critical part of the development effort, since to the users the interface essentially is the system.
Software factories can generate code by combining ready-to-use modular components. This makes them ideal for prototyping applications, since this approach can quickly deliver programs with the desired behaviour, with a minimal amount of manual coding.
Application definition or simulation software
A new class of software called Application definition or simulation software enables users to rapidly build lightweight, animated simulations of another computer program, without writing code. Application simulation software allows both technical and non-technical users to experience, test, collaborate and validate the simulated program, and provides reports such as annotations, screenshot and schematics. As a solution specification technique, Application Simulation falls between low-risk, but limited, text or drawing-based mock-ups (or wireframes) sometimes called paper-based prototyping, and time-consuming, high-risk code-based prototypes, allowing software professionals to validate requirements and design choices early on, before development begins. In doing so, the risks and costs associated with software implementations can be dramatically reduced.
To simulate applications one can also use software that simulates real-world software programs for computer-based training, demonstration, and customer support, such as screencasting software as those areas are closely related.
Requirements Engineering Environment
"The Requirements Engineering Environment (REE), under development at Rome Laboratory since 1985, provides an integrated toolset for rapidly representing, building, and executing models of critical aspects of complex systems."
Requirements Engineering Environment is currently used by the United States Air Force to develop systems. It is:
an integrated set of tools that allows systems analysts to rapidly build functional, user interface, and performance prototype models of system components. These modeling activities are performed to gain a greater understanding of complex systems and lessen the impact that inaccurate requirement specifications have on cost and scheduling during the system development process. Models can be constructed easily, and at varying levels of abstraction or granularity, depending on the specific behavioral aspects of the model being exercised.
REE is composed of three parts. The first, called proto is a CASE tool specifically designed to support rapid prototyping. The second part is called the Rapid Interface Prototyping System or RIP, which is a collection of tools that facilitate the creation of user interfaces. The third part of REE is a user interface to RIP and proto that is graphical and intended to be easy to use.
Rome Laboratory, the developer of REE, intended it to support their internal requirements-gathering methodology. Their method has three main parts:
Elicitation from various sources (users, interfaces to other systems), specification, and consistency checking
Analysis that the needs of diverse users taken together do not conflict and are technically and economically feasible
Validation that requirements so derived are an accurate reflection of user needs.
In 1996, Rome Labs contracted Software Productivity Solutions (SPS) to further enhance REE to create "a commercial quality REE that supports requirements specification, simulation, user interface prototyping, mapping of requirements to hardware architectures, and code generation..." This system is named the Advanced Requirements Engineering Workstation or AREW.
Non-relational environments
Non-relational definition of data (e.g. using Caché or associative models) can help make end-user prototyping more productive by delaying or avoiding the need to normalize data at every iteration of a simulation. This may yield earlier/greater clarity of business requirements, though it does not specifically confirm that requirements are technically and economically feasible in the target production system.
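A minimal Python sketch of this idea (the record fields are hypothetical): records are stored as free-form documents whose fields can differ from record to record, so the simulated schema can change at every iteration without a normalization pass.

```python
# A toy document store: no tables, no fixed columns, no foreign keys.
store = []

def save(doc: dict):
    store.append(doc)

def find(**criteria):
    # Match on whatever fields the caller names; absent fields never match.
    return [d for d in store
            if all(d.get(k) == v for k, v in criteria.items())]

save({"type": "customer", "name": "Acme", "tier": "gold"})
save({"type": "customer", "name": "Beta Ltd"})            # no 'tier' field yet
save({"type": "order", "customer": "Acme", "total": 120})

print(find(type="customer"))
```

When the requirements stabilize, the accumulated documents reveal which fields and relationships a normalized production schema would actually need, which is exactly the clarity-of-requirements benefit the text describes.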
PSDL
PSDL is a prototype description language to describe real-time software.
The associated tool set is CAPS (Computer Aided Prototyping System).
Prototyping software systems with hard real-time requirements is challenging because timing constraints introduce implementation and hardware dependencies.
PSDL addresses these issues by introducing control abstractions that include declarative timing constraints. CAPS uses this information to automatically generate code and associated real-time schedules, monitor timing constraints during prototype execution, and simulate execution in proportional real time relative to a set of parameterized hardware models. It also provides default assumptions that enable execution of incomplete prototype descriptions, integrates prototype construction with a software reuse repository for rapidly realizing efficient implementations, and provides support for rapid evolution of requirements and designs.
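The flavour of a declarative timing constraint with runtime monitoring can be sketched in Python (this is an illustration of the general idea only, not PSDL or CAPS syntax; the operator name and deadline are hypothetical):

```python
import time

def timing_constraint(max_ms: float):
    """Attach a declared deadline to an operator; a monitor reports
    violations during prototype execution rather than failing silently."""
    def wrap(fn):
        def monitored(*args, **kwargs):
            t0 = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed_ms = (time.perf_counter() - t0) * 1000
            if elapsed_ms > max_ms:
                print(f"timing constraint violated: {fn.__name__} "
                      f"took {elapsed_ms:.1f} ms > {max_ms} ms")
            return result
        monitored.max_ms = max_ms   # the declared constraint stays inspectable
        return monitored
    return wrap

@timing_constraint(max_ms=50)
def sensor_filter(sample: float) -> float:
    # Stand-in for a real-time operator in the prototype.
    return sample * 0.9

sensor_filter(10)
```

In a PSDL/CAPS-style toolset the declared constraints would additionally drive static schedule generation; this sketch shows only the execution-monitoring half.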
Lath and plaster
Lath and plaster is a building process used to finish mainly interior dividing walls and ceilings. It consists of narrow strips of wood (laths) which are nailed horizontally across the wall studs or ceiling joists and then coated in plaster. The technique derives from an earlier, more primitive process called wattle and daub.
Lath and plaster largely fell out of favour in the U.K. after the introduction of plasterboard in the 1930s. In Canada and the United States, wood lath and plaster remained in use until the process was replaced by transitional methods followed by drywall (the North American term for plasterboard) in the mid-twentieth century.
Description
The wall or ceiling finishing process begins with wood or metal laths. These are narrow strips of wood, extruded metal, or split boards, nailed horizontally across the wall studs or ceiling joists. Each wall frame is covered in lath, tacked at the studs. Wood lath is typically about wide by long by thick. Each horizontal course of lath is spaced about away from its neighboring courses. Metal lath is available in by sheets.
In Canada and the United States the laths were generally sawn, but in the United Kingdom and its colonies, riven or split hardwood laths of random lengths and sizes were often used. Early American examples featured split beam construction, as did examples put up in rural areas of the U.S. and Canada well into the second half of the 19th century. Splitting the timber along its grain greatly improved the laths' strength and durability. As Americans and Canadians expanded west, saw mills were not always available to create neatly planed boards and the first crop of buildings in any new western or northern settlement would be put up with split beam lath. In some areas of the U.K. reed mat was also used as a lath.
Temporary lath guides are then placed vertically to the wall, usually at the studs. Lime or gypsum plaster is then applied, typically using a wooden board as the application tool. The applier drags the board upward over the wall, forcing the plaster into the gaps between the lath and leaving a layer on the front the depth of the temporary guides, typically about . A helper feeds new plaster onto the board, as the plaster is applied in quantity. When the wall is fully covered, the vertical lath "guides" are removed, and their "slots" are filled in, leaving a fairly uniform undercoat.
In three coat plastering it is standard to apply a second layer in the same fashion, leaving about of rough, sandy plaster (called a brown coat or browning (UK)). A smooth, white finish coat goes on last. After the plaster is completely dry, the walls are ready to be painted. In this article's photo ("lath seen from the back...") the curls of plaster are called keys and are necessary to keep the plaster on the lath. Traditional lime based mortar/plaster often incorporates horsehair which reinforces the plasterwork, thereby helping to prevent the keys from breaking away.
Historical transition
In addition to wood lath, various types of metal lath began to be used toward the end of the 19th century. Metal lath is categorized according to weight, type of ribbing, and whether the lath is galvanized or not. Metal lathing was spaced across a center, attached by tie wires using lathers' nippers. Sometimes, the mesh was dimpled to be self-furring.
In use as early as 1900, rock lath (also known as "button board," "plaster board" or "gypsum-board lath"), is a type of gypsum wall board (essentially an early form of drywall) with holes spaced regularly to provide a 'key' for wet plaster. Rock lath was typically produced in sheets sized . The purpose of the four-foot length is so that the sheet of lath exactly spans three interstud voids (overlapping half a stud at each end of a four-stud sequence in standard construction), the studs themselves being spaced apart on center (United States building code standard measurements). By the late 1930s, rock lath was the primary method used in residential plastering.
Lath and plaster methods have mostly been replaced with modern drywall or plasterboard, which is faster and less expensive to install. However, drywall possesses poorer sound-dampening qualities and can be easily damaged by moisture, whereas traditional lime-based plasters are resistant to moisture and provide excellent sound isolation.
Advantages
One continued advantage of using traditional lath is for ornamental or unusual shapes. For instance, building a rounded wall would be difficult if drywall were used exclusively, as drywall is not flexible enough to allow tight radii. Wire mesh, often used for exterior stucco, is also found in combination or replacement of lath and plaster which serves similar purpose.
Traditional lath and plaster (including rock and metal lath varieties) has superior sound-proofing qualities when used with lime or gypsum plaster, which is denser than modern drywall.
In many historic buildings lath and plaster ceilings play a major role for the prevention of fire spread. They are critical to the protection of horizontal elements such as timber joisted floors, including the flooring on top, which in terms of fire performance is often in a poor condition due to the presence of gaps.
Thorny devil
The thorny devil (Moloch horridus), also known commonly as the mountain devil, thorny lizard, thorny dragon, and moloch, is a species of lizard in the family Agamidae. The species is endemic to Australia. It is the sole species in the genus Moloch. It grows up to in total length (including tail), with females generally larger than males.
Taxonomy
The thorny devil was first described by the biologist John Edward Gray in 1841. While it is the only species contained in the genus Moloch, many taxonomists suspect another species might remain to be found in the wild. The thorny devil is only distantly related to the morphologically similar North American horned lizards of the genus Phrynosoma. This similarity is usually thought of as an example of convergent evolution.
The names given to this lizard reflect its appearance: the two large horned scales on its head complete the illusion of a dragon or devil. The name Moloch was used for a deity of the ancient Near East, usually depicted as a hideous beast. The thorny devil also has other nicknames people have given it such as the "devil lizard", "horned lizard", and the "thorny toad".
Description
The thorny devil grows up to in total length (including tail), and can live for 15 to 20 years. The females are larger than the males. Most specimens are coloured in camouflaging shades of desert browns and tans. These colours change from pale colours during warm weather to darker colours during cold weather. The thorny devil is covered entirely with conical spines that are mostly uncalcified.
An intimidating array of spikes covers the entire upper side of the body of the thorny devil. These thorny scales also help to defend it from predators. Camouflage and deception may also be used to evade predation. This lizard's unusual gait involves freezing and rocking as it moves about slowly in search of food, water, and mates.
The thorny devil also features a spiny "false head" on the back of its neck, and the lizard presents this to potential predators by dipping its real head. The "false head" is made of soft tissue.
The thorny devil's scales are ridged, enabling the animal to collect water by simply touching it with any part of the body, usually the limbs; capillary action transports the water to the mouth through channels in its skin. The thorny devil is also equipped to harvest moisture in the dry desert following nighttime's extremely low temperatures and the subsequent condensation of dew. The process involves moisture contact, their hydrophilic skin surface structures with capillaries, and an internal transport mechanism.
The lizard rubs its body against the moist substrate and shovels damp sand onto its back, the outer epidermis layer equipped to draw in cutaneous moisture.
The keratinous fibered epidermis is hydrophilic with hexagonal microstructures on the scale surfaces. When trace amounts of water contact its skin (pre-wetting) these microstructures fill with water, the skin surface becoming superhydrophilic. This allows moisture to spread across wider surface areas, yielding faster uptake, as water is collected via capillary action in small channels located between its scales.
Captured water is transported passively via capillary action in semi-tubular channels located beneath the partially overlapping scales, in an asymmetric and interconnected system that extends over the lizard's entire body surface. The channels terminate at the mouth where active ingestion (drinking) is observable by jaw movements when moisture is plentiful, e.g. water puddles.
The same hydrophilic moisture-harvesting physiology is characteristic in the Texas horned lizard (Phrynosoma cornutum), roundtail horned lizard (Phrynosoma modestum), desert horned lizard (Phrynosoma platyrhinos), Arabian toad-headed agama (Phrynocephalus arabicus), sunwatcher toadhead agama (Phrynocephalus helioscopus), Phrynocephalus horvathi, yellow-spotted agama (Trapelus flavimaculatus), Trapelus pallidus and desert agama (Trapelus mutabilis).
Distribution and habitat
The thorny devil usually lives in the arid scrubland and desert that covers most of central Australia, sandplain and sandridge desert in the deep interior and the mallee belt.
The habitat of the thorny devil coincides more with the regions of sandy loam soils than with a particular climate in Western Australia.
Self-defense
The thorny devil is covered in hard, rather sharp spines that dissuade attacks by predators by making it difficult to swallow. It also has a false head on its back. When it feels threatened by other animals, it lowers its head between its front legs, and then presents its false head. Predators that consume the thorny devil include wild birds and goannas.
Diet
The thorny devil mainly subsists on ants, especially Ochetellus flavipes and other species in the Camponotus, Ectatomma, Iridomyrmex (especially Iridomyrmex rufoniger), Monomorium, Ochetellus, Pheidole, or Polyrhachis genera. Thorny devils often eat thousands of ants in one day.
The thorny devil collects moisture in the dry desert by the condensation of dew. This dew forms on its skin in the early morning as it begins to warm outside. Then the dew is channeled to its mouth by gravity and capillary action via the channels between its spines. During rainfalls, capillary action allows the thorny devil to absorb water from all over its body. Capillary action also allows the thorny devil to absorb water from damp sand. Absorption through sand is the thorny devil's main source of water intake.
Reproduction
The female thorny devil lays a clutch of three to ten eggs between September and December. She puts these in a nesting burrow about 30 cm underground. The eggs hatch after about three to four months.
Popular reference
The popular appeal of the thorny devil is the basis of an anecdotal petty scam. American servicemen stationed in Southwest Australia decades ago (such as during World War II) were supposedly sold the thorny fruits of a species of weeds, the so-called "double gee" (Emex australis), but those were called "thorny devil eggs" as a part of the scam. Thorny devils have been kept in captivity.
BL Lacertae
BL Lacertae or BL Lac is a highly variable, extragalactic active galactic nucleus (AGN or active galaxy). It was first discovered by Cuno Hoffmeister in 1929, but was originally thought to be an irregular variable star in the Milky Way galaxy and so was given a variable star designation. In 1968, the "star" was identified by John Schmitt at the David Dunlap Observatory as a bright, variable radio source. A faint trace of a host galaxy was also found. In 1974, Oke and Gunn measured the redshift of BL Lacertae as z = 0.07, corresponding to a recession velocity of 21,000 km/s with respect to the Milky Way. The redshift figure implies that the object lies at a distance of 900 million light years.
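The quoted velocity follows from the low-redshift approximation v ≈ cz, and the distance from Hubble's law; a short Python check (using an assumed Hubble constant of 70 km/s/Mpc, a value not taken from this article) reproduces the order of magnitude of both figures:

```python
c = 299_792.458          # speed of light, km/s
z = 0.07                 # measured redshift (Oke and Gunn, 1974)
v = c * z                # low-z approximation v ≈ cz
print(f"{v:.0f} km/s")   # ~21,000 km/s, matching the quoted velocity

H0 = 70.0                # Hubble constant, km/s/Mpc (assumed, illustrative)
d_mpc = v / H0           # Hubble's law: d = v / H0
d_mly = d_mpc * 3.262    # 1 Mpc ≈ 3.262 million light-years
print(f"{d_mly:.0f} million light-years")
```

The result is on the order of the ~900 million light-years quoted; the exact figure depends on the adopted value of the Hubble constant.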
Due to its early discovery, BL Lacertae became the prototype and namesake of the class of active galactic nuclei known as "BL Lacertae objects" or "BL Lac objects". This class is distinguished by rapid and high-amplitude brightness variations and by optical spectra devoid (or nearly devoid) of the broad emission lines characteristic of quasars. These characteristics are understood to result from relativistic beaming of emission from a jet of plasma ejected from the vicinity of a supermassive black hole. BL Lac objects are also categorized as a type of blazar.
BL Lacertae changes in apparent magnitude over fairly small time periods, typically between values of 14 and 17. In January 2021, it exhibited extreme flaring behavior and was reported to reach magnitude 11.45 in the R filter band.
| Physical sciences | Notable galaxies | Astronomy |
2316934 | https://en.wikipedia.org/wiki/Haplogroup | Haplogroup | A haplotype is a group of alleles in an organism that are inherited together from a single parent, and a haplogroup (from the Greek haploûs, "onefold, simple", and group) is a group of similar haplotypes that share a common ancestor with a single-nucleotide polymorphism mutation. More specifically, a haplotype is a combination of alleles at different chromosomal regions that are closely linked and tend to be inherited together. As a haplogroup consists of similar haplotypes, it is usually possible to predict a haplogroup from haplotypes. Haplogroups pertain to a single line of descent. As such, membership of a haplogroup, by any individual, relies on a relatively small proportion of the genetic material possessed by that individual.
Each haplogroup originates from, and remains part of, a preceding single haplogroup (or paragroup). As such, any related group of haplogroups may be precisely modelled as a nested hierarchy, in which each set (haplogroup) is also a subset of a single broader set (as opposed, that is, to biparental models, such as human family trees). Haplogroups can be further divided into subclades.
Haplogroups are normally identified by an initial letter of the alphabet, with refinements consisting of additional number and letter combinations. The alphabetical nomenclature was published in 2002 by the Y Chromosome Consortium.
In human genetics, the haplogroups most commonly studied are Y-chromosome (Y-DNA) haplogroups and mitochondrial DNA (mtDNA) haplogroups, each of which can be used to define genetic populations. Y-DNA is passed solely along the patrilineal line, from father to son, while mtDNA is passed down the matrilineal line, from mother to offspring of both sexes. Neither recombines, and thus Y-DNA and mtDNA change only by chance mutation at each generation with no intermixture between parents' genetic material.
Haplogroup formation
Mitochondria are small organelles that lie in the cytoplasm of eukaryotic cells, such as those of humans. Their primary function is to provide energy to the cell. Mitochondria are thought to be reduced descendants of symbiotic bacteria that were once free living. One indication that mitochondria were once free living is that each contains a circular DNA, called mitochondrial DNA (mtDNA), whose structure is more similar to bacteria than eukaryotic organisms (see endosymbiotic theory). The overwhelming majority of a human's DNA is contained in the chromosomes in the nucleus of the cell, but mtDNA is an exception.
An individual inherits their cytoplasm and the organelles contained by that cytoplasm exclusively from the maternal ovum (egg cell); sperm only pass on chromosomal DNA, and all paternal mitochondria are digested in the oocyte. When a mutation arises in an mtDNA molecule, it is therefore passed down in a direct female line of descent. Mutations are changes in the nitrogen bases of the DNA sequence. Single changes from the original sequence are called single nucleotide polymorphisms (SNPs).
Human Y chromosomes are male-specific sex chromosomes; nearly all humans that possess a Y chromosome will be morphologically male. Although Y chromosomes are situated in the cell nucleus and paired with X chromosomes, they only recombine with the X chromosome at the ends of the Y chromosome; the remaining 95% of the Y chromosome does not recombine. Therefore, the Y chromosome and any mutations that arise in it are passed down in a direct male line of descent.
Other chromosomes, autosomes and X chromosomes (when another X chromosome is available to pair with it), share their genetic material during meiosis, the process of cell division which produces gametes. Effectively this means that the genetic material from these chromosomes gets mixed up in every generation, and so any new mutations are passed down randomly from parents to offspring.
The special feature that both Y chromosomes and mtDNA display is that mutations can accrue along a certain segment of both molecules and these mutations remain fixed in place on the DNA. Furthermore, the historical sequence of these mutations can also be inferred. For example, if a set of ten Y chromosomes (derived from ten different individuals) contains a mutation, A, but only five of these chromosomes contain a second mutation, B, then it is overwhelmingly likely that mutation B occurred after mutation A.
Furthermore, all ten individuals who carry the chromosome with mutation A are the direct male line descendants of the same man who was the first person to carry this mutation. The first man to carry mutation B was also a direct male line descendant of this man, but is also the direct male line ancestor of all men carrying mutation B. Series of mutations such as this form molecular lineages. Furthermore, each mutation defines a set of specific Y chromosomes called a haplogroup.
All humans carrying mutation A form a single haplogroup, and all humans carrying mutation B are part of this haplogroup, but mutation B also defines a more recent haplogroup (which is a subgroup or subclade) of its own to which humans carrying only mutation A do not belong. Both mtDNA and Y chromosomes are grouped into lineages and haplogroups; these are often presented as tree-like diagrams.
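The subset logic described above — every carrier of the later mutation B is also a carrier of the earlier mutation A, so B defines a subclade of A — can be sketched in a few lines of Python. The individual names and mutation labels are purely illustrative, matching the ten-chromosome example in the text.

```python
# Hypothetical sketch of haplogroup nesting inference. Each individual
# is listed with the set of mutations found on their Y chromosome.
individuals = {
    "ind1": {"A"}, "ind2": {"A"}, "ind3": {"A"}, "ind4": {"A"}, "ind5": {"A"},
    "ind6": {"A", "B"}, "ind7": {"A", "B"}, "ind8": {"A", "B"},
    "ind9": {"A", "B"}, "ind10": {"A", "B"},
}

def carriers(mutation):
    """Set of individuals whose lineage carries the given mutation."""
    return {name for name, muts in individuals.items() if mutation in muts}

# The carriers of B are a proper subset of the carriers of A, so mutation B
# most likely arose later, on a chromosome already carrying A: the haplogroup
# defined by B is a subclade of the haplogroup defined by A.
assert carriers("B") < carriers("A")
```

This is only the simplest case; real tree-building must also handle recurrent mutation and missing data, which is why haplogroup trees are curated rather than computed naively.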
Human Y-chromosome DNA haplogroups
Human Y chromosome DNA (Y-DNA) haplogroups are named from A to T, and are further subdivided using numbers and lower case letters. Y chromosome haplogroup designations are established by the Y Chromosome Consortium.
Y-chromosomal Adam is the name given by researchers to the male who is the most recent common patrilineal (male-lineage) ancestor of all living humans.
Major Y-chromosome haplogroups, and their geographical regions of occurrence (prior to the recent European colonization), include:
Groups without mutation M168
Haplogroup A (M91) (Africa, especially the Khoisan and Nilotes)
Haplogroup B (M60) (Africa, especially the Pygmies and Hadzabe)
Groups with mutation M168
(mutation M168 occurred ~50,000 bp)
Haplogroup C (M130) (Oceania, North/Central/East Asia, North America and a minor presence in South America, Southeast Asia, South Asia, West Asia, and Europe)
YAP+ haplogroups
Haplogroup DE (M1, M145, M203)
Haplogroup D (CTS3946) (Tibet, Nepal, Japan, the Andaman Islands, Central Asia, and a sporadic presence in Nigeria, Syria, and Saudi Arabia)
Haplogroup E (M96)
Haplogroup E1b1a (V38) West Africa and surrounding regions; formerly known as E3a
Haplogroup E1b1b (M215) Associated with the spread of Afroasiatic languages; now concentrated in North Africa and the Horn of Africa, as well as parts of the Middle East, the Mediterranean, and the Balkans; formerly known as E3b
Groups with mutation M89
(mutation M89 occurred ~45,000 bp)
Haplogroup F (M89) Oceania, Europe, Asia, North and South America
Haplogroup FT (P14, M213) (China, Vietnam, Singapore)
Haplogroup G (M201) (present among many ethnic groups in Eurasia, usually at low frequency; most common in the Caucasus, the Iranian plateau, and Anatolia; in Europe mainly in Greece, Italy, Iberia, the Tyrol, Bohemia; rare in Northern Europe)
Haplogroup H (L901/M2939)
H1'3 (Z4221/M2826, Z13960)
H1 (L902/M3061)
H1a (M69/Page45) India, Sri Lanka, Nepal, Pakistan, Iran, Central Asia
H1b (B108) Found in a Burmese individual in Myanmar.
H3 (Z5857) India, Sri Lanka, Pakistan, Bahrain, Qatar
H2 (P96) Formerly known as haplogroup F3. Found with low frequency in Europe and western Asia.
Haplogroup IJK (L15, L16)
Groups with mutations L15 & L16
Haplogroup IJK (L15, L16)
Haplogroup IJ (S2, S22)
Haplogroup I (M170, P19, M258) (widespread in Europe, found infrequently in parts of the Middle East, and virtually absent elsewhere)
Haplogroup I1 (M253, M307, P30, P40) (Northern Europe, dominant in Scandinavia)
Haplogroup I2 (S31) (Central and Southeast Europe, Sardinia, Balkans)
Haplogroup J (M304) (the Middle East, Turkey, Caucasus, Italy, Greece, the Balkans, North Africa)
Haplogroup J* (Mainly found in Socotra, with a few observations in Pakistan, Oman, Greece, the Czech Republic, and among Turkic peoples)
Haplogroup J1 (M267) (Mostly associated with Semitic peoples in the Middle East but also found in; Mediterranean Europe, Ethiopia, North Africa, Pakistan, India and with Northeast Caucasian peoples in Dagestan; J1 with DYS388=13 is associated with eastern Anatolia)
Haplogroup J2 (M172) (Mainly found in West Asia, Central Asia, Iran, Italy, Greece, the Balkans and North Africa)
Haplogroup K (M9, P128, P131, P132)
Groups with mutation M9
(mutation M9 occurred ~40,000 bp)
Haplogroup K
Haplogroup LT (L298/P326)
Haplogroup L (M11, M20, M22, M61, M185, M295) (South Asia, Central Asia, Southwestern Asia, the Mediterranean)
Haplogroup T (M70, M184/USP9Y+3178, M193, M272) (North Africa, Horn of Africa, Southwest Asia, the Mediterranean, South Asia); formerly known as Haplogroup K2
Haplogroup K(xLT) (rs2033003/M526)
Groups with mutation M526
Haplogroup M (P256) (New Guinea, Melanesia, eastern Indonesia)
Haplogroup NO (M214)
Haplogroup N (M231) (northernmost Eurasia)
Haplogroup O (M175) (East Asia, Southeast Asia, the South Pacific, South Asia, Central Asia)
Haplogroup O1 (F265)
Haplogroup O1a (M119)
Haplogroup O1b (P31, M268)
Haplogroup O2 (M122)
Haplogroup P-M45 (M45) (M45 occurred ~35,000 bp)
Haplogroup Q-M242 (M242) (Occurred ~15,000–20,000 bp. Found in Asia and the Americas)
Haplogroup Q-M3 (M3) (North America, Central America, and South America)
Haplogroup R (M207)
Haplogroup R1 (M173)
Haplogroup R1a (M17) (Central Asia, South Asia, and Central, Northern, and Eastern Europe)
Haplogroup R1b (M343) (Europe, Caucasus, Central Asia, South Asia, North Africa, Central Africa)
Haplogroup R2 (M124) (South Asia, Caucasus, Central Asia)
Haplogroup S (M230, P202, P204) (New Guinea, Melanesia, eastern Indonesia)
Human mitochondrial DNA haplogroups
Human mtDNA haplogroups are lettered: A, B, C, CZ, D, E, F, G, H, HV, I, J, pre-JT, JT, K, L0, L1, L2, L3, L4, L5, L6, M, N, O, P, Q, R, R0, S, T, U, V, W, X, Y, and Z. The most up-to-date version of the mtDNA tree is maintained by Mannis van Oven on the PhyloTree website.
Mitochondrial Eve is the name given by researchers to the woman who is the most recent common matrilineal (female-lineage) ancestor of all living humans.
Defining populations
Haplogroups can be used to define genetic populations and are often geographically oriented. For example, the following are common divisions for mtDNA haplogroups:
African: L0, L1, L2, L3, L4, L5, L6
West Eurasian: H, T, U, V, X, K, I, J, W (all listed West Eurasian haplogroups are derived from macro-haplogroup N)
East Eurasian: A, B, C, D, E, F, G, Y, Z (note: C, D, E, G, and Z belong to macro-haplogroup M)
Native American: A, B, C, D, X
Australo-Melanesian: P, Q, S
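The divisions above can be treated as a simple lookup table. The sketch below (a simplification for illustration only — real assignment works on full subclade labels, not top-level letters) shows that some letters, such as X, legitimately appear in more than one geographic division.

```python
# Illustrative mapping of top-level mtDNA haplogroup letters to the common
# geographic divisions listed in the text. Division names and membership
# are copied from the article; this is not a full classifier.
MTDNA_DIVISIONS = {
    "African": {"L0", "L1", "L2", "L3", "L4", "L5", "L6"},
    "West Eurasian": {"H", "T", "U", "V", "X", "K", "I", "J", "W"},
    "East Eurasian": {"A", "B", "C", "D", "E", "F", "G", "Y", "Z"},
    "Native American": {"A", "B", "C", "D", "X"},
    "Australo-Melanesian": {"P", "Q", "S"},
}

def divisions_for(haplogroup):
    """Return every division a top-level haplogroup letter appears in."""
    return sorted(name for name, members in MTDNA_DIVISIONS.items()
                  if haplogroup in members)

print(divisions_for("X"))  # ['Native American', 'West Eurasian']
```

The overlap for haplogroups A, B, C, D, and X reflects the shared ancestry of Native American and East Eurasian maternal lineages mentioned later in the article.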
The mitochondrial haplogroups are divided into three main groups, which are designated by the sequential letters L, M, N.
Humanity first split within the L group between L0 and L1-6. L1-6 gave rise to other L groups, one of which, L3, split into the M and N group.
The M group comprises the first wave of human migration which is thought to have evolved outside of Africa, following an eastward route along southern coastal areas. Descendant lineages of haplogroup M are now found throughout Asia, the Americas, and Melanesia, as well as in parts of the Horn of Africa and North Africa; almost none have been found in Europe. The N haplogroup may represent another macrolineage that evolved outside of Africa, heading northward instead of eastward. Shortly after the migration, the large R group split off from the N.
Haplogroup R consists of two subgroups defined on the basis of their geographical distributions, one found in southeastern Asia and Oceania and the other containing almost all of the modern European populations. Haplogroup N(xR), i.e. mtDNA that belongs to the N group but not to its R subgroup, is typical of Australian aboriginal populations, while also being present at low frequencies among many populations of Eurasia and the Americas.
The L type consists of nearly all Africans.
The M type consists of:
M1 – Ethiopian, Somali and Indian populations. Likely due to much gene flow between the Horn of Africa and the Arabian Peninsula (Saudi Arabia, Yemen, Oman), separated only by a narrow strait between the Red Sea and the Gulf of Aden.
CZ – Many Siberians; branch C – Some Amerindian; branch Z – Many Saami, some Korean, some North Chinese, some Central Asian populations.
D – Some Amerindians, many Siberians and northern East Asians
E – Malay, Borneo, Philippines, Taiwanese aborigines, Papua New Guinea
G – Many Northeast Siberians, northern East Asians, and Central Asians
Q – Melanesian, Polynesian, New Guinean populations
The N type consists of:
A – Found in many Amerindians and some East Asians and Siberians
I – 10% frequency in Northern, Eastern Europe
S – Some Indigenous Australian (First Nations People of Australia)
W – Some Eastern Europeans, South Asians, and southern East Asians
X – Some Amerindians, Southern Siberians, Southwest Asians, and Southern Europeans
Y – Most Nivkhs and people of Nias; many Ainus, Tungusic people, and Austronesians; also found with low frequency in some other populations of Siberia, East Asia, and Central Asia
R – Large group found within the N type. The populations contained therein can be divided geographically into West Eurasian and East Eurasian. Almost all European populations and a large number of Middle Eastern populations today are contained within this branch. A smaller percentage is contained in other N type groups (see above). Below are subclades of R:
B – Some Chinese, Tibetans, Mongolians, Central Asians, Koreans, Amerindians, South Siberians, Japanese, Austronesians
F – Mainly found in southeastern Asia, especially Vietnam; 8.3% in Hvar Island in Croatia.
R0 – Found in Arabia and among Ethiopians and Somalis; branch HV (branch H; branch V) – Europe, Western Asia, North Africa;
Pre-JT – Arose in the Levant (modern Lebanon area), found in 25% frequency in Bedouin populations; branch JT (branch J; branch T) – North, Eastern Europe, Indus, Mediterranean
U – High frequency in West Eurasia, Indian sub-continent, and Algeria, found from India to the Mediterranean and to the rest of Europe; U5 in particular shows high frequency in Scandinavia and Baltic countries with the highest frequency in the Sami people.
Y-chromosome and mtDNA geographic haplogroup assignment
Here is a list of Y-chromosome and mtDNA geographic haplogroup assignments proposed by Bekada et al. 2013.
Y-chromosome
According to SNP-based estimates, haplogroups dating to the first extinction event tend to have diverged around 45–50 kya, while haplogroups of the second extinction event appear, on the evidence of the Mal'ta specimen, to have diverged around 32–35 kya. The "ground zero" extinction event appears to be Toba, during which haplogroup CDEF* appears to have diverged into C, DE, and F; C and F have almost nothing in common, while D and E have much in common. By current estimates, the first extinction event occurred after Toba, although older ancient DNA could push the ground-zero event to long before Toba and push the first extinction event back to Toba itself. Haplogroups annotated with extinction events have dubious origins, because extinction events produce severe bottlenecks, so all such annotations are only guesses. Note that SNP counts from ancient DNA can be highly variable, meaning that even though all these groups diverged around the same time, no one knows exactly when.
mtDNA
| Biology and health sciences | Basics_4 | Biology |
2319263 | https://en.wikipedia.org/wiki/Thelodonti | Thelodonti | Thelodonti (from Greek: "nipple teeth") is a class of extinct Palaeozoic jawless fishes with distinctive scales instead of large plates of armor.
There is much debate over whether the group represents a monophyletic grouping, or disparate stem groups to the major lines of jawless and jawed fish.
Thelodonts are united in possession of "thelodont scales". This defining character is not necessarily a result of shared ancestry, as it may have evolved independently in different groups. Thus the thelodonts are generally thought to represent a polyphyletic group, although there is no firm agreement on this point. If treated as monophyletic, they are reconstructed as ancestrally marine, with multiple independent invasions of fresh water.
"Thelodonts" were morphologically very similar, and probably closely related, to fish of the classes Heterostraci and Anaspida, differing mainly in their covering of distinctive, small, spiny scales. These scales were easily dispersed after death; their small size and resilience makes them the most common vertebrate fossil of their time.
The fish lived in both freshwater and marine environments, first appearing during the Ordovician and perishing during the Frasnian–Famennian extinction event of the Late Devonian. Traditionally they were considered predominantly deposit-feeding bottom dwellers, but more recent studies have shown they occupied various ecological roles throughout the water column, much like modern bony fishes and sharks. In particular, a large variety of species preferred reef ecosystems, and it has been suggested that this preference drove the development of their unique scales, which protected against abrasion and allowed for more flexible bodies than those of other jawless fish, whose inflexible armor restricted them to open habitats.
Description
Very few complete thelodont specimens are known; fewer still are preserved in three dimensions. This is due in part to the lack of an internal ossified (i.e. bony) skeleton; it does not help that the scales are poorly, if at all, attached to one another, and that they readily detach from their owners upon death.
The exoskeleton is composed of many tooth-like scales, usually around 0.5–1.5 mm in size. These scales did not overlap, and were aligned to point backwards along the fish, in the most streamlined direction, but beyond that, often appear haphazard in their orientation. The scales themselves approximate the form of a teardrop mounted on a small, bulky base, with the base often containing a small rootlet with which the scale was attached to the fish. The "teardrop" often contains lines, ridges, furrows and spikes running down its length in an array of sometimes complex patterns. Scales found around the gill region were generally smaller than the larger, bulkier scales found on the dorsal/ventral sides of the fish; some genera display rows of longer spikes.
The scaly covering contrasts them with most other jawless fishes, which were armor-plated with large, flat scales.
Aside from scattered scales, some specimens do appear to display imprints, giving an indication of the structure of the whole animal – which appeared to reach 15–30 cm in length. Tentative studies appear to suggest that the fish possessed a more developed braincase than the lampreys, with an almost shark-like outline. Internal scales have also been recovered, some fused into plates resembling gnathostome tooth-whorls to such a degree that some researchers favour a close link between the families.
Despite the rarity of complete fossils, these very rare intact specimens do allow us to gain an insight into the internal organ arrangement of the Thelodonts. Some specimens described in 1993 were the first to be found with a significant degree of three-dimensionality, ending speculations that the Thelodonts were flattened fish. Further, these fossils allowed the gut morphology to be interpreted, which generated much excitement: their guts were unlike those of any other agnathans, and a stomach was clearly visible: this was unexpected, as it was previously thought that stomachs evolved after jaws. Distinctive fork-shaped tails – usually characteristic of the jawed fish (gnathostomes) – were also found, linking the two groups to an unexpected degree.
The fins of the thelodonts are useful in reconstructing their mode of life. They had paired pectoral fins combined with single, usually well-developed, dorsal and anal fins; these, together with a heterocercal tail formed by a hypercercal lobe and a much larger hypocercal lobe, resemble features of modern fish that are associated with deftness at predation and evasion.
Taxonomy
Due to the small number of intact fossils, the taxonomy of thelodonts is based primarily on scale morphology. In fact, some thelodont families are only recognised based on their scale fossils.
A recent assessment of thelodont taxonomy by Wilson and Märss in 2009 merges the orders Loganelliiformes, Katoporiida and Shieliiformes into Thelodontiformes, places families Lanarkiidae and Nikoliviidae into Furcacaudiformes on the basis of scale morphology, and establishes Archipelepidiformes as the basal-most order.
A newer taxonomy based on the work of Nelson, Grande and Wilson 2016 and van der Laan 2016.
Superclass †Thelodontomorphi Jaekel 1911
Class †Thelodonti Kiaer 1932
Family †Oeseliidae Märss 2005
Order †Archipelepidiformes Wilson & Märss 2009
Family †Boothialepididae Märss 1999
Family †Archipelepididae Märss ex Soehn et al. 2001
Order †Furcacaudiformes Wilson & Caldwell 1998 (Fork-tailed thelodonts)
Family †Nikoliviidae Karatajūtė-Talimaa 1978
Family †Lanarkiidae Obručhev 1949
Family †Pezopallichthyidae Wilson & Caldwell 1998
Family †Drepanolepididae Wilson & Marss 2009
Family †Barlowodidae Märss, Wilson & Thorsteinsson 2002
Family †Apalolepididae Turner 1976
Family †Furcacaudidae Wilson & Caldwell 1998
Clade Thelodontida Stensiö 1958 non Kiaer 1932 s.l.
Family †Talivaliidae Marss, Wilson & Thorsteinsson 2002
Family †Longodidae Märss 2006b
Family †Helenolepididae Wilson & Märss 2009
Order †Sandiviiformes Karatajūtė-Talimaa & Märss 2004
Family †Angaralepididae Karatajūtė-Talimaa & Märss 2004
Family †Stroinolepididae Karatajūtė-Talimaa & Märss 2002
Family †Sandiviidae Karatajūtė-Talimaa & Märss 2004
Order †Turiniida Stensiö 1958
Family †Turiniidae Obručhev 1964
Order †Thelodontiformes
Family †Thelodontidae Jordan 1905
Order †Loganelliiformes Turner 1991
Family †Nunavutiidae Marss, Wilson & Thorsteinsson 2002
Family †Loganelliidae Märss, Wilson & Thorsteinsson 2002
Order †Phlebolepidiformes Berg 1937 s.s.
Family †Phlebolepididae Berg 1937 corrig.
Family †Shieliidae Märss, Wilson & Thorsteinsson 2002
Family †Katoporodidae Soehn et al. 2001 ex Märss, Wilson & Thorsteinsson 2002
Scales
The bony scales of the thelodont group, as the most abundant form of fossil, are also the best understood – and thus most useful. The scales were formed and shed throughout the organisms' lifetimes, and quickly separated after their death.
Bone – being one of the most resistant materials to the process of fossilisation – often preserves internal detail, which allows the histology and growth of the scales to be studied in detail. The scales consist of a non-growing "crown" composed of dentine, with a sometimes-ornamented enameloid upper surface and an aspidine (acellular bony tissue) base. Its growing base is made of cell-free bone, which sometimes developed anchorage structures to fix it in the side of the fish. Beyond that, there appear to be five types of bone-growth, which may represent five natural groupings within the thelodonts – or a spectrum ranging between the end members, meta- (or ortho-) dentine and mesodentine tissues. Each of the five scale morphs appears to resemble the scales of more derived groupings of fish, suggesting that thelodont groups may have been stem groups to succeeding clades of fish.
Scale morphology, alone, has limited value for distinguishing thelodont species. Within each organism, scale shape varies greatly according to body area, with intermediate scale forms appearing between different areas; furthermore, scale morphology may not even be constant within a given body area. To confuse things further, scale morphologies are not unique to specific taxa, and may be indistinguishable on the same area of two different species.
The morphology and histology of the thelodonts provides the main tool for quantifying their diversity and distinguishing between species – although ultimately using such convergent traits is prone to errors. Nonetheless, a framework of three groups has been proposed, based upon scale morphology and histology.
Thelodonts displayed squamation patterns similar to those of modern sharks, so their scales presumably served a similar functional role. This allows for a clearer view of their ecological niches. In particular, protection against abrasion seems to have been the original role of these scales.
Ecology
Most thelodonts were considered deposit feeders, but more recent studies have shown that several species were active swimmers and thus more pelagic. A large variety of species in particular preferred reef ecosystems. They are mainly known from open shelf environments, but are also found nearer the shore and in some freshwater settings.
The appearance of the same species in fresh- and salt-water settings has led to suggestions that some thelodonts migrated into fresh water, perhaps to spawn. However, the transition from fresh- to salt- water should be observable, as the scales' composition would change to reflect the different environment. This compositional change has not yet been found.
Utility as biostratigraphic markers
Thelodont scales are globally widespread during the Silurian and Early Devonian times, becoming restricted in range to Gondwana, until their extinction in the Late Devonian (Frasnian). The morphology of some species diversified rapidly enough for the scales to rival the conodonts in utility as biostratigraphic markers, allowing precise correlation of widely spaced sediments.
Evolutionary patterns
The first major group of jawless fish with exoskeletons or plated armour was the Laurentian group, which existed during Cambrian–Ordovician time. The thelodonts (as well as the conodonts, placoderms, acanthodians, and chondrichthyans) form a second major group, believed to have emerged in the middle Ordovician and lasted until near the end of the Devonian period. Due to their similar characteristics and overlapping chronology, many believe the thelodonts have Laurentian origins.
| Biology and health sciences | Prehistoric agnathae and early chordates | Animals |
2319292 | https://en.wikipedia.org/wiki/Anaspida | Anaspida | Anaspida ("shieldless ones") is an extinct group of jawless fish that existed from the early Silurian period to the late Devonian period. They were classically regarded as the ancestors of lampreys, but this is rejected by recent phylogenetic analyses, although some analyses suggest the two groups are at least closely related. Anaspids were small marine fish that lacked a heavy bony shield and paired fins, but were distinctively hypocercal.
Anatomy
Compared to many other ostracoderms, such as the Heterostraci and Osteostraci, anaspids did not possess a bony shield or armor, hence their name. The anaspid head and body were instead covered in an array of small, weakly mineralized scales, with a row of massive scutes running down the back; at least among the birkeniids, the body scales were tile-like and made of aspidine, an acellular bony tissue. Anaspids all had prominent, laterally placed eyes with no sclerotic ring, and the gills opened as a row of holes along either side of the animal, typically numbering 6–15 pairs. The major synapomorphy for the anaspids is the large, tri-radiate spine behind the series of gill openings.
Taxonomy
Now that Jamoytius and its close cohorts, i.e., Euphanerops, have been moved to Jamoytiiformes, Class Anaspida now consists of two orders, the monogeneric Lasaniida, which contains the genus Lasanius and represents a basal anaspid group, and Birkeniida, which contains all other recognized anaspid taxa. Birkeniida is further divided into several families, including Birkeniidae, Pterygolepididae, Rhyncholepididae and Pharyngolepididae, which contain those taxa known from whole body fossils (in addition to several taxa known only from scales) and the family Septentrioniidae, whose subtaxa are known exclusively from scales. Two recently described genera, Kerreralepis and Cowielepis, are considered to be Birkeniida incertae sedis.
Some recent studies have suggested that anaspids are stem-cyclostomes, more closely related to hagfish and lampreys than to jawed fish.
A newer taxonomy based on the work of Mikko's Phylogeny Archive, Nelson, Grande and Wilson 2016 and van der Laan 2018.
Class †Anaspida Janvier 1996 non Williston 1917
Order †Endeiolepidiformes Berg 1940
Family †Endeiolepididae Stensiö 1939 corrig.
Genus †Endeiolepis Stensiö 1939
Order †Birkeniiformes Berg 1940
Genus †Cowielepis Blom 2008
Genus †Hoburgilepis Blom, Märss & Miller 2002
Genus †Kerreralepis Blom 2012
Genus †Maurylepis Blom, Märss & Miller 2002
Genus †Rytidolepis Pander 1856
Genus †Schidiosteus Pander 1856
Genus †Silmalepis Blom, Märss & Miller 2002
Genus †Vesikulepis Blom, Märss & Miller 2002
Family †Pharyngolepididae Kiær 1924 corrig.
Genus †Pharyngolepis Kiaer 1911
Family †Pterygolepididae Obručhev 1964 corrig.
Genus †Pterygolepis Cossmann 1920 [Pterolepis Kiaer 1911 non Rambur 1838 non De Candolle ex Miquel 1840; Pterolepidops Fowler 1947]
Family †Rhyncholepididae Kiær 1924 corrig.
Genus †Rhyncholepis Kiær 1911 non Miquel 1843 non Nuttall 1841
Family †Tahulalepididae Blom, Märss & Miller 2002
Genus †Tahulalepis Blom, Märss & Miller 2002
Genus †Trimpleylepis Miller, Märss & Blom 2004
Family †Lasaniidae Goodrich 1909
Genus †Lasanius Traquair 1898
Family †Ramsaasalepididae Blom, Märss & Miller 2003
Genus †Ramsaasalepis Blom, Märss & Miller 2003
Family †Birkeniidae Traquair 1899
Genus ?†Vilkitskilepis Märss 2002
Genus †Ctenopleuron Matthew 1907
Genus †Saarolepis Robertson 1945 [Anaspis Robertson 1941 non Geoffroy 1762 non Thomson 1893]
Genus †Birkenia Traquair 1898
Family †Septentrioniidae Blom, Märss & Miller 2002
Genus †Liivilepis Blom, Märss & Miller 2002
Genus †Manbrookia Blom, Märss & Miller 2002
Genus †Ruhnulepis Blom, Märss & Miller 2002
Genus †Spokoinolepis Blom, Märss & Miller 2002
Genus †Septentrionia Blom, Märss & Miller 2002
| Biology and health sciences | Prehistoric agnathae and early chordates | Animals |
2321587 | https://en.wikipedia.org/wiki/Degrees%20of%20freedom%20%28mechanics%29 | Degrees of freedom (mechanics) | In physics, the degrees of freedom (DOF) of a mechanical system is the number of independent parameters that define its configuration or state. It is important in the analysis of systems of bodies in mechanical engineering, structural engineering, aerospace engineering, robotics, and other fields.
The position of a single railcar (engine) moving along a track has one degree of freedom because the position of the car is defined by the distance along the track. A train of rigid cars connected by hinges to an engine still has only one degree of freedom because the positions of the cars behind the engine are constrained by the shape of the track.
An automobile with highly stiff suspension can be considered to be a rigid body traveling on a plane (a flat, two-dimensional space). This body has three independent degrees of freedom consisting of two components of translation and one angle of rotation. Skidding or drifting is a good example of an automobile's three independent degrees of freedom.
The position and orientation of a rigid body in space is defined by three components of translation and three components of rotation, which means that it has six degrees of freedom.
The exact constraint mechanical design method manages the degrees of freedom to neither underconstrain nor overconstrain a device.
Motions and dimensions
The position of an n-dimensional rigid body is defined by the rigid transformation, [T] = [A, d], where d is an n-dimensional translation and A is an n × n rotation matrix, which has n translational degrees of freedom and n(n − 1)/2 rotational degrees of freedom. The number of rotational degrees of freedom comes from the dimension of the rotation group SO(n).
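These counts can be checked with a short sketch (Python is used here purely for illustration; the function name is hypothetical):

```python
def rigid_body_dof(n: int) -> tuple[int, int]:
    """Translational and rotational DOF of a rigid body in n dimensions.

    Translations contribute n DOF; rotations contribute
    dim SO(n) = n(n - 1)/2 DOF.
    """
    translational = n
    rotational = n * (n - 1) // 2
    return translational, rotational

# n = 2: a body on a plane has 2 translations + 1 rotation = 3 DOF,
# matching the automobile example above.
print(rigid_body_dof(2))  # (2, 1)

# n = 3: a body in space has 3 translations + 3 rotations = 6 DOF.
print(rigid_body_dof(3))  # (3, 3)
```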
A non-rigid or deformable body may be thought of as a collection of many minute particles (an infinite number of DOFs); in practice, this is often approximated by a finite-DOF system. When motion involving large displacements is the main object of study (e.g. for analyzing the motion of satellites), a deformable body may be approximated as a rigid body (or even a particle) in order to simplify the analysis.
The number of degrees of freedom of a system can be viewed as the minimum number of coordinates required to specify a configuration. Applying this definition, we have:
For a single particle in a plane two coordinates define its location so it has two degrees of freedom;
A single particle in space requires three coordinates so it has three degrees of freedom;
Two particles in space have a combined six degrees of freedom;
If two particles in space are constrained to maintain a constant distance from each other, such as in the case of a diatomic molecule, then the six coordinates must satisfy a single constraint equation defined by the distance formula. This reduces the degree of freedom of the system to five, because the distance formula can be used to solve for the remaining coordinate once the other five are specified.
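The coordinate-counting rule illustrated by the list above can be sketched as a small helper (illustrative Python; the function is an assumption, not part of the article):

```python
def system_dof(n_particles: int, dim: int = 3, n_constraints: int = 0) -> int:
    """Minimum number of coordinates needed: dim per particle,
    minus one per independent constraint equation."""
    return n_particles * dim - n_constraints

print(system_dof(1, dim=2))            # single particle in a plane: 2
print(system_dof(1))                   # single particle in space: 3
print(system_dof(2))                   # two free particles in space: 6
print(system_dof(2, n_constraints=1))  # diatomic molecule (fixed bond length): 5
```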
Rigid bodies
A single rigid body has at most six degrees of freedom (6 DOF), often written 3T3R: three translations (3T) and three rotations (3R).
| Physical sciences | Basics_4 | Physics |
2322181 | https://en.wikipedia.org/wiki/Postosuchus | Postosuchus | Postosuchus, meaning "Crocodile from Post", is an extinct genus of rauisuchid reptiles comprising two species, P. kirkpatricki and P. alisonae, that lived in what is now North America during the Late Triassic. Postosuchus is a member of the clade Pseudosuchia, the lineage of archosaurs that includes modern crocodilians (the other main group of archosaurs is Avemetatarsalia, the lineage that includes all archosaurs more closely related to birds than to crocodilians). Its name refers to Post Quarry, a place in Texas where many fossils of the type species, P. kirkpatricki, were found. It was one of the apex predators of its area during the Triassic, larger than the small dinosaur predators of its time (such as Coelophysis). It was a hunter which probably preyed on large bulky herbivores like dicynodonts and many other creatures smaller than itself (such as early dinosaurs).
The skeleton of Postosuchus is large and robust with a deep skull and a long tail. It was a large animal up to long or even more. The extreme shortness of the forelimbs relative to the hind limbs, the very small hands, and measurements of the vertebrae suggest that Postosuchus may have been committed to bipedal locomotion.
Description
Postosuchus was one of the largest carnivorous reptiles during the late Triassic. The length of the paratype is estimated up to long, and an individual of such length would have measured tall at the head when stood upright and weighed around . The holotype is estimated up to long, and the largest known individual may measure up to long or more based on a complete cervical series specimen (TTU-P 9235).
The neck of Postosuchus consists of at least eight cervical vertebrae followed by sixteen dorsals, while four co-ossified sacral vertebrae supported the hips. The neck was elongated, expanding to a short torso and long tail. Along with remains of the skeleton, paleontologists also identify osteoderms, which were thick plates forming scales on its back, neck, and possibly above or under the tail. It is thought to have had over thirty vertebrae in the tail decreasing in size to the end. The pelvis with the hooked pubis and the rod-like ischium looked like those of carnosaurs. The ribcage of Postosuchus had typical archosaur structure, composed of large and slender, curved ribs. In some discoveries ribs were found associated with gastralia, dermal bones located in the ventral region of the body.
Skull
Postosuchus had a massively built skull, bearing dagger-like teeth, that was narrow in front and widened and deepened behind. The holotype skull was 55 cm in length and 21 cm broad and deep. Many fenestrae (openings) are present in the bones, lightening the skull and providing space for the muscles. Like more derived archosaurs, the lower jaw had mandibular fenestrae (openings in the lower jaw), formed by the junction of the dentary with other jaw bones (the surangular and angular).
Postosuchus likely had very good long-distance sight, due to large orbits supporting large, sharp eyes, and strong olfaction provided by elongated nostrils. Inside the skull, under the nostrils, there was a hollow that may have contained the Jacobson's organ, an olfactory sensory organ sometimes referred to as the "sixth sense". The jaws held large, sharp, serrated teeth, some of which grew even larger to function as hooked sabers.
A complete tooth found among Postosuchus remains in North Carolina measured about 7.2 cm in height. Postosuchus possessed heterodont dentition, meaning its teeth differed from one another in size and shape. The upper jaw contained seventeen teeth, with each premaxilla bearing only four teeth and each maxilla thirteen teeth. The lower jaw held over thirty teeth. Tooth replacement in Postosuchus differed from that of crocodiles, since the replacement tooth did not fit directly into the pulp cavity of the old tooth, but grew until resorption of the old tooth was complete.
Limbs and posture
With the forelimbs being approximately 64% the size of the hindlimbs, Postosuchus had small hands bearing five toes, of which only the first digit bore a claw. Given the diminutive size of the hands, it is uncertain whether this claw played a prominent role in predation, but it may have helped in grappling prey. The feet were much larger than the hands, with the fifth metatarsal forming a hook shape. The innermost two digits were less robust than the other toes, and likely could not touch the ground. As it was a crurotarsan, the heel and ankle of Postosuchus resemble those of modern crocodiles.
The limbs were located underneath the body, giving Postosuchus an upright stance. Historically, there has been debate over whether rauisuchids like Postosuchus were mainly bipedal or quadrupedal. Each of Postosuchus's forelimbs was slightly over half the size of the hindlimbs, a proportion usually seen in bipedal reptiles. Chatterjee suggested that Postosuchus could walk in an erect stance, since the short forelimbs were probably used only during slow locomotion. In 1995, Robert Long and Phillip A. Murry argued that Postosuchus was heavily built and quadrupedal. Peyer et al. (2008) argued that the thick pectoral girdle served for locomotion of the forelimbs, while noting that this does not detract from the theory that Postosuchus could also walk bipedally. In 2013, a major study of the skeletal structure concluded that Postosuchus may have been an obligate biped, based on evidence from the anatomy of the digits, vertebrae, and pelvis. The proportions of the limbs and weight-bearing sections of the spine were very similar to those of many theropod dinosaurs, nearly all of which are thought to have been strictly bipedal. However, a 2015 study noted several load-bearing adaptations present in the manus of Postosuchus, substantiating the view that its manus was used for support. A 2022 article considered Postosuchus predominantly bipedal, but probably still capable of supporting its weight on the forelimbs at low speeds; it also noted an ontogenetic shift, with the arms shortening as individuals aged, suggesting that at least hatchlings and juveniles were facultatively quadrupedal.
History
During an expedition in 1980, paleontologists of Texas Tech University discovered a new geological site rich in fossils near Post, Garza County, Texas, US, where a dozen well-preserved specimens belonging to a new rauisuchid were found. In the following years, further excavation in the Post Quarry, in the Cooper Canyon Formation (Dockum Group), unearthed many remains of Late Triassic terrestrial fauna. The holotype of P. kirkpatricki (TTUP 9000), representing a well-preserved skull and a partial postcranial skeleton, was described along with other findings of this new genus by paleontologist Sankar Chatterjee in 1985. A paratype, TTU-P 9002, representing a well-preserved skull and a complete skeleton, was also assigned to this species. Chatterjee named the species after Mr. and Mrs. Jack Kirkpatrick, who helped during his fieldwork. Subsequently, some specimens (such as manus and toe bones) were re-assigned to Chatterjeea and Lythrosuchus; Long and Murry pointed out that many of the juvenile skeletons (TTUP 9003-9011), which Chatterjee assigned to P. kirkpatricki, belong to a distinct genus, named Chatterjeea elegans. Furthermore, in 2006 Nesbitt and Norell argued that Chatterjeea is a junior synonym of Shuvosaurus.
In 2008, Peyer et al. described a new species of Postosuchus, P. alisonae, discovered by two University of North Carolina undergraduate students, Brian Coffey and Marco Brewer, in 1992 in the Triangle Brick Co. Quarry, Durham County, North Carolina. The remains were prepared and reconstructed between 1994 and 1998 by the Department of Geological Sciences at the University of North Carolina. The specific name refers to Alison L. Chambers, who worked to popularize paleontology in North Carolina. The skeleton of P. alisonae consists of a few cranial bones; seven neck, one back, and four tail vertebrae; ribs; gastralia ("belly ribs"); chevrons; bony scutes; much of the shoulder girdles; most of the forelimbs except the left wrist and hand; most of the hindlimbs except for the thigh bones; and pieces from the hip. Moreover, the well-preserved remains of P. alisonae shed new light on parts of Postosuchus anatomy that were previously not well known. Specifically, the differences between the manus bones of P. kirkpatricki and P. alisonae confirm the chimera theory (associated fossils belonging to different animals) suggested by Long and Murry. The holotype specimen of P. alisonae (UNC 15575) is also unusual in its preservation of gut contents: bones from at least four other animals, including a partial skeleton of an aetosaur; a snout, coracoid, and humerus of the traversodontid cynodont Plinthogomphodon; two phalanges from a dicynodont; and a possible temnospondyl bone. Furthermore, the Postosuchus was positioned on top of a skeleton of the sphenosuchian Dromicosuchus, which bore tooth marks on the skull and neck. P. alisonae represents the largest suchian reptile recovered from the quarry and the first articulated specimen of 'rauisuchian' archosaur found in eastern North America.
Putative occurrences
Specimens similar to Postosuchus were discovered in Crosby County, Texas, in 1920, and described by paleontologist Ermine Cowles Case in 1922 (Case 1922, pp. 70–74). The fossils comprised only an isolated braincase (UM 7473) and fragments of pelvic bones (UM 7244). Case mistakenly assigned these specimens to the dinosaur genus Coelophysis. Regarding the braincase later assigned to Postosuchus, paleontologist David J. Gower argued in 2002 that the specimen is incomplete and may belong to an ornithodiran. Between 1932 and 1934, Case discovered further fossils of caudal vertebrae (UMMP 13670) at Rotten Hill, Texas, and a complete pelvis (UCMP V72183/113314) near Kalgary, Texas. Within the same period, paleontologist Charles Lewis Camp collected over a hundred "rauisuchian" bones, belonging to at least seven individuals (UCMP A296, MNA 207C), from what is now Petrified Forest National Park in Arizona. Later, more remains came to light. In 1943, Case again described a pelvis along with a pubis (UM 23127) from the Dockum Group of Texas, which dates from the Carnian through the early Norian stages of the Late Triassic period. These early findings, from 1932 to 1943, were initially referred to a new phytosaur reptile, but were assigned forty years later to Postosuchus.
The first articulated skeleton referred to P. kirkpatricki (CM 73372) was recovered by David S. Berman of the Carnegie Museum of Natural History, in the Coelophysis Quarry at Ghost Ranch, New Mexico, between 1988 and 1989. The specimen comprises a well-preserved skeleton lacking the skull and was described by Long and Murry in 1995, Weinbaum in 2002, and Novak in 2004 (Weinbaum 2002, 78 pp.). It represents a skeletally immature individual, because none of the neural sutures are closed. It was referred to P. kirkpatricki by Long and Murry (1995) without specific justification, and more recent studies accepted this referral. Nevertheless, Nesbitt (2011) noted that these studies failed to identify any synapomorphies unique to P. kirkpatricki and CM 73372. Weinbaum (2002) and Novak (2004) even noted that the preacetabular process of the ilium in CM 73372 was much longer than that of P. kirkpatricki. Nesbitt (2011) also noted that CM 73372 differs from P. kirkpatricki and Rauisuchus in possessing a concave ventral margin of the ilium, and from P. alisonae in possessing an asymmetrical distal end of the fourth metatarsal. Nesbitt (2011) could not differentiate CM 73372 from Polonosuchus, as they overlap only in the caudal vertebrae. A phylogenetic analysis conducted by Nesbitt (2011), one of the most extensive on archosaurs, recovered CM 73372 as the most basal crocodylomorph, thus referable neither to P. kirkpatricki nor to Rauisuchidae.
In their description of Vivaron, Lessner et al. (2016) questioned the random referral of all rauisuchid material from the southwestern US to Postosuchus, saying that the discovery of Vivaron stresses the need for a re-appraisal of all material from localities younger or older than unequivocal remains of Postosuchus and Vivaron.
Paleoecology
Postosuchus lived in a tropical environment. The moist and warm region consisted of ferns such as Cynepteris, Phelopteris and Clathropteris; gymnosperms represented by Pelourdea, Araucarioxylon, Woodworthia, Otozamites and Dinophyton; and cycads like Sanmiguelia (Ash 1976, pp. 799–804). Plants of the Dockum Group are not well known, since the oxidizing environment destroyed most of the plant fossils. Some of them may, however, provide information about the climate of the Dockum Group during the Late Triassic period. For example, the discovery of large specimens belonging to Araucarioxylon indicates that the region was well watered (Ash 1972, pp. 124–128). Postosuchus was one of the largest animals in that ecosystem and preyed on herbivores in the uplands, like the dicynodont Placerias. The fauna found in the Dockum Group confirms that there were lakes and/or rivers containing fish such as the cartilaginous Xenacanthus, the lobe-finned Chinlea and the dipnoan Ceratodus. On the margins of these rivers and in the uplands lived labyrinthodonts (Latiscopus) and reptiles such as Malerisaurus and Trilophosaurus, and even the archosaurs Coelophysis, Desmatosuchus, Typothorax, Leptosuchus, Nicrosaurus and Rutiodon.
| Biology and health sciences | Other prehistoric archosaurs | Animals |
376476 | https://en.wikipedia.org/wiki/Ocean%20current | Ocean current | An ocean current is a continuous, directed movement of seawater generated by a number of forces acting upon the water, including wind, the Coriolis effect, breaking waves, cabbeling, and temperature and salinity differences. Depth contours, shoreline configurations, and interactions with other currents influence a current's direction and strength. Ocean currents move both horizontally, on scales that can span entire oceans, as well as vertically, with vertical currents (upwelling and downwelling) playing an important role in the movement of nutrients and gases, such as carbon dioxide, between the surface and the deep ocean.
Ocean currents flow for great distances and together they create the global conveyor belt, which plays a dominant role in determining the climate of many of Earth's regions. More specifically, ocean currents influence the temperature of the regions through which they travel. For example, warm currents traveling along more temperate coasts increase the temperature of the area by warming the sea breezes that blow over them. Perhaps the most striking example is the Gulf Stream, which, together with its extension the North Atlantic Drift, makes northwest Europe much more temperate for its high latitude than other areas at the same latitude. Another example is Lima, Peru, whose cooler subtropical climate contrasts with that of its surrounding tropical latitudes because of the Humboldt Current.
The largest ocean current is the Antarctic Circumpolar Current (ACC), a wind-driven current which flows clockwise uninterrupted around Antarctica. The ACC connects all the ocean basins together, and also provides a link between the atmosphere and the deep ocean due to the way water upwells and downwells on either side of it.
Ocean currents are patterns of water movement that influence climate zones and weather patterns around the world. They are primarily driven by winds and by seawater density, although many other factors influence them, including the shape and configuration of the ocean basin they flow through. The two basic types of currents, surface and deep-water currents, help define the character and flow of ocean waters across the planet. Ocean currents are also commonly divided into warm currents and cold currents.
Causes
Ocean currents are driven by the wind, by the gravitational pull of the moon in the form of tides, and by the effects of variations in water density. Ocean dynamics define and describe the motion of water within the oceans.
Ocean temperature and motion fields can be separated into three distinct layers: the mixed (surface) layer, the upper ocean (above the thermocline), and the deep ocean. Ocean currents are measured in units of sverdrup (Sv), where 1 Sv is equivalent to a volume flow rate of one million cubic metres of water per second.
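Taking 1 Sv as 10^6 cubic metres per second, converting between the two units is a one-line calculation (Python, for illustration; the 150 Sv figure below is a hypothetical value, not a measured current):

```python
SV_IN_M3_PER_S = 1_000_000  # 1 sverdrup = 10**6 cubic metres per second

def sverdrup_to_m3_per_s(sv: float) -> float:
    """Convert a volume transport in sverdrups to cubic metres per second."""
    return sv * SV_IN_M3_PER_S

# A hypothetical 150 Sv current moves 150 million cubic metres every second.
print(sverdrup_to_m3_per_s(150.0))  # 150000000.0
```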
There are two main types of currents, surface currents and deep water currents. Generally surface currents are driven by wind systems and deep water currents are driven by differences in water density due to variations in water temperature and salinity.
Wind-driven circulation
Surface oceanic currents are driven by winds: the large-scale prevailing winds drive major persistent ocean currents, while seasonal or occasional winds drive currents of similar persistence to the winds that drive them; the Coriolis effect plays a major role in their development. The Ekman spiral velocity distribution causes the currents to flow at an angle to the driving winds, developing typical clockwise spirals in the northern hemisphere and counter-clockwise spirals in the southern hemisphere.
In addition, the areas of surface ocean currents move somewhat with the seasons; this is most notable in equatorial currents.
Deep ocean basins generally have a non-symmetric surface current, in that the eastern equator-ward flowing branch is broad and diffuse whereas the pole-ward flowing western boundary current is relatively narrow.
Thermohaline circulation
Large-scale currents are driven by gradients in water density, which in turn depend on variations in temperature and salinity. This thermohaline circulation is also known as the ocean's conveyor belt. Where significant vertical movement of ocean currents is observed, this is known as upwelling and downwelling. The adjective thermohaline derives from thermo-, referring to temperature, and -haline, referring to salt content, factors which together determine the density of seawater.
The thermohaline circulation is a part of the large-scale ocean circulation that is driven by global density gradients created by surface heat and freshwater fluxes. Wind-driven surface currents (such as the Gulf Stream) travel polewards from the equatorial Atlantic Ocean, cooling en route, and eventually sinking at high latitudes (forming North Atlantic Deep Water). This dense water then flows into the ocean basins. While the bulk of it upwells in the Southern Ocean, the oldest waters (with a transit time of around 1000 years) upwell in the North Pacific. Extensive mixing therefore takes place between the ocean basins, reducing differences between them and making the Earth's oceans a global system. On their journey, the water masses transport both energy (in the form of heat) and matter (solids, dissolved substances and gases) around the globe. As such, the state of the circulation has a large impact on the climate of the Earth. The thermohaline circulation is sometimes called the ocean conveyor belt, the great ocean conveyor, or the global conveyor belt. On occasion, it is imprecisely used to refer to the meridional overturning circulation (MOC).
Since the 2000s an international program called Argo has been mapping the temperature and salinity structure of the ocean with a fleet of automated platforms that float with the ocean currents. The information gathered will help explain the role the oceans play in the earth's climate.
Effects on climate and ecology
Ocean currents affect temperatures throughout the world. For example, the ocean current that brings warm water up the North Atlantic to northwest Europe also gradually keeps ice from forming along the seashores, ice that would otherwise block ships from entering and exiting inland waterways and seaports; hence ocean currents play a decisive role in influencing the climates of the regions through which they flow. Ocean currents are also important in the study of marine debris.
Upwellings and cold ocean water currents flowing from polar and sub-polar regions bring in nutrients that support plankton growth, which are crucial prey items for several key species in marine ecosystems.
Ocean currents are also important in the dispersal and distribution of many organisms, including those with pelagic egg or larval stages. An example is the life-cycle of the European Eel. Terrestrial species, for example tortoises and lizards, can be carried on floating debris by currents to colonise new terrestrial areas and islands.
Ocean currents and climate change
The continued rise of atmospheric temperatures is anticipated to have various effects on the strength of surface ocean currents, wind-driven circulation and dispersal patterns. Ocean currents play a significant role in influencing climate, and shifts in climate in turn impact ocean currents.
Over the last century, reconstructed sea surface temperature data reveal that western boundary currents are heating at double the rate of the global average. These observations indicate that the western boundary currents are likely intensifying due to this change in temperature, and may continue to grow stronger in the near future. There is evidence that surface warming due to anthropogenic climate change has accelerated upper ocean currents in 77% of the global ocean. Specifically, increased vertical stratification due to surface warming intensifies upper ocean currents, while changes in horizontal density gradients caused by differential warming across different ocean regions results in the acceleration of surface zonal currents.
There are suggestions that the Atlantic meridional overturning circulation (AMOC) is in danger of collapsing due to climate change, which would have extreme impacts on the climate of northern Europe and beyond, although this topic is controversial and remains an active area of research. The "State of the Cryosphere" report dedicates significant space to the AMOC, saying it may be en route to collapse because of ice melt and water warming. At the same time, the Antarctic Circumpolar Current (ACC) is also slowing down and is expected to lose 20% of its strength by the year 2050, "with widespread impacts on ocean circulation and climate". UNESCO notes that the report for the first time "notes a growing scientific consensus that melting Greenland and Antarctic ice sheets, among other factors, may be slowing important ocean currents at both poles, with potentially dire consequences for a much colder northern Europe and greater sea-level rise along the U.S. East Coast."
In addition to water surface temperatures, the wind systems are a crucial determinant of ocean currents. Wind wave systems influence oceanic heat exchange, the condition of the sea surface, and can alter ocean currents. In the North Atlantic, equatorial Pacific, and Southern Ocean, increased wind speeds as well as significant wave heights have been attributed to climate change and natural processes combined. In the East Australian Current, global warming has also been accredited to increased wind stress curl, which intensifies these currents, and may even indirectly increase sea levels, due to the additional warming created by stronger currents.
As ocean circulation changes with the climate, typical distribution patterns are also changing. The dispersal patterns of marine organisms depend on oceanographic conditions, which in turn influence the biological composition of the oceans. Because of the patchiness of the natural ecological world, dispersal is a species survival mechanism for various organisms. With strengthened boundary currents moving toward the poles, some marine species are expected to be redirected poleward and to greater depths. The strengthening or weakening of typical dispersal pathways by increased temperatures is expected not only to affect the survival of native marine species, which may be unable to replenish their metapopulations, but also to increase the prevalence of invasive species. In Japanese corals and macroalgae, the unusual poleward dispersal of organisms may destabilize native species.
Economic importance
Knowledge of surface ocean currents is essential in reducing the costs of shipping, since traveling with them reduces fuel costs. In the wind-powered sailing-ship era, knowledge of wind patterns and ocean currents was even more essential: sailors used currents to bring their ships into harbor and rode currents such as the Gulf Stream to get back home. The lack of understanding of ocean currents during that period is hypothesized to be one of the contributing factors to failed voyages of exploration. The Gulf Stream and the Canary Current keep western European countries warmer and their weather less variable than North America's at the same latitudes. A well-known example is the Agulhas Current (along eastern Africa), which long prevented sailors from reaching India.
In recent times, around-the-world sailing competitors make good use of surface currents to build and maintain speed.
Ocean currents can also be used for marine power generation, with areas of Japan, Florida and Hawaii being considered for test projects. The utilization of currents can still impact global trade today, reducing the cost and emissions of shipping vessels.
Ocean currents can also impact the fishing industry. Examples include the Tsugaru, Oyashio and Kuroshio currents, all of which influence western North Pacific temperatures, which have been shown to be a habitat predictor for the skipjack tuna. It has also been shown that not only local currents but also neighboring currents can influence the viability of local fishing industries.
Distribution
Currents of the Arctic Ocean
Currents of the Atlantic Ocean
Currents of the Indian Ocean
Currents of the Pacific Ocean
Currents of the Southern Ocean
Oceanic gyres
| Physical sciences | Oceanography | null |
376707 | https://en.wikipedia.org/wiki/R%20%28programming%20language%29 | R (programming language) | R is a programming language for statistical computing and data visualization. It has been adopted in the fields of data mining, bioinformatics and data analysis.
The core R language is augmented by a large number of extension packages, containing reusable code, documentation, and sample data.
R software is open-source and free software. It is licensed by the GNU Project and available under the GNU General Public License. It is written primarily in C, Fortran, and R itself. Precompiled executables are provided for various operating systems.
As an interpreted language, R has a native command line interface. Moreover, multiple third-party graphical user interfaces are available, such as RStudio—an integrated development environment—and Jupyter—a notebook interface.
History
R was started by professors Ross Ihaka and Robert Gentleman as a programming language to teach introductory statistics at the University of Auckland. The language was inspired by the S programming language, with most S programs able to run unaltered in R. The language was also inspired by Scheme's lexical scoping, allowing for local variables.
The name of the language, R, comes from being both an S language successor as well as the shared first letter of the authors, Ross and Robert. In August 1993, Ihaka and Gentleman posted a binary of R on StatLib — a data archive website. At the same time, they announced the posting on the s-news mailing list. On December 5, 1997, R became a GNU project when version 0.60 was released. On February 29, 2000, the first official 1.0 version was released.
Packages
R packages are collections of functions, documentation, and data that expand R. For example, packages add report features such as RMarkdown, Quarto, knitr and Sweave. Packages also add the capability to implement various statistical techniques such as linear, generalized linear and nonlinear modeling, classical statistical tests, spatial analysis, time-series analysis, and clustering. Easy package installation and use have contributed to the language's adoption in data science.
Base packages are immediately available when starting R and provide the necessary syntax and commands for programming, computing, graphics production, basic arithmetic, and statistical functionality.
The Comprehensive R Archive Network (CRAN) was founded in 1997 by Kurt Hornik and Friedrich Leisch to host R's source code, executable files, documentation, and user-created packages. Its name and scope mimic the Comprehensive TeX Archive Network and the Comprehensive Perl Archive Network. CRAN originally had three mirrors and 12 contributed packages; it has since grown to 99 mirrors and 21,513 contributed packages. Packages are also available on the repositories R-Forge, Omegahat, and GitHub.
The Task Views on the CRAN web site list packages in fields such as causal inference, finance, genetics, high-performance computing, machine learning, medical imaging, meta-analysis, social sciences, and spatial statistics.
The Bioconductor project provides packages for genomic data analysis, complementary DNA, microarray, and high-throughput sequencing methods.
The tidyverse package bundles several subsidiary packages that provide a common interface for tasks related to accessing and processing "tidy data", data contained in a two-dimensional table with a single row for each observation and a single column for each variable.
Installing a package occurs only once. For example, to install the tidyverse package:
> install.packages("tidyverse")
To load the functions, data, and documentation of a package, one executes the library() function. To load tidyverse:
> # Package name can be enclosed in quotes
> library("tidyverse")
> # The package name can also be given without quotes
> library(tidyverse)
Interfaces
R comes with a built-in command-line console. Various integrated development environments (IDEs) are also available for installation. IDEs for R include R.app (macOS only), Rattle GUI, R Commander, RKWard, RStudio, and Tinn-R.
General purpose IDEs that support R include Eclipse via the StatET plugin and Visual Studio via R Tools for Visual Studio.
Editors that support R include Emacs, Vim (via the Nvim-R plugin), Kate, LyX (via Sweave), WinEdt, and Jupyter.
Scripting languages that support R include Python, Perl, Ruby, F#, and Julia.
General purpose programming languages that support R include Java via the Rserve socket server, and .NET C#.
Statistical frameworks which use R in the background include Jamovi and JASP.
Community
The R Core Team was founded in 1997 to maintain the R source code. The R Foundation for Statistical Computing was founded in April 2003 to provide financial support. The R Consortium is a Linux Foundation project to develop R infrastructure.
The R Journal is an open access, academic journal which features short to medium-length articles on the use and development of R. It includes articles on packages, programming tips, CRAN news, and foundation news.
The R community hosts many conferences and in-person meetups. These groups include:
UseR!: an annual international R user conference
Directions in Statistical Computing (DSC)
R-Ladies: an organization to promote gender diversity in the R community
SatRdays: R-focused conferences held on Saturdays
R Conference
posit::conf (formerly known as rstudio::conf)
Implementations
The main R implementation is written primarily in C, Fortran, and R itself. Other implementations include:
pretty quick R (pqR), by Radford M. Neal, attempts to improve memory management.
Renjin is an implementation of R for the Java Virtual Machine.
CXXR and Riposte are implementations of R written in C++.
Oracle's FastR is an implementation of R, built on GraalVM.
TIBCO Software, creator of S-PLUS, wrote TERR — an R implementation to integrate with Spotfire.
Microsoft R Open (MRO) was an R implementation. As of 30 June 2021, Microsoft started to phase out MRO in favor of the CRAN distribution.
Commercial support
Although R is an open-source project, some companies provide commercial support:
Revolution Analytics provides commercial support for Revolution R.
Oracle provides commercial support for the Big Data Appliance, which integrates R into its other products.
IBM provides commercial support for in-Hadoop execution of R.
Examples
Hello, World!
"Hello, World!" program:
> print("Hello, World!")
[1] "Hello, World!"
Basic syntax
The following examples illustrate the basic syntax of the language and use of the command-line interface. (An expanded list of standard language features can be found in the R manual, "An Introduction to R".)
In R, the generally preferred assignment operator is an arrow made from two characters <-, although = can be used in some cases.
> x <- 1:6 # Create a numeric vector in the current environment
> y <- x^2 # Create vector based on the values in x.
> print(y) # Print the vector’s contents.
[1] 1 4 9 16 25 36
> z <- x + y # Create a new vector that is the sum of x and y
> z # Return the contents of z to the current environment.
[1] 2 6 12 20 30 42
> z_matrix <- matrix(z, nrow = 3) # Create a new matrix that turns the vector z into a 3x2 matrix object
> z_matrix
[,1] [,2]
[1,] 2 20
[2,] 6 30
[3,] 12 42
> 2 * t(z_matrix) - 2 # Transpose the matrix, multiply every element by 2, subtract 2 from each element in the matrix, and return the results to the terminal.
[,1] [,2] [,3]
[1,] 2 10 22
[2,] 38 58 82
> new_df <- data.frame(t(z_matrix), row.names = c("A", "B")) # Create a new data.frame object that contains the data from a transposed z_matrix, with row names 'A' and 'B'
> names(new_df) <- c("X", "Y", "Z") # Set the column names of new_df as X, Y, and Z.
> print(new_df) # Print the current results.
X Y Z
A 2 6 12
B 20 30 42
> new_df$Z # Output the Z column
[1] 12 42
> new_df$Z == new_df['Z'] && new_df[3] == new_df$Z # The data.frame column Z can be accessed using $Z, ['Z'], or [3] syntax and the values are the same.
[1] TRUE
> attributes(new_df) # Print attributes information about the new_df object
$names
[1] "X" "Y" "Z"
$row.names
[1] "A" "B"
$class
[1] "data.frame"
> attributes(new_df)$row.names <- c("one", "two") # Access and then change the row.names attribute; can also be done using rownames()
> new_df
X Y Z
one 2 6 12
two 20 30 42
Structure of a function
One of R's strengths is the ease of creating new functions. Objects in the function body remain local to the function, and any data type may be returned. In R, nearly all functions, and all user-defined functions, are closures.
Create a function:
# The input parameters are x and y.
# The function returns a linear combination of x and y.
f <- function(x, y) {
z <- 3 * x + 4 * y
# an explicit return() statement is optional, could be replaced with simply `z`
return(z)
}
Usage output:
> f(1, 2)
[1] 11
> f(c(1, 2, 3), c(5, 3, 4))
[1] 23 18 25
> f(1:3, 4)
[1] 19 22 25
It is possible to define functions to be used as infix operators with the special syntax `%name%` where "name" is the function variable name:
> `%sumx2y2%` <- function(e1, e2) {e1 ^ 2 + e2 ^ 2}
> 1:3 %sumx2y2% -(1:3)
[1] 2 8 18
Since version 4.1.0 functions can be written in a short notation, which is useful for passing anonymous functions to higher-order functions:
> sapply(1:5, \(i) i^2) # here \(i) is the same as function(i)
[1] 1 4 9 16 25
Native pipe operator
In R version 4.1.0, a native pipe operator, |>, was introduced. This operator allows users to chain functions together one after another, instead of nesting function calls.
> nrow(subset(mtcars, cyl == 4)) # Nested without the pipe character
[1] 11
> mtcars |> subset(cyl == 4) |> nrow() # Using the pipe character
[1] 11
Another alternative to nested functions, in contrast to using the pipe character, is using intermediate objects:
> mtcars_subset_rows <- subset(mtcars, cyl == 4)
> num_mtcars_subset <- nrow(mtcars_subset_rows)
> print(num_mtcars_subset)
[1] 11
While the pipe operator can produce code that is easier to read, it has been advised to pipe together at most 10 to 15 lines and to chunk code into sub-tasks which are saved into objects with meaningful names. Here is an example with fewer than 10 lines that some readers may still struggle to grasp without intermediate named steps:
(\(x, n = 42, key = c(letters, LETTERS, " ", ":", ")"))
strsplit(x, "")[[1]] |>
(Vectorize(\(chr) which(chr == key) - 1))() |>
(`+`)(n) |>
(`%%`)(length(key)) |>
(\(i) key[i + 1])() |>
paste(collapse = "")
)("duvFkvFksnvEyLkHAErnqnoyr")
Object-oriented programming
The R language has native support for object-oriented programming. There are two native frameworks, the so-called S3 and S4 systems. The former, being more informal, supports single dispatch on the first argument, and objects are assigned to a class simply by setting a "class" attribute in each object. The latter is a Common Lisp Object System (CLOS)-like system of formal classes (also derived from S) and generic methods that supports multiple dispatch and multiple inheritance.
In the example, summary is a generic function that dispatches to different methods depending on whether its argument is a numeric vector or a "factor":
> data <- c("a", "b", "c", "a", NA)
> summary(data)
Length Class Mode
5 character character
> summary(as.factor(data))
a b c NA's
2 1 1 1
Modeling and plotting
The R language has built-in support for data modeling and graphics. The following example shows how R can generate and plot a linear model with residuals.
# Create x and y values
x <- 1:6
y <- x^2
# Linear regression model y = A + B * x
model <- lm(y ~ x)
# Display an in-depth summary of the model
summary(model)
# Create a 2 by 2 layout for figures
par(mfrow = c(2, 2))
# Output diagnostic plots of the model
plot(model)
Output:
Residuals:
1 2 3 4 5 6
3.3333 -0.6667 -2.6667 -2.6667 -0.6667 3.3333
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -9.3333 2.8441 -3.282 0.030453 *
x 7.0000 0.7303 9.585 0.000662 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.055 on 4 degrees of freedom
Multiple R-squared: 0.9583, Adjusted R-squared: 0.9478
F-statistic: 91.88 on 1 and 4 DF, p-value: 0.000662
Mandelbrot set
This Mandelbrot set example highlights the use of complex numbers. It models the first 20 iterations of the equation z = z² + c, where c represents different complex constants.
Install the package that provides the write.gif() function beforehand:
install.packages("caTools")
R Source code:
library(caTools)
jet.colors <-
colorRampPalette(
c("green", "pink", "#007FFF", "cyan", "#7FFF7F",
"white", "#FF7F00", "red", "#7F0000"))
dx <- 1500 # define width
dy <- 1400 # define height
C <-
complex(
real = rep(seq(-2.2, 1.0, length.out = dx), each = dy),
imag = rep(seq(-1.2, 1.2, length.out = dy), times = dx)
)
# reshape as matrix of complex numbers
C <- matrix(C, dy, dx)
# initialize output 3D array
X <- array(0, c(dy, dx, 20))
Z <- 0
# loop with 20 iterations
for (k in 1:20) {
# the central difference equation
Z <- Z^2 + C
# capture the results
X[, , k] <- exp(-abs(Z))
}
write.gif(
X,
"Mandelbrot.gif",
col = jet.colors,
delay = 100)
Version names
All R version releases from 2.14.0 onward have codenames that make reference to Peanuts comics and films.
In 2018, core R developer Peter Dalgaard presented a history of R releases since 1997. Some notable early releases before the named releases include:
Version 1.0.0, released on February 29, 2000, a leap day
Version 2.0.0, released on October 4, 2004, "which at least had a nice ring to it"
The idea of naming R version releases was inspired by the Debian and Ubuntu version naming system. Dalgaard also noted that another reason for the use of Peanuts references for R codenames is because, "everyone in statistics is a P-nut".
Gas mantle

An incandescent gas mantle, gas mantle or Welsbach mantle is a device for generating incandescent bright white light when heated by a flame. The name refers to its original heat source in gas lights which illuminated the streets of Europe and North America in the late 19th century. Mantle refers to the way it hangs like a cloak above the flame. Gas mantles were also used in portable camping lanterns, pressure lanterns and some oil lamps.
Gas mantles are usually sold as a fabric bag which, because of impregnation with metal nitrates, burns away to leave a rigid but fragile mesh of metal oxides when heated during initial use; these metal oxides produce light from the heat of the flame whenever used. Thorium dioxide was commonly a major component; being radioactive, it has led to concerns about the safety of those involved in manufacturing mantles. Normal use, however, poses minimal health risk.
Mechanism
The mantle is a roughly pear-shaped fabric bag, made from silk, ramie-based artificial silk, or rayon. The fibers are impregnated with metallic salts; when the mantle is first heated in a flame, the fibers burn away in seconds and the metallic salts convert to solid oxides, forming a brittle ceramic oxide shell in the shape of the original fabric. A mantle glows brightly in the visible spectrum while emitting little infrared radiation. The rare-earth (cerium) and actinide (thorium) oxides in the mantle have a low emissivity in the infrared (in comparison with an ideal black body) but have high emissivity in the visible spectrum. There is also some evidence that the emission is enhanced by candoluminescence, the emission of light from the combustion products before they reach thermal equilibrium. The combination of these properties yields a mantle that, when heated by a kerosene or liquefied petroleum gas flame, emits intense radiation that is mostly visible light, with relatively little energy in the unwanted infrared, increasing the luminous efficiency.
The mantle aids the combustion process by keeping the flame small and contained at higher fuel flow rates than in a simple lamp. Concentrating combustion inside the mantle improves the transfer of heat from the flame to the mantle. The mantle shrinks after all the fabric material has burned away during installation leaving a very fragile ceramic oxide shell after its first use.
History
For centuries, artificial light has been generated using open flames. Limelight was invented in the 1820s, but the temperature required to produce visible light through black-body radiation alone was too high to be practical for small lights. In the late 19th century, several inventors tried to develop an effective alternative based on heating a material to a lower temperature but using the emission of discrete spectral lines to simulate white light.
Many early attempts used platinum-iridium gauze soaked in metal nitrates, but these were not successful because of the high cost of these materials and their poor reliability. The first effective mantle was the Clamond basket in 1881, named after its inventor. This device was made from a matrix of magnesium oxide, which did not need to be supported by a platinum wire cage, and was exhibited in the Crystal Palace exhibition of 1883.
The modern gas mantle was one of the many inventions of Carl Auer von Welsbach, a chemist who studied rare-earth elements in the 1880s and who had been Robert Bunsen's student. Ignaz Kreidl worked with him on his early experiments to create the Welsbach mantle. His first process used a mixture of 60% magnesium oxide, 20% lanthanum oxide and 20% yttrium oxide, which he called "Actinophor" and patented in 1887 (March 15, 1887, US patent #359,524). These original mantles gave off a green-tinted light and were not very successful. Welsbach's first company established a factory in Atzgersdorf in 1887, but it failed in 1889. In 1889, Welsbach received his first patent mentioning thorium (March 5, 1889, US patent #399,174). In 1891 he perfected a new mixture of 99% thorium dioxide and 1% cerium dioxide that gave off a much whiter light and produced a stronger mantle. After introducing this new mantle commercially in 1892, it quickly spread throughout Europe. The gas mantle remained an important part of street lighting until the widespread introduction of electric lighting in the early 1900s.
Production
To produce a mantle, cotton is woven or knit into a net bag, impregnated with soluble nitrates of the chosen metals, and then transported to its destination. The user installs the mantle and then burns it to remove the cotton bag and convert the metal nitrates to nitrites which fuse together to form a solid mesh. As the heating continues, the nitrites finally decompose into a fragile mesh of solid oxides with very high melting points.
Early mantles were sold in the unheated cotton mesh condition, since the post heating oxide structure was too fragile to transport easily. The mantle converts to its working form when the cotton burns away on first use. Originally, unused mantles could not be stored for very long because the cotton quickly rotted due to the corrosive nature of the acidic metal nitrates. The acidic metal corrosion was later addressed by soaking the mantle in an ammonia solution to neutralize the excess acid.
Later mantles were made from guncotton (nitrocellulose) which can be produced with extremely fine threads when compared with ordinary cotton threads. These had to be converted back to cellulose by immersion in ammonium sulfide before first use as guncotton is highly flammable and can be explosive. Later, it was discovered that a cotton mantle could be strengthened sufficiently by dipping it in a solution of collodion to coat it with a thin layer that would be burned off when the mantle was first used.
Mantles have a binding thread to tie them to the lamp fitting. Until asbestos was banned due to its carcinogenicity, an asbestos thread was used. Modern mantles use a wire or a ceramic fiber thread.
Safety concerns
Thorium is radioactive and produces the radioactive gas radon-220 as one of its decay products. Moreover, when heated to incandescence, the thorium volatilizes its in-growth radio-daughters, particularly radium-224. Despite its very short half-life, radium quickly replenishes from its radio-parent (thorium-228), and every new heating of the mantle to incandescence releases a fresh flush of radium-224 into the air. This byproduct can be inhaled if the mantle is used indoors, and is an internal alpha-emitter radio-toxicity concern. Secondary decay products of thorium include radium and actinium. Because of this, there are concerns about the safety of thorium mantles. The Australian Radiation Protection and Nuclear Safety Agency recommends mantles made with yttrium instead.
A study in 1981 estimated that the dose from using a thorium mantle every weekend for a year would be , tiny in comparison to the normal annual background radiation dose of around , although this assumes the thorium remains intact rather than airborne. A person actually ingesting a mantle would receive a dose of . However, the radioactivity is a major concern for people involved with the manufacture of mantles and an issue with contamination of soil around some former factory sites.
One potential cause for concern is that particles from thorium gas mantles "fall out" over time and get into the air, where they may be ingested in food or drink. These particles may also be inhaled and remain in the lungs or liver, causing long-term exposure exceeding the risk of background radiation. Also of concern is the release of thorium-bearing dust if the mantle shatters due to mechanical impact.
All of these issues have led to the use of alternatives in some countries, usually yttrium or sometimes zirconium, although these are usually either more expensive or less efficient. Safety concerns were the subject of a federal suit against the Coleman Company (Wagner v. Coleman), which initially agreed to place warning labels on the mantles for this concern, and subsequently switched to using yttrium.
In June 2001, the U.S. Nuclear Regulatory Commission published a study about the Systematic Radiological Assessment of Exemptions for Source and Byproduct Materials, stating that radioactive gas mantles are explicitly legal in the US.
Cooper pair

In condensed matter physics, a Cooper pair or BCS pair (Bardeen–Cooper–Schrieffer pair) is a pair of electrons (or other fermions) bound together at low temperatures in a certain manner first described in 1956 by American physicist Leon Cooper.
Description
Cooper showed that an arbitrarily small attraction between electrons in a metal can cause a paired state of electrons to have a lower energy than the Fermi energy, which implies that the pair is bound. In conventional superconductors, this attraction is due to the electron–phonon interaction. The Cooper pair state is responsible for superconductivity, as described in the BCS theory developed by John Bardeen, Leon Cooper, and John Schrieffer for which they shared the 1972 Nobel Prize.
Although Cooper pairing is a quantum effect, the reason for the pairing can be seen from a simplified classical explanation. An electron in a metal normally behaves as a free particle. The electron is repelled from other electrons due to their negative charge, but it also attracts the positive ions that make up the rigid lattice of the metal. This attraction distorts the ion lattice, moving the ions slightly toward the electron, increasing the positive charge density of the lattice in the vicinity. This positive charge can attract other electrons. At long distances, this attraction between electrons due to the displaced ions can overcome the electrons' repulsion due to their negative charge, and cause them to pair up. The rigorous quantum mechanical explanation shows that the effect is due to electron–phonon interactions, with the phonon being the collective motion of the positively-charged lattice.
The energy of the pairing interaction is quite weak, of the order of 10⁻³ eV, and thermal energy can easily break the pairs. Consequently, only at low temperatures, in metals and other substrates, are a significant number of electrons bound in Cooper pairs.
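The temperature scale this implies can be checked with a quick calculation: pairs survive only while the thermal energy k_B·T stays below the pairing energy. A minimal sketch (the 10⁻³ eV figure comes from the text above; the Boltzmann constant is expressed in eV/K):

```python
# Estimate the temperature at which thermal energy matches the pairing energy.
K_B_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def pair_breaking_temperature(pairing_energy_ev):
    """Temperature (K) at which k_B * T equals the given pairing energy."""
    return pairing_energy_ev / K_B_EV

# A pairing energy of order 1e-3 eV corresponds to roughly 11.6 K, which is
# why superconductivity appears only at low temperatures.
print(round(pair_breaking_temperature(1e-3), 1))  # → 11.6
```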
The electrons in a pair are not necessarily close together; because the interaction is long range, paired electrons may still be many hundreds of nanometers apart. This distance is usually greater than the average interelectron distance, so that many Cooper pairs can occupy the same space. Electrons have spin-½, so they are fermions, but the total spin of a Cooper pair is integer (0 or 1), so it is a composite boson. This means the wave functions are symmetric under particle interchange. Therefore, unlike electrons, multiple Cooper pairs are allowed to be in the same quantum state, which is responsible for the phenomenon of superconductivity.
The BCS theory is also applicable to other fermion systems, such as helium-3. Indeed, Cooper pairing is responsible for the superfluidity of helium-3 at low temperatures. In 2008 it was proposed that pairs of bosons in an optical lattice may be similar to Cooper pairs.
Relationship to superconductivity
The tendency for all the Cooper pairs in a body to "condense" into the same ground quantum state is responsible for the peculiar properties of superconductivity.
Cooper originally considered only the case of an isolated pair's formation in a metal. When one considers the more realistic state of many electronic pair formations, as is elucidated in the full BCS theory, one finds that the pairing opens a gap in the continuous spectrum of allowed energy states of the electrons, meaning that all excitations of the system must possess some minimum amount of energy. This gap to excitations leads to superconductivity, since small excitations such as scattering of electrons are forbidden.
The gap appears due to many-body effects between electrons feeling the attraction.
R. A. Ogg Jr. was the first to suggest that electrons might act as pairs coupled by lattice vibrations in the material. This was indicated by the isotope effect observed in superconductors. The isotope effect showed that materials with heavier ions (different nuclear isotopes) had lower superconducting transition temperatures. This can be explained by the theory of Cooper pairing: heavier ions are harder for the electrons to attract and move (which is how Cooper pairs are formed), which results in a smaller binding energy for the pairs.
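This mass dependence can be illustrated numerically. In the simplest BCS picture the transition temperature scales as Tc ∝ M^(−1/2); the exponent and the sample masses below are illustrative assumptions rather than values from the text:

```python
def tc_for_isotope(tc_ref, m_ref, m_new, alpha=0.5):
    """Scale a reference transition temperature to a different isotopic mass,
    assuming Tc is proportional to M**(-alpha)."""
    return tc_ref * (m_ref / m_new) ** alpha

# A heavier isotope (larger M) yields a lower Tc, matching the observation
# that materials with heavier ions have lower transition temperatures.
tc_light = 4.2                                    # hypothetical reference Tc in kelvin
tc_heavy = tc_for_isotope(tc_light, 198.0, 202.0) # hypothetical heavier isotope
print(tc_heavy < tc_light)  # → True
```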
The theory of Cooper pairs is quite general and does not depend on the specific electron-phonon interaction. Condensed matter theorists have proposed pairing mechanisms based on other attractive interactions such as electron–exciton interactions or electron–plasmon interactions. Currently, none of these other pairing interactions has been observed in any material.
It should be mentioned that Cooper pairing does not involve individual electrons pairing up to form "quasi-bosons". The paired states are energetically favored, and electrons go in and out of those states preferentially. This is a fine distinction that John Bardeen makes:
"The idea of paired electrons, though not fully accurate, captures the sense of it."
The mathematical description of the second-order coherence involved here is given by Yang.
Diffraction-limited system

In optics, any optical instrument or system (a microscope, telescope, or camera) has a principal limit to its resolution due to the physics of diffraction. An optical instrument is said to be diffraction-limited if it has reached this limit of resolution performance. Other factors may affect an optical system's performance, such as lens imperfections or aberrations, but these are caused by errors in the manufacture or calculation of a lens, whereas the diffraction limit is the maximum resolution possible for a theoretically perfect, or ideal, optical system.
The diffraction-limited angular resolution, in radians, of an instrument is proportional to the wavelength of the light being observed, and inversely proportional to the diameter of its objective's entrance aperture. For telescopes with circular apertures, the size of the smallest feature in an image that is diffraction limited is the size of the Airy disk. As one decreases the size of the aperture of a telescopic lens, diffraction proportionately increases. At small apertures, such as f/22, most modern lenses are limited only by diffraction and not by aberrations or other imperfections in the construction.
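For a circular aperture, the proportionality described above is conventionally written as the Rayleigh criterion, θ ≈ 1.22 λ/D, where the factor 1.22 comes from the first zero of the Airy pattern. A minimal sketch (the formula is the standard textbook criterion and the 2.4 m example aperture is an illustrative assumption, not a value from the text):

```python
import math

def rayleigh_limit_rad(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution (radians) of a circular aperture,
    using the Rayleigh criterion: theta = 1.22 * wavelength / diameter."""
    return 1.22 * wavelength_m / aperture_m

# Example: a 2.4 m aperture observing green light at 500 nm.
theta = rayleigh_limit_rad(500e-9, 2.4)
arcsec = math.degrees(theta) * 3600
print(f"{arcsec:.3f} arcsec")  # → 0.052 arcsec
```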
For microscopic instruments, the diffraction-limited spatial resolution is proportional to the light wavelength, and to the numerical aperture of either the objective or the object illumination source, whichever is smaller.
In astronomy, a diffraction-limited observation is one that achieves the resolution of a theoretically ideal objective in the size of instrument used. However, most observations from Earth are seeing-limited due to atmospheric effects. Optical telescopes on the Earth work at a much lower resolution than the diffraction limit because of the distortion introduced by the passage of light through several kilometres of turbulent atmosphere. Advanced observatories have started using adaptive optics technology, resulting in greater image resolution for faint targets, but it is still difficult to reach the diffraction limit using adaptive optics.
Radio telescopes are frequently diffraction-limited, because the wavelengths they use (from millimeters to meters) are so long that the atmospheric distortion is negligible. Space-based telescopes (such as Hubble, or a number of non-optical telescopes) always work at their diffraction limit, if their design is free of optical aberration.
The beam from a laser with near-ideal beam propagation properties may be described as being diffraction-limited. A diffraction-limited laser beam, passed through diffraction-limited optics, will remain diffraction-limited, and will have a spatial or angular extent essentially equal to the resolution of the optics at the wavelength of the laser.
Calculation of diffraction limit
The Abbe diffraction limit for a microscope
The observation of sub-wavelength structures with microscopes is difficult because of the Abbe diffraction limit. Ernst Abbe found in 1873, and expressed as a formula in 1882, that light with wavelength λ, traveling in a medium with refractive index n and converging to a spot with half-angle θ, will have a minimum resolvable distance of d = λ / (2n sin θ).
The portion of the denominator n sin θ is called the numerical aperture (NA) and can reach about 1.4–1.6 in modern optics, hence the Abbe limit is d = λ/2.8.
The same formula had been proven by Hermann von Helmholtz in 1874.
Considering green light around 500 nm and a NA of 1, the Abbe limit is roughly 250 nm (0.25 μm), which is small compared to most biological cells (1 μm to 100 μm), but large compared to viruses (100 nm), proteins (10 nm) and less complex molecules (1 nm). To increase the resolution, shorter wavelengths can be used, such as in UV and X-ray microscopes. These techniques offer better resolution but are expensive, suffer from lack of contrast in biological samples and may damage the sample.
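The limit is easy to evaluate for different objectives. A minimal sketch of the formula d = λ/(2·NA) from above (the NA of 1.4 is a typical oil-immersion value, as mentioned in the text):

```python
def abbe_limit_nm(wavelength_nm, numerical_aperture):
    """Minimum resolvable distance in nanometres: d = wavelength / (2 * NA)."""
    return wavelength_nm / (2 * numerical_aperture)

print(abbe_limit_nm(500, 1.0))  # green light, NA = 1   → 250.0
print(abbe_limit_nm(500, 1.4))  # oil-immersion optics, roughly 178.6
```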
Digital photography
In a digital camera, diffraction effects interact with the effects of the regular pixel grid. The combined effect of the different parts of an optical system is determined by the convolution of the point spread functions (PSF). The point spread function of a diffraction-limited circular-aperture lens is simply the Airy disk. The point spread function of the camera, otherwise called the instrument response function (IRF), can be approximated by a rectangle function with a width equivalent to the pixel pitch. A more complete derivation of the modulation transfer function (derived from the PSF) of image sensors is given by Fliegel. Whatever the exact instrument response function, it is largely independent of the f-number of the lens. Thus, at different f-numbers, a camera may operate in three different regimes, as follows:
The spread of the IRF is small with respect to the spread of the diffraction PSF, in which case the system may be said to be essentially diffraction limited (so long as the lens itself is diffraction limited).
The spread of the diffraction PSF is small with respect to the IRF, in which case the system is instrument limited.
The spread of the PSF and IRF are similar, in which case both impact the available resolution of the system.
The spread of the diffraction-limited PSF is approximated by the diameter of the first null of the Airy disk, d = 2.44 λN,
where λ is the wavelength of the light and N is the f-number of the imaging optics (so that NA ≈ 1/(2N) in the Abbe diffraction limit formula). For instance, for an f/8 lens (N = 8) and green light (λ = 0.5 μm), the focusing spot diameter will be d = 9.76 μm, or 19.5 λ. This is similar to the pixel size for the majority of commercially available 'full frame' (43 mm sensor diagonal) cameras, and so these will operate in regime 3 for f-numbers around 8 (few lenses are close to diffraction limited at f-numbers smaller than 8). Cameras with smaller sensors will tend to have smaller pixels, but their lenses will be designed for use at smaller f-numbers, and it is likely that they will also operate in regime 3 for those f-numbers for which their lenses are diffraction limited.
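The regime comparison can be sketched directly: compute the Airy-disk diameter 2.44 λN and compare it against the pixel pitch. The 9 μm pixel pitch below is an assumed, illustrative full-frame value:

```python
def airy_diameter_um(wavelength_um, f_number):
    """Diameter (micrometres) of the first null of the Airy disk: 2.44 * lambda * N."""
    return 2.44 * wavelength_um * f_number

spot = airy_diameter_um(0.5, 8)  # green light at f/8 → 9.76 um (19.52 wavelengths)
pixel_pitch_um = 9.0             # assumed pixel pitch of a full-frame sensor
ratio = spot / pixel_pitch_um
# Comparable spreads put the camera in regime 3, where both diffraction
# and the pixel grid limit the achievable resolution.
print(f"spot = {spot:.2f} um, ratio to pixel = {ratio:.2f}")
```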
Obtaining higher resolution
There are techniques for producing images that appear to have higher resolution than allowed by simple use of diffraction-limited optics. Although these techniques improve some aspect of resolution, they generally come at an enormous increase in cost and complexity. Usually the technique is only appropriate for a small subset of imaging problems, with several general approaches outlined below.
Extending numerical aperture
The effective resolution of a microscope can be improved by illuminating from the side.
In conventional microscopes such as bright-field or differential interference contrast, this is achieved by using a condenser. Under spatially incoherent conditions, the image is understood as a composite of images illuminated from each point on the condenser, each of which covers a different portion of the object's spatial frequencies. This effectively improves the resolution by, at most, a factor of two.
Simultaneously illuminating from all angles (fully open condenser) drives down interferometric contrast. In conventional microscopes, the maximum resolution (fully open condenser, at N = 1) is rarely used. Further, under partially coherent conditions, the recorded image is often non-linear with object's scattering potential—especially when looking at non-self-luminous (non-fluorescent) objects. To boost contrast, and sometimes to linearize the system, unconventional microscopes (with structured illumination) synthesize the condenser illumination by acquiring a sequence of images with known illumination parameters. Typically, these images are composited to form a single image with data covering a larger portion of the object's spatial frequencies when compared to using a fully closed condenser (which is also rarely used).
Another technique, 4Pi microscopy, uses two opposing objectives to double the effective numerical aperture, effectively halving the diffraction limit, by collecting the forward and backward scattered light. When imaging a transparent sample, with a combination of incoherent or structured illumination, as well as collecting both forward, and backward scattered light it is possible to image the complete scattering sphere.
Unlike methods relying on localization, such systems are still limited by the diffraction limit of the illumination (condenser) and collection optics (objective), although in practice they can provide substantial resolution improvements compared to conventional methods.
Near-field techniques
The diffraction limit is only valid in the far field as it assumes that no evanescent fields reach the detector. Various near-field techniques that operate less than ≈1 wavelength of light away from the image plane can obtain substantially higher resolution. These techniques exploit the fact that the evanescent field contains information beyond the diffraction limit which can be used to construct very high resolution images, in principle beating the diffraction limit by a factor proportional to how well a specific imaging system can detect the near-field signal. For scattered light imaging, instruments such as near-field scanning optical microscopes and nano-FTIR, which are built atop atomic force microscope systems, can be used to achieve up to 10-50 nm resolution. The data recorded by such instruments often requires substantial processing, essentially solving an optical inverse problem for each image.
Metamaterial-based superlenses can image with a resolution better than the diffraction limit by locating the objective lens extremely close (typically hundreds of nanometers) to the object.
In fluorescence microscopy the excitation and emission are typically on different wavelengths. In total internal reflection fluorescence microscopy a thin portion of the sample located immediately on the cover glass is excited with an evanescent field, and recorded with a conventional diffraction-limited objective, improving the axial resolution.
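The "thin portion" excited in TIRF can be quantified with the standard evanescent-field penetration depth formula; the function name and the example indices and angle below are illustrative assumptions:

```python
import math

def tirf_penetration_depth(wavelength_nm, n_glass, n_sample, theta_deg):
    """Characteristic 1/e decay depth of the evanescent field in TIRF:
    d = lambda / (4*pi*sqrt((n1*sin(theta))^2 - n2^2)).
    Valid only beyond the critical angle for total internal reflection."""
    theta = math.radians(theta_deg)
    term = (n_glass * math.sin(theta)) ** 2 - n_sample ** 2
    if term <= 0:
        raise ValueError("angle is below the critical angle; no total internal reflection")
    return wavelength_nm / (4 * math.pi * math.sqrt(term))

# 488 nm excitation through glass (n = 1.52) into water (n = 1.33) at 70 degrees:
d = tirf_penetration_depth(488, 1.52, 1.33, 70.0)
print(f"penetration depth ~ {d:.0f} nm")
```

The resulting depth of roughly 75 nm is far below the diffraction limit in the axial direction, which is the source of the improved axial resolution described above.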
However, because these techniques cannot image beyond 1 wavelength, they cannot be used to image into objects thicker than 1 wavelength, which limits their applicability.
Far-field techniques
Far-field imaging techniques are most desirable for imaging objects that are large compared to the illumination wavelength but that contain fine structure. This includes nearly all biological applications in which cells span multiple wavelengths but contain structure down to molecular scales. In recent years several techniques have shown that sub-diffraction limited imaging is possible over macroscopic distances. These techniques usually exploit optical nonlinearity in a material's reflected light to generate resolution beyond the diffraction limit.
Among these techniques, the STED microscope has been one of the most successful. In STED, multiple laser beams are used to first excite, and then quench, fluorescent dyes. The nonlinear response to illumination caused by the quenching process, in which adding more light causes the image to become less bright, generates sub-diffraction limited information about the location of dye molecules, allowing resolution far beyond the diffraction limit provided high illumination intensities are used.
Laser beams
The limits on focusing or collimating a laser beam are very similar to the limits on imaging with a microscope or telescope. The only difference is that laser beams are typically soft-edged beams. This non-uniformity in light distribution leads to a coefficient slightly different from the 1.22 value familiar in imaging. However, the scaling with wavelength and aperture is exactly the same.
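The identical λ/D scaling with a different leading coefficient can be illustrated by comparing a uniformly illuminated aperture (Airy pattern) with an ideal Gaussian beam; the function names and the 1/e² convention chosen for the Gaussian radius are assumptions for this sketch:

```python
import math

def airy_radius(wavelength, focal_length, aperture_diameter):
    # Radius of the first dark ring of the Airy pattern
    # (uniformly illuminated circular aperture): 1.22 * lambda * f / D.
    return 1.22 * wavelength * focal_length / aperture_diameter

def gaussian_waist_radius(wavelength, focal_length, input_waist_radius):
    # 1/e^2 intensity radius of an ideal focused Gaussian beam
    # (far-field approximation): lambda * f / (pi * w_in).
    return wavelength * focal_length / (math.pi * input_waist_radius)

# 633 nm HeNe beam, f = 100 mm lens, 10 mm aperture / beam diameter (metres):
lam, f, D = 633e-9, 0.100, 0.010
r_airy = airy_radius(lam, f, D)
r_gauss = gaussian_waist_radius(lam, f, D / 2)
print(f"Airy radius:     {r_airy * 1e6:.1f} um")
print(f"Gaussian radius: {r_gauss * 1e6:.1f} um")
```

Both spot sizes are proportional to λf/D; only the numerical prefactor differs, exactly as the paragraph above states.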
The beam quality of a laser beam is characterized by how well its propagation matches an ideal Gaussian beam at the same wavelength. The beam quality factor M squared (M2) is found by measuring the size of the beam at its waist, and its divergence far from the waist, and taking the product of the two, known as the beam parameter product. The ratio of this measured beam parameter product to that of the ideal is defined as M2, so that M2=1 describes an ideal beam. The M2 value of a beam is conserved when it is transformed by diffraction-limited optics.
The outputs of many low and moderately powered lasers have M2 values of 1.2 or less, and are essentially diffraction-limited.
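The M² measurement described above reduces to a simple ratio: the measured beam parameter product divided by the ideal Gaussian value λ/π. A minimal sketch, using the waist-radius and half-angle-divergence convention (the measurement values below are hypothetical):

```python
import math

def m_squared(waist_radius, divergence_half_angle, wavelength):
    """Beam quality factor M^2: the measured beam parameter product
    (waist radius x far-field divergence half-angle) divided by the
    ideal Gaussian value lambda/pi. M^2 = 1 for a perfect Gaussian beam."""
    measured_bpp = waist_radius * divergence_half_angle
    ideal_bpp = wavelength / math.pi
    return measured_bpp / ideal_bpp

# Hypothetical measurement: 0.5 mm waist radius, 0.45 mrad divergence at 633 nm
m2 = m_squared(0.5e-3, 0.45e-3, 633e-9)
print(f"M^2 = {m2:.2f}")
```

The example value of about 1.12 falls in the range the text describes for essentially diffraction-limited lasers.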
Other waves
The same equations apply to other wave-based sensors, such as radar and the human ear.
As opposed to light waves (i.e., photons), massive particles have a different relationship between their quantum mechanical wavelength and their energy. This relationship indicates that the effective "de Broglie" wavelength is inversely proportional to the momentum of the particle. For example, an electron at an energy of 10 keV has a wavelength of 0.01 nm, allowing the electron microscope (SEM or TEM) to achieve high resolution images. Other massive particles such as helium, neon, and gallium ions have been used to produce images at resolutions beyond what can be attained with visible light. Such instruments provide nanometer scale imaging, analysis and fabrication capabilities at the expense of system complexity.
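The electron figure quoted above can be checked with the standard relativistically corrected de Broglie formula; the constants are CODATA SI values and the function name is illustrative:

```python
import math

# Physical constants (CODATA, SI units)
H = 6.62607015e-34      # Planck constant, J*s
M_E = 9.1093837015e-31  # electron rest mass, kg
C = 2.99792458e8        # speed of light, m/s
EV = 1.602176634e-19    # joules per electronvolt

def electron_wavelength_nm(energy_ev):
    """De Broglie wavelength (nm) of an electron of the given kinetic
    energy (eV), with the standard relativistic momentum correction."""
    E = energy_ev * EV
    p = math.sqrt(2 * M_E * E * (1 + E / (2 * M_E * C**2)))
    return H / p * 1e9

print(f"10 keV electron:  {electron_wavelength_nm(10e3):.4f} nm")
print(f"100 keV electron: {electron_wavelength_nm(100e3):.4f} nm")
```

A 10 keV electron comes out near 0.012 nm, consistent with the ~0.01 nm figure in the text, and the wavelength shrinks further with increasing energy, which is why higher-voltage TEMs resolve finer detail.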
| Physical sciences | Waves | Physics |
376967 | https://en.wikipedia.org/wiki/Rosin | Rosin | Rosin (), also known as colophony or Greek pitch (), is a resinous material obtained from pine trees and other plants, mostly conifers. The primary components of rosin are diterpenoids, i.e., C20 carboxylic acids. Rosin consists mainly of resin acids, especially abietic acid. Rosin often appears as a semi-transparent, brittle substance that ranges in color from yellow to black and melts at stove-top temperatures.
In addition to industrial applications such as in varnishes, adhesives, and sealing wax, rosin is used with string instruments on the bow hair to enhance its ability to grip and sound the strings, and it provides grip in various sports and activities. Rosin also serves as an ingredient in medicinal and pharmaceutical formulations and can cause contact dermatitis or occupational asthma in sensitive individuals. It is an FDA approved food additive.
The name "colophony" originates from , Latin for "resin from Colophon" (), an ancient Ionic city.
Properties
Rosin is brittle and friable, with a faint pine odor. It is typically a glassy solid, though some rosins will crystallize, especially when brought into solution. The practical melting point is variable, some being semi-fluid at the temperature of boiling water, others melting at . It is flammable, burning with a smoky flame. It is soluble in alcohol, ether, benzene and chloroform.
Rosin, consisting mainly of abietic acid, combines with caustic alkalis to form salts (rosinates or pinates) that are known as rosin soaps. They are used in soap making.
Uses
Rosin has been used for centuries as a flux for soldering. (Abietic acid in the flux removes oxidation from the surfaces of metals, increasing their ability to bond with the liquified solder.)
Is rubbed on the hair of bows for bowed string instruments to increase friction.
Has been used for centuries for caulking ships.
Is approved by the US FDA as a miscellaneous food additive.
Rosin is an ingredient in printing inks, photocopying and laser printing paper, varnishes, adhesives (glues), soap, paper sizing, soda, soldering fluxes, and sealing wax.
Rosin can be used as a glazing agent in medicines and chewing gum. It is denoted by E number E915. A related glycerol ester (E445) can be used as an emulsifier in soft drinks. In pharmaceuticals, rosin forms an ingredient in several plasters and ointments.
In industry, rosin is a flux used in soldering. The lead-tin solder commonly used in electronics has 1 to 2% rosin by weight as a flux core, helping the molten metal flow and making a better connection by reducing the refractory solid oxide layer formed at the surface back to metal. It is frequently seen as a burnt or clear residue around new soldering.
Rosin is also sometimes used as internal reinforcement for very thin-skinned metal objects, such as silver, copper, or tin plate candlesticks or sculptures, where it is simply melted, poured into the hollow thin-skinned object, and left to harden.
A mixture of pitch and rosin is used to make a surface against which glass is polished when making optical components such as lenses.
Rosin is added in small quantities to traditional linseed oil/sand gap fillers ("mastic"), used in building work.
When mixed with waxes and oils, rosin is the main ingredient of mystic smoke, a gum which, when rubbed and suddenly stretched, appears to produce puffs of smoke from the fingertips.
Rosin is extensively used for its friction-increasing capacity in several fields:
Ballet, flamenco, and Irish dancers are known to rub the tips and heels of their shoes in powdered rosin to reduce slippage on clean wooden dance floors or competition/performance stages. It was at one time used in the same way in fencing and is still used as such by boxers.
Gymnasts and team handball players use it to improve grip. Rock climbers have used it in some locations.
Olympic weightlifters rub the soles of their weightlifting boots in rosin to improve traction on the platform.
It is applied to the track surface at the starting line of drag racing courses to improve traction.
Bull riders rub rosin on their rope and glove for additional grip.
Baseball pitchers and ten-pin bowlers may use a small cloth bag of powdered rosin for better ball control. Baseball players sometimes combine rosin with sunscreen, creating a very sticky substance that allows far more grip on the ball than the rosin alone will; the use of such a substance is a violation of Major League Baseball rules.
Rosin can be applied to the hands in aerial acrobatics such as aerial silks and pole dancing to increase grip.
Other uses that are not based on friction:
Fine art uses rosin for tempera emulsions and as painting-medium component for oil paintings. It is soluble in oil of turpentine and turpentine substitute, and needs to be warmed.
In a printmaking technique, aquatint rosin is used on the etching plate in order to create surfaces in gray tones.
In archery, when a new bowstring is being made or waxed for maintenance purposes, rosin may be present in the wax mixture. This provides an amount of tackiness to the string to hold its constituent strands together and reduce wear and fraying.
Dog groomers use powdered rosin to aid in removal of excess hair from deep in the ear canal by giving the groomer a better grip to grasp the hairs with.
Some brands of fly paper use a solution of rosin and rubber as the adhesive.
Rosin is sometimes used as an ingredient in dubbing wax used in fly tying.
Rosin is used hot to de-encapsulate epoxy integrated circuits.
Rosin can be mixed with beeswax and a small amount of linseed oil to affix reeds to reed blocks in accordions.
Rosin potatoes can be cooked by dropping potatoes into boiling rosin and cooking until they float to the surface.
Rosin and its derivatives also exhibit wide-ranging pharmaceutical applications. Rosin derivatives show excellent film forming and coating properties. They are also used for tablet film and enteric coating purposes. Rosins have also been used to formulate microcapsules and nanoparticles.
Glycerol, sorbitol, and mannitol esters of rosin are used as chewing gum bases for medicinal applications. The degradation and biocompatibility of rosin and rosin-based biomaterials has been examined in vitro and ex vivo.
Rosin soaps and esters
Treatment of rosin with sodium hydroxide or sodium carbonate converts the abietic acid into its sodium salt, which is known as a soap. Whereas most domestic soaps are sodium salts of straight-chain fatty acids, the rosin soaps have the branched and cyclic backbone associated with abietic acid. Rosin soaps, also called rosinates, are used to "size" paper, a process that gives paper a desirable hydrophobic texture.
The conversion of abietic acid to esters is also practiced commercially. Esters of glycerol and of methanol are both of interest. These materials are colorless syrups. They are compounded with polymers as tackifiers.
Violin rosin
Players of bowed string instruments rub cakes or blocks of rosin on their bow hair so it can grip the strings and make them "speak", or vibrate clearly. Occasionally, substances such as beeswax, gold, silver, tin, or meteoric iron are added to the rosin to modify its stiction/friction properties and the tone that can be produced. Powdered rosin can be applied to new hair, for example with a felt pad or cloth, to reduce the time taken in getting sufficient rosin onto the hair. Rosin is often reapplied immediately before playing the instrument. Lighter rosin is generally preferred for violins and violas, and in high-humidity climates, while darker rosins are preferred for cellos, and for players in cool, dry areas. There are also specific, distinguishing types for basses.
Violin rosin can be applied to the bridges in other musical instruments, such as the banjo and banjolele, in order to prevent the bridge from moving during vigorous playing.
The type of rosin used with bowed string instruments is determined by the diameter of the strings. Generally this means that the larger the instrument is, the softer the rosin should be. For instance, double bass rosin is generally soft enough to be pliable with slow movements. A cake of bass rosin left in a single position for several months will show evidence of flow, especially in warmer weather.
Production
Three methods are used to collect rosin. Rosin exudates are collected from gashes in the bark of living pine trees. Alternatively (see below), rosin is extracted from stumps. Yet another source is pulp mills that use the Kraft process: tall oil rosin is produced during the distillation of crude tall oil, a by-product of the kraft paper making process. The collection and processing of rosin is traditionally known as the naval stores industry.
The separation of the oleo-resin into the essential oil (spirit of turpentine) and common rosin is accomplished by distillation in large copper stills. The essential oil is carried off at a temperature of between ° and , leaving fluid rosin, which is run off through a tap at the bottom of the still, and purified by passing through straining wadding. Rosin varies in color, according to the age of the tree from which the turpentine is drawn and the degree of heat applied in distillation, from an opaque, almost pitch-black substance through grades of brown and yellow to an almost perfectly transparent colorless glassy mass. The commercial grades are numerous, ranging by letters from A (the darkest) to N (extra pale), superior to which are W (window glass) and WW (water-white) varieties, the latter having about three times the value of the common qualities.
When pine trees are harvested "the resinous portions of fallen or felled trees like longleaf and slash pines, when allowed to remain upon the ground, resist decay indefinitely." This "stump waste", through the use of destructive distillation or solvent processes, can be used to obtain rosin. This type of rosin is typically called wood rosin.
Because the turpentine and pine oil from destructive distillation "become somewhat contaminated with other distillation products", solvent processes are commonly used. In this process, stumps and roots are chipped and soaked in the light end of the heavy naphtha fraction (boiling between ). Multi-stage counter-current extraction is commonly used. In this process, fresh naphtha first contacts wood leached in intermediate stages, and naphtha laden with rosin from intermediate stages contacts unleached wood before vacuum distillation to recover naphtha from the rosin, along with fatty acids, turpentine, and other constituents later separated through steam distillation. Leached wood is steamed for additional naphtha recovery prior to burning for energy recovery. After the solvent has been recovered, "the terpene oils are separated by fractional distillation and recovered mainly as refined turpentine, dipentene, and pine oil. The nonvolatile residue from the extract is wood rosin of rather dark color. Upgrading of the rosin is carried out by clarification methods that generally may include bed-filtering or furfural-treatment of rosin-solvent solution."
On a large scale, rosin is treated by destructive distillation for the production of rosin spirit, pinoline and rosin oil. The last enters into the composition of some of the solid lubricating greases, and is also used as an adulterant of other oils.
Locales
The chief region of rosin production includes Indonesia, southern China (such as Guangdong, Guangxi, Fujian, Yunnan and Jiangxi), and the northern part of Vietnam. Chinese rosin is obtained mainly from the turpentine of Masson's pine Pinus massoniana and slash pine P. elliottii. The latter species is native to the southeastern U.S., but is now widely planted in tree plantations in China.
The South Atlantic and eastern Gulf states of the United States is a second chief region of production. American rosin is obtained from the turpentine of longleaf pine Pinus palustris and loblolly pine P. taeda. In Mexico, most of the rosin is derived from live tapping of several species of pine trees, but mostly Pinus oocarpa, Pinus leiophylla, Pinus devoniana and Pinus montezumae. Most production is concentrated in the west-central state of Michoacán.
The main source of supply in Europe is the French district of Landes, in the departments of Gironde and Landes, where the maritime pine P. pinaster is extensively cultivated. In the north of Europe, rosin is obtained from the Scots pine P. sylvestris, and throughout European countries local supplies are obtained from other species of pine, with Aleppo pine P. halepensis being particularly important in the Mediterranean region.
Health effects
Prolonged exposure to rosin fumes released during soldering can cause occupational asthma (formerly called colophony disease in this context) in sensitive individuals, although it is not known which component of the fumes causes the problem. Symptoms can also include desquamation of the bronchial epithelium.
Prolonged exposure to rosin, by handling rosin-coated products, such as laser printer or photocopying paper, can give rise to a form of industrial contact dermatitis.
| Physical sciences | Terpenes and terpenoids | Chemistry |
377055 | https://en.wikipedia.org/wiki/Frostbite | Frostbite | Frostbite is a skin injury that occurs when someone is exposed to extremely low temperatures, causing the freezing of the skin or other tissues, commonly affecting the fingers, toes, nose, ears, cheeks and chin areas. Most often, frostbite occurs in the hands and feet. The initial symptoms are typically a feeling of cold and tingling or numbing. This may be followed by clumsiness with a white or bluish color to the skin. Swelling or blistering may occur following treatment. Complications may include hypothermia or compartment syndrome.
People who are exposed to low temperatures for prolonged periods, such as winter sports enthusiasts, military personnel, and homeless individuals, are at greatest risk. Other risk factors include drinking alcohol, smoking, mental health problems, certain medications, and prior injuries due to cold. The underlying mechanism involves injury from ice crystals and blood clots in small blood vessels following thawing. Diagnosis is based on symptoms. Severity may be divided into superficial (1st and 2nd degree) or deep (3rd and 4th degree). A bone scan or MRI may help in determining the extent of injury.
Prevention consists of wearing proper, fully-covering clothing, avoiding low temperatures and wind, maintaining hydration and nutrition, and sufficient physical activity to maintain core temperature without exhaustion. Treatment is by rewarming, by immersion in warm water (near body temperature) or by body contact, and should be done only when consistent temperature can be maintained so that refreezing is not a risk. Rapid heating or cooling should be avoided since it could potentially cause burning or heart stress. Rubbing or applying force to the affected areas should be avoided as it may cause further damage such as abrasions. Ibuprofen is recommended to reduce pain, swelling, and inflammation, and tetanus toxoid may also be given. For severe injuries, iloprost or thrombolytics may be used. Surgery, including amputation, is sometimes necessary.
Evidence of frostbite occurring in people dates back 5,000 years, documented in a pre-Columbian mummy discovered in the Andes. The number of cases of frostbite is unknown. Rates may be as high as 40% a year among those who mountaineer. The most common age group affected is those 30 to 50 years old. Frostbite has also played an important role in a number of military conflicts. The first formal description of the condition was in 1813 by Dominique Jean Larrey, a physician in Napoleon's army, during its invasion of Russia.
Signs and symptoms
Areas that are usually affected include cheeks, ears, nose and fingers and toes. Frostbite is often preceded by frostnip. The symptoms of frostbite progress with prolonged exposure to cold. Historically, frostbite has been classified by degrees according to skin and sensation changes, similar to burn classifications. However, the degrees do not correspond to the amount of long term damage. A simplification of this system of classification is superficial (first or second degree) or deep injury (third or fourth degree).
First degree
First degree frostbite is superficial, surface skin damage that is usually not permanent.
Early on, the primary symptom is loss of feeling in the skin. In the affected areas, the skin is numb, and possibly swollen, with a reddened border.
In the weeks after injury, the skin's surface may slough off.
Second degree
In second degree frostbite, the skin develops clear blisters early on, and the skin's surface hardens.
In the weeks after injury, this hardened, blistered skin dries, blackens, and peels.
At this stage, lasting cold sensitivity and numbness can develop.
Third degree
In third degree frostbite, the layers of tissue below the skin freeze.
Symptoms include blood blisters and "blue-grey discoloration of the skin".
In the weeks after injury, pain persists and a blackened crust (eschar) develops.
There can be long-term ulceration and damage to growth plates.
Fourth degree
In fourth degree frostbite, structures below the skin are involved like muscles, tendon, and bone.
Early symptoms include a colorless appearance of the skin, a hard texture, and painless rewarming.
Later, the skin becomes black and mummified. The amount of permanent damage can take one month or more to determine. Autoamputation can occur after two months.
Causes
Risk factors
The major risk factor for frostbite is exposure to cold through geography, occupation and/or recreation. Inadequate clothing and shelter are major risk factors. Frostbite is more likely when the body's ability to produce or retain heat is impaired. Physical, behavioral, and environmental factors can all contribute to the development of frostbite. Immobility and physical stress (such as malnutrition or dehydration) are also risk factors. Disorders and substances that impair circulation contribute, including diabetes, Raynaud's phenomenon, tobacco and alcohol use. Homeless individuals and individuals with some mental illnesses may be at higher risk.
Mechanism
Freezing
In frostbite, cooling of the body causes narrowing of the blood vessels (vasoconstriction). Prolonged exposure to temperatures below may cause ice crystals to form in the tissues, and prolonged exposure to temperatures below may cause ice crystals to form in the blood. Ice crystals can damage small blood vessels at the site of injury. Typically, prolonged exposure to temperatures below may cause frostbite.
Rewarming
Rewarming causes tissue damage through reperfusion injury, which involves vasodilation, swelling (edema), and poor blood flow (stasis). Platelet aggregation is another possible mechanism of injury. Blisters and spasm of blood vessels (vasospasm) can develop after rewarming.
Non-freezing cold injury
The process of frostbite differs from the process of non-freezing cold injury (NFCI). In NFCI, temperature in the tissue decreases gradually. This slower temperature decrease allows the body to try to compensate through alternating cycles of closing and opening blood vessels (vasoconstriction and vasodilation). If this process continues, inflammatory mast cells act in the area. Small clots (microthrombi) form and can cut off blood to the affected area (known as ischemia) and damage nerve fibers. Rewarming causes a series of inflammatory chemicals such as prostaglandins to increase localized clotting.
Pathophysiology
The pathological mechanism by which frostbite causes body tissue injury can be characterized by four stages: Prefreeze, freeze-thaw, vascular stasis, and the late ischemic stage.
Prefreeze phase: involves the cooling of tissues without ice crystal formation.
Freeze-thaw phase: ice-crystals form, resulting in cellular damage and death.
Vascular stasis phase: marked by blood coagulation or the leaking of blood out of the vessels.
Late ischemic phase: characterized by inflammatory events, ischemia and tissue death.
Diagnosis
Frostbite is diagnosed based on signs and symptoms as described above, and by patient history. Other conditions that can have a similar appearance or occur at the same time include:
Frostnip is similar to frostbite, but without ice crystal formation in the skin. Whitening of the skin and numbness reverse quickly after rewarming.
Trench foot is damage to nerves and blood vessels that results from exposure to cold wet (non-freezing) conditions. This is reversible if treated early.
Pernio or chilblains are inflammation of the skin from exposure to wet, cold (non-freezing) conditions. They can appear as various types of ulcers and blisters.
Bullous pemphigoid is a condition that causes itchy blisters over the body that can mimic frostbite. It does not require exposure to cold to develop.
Levamisole toxicity is a vasculitis that can appear similar to frostbite. It is caused by contamination of cocaine by levamisole. Skin lesions can look similar to those of frostbite, but do not require cold exposure to occur.
People who have hypothermia often have frostbite as well. Since hypothermia is life-threatening this should be treated first. Technetium-99 or MR scans are not required for diagnosis, but might be useful for prognostic purposes.
Prevention
The Wilderness Medical Society recommends covering the skin and scalp, taking in adequate nutrition, avoiding constrictive footwear and clothing, and remaining active without causing exhaustion. Supplemental oxygen might also be of use at high elevations. Repeated exposure to cold water makes people more susceptible to frostbite. Additional measures to prevent frostbite include:
Avoiding temperatures below −23 °C (−9 °F)
Avoiding moisture, including in the form of sweat and/or skin emollients
Avoiding alcohol and drugs that impair circulation or natural protective responses
Layering clothing
Using chemical or electric warming devices
Recognizing early signs of frostnip and frostbite
Treatment
Individuals with frostbite or potential frostbite should go to a protected environment and get warm fluids. If there is no risk of re-freezing, the extremity can be exposed and warmed in the underarm of a companion or the groin. If the area is allowed to refreeze, there can be worse tissue damage. If the area cannot be reliably kept warm, the person should be brought to a medical facility without rewarming the area. Rubbing the affected area can also increase tissue damage. Aspirin and ibuprofen can be given in the field to prevent clotting and inflammation. Ibuprofen is often preferred to aspirin because aspirin may block a subset of prostaglandins that are important in injury repair.
The first priority in people with frostbite should be to assess for hypothermia and other life-threatening complications of cold exposure. Before treating frostbite, the core temperature should be raised above 35 °C. Oral or intravenous (IV) fluids should be given.
Other considerations for standard hospital management include:
wound care: blisters can be drained by needle aspiration, unless they are bloody (hemorrhagic). Aloe vera gel can be applied before breathable, protective dressings or bandages are put on.
antibiotics: if there is trauma, skin infection (cellulitis) or severe injury
tetanus toxoid: should be administered according to local guidelines. Uncomplicated frostbite wounds are not known to encourage tetanus.
pain control: NSAIDs or opioids are recommended during the painful rewarming process.
Rewarming
If the area is still partially or fully frozen, it should be rewarmed in the hospital with a warm bath with povidone iodine or chlorhexidine antiseptic. Active rewarming seeks to warm the injured tissue as quickly as possible without burning. The faster tissue is thawed, the less tissue damage occurs. According to Handford and colleagues, "The Wilderness Medical Society and State of Alaska Cold Injury Guidelines recommend a temperature of 37–39 °C, which decreases the pain experienced by the patient whilst only slightly slowing rewarming time." Warming takes 15 minutes to 1 hour. The faucet should be left running so the water can circulate. Rewarming can be very painful, so pain management is important.
Medications
People with potential for large amputations and who present within 24 hours of injury can be given TPA with heparin. These medications should be withheld if there are any contraindications. Bone scans or CT angiography can be done to assess damage.
Blood vessel dilating medications such as iloprost may prevent blood vessel blockage. This treatment might be appropriate in grades 2–4 frostbite, when people get treatment within 48 hours. In addition to vasodilators, sympatholytic drugs can be used to counteract the detrimental peripheral vasoconstriction that occurs during frostbite.
A systematic review and meta-analysis found that iloprost alone, or iloprost plus recombinant tissue plasminogen activator (rtPA), may decrease the amputation rate in severe frostbite compared to buflomedil alone, with no major adverse events from iloprost or iloprost plus rtPA reported in the included studies.
Surgery
Various types of surgery might be indicated in frostbite injury, depending on the type and extent of damage. Debridement or amputation of necrotic tissue is usually delayed unless there is gangrene or systemic infection (sepsis). This has led to the adage "Frozen in January, amputate in July". If symptoms of compartment syndrome develop, fasciotomy can be done to attempt to preserve blood flow.
Prognosis
Tissue loss and autoamputation are potential consequences of frostbite. Permanent nerve damage including loss of feeling can occur. It can take several weeks to know what parts of the tissue will survive. Time of exposure to cold is more predictive of lasting injury than the temperature the individual was exposed to. The classification system of grades, based on the tissue response to initial rewarming and other factors, is designed to predict the degree of long-term recovery.
Grades
Grade 1: if there is no initial lesion on the area, no amputation or lasting effects are expected
Grade 2: if there is a lesion on the distal body part, tissue and fingernails can be destroyed
Grade 3: if there is a lesion on the intermediate or near body part, auto-amputation and loss of function can occur
Grade 4: if there is a lesion very near the body (such as the carpals of the hand), the limb can be lost. Sepsis and/or other systemic problems are expected.
A number of long-term sequelae can occur after frostbite. These include transient or permanent changes in sensation, paresthesia, increased sweating, cancers, and bone destruction/arthritis in the area affected.
Epidemiology
There is a lack of comprehensive statistics about the epidemiology of frostbite. In the United States, frostbite is more common in northern states. In Finland, annual incidence was 2.5 per 100,000 among civilians, compared with 3.2 per 100,000 in Montreal. Research suggests that men aged 30–49 are at highest risk, possibly due to occupational or recreational exposures to cold.
History
Frostbite has been described in military history for millennia. The Greeks encountered and discussed the problem of frostbite as early as 400 BC. Researchers have found evidence of frostbite in humans dating back 5,000 years, in an Andean mummy. The retreat of Napoleon's army in the early 1800s was the first documented instance of mass cold injury. According to Zafren, nearly 1 million combatants fell victim to frostbite in the First and Second World Wars and the Korean War.
Society and culture
Several notable cases of frostbite include:
Captain Lawrence Oates, an English army captain and Antarctic explorer who in 1912 died of complications of frostbite
Harold Bride, the junior wireless operator of , who suffered severe frostbite on his feet as he and other survivors stood for over an hour on the back of a capsized lifeboat knee-deep in freezing water—Bride had to be carried off from the rescue vessel after it arrived in New York
Noted American rock climber Hugh Herr, who in 1982 lost both legs below the knee to frostbite after being stranded on Mount Washington (New Hampshire) in a blizzard
Beck Weathers, a survivor of the 1996 Mount Everest disaster who lost his nose and hands to frostbite
Scottish mountaineer Jamie Andrew, who in 1999 had all four limbs amputated due to sepsis from frostbite sustained after becoming trapped for four nights whilst climbing Les Droites in the Mont Blanc massif
Research directions
Evidence is insufficient to determine whether or not hyperbaric oxygen therapy as an adjunctive treatment can assist in tissue salvage. Cases have been reported, but no randomized control trial has been performed on humans.
Medical sympathectomy using intravenous reserpine has also been attempted with limited success. Studies have suggested that administration of tissue plasminogen activator (tPA) either intravenously or intra-arterially may decrease the likelihood of eventual need for amputation.
Fire ant
Fire ants are several species of ants in the genus Solenopsis, which includes over 200 species. Solenopsis are stinging ants, and most of their common names reflect this, for example, ginger ants and tropical fire ants. Many of the names shared by this genus are often used interchangeably to refer to other species of ant, such as the term red ant, mostly because of their similar coloration despite not being in the genus Solenopsis. Both Myrmica rubra and Pogonomyrmex barbatus are common examples of non-Solenopsis ants being termed red ants.
None of these common names apply to all species of Solenopsis nor exclusively to species of Solenopsis; for example, several species of weaver ants of the genus Oecophylla in Southeast Asia are colloquially called "fire ants" because of their similar coloration and painful bites, but the two genera are not closely related. Wasmannia auropunctata is another unrelated ant more commonly called the "little fire ant" due to its potent sting.
Appearance
The bodies of mature fire ants, like the bodies of all typical mature insects, are divided into three sections: the head, the thorax, and the abdomen, with three pairs of legs and a pair of antennae. The fire ant species invasive in the United States can be distinguished from other locally present ants by their copper-brown head and thorax with a darker abdomen. The worker ants are blackish to reddish and their size varies from . In an established nest these different sizes of ants are all present at the same time.
Solenopsis spp. ants can be identified by three body features—a pedicel with two nodes, an unarmed propodeum, and antennae with 10 segments plus a two-segmented club. Many ants bite, and formicine ants can cause irritation by spraying formic acid; myrmecine ants like fire ants have a dedicated venom-injecting sting, which injects an alkaloid venom, as well as mandibles for biting.
Behavior
A typical fire ant colony produces large mounds in open areas, and feeds mostly on young plants, insects and seeds. Fire ants often attack small animals such as small lizards and can kill them. Unlike many other ants, which bite and then spray acid on the wound, fire ants bite only to get a grip and then sting (from the abdomen) and inject a toxic alkaloid venom called solenopsin, a compound from the class of piperidines. For humans, this is a painful sting, a sensation similar to what one feels when burned by fire (hence the name), and the after-effects of the sting can be deadly to sensitive people. Fire ants are more aggressive than most native species, so have pushed many species away from their local habitat. One species that Solenopsis ants parasitically take advantage of is the nonsocial orchid bee Euglossa imperialis: the ants enter its cells from below the nest and rob their contents.
These ants are renowned for their ability to survive extreme conditions. They do not hibernate, but can survive cold conditions, although this is costly to fire ant populations as observed during several winters in Tennessee, where 80 to 90% of colonies died due to several consecutive days of extremely low temperatures.
Fire ants have been known to form mutualistic relationships with several species of Lycaenidae and Riodinidae butterflies. In Lycaena rubidus, the larvae secrete a fluid that is high in sugar content. Fire ants bring the larvae back to the nest, and protect them through the pupal stage in exchange for feeding on the fluid. In Eurybia elvina, fire ants were observed to frequently construct soil shelters over later instars of larvae on inflorescences on which the larvae are found.
Fire ants nest in the soil, often near moist areas, such as river banks, pond shores, watered lawns, and highway shoulders. Usually, the nest will not be visible, as it will be built under objects such as timber, logs, rocks, or bricks. If no cover for nesting is available, dome-shaped mounds are constructed, but these are usually only found in open spaces, such as fields, parks, and lawns. These mounds can reach heights of , but can be even higher on heavier soils, standing at in height and in diameter. Colonies are founded by small groups of queens or single queens. Even if only one queen survives, within a month or so, the colony can expand to thousands of individuals. Some colonies may be polygynous (having multiple queens per nest).
Fire ants are resilient and can survive floods. During Hurricane Harvey in Texas in 2017, clumps of fire ants, known as rafts, were seen clumped together on the surface of the water. Each clump had as many as 100,000 individual ants, which formed a temporary structure until finding a new permanent home. Ants clumped in this way will recognize different fluid flow conditions and adapt their behavior accordingly to preserve the raft's stability.
Fire ants dig tunnels efficiently using about 30% of the population of the colony, thereby avoiding congestion in tunnels.
Queens, males and workers
Queen
Fire ant queens, the reproductive females in a colony, are also generally its largest members. Their primary function is reproduction. Typically, a fire ant queen will seek to establish a new colony following a nuptial flight, using her venom to paralyze offending competitors in the absence of workers for defense. Fire ant queens may live up to seven years and can produce up to 1,600 eggs per day, and colonies will have as many as 250,000 workers. The estimated potential life span is around 5 years and 10 months to 6 years and 9 months. Young, virgin fire ant queens have wings (as do male fire ants), but they often remove them after mating. Occasionally, a queen will keep her wings after mating and through her first year.
Males (drones)
Male fire ants mate with queens during a nuptial flight. After a male has successfully inseminated a queen, he will not get accepted back to the mother colony, and will eventually die outside the nest.
Workers
The other roles in an ant colony are usually undertaken by workers. Fire ant workers are divided into different size classes, namely minima, minor, media, and major workers. The major workers are known for their larger size and more powerful mandibles, typically used for macerating and storing food items (i.e. as repletes), while smaller workers take care of regular tasks (the main tasks in a colony are caring for the eggs/larvae/pupae, cleaning the nest, and foraging for food). However, Solenopsis daguerrei colonies contain no workers, as they are considered social parasites.
Invasive species
Although most fire ant species do not bother people and are not invasive, Solenopsis invicta, known in the United States as the red imported fire ant (or RIFA), is an invasive pest in many areas of the world, including the United States, Australia, China and Taiwan. The RIFA was believed to have been accidentally introduced to these countries via shipping crates, particularly in the case of Australia.
In Australia, RIFA ants were first identified in the Port of Brisbane in 2001, although a strategic review of the Australian RIFA eradication program published in 2021 suggested that RIFA ants may have been present but undetected in Australia as early as 1992. As of November 2023, the invasion of fire ants is restricted to an area of 7000 km2 in South East Queensland that includes Brisbane, with the colonised area bordering the state of New South Wales (NSW), and incursions reported in northern NSW on a regular basis. Outside of this region, as of 2023, there have been seven other incursions that have had to be eradicated, all linked to ports and airports, including in Gladstone, the Port of Botany near Sydney (in 2014), and the Port of Fremantle, Western Australia. Elsewhere, fire ants have been frequently intercepted on incoming cargo in ports and airports across Australia. Of particular concern, Australian researchers predict that the entire country could provide suitable habitat for RIFA colonization, with the exception of highland Tasmania and the Snowy Mountains. Between 2001 and 2022, the commonwealth and state governments of Australia spent a combined AU$644m in their attempts to eradicate RIFA ants. In 2015, the Australian National Red Imported Fire Ant Eradication Program was set up and received AU$411m of funding. For 2023–2027, funding of AU$593m has been agreed. Despite the funded plan and a degree of success in the eradication of RIFA not seen elsewhere in the world, some Australian experts warn that the government at the national and state level may be moving too slowly given the size of the threat.
RIFA were also believed to be present in the Philippines, but those reports most likely stem from misidentification of Solenopsis geminata ants.
In the US, the FDA estimates that more than US$5 billion is spent annually on medical treatment, damage, and control in RIFA-infested areas. Furthermore, the ants cause approximately $750 million in damage annually to agricultural assets, including veterinarian bills and livestock loss, as well as crop loss. Over 40 million people live in RIFA-infested areas in the southeastern United States. It is estimated that 30–60% of the people living in fire ant-infested areas of the US are stung each year. RIFA are currently found mainly in warmer US states in the south-east of the country including Florida, Georgia, South Carolina, Louisiana, Mississippi and Alabama, but extend to include parts of North Carolina, Virginia, Tennessee, Arkansas, Texas, Oklahoma, New Mexico, and California.
Since September 2004, Taiwan has been seriously affected by the red fire ant. The US, Taiwan and Australia all have ongoing national programs to control or eradicate the species, but with the exception of those in Australia, none have been especially effective. According to a study published in 2009, it took only seventy years for the lizards in parts of the United States to adapt to the ant's presence; they now have longer legs and new behaviors that aid them in escaping from the danger.
Solenopsis invicta is the most famous species in this genus, especially in the US; however, several other species are similarly dangerous and invasive, such as Solenopsis geminata, which has invaded most tropical countries, wreaking havoc on medical systems, especially in unprepared countries and islands.
Sting symptoms and treatment
The venom of fire ants is mainly (>95%) composed of oily alkaloids structurally derived from piperidine (also known as solenopsins) mixed with a small amount of toxic proteins. Fire ant stings are painful, characterised by a local burning sensation, followed by urticaria. The sting site typically swells into a bump within hours, which can cause further pain and irritation, especially following several stings at the same place. The bump may develop into a white pustule within 24–36 hours which can become infected if scratched, but will spontaneously flatten within a few days if left alone. The pustules are obtrusive and uncomfortable while active and, if they become infected, may cause scarring. Some people may become allergic to the venom, and if untreated, may become increasingly sensitive to the point of experiencing anaphylaxis following fire ant stings, which requires emergency treatment; emergency management of anaphylaxis with adrenaline is recommended. It has been demonstrated that, whilst pustule formation results from the injected venom alkaloids, allergy to fire ant stings is caused solely by venom allergenic proteins.
First aid for fire ant stings includes external treatments and oral medicines. There are also many home remedies of varying efficacy, including immediate application of a solution of half bleach and half water, or aloe vera gel – the latter of which is also often included in over-the-counter creams that also include medically tested and verified treatments. External, topical treatments include the anesthetic benzocaine, the antihistamine diphenhydramine, and the corticosteroid hydrocortisone. Antihistamines or topical corticosteroids may help reduce the itching and will generally benefit local sting reactions. Oral medicine include antihistamines. Severe allergic reactions to fire ant stings, including severe chest pain, nausea, severe sweating, loss of breath, serious swelling, and slurred speech can be fatal if not treated.
Predators
Phorid flies, or Phoridae, are a large family of small, hump-backed flies somewhat smaller than vinegar flies; two species in this family (Pseudacteon tricuspis and Pseudacteon curvatus) are parasitoids of the red imported fire ant in its native range in South America. Some 110 species of the genus Pseudacteon, or ant-decapitating flies, have been described. Members of Pseudacteon reproduce by laying eggs in the thorax of the ant. The first-instar larva migrates to the head, then develops by feeding on the hemolymph, muscle tissue, and nervous tissue. After about two weeks, the larva causes the ant's head to fall off by releasing an enzyme that dissolves the membrane attaching the head to the body. The fly pupates in the detached head capsule, emerging two weeks later.
Pseudacteon flies appear to be important ecological constraints on Solenopsis species and they have been introduced throughout the southern United States, starting with Travis, Brazos, and Dallas counties in Texas, as well as south central Alabama, where the ants first entered North America.
The Venus flytrap, a carnivorous plant, is native only to North and South Carolina in the United States. About 33% of the prey of the Venus flytrap are ants of various species. They lure their prey with a sweet sap. Once the prey has entered the trap and within about three seconds of touching two or three "trigger hairs" on the surface of the trap, the leaf closes around the prey and digests it. The majority of ants that are captured include non-native RIFAs, and three other species of ants. Other carnivorous plants, such as sundews (Drosera) and various kinds of pitcher plants also trap many ants.
Key natural enemies of fire ants also include other ant species which will attack prospective queens during the nest founding period, when there is an absence of workers to defend the emergent colony. Frequent competitors of fire ant founding queens include other Solenopsis thief ant species, and some invasive pest species, such as the tawny crazy ant, and the black crazy ant.
A number of entomopathogenic fungi are also natural enemies of fire ants, such as Beauveria bassiana and Metarhizium anisopliae. The latter is commercially available for the biological control (as an alternative to conventional pesticides) of various pest insects, and a new proposed technology has increased its shelf life and efficiency against fire ants.
Species
The genus Solenopsis contains over 200 species. Not all species in the genus are known as fire ants; most are small, slow-moving ants that are unable to sting, known as thief ants. "True" fire ants are a group of about 20 larger Solenopsis species that sting aggressively in swarms whenever disturbed. Some of the most studied species include:
Solenopsis invicta Buren, 1972
Solenopsis richteri Forel, 1909
Solenopsis saevissima (Smith, 1855)
Solenopsis silvestrii Emery, 1906
Solenopsis solenopsidis (Kusnezov, 1953)
Solenopsis xyloni McCook, 1879
Solenopsis geminata (Fabricius, 1804)
Cyanogen
Cyanogen is the chemical compound with the formula (CN)₂. The simplest stable carbon nitride, it is a colorless and highly toxic gas with a pungent odor. The molecule is a pseudohalogen. Cyanogen molecules consist of two CN groups ‒ analogous to diatomic halogen molecules, such as Cl₂, but far less oxidizing. The two cyano groups are bonded together at their carbon atoms: N≡C‒C≡N, though other isomers have been detected. The name is also used for the CN radical, and hence is used for compounds such as cyanogen bromide (NCBr) (but see also Cyano radical). When burned at increased pressure with oxygen, it is possible to get a blue-tinted flame, the temperature of which is about 4800°C (a higher temperature is possible with ozone). It is thus regarded as the gas with the second-highest flame temperature (after dicyanoacetylene).
Cyanogen is formally the anhydride of oxamide, although in practice oxamide is manufactured from cyanogen by hydrolysis.
Preparation
Cyanogen is typically generated from cyanide compounds. One laboratory method entails thermal decomposition of mercuric cyanide.
Or, one can combine solutions of copper(II) salts (such as copper(II) sulfate) with cyanides; an unstable copper(II) cyanide is formed which rapidly decomposes into copper(I) cyanide and cyanogen.
Industrially, it is created by the oxidation of hydrogen cyanide, usually using chlorine over an activated silicon dioxide catalyst or nitrogen dioxide over a copper salt. It is also formed when nitrogen and acetylene are reacted by an electrical spark or discharge.
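The chlorine route just described has a simple overall stoichiometry; the following equation is standard textbook chemistry, supplied here for illustration rather than stated in this article:

```latex
% Overall reaction for the industrial chlorine route (standard textbook
% stoichiometry; the catalyst is omitted from the balanced equation):
2\,\mathrm{HCN} + \mathrm{Cl_2} \longrightarrow \mathrm{(CN)_2} + 2\,\mathrm{HCl}
```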
Isomers
Cyanogen is NCCN. There are less stable isomers in which the order of the atoms differs. Isocyanogen (or cyanoisocyanogen) is NCNC, diisocyanogen is CNNC, and diazodicarbon is CCNN.
Paracyanogen
Paracyanogen is a polymer of cyanogen. It is best prepared by heating mercury(II) cyanide. It can also be prepared by heating silver cyanide, silver cyanate, cyanogen iodide or cyanuric iodide, or by the polymerization of cyanogen at in the presence of trace impurities. Paracyanogen can be converted back to cyanogen by heating to . Based on experimental evidence, the structure of this polymeric material is thought to be rather irregular, with most of the carbon atoms being of sp² type and localized domains of π conjugation.
History
Cyanogen was first synthesized in 1815 by Joseph Louis Gay-Lussac, who determined its empirical formula and named it. Gay-Lussac coined the word "cyanogène" from the Greek words κυανός (kyanos, blue) and γεννάω (gennao, to create), because cyanide was first isolated by Swedish chemist Carl Wilhelm Scheele from the pigment Prussian blue. It attained importance with the growth of the fertilizer industry in the late 19th century and remains an important intermediate in the production of many fertilizers. It is also used as a stabilizer in the production of nitrocellulose.
Cyanogen is commonly found in comets. In 1910 a spectroscopic analysis of Halley's Comet found cyanogen in the comet's tail, which led to public fear that the Earth would be poisoned as it passed through the tail. People in New York wore gas masks, and merchants sold quack "comet pills" claimed to neutralize poisoning. Because of the extremely diffuse nature of the tail, there was no effect when the planet passed through it.
Safety
Like other cyanides, cyanogen is very toxic, as it readily undergoes reduction to cyanide, which poisons the cytochrome c oxidase complex, thus interrupting the mitochondrial electron transfer chain. Cyanogen gas is an irritant to the eyes and respiratory system. Inhalation can lead to headache, dizziness, rapid pulse, nausea, vomiting, loss of consciousness, convulsions, and death, depending on exposure. Lethal dose through inhalation typically ranges from .
Cyanogen produces the second-hottest-known natural flame (after dicyanoacetylene aka carbon subnitride) with a temperature of over when it burns in oxygen.
In popular culture
In the Doctor Who serial "The Brain of Morbius" (the 5th serial of season 13), the Doctor synthesizes cyanogen using hydrogen cyanide as a starting material and vents it through a pipe to stop Solon from performing surgery on the brain of Morbius's body.
In Dragnet (1987) Friday (Dan Aykroyd) and Streebek (Tom Hanks) are tracking down the villain who stole "the pseudohalogenic compound cyanogen".
American lobster
The American lobster (Homarus americanus) is a species of lobster found on the Atlantic coast of North America, chiefly from Labrador to New Jersey. It is also known as Atlantic lobster, Canadian lobster, true lobster, northern lobster, Canadian Reds, or Maine lobster. It can reach a body length of , and a mass of over , making it not only the heaviest crustacean in the world, but also the heaviest of all living arthropod species. Its closest relative is the European lobster Homarus gammarus, which can be distinguished by its coloration and the lack of spines on the underside of the rostrum. American lobsters are usually bluish green to brown with red spines, but several color variations have been observed.
Distribution
Homarus americanus is distributed along the Atlantic coast of North America, from Labrador in the north to Cape Hatteras, North Carolina, in the south. South of New Jersey, the species is uncommon, and landings in Delaware, Maryland, Virginia and North Carolina usually make up less than 0.1% of all landings. A fossil claw assigned to Homarus americanus was found at Nantucket, dating from the Pleistocene. In 2013, an American lobster was caught at the Farallon Islands off the coast of California. It has been introduced to Norway and potentially Iceland.
Description
Homarus americanus commonly reaches long and weighs , but has been known to weigh as much as , making this the heaviest crustacean in the world. Together with Sagmariasus verreauxi, it is also the longest decapod crustacean in the world; an average adult is about long and weighs . The longest American lobsters have a body (excluding claws) long. According to Guinness World Records, the heaviest crustacean ever recorded was an American lobster caught off Nova Scotia, Canada, weighing .
The closest relative of H. americanus is the European lobster, Homarus gammarus. The two species are very similar, and can be crossed artificially, although hybrids are unlikely to occur in the wild since their ranges do not overlap. The two species can be distinguished by several characteristics:
The rostrum of H. americanus bears one or more spines on the underside, which are lacking in H. gammarus.
The spines on the claws of H. americanus are red or red-tipped, while those of H. gammarus are white or white-tipped.
The underside of the claw of H. americanus is orange or red, while that of H. gammarus is creamy white or very pale red.
Head
The antennae measure about long and split into Y-shaped structures with pointed tips. Each tip exhibits a dense zone of hair tufts staggered in a zigzag arrangement. These hairs are covered with multiple nerve cells that can detect odors. Larger, thicker hairs found along the edges control the flow of water, containing odor molecules, to the inner sensory hairs. The shorter antennules provide a further sense of smell. By having a pair of olfactory organs, a lobster can locate the direction a smell comes from, much the same way humans can hear the direction a sound comes from. In addition to sensing smells, the antennules can judge water speed to improve direction finding.
Lobsters have two urinary bladders, located on either side of the head. Lobsters use scents to communicate what and where they are, and those scents are in the urine. They project long plumes of urine in front of them, and do so when they detect a rival or a potential mate in the area.
Thorax
The first pair of pereiopods (legs) is armed with a large, asymmetric pair of claws. The larger one is the "crusher", and has rounded nodules used for crushing prey; the other is the "cutter" or "gripper", which has sharp inner edges and is used for holding or tearing the prey. Whether the crusher claw is on the left side or right side of its body determines whether a lobster is left or right "handed".
Coloration
The normal coloration of Homarus americanus is bluish green to brown with red spines, due to a mixture of yellow, blue, and red pigments that occur naturally in the shell. On rare occasions these colors are distorted by genetic mutations or conditions, creating a spectacle for those who catch them. In 2012 it was reported that such "rare" catches had increased for unclear reasons; suggested explanations range from social media making reporting and sharing more accessible to a drop in predator populations. The lobsters mentioned below thus usually receive media coverage due to their rarity and eye appeal.
Life cycle
Mating only takes place shortly after the female has molted and her exoskeleton is still soft. The female releases a pheromone which causes the males to become less aggressive and to begin courtship, which involves a courtship dance with claws closed. Eventually, the male inserts spermatophores (sperm packets) into the female's seminal receptacle using his first pleopods; the female may store the sperm for up to 15 months.
The female releases eggs through her oviducts, and they pass the seminal receptacle and are fertilized by the stored sperm. They are then attached to the female's pleopods (swimmerets) using an adhesive, where they are cared for until they are ready to hatch. The female cleans the eggs regularly and fans them with water to keep them oxygenated. The large telolecithal eggs may resemble the segments of a raspberry, and a female carrying eggs is said to be "in berry". Since this period lasts 10–11 months, berried females can be found at any time of year. In the waters off New England, the eggs are typically laid in July or August, and hatch the following May or June. The developing embryo passes through several molts within the egg, before hatching as a metanauplius larva. When the eggs hatch, the female releases them by waving her tail in the water, setting batches of larvae free.
The metanauplius of H. americanus is long, transparent, with large eyes and a long spine projecting from its head. It quickly molts, and the next three stages are similar, but larger. These molts take 10–20 days, during which the planktonic larvae are vulnerable to predation; only 1 in 1,000 is thought to survive to the juvenile stage. To reach the fourth stage – the post-larva – the larva undergoes metamorphosis, and subsequently shows a much greater resemblance to the adult lobster, is around long, and swims with its pleopods. At this stage, the lobster's claws are still relatively small so they rely primarily on tail-flip escapes if threatened.
After the next molt, the lobster sinks to the ocean floor and adopts a benthic lifestyle. It molts more and more infrequently, from an initial rate of ten times per year to once every few years. After one year it is around long, and after six years it may weigh . By the time it reaches the minimum landing size, an individual may have molted 25–27 times, and thereafter each molt may signal a 40%–50% increase in weight, and a 14% increase in carapace length. If threatened, adult lobsters will generally choose to fight unless they have lost their claws.
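The compounding implied by these per-molt figures can be sketched numerically. The starting values and the exact 45% and 14% rates below are illustrative assumptions within the quoted ranges, not measurements:

```python
# Sketch of the per-molt growth quoted above: past the minimum landing
# size, each molt may add roughly 40-50% in weight and about 14% in
# carapace length. Starting size and exact rates are assumptions.

def project_growth(weight_g, carapace_mm, molts,
                   weight_gain=0.45, length_gain=0.14):
    """Compound the quoted per-molt gains over a number of molts."""
    for _ in range(molts):
        weight_g *= 1 + weight_gain
        carapace_mm *= 1 + length_gain
    return weight_g, carapace_mm

# Hypothetical lobster at landing size, projected over three molts.
w, c = project_growth(454.0, 83.0, 3)
```

At a 45% gain per molt, three molts compound to roughly a threefold increase in weight (1.45³ ≈ 3.05), which illustrates why growth to very large sizes takes many molts spread over years.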
Ecology
The American lobster thrives in cold, shallow waters where there are many rocks and other places to hide from predators. It typically lives at a depth of , but can be found up to below the surface.
Diet
The natural diet of H. americanus is relatively consistent across different habitats. It is dominated by mollusks (especially mussels, clams and snails), echinoderms and polychaetes, although a wide range of other prey items may be eaten, including other crustaceans (such as crabs), brittle stars, cnidarians and small fish. It will also feed on dead animals, as well as algae and eelgrass. Since lobsters sometimes eat their own molted shell, they were thought to be cannibalistic, but this has never been recorded in the wild. Lobsters in Maine have been shown to gain 35–55% of their calories from herring, which is used as bait for lobster traps. Only 6% of lobsters entering lobster traps to feed are caught.
Diseases
Bacterial
Gaffkaemia or red-tail is an extremely virulent infectious disease of lobsters caused by the bacterium Aerococcus viridans. It only requires a few bacterial cells to cause death of otherwise healthy lobsters. The "red tail" common name refers to a dark orange discoloration of the ventral abdomen of affected lobsters. This is, in fact, the hemolymph or blood seen through the thin ventral arthrodial membranes. The red discoloration comes from astaxanthin, a carotenoid pigment exported to the blood during times of stress. The same sign is also seen in other diseases of lobsters and appears to be a nonspecific stress response, possibly relating to the antioxidant and immunostimulatory properties of the astaxanthin molecule.
Epizootic shell disease is a bacterial infection which causes black lesions on the lobsters' dorsal carapaces, reducing their saleability and sometimes killing the lobsters.
Limp lobster disease caused by systemic infection by the bacterium Vibrio fluvialis (or similar species) causes lobsters to become lethargic and die.
Parasitic
Paramoebiasis is an infectious disease of lobsters caused by infection with the sarcomastigophoran (amoeba) Neoparamoeba pemaquidensis. This organism also causes amoebic gill disease in farmed Atlantic salmon, Salmo salar. Infection occurs throughout the tissues, causing granuloma-like lesions, especially within the ventral nerve cord, the interstices of the hepatopancreas and the antennal gland. Paramoebiasis is strongly suspected to play a prominent role in the rapid die-off of American lobsters in Long Island Sound that occurred in the summer of 1999.
Environmental
Excretory calcinosis in American lobsters in Long Island Sound was described in 2002. The disease causes mineralized calculi to form in the antennal glands and gills. These cause a loss of surface area around the gills, and the lobster eventually asphyxiates. Several reasons have been proposed for the cause of a recent outbreak of the disease. The most generally attributed factor is an increased duration of warmer temperatures in the bottom of the Long Island Sound.
Plastic pollution is harmful for American lobsters. Consumption of microplastic particles may be deadly to early-stage larvae. For later stage larvae, oxygen consumption rate decreases with high level of microplastic fibers.
Taxonomy
The American lobster was first described by Thomas Say in 1817, with a type locality of "Long-branch, part of the coast of New Jersey". The name Say chose – "Astacus marinus" – was invalid as a junior homonym of Astacus marinus Fabricius, 1775, which is in turn a junior synonym of Homarus gammarus. The American lobster was given its current scientific name of Homarus americanus by Henri Milne-Edwards in his 1837 work Histoire naturelle des crustacés ("Natural History of the Crustacea"). The common name preferred by the Food and Agriculture Organization is "American lobster", but the species is also known locally as the "northern lobster", "Maine lobster" or simply "lobster".
As food
American lobsters are a popular food. They are commonly boiled or steamed. Hard-shells (lobsters that are several months past their last molt) can survive out of water for up to four or five days if kept refrigerated. Soft-shells (lobsters that have only recently molted) do not survive more than a few hours out of water. Lobsters are usually cooked alive, which may be illegal in certain areas and which some people consider inhumane.
Lobster 'tail' (actually the abdomen) is sometimes served with beef as surf and turf. Lobsters have a greenish or brownish organ called the tomalley, which, like the liver and pancreas in a human, filters out toxins from the body. Some diners consider it a delicacy, but others avoid it because they consider it a toxin source; dislike eating innards; or are put off by its texture and appearance, that of a grainy greenish paste.
A set of nutcrackers and a long, thin tool for pulling meat from inaccessible areas are suggested as basics, although more experienced diners can eat the animal with their bare hands or a simple tool (a fork, knife or rock). Eating a lobster can get messy, and most restaurants offer a lobster bib. Meat is generally contained in the larger claws and tails, and stays warm quite a while after being served. There is some meat in the legs and in the arms that connect the large claws to the body. There is also some small amount of meat just below the carapace around the thorax and in the smaller legs.
North American lobster industry
Most lobsters come from the northeastern coast of North America, with the Atlantic Provinces of Canada and the U.S. state of Maine being the largest producers. They are caught primarily using lobster traps, although lobsters are also harvested as bycatch by bottom trawlers, by fishermen using gillnets, and by scuba divers in some areas. Maine prohibits scuba divers from catching lobsters; violations are punishable by fines of up to $1000. Maine also prohibits the landing of lobsters caught by bottom trawlers and other "mobile gear". Massachusetts offers scuba divers lobster licenses for a fee, though they are only available to state residents. Rhode Island also requires divers to acquire a permit.
Lobster traps are rectangular cages made of vinyl-coated galvanized steel mesh or wood, with woven mesh entrances. These are baited and lowered to the sea floor. They allow a lobster to enter, but make it difficult for the larger specimens to turn around and exit. This allows the creatures to be captured alive. The traps, sometimes referred to as "pots", have a buoy floating on the surface, and lobstermen check their traps between one and seven days after setting them. The inefficiency of the trapping system has inadvertently prevented the lobster population from being overfished. Lobsters can easily escape the trap, and will defend the trap against other lobsters because it is a source of food. An estimated 10% of lobsters that encounter a trap enter, and of those that enter 6% will be caught.
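Taken together, the two estimates above imply a very low overall capture probability per trap encounter, which a quick calculation makes explicit:

```python
# Combining the estimates quoted above: roughly 10% of lobsters that
# encounter a trap enter it, and roughly 6% of those entrants are caught.
p_enter = 0.10
p_caught_given_enter = 0.06
p_caught = p_enter * p_caught_given_enter  # overall probability per encounter
assert abs(p_caught - 0.006) < 1e-9  # about 0.6% of encounters end in capture
```

That is, roughly 6 in 1,000 lobsters that approach a trap are ultimately landed, consistent with the observation that the inefficiency of trapping has helped protect the population from overfishing.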
United States
In the United States, the lobster industry is regulated. Every lobster fisher is required to use a lobster gauge to measure the distance from the lobster's eye socket to the end of its carapace: if the lobster is less than long, it is too young to be sold and must be released back to the sea. There is also a legal maximum size of in Maine, meant to ensure the survival of a healthy breeding stock of adult males, but in parts of some states, such as Massachusetts, there is none. Also, traps must contain an escape hole or "vent", which allows juvenile lobsters and bycatch species to escape. The law in Maine and other states dictates that a second large escape hole or "ghost panel" must be installed. This hole is held shut by degradable clips made of ferrous metal. Should the trap be lost, it eventually opens, allowing the catch to escape.
To protect known breeding females, lobsters caught carrying eggs are to be notched on a tail flipper (second from the right, if the lobster is right-side up and the tail is fully extended). Following this, the female cannot be kept or sold, and is commonly referred to as a "punch-tail" or as "v-notched". This notch remains for two molts of the lobster exoskeleton, providing harvest protection and continued breeding availability for up to five years.
Canada
In Canada, the Department of Fisheries and Oceans is responsible for the governance of fisheries under the authority of the Fisheries Act. The governance structure also includes various other acts, regulations, orders and policies. American lobster is fished in Canada by lobster licence holders operating from ports in the provinces of Canada's east coast. Lobster is Canada's most valuable seafood export, worth over CAD$2 billion in 2016.
Management
The American lobster stock tends to be stable in colder northern waters but gradually declines in abundance toward the south. To keep populations sustainable, regulations and restrictions are accordingly stricter in the more southerly fisheries.
Genetics
Currently there is no published genome for the American lobster, although a transcriptome was published in 2016.
Harbour porpoise
The harbour porpoise (Phocoena phocoena) is one of eight extant species of porpoise. It is one of the smallest species of cetacean. As its name implies, it stays close to coastal areas or river estuaries, and as such, is the most familiar porpoise to whale watchers. This porpoise often ventures up rivers, and has been seen hundreds of kilometres from the sea. The harbour porpoise may be polytypic, with geographically distinct populations representing distinct races: P. p. phocoena in the North Atlantic and West Africa, P. p. relicta in the Black Sea and Sea of Azov, an unnamed population in the northwestern Pacific and P. p. vomerina in the northeastern Pacific.
Taxonomy
The English word porpoise comes from the French (Old French , 12th century), which is from Medieval Latin , a compound of porcus (pig) and piscis (fish). The old word is probably a loan-translation of a Germanic word; compare Danish marsvin and Middle Dutch mereswijn (sea swine). Classical Latin had a similar name, porculus marinus. The species' taxonomic name, Phocoena phocoena, is the Latinized form of the Greek φώκαινα, phōkaina, "big seal", as described by Aristotle; this in turn from φώκη, phōkē, "seal".
The species is sometimes known as the common porpoise in texts originating in the United Kingdom. In parts of Atlantic Canada it is known colloquially as the puffing pig, and in Norway as nise, derived from an Old Norse word for sneeze; both names refer to the sound made when porpoises surface to breathe.
Description
The harbour porpoise is a little smaller than the other porpoises, at about long at birth, weighing 6.4–10 kg. Adults of both sexes grow to . The females are heavier, with a maximum weight of around compared with the males' . The body is robust, and the animal is at its maximum girth just in front of its triangular dorsal fin. The beak is poorly demarcated. The flippers, dorsal fin, tail fin and back are a dark grey. The sides are a slightly speckled, lighter grey. The underside is much whiter, though there are usually grey stripes running along the throat from the underside of the body.
Many anomalously white coloured individuals have been confirmed, mostly in the North Atlantic, but also notably around Turkish and British coasts, and in the Wadden Sea, Bay of Fundy and around the coast of Cornwall.
Although conjoined twins are rarely seen in wild mammals, the first known case of a two-headed harbour porpoise was documented in May 2017 when Dutch fishermen in the North Sea caught them by chance. A study published by the online journal of the Natural History Museum Rotterdam points out that conjoined twins in whales and dolphins are extremely rare.
The vocalizations of the harbour porpoise consist of short clicks lasting 0.5 to 5 milliseconds, produced in bursts of up to two seconds. Each click has a frequency between 1000 and 2200 hertz. Aside from communication, the clicks are used for echolocation.
Distribution
The harbour porpoise is widespread in cooler coastal waters of the North Atlantic, North Pacific and the Black Sea. In the Atlantic, harbour porpoises may be present in a curved band of water running from the coast of West Africa to the coasts of Portugal, Spain, France, the United Kingdom, Ireland, Scandinavia, Iceland, Greenland, Nova Scotia and Newfoundland and the eastern seaboard of the United States. The population in the Baltic Sea is limited in winter due to sea freezing, and is most common in the southwest parts of the sea. Another band in the Pacific Ocean runs from the Sea of Japan and Vladivostok through the Bering Strait to Alaska, British Columbia and California.
The populations in these regions are not continuous and are classified as separate subspecies with P. p. phocoena in the North Atlantic and West Africa, P. p. relicta in the Black Sea and Sea of Azov, an unnamed population in the northwest Pacific and P. p. vomerina in the northeast Pacific.
Concerning the North Atlantic, an international workshop co-organised by the North Atlantic Marine Mammal Commission and the Norwegian Institute of Marine Research reviewed the status of the species in 2018. It concluded that the harbour porpoise population structure is more complex than previously thought, with at least three genetically distinct subspecies in the North Atlantic. Given the structure of the harbour porpoise population, the workshop delineated 18 assessment areas for the North Atlantic.
Population status
The harbour porpoise has a global population of at least 700,000. In 2016, a comprehensive survey of the Atlantic region in Europe, from Gibraltar to Vestfjorden in Norway, found that the population was about 467,000 harbour porpoises, making it the most abundant cetacean in the region, together with the common dolphin. Based on surveys in 1994, 2005 and 2016, the harbour porpoise population in this region is stable. The highest densities are in the southwestern North Sea and the waters off mainland Denmark; the latter region alone is home to about 107,000-300,000 harbour porpoises. The entire North Sea population is about 335,000. In the Western Atlantic it is estimated that there are about 33,000 harbour porpoises along the mid-southwestern coast of Greenland (where increasing temperatures have aided them), 75,000 between the Gulf of Maine and Gulf of St. Lawrence, and 27,000 in the Gulf of St. Lawrence. The Pacific population off mainland United States is about 73,000 and off Alaska 89,000. After sharp declines in the 20th century, populations have rebounded in the inland waters of Washington state. In contrast, some subpopulations are seriously threatened. For example, there are fewer than 12,000 in the Black Sea, and only about 500 remaining in the Baltic Sea proper, representing a sharp decrease since the mid-1900s.
Natural history
Ecology
Harbour porpoises prefer temperate and subarctic waters. They inhabit fjords, bays, estuaries and harbours, hence their name. They feed mostly on small pelagic schooling fish, particularly herring, pollack, hake, sardine, cod, capelin, and sprat. They will, however, eat squid and crustaceans in certain places. This species tends to feed close to the sea bottom, at least in waters less than deep. However, when hunting sprat, porpoises may stay closer to the surface. When in deeper waters, porpoises may forage for mid-water fish, such as pearlsides. A study published in 2016 showed that porpoises off the coast of Denmark were hunting 200 fish per hour during the day and up to 550 per hour at night, catching 90% of the fish they targeted. Almost all the fish they ate were very small, between long.
A 2024 study showed that prey availability is an important driver of seasonal and diel dynamics of harbour porpoise acoustic activity in the Black Sea. In the southeastern region, porpoise activity was primarily nocturnal, with a peak from January to May, aligned with anchovy migration. On the northwestern shelf, porpoises were more active during daylight from April to October, reflecting the migration patterns of sprat.
Harbour porpoises tend to be solitary foragers, but they do sometimes hunt in packs and herd fish together. Young porpoises need to consume about 7% to 8% of their body weight each day to survive, which is approximately 15 pounds or 7 kilograms of fish. Significant predators of harbour porpoises include white sharks and killer whales (orcas). Researchers at the University of Aberdeen in Scotland have also discovered that the local bottlenose dolphins attack and kill harbour porpoises without eating them due to competition for a decreasing food supply. An alternative explanation is that the adult dolphins exhibit infanticidal behaviour and mistake the porpoises for juvenile dolphins which they are believed to kill. Grey seals are also known to attack harbour porpoises by biting off chunks of fat as a high energy source.
Behaviour, reproduction and life-span
Some studies suggest porpoises are relatively sedentary and usually do not leave a certain area for long. Nevertheless, they have been recorded moving from onshore to offshore waters along the coast. Dives of by harbour porpoises have been recorded. Dives can last five minutes but typically last one minute.
The social life of harbour porpoises is not well understood. They are generally seen as a solitary species. Most of the time, porpoises are either alone or in groups of no more than five animals. Porpoises mate promiscuously. Males produce large amounts of sperm, perhaps for sperm competition. Females become sexually mature by their third or fourth year and can calve each year for several consecutive years, being pregnant and lactating at the same time. The gestation of the porpoise is typically 10–11 months. Most births occur in late spring and summer. Calves are weaned after 8–12 months. Their average life-span in the wild is 8–13 years, although exceptionally individuals have reached up to 20, and in captivity up to 28 years. In a study of 239 dead harbour porpoises in the Gulf of Maine–Bay of Fundy, the vast majority were less than 12 years old and the oldest was 17.
Threats
Hunting
Harbour porpoises were traditionally hunted for food, as well as for their blubber, which was used for lighting fuel. Among others, hunting occurred in the Black Sea, off Normandy, in the Bay of Biscay, off Flanders, in the Little Belt strait, off Iceland, western Norway, in Puget Sound, Bay of Fundy and Gulf of Saint Lawrence. The drive hunt in the Little Belt strait is the best documented example. Thousands of porpoises were caught there until the end of the 19th century (it was banned in 1899), and again in smaller scale during the shortages that occurred in World War I and World War II. A similar, short-lived re-emergence of hunting during the world wars happened in Poland and the Baltic countries. Currently, the species is only hunted as part of the traditional Inuit hunt in the Arctic, notably in Greenland. In prehistoric times, harbour porpoises were also hunted in many areas, for example by the Alby People of the east coast of Öland, Sweden.
Interactions with fisheries
The main threat to porpoises is static fishing techniques such as gill and tangle nets. Bycatch in bottom-set gill nets is considered the main anthropogenic mortality factor for harbour porpoises worldwide. Several thousand die each year in incidental bycatch, which has been reported from the Black Sea, the Baltic Sea, the North Sea, off California, and along the east coast of the United States and Canada. Bottom-set gill nets are anchored to the sea floor and are up to in length. It is unknown why porpoises become entangled in gill nets, since several studies indicate they are able to detect these nets using their echolocation. Porpoise-scaring devices, so-called pingers, have been developed to keep porpoises out of nets and numerous studies have demonstrated they are very effective at reducing entanglement. However, concern has been raised over the noise pollution created by the pingers and whether their efficiency will diminish over time due to porpoises habituating to the sounds.
Mortality resulting from trawling bycatch seems to be less of an issue, probably because porpoises are not inclined to feed inside trawls, as dolphins are known to do.
Overfishing
Overfishing may reduce preferred prey availability for porpoises. Overfishing resulting in the collapse of herring in the North Sea caused porpoises to hunt for other prey species. Reduction of prey may result from climate change, overfishing, or both.
Noise pollution
Noise from ship traffic and oil platforms is thought to affect the distribution of toothed whales, like the harbour porpoise, that use echolocation for communication and prey detection. Noise from shipping traffic, particularly busy sea lanes, appears to instigate evasive behavior, with predominantly lateral movements during the day and deeper dives during the night. The construction of thousands of offshore wind turbines, planned in different areas of the North Sea, is known to cause displacement of porpoises from the construction site, particularly if steel monopile foundations are installed by percussive piling, where reactions can occur at distances of more than . Noise levels from operating wind turbines are low and unlikely to affect porpoises, even at close range.
Pollution
Marine top predators like porpoises and seals accumulate pollutants such as heavy metals, PCBs and pesticides in their fat tissue. Porpoises have a coastal distribution that potentially brings them close to sources of pollution. Porpoises may not experience any toxic effects until they draw on their fat reserves, such as in periods of food shortage, migration or reproduction.
Climate change
An increase in sea water temperature is likely to affect the distribution of porpoises and their prey, though such effects have not yet been demonstrated. Reduced stocks of sand eel along the east coast of Scotland, a pattern linked to climate change, appear to be the main reason for the increased malnutrition seen in porpoises in the area.
Conservation status
Overall, the harbour porpoise is not considered threatened and the total population is in the hundreds of thousands.
The harbour porpoise populations of the North Sea, Baltic Sea, western North Atlantic, Black Sea and North West Africa are protected under Appendix II of the Convention on the Conservation of Migratory Species of Wild Animals (CMS). In 2013, the two Baltic Sea subpopulations were listed as vulnerable and critically endangered respectively by HELCOM. Although the species overall is considered to be of Least Concern by the IUCN, they consider the Baltic Sea and Western African populations critically endangered, and the subspecies P. p. relicta of the Black Sea endangered.
In addition, the harbour porpoise is covered by the Agreement on the Conservation of Small Cetaceans of the Baltic, North East Atlantic, Irish and North Seas (ASCOBANS), the Agreement on the Conservation of Cetaceans in the Black Sea, Mediterranean Sea and Contiguous Atlantic Area (ACCOBAMS) and the Memorandum of Understanding Concerning the Conservation of the Manatee and Small Cetaceans of Western Africa and Macaronesia (Western African Aquatic Mammals MoU).
Mantle plume
A mantle plume is a proposed mechanism of convection within the Earth's mantle, hypothesized to explain anomalous volcanism. Because the plume head partially melts on reaching shallow depths, a plume is often invoked as the cause of volcanic hotspots, such as Hawaii or Iceland, and large igneous provinces such as the Deccan and Siberian Traps. Some such volcanic regions lie far from tectonic plate boundaries, while others represent unusually large-volume volcanism near plate boundaries.
Concepts
Mantle plumes were first proposed by J. Tuzo Wilson in 1963 and further developed by W. Jason Morgan in 1971 and 1972. A mantle plume is posited to exist where super-heated material forms (nucleates) at the core-mantle boundary and rises through the Earth's mantle. Rather than a continuous stream, plumes should be viewed as a series of hot bubbles of material. On reaching the brittle upper crust, they form diapirs. These diapirs are "hotspots" in the crust. In particular, the concept that mantle plumes are fixed relative to one another and anchored at the core-mantle boundary would provide a natural explanation for the time-progressive chains of older volcanoes seen extending out from some such hotspots, for example, the Hawaiian–Emperor seamount chain. However, paleomagnetic data show that mantle plumes can also be associated with Large Low Shear Velocity Provinces (LLSVPs) and do move relative to each other.
The current mantle plume theory is that material and energy from Earth's interior are exchanged with the surface crust in two distinct and largely independent convective flows:
as previously theorized and widely accepted, the predominant, steady state plate tectonic regime driven by upper mantle convection, mainly the sinking of cold plates of lithosphere back into the asthenosphere.
the punctuated, intermittently dominant mantle overturn regime driven by plume convection that carries heat upward from the core-mantle boundary in a narrow column. This second regime, while often discontinuous, is periodically significant in mountain building and continental breakup.
The plume hypothesis was simulated by laboratory experiments in small fluid-filled tanks in the early 1970s. Thermal or compositional fluid-dynamical plumes produced in that way were presented as models for the much larger postulated mantle plumes. Based on these experiments, mantle plumes are now postulated to comprise two parts: a long thin conduit connecting the top of the plume to its base, and a bulbous head that expands in size as the plume rises. The entire structure resembles a mushroom. The bulbous head of thermal plumes forms because hot material moves upward through the conduit faster than the plume itself rises through its surroundings. In the late 1980s and early 1990s, experiments with thermal models showed that as the bulbous head expands it may entrain some of the adjacent mantle into itself.
The size and occurrence of mushroom mantle plumes can be predicted by the transient instability theory of Tan and Thorpe. The theory predicts mushroom-shaped mantle plumes with heads of about 2000 km diameter that have a critical time (time from onset of heating of the lower mantle to formation of a plume) of about 830 million years for a core mantle heat flux of 20 mW/m2, while the cycle time (the time between plume formation events) is about 2000 million years. The number of mantle plumes is predicted to be about 17.
When a plume head encounters the base of the lithosphere, it is expected to flatten out against this barrier and to undergo widespread decompression melting to form large volumes of basalt magma. It may then erupt onto the surface. Numerical modelling predicts that melting and eruption will take place over several million years. These eruptions have been linked to flood basalts, although many of those erupt over much shorter time scales (less than 1 million years). Examples include the Deccan traps in India, the Siberian traps of Asia, the Karoo-Ferrar basalts/dolerites in South Africa and Antarctica, the Paraná and Etendeka traps in South America and Africa (formerly a single province separated by opening of the South Atlantic Ocean), and the Columbia River basalts of North America. Flood basalts in the oceans are known as oceanic plateaus, and include the Ontong Java plateau of the western Pacific Ocean and the Kerguelen Plateau of the Indian Ocean.
The narrow vertical conduit, postulated to connect the plume head to the core-mantle boundary, is viewed as providing a continuous supply of magma to a hotspot. As the overlying tectonic plate moves over this hotspot, the eruption of magma from the fixed plume onto the surface is expected to form a chain of volcanoes that parallels plate motion. The Hawaiian Islands chain in the Pacific Ocean is the archetypal example. It has recently been discovered that the volcanic locus of this chain has not been fixed over time, thus joining the many type examples that do not exhibit the key characteristic originally proposed.
The eruption of continental flood basalts is often associated with continental rifting and breakup. This has led to the hypothesis that mantle plumes contribute to continental rifting and the formation of ocean basins.
Chemistry, heat flow and melting
The chemical and isotopic composition of basalts found at hotspots differs subtly from mid-ocean-ridge basalts. These basalts, also called ocean island basalts (OIBs), are analysed in their radiogenic and stable isotope compositions. In radiogenic isotope systems the originally subducted material creates diverging trends, termed mantle components. Identified mantle components are DMM (depleted mid-ocean ridge basalt (MORB) mantle), HIMU (high U/Pb-ratio mantle), EM1 (enriched mantle 1), EM2 (enriched mantle 2) and FOZO (focus zone). This geochemical signature arises from the mixing of near-surface materials such as subducted slabs and continental sediments, in the mantle source. There are two competing interpretations for this. In the context of mantle plumes, the near-surface material is postulated to have been transported down to the core-mantle boundary by subducting slabs, and to have been transported back up to the surface by plumes. In the context of the Plate hypothesis, subducted material is mostly re-circulated in the shallow mantle and tapped from there by volcanoes.
Stable isotopes like Fe are used to track processes that the uprising material experiences during melting.
The processing of oceanic crust, lithosphere, and sediment through a subduction zone decouples the water-soluble trace elements (e.g., K, Rb, Th) from the immobile trace elements (e.g., Ti, Nb, Ta), concentrating the immobile elements in the oceanic slab (the water-soluble elements are added to the crust in island arc volcanoes). Seismic tomography shows that subducted oceanic slabs sink as far as the bottom of the mantle transition zone at 650 km depth. Subduction to greater depths is less certain, but there is evidence that they may sink to mid-lower-mantle depths at about 1,500 km depth.
The source of mantle plumes is postulated to be the core-mantle boundary at 3,000 km depth. Because there is little material transport across the core-mantle boundary, heat transfer must occur by conduction, with adiabatic gradients above and below this boundary. The core-mantle boundary is a strong thermal (temperature) discontinuity. The temperature of the core is approximately 1,000 degrees Celsius higher than that of the overlying mantle. Plumes are postulated to rise as the base of the mantle becomes hotter and more buoyant.
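The conductive transfer described above can be put in rough numbers with Fourier's law, q = k·ΔT/d. In the sketch below the conductivity and boundary-layer thickness are illustrative assumptions (not measured values); only the ~1,000-degree temperature jump is taken from the text:

```python
# Order-of-magnitude sketch of conductive heat flux across the core-mantle
# thermal boundary layer, via Fourier's law q = k * dT / d.
# k and d are assumed illustrative values; dT follows the ~1,000 C contrast
# stated above.
k = 5.0       # thermal conductivity of lower-mantle rock, W/(m K) (assumed)
dT = 1000.0   # temperature jump across the boundary, K
d = 200e3     # assumed thermal boundary-layer thickness, m
q = k * dT / d          # heat flux, W/m^2
q_mw = q * 1000.0       # convert to mW/m^2
assert 10.0 < q_mw < 100.0  # lands in the tens of mW/m^2
```

With these inputs the flux comes out at a few tens of mW/m2, the same order as the core-mantle heat flux figures quoted in the literature on plume formation.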
Plumes are postulated to rise through the mantle and begin to partially melt on reaching shallow depths in the asthenosphere by decompression melting. This would create large volumes of magma. This melt rises to the surface and erupts to form hotspots.
The lower mantle and the core
The most prominent thermal contrast known to exist in the deep (1000 km) mantle is at the core-mantle boundary at 2900 km. Mantle plumes were originally postulated to rise from this layer because the hotspots that are assumed to be their surface expression were thought to be fixed relative to one another. This required that plumes were sourced from beneath the shallow asthenosphere that is thought to be flowing rapidly in response to motion of the overlying tectonic plates. There is no other known major thermal boundary layer in the deep Earth, and so the core-mantle boundary was the only candidate.
The base of the mantle is known as the D″ layer, a seismological subdivision of the Earth. It appears to be compositionally distinct from the overlying mantle and may contain partial melt.
Two very broad, large low-shear-velocity provinces exist in the lower mantle under Africa and under the central Pacific. It is postulated that plumes rise from their surface or their edges. Their low seismic velocities were thought to suggest that they are relatively hot, although it has recently been shown that their low wave velocities are due to high density caused by chemical heterogeneity.
Evidence for the theory
Some common and basic lines of evidence cited in support of the theory are linear volcanic chains, noble gases, geophysical anomalies, and geochemistry.
Linear volcanic chains
The age-progressive distribution of the Hawaiian-Emperor seamount chain has been explained as a result of a fixed, deep-mantle plume rising into the upper mantle, partly melting, and causing a volcanic chain to form as the plate moves overhead relative to the fixed plume source. Other hotspots with time-progressive volcanic chains behind them include Réunion, the Chagos-Laccadive Ridge, the Louisville Ridge, the Ninety East Ridge and Kerguelen, Tristan, and Yellowstone.
While there is evidence that the chains listed above are time-progressive, it has been shown that they are not fixed relative to one another. The most remarkable example of this is the Emperor chain, the older part of the Hawaii system, which was formed by migration of the hotspot in addition to the plate motion. Another example is the Canary Islands in the northeast of Africa in the Atlantic Ocean.
Noble gas and other isotopes
Helium-3 is a primordial isotope that formed in the Big Bang. Very little is produced, and little has been added to the Earth by other processes since then. Helium-4 includes a primordial component, but it is also produced by the natural radioactive decay of elements such as uranium and thorium. Over time, helium in the upper atmosphere is lost into space. Thus, the Earth has become progressively depleted in helium, and 3He is not replaced as 4He is. As a result, the ratio 3He/4He in the Earth has decreased over time.
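The reasoning in the paragraph above can be sketched numerically: with the 3He inventory essentially fixed and 4He steadily added by radioactive decay, the ratio can only fall. The inventories and production rate below are arbitrary illustrative numbers, not geochemical estimates:

```python
# Why the bulk-Earth 3He/4He ratio declines over time: 3He is primordial
# and fixed, while radiogenic 4He from U and Th decay keeps accumulating.
# All quantities are in arbitrary illustrative units.
def he_ratio(he3, he4_initial, he4_rate, t):
    """3He/4He after time t, assuming a constant 4He production rate
    (a simplification; real production declines as U and Th decay)."""
    return he3 / (he4_initial + he4_rate * t)

r_early = he_ratio(1.0, 1.0e4, 5.0, 0.0)
r_late = he_ratio(1.0, 1.0e4, 5.0, 4.0e3)
assert r_late < r_early  # the ratio decreases monotonically with time
```

A reservoir isolated early (as the lower mantle is postulated to be) would stop accumulating crust-derived U and Th and so retain a higher ratio than the degassed, processed upper mantle.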
Unusually high 3He/4He ratios have been observed in some, but not all, hotspots. This is explained by plumes tapping a deep, primordial reservoir in the lower mantle, where the original, high 3He/4He ratios have been preserved throughout geologic time.
Other elements, e.g. osmium, have been suggested to be tracers of material arising from near to the Earth's core, in basalts at oceanic islands. However, so far conclusive proof for this is lacking.
Geophysical anomalies
The plume hypothesis has been tested by looking for the geophysical anomalies predicted to be associated with plumes. These include thermal, seismic, and elevation anomalies. Thermal anomalies are inherent in the term "hotspot". They can be measured in numerous different ways, including surface heat flow, petrology, and seismology. Thermal anomalies produce anomalies in the speeds of seismic waves, but unfortunately so do composition and partial melt. As a result, wave speeds cannot be used simply and directly to measure temperature, but more sophisticated approaches must be taken.
Seismic anomalies are identified by mapping variations in wave speed as seismic waves travel through Earth. A hot mantle plume is predicted to have lower seismic wave speeds compared with similar material at a lower temperature. Mantle material containing a trace of partial melt (e.g., as a result of it having a lower melting point), or being richer in Fe, also has a lower seismic wave speed, and these effects are stronger than that of temperature. Thus, although unusually low wave speeds have been taken to indicate anomalously hot mantle beneath hotspots, this interpretation is ambiguous. The most commonly cited seismic wave-speed images that are used to look for variations in regions where plumes have been proposed come from seismic tomography. This method involves using a network of seismometers to construct three-dimensional images of the variation in seismic wave speed throughout the mantle.
Seismic waves generated by large earthquakes enable structure below the Earth's surface to be determined along the ray path. Seismic waves that have traveled a thousand or more kilometers (also called teleseismic waves) can be used to image large regions of Earth's mantle. They also have limited resolution, however, and only structures at least several hundred kilometers in diameter can be detected.
Seismic tomography images have been cited as evidence for a number of mantle plumes in Earth's mantle. There is, however, vigorous on-going discussion regarding whether the structures imaged are reliably resolved, and whether they correspond to columns of hot, rising rock.
The mantle plume hypothesis predicts that domal topographic uplifts will develop when plume heads impinge on the base of the lithosphere. An uplift of this kind occurred when the North Atlantic Ocean opened about 54 million years ago. Some scientists have linked this to a mantle plume postulated to have caused the breakup of Eurasia and the opening of the North Atlantic, now suggested to underlie Iceland. Current research has shown that the time-history of the uplift is probably much shorter than predicted, however. It is thus not clear how strongly this observation supports the mantle plume hypothesis.
Geochemistry
Basalts found at oceanic islands are geochemically distinct from mid-ocean ridge basalt (MORB). Ocean island basalt (OIB) is more diverse compositionally than MORB, and the great majority of ocean islands are composed of alkali basalt enriched in sodium and potassium relative to MORB. Larger islands, such as Hawaii or Iceland, are mostly tholeiitic basalt, with alkali basalt limited to late stages of their development, but this tholeiitic basalt is chemically distinct from the tholeiitic basalt of mid-ocean ridges. OIB tends to be more enriched in magnesium, and both alkali and tholeiitic OIB are enriched in trace incompatible elements, with the light rare earth elements showing particular enrichment compared with heavier rare earth elements. Radiogenic isotope ratios of the elements strontium, neodymium, hafnium, lead, and osmium show wide variations relative to MORB, which is attributed to the mixing of at least three mantle components: HIMU with a high proportion of radiogenic lead, produced by decay of uranium and other heavy radioactive elements; EM1 with less enrichment of radiogenic lead; and EM2 with a high 87Sr/86Sr ratio. Helium in OIB shows a wider variation in the 3He/4He ratio than MORB, with some values approaching the primordial value.
The composition of ocean island basalts is attributed to the presence of distinct mantle chemical reservoirs formed by subduction of oceanic crust. These include reservoirs corresponding to HIMU, EM1, and EM2. These reservoirs are thought to have different major element compositions, based on the correlation between the major element compositions of OIB and their isotope ratios. Tholeiitic OIB is interpreted as a product of a higher degree of partial melting in particularly hot plumes, while alkali OIB is interpreted as a product of a lower degree of partial melting in smaller, cooler plumes.
Seismology
In 2015, based on data from 273 large earthquakes, researchers compiled a model based on full waveform tomography, requiring the equivalent of 3 million hours of supercomputer time. Due to computational limitations, high-frequency data still could not be used, and seismic data remained unavailable from much of the seafloor. Nonetheless, vertical plumes, 400 °C hotter than the surrounding rock, were visualized under many hotspots, including the Pitcairn, Macdonald, Samoa, Tahiti, Marquesas, Galapagos, Cape Verde, and Canary hotspots. They extended nearly vertically from the core-mantle boundary (2900 km depth) to a possible layer of shearing and bending at 1000 km. They were detectable because they were 600–800 km wide, more than three times the width expected from contemporary models. Many of these plumes are in the large low-shear-velocity provinces under Africa and the Pacific, while some other hotspots such as Yellowstone were less clearly related to mantle features in the model.
The unexpected size of the plumes leaves open the possibility that they may conduct the bulk of the Earth's 44 terawatts of internal heat flow from the core to the surface, and means that the lower mantle convects less than expected, if at all. It is possible that there is a compositional difference between plumes and the surrounding mantle that slows them down and broadens them.
Suggested mantle plume locations
Mantle plumes have been suggested as the source for flood basalts. These extremely rapid, large-scale eruptions of basaltic magmas have periodically formed continental flood basalt provinces on land and oceanic plateaus in the ocean basins, such as the Deccan Traps, the Siberian Traps, the Karoo-Ferrar flood basalts of Gondwana, and the largest known continental flood basalt, the Central Atlantic magmatic province (CAMP).
Many continental flood basalt events coincide with continental rifting. This is consistent with a system that tends toward equilibrium: as matter rises in a mantle plume, other material is drawn down into the mantle, causing rifting.
Alternative hypotheses
In parallel with the mantle plume model, two alternative explanations for the observed phenomena have been considered: the plate hypothesis and the impact hypothesis.
The plate hypothesis
Since the beginning of the 21st century, a paradigm debate "The great plume debate" has developed around plumes, in which the plume hypothesis has been challenged and contrasted with the more recent plate hypothesis ("Plates vs. Plumes"). The reason for this is that the mantle-plume hypothesis has not been suitable for making reliable predictions since its introduction in 1971 and has therefore been repeatedly adapted to observed hotspots depending on the situation. Over time, with the growing number of models, the concept of a plume developed into a weakly defined hypothesis, which as a general term is currently neither provable nor refutable.
The dissatisfaction with the state of the evidence for mantle plumes and the proliferation of ad hoc hypotheses drove a number of geologists, led by Don L. Anderson, Gillian Foulger, and Warren B. Hamilton, to propose a broad alternative based on shallow processes in the upper mantle and above, with an emphasis on plate tectonics as the driving force of magmatism.
The plate hypothesis suggests that "anomalous" volcanism results from lithospheric extension that permits melt to rise passively from the asthenosphere beneath. It is thus the conceptual inverse of the plume hypothesis because the plate hypothesis attributes volcanism to shallow, near-surface processes associated with plate tectonics, rather than active processes arising at the core-mantle boundary.
Lithospheric extension is attributed to processes related to plate tectonics. These processes are well understood at mid-ocean ridges, where most of Earth's volcanism occurs. It is less commonly recognised that the plates themselves deform internally, and can permit volcanism in those regions where the deformation is extensional. Well-known examples are the Basin and Range Province in the western USA, the East African Rift valley, and the Rhine Graben. Under this hypothesis, variable volumes of magma are attributed to variations in chemical composition (large volumes of volcanism corresponding to more easily molten mantle material) rather than to temperature differences.
While not denying the presence of deep mantle convection and upwelling in general, the plate hypothesis holds that these processes do not result in mantle plumes, in the sense of columnar vertical features that span most of the Earth's mantle, transport large amounts of heat, and contribute to surface volcanism.
Under the umbrella of the plate hypothesis, the following sub-processes, all of which can contribute to permitting surface volcanism, are recognised:
Continental break-up;
Fertility at mid-ocean ridges;
Enhanced volcanism at plate boundary junctions;
Small-scale sublithospheric convection;
Oceanic intraplate extension;
Slab tearing and break-off;
Shallow mantle convection;
Abrupt lateral changes in stress at structural discontinuities;
Continental intraplate extension;
Catastrophic lithospheric thinning;
Sublithospheric melt ponding and draining.
The impact hypothesis
In addition to these processes, impact events such as those that created the Addams crater on Venus and the Sudbury Igneous Complex in Canada are known to have caused melting and volcanism. In the impact hypothesis, it is proposed that some regions of hotspot volcanism can be triggered by certain large-body oceanic impacts which are able to penetrate the thinner oceanic lithosphere, and that flood basalt volcanism can be triggered by converging seismic energy focused at the antipodal point opposite major impact sites. Impact-induced volcanism has not been adequately studied and comprises a separate causal category of terrestrial volcanism with implications for the study of hotspots and plate tectonics.
Comparison of the hypotheses
In 1997 it became possible, using seismic tomography, to image subducting tectonic slabs penetrating from the surface all the way to the core-mantle boundary.
For the Hawaii hotspot, long-period seismic body wave diffraction tomography provided evidence that a mantle plume is responsible, as had been proposed as early as 1971. For the Yellowstone hotspot, seismological evidence began to converge from 2011 in support of the plume model, as concluded by James et al.: "we favor a lower mantle plume as the origin for the Yellowstone hotspot." Data acquired through EarthScope, a program collecting high-resolution seismic data throughout the contiguous United States, has accelerated acceptance of a plume underlying Yellowstone.
Although there is thus strong evidence that at least these two deep mantle plumes rise from the core-mantle boundary, confirmation that other hypotheses can be dismissed may require similar tomographic evidence for other hotspots.
Seismic tomography
Seismic tomography or seismotomography is a technique for imaging the subsurface of the Earth using seismic waves. The properties of seismic waves are modified by the material through which they travel. By comparing the differences in seismic waves recorded at different locations, it is possible to create a model of the subsurface structure. Most commonly, these seismic waves are generated by earthquakes or man-made sources such as explosions. Different types of waves, including P, S, Rayleigh, and Love waves, can be used for tomographic images, though each comes with its own benefits and downsides and is used depending on the geologic setting, seismometer coverage, distance from nearby earthquakes, and required resolution. The model created by tomographic imaging is almost always a seismic velocity model, and features within this model may be interpreted as structural, thermal, or compositional variations. Geoscientists apply seismic tomography to a wide variety of settings in which the subsurface structure is of interest, ranging in scale from whole-Earth structure to the upper few meters below the surface.
Theory
Tomography is solved as an inverse problem. Seismic data are compared to an initial Earth model and the model is modified until the best possible fit between the model predictions and observed data is found. Seismic waves would travel in straight lines if Earth was of uniform composition, but structural, chemical, and thermal variations affect the properties of seismic waves, most importantly their velocity, leading to the reflection and refraction of these waves. The location and magnitude of variations in the subsurface can be calculated by the inversion process, although solutions to tomographic inversions are non-unique. Most commonly, only the travel time of the seismic waves is considered in the inversion. However, advances in modeling techniques and computing power have allowed different parts, or the entirety, of the measured seismic waveform to be fit during the inversion.
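A minimal sketch of this inversion loop, using the algebraic reconstruction technique (ART, a Kaczmarz-type iteration) on a contrived two-cell, three-ray problem: each ray's travel time is the sum, over cells, of path length times slowness, and the model is repeatedly nudged toward agreement with the observed times. All numbers are invented for illustration:

```python
# Straight-ray toy problem: travel time = sum over cells of (path length x slowness).
G = [[1.0, 0.5],      # ray path lengths (km) through cells 1 and 2
     [0.5, 1.0],
     [0.7, 0.7]]
m_true = [0.25, 0.50]                      # "true" slowness (s/km) per cell
d = [sum(g * s for g, s in zip(row, m_true)) for row in G]   # observed times

m = [0.3, 0.3]                             # initial model guess
for _ in range(200):                       # Kaczmarz sweeps over the rays
    for row, t_obs in zip(G, d):
        t_pred = sum(g * s for g, s in zip(row, m))
        step = (t_obs - t_pred) / sum(g * g for g in row)
        m = [s + step * g for s, g in zip(m, row)]
```

Real inversions work the same way in spirit but with millions of cells and rays, regularization to tame non-uniqueness, and, in waveform tomography, a wave-propagation simulation in place of the straight-ray matrix.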
Seismic tomography is similar to medical x-ray computed tomography (CT scan) in that a computer processes receiver data to produce a 3D image, although CT scans use attenuation instead of travel-time difference. Seismic tomography has to deal with the analysis of curved ray paths which are reflected and refracted within the Earth, and potential uncertainty in the location of the earthquake hypocenter. CT scans use linear x-rays and a known source.
History
In the early 20th century, seismologists first used travel time variations in seismic waves from earthquakes to make discoveries such as the existence of the Moho and the depth to the outer core. While these findings shared some underlying principles with seismic tomography, modern tomography itself was not developed until the 1970s with the expansion of global seismic networks. Networks like the World-Wide Standardized Seismograph Network were initially motivated by underground nuclear tests, but quickly showed the benefits of their accessible, standardized datasets for geoscience. These developments occurred concurrently with advancements in modeling techniques and computing power that were required to solve large inverse problems and generate theoretical seismograms, which are required to test the accuracy of a model. As early as 1972, researchers successfully used some of the underlying principles of modern seismic tomography to search for fast and slow areas in the subsurface.
The first widely cited publication that largely resembles modern seismic tomography was published in 1976 and used local earthquakes to determine the 3D velocity structure beneath Southern California. The following year, P wave delay times were used to create 2D velocity maps of the whole Earth at several depth ranges, representing an early 3D model. The first model using iterative techniques, which improve upon an initial model in small steps and are required when there are a large number of unknowns, was done in 1984. The model was made possible by iterating upon the first radially anisotropic Earth model, created in 1981. A radially anisotropic Earth model describes changes in material properties, specifically seismic velocity, along a radial path through the Earth, and assumes this profile is valid for every path from the core to the surface. This 1984 study was also the first to apply the term "tomography" to seismology, as the term had originated in the medical field with X-ray tomography.
Seismic tomography has continued to improve in the past several decades since its initial conception. The development of adjoint inversions, which are able to combine several different types of seismic data into a single inversion, help negate some of the trade-offs associated with any individual data type. Historically, seismic waves have been modeled as 1D rays, a method referred to as "ray theory" that is relatively simple to model and can usually fit travel-time data well. However, recorded seismic waveforms contain much more information than just travel-time and are affected by a much wider path than is assumed by ray theory. Methods like the finite-frequency method attempt to account for this within the framework of ray theory. More recently, the development of "full waveform" or "waveform" tomography has abandoned ray theory entirely. This method models seismic wave propagation in its full complexity and can yield more accurate images of the subsurface. Originally these inversions were developed in exploration seismology in the 1980s and 1990s and were too computationally complex for global and regional scale studies, but development of numerical modeling methods to simulate seismic waves has allowed waveform tomography to become more common.
Process
Seismic tomography uses seismic records to create 2D and 3D models of the subsurface through an inverse problem that minimizes the difference between the created model and the observed seismic data. Various methods are used to resolve anomalies in the crust, lithosphere, mantle, and core based on the availability of data and types of seismic waves that pass through the region. Longer wavelengths penetrate deeper into the Earth, but seismic waves are not sensitive to features significantly smaller than their wavelength and therefore provide a lower resolution. Different methods also make different assumptions, which can have a large effect on the image created. For example, commonly used tomographic methods work by iteratively improving an initial input model, and thus can produce unrealistic results if the initial model is unreasonable.
P wave data are used in most local models and in global models for areas with sufficient earthquake and seismograph density. S and surface wave data are used in global models when this coverage is not sufficient, such as in ocean basins and away from subduction zones. First-arrival times are the most widely used, but reflected and refracted phases are exploited in more complex models, such as those imaging the core. Differential traveltimes between wave phases or types are also used.
Local tomography
Local tomographic models are often based on a temporary seismic array targeting specific areas, unless in a seismically active region with extensive permanent network coverage. These allow for the imaging of the crust and upper mantle.
Diffraction and wave equation tomography use the full waveform, rather than just the first arrival times. The inversion of amplitude and phases of all arrivals provide more detailed density information than transmission traveltime alone. Despite the theoretical appeal, these methods are not widely employed because of the computing expense and difficult inversions.
Reflection tomography originated with exploration geophysics. It uses an artificial source to resolve small-scale features at crustal depths. Wide-angle tomography is similar, but with a wide source to receiver offset. This allows for the detection of seismic waves refracted from sub-crustal depths and can determine continental architecture and details of plate margins. These two methods are often used together.
Local earthquake tomography is used in seismically active regions with sufficient seismometer coverage. Given the proximity between source and receivers, a precise earthquake focus location must be known. This requires the simultaneous iteration of both structure and focus locations in model calculations.
Teleseismic tomography uses waves from distant earthquakes that deflect upwards to a local seismic array. The models can reach depths similar to the array aperture, typically a few hundred kilometers, enough to image the crust and lithosphere. The waves arrive near 30° from vertical, which vertically smears compact features.
Regional or global tomography
Regional to global scale tomographic models are generally based on long wavelengths. Various models have better agreement with each other than local models due to the large feature size they image, such as subducted slabs and superplumes. The trade off from whole mantle to whole Earth coverage is the coarse resolution (hundreds of kilometers) and difficulty imaging small features (e.g. narrow plumes). Although often used to image different parts of the subsurface, P and S wave derived models broadly agree where there is image overlap. These models use data from both permanent seismic stations and supplementary temporary arrays.
First arrival traveltime P wave data are used to generate the highest resolution tomographic images of the mantle. These models are limited to regions with sufficient seismograph coverage and earthquake density, therefore cannot be used for areas such as inactive plate interiors and ocean basins without seismic networks. Other phases of P waves are used to image the deeper mantle and core.
In areas with limited seismograph or earthquake coverage, multiple phases of S waves can be used for tomographic models. These are of lower resolution than P wave models, due to the distances involved and fewer bounce-phase data available. S waves can also be used in conjunction with P waves for differential arrival time models.
Surface waves can be used for tomography of the crust and upper mantle where no body wave (P and S) data are available. Both Rayleigh and Love waves can be used. The low frequency waves lead to low resolution models, therefore these models have difficulty with crustal structure. Free oscillations, or normal mode seismology, are the long wavelength, low frequency movements of the surface of the Earth which can be thought of as a type of surface wave. The frequencies of these oscillations can be obtained through Fourier transformation of seismic data. The models based on this method are of broad scale, but have the advantage of relatively uniform data coverage as compared to data sourced directly from earthquakes.
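The Fourier-transform step used to extract oscillation frequencies can be sketched with a plain discrete Fourier transform applied to a synthetic record containing a single dominant oscillation; the record length and frequency are contrived:

```python
import cmath, math

N = 128
k_true = 5                                  # cycles of the oscillation over the record
record = [math.sin(2 * math.pi * k_true * n / N) for n in range(N)]

def dft_mag(x, k):
    """Magnitude of the k-th discrete Fourier coefficient of x."""
    return abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / len(x))
                   for n in range(len(x))))

# The spectral peak over the positive-frequency bins recovers the input frequency.
peak_bin = max(range(1, N // 2), key=lambda k: dft_mag(record, k))
```

For real free-oscillation data the same idea, applied to days-long records, picks out the discrete normal-mode frequencies.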
Attenuation tomography attempts to extract the anelastic signal from the elastic-dominated waveform of seismic waves. Generally, it is assumed that seismic waves behave elastically, meaning individual rock particles that are displaced by the seismic wave eventually return to their original position. However, a comparatively small amount of permanent deformation does occur, which adds up to significant energy loss over large distances. This anelastic behavior is called attenuation, and in certain conditions can become just as important as the elastic response. It has been shown that the contribution of anelasticity to seismic velocity is highly sensitive to temperature, so attenuation tomography can help determine if a velocity feature is caused by a thermal or chemical variation, which can be ambiguous when assuming a purely elastic response.
Ambient noise tomography uses random seismic waves generated by oceanic and atmospheric disturbances to recover the velocities of surface waves. Assuming ambient seismic noise is equal in amplitude and frequency content from all directions, cross-correlating the ambient noise recorded at two seismometers for the same time period should produce only seismic energy that travels from one station to the other. This allows one station to be treated as a "virtual source" of surface waves sent to the other station, the "virtual receiver". These surface waves are sensitive to the seismic velocity of the Earth at different depths depending on their period. A major advantage of this method is that it does not require an earthquake or man-made source. A disadvantage of the method is that an individual cross-correlation can be quite noisy due to the complexity of the real ambient noise field. Thus, many individual correlations over a shorter time period, typically one day, need to be created and averaged to improve the signal-to-noise ratio. While this has often required very large amounts of seismic data recorded over multiple years, more recent studies have successfully used much shorter time periods to create tomographic images with ambient noise.
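The cross-correlation idea can be sketched with synthetic data: two stations record the same random noise field, one delayed by the inter-station travel time, and the correlation peak recovers that delay. All numbers are contrived:

```python
import random

random.seed(0)
n, lag_true = 400, 7                       # record length; travel time in samples
field = [random.gauss(0.0, 1.0) for _ in range(n + lag_true)]
sta_a = field[lag_true:]                   # station A's record of the noise field
sta_b = field[:n]                          # station B: same field, 7 samples later

def xcorr(a, b, lag):
    """Correlation of a with b shifted by lag samples."""
    return sum(a[t] * b[t + lag] for t in range(len(a) - lag))

# The correlation peak between the two records recovers the inter-station delay.
best_lag = max(range(20), key=lambda L: xcorr(sta_a, sta_b, L))
```

In practice an individual correlation is noisy, which is why many daily correlations are stacked before the delay, and hence the surface-wave velocity, is measured.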
Waveforms are usually modeled as rays because ray theory is significantly less complex than solving the full seismic wave equations. However, seismic waves are affected by the material properties of a wide area surrounding the ray path, not just the material through which the ray passes directly. The finite-frequency effect is the influence this surrounding medium has on a seismic record. Finite-frequency tomography accounts for it when determining both travel-time and amplitude anomalies, increasing image resolution. It can resolve much larger variations (e.g. 10–30%) in material properties.
Applications
Seismic tomography can resolve anisotropy, anelasticity, density, and bulk sound velocity. Variations in these parameters may be a result of thermal or chemical differences, which are attributed to processes such as mantle plumes, subducting slabs, and mineral phase changes. Larger scale features that can be imaged with tomography include the high velocities beneath continental shields and low velocities under ocean spreading centers.
Hotspots
The mantle plume hypothesis proposes that areas of volcanism not readily explained by plate tectonics, called hotspots, are a result of thermal upwelling within the mantle. Some researchers have proposed an upper mantle source above the 660 km discontinuity for these plumes, while others propose a much deeper source, possibly at the core-mantle boundary.
While the source of mantle plumes has been highly debated since they were first proposed in the 1970s, most modern studies argue in favor of mantle plumes originating at or near the core-mantle boundary. This is in large part due to tomographic images that reveal both the plumes themselves as well as large low-velocity zones in the deep mantle that likely contribute to the formation of mantle plumes. These large low-shear velocity provinces, as well as smaller ultra low velocity zones, have been consistently observed across many tomographic models of the deep Earth.
Subduction Zones
Subducting plates are colder than the mantle into which they are moving. This creates a fast anomaly that is visible in tomographic images. Tomographic images have been made of most subduction zones around the world and have provided insight into the geometries of the crust and upper mantle in these areas. These images have revealed that subducting plates vary widely in how steeply they descend into the mantle. Tomographic images have also revealed features such as deeper portions of the subducting plate tearing off from the upper portion.
Other Applications
Tomography can be used to image faults to better understand their seismic hazard. This can be done by imaging the fault itself, through differences in seismic velocity across the fault boundary, or by determining near-surface velocity structure, which can have a large impact on the amplitude of ground shaking during an earthquake due to site amplification effects. Near-surface velocity structure from tomographic images can also be useful for other hazards, such as monitoring of landslides for changes in near-surface moisture content, which has an effect on both seismic velocity and the potential for future landslides.
Tomographic images of volcanoes have yielded new insights into properties of the underlying magmatic system. These images have most commonly been used to estimate the depth and volume of magma stored in the crust, but have also been used to constrain properties such as the geometry, temperature, or chemistry of the magma. It is important to note that both lab experiments and tomographic imaging studies have shown that recovering these properties from seismic velocity alone can be difficult due to the complexity of seismic wave propagation through focused zones of hot, potentially melted rocks.
Though far less developed than tomography on Earth, seismic tomography has been proposed for other bodies in the solar system and successfully used on the Moon. Data collected from four seismometers placed by the Apollo missions have been used many times to create 1-D velocity profiles for the Moon, and less commonly 3-D tomographic models. Tomography relies on having multiple seismometers, but tomography-adjacent methods for constraining Earth structure have been used on other planets. While on Earth these methods are often used in combination with seismic tomography models to better constrain the locations of subsurface features, they can still provide useful information about the interiors of other planetary bodies when only a single seismometer is available. For example, data gathered by the SEIS (Seismic Experiment for Interior Structure) instrument on the InSight lander has been used to detect the Martian core.
Limitations
Global seismic networks have expanded steadily since the 1960s, but are still concentrated on continents and in seismically active regions. Oceans, particularly in the southern hemisphere, are under-covered. Temporary seismic networks have helped improve tomographic models in regions of particular interest, but typically only collect data for months to a few years. The uneven distribution of earthquakes biases tomographic models towards seismically active regions. Methods that do not rely on earthquakes such as active source surveys or ambient noise tomography have helped image areas with little to no seismicity, though these both have their own limitations as compared to earthquake-based tomography.
The type of seismic wave used in a model limits the resolution it can achieve. Longer wavelengths are able to penetrate deeper into the Earth, but can only be used to resolve large features. Finer resolution can be achieved with surface waves, with the trade off that they cannot be used in models deeper than the crust and upper mantle. The disparity between wavelength and feature scale causes anomalies to appear of reduced magnitude and size in images. P and S wave models respond differently to the types of anomalies. Models based solely on the wave that arrives first naturally prefer faster pathways, causing models based on these data to have lower resolution of slow (often hot) features. This can prove to be a significant issue in areas such as volcanoes where rocks are much hotter than their surroundings and oftentimes partially melted. Shallow models must also consider the significant lateral velocity variations in continental crust.
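The fast-path bias of first-arrival data can be illustrated with a toy two-path example: a slow anomaly on the direct path is largely hidden because the first arrival simply switches to the detour, while a fast anomaly is fully expressed. The numbers are invented:

```python
def first_arrival(path_times):
    """Only the earliest arrival is picked, whatever path it took."""
    return min(path_times)

direct, detour = 10.0, 10.5                # travel times (s) of two candidate paths
t_ref = first_arrival([direct, detour])
t_slow = first_arrival([direct + 3.0, detour])   # slow anomaly on the direct path
t_fast = first_arrival([direct - 0.6, detour])   # fast anomaly on the direct path
# A 3.0 s slowdown changes the first arrival by only 0.5 s (the detour wins),
# while a 0.6 s speedup is fully expressed in the data.
```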
Because seismometers have only been deployed in large numbers since the late-20th century, tomography is only capable of viewing changes in velocity structure over decades. For example, tectonic plates only move at millimeters per year, so the total amount of change in geologic structure due to plate tectonics since the development of seismic tomography is several orders of magnitude lower than the finest resolution possible with modern seismic networks. However, seismic tomography has still been used to view near-surface velocity structure changes at time scales of years to months.
Tomographic solutions are non-unique. Although statistical methods can be used to analyze the validity of a model, unresolvable uncertainty remains. This contributes to difficulty comparing the validity of different model results.
Computing power limits the amount of seismic data, number of unknowns, mesh size, and iterations in tomographic models. This is of particular importance in ocean basins, which due to limited network coverage and earthquake density require more complex processing of distant data. Shallow oceanic models also require smaller model mesh size due to the thinner crust.
Tomographic images are typically presented with a color ramp representing the strength of the anomalies. This can make equal changes appear to differ in magnitude, owing to how differences between colors are perceived: a change from orange to red, for example, reads as more subtle than one from blue to yellow. The degree of color saturation can also visually skew interpretations. These factors should be considered when analyzing images.
| Physical sciences | Seismology | Earth science |
378193 | https://en.wikipedia.org/wiki/Coprocessor | Coprocessor | A coprocessor is a computer processor used to supplement the functions of the primary processor (the CPU). Operations performed by the coprocessor may be floating-point arithmetic, graphics, signal processing, string processing, cryptography or I/O interfacing with peripheral devices. By offloading processor-intensive tasks from the main processor, coprocessors can accelerate system performance. Coprocessors allow a line of computers to be customized, so that customers who do not need the extra performance do not need to pay for it.
Functionality
Coprocessors vary in their degree of autonomy. Some (such as FPUs) rely on direct control via coprocessor instructions embedded in the CPU's instruction stream. Others are independent processors in their own right, capable of working asynchronously; even so, they are either not optimized for general-purpose code or incapable of running it, owing to a limited instruction set focused on accelerating specific tasks. It is common for these to be driven by direct memory access (DMA), with the host processor (a CPU) building a command list. The PlayStation 2's Emotion Engine contained an unusual DSP-like SIMD vector unit capable of both modes of operation.
History
To make the best use of mainframe computer processor time, input/output tasks were delegated to separate systems called channel I/O. The mainframe would not perform any I/O processing itself; instead it would set parameters for an input or output operation and then signal the channel processor to carry out the whole of the operation. By dedicating relatively simple sub-processors to handle time-consuming I/O formatting and processing, overall system performance was improved.
Coprocessors for floating-point arithmetic first appeared in desktop computers in the 1970s and became common throughout the 1980s and into the early 1990s. Early 8-bit and 16-bit processors used software to carry out floating-point arithmetic operations. Where a coprocessor was supported, floating-point calculations could be carried out many times faster. Math coprocessors were popular purchases for users of computer-aided design (CAD) software and scientific and engineering calculations. Some floating-point units, such as the AMD 9511, Intel 8231/8232 and Weitek FPUs were treated as peripheral devices, while others such as the Intel 8087, Motorola 68881 and National 32081 were more closely integrated with the CPU.
Another form of coprocessor was a video display coprocessor, as used in the Atari 8-bit computers, TI-99/4A, and MSX home computers, which were called "Video Display Controllers". The Amiga custom chipset includes such a unit known as the Copper, as well as a blitter for accelerating bitmap manipulation in memory.
As microprocessors developed, the cost of integrating the floating-point arithmetic functions into the processor declined. High processor speeds also made a closely coupled but separately packaged coprocessor difficult to implement. Separately packaged mathematics coprocessors are now uncommon in desktop computers. The demand for a dedicated graphics coprocessor has grown, however, particularly due to the increasing demand for realistic 3D graphics in computer games.
Intel
The original IBM PC included a socket for the Intel 8087 floating-point coprocessor (also known as an FPU), which was a popular option for people using the PC for computer-aided design or mathematics-intensive calculations. In that architecture, the coprocessor sped up floating-point arithmetic on the order of fiftyfold. Users who used the PC only for, say, word processing saved the high cost of the coprocessor, which would not have accelerated text-manipulation operations.
The 8087 was tightly integrated with the 8086/8088 and responded to floating-point machine code operation codes inserted in the 8088 instruction stream. An 8088 processor without an 8087 could not interpret these instructions, requiring separate versions of programs for FPU and non-FPU systems, or at least a test at run time to detect the FPU and select appropriate mathematical library functions.
Another coprocessor for the 8086/8088 central processor was the 8089 input/output coprocessor. It used the same programming technique as the 8087 for input/output operations, such as transfer of data from memory to a peripheral device, thereby reducing the load on the CPU. But IBM did not use it in the IBM PC design, and Intel stopped development of this type of coprocessor.
The Intel 80386 microprocessor used an optional "math" coprocessor (the 80387) to perform floating-point operations directly in hardware. The Intel 80486DX processor included floating-point hardware on the chip. Intel released a cost-reduced processor, the 80486SX, that had no floating-point hardware, and also sold an 80487SX coprocessor that essentially disabled the main processor when installed, since the 80487SX was a complete 80486DX with a different set of pin connections.
Intel processors later than the 80486 integrated floating-point hardware on the main processor chip; the advances in integration eliminated the cost advantage of selling the floating-point processor as an optional element. It would be very difficult to adapt circuit-board techniques adequate at 75 MHz processor speed to meet the time-delay, power consumption, and radio-frequency interference standards required at gigahertz-range clock speeds. These on-chip floating-point processors are still referred to as coprocessors because they operate in parallel with the main CPU.
During the era of 8- and 16-bit desktop computers another common source of floating-point coprocessors was Weitek. These coprocessors had a different instruction set from the Intel coprocessors, and used a different socket, which not all motherboards supported. The Weitek processors did not provide transcendental mathematics functions (for example, trigonometric functions) like the Intel x87 family, and required specific software libraries to support their functions.
Motorola
The Motorola 68000 family had the 68881/68882 coprocessors which provided similar floating-point speed acceleration as for the Intel processors. Computers using the 68000 family but not equipped with the hardware floating-point processor could trap and emulate the floating-point instructions in software, which, although slower, allowed one binary version of the program to be distributed for both cases. The 68451 memory-management coprocessor was designed to work with the 68020 processor.
Modern coprocessors
Dedicated graphics processing units (GPUs) in the form of graphics cards are now commonplace. Certain models of sound cards were fitted with dedicated processors providing digital multichannel mixing and real-time DSP effects as early as 1990 to 1994 (the Gravis Ultrasound and Sound Blaster AWE32 being typical examples), while the Sound Blaster Audigy and the Sound Blaster X-Fi are more recent examples.
In 2006, AGEIA announced an add-in card for computers that it called the PhysX PPU. PhysX was designed to perform complex physics computations so that the CPU and GPU do not have to perform these time-consuming calculations. It was designed for video games, although other mathematical uses could theoretically be developed for it. In 2008, Nvidia purchased the company and phased out the PhysX card line; the functionality was added through software allowing their GPUs to render PhysX on cores normally used for graphics processing, using their Nvidia PhysX engine software.
In 2006, BigFoot Systems unveiled a PCI add-in card they christened the KillerNIC which ran its own special Linux kernel on a FreeScale PowerQUICC running at 400 MHz, calling the FreeScale chip a Network Processing Unit or NPU.
The SpursEngine is a media-oriented add-in card with a coprocessor based on the Cell microarchitecture. The SPUs are themselves vector coprocessors.
In 2008, the Khronos Group released OpenCL, with the aim of supporting general-purpose CPUs, ATI/AMD and Nvidia GPUs, and other accelerators with a single common language for compute kernels.
In the 2010s, some mobile devices implemented the sensor hub as a coprocessor. Examples of coprocessors used for handling sensor integration in mobile devices include the Apple M7 and M8 motion coprocessors, the Qualcomm Snapdragon Sensor Core and Qualcomm Hexagon, and the Holographic Processing Unit of the Microsoft HoloLens.
In 2012, Intel announced the Intel Xeon Phi coprocessor.
Various companies are developing coprocessors aimed at accelerating artificial neural networks for vision and other cognitive tasks (e.g. vision processing units, TrueNorth, and Zeroth), and as of 2018 such AI chips appear in smartphones from Apple and several Android phone vendors.
Other coprocessors
The MIPS architecture supports up to four coprocessor units, used for memory management, floating-point arithmetic, and two undefined coprocessors for other tasks such as graphics accelerators.
Using FPGA (field-programmable gate arrays), custom coprocessors can be created for acceleration of particular processing tasks such as digital signal processing (e.g. Zynq, combines ARM cores with FPGA on a single die).
TLS/SSL accelerators, used on servers; such accelerators were once add-in cards, but their role has largely been taken over by cryptographic instruction-set extensions in mainstream CPUs.
Some multi-core chips can be programmed so that one of their processors is the primary processor, and the other processors are supporting coprocessors.
China's Matrix 2000, a 128-core PCIe coprocessor, is a proprietary accelerator that requires a host CPU. It was employed in an upgrade of the 17,792-node Tianhe-2 supercomputer (2 Intel Knights Bridge + 2 Matrix 2000 per node), now dubbed Tianhe-2A, roughly doubling its speed to about 95 petaflops and making it at the time one of the world's fastest supercomputers.
A range of coprocessors were available for various models from Acorn Computers, notably the BBC Micro and BBC Master series. Rather than special-purpose graphics or arithmetic devices, these were general-purpose CPUs (principally the 6502, Zilog Z80, National Semiconductor 32016, and ARM 1) described as second processors, typically interfaced to the host system using a message passing architecture known as the Tube, with Acorn's own products providing such processors in a BBC Micro expansion unit with accompanying memory and interfacing circuitry. Software could be executed independently on the second processor, and applications could be written to offload work from the host system, leaving it to perform input/output tasks, resulting in acceleration. Since a range of CPUs were available in a variety of products, a BBC Micro fitted with such a coprocessor was able to run operating systems for other processor architectures, such as CP/M, DOS and Unix, along with accompanying software.
Trends
Over time, CPUs have tended to absorb the functionality of the most popular coprocessors. FPUs are now considered an integral part of a processor's main pipeline; SIMD units took over the role of various DSP accelerator cards for multimedia; and even GPUs have become integrated on CPU dies. Nonetheless, specialized units remain popular away from desktop machines and where additional power is needed, and they allow continued evolution independently of the main processor product lines.
| Technology | Computer hardware | null |
378200 | https://en.wikipedia.org/wiki/Lowest%20common%20denominator | Lowest common denominator | In mathematics, the lowest common denominator or least common denominator (abbreviated LCD) is the lowest common multiple of the denominators of a set of fractions. It simplifies adding, subtracting, and comparing fractions.
Description
The lowest common denominator of a set of fractions is the lowest number that is a multiple of all the denominators: their lowest common multiple.
The product of the denominators is always a common denominator, as in:

1/2 + 1/3 = 3/6 + 2/6 = 5/6

but it is not always the lowest common denominator, as in:

5/12 + 11/18 = 15/36 + 22/36 = 37/36

Here, 36 is the least common multiple of 12 and 18. Their product, 216, is also a common denominator, but calculating with that denominator involves larger numbers:

5/12 + 11/18 = 90/216 + 132/216 = 222/216 = 37/36
With variables rather than numbers, the same principles apply: for example, the lowest common denominator of a/(bc) and d/(ce) is bce, the least common multiple of bc and ce.
Methods of calculating the LCD are described at lowest common multiple.
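One such method can be sketched in a few lines of Python; `math.lcm` and `fractions.Fraction` are standard-library tools, and the numbers reuse the denominators 12 and 18 from the example above:

```python
from fractions import Fraction
from math import lcm

def lowest_common_denominator(*denominators):
    # The LCD is simply the least common multiple of the denominators.
    return lcm(*denominators)

assert lowest_common_denominator(12, 18) == 36   # smaller than 12 * 18 = 216

# Fraction normalizes its results, so adding over the LCD (36) or over the
# product of the denominators (216) gives the same reduced value.
assert Fraction(5, 12) + Fraction(11, 18) == Fraction(37, 36)
```

Note that `math.lcm` accepts any number of arguments (Python 3.9+), so the same function covers sets of more than two fractions.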
Role in arithmetic and algebra
The same fraction can be expressed in many different forms. As long as the ratio between numerator and denominator is the same, the fractions represent the same number. For example:

1/2 = 2/4 = 4/8

because each is obtained from the previous one by multiplying by 1 written as a fraction:

1/2 × 2/2 = 2/4, and 2/4 × 2/2 = 4/8.
It is usually easiest to add, subtract, or compare fractions when each is expressed with the same denominator, called a "common denominator". For example, the numerators of fractions with common denominators can simply be added, such that 5/12 + 6/12 = 11/12 and 5/12 < 6/12, since each fraction has the common denominator 12. Without computing a common denominator, it is not obvious what 5/12 + 11/18 equals, or whether 5/12 is greater than or less than 11/18. Any common denominator will do, but usually the lowest common denominator is desirable because it makes the rest of the calculation as simple as possible.
Practical uses
The LCD has many practical uses, such as determining the number of objects of two different lengths necessary to align them in a row that starts and ends at the same place, as in brickwork, tiling, and tessellation. It is also useful in planning work schedules for employees who get y days off every x days.
In musical rhythm, the LCD is used in cross-rhythms and polymeters to determine the fewest notes necessary to count time given two or more metric divisions. For example, much African music is recorded in Western notation using 12/8 time, because each measure is divided by 4 and by 3, the LCD of which is 12.
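Both of these practical uses reduce to a least-common-multiple computation; a minimal sketch (the function name is illustrative):

```python
from math import lcm

def smallest_common_grid(*divisions):
    # Fewest equal pulses into which a measure (or row length) can be
    # divided so that every given division lands on a pulse boundary.
    return lcm(*divisions)

assert smallest_common_grid(4, 3) == 12   # a 4-against-3 cross-rhythm needs 12 pulses
assert smallest_common_grid(6, 8) == 24   # bricks of lengths 6 and 8 first align at 24
```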
Colloquial usage
The expression "lowest common denominator" is used to describe (usually in a disapproving manner) a rule, proposal, opinion, or media that is deliberately simplified so as to appeal to the largest possible number of people.
| Mathematics | Basics | null |
378269 | https://en.wikipedia.org/wiki/Helium%E2%80%93neon%20laser | Helium–neon laser | A helium–neon laser or He-Ne laser is a type of gas laser whose gain medium consists of a mixture of helium and neon (ratio between 5:1 and 20:1) at a total pressure of approximately 1 Torr (133 Pa), excited by a small electrical discharge. The best-known and most widely used He-Ne laser operates at a center wavelength of 632.81646 nm in air (632.99138 nm in vacuum), corresponding to a frequency of 473.6122 THz, in the red part of the visible spectrum. Because of the mode structure of the laser cavity, the instantaneous output of the laser can be shifted by up to 500 MHz in either direction from the center.
History of He-Ne laser development
The first He-Ne lasers emitted infrared at 1150 nm, and were the first gas lasers and the first lasers with continuous wave output. However, a laser that operated at visible wavelengths was much more in demand. A number of other neon transitions were investigated to identify ones in which a population inversion could be achieved. The 633 nm line was found to have the highest gain in the visible spectrum, making this the wavelength of choice for most He-Ne lasers. However, other visible and infrared stimulated-emission wavelengths are possible, and by using mirror coatings with their peak reflectance at these other wavelengths, He-Ne lasers could be engineered to employ those transitions, including visible lasers appearing red, orange, yellow, and green. Stimulated emissions are known from over 100 μm in the far infrared to 540 nm in the visible.
Because visible transitions have somewhat lower gain, these lasers generally have lower output efficiencies and are more costly. The 3.39 μm transition has a very high gain, but is prevented from use in an ordinary He-Ne laser (of a different intended wavelength) because the cavity and mirrors are lossy at that wavelength. However, in high-power He-Ne lasers having a particularly long cavity, superluminescence at 3.39 μm can become a nuisance, robbing power from the stimulated emission medium, often requiring additional suppression.
The best-known and most widely used He-Ne laser operates at a wavelength of 632.8 nm, in the red part of the visible spectrum. It was developed at Bell Telephone Laboratories in 1962, 18 months after the pioneering demonstration at the same laboratory of the first continuous infrared He-Ne gas laser in December 1960.
Construction and operation
The gain medium of the laser, as suggested by its name, is a mixture of helium and neon gases, in approximately a 10:1 ratio, contained at low pressure in a glass envelope. The gas mixture is mostly helium, so that helium atoms can be excited. The excited helium atoms collide with neon atoms, exciting some of them to the state that radiates 632.8 nm. Without helium, the neon atoms would be excited mostly to lower excited states, responsible for non-laser lines.
A neon laser with no helium can be constructed, but it is much more difficult without this means of energy coupling. Therefore, a He-Ne laser that has lost enough of its helium (e.g., due to diffusion through the seals or glass) will lose its laser functionality because the pumping efficiency will be too low. The energy or pump source of the laser is provided by a high-voltage electrical discharge passed through the gas between electrodes (anode and cathode) within the tube. A DC current of 3 to 20 mA is typically required for CW operation. The optical cavity of the laser usually consists of two concave mirrors or one plane and one concave mirror: one having very high (typically 99.9%) reflectance, and the output coupler mirror allowing approximately 1% transmission.
Commercial He-Ne lasers are relatively small devices compared to other gas lasers, having cavity lengths usually ranging from 15 to 50 cm (but sometimes up to about 1 meter to achieve the highest powers), and optical output power levels ranging from 0.5 to 50 mW.
The precise wavelength of red He-Ne lasers is 632.991 nm in a vacuum, which is refracted to about 632.816 nm in air. The wavelengths of the stimulated emission modes lie within about 0.001 nm above or below this value, and the wavelengths of those modes shift within this range due to thermal expansion and contraction of the cavity. Frequency-stabilized versions enable the wavelength of a single mode to be specified to within 1 part in 108 by the technique of comparing the powers of two longitudinal modes in opposite polarizations. Absolute stabilization of the laser's frequency (or wavelength) as fine as 2.5 parts in 1011 can be obtained through use of an iodine absorption cell.
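The wavelength and frequency figures quoted here are mutually consistent, as a couple of lines of arithmetic confirm. The constants below are taken from the text; the derived refractive index of air is an inference, not a quoted value:

```python
C = 299_792_458.0        # speed of light in vacuum, m/s

lambda_vac = 632.991e-9  # vacuum wavelength, m
lambda_air = 632.816e-9  # wavelength in air, m

frequency = C / lambda_vac        # ~473.61 THz, matching the text
n_air = lambda_vac / lambda_air   # implied refractive index of air, ~1.000277

# A stabilization of 1 part in 1e8 corresponds to a few MHz at this frequency.
stability_hz = frequency * 1e-8   # ~4.7 MHz
```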
The mechanism producing population inversion and light amplification in a He-Ne laser plasma originates with inelastic collision of energetic electrons with ground-state helium atoms in the gas mixture. As shown in the accompanying energy-level diagram, these collisions excite helium atoms from the ground state to higher energy excited states, among them the 23S1 and 21S0 (in LS, or Russell–Saunders, coupling notation; the leading 2 indicates that the excited electron occupies the n = 2 shell), which are long-lived metastable states. Because of a fortuitous near-coincidence between the energy levels of the two He metastable states and the 5s2 and 4s2 (Paschen notation) levels of neon, collisions between these helium metastable atoms and ground-state neon atoms results in a selective and efficient transfer of excitation energy from the helium to neon. This excitation energy transfer process is given by the reaction equations
He*(23S1) + Ne1S0 → He(1S0) + Ne*4s2 + ΔE,
He*(21S) + Ne1S0 + ΔE → He(1S0) + Ne*5s2,
where * represents an excited state, and ΔE is the small energy difference between the energy states of the two atoms, of the order of 0.05 eV, or 387 cm−1, which is supplied by kinetic energy. Excitation-energy transfer increases the population of the neon 4s2 and 5s2 levels manyfold. When the population of these two upper levels exceeds that of the corresponding lower level, 3p4, to which they are optically connected, population inversion is present. The medium becomes capable of amplifying light in a narrow band at 1.15 μm (corresponding to the 4s2 to 3p4 transition) and in a narrow band at 632.8 nm (corresponding to the 5s2 to 3p4 transition). The 3p4 level is efficiently emptied by fast radiative decay to the 3s state, eventually reaching the ground state.
The remaining step in utilizing optical amplification to create an optical oscillator is to place highly reflecting mirrors at each end of the amplifying medium so that a wave in a particular spatial mode will reflect back upon itself, gaining more power in each pass than is lost due to transmission through the mirrors and diffraction. When these conditions are met for one or more longitudinal modes, then radiation in those modes will rapidly build up until gain saturation occurs, resulting in a stable continuous laser-beam output through the front (typically 99% reflecting) mirror.
The gain bandwidth of the He-Ne laser is dominated by Doppler broadening rather than pressure broadening due to the low gas pressure and is thus quite narrow: only about 1.5 GHz full width for the 633 nm transition. With cavities having typical lengths of 15 to 50 cm, this allows about 2 to 8 longitudinal modes to oscillate simultaneously (however, single-longitudinal-mode units are available for special applications). The visible output of the red He-Ne laser, its long coherence length, and its excellent spatial quality make this laser a useful source for holography and a wavelength reference for spectroscopy. A stabilized He-Ne laser is also one of the benchmark systems for the definition of the meter.
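The mode count quoted above follows from the longitudinal-mode spacing c/2L of a two-mirror cavity. A sketch using the figures in the text (the mode-counting rule is only a rough approximation):

```python
from math import floor

C = 299_792_458.0        # speed of light, m/s
GAIN_BANDWIDTH = 1.5e9   # Doppler-broadened gain width at 633 nm, Hz

def mode_spacing_hz(cavity_length_m):
    # Free spectral range: adjacent longitudinal modes differ by c / 2L.
    return C / (2.0 * cavity_length_m)

def modes_under_gain(cavity_length_m):
    # Rough count of longitudinal modes fitting under the gain curve.
    return floor(GAIN_BANDWIDTH / mode_spacing_hz(cavity_length_m)) + 1

# A 15 cm cavity spaces modes ~1 GHz apart (roughly 2 modes oscillate);
# a 50 cm cavity spaces them ~300 MHz apart (roughly 6 modes).
```

This simple estimate reproduces the "about 2 to 8 modes" range for 15 to 50 cm cavities; the exact number depends on where the modes sit relative to the gain-curve center.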
Prior to the invention of cheap, abundant diode lasers, red He-Ne lasers were widely used in barcode scanners at supermarket checkout counters. He-Ne lasers are generally present in educational and research optical laboratories. They are also unsurpassed for use in nano-positioning in applications such as semiconductor device fabrication. High precision laser gyroscopes have employed He-Ne lasers operating at 633 nm in a ring laser configuration.
Applications
Red He-Ne lasers have an enormous number of industrial and scientific uses. They are widely used in laboratory demonstrations in the field of optics because of their relatively low cost and ease of operation compared to other visible lasers producing beams of similar quality in terms of spatial coherence (a single-mode Gaussian beam) and long coherence length (however, since about 1990 semiconductor lasers have offered a lower-cost alternative for many such applications).
Starting in 1978, HeNe tube lasers (manufactured by Toshiba and NEC) were used in LaserDisc players from Pioneer. This continued until the 1984 model lineup, which contained infrared laser diodes instead. Pioneer continued to use laser diodes in all LaserDisc players afterwards until LaserDisc was discontinued in 2009.
| Technology | Lasers | null |
378297 | https://en.wikipedia.org/wiki/Rest%20area | Rest area | A rest area is a public facility located next to a large thoroughfare such as a motorway, expressway, or highway, at which drivers and passengers can rest, eat, or refuel without exiting onto secondary roads. Other names include motorway service area (UK), services (UK), travel plaza, rest stop, oasis (US), service area, rest and service area (RSA), resto, service plaza, lay-by, service centre, and onroute (Canada). Facilities may include park-like areas, fuel stations, public toilets, water fountains, restaurants, and dump and fill stations for caravans / motorhomes.
A rest area with limited to no public facilities is a lay-by, parking area, scenic area, or scenic overlook. Along some highways and roads are services known as wayside parks, roadside parks, or picnic areas.
Overview
The availability, standards and upkeep of facilities at a stop vary by jurisdiction. Service stations have parking areas allotted for cars and trucks, articulated trucks, as well as buses and caravans.
Most state-run rest areas tend to be located in more remote or rural areas, where there are likely no fast food eateries (let alone any full-service restaurants), fuel stations, hotels, campgrounds or other roadside services nearby. The locations of these remote rest areas are usually marked by signs on the freeway or motorway; for example, a sign may read, "Next Rest Area 64 miles", "Next Services 48 miles" or "Next Rest Stop 10 km".
Driving information is usually available at these locations, such as posted maps and other local information, along with public toilets; again, however, depending on the location or standards of the area, some stops have rows of portable toilets ("porta-potties") available rather than a more permanent structure or restroom building. Some rest areas have visitor information kiosks, or even stations with staff on duty. There may also be drinking fountains, vending machines, pay telephones, a fuel station, a restaurant/food court, or a convenience store at a service area. Some rest areas provide free coffee for long-distance drivers, paid for by donations from other travelers and/or from local businesses, civic groups, and churches. Many service stations have Wi-Fi access, bookshops and newsstands. Many scenic rest areas have picnic areas. Service areas tend to have traveller information in the form of so-called "exit guides", which often contain very basic maps and advertisements for local motels and nearby tourist attractions.
Privatised commercial services may take the form of a truck stop complete with a filling station, arcade video games, and even a children's recreation area or playground, as well as shower and laundry facilities, nearby fast-food eateries, or their own cafeteria or food court, all under one roof, immediately adjacent to the motorway. Some offer business and financial services, such as ATMs, fax machines, office cubicles, as well as the aforementioned internet access.
Safety issues
Some rest areas have the reputations of being unsafe with regard to crime, especially at night, since they are usually situated in remote or rural areas and inherently attract transient individuals. California's current policy is to maintain existing public rest areas but no longer build new ones, due to the cost and difficulty of keeping them safe, although many California rest stops now feature highway patrol quarters.
Asia
In Malaysia, Indonesia, Iran, Saudi Arabia, and Turkey, rest areas have prayer rooms (musola) for Muslims travelling long distances (traditionally, more than 2 marhalah).
In Iran it is called Esterāhatgāh (Persian:استراحتگاه) meaning the rest area or rest place.
In Thailand and Vietnam, bus travel is common, and long-distance bus rides typically include stops at rest areas designed for bus passengers. These rest stops typically have a small restaurant as well as a small store for buying food. Some have proper restrooms and even souvenir shops.
Japan
In Japan, there are two grades of rest areas on Japan's tolled expressways. These are part of the expressway system, allowing a person to stop without exiting the expressway, as exiting and reentering the tollway would lead to a higher overall toll for the trip. They are modeled and named after the motorway service stations in the United Kingdom.
The larger rest area is called a "Service Area", abbreviated to SA. SAs are usually very large facilities with parking for hundreds of cars and many buses - offering toilets, smoking areas, convenience stores, pet relief areas, restaurants, regional souvenir shops, a filling station, and sometimes even tourist attractions, such as a Ferris wheel or a view of a famous location. They are usually spaced about one hour apart on the system, and often a planned stop for tour buses. Two Service Areas also have a motel. The other grade of rest stop is a "Parking Area", or a PA. PAs are much smaller, and spaced roughly 20 minutes apart on the system. Besides a small parking lot, toilets and drink vending machines are the only consistent amenities offered, while some larger parking areas have small shops, local goods, and occasionally a filling station - but are much smaller than their larger Service Area counterparts.
Since the 1990s, many Japanese towns have also established "roadside stations" along highways and trunk routes. In addition to the conventional functions of a service area, most of them provide shops and restaurants dedicated to local culture and local produce, and a number of them also feature an information center, a community hall, leisure facilities such as hot springs and parks, and other features unique to individual stations. There are now over a thousand across Japan.
In the past, there were shukuba (stage stations), which served as resting places for people travelling along traditional routes in Japan by horse or on foot before modern transportation was introduced.
Malaysia
In Malaysia, an overhead bridge restaurant (OBR), or overhead restaurant, is a special rest area with restaurants built above the expressway. Unlike typical laybys and RSAs, which are accessible from one direction only, an overhead restaurant is accessible from both directions of the expressway.
Philippines
In the Philippines, barring certain exceptions, rest areas typically occupy large land areas with restaurants and retail space on top of filling stations. There are 10 service stations in the North Luzon Expressway, 9 service stations in the South Luzon Expressway, 3 service stations in both STAR Tollway and SCTEX, and a Caltex service station in Muntinlupa-Cavite Expressway.
North Korea
South Korea
In South Korea, a rest area usually includes a park and sells regional specialties. Usually Korean rest areas are very big and clean. Cellphone charging is free and WiFi is available in every rest area.
Taiwan
In Taiwan, rest areas are maintained by the Freeway Bureau and the Directorate General of Highways. There are 16 rest areas along four important freeways: Freeways No.1 (Sun Yat-sen Freeway), 3 (Formosa Freeway), 5 (Chiang Wei-shui Memorial Freeway), 6 (Shuishalian Freeway) and one expressway (West Coast Expressway).
Thailand
In Thailand, rest areas are considered part of the national highway, especially on intercity motorways, which are under the supervision of the Department of Highways.
Standard rest areas on motorways and concession highways are divided into three types: (1) Service Centers, large highway rest facilities with an area of approximately 50 rai or more; (2) Service Areas, medium-sized facilities with an area of about 20 rai or more; and (3) Rest Stops, small facilities with an area of approximately 5 rai or more.
There are four rest areas in service on Motorway 7 and Motorway 9, and there are plans to open a total of 18 rest areas.
Europe
In some countries, such as Spain, rest areas are uncommon, as motorists are directed to establishments that serve both the traveling public and the local population. In other areas, a rest area cannot be accessed except from a motorway. The Dutch rest area De Lucht is fairly typical of many European rest areas in that it has no access roads other than from the motorway itself.
Austria and Germany
Autobahnraststätte is the name for service areas on the German and Austrian Autobahn. They often include a fuel station, public phones, restaurants, restrooms, parking and, occasionally, a hotel or motel. If the service area is off the motorway, it is named Rasthof.
Smaller parking areas, mostly known as a Rastplatz, are more frequent, but they have only picnic tables and, sometimes, toilets (signposted).
Finland
Rest areas are constructed and maintained by the Finnish government, while the local municipality provides area maps and sanitary services. If there are commercial services, the shop inherits the responsibility for cleanliness and upkeep of the area. Rest areas are designed mostly for long-distance drivers. The recommendation in Finland is that there should be a rest area every 20 km (12.4 mi).
France
In France, both full-service rest areas and picnic sites are provided on the autoroute network, and regulations dictate there to be one such area every . Both types may also be found on national (N-class) highways, although less frequently than on autoroutes. They are known as , or and , respectively; ("rest area") usually refers to a picnic stop. These areas are not usually stated on approach signs, but are rather distinguished by the symbols used. A name is usually given, generally that of a nearby town or village, such as "".
United Kingdom and Ireland
The term "rest area" is not generally used in the United Kingdom and Ireland. The most common terms are motorway service areas (MSA), motorway service stations or simply "services". As with the rest of the world, these are places where drivers can leave a motorway to refuel, rest, or get refreshments. Most service stations accommodate fast food outlets, restaurants, coffee shops, general goods and mini supermarkets. Some service stations also incorporate hotels.
Services are also present on non-motorway roads; many A-roads have services, sometimes providing only a petrol station and, in some cases, a restaurant or café.
The majority of service areas in Ireland are operated by Circle K or Applegreen, and contain fuel stations, truck stops, shops and fast food outlets such as McDonald's, Burger King, Subway or Chopstix; they differ from those in the United Kingdom in that only one service station contains a hotel (the M7 services in Portlaoise, County Laois).
Lay-bys
The term "lay-by" is used in the UK and Ireland to describe a roadside parking or rest area for drivers. Equivalent terms in the United States are "turnout" or "pullout".
Lay-bys can vary in size, from a simple parking bay alongside the carriageway (sufficient for one or two cars only) to substantial areas that are separated from the carriageway by verges, which can accommodate dozens of vehicles.
Lay-bys are found on the side of most rural UK roads, but not on motorways, which instead have a hard shoulder for emergencies only (except on sections of smart motorway where the hard shoulder is missing). They are marked by a rectangular blue sign bearing a white letter P, and there should also be advance warning of lay-bys to give drivers time to slow down safely.
North America
Canada
In Canada, roadside services are known as service centres in most provinces and as ONroute stations in Ontario. In some instances, where there are no retail facilities, they may be known as rest areas or text stops ('halte-texto' in French). Most service centres are concentrated along Ontario's 400-series highways and Quebec's Autoroute network, while rest areas are found along the highway networks of all provinces and the Trans-Canada Highway.
Nova Scotia has constructed a small number of full-fledged service centres along its 100-Series Highways.
In New Brunswick, the only rest areas are roadside parks with picnic tables and washrooms operated as a part of the provincial park system, but many have closed due to cutbacks. Occasionally, litter barrels are also found along the side of the road.
The Prairie provinces of Saskatchewan and Manitoba have rest stops located along the Trans-Canada Highway (Highway 1). However, these stops are simply places to rest or use the washroom; they are not built to the standard of the rest areas found on the 400-series highways in Ontario or the Interstate Highways of the United States.
Alberta
Alberta Transportation operates seven provincial rest areas or safety rest areas. These include:
Highway 1 (Trans-Canada Highway) westbound between Brooks and Bassano;
Highway 1 (Trans-Canada Highway) eastbound between Tilley and Suffield;
Highway 2 (Queen Elizabeth II Highway) southbound between Crossfield and Airdrie;
Highway 2 (Queen Elizabeth II Highway) northbound near Highway 13 west of Wetaskiwin;
Highway 16 (Yellowhead Highway) eastbound and westbound between Edson and Carrot Creek;
Highway 43 accessible from both directions south of Valleyview; and
Highway 63 accessible both ways between Atmore and Breynat.
Alberta Transportation also designates partnership rest areas or highway service rest areas that are privately owned and operated highway user facilities. These facilities are located on Highway 1 at Dead Man's Flats, Highway 2 at Red Deer (Gasoline Alley), Highway 9 near Hanna, Highway 16 at Niton Junction and at Innisfree, and Highway 43 at Rochfort Bridge.
British Columbia
British Columbia has many service centres on its provincial roads, particularly along the Yellowhead Highway (Highway 16), the Coquihalla Highway (Highway 5), and Highway 97C, home to the first service centres built in the province. One notable curiosity is a service centre along Highway 118, a minor road connecting two towns to the Yellowhead Highway (Hwy. 16).
Ontario
Ontario has a modern and well-developed network of service centres, now mostly known as ONroute, located along Highway 401 along the Quebec City-Windsor Corridor, as well as sections of Highway 400. However, shorter and/or less trafficked 400-series highways (including the northern sections of Highway 400), do not have even basic rest areas along them at all.
The original service centres for Highway 401 were mostly built around 1962. In 1991, one was placed at the west end of the Greater Toronto Area, serving eastbound traffic in Mississauga; this location was branded as "Info Centre" and was intended as a welcome centre for Toronto. The Mississauga travel centre closed on September 30, 2006.
Most of the original 1960s-era service centres on highways 400 and 401 were demolished in 2010, with new buildings constructed on the original sites and operated by HMSHost subsidiary Host Kilmer under the ONroute banner.
The service centres in Ontario were originally of a generic, cafeteria-style nature. They contain filling stations, washrooms, picnic areas, and vending machines. During the late 1980s, the service centres were taken over by Scott's Hospitality, a major publicly traded Canadian restaurant operator, which leased them out to major oil companies and fast food restaurant chains, with a single gasoline distributor and a sole restaurant at most locations. In 2010–11, most of the older service centres were replaced by a common design operated by ONroute, which features a selection of fast food providers akin to a food court.
Outside of the ONroute locations, there are 211 rest areas along provincial highways. Most are basic stops with a picnic area, restrooms at the majority of locations, and parking for most vehicles (commercial trucks may not be accommodated at small areas). Most are seasonal, operating from mid-May to mid-November.
Reese's Corner at the intersection of Highway 21 and Highway 7 is often considered a service centre. Although Highway 7 was bypassed by the freeway Highway 402 in the late 1970s, Reese's Corner still receives much traffic as it is only a short distance from the interchange of Highway 402 and Highway 21 (Exit 25). Lastly, truck inspection stations (which are more frequent than service centres) can be used by travellers for bathroom breaks, although this is not encouraged.
Two off-highway service campuses at Exit 74 along the Queen Elizabeth Way in Grimsby are unofficial rest areas for travelling motorists. Two smaller such facilities (Seguin Trail Road south of Parry Sound and Port Severn Road in Port Severn) also exist on the less-busy section of Highway 400 north of the last official on-highway service centre.
Quebec
In Quebec, rest areas are known as and service areas as . Rest rooms and picnic areas are located along the autoroutes and many of the provincial highways. Some of the rest areas have vending machines and/or canteens. Some truck and isolated rest areas have no services, or their services have been removed after the facilities deteriorated beyond repair. Beginning in 2019, the province began modernizing some rest areas to better serve the needs of families and truckers.
There are about 10 service areas (on Highways 10, 15, 20, 40, 55, 117, and 175); some of these have restrooms, filling stations, and restaurants or vending machines.
United States
In the United States, rest areas are typically non-commercial facilities that provide, at a minimum, parking and restrooms; there are 1,840 rest areas along Interstate routes. Some may have information kiosks, vending machines, and picnic areas, but little else, while some have "dump" facilities, where recreational vehicles may empty their sewage holding tanks. They are typically maintained and funded by the departments of transportation of the state governments; for example, rest areas in California are maintained by Caltrans. In 2008, state governments began to close some rest areas as a result of the Great Recession.
Some places, such as California, have laws that explicitly prohibit private retailers from occupying rest stops. A federal statute passed by Congress also prohibits states from allowing private businesses to occupy rest areas along interstate highways. The relevant clause of 23 U.S.C. § 111 states:
The State will not permit automotive service stations or other commercial establishments for serving motor vehicle users to be constructed or located on the rights-of-way of the Interstate System.
The original reason for this clause was to protect innumerable small towns whose survival depended upon providing roadside services such as gasoline, food, and lodging. Because of it, private truck stops and travel plazas have blossomed into a $171 billion industry in the United States. The clause was immediately followed by an exception for facilities constructed prior to January 1, 1960, many of which continue to exist, as explained further below.
Therefore, the standard practice is that private businesses must buy up land near existing exits and build their own facilities to serve travelers. Such facilities often have tall signs that can be seen from several miles away (so that travelers have adequate time to make a decision). In turn, it is somewhat harder to visit such private facilities, because one has to first exit the freeway and navigate through several intersections to reach a desired business's parking lot, rather than exit directly into a rest area's parking lot. Public rest areas are usually (but not always) positioned so as not to compete with private businesses.
Special blue signs indicating gas, food, lodging, camping and roadside attractions near an exit can be found on most freeways in the United States. Beginning in the mid-1970s, private businesses have been permitted to display their logos or trademarks on these signs by paying a transportation department (or a subcontractor to a transportation department) a small fee. Until the release of the 2000 edition of the Manual on Uniform Traffic Control Devices, these signs were allowed only on the rural sections of highways. The 2000 MUTCD added provisions allowing these signs on highways in urban areas as long as adequate sign spacing can be maintained; however, some states (such as California and New York) continue to restrict these signs to rural areas only. These signs are allowed on urban freeways in 15 states, with Arizona being the most recent state (as of 2013) to repeal the restriction of these signs to only rural highways.
Attempts to remove the federal ban on privatized rest areas have been generally unsuccessful, due to resistance from existing businesses that have already made enormous capital investments in their existing locations.
For example, in 2003, Congress's federal highway funding reauthorization bill contained a clause allowing states to start experimenting with privatized rest areas on Interstate highways. The clause was fiercely resisted by the National Association of Truck Stop Owners (NATSO), which argued that allowing such rest areas would shift revenue to state governments (in the form of lease payments) that would have gone to local governments (in the form of property and sales taxes). NATSO also argued that by destroying private commercial truck stops, the bill would result in an epidemic of drowsy truck drivers, since such stops provide about 90% of the parking spaces used by American truck drivers while in transit.
Service areas
Prior to the creation of the Interstate Highway System, many states east of the Rocky Mountains had already started building and operating their own long-distance intercity toll roads (turnpikes). To help recover construction costs, most turnpike operators leased concession space at rest areas to private businesses. In addition, the use of this sort of service area allows drivers to stop for food and fuel without passing through additional tollbooths and thereby incurring a higher toll.
Pennsylvania, which opened the first such highway in 1940 with the mainline Pennsylvania Turnpike, was the model for many subsequent areas. Instead of operating the service areas itself, the Pennsylvania Turnpike Commission opted to lease them to Standard Oil of Pennsylvania (which was acquired shortly afterward by the modern-day Exxon), which in turn operated filling stations with garages and Howard Johnson's franchises as the restaurant offering. The turnpike now leases the filling station space to Sunoco (which operates 7-Eleven convenience stores instead of garages at the sites) and, as of 2021, the rest of the service area space to Applegreen.
In the summer of 2021, Iris Buyer LLC (an Applegreen company) announced that it was acquiring all travel plazas operated by HMSHost. The deal closed at the end of July 2021, officially transferring ownership. The New York State Thruway service areas (which would pass to another Applegreen company) were not affected by this transition because Host's contract there had expired. As of July 2022, Connecticut, Delaware, Indiana, Maine, Massachusetts, New Jersey, New York, Ohio, Pennsylvania, and West Virginia have service areas that are operated by, or in which a stake is held by, Applegreen.
Some turnpikes, such as Florida's Turnpike, were never integrated into the Interstate system and never became subject to the federal ban on private businesses. On turnpikes that did become Interstates, all privatized rest areas in operation prior to January 1, 1960, were allowed to continue operating. Such facilities are often called service areas by the public and in road atlases, but each state varies:
Connecticut, Florida, Maine, Massachusetts, Ohio, Pennsylvania, and West Virginia – service plaza
Delaware, Kansas, Maryland, and Oklahoma – service area
Illinois – oasis
Indiana and New York – travel plaza
New Jersey – service area or service plaza
Some states, such as Ohio, allow nonprofit organizations to run a concession trailer in a rest area.
Around 2015, the New Jersey Turnpike and Garden State Parkway service areas began advertising and selling products from Popcorn for the People, a non-profit organization that creates employment for people with disabilities, specifically autism.
Text stops
In 2013, the state of New York launched "It Can Wait", a program for encouraging drivers to pause at rest stops and parking areas along state roads to text (thereby avoiding texting while driving), by designating all such areas "text stops". The practice involves placing road signs which indicate the nearest "texting zone" at which to legally stop and use mobile devices such as smartphones.
Welcome centers
A rest area often located near state or municipal borders in the United States is sometimes called a welcome center. Welcome centers tend to be larger than regular rest areas, and are staffed at peak travel times with one or more employees who advise travelers as to their options. Some welcome centers contain a small museum or at least a basic information kiosk about the state. Because air travel has made it possible to enter and leave many states without crossing the state line at ground level, some states, like California, have official welcome centers inside major cities far from their state borders. In some states (such as Massachusetts), these rest areas are called tourist information centers and in others (such as New Jersey), visitor centers.
Other types
Rest areas without modern restrooms are called 'waysides'. These locations have parking spaces for trucks and cars, or for semi-trailer trucks only. Some have portable toilets and waste containers. In Missouri these locations are called 'Roadside Parks' or 'Roadside Tables'.
The most basic parking areas have no facilities of any kind; they consist solely of a paved shoulder on the side of the highway where travelers can rest for a short time. A scenic area is similar to a parking area, but it is provided to the traveler in a place of natural beauty. These are also called scenic overlooks.
Oceania
Australia
Rest areas in Australia are a common feature of the road network in rural areas. They are the responsibility of a variety of authorities, such as a state transport or main roads bureau, or a local government's works department. Facilities and standards vary widely and unpredictably: a well-appointed rest area will have bins to deposit small items of litter, a picnic table with seating, a cold water tap (sometimes fed by a rainwater tank), barbecue fireplace (sometimes gas or electric), toilets, and – less commonly – showers. Other rest areas, especially in more remote locations, may lack some or even all of these facilities: in South Australia, a rest area may be no more than a cleared section beside the road with a sign indicating its purpose. Rest areas in Australia do not provide service stations or restaurants (such facilities would be called roadhouses or truck stops), although there may be caravans, often run by charities, providing refreshments to travellers.
Comfort and hygiene are important considerations for the responsible authorities, as such remote sites can be very expensive to clean and maintain, and vandalism is common. Also, Australia's dependence on road transport by heavy vehicles can lead to competition between the amenity needs of recreational travelers and those of the drivers of heavy vehicles—so much so that on arterial routes it is common to see rest areas specifically signed to segregate the two user groups entirely. Thus rest areas generally do not allow overnight occupation. In Queensland, however, well-maintained rest areas sometimes explicitly invite travelers to stay overnight, as a road safety measure, but this is rare elsewhere.
| Technology | Concepts of ground transport | null |
378569 | https://en.wikipedia.org/wiki/Hydrozoa | Hydrozoa | Hydrozoa (hydrozoans; ) is a taxonomic class of individually very small, predatory animals, some solitary and some colonial, most of which inhabit saline water. The colonies of the colonial species can be large, and in some cases the specialized individual animals cannot survive outside the colony. A few genera within this class live in freshwater habitats. Hydrozoans are related to jellyfish and corals, which also belong to the phylum Cnidaria.
Some examples of hydrozoans are the freshwater jelly (Craspedacusta sowerbyi), freshwater polyps (Hydra), Obelia, Portuguese man o' war (Physalia physalis), chondrophores (Porpitidae), and pink-hearted hydroids (Tubularia).
Anatomy
Most hydrozoan species include both a polypoid and a medusoid stage in their life cycles, although a number of them have only one or the other. For example, Hydra has no medusoid stage, while Liriope lacks the polypoid stage.
Polyps
The hydroid form is usually colonial, with multiple polyps connected by tubelike hydrocauli. The hollow cavity in the middle of the polyp extends into the associated hydrocaulus, so that all the individuals of the colony are intimately connected. Where the hydrocaulus runs along the substrate, it forms a horizontal root-like stolon that anchors the colony to the bottom.
The colonies are generally small, no more than a few centimeters across, but some in Siphonophorae can reach sizes of several meters. They may have a tree-like or fan-like appearance, depending on species. The polyps themselves are usually tiny, although some noncolonial species are much larger, reaching , or, in the case of the deep-sea Branchiocerianthus, a remarkable 2 m (6.6 ft).
The hydrocaulus is usually surrounded by a sheath of chitin and proteins called the perisarc. In some species, this extends upwards to also enclose part of the polyps, in some cases including a closeable lid through which the polyp may extend its tentacles.
In any given colony, the majority of polyps are specialized for feeding. These have a more or less cylindrical body with a terminal mouth on a raised protuberance called the hypostome, surrounded by a number of tentacles. The polyp contains a central cavity, in which initial digestion takes place. Partially digested food may then be passed into the hydrocaulus for distribution around the colony and completion of the digestion process. Unlike some other cnidarian groups, the lining of the central cavity lacks stinging nematocysts, which are found only on the tentacles and outer surface.
All colonial hydrozoans also include some polyps specialized for reproduction. These lack tentacles and contain numerous buds from which the medusoid stage of the life cycle is produced. The arrangement and type of these reproductive polyps varies considerably between different groups.
In addition to these two basic types of polyps, a few colonial species have other specialized forms. In some, defensive polyps are found, armed with large numbers of stinging cells. In others, one polyp may develop as a large float, from which the other polyps hang down, allowing the colony to drift in open water instead of being anchored to a solid surface.
Medusae
The medusae of hydrozoans are smaller than those of typical jellyfish, ranging from in diameter. Although most hydrozoans have a medusoid stage, this is not always free-living and in many species exists solely as a sexually reproducing bud on the surface of the hydroid colony. Sometimes, these medusoid buds may be so degenerated as to entirely lack tentacles or mouths, essentially consisting of an isolated gonad.
The body consists of a dome-like umbrella ringed by tentacles. A tube-like structure hangs down from the centre of the umbrella and includes the mouth at its tip. Most hydrozoan medusae have just four tentacles, although a number of exceptions exist. Stinging cells are found on the tentacles and around the mouth.
The mouth leads into a central stomach cavity. Four radial canals connect the stomach to an additional, circular canal running around the base of the bell, just above the tentacles. Striated muscle fibres also line the rim of the bell, allowing the animal to move along by alternately contracting and relaxing its body. An additional shelf of tissue lies just inside the rim, narrowing the aperture at the base of the umbrella, and thereby increasing the force of the expelled jet of water.
The nervous system is unusually advanced for cnidarians. Two nerve rings lie close to the margin of the bell, and send fibres into the muscles and tentacles. The genus Sarsia has even been reported to possess organised ganglia. Numerous sense organs are closely associated with the nerve rings. Mostly these are simple sensory nerve endings, but they also include statocysts and primitive light-sensitive ocelli.
Life cycle
Hydroid colonies are usually dioecious, which means they have separate sexes—all the polyps in each colony are either male or female, but not usually both sexes in the same colony. In some species, the reproductive polyps, known as gonozooids (or "gonotheca" in thecate hydrozoans) bud off asexually produced medusae. These tiny, new medusae (which are either male or female) mature and spawn, releasing gametes freely into the sea in most cases. Zygotes become free-swimming planula larvae or actinula larvae that either settle on a suitable substrate (in the case of planulae), or swim and develop into another medusa or polyp directly (actinulae). Colonial hydrozoans include siphonophore colonies, Hydractinia, Obelia, and many others.
In hydrozoan species with both polyp and medusa generations, the medusa stage is the sexually reproductive phase. Medusae of these species of Hydrozoa are known as "hydromedusae". Most hydromedusae have shorter lifespans than the larger scyphozoan jellyfish. Some species of hydromedusae release gametes shortly after they are themselves released from the hydroids (as in the case of fire corals), living only a few hours, while other species of hydromedusae grow and feed in the plankton for months, spawning daily for many days before their supply of food or other water conditions deteriorate and cause their demise.
Additionally, some hydrozoan species (particularly in the genus Turritopsis) have a life cycle unusual among animals: they can transform from the sexually mature medusa stage back to the juvenile hydroid stage.
Systematics and evolution
The earliest hydrozoans may be from the Vendian (late Precambrian), more than 540 million years ago.
Hydrozoan systematics are highly complex. Several approaches for expressing their interrelationships have been proposed and heavily contested since the late 19th century, but in more recent times a consensus seems to be emerging.
Historically, the hydrozoans were divided into a number of orders, according to their mode of growth and reproduction. Most famous among these was probably the assemblage called "Hydroida", but this group is apparently paraphyletic, united by plesiomorphic (ancestral) traits. Other such orders were the Anthoathecatae, Actinulidae, Laingiomedusae, Polypodiozoa, Siphonophorae and Trachylina.
As far as can be told from the molecular and morphological data at hand, the Siphonophora for example were just highly specialized "hydroids," whereas the Limnomedusae—presumed to be a "hydroid" suborder—were simply very primitive hydrozoans and not closely related to the other "hydroids." So, the hydrozoans now are at least tentatively divided into two subclasses, the Leptolinae (containing the bulk of the former "Hydroida" and the Siphonophora) and the Trachylinae, containing the others (including the Limnomedusae). The monophyly of several of the presumed orders in each subclass is still in need of verification.
In any case, according to this classification, the hydrozoans can be subdivided as follows, with taxon names emended to end in "-ae":
Class Hydrozoa
Subclass Hydroidolina
Order Anthoathecata (= Anthoathecata(e), Athecata(e), Anthomedusae, Stylasterina(e)) — includes Laingiomedusae but monophyly requires verification
Order Leptothecata (= Leptothecata(e), Thecaphora(e), Thecata(e), Leptomedusae)
Order Siphonophorae
Subclass Trachylinae
Order Actinulidae
Order Limnomedusae — monophyly requires verification; tentatively placed here
Order Narcomedusae
Order Trachymedusae — monophyly requires verification
ITIS uses the same system, but unlike here, does not use the oldest available names for many groups.
In addition, there exists a cnidarian parasite, Polypodium hydriforme, which lives inside its host's cells. It is sometimes placed in the Hydrozoa, though its relationships are currently unresolved—a somewhat controversial 18S rRNA sequence analysis found it to be closer to the also-parasitic Myxozoa. It was traditionally placed in its own class, Polypodiozoa, and this view is often seen to reflect the uncertainties surrounding this highly distinct animal.
Other classifications
Some of the more widespread classification systems for the Hydrozoa are listed below. Though they are often found in seemingly authoritative Internet sources and databases, they do not agree with the available data. In particular, the presumed phylogenetic distinctness of the Siphonophorae is a major flaw that was corrected only recently.
The obsolete classification mentioned above was:
Order Actinulidae
Order Anthoathecatae
Order Hydroida
Suborder Anthomedusae
Suborder Leptomedusae
Suborder Limnomedusae
Order Laingiomedusae
Order Polypodiozoa
Order Siphonophorae
Order Trachylina
Suborder Narcomedusae
Suborder Trachymedusae
A very old classification that is sometimes still seen is:
Order Hydroida
Order Milleporina
Order Siphonophorae
Order Stylasterina (= Anthomedusae)
Order Trachylinida
Catalogue of Life uses:
Order Actinulida
Order Anthoathecata (= Anthomedusae)
Order Hydroida
Order Laingiomedusae
Order Leptothecata (= Leptomedusae)
Order Limnomedusae
Order Narcomedusae
Order Siphonophorae
Order Trachymedusae
Animal Diversity Web uses:
Order Actinulida
Order Anthoathecata
Order Laingiomedusae
Order Leptothecata
Order Limnomedusae
Order Narcomedusae
Order Siphonophorae
Order Trachymedusae
| Biology and health sciences | Cnidarians | Animals |
378579 | https://en.wikipedia.org/wiki/Bupropion | Bupropion | Bupropion, formerly called amfebutamone, and sold under the brand name Wellbutrin among others, is an atypical antidepressant that is US FDA-approved to treat major depressive disorder, seasonal affective disorder and to support smoking cessation. It is also popular as an add-on medication in cases of "incomplete response" to a first-line selective serotonin reuptake inhibitor (SSRI) antidepressant. Bupropion has several features that distinguish it from other antidepressants: it does not usually cause sexual dysfunction, it is not associated with weight gain and sleepiness, and it is more effective than SSRIs at improving symptoms of hypersomnia and fatigue. Bupropion, particularly the immediate-release formulation, carries a higher risk of seizure than many other antidepressants, hence caution is recommended in patients with a history of seizure disorder. The medication is taken by mouth.
Common adverse effects of bupropion with the greatest difference from placebo are dry mouth, nausea, constipation, insomnia, anxiety, tremor, and excessive sweating. Raised blood pressure is notable. Rare but serious side effects include seizures, liver toxicity, psychosis, and risk of overdose. Bupropion use during pregnancy may be associated with increased likelihood of congenital heart defects.
Bupropion acts as a norepinephrine–dopamine reuptake inhibitor (NDRI) and a nicotinic receptor antagonist. However, its effects on dopamine are weak and clinical significance is contentious. Chemically, bupropion is an aminoketone that belongs to the class of substituted cathinones and more generally that of substituted amphetamines and substituted phenethylamines.
Bupropion was invented by Nariman Mehta, who worked at Burroughs Wellcome, in 1969. It was first approved for medical use in the United States in 1985. Bupropion was originally called by the generic name amfebutamone, before being renamed in 2000. In 2022, it was the 21st most commonly prescribed medication in the United States, with more than 25million prescriptions. It is on the World Health Organization's List of Essential Medicines. In 2022, the US Food and Drug Administration (FDA) approved the combination dextromethorphan/bupropion to serve as a rapid-acting antidepressant in patients with major depressive disorder.
Medical uses
Depression
The evidence overall supports the effectiveness of bupropion over placebo for the treatment of depression. Some peer-reviewed studies suggest the quality of evidence is low. Some meta-analyses report that bupropion has an at-most small effect size for depression. One meta-analysis reported a large effect size. However, there were methodological limitations with this meta-analysis, including using a subset of only five trials for the effect size calculation, substantial variability in effect sizes between the selected trials—which led the authors to state that their findings in this area should be interpreted with "extreme caution"—and general lack of inclusion of unpublished trials in the meta-analysis. Unpublished trials are more likely to be negative in findings, and other meta-analyses have included unpublished trials. Evidence suggests that the effectiveness of bupropion for depression is similar to that of other antidepressants.
Over the autumn and winter months, bupropion prevents the development of depression in those who have recurring seasonal affective disorder: 15% of participants on bupropion experienced a major depressive episode vs. 27% of those on placebo. Bupropion also improves depression in bipolar disorder, with the efficacy and risk of an affective switch being similar to other antidepressants.
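The prevention figures above (15% of participants on bupropion experiencing a major depressive episode versus 27% on placebo) imply a number needed to treat that can be computed directly. The sketch below is illustrative arithmetic using only those two quoted rates, not an additional result from the cited trials:

```python
# Illustrative arithmetic: number needed to treat (NNT) for preventing one
# major depressive episode, from the two event rates quoted above.
placebo_rate = 0.27  # episode rate on placebo
drug_rate = 0.15     # episode rate on bupropion

absolute_risk_reduction = placebo_rate - drug_rate  # 0.12
nnt = 1 / absolute_risk_reduction
print(round(nnt, 1))  # about 8.3 people treated per episode prevented
```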
Bupropion has several features that distinguish it from other antidepressants: for instance, unlike the majority of antidepressants, it does not usually cause sexual dysfunction, and the occurrence of sexual side effects is not different from placebo. Bupropion treatment is not associated with weight gain; on the contrary, the majority of studies observed significant weight loss in bupropion-treated participants. Bupropion treatment also is not associated with the sleepiness that may be produced by other antidepressants. Bupropion is more effective than selective serotonin reuptake inhibitors (SSRIs) at improving symptoms of hypersomnia and fatigue in depressed patients. Bupropion is effective in the treatment of anxious depression and, contrary to common belief, does not exacerbate anxiety in this context. The effectiveness of bupropion for anxious depression is equivalent to that of SSRIs in the case of depression with low or moderate anxiety, whereas SSRIs show a modest effectiveness advantage in terms of response rates for depression with high anxiety.
The addition of bupropion to a prescribed SSRI is a common strategy when people do not respond to the SSRI, and it is supported by clinical trials; however, it appears to be inferior to the addition of atypical antipsychotic aripiprazole.
Smoking cessation
Prescribed as an aid for smoking cessation, bupropion reduces the severity of craving for nicotine and withdrawal symptoms such as depressed mood, irritability, difficulty concentrating, and increased appetite. Initially, bupropion slows the weight gain that often occurs in the first weeks after quitting smoking. With time, however, this effect becomes negligible.
The bupropion treatment course lasts for seven to twelve weeks, with the patient halting the use of tobacco about ten days into the course. After the course, the effectiveness of bupropion for maintaining abstinence from smoking declines over time, from 37% tobacco abstinence at three months to 20% at one year. It is unclear whether extending bupropion treatment helps to prevent relapse of smoking.
Overall, six months after the therapy, bupropion increases the likelihood of quitting smoking by approximately 1.6-fold as compared to placebo. In this respect, bupropion is as effective as nicotine replacement therapy but inferior to varenicline. Combining bupropion and nicotine replacement therapy does not improve the quitting rate.
In children and adolescents, the use of bupropion for smoking cessation does not appear to offer any significant benefits. The evidence for its use to aid smoking cessation in pregnant women is insufficient.
Attention deficit hyperactivity disorder
In the United States, the treatment of attention deficit hyperactivity disorder (ADHD) is not an approved indication of bupropion, and it is not mentioned in the 2019 guideline on ADHD treatment from the American Academy of Pediatrics. Systematic reviews of bupropion for the treatment of ADHD in both adults and children note that bupropion may be effective for ADHD but warn that this conclusion has to be interpreted with caution, because clinical trials were of low quality due to small sizes and risk of bias. Similarly to atomoxetine, bupropion has a delayed onset of action for ADHD, and several weeks of treatment are required for therapeutic effects. This is in contrast to stimulants, such as amphetamine and methylphenidate, which have an immediate onset of effect in the condition.
Sexual dysfunction
Bupropion is less likely than other antidepressants to cause sexual dysfunction. A range of studies indicate that bupropion not only produces fewer sexual side effects than other antidepressants but can actually help to alleviate sexual dysfunction including sexual dysfunction induced by SSRI antidepressants. There have also been small studies suggesting that bupropion or a bupropion/trazodone combination may improve some measures of sexual function in women who have hypoactive sexual desire disorder (HSDD) and are not depressed. According to an expert consensus recommendation from the International Society for the Study of Women's Sexual Health, bupropion can be considered as an off-label treatment for HSDD despite limited safety and efficacy data. Likewise, a 2022 systematic review and meta-analysis of bupropion for sexual desire disorder in women reported that although data were limited, bupropion appeared to be dose-dependently effective for the condition.
Weight loss
Bupropion, when used for weight loss over six to twelve months, results in greater average weight loss than placebo. This is not much different from the weight loss produced by several other weight-loss medications, such as sibutramine or orlistat. The combination drug naltrexone/bupropion has been approved by the US Food and Drug Administration (FDA) for the treatment of obesity.
Other uses
Bupropion is not effective in the treatment of cocaine dependence, but it shows promise in reducing drug use and cravings in the treatment of amphetamine-type stimulant use. Based on studies indicating that bupropion lowers the level of the inflammatory mediator TNF-alpha, there have been suggestions that it might be useful in treating inflammatory bowel disease, psoriasis, and other autoimmune conditions, but very little clinical evidence is available. Bupropion is not effective in treating chronic low back pain. The drug may be useful in the treatment of excessive daytime sleepiness (EDS) and narcolepsy.
Bupropion has been used to treat disorders of diminished motivation, like apathy, abulia, and akinetic mutism. Accordingly, the drug has been found to increase effort expenditure and improve motivational deficits in animal models. However, only limited benefits of bupropion in the treatment of apathy have been observed in clinical trials in various conditions.
Bupropion has been used in the treatment of postural orthostatic tachycardia syndrome (POTS).
Available forms
Bupropion is available as an oral tablet in several different formulations. It is mainly formulated as the hydrochloride salt but also as the hydrobromide salt. In addition to single-drug formulations, bupropion is formulated in combinations including naltrexone/bupropion (Contrave) for obesity and dextromethorphan/bupropion (Auvelity) for depression.
Contraindications
The US Food and Drug Administration (FDA) prescription label advises that bupropion should not be prescribed to individuals with epilepsy or other conditions that lower the seizure threshold, such as anorexia nervosa, bulimia nervosa, or benzodiazepine or alcohol withdrawal. It should be avoided in individuals who are taking monoamine oxidase inhibitors (MAOIs). The label recommends that caution should be exercised when treating people with liver damage, severe kidney disease, and severe hypertension, and in children, adolescents, and young adults due to the increased risk of suicidal ideation.
Side effects
The common adverse effects of bupropion with the greatest difference from placebo are dry mouth, nausea, constipation, insomnia, anxiety, tremor, and excessive sweating. Bupropion has the highest incidence of insomnia of all second-generation antidepressants, apart from desvenlafaxine. It is also associated with about 20% increased risk of headache.
Bupropion raises blood pressure in some people. One study showed an average rise of 6 mm Hg in systolic blood pressure in 10% of patients. The prescribing information notes that hypertension, sometimes severe, is observed in some people taking bupropion, both with and without pre-existing hypertension. The safety of bupropion in people with cardiovascular conditions and its general cardiovascular safety profile remain unclear due to the lack of data.
Seizure is a rare but serious adverse effect of bupropion. It is strongly dose-dependent: for the immediate-release preparation, the seizure incidence is 0.4% at doses of 300–450 mg per day; the incidence climbs almost ten-fold for the higher-than-recommended dose of 600 mg per day. For comparison, the incidence of unprovoked seizure in the general population is 0.07–0.09%, and the risk of seizure for a variety of other antidepressants is generally 0–0.5% at the recommended doses.
Cases of liver toxicity leading to death or liver transplantation have been reported for bupropion. It is considered to be one of several antidepressants with a greater risk of hepatotoxicity.
The prescribing information warns that bupropion can trigger an angle-closure glaucoma attack. On the other hand, bupropion may decrease the risk of developing open-angle glaucoma.
Bupropion use by mothers in the first trimester of pregnancy is associated with a 23% increase in the odds of congenital heart defects in their children.
Bupropion has rarely been associated with instances of Stevens–Johnson syndrome.
Bupropion has not been associated with QT prolongation at therapeutic doses but has been associated with QT prolongation in overdose.
Psychiatric
The US Food and Drug Administration (FDA) requires all antidepressants, including bupropion, to carry a boxed warning stating that antidepressants may increase the risk of suicide in people younger than 25. This warning is based on a statistical analysis conducted by the FDA which found a 2-fold increase in suicidal thought and behavior in children and adolescents, and a 1.5-fold increase in the 18–24 age group. For this analysis the FDA combined the results of 295 trials of 11 antidepressants to obtain statistically significant results. Considered in isolation, bupropion was not statistically different from placebo.
Bupropion prescribed for smoking cessation results in a 25% increase in the risk of psychiatric side effects, in particular, anxiety (about 40% increase) and insomnia (about 80% increase). The evidence is insufficient to determine whether bupropion is associated with suicides or suicidal behavior.
In rare cases, bupropion-induced psychosis may develop. It is associated with higher doses of bupropion; many cases described are at higher than recommended doses. Concurrent antipsychotic medication appears to be protective. In most cases the psychotic symptoms are eliminated by reducing the dose, ceasing treatment or adding antipsychotic medication.
Although studies are lacking, a handful of case reports suggest that abrupt discontinuation of bupropion may cause antidepressant discontinuation syndrome.
Overdose
Bupropion is considered moderately dangerous in overdose. According to an analysis of the US National Poison Data System, adjusted for the number of prescriptions, bupropion and venlafaxine are the two new-generation antidepressants (that is, excluding tricyclic antidepressants) that result in the highest mortality and morbidity. For significant overdoses, seizures have been reported in about a third of all cases; other serious effects include hallucinations, loss of consciousness, and abnormal heart rhythms. When bupropion was one of several kinds of pills taken in an overdose, fever, muscle rigidity, muscle damage, hypertension or hypotension, stupor, coma, and respiratory failure have been reported. While most people recover, some people have died after multiple uncontrolled seizures and myocardial infarction.
Interactions
Since bupropion is metabolized to hydroxybupropion by the enzyme CYP2B6, drug interactions with CYP2B6 inhibitors are possible: this includes such medications as paroxetine, sertraline, norfluoxetine (active metabolite of fluoxetine), diazepam, clopidogrel, and orphenadrine. The expected result is an increase of bupropion and a decrease in hydroxybupropion blood concentration. The reverse effect (decrease of bupropion and increase of hydroxybupropion) can be expected with CYP2B6 inducers such as carbamazepine, clotrimazole, rifampicin, ritonavir, St John's wort, and phenobarbital. Indeed, carbamazepine decreases exposure to bupropion by 90% and increases exposure to hydroxybupropion by 94%. Ritonavir, lopinavir/ritonavir, and efavirenz have been shown to decrease levels of bupropion and/or its metabolites. Ticlopidine and clopidogrel, both potent CYP2B6 inhibitors, have been found to considerably increase bupropion levels as well as decrease levels of its metabolite hydroxybupropion.
Bupropion and its metabolites are inhibitors of CYP2D6, with hydroxybupropion responsible for most of the inhibition. Additionally, bupropion and its metabolites may decrease the expression of CYP2D6 in the liver. The end effect is a significant slowing of the clearance of other drugs metabolized by this enzyme. For instance, bupropion has been found to increase area-under-the-curve of desipramine, a CYP2D6 substrate, by 5-fold. Bupropion has also been found to increase levels of atomoxetine by 5.1-fold, while decreasing the exposure to its main metabolite by 1.5-fold. As another example, the ratio of dextromethorphan (a drug that is mainly metabolized by CYP2D6) to its major metabolite dextrorphan increased approximately 35-fold when it was administered to people being treated with 300 mg/day bupropion. When people on bupropion are given MDMA, about 30% increase of exposure to both drugs is observed, with enhanced mood but decreased heart rate effects of MDMA. Interactions with other CYP2D6 substrates, such as metoprolol, imipramine, nortriptyline, venlafaxine, and nebivolol have also been reported. However, in a notable exception, bupropion does not seem to affect the concentrations of CYP2D6 substrates fluoxetine and paroxetine. Bupropion prevents norepinephrine and dopamine release induced by amphetamines and has been found to reduce the subjective and sympathomimetic effects of methamphetamine in humans.
Bupropion lowers the seizure threshold, and therefore can potentially interact with other medications that also lower it, such as antipsychotics, tricyclic antidepressants, theophylline, and systemic corticosteroids. The prescribing information recommends minimizing the use of alcohol, since in rare cases bupropion reduces alcohol tolerance.
Caution should be observed when combining bupropion with a monoamine oxidase inhibitor (MAOI), as it may result in hypertensive crisis.
Pharmacology
Pharmacodynamics
The mechanism of action of bupropion in the treatment of depression and for other indications is unclear. However, it is thought to be related to the fact that bupropion is a norepinephrine–dopamine reuptake inhibitor (NDRI) and a negative allosteric modulator of several nicotinic acetylcholine receptors. Bupropion does not act as a norepinephrine–dopamine releasing agent. Pharmacological actions of bupropion are, to a substantial degree, due to its active metabolites hydroxybupropion, threo-hydrobupropion, and erythro-hydrobupropion, which are present in the blood plasma at comparable or much higher levels. In fact, bupropion could accurately be conceptualized as a prodrug of these metabolites. The overall action of these metabolites, and particularly of one enantiomer, S,S-hydroxybupropion, is also characterized by inhibition of norepinephrine and dopamine reuptake and by nicotinic inhibition. Bupropion has no meaningful direct activity at a variety of receptors, including α- and β-adrenergic, dopamine, serotonin, histamine, and muscarinic acetylcholine receptors.
The occupancy of the dopamine transporter (DAT) by bupropion (300 mg/day) and its metabolites in the human brain, as measured by several positron emission tomography (PET) studies, is approximately 20%, with a mean occupancy range of about 14 to 26%. For comparison, the NDRI methylphenidate at therapeutic doses is thought to occupy greater than 50% of DAT sites. In accordance with its low DAT occupancy, no measurable dopamine release in the human brain was detected with bupropion (one 150 mg dose) in a PET study. Bupropion has also been shown to increase striatal VMAT2, though it is unknown if this effect is more pronounced than with other DRIs. These findings raise questions about the role of dopamine reuptake inhibition in the pharmacology of bupropion, and suggest that other actions may be responsible for its therapeutic effects. No data are available on occupancy of the norepinephrine transporter (NET) by bupropion and its metabolites. However, due to the increased exposure of hydroxybupropion over bupropion itself, which has higher affinity for the NET than the DAT, bupropion's overall pharmacological profile in humans may end up making it effectively more of a norepinephrine reuptake inhibitor than a dopamine reuptake inhibitor. Accordingly, the clinical effects of bupropion are more consistent with noradrenergic activity than with dopaminergic actions.
Bupropion has been claimed to be a σ1 receptor agonist. Its antidepressant-like effects in rodents depend on σ1 receptor activation; they are enhanced and inhibited by σ1 receptor agonists and antagonists, respectively. However, no data on the binding or functional effects of bupropion at the human sigma receptors seem to be available. In any case, bupropion has been reported to bind to rodent σ1 receptors with affinity values of 580 to 2,100 nM. In contrast to many other phenethylamines and amphetamines, bupropion is not an agonist of the trace amine-associated receptor 1 (TAAR1).
Bupropion has been found to have a mixture of anti-inflammatory and pro-inflammatory activity through modulation of the immune system. One such mechanism underlying these effects may be reduced levels of the pro-inflammatory cytokine tumor necrosis factor alpha (TNFα). The catecholaminergic actions of bupropion may be involved in its immunomodulatory effects.
Pharmacokinetics
After oral administration, bupropion is rapidly and completely absorbed, reaching peak blood plasma concentration after 1.5 hours (tmax). Sustained-release (SR) and extended-release (XL) formulations have been designed to slow down absorption, resulting in a tmax of 3 hours and 5 hours, respectively. The absolute bioavailability of bupropion is unknown but is presumed to be low, at 5–20%, due to first-pass metabolism. As for the relative bioavailability of the formulations, the XL formulation has lower bioavailability (68%) compared with the SR formulation and immediate-release bupropion.
Bupropion is metabolized in the body by a variety of pathways. The oxidative pathways are mediated by the cytochrome P450 isoenzymes CYP2B6, leading to R,R- and S,S-hydroxybupropion, and, to a lesser degree, CYP2C19, leading to 4'-hydroxybupropion. The reductive pathways are mediated by 11β-hydroxysteroid dehydrogenase type 1 in the liver and AKR7A2/AKR7A3 in the intestine, leading to threo-hydrobupropion, and by a yet-unknown enzyme, leading to erythro-hydrobupropion.
The metabolism of bupropion is highly variable: the effective doses of bupropion received by persons who ingest the same amount of the drug may differ by as much as 5.5 times (with a half-life of 12–30 hours), while the effective doses of hydroxybupropion may differ by as much as 7.5 times (with a half-life of 15–25 hours). Based on this, some researchers have advocated monitoring of the blood level of bupropion and hydroxybupropion.
The metabolism of bupropion also seems to follow biphasic pharmacokinetics: a redistribution (alpha) phase with a half-life of about 1 hour precedes a metabolism (beta) phase with a half-life of about 12–30 hours. This might explain why abuse is unfeasible, owing to the short "high", and it supports the use of extended-release formulations to maintain a consistent concentration of bupropion.
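As an illustrative sketch of how the half-life variability quoted above affects exposure (the only inputs taken from the text are the 12–30 hour beta-phase half-life range; everything else is a toy assumption, not a validated pharmacokinetic model), simple first-order decay shows how different the residual drug fraction can be between individuals:

```python
# Toy first-order (exponential) decay: fraction of a dose remaining after
# 24 hours, evaluated at the extremes of the 12-30 hour beta-phase
# half-life range quoted above. Not a clinical model.
def fraction_remaining(t_hours: float, t_half: float) -> float:
    """Fraction of drug left after t_hours, given half-life t_half."""
    return 0.5 ** (t_hours / t_half)

for t_half in (12, 30):
    print(f"t1/2 = {t_half} h: {fraction_remaining(24, t_half):.3f} remaining")
# A 12 h half-life leaves 0.250 of the dose after 24 h; 30 h leaves ~0.574.
```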
The metabolism of bupropion is highly species-dependent. As an example, oral bupropion results in hydroxybupropion levels that are 16-fold higher than those of bupropion itself in humans, whereas in rats, oral bupropion results in levels of bupropion that are 3.4-fold higher than those of hydroxybupropion. The species-dependent metabolism of bupropion is thought to be involved in species differences in its pharmacodynamic effects. For example, bupropion produces psychostimulant-like and reinforcing effects in rodents, whereas oral bupropion at therapeutic doses seems to have much less or no potential for such effects in humans.
Chemistry
Bupropion is an aminoketone that belongs to the class of substituted cathinones and the more general class of substituted phenethylamines. It is also known structurally as 3-chloro-N-tert-butyl-β-keto-α-methylphenethylamine, 3-chloro-N-tert-butyl-β-ketoamphetamine, or 3-chloro-N-tert-butylcathinone. The clinically used bupropion is racemic, a mixture of two enantiomers: S-bupropion and R-bupropion. Although the optical isomers of bupropion can be separated, they rapidly racemize under physiological conditions.
Bupropion is a small-molecule compound with the molecular formula C13H18ClNO and a molecular weight of 239.74 g/mol. It is a highly lipophilic compound, with an experimental log P of 3.6. Pharmaceutically, bupropion is used mainly as the hydrochloride salt but also, to a lesser extent, as the hydrobromide salt.
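The quoted molecular weight can be reproduced from the molecular formula and standard atomic masses. The following is a minimal sketch (the atomic-mass table uses rounded standard values, so the last decimal may differ slightly from the figure in the text), not a call to any chemistry library:

```python
# Minimal sketch: molecular weight of bupropion (C13H18ClNO) from rounded
# standard atomic masses.
ATOMIC_MASS = {"C": 12.011, "H": 1.008, "Cl": 35.453, "N": 14.007, "O": 15.999}

def molecular_weight(formula):
    """Sum atomic masses weighted by the atom counts in `formula`."""
    return sum(ATOMIC_MASS[element] * count for element, count in formula.items())

bupropion = {"C": 13, "H": 18, "Cl": 1, "N": 1, "O": 1}
# Prints roughly 239.74-239.75 g/mol depending on the mass values used.
print(f"{molecular_weight(bupropion):.2f} g/mol")
```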
A number of analogues of bupropion exist, such as hydroxybupropion, radafaxine, and manifaxine, among others. These compounds are norepinephrine–dopamine reuptake inhibitors (NDRIs) similarly to bupropion. The analogues of bupropion with the N-tert-butyl group removed or replaced with an N-methyl group, 3-chlorocathinone (3-CC) and 3-chloromethcathinone (3-CMC; clophedrone), respectively, are potent serotonin–norepinephrine–dopamine releasing agents (SNDRAs). They have been encountered as cathinone designer and recreational drugs.
There have been reported cases of false-positive urine amphetamine tests in persons taking bupropion.
Synthesis
It is synthesized in two chemical steps starting from 3'-chloropropiophenone. The alpha position adjacent to the ketone is first brominated; nucleophilic displacement of the resulting alpha-bromoketone with tert-butylamine, followed by treatment with hydrochloric acid, gives bupropion as the hydrochloride salt in 75–85% overall yield.
History
Bupropion was invented by Nariman Mehta of Burroughs Wellcome (now GlaxoSmithKline) in 1969, and the US patent for it was granted in 1974. It was approved by the US Food and Drug Administration (FDA) as an antidepressant on 30 December 1985, and marketed under the name Wellbutrin. However, a significant incidence of seizures at the originally recommended dosage (400–600 mg/day) caused the withdrawal of the drug in 1986. Subsequently, the risk of seizures was found to be highly dose-dependent, and bupropion was re-introduced to the market in 1989 with a lower maximum recommended daily dose of 450 mg/day.
In 1996, the US Food and Drug Administration (FDA) approved a sustained-release formulation of bupropion called Wellbutrin SR, intended to be taken twice a day (as compared with three times a day for immediate-release Wellbutrin). In 2003, the FDA approved another sustained-release formulation called Wellbutrin XL, intended for once-daily dosing. Wellbutrin SR and XL are available in generic form in the United States and Canada. In 1997, bupropion was approved by the FDA for use as a smoking cessation aid under the name Zyban. In 2006, Wellbutrin XL was similarly approved as a treatment for seasonal affective disorder.
In October 2007, two providers of consumer information on nutritional products and supplements, ConsumerLab.com and The People's Pharmacy, released the results of comparative tests of different brands of bupropion. The People's Pharmacy received multiple reports of increased side effects and decreased efficacy of generic bupropion, which prompted it to ask ConsumerLab.com to test the products in question. The tests showed that "one of a few generic versions of Wellbutrin XL 300 mg, sold as Budeprion XL 300 mg, didn't perform the same as the brand-name pill in the lab." The FDA investigated these complaints and concluded that Budeprion XL is equivalent to Wellbutrin XL in regard to bioavailability of bupropion and its main active metabolite hydroxybupropion. The FDA also said that coincidental natural mood variation is the most likely explanation for the apparent worsening of depression after the switch from Wellbutrin XL to Budeprion XL. On 3 October 2012, however, the FDA reversed this opinion, announcing that "Budeprion XL 300 mg fails to demonstrate therapeutic equivalence to Wellbutrin XL 300 mg." The FDA did not test the bioequivalence of any of the other generic versions of Wellbutrin XL 300 mg, but requested that the four manufacturers submit data on this question to the FDA by March 2013. The FDA has since determined that formulations from some manufacturers are not bioequivalent.
In April 2008, the FDA approved a formulation of bupropion as a hydrobromide salt instead of a hydrochloride salt, to be sold under the name Aplenzin by Sanofi-Aventis.
In 2009, the FDA issued a health advisory warning that the prescription of bupropion for smoking cessation has been associated with reports of unusual behavior changes, agitation, and hostility. Some people, according to the advisory, have become depressed or have had their depression worsen, have had thoughts about suicide or dying, or have attempted suicide. This advisory was based on a review of anti-smoking products that identified 75 reports of "suicidal adverse events" for bupropion over ten years. Based on the results of follow-up trials this warning was removed in 2016.
In 2012, the US Justice Department announced that GlaxoSmithKline had agreed to plead guilty and pay a $3 billion fine, in part for promoting the unapproved use of Wellbutrin for weight loss and sexual dysfunction.
In 2017, the European Medicines Agency (EMA) recommended suspending a number of nationally approved medicines due to misrepresentation of bioequivalence study data by Micro Therapeutic Research Labs in India. The products recommended for suspension included several 300 mg modified-release bupropion tablets.
Following EMA's call for an industry-wide review of medicines for the possible presence of nitrosamines, GlaxoSmithKline paused batch release and distribution of bupropion 150 mg tablets in November 2022. In July 2023, EMA raised the acceptable daily intake of nitrosamine impurities, leading GlaxoSmithKline to announce that distribution of bupropion 150 mg tablets would resume "across the EU and Europe" by the end of 2023.
Society and culture
Recreational use
While bupropion demonstrates some potential for misuse, this potential is less than that of other commonly used stimulants, being limited by features of its pharmacology. Case reports describe the misuse of bupropion as producing a "high" similar to that of cocaine or amphetamine but with less intensity. Bupropion misuse is uncommon. There have been some anecdotal and case-study reports of bupropion abuse, but the bulk of evidence indicates that the subjective effects of bupropion when taken orally are markedly different from those of addictive stimulants such as cocaine or amphetamine. However, bupropion taken by non-conventional routes of administration, such as injection or insufflation, has been reported to be misused in the United States and Canada, notably in prisons.
Legal status
In Russia, bupropion is banned as a narcotic drug because it is a derivative of methcathinone.
In Australia, France, and the UK, smoking cessation is the only licensed use of bupropion, and no generics are marketed.
Brand names
Brand names include Wellbutrin, Aplenzin, Budeprion, Buproban, buprapan, Forfivo, Voxra, Zyban, Bupron, Bupisure, Bupep, Smoquite, Elontril, Oribion and Buxon.
Research
Bupropion has been studied to a limited extent in the treatment of social anxiety disorder.
378598 | https://en.wikipedia.org/wiki/Tunicate | Tunicate | A tunicate is an exclusively marine invertebrate animal, a member of the subphylum Tunicata ( ). This grouping is part of the Chordata, a phylum which includes all animals with dorsal nerve cords and notochords (including vertebrates). The subphylum was at one time called Urochordata, and the term urochordates is still sometimes used for these animals.
Despite their simple appearance and very different adult form, their close relationship to the vertebrates is certain. Both groups are chordates, as evidenced by the fact that during their mobile larval stage, tunicates possess a notochord, a hollow dorsal nerve cord, pharyngeal slits, a post-anal tail, and an endostyle. The larva resembles a tadpole.
Tunicates are the only chordates that have lost their myomeric segmentation, with the possible exception of the seriation of the gill slits. However, doliolids still display segmentation of the muscle bands.
Some tunicates live as solitary individuals, but others replicate by budding and become colonies, each unit being known as a zooid. They are marine filter feeders with a water-filled, sac-like body structure and two tubular openings, known as siphons, through which they draw in and expel water. During their respiration and feeding, they take in water through the incurrent (or inhalant) siphon and expel the filtered water through the excurrent (or exhalant) siphon. Adult ascidian tunicates are sessile, immobile and permanently attached to rocks or other hard surfaces on the ocean floor. Thaliaceans (pyrosomes, doliolids, and salps) and larvaceans on the other hand, swim in the pelagic zone of the sea as adults.
Various species of ascidians, the most well-known class of tunicates, are commonly known as sea squirts, sea pork, sea livers, or sea tulips.
The earliest probable species of tunicate appears in the fossil record in the early Cambrian period.
Their name derives from their unique outer covering or "tunic", which is formed from proteins and carbohydrates, and acts as an exoskeleton. In some species, it is thin, translucent, and gelatinous, while in others it is thick, tough, and stiff.
Taxonomy
About 3,000 species of tunicate exist in the world's oceans, living mostly in shallow water. The most numerous group is the ascidians; fewer than 100 species of these are found at depths greater than . Some are solitary animals leading a sessile existence attached to the seabed, but others are colonial and a few are pelagic. Some are supported by a stalk, but most are attached directly to a substrate, which may be a rock, shell, coral, seaweed, mangrove root, dock, piling, or ship's hull. They are found in a range of solid or translucent colours and may resemble seeds, grapes, peaches, barrels, or bottles. One of the largest is a stalked sea tulip, Pyura pachydermatina, which can grow to be over tall.
The Tunicata were established by Jean-Baptiste Lamarck in 1816. In 1881, Francis Maitland Balfour introduced another name for the same group, "Urochorda", to emphasize the affinity of the group to other chordates. No doubt largely because of his influence, various authors supported the term, either as such, or as the slightly older "Urochordata", but this usage is invalid because "Tunicata" has precedence, and grounds for superseding the name never existed. Accordingly, the current (formally correct) trend is to abandon the name Urochorda or Urochordata in favour of the original Tunicata, and the name Tunicata is almost invariably used in modern scientific works. It is accepted as valid by the World Register of Marine Species but not by the Integrated Taxonomic Information System.
Various common names are used for different species. Sea tulips are tunicates with colourful bodies supported on slender stalks. Sea squirts are so named because of their habit of contracting their bodies sharply and squirting out water when disturbed. Sea liver and sea pork get their names from the resemblance of their dead colonies to pieces of meat.
Classification
Tunicates are more closely related to craniates (including hagfish, lampreys, and jawed vertebrates) than to lancelets, echinoderms, hemichordates, Xenoturbella or other invertebrates.
The clade consisting of tunicates and vertebrates is called Olfactores.
The Tunicata contain roughly 3,051 described species, traditionally divided into these classes:
Ascidiacea (Aplousobranchia, Phlebobranchia, and Stolidobranchia)
Thaliacea (Pyrosomida, Doliolida, and Salpida)
Appendicularia (Copelata)
Members of the Sorberacea were included in Ascidiacea in 2011 as a result of rDNA sequencing studies. Although the traditional classification is provisionally accepted, newer evidence suggests the Ascidiacea are an artificial group of paraphyletic status. A close relationship between Thaliacea and Ascidiacea, with the former possibly emerging from the latter, had been proposed as early as the start of the 20th century under the name of Acopa.
The following cladogram is based on the 2018 phylogenomic study of Delsuc and colleagues.
Fossil record
Undisputed fossils of tunicates are rare. The best known and earliest unequivocally identified species is Shankouclava shankouense from the Lower Cambrian Maotianshan Shale at Shankou village, Anning, near Kunming (South China). There is also a common bioimmuration (Catellocaula vallata) of a possible tunicate found in Upper Ordovician bryozoan skeletons of the upper midwestern United States. A well-preserved Cambrian fossil, Megasiphon thylakos, shows that the tunicate basic body design had already been established 500 million years ago.
Three enigmatic species were also found from the Ediacaran period – Ausia fenestrata from the Nama Group of Namibia, the sac-like Yarnemia ascidiformis, and one from a second new Ausia-like genus from the Onega Peninsula of northern Russia, Burykhia hunti. Results of a new study have shown possible affinity of these Ediacaran organisms to the ascidians. Ausia and Burykhia lived in shallow coastal waters slightly more than 555 to 548 million years ago, and are believed to be the oldest evidence of the chordate lineage of metazoans. The Russian Precambrian fossil Yarnemia is identified as a tunicate only tentatively, because its fossils are nowhere near as well-preserved as those of Ausia and Burykhia, so this identification has been questioned.
Fossils of tunicates are rare because their bodies decay soon after death, but in some tunicate families, microscopic spicules are present, which may be preserved as microfossils. These spicules have occasionally been found in Jurassic and later rocks, but, as few palaeontologists are familiar with them, they may have been mistaken for sponge spicules.
In the Permian and the Triassic, there were also forms with a calcareous exoskeleton. At first, they were mistaken for corals.
Hybridization studies
A multi-taxon molecular study in 2010 proposed that sea squirts are descended from a hybrid between a chordate and a protostome ancestor (before the divergence of panarthropods and nematodes). This study was based on a quartet partitioning approach designed to reveal horizontal gene transfer events among metazoan phyla.
Anatomy
Body form
Colonies of tunicates occur in a range of forms, and vary in the degree to which individual organisms, known as zooids, integrate with one another. In the simplest systems, the individual animals are widely separated, but linked together by horizontal connections called stolons, which grow along the seabed. Other species have the zooids growing closer together in a tuft or clustered together and sharing a common base. The most advanced colonies involve the integration of the zooids into a common structure surrounded by the tunic. These may have separate buccal siphons and a single central atrial siphon and may be organized into larger systems, with hundreds of star-shaped units. Often, the zooids in a colony are tiny but very numerous, and the colonies can form large encrusting or mat-like patches.
Body structure
By far the largest class of tunicates is the Ascidiacea. The body of an ascidiacean is surrounded by a test or tunic, from which the subphylum derives its name. This varies in thickness between species but may be tough, resembling cartilage, thin and delicate, or transparent and gelatinous. The tunic is composed of proteins, crosslinked by phenoloxidase reaction, and complex carbohydrates, and includes tunicin, a variety of cellulose. The tunic is unique among invertebrate exoskeletons in that it can grow as the animal enlarges and does not need to be periodically shed. Inside the tunic is the body wall or mantle composed of connective tissue, muscle fibres, blood vessels, and nerves. Two openings are found in the body wall: the buccal siphon at the top through which water flows into the interior, and the atrial siphon on the ventral side through which it is expelled. A large pharynx occupies most of the interior of the body. It is a muscular tube linking the buccal opening with the rest of the gut. It has a ciliated groove known as an endostyle on its ventral surface, and this secretes a mucous net which collects food particles and is wound up on the dorsal side of the pharynx. The gullet, at the lower end of the pharynx, links it to a loop of gut which terminates near the atrial siphon. The walls of the pharynx are perforated by several bands of slits, known as stigmata, through which water escapes into the surrounding water-filled cavity, the atrium. This is criss-crossed by various rope-like mesenteries which extend from the mantle and provide support for the pharynx, preventing it from collapsing, and also hold up the other organs.
The Thaliacea, the other main class of tunicates, is characterised by free-swimming, pelagic individuals. They are all filter feeders using a pharyngeal mucous net to catch their prey. The pyrosomes are bioluminescent colonial tunicates with a hollow cylindrical structure. The buccal siphons are on the outside and the atrial siphons inside. About ten species are known, and all are found in the tropics. The 23 species of doliolids are small animals that are solitary, have the two siphons at opposite ends of their barrel-shaped bodies, and swim by jet propulsion. The 40 species of salps are also small and are found in the surface waters of both warm and cold seas. They also move by jet propulsion, and often form long chains by budding off new individuals.
A third class, the Larvacea (or Appendicularia), is the only group of tunicates to retain their chordate characteristics in the adult state, a product of extensive neoteny. The 70 species of larvaceans superficially resemble the tadpole larvae of amphibians, although the tail is at right angles to the body. The notochord is retained, and the animals, mostly under 1 cm long, are propelled by undulations of the tail. They secrete an external mucous net known as a house, which may completely surround them and is very efficient at trapping planktonic particles.
Physiology and internal anatomy
Like all other chordates, tunicates have a notochord during their early development, but it is lost by the time they have completed their metamorphosis. As members of the Chordata, they are true Coelomata with endoderm, ectoderm, and mesoderm, but they do not develop very clear coelomic body cavities, if any at all. Whether they do or not, by the end of their larval development, all that remain are the pericardial, renal, and gonadal cavities of the adults. Except for the heart, gonads, and pharynx (or branchial sac), the organs are enclosed in a membrane called an epicardium, which is surrounded by the jelly-like mesenchyme.
Ascidian tunicates begin life as a lecithotrophic (non-feeding) mobile larva that resembles a tadpole, with the exception of some members of the families Styelidae and Molgulidae, which have direct development. The latter also include several species with tail-less larval forms. The ascidian larvae very rapidly settle down and attach themselves to a suitable surface, later developing into a barrel-like and usually sedentary adult form. The species in the class Appendicularia are pelagic, and the general larval form is kept throughout life. The class Thaliacea is also pelagic throughout life and may have complex lifecycles. In this class a free-living larval stage is absent: doliolids and pyrosomatids are viviparous–lecithotrophic, and salpids are viviparous–matrotrophic. Only some species of doliolids still have a rudimentary tailed tadpole stage, which is never free-living and lacks a brain.
Tunicates have a well-developed heart and circulatory system. The heart is a double U-shaped tube situated just below the gut. The blood vessels are simple connective tissue tubes, and their blood has several types of corpuscle. The blood may appear pale green, but this is not due to any respiratory pigments, and oxygen is transported dissolved in the plasma. Exact details of the circulatory system are unclear, but the gut, pharynx, gills, gonads, and nervous system seem to be arranged in series rather than in parallel, as happens in most other animals. Every few minutes, the heart stops beating and then restarts, pumping fluid in the reverse direction.
Tunicate blood has some unusual features. In some species of Ascidiidae and Perophoridae, it contains high concentrations of the transition metal vanadium and vanadium-associated proteins in vacuoles in blood cells known as vanadocytes. Some tunicates can concentrate vanadium up to a level ten million times that of the surrounding seawater. It is stored in a +3 oxidation form that requires a pH of less than 2 for stability, and this is achieved by the vacuoles also containing sulfuric acid. The vanadocytes are later deposited just below the outer surface of the tunic, where their presence is thought to deter predation, although it is unclear whether this is due to the presence of the metal or low pH. Other species of tunicates concentrate lithium, iron, niobium, and tantalum, which may serve a similar function. Other tunicate species produce distasteful organic compounds as chemical defenses against predators.
Tunicates lack the kidney-like metanephridial organs typical of deuterostomes. Most have no excretory structures, but rely on the diffusion of ammonia across their tissues to rid themselves of nitrogenous waste, though some have a simple excretory system. The typical renal organ is a mass of large clear-walled vesicles that occupy the rectal loop, and the structure has no duct. Each vesicle is a remnant of a part of the primitive coelom, and its cells extract nitrogenous waste matter from circulating blood. They accumulate the wastes inside the vesicles as urate crystals, and do not have any obvious means of disposing of the material during their lifetimes.
Adult tunicates have a hollow cerebral ganglion, equivalent to a brain, and a hollow structure known as a neural gland. Both originate from the embryonic neural tube and are located between the two siphons. Nerves arise from the two ends of the ganglion; those from the anterior end innervate the buccal siphon and those from the posterior end supply the rest of the body, the atrial siphon, organs, gut and the musculature of the body wall. There are no sense organs but there are sensory cells on the siphons, the buccal tentacles and in the atrium.
Tunicates are unusual among animals in that they produce a large fraction of their tunic and some other structures in the form of cellulose. Cellulose production is so unusual in animals that at first some researchers denied its presence outside plants, but tunicates were later found to possess a functional cellulose-synthesizing enzyme, encoded by a gene horizontally transferred from a bacterium. When, in 1845, Carl Schmidt first announced the presence in the test of some ascidians of a substance very similar to cellulose, he called it "tunicine", but it is now recognized as cellulose.
Feeding
Nearly all adult tunicates are suspension feeders (the larval form usually does not feed), capturing planktonic particles by filtering sea water through their bodies. Ascidians are typical in their digestive processes, but other tunicates have similar systems. Water is drawn into the body through the buccal siphon by the action of cilia lining the gill slits. To obtain enough food, an average ascidian needs to process one body-volume of water per second. This is drawn through a net lining the pharynx which is being continuously secreted by the endostyle. The net is made of sticky mucus threads with holes about 0.5 μm in diameter which can trap planktonic particles including bacteria. The net is rolled up on the dorsal side of the pharynx, and it and the trapped particles are drawn into the esophagus. The gut is U-shaped and also ciliated to move the contents along. The stomach is an enlarged region at the lowest part of the U-bend. Here, digestive enzymes are secreted and a pyloric gland (absent in appendicularians) adds further secretions. After digestion, the food is moved on through the intestine, where absorption takes place, and the rectum, where undigested remains are formed into faecal pellets or strings. The anus opens into the dorsal or cloacal part of the peribranchial cavity near the atrial siphon. Here, the faeces are caught up by the constant stream of water which carries the waste to the exterior. The animal orientates itself to the current in such a way that the buccal siphon is always upstream and does not draw in contaminated water.
Some ascidians that live on soft sediments are detritivores. A few deepwater species, such as Megalodicopia hians, are sit-and-wait predators, trapping tiny crustacea, nematodes, and other small invertebrates with the muscular lobes which surround their buccal siphons. Certain tropical species in the family Didemnidae have symbiotic green algae or cyanobacteria in their tunics, and one of these symbionts, Prochloron, is unique to tunicates. Excess photosynthetic products are assumed to be available to the host.
Life cycle
Ascidians are almost all hermaphrodites and each has a single ovary and testis, either near the gut or on the body wall. In some solitary species, sperm and eggs are shed into the sea and the larvae are planktonic. In others, especially colonial species, sperm is released into the water and drawn into the atria of other individuals with the incoming water current. Fertilization takes place here and the eggs are brooded through their early developmental stages. Some larval forms appear very much like primitive chordates with a notochord (stiffening rod) and superficially resemble small tadpoles. These swim by undulations of the tail and may have a simple eye, an ocellus, and a balancing organ, a statocyst.
When sufficiently developed, the larva of the sessile species finds a suitable rock and cements itself in place. The larval form is not capable of feeding, though it may have a rudimentary digestive system, and is only a dispersal mechanism. Many physical changes occur to the tunicate's body during metamorphosis, one of the most significant being the reduction of the cerebral ganglion, which controls movement and is the equivalent of the vertebrate brain. From this comes the common saying that the sea squirt "eats its own brain". However, the adult does possess a cerebral ganglion adapted to lack of self-locomotion. In the Thaliacea, the larval stage is rudimentary or suppressed, and the adults are pelagic (swimming or drifting in the open sea). Colonial forms also increase the size of the colony by budding off new individuals to share the same tunic.
Pyrosome colonies grow by budding off new zooids near the posterior end of the colony. Sexual reproduction starts within a zooid with an internally fertilized egg. This develops directly into an oozooid without any intervening larval form. This buds precociously to form four blastozooids which become detached in a single unit when the oozoid disintegrates. The atrial siphon of the oozoid becomes the exhalent siphon for the new, four-zooid colony.
Doliolids have a very complex life cycle that includes various zooids with different functions. The sexually reproducing members of the colony are known as gonozooids. Each one is a hermaphrodite with the eggs being fertilised by sperm from another individual. The gonozooid is viviparous, and at first, the developing embryo feeds on its yolk sac before being released into the sea as a free-swimming, tadpole-like larva. This undergoes metamorphosis in the water column into an oozooid. This is known as a "nurse" as it develops a tail of zooids produced by budding asexually. Some of these are known as trophozooids, have a nutritional function, and are arranged in lateral rows. Others are phorozooids, have a transport function, and are arranged in a single central row. Other zooids link to the phorozooids, which then detach themselves from the nurse. These zooids develop into gonozooids, and when these are mature, they separate from the phorozooids to live independently and start the cycle over again. Meanwhile, the phorozooids have served their purpose and disintegrate. The asexual phase in the lifecycle allows the doliolid to multiply very rapidly when conditions are favourable.
Salps also have a complex lifecycle with an alternation of generations. In the solitary life history phase, an oozoid reproduces asexually, producing a chain of tens or hundreds of individual zooids by budding along the length of a stolon. The chain of salps is the 'aggregate' portion of the lifecycle. The aggregate individuals, known as blastozooids, remain attached together while swimming and feeding and growing larger. The blastozooids are sequential hermaphrodites. An egg in each is fertilized internally by a sperm from another colony. The egg develops in a brood sac inside the blastozooid and has a placental connection to the circulating blood of its "nurse". When it fills the blastozooid's body, it is released to start the independent life of an oozooid.
Larvaceans only reproduce sexually. They are protandrous hermaphrodites, except for Oikopleura dioica which is gonochoric, and a larva resembles the tadpole larva of ascidians. Once the trunk is fully developed, the larva undergoes "tail shift", in which the tail moves from a rearward position to a ventral orientation and twists through 90° relative to the trunk. The larva consists of a small, fixed number of cells, and grows by enlargement of these rather than cell division. Development is very rapid and only takes seven hours for a zygote to develop into a house-building juvenile starting to feed.
During embryonic development, tunicates exhibit determinate cleavage, where the fate of the cells is set early on with reduced cell numbers and genomes that are rapidly evolving. In contrast, the amphioxus and vertebrates show cell determination relatively late in development and cell cleavage is indeterminate. The genome evolution of amphioxus and vertebrates is also relatively slow.
Promotion of out-crossing
Ciona intestinalis (class Ascidiacea) is a hermaphrodite that releases sperm and eggs into the surrounding seawater almost simultaneously. It is self-sterile, and thus has been used for studies on the mechanism of self-incompatibility. Self/non-self-recognition molecules play a key role in the process of interaction between sperm and the vitelline coat of the egg. It appears that self/non-self recognition in ascidians such as C. intestinalis is mechanistically similar to self-incompatibility systems in flowering plants. Self-incompatibility promotes out-crossing, and thus provides the adaptive advantage at each generation of the masking of deleterious recessive mutations (i.e. genetic complementation) and the avoidance of inbreeding depression.
Botryllus schlosseri (class Ascidiacea) is a colonial tunicate, a member of the only group of chordates that are able to reproduce both sexually and asexually. B. schlosseri is a sequential (protogynous) hermaphrodite, and in a colony, eggs are ovulated about two days before the peak of sperm emission. Thus self-fertilization is avoided, and cross-fertilization is favored. Although avoided, self-fertilization is still possible in B. schlosseri. Self-fertilized eggs develop with a substantially higher frequency of anomalies during cleavage than cross-fertilized eggs (23% vs. 1.6%). Also a significantly lower percentage of larvae derived from self-fertilized eggs metamorphose, and the growth of the colonies derived from their metamorphosis is significantly lower. These findings suggest that self-fertilization gives rise to inbreeding depression associated with developmental deficits that are likely caused by expression of deleterious recessive mutations.
A model tunicate
Oikopleura dioica (class Appendicularia) is a semelparous organism, reproducing only once in its lifetime. It employs an original reproductive strategy in which the entire female germ-line is contained within an ovary that is a single giant multinucleate cell termed the "coenocyst". O. dioica can be maintained in laboratory culture, and is of growing interest as a model organism because of its phylogenetic position within the closest sister group to vertebrates.
Invasive species
Over the past few decades, tunicates (notably of the genera Didemnum and Styela) have been invading coastal waters in many countries. The carpet tunicate (Didemnum vexillum) has taken over an area of the seabed on the Georges Bank off the northeast coast of North America, covering stones, molluscs, and other stationary objects in a dense mat. D. vexillum, Styela clava and Ciona savignyi have appeared and are thriving in Puget Sound and Hood Canal in the Pacific Northwest.
Invasive tunicates usually arrive as fouling organisms on the hulls of ships, but may also be introduced as larvae in ballast water. Another possible means of introduction is on the shells of molluscs brought in for marine cultivation. Current research indicates many tunicates previously thought to be indigenous to Europe and the Americas are, in fact, invaders. Some of these invasions may have occurred centuries or even millennia ago. In some areas, tunicates are proving to be a major threat to aquaculture operations.
Use by humans
Medical uses
Tunicates contain a host of potentially useful chemical compounds, including:
Plitidepsin, a didemnin effective against various types of cancer; as of late January 2021 undergoing Phase III trials as a treatment for COVID-19
Trabectedin, an FDA-approved anticancer drug
Tunicates are able to correct their own cellular abnormalities over a series of generations, and a similar regenerative process may be possible for humans. The mechanisms underlying the phenomenon may lead to insights about the potential of cells and tissues to be reprogrammed and to regenerate compromised human organs.
As food
Various Ascidiacea species are consumed as food around the world. The piure (Pyura chilensis) is used in the cuisine of Chile, both raw and in seafood stews. In Japan and Korea, the sea pineapple (Halocynthia roretzi) is the main species eaten. It is cultivated on dangling cords made of palm fronds. In 1994, over 42,000 tons were produced, but since then, mass mortality events have occurred among the farmed sea squirts (the tunics becoming soft), and only 4,500 tons were produced in 2004.
Other uses
The use of tunicates as a source of biofuel is being researched. The cellulose body wall can be broken down and converted into ethanol, and other parts of the animal are protein-rich and can be converted into fish feed. Culturing tunicates on a large scale may be possible and the economics of doing so are attractive. As tunicates have few predators, their removal from the sea may not have profound ecological impacts. Being sea-based, their production does not compete with food production as does the cultivation of land-based crops for biofuel projects.
Some tunicates are used as model organisms. Ciona intestinalis and Ciona savignyi have been used for developmental studies. Both species' mitochondrial and nuclear genomes have been sequenced. The nuclear genome of the appendicularian Oikopleura dioica appears to be one of the smallest among metazoans and this species has been used to study gene regulation and the evolution and development of chordates.
War elephant
A war elephant is an elephant that is trained and guided by humans for combat purposes. Historically, the war elephant's main use was to charge the enemy, break their ranks, and instill terror and fear. Elephantry is a term for specific military units using elephant-mounted troops. In modern times, war elephants on the battlefield were effectively made redundant by the invention of motor vehicles, particularly tanks.
Description
War elephants played a critical role in several key battles in antiquity, especially in ancient India. While seeing limited and periodic use in Ancient China, they became a permanent fixture in armies of historical kingdoms in Southeast Asia. During classical antiquity they were also used in ancient Persia and in the Mediterranean world within armies of Macedon, Hellenistic Greek states, the Roman Republic and later Empire, and Ancient Carthage in North Africa. In some regions they maintained a firm presence on the battlefield throughout the Medieval era. However, their use declined with the spread of firearms and other gunpowder weaponry in early modern warfare. After this, war elephants became restricted to non-combat engineering and labour roles, as well as minor ceremonial duties.
Taming
An elephant trainer, rider, or keeper is called a mahout. Mahouts were responsible for capturing and handling elephants. To accomplish this, they used metal chains and a specialized hook called an ankus, or 'elephant goad'. According to Chanakya, as recorded in the Arthashastra, the mahout would first have to get the elephant used to being led. The elephant would learn to raise its legs to help a rider climb on. Then the elephants were taught to run and maneuver around obstacles, and to move in formation. Only then were the elephants fit to learn how to systematically trample and charge enemies.
The first elephant species to be tamed was the Asian elephant, for use in agriculture. Elephant taming – not full domestication, as they are still captured in the wild, rather than being bred in captivity – may have begun in any of three different places. The oldest evidence comes from the Indus Valley civilization, around 2000 BC. Archaeological evidence for the presence of wild elephants in the Yellow River valley in Shang China may suggest that they also used elephants in warfare. The wild elephant populations of Mesopotamia and China declined quickly because of deforestation and human population growth: by 850 BC the Mesopotamian elephants were extinct, and by 500 BC the Chinese elephants were seriously reduced in numbers and limited to areas well south of the Yellow River.
Capturing elephants from the wild remained a difficult task, but a necessary one given the difficulties of breeding in captivity and the long time required for an elephant to reach sufficient maturity to engage in battle. Sixty-year-old war elephants were always prized as being at the most suitable age for battle service and gifts of elephants of this age were seen as particularly generous. Today an elephant is considered in its prime and at the height of its power between the ages of 25 and 40, yet elephants as old as 80 are used in tiger hunts because they are more disciplined and experienced.
It is commonly thought that all war elephants were male because of males' greater aggression, but it was instead because a female elephant in battle will run from a male; therefore only males could be used in war, whereas female elephants were more commonly used for logistics. According to the First Book of Maccabees, the Seleucids used the "blood of grapes and mulberries" to provoke their war elephants in preparation for battle.
Antiquity
Indian subcontinent
There is uncertainty as to when elephant warfare first started, but it is widely accepted that it began in ancient India. The early Vedic period did not extensively specify the use of elephants in war. However, in the Ramayana, Indra is depicted as riding either Airavata, a mythological elephant, or Uchchaihshravas as his mount. Elephants were widely utilized in warfare by the later Vedic period, in the 6th century BC. The increased conscription of elephants in the military history of India coincides with the expansion of the Vedic kingdoms into the Indo-Gangetic Plain, suggesting its introduction during the intervening period. The practice of riding on elephants in peace and war, by royalty or commoner, was first recorded in the 6th or 5th century BC. This practice is believed to be much older than proper recorded history.
The ancient Indian epics Ramayana and Mahābhārata, dating from the 5th–4th century BC, elaborately depict elephant warfare. Elephants are recognized as an essential component of royal and military processions. In ancient India, the army was initially fourfold (chaturanga), consisting of infantry, cavalry, elephants and chariots. Kings and princes principally rode on chariots, which were considered the most royal, and seldom rode on the backs of elephants. Although viewed as secondary to chariots by royalty, elephants were the preferred vehicle of warriors, especially elite ones. While chariots eventually fell into disuse, the other three arms continued to be valued. According to the rules of engagement set for the Kurukshetra War, two combatants were to duel using the same weapon and the same mount, including elephants. In the Mahābhārata, the akshauhini battle formation consists of a ratio of 1 chariot : 1 elephant : 3 cavalry : 5 infantry soldiers. Many characters in the Mahābhārata were described as skilled in the art of elephant warfare; for example, Duryodhana rides an elephant into battle to bolster the demoralized Kaurava army. Scriptures like the Nikāya and the Vinaya Pitaka assign elephants their proper place in the organization of an army. The Samyutta Nikaya additionally mentions Gautama Buddha being visited by a 'hatthāroho gāmaṇi', the head of a village community bound together by their profession as mercenary soldiers forming an elephant corps.
Ancient Indian kings certainly valued the elephant in war, some stating that an army without elephants is as despicable as a forest without a lion, a kingdom without a king, or as valor unaided by weapons. The use of elephants further increased with the rise of the Mahajanapadas. King Bimbisara (), who began the expansion of the Magadha kingdom, relied heavily on his war elephants. The Mahajanapadas would be conquered by the Nanda Empire under the reign of Mahapadma Nanda. Pliny the Elder and Plutarch also estimated the Nanda Army strength in the east as 200,000 infantry, 80,000 cavalry, 8,000 chariots, and 6,000 war elephants. Alexander the Great would come in contact with the Nanda Empire on the banks of the Beas River and was forced to return due to his army's unwillingness to advance. Even if the numbers and prowess of these elephants were exaggerated by historic accounts, elephants were established firmly as war machines in this period.
Chandragupta Maurya (321–297 BC) formed the Maurya Empire, the largest empire to exist in South Asia. At the height of his power, Chandragupta is said to have wielded an army of 600,000 infantry, 30,000 cavalry, 8,000 chariots and 9,000 war elephants, besides followers and attendants.
In the Mauryan Empire, the 30-member war office was made up of six boards. The sixth board looked after the elephants and was headed by the Gajadhyaksha, the superintendent of elephants. The use of elephants in the Maurya Empire was recorded by Chanakya in the Arthashastra. According to Chanakya, catching, training, and controlling war elephants was one of the most important skills taught by the military academies. He advised Chandragupta to set up forested sanctuaries for the wellness of the elephants, and explicitly conveyed the importance of these sanctuaries. The Maurya Empire would reach its zenith under the reign of Ashoka, who used elephants extensively during his conquests. During the Kalinga War, Kalinga had a standing army of 60,000 infantry, 1,000 cavalry and 700 war elephants. Kalinga was notable for the quality of its war elephants, which were prized by its neighbors for their strength. Later, King Kharavela restored an independent Kalinga into a powerful kingdom using war elephants, as stated in the Hathigumpha ("Elephant Cave") inscription. Following the Indian example, foreign rulers would also adopt the use of elephants.
The Chola Empire of Tamil Nadu also had a very strong elephant force. The Chola emperor Rajendra Chola had an armored elephant force, which played a major role in his campaigns.
Sri Lanka made extensive use of elephants and also exported them, with Pliny the Elder stating that Sri Lankan elephants were larger, fiercer and better for war than local elephants. This superiority, as well as the proximity of the supply to seaports, made Sri Lanka's elephants a lucrative trading commodity. Sri Lankan historical records indicate that elephants were used as mounts for kings leading their men on the battlefield, with individual mounts recorded by name: the elephant Kandula was King Dutugamunu's mount, and Maha Pambata, 'Big Rock', the mount of King Ellalan, during their historic encounter on the battlefield in 200 BC.
Eastern Asia
Elephants were used for warfare in China by a handful of southern dynasties. The state of Chu used elephants in 506 BC against Wu by tying torches to their tails and sending them into the ranks of the enemy soldiers, but the attempt failed. In December 554 AD, the Liang dynasty used armoured war elephants, carrying towers, against Western Wei; they were defeated by a volley of arrows. The Southern Han dynasty is the only state in Chinese history to have kept a permanent corps of war elephants. These elephants were able to carry a tower with some ten people on their backs. They were used successfully during the Han invasion of Ma Chu in 948. In 970, the Song dynasty invaded Southern Han, and Song crossbowmen readily routed the Han elephants on 23 January 971, during the taking of Shao. That was the last time elephants were used in Chinese warfare, although the Wanli Emperor (r. 1572–1620) did keep a herd of elephants capable of carrying a tower and eight men, which he showed to his guests in 1598. These elephants were probably not native to China and were delivered to the Ming dynasty by Southeast Asian countries such as Siam. During the Revolt of the Three Feudatories, the rebels used elephants against the Qing dynasty, but the Qing Bannermen shot them with so many arrows that they "resembled porcupines" and repelled the elephant charge.
Chinese armies faced off against war elephants in Southeast Asia, such as during the Sui–Lâm Ấp war (605), Lý–Song War (1075–1077), Ming–Mong Mao War (1386–1388), and Ming–Hồ War (1406–1407). In 605, the Champa kingdom of Lâm Ấp in what is now southern Vietnam used elephants against the invading army of China's Sui dynasty. The Sui army dug pits and lured the elephants into them and shot them with crossbows, causing the elephants to turn back and trample their own army. In 1075, the Song defeated elephants deployed on the borderlands of Đại Việt during the Lý–Song War. The Song forces used scythed polearms to cut the elephants' trunks, causing them to trample their own troops. During the Mong Mao campaign, the elephants were routed by an assortment of gunpowder projectiles. In the war against the Hồ dynasty, Ming troops covered their horses with lion masks to scare the elephants and shot them with firearms. The elephants all trembled with fear and were wounded by the guns and arrows, causing the Viet army to panic.
Achaemenid Persia, Macedonia and Hellenistic Greek states
From India, military thinking on the use of war elephants spread westwards to the Persian Achaemenid Empire, where they were used in several campaigns. They in turn came to influence the campaigns of Alexander the Great, king of Macedonia in Hellenistic Greece. The first confrontation between Europeans and the Persian war elephants occurred at Alexander's Battle of Gaugamela (331 BC), where the Persians deployed fifteen elephants. These elephants were placed at the centre of the Persian line and made such an impression on Alexander's army that he felt the need to sacrifice to Phobos, the God of Fear, the night before the battle – but according to some sources the elephants ultimately failed to deploy in the final battle owing to their long march the day before. Alexander won resoundingly at Gaugamela, but was deeply impressed by the enemy elephants and took these first fifteen into his own army, adding to their number during his capture of the rest of Persia.
By the time Alexander reached the borders of India five years later, he had a substantial number of elephants under his own command. When it came to defeating Porus, who ruled in what is now the Punjab region, Alexander found himself facing a force of between 85 and 100 war elephants at the Battle of the Hydaspes. Preferring stealth and mobility to sheer force, Alexander manoeuvred and engaged with just his infantry and cavalry, ultimately defeating Porus' forces, including his elephant corps, albeit at some cost. Porus for his part placed his elephants individually, at long intervals from each other, a short distance in front of his main infantry line, in order to scare off Macedonian cavalry attacks and aid his own infantry in their struggle against the phalanx. The elephants caused many losses with their tusks fitted with iron spikes, or by lifting enemies with their trunks and trampling them.
Arrian described the subsequent fight: "[W]herever the beasts could wheel around, they rushed forth against the ranks of infantry and demolished the phalanx of the Macedonians, dense as it was."
The Macedonians adopted the standard ancient tactic for fighting elephants, loosening their ranks to allow the elephants to pass through and assailing them with javelins as they tried to wheel around; they managed to pierce the unarmoured elephants' legs. The panicked and wounded elephants turned on the Indians themselves; the mahouts were armed with poisoned rods to kill the beasts but were slain by javelins and archers.
Looking further east again, Alexander could see that the emperors and kings of the Nanda Empire and Gangaridai could deploy between 3,000 and 6,000 war elephants. Such a force was many times larger than the number of elephants employed by the Persians and Greeks, which probably discouraged Alexander's army and effectively halted their advance into India. On his return, Alexander established a force of elephants to guard his palace at Babylon, and created the post of elephantarch to lead his elephant units.
The successful military use of elephants spread further. The successors to Alexander's empire, the Diadochi, used hundreds of Indian elephants in their wars, with the Seleucid Empire being particularly notable for its use of the animals, which were still largely brought from India. Indeed, the Seleucid–Mauryan war of 305–303 BC ended with the Seleucids ceding vast eastern territories in exchange for 500 war elephants – a small part of the Mauryan forces, which included up to 9,000 elephants by some accounts. The Seleucids put their new elephants to good use at the Battle of Ipsus four years later, where they blocked the return of the victorious Antigonid cavalry, allowing the Antigonid phalanx to be isolated and defeated.
The first use of war elephants in Europe was made in 318 BC by Polyperchon, one of Alexander's generals, when he besieged Megalopolis in the Peloponnesus during the wars of the Diadochi. He used 60 elephants brought from Asia with their mahouts. A veteran of Alexander's army named Damis helped the besieged Megalopolitans defend themselves against the elephants, and Polyperchon was eventually defeated. Those elephants were subsequently taken by Cassander and transported, partly by sea, to other battlefields in Greece; it is assumed that Cassander constructed the first sea vessels for transporting elephants. Some of the elephants died of starvation in 316 BC in the besieged city of Pydna in Macedonia. Others of Polyperchon's elephants were used in various parts of Greece by Cassander.
Although the use of war elephants in the western Mediterranean is most famously associated with the wars between Carthage and Roman Republic, the introduction of war elephants there was primarily the result of an invasion by Hellenistic era Epirus across the Adriatic Sea. King Pyrrhus of Epirus brought twenty elephants to attack Roman Italy at the battle of Heraclea in 280 BC, leaving fifty additional animals, on loan from Ptolemaic Pharaoh Ptolemy II, on the mainland. The Romans were unprepared for fighting elephants, and the Epirot forces routed the Romans. The next year, the Epirots again deployed a similar force of elephants, attacking the Romans at the battle of Asculum. This time the Romans came prepared with flammable weapons and anti-elephant devices: these were ox-drawn wagons, equipped with long spikes to wound the elephants, pots of fire to scare them, and accompanying screening troops who would hurl javelins at the elephants to drive them away. A final charge of Epirot elephants won the day again, but this time Pyrrhus had suffered very heavy casualties – a Pyrrhic victory.
The Seleucid king Antiochus V Eupator, who like his father contended with Ptolemaic Egypt's ruler Ptolemy VI for control of Syria, invaded Judea in 161 BC with eighty elephants (some sources claim thirty-two), some of which were clad in armoured breastplates, in an attempt to subdue the Jews who had revolted during the Maccabean Revolt. In the ensuing battle, near the mountainous straits adjacent to Beth Zachariah, Eleazar, brother of Judas Maccabeus, attacked the largest of the elephants, piercing its underside and causing it to collapse upon him, killing him under its weight.
North Africa
The North African elephant was a significant animal in Nubian culture. Elephants were depicted on the walls of temples and on Meroitic lamps. Kushite kings also utilized war elephants, which are believed to have been kept and trained in the "Great Enclosure" at Musawwarat al-Sufa. The Kingdom of Kush provided these war elephants to the Egyptians, Ptolemies and Syrians.
Ptolemaic Egypt and Carthage began acquiring African elephants for the same purpose, as did Numidia and the Kingdom of Kush. The animal used was the North African elephant (Loxodonta africana pharaohensis), which would become extinct from overexploitation. Compared with the Asian elephants used by the Seleucid Empire on the east of the Mediterranean region, particularly the notably tall Syrian elephants, these animals were smaller, harder to tame, and unable to swim deep rivers. It is likely that at least some Syrian elephants were traded abroad. The favourite, and perhaps last surviving, elephant of Hannibal's crossing of the Alps was an animal named Surus ("the Syrian"), which may have been of Syrian stock, though the evidence remains ambiguous.
Since the late 1940s, a strand of scholarship has argued that the African forest elephants used by Numidia, the Ptolemies and the military of Carthage did not carry howdahs or turrets in combat, perhaps owing to the physical weakness of the species. Some allusions to turrets in ancient literature are certainly anachronistic or poetic invention, but other references are less easily discounted. There is contemporary testimony that the army of Juba I of Numidia included turreted elephants in 46 BC. This is backed by the image of a turreted African elephant used on the coinage of Juba II. This also appears to be the case with Ptolemaic armies: Polybius reports that at the battle of Raphia in 217 BC the elephants of Ptolemy IV carried turrets; these elephants were significantly smaller than the Asian elephants fielded by the Seleucids and so presumably African forest elephants. There is also evidence that Carthaginian war elephants were furnished with turrets and howdahs in certain military contexts.
Farther south, tribes would have had access to the African savanna elephant (Loxodonta africana oxyotis). Although much larger than either the African forest elephant or the Asian elephant, these proved difficult to tame for war purposes and were not used extensively. Asian elephants were traded westwards to the Mediterranean markets with Sri Lankan elephants being particularly preferred for war.
Perhaps inspired by the victories of Pyrrhus of Epirus, Carthage developed its own use of war elephants and deployed them extensively during the First and Second Punic Wars. The performance of the Carthaginian elephant corps was mixed, illustrating the need for proper tactics to take advantage of the elephant's strength and cover its weaknesses. At Adys in 255 BC, the Carthaginian elephants were ineffective due to the terrain, while at the battle of Panormus in 251 BC the Romans' velites were able to terrify the unsupported Carthaginian elephants, which fled from the field. At the battle of Tunis the charge of the Carthaginian elephants helped to disorder the Roman legions, allowing the Carthaginian phalanx to stand fast and defeat them. During the Second Punic War, Hannibal led an army of war elephants across the Alps. Many of them perished in the harsh conditions, but the surviving elephants were successfully used in the battle of Trebia, where they panicked the Roman cavalry and Gallic allies. The Romans eventually developed effective anti-elephant tactics, leading to Hannibal's defeat at his final battle of Zama in 202 BC; his elephant charge, unlike the one at the battle of Tunis, was ineffective because the disciplined Roman maniples made way for the animals to pass.
Rome
Rome brought back many elephants at the end of the Punic Wars, and used them in its campaigns for many years afterwards. The conquest of Greece saw many battles in which the Romans deployed war elephants, including the invasion of Macedonia in 199 BC, the battle of Cynoscephalae in 197 BC, the battle of Thermopylae, and the battle of Magnesia in 190 BC, during which Antiochus III's fifty-four elephants took on the Roman force of sixteen. In later years the Romans deployed twenty-two elephants at Pydna in 168 BC. The role of the elephant force at Cynoscephalae was particularly decisive, as their quick charge shattered the unformed Macedonian left wing, allowing the Romans to encircle and destroy the victorious Macedonian right. A similar event also occurred at Pydna. The Romans' successful use of war elephants against the Macedonians might be considered ironic, given that it was Pyrrhus who first taught them the military potential of elephants.
Elephants also featured throughout the Roman campaign against the Lusitanians and Celtiberians in Hispania. During the Second Celtiberian War, Quintus Fulvius Nobilior was helped by ten elephants sent by king Masinissa of Numidia. He deployed them against the Celtiberian forces of Numantia, but a falling stone hit one of the elephants, which panicked and frightened the rest, turning them against the Roman forces. After the subsequent Celtiberian counterattack, the Romans were forced to withdraw. Later, Quintus Fabius Maximus Servilianus marched against Viriathus with another ten elephants sent by king Micipsa. However, the Lusitanian style of ambushes in narrow terrain ensured that his elephants did not play an important role in the conflict, and Servilianus was eventually defeated by Viriathus at the city of Erisana.
The Romans used a war elephant in their first invasion of Britain, one ancient writer recording that "Caesar had one large elephant, which was equipped with armour and carried archers and slingers in its tower. When this unknown creature entered the river, the Britons and their horses fled and the Roman army crossed over" – although he may have confused this incident with the use of a war elephant in Claudius' final conquest of Britain. At least one elephant skeleton with flint weapons found in England was initially misidentified as one of these elephants, but later dating proved it to be a mammoth skeleton from the Stone Age.
In the African campaign of the Roman civil war of 49–45 BC, the army of Metellus Scipio used elephants against Caesar's army at the battle of Thapsus. Before the battle, Scipio trained his elephants by placing one line of slingers in front of them to throw rocks at them, and another line at their rear to do the same, so as to accustom the animals to advance in only one direction and prevent them from turning their backs under frontal attack and charging into his own lines. The author of De Bello Africano nevertheless admits the enormous effort and time required to accomplish this.
By the time of Claudius, such animals were being used by the Romans only in small numbers – the last significant use of war elephants in the Mediterranean was against the Romans at the battle of Thapsus, 46 BC, where Julius Caesar armed his fifth legion (Alaudae) with axes and commanded his legionaries to strike at the elephants' legs. The legion withstood the charge, and the elephant became its symbol. The remainder of the elephants seem to have been thrown into panic by Caesar's archers and slingers.
Parthia and Sassanian Persia
The Parthian Empire occasionally used war elephants in their battles against the Roman Empire, but elephants were of substantial importance in the army of the subsequent Sassanid Empire. The Sasanian war elephants are recorded in engagements against the Romans, such as during Julian's invasion of Persia. Other examples include the Battle of Vartanantz in 451 AD, at which the Sassanid elephants terrified the Armenians, and the Battle of al-Qādisiyyah of 636 AD, in which a unit of thirty-three elephants was used against the invading Arab Muslims.
The Sassanid elephant corps held primacy amongst the Sassanid cavalry forces and was recruited from India. The elephant corps was under a special chief, known as the Zend-hapet, meaning "Commander of the Indians", either because the animals came from that country, or because they were managed by natives of Hindustan. The Sassanid elephant corps was never on the same scale as others further east, and after the fall of the Sassanid Empire the use of war elephants died out in the region.
Aksumite Empire
The Kingdom of Aksum in what is now Ethiopia and Eritrea made use of war elephants in 525 AD during the invasion of the Himyarite Kingdom in the Arabian peninsula. The war elephants used by the Aksumite army were African savanna elephants, a significantly larger and more temperamental species. War elephants were again put to use by an Aksumite army in 570 in a military expedition against the Quraysh of Mecca.
Middle Ages
The Kushan Empire conquered most of Northern India and adopted war elephants when levying troops as it expanded into the Indian subcontinent. The Weilüe describes how the population of Eastern India had once ridden elephants into battle, but by its time provided military service and taxes to the Yuezhi (Kushans). The Hou Hanshu additionally describes the Kushans as acquiring riches, including elephants, as part of their conquests. The emperor Kanishka assembled a great army from his subject nations, including elephants from India. He planned to attack the Tarim kingdoms, and sent a vanguard of Indian troops led by white elephants. However, when crossing the Pamir Mountains the elephants and horses of the vanguard were unwilling to advance. Kanishka is then said to have had a religious revelation and rejected violence.
The Gupta Empire made extensive use of elephants in war and greatly expanded under the reign of Samudragupta. Local squads, each consisting of one elephant, one chariot, three armed cavalrymen, and five foot soldiers, protected Gupta villages from raids and revolts; in times of war, these squads joined together to form a powerful imperial army. The Gupta Empire employed the 'Mahapilupati', an officer in charge of elephants. Emperors such as Kumaragupta struck coins depicting themselves as elephant riders and lion slayers.
Harsha established hegemony over most of North India. The Harshacharita, composed by Bāṇabhaṭṭa, describes the army under Harsha's rule. Much like that of the Gupta Empire, his military consisted of infantry, cavalry, and elephants. Harsha received war elephants as tribute and presents from vassals. Some elephants were also obtained by forest rangers from the jungles, and others were taken from defeated armies. Bana additionally details the diet of the elephants, recording that each consumed 600 pounds of fodder consisting of tree branches along with mangoes and sugarcane.
The Chola dynasty and the Western Chalukya Empire maintained a large number of war elephants in the 11th and 12th centuries. The war elephants of the Chola dynasty carried on their backs fighting towers filled with soldiers who would shoot arrows at long range. The army of the Pala Empire was noted for its huge elephant corps, with estimates ranging from 5,000 to 50,000.
The Ghaznavids were the first amongst the Islamic dynasties to incorporate war elephants into their tactical theories, and they used a large number of elephants in their battles. The Ghaznavids acquired their elephants as tribute from Hindu princes and as war plunder. The sources usually list the number of beasts captured, and these frequently ran into the hundreds, such as 350 from Qanauj and 185 from Mahaban in 409/1018-19, and 580 from the Raja Ganda in 410/1019-20. Utbi records that the Thanesar expedition of 405/1014-15 was provoked by Mahmud's desire to obtain some of the special breed of Sri Lankan elephants excellent in war.
In 1526, Babur, a descendant of Timur, invaded India and established the Mughal Empire. Babur introduced firearms and artillery into Indian warfare. He destroyed the army of Ibrahim Lodi at the First Battle of Panipat and the army of Rana Sanga in 1527 at the Battle of Khanua. The great Mughal emperor Akbar (r. 1556–1605 AD) had 32,000 elephants in his stables. Jahangir (r. 1605–1627 AD) was a great connoisseur of elephants and increased the number in service. Jahangir was stated to have 113,000 elephants in captivity: 12,000 in active army service, 1,000 to supply fodder to these animals, and another 100,000 elephants to carry courtiers, officials, attendants and baggage.
King Rajasinghe I laid siege to the Portuguese fort at Colombo, Sri Lanka, in 1558 with an army containing 2,200 elephants, used for logistics and siege work. The Sri Lankans had continued their proud traditions in capturing and training elephants from ancient times. The officer in charge of the royal stables, including the capture of elephants, was called the Gajanayake Nilame, while the post of Kuruve Lekham controlled the Kuruwe or elephant men. The training of war elephants was the duty of the Kuruwe clan who came under their own Muhandiram, a Sri Lankan administrative post.
In Islamic history there is a significant event known as the ‘Am al-Fil ("Year of the Elephant"), approximately equating to 570 AD. At that time Abraha, the Christian ruler of Yemen, marched upon the Ka‘bah in Mecca, intending to demolish it. He had a large army, which included one or more elephants (as many as eight, in some accounts). However, the single or lead elephant, whose name was 'Mahmud', is said to have stopped at the boundary around Mecca and refused to enter – which was taken by both the Meccans and their Yemenite foes as a serious omen. According to Islamic tradition, it was in this year that Muhammad was born.
In the Middle Ages, elephants were seldom used in Europe. Charlemagne took his one elephant, Abul-Abbas, when he went to fight the Danes in 804, and the Crusades gave Holy Roman Emperor Frederick II the opportunity to capture an elephant in the Holy Land, the same animal later being used in the capture of Cremona in 1214. The use of these individual animals was more symbolic than practical, however, given the food and water an elephant consumed in foreign lands and the harsh conditions of the Crusades.
The Mongols faced war elephants in Khorazm, Burma, Siam, Vietnam, Cambodia and India throughout the 13th century. Despite their unsuccessful campaigns in Vietnam and India, the Mongols defeated war elephants outside Samarkand by using catapults and mangonels, and during the Mongol invasions of Burma in 1277–1287 and 1300–1302 by showering arrows from their famous composite bows. Genghis and Kublai both retained captured elephants as part of their entourage. Another Central Asian invader, Timur, faced similar challenges a century later. In the Sack of Delhi, Timur's army faced more than one hundred Indian elephants in battle and almost lost because of the fear they caused amongst his troops. Historical accounts say that the Timurids ultimately won by an ingenious strategy: Timur tied flaming straw to the backs of his camels before the charge. The smoke made the camels run forward, scaring the elephants, which crushed their own troops in their efforts to retreat. Another account of the campaign, by Ahmed ibn Arabshah, reports that Timur used oversized caltrops to halt the elephants' charge. Later, the Timurid leader used the captured animals against the Ottoman Empire.
In Southeast Asia, the powerful Khmer Empire had come to regional dominance by the 9th century AD, drawing heavily on the use of war elephants. Uniquely, the Khmer military deployed double crossbows on the tops of their elephants. With the collapse of Khmer power in the 15th century, the successor regional powers of Burma (now Myanmar) and Siam (now Thailand) also adopted the widespread use of war elephants. In many battles of the period it was the practice for leaders to fight each other personally in elephant duels. One famous battle occurred when the Burmese army attacked Siam's Kingdom of Ayutthaya. The war may have been concluded when the Burmese crown prince Mingyi Swa was killed by the Siamese King Naresuan in personal combat on elephant-back in 1593, though this duel may be apocryphal.
In Thailand, the king or general rode on the elephant's neck and carried a ngaw, a long pole with a sabre at the end, plus a metal hook for controlling the elephant. Sitting behind him on a howdah was a signaller, who signalled by waving a pair of peacock feathers. Above the signaller were the chatras, progressively stacked circular canopies whose number signified the rank of the rider. Finally, behind the signaller on the elephant's back was the steerer, who steered via a long pole and may also have carried a short musket and a sword.
In Malaysia, 20 elephants battled the Portuguese during the Capture of Malacca (1511).
The Chinese continued to reject the use of war elephants throughout the period, with the notable exception of the Southern Han during the 10th century AD – the "only nation on Chinese soil ever to maintain a line of elephants as a regular part of its army". This anomaly in Chinese warfare is explained by the geographical proximity and close cultural links of the Southern Han to Southeast Asia. The military officer who commanded these elephants was given the title "Legate Digitant and Agitant of the Gigantic Elephants". Each elephant supported a wooden tower that could allegedly hold ten or more men. For a brief time, war elephants played a vital role in Southern Han victories such as the invasion of Chu in 948 AD, but the Southern Han elephant corps was ultimately soundly defeated at Shao in 971 AD by crossbow fire from the troops of the Song dynasty. As one academic has put it, "thereafter this exotic introduction into Chinese culture passed out of history, and the tactical habits of the North prevailed". However, as late as the Ming dynasty, and as far north as Beijing, there were still records of elephants being used in Chinese warfare, namely in 1449, when a Vietnamese contingent of war elephants helped the Ming dynasty defend the city from the Mongols.
Modern era
With the advent of gunpowder warfare in the late 15th century, the balance of advantage for war elephants on the battlefield began to change. While muskets had limited impact on elephants, which could withstand numerous volleys, cannon fire was a different matter entirely: an animal could easily be knocked down by a single shot. With elephants still being used to carry commanders on the battlefield, they became even more tempting targets for enemy artillery.
Nonetheless, in south-east Asia the use of elephants on the battlefield continued up until the end of the 19th century. One of the major difficulties in the region was terrain, and elephants could in many cases cross difficult terrain more easily than horse cavalry. Burmese forces used war elephants against the Chinese in the Sino-Burmese War, where they routed the Chinese cavalry. The Burmese used them again at the Battle of Danubyu during the First Anglo-Burmese War, where the elephants were easily repulsed by Congreve rockets deployed by British forces. The Siamese Army continued utilising war elephants armed with jingals up until the Franco-Siamese conflict of 1893, while the Vietnamese used them in battle as late as 1885, during the Sino-French War. During the mid to late 19th century, British forces in India possessed specialised elephant batteries to haul large siege artillery pieces over ground unsuitable for oxen.
Into the 20th century, military elephants were used for non-combat purposes in the Second World War, particularly because the animals could perform tasks in regions that were problematic for motor vehicles. Sir William Slim, commander of the XIVth Army, wrote about elephants in his introduction to Elephant Bill: "They built hundreds of bridges for us, they helped to build and launch more ships for us than Helen ever did for Greece. Without them our retreat from Burma would have been even more arduous and our advance to its liberation slower and more difficult." Military elephants were used as late as the Vietnam War.
As of 2017, elephants were still being used by the Kachin Independence Army in an auxiliary role.
Elephants are now more valuable to many armies in failing states for their ivory than as transport, and many thousands of elephants have died during civil conflicts due to poaching. They are classed as a pack animal in a U.S. Special Forces field manual issued as recently as 2004, but their use by U.S. personnel is discouraged because elephants are endangered.
Tactical use
There were many military purposes for which elephants could be used. In battle, war elephants were usually deployed in the centre of the line, where they could be useful to prevent a charge or to conduct one of their own. Their sheer size and their terrifying appearance made them valued heavy cavalry. Off the battlefield, they could carry heavy materiel and their speed made them a useful means of transport, before mechanized vehicles rendered them mostly obsolete.
In addition to charging, elephants provided a safe and stable platform for archers in the middle of the battlefield, from which more targets could be seen and engaged. The driver, called a mahout, was responsible for controlling the animal; he often carried weapons himself, such as a chisel-blade and a hammer to kill his own mount in an emergency. Elephants were sometimes further enhanced with weaponry and armour of their own. In India and Sri Lanka, heavy iron chains with steel balls at the end were tied to their trunks, which the animals were trained to swirl menacingly and with great skill. Numerous cultures designed specialized equipment for elephants, such as tusk swords and protective towers, called howdahs, carried on their backs. The late sixteenth century saw the introduction of culverins, jingals and rockets against elephants, innovations that would ultimately drive these animals out of active service on the battlefield.
Besides the advent of more efficient means of transportation and weaponry, war elephants also had clear tactical weaknesses that led to their eventual retirement. After sustaining painful wounds, or when their driver was killed, elephants tended to panic, often running amok indiscriminately and inflicting casualties on either side. Experienced Roman infantrymen often tried to sever their trunks, causing instant distress and possibly leading the elephant to flee back into its own lines. Fast skirmishers armed with javelins were also used by the Romans to drive them away, as were flaming objects or a stout line of long spears, such as that of the triarii. Another method for disrupting elephant units in classical antiquity was the deployment of war pigs. Ancient writers believed that elephants could be "scared by the smallest squeal of a pig". Some warlords interpreted this expression literally: at the siege of Megara during the Diadochi wars, for example, the Megarians reportedly poured oil on a herd of pigs, set them alight, and drove them towards the enemy's massed war elephants, which subsequently bolted in terror.
The value of war elephants in battle remains a contested issue. In the 19th century, it was fashionable to contrast the western, Roman focus on infantry and discipline with the eastern, exotic use of war elephants that relied merely on psychological tactics to defeat their enemy. One writer commented that war elephants "have been found to be skittish and easily alarmed by unfamiliar sounds and for this reason they were found prone to break ranks and flee". Nonetheless, the continued use of war elephants for several thousand years attests to their enduring value to the historical battlefield commander.
Cultural legacy
The use of war elephants over the centuries has left a deep cultural legacy in many countries. Many traditional war games incorporate war elephants. The chess piece that English speakers call the bishop is an elephant in several other traditions: it is called gajam in Sanskrit, aana (ആന, "elephant") in Malayalam, and slon (слон, "elephant") in Russian. In Bengali, the bishop is called hati, Bengali for "elephant", and the corresponding piece in Chinese chess is also an elephant. In Arabic – and, derived from it, in Spanish – the bishop piece is called al-fil, Arabic for "elephant".
In the Japanese game shogi, there used to be a piece known as the "Drunken Elephant"; it was, however, dropped by order of the Emperor Go-Nara and no longer appears in the version played in today's Japan.
Elephant armour, originally designed for use in war, is today usually only seen in museums. One particularly fine set of Indian elephant armour is preserved at the Leeds Royal Armouries Museum, while Indian museums across the sub-continent display other fine pieces. The architecture of India also shows the deep impact of elephant warfare over the years. War elephants adorn many military gateways, such as those at Lohagarh Fort for example, while some spiked, anti-elephant gates still remain, for example at Kumbhalgarh fort. Across India, older gateways are invariably much higher than their European equivalents, in order to allow elephants with howdahs to pass through underneath.
War elephants also remain a popular artistic trope, either in the Orientalist painting tradition of the 19th century, or in literature following Tolkien, who popularised a fantastic rendition of war elephants in the form of 'oliphaunts' or mûmakil.
In popular culture
Hathi from The Jungle Book by Rudyard Kipling is a former Indian war elephant who pulled heavy artillery for the British Indian Army. Kala-Nag from Toomai of the Elephants performed similar duties during the First Anglo-Afghan War.
Numerous strategy video games feature elephants as special units, usually available only to specific factions or requiring special resources. These include Age of Empires, Celtic Kings: The Punic Wars, the Civilization series, the Total War series, Imperator: Rome, and Crusader Kings III.
In the 2004 film Alexander, the scene depicting the Battle of Hydaspes includes war elephants fighting against the Macedonian phalanx.
In the 2017 video game Assassin's Creed Origins, they are distributed around the map as boss fights.
In The Lord of the Rings: The Return of the King, Mûmakil (or Oliphaunts) are fictional giant elephant-like creatures used by Sauron and his Haradrim army in the Battle of the Pelennor Fields.
In Genndy Tartakovsky's Primal, an episode features war elephants fighting against Egyptians.
In Horizon Forbidden West, there are machines called Tremortusks, which are suited for combat and are based on war elephants.
The War Elephants is the nickname of the Thailand national football team.
| Technology | Military technology: General | null |
378645 | https://en.wikipedia.org/wiki/Endotherm | Endotherm | An endotherm (from Greek ἔνδον endon "within" and θέρμη thermē "heat") is an organism that maintains its body at a metabolically favorable temperature, largely by the use of heat released by its internal bodily functions instead of relying almost purely on ambient heat. Such internally generated heat is mainly an incidental product of the animal's routine metabolism, but under conditions of excessive cold or low activity an endotherm might apply special mechanisms adapted specifically to heat production. Examples include special-function muscular exertion such as shivering, and uncoupled oxidative metabolism, such as within brown adipose tissue.
Only birds and mammals are considered truly endothermic groups of animals. However, the Argentine black and white tegu, leatherback sea turtles, lamnid sharks, tuna and billfishes, cicadas, and winter moths are mesothermic. Unlike mammals and birds, some reptiles, particularly some species of python and tegu, possess seasonal reproductive endothermy, in which they are endothermic only during their reproductive season.
In common parlance, endotherms are characterized as "warm-blooded". The opposite of endothermy is ectothermy, although in general, there is no absolute or clear separation between the nature of endotherms and ectotherms.
Origin
Endothermy was thought to have originated towards the end of the Permian Period. One recent study claimed the origin of endothermy within Synapsida (the mammalian lineage) was among Mammaliamorpha, a node calibrated during the Late Triassic period, about 233 million years ago. Another study instead argued that endothermy only appeared later, during the Middle Jurassic, among crown-group mammals.
Evidence for endothermy has been found in basal synapsids ("pelycosaurs"), pareiasaurs, ichthyosaurs, plesiosaurs, mosasaurs, and basal archosauromorphs. Even the earliest amniotes might have been endotherms.
Mechanisms
Generating and conserving heat
Many endotherms have a larger amount of mitochondria per cell than ectotherms. This enables them to generate heat by increasing the rate at which they metabolize fats and sugars. Accordingly, to sustain their higher metabolism, endothermic animals typically require several times as much food as ectothermic animals do, and usually require a more sustained supply of metabolic fuel.
In many endothermic animals, a controlled temporary state of hypothermia conserves energy by permitting the body temperature to drop nearly to ambient levels. Such states may be brief, regular circadian cycles called torpor, or they might occur in much longer, even seasonal, cycles called hibernation. The body temperatures of many small birds (e.g. hummingbirds) and small mammals (e.g. tenrecs) fall dramatically during daily inactivity, such as nightly in diurnal animals or during the day in nocturnal animals, thus reducing the energy cost of maintaining body temperature. Less drastic intermittent reduction in body temperature also occurs in other larger endotherms; for example human metabolism also slows down during sleep, causing a drop in core temperature, commonly of the order of 1 degree Celsius. There may be other variations in temperature, usually smaller, either endogenous or in response to external circumstances or vigorous exertion, and either an increase or a drop.
The resting human body generates about two-thirds of its heat through metabolism in internal organs in the thorax and abdomen, as well as in the brain. The brain generates about 16% of the total heat produced by the body.
Heat loss is a major threat to smaller creatures, as they have a larger ratio of surface area to volume. Small warm-blooded animals have insulation in the form of fur or feathers. Aquatic warm-blooded animals, such as seals, generally have deep layers of blubber under the skin and any pelage (fur) that they might have; both contribute to their insulation. Penguins have both feathers and blubber. Penguin feathers are scale-like and serve both for insulation and streamlining. Endotherms that live in very cold circumstances or conditions predisposing to heat loss, such as polar waters, tend to have specialised structures of blood vessels in their extremities that act as heat exchangers. The veins are adjacent to the arteries full of warm blood. Some of the arterial heat is conducted to the cold blood and recycled back into the trunk. Birds, especially waders, often have very well-developed heat exchange mechanisms in their legs—those in the legs of emperor penguins are part of the adaptations that enable them to spend months on Antarctic winter ice. In response to cold, many warm-blooded animals also reduce blood flow to the skin by vasoconstriction to reduce heat loss. As a result, they blanch (become paler).
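The surface-area-to-volume argument above can be made concrete with a little geometry: for a sphere of radius r, the ratio simplifies to 3/r, so it grows rapidly as the body shrinks. The following Python sketch is purely illustrative (added here as an example, not drawn from the article's sources):

```python
import math

# Surface-area-to-volume ratio for a sphere of radius r (any length unit).
# Smaller bodies have a larger ratio, so they lose heat faster relative
# to the heat their tissues can generate.

def surface_to_volume(radius):
    surface = 4.0 * math.pi * radius ** 2            # sphere surface area
    volume = (4.0 / 3.0) * math.pi * radius ** 3     # sphere volume
    return surface / volume                          # simplifies to 3 / radius

print(surface_to_volume(1.0))  # 3.0
print(surface_to_volume(0.5))  # 6.0
```

Halving the radius doubles the ratio, which is why small endotherms such as hummingbirds and tenrecs need proportionally more insulation, food, or energy-saving torpor than large ones.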
Avoiding overheating
In equatorial climates and during temperate summers, overheating (hyperthermia) is as great a threat as cold. In hot conditions, many warm-blooded animals increase heat loss by panting, which cools the animal by increasing water evaporation in the breath, and/or flushing, increasing the blood flow to the skin so the heat will radiate into the environment. Hairless and short-haired mammals, including humans and horses, also sweat, since the evaporation of the water in sweat removes heat. Elephants keep cool by using their huge ears like radiators in automobiles. Their ears are thin and the blood vessels are close to the skin, and flapping their ears to increase the airflow over them causes the blood to cool, which reduces their core body temperature when the blood moves through the rest of the circulatory system.
Pros and cons of an endothermic metabolism
The major advantage of endothermy over ectothermy is decreased vulnerability to fluctuations in external temperature. Regardless of location (and hence external temperature), endothermy maintains a constant core temperature for optimal enzyme activity.
Endotherms control body temperature by internal homeostatic mechanisms. In mammals, two separate homeostatic mechanisms are involved in thermoregulation—one mechanism increases body temperature, while the other decreases it. The presence of two separate mechanisms provides a very high degree of control. This is important because the core temperature of mammals can be controlled to be as close as possible to the optimal temperature for enzyme activity.
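The two opposing homeostatic mechanisms described above behave like a dual-control thermostat around a set point. The toy Python model below is purely illustrative; the function name, set point and response labels are invented for the sketch and are not taken from any physiological source.

```python
# Toy model of dual-mechanism thermoregulation: one mechanism raises
# body temperature, the other lowers it, each engaging on its own side
# of the set point. Values and labels are illustrative only.

SET_POINT = 37.0  # °C, typical mammalian core temperature

def thermoregulate(core_temp):
    """Return which corrective mechanism a dual-control system would engage."""
    if core_temp < SET_POINT:
        return "heat-generating response (e.g. shivering)"
    if core_temp > SET_POINT:
        return "heat-shedding response (e.g. sweating, vasodilation)"
    return "no correction needed"

print(thermoregulate(35.5))
print(thermoregulate(38.5))
```

Having two independent mechanisms, rather than a single on/off heater, is what allows the core temperature to be held close to the enzymatic optimum from both directions.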
The overall rate of an animal's metabolism increases by a factor of about two for every 10 °C rise in temperature, limited by the need to avoid hyperthermia. Endothermy does not provide greater speed in movement than ectothermy (cold-bloodedness)—ectothermic animals can move as fast as warm-blooded animals of the same size and build when the ectotherm is near or at its optimal temperature, but often cannot maintain high metabolic activity for as long as endotherms. Endothermic/homeothermic animals can be optimally active at more times during the diurnal cycle in places of sharp temperature variations between day and night and during more of the year in places of great seasonal differences of temperature. This is accompanied by the need to expend more energy to maintain the constant internal temperature and a greater food requirement. Endothermy may be important during reproduction, for example, in expanding the thermal range over which a species can reproduce, as embryos are generally intolerant of thermal fluctuations that are easily tolerated by adults. Endothermy may also provide protection against fungal infection. While tens of thousands of fungal species infect insects, only a few hundred target mammals, and often only those with a compromised immune system. A recent study suggests fungi are fundamentally ill-equipped to thrive at mammalian temperatures. The high temperatures afforded by endothermy might have provided an evolutionary advantage.
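The roughly twofold scaling of metabolic rate with temperature described above is conventionally expressed as a Q10 temperature coefficient, where Q10 ≈ 2 means the rate doubles for each 10 °C rise. A minimal Python sketch of the standard Q10 formula (the reference rate and temperature here are arbitrary placeholders):

```python
# Q10 temperature coefficient: rate = ref_rate * Q10 ** ((T - T_ref) / 10).
# A Q10 of 2 means metabolic rate roughly doubles per 10 °C rise.

def metabolic_rate(temp_c, ref_rate=1.0, ref_temp_c=20.0, q10=2.0):
    """Metabolic rate relative to a reference rate measured at ref_temp_c."""
    return ref_rate * q10 ** ((temp_c - ref_temp_c) / 10.0)

print(metabolic_rate(30.0))  # 2.0 — double the reference rate
print(metabolic_rate(40.0))  # 4.0 — quadruple the reference rate
```

The exponential form shows why an endotherm held near its optimum can sustain high activity that an ectotherm at ambient temperature cannot match.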
Ectotherms increase their body temperature mostly through external heat sources such as sunlight energy; therefore, they depend on environmental conditions to reach operational body temperatures. Endothermic animals mostly use internal heat production through metabolic active organs and tissues (liver, kidney, heart, brain, muscle) or specialized heat producing tissues like brown adipose tissue (BAT). In general, endotherms therefore have higher metabolic rates than ectotherms at a given body mass. As a consequence they also need higher food intake rates, which may limit abundance of endotherms more than ectotherms.
Because ectotherms depend on environmental conditions for body temperature regulation, they typically are more sluggish at night and in the morning when they emerge from their shelters to heat up in the first sunlight. Foraging activity is therefore restricted to the daytime (diurnal activity patterns) in most vertebrate ectotherms. In lizards, for instance, only a few species are known to be nocturnal (e.g. many geckos) and they mostly use 'sit and wait' foraging strategies that may not require body temperatures as high as those necessary for active foraging. Endothermic vertebrate species are, therefore, less dependent on the environmental conditions and have developed a high variability (both within and between species) in their diurnal activity patterns.
It is thought that the evolution of endothermy was crucial in the development of eutherian mammalian species diversity in the Mesozoic era. Endothermy gave the early mammals the capacity to be active during the night while maintaining small body sizes. Adaptations in photoreception and the loss of UV protection characterizing modern eutherian mammals are understood as adaptations for an originally nocturnal lifestyle, suggesting that the group went through an evolutionary bottleneck (the nocturnal bottleneck hypothesis). This could have avoided predator pressure from diurnal reptiles and dinosaurs, although some predatory dinosaurs, being equally endothermic, might have adopted a nocturnal lifestyle in order to prey on those mammals.
Facultative endothermy
Many insect species are able to maintain a thoracic temperature above the ambient temperature using exercise. These are known as facultative or exercise endotherms. The honey bee, for example, does so by contracting antagonistic flight muscles without moving its wings (see insect thermoregulation). This form of thermogenesis is, however, only efficient above a certain temperature threshold; below it, the honey bee reverts to ectothermy.
Facultative endothermy can also be seen in multiple snake species that use their metabolic heat to warm their eggs. Python molurus and Morelia spilota are two python species where females surround their eggs and shiver in order to incubate them.
Regional endothermy
Some ectotherms, including several species of fish and reptiles, have been shown to make use of regional endothermy, where muscle activity causes certain parts of the body to remain at higher temperatures than the rest of the body. This allows for better locomotion and use of the senses in cold environments.
Contrast between thermodynamic and biological terminology
Students encounter a source of possible confusion between the terminology of physics and biology. Whereas the thermodynamic terms "exothermic" and "endothermic" respectively refer to processes that give out heat energy and processes that absorb heat energy, in biology the sense is effectively reversed. The metabolic terms "ectotherm" and "endotherm" respectively refer to organisms that rely largely on external heat to achieve a full working temperature, and to organisms that produce heat from within as a major factor in controlling their body temperatures.
| Biology and health sciences | Basics | Biology |
378653 | https://en.wikipedia.org/wiki/Anthozoa | Anthozoa | Anthozoa is a class of marine invertebrates which includes sessile cnidarians such as the sea anemones, stony corals, soft corals and sea pens. Adult anthozoans are almost all attached to the seabed, while their larvae can disperse as plankton. The basic unit of the adult is the polyp; this consists of a cylindrical column topped by a disc with a central mouth surrounded by tentacles. Sea anemones are mostly solitary, but the majority of corals are colonial, being formed by the budding of new polyps from an original, founding individual. Colonies are strengthened by calcium carbonate and other materials and take various massive, plate-like, bushy or leafy forms.
Members of Anthozoa possess cnidocytes, a feature shared among other cnidarians such as the jellyfish, box jellies and parasitic Myxozoa and Polypodiozoa. The two main subclasses of Anthozoa are the Hexacorallia, members of which have six-fold symmetry and includes the stony corals, sea anemones, tube anemones and zoanthids; and the Octocorallia, which have eight-fold symmetry and includes the soft corals and gorgonians (sea pens, sea fans and sea whips), and sea pansies. The smaller subclass, Ceriantharia, consists of the tube-dwelling anemones. Some additional species are also included as incertae sedis until their exact taxonomic position can be ascertained.
Anthozoans are carnivores, catching prey with their tentacles. Many species supplement their energy needs by making use of photosynthetic single-celled algae that live within their tissues. These species live in shallow water and many are reef-builders. Other species lack the zooxanthellae and, having no need for well-lit areas, typically live in deep-water locations.
Unlike other members of this phylum, anthozoans do not have a medusa stage in their development. Instead, they release sperm and eggs into the water. After fertilisation, the planula larvae form part of the plankton. When fully developed, the larvae settle on the seabed and attach to the substrate, undergoing metamorphosis into polyps. Some anthozoans can also reproduce asexually through budding or by breaking into pieces.
Diversity
The name "Anthozoa" comes from the Greek words (; "flower") and (; "animals"), hence ανθόζωα (anthozoa) = "flower animals", a reference to the floral appearance of their perennial polyp stage.
Anthozoans are exclusively marine, and include sea anemones, stony corals, soft corals, sea pens, sea fans and sea pansies. Anthozoa is the largest taxon of cnidarians; over six thousand solitary and colonial species have been described. They range in size from small individuals less than half a centimetre across to large colonies a metre or more in diameter. They include species with a wide range of colours and forms that build and enhance reef systems. Although reefs and shallow water environments exhibit a great array of species, there are in fact more species of coral living in deep water than in shallow, and many taxa have shifted during their evolutionary history from shallow to deep water and vice versa.
Phylogeny
Anthozoa is subdivided into three subclasses: Octocorallia, Hexacorallia and Ceriantharia, each of which forms a monophyletic group and is broadly characterised by a distinctive symmetry of polyp structure. The relationships within the subclasses are unresolved.
Historically, the "Ceriantipatharia" was thought to be a separate subclass but, of the two orders it comprised, Antipatharia is now considered part of Hexacorallia and Ceriantharia is now considered an independent subclass. The extant orders are shown to the right.
Hexacorallia includes coral reef builders: the stony corals (Scleractinia), sea anemones (Actiniaria), and zoanthids (Zoantharia). Genetic studies of ribosomal DNA have shown Ceriantharia to be a monophyletic group and the oldest, or basal, order among them.
Classification according to the World Register of Marine Species:
subclass Hexacorallia
order Actiniaria — sea anemones
order Antipatharia — black coral
order Corallimorpharia — corallimorphs
order Rugosa †
order Scleractinia — stony corals
order Zoantharia — zoanthids
subclass Octocorallia
order Alcyonacea — soft corals and gorgonians
order Helioporacea — blue corals
order Pennatulacea — pennatules, sea feathers, sea pens, sea pansies
subclass Ceriantharia — ceriantharians, tube-dwelling anemones
order Penicillaria
order Spirularia
Anthozoa incertae sedis
genus Aiptasiodes
order Auloporida †
genus Sarcinula †
Octocorallia comprises the sea pens (Pennatulacea), soft corals (Alcyonacea), and blue coral (Helioporacea). Sea whips and sea fans, known as gorgonians, are part of Alcyonacea and historically were divided into separate orders.
Ceriantharia comprises the tube-dwelling anemones, or cerianthids, which look very similar to sea anemones but belong to an entirely different subclass of anthozoans. They are solitary, living buried in soft sediments. Tube anemones live inside, and can withdraw into, tubes made of a fibrous material formed from secreted mucus and threads of nematocyst-like organelles known as ptychocysts.
Anatomy
The basic body form of an anthozoan is the polyp. This consists of a tubular column topped by a flattened area, the oral disc, with a central mouth; a whorl of tentacles surrounds the mouth. In solitary individuals, the base of the polyp is the foot or pedal disc, which adheres to the substrate, while in colonial polyps, the base links to other polyps in the colony.
The mouth leads into a tubular pharynx which descends for some distance into the body before opening into the coelenteron, otherwise known as the gastrovascular cavity, that occupies the interior of the body. Internal tensions pull the mouth into a slit-shape, and the ends of the slit lead into two grooves in the pharynx wall called siphonoglyphs. The coelenteron is subdivided by a number of vertical partitions, known as mesenteries or septa. Some of these extend from the body wall as far as the pharynx and are known as "complete septa" while others do not extend so far and are "incomplete". The septa also attach to the oral and pedal discs.
The body wall consists of an epidermal layer, a jellylike mesogloea layer and an inner gastrodermis; the septa are infoldings of the body wall and consist of a layer of mesogloea sandwiched between two layers of gastrodermis. In some taxa, sphincter muscles in the mesogloea close over the oral disc and act to keep the polyp fully retracted. The tentacles contain extensions of the coelenteron and have sheets of longitudinal muscles in their walls. The oral disc has radial muscles in the epidermis, but most of the muscles in the column are gastrodermal, and include strong retractor muscles beside the septa. The number and arrangement of the septa, as well as the arrangement of these retractor muscles, are important in anthozoan classification.
The tentacles are armed with nematocysts, venom-containing cells which can be fired harpoon-fashion to snare and subdue prey. These need to be replaced after firing, a process that takes about forty-eight hours. Some sea anemones have a circle of acrorhagi outside the tentacles; these long projections are armed with nematocysts and act as weapons. Another form of weapon is the similarly armed acontia (threadlike defensive organs) which can be extruded through apertures in the column wall. Some stony corals employ nematocyst-laden "sweeper tentacles" as a defence against the intrusion of other individuals.
Many anthozoans are colonial and consist of multiple polyps with a common origin joined by living material. The simplest arrangement is where a stolon runs along the substrate in a two-dimensional lattice with polyps budding off at intervals. Alternatively, polyps may bud off from a sheet of living tissue, the coenosarc, which joins the polyps and anchors the colony to the substrate. The coenosarc may consist of a thin membrane from which the polyps project, as in most stony corals, or a thick fleshy mass in which the polyps are immersed apart from their oral discs, as in the soft corals.
The skeleton of a stony coral in the order Scleractinia is secreted by the epidermis of the lower part of the polyp; this forms a corallite, a cup-shaped hollow made of calcium carbonate, in which the polyp sits. In colonial corals, following growth of the polyp by budding, new corallites are formed, with the surface of the skeleton being covered by a layer of coenosarc. These colonies adopt a range of massive, branching, leaf-like and encrusting forms. Soft corals in the subclass Octocorallia are also colonial and have a skeleton formed of mesogloeal tissue, often reinforced with calcareous spicules or horny material, and some have rod-like supports internally. Other anthozoans, such as sea anemones, are naked; these rely on a hydrostatic skeleton for support. Some of these species have a sticky epidermis to which sand grains and shell fragments adhere, and zoanthids incorporate these substances into their mesogloea.
Biology
Most anthozoans are opportunistic predators, catching prey which drifts within reach of their tentacles. The prey is secured with the help of sticky mucus, spirocysts (non-venomous harpoon cells) and nematocysts (venomous harpoon cells). The tentacles then bend to push larger prey into the mouth, while smaller, plankton-sized prey is moved by cilia to the tips of the tentacles, which are then inserted into the mouth. The mouth can stretch to accommodate large items, and in some species, the lips may extend to help receive the prey. The pharynx then grasps the prey, which is mixed with mucus and slowly swallowed by peristalsis and ciliary action. When the food reaches the coelenteron, extracellular digestion is initiated by the discharge of the septa-based nematocysts and the release of enzymes. The partially digested food fragments are circulated in the coelenteron by cilia, and from here they are taken up by phagocytosis by the gastrodermal cells that line the cavity.
Most anthozoans supplement their predation by incorporating into their tissues certain unicellular, photosynthetic organisms known as zooxanthellae (or zoochlorellae in a few instances); many fulfil the bulk of their nutritional requirements in this way. In this symbiotic relationship, the zooxanthellae benefit by using nitrogenous waste and carbon dioxide produced by the host, while the cnidarian gains photosynthetic capability and increased production of calcium carbonate, a substance of great importance to stony corals. The presence of zooxanthellae is not a permanent relationship. Under some circumstances, the symbionts can be expelled, and other species may later move in to take their place. The behaviour of the anthozoan can also be affected, with it choosing to settle in a well-lit spot and competing with its neighbours for light to allow photosynthesis to take place. Where an anthozoan lives in a cave or other dark location, the symbiont may be absent in a species that, in a sunlit location, normally benefits from one. Anthozoans living at great depths are azooxanthellate because there is insufficient light for photosynthesis.
With longitudinal, transverse and radial muscles, polyps are able to elongate and shorten, bend and twist, inflate and deflate, and extend and contract their tentacles. Most polyps extend to feed and contract when disturbed, often invaginating their oral discs and tentacles into the column. Contraction is achieved by pumping fluid out of the coelenteron, and reflation by drawing it in, a task performed by the siphonoglyphs in the pharynx which are lined with beating cilia. Most anthozoans adhere to the substrate with their pedal discs but some are able to detach themselves and move about, while others burrow into the sediment. Movement may be a passive drifting with the currents or in the case of sea anemones, may involve creeping along a surface on their base.
Gas exchange and excretion is accomplished by diffusion through the tentacles and internal and external body wall, aided by the movement of fluid being wafted along these surfaces by cilia. The sensory system consists of simple nerve nets in the gastrodermis and epidermis, but there are no specialised sense organs.
Anthozoans exhibit great powers of regeneration; lost parts swiftly regrow and the sea anemone Aiptasia pallida can be vivisected in the laboratory and then returned to the aquarium where it will heal. They are capable of a variety of asexual means of reproduction including fragmentation, longitudinal and transverse fission and budding. Sea anemones for example can crawl across a surface leaving behind them detached pieces of the pedal disc which develop into new clonal individuals. Anthopleura species divide longitudinally, pulling themselves apart, resulting in groups of individuals with identical colouring and patterning. Transverse fission is less common, but occurs in Anthopleura stellula and Gonactinia prolifera, with a rudimentary band of tentacles appearing on the column before the sea anemone tears itself apart. Zoanthids are capable of budding off new individuals.
Most anthozoans are unisexual but some stony corals are hermaphrodite. The germ cells originate in the endoderm and move to the gastrodermis where they differentiate. When mature, they are liberated into the coelenteron and thence to the open sea, with fertilisation being external. To make fertilisation more likely, corals emit vast numbers of gametes, and many species synchronise their release in relation to the time of day and the phase of the moon.
The zygote develops into a planula larva which swims by means of cilia and forms part of the plankton for a while before settling on the seabed and metamorphosing into a juvenile polyp. Some planulae contain yolky material and others incorporate zooxanthellae, and these adaptations enable these larvae to sustain themselves and disperse more widely. The planulae of the stony coral Pocillopora damicornis, for example, have lipid-rich yolks and remain viable for as long as 100 days before needing to settle.
Ecology
Coral reefs are some of the most biodiverse habitats on earth, supporting large numbers of species of corals, fish, molluscs, worms, arthropods, starfish, sea urchins, other invertebrates and algae. Because of the photosynthetic requirements of the corals, they are found in shallow waters, and many of these fringe land masses. With a three-dimensional structure, coral reefs are very productive ecosystems; they provide food for their inhabitants, hiding places of various sizes to suit many organisms, perching places, barriers to large predators and solid structures on which to grow. They are used as breeding grounds and as nurseries by many species of pelagic fish, and they influence the productivity of the ocean for miles around. Anthozoans prey on animals smaller than they are and are themselves eaten by such animals as fish, crabs, barnacles, snails and starfish. Their habitats are easily disturbed by outside factors which unbalance the ecosystem. In 1989, the invasive crown-of-thorns starfish (Acanthaster planci) caused havoc in American Samoa, killing 90% of the corals in the reefs.
Corals that grow on reefs are called hermatypic, while those growing elsewhere are known as ahermatypic. Most of the latter are azooxanthellate and live in both shallow and deep sea habitats. In the deep sea they share the ecosystem with soft corals, polychaete worms, other worms, crustaceans, molluscs and sponges. In the Atlantic Ocean, the cold-water coral Lophelia pertusa forms extensive deep-water reefs which support many other species.
Other fauna, such as hydrozoa, bryozoa and brittle stars, often dwell among the branches of gorgonian and coral colonies. The pygmy seahorse not only makes certain species of gorgonians its home, but closely resembles its host and is thus well camouflaged. Some organisms have an obligate relationship with their host species. The mollusc Simnialena marferula is only found on the sea whip Leptogorgia virgulata, is coloured like it and has sequestered its defensive chemicals, and the nudibranch Tritonia wellsi is another obligate symbiont, its feathery gills resembling the tentacles of the polyps.
A number of sea anemone species are commensal with other organisms. Certain crabs and hermit crabs seek out sea anemones and place them on their shells for protection, and fish, shrimps and crabs live among the anemone's tentacles, gaining protection by being in close proximity to the stinging cells. Some amphipods live inside the coelenteron of the sea anemone. Despite their venomous cells, sea anemones are eaten by fish, starfish, worms, sea spiders and molluscs. The sea slug Aeolidia papillosa feeds on the aggregating anemone (Anthopleura elegantissima), accumulating the nematocysts for its own protection.
Paleontology
Several extinct orders of corals from the Paleozoic era (~540–252 million years ago) are thought to be close to the ancestors of modern Scleractinia:
Numidiaphyllida †
Kilbuchophyllida †
Heterocorallia †
Rugosa †
Heliolitida †
Tabulata †
Cothoniida †
Tabuloconida †
All of these extinct coral orders are known from the Paleozoic fossil record. With readily preserved hard calcareous skeletons, they make up the majority of anthozoan fossils.
Interactions with humans
Coral reefs and shallow marine environments are threatened, not only by natural events and increased sea temperatures, but also by such man-made problems as pollution, sedimentation and destructive fishing practices. Pollution may be the result of run-off from the land of sewage, agricultural products, fuel or chemicals. These may directly kill or injure marine life, or may encourage the growth of algae that smother native species, or form algal blooms with wide-ranging effects. Oil spills at sea can contaminate reefs, and also affect the eggs and larvae of marine life drifting near the surface.
Corals are collected for the aquarium trade, and this may be done with little care for the long-term survival of the reef. Fishing among reefs is difficult and trawling does much mechanical damage. In some parts of the world explosives are used to dislodge fish from reefs, and cyanide may be used for the same purpose; both practices not only kill reef inhabitants indiscriminately but also kill or damage the corals, sometimes stressing them so much that they expel their zooxanthellae and become bleached.
Deep water coral habitats are also threatened by human activities, particularly by indiscriminate trawling. These ecosystems have been little studied, but in the perpetual darkness and cold temperatures, animals grow and mature slowly and there are relatively fewer fish worth catching than in the sunlit waters above. To what extent deep-water coral reefs provide a safe nursery area for juvenile fish has not been established, but they may be important for many cold-water species.
Thermoregulation
Thermoregulation is the ability of an organism to keep its body temperature within certain boundaries, even when the surrounding temperature is very different. A thermoconforming organism, by contrast, simply adopts the surrounding temperature as its own body temperature, thus avoiding the need for internal thermoregulation. The internal thermoregulation process is one aspect of homeostasis: a state of dynamic stability in an organism's internal conditions, maintained far from thermal equilibrium with its environment (the study of such processes in zoology has been called physiological ecology). If the body is unable to maintain a normal temperature and it increases significantly above normal, a condition known as hyperthermia occurs. Humans may also experience lethal hyperthermia when the wet bulb temperature is sustained above for six hours.
Work in 2022 established by experiment that a wet-bulb temperature exceeding 30.55°C caused uncompensable heat stress in young, healthy adult humans. The opposite condition, when body temperature decreases below normal levels, is known as hypothermia. It results when the body's homeostatic heat-control mechanisms malfunction, causing the body to lose heat faster than it can produce it. Normal body temperature is around 37°C (98.6°F), and hypothermia sets in when the core body temperature gets lower than . Usually caused by prolonged exposure to cold temperatures, hypothermia is typically treated by methods that raise the body temperature back to a normal range.
It was not until the introduction of thermometers that any exact data on the temperature of animals could be obtained. It was then found that local differences were present, since heat production and heat loss vary considerably in different parts of the body, although the circulation of the blood tends to bring about a mean temperature of the internal parts. Hence it is important to identify the parts of the body that most closely reflect the temperature of the internal organs. Also, for such results to be comparable, the measurements must be conducted under comparable conditions. The rectum has traditionally been considered to reflect most accurately the temperature of internal parts, or, depending on sex or species, the vagina, uterus or bladder. Some animals undergo one of various forms of dormancy in which the thermoregulation process temporarily allows the body temperature to drop, thereby conserving energy. Examples include hibernating bears and torpor in bats.
Classification of animals by thermal characteristics
Endothermy vs. ectothermy
Thermoregulation in organisms runs along a spectrum from endothermy to ectothermy. Endotherms create most of their heat via metabolic processes and are colloquially referred to as warm-blooded. When the surrounding temperatures are cold, endotherms increase metabolic heat production to keep their body temperature constant, making the internal body temperature of an endotherm more or less independent of the temperature of the environment. Endotherms possess a larger number of mitochondria per cell than ectotherms, enabling them to generate more heat by increasing the rate at which they metabolize fats and sugars. Ectotherms, by contrast, rely on external heat sources to regulate their body temperatures. They are colloquially referred to as cold-blooded, even though their body temperatures often stay within the same ranges as those of warm-blooded animals. In ectotherms, internal physiological sources of heat are of negligible importance; heat exchange with the environment is the biggest factor in maintaining an adequate body temperature. Living in areas that maintain a constant temperature throughout the year, like the tropics or the ocean, has enabled ectotherms to develop behavioral mechanisms that respond to external temperatures, such as sun-bathing to increase body temperature, or seeking the cover of shade to lower it.
Ectotherms
Ectothermic cooling
Vaporization:
Evaporation of sweat and other bodily fluids.
Convection:
Increasing blood flow to body surfaces to maximize heat transfer across the advective gradient.
Conduction:
Losing heat by being in contact with a colder surface. For instance:
Lying on cool ground.
Staying wet in a river, lake or sea.
Covering in cool mud.
Radiation:
Releasing heat by radiating it away from the body.
Ectothermic heating (or minimizing heat loss)
Convection:
Climbing to higher ground up trees, ridges, rocks.
Entering a warm water or air current.
Building an insulated nest or burrow.
Conduction:
Lying on a hot surface.
Radiation:
Lying in the sun (heating this way is affected by the body's angle in relation to the sun).
Folding skin to reduce exposure.
Concealing wing surfaces.
Exposing wing surfaces.
Insulation:
Changing shape to alter surface/volume ratio.
Inflating the body.
To cope with low temperatures, some fish have developed the ability to remain functional even when the water temperature is below freezing; some use natural antifreeze or antifreeze proteins to resist ice crystal formation in their tissues. Amphibians and reptiles cope with heat gain by evaporative cooling and behavioral adaptations. An example of behavioral adaptation is that of a lizard lying in the sun on a hot rock in order to heat through radiation and conduction.
Endothermy
An endotherm is an animal that regulates its own body temperature, typically keeping it at a constant level. To regulate body temperature, an organism may need to prevent heat gain in arid environments. Evaporation of water, either across respiratory surfaces or across the skin in those animals possessing sweat glands, helps to cool the body to within the organism's tolerance range. Animals with a body covered by fur have limited ability to sweat and rely heavily on panting to increase evaporation of water across the moist surfaces of the lungs, tongue and mouth. Mammals such as cats, dogs and pigs rely on panting or other means for thermal regulation and have sweat glands only in the foot pads and snout. The sweat produced on the pads of paws and on palms and soles mostly serves to increase friction and enhance grip. Birds also counteract overheating by gular fluttering, or rapid vibrations of the gular (throat) skin. Down feathers trap warm air, acting as excellent insulators, just as hair in mammals acts as a good insulator. Mammalian skin is much thicker than that of birds and often has a continuous layer of insulating fat beneath the dermis; in marine mammals, such as whales, and in animals that live in very cold regions, such as polar bears, this is called blubber. Dense coats found in desert endotherms, such as camels, also aid in preventing heat gain.
A cold weather strategy is to temporarily decrease metabolic rate, decreasing the temperature difference between the animal and the air and thereby minimizing heat loss. Furthermore, having a lower metabolic rate is less energetically expensive. Many animals survive cold frosty nights through torpor, a short-term temporary drop in body temperature. Organisms, when presented with the problem of regulating body temperature, have not only behavioural, physiological, and structural adaptations but also a feedback system to trigger these adaptations to regulate temperature accordingly. The main features of this system are stimulus, receptor, modulator, effector and then the feedback of the newly adjusted temperature to the stimulus. This cyclical process aids in homeostasis.
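The feedback system just described — stimulus, receptor, modulator, effector, then feedback of the newly adjusted temperature — can be sketched as a toy negative-feedback simulation. All constants below (set point, gains, step count) are illustrative values for the sketch, not physiological measurements:

```python
# Toy negative-feedback model of the thermoregulatory loop described above.
# Set point and gains are hypothetical, illustrative values.
SET_POINT = 37.0  # target core temperature (degC)

def step(core, ambient, k_passive=0.1, k_effector=0.9):
    """One cycle: stimulus -> receptor -> modulator -> effector -> feedback."""
    core += k_passive * (ambient - core)  # stimulus: passive heat exchange
    error = core - SET_POINT              # receptors report the deviation
    core -= k_effector * error            # modulator drives effectors to oppose it
    return core                           # new temperature feeds back as next stimulus

def simulate(core, ambient, cycles=50):
    for _ in range(cycles):
        core = step(core, ambient)
    return core

print(round(simulate(37.0, ambient=5.0), 2))   # settles near the 37.0 set point
```

Because the effector term always opposes the deviation the receptors report, the simulated core temperature settles near the set point whether the ambient air is far colder or far hotter than the body — the cyclical correction that aids homeostasis.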
Homeothermy compared with poikilothermy
Homeothermy and poikilothermy refer to how stable an organism's deep-body temperature is. Most endothermic organisms, such as mammals, are homeothermic. However, animals with facultative endothermy are often poikilothermic, meaning their temperature can vary considerably. Most fish are ectotherms, as most of their heat comes from the surrounding water, and almost all fish are also poikilothermic.
Beetles
The physiology of the bark beetle Dendroctonus micans encompasses a suite of adaptations crucial for its survival and reproduction. Flight enables the beetles to disperse and locate new host trees, while sensory organs detect environmental cues and food sources. Of particular importance is their ability to thermoregulate, maintaining a suitable body temperature under fluctuating forest conditions; this mechanism, coupled with thermosensation, allows them to thrive across diverse environments, and understanding it matters for effective management and conservation efforts.
Vertebrates
By numerous observations upon humans and other animals, John Hunter showed that the essential difference between the so-called warm-blooded and cold-blooded animals lies in the observed constancy of the temperature of the former, and the observed variability of the temperature of the latter. Almost all birds and mammals maintain a high, almost constant temperature independent of that of the surrounding air (homeothermy). Almost all other animals display a variation of body temperature, dependent on their surroundings (poikilothermy).
Brain control
Thermoregulation in both ectotherms and endotherms is controlled mainly by the preoptic area of the anterior hypothalamus. Such homeostatic control is separate from the sensation of temperature.
In birds and mammals
In cold environments, birds and mammals employ the following adaptations and strategies to minimize heat loss:
Using small smooth muscles (arrector pili in mammals), which are attached to feather or hair shafts; this distorts the surface of the skin making feather/hair shaft stand erect (called goose bumps or goose pimples) which slows the movement of air across the skin and minimizes heat loss.
Increasing body size to more easily maintain core body temperature (warm-blooded animals in cold climates tend to be larger than similar species in warmer climates (see Bergmann's rule))
Having the ability to store energy as fat for metabolism
Having shortened extremities
Having countercurrent blood flow in extremities – this is where the warm arterial blood travelling to the limb passes the cooler venous blood from the limb and heat is exchanged, warming the venous blood and cooling the arterial (e.g., Arctic wolf or penguins)
In warm environments, birds and mammals employ the following adaptations and strategies to maximize heat loss:
Behavioural adaptations like living in burrows during the day and being nocturnal
Evaporative cooling by perspiration and panting
Storing fat reserves in one place (e.g., camel's hump) to avoid its insulating effect
Elongated, often vascularized extremities to conduct body heat to the air
In humans
As in other mammals, thermoregulation is an important aspect of human homeostasis. Most body heat is generated in the deep organs, especially the liver, brain, and heart, and in contraction of skeletal muscles. Humans have been able to adapt to a great diversity of climates, including hot humid and hot arid. High temperatures pose serious stresses for the human body, placing it in great danger of injury or even death. For example, one of the most common reactions to hot temperatures is heat exhaustion, an illness that can occur after exposure to high temperatures, with symptoms such as dizziness, fainting, or a rapid heartbeat. For humans, adaptation to varying climatic conditions includes both physiological mechanisms resulting from evolution and behavioural mechanisms resulting from conscious cultural adaptations. The physiological control of the body's core temperature takes place primarily through the hypothalamus, which assumes the role of the body's "thermostat". This organ possesses control mechanisms as well as key temperature sensors, which are connected to nerve cells called thermoreceptors. Thermoreceptors come in two subcategories: ones that respond to cold temperatures and ones that respond to warm temperatures. Scattered throughout the body in both the peripheral and central nervous systems, these nerve cells are sensitive to changes in temperature and provide useful information to the hypothalamus through the process of negative feedback, thus maintaining a constant core temperature.
There are four avenues of heat loss: evaporation, convection, conduction, and radiation. If skin temperature is greater than that of the surrounding air temperature, the body can lose heat by convection and conduction. However, if air temperature of the surroundings is greater than that of the skin, the body gains heat by convection and conduction. In such conditions, the only means by which the body can rid itself of heat is by evaporation. So, when the surrounding temperature is higher than the skin temperature, anything that prevents adequate evaporation will cause the internal body temperature to rise. During intense physical activity (e.g. sports), evaporation becomes the main avenue of heat loss. Humidity affects thermoregulation by limiting sweat evaporation and thus heat loss.
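The rules above — convection and conduction shed heat only when the skin is warmer than the air, while evaporation works as long as the air is not saturated — can be illustrated with a toy function. The function name and the hard humidity cutoff are simplifying assumptions of this sketch, not physiological claims:

```python
def heat_loss_routes(skin_temp, air_temp, humidity):
    """Avenues by which the body can lose heat to the surroundings.
    humidity is relative humidity in [0, 1]; the hard cutoff is a simplification."""
    routes = []
    if skin_temp > air_temp:
        # heat flows down the thermal gradient to the cooler surroundings
        routes += ["radiation", "convection", "conduction"]
    if humidity < 1.0:
        # sweat can still evaporate as long as the air is not saturated
        routes.append("evaporation")
    return routes

print(heat_loss_routes(34.0, 40.0, 0.6))   # air hotter than skin: evaporation only
```

With air hotter than the skin and humidity below saturation, only evaporation remains, which is why anything preventing adequate evaporation in such conditions causes the internal temperature to rise.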
In reptiles
Thermoregulation is also an integral part of a reptile's life, particularly for lizards such as Microlophus occipitalis and Ctenophorus decresii, which must change microhabitats to keep a constant body temperature. By moving to cooler areas when it is too hot and to warmer areas when it is cold, they can keep their body temperature within the necessary bounds.
In plants
Thermogenesis occurs in the flowers of many plants in the family Araceae as well as in cycad cones. In addition, the sacred lotus (Nelumbo nucifera) is able to thermoregulate itself, remaining on average above air temperature while flowering. Heat is produced by breaking down the starch that was stored in their roots, which requires the consumption of oxygen at a rate approaching that of a flying hummingbird.
One possible explanation for plant thermoregulation is to provide protection against cold temperature. For example, the skunk cabbage is not frost-resistant, yet it begins to grow and flower when there is still snow on the ground. Another theory is that thermogenicity helps attract pollinators, which is borne out by observations that heat production is accompanied by the arrival of beetles or flies.
Some plants are known to protect themselves against colder temperatures using antifreeze proteins. This occurs in wheat (Triticum aestivum), potatoes (Solanum tuberosum) and several other angiosperm species.
Behavioral temperature regulation
Animals other than humans regulate and maintain their body temperature with physiological adjustments and behavior. Desert lizards are ectotherms and so cannot regulate their internal temperature physiologically; instead, many relocate to a more environmentally favorable location. In the morning, a lizard may at first raise only its head from its burrow before exposing its entire body. By basking in the sun, the lizard absorbs solar heat. It may also absorb heat by conduction from heated rocks that have stored radiant solar energy. To lower their temperature, lizards exhibit varied behaviors. Sand seas, or ergs, produce up to , and the sand lizard will hold its feet up in the air to cool down, seek cooler objects with which to contact, find shade, or return to its burrow. They also go to their burrows to avoid cooling when the temperature falls. Aquatic animals can also regulate their temperature behaviorally by changing their position in the thermal gradient. Sprawling prone in a cool shady spot, "splooting", has been observed in squirrels on hot days.
Animals also engage in kleptothermy, in which they share or steal each other's body warmth. Kleptothermy is observed, particularly amongst juveniles, in endotherms such as bats and birds (such as the mousebird and emperor penguin). This allows the individuals to increase their thermal inertia (as with gigantothermy) and so reduce heat loss. Some ectotherms share the burrows of endotherms. Other animals exploit termite mounds.
Some animals living in cold environments maintain their body temperature by preventing heat loss. Their fur grows more densely to increase the amount of insulation. Some animals are regionally heterothermic and are able to allow their less insulated extremities to cool to temperatures much lower than their core temperature—nearly to . This minimizes heat loss through less insulated body parts, like the legs, feet (or hooves), and nose.
Different species of Drosophila found in the Sonoran Desert will exploit different species of cacti based on the thermotolerance differences between species and hosts. For example, Drosophila mettleri is found in cacti like the saguaro and senita; these two cacti remain cool by storing water. Over time, the genes selecting for higher heat tolerance were reduced in the population due to the cooler host climate the fly is able to exploit.
Some flies, such as Lucilia sericata, lay their eggs en masse. The resulting group of larvae, depending on its size, is able to thermoregulate and keep itself at the optimum temperature for development.
Koalas also can behaviorally thermoregulate by seeking out cooler portions of trees on hot days. They preferentially wrap themselves around the coolest portions of trees, typically near the bottom, to increase their passive radiation of internal body heat.
Hibernation, estivation and daily torpor
To cope with limited food resources and low temperatures, some mammals hibernate during cold periods. To remain in "stasis" for long periods, these animals build up brown fat reserves and slow all body functions. True hibernators (e.g., groundhogs) keep their body temperatures low throughout hibernation whereas the core temperature of false hibernators (e.g., bears) varies; occasionally the animal may emerge from its den for brief periods. Some bats are true hibernators and rely upon a rapid, non-shivering thermogenesis of their brown fat deposit to bring them out of hibernation.
Estivation is similar to hibernation; however, it usually occurs in hot periods to allow animals to avoid high temperatures and desiccation. Both terrestrial and aquatic invertebrates and vertebrates enter into estivation. Examples include lady beetles (Coccinellidae), North American desert tortoises, crocodiles, salamanders, cane toads, and the water-holding frog.
Daily torpor occurs in small endotherms like bats and hummingbirds, which temporarily reduce their high metabolic rates to conserve energy.
Variation in animals
Normal human temperature
Previously, average oral temperature for healthy adults had been considered , while normal ranges are . In Poland and Russia, the temperature had been measured axillarily (under the arm). was considered "ideal" temperature in these countries, while normal ranges are .
Recent studies suggest that the average temperature for healthy adults is (same result in three different studies). Variations (one standard deviation) from three other studies are:
for males, for females
Measured temperature varies according to thermometer placement, with rectal temperature being higher than oral temperature, while axillary temperature is lower than oral temperature. The average difference between oral and axillary temperatures of Indian children aged 6–12 was found to be only 0.1 °C (standard deviation 0.2 °C), and the mean difference in Maltese children aged 4–14 between oral and axillary temperature was 0.56 °C, while the mean difference between rectal and axillary temperature for children under 4 years old was 0.38 °C.
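Using the mean differences quoted above, a reading from one site can be roughly adjusted toward another. The constant names and adjustment function below are illustrative, and the studies' standard deviations (child-to-child variation) are ignored in this sketch:

```python
# Mean site-to-site temperature differences quoted above (degC).
# Names are illustrative; per-child variation (SDs) is ignored here.
ORAL_MINUS_AXILLARY_INDIAN_6_12 = 0.1    # SD 0.2
ORAL_MINUS_AXILLARY_MALTESE_4_14 = 0.56
RECTAL_MINUS_AXILLARY_UNDER_4 = 0.38

def estimate_oral_from_axillary(axillary, offset=ORAL_MINUS_AXILLARY_MALTESE_4_14):
    """Rough oral-equivalent estimate from an axillary reading."""
    return axillary + offset

print(round(estimate_oral_from_axillary(36.5), 2))   # 37.06
```

The two oral-axillary offsets differ fivefold between the cited studies, which underlines why such corrections are population-specific rather than universal constants.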
Variations due to circadian rhythms
In humans, a diurnal variation has been observed dependent on the periods of rest and activity, lowest at 11 p.m. to 3 a.m. and peaking at 10 a.m. to 6 p.m. Monkeys also have a well-marked and regular diurnal variation of body temperature that follows periods of rest and activity, and is not dependent on the incidence of day and night; nocturnal monkeys reach their highest body temperature at night and lowest during the day. Sutherland Simpson and J.J. Galbraith observed that all nocturnal animals and birds – whose periods of rest and activity are naturally reversed through habit and not from outside interference – experience their highest temperature during the natural period of activity (night) and lowest during the period of rest (day). Those diurnal temperatures can be reversed by reversing their daily routine.
In essence, the temperature curve of diurnal birds is similar to that of humans and other homeothermic animals, except that the maximum occurs earlier in the afternoon and the minimum earlier in the morning. Also, the curves obtained from rabbits, guinea pigs, and dogs were quite similar to those from humans. These observations indicate that body temperature is partially regulated by circadian rhythms.
Variations due to human menstrual cycles
During the follicular phase (which lasts from the first day of menstruation until the day of ovulation), the average basal body temperature in women ranges from . Within 24 hours of ovulation, women experience an elevation of due to the increased metabolic rate caused by sharply elevated levels of progesterone. The basal body temperature ranges between throughout the luteal phase, and drops down to pre-ovulatory levels within a few days of menstruation. Women can chart this phenomenon to determine whether and when they are ovulating, so as to aid conception or contraception.
Variations due to fever
Fever is a regulated elevation of the set point of core temperature in the hypothalamus, caused by circulating pyrogens produced by the immune system. To the subject, a rise in core temperature due to fever may result in feeling cold in an environment where people without fever do not.
Variations due to biofeedback
Some monks are known to practice Tummo, a biofeedback meditation technique that allows them to raise their body temperatures substantially.
Effect on lifespan
The effects of such a genetic change in body temperature on longevity are difficult to study in humans.
Limits compatible with life
There are limits both of heat and cold that an endothermic animal can bear and other far wider limits that an ectothermic animal may endure and yet live. The effect of too extreme a cold is to decrease metabolism, and hence to lessen the production of heat. Both catabolic and anabolic pathways share in this metabolic depression, and, though less energy is used up, still less energy is generated. The effects of this diminished metabolism become telling on the central nervous system first, especially the brain and those parts concerning consciousness; both heart rate and respiration rate decrease; judgment becomes impaired as drowsiness supervenes, becoming steadily deeper until the individual loses consciousness; without medical intervention, death by hypothermia quickly follows. Occasionally, however, convulsions may set in towards the end, and death is caused by asphyxia.
In experiments on cats performed by Sutherland Simpson and Percy T. Herring, the animals were unable to survive when rectal temperature fell below . At this low temperature, respiration became increasingly feeble; heart-impulse usually continued after respiration had ceased, the beats becoming very irregular, appearing to cease, then beginning again. Death appeared to be mainly due to asphyxia, and the only certain sign that it had taken place was the loss of knee-jerks.
However, too high a temperature speeds up the metabolism of different tissues to such a rate that their metabolic capital is soon exhausted. Blood that is too warm produces dyspnea by exhausting the metabolic capital of the respiratory centre; heart rate is increased; the beats then become arrhythmic and eventually cease. The central nervous system is also profoundly affected by hyperthermia; delirium and convulsions may set in. Consciousness may also be lost, propelling the person into a comatose condition. These changes can sometimes also be observed in patients experiencing an acute fever. Mammalian muscle becomes rigid with heat rigor at about 50 °C, with the sudden rigidity of the whole body rendering life impossible.
H.M. Vernon performed work on the death temperature and paralysis temperature (temperature of heat rigor) of various animals. He found that species of the same class showed very similar temperature values, those from the Amphibia examined being 38.5 °C, fish 39 °C, reptiles 45 °C, and various molluscs 46 °C. Also, in the case of pelagic animals, he showed a relation between death temperature and the quantity of solid constituents of the body. In higher animals, however, his experiments tend to show that there is greater variation in both the chemical and physical characteristics of the protoplasm and, hence, greater variation in the extreme temperature compatible with life.
A 2022 study on the effect of heat on young people found that the critical wet-bulb temperature at which heat stress can no longer be compensated, Twb,crit, in young, healthy adults performing tasks at modest metabolic rates mimicking basic activities of daily life was much lower than the 35°C usually assumed, at about 30.55°C in 36–40°C humid environments, but progressively decreased in hotter, dry ambient environments.
Arthropoda
The maximum temperatures tolerated by certain thermophilic arthropods exceed the lethal temperatures for most vertebrates.
The most heat-resistant insects are three genera of desert ants recorded from three different parts of the world. The ants have developed a lifestyle of scavenging for short durations during the hottest hours of the day, in excess of , for the carcasses of insects and other forms of life which have died from heat stress.
In April 2014, the southern Californian mite Paratarsotomus macropalpis was recorded as the world's fastest land animal relative to body length, at a speed of 322 body lengths per second. Besides the unusually great speed of the mites, the researchers were surprised to find the mites running at such speeds on concrete at temperatures up to , which is significant because this temperature is well above the lethal limit for the majority of animal species. In addition, the mites are able to stop and change direction very quickly.
Spiders such as Nephila pilipes exhibit active thermoregulatory behavior. On hot sunny days, the spider aligns its body with the direction of sunlight to reduce the area of its body exposed to direct sun.
Accipiter
Accipiter is a genus of birds of prey in the family Accipitridae. Most species are called sparrowhawks, but there are many sparrowhawks in other genera too, such as Tachyspiza.
These birds are slender with short, broad, rounded wings and a long tail which helps them maneuver in flight. They have long legs and long, sharp talons used to kill their prey, and a sharp, hooked bill used in feeding. Females tend to be larger than males. They often ambush their prey, mainly small birds and mammals, capturing them after a short chase. The typical flight pattern is a series of flaps followed by a short glide. They are commonly found in wooded or shrubby areas.
The genus Accipiter was introduced by the French zoologist Mathurin Jacques Brisson in 1760. The type species is the Eurasian sparrowhawk (Accipiter nisus). The name is Latin for "hawk", from accipere, "to grasp".
Procoracoid foramen
The procoracoid foramen (or coracoid foramen, coracoid fenestra) is a hole through the process at the front of the coracoid bone, which accommodates the supracoracoideus nerve. In some groups of birds it may be present as a notch, or incisura; or the notch may be partially or weakly closed with bone. In other groups the feature is completely absent.
The foramen is generally present in birds of prey, but it is absent in most Accipiter hawks that have been studied. This absence is proposed as a diagnostic feature.
A study of accipitrid skeletons found procoracoid incisurae (as opposed to foramina) in some specimens of the eagles Aquila gurneyi and A. chrysaetos, but not in four other Aquila species. The notch was variably open or weakly ossified in Spizastur melanoleucos, Lophoaetus occipitalis, Spizaetus ornatus, and Stephanoaetus coronatus. The buteonine hawks Buteo brachyurus and B. hemilasius also had incisurae, differing from 17 other Buteo species.
In Circus the foramen was found to be variable, not only within species but even between sides in the same individual. It is usually open or absent but may be closed by "a thread of bone". Research in genetic phylogeny has since indicated that Circus is closely related to Accipiter.
The notch was also absent or indistinct in Harpagus bidentatus.
Urotriorchis macrourus has a well-developed procoracoid foramen, which suggests a separation from Accipiter. It may be related to the chanting goshawks in tribe Melieraxini.
Taxonomy
The genus Accipiter formerly contained around 50 species. A series of molecular phylogenetic studies found that the traditional arrangement was non-monophyletic. The publication of a densely sampled study of the Accipitridae in 2024 allowed the generic boundaries to be redefined. To create monophyletic genera, species were moved from Accipiter to five new or resurrected genera, leaving only nine species in Accipiter. The southeast Asian crested goshawk and the Sulawesi goshawk were found to be only distantly related to other species in Accipiter. They were moved to a resurrected genus Lophospiza, the only genus placed in the new subfamily Lophospizinae. Similarly, the very small South American tiny hawk and semicollared hawk were found to be only distantly related to species in Accipiter. They were moved to a newly erected genus Microspizias, which together with Harpagus is placed in the subfamily Harpaginae. The genera Circus, Megatriorchis, and Erythrotriorchis were found to be nested within Accipiter. Rather than subsuming these genera into an expanded Accipiter, species were moved from Accipiter to the resurrected genera Aerospiza, Tachyspiza and Astur.
List of Accipiter species
There are nine species in the genus Accipiter.
| Biology and health sciences | Accipitriformes and Falconiformes | null |
378783 | https://en.wikipedia.org/wiki/Ectotherm | Ectotherm | An ectotherm (from the Greek () "outside" and () "heat"), more commonly referred to as a "cold-blooded animal", is an animal in which internal physiological sources of heat, such as blood, are of relatively small or quite negligible importance in controlling body temperature. Such organisms (frogs, for example) rely on environmental heat sources, which permit them to operate at very economical metabolic rates.
Some of these animals live in environments where temperatures are practically constant, as is typical of regions of the abyssal ocean, and hence can be regarded as homeothermic ectotherms. In contrast, in places where temperature varies so widely as to limit the physiological activities of other kinds of ectotherms, many species habitually seek out external sources of heat or shelter from heat; for example, many reptiles regulate their body temperature by basking in the sun, or seeking shade when necessary, in addition to a host of other behavioral thermoregulation mechanisms.
In contrast to ectotherms, endotherms rely largely, even predominantly, on heat from internal metabolic processes, and mesotherms use an intermediate strategy.
Because there are more than two categories of temperature control utilized by animals, the terms warm-blooded and cold-blooded have been deprecated as scientific terms.
Adaptations
Various patterns of behavior enable certain ectotherms to regulate body temperature to a useful extent. To warm up, reptiles and many insects find sunny places and adopt positions that maximise their exposure; at harmfully high temperatures they seek shade or cooler water. In cold weather, honey bees huddle together to retain heat. Butterflies and moths may orient their wings to maximize exposure to solar radiation in order to build up heat before take-off. Gregarious caterpillars, such as the forest tent caterpillar and fall webworm, benefit from basking in large groups for thermoregulation. Many flying insects, such as honey bees and bumble bees, also raise their internal temperatures endothermally prior to flight, by vibrating their flight muscles without violent movement of the wings. Such endothermal activity is an example of the difficulty of consistent application of terms such as poikilothermy and homeothermy.
In addition to behavioral adaptations, physiological adaptations help ectotherms regulate temperature. Diving reptiles conserve heat by heat exchange mechanisms, whereby cold blood from the skin picks up heat from blood moving outward from the body core, re-using and thereby conserving some of the heat that otherwise would have been wasted. The skin of bullfrogs secretes more mucus when it is hot, allowing more cooling by evaporation.
During periods of cold, some ectotherms enter a state of torpor, in which their metabolism slows or, in some cases, such as the wood frog, effectively stops. The torpor might last overnight or last for a season, or even for years, depending on the species and circumstances.
Owners of reptiles may use an ultraviolet light system to assist their pets' basking.
Pros and cons
Ectotherms rely largely on external heat sources such as sunlight to achieve their optimal body temperature for various bodily activities. Accordingly, they depend on ambient conditions to reach operational body temperatures. In contrast, endothermic animals maintain nearly constant high operational body temperatures largely by reliance on internal heat produced by metabolically active organs (liver, kidney, heart, brain, muscle) or even by specialized heat producing organs like brown adipose tissue. Ectotherms typically have lower metabolic rates than endotherms at a given body mass. As a consequence, endotherms generally rely on higher food consumption, and commonly on food of higher energy content. Such requirements may limit the carrying capacity of a given environment for endotherms as compared to its carrying capacity for ectotherms.
Because ectotherms depend on environmental conditions for body temperature regulation, they are, as a rule, more sluggish at night and in the early morning. When they emerge from shelter, many diurnal ectotherms need to heat up in the early sunlight before they can begin their daily activities. In cool weather the foraging activity of most vertebrate ectotherms is therefore restricted to the daytime, and in cold climates most cannot survive at all. In lizards, for instance, most nocturnal species are geckos specialising in "sit and wait" foraging strategies. Such strategies do not require as much energy as active foraging and do not require hunting activity of the same intensity. From another point of view, sit-and-wait predation may require very long periods of unproductive waiting. Endotherms cannot, in general, afford such long periods without food, but suitably adapted ectotherms can wait without expending much energy. Endothermic vertebrate species are therefore less dependent on environmental conditions and have developed higher variability (both within and between species) in their daily patterns of activity.
In ectotherms, fluctuating ambient temperatures may affect the body temperature. Such variation in body temperature is called poikilothermy, though the concept is not widely satisfactory and use of the term is declining. In small aquatic creatures such as Rotifera, poikilothermy is practically absolute, but other creatures (like crabs) have wider physiological options at their disposal: they can move to preferred temperatures, avoid ambient temperature changes, or moderate their effects. Ectotherms can also display features of homeothermy, especially among aquatic organisms, whose range of ambient temperatures is usually relatively constant; few attempt to maintain a higher internal temperature, owing to the high associated costs.
| Biology and health sciences | Basics | Biology |
379035 | https://en.wikipedia.org/wiki/Asian%20elephant | Asian elephant | The Asian elephant (Elephas maximus), also known as the Asiatic elephant, is a species of elephant distributed throughout the Indian subcontinent and Southeast Asia, from India in the west to Borneo in the east, and Nepal in the north to Sumatra in the south. Three subspecies are recognised—E. m. maximus, E. m. indicus and E. m. sumatranus. The Asian elephant is characterised by its long trunk with a single finger-like process at the tip; large tusks in males; large, laterally folded ears that are nevertheless smaller than those of African elephants; and wrinkled grey skin. The skin is smoother than that of African elephants and may be depigmented on the trunk, ears or neck. Adult males average in weight, and females .
It is one of only three living species of elephants or elephantids anywhere in the world, the others being the African bush elephant and African forest elephant. Further, the Asian elephant is the only living species of the genus Elephas. It is the second largest species of elephant after the African bush elephant. It frequently inhabits grasslands, tropical evergreen forests, semi-evergreen forests, moist deciduous forests, dry deciduous forests and dry thorn forests. They are herbivorous, eating about of vegetation per day. Cows and calves form groups, while males remain solitary or form "bachelor groups" with other males. During the breeding season, males will temporarily join female groups to mate. Asian elephants have a large and well-developed neocortex of the brain, are highly intelligent and self-aware, and are able to display behaviors associated with grief, learning and greeting.
The Asian elephant is the largest living land animal in Asia. Since 1986, the Asian elephant has been listed as Endangered on the IUCN Red List, as the population has declined by at least 50 per cent over the last three elephant generations, which is about 60–75 years. It is primarily threatened by loss of habitat, habitat degradation, fragmentation and poaching. Wild Asian elephants live to be about 60 years old. While female captive elephants are recorded to have lived beyond 60 years when kept in semi-natural surroundings, Asian elephants die at a much younger age in captivity; captive populations are declining due to a low birth and high death rate. The earliest indications of captive use of Asian elephants are engravings on seals of the Indus Valley civilisation dated to the 3rd millennium BC.
Taxonomy
Carl Linnaeus proposed the scientific name Elephas maximus in 1758 for an elephant from Ceylon. Elephas indicus was proposed by Georges Cuvier in 1798, who described an elephant from India. Coenraad Jacob Temminck named an elephant from Sumatra Elephas sumatranus in 1847. Frederick Nutter Chasen classified all three as subspecies of the Asian elephant in 1940. These three subspecies are currently recognised as valid taxa. Results of phylogeographic and morphological analyses indicate that the Sri Lankan and Indian elephants are not distinct enough to warrant classification as separate subspecies.
Three subspecies are recognised:
Sri Lankan elephant (E. maximus maximus )
Indian elephant (E. maximus indicus )
Sumatran elephant (E. maximus sumatranus )
Sri Lankan elephants are the largest subspecies. Their skin colour is darker than that of E. m. indicus and E. m. sumatranus, with larger and more distinct patches of depigmentation on the ears, face, trunk and belly. The skin colour of the Indian elephant is generally grey, lighter than that of E. m. maximus but darker than that of E. m. sumatranus.
A potential fourth subspecies, the Borneo elephant (Elephas maximus borneensis), occurs in Borneo's northeastern parts, primarily in Sabah (Malaysia), and sometimes in Kalimantan (Indonesia). It was proposed by Paules Deraniyagala in 1950, who described an elephant from an illustration published in the National Geographic magazine, rather than from a living elephant as required by the rules of the International Code of Zoological Nomenclature. The elephants living in northern Borneo are smaller than all the other subspecies, but have larger ears, a longer tail, and straighter tusks. Results of genetic analysis indicate that their ancestors separated from the mainland population about 300,000 years ago. A study in 2003, using mitochondrial DNA analysis and microsatellite data, indicated that the Borneo elephant population is derived from stock that originated in the region of the Sunda Islands, and suggested that the Borneo population has been separated from the other elephant populations of southeast Asia since the Pleistocene.
The following Asian elephants were proposed as extinct subspecies, but are now considered synonymous with the Indian elephant:
Syrian elephant (E. m. asurus), proposed by Deraniyagala, based on fossil remains and Bronze Age illustrations.
Chinese elephant (E. m. rubridens), also proposed by Deraniyagala, based on a Chinese bronze statuette.
Javan elephant (E. m. sondaicus), also proposed by Deraniyagala, based on an illustration of a carving on the Buddhist monument of Borobudur.
Evolution
The genus Elephas, of which the Asian elephant is the only living member, is the closest relative of the extinct mammoths. The two groups are estimated to have split from each other around 7 million years ago.
Elephas originated in Sub-Saharan Africa during the Pliocene and spread throughout Africa before expanding into the southern half of Asia. The earliest Elephas species, Elephas ekorensis, is known from the Early Pliocene of East Africa, around 5–4.2 million years ago. The oldest remains of the genus in Asia are known from the Siwalik Hills in the Indian subcontinent, dating to the late Pliocene, around 3.6-3.2 million years ago, assigned to the species Elephas planifrons. The modern Asian elephant is suggested to have evolved from the species Elephas hysudricus, which first appeared at the beginning of the Early Pleistocene around 2.6 million years ago, and is primarily known from remains of Early-Middle Pleistocene age found on the Indian subcontinent. Skeletal remains of E. m. asurus have been recorded from the Middle East: Iran, Iraq, Syria, and Turkey from periods dating between at least 1800 BC and likely 700 BC.
Description
In general, the Asian elephant is smaller than the African bush elephant and has the highest body point on the head. The back is convex or level. The ears are small with dorsal borders folded laterally. It has up to 20 pairs of ribs and 34 caudal vertebrae. The feet have five nail-like structures on each forefoot, and four on each hind foot. The forehead has two hemispherical bulges, unlike the flat front of the African elephants. Its long trunk or proboscis has only one fingerlike tip, in contrast to the African elephants which have two. Hence, the Asian species relies more on wrapping around a food item and squeezing it into its mouth, rather than grasping with the tip. Asian elephants have more muscle coordination and can perform more complex tasks.

Cows usually lack tusks; if tusks—in that case, called "tushes"—are present, they are barely visible and only seen when the mouth is open. The enamel plates of the molars are greater in number and closer together in Asian elephants. Some bulls may also lack tusks; these individuals are called "makhnas", and are especially common among the Sri Lankan elephant population. A tusk from an tall elephant killed by Sir Victor Brooke measured in length, and nearly in circumference, and weighed . This tusk's weight is, however, exceeded by the weight of a shorter tusk of about in length which weighed , and there have reportedly been tusks weighing over .
Skin colour is usually grey, and may be masked by soil because of dusting and wallowing. Their wrinkled skin is movable and contains many nerve centres. It is smoother than that of African elephants and may be depigmented on the trunk, ears, or neck. The epidermis and dermis of the body average thick; skin on the dorsum is thick providing protection against bites, bumps, and adverse weather. Its folds increase surface area for heat dissipation. They can tolerate cold better than excessive heat. Skin temperature varies from . Body temperature averages .
Size
On average, when fully grown, bulls are about tall at the shoulder and in weight, while cows are smaller at about at the shoulder and in weight. Sexual dimorphism in body size is relatively less pronounced in Asian elephants than in African bush elephants, with bulls averaging 15% and 23% taller in the former and latter respectively. Length of body and head including trunk is with the tail being long. The largest bull elephant ever recorded was shot by the Maharajah of Susang in the Garo Hills of Assam, India, in 1924; it weighed an estimated , stood tall at the shoulder and was long from head to tail. The Raja Gaj elephant of Bardia National Park was estimated to be tall at the shoulder and was one of the biggest Asian bull elephants. There are reports of larger individuals as tall as .
Distribution and habitat
Asian elephants are distributed throughout the Indian subcontinent and Southeast Asia, from India in the west, to Borneo in the east, and Nepal in the north, to Sumatra in the south. They inhabit grasslands, tropical evergreen forests, semi-evergreen forests, moist deciduous forests, dry deciduous forests and dry thorn forests, in addition to cultivated and secondary forests and scrublands. Over this range of habitat types elephants occur from sea level to over . In the eastern Himalaya in northeast India, they regularly move up above in summer at a few sites.
In Bangladesh, some isolated populations survived in the south-east Chittagong Hills in the early 1990s. In Malaysia's northern Johor and Terengganu National Park, two Asian elephants tracked using satellite technology spent most of their time in secondary or "logged-over" forest; they spent 75% of their time in an area less than away from a water source. In China, the Asian elephant survives only in the prefectures of Xishuangbanna, Simao and Lincang of southern Yunnan. , the estimated population was around 300 individuals.
As of 2017, the estimated wild population in India accounted for nearly three-fourths of the extant population, at 27,312 individuals. In 2019, the Asian elephant population in India increased to an estimated 27,000–29,000 individuals. , the global wild population was estimated at 48,323–51,680 individuals.
Ecology and behaviour
Asian elephants are crepuscular. They are classified as megaherbivores and consume up to of plant matter per day. Around 50 to 75% of the day is devoted to eating. They are generalist feeders, and are both grazers and browsers. They are known to feed on at least 112 different plant species, most commonly of the order Malvales, as well as the legume, palm, sedge and true grass families. They browse more in the dry season with bark constituting a major part of their diet in the cool part of that season. They drink at least once a day and are never far from a permanent source of fresh water. They need 80–200 litres of water a day and use even more for bathing. At times, they scrape the soil for clay or minerals.
Cows and calves move about together as groups, while bulls disperse from their mothers upon reaching adolescence. Bulls are solitary or form temporary "bachelor groups". Cow-calf units generally tend to be small, typically consisting of three adults (most likely related females) and their offspring. Larger groups of as many as 15 adult females have also been recorded. Seasonal aggregations of 17 individuals including calves and young adults have been observed in Sri Lanka's Uda Walawe National Park. Until recently, Asian elephants, like African elephants, were thought to be under the leadership of older adult females, or matriarchs. It is now recognized that cows form extensive and very fluid social networks, with varying degrees of association between individuals. Social ties generally tend to be weaker than in African bush elephants. Unlike African elephants, which rarely use their forefeet for anything other than digging or scraping soil, Asian elephants are more agile at using their feet in conjunction with the trunk for manipulating objects. They are sometimes known for violent behaviour.
Asian elephants are recorded to make three basic sounds: growls, squeaks and snorts. Growls in their basic form are used for short distance communication. During mild arousal, growls resonate in the trunk and become rumbles while for long-distance communication, they escalate into roars. Low-frequency growls are infrasonic and made in many contexts. Squeaks come in two forms: chirpings and trumpets. Chirping consists of multiple short squeaks and signals conflict and nervousness. Trumpets are lengthened squeaks with increased loudness and are produced during extreme arousal. Snorts signal changes in activity and increase in loudness during mild or strong arousal. During the latter case, when an elephant bounces the tip of the trunk, it creates booms which serve as threat displays. Elephants can distinguish low-amplitude sounds.
Rarely, tigers have been recorded attacking and killing calves, especially if the calves become separated from their mothers, stranded from their herd, or orphaned. Adults are largely invulnerable to natural predation. There is a singular anecdotal case of a mother Asian elephant allegedly being killed alongside her calf; however, this account is contestable. In 2011 and 2014, two instances were recorded of tigers successfully killing adult elephants; one by a single tiger in Jim Corbett National Park on a 20-year-old young adult elephant cow, and another on a 28-year-old sick adult bull in Kaziranga National Park further east, which was taken down and eaten by several tigers hunting cooperatively. Elephants appear to distinguish between the growls of larger predators like tigers and smaller predators like leopards; they react to leopards less fearfully and more aggressively.
Reproduction
Reproduction in Asian elephants can be attributed to the production and perception of signaling compounds called pheromones. These signals are transmitted through various bodily fluids. They are commonly released in urine, but in males they are also found in special secretions from the temporal glands. Once integrated and perceived, these signals provide the receiver with information about the reproductive status of the sender. If both parties are ready to breed, ritualistic reproductive behavior occurs and the process of sexual reproduction proceeds.
Bulls will fight one another to get access to oestrous cows. Strong fights over access to females are extremely rare. Bulls reach sexual maturity around the age of 12–15. Between the ages of 10 and 20 years, bulls undergo an annual phenomenon known as "musth". This is a period in which the testosterone level is up to 100 times greater than during non-musth periods, and they become aggressive. Secretions containing pheromones occur during this period, from the paired temporal glands located on the head between the lateral edge of the eye and the base of the ear. The aggressive behaviors observed during musth can be attributed to varying amounts of frontalin (1,5-dimethyl-6,8-dioxabicyclo[3.2.1]octane) throughout the maturation process of bulls. Frontalin is a pheromone that was first isolated in bark beetles but is also produced by bulls of both Asian and African elephants. The compound can be excreted through urine as well as through the temporal glands of the bull, allowing signaling to occur. During musth, increased concentrations of frontalin in the bull's urine communicate the reproductive status of the bull to female elephants.
Similar to other mammals, hormone secretion in female elephants is regulated by an estrous cycle. This cycle is regulated by surges in Luteinizing hormone that are observed three weeks from each other. This type of estrous cycle has also been observed in African Elephants but is not known to affect other mammals. The first surge in Luteinizing hormone is not followed by the release of an egg from the ovaries. However, some female elephants still exhibit the expected mating protocols during this surge. Female elephants give ovulatory cues by utilizing sex pheromones. A principal component thereof, (Z)-7-dodecen-1-yl acetate, has also been found to be a sex pheromone in numerous species of insects. In both insects and elephants, this chemical compound is used as an attractant to assist the mating process. In elephants, the chemical is secreted through urination and this aids in the attraction of bulls to mate. Once detected, the chemical stimulates the vomeronasal organ of the bull, thus providing information on the maturity of the female.
Reproductive signaling between male and female elephants is transmitted through olfactory cues in bodily fluids. In males, the increase in frontalin during musth heightens their sensitivity to the (Z)-7-dodecen-1-yl acetate produced by female elephants. Once perceived by receptors in the trunk, a sequence of ritualistic behaviors follows. The responses in males vary based on both the stage of development and the temperament of the elephant. This process of receiving and processing signals through the trunk is referred to as flehmen. Differences in body movement give cues as to whether the male is interested in breeding with the female that produced the secretion. A bull that is ready to breed will move closer to the urine, and in some cases an erection response is elicited. A bull that is not ready to breed will be timid and try to dissociate himself from the signal. In addition to reproductive communication, chemosensory signaling is used to facilitate same-sex interactions. When less developed males detect pheromones from a male in musth, they often retreat to avoid aggressive behaviors. Female elephants have also been seen to communicate with each other through pheromones in urine. The purpose of this type of same-sex communication is still being investigated; however, there are clear differences in signaling strength and receiver response throughout different stages of the estrous cycle.
The gestation period is 18–22 months, and the cow gives birth to one calf, only occasionally twins. The calf is fully developed by the 19th month, but stays in the womb to grow so that it can reach its mother to feed. At birth, the calf weighs about , and is suckled for up to three years. Once a female gives birth, she usually does not breed again until the first calf is weaned, resulting in a four to five-year birth interval. During this period, mother to calf communication primarily takes place through temporal means. However, male calves have been known to develop sex pheromone-producing organs at a young age. Early maturity of the vomeronasal organ allows immature elephants to produce and receive pheromones. It is unlikely that the integration of these pheromones will result in a flehmen response in a calf. Females stay on with the herd, but mature males are chased away.
Female Asian elephants become sexually mature around the age of 10–15 and keep growing until about age 30, while males mature fully after the age of 25 and continue to grow throughout their lives. Average elephant life expectancy is approximately 60 years. Some individuals are known to have lived into their late 80s. The generation length of the Asian elephant is 22 years.
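As a quick consistency check, the 22-year generation length squares with the roughly 60–75 years quoted earlier in the article for three elephant generations:

```python
# Three generations at the 22-year generation length given in the article.
GENERATION_LENGTH_YEARS = 22
three_generations = 3 * GENERATION_LENGTH_YEARS
print(three_generations)  # 66, within the quoted 60-75 year range
```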
Intelligence
Asian elephants have a very large and highly developed neocortex, a trait also shared by humans, apes and certain dolphin species. They have a greater volume of cerebral cortex available for cognitive processing than all other existing land animals. Results of studies indicate that Asian elephants have cognitive abilities for tool use and tool-making similar to great apes. They exhibit a wide variety of behaviours, including those associated with grief, learning, allomothering, mimicry, play, altruism, use of tools, compassion, cooperation, self-awareness, memory, and language. Elephants reportedly head to safer ground during natural disasters like tsunamis and earthquakes, but data from two satellite-collared Sri Lankan elephants indicate this may be untrue. Several students of elephant cognition and neuroanatomy are convinced that Asian elephants are highly intelligent and self-aware. Others contest this view.
Threats
The pre-eminent threats to the Asian elephant today are the loss, degradation and fragmentation of its habitat, which leads to increasing conflicts between humans and elephants. Asian elephants are poached for ivory and a variety of other products including meat and leather. The demand for elephant skin has risen due to it being an increasingly-common ingredient in traditional Chinese medicine.
Human–elephant conflict
In some parts of Asia, people and elephants have co-existed for thousands of years. In other areas, people and elephants come into conflict, resulting in violence and, ultimately, the displacement of elephants. The main causes of human-elephant conflict include the growing human population, large-scale development projects and poor top-down governance. Proximate causes include habitat loss due to deforestation, disruption of elephant migratory routes, expansion of agriculture and illegal encroachment into protected areas.
Destruction of forests through logging, encroachment, slash-and-burn, shifting cultivation, and monoculture tree plantations are major threats to the survival of elephants. Human–elephant conflicts occur when elephants raid crops of shifting cultivators in fields, which are scattered over a large area interspersed with forests. Depredation in human settlements is another major area of human–elephant conflict occurring in small forest pockets, encroachments into elephant habitat, and on elephant migration routes. However, studies in Sri Lanka indicate that traditional slash-and-burn agriculture may create optimal habitats for elephants by creating a mosaic of successional-stage vegetation. Populations inhabiting small habitat fragments are much more liable to come into conflict with humans.
Development such as border fencing along the India–Bangladesh border has become a major impediment to the free movement of elephants. In Assam, more than 1,150 humans and 370 elephants died as a result of human-elephant conflict between 1980 and 2003. In a 2010 study, it was estimated that in India alone, over 400 people were killed by elephants each year, and 0.8 to 1 million hectares were damaged, affecting at least 500,000 families across the country. Moreover, elephants are known to destroy crops worth up to US$2–3 million annually. This has major impacts on the welfare and livelihoods of local communities, as well as the future conservation of this species. In countries like Bangladesh and Sri Lanka, the Asian elephant is one of the most feared wild animals, even though they are less deadly than other local animals such as venomous snakes (which were estimated to claim more than 30 times more lives in Sri Lanka than elephants).
As a whole, Asian elephants display highly sophisticated and sometimes unpredictable behaviour. Most untamed elephants try to avoid humans, but if they are caught off guard by any perceived physical threat, including humans, they will likely charge. This is especially true of males in musth and of females with young. Gunfire and other similar methods of deterring, which are known to be effective against many kinds of wild animals including tigers, may or may not work with elephants, and can even worsen the situation. Elephants that have been abused by humans in the past often become "rogue elephants", which regularly attack people with no provocation.
Poaching
For ivory
The demand for ivory during the 1970s and 1980s, particularly in East Asia, led to rampant poaching and the serious decline of elephants in both Africa and Asia. In Thailand, the illegal trade in live elephants and ivory still flourishes. Although the amount of ivory being openly sold has decreased substantially since 2001, Thailand still has one of the largest and most active black markets for ivory seen anywhere in the world. Tusks from Thai-poached elephants also enter the market; between 1992 and 1997 at least 24 male elephants were killed for their tusks.
Up to the early 1990s, Vietnamese ivory craftsmen used exclusively Asian elephant ivory from Vietnam and neighbouring Laos and Cambodia. Before 1990, there were few tourists, and the low demand for worked ivory could be supplied by domestic elephants. Economic liberalisation and an increase in tourism raised both local and visitor demand for worked ivory, which resulted in heavy poaching.
For skin
The skin of the Asian elephant is used as an ingredient in Chinese medicine as well as in the manufacture of ornamental beads. The practice has been aided by China's State Forestry Administration (SFA), which has issued licences for the manufacture and sale of pharmaceutical products containing elephant skin, thereby making the trade legal. In 2010, four skinned elephants were found in a forest in Myanmar; 26 elephants were killed by poachers in 2013 and 61 in 2016. According to the NGO Elephant Family, Myanmar, where a poaching crisis has developed rapidly since 2010, is the main source of elephant skin.
Disease
The elephant endotheliotropic herpesvirus (EEHV) is a member of the Proboscivirus genus, a novel clade most closely related to the mammalian betaherpesviruses. As of 2011, it is responsible for as many as 70 deaths of both zoo and wild Asian elephants worldwide, especially in young calves. In particular, several incidents of calves dying from elephant endotheliotropic herpesvirus have been recorded in Myanmar. The elephant schistosome is a parasitic trematode that uses the Asian elephant as a definitive host. Two other hosts may be the Indian elephant and the greater one-horned rhinoceros.
Conservation
The Asian elephant is listed on CITES Appendix I. It is a quintessential flagship species, deployed to catalyse a range of conservation goals, including habitat conservation at landscape scales, generation of public awareness of conservation issues, and mobilisation as a popular cultural icon in both India and the West. A key aspect of conservation is connectivity of the preferred movement routes of Asian elephants through areas with high vegetation cover and low human population density.
World Elephant Day has been celebrated annually on 12 August since 2012. Events are organised to share information and engage people with the problems that the Asian elephant faces. August has been established as Asian Elephant Awareness Month by zoos and conservation partners in the United States.
Karnataka state in India hosts the most Asian elephants of any known area, comprising around 20% of the total population in the country. The distribution of elephants in the state according to one estimate is roughly . A 2013 study estimated that 10,000 elephants inhabited the Western Ghats, primarily threatened by poaching and habitat fragmentation; an increase in conflict with humans was also cited as a likely issue. Conservation plans aimed to establish wildlife corridors, stop the poaching of bulls, and protect or manage land area. Project Elephant was initiated in 1992 as a Centrally Sponsored Scheme (CSS) by the Ministry of Environment, Forest and Climate Change of the Government of India to protect the Indian elephant and its habitats and to establish dedicated elephant reserves for sustaining elephant populations.
The distribution of elephants in Sri Lanka is only two-fifths of what it was in the late 19th and early 20th centuries. Because of this decrease, interactions with humans occur much more frequently. In a 2003 survey, some local people expressed disapproval of the conservation of Asian elephants, as farmers viewed them as pests; however, most of the participants were supportive of the idea.
In China, Asian elephants are under first-level protection. Yunnan province has 11 national and regional nature reserves. In total, the covered protected area in China is about . In 2020, the population of Asian elephants in Yunnan was estimated at around 300 individuals. As conflicts between humans and wild elephants have emerged around protected areas in recent years, the prefecture of Xishuangbanna has built food bases and planted bananas and bamboo to create a better habitat.
In Thailand, Salakpra Wildlife Sanctuary and Tham Than Lot National Park are protected areas hosting around 250–300 elephants, according to figures from . In recent years, the national park has faced issues due to encroachment and over-exploitation. In India, the National Board for Wildlife recommended allowing coal mining in Dehing Patkai National Park in April 2020. The decision raised concerns among students and environmental activists, who launched an online campaign to stop the project.
In captivity
About half of the global zoo elephant population is kept in European zoos, where they have less than half (18.9 years) the median life span of conspecifics (41.6 years) in protected populations in range countries. This discrepancy is clearest in Asian elephants: infant mortality is two to three times that seen in Burmese timber camps, and adult survivorship in zoos has not improved significantly in recent years. One risk factor for Asian zoo elephants is being moved between institutions, with early removal from the mother tending to have additional adverse effects. Another risk factor is being born in a zoo rather than imported from the wild, with the poor adult survivorship of zoo-born Asian elephants apparently being conferred prenatally or in early infancy. Likely causes of compromised survivorship are stress and obesity. Foot problems are commonly observed in captive elephants and are related to lack of exercise, long hours standing on hard substrates, and contamination from standing in dung. Many of these problems are treatable; however, mistreatment may lead to serious disability or death.
Demographic analysis of captive Asian elephants in North America indicates that the population is not self-sustaining. First-year mortality is nearly 30 per cent, and fecundity is extremely low throughout the prime reproductive years. Data from North American and European regional studbooks from 1962 to 2006 were analysed for deviations in the birth and juvenile death sex ratios. Of 349 captive calves born, 142 died prematurely, within one month of birth; the major causes were stillbirth and infanticide by either the calf's mother or one of her exhibition mates. The sex ratio of stillbirths in Europe was found to tend towards an excess of males.
Handling methods
Young elephants are captured from the wild and illegally imported to Thailand from Myanmar for use in the tourism industry; calves are used mainly in amusement parks and are trained to perform various stunts for tourists. The calves are often subjected to a 'breaking in' process, which may involve being tied up, confined, starved, beaten and tortured; as a result, two-thirds may perish. Handlers use a technique known as the training crush, in which "handlers use sleep-deprivation, hunger, and thirst to 'break' the elephants' spirit and make them submissive to their owners"; moreover, handlers drive nails into the elephants' ears and feet.
In culture
The Asian elephant is the national animal of Thailand and Laos. It has also been declared as the national heritage animal of India. Bones of Asian elephants excavated at Mohenjo-daro in the Indus Valley indicate that they were tamed in the Indus Valley Civilisation and used for work. Decorated elephants are also depicted on seals and were modelled in clay. The Asian elephant became a siege engine, a mount in war, a status symbol, a beast of burden, and an elevated platform for hunting during historical times in South Asia.
Asian elephants have been captured from the wild and tamed for use by humans. Elephants can remember tone, melody, and words, allowing them to recognise more than 20 verbal commands. Their ability to work under instruction makes them particularly useful for carrying heavy objects. They have been used particularly for timber-carrying in jungle areas. Other than their work use, they have been used in war, in ceremonies, and for carriage. It is reported that war elephants are still used by the Kachin Independence Army (KIA) in Kachin State in northern Myanmar against Myanmar's military. The KIA use about four dozen elephants to carry supplies.
The Asian elephant plays an important part in the culture of the subcontinent and beyond, being featured prominently in the Panchatantra fables and the Buddhist Jataka tales. They play a major role in Hinduism: the god Ganesha's head is that of an elephant, and the "blessings" of a temple elephant are highly valued. Elephants are frequently used in processions where the animals are adorned with festive outfits.
The Asian elephant is depicted in several Indian manuscripts and treatises, notable amongst these being the Matanga Lila (elephant sport) of Nilakantha. The manuscript Hastividyarnava is from Assam in northeast India. In the animal and planetary zodiacs of the region, the tusked and tuskless Asian elephant are the fourth and fifth animal signs of the Burmese zodiac, while the elephant is the fourth animal sign of the Thai zodiac and the second of the Sinhalese people of Sri Lanka. Similarly, the elephant is the twelfth animal sign in the zodiac of the Dai people in southern China.
American bullfrog

The American bullfrog (Lithobates catesbeianus), often simply known as the bullfrog in Canada and the United States, is a large true frog native to eastern North America. It typically inhabits large permanent water bodies such as swamps, ponds, and lakes. Bullfrogs can also be found in man-made habitats such as pools, koi ponds, canals, ditches, and culverts. The bullfrog gets its name from the sound the male makes during the breeding season, which resembles the bellowing of a bull. The bullfrog is large and is commonly eaten throughout its range, especially in the southern United States, where it is plentiful.
Their use as a food source has led to bullfrogs being distributed around the world outside their native range. Bullfrogs have been introduced into the western United States, South America, western Europe, China, Japan, and Southeast Asia. In these places they are considered an invasive species due to their voracious appetite and the large number of eggs they produce, which have a negative effect on native amphibians, certain insects, and other fauna. Bullfrogs are very skittish, which can make their capture difficult, and thus they often become established.
Other than for food, bullfrogs are also used for dissection in science classes. Albino bullfrogs are sometimes kept as pets, and bullfrog tadpoles are often sold at ponds or fish stores.
Taxonomy
Some authorities use the scientific name Lithobates catesbeianus, although others prefer Rana catesbeiana.
Genome
The nuclear genome (~5.8 Gbp) of the North American bullfrog (Rana [Lithobates] catesbeiana) was published in 2017 and provides a resource for future Ranidae research.
Etymology
The specific name, catesbeiana (feminine) or catesbeianus (masculine), is in honor of English naturalist Mark Catesby.
Description
The dorsal (upper) surface of the bullfrog has an olive-green basal color, either plain or with mottling and banding of grayish brown. The ventral (under) surface is off-white blotched with yellow or gray. Often, a marked contrast in color is seen between the green upper lip and the pale lower lip. The teeth are tiny and are useful only in grasping. The eyes are prominent with brown irises and horizontal, almond-shaped pupils. The tympana (eardrums) are easily seen just behind the eyes and the dorsolateral folds of skin enclose them. The limbs are blotched or banded with gray. The fore legs are short and sturdy and the hind legs are long. The front toes are not webbed, but the back toes have webbing between the digits with the exception of the fourth toe, which is unwebbed.
Bullfrogs are sexually dimorphic, with males being smaller than females and having yellow throats. Males have tympana larger than their eyes, whereas the tympana in females are about the same size as the eyes. Bullfrogs measure about in snout–to–vent length. They grow fast in the first eight months of life, typically increasing in weight from , and large, mature individuals can weigh up to . In some cases bullfrogs have been recorded as attaining and measuring up to from snout to vent. The American bullfrog is the largest species of true frog in North America.
Online Model Organism Database
Xenbase provides limited support (BLAST, JBrowse tracks, genome download) for the bullfrog.
Distribution
The bullfrog is originally native to eastern North America, where it is commonly found in every U.S. state east of the Mississippi River. Its natural range extends from the eastern Canadian Maritime Provinces to as far west as Idaho and Texas, and as far north as Michigan (including the Upper Peninsula), Minnesota and Montana; it is largely absent in North Dakota. The bullfrog has also been introduced onto Nantucket island, as well as portions of the western U.S., including Arizona, Colorado, Hawaii, Idaho, Nevada, New Mexico, Oregon, Utah, Washington and Wyoming. In these states, it is considered to be an invasive species, as concerns exist that it may outcompete or prey upon native species of reptiles and amphibians, disrupting the delicate ecological balance of certain areas. The bullfrog has been introduced in Hawaii, South America, Asia, the Caribbean, and Europe for various purposes including frog farming and population control of other species. It is very common on the West Coast, especially in California, where it is believed to pose a threat to the California red-legged frog, and is considered to be a factor in the decline of that vulnerable species. Bullfrogs have been found to feed on the young of several snakes, including the California endemic giant garter snake, a threatened species. In early 2023, the Utah Department of Natural Resources began tweeting tips on how to catch and cook bullfrogs in an effort to encourage residents to help control the growing population by catching the invasive frogs for food.
Other countries and regions into which the bullfrog has been introduced include the extreme south of British Columbia, Canada; nearly every state in Mexico; as well as Belgium, Cuba, the Dominican Republic, Haiti, Italy, Jamaica, the Netherlands, and Puerto Rico. It is also found in Argentina, Brazil, China, Colombia, Japan, South Korea, Uruguay, and Venezuela. Introductions of the bullfrog to these areas have largely been intentional, either to provide humans with a source of food or to serve as biological control agents. Unintended escapes from breeding establishments or scientific research facilities and the release of captive pets are other routes of introduction. Conservationists are concerned that the bullfrog is relatively immune to the fungal infection chytridiomycosis (also called 'chytrid' fungus), which has been ravaging numerous frog species, and that, as it invades new territories, it may assist in the spread of this lethal fungus as an asymptomatic carrier to the more susceptible native species of frog it encounters.
Breeding behavior
The bullfrog breeding season typically lasts two to three months. A study of bullfrogs in Michigan showed the males arriving at the breeding site in late May or early June, and remaining in the area into July. The territorial males that occupy sites are usually spaced some apart and call loudly. At least three different types of calls have been noted in male bullfrogs under different circumstances. These distinctive calls include territorial calls made as threats to other males, advertisement calls made to attract females, and encounter calls which precede combat.
Bullfrogs have a prolonged breeding season, with males continuously engaging in sexual activity throughout. In Oregon, however, the reproductive cycle of American bullfrogs is mainly restricted to the summer, when individuals congregate in lentic freshwater systems. Males are present at the breeding pond for longer periods than females during the season, increasing their chances of multiple matings, and the sex ratio is typically skewed toward males. Conversely, females have brief periods of sexual receptivity during the season. In one study, female sexual activity typically lasted for a single night, and mating did not occur unless the female initiated the physical contact. Males only clasp females after the females have indicated their willingness to mate; this finding refutes previous claims that a male frog will clasp any proximate female regardless of whether she has consented. Once a male finds a receptive female, he clasps onto her and assumes amplexus, the reproductive position, using his forelimbs. Enlarged forelimb muscles are a sexually dimorphic trait of male bullfrogs. One study of male and female bullfrog forelimb muscles found that males had significantly stronger muscles that could sustain longer activity before the onset of fatigue. This forelimb dimorphism allows males to remain in amplexus with the female for longer, increasing their chance of reproductive success in the highly competitive mating environment.
These male and female behaviors cause male-to-male competition to be high within the bullfrog population and sexual selection for the females to be an intense process. Kentwood Wells postulated leks, territorial polygyny, and harems are the most likely classifications for the bullfrog mating system. Leks would be a valid description because males congregate to attract females, and the females arrive to the site for the purpose of copulation. In a 1980 study on bullfrogs in New Jersey, the mating system was classified as resource-defense polygyny. The males defended territories within the group and demonstrated typical physical forms of defense.
Choruses
Male bullfrogs aggregate into groups called choruses, a behavior analogous to the lek formation of birds, mammals, and other vertebrates. Choruses are dynamic: they form and remain associated for a few days, break down temporarily, and then form again in a new area with a different group of males. In the Michigan study, the choruses were described as "centers of attraction" in which the males' larger numbers enhanced their overall acoustical displays, making them more attractive both to females and to other sexually active males. Choruses in this study constantly formed and broke up, with new choruses forming in other areas of the site, and males were highly mobile within and between them.
A review of multiple studies on bullfrogs and other anurans noted that male behavior within the groups changes according to the population density of the leks. At higher population densities, leks are favored due to the difficulty of defending individual territories among a large population of males. This variance causes differences in how females choose their mates. When the male population density is low and males maintain clearer, more distinct territories, female choice is mostly determined by territory quality. When male population density is higher, females depend on other cues to select their mates, including the males' positions within the chorus and differences in male display behaviors, among other determinants. Social dominance within the choruses is established through challenges, threats, and other physical displays. Older males tend to acquire more central locations, while younger males are restricted to the periphery.
Chorus tenure is the number of nights that a male participates in the breeding chorus. One study distinguishes between chorus tenure and dominant tenure. Dominant tenure is more strictly defined as the amount of time a male maintains a dominant status. Chorus tenure is restricted due to increased risk of predation, lost foraging opportunities, and higher energy consumption. Calling is postulated to be energetically costly to anurans in general. Energy is also expended through locomotion and aggressive interactions of male bullfrogs within the chorus.
Aggressive behavior
To establish social dominance within choruses, bullfrogs demonstrate various forms of aggression, especially through visual displays. Posture is a key factor in establishing social position and threatening challengers. Territorial males adopt inflated postures, while non-territorial males remain in the water with only their heads showing. For dominant (territorial) males, the elevated posture reveals their yellow-colored throats. When two dominant males encounter each other, they engage in a wrestling bout, venters clasped, each individual in an erect position rising well above water level. The New Jersey study noted that the males would approach each other to within a few centimeters and then tilt back their heads, displaying their brilliantly colored gular sacs. The gular sac is dichromatic in bullfrogs, with dominant and fitter males displaying yellow gulars. The New Jersey study also reported that a low posture, with only the head exposed above the water surface, was typical of subordinate (non-territorial) males and of females, while territorial males demonstrated a high posture, floating on the surface of the water with their lungs inflated and their yellow gulars displayed. Males optimize their reproductive fitness in a number of ways: early arrival at the breeding site, prolonged breeding with continuous sexual activity throughout the season, ownership of a centrally located territory within the chorus, and successful movement between the dynamically changing choruses are all common ways for males to maintain dominant, or territorial, status within the chorus. Older males have greater success in all of these areas than younger males. Some males take a more inferior role, termed by many researchers the silent male status. These silent males adopt a submissive posture, sit near resident males, and make no attempt to displace them; they do not attempt to intercept females but wait for territories to become vacant. This has also been called the alternate or satellite male strategy.
Growth and development
After selecting a male, the female deposits eggs in his territory. During the mating grasp, or amplexus, the male rides on top of the female, grasping her just behind her fore limbs. The female chooses a site in shallow water among vegetation, and lays a batch of up to 20,000 eggs, and the male simultaneously releases sperm, resulting in external fertilization. The eggs form a thin, floating sheet which may cover an area of . The embryos develop best at water temperatures between and hatch in three to five days.
If the water temperature rises above , developmental abnormalities occur, and if it falls below , normal development ceases. Newly hatched tadpoles show a preference for living in shallow water on fine gravel bottoms. American bullfrog tadpoles have also "showed a preference for habitats containing structure." This may reflect a smaller number of predators in these locations. As they grow, they tend to move into deeper water. The tadpoles initially have three pairs of external gills and several rows of labial teeth. They pump water through their gills by movements of the floor of their mouths, trapping bacteria, single-celled algae, protozoans, pollen grains, and other small particles on mucus in a filtration organ in their pharynges. As they grow, they begin to ingest larger particles and use their teeth for rasping. They have downward-facing mouths, deep bodies, and tails with broad dorsal and ventral fins.
Time to metamorphosis ranges from a few months in the southern part of the range to 3 years in the north, where the colder water slows development. Maximum lifespan in the wild is estimated to be 8 to 10 years, but one frog lived for almost 16 years in captivity.
Feeding
Bullfrogs are voracious, opportunistic ambush predators that prey on any small animal they can overpower and consume. Bullfrog stomachs have been found to contain rodents, small lizards and snakes, other frogs and toads, other amphibians, crayfish and other crustaceans, small birds, scorpions, tarantulas, and bats, as well as the many types of invertebrates, such as snails, worms, and insects, which are the usual food of ranid frogs. These studies revealed the bullfrog's diet to be unique among North American ranids in including a large percentage of aquatic animals, such as fish, tadpoles, ram's horn snails, and dytiscid beetles, as well as the aquatic eggs of fish, frogs, insects, and salamanders. Cannibalism has been observed in bullfrog populations in resource-limited environments. Bullfrogs are able to capture large, strong prey because of the powerful grip of their jaws after the initial ranid tongue strike; however, prey size correlates with the bullfrog's body size, and juveniles and adults typically go after prey proportional to their own size. The bullfrog is able to make allowance for light refraction at the water–air interface by striking at a position posterior to the target's perceived location. The comparative ability of bullfrogs to capture submerged prey, relative to that of the green frog, leopard frog, and wood frog (L. clamitans, L. pipiens, and L. sylvaticus, respectively), was also demonstrated in laboratory experiments.
Prey motion elicits feeding behavior. First, if necessary, the frog performs a single, orienting bodily rotation ending with the frog aimed towards the prey, followed by approaching leaps, if necessary. Once within striking distance, the bullfrog begins its feeding strike, which consists of a ballistic lunge (eyes closed as during all leaps) that ends with the mouth opening. At this stage, the fleshy, mucus-coated tongue is extended towards the prey, often engulfing it, while the jaws continue their forward travel to close (bite) just as the tongue is retracted. Large prey that do not fit entirely into the mouth are stuffed in with the hands. In laboratory observations, bullfrogs taking mice usually swam underwater with prey in mouth, apparently with the advantageous result of altering the mouse's defense from counter-attack to struggling for air. Asphyxiation is the most likely cause of death of warm-blooded prey.
Biomechanical background of tongue projection
The speed of a bullfrog's tongue strike is much faster than would be possible if muscles were the only force behind it. Similar to the tension in a slingshot pulled all the way back, when the frog's mouth is closed, tension is stored in the elastic tissues of the tongue and in the elastic tendons of the lower jaw. When the frog attacks prey, opening its mouth is like letting go of the slingshot: the elastic forces stored in the tongue and the jaw combine to shoot the tip of the tongue toward the prey faster than the prey can see the strike and evade capture, completing the strike and retrieval in approximately 0.07 seconds. Another benefit of this elastic-force-based attack is that it does not depend on ambient temperature: a frog with a cold body temperature has muscles that move more slowly, but it can still attack prey with the same speed as if its body were warm.
Ballistic tongue projection in the related leopard frog is possible because of elastic structures that allow the storage and subsequent release of elastic recoil energy. This accounts for the tongue projecting with a higher power output than muscular action alone could develop. Such a mechanism also relieves the tongue's musculature of physiological constraints such as limited peak power output, mechanical efficiency, and thermal dependence, by uncoupling the activation of the depressor mandibulae's contractile units from the actual movement. In other words, the kinematic parameters produced with the contribution of the elastic structures differ from those produced by muscular projection alone, accounting for the differences in velocity, power output, and thermal dependence.
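The power amplification described above can be sketched with a simple back-of-the-envelope relation (an illustrative calculation, not figures from the cited studies): if muscle loads elastic energy E slowly and recoil releases it quickly, peak power rises in proportion to the ratio of the two times, even though the total work is unchanged.

```latex
% Elastic-recoil power amplification (illustrative only):
% muscle stores energy E over a loading time t_load,
% recoil releases the same E over a much shorter t_release.
P_{\text{muscle}} = \frac{E}{t_{\text{load}}}, \qquad
P_{\text{recoil}} = \frac{E}{t_{\text{release}}}, \qquad
\frac{P_{\text{recoil}}}{P_{\text{muscle}}}
  = \frac{t_{\text{load}}}{t_{\text{release}}} \gg 1
```

For example, energy loaded over a hypothetical 0.5 s and released during the ~0.07 s strike mentioned above would amplify peak power roughly sevenfold; and because the release time is set by elastic recoil rather than muscle contraction speed, this ratio is largely insensitive to body temperature.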
Ecology
Bullfrogs are an important item of prey for many birds (especially large herons), North American river otters (Lontra canadensis), predatory fish, and occasionally other amphibians. Predators of adult American bullfrogs range from belted kingfishers (Megaceryle alcyon) to American alligators (Alligator mississippiensis). The eggs and larvae are unpalatable to many salamanders and fish, but the high levels of activity of the tadpoles may make them more noticeable to a predator not deterred by their unpleasant taste. Humans hunt bullfrogs as game and consume their legs. Adult frogs try to escape by splashing and leaping into deep water. A trapped individual may squawk or emit a piercing scream, which may surprise the attacker sufficiently for the frog to escape. An attack on one bullfrog is likely to alert others in the vicinity to danger, and they will all retreat into the safety of deeper water. Bullfrogs may be at least partially resistant to the venom of copperhead (Agkistrodon contortrix) and cottonmouth (Agkistrodon piscivorus) snakes, though these species are known natural predators of bullfrogs, as are northern water snakes (Nerodia sipedon).
Considering the invasive nature of L. catesbeianus, multiple traits within the species contribute to its competitive ability. The generalist diet of the American bullfrog allows it to consume food in different environments. Observation of the contents of American bullfrog stomachs revealed that adult bullfrogs regularly consume predators of bullfrog young, including dragonfly nymphs, garter snakes, and giant water bugs; thus, the ecological check on American bullfrog juveniles in invaded areas becomes less effective. L. catesbeianus also seems to exhibit immunity or resistance to the antipredator defenses of other organisms. Analysis of stomach contents from bullfrog populations in New Mexico shows the regular consumption of wasps, with no conditioned avoidance due to the wasps' stingers. Along the Colorado River, L. catesbeianus stomach contents indicate an ability to withstand the discomforting spines of the stickleback fish. Reports of American bullfrogs eating scorpions and rattlesnakes also exist.
Analysis of the American bullfrog's realized niche at various sites in Mexico, and comparisons with the niches of endemic frogs, show that the American bullfrog may be capable of niche shifts and may pose a threat to many endemic Mexican frog species, even those that are not currently in competition with it.
Invasive species
In areas where the American bullfrog has been introduced, the population can be controlled by various means. One project (the 3n-Bullfrog project) uses sterile triploid (3n) bullfrogs. In Europe, the American bullfrog has been included since 2016 on the list of Invasive Alien Species of Union concern (the Union list), which means that the species cannot under any circumstances be imported, bred, transported, commercialized, or intentionally released into the environment anywhere in the European Union.
Self-sustaining populations of American bullfrogs became established in the United Kingdom around 1999, where their introduction was likely due to accidental escapes and deliberate releases from captivity. These populations appear to be quite small, and are undergoing control by Natural England as the species poses a threat to native amphibians.
The American bullfrog has been known to spread the amphibian pathogen Batrachochytrium dendrobatidis among populations that it has been introduced to.
Human use
The American bullfrog provides a food source, especially in the Southern and some areas of the Midwestern United States. The traditional way of hunting them is to paddle or pole silently by canoe or flatboat in ponds or swamps at night; when the frog's call is heard, a light is shone at the frog which temporarily inhibits its movement. The frog will not jump into deeper water as long as it is approached slowly and steadily. When close enough, the frog is gigged with a multiple-tined spear and brought into the boat. Bullfrogs can also be stalked on land, by again taking great care not to startle them. In some states, breaking the skin while catching them is illegal, and either grasping gigs or hand captures are used. Like most frogs, the hind legs of the bullfrog are the only parts generally eaten. When cooked, they resemble small chicken drumsticks, have a similar flavor and texture and can be prepared in similar ways.
Commercial bullfrog culture in near-natural enclosed ponds has been attempted, but is fraught with difficulties. Although pelleted feed is available, frogs usually will not willingly consume artificial diets, and providing sufficient live prey is challenging. Disease among frogs also tends to be a problem even when great care is taken to provide sanitary conditions. Other challenges to be overcome may be predation, cannibalism, and low water quality. The frogs are large, have powerful leaps, and inevitably escape after which they may wreak havoc among the native frog population. Countries that export bullfrog legs include the Netherlands, Belgium, Mexico, Bangladesh, Japan, China, Taiwan, and Indonesia. Most of these frogs are caught in the wild, but some are raised in captivity. The United States is a net importer of frog legs.
The American bullfrog is used as a specimen for dissection in many biology and anatomy classes in schools across the world. It is the state amphibian of Missouri, Ohio, and Oklahoma.
| Biology and health sciences | Amphibians | null |
379210 | https://en.wikipedia.org/wiki/Colorado%20potato%20beetle | Colorado potato beetle | The Colorado potato beetle (Leptinotarsa decemlineata; also known as the Colorado beetle, the ten-striped spearman, the ten-lined potato beetle, and the potato bug) is a beetle known for being a major pest of potato crops. It is about long, with a bright yellow/orange body and five bold brown stripes along the length of each of its wings. Native to the Rocky Mountains, it spread rapidly in potato crops across America and then Europe from 1859 onwards.
The Colorado potato beetle was first observed in 1811 by Thomas Nuttall and was formally described in 1824 by American entomologist Thomas Say. The beetles were collected in the Rocky Mountains, where they were feeding on the buffalo bur, Solanum rostratum.
Description
Adult beetles typically are in length and in width. They weigh 50–170 mg. The beetles are orange-yellow in color with 10 characteristic black stripes on their front wings or elytra. The specific name decemlineata, meaning "ten-lined", derives from this feature.
Adult beetles may be visually confused with L. juncta, the false potato beetle. Unlike the Colorado potato beetle, it is not an agricultural pest. L. juncta also has alternating black and white stripes on its back, but one of the white stripes in the center of each wing cover is missing and replaced by a light brown stripe.
Larvae
The orange-pink larvae have a large, 9-segmented abdomen, black head, and prominent spiracles, and may measure up to in length in their final instar stage.
The beetle larva has four instar stages. The head remains black throughout these stages, but the pronotum changes colour from black in first- and second-instar larvae to having an orange-brown edge in its third-instar. In fourth-instar larvae, about half the pronotum is coloured light brown. This tribe is characterised within the subfamily by round to oval-shaped convex bodies, which are usually brightly coloured, simple claws which separate at the base, open cavities behind the procoxae, and a variable apical segment of the maxillary palp.
Distribution
The beetle is most likely native to the area between Colorado and northern Mexico, and was discovered in 1824 by Thomas Say in the Rocky Mountains. It is found in North America, and is present in every state and province except Alaska, California, Hawaii, and Nevada. It now has a wide distribution across Europe and Asia, totaling over 16 million km2.
Its first association with the potato plant (Solanum tuberosum) was not made until about 1859, when it began destroying potato crops in the region of Omaha, Nebraska. Its spread eastward was rapid, at an average distance of 140 km per year. The beetle has the potential to spread to temperate areas of East Asia, India, South America, Africa, New Zealand, and Australia.
Human interaction
By 1874 it had reached the Atlantic Coast. From 1871, American entomologist Charles Valentine Riley warned Europeans about the potential for an accidental infestation caused by the transportation of the beetle from America. From 1875, several Western European countries, including Germany, Belgium, France, and Switzerland, banned imports of American potatoes to avoid infestation by L. decemlineata.
These controls proved ineffective, as the beetle soon reached Europe. In 1877, L. decemlineata reached the United Kingdom and was first recorded from Liverpool docks, but it did not become established. Many further outbreaks have occurred; the species has been eradicated in the UK at least 163 times. The last major outbreak was in 1977. It remains as a notifiable quarantine pest in the United Kingdom and is monitored by the Plant Health and Seeds Inspectorate of the Animal and Plant Health Agency (APHA) to prevent it from becoming established. A cost-benefit analysis from 1981 suggested that the cost of the measures used to exclude L. decemlineata from the UK was less than the likely costs of control if it became established.
In July 2023, Colorado beetles were officially confirmed in a potato field in Kent, England. Farmers, growers, gardeners, and members of the public are being encouraged to remain vigilant for signs of the pest and to report potential sightings to APHA.
Elsewhere in Europe, the beetle became established near USA military bases in Bordeaux during or immediately following World War I and had proceeded to spread by the beginning of World War II to Belgium, the Netherlands, and Spain. The population increased dramatically during and immediately following World War II and spread eastward, and the beetle is now found over much of the continent. After World War II, in the Soviet occupation zone of Germany, almost half of all potato fields were infested by the beetle by 1950. In East Germany, they were known as Amikäfer ('Yankee beetles') following a governmental claim that the beetles were dropped by American planes. In the European Union, it remains a regulated (quarantine) pest for the Republic of Ireland, Balearic Islands, Cyprus, Malta, and southern parts of Sweden and Finland. It is not established in any of these member states, but occasional infestations can occur when, for example, wind blows adults from Russia to Finland.
Lifecycle
Colorado potato beetle females are very prolific and are capable of laying over 500 eggs in a 4- to 5-week period. The eggs are yellow to orange, and are about long. They are usually deposited in batches of about 30 eggs on the underside of host leaves. Development of all life stages depends on temperature.
After 4–15 days, the eggs hatch into reddish-brown larvae with humped backs and two rows of dark brown spots, one row on each side. They feed on the leaves of their host plants. Larvae progress through four distinct growth stages (instars). First instars measure about long, and the last (fourth) instars about long. The first through third instars each last about 2–3 days; the fourth lasts 4–7 days.
Upon reaching full size, each fourth instar spends several days as a nonfeeding prepupa, which can be recognized by its inactivity and lighter coloration. The prepupae drop to the soil and burrow to a depth of several inches, then pupate. In 5 to 10 days, the adult beetle emerges to feed and mate. This beetle can thus go from egg to adult in as little as 21 days. Depending on temperature, light conditions, and host quality, the adults may enter diapause and delay emergence until spring. They then return to their host plants to mate and feed; overwintering adults may begin mating within 24 hours of spring emergence. In some locations, three or more generations may occur each growing season.
Mate and host searching
Visual cues are important for Colorado potato beetles during mate and host searching. In a study by Szentsi, Weber, and Jermy, Role of visual stimuli in host and mate location of the Colorado potato beetle, the beetles' attraction to boards of different spectral bands, their reaction to beetle-sized stationary objects, their responses to such objects on boards, and their attraction to prior female substances were investigated. The researchers' hypothesis was that experience with female substances would cause behavioral changes in males. When shown colored boards, the beetles had a positive response between 45° and 0° in terms of mean angular directions (MADs). Beads and dead beetles without boards evoked a weaker response, with variable MADs. Combinations of colored boards and beads elicited more positive MAD responses between 45° and 0°. Males with prior experience of female substances showed strong responses to female scent: according to the study, 43 of the 49 runs using female smear received a response score of 5, compared with 23 of 42 runs without female smear.
Colorado potato beetles are also attracted to the volatiles potato plants emit. In the article Sexual contact influences orientation to plant attractant in Colorado potato beetle, Leptinotarsa decemlineata Say (Coleoptera: Chrysomelidae) by Joseph Dickens, the beetles were attracted to a kairomone substance, but after mating their attraction to it was reduced. Within 24 hours of mating, there was no difference between levels of attraction to the kairomone and to a control solvent. The lack of attraction persisted for two days, and attraction resumed three days after mating. Male beetles produce a pheromone whose effect is further enhanced by plant host volatiles such as the kairomone. After a beetle is attracted to the host, mating occurs and the female lays her eggs on the plant. The beetles' attraction to the kairomone remains reduced until about 72 hours later, once oviposition has occurred and the probability of re-mating increases.
Behavior and ecology
Diet
L. decemlineata has a strong association with plants in the family Solanaceae, particularly those of the genus Solanum. It is directly associated with Solanum cornutum (buffalo-bur), Solanum nigrum (black nightshade), Solanum melongena (eggplant or aubergine), Solanum dulcamara (bittersweet nightshade), Solanum luteum (hairy nightshade), Solanum tuberosum (potato), and Solanum elaeagnifolium (silverleaf nightshade). They are also associated with other plants in this family, namely the species Solanum lycopersicum (tomato) and the genus Capsicum (pepper).
Enemies
At least 13 insect genera, three spider families, one phalangid (Opiliones), and one mite have been recorded as either generalist or specialized predators of the varying stages of L. decemlineata. These include the ground beetle Lebia grandis, the coccinellid beetles Coleomegilla maculata and Hippodamia convergens, the shield bugs Perillus bioculatus and Podisus maculiventris, various species of the lacewing genus Chrysopa, the wasp genus Polistes, and the damsel bug genus Nabis.
The predatory ground beetle L. grandis is a predator of both the eggs and larvae of L. decemlineata, and its larvae are parasitoids of the pupae. An adult L. grandis may consume up to 23 eggs or 3.3 larvae in a single day.
In a laboratory experiment, Podisus maculiventris was used as a predatory threat to female L. decemlineata specimens, resulting in the production of unviable trophic eggs alongside viable ones; this response to a predator ensured that additional food was available for newly hatched offspring to increase their survival rate. The same experiment also demonstrated the cannibalism of unhatched eggs by newly hatched L. decemlineata larvae as an antipredator response.
Sexual dimorphism
Colorado potato beetles exhibit sexual dimorphism, in particular in their adhesive tarsal setae. The paper "Sexual dimorphism in the attachment ability of the Colorado potato beetle Leptinotarsa decemlineata (Coleoptera: Chrysomelidae) to rough substrates" by Voigt demonstrates this dimorphism. The setae (hair-like structures) in males help them adhere to the females' elytra when mating. Colorado potato beetles also have adhesive setae that allow them to attach to host plants.
Three types of setae are known: simple pointed setae with an asymmetric narrowing at the tip (males and females), spatula-like setae with a pin on the dorsal surface (males and females), and setae with an adhesive terminal disc (males only). Male setae are better suited to smooth surfaces; male Colorado potato beetles have been observed attaching to smooth glass and plastic surfaces as well as to the smooth female elytra.
Microscopy of the tarsus reveals five articulated tarsomeres and paired curved claws. Males and females have adhesive setae covering the first three tarsomeres; the fourth is hidden and the fifth bears sensory setae with no adhesive function. Both males and females have filamentous setae with a tapered terminal part, lanceolate setae with a flattened tapered terminal part, and spatula-shaped setae with an enlarged tape-like terminal part. Males additionally have setae with a discoidal terminal part and a bulge around the disc. The female elytra appear smooth on the surface, but further magnification shows irregular lines, indicating fluid on the elytra.
Genetics
Genetic differentiation from agriculture
Colorado potato beetles display genetic differentiation based on region; beetles in the Columbia Basin have less genetic diversity than those in the Central Sands. According to the study by Crossly, Rondon, and Schoville, Effects of contemporary agricultural land cover on Colorado potato beetle genetic differentiation in the Columbia Basin and Central Sands, nucleotide diversity ranged from 0.0056 to 0.0063 in Columbia Basin beetles and from 0.0073 to 0.0080 in Central Sands beetles. Heterozygosity was 19.4% ± 0.4% in the Columbia Basin and 21.6% ± 0.8% in the Central Sands. Additional mitochondrial DNA sequencing found two haplotypes in the Columbia Basin, compared with seven haplotypes in places like Wisconsin.
One explanation for the difference in genetic diversity is the landscape of each region: shrub-land and grains in the Columbia Basin versus forest, corn, and beans in the Central Sands. In the same study, potatoes covered 3.5% of the land in the Columbia Basin and 1.8% in the Central Sands. Landscape resistance can be characterized by how the land affects the spread of beetles. Its overall effect on allele frequency covariance was low, and the Central Sands had a higher rate of decay in allele frequency. Among land cover variables, the relative effect size of potatoes on genetic differentiation was highest in the Columbia Basin. However, when all the land types were compared, no particular land cover displayed any significant difference from the others.
Genetic differentiation in the Colorado potato beetle can also be affected by agricultural practices such as crop rotation. The same study found effects of crop rotation on genetic differentiation in the Columbia Basin, where genetic diversity decreased with increased crop rotation, but not in the Central Sands. This difference could be attributed to larger rotation differences in the Columbia Basin or to differences in the landscape itself that affect the spread of the beetles. Genetic diversity is not directly impacted by the land cover type; instead, other factors such as climate could be responsible for the differences between the Colorado potato beetles of these two regions.
Genetic differentiation due to invasion
The Colorado potato beetle has invaded North America and Europe, and because of its widespread invasion it displays genetic diversity across its different regions. In the paper The voyage of an invasive species across continents: genetic diversity of North American and European Colorado potato beetle populations by Grapputo, Boman, Lindström, and Mappes, sequencing of amplified mtDNA from 109 beetles in 13 populations revealed 20 unique haplotypes. Three haplotypes were shared among populations; all others were restricted to a single population in North America. Among 51 European beetles collected from eight populations, only one haplotype was found, which was also fixed in the Idaho population. The mitochondrial DNA data from North American beetles showed significant population differentiation; for example, 44% of the variation can be attributed to subdivision among populations, especially in Kentucky and Idaho.
Polymorphism was highest in Colorado potato beetles from Colorado and lowest in those from France. Polymorphism and heterozygosity were higher in North America than in Europe; heterozygosity ranged from 0.25 in New Brunswick to 0.14 in France. Further analysis revealed population differentiation between North America and Europe. There were two separate groups of European beetles, one formed by western European beetles and the second by eastern European beetles. Thirteen percent of the total variation comes from variation between the two continental groups, and 17% from population variance within groups. Beetles from North America and Europe formed clusters, and with the exception of the New Brunswick and Kentucky beetles, most beetles from the same population clustered together. In Europe, the relationships between beetles were more complex: Estonian and Spanish beetles clustered together, French and Italian beetles formed separate groups, and Russian and Finnish beetles were closely related to the Estonian ones. European beetles could be categorized as eastern or western, except for Polish beetles, which had relations to multiple countries.
Importance of transposable elements in genome
To help explain why Colorado potato beetles are such difficult agricultural pests to manage and control, a group of researchers tested both structural and functional genetic changes in the species compared with other arthropod species. Using community annotation, transcriptomics, and genome sequencing, they found that the Colorado potato beetle genome contains numerous transposable elements. Transposons are sequences of genetic material that can move their place within an organism's genome, and 17% of the Colorado potato beetle's genome consists of transposable elements. This helps explain the beetle's rapid evolution of resistance to insecticides, which has contributed to its global spread.
As an agricultural pest
Factors affecting beetle dispersal
Colorado potato beetles are highly mobile and are considered pests. Colorado potato beetles disperse to hosts via walking and flight. Flights have three types: short, long, and diapause. Diapause is a long-distance flight that occurs at the end of the summer. In order for dispersal to occur, certain conditions need to be met, both abiotic and biotic.
Abiotic conditions
Abiotic factors include temperature, photoperiod, insolation, wind, and gravity. A soil temperature of 9 °C prompts overwintering beetles in the soil to move upward; they emerge when the soil surface temperature reaches 14–15 °C. The optimal flight takeoff temperature is 27 °C. Long photoperiods enable proper flight-muscle development. Insolation is also important for flight; at least 6 hours of insolation paired with temperatures of 25–28 °C is optimal for takeoff. Wind is another condition that needs to be met: speeds of 1–3 m/s assist with takeoff for short-distance flights. Gravity can also affect movement; as the Colorado potato beetle moves out of the soil, it does so on slopes of 20° or more.
Biotic conditions
Biotic conditions include availability of energy reserves, insect weight, insect density, and whether adults are overwintered or summer adults. Proline is speculated to be the primary energy substrate for Colorado potato beetles in flight. Beetles that gain more than 15% of their weight after emerging fly less and for shorter distances than beetles that remain the same weight. Wing loadings for male and female beetles were 10.83 and 15.60 N/m2, respectively, and wing loading changes as beetles feed, drink, and develop eggs. Cases of large groups of beetles leaving crops have been observed at higher population densities, though this is likely due to destruction of the food source rather than the density itself. Overwintered beetles behave differently from summer beetles: they typically fly less, an adaptation to the higher risk of food deprivation in spring compared with summer. During the summer, newly emerged adults walk until they have eaten enough to develop proper flight muscles and fully developed elytra.
Motivations for dispersal and stimuli
Colorado potato beetles orient their walking to find food. In the dark, they walk at slow speeds and in circles. The beetles also move in response to olfactory cues, responding and moving faster to familiar odors. Depending on satiation levels, the beetles move differently with the wind: satiated beetles walk parallel with the wind, whereas starved beetles walk against it.
Visual cues are also important for the beetles. Colorado potato beetles respond to light, and light intensity is proportional to the rest period. Beetles exhibit phototactic orientation, in which they align themselves with a cone of light and move with it. In compass orientation, large numbers of beetles walk in a single direction, keeping a remembered angle to the sun.
The rate of linear displacement is important for the probability of the beetle finding a plant, mate, or habitat. This is important for the success of orientation mechanisms.
New beetles disperse to crops once they emerge. The crops affect colonization: crop rotation prolongs colonization, while neighboring crops are colonized rapidly and by walking. Overwintered beetles fly to find crops, and once a host plant is found, flight frequency decreases. The strategy behind this is thought to be minimizing reproductive risk, because female beetles that emerge in the spring are already mated. Dispersal continues after finding a host, as moving helps beetles find better resources and mates and distribute their progeny. When moving in cultivated fields, beetles fly less frequently, relative to walking, than in the wild.
Researchers have also evaluated how flight frequency is related to the beetle's diet. In a beetle population that had returned from diapause and been exposed to poor food conditions, mean flight frequency was decreased. This is because beetles required better food conditions to regenerate their flight muscles. Prior to diapause, beetles increased their flight frequency to compensate for poor food conditions.
Potato crop pest
Around 1840, L. decemlineata adopted the cultivated potato into its host range and it rapidly became a most destructive pest of potato crops. It is today considered to be the most important insect defoliator of potatoes. It may also cause considerable damage to tomato and aubergine crops with both adults and larvae feeding on the plant's foliage. Larvae may defoliate potato plants resulting in yield losses up to 100% if the damage occurs prior to tuber formation. Larvae may consume 40 cm2 of potato leaves during the entire larval stage, but adults are capable of consuming 10 cm2 of foliage per day.
The economic cost of insecticide resistance is significant, but published data on the subject are minimal. In 1994, total costs of the insecticide and crop losses in the US state of Michigan were $13.3 million, representing 13.7% of the total value of the crop. The estimate of the cost implication of insecticides and crop losses per hectare is $138–368. Long-term increased cost to the Michigan potato industry caused by insecticide resistance in Colorado potato beetle was estimated at $0.9 to $1.4 million each year.
Potato protection
Colorado potato beetles pose a significant danger to potatoes, a quintessential agricultural crop. In response to the damage they cause, some potatoes, notably the Russet Burbank potato, have been genetically modified to resist attack and damage from the beetles. The method used was insertion of a cryIIIA gene coding for the insect control protein of Bacillus thuringiensis var. tenebrionis. Prior to the insertion, research showed that wild-type cryIIIA genes were expressed at low levels in plants: plants with this gene expressed the cryIIIA protein at less than 0.001% of total leaf protein. Such plants have some resistance and toxicity to Colorado potato beetles, but consistent protection requires higher levels of cryIIIA expression. Scientists therefore modified the cryIIIA DNA protein-coding sequence without altering the amino acid sequence. The gene was transferred into the potato through a vector, specifically an Agrobacterium tumefaciens-mediated transfer.
Following the introduction of the gene, Russet Burbank potato plants carrying it were tested for kanamycin resistance and Colorado potato beetle resistance. Of 308 plants tested, 18% (55) displayed complete resistance to the beetle. Later larval stages and adult beetles are more sensitive to the cryIIIA protein, and controlling adults is important because they produce the next generation of larvae. Colorado potato beetles overwinter as adults in the soil and feed immediately after emerging in the spring. At cryIIIA expression levels above 0.005%, adult feeding was negligible. Oviposition was also affected: on non-transgenic leaves, the mean number of eggs per cage was 117 and 143 in two separate trials, whereas transgenic leaves yielded a mean of 1.7 and 0 eggs per cage. The female beetles caged with transgenic plants were reduced in size, with ova that were partially or totally reabsorbed; they absorbed body fat and reproductive tissue as a result of ceasing to consume the transgenic plants.
The potatoes showed benefits of the gene treatment; potatoes expressing the cryIIIA gene had protection from Colorado potato beetles in the laboratory and the field. Furthermore, these potato plants displayed agronomic and tuber characteristics that aligned with healthy Russet Burbank Potatoes.
Insecticidal management
The large-scale use of insecticides in agricultural crops effectively controlled the pest until it became resistant to DDT in 1952 and dieldrin in 1958. Insecticides remain the main method of pest control on commercial farms. However, many chemicals are often unsuccessful when used against this pest because of the beetle's ability to rapidly develop insecticide resistance. Different populations in different geographic regions have, between them, developed resistance to all major classes of insecticide, although not every population is resistant to every chemical. The species as a whole has evolved resistance to 56 different chemical insecticides. The mechanisms used include improved metabolism of the chemicals, reduced sensitivity of target sites, less penetration and greater excretion of the pesticides, and some changes in the behavior of the beetles.
Colorado potato beetles have evolved widespread insecticide resistance; no cases of resistance without a fitness cost, or with a negative cost, are known.
Nonpesticidal management
Bacterial insecticides can be effective if application is targeted towards the vulnerable early-instar larvae. Two strains of the bacterium Bacillus thuringiensis produce toxins that kill the larvae. Other forms of pest control, through nonpesticidal management are available. Feeding can be inhibited by applying antifeedants, such as fungicides or products derived from Neem (Azadirachta indica), but these may have negative effects on the plants, as well. The steam distillate of fresh leaves and flowers of tansy (Tanacetum vulgare) contains high levels of camphor and umbellulone, and these chemicals are strongly repellent to L. decemlineata.
Beauveria bassiana (Hyphomycetes) is a pathogenic fungus that infects a wide range of insect species, including the Colorado potato beetle. It has shown to be particularly effective as a biological pesticide for L. decemlineata when used in combination with B. thuringiensis.
Crop rotation is, however, the most important cultural control of L. decemlineata. Rotation may delay the infestation of potatoes and can reduce the build-up of early-season beetle populations because the adults emerging from diapause can only disperse to new food sources by walking. One 1984 study showed that rotating potatoes with nonhost plants reduced the density of early-season adults by 95.8%.
Other cultural controls may be used in combination with crop rotation: mulching the potato crop with straw early in the growing season may reduce the beetle's ability to locate potato fields, and the mulch creates an environment that favours the beetle's predators; plastic-lined trenches have been used as pitfall traps to catch the beetles as they move toward a field of potatoes in the spring, exploiting their inability to fly immediately after emergence; flamethrowers may also be used to kill the beetles when they are visible at the top of the plant's foliage.
Biological management
One potential source of control for the Colorado potato beetle is the eulophid egg parasitoid Edovum puttleri, which can kill more than 80% of beetle eggs through parasitization, probing, and host feeding. Edovum specializes in the Colorado potato beetle, which gives it easy access to the eggs it eats. The parasitoid tolerates warmer temperatures than the beetle; the adults hunt during the warmest part of the day and have different food sources, while the young feed on beetle eggs. Furthermore, these parasitoids do not overwinter, which means that using them for biological control requires raising them in insectaries and releasing them periodically. Studies have attempted to genetically improve Edovum's tolerance of colder temperatures, along with cultural manipulations that would enable Edovum to provide useful, economic biological control.
Another enemy and potential control method is the fungal pathogen Beauveria bassiana. This fungus can help control beetle populations, but it cannot be used to quickly contain large populations. Additionally, the pre-existing use of fungicides in crop disease management presents an obstacle to the fungus's effectiveness. Other reasons this fungal treatment has not been used heavily include production costs and the poor longevity of formulations.
Spatial and temporal field management
Colorado potato beetles have also shown a capacity for spatial and temporal within-field management. In one study, populations of immigrating Colorado potato beetles were systematically targeted and their established perimeter was measured over a large field. Researchers found that immigrating adult beetles showed almost no spatial dependence in any covariance-based treatment, while immigrant larval populations developed the highest densities in field centers. The results imply that the perimeter tactics employed by Colorado potato beetles can give valuable insight into site-specific resistance management programs to optimize insecticide usage. However, researchers are not yet confident about the long-term effects of such programs, as yield reduction requires further study.
Relationship with humans
Cold War villain
During the Cold War, some countries in the Warsaw Pact claimed that the beetles had been introduced by the CIA in an attempt to reduce food security by destroying the agriculture of the Soviet Union. A widespread campaign was launched against the beetles; posters were put up and school children were mobilized to gather the pests and kill them in benzene or spirit.
Philately
L. decemlineata is an iconic species and has been used as an image on stamps because of its association with the recent history of both North America and Europe. For example, in 1956, Romania issued a set of four stamps calling attention to the campaign against insect pests, and it was featured on a 1967 stamp issued in Austria. The beetle also appeared on stamps issued in Benin, Tanzania, the United Arab Emirates, and Mozambique.
In popular culture
Neapolitan mandolins (also called Italian mandolins) are often called tater bugs, a nickname given by American luthier Orville Gibson, because the shape and stripes of the different color wood strips resemble the back of the Colorado beetle.
The fans of Alemannia Aachen carry the nickname "Kartoffelkäfer", from the German name for the Colorado beetle, because of the striped yellow-black jerseys of the team.
During the 2014 pro-Russian unrest in Ukraine, the word kolorady, from the Ukrainian and Russian term for the Colorado beetle, gained popularity among Ukrainians as a derogatory term to describe pro-Russian separatists in the Donetsk and Luhansk Oblasts (provinces) of Eastern Ukraine. The nickname reflects the similarity of the black and orange stripes on the St. George's ribbons worn by many of the separatists.
In some European cultures, the Colorado potato beetle is known as the 'gourd beetle' due to the likeness of the beetle to various gourds of the Cucurbitaceae family.
| Biology and health sciences | Beetles (Coleoptera) | Animals |
379224 | https://en.wikipedia.org/wiki/True%20frog | True frog | True frogs is the common name for the frog family Ranidae. They have the widest distribution of any frog family. They are abundant throughout most of the world, occurring on all continents except Antarctica. The true frogs are present in North America, northern South America, Europe, Africa (including Madagascar), and Asia. The Asian range extends across the East Indies to New Guinea and a single species, the Australian wood frog (Hylarana daemelii), has spread into the far north of Australia.
Typically, true frogs are smooth and moist-skinned, with large, powerful legs and extensively webbed feet. The true frogs vary greatly in size, ranging from small, such as the wood frog (Lithobates sylvaticus), to large.
Many of the true frogs are aquatic or live close to water. Most species lay their eggs in the water and go through a tadpole stage. However, as in most families of frogs, there is large variation of habitat within the family. There are also arboreal species of true frogs, and the family includes some of the very few amphibians that can live in brackish water.
Evolution
The Ranidae are related to several other frog families that have Eurasian and Indian origins, including Rhacophoridae, Dicroglossidae, Nyctibatrachidae, Micrixalidae, and Ranixalidae. They are thought to be most closely related to the Indian-endemic Nyctibatrachidae, from which they diverged in the early Eocene. However, other studies recover a closer relationship with the Dicroglossidae.
It was previously thought that the Ranidae and their closest relatives were of Gondwanan origins, having evolved on Insular India during the Cretaceous. They were then entirely restricted to the Indian subcontinent until the late Eocene, when India collided with Asia, allowing the Ranidae to colonize Eurasia and eventually the rest of the world. However, more recent studies instead propose that the Ranidae originated in Eurasia, and their close relationship with India-endemic frog lineages is due to those lineages colonizing India from Eurasia during the Paleogene.
Systematics
The subdivisions of the Ranidae are still a matter of dispute, although most are coming to an agreement. Several former subfamilies are now recognised as separate families (Petropedetidae, Cacosterninae, Mantellidae, and Dicroglossidae). The genus Rana has now been split up and is much reduced in size.
Too little of the vast diversity of true frogs has been subject to recent study to say anything definite, but as of mid-2008, studies were ongoing and several lineages are recognizable.
The genus Staurois is probably a very ancient offshoot of the main Raninae lineage.
Amolops has been generally delimited as a monophyletic group.
Odorrana and Rana plus some proposed minor genera (which probably ought to be included in the latter) form another group.
A group including Clinotarsus, Huia in the strict sense, and Meristogenys.
An ill-defined assemblage of Babina, Glandirana, Hylarana, Pulchrana, Sanguirana, and Sylvirana, as well as Hydrophylax and Pelophylax, which are probably not monophyletic. Some authorities have treated them as junior synonyms of the genus Hylarana.
The following phylogeny of some genera was recovered by Che et al., 2007 using mitochondrial genes.
Genera
Most of the subfamilies formerly included under Ranidae are now treated as separate families, leaving only Raninae remaining. The following genera are recognised in the family Ranidae:
Abavorana Oliver, Prendini, Kraus, and Raxworthy, 2015 (three species)
Amnirana Dubois, 1992 (11 species)
Amolops Cope, 1865 (80 species)
Babina Thompson, 1912 (two species)
Chalcorana Dubois, 1992 (nine species)
Clinotarsus Mivart 1869 (three species)
Glandirana Fei, Ye, and Huang, 1990 (six species)
Huia Yang, 1991 (monotypic)
Humerana Dubois, 1992 (four species)
Hydrophylax Fitzinger, 1843 (four species)
Hylarana Tschudi 1838 (four species)
Indosylvirana Oliver, Prendini, Kraus, and Raxworthy, 2015 (13 species)
Lithobates Fitzinger, 1843 (55 species)
Meristogenys Yang, 1991 (13 species)
Nidirana Dubois, 1992 (19 species)
Odorrana Fei, Ye, and Huang, 1990 (64 species)
Papurana Dubois, 1992 (19 species)
Pelophylax Fitzinger 1843 (19 species)
Pseudorana Fei, Ye, and Huang, 1990 (monotypic)
Pterorana Kiyasetuo and Khare, 1986 (monotypic)
Pulchrana Dubois, 1992 (18 species)
Rana Linnaeus, 1758 (58 species)
Sanguirana Dubois, 1992 (six species)
Staurois Cope, 1865 (six species)
Sumaterana Arifin, Smart, Hertwig, Smith, Iskandar, and Haas, 2018 (three species)
Sylvirana Dubois, 1992 (12 species)
Wijayarana Arifin, Chan, Smart, Hertwig, Smith, Iskandar, and Haas, 2021 (five species)
In 2023, Amphibian Species of the World tentatively synonymized Amnirana, Chalcorana, Humerana, Hydrophylax, Indosylvirana, Papurana, Pulchrana, and Sylvirana into Hylarana until significant taxonomic confusion surrounding the group could be cleared up. These changes are not recognized by AmphibiaWeb.
Incertae sedis
A number of taxa are placed in Ranidae incertae sedis, that is, their taxonomic status is too uncertain to allow more specific placement.
"Hylarana" chitwanensis (Das, 1998)
"Hylarana" garoensis (Boulenger, 1920)
"Hylarana" latouchii (Boulenger, 1899)
"Hylarana" margariana Anderson, 1879
"Hylarana" montivaga (Smith, 1921)
"Hylarana" persimilis (Van Kampen, 1923)
| Biology and health sciences | Amphibians | null |
379234 | https://en.wikipedia.org/wiki/Sensory%20nervous%20system | Sensory nervous system | The sensory nervous system is a part of the nervous system responsible for processing sensory information. A sensory system consists of sensory neurons (including the sensory receptor cells), neural pathways, and parts of the brain involved in sensory perception and interoception. Commonly recognized sensory systems are those for vision, hearing, touch, taste, smell, balance and visceral sensation. Sense organs are transducers that convert data from the outer physical world to the realm of the mind where people interpret the information, creating their perception of the world around them.
The receptive field is the area of the body or environment to which a receptor organ and its receptor cells respond. For instance, the part of the world an eye can see is its receptive field; the light that each rod or cone can detect is its receptive field. Receptive fields have been identified for the visual, auditory, and somatosensory systems.
Senses and receptors
While debate exists among neurologists as to the specific number of senses due to differing definitions of what constitutes a sense, Gautama Buddha and Aristotle classified five 'traditional' human senses which have become universally accepted: touch, taste, smell, vision, and hearing. Other senses that have been well-accepted in most mammals, including humans, include pain, balance, kinaesthesia, and temperature. Furthermore, some nonhuman animals have been shown to possess alternate senses, including magnetoreception and electroreception.
Receptors
The initialization of sensation stems from the response of a specific receptor to a physical stimulus. The receptors which react to the stimulus and initiate the process of sensation are commonly characterized in four distinct categories: chemoreceptors, photoreceptors, mechanoreceptors, and thermoreceptors. All receptors receive distinct physical stimuli and transduce the signal into an electrical action potential. This action potential then travels along afferent neurons to specific brain regions where it is processed and interpreted.
Chemoreceptors
Chemoreceptors, or chemosensors, detect certain chemical stimuli and transduce that signal into an electrical action potential. The two primary types of chemoreceptors are:
Distance chemoreceptors are integral to receiving stimuli in gases in the olfactory system through both olfactory receptor neurons and neurons in the vomeronasal organ.
Direct chemoreceptors that detect stimuli in liquids include the taste buds in the gustatory system as well as receptors in the aortic bodies which detect changes in oxygen concentration.
Photoreceptors
Photoreceptors are specialized neurons that play the main role in initiating vision. They are light-sensitive cells that capture different wavelengths of light: different types of photoreceptors respond to varying wavelengths corresponding to color and transduce them into electrical signals. Photoreceptors are capable of phototransduction, a process which converts light (electromagnetic radiation) into, among other forms of energy, a membrane potential. These cells contain five compartments, each corresponding to differences in function and structure. The first compartment is the outer segment (OS), which is responsible for capturing and transducing light. The second is the inner segment (IS), which contains the organelles necessary for cellular metabolism and biosynthesis, mainly the mitochondria, Golgi apparatus, and endoplasmic reticulum, among others. The third is the connecting cilium (CC), which, as its name suggests, connects the OS and IS regions for the purpose of essential protein trafficking. The fourth compartment, a continuation of the IS known as the nuclear region, contains the nucleus. Finally, the fifth compartment is the synaptic region, which acts as the final terminal for the signal and contains synaptic vesicles; in this region, the neurotransmitter glutamate is transmitted from the cell to secondary neurons. The three primary types of photoreceptors are:
Cones are photoreceptors which respond significantly to color. In humans, the three different types of cones correspond to a primary response to short wavelengths (blue), medium wavelengths (green), and long wavelengths (yellow/red).
Rods are photoreceptors which are very sensitive to the intensity of light, allowing for vision in dim lighting. The concentration and ratio of rods to cones are strongly correlated with whether an animal is diurnal or nocturnal. In humans, rods outnumber cones by approximately 20:1, while in nocturnal animals, such as the tawny owl, the ratio is closer to 1000:1.
Ganglion cells reside in the adrenal medulla and retina where they are involved in the sympathetic response. Of the ~1.3 million ganglion cells present in the retina, 1-2% are believed to be photosensitive ganglia. These photosensitive ganglia play a role in conscious vision for some animals, and are believed to do the same in humans.
Mechanoreceptors
Mechanoreceptors are sensory receptors which respond to mechanical forces, such as pressure or distortion. While mechanoreceptors are present in hair cells and play an integral role in the vestibular and auditory systems, the majority of mechanoreceptors are cutaneous and are grouped into four categories:
Slowly adapting type 1 receptors have small receptive fields and respond to static stimulation. These receptors are primarily used in the sensations of form and roughness.
Slowly adapting type 2 receptors have large receptive fields and respond to stretch. Similarly to type 1, they produce sustained responses to a continued stimulus.
Rapidly adapting receptors have small receptive fields and underlie the perception of slip.
Pacinian receptors have large receptive fields and are the predominant receptors for high-frequency vibration.
Thermoreceptors
Thermoreceptors are sensory receptors which respond to varying temperatures. While the mechanisms through which these receptors operate are unclear, recent discoveries have shown that mammals have at least two distinct types of thermoreceptors:
The end-bulb of Krause or bulboid corpuscle detects temperatures above body temperature.
Ruffini's end organ detects temperatures below body temperature.
TRPV1 is a heat-activated channel that acts as a small heat-detecting thermometer in the membrane, initiating depolarization of the nerve fiber when exposed to changes in temperature. Ultimately, this allows us to detect ambient temperature in the warm/hot range. Similarly, TRPV1's molecular cousin TRPM8 is a cold-activated ion channel that responds to cold. Cold and hot receptors are segregated into distinct subpopulations of sensory nerve fibers, which shows that the information coming into the spinal cord is originally separate. Each sensory receptor has its own "labeled line" to convey a simple sensation experienced by the recipient. Ultimately, TRP channels act as thermosensors, channels that help us detect changes in ambient temperature.
Nociceptors
Nociceptors respond to potentially damaging stimuli by sending signals to the spinal cord and brain. This process, called nociception, usually causes the perception of pain. They are found in internal organs, as well as on the surface of the body. Nociceptors detect different kinds of damaging stimuli or actual damage. Those that only respond when tissues are damaged are known as "sleeping" or "silent" nociceptors.
Thermal nociceptors are activated by noxious heat or cold at various temperatures.
Mechanical nociceptors respond to excess pressure or mechanical deformation.
Chemical nociceptors respond to a wide variety of chemicals, some of which are signs of tissue damage. They are involved in the detection of some spices in food.
Sensory cortex
All stimuli received by the receptors listed above are transduced to an action potential, which is carried along one or more afferent neurons towards a specific area of the brain. While the term sensory cortex is often used informally to refer to the somatosensory cortex, the term more accurately refers to the multiple areas of the brain at which senses are received to be processed. For the five traditional senses in humans, this includes the primary and secondary cortices of the different senses: the somatosensory cortex, the visual cortex, the auditory cortex, the primary olfactory cortex, and the gustatory cortex. Other modalities have corresponding sensory cortex areas as well, including the vestibular cortex for the sense of balance.
The human sensory system consists of the following subsystems:
Visual system (Vision)
Auditory system (Hearing)
Somatosensory system (Touch/Temperature/Kinesthesia/Pain)
Gustatory system (Taste)
Olfactory system (Smell)
Vestibular system (Balance)
Somatosensory cortex
Located in the parietal lobe, the primary somatosensory cortex is the primary receptive area for the sense of touch and proprioception in the somatosensory system. This cortex is further divided into Brodmann areas 1, 2, and 3. Brodmann area 3 is considered the primary processing center of the somatosensory cortex as it receives significantly more input from the thalamus, has neurons highly responsive to somatosensory stimuli, and can evoke somatic sensations through electrical stimulation. Areas 1 and 2 receive most of their input from area 3. There are also pathways for proprioception (via the cerebellum), and motor control (via Brodmann area 4). | Biology and health sciences | Nervous system | null |
379224 | https://en.wikipedia.org/wiki/Damselfish | Damselfish | Damselfish are those within the subfamilies Abudefdufinae, Chrominae, Lepidozyginae, Pomacentrinae, and Stegastinae within the family Pomacentridae. Most species within this group are relatively small, with the largest species being about 30 cm (12 in) in length. Most damselfish species exist only in marine environments, but a few inhabit brackish or fresh water. These fish are found globally in tropical, subtropical, and temperate waters.
Habitat
Most damselfish species live in tropical rocky or coral reefs, and many of those are kept as marine aquarium pets. Their diets include small crustaceans, plankton, and algae. However, a few live in fresh and brackish waters, such as the freshwater damselfish, or in warm subtropical climates, such as the large orange Garibaldi, which inhabits the coast of southern California and the Pacific coast of Mexico.
Foraging
The domino damselfish D. albisella spends the majority (greater than 85%) of its daytime hours foraging. Larger individuals typically forage higher in a water column than do smaller ones. Damselfish of all sizes feed primarily on caridean shrimp and copepods. Males have relatively smaller stomach sizes during spawning season compared to females due to the allocation of resources for courtship and the guarding of nests. When current speeds are low, the damselfish forages higher in a water column, where the flux of plankton is greater and it has a larger food source. As current speeds increase, it forages closer to the bottom of the column. Feeding rates tend to be higher when currents are faster. Smaller fishes forage closer to their substrates than do larger ones, possibly in response to predation pressures.
Territoriality
There are many examples of resource partitioning and habitat selection that are driven by aggressive and territorial behaviors in this group. For example, the threespot damselfish S. planifrons is very defensive of its territory and is a classic example of extreme territoriality within the group. One species, the dusky damselfish S. adustus spends the majority of its life within its territory.
Domestication of mysid shrimps
Longfin damselfish (Stegastes diencaeus) around Carrie Bow Cay, Belize (16°48.15′N, 88°04.95′W), have been shown to actively protect planktonic mysids (Mysidium integrum) in their reef farms. The mysids fertilize the algae grown in the farms with their excretions, which in turn helps the algae-feeding damselfish to be healthier. In reef farms that house mysids, damselfish aggressively defend the farm area against other fish that would prey on the mysids, significantly more so than when their farms do not house mysid shrimps. These damselfish normally eat similar small invertebrates, yet they are docile towards the mysid shrimp. In the area, mysid shrimps are not found in swarms except in the farms maintained by damselfish. All these observations point to a pet-like relationship between the mysid shrimps and longfin damselfish in the area, with the damselfish as the domesticator and the mysids as the domesticate.
Courtship
In the species S. partitus, females do not choose to mate with males based on size. Even though large male size can be advantageous in defending nests and eggs against conspecifics among many animals, nest intrusions are not observed in this damselfish species. Females also do not choose their mates based upon the brood sizes of the males. In spite of the increased male parental care, brood size does not affect egg survival, as eggs are typically taken during the night when the males are not defending their nests. Rather, female choice of mates is dependent on male courtship rate. Males signal their parental quality by the vigor of their courtship displays, and females mate preferentially with vigorously courting males.
Male damselfish perform a courtship behavior called the signal jump, in which they rise in a water column and then rapidly swim back downward. The signal jump involves large amounts of rapid swimming, and females choose mates based on the vigor with which males do so. Females determine the male courtship rates using sounds that are produced during signal jumps. As the male damselfish swims down the water column, it creates a pulsed sound. Male courtship varies in the number and rates of those pulses.
In the beaugregory damselfish S. leucostictus males spend more time courting females that are larger in size. Female size is significantly correlated with ovary weight, and males intensify their courtship rituals for the more fecund females. Research has shown that males that mate with larger females do indeed receive and hatch greater numbers of eggs.
Mating
Male bicolor damselfish, E. partitus, exhibit polygamy, often courting multiple females simultaneously. In this species, evolutionary selection favors those males that begin mating as soon as possible during spawning seasons, even if the most favorable egg clutches are spawned at later times. Females often choose which males to mate with depending on the males' territory quality. Shelter sites are essential for the bicolor damselfish in avoiding predation, and females may evaluate the suitability of these sites in a male's territory before depositing their eggs.
Effect of distance on spawning
In the species S. nigricans, females usually mate with a single male each morning during spawning seasons. At dawn, they visit males’ territories to spawn. The distance to the territory of a mate influences the number of visits that a female engages in with a male. At short distances, females make many repeated visits. At longer ones, they may spawn their entire clutch in one visit. This plasticity in mating behavior can be attributed to two factors: (1) intrusions by other fish to feed in the females’ territories while they are away, which could make the females return frequently to their habitats in order to defend their resources, and (2) predatory attacks on the females, which increase in frequency as the distances that the females travel become longer. Intrusion by other fish into a female’s territory can diminish the habitat of its food and render it unstable. Thus, a spawning female should return to its home as often as possible. However, a greater number of spawning visits increases the chance of being attacked, especially when mating with males that are far away. To minimize overall costs, females change their number of spawning visits depending on male territory distance.
Filial cannibalism
The male cortez damselfish, S. rectifraenum, is known to engage in filial cannibalism. Studies have shown it typically consumes over twenty-five percent of its clutches. The males generally consume clutches that are smaller than average in size, as well as those that are still in the early stages of development. Female cortez damselfish tend to deposit their eggs with males who are already caring for early-stage eggs, rather than males with late-stage eggs. This preference is seen particularly in females that deposit smaller-sized clutches, which are more vulnerable to being consumed. For the males, filial cannibalism is an adaptive response to clutches that do not provide enough benefits to warrant the costs of parental care.
| Biology and health sciences | Acanthomorpha | Animals |