Toxication, toxification or toxicity exaltation is the conversion of a chemical compound into a more toxic form in living organisms or in substrates such as soil or water. The conversion can be caused by enzymatic metabolism in the organisms, as well as by abiotic chemical reactions. While the parent drug is usually less active, both the parent drug and its metabolite can be chemically active and cause toxicity, leading to mutagenesis, teratogenesis, and carcinogenesis. [ 1 ] [ 2 ] Different classes of enzymes, such as P450 monooxygenases, epoxide hydrolase, or acetyltransferases can catalyze the process in the cell, mostly in the liver. [ 2 ]
Parent non-toxic chemicals are generally referred to as protoxins. While toxication is generally undesirable, in certain cases it is required for the in vivo conversion of a prodrug to a metabolite with desired pharmacological or toxicological activity. Codeine is an example of a prodrug, metabolized in the body to the active compounds morphine and codeine-6-glucuronide.
Phase I drug metabolism includes bioactivation pathways, catalyzed by CYP450 enzymes, that produce toxic metabolites and thus have the potential to damage cells. Unusual levels of CYP450 enzyme activity can alter drug metabolism and convert drugs into more toxic forms. Among Phase I CYP450 enzymes, the subfamilies CYP2D6 and CYP3A are responsible for hepatotoxicity during the metabolism of a number of different drugs, including flucloxacillin, troleandomycin, and troglitazone. [ 3 ] Hepatotoxicity refers to a drug's toxicity to the liver.
Paracetamol (acetaminophen, APAP) is converted into the hepatotoxic metabolite NAPQI by the cytochrome P450 oxidase system, mainly by the subfamily CYP2E1. At therapeutic doses, hepatic reduced glutathione (GSH) quickly detoxifies the NAPQI that is formed. In an overdose, however, hepatic GSH stores are insufficient to detoxify NAPQI, resulting in acute liver injury. [ 4 ]
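The saturation logic described above lends itself to a toy numerical sketch. The following Python snippet is purely illustrative: the function, rate values, and GSH store size are invented assumptions rather than pharmacological constants; it only shows how a fixed detoxification capacity is overwhelmed at high doses.

```python
# Toy model of saturable NAPQI detoxification by glutathione (GSH).
# All parameter values are arbitrary illustrative assumptions, not
# measured pharmacological constants.

def simulate_napqi(dose_mg_kg, gsh_store=100.0, detox_per_step=10.0,
                   napqi_fraction=0.05, steps=24):
    """Crude hourly simulation: a fixed fraction of the APAP dose becomes
    NAPQI; GSH conjugates NAPQI until the store is depleted, after which
    free NAPQI accumulates (a proxy for liver injury risk)."""
    napqi = dose_mg_kg * napqi_fraction
    free_napqi = 0.0
    for _ in range(steps):
        detoxified = min(napqi, detox_per_step, gsh_store)
        gsh_store -= detoxified
        napqi -= detoxified
        if gsh_store <= 0:  # GSH exhausted: remaining NAPQI goes unconjugated
            free_napqi += napqi
            napqi = 0.0
    return free_napqi

print(simulate_napqi(150))    # modest dose: NAPQI fully cleared -> 0.0
print(simulate_napqi(5000))   # overdose: GSH depleted, free NAPQI remains
```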
Oxidoreductases are enzymes that catalyze reactions involving the transfer of electrons. Methanol is itself toxic because of its central nervous system depressant properties, but it is converted to formaldehyde by alcohol dehydrogenase and then to formic acid by aldehyde dehydrogenase, both of which are significantly more toxic. Formic acid and formaldehyde can cause severe acidosis, damage to the optic nerve, and other life-threatening complications. [ 5 ]
Ethylene glycol (common antifreeze) can be converted into toxic glycolic acid , glyoxylic acid and oxalic acid by aldehyde dehydrogenase, lactate dehydrogenase (LDH) and glycolate oxidase in mammalian organisms. [ 5 ] [ 6 ] The accumulation of the end product of the ethylene glycol mechanism, calcium oxalate , may cause malfunction in the kidney and lead to more severe consequences. [ 5 ]
Other examples of toxication by enzymatic metabolism include:
Increases in toxicity can also be caused by abiotic chemical reactions, in which non-living environmental factors drive the conversion. Anthropogenic trace compounds (ATCs) are potentially toxic to organisms in aquatic systems. [ 9 ]
Arsenic contamination in drinking water can be chemically toxic: the uptake and metabolism of arsenic may result in damage to the body. When organic arsenic is converted into the more toxic inorganic arsenic, it causes carcinogenesis, cytotoxicity (toxicity to cells) and genotoxicity (mutations in genes). [ 10 ] | https://en.wikipedia.org/wiki/Toxication |
Toxicity is the degree to which a chemical substance or a particular mixture of substances can damage an organism . [ 1 ] Toxicity can refer to the effect on a whole organism, such as an animal , bacterium , or plant , as well as the effect on a substructure of the organism, such as a cell ( cytotoxicity ) or an organ such as the liver ( hepatotoxicity ). Sometimes the word is more or less synonymous with poisoning in everyday usage.
A central concept of toxicology is that the effects of a toxicant are dose-dependent; even water can lead to water intoxication when taken in too high a dose, whereas for even a very toxic substance such as snake venom there is a dose below which there is no detectable toxic effect. Toxicity is species-specific, making cross-species analysis problematic. Newer paradigms and metrics are evolving to bypass animal testing while maintaining the concept of toxicity endpoints. [ 2 ]
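Dose-dependence of this kind is commonly summarized by a sigmoidal dose-response curve. The sketch below is a minimal illustration assuming a Hill-type function with invented LD 50 and slope values; it shows how the predicted response becomes negligible below some dose and reaches 50% at the median lethal dose.

```python
def fraction_affected(dose, ld50, hill_slope=2.0):
    """Sigmoidal (Hill-type) dose-response: fraction of a population
    showing the effect at a given dose. ld50 is the dose affecting 50%.
    Parameter values are illustrative assumptions, not measured data."""
    if dose <= 0:
        return 0.0
    return 1.0 / (1.0 + (ld50 / dose) ** hill_slope)

# Below some dose the predicted response is negligible; at the LD50 it is 50%.
for dose in (0.1, 1.0, 10.0, 100.0):
    print(f"dose {dose:>6}: {fraction_affected(dose, ld50=10.0):.3f}")
```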
In Ancient Greek medical literature, the adjective τοξικόν (meaning "toxic") was used to describe substances which had the ability of "causing death or serious debilitation or exhibiting symptoms of infection." [ 3 ] The word draws its origins from the Greek noun τόξον toxon (meaning "bow"), in reference to the use of bows and poisoned arrows as weapons. [ 3 ]
Humans have a deeply rooted history not only of being aware of toxicity, but also of taking advantage of it as a tool. Archaeologists studying bone arrows from caves in Southern Africa have noted that some, dating to 72,000–80,000 years ago, were likely dipped in specially prepared poisons to increase their lethality. [ 4 ] Although limitations of scientific instrumentation make it difficult to prove concretely, archaeologists hypothesize that the practice of making poison arrows was widespread in cultures as early as the Paleolithic era. [ 5 ] [ 6 ] The San people of Southern Africa have preserved this practice into the modern era, with the knowledge to prepare complex mixtures from poisonous beetles and plant-derived extracts, yielding an arrow-tip poison with a shelf life of several months to a year. [ 7 ]
There are generally five types of toxicities: chemical, biological, physical, radioactive and behavioural.
Disease-causing microorganisms and parasites are toxic in a broad sense but are generally called pathogens rather than toxicants. The biological toxicity of pathogens can be difficult to measure because the threshold dose may be a single organism. Theoretically one virus , bacterium or worm can reproduce to cause a serious infection . If a host has an intact immune system , the inherent toxicity of the organism is balanced by the host's response; the effective toxicity is then a combination. In some cases, e.g. cholera toxin , the disease is chiefly caused by a nonliving substance secreted by the organism, rather than the organism itself. Such nonliving biological toxicants are generally called toxins if produced by a microorganism, plant, or fungus, and venoms if produced by an animal.
Physical toxicants are substances that, due to their physical nature, interfere with biological processes. Examples include coal dust, asbestos fibres or finely divided silicon dioxide , all of which can ultimately be fatal if inhaled. Corrosive chemicals possess physical toxicity because they destroy tissues, but are not directly poisonous unless they interfere directly with biological activity. Water can act as a physical toxicant if taken in extremely high doses because the concentration of vital ions decreases dramatically with too much water in the body. Asphyxiant gases can be considered physical toxicants because they act by displacing oxygen in the environment but they are inert, not chemically toxic gases.
Radiation can have a toxic effect on organisms. [ 8 ]
Behavioral toxicity refers to the undesirable effects of essentially therapeutic levels of medication clinically indicated for a given disorder (DiMascio, Soltys and Shader, 1970). These undesirable effects include anticholinergic effects, alpha-adrenergic blockade, and dopaminergic effects, among others. [ 9 ]
Toxicity can be measured by its effects on the target (organism, organ, tissue or cell). Because individuals typically have different levels of response to the same dose of a toxic substance, a population-level measure of toxicity is often used which relates the probability of an outcome for a given individual in a population. One such measure is the LD 50 . When such data do not exist, estimates are made by comparison to substances with similar, known toxicity, or to similar exposures in similar organisms. Then, " safety factors " are added to account for uncertainties in data and evaluation processes. For example, if a dose of a toxic substance is safe for a laboratory rat, one might assume that one-tenth that dose would be safe for a human, applying a safety factor of 10 to account for interspecies differences between two mammals; if the data are from fish, one might use a factor of 100 to account for the greater difference between two chordate classes (fish and mammals). Similarly, an extra protection factor may be used for individuals believed to be more susceptible to toxic effects, such as in pregnancy or with certain diseases. And a newly synthesized, previously unstudied chemical that is believed to be very similar in effect to another compound could be assigned an additional protection factor of 10 to account for possible differences in effects that are probably much smaller. This approach is very approximate, but such protection factors are deliberately very conservative, and the method has been found to be useful in a wide variety of applications.
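As a rough illustration of the arithmetic, the following sketch derives a conservative reference dose by dividing an experimentally safe dose by multiplicative safety factors. The function and factor values mirror the examples in the text and are assumptions, not any specific agency's procedure.

```python
def reference_dose(experimental_safe_dose, interspecies=10.0,
                   intraspecies=10.0, extra=1.0):
    """Derive a conservative human reference dose by dividing an
    experimentally safe dose by multiplicative uncertainty ("safety")
    factors, as described above. Factor values follow the text's
    examples; any given regulatory scheme may differ."""
    return experimental_safe_dose / (interspecies * intraspecies * extra)

# 50 mg/kg/day safe in rats, a factor of 10 for rat-to-human differences,
# 10 for sensitive individuals, and an extra 10 for an unstudied but
# similar chemical:
print(reference_dose(50.0, interspecies=10.0, intraspecies=10.0, extra=10.0))
# -> 0.05 mg/kg/day
```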
Assessing all aspects of the toxicity of cancer-causing agents involves additional issues, since it is not certain if there is a minimal effective dose for carcinogens , or whether the risk is just too small to see. In addition, it is possible that a single cell transformed into a cancer cell is all it takes to develop the full effect (the "one hit" theory).
It is more difficult to determine the toxicity of chemical mixtures than a pure chemical because each component displays its own toxicity, and components may interact to produce enhanced or diminished effects. Common mixtures include gasoline , cigarette smoke , and industrial waste . Even more complex are situations with more than one type of toxic entity, such as the discharge from a malfunctioning sewage treatment plant, with both chemical and biological agents.
The preclinical toxicity testing on various biological systems reveals the species-, organ- and dose-specific toxic effects of an investigational product. The toxicity of substances can be observed by (a) studying the accidental exposures to a substance (b) in vitro studies using cells/ cell lines (c) in vivo exposure on experimental animals. Toxicity tests are mostly used to examine specific adverse events or specific endpoints such as cancer, cardiotoxicity, and skin/eye irritation. Toxicity testing also helps calculate the No Observed Adverse Effect Level (NOAEL) dose and is helpful for clinical studies. [ 10 ]
For substances to be regulated and handled appropriately they must be properly classified and labelled. Classification is determined by approved testing measures or calculations and has determined cut-off levels set by governments and scientists (for example, no-observed-adverse-effect levels , threshold limit values , and tolerable daily intake levels). Pesticides provide the example of well-established toxicity class systems and toxicity labels . While currently many countries have different regulations regarding the types of tests, numbers of tests and cut-off levels, the implementation of the Globally Harmonized System [ 11 ] [ 12 ] has begun unifying these countries.
Global classification looks at three areas: Physical Hazards (explosions and pyrotechnics), [ citation needed ] Health Hazards [ citation needed ] and environmental hazards . [ citation needed ]
Types of toxicity include those in which substances cause lethality to the entire body, lethality to specific organs, major or minor damage, or cancer. These are globally accepted definitions of what toxicity is. [ citation needed ] Anything falling outside of these definitions cannot be classified as that type of toxicant. [ citation needed ]
Acute toxicity looks at lethal effects following oral, dermal or inhalation exposure. It is split into five categories of severity where Category 1 requires the least amount of exposure to be lethal and Category 5 requires the most exposure to be lethal. The table below shows the upper limits for each category.
Note: The undefined values are expected to be roughly equivalent to the category 5 values for oral and dermal administration. [ citation needed ]
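A minimal sketch of this classification logic, encoding the commonly cited GHS acute oral toxicity cut-offs (LD 50 in mg/kg body weight); these cut-off values are stated here as assumptions, and regional adoptions of GHS may differ.

```python
# Commonly cited GHS acute oral toxicity upper limits (LD50, mg/kg body
# weight). Assumed values for illustration; regional adoptions may differ.
GHS_ORAL_UPPER_LIMITS = [
    (1, 5), (2, 50), (3, 300), (4, 2000), (5, 5000),
]

def ghs_oral_category(ld50_mg_kg):
    """Return the GHS acute oral toxicity category for an LD50 value,
    or None if the substance falls outside all five categories."""
    for category, upper in GHS_ORAL_UPPER_LIMITS:
        if ld50_mg_kg <= upper:
            return category
    return None

print(ghs_oral_category(3))     # 1 (most severe: least exposure is lethal)
print(ghs_oral_category(1500))  # 4
print(ghs_oral_category(9000))  # None (not classified)
```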
Skin corrosion and irritation are determined through a skin patch test analysis, similar to an allergic inflammation patch test . This examines the severity of the damage done; when it is incurred and how long it remains; whether it is reversible and how many test subjects were affected.
For a substance to be classified as corrosive to skin, the damage must penetrate through the epidermis into the dermis within four hours of application and must not reverse within 14 days. Skin irritation denotes damage less severe than corrosion: the damage occurs within 72 hours of application, or persists for three consecutive days after application within a 14-day period, or causes inflammation lasting 14 days in two test subjects. Mild skin irritation is minor damage (less severe than irritation) occurring within 72 hours of application or persisting for three consecutive days after application.
Serious eye damage involves tissue damage or degradation of vision which does not fully reverse in 21 days. Eye irritation involves changes to the eye which do fully reverse within 21 days.
An environmental hazard can be defined as any condition, process, or state adversely affecting the environment. These hazards can be physical or chemical and may be present in air, water, and/or soil. Such conditions can cause extensive harm to humans and other organisms within an ecosystem.
The EPA maintains a list of priority pollutants for testing and regulation. [ 14 ]
Workers in various occupations may be at a greater level of risk for several types of toxicity, including neurotoxicity. [ 15 ] The expression "Mad as a hatter" and the "Mad Hatter" of the book Alice in Wonderland derive from the known occupational toxicity of hatters who used a toxic chemical for controlling the shape of hats. Exposure to chemicals in the workplace environment may require evaluation by industrial hygiene professionals. [ 16 ]
Hazards in the arts have been an issue for artists for centuries, even though the toxicity of their tools, methods, and materials was not always adequately realized. Lead and cadmium, among other toxic elements, were often incorporated into the names of artist's oil paints and pigments, for example, "lead white" and "cadmium red".
20th-century printmakers and other artists began to become aware of the toxic substances, toxic techniques, and toxic fumes in glues, painting mediums, pigments, and solvents, many of which gave no indication of their toxicity in their labelling. An example was the use of xylol for cleaning silk screens . Painters began to notice the dangers of breathing painting mediums and thinners such as turpentine . Aware of toxicants in studios and workshops, in 1998 printmaker Keith Howard published Non-Toxic Intaglio Printmaking , which detailed twelve innovative intaglio -type printmaking techniques, including photo etching , digital imaging , and acrylic -resist hand-etching methods, and introduced a new method of non-toxic lithography . [ 17 ]
There are many environmental health mapping tools. TOXMAP is a Geographic Information System (GIS) from the Division of Specialized Information Services [ 18 ] of the United States National Library of Medicine (NLM) that uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency 's (EPA) Toxics Release Inventory and Superfund programs. TOXMAP is a resource funded by the US Federal Government. TOXMAP's chemical and environmental health information is taken from NLM's Toxicology Data Network (TOXNET) [ 19 ] and PubMed , and from other authoritative sources.
Aquatic toxicity testing subjects key indicator species of fish or crustacea to certain concentrations of a substance in their environment to determine the lethality level. Fish are exposed for 96 hours while crustacea are exposed for 48 hours. While GHS does not define toxicity past 100 mg/L, the EPA currently lists aquatic toxicity as "practically non-toxic" in concentrations greater than 100 ppm. [ 20 ]
Note: Category 4 is established for chronic exposure, but it simply contains any toxic substance that is mostly insoluble or for which no acute toxicity data exist.
Toxicity of a substance can be affected by many different factors, such as the pathway of administration (whether the toxicant is applied to the skin, ingested, inhaled, or injected), the time of exposure (a brief encounter or long term), the number of exposures (a single dose or multiple doses over time), the physical form of the toxicant (solid, liquid, gas), the concentration of the substance and, in the case of gases, the partial pressure (at high ambient pressure, the partial pressure for a given gas fraction increases), the genetic makeup of an individual, an individual's overall health, and many others. Several of the terms used to describe these factors have been included here.
Considering the limitations of the dose-response concept, a novel Drug Toxicity Index (DTI) has been proposed recently. [ 21 ] The DTI redefines drug toxicity, identifies hepatotoxic drugs, gives mechanistic insights, predicts clinical outcomes and has potential as a screening tool. | https://en.wikipedia.org/wiki/Toxicity |
In 40 CFR 156.62 , the EPA established four Toxicity Categories for acute hazards of pesticide products, with "Category I" being the highest toxicity category ( toxicity class ). Most human hazard, precautionary statements, and human personal protective equipment statements are based upon the Toxicity Category of the pesticide product as sold or distributed. In addition, toxicity categories may be used for regulatory purposes other than labeling , such as classification for restricted use and requirements for child-resistant packaging .
In certain cases, statements based upon the Toxicity Category of the product as diluted for use are also permitted. A Toxicity Category is assigned for each of five types of acute exposure, as specified in the table below. [ citation needed ]
The four toxicity categories, from one to four, are:
In the following table, the leftmost column lists the route of administration. | https://en.wikipedia.org/wiki/Toxicity_category_rating |
Toxicity characteristic leaching procedure (TCLP) is a soil sample extraction method for chemical analysis employed as an analytical method to simulate leaching through a landfill . The testing methodology is used to determine if a waste is characteristically hazardous, i.e., classified as one of the "D" listed wastes by the U.S. Environmental Protection Agency (EPA). The extract is analyzed for substances appropriate to the protocol.
In the United States, the Resource Conservation and Recovery Act (RCRA) of 1976 led to establishment of federal standards for the disposal of solid waste and hazardous waste . RCRA requires that industrial wastes and other wastes must be characterized following testing protocols published by EPA. [ 1 ] TCLP is one of these tests.
The Environmental Compliance Supervisor at a typical municipal landfill (as defined by RCRA Subtitle D ) uses TCLP data to determine whether a waste may be accepted into the facility. If TCLP analytical results are below the TCLP D-list maximum contamination levels (MCLs) the waste can be accepted. If they are above these levels the waste must be taken to a hazardous waste disposal facility and the cost of disposal may increase from about $50.00/ton to as much as $1200.00/ton.
As extremely contaminated material is expensive to dispose of, grading is necessary to ensure safe disposal and to avoid paying for disposal of "clean fill." The TCLP procedure is generally useful for classifying waste material for disposal options.
Spent abrasive or soil from a construction site usually needs to have a TCLP for lead done to determine where the waste goes next.
TCLP comprises four fundamental procedures:
In the TCLP procedure the pH of the sample material is first established, and the sample is then leached with an acetic acid / sodium hydroxide solution at a 1:20 mix of sample to solvent; for example, a TCLP jug may contain 100 g of sample and 2000 mL of solution. The leachate mixture is sealed in an extraction vessel for general analytes, or pressure-sealed as in zero-headspace extractions (ZHE) for volatile organic compounds , and tumbled for 18 hours to simulate an extended leaching time in the ground. It is then filtered so that only the solution (not the sample) remains, and this solution is analyzed. | https://en.wikipedia.org/wiki/Toxicity_characteristic_leaching_procedure |
Toxicity class refers to a classification system for pesticides that has been created by a national or international government-related or -sponsored organization. It addresses the acute toxicity of agents such as soil fumigants , fungicides , herbicides , insecticides , miticides , molluscicides , nematicides , or rodenticides .
Assignment to a toxicity class is based typically on results of acute toxicity studies such as the determination of LD 50 values in animal experiments, notably rodents , via oral, inhaled, or external application. The experimental design measures the acute death rate of an agent. The toxicity class generally does not address issues of other potential harm of the agent, such as bioaccumulation , issues of carcinogenicity , teratogenicity , mutagenic effects, or the impact on reproduction .
Regulating agencies may require that packaging of the agent be labeled with a signal word , a specific warning label to indicate the level of toxicity.
The World Health Organization (WHO) names four toxicity classes:
The system is based on LD50 determination in rats. An oral solid agent with an LD50 of 5 mg or less per kg body weight is Class Ia; 5–50 mg/kg is Class Ib; 50–2000 mg/kg is Class II; and more than 2000 mg/kg is Class III. Values may differ for liquid oral agents and dermal agents. [ 1 ]
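A minimal sketch of this classification logic for solid oral agents, using the cut-offs stated above (liquid oral and dermal agents, which use different values, are not modeled):

```python
def who_class_oral_solid(ld50_mg_kg):
    """WHO toxicity class for a solid oral agent, using the rat-LD50
    cut-offs stated in the text. Liquid oral and dermal agents use
    different values (not modeled here)."""
    if ld50_mg_kg <= 5:
        return "Ia"      # extremely hazardous
    if ld50_mg_kg <= 50:
        return "Ib"      # highly hazardous
    if ld50_mg_kg <= 2000:
        return "II"      # moderately hazardous
    return "III"         # slightly hazardous

print(who_class_oral_solid(4))     # Ia
print(who_class_oral_solid(300))   # II
```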
There are eight toxicity classes in the European Union 's classification system, which is regulated by Directive 67/548/EEC :
Very toxic and toxic substances are marked by the European toxicity symbol.
The Indian standardized system of toxicity labels for pesticides uses a 4-color system (red, yellow, blue, green) to plainly label containers with the toxicity class of the contents.
The United States Environmental Protection Agency (EPA) uses four toxicity classes in its toxicity category rating . Classes I to III are required to carry a signal word on the label. Pesticides are regulated in the United States primarily by the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA).
Class I materials are estimated to be fatal to an adult human at a dose of less than 5 grams (less than a teaspoon).
Class II materials are estimated to be fatal to an adult human at a dose of 5 to 30 grams.
Class III materials are estimated to be fatal to an adult human at some dose in excess of 30 grams.
Furthermore, the EPA classifies pesticides into those anybody can apply ( general use pesticides ), and those that must be applied by or under the supervision of a certified individual. Application of restricted use pesticides requires that a record of the application be kept. | https://en.wikipedia.org/wiki/Toxicity_class |
Toxicity labels , [ 1 ] viz. red label , yellow label , blue label and green label , are mandatory labels employed on pesticide containers in India identifying the level of toxicity (that is, the toxicity class ) of the contained pesticide. [ 1 ] [ 2 ] [ 3 ] The scheme follows from the Insecticides Act of 1968 [ 1 ] and the Insecticides Rules of 1971.
The labeling follows a general scheme as laid down in the Insecticides Rules, 1971 , and contains information such as brand name, name of manufacturer, name of the antidote in case of accidental consumption etc. A major aspect of the label is a color mark which represents the toxicity of the material by a color code. Thus the labelling scheme proposes four different colour labels: viz red, yellow, blue, and green. [ 4 ]
The toxicity classification applies only to pesticides which are allowed to be sold in India. Some of the classified pesticides may be banned in some states of India, by decision of the state governments. Some of the red-label and yellow-label pesticides were banned in the state of Kerala following the Endosulfan protests of 2011. [ 2 ] | https://en.wikipedia.org/wiki/Toxicity_label |
Toxicodynamics , termed pharmacodynamics in pharmacology , describes the dynamic interactions of a toxicant with a biological target and its biological effects. [ 1 ] A biological target , also known as the site of action, can be binding proteins, ion channels , DNA , or a variety of other receptors. When a toxicant enters an organism, it can interact with these receptors and produce structural or functional alterations. The mechanism of action of the toxicant, as determined by a toxicant’s chemical properties, will determine what receptors are targeted and the overall toxic effect at the cellular level and organismal level.
Toxicants have been grouped together according to their chemical properties by way of quantitative structure-activity relationships (QSARs), which allow prediction of toxic action based on these properties. Endocrine disrupting chemicals (EDCs) and carcinogens are examples of classes of toxicants that have been characterized in this way. EDCs mimic or block transcriptional activation normally caused by natural steroid hormones . These types of chemicals can act on androgen receptors , estrogen receptors and thyroid hormone receptors . This mechanism can include such toxicants as dichlorodiphenyldichloroethylene (DDE) and polychlorinated biphenyls (PCBs). Another class of chemicals, carcinogens, are substances that cause cancer and can be classified as genotoxic or nongenotoxic carcinogens. These categories include toxicants such as polycyclic aromatic hydrocarbons (PAHs) and carbon tetrachloride (CCl 4 ).
The process of toxicodynamics can be useful for application in environmental risk assessment by implementing toxicokinetic-toxicodynamic (TKTD) models. TKTD models can capture phenomena such as time-varying exposure, carry-over toxicity , organism recovery time, effects of mixtures, and extrapolation to untested chemicals and species . Due to these advantages, such models may be more applicable for risk assessment than traditional modeling approaches.
While toxicokinetics describes the changes in the concentrations of a toxicant over time due to the uptake, biotransformation , distribution and elimination of toxicants, toxicodynamics involves the interactions of a toxicant with a biological target and the functional or structural alterations in a cell that can eventually lead to a toxic effect. Depending on the toxicant’s chemical reactivity and vicinity, the toxicant may be able to interact with the biological target. Interactions between a toxicant and the biological target may also be more specific, where high-affinity binding sites increase the selectivity of interactions. For this reason, toxicity may be expressed primarily in certain tissues or organs . The targets are often receptors on the cell surface or in the cytoplasm and nucleus . Toxicants can either induce an unnecessary response or inhibit a natural response, which can cause damage. If the biological target is critical and the damage is severe enough, irreversible injury can occur first at the molecular level, which will translate into effects at higher levels of organization. [ 1 ]
EDCs are generally considered to be toxicants that either mimic or block the transcriptional activation normally caused by natural steroid hormones. [ 2 ] These chemicals include those acting on androgen receptors, estrogen receptors and thyroid hormone receptors. [ 2 ]
Endocrine disrupting chemicals can interfere with the endocrine system in a number of ways including hormone synthesis, storage/release, transport and clearance, receptor recognition and binding, and postreceptor activation. [ 3 ]
In wildlife, exposure to EDCs can result in altered fertility, reduced viability of offspring, impaired hormone secretion or activity and modified reproductive anatomy. [ 4 ] The reproductive anatomy of offspring can particularly be affected if maternal exposure occurs. [ 5 ] In females, this includes mammary glands , fallopian tubes , uterus , cervix , and vagina . In males, this includes the prostate , seminal vesicles , epididymides and testes . [ 5 ] Exposure of fish to EDCs has also been associated with abnormal thyroid function, decreased fertility, decreased hatching success, de-feminization and masculinization of female fish and alteration of immune function . [ 5 ]
Endocrine disruption as a mode of action for xenobiotics was brought into awareness by Our Stolen Future by Theo Colborn. [ 2 ] Endocrine disrupting chemicals are known to accumulate in body tissue and are highly persistent in the environment. [ 6 ] Many toxicants are known EDCs including pesticides , phthalates , phytoestrogens , some industrial/commercial products, and pharmaceuticals . [ 3 ] These chemicals are known to cause endocrine disruption via a few different mechanisms. While the mechanism associated with the thyroid hormone receptor is not well understood, two more established mechanisms involve the inhibition of the androgen receptor and activation of the estrogen receptor.
Certain toxicants act as endocrine disruptors by interacting with the androgen receptor. DDE is one example of a chemical that acts via this mechanism. DDE is a metabolite of DDT that is widespread in the environment. [ 1 ] Although production of DDT has been banned in the Western world, this chemical is extremely persistent and is still commonly found in the environment along with its metabolite DDE. [ 1 ] DDE is an antiandrogen , which means it alters the expression of specific androgen-regulated genes, and is an androgen receptor (AR)-mediated mechanism. [ 1 ] DDE is a lipophilic compound which diffuses into the cell and binds to the AR. [ 1 ] Through binding, the receptor is inactivated and cannot bind to the androgen response element on DNA. [ 1 ] This inhibits the transcription of androgen-responsive genes [ 1 ] which can have serious consequences for exposed wildlife. In 1980, there was a spill in Lake Apopka , Florida which released the pesticide dicofol and DDT along with its metabolites. [ 4 ] The neonatal and juvenile alligators present in this lake have been extensively studied and observed to have altered plasma hormone concentrations, decreased clutch viability, increased juvenile mortality, and morphological abnormalities in the testis and ovary. [ 4 ]
Toxicants may also cause endocrine disruption through interacting with the estrogen receptor. This mechanism has been well-studied with PCBs. These chemicals have been used as coolants and lubricants in transformers and other electrical equipment due to their insulating properties. [ 7 ] A purely anthropogenic substance, PCBs are no longer in production in the United States due to the adverse health effects associated with exposure, but they are highly persistent and are still widespread in the environment. [ 7 ] PCBs are a xenoestrogen , which elicit an enhancing (rather than inhibiting) response, and are mediated by the estrogen receptor. [ 1 ] These are often referred to as estrogen mimics because they mimic the effects of estrogen. PCBs often build up in sediments and bioaccumulate in organisms. [ 1 ] These chemicals diffuse into the nucleus and bind to the estrogen receptor. [ 1 ] The estrogen receptor is kept in an inactive conformation through interactions with proteins such as heat shock proteins 59, 70, and 90. [ 8 ] After the toxicant binding occurs, the estrogen receptor is activated and forms a homodimer complex which seeks out estrogen response elements in the DNA. [ 8 ] The binding of the complex to these elements causes a rearrangement of the chromatin and transcription of the gene, resulting in production of a specific protein. [ 8 ] In doing this, PCBs elicit an estrogenic response which can affect numerous functions within the organism. [ 1 ] These effects are observed in various aquatic species. The levels of PCBs in marine mammals are often very high as a result of bioaccumulation. [ 9 ] Studies have demonstrated that PCBs are responsible for reproductive impairment in the harbor seal ( Phoca vitulina ). [ 9 ] Similar effects have been found in the grey seal ( Halichoerus grypus ), the ringed seal ( Pusa hispida ) and the California sea lion ( Zalophys californianus ). [ 9 ] In the grey seals and ringed seals, uterine occlusions and stenosis were found which led to sterility . [ 9 ] If exposed to a xenoestrogen such as PCBs, male fish have also been seen to produce vitellogenin . [ 8 ] Vitellogenin is an egg protein female fish normally produce but is not usually present in males except at very low concentrations. [ 8 ] This is often used as a biomarker for EDCs. [ 8 ]
Carcinogens are defined as any substance that causes cancer . The toxicodynamics of carcinogens can be complex due to the varying mechanisms of action for different carcinogenic toxicants. Because of their complex nature, carcinogens are classified as either genotoxic or nongenotoxic carcinogens.
The effects of carcinogens are most often related to human exposures but mammals are not the only species that can be affected by cancer-causing toxicants. [ 10 ] Many studies have shown that cancer can develop in fish species as well. [ 10 ] Neoplasms occurring in epithelial tissue such as the liver , gastrointestinal tract, and the pancreas have been linked to various environmental toxicants. [ 10 ] Carcinogens preferentially target the liver in fish and develop hepatocellular and biliary lesions. [ 10 ]
Genotoxic carcinogens interact directly with DNA and genetic material or indirectly by their reactive metabolites. [ 11 ] Toxicants such as PAHs can be genotoxic carcinogens to aquatic organisms. [ 10 ] [ 12 ] PAHs are widely spread throughout the environment through the incomplete burning of coal, wood, or petroleum products. [ 12 ] Although PAHs do not bioaccumulate in vertebrate tissue, many studies have confirmed that certain PAH compounds such as benzo(a)pyrene , benz(a)anthracene , and Benzofluoranthene, are bioavailable and responsible for liver diseases like cancer in wild fish populations. [ 12 ] One mechanism of action for genotoxic carcinogens includes the formation of DNA adducts . Once the PAH compound enters an organism, it becomes metabolized and available for biotransformation.
The biotransformation process can activate the PAH compound and transform it into a diol epoxide, [ citation needed ] which is a very reactive intermediate . These diol-epoxides covalently bind with DNA base pairs , most often with guanine and adenine to form stable adducts within the DNA structure. [ citation needed ] The binding of diol epoxides and DNA base pairs blocks polymerase replication activity. This blockage ultimately contributes to an increase in DNA damage by reducing repair activity. [ citation needed ]
Due to these processes, PAH compounds are thought to play a role in the initiation and early promotion stage of carcinogenesis . Fish exposed to PAHs develop a range of liver lesions , some of which are characteristic of hepatocarcinogenicity . [ 12 ]
Nongenotoxic, or epigenetic carcinogens are different and slightly more ambiguous than genotoxic carcinogens since they are not directly carcinogenic. Nongenotoxic carcinogens act by secondary mechanisms that do not directly damage genes. This type of carcinogenesis does not change the sequence of DNA; instead it alters the expression or repression of certain genes by a wide variety of cellular processes. [ 11 ] Since these toxicants do not directly act on DNA, little is known about the mechanistic pathway. [ 10 ] It has been proposed that modification of gene expression from nongenotoxic carcinogens can occur by oxidative stress , peroxisome proliferation, suppression of apoptosis , alteration of intercellular communication, and modulation of metabolizing enzymes. [ 11 ]
Carbon tetrachloride is an example of a probable nongenotoxic carcinogen to aquatic vertebrates. Historically, carbon tetrachloride has been used in pharmaceutical production, petroleum refining, and as an industrial solvent. [ 13 ] Due to its widespread industrial use and release into the environment, carbon tetrachloride has been found in drinking water and has therefore become a concern for aquatic organisms. [ 14 ] Because of its high hepatotoxic properties, carbon tetrachloride could potentially be linked to liver cancer. Experimental cancer studies have shown that carbon tetrachloride may cause benign and malignant liver tumors in rainbow trout . [ 13 ] [ 14 ] Carbon tetrachloride works as a nongenotoxic carcinogen by forming free radicals which induce oxidative stress. [ 12 ] It has been proposed that once carbon tetrachloride enters the organism, it is metabolized to trichloromethyl and trichloromethyl peroxy radicals by the CYP2E1 enzyme . [ 12 ] [ 15 ] The more reactive radical, trichloromethyl peroxy, can attack polyunsaturated fatty acids in the cellular membrane to form fatty acid free radicals and initiate lipid peroxidation . [ 15 ] The attack on the cellular membrane increases its permeability, causing a leakage of enzymes, and disrupts cellular calcium homeostasis . [ 15 ] This loss of calcium homeostasis activates calcium-dependent degradative enzymes and cytotoxicity , causing hepatic damage. [ 15 ] The regenerative and proliferative changes that occur in the liver during this time could increase the frequency of genetic damage, resulting in a possible increase of cancer. [ 15 ]
Toxicodynamics can be used in combination with toxicokinetics in environmental risk assessment to determine the potential effects of releasing a toxicant into the environment. The most widely used method of incorporating this are TKTD models.
Both toxicokinetics and toxicodynamics have now been described, and using these definitions models were formed, where the internal concentration (TK) and damage (TD) are simulated in response to exposure. TK and TD are separated in the model to allow for the identification of properties of toxicants that determine TK and those that determine TD. To use this type of model, parameter values for TK processes need to be obtained first. Second, the TD parameters need to be estimated. Both of these steps require a large database of toxicity information for parameterization . After establishing all the parameter values for the TKTD model, and using basic scientific precautions, the model can be used to predict toxic effects, calculate recovery times for organisms, or establish extrapolations from the model to toxicity of untested toxicants and species. [ 16 ] [ 17 ]
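A minimal TKTD sketch follows, assuming a one-compartment toxicokinetic equation coupled to a damage state variable in the spirit of damage-based models such as the Threshold Damage Model; all rate constants and the threshold are invented placeholders that would in practice be estimated from toxicity data as described above.

```python
# Minimal TKTD sketch: one-compartment toxicokinetics plus a damage
# state variable. All rate constants are illustrative assumptions that
# would require calibration against toxicity data.

def simulate_tktd(c_ext, hours, dt=0.1, k_in=0.5, k_out=0.2,
                  k_damage=0.3, k_repair=0.1, threshold=2.0):
    """Euler integration of:
         dC/dt = k_in * C_ext(t) - k_out * C    (TK: internal concentration)
         dD/dt = k_damage * C - k_repair * D    (TD: damage, with repair)
       Returns the time (h) at which damage first exceeds the effect
       threshold, or None if it never does (i.e., the organism recovers)."""
    c = d = t = 0.0
    while t < hours:
        c += dt * (k_in * c_ext(t) - k_out * c)
        d += dt * (k_damage * c - k_repair * d)
        if d >= threshold:
            return t
        t += dt
    return None

pulse = lambda t: 1.0 if t < 24.0 else 0.0  # 24 h exposure pulse, then clean water
print(simulate_tktd(pulse, hours=96.0))     # time at which damage crosses threshold
```

Separating TK and TD in this way is what lets the same model handle time-varying exposure and recovery: the internal concentration can fall after the pulse ends, while the damage variable recovers at its own, slower rate.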
It has been argued that the current challenges facing risk assessments can be addressed with TKTD modeling. [ 16 ] TKTD models were developed in response to several factors, one being the lack of time being considered as a factor in toxicity and risk assessment. Some of the earliest TKTD models, such as the Critical Body Residue (CBR) model and the Critical Target Occupation (CTO) model, did consider time as a factor, but a criticism has been that they apply only to very specific circumstances, such as reversibly acting or irreversibly acting toxicants. Further extrapolations of the CBR and CTO models are DEBtox , which can model sublethal endpoints, and hazard versions of the CTO, which take into account stochastic death as opposed to individual tolerance . [ 18 ] Another significant step in developing TKTD models was the incorporation of a state variable for damage. By using damage as a toxicodynamic state variable, intermediate recovery rates can be modeled for toxicants that act reversibly with their targets, without the assumptions of instant recovery (CBR model) or irreversible interactions (CTO model). TKTD models that incorporate damage are the Damage Assessment Model (DAM) and the Threshold Damage Model (TDM). [ 16 ] [ 18 ] For what may seem like straightforward endpoints , a variety of different TKTD approaches exist. A review of the assumptions and hypotheses of each was previously published in the creation of a general unified threshold model of survival (GUTS). [ 18 ]
As referenced above, TKTD models have several advantages to traditional models for risk assessments. The principal advantages to using TKTD models are: [ 16 ]
Due to these advantages, TKTD models may be more powerful than traditional dose-response models because they incorporate chemical concentrations as well as temporal dimensions. [ 16 ] Toxicodynamic modeling (such as TKTD models) has been shown to be a useful tool for toxicological research, with increasing opportunities to use these results in risk assessment to permit a more scientifically based risk assessment that is less reliant on animal testing . [ 21 ] Overall, these types of models can formalize knowledge about the toxicity of toxicants and organism sensitivity , create new hypotheses, and simulate temporal aspects of toxicity, making them useful tools for risk assessment. [ 16 ] [ 18 ] | https://en.wikipedia.org/wiki/Toxicodynamics |
Toxicofera ( Latin for " toxin -bearers") is a proposed clade of scaled reptiles (squamates) that includes the Serpentes (snakes), Anguimorpha ( monitor lizards , beaded lizards , and alligator lizards ) and Iguania ( iguanas , agamas , and chameleons ). Toxicofera contains about 4,600 species (nearly 60%) of extant Squamata . [ 2 ] It encompasses all venomous reptile species , as well as numerous related non-venomous species. There is little morphological evidence to support this grouping; however, it has been recovered by all molecular analyses as of 2012. [ 3 ] [ 4 ] [ 5 ] [ needs update ]
Toxicofera combines the following groups from traditional classification : [ 2 ]
The relationships between these extant groups and two extinct taxa are shown in the following cladogram , which is based on Reeder et al. (2015; Fig. 1). [ 6 ]
[Cladogram after Reeder et al. (2015), showing Serpentes, † Mosasauria, Anguimorpha, † Polyglyphanodontia, and Iguania]
Alongside these groups, Mosasauria , an extinct group including large marine reptiles primarily known from the Late Cretaceous, has been placed as part of the group. It has often been supposed that mosasaurs are most closely related to snakes, the group containing the two being dubbed Pythonomorpha ; however, other studies have questioned this, finding that the closest relatives of mosasaurs are members of Varanoidea . [ 7 ] Polyglyphanodontia , a group of extinct herbivorous lizards known from the Cretaceous, has also been placed as part of this group in some studies as the sister group to Iguania, though other studies have instead suggested that they are most closely related to Teiioidea and thus placed outside Toxicofera. [ 8 ]
Venom in squamates has historically been considered a rarity; while it has been known in Serpentes since ancient times , the actual percentage of snake species considered venomous was relatively small (around 25%). [ 9 ] Of the approximately 2,650 species of advanced snakes (Caenophidia), only the front-fanged species (≈650) were considered venomous by the anthropocentric definition.
Following the classification of Helodermatidae in the 19th century, their venom was thought to have developed independently. [ 2 ] In snakes, the venom gland is in the upper jaw, but in helodermatids, it is found in the lower jaw. [ 2 ] The origin of venom in squamates was thus considered relatively recent in evolutionary terms and the result of convergent evolution among the seemingly- polyphyletic venomous snake families . [ citation needed ]
In 2003 a study was published that described venom in snake subfamilies previously thought to lack it. [ 10 ] Further study claimed nearly all "non-venomous" snakes produce venom to a certain extent, suggesting a single, and thus far more ancient origin for venom in Serpentes than had been considered until then. [ 11 ] [ 12 ] As a practical matter, Fry cautioned: [ 13 ]
Some non-venomous snakes have been previously thought to have only mild 'toxic saliva '. But these results suggest that they actually possess true venoms. We even isolated from a rat snake [ Coelognathus radiatus (formerly known as Elaphe radiata ) [ 11 ] ] , a snake common in pet stores, a typical cobra -style neurotoxin , one that is as potent as comparative toxins found in close relatives of the cobra. These snakes typically have smaller quantities of venom and lack fangs, but they can still deliver their venom via their numerous sharp teeth. But not all of these snakes are dangerous. It does mean, however, that we need to re-evaluate the relative danger of non-venomous snakes.
This prompted further research, which led to the discovery of venom (and venom genes ) in species from groups which were not previously known to produce it, e.g. in Iguania (specifically Pogona barbata from the family Agamidae ) and Varanidae (from Varanus varius ). [ 2 ] It is thought that this was the result of descent from a common venom-producing squamate ancestor; the hypothesis was described simply as the "venom clade" when first proposed to the scientific community . [ 2 ] The venom clade included Anguidae for phylogenetic reasons and adopted a previously suggested clade name: Toxicofera. [ 14 ]
It was estimated that the common ancestral species that first developed venom in the venom clade lived on the order of 200 million years ago. [ 2 ] The venoms are thought to have evolved after genes normally active in various parts of the body duplicated and the copies found new use in the salivary glands . [ 10 ]
Among snake families traditionally classified as venomous, the capacity seems to have evolved to extremes more than once by parallel evolution ; 'non-venomous' snake lineages have either lost the ability to produce venom (but may still have lingering venom pseudogenes ) or actually do produce venom in small quantities (e.g. 'toxic saliva'), likely sufficient to assist in small prey capture, but not normally causing harm to humans if bitten. [ citation needed ]
The newly discovered diversity of squamate species producing venoms is a treasure trove for those seeking to develop new pharmaceutical drugs; many of these venoms lower blood pressure , for example. [ 2 ] Previously known venomous squamates have already provided the basis for medications such as Ancrod , Captopril , Eptifibatide , Exenatide and Tirofiban . [ citation needed ]
The world's largest venomous lizard and the largest species of venomous land animal is the Komodo dragon . [ 15 ]
Other scientists such as Washington State University biologist Kenneth V. Kardong and toxicologists Scott A. Weinstein and Tamara L. Smith, have stated that the allegation of venom glands found in many of these animals "has had the effect of underestimating the variety of complex roles played by oral secretions in the biology of reptiles, produced a very narrow view of oral secretions and resulted in misinterpretation of reptilian evolution". According to these scientists "reptilian oral secretions contribute to many biological roles other than to quickly dispatch prey". These researchers concluded that, "Calling all in this clade venomous implies an overall potential danger that does not exist, misleads in the assessment of medical risks, and confuses the biological assessment of squamate biochemical systems". [ 16 ] More recently, it has been suggested that many of the shared toxins that underlie the Toxicofera hypothesis are in fact not toxins at all. [ 17 ] | https://en.wikipedia.org/wiki/Toxicofera |
Toxicogenomics is a subdiscipline of pharmacology that deals with the collection, interpretation, and storage of information about gene and protein activity within a particular cell or tissue of an organism in response to exposure to toxic substances . Toxicogenomics combines toxicology with genomics or other high-throughput molecular profiling technologies such as transcriptomics , proteomics and metabolomics . [ 1 ] [ 2 ] Toxicogenomics endeavors to elucidate the molecular mechanisms involved in the expression of toxicity, and to derive molecular expression patterns (i.e., molecular biomarkers ) that predict toxicity or the genetic susceptibility to it. [ 3 ]
In pharmaceutical research, toxicogenomics is defined as the study of the structure and function of the genome as it responds to adverse xenobiotic exposure. It is the toxicological subdiscipline of pharmacogenomics , which is broadly defined as the study of inter-individual variations in whole-genome or candidate gene single-nucleotide polymorphism maps, haplotype markers, and alterations in gene expression that might correlate with drug responses. [ 4 ] [ 5 ] Though the term toxicogenomics first appeared in the literature in 1999, [ 6 ] it was by that time already in common use within the pharmaceutical industry as its origin was driven by marketing strategies from vendor companies. The term is still not universally accepted, and others have offered alternative terms such as chemogenomics to describe essentially the same field of study. [ 7 ]
The nature and complexity of the data (in volume and variability) demands highly developed processes of automated handling and storage. The analysis usually involves a wide array of bioinformatics and statistics , [ 8 ] often including statistical classification approaches. [ 9 ]
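As an illustration of the classification step, the sketch below trains an off-the-shelf classifier on a synthetic gene-expression matrix. The data are invented stand-ins; real toxicogenomics studies use curated omics matrices and far more rigorous validation (nested cross-validation, batch correction, and so on).

```python
# Sketch of a statistical-classification step: predicting a toxic vs.
# non-toxic label from gene-expression profiles. Data are synthetic
# stand-ins, not real measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_compounds, n_genes = 60, 500
X = rng.normal(size=(n_compounds, n_genes))   # expression profiles
y = rng.integers(0, 2, size=n_compounds)      # 1 = toxic, 0 = non-toxic
X[y == 1, :10] += 1.5   # pretend ten genes respond to the toxic compounds

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)     # 5-fold cross-validation
print(f"cross-validated accuracy: {scores.mean():.2f}")
```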
In pharmaceutical drug discovery and development , toxicogenomics is used to study possible adverse (i.e. toxic ) effects of pharmaceutical drugs in defined model systems in order to draw conclusions on the toxic risk to patients or the environment. Both the United States Environmental Protection Agency (EPA) and the Food and Drug Administration (FDA) currently preclude basing regulatory decision-making on genomics data alone. However, they do encourage the voluntary submission of well-documented, quality genomics data. Both agencies are considering the use of submitted data on a case-by-case basis for assessment purposes (e.g., to help elucidate mechanism of action or contribute to a weight-of-evidence approach) or for populating relevant comparative databases by encouraging parallel submissions of genomics data and traditional toxicological test results. [ 10 ]
Chemical Effects in Biological Systems is a project hosted by the National Institute of Environmental Health Sciences building a knowledge base of toxicology studies including study design, clinical pathology, and histopathology and toxicogenomics data. [ 11 ] [ 12 ]
InnoMed PredTox assesses the value of combining results from various omics technologies with results from more conventional toxicology methods to enable more informed decision-making in preclinical safety evaluation. [ 13 ]
Open TG-GATEs (Toxicogenomics Project-Genomics Assisted Toxicity Evaluation System) is a Japanese public-private effort which has published gene expression and pathology information for more than 170 compounds (mostly drugs). [ 14 ]
The Predictive Safety Testing Consortium aims to identify and clinically qualify safety biomarkers for regulatory use as part of the FDA 's "Critical Path Initiative". [ 13 ]
ToxCast is a program for Predicting Hazard, Characterizing Toxicity Pathways, and Prioritizing the Toxicity Testing of Environmental Chemicals at the United States Environmental Protection Agency . [ 15 ]
Tox21 is a federal collaboration involving the National Institutes of Health (NIH), Environmental Protection Agency (EPA), and Food and Drug Administration (FDA) that is aimed at developing better toxicity assessment methods. [ 16 ] Within this project, the toxic effects of chemical compounds on cell lines derived from the 1000 Genomes Project individuals were assessed and associations with genetic markers were determined. [ 17 ] Parts of these data were used in the NIEHS-NCATS-UNC DREAM Toxicogenetics Challenge in order to determine methods for predicting cytotoxicity in individuals. [ 18 ] [ 19 ] | https://en.wikipedia.org/wiki/Toxicogenomics |
Toxicokinetics (often abbreviated as 'TK') is the description of both the rate at which a chemical enters the body and what occurs to metabolize and excrete the compound once it is in the body.
It is an application of pharmacokinetics to determine the relationship between the systemic exposure of a compound and its toxicity . It is used primarily for establishing relationships between exposures in toxicology experiments in animals and the corresponding exposures in humans. However, it can also be used in environmental risk assessments in order to determine the potential effects of releasing chemicals into the environment. In order to quantify toxic effects, toxicokinetics can be combined with toxicodynamics. Such toxicokinetic-toxicodynamic (TKTD) models are used in ecotoxicology .
Similarly, physiological toxicokinetic models are physiological pharmacokinetic models developed to describe and predict the behavior of a toxicant in an animal body; for example, what parts (compartments) of the body a chemical may tend to enter (e.g. fat, liver, spleen, etc.), and whether or not the chemical is expected to be metabolized or excreted and at what rate.
Four potential processes exist for a chemical interacting with an animal: absorption , distribution, metabolism and excretion (ADME). Absorption describes the entrance of the chemical into the body, and can occur through the air, water, food, or soil. Once a chemical is inside a body, it can be distributed to other areas of the body through diffusion or other biological processes. At this point, the chemical may undergo metabolism and be biotransformed into other chemicals ( metabolites ). These metabolites can be less or more toxic than the parent compound. After this potential biotransformation occurs, the metabolites may leave the body, be transformed into other compounds, or continue to be stored in the body compartments.
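The ADME processes above are often summarized quantitatively by a one-compartment model with first-order absorption and elimination (the Bateman equation). The sketch below assumes invented rate constants purely for illustration.

```python
import math

def internal_amount(t, dose, ka=1.0, ke=0.2):
    """Classic one-compartment model with first-order absorption (ka)
    and first-order elimination (ke): the Bateman equation. Rate
    constants here are illustrative assumptions (units: 1/h)."""
    if abs(ka - ke) < 1e-12:            # degenerate case ka == ke
        return dose * ka * t * math.exp(-ka * t)
    return dose * ka / (ka - ke) * (math.exp(-ke * t) - math.exp(-ka * t))

# Internal amount rises while absorption dominates, peaks, then falls
# as elimination (metabolism plus excretion) takes over.
for t in (0.5, 2.0, 8.0, 24.0):
    print(f"t = {t:>4} h: {internal_amount(t, dose=100.0):.1f}")
```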
A well designed toxicokinetic study may involve several different strategies and depends on the scientific question to be answered. Controlled acute and repeated toxicokinetic animal studies are useful to identify a chemical's biological persistence, tissue and whole body half-life , and its potential to bioaccumulate. Toxicokinetic profiles can change with increasing exposure duration or dose. Real world environmental exposures generally occur as low level mixtures, such as from air, water, food, or tobacco products. Mixture effects may differ from individual chemical toxicokinetic profiles because of chemical interactions, synergistic, or competitive processes. For other reasons, it is equally important to characterize the toxicokinetics of individual chemicals constituents found in mixtures as information on behavior or fate of the individual chemical can help explain environmental, human, and wildlife biomonitoring studies. [ 1 ] | https://en.wikipedia.org/wiki/Toxicokinetics |
Toxicology is a scientific discipline , overlapping with biology , chemistry , pharmacology , and medicine , that involves the study of the adverse effects of chemical substances on living organisms [ 1 ] and the practice of diagnosing and treating exposures to toxins and toxicants . The relationship between dose and its effects on the exposed organism is of high significance in toxicology. Factors that influence chemical toxicity include the dosage, duration of exposure (whether it is acute or chronic), route of exposure, species, age, sex, and environment. Toxicologists are experts on poisons and poisoning . There is a movement for evidence-based toxicology as part of the larger movement towards evidence-based practices . Toxicology is currently contributing to the field of cancer research, since some toxins can be used as drugs for killing tumor cells. One prime example of this is ribosome-inactivating proteins , tested in the treatment of leukemia . [ 2 ]
The word toxicology ( /ˌtɒksɪˈkɒlədʒi/ ) is a neoclassical compound from Neo-Latin , first attested c. 1799 , [ 3 ] from the combining forms toxico- + -logy , which in turn come from the Ancient Greek words τοξικός toxikos ("poisonous") and λόγος logos ("subject matter").
The earliest treatise dedicated to the general study of plant and animal poisons, including their classification, recognition, and the treatment of their effects is the Kalpasthāna , one of the major sections of the Suśrutasaṃhitā , a Sanskrit work composed before ca. 300 CE and perhaps in part as early as the fourth century BCE. [ 4 ] [ 5 ] The Kalpasthāna was influential on many later Sanskrit medical works and was translated into Arabic and other languages, influencing South East Asia, the Middle East, Tibet and eventually Europe. [ 6 ] [ 7 ]
Dioscorides , a Greek physician in the court of the Roman emperor Nero , made an early attempt to classify plants according to their toxic and therapeutic effect. [ 8 ] A work attributed to the 10th century author Ibn Wahshiyya called the Book on Poisons describes various toxic substances and poisonous recipes that can be made using magic . [ 9 ] A 14th century Kannada poetic work attributed to the Jain prince Mangarasa, Khagendra Mani Darpana , describes several poisonous plants. [ 10 ]
The 16th-century Swiss physician Paracelsus is considered "the father" of modern toxicology, based on his rigorous (for the time) approach to understanding the effects of substances on the body. [ 11 ] He is credited with the classic toxicology maxim, " Alle Dinge sind Gift und nichts ist ohne Gift; allein die Dosis macht, dass ein Ding kein Gift ist. " which translates as, "All things are poisonous and nothing is without poison; only the dose makes a thing not poisonous." This is often condensed to: " The dose makes the poison " or in Latin "Sola dosis facit venenum". [ 12 ] : 30
Mathieu Orfila is also considered the modern father of toxicology, having given the subject its first formal treatment in 1813 in his Traité des poisons , also called Toxicologie générale . [ 13 ]
In 1850, Jean Stas became the first person to successfully isolate plant poisons from human tissue. This allowed him to identify the use of nicotine as a poison in the Bocarmé murder case, providing the evidence needed to convict the Belgian Count Hippolyte Visart de Bocarmé of killing his brother-in-law. [ 14 ]
The goal of toxicity assessment is to identify adverse effects of a substance. [ 15 ] Adverse effects depend on two main factors: i) routes of exposure (oral, inhalation, or dermal) and ii) dose (duration and concentration of exposure). To explore dose, substances are tested in both acute and chronic models. [ 16 ] Generally, different sets of experiments are conducted to determine whether a substance causes cancer and to examine other forms of toxicity. [ 16 ]
Factors that influence chemical toxicity include the dose, the duration and route of exposure, and the species, age, sex, and environment of the exposed organism. [ 12 ]
The discipline of evidence-based toxicology strives to transparently, consistently, and objectively assess available scientific evidence in order to answer questions in toxicology, [ 17 ] the study of the adverse effects of chemical, physical, or biological agents on living organisms and the environment, including the prevention and amelioration of such effects. [ 18 ] Evidence-based toxicology has the potential to address concerns in the toxicological community about the limitations of current approaches to assessing the state of the science. [ 19 ] [ 20 ] These include concerns related to transparency in decision-making, synthesis of different types of evidence, and the assessment of bias and credibility. [ 21 ] [ 22 ] [ 23 ] Evidence-based toxicology has its roots in the larger movement towards evidence-based practices .
Toxicity experiments may be conducted in vivo (using the whole animal), in vitro (testing on isolated cells or tissues), or in silico (in a computer simulation). [ 24 ]
The classic experimental tool of toxicology is testing on non-human animals. [ 12 ] Examples of model organisms are Galleria mellonella , [ 25 ] which can replace small mammals; zebrafish ( Danio rerio ), which allow for the study of toxicology in a lower-order vertebrate in vivo ; [ 26 ] [ 27 ] and Caenorhabditis elegans . [ 28 ] As of 2014, such animal testing provides information that is not available by other means about how substances function in a living organism. [ 29 ] The use of non-human animals for toxicology testing is opposed by some organisations for reasons of animal welfare, and it has been restricted or banned under some circumstances in certain regions, such as the testing of cosmetics in the European Union. [ 30 ]
While testing in animal models remains a method of estimating human effects, there are both ethical and technical concerns with animal testing. [ 31 ]
Since the late 1950s, the field of toxicology has sought to reduce or eliminate animal testing under the rubric of " Three Rs " – reduce the number of experiments with animals to the minimum necessary; refine experiments to cause less suffering; and replace in vivo experiments with other types, or use simpler forms of life when possible. [ 32 ] [ 33 ] The historical development of alternative testing methods in toxicology has been published by Balls. [ 34 ]
Computer modeling is an example of an alternative in vitro toxicology testing method; using computer models of chemicals and proteins, structure-activity relationships can be determined, and chemical structures that are likely to bind to, and interfere with, proteins with essential functions, can be identified. [ 35 ] This work requires expert knowledge in molecular modeling and statistics together with expert judgment in chemistry, biology and toxicology. [ 35 ]
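As a toy illustration of the statistical side of such work (not any specific published model), a structure–activity relationship can be approximated by fitting a classifier that maps numeric molecular descriptors to a binding label. Here synthetic data and scikit-learn's logistic regression stand in for real descriptors and curated assay results:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic training set: two standardized descriptors per compound
# (imagine lipophilicity and molecular weight) and a binary label for
# whether the compound bound and interfered with a target protein.
X = rng.normal(size=(200, 2))
y = (1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

candidate = np.array([[1.0, -0.5]])          # descriptors of a new structure
print(f"predicted binding probability: {model.predict_proba(candidate)[0, 1]:.2f}")
```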
In 2007 the United States National Academy of Sciences published a report called "Toxicity Testing in the 21st Century: A Vision and a Strategy" which opened with a statement: "Change often involves a pivotal event that builds on previous history and opens the door to a new era. Pivotal events in science include the discovery of penicillin, the elucidation of the DNA double helix, and the development of computers. ... Toxicity testing is approaching such a scientific pivot point. It is poised to take advantage of the revolutions in biology and biotechnology. Advances in toxicogenomics, bioinformatics, systems biology, epigenetics, and computational toxicology could transform toxicity testing from a system based on whole-animal testing to one founded primarily on in vitro methods that evaluate changes in biologic processes using cells, cell lines, or cellular components, preferably of human origin." [ 36 ] As of 2014 that vision was still unrealized. [ 29 ] [ 37 ]
The United States Environmental Protection Agency studied 1,065 chemical and drug substances in their ToxCast program (part of the CompTox Chemicals Dashboard ) using in silico modelling and a human pluripotent stem cell -based assay to predict in vivo developmental toxicants based on changes in cellular metabolism following chemical exposure. Major findings from the analysis of this ToxCast_STM dataset published in 2020 include: (1) 19% of 1065 chemicals yielded a prediction of developmental toxicity , (2) assay performance reached 79%–82% accuracy with high specificity (> 84%) but modest sensitivity (< 67%) when compared with in vivo animal models of human prenatal developmental toxicity, (3) sensitivity improved as more stringent weights of evidence requirements were applied to the animal studies, and (4) statistical analysis of the most potent chemical hits on specific biochemical targets in ToxCast revealed positive and negative associations with the STM response, providing insights into the mechanistic underpinnings of the targeted endpoint and its biological domain. [ 38 ]
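The performance figures quoted above combine three standard classification metrics. As a rough illustration (with made-up confusion-matrix counts, not the actual ToxCast numbers), the metrics can be computed as follows:

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity, and specificity from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # fraction of true positives detected
    specificity = tn / (tn + fp)   # fraction of true negatives cleared
    return accuracy, sensitivity, specificity

# Hypothetical counts, chosen only to mirror the reported pattern of
# high specificity and more modest sensitivity.
acc, sens, spec = classification_metrics(tp=110, fp=100, tn=640, fn=90)
print(f"accuracy={acc:.2f}  sensitivity={sens:.2f}  specificity={spec:.2f}")
```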
In some cases shifts away from animal studies have been mandated by law or regulation; the European Union (EU) prohibited use of animal testing for cosmetics in 2013. [ 39 ]
Most chemicals display a classic dose response curve – at a low dose (below a threshold), no effect is observed. [ 12 ] : 80 Some show a phenomenon known as sufficient challenge – a small exposure produces animals that "grow more rapidly, have better general appearance and coat quality, have fewer tumors, and live longer than the control animals". [ 40 ] A few chemicals have no well-defined safe level of exposure. These are treated with special care. Some chemicals are subject to bioaccumulation as they are stored in rather than being excreted from the body; [ 12 ] : 85–90 these also receive special consideration.
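The classic sigmoidal dose–response behaviour described above is often parameterized with a Hill-type function; the sketch below uses hypothetical Emax, EC50, and slope values purely to illustrate the negligible response below a threshold and the saturation at high doses:

```python
def hill_response(dose, emax=1.0, ec50=10.0, n=2.0):
    """Sigmoidal (Hill-type) dose-response: negligible effect well below
    the EC50, a steep rise near it, and saturation at high doses."""
    return emax * dose**n / (ec50**n + dose**n)

for d in (0.1, 1.0, 10.0, 100.0):
    print(f"dose = {d:6.1f}  fractional effect = {hill_response(d):.3f}")
```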
Several measures are commonly used to describe toxic dosages according to the degree of effect on an organism or a population, and some are specifically defined by various laws or organizational usage. These include the median lethal dose (LD 50 ), the median lethal concentration (LC 50 ), the no-observed-adverse-effect level (NOAEL), and the lowest-observed-adverse-effect level (LOAEL).
Medical toxicology is the discipline that requires physician status (MD or DO degree plus specialty education and experience).
Clinical toxicology is the discipline that can be practiced not only by physicians but also other health professionals with a master's degree in clinical toxicology: physician extenders ( physician assistants , nurse practitioners ), nurses , pharmacists , and allied health professionals .
Forensic toxicology is the discipline that makes use of toxicology and other disciplines such as analytical chemistry , pharmacology and clinical chemistry to aid medical or legal investigation of death, poisoning, and drug use. The primary concern for forensic toxicology is not the legal outcome of the toxicological investigation or the technology utilized, but rather the obtainment and interpretation of results. [ 43 ]
Computational toxicology is a discipline that develops mathematical and computer-based models to better understand and predict adverse health effects caused by chemicals, such as environmental pollutants and pharmaceuticals. [ 44 ] Within the Toxicology in the 21st Century project, [ 45 ] [ 46 ] the best predictive models were identified to be Deep Neural Networks , Random Forest , and Support Vector Machines , which can reach the performance of in vitro experiments. [ 47 ] [ 48 ] [ 49 ] [ 50 ]
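A minimal sketch of this kind of predictive model, assuming synthetic descriptor data in place of a real assay dataset such as the Tox21 benchmarks, might train and cross-validate a random forest as follows:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)

# Synthetic stand-in for an assay dataset: 500 compounds, 16 numeric
# features (computed descriptors or fingerprint bits), binary labels.
X = rng.normal(size=(500, 16))
y = (X[:, :4].sum(axis=1) + rng.normal(size=500) > 0).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print(f"cross-validated accuracy: {cross_val_score(clf, X, y, cv=5).mean():.2f}")
```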
Occupational toxicology is the application of toxicology to chemical hazards in the workplace. [ 51 ]
A toxicologist is a scientist or medical personnel who specializes in the study of chemicals to determine if they are harmful to living organisms. [ 52 ] They may analyze symptoms, mechanisms, treatments and detection of venoms and toxins ; especially the poisoning of people. There are several types of toxicologist including medical, academic and non-profit. [ 53 ] [ 54 ]
To work as a toxicologist one should obtain a degree in toxicology or a related degree like biology , chemistry , pharmacology or biochemistry . [ 55 ] [ 52 ] [ 56 ] Bachelor's degree programs in toxicology cover the chemical makeup of toxins and their effects on biochemistry, physiology and ecology. After introductory life science courses are complete, students typically enroll in labs and apply toxicology principles to research and other studies. Advanced students delve into specific sectors, like the pharmaceutical industry or law enforcement, which apply methods of toxicology in their work. The Society of Toxicology (SOT) recommends that undergraduates in postsecondary schools that do not offer a bachelor's degree in toxicology consider attaining a degree in biology or chemistry. Additionally, the SOT advises aspiring toxicologists to take statistics and mathematics courses, as well as gain laboratory experience through lab courses, student research projects and internships. In the USA, Medical Toxicologists complete residency training such as in Emergency Medicine, Pediatrics or Internal Medicine, followed by fellowship in Medical Toxicology and eventual certification by the American College of Medical Toxicology (ACMT). [ 57 ]
Toxicologists perform many different duties including research in the academic, nonprofit and industrial fields, product safety evaluation, consulting, public service and legal regulation. In order to research and assess the effects of chemicals, toxicologists perform carefully designed studies and experiments. These experiments help identify the specific amount of a chemical that may cause harm and potential risks of being near or using products that contain certain chemicals. Research projects may range from assessing the effects of toxic pollutants on the environment to evaluating how the human immune system responds to chemical compounds within pharmaceutical drugs. While the basic duties of toxicologists are to determine the effects of chemicals on organisms and their surroundings, specific job duties may vary based on industry and employment. For example, forensic toxicologists may look for toxic substances in a crime scene, whereas aquatic toxicologists may analyze the toxicity level of water bodies. [ 58 ] [ 59 ] [ 60 ]
The salary for jobs in toxicology is dependent on several factors, including level of schooling, specialization, and experience. The U.S. Bureau of Labor Statistics (BLS) noted that jobs for biological scientists, which generally include toxicologists, were expected to increase by 21% between 2008 and 2018; the BLS noted that this increase could be due to research and development growth in biotechnology, as well as budget increases for basic and medical research in biological science. [ 61 ] | https://en.wikipedia.org/wiki/Toxicology |
Toxicology of carbon nanomaterials is the study of toxicity in carbon nanomaterials like fullerenes and carbon nanotubes .
A review of works on fullerene toxicity by Lalwani et al. found little evidence that C 60 is toxic. [ 1 ] The toxicity of these carbon nanoparticles varies with dose, duration, type (e.g., C 60 , C 70 , M@C 60 , M@C 82 ), functional groups used to water-solubilize these nanoparticles (e.g., OH, COOH), and method of administration (e.g., intravenous, intraperitoneal). The authors recommended that the pharmacology of each fullerene- or metallofullerene-based complex be assessed as a different compound.
Moussa et al. (1996–97) [ 2 ] studied the in vivo toxicity of C 60 after intraperitoneal administration of large doses. No evidence of toxicity was found and the mice tolerated a dose of 5 g/kg of body weight. Mori et al. (2006) [ 3 ] could not find toxicity in rodents for C 60 and C 70 mixtures after oral administration of a dose of 2 g/kg body weight and did not observe evidence of genotoxic or mutagenic potential in vitro .
Other studies could not establish the toxicity of fullerenes: on the contrary, the work of Gharbi et al. (2005) [ 4 ] suggested that aqueous C 60 suspensions not only failed to produce acute or subacute toxicity in rodents but could also protect their livers in a dose-dependent manner against free-radical damage. In a 2012 primary study of an olive oil / C 60 suspension administered to rats by intraperitoneal administration or oral gavage , the rats showed a prolonged lifespan, almost double the normal lifespan, and significant toxicity was not observed. [ 5 ] An investigator for this study, Professor Moussa, generalized from its findings in a video interview and stated that pure C 60 is not toxic. [ 6 ]
When considering toxicological data, care must be taken to distinguish as necessary between what are normally referred to as fullerenes: (C 60 , C 70 , ...); fullerene derivatives: C 60 or other fullerenes with covalently bonded chemical groups; fullerene complexes (e.g., water-solubilized with surfactants, such as C 60 - PVP ; host–guest complexes, such as with cyclodextrin ), where the fullerene is supramolecularly bound to another molecule; C 60 nanoparticles, which are extended solid-phase aggregates of C 60 crystallites; and nanotubes, which are generally much larger (in terms of molecular weight and size) molecules, and are different in shape from the spheroidal fullerenes C 60 and C 70 , as well as having different chemical and physical properties.
The molecules above are all fullerenes (close-caged all-carbon molecules) but it is unreliable to extrapolate results from C 60 to nanotubes or vice versa, as they range from insoluble materials in either hydrophilic or lipophilic media, to hydrophilic, lipophilic, or even amphiphilic molecules, and with other varying physical and chemical properties. A quantitative structure–activity relationship ( QSAR ) study can analyze how close the molecules under consideration are in physical and chemical properties, which can help.
As of 2013, the United States National Institute for Occupational Safety and Health was not aware of any reports of adverse health effects in workers using or producing carbon nanotubes or carbon nanofibers . However a systematic review of 54 laboratory animal studies indicated that they could cause adverse pulmonary effects including inflammation , granulomas , and pulmonary fibrosis , which were of similar or greater potency when compared with other known fibrogenic materials such as silica , asbestos , and ultrafine carbon black . [ 7 ]
With reference to nanotubes, a 2008 study [ 8 ] on carbon nanotubes introduced into the abdominal cavity of mice led the authors to suggest comparisons to " asbestos -like pathogenicity". This was not an inhalation study, though several have been performed in the past; it is therefore premature to conclude that nanotubes should be considered to have a toxicological profile similar to asbestos. Conversely, and perhaps illustrative of how the various classes of molecules which fall under the general term fullerene cover a wide range of properties, Sayes et al. found that in vivo inhalation of C 60 (OH) 24 and nano-C 60 in rats produced no effect, whereas in comparison quartz particles produced an inflammatory response under the same conditions. [ 9 ] As stated above, nanotubes are quite different in chemical and physical properties to C 60 , i.e., molecular weight, shape, size, and physical properties (such as solubility) are all very different, so from a toxicological standpoint, different results for C 60 and nanotubes are not suggestive of any discrepancy in the findings.
A 2016 study reported on workers in a large-scale MWCNT manufacturing facility in Russia with relatively high occupational exposure levels, finding that exposure to MWCNTs caused significant increase in several inflammatory cytokines and other biomarkers for interstitial lung disease. [ 10 ]
The toxicity of carbon nanotubes has been an important question in nanotechnology. As of 2007, such research had just begun. The data is still fragmentary and subject to criticism. Preliminary results highlight the difficulties in evaluating the toxicity of this heterogeneous material. Parameters such as structure, size distribution , surface area , surface chemistry, surface charge , and agglomeration state as well as purity of the samples, have considerable impact on the reactivity of carbon nanotubes. However, available data clearly show that, under some conditions, nanotubes can cross membrane barriers, which suggests that, if raw materials reach the organs, they can induce harmful effects such as inflammatory and fibrotic reactions. [ 11 ] [ 12 ]
In 2014, experts from the International Agency for Research on Cancer (IARC) assessed the carcinogenicity of CNTs, including SWCNTs and MWCNTs. No human epidemiologic or cancer data was available to the IARC Working Group at the time, so the evaluation focused on the results of in vivo animal studies assessing the carcinogenicity of SWCNTs and MWCNTs in rodents.
The Working Group concluded that there was sufficient evidence for the specific MWCNT type "MWCNT-7", limited evidence for the two other types of MWCNTs with dimensions similar to MWCNT-7, and inadequate evidence for SWCNTs. Therefore, it was agreed to specifically classify MWCNT-7 as possibly carcinogenic to humans ( Group 2B ) while the other forms of CNT, namely SWCNT and other types of MWCNT, excluding MWCNT-7, were considered not classifiable as to their carcinogenicity to humans ( Group 3 ) due to a lack of coherent evidence. [ 13 ]
Results of rodent studies collectively show that regardless of the process by which CNTs were synthesized and the types and amounts of metals they contained, CNTs were capable of producing inflammation , epithelioid granulomas (microscopic nodules), fibrosis , and biochemical/toxicological changes in the lungs. [ 14 ] Comparative toxicity studies in which mice were given equal weights of test materials showed that SWCNTs were more toxic than quartz , which is considered a serious occupational health hazard when chronically inhaled. As a control, ultrafine carbon black was shown to produce minimal lung responses. [ 15 ]
Carbon nanotubes deposit in the alveolar ducts by aligning lengthwise with the airways; the nanotubes will often combine with metals. [ 16 ] The needle-like fiber shape of CNTs is similar to asbestos fibers . This raises the idea that widespread use of carbon nanotubes may lead to pleural mesothelioma, a cancer of the lining of the lungs, or peritoneal mesothelioma , a cancer of the lining of the abdomen (both caused by exposure to asbestos). A recently published pilot study supports this prediction. [ 17 ] Scientists exposed the mesothelial lining of the body cavity of mice to long multiwalled carbon nanotubes and observed asbestos-like, length-dependent, pathogenic behavior that included inflammation and formation of lesions known as granulomas . Authors of the study conclude:
This is of considerable importance, because research and business communities continue to invest heavily in carbon nanotubes for a wide range of products under the assumption that they are no more hazardous than graphite. Our results suggest the need for further research and great caution before introducing such products into the market if long-term harm is to be avoided. [ 17 ]
Although further research is required, the available data suggest that under certain conditions, especially those involving chronic exposure, carbon nanotubes can pose a serious risk to human health. [ 11 ] [ 18 ] [ 15 ] [ 17 ]
Exposure scenarios are important to consider when trying to determine toxicity and the risks associated with these diverse and difficult-to-study materials. Exposure studies have been conducted over the past several years in an effort to determine where and how likely exposures will be. Since CNTs are being incorporated into composite materials for their ability to strengthen materials while not adding significant weight, the manufacture of CNTs and of composites or hybrids including CNTs, the subsequent processing of the articles and equipment made from the composites, and end-of-life processes such as recycling or incineration all represent potential sources of exposure. Exposure of the end user is less likely; however, as CNTs are incorporated into new products, more research may be needed. [ 19 ]
One study performed personal and area sampling at seven different plants mostly involving the manufacture of MWCNTs. This study found that the work processes that prompt nanoparticle release (not necessarily just CNT release) include "spraying, CNT preparation, ultrasonic dispersion, wafer heating, and opening the water bath cover." The exposure concentrations for both personal and area sampling indicated most workers' exposure was well below that set by the ACGIH for carbon black. [ 20 ]
Processing composite materials presents potential for exposure during cutting, drilling, or abrasion. Two different composite types were laboratory tested during processing under differing conditions to determine potential releases. Samples were machined using one dry cutting process and one wet cutting process with measurements taken at the source and in the breathing zone. The composites tested varied by method of manufacture and components. One was graphite and epoxy layered with CNTs aligned within, and the other was a woven alumina with aligned CNTs on the surface. Dry cutting of both proved to be of concern based on the concentrations measured at the breathing zone, while wet cutting, the preferred method, proved much better at controlling potential exposures during this type of processing. [ 21 ]
Another study provided breathing zone and area sampling results from fourteen sites working with CNTs in a variety of manners for potential exposure assessment. These sites included the manufacture of CNTs, hybrid producers/users, and secondary manufacturers in either the electronics industry or composites industry. The highest mean exposures found in breathing zone samples were found among the secondary manufacturers of electronics, then composites and hybrid sites, while the lowest mean exposures were found at the primary manufacturers' sites. Relatively few of the samples returned results higher than the recommended exposure level as published by NIOSH. [ 22 ]
While there are developing strategies for the use of CNTs in a variety of products, potentials for exposure thus far appear to be low in most occupational settings. This may change as new products, manufacturing methods, or secondary processing advance; therefore, risk assessments should be integral to any planning for new applications.
Currently, there is a lack of epidemiological evidence linking exposure to CNT to human health effects. To date, there have been only a handful of published epidemiological studies that have solely examined the health effects related to the exposure of CNT, while several other studies are currently underway and yet to be published. [ 23 ] [ 24 ] [ 25 ] With the limited amount of human data, scientists are more reliant on the results of current animal toxicity studies to predict adverse health effects, as well as applying what is already known about exposures to other fibrous materials such as asbestos or fine and ultra-fine particulates . This limitation of human data has led to the use of the precautionary principle, which urges workplaces to limit exposure levels to CNT as low as possibly achievable in the absence of known health effects data. [ 26 ]
Epidemiology studies of nanomaterials thus far have considered a variety of nanomaterials. Few have been specific to CNTs and each has considered a small sample size. These studies have found some relationships between biological markers and MWCNT exposure. One cross-sectional study to evaluate health effects was conducted to determine associations of biomarkers in relation to measured CNT exposure. While no effect on lung function due to exposure was found, the study did observe some indications of early signs of effects to biomarkers associated with exposure to MWCNTs. Additionally, some results contradicted earlier in vitro studies, making further work necessary to define the effects. [ 22 ] [ 27 ]
NIOSH has undertaken a risk assessment based on available studies to determine appropriate recommendations of exposure levels. Their review found that while human health effects had not been directly observed, animal studies showed potential for health effects that could reasonably be expected in humans upon sufficient exposure. In addition to the animal studies, human cell studies were reviewed and also showed harmful effects. Ultimately, the risk assessment found that the most relevant data upon which to calculate the REL ( recommended exposure limit ) were the animal studies. Corrections for inter-species differences, and updates to reflect advancing technologies in sampling methods and detection capabilities, were considered as a part of the risk assessment. The resultant REL is several orders of magnitude smaller than those of other carbonaceous particulate matters of concern, graphite and carbon black. [ 7 ]
To date, several international government agencies, as well as individual authors, have developed occupational exposure limits (OEL) to reduce the risk of any possible human health effects associated with workplace exposures to CNT. The National Institute for Occupational Safety and Health (NIOSH) conducted a risk assessment using animal and other toxicological data relevant to assessing the potential non-malignant adverse respiratory effects of CNT and proposed an OEL of 1 μg/m 3 elemental carbon as a respirable mass 8-hour time-weighted average (TWA) concentration. [ 7 ] Several individual authors have also performed similar risk assessments using animal toxicity data and have established inhalation exposure limits ranging from 2.5 to 50 μg/m 3 . [ 28 ] One such risk assessment used data from two different types of exposures to work toward an OEL as part of an adaptive management approach, in which recommendations are expected to be reevaluated as more data become available. [ 29 ]
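For illustration, an 8-hour TWA is simply the exposure-weighted average concentration over a shift. The sketch below computes one from hypothetical sampling intervals and compares it with the NIOSH REL mentioned above:

```python
def eight_hour_twa(samples):
    """8-hour time-weighted average from (duration_h, concentration) pairs;
    concentrations here are respirable elemental carbon in ug/m3."""
    return sum(hours * conc for hours, conc in samples) / 8.0

# Hypothetical shift: 3 h near a reactor, 2 h bagging powder, 3 h office work.
twa = eight_hour_twa([(3, 1.6), (2, 0.9), (3, 0.1)])
NIOSH_REL = 1.0   # ug/m3, 8-h TWA, as described above
print(f"TWA = {twa:.2f} ug/m3 -> {'exceeds' if twa > NIOSH_REL else 'within'} REL")
```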
Occupational exposures that could potentially allow the inhalation of CNT are of the greatest concern, especially in situations where the CNT is handled in powder form, which can easily be aerosolized and inhaled. Also of concern are any high-energy processes that are applied to various CNT preparations, such as the mixing or sonication of CNT in liquids, as well as processes that cut or drill into CNT-based composites in downstream products. These types of high-energy processes will aerosolize CNT, which can then be inhaled.
Guidance for minimizing exposure and risk to CNT has been published by several international agencies, including two documents from the British Health and Safety Executive titled "Using nanomaterials at work: Including carbon nanotubes and other bio-persistent high aspect ratio nanomaterials" and "Risk Management of Carbon Nanotubes". [ 30 ] [ 31 ] Safe Work Australia has also published guidance titled "Safe Handling and use of Carbon Nanotubes", which describes two approaches to managing the risks: risk management with detailed hazard analysis and exposure assessment, and risk management using Control Banding . [ 32 ] The National Institute for Occupational Safety and Health has also published "Current Intelligence Bulletin 65: Occupational Exposure to Carbon Nanotubes and Nanofibers", which describes strategies for controlling workplace exposures and implementing a medical surveillance program. [ 7 ] The Occupational Safety and Health Administration has published an "OSHA Fact Sheet: Working Safely with Nanomaterials" for use as guidance, in addition to a webpage hosting a variety of resources.
These guidance documents generally advocate instituting the principles of the Hierarchy of Hazard Control , which is a system used in industry to minimize or eliminate exposure to hazards. The hazard controls in the hierarchy are, in order of decreasing effectiveness: elimination, substitution, engineering controls, administrative controls, and personal protective equipment. | https://en.wikipedia.org/wiki/Toxicology_of_carbon_nanomaterials |
The toxicology of fire ant venom is relatively well studied. The venom plays a central role in the biology of Red imported fire ants , such as in capturing prey [ 1 ] and in defense against competitors, [ 2 ] assailants, [ 3 ] and diseases. [ 4 ] Some 14 million people are stung annually in the United States, [ 5 ] suffering reactions that vary from mild discomfort to pustule formation and swelling, [ 6 ] and in rare cases, systemic reactions followed by anaphylactic shock. [ 7 ] Fire ant venoms are mainly composed (>95%) of a complex mixture of insoluble alkaloids added to a watery solution of toxic proteins. [ 8 ] For the Red imported fire ant Solenopsis invicta Buren there are currently 46 described proteins, [ 9 ] of which four are well-characterised as potent allergens . [ 10 ]
Venom plays an important role in the biology of fire ants, being used for prey capture, [ 1 ] nest defense, [ 2 ] and antimicrobial action. [ 11 ] On average, however, a worker stores very little venom (only about 0.5 μg at any given time). [ 12 ] Newborn workers contain little to no venom within their reservoirs, but workers that are only one day old can produce 1.17 μg/day. However, workers that are 17 days old only produce 0.3 μg/day. Workers deliver 0.66 nl of venom when they sting, which amounts to 3.1% of their supply. Older workers deliver less venom when they sting, but middle-aged workers and nest-defenders deliver much higher quantities. [ 13 ] Like all fire ant species, venom is secreted by the venom gland and is stored in the poison sac. [ 14 ] When in use, it is ejected through the stinger's main duct. Capacity is between 20 and 40 nl, but this depends on the worker's size. [ 15 ] The American entomologist Justin O. Schmidt described it as being "sharp, sudden, mildly alarming", therefore ranking at "1" in the Schmidt sting pain index , a pain scale which ranks the pain intensity of an insect's sting from 0 to 4. [ 16 ]
Over 95% of the venom components are water-insoluble piperidine alkaloids . Piperidines include trans -2-methyl-6-n-undecylpiperidines , trans -2-methyl-6-n-tridecylpiperidine, trans -2-methyl-6-( cis -4-tridecenyl) piperidines, trans -2-methyl-6-n-pentadecylpiperidine, trans -2-methyl-6-( cis -6-pentadecenyl)piperidine and 2,6-dialkylpiperidines (the ants' venom is dominated by the trans - stereoisomers of this specific ingredient). [ 17 ] [ 18 ] [ 19 ] trans -2-Methyl-6-n-undecylpiperidine (solenopsin) has been shown to have cytotoxic , hemolytic , necrotic , insecticidal, antibacterial , antifungal, and anti- HIV properties. [ 20 ] The alkaloid has also shown antiangiogenic activity. [ 21 ] These components are responsible for the formation of hives , and also for the development of sterile pustules on areas where the ant has stung. [ 22 ] Experiments indicate that the median lethal dose (LD 50 ) in tested female rats is 0.36 mg/kg. [ 23 ]
Approximately 46 proteins have been identified in the red imported fire ant's venom, [ 9 ] although scientists have long believed the venom only contained alkaloids. [ 24 ] [ 25 ] This assumption was mostly due to the difficulties in obtaining sufficient venom for analysis because of its low protein content, which is only 0.1% of the venom's total weight. [ 26 ]
These proteins are experimentally suggested to directly account for the anaphylactic reactions seen in humans sensitive to the venom. [ 27 ] Whilst including a number of neurotoxins and potential allergens, not all of these proteins are involved with venom function. [ 9 ] At least four protein allergens have been characterised, named Sol i 1–4. Of these, Sol i 3, is part of the antigen 5 family, and Sol i 1 is a phospholipase A 1 B; Sol i 1 shows a close relation with wasp venom phospholipases . [ 22 ] [ 28 ] [ 29 ] [ 30 ] Sol i 2 and 4 are unique, odorant-binding proteins of poorly understood function. Other proteins found in the venom may benefit the colony; some of these proteins can kill off bacteria, which may explain why workers spray venom around their nests by vibrating their gasters. Other proteins also bind pheromones which may assist a worker to lay chemical trails to communicate with other nestmates. [ 12 ] [ 9 ] [ 31 ]
In the United States, more than 40 million people live in areas infested with fire ant populations and 14 million people are stung by them annually. A quarter of all victims stung by red imported fire ants are expected to develop sensitivity to the venom, and approximately 6,000 will suffer anaphylaxis. [ 32 ] 51% of people who relocated themselves to infested areas report getting stung within three weeks after arrival. [ 33 ] In a survey conducted in South Carolina , 33,000 people (or 94 per 10,000 population) received medical attention due to red imported fire ants, and 660 people (1.9 per 10,000 population) were treated for anaphylaxis. [ 34 ] In Texas, 79% of participants in a survey stated they had been stung by red imported fire ants, while 20% had not. 61% of West Texans stated they had been stung by the ants before, compared to 90% in central Texas, 89% in east Texas, 86% in the gulf coastal regions, 78% in the south and 72% in the north. [ 35 ] In a separate survey, 87% of individuals classed their reactions as mild, 12% as moderate and 1% as severe. [ 36 ] In Australia, 64,000 homes are within red imported fire ant infested areas, and 140,000 consultations and 3,000 anaphylactic reaction cases are predicted every year by 2030 if government efforts to eradicate the ant fail. [ 37 ] A survey conducted in China shows that one-third of participants in infested areas were victims of red imported fire ant stings. [ 38 ]
Studies suggest that the rate of systemic reactions to stings may be associated with seasonal variations in the venom's potency. 51% of allergic reactions occurred in summer, and 19% in spring. However, a survey reported a higher incidence during spring (39.9%) than summer (31.9%). [ 39 ] Younger people, usually those under 20 years, experience the highest rate of sting attacks (50%), but the rate declines with age. Among men and women, the rate varies: some studies report more women being attacked than men, and vice versa. [ 36 ] [ 38 ] Deaths from red imported fire ant stings are rare, but may become more common as the ant spreads. Many cases have been reported in the past. [ 40 ] It is reported that more than 80 deaths have been recorded; of these, 22 cases were recorded in Florida and 19 in Texas. However, when duplicate reports are excluded, four deaths were recorded in Alabama, 10 in Florida, two in Georgia and Louisiana, and 14 in Texas. [ 41 ] [ 42 ] [ 43 ] People can be educated and made aware of the dangers of red imported fire ants. [ 44 ]
Reactions seen in humans vary; some are hypersensitive to venom while others show resilience. Hypersensitivity can be attributed to certain medical problems such as heart conditions or diabetes. Bacterial infections attributed to sting injuries also pose a problem and may require further medical attention. Most humans can withstand many stings, but others may suffer from severe reactions such as anaphylaxis. [ 45 ]
People who are stung by red imported fire ants may experience intense local burning or flare-ups, followed by reddening of the skin at the sting site. This area will swell into a bump, hive or vesicle within 20 minutes. White fluid-filled sterile pustules begin to form within hours or days after being stung. [ 45 ] [ 38 ] [ 46 ] [ 47 ] Pustules on the skin remain for a couple of days, and may become infected which would require medical attention. In most cases, pustules dry up in a matter of weeks and leave brown scars that either remain for several months or become permanent. [ 45 ] The formation of pustules occurs in almost every person stung by the ants. In one study, 96% of participants reported the formation of pustules, whereas 2% reported large local reactions. [ 33 ] Between 17% and 56% of people stung develop venom-specific IgE . Many of them will experience pruritic lumps around areas where the ants stung, known as late-phase responses or cutaneous allergic reactions. [ 42 ] [ 48 ] [ 49 ]
Pustule formation can only be prevented if the ants are removed before they have a chance to sting. Once venom has been injected, pustules will form and no form of treatment will prevent them from occurring. Medications such as antibiotics, diphenhydramine , epinephrine or topical steroids will not affect pustular reactions. [ 45 ] [ 49 ] [ 50 ]
Anaphylaxis occurs in 0.6 to 6% of people who have been stung by the ants, and it can be fatal if left untreated. [ 40 ] [ 42 ] Typical symptoms of anaphylaxis include dizziness, headaches, fever, severe chest pain, nausea, severe sweating, low blood pressure, loss of breath, serious swelling, and slurred speech. [ 45 ] [ 38 ] [ 51 ] One case reports a victim feeling strong vertigo 5 to 10 minutes after being stung, followed by glassy eyes, dry mouth, paleness, unconsciousness and severe cramps on the sting sites. [ 52 ] In addition, neuropathy , seizures (even without any evidence of prior systemic reactions), cerebrovascular accidents , and nephrotic syndrome have been associated with red imported fire ant stings. [ 45 ] [ 42 ] [ 53 ] A series of neurotoxins have been identified in red imported fire ant venom, which may explain why some victims experience hallucinations after they have been stung. [ 12 ]
It is suggested that a conservative approach be used when treating sting injuries; specifically, the kind of treatment used should be based on the symptoms. For minor sting injuries, with symptoms only including pustule formations and pain, over-the-counter products are available to prevent infection and relieve itchiness. Ants should be removed and the sting site washed with antiseptic soap. It is rare for ant sting sites to become infected, so the use of antibiotic prophylaxis is not always required. [ 45 ] [ 54 ]
Victims who show signs of anaphylaxis are treated with antihistamines, epinephrine , and parenteral corticosteroids. [ 42 ] Epinephrine is the first-line treatment for systemic allergic responses, particularly if a patient is experiencing dyspnoea or hypotension, because it is capable of reversing adverse events quickly and is very safe to use. It is recommended that people who have suffered from anaphylaxis carry an epinephrine autoinjector (EpiPen), should dyspnoea or hypotension begin to occur. [ 37 ]
Whole body extract immunotherapy (WBE) to treat victims of anaphylaxis [ 55 ] [ 56 ] has been in use since 1973. [ 57 ] [ 58 ] Anyone who has a suspected allergy to the venom is redirected to an allergist for assessment. [ 45 ] The treatment uses the entire body of the ant and not just the venom, but, unlike fire ant venom immunotherapy (which is occasionally used), WBE contains venom proteins. [ 59 ] [ 56 ] To reduce a patient's sensitivity to the venom, gradually increasing dose extracts are injected into the body. [ 60 ] WBE immunotherapy appears to be very effective in preventing systemic reactions; [ 55 ] [ 56 ] in one study [ 61 ] of participants who completed WBE immunotherapy, only two out of fifteen suffered allergic reactions upon being stung 18 months after immunotherapy. [ 54 ] As mentioned, fire ant venom immunotherapy is occasionally used, and studies show it can reduce the risk of systemic reactions. [ 62 ] [ 63 ] In fact, another study claims that fire ant venom immunotherapy is more effective than WBE immunotherapy. [ 64 ] Fire ant venom immunotherapy is not recommended for children with large local reactions, although an exception may be made for those who live in heavily infested areas. There is also an increased risk of systemic allergic reactions to future stings in children who have cutaneous manifestations after getting stung; as a result, some experts place such children on fire ant venom immunotherapy, while others do not. [ 54 ] [ 65 ]
The recommended maintenance dose is 0.5 mL of a WBE dilution between 1:100 w/v and 1:10 w/v. [ 66 ] For fire ant venom immunotherapy, the most common maintenance dose is 0.5 mL of a 1:200 (wt/vol) dilution. [ 67 ] During the build-up phase, it is recommended that dosing is given weekly or biweekly, although some scientists suggest that rush protocols can be successful. [ 54 ] [ 68 ] It is recommended that patients going through immunotherapy receive treatment for three to five years, or even lifelong, although there is no consensus as to how long an individual should be treated. [ 54 ] [ 69 ]
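Reading an X:Y w/v dilution in the conventional way (X grams of extract per Y millilitres of solution), the amount of extract delivered per dose can be worked out directly; the sketch below is purely arithmetic, not dosing guidance:

```python
def extract_mg_per_dose(grams, millilitres, dose_ml):
    """Milligrams of extract in a dose, reading an X:Y w/v dilution as
    X grams of extract per Y millilitres of solution."""
    mg_per_ml = grams * 1000.0 / millilitres
    return mg_per_ml * dose_ml

print(extract_mg_per_dose(1, 100, 0.5))  # 1:100 w/v -> 5.0 mg per 0.5 mL dose
print(extract_mg_per_dose(1, 10, 0.5))   # 1:10  w/v -> 50.0 mg per 0.5 mL dose
```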
The stings of the red imported fire ant in animals are painful, and may prove life-threatening. [ 70 ] In dogs, stings from the red imported fire ant can cause pustular dermatosis , a condition where pustules appear in crops as a result of the ant sting. [ 71 ] After getting stung, the immediate response consists of erythema and swelling. The pustules remain for approximately 24 hours, whereas in humans they can last for several days. [ 72 ] In livestock, red imported fire ants mostly sting animals in regions with no hair, particularly around the ears, eyes, muzzle, the perineum and ventral portion of the abdomen. Newborn or young livestock can be blinded or killed when attacked by the ants. [ 73 ] [ 74 ] Healthy individuals are less likely to be attacked than weak or sick animals. A red papule and mild swelling occur, followed by a vesicopustule with a red halo developing within 24 to 48 hours. The eyes and eyelids are commonly damaged by the stings; in sheep and goats, ophthalmic ointment containing antibiotics and corticosteroids can be used to treat the eyes, but this treatment is not recommended for horses. In non-domestic animals, cases of red imported fire ant stings have been reported in ferrets, moles, squirrels, white-tailed deer, cottontail rabbits, and newborn blackbucks, as well as lizards and screech owl nestlings. The aftermath of the injuries is like that in domestic animals. [ 73 ]
Red imported fire ants are known to actively kill vertebrate animals, and cause significant livestock losses. [ 75 ] Animals may trigger major stinging episodes when they disturb active nests, with thousands of ants participating in the attack. During such episodes, an animal may suffer from hundreds to thousands of individual stings. It is suspected that many victims of the red imported fire ants may be depressed as a result of the effects of the toxin. Some animals may swallow red imported fire ants as they lick or bite around the sites they are stinging. This can cause additional injuries inside the animal itself, especially in the upper gastrointestinal tract. In suckling white tail deer fawns, sting sites have been found in the oesophagus and abomasum ; toxins from the ingested ants may cause inflammation of the gastrointestinal lining. [ 73 ] | https://en.wikipedia.org/wiki/Toxicology_of_red_imported_fire_ant_venom |
Toxicology testing , also known as safety assessment , or toxicity testing , is the process of determining the degree to which a substance of interest negatively impacts the normal biological functions of an organism, given a certain exposure duration, route of exposure, and substance concentration. [ 1 ]
Toxicology testing is often conducted by researchers who follow established toxicology test protocols for a certain substance, mode of exposure, exposure environment, duration of exposure, a particular organism of interest, or for a particular developmental stage of interest. Toxicology testing is commonly conducted during preclinical development for a substance intended for human exposure. Stages of in silico , in vitro and in vivo research are conducted to determine safe exposure doses in model organisms. If necessary, the next phase of research involves human toxicology testing during a first-in-man study . Toxicology testing may be conducted by the pharmaceutical industry , biotechnology companies, contract research organizations , or environmental scientists.
The study of poisons and toxic substances has a long history dating back to ancient times, when humans recognized the dangers posed by various natural compounds. However, the formalization and development of toxicology as a distinct scientific discipline can be attributed to notable figures like Paracelsus (1493–1541) and Orfila (1787–1853).
Paracelsus (1493–1541): Often regarded as the "father of toxicology", Paracelsus, whose real name was Theophrastus von Hohenheim, challenged prevailing beliefs about poisons during the Renaissance era. He introduced the fundamental concept that "the dose makes the poison," emphasizing that the toxicity of a substance depends on its quantity. This principle remains a cornerstone of toxicology.
Mathieu Orfila (1787–1853): A Spanish-born chemist and toxicologist, Orfila made significant contributions to the field in the 19th century. He is best known for his pioneering work in forensic toxicology, particularly in developing methods for detecting and analyzing poisons in biological samples. Orfila's work played a vital role in establishing toxicology as a recognized scientific discipline and laid the groundwork for modern forensic toxicology practices in criminal investigations and legal cases.
Around one million animals, primate and non-primate, are used every year in Europe in toxicology tests. [ 2 ] In the UK, one-fifth of animal experiments are toxicology tests. [ 3 ]
Toxicity tests examine finished products such as pesticides , medications , cosmetics , food additives such as artificial sweeteners , packing materials, and air fresheners , or their chemical ingredients. The substances are administered via a variety of exposure routes — dermal, inhalation, oral, injection, or through water sources. They are applied to the skin or eyes; injected intravenously , intramuscularly , or subcutaneously ; inhaled either by placing a mask over the animals, or by placing them in an inhalation chamber; or administered orally, placed in the animals' food or delivered through a tube into the stomach. Doses may be given once, repeated regularly for many months, or for the lifespan of the animal. [ 4 ] Toxicity tests can also be conducted on materials that need to be disposed of, such as sediment destined for disposal in a marine environment.
Initial toxicity tests often involve computer modelling ( in silico ) to predict toxicokinetic pathways, or to predict potential exposure points by modelling weather and water currents to determine which animals or regions will be most affected.
Other less intensive and more common in vitro toxicology tests involve, amongst others, Microtox assays to observe bacterial growth and productivity. This approach can be adapted to plant life to measure photosynthesis levels and the growth of exposed plants.
A contract research organization (CRO) is an organization that provides support to the pharmaceutical , biotechnology , chemical, and medical device industries in the form of research services outsourced on a contract basis. A CRO may provide toxicity testing services, along with others such as assay development, preclinical research , clinical research , clinical trials management, and pharmacovigilance . CROs also support foundations, research institutions, and universities, in addition to governmental organizations (such as the NIH , EMEA , etc.). [ 5 ]
In the United States, toxicology tests are subject to Good Laboratory Practice guidelines and other Food and Drug Administration laws.
Animal testing for cosmetic purposes is currently banned all across the European Union . [ 6 ] | https://en.wikipedia.org/wiki/Toxicology_testing |
A toxicophore is a chemical structure or a portion of a structure (e.g., a functional group ) that is related to the toxic properties of a chemical. Toxicophores can act directly (e.g., dioxins ) or can require metabolic activation (e.g., tobacco-specific nitrosamines ).
Most toxic substances exert their toxicity through some interaction (e.g., covalent bonding , oxidation ) with cellular macromolecules like proteins or DNA . This interaction leads to changes in the normal cellular biochemistry and physiology and downstream toxic effects.
Occasionally, the toxicophore requires bioactivation , mediated by enzymes , to produce a more reactive metabolite that is more toxic. For example, tobacco-specific nitrosamines are activated by cytochrome P450 enzymes to form a more reactive substance that can covalently bind to DNA, causing mutations that, if not repaired , can lead to cancer. Generally, different chemical compounds that contain the same toxicophore elicit similar toxic effects at the same site of toxicity. [ 1 ]
Medicinal chemists and structural biologists study toxicophores in order to predict (and hopefully avoid) potentially toxic compounds early in the drug development process. Toxicophores can also be identified in lead compounds and removed or replaced later in the process with less toxic moieties . [ 2 ] Both techniques, in silico (predictive) and a posteriori (experimental), are active areas of chemoinformatics research and development, within the field known as Computational Toxicology . [ 3 ] For example, in the United States, the EPA 's National Center for Computational Toxicology [ 4 ] sponsors several toxicity databases [ 5 ] [ 6 ] [ 7 ] [ 8 ] based on predictive modeling as well as high-throughput screening experimental methods.
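In practice, toxicophore screening is often implemented as substructure matching. The sketch below uses the open-source RDKit toolkit with two illustrative SMARTS alerts (real alert sets used in drug development are far larger and carefully curated); the N-nitroso pattern flags nitrosamine-like structures such as those discussed above:

```python
# Requires the open-source RDKit toolkit (e.g., pip install rdkit).
from rdkit import Chem

# Two illustrative structural alerts expressed as SMARTS patterns.
ALERTS = {
    "N-nitroso group (nitrosamine-like)": Chem.MolFromSmarts("[NX3][NX2]=O"),
    "aromatic nitro group": Chem.MolFromSmarts("[c][N+](=O)[O-]"),
}

def flag_toxicophores(smiles):
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return ["unparseable SMILES"]
    return [name for name, patt in ALERTS.items() if mol.HasSubstructMatch(patt)]

print(flag_toxicophores("CN(C)N=O"))  # N-nitrosodimethylamine -> nitrosamine alert
print(flag_toxicophores("CCO"))       # ethanol -> no alerts
```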
| https://en.wikipedia.org/wiki/Toxicophore |
Toxicovigilance is the process of identifying and evaluating the risks of poisoning that exist within a community, and proposing and evaluating measures taken to reduce, eliminate or manage them. More specifically, the goal of toxicovigilance is to identify specific circumstances or agents giving rise to poisoning, or certain populations suffering a higher incidence of poisoning. This way, emerging toxicological issues can be revealed, such as the reformulation of a chemical product or a change to its packaging or labelling, the spread of a new illegal drug , or a hazardous environmental contamination . Once an issue has been identified, appropriate health and other authorities can be alerted so they can take the necessary preventive, repressive or regulatory measures. [ 1 ]
The practice of toxicovigilance often involves the registration of cases of poisoning by health professionals , or the analysis of enquiries made to poison control centers . Because of this, practising toxicovigilance is often one of the core tasks of a poison control center.
There is an overlap between toxicovigilance and, for example, pharmacovigilance or environmental health . They are all aspects of the broader concept of public health surveillance . | https://en.wikipedia.org/wiki/Toxicovigilance |
The Toxics Release Inventory ( TRI ) is a publicly available database containing information on toxic chemical releases and other waste management activities in the United States .
The database is available from the United States Environmental Protection Agency (EPA) and contains information reported annually by some industry groups as well as federal facilities. Each year, companies across a wide range of industries (including chemical manufacturing , metal mining , coal- or oil-burning electric utilities , and other industries) that manufacture (includes importing), process, or otherwise use more than a certain amount of a listed chemical must report it to the TRI. For most listed chemicals, facilities must report if they manufacture or process 25,000 pounds, or otherwise use 10,000 pounds, of the chemical, but some chemicals have lower reporting thresholds. [ 1 ]
The inventory was first proposed in a 1985 New York Times op-ed piece written by David Sarokin and Warren Muir, researchers for an environmental group, Inform, Inc. [ 2 ] Congress established TRI under Section 313 of the Emergency Planning and Community Right-to-Know Act of 1986 (EPCRA), and later expanded it in the Pollution Prevention Act of 1990 (PPA). [ 3 ] [ 4 ] The law was developed out of concern about Union Carbide's releases of toxic gases in the 1984 Bhopal disaster (India) and a smaller 1985 release at its plant in Institute, West Virginia . [ 5 ]
Facilities are required to report to the TRI if they meet all of the following requirements: they are in a covered industry sector (or are a federal facility); they employ the equivalent of 10 or more full-time employees; and they manufacture, process, or otherwise use a TRI-listed chemical in a quantity above the applicable reporting threshold in a given year.
If certain criteria are met, the facility may be allowed to complete a "Form A" certification statement instead of the more detailed "Form R." Form A may only be used for chemicals that are not considered chemicals of special concern, for which amounts manufactured, processed, or otherwise used at the facility do not exceed 1 million pounds, and which do not exceed 500 pounds of annual reportable amount (i.e., total quantity released/disposed of, treated, recycled, and combusted for energy recovery ) in the calendar year. [ 7 ]
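The Form A criteria above amount to a simple decision rule. The following sketch encodes them for illustration only; it is a simplification, not regulatory guidance:

```python
def tri_form(special_concern, lbs_manufactured_processed_used, lbs_reportable):
    """Choose between the short Form A certification and the full Form R,
    per the criteria described above (a simplification, not guidance)."""
    if (not special_concern
            and lbs_manufactured_processed_used <= 1_000_000
            and lbs_reportable <= 500):
        return "Form A"
    return "Form R"

print(tri_form(False, 40_000, 300))    # -> Form A
print(tri_form(False, 40_000, 2_500))  # -> Form R (reportable amount over 500 lb)
print(tri_form(True, 1_000, 0))        # -> Form R (chemical of special concern)
```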
Facilities must report quantities of listed chemicals released to the air through stacks or fugitive emissions ; quantities directly discharged to water on-site or to a publicly owned treatment works ; released or disposed of to land, such as in a landfill or injection well ; and quantities of waste transferred off-site for disposal or release. The PPA added requirements for facilities to report information on quantities of production-related waste managed on- and off-site through recycling, combustion for energy recovery, treatment, and disposal of other releases, and to report information on quantities of waste managed due to one-time or non-production-related events. Facilities also report information on any source reduction activities undertaken to prevent pollution.
Every year, EPA publishes the TRI National Analysis, which interprets the reported TRI data and examines trends in releases, waste management practices, and pollution prevention (P2) activities. The National Analysis includes an analysis of trends in releases and waste management, as well as analyses of important industry sectors, chemicals of special concern, and a "Where You Live" tool that maps the data. [ 8 ]
The TRI data can be accessed in multiple ways through EPA's website. | https://en.wikipedia.org/wiki/Toxics_Release_Inventory |
A toxidrome (a portmanteau of toxic and syndrome , coined in 1970 by Mofenson and Greensher [ 2 ] ) is a syndrome caused by a dangerous level of toxins in the body. It is often the consequence of a drug overdose . Common symptoms include dizziness , disorientation , nausea , vomiting and oscillopsia . It may indicate a medical emergency requiring treatment at a poison control center . Aside from poisoning , a systemic infection may also lead to one. Classic toxidromes may be variable [ 1 ] or obscured by co-ingestion of multiple drugs. [ 3 ]
A common tool used in the United Kingdom for assessing the presence of a toxidrome is the CRESS tool. [ 4 ]
The symptoms of an anticholinergic toxidrome include blurred vision, coma , decreased bowel sounds, delirium , dry skin , fever , flushing , hallucinations , ileus , memory loss , mydriasis (dilated pupils ), myoclonus , psychosis , seizures and urinary retention . Complications include hypertension , hyperthermia and tachycardia . Substances that may cause this toxidrome include antihistamines , antipsychotics , antidepressants , antiparkinsonian drugs, atropine , benztropine , datura , diphenhydramine and scopolamine . [ 3 ]
The symptoms of a cholinergic toxidrome include bronchorrhea , confusion , defecation , diaphoresis , diarrhea , emesis , lacrimation , miosis , muscle fasciculations , salivation , seizures , urination and weakness. Complications include bradycardia , hypothermia and tachypnea . Substances that may cause this toxidrome include carbamates , mushrooms and organophosphates .
The symptoms of a hallucinogenic toxidrome include disorientation , hallucinations , hyperactive bowel sounds, panic and seizures . Complications include hypertension , tachycardia and tachypnea . Substances that may cause this toxidrome include substituted amphetamines , cocaine and phencyclidine .
The symptoms of an opiate toxidrome include the classic triad of coma , pinpoint pupils and respiratory depression [ 3 ] as well as altered mental states , shock , pulmonary edema and unresponsiveness. Complications include bradycardia , hypotension and hypothermia . Substances that may cause this toxidrome are opioids .
The symptoms of sedative/hypnotic toxidrome include ataxia , blurred vision, coma , confusion , delirium , deterioration of central nervous system functions, diplopia , dysesthesias , hallucinations , nystagmus , paresthesias , sedation , slurred speech and stupor . Apnea is a potential complication. Substances that may cause it include anticonvulsants , barbiturates , benzodiazepines , gamma-Hydroxybutyric acid , Methaqualone and ethanol . While most sedative-hypnotics are anticonvulsant , some such as GHB and methaqualone instead lower the seizure threshold, so can cause paradoxical seizures in overdose.
The symptoms of a sympathomimetic toxidrome include anxiety , delusions , diaphoresis , hyperreflexia , mydriasis , paranoia , piloerection and seizures . Complications include hypertension and tachycardia . Substances that may cause this toxidrome include cocaine , amphetamine and compounds based upon amphetamine's structure such as ephedrine ( Ma Huang ), methamphetamine , phenylpropanolamine and pseudoephedrine . The bronchodilator salbutamol may also cause this toxidrome. It may appear very similar to the anticholinergic toxidrome, but is distinguished by hyperactive bowel sounds and sweating. [ 3 ]
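Because toxidromes are defined by clusters of findings, they are sometimes summarized as a lookup table. The following sketch contrasts a few classic discriminating signs, such as the sweating and bowel sounds that separate the sympathomimetic from the anticholinergic toxidrome. The feature names and simplified values are illustrative assumptions distilled from the descriptions above; this is a teaching illustration, not clinical guidance.

```python
# Simplified, illustrative feature table for a few classic toxidromes.
# Real presentations vary, overlap, and may be obscured by co-ingestion.
TOXIDROMES = {
    "anticholinergic": {"pupils": "dilated",  "skin": "dry",    "bowel_sounds": "decreased"},
    "sympathomimetic": {"pupils": "dilated",  "skin": "sweaty", "bowel_sounds": "hyperactive"},
    "cholinergic":     {"pupils": "pinpoint", "skin": "sweaty", "bowel_sounds": "hyperactive"},
    "opioid":          {"pupils": "pinpoint", "skin": "normal", "bowel_sounds": "decreased"},
}

def candidate_toxidromes(findings: dict) -> list:
    """Return the toxidromes consistent with every observed finding."""
    return [name for name, profile in TOXIDROMES.items()
            if all(profile.get(k) == v for k, v in findings.items())]

# Dilated pupils alone are ambiguous; adding a skin finding narrows the match.
print(candidate_toxidromes({"pupils": "dilated"}))                   # two candidates
print(candidate_toxidromes({"pupils": "dilated", "skin": "sweaty"})) # sympathomimetic
```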
Nelson, Lewis H.; Flomenbaum, Neal; Goldfrank, Lewis R.; Hoffman, Robert Louis; Howland, Mary Deems; Neal A. Lewin (2006). Goldfrank's toxicologic emergencies . New York: McGraw-Hill, Medical Pub. Division. ISBN 0-07-143763-0 . | https://en.wikipedia.org/wiki/Toxidrome |
A toxin is a naturally occurring poison [ 1 ] produced by metabolic activities of living cells or organisms . [ 2 ] They occur especially as proteins , often conjugated . [ 3 ] The term was first used by organic chemist Ludwig Brieger (1849–1919), [ 4 ] derived from toxic .
Toxins can be small molecules , peptides , or proteins that are capable of causing disease on contact with or absorption by body tissues interacting with biological macromolecules such as enzymes or cellular receptors . They vary greatly in their toxicity , ranging from usually minor (such as a bee sting ) to potentially fatal even at extremely low doses (such as botulinum toxin ). [ 5 ] [ 6 ]
Toxins are often distinguished from other chemical agents strictly based on their biological origin. [ 7 ]
Less strict understandings embrace naturally occurring inorganic toxins, such as arsenic . [ 8 ] [ 9 ] [ 10 ] Other understandings embrace synthetic analogs of naturally occurring organic poisons as toxins, [ 11 ] and may [ 12 ] or may not [ 13 ] embrace naturally occurring inorganic poisons. It is important to confirm usage if a common understanding is critical.
Toxins are a subset of toxicants . The term toxicant is preferred when the poison is man-made and therefore artificial. [ 14 ] A copy of a naturally occurring toxin produced synthetically or by genetic engineering is still generally considered a toxin, as it is identical to its natural counterpart. [ 15 ] The debate is one of linguistic semantics .
The word toxin does not specify method of delivery (as opposed to venom , a toxin delivered via a bite, sting, etc.). Poison is a related but broader term that encompasses both toxins and toxicants; poisons may enter the body through any means, typically inhalation , ingestion , or skin absorption . Toxin, toxicant, and poison are often used interchangeably despite these subtle differences in definition. The term toxungen has also been proposed to refer to toxins that are delivered onto the body surface of another organism without an accompanying wound . [ 16 ]
A rather informal terminology of individual toxins relates them to the anatomical location where their effects are most notable: for example, a neurotoxin affects the nervous system, a hepatotoxin the liver, and a nephrotoxin the kidneys.
On a broader scale, toxins may be classified as either exotoxins , excreted by an organism, or endotoxins , which are released mainly when bacteria are lysed .
The term "biotoxin" is sometimes used to explicitly confirm the biological origin as opposed to environmental or anthropogenic origins. [ 17 ] [ 18 ] Biotoxins can be classified by their mechanism of delivery as poisons (passively transferred via ingestion, inhalation, or absorption across the skin), toxungens (actively transferred to the target's surface by spitting, spraying, or smearing), or venoms (delivered through a wound generated by a bite, sting, or other such action). [ 16 ] They can also be classified by their source, such as fungal biotoxins , microbial toxins , plant biotoxins , or animal biotoxins. [ 19 ] [ 20 ]
Toxins produced by microorganisms are important virulence determinants responsible for microbial pathogenicity and/or evasion of the host immune response . [ 21 ]
Biotoxins vary greatly in purpose and mechanism, and can be highly complex (the venom of the cone snail can contain over 100 unique peptides , which target specific nerve channels or receptors). [ 22 ]
Biotoxins in nature have two primary functions: predation, as in the venoms of spiders, snakes, and cone snails, and defense against predators, as in the toxins of bees and poison dart frogs.
Some of the more well known types of biotoxins include:
Many living organisms employ toxins offensively or defensively. A relatively small number of toxins are known to have the potential to cause widespread sickness or casualties. They are often inexpensive and easily available, and in some cases it is possible to refine them outside the laboratory. [ 24 ] As biotoxins act quickly, and are highly toxic even at low doses, they can be more efficient than chemical agents. [ 24 ] Due to these factors, it is vital to raise awareness of the clinical symptoms of biotoxin poisoning, and to develop effective countermeasures including rapid investigation, response, and treatment. [ 19 ] [ 25 ] [ 24 ]
The term "environmental toxin" can sometimes explicitly include synthetic contaminants [ 26 ] such as industrial pollutants and other artificially made toxic substances. As this contradicts most formal definitions of the term "toxin", it is important to confirm what the researcher means when encountering the term outside of microbiological contexts.
Environmental toxins from food chains that may be dangerous to human health include:
In general, when scientists determine the amount of a substance that may be hazardous to humans, animals, and/or the environment, they determine the amount likely to trigger effects and, if possible, establish a safe level. In Europe, the European Food Safety Authority has produced risk assessments for more than 4,000 substances in over 1,600 scientific opinions, and it provides open-access summaries of human health, animal health, and ecological hazard assessments in its OpenFoodTox [ 37 ] database. [ 38 ] [ 39 ] The OpenFoodTox database can be used to screen potential new foods for toxicity. [ 40 ]
The Toxicology and Environmental Health Information Program (TEHIP) [ 41 ] at the United States National Library of Medicine (NLM) maintains a comprehensive toxicology and environmental health web site that includes access to toxins-related resources produced by TEHIP and by other government agencies and organizations. [ 42 ] This web site includes links to databases, bibliographies, tutorials, and other scientific and consumer-oriented resources. TEHIP also is responsible for the Toxicology Data Network (TOXNET), [ 43 ] an integrated system of toxicology and environmental health databases that are available free of charge on the web.
TOXMAP is a Geographic Information System (GIS) that is part of TOXNET. [ 44 ] TOXMAP uses maps of the United States to help users visually explore data from the United States Environmental Protection Agency 's (EPA) Toxics Release Inventory and Superfund Basic Research Programs . | https://en.wikipedia.org/wiki/Toxin |
TADB is a database of Type 2 toxin-antitoxin loci in bacterial and archaeal genomes . [ 1 ]
| https://en.wikipedia.org/wiki/Toxin-antitoxin_database
The Toxin and Toxin-Target Database (T3DB) , [ 1 ] [ 2 ] also known as the Toxic Exposome Database , is a freely accessible online database of common substances that are toxic to humans, along with their protein , DNA or organ targets . The database currently houses nearly 3,700 toxic compounds or poisons described by nearly 42,000 synonyms. This list includes various groups of toxins , including common pollutants , pesticides , drugs , food toxins, household and industrial/workplace toxins, cigarette toxins, and uremic toxins . These toxic substances are linked to 2,086 corresponding protein/DNA target records. In total there are 42,433 toxic substance-toxin target associations. Each toxic compound record (ToxCard) in T3DB contains nearly 100 data fields and holds information such as chemical properties and descriptors, mechanisms of action , toxicity or lethal dose values, molecular and cellular interactions, medical (symptom and treatment) information, NMR and MS spectra , and up- and down-regulated genes . This information has been extracted from over 18,000 sources , which include other databases , government documents, books, and scientific literature.
The primary focus of the T3DB is on providing mechanisms of toxicity and identifying target proteins for common toxic substances. While a number of other toxic compound databases do exist, their emphasis is on covering large numbers of chemical compounds that are almost never seen outside a chemical laboratory. T3DB attempts to capture data on only those toxic substances that are abundant or in widespread use and have been detected or measured in humans. T3DB is fully searchable and supports extensive text, sequence, chemical structure , relational query and spectral searches. It is both modelled after and closely linked to the Human Metabolome Database ( HMDB ) and DrugBank . Potential applications of T3DB include metabolomics and environmental exposure studies, toxic compound metabolism prediction, toxin/drug interaction prediction, and general toxic substance awareness.
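To make the structure of a ToxCard record concrete, here is a minimal sketch of how a handful of its roughly 100 data fields might be modelled in code. The class name, field names, and sample values are illustrative assumptions; this is not T3DB's actual schema or API.

```python
from dataclasses import dataclass, field

@dataclass
class ToxCard:
    """Illustrative subset of a T3DB-style toxic-compound record."""
    name: str
    synonyms: list = field(default_factory=list)   # T3DB stores ~42,000 synonyms overall
    mechanism_of_action: str = ""
    lethal_dose: str = ""                          # e.g., an LD50 value with units
    targets: list = field(default_factory=list)    # linked protein/DNA target records

# Hypothetical record, loosely modelled on the kinds of data described above.
example = ToxCard(
    name="Arsenic",
    synonyms=["arsenic, elemental", "grey arsenic"],
    mechanism_of_action="binds sulfhydryl groups, disrupting enzyme function",
    lethal_dose="LD50 (illustrative placeholder)",
    targets=["pyruvate dehydrogenase complex"],
)
print(f"{example.name}: {len(example.targets)} linked target(s)")
```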
All data in T3DB is non-proprietary or is derived from a non-proprietary source. It is freely accessible and available to anyone. In addition, nearly every data item is fully traceable and explicitly referenced to the original source. T3DB data is available through a public web interface and downloads. | https://en.wikipedia.org/wiki/Toxin_and_Toxin-Target_Database |
Toxinology is a subfield of toxicology dedicated to toxic substances produced by or occurring in living organisms. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Toxinology |
A toxoid is an inactivated toxin (usually an exotoxin ) whose toxicity has been suppressed either by chemical ( formalin ) or heat treatment, while other properties, typically immunogenicity , are maintained. [ 1 ] Toxins are secreted by bacteria, whereas toxoids are altered forms of toxins that are not secreted by bacteria. Thus, when used during vaccination , an immune response is mounted and immunological memory is formed against the molecular markers of the toxoid without resulting in toxin-induced illness. Such a preparation is also known as an anatoxin . [ 2 ] There are toxoids for prevention of diphtheria , tetanus and botulism . [ 3 ]
Toxoids are used as vaccines because they induce an immune response to the original toxin or increase the response to another antigen, since the toxoid markers and toxin markers are preserved. For example, the tetanus toxoid is derived from the tetanospasmin produced by Clostridium tetani . [ 4 ] Tetanospasmin causes tetanus, which is vaccinated against by the DTaP vaccine. While patients may sometimes complain of side effects after a vaccine, these are associated with the process of mounting an immune response and clearing the toxoid, not the direct effects of the toxoid. The toxoid does not have the virulence that the toxin had before inactivation.
Toxoids are also useful in the production of human antitoxins . Multiple doses of tetanus toxoid are used by many plasma centers in the United States for the development of highly immune persons for the production of human anti-tetanus immune globulin ( tetanus immune globulin (TIG), HyperTet (c) [ 5 ] ), which has replaced horse serum -type tetanus antitoxin in most of the developed world.
Toxoids are also used in the production of conjugate vaccines . The highly antigenic toxoids help draw attention to weaker antigens such as polysaccharides found in the bacterial capsule . [ 6 ]
| https://en.wikipedia.org/wiki/Toxoid
A toy fort is a miniature fortress or castle that is used as a setting to stage battles using toy soldiers . Toy forts come in many shapes and sizes; some are copies of existing historical structures, while others are imagined with specific elements to enable realistic play, such as moats , drawbridges , and battlements . Toy fort designs range from the châteaux of Europe to the stockade forts of the American wild west .
Toy forts and castles first appeared at the beginning of the nineteenth century in Germany , a country that dominated the world of toy manufacturing up until World War I . The earliest examples came as sets of generic wooden blocks which could be configured in many different ways. As time went on, some of these sets were designed to portray specific structures associated with real battles .
Around 1850 dollhouse manufacturers started to apply their production methods and capabilities towards the production of toy forts and castles. Sets would consist of wooden components, some blocks and some flat, painted to depict details such as stone, brick, windows, arches and vegetation. The parts would be shipped in a box which was designed to be inverted and then used as the base for the toy fort. This design became the standard design for toy forts and castles for the next 100 years.
The Germans dominated the toy fort market until about 1900 when other manufacturers from France, Denmark, Britain, and the USA started to appear on the scene. As technology progressed, new materials were used in the manufacturing of toy forts including tin , zinc alloy , composition, cardboard , hardboard , MDF , and finally plastics.
The three best-known manufacturers of toy forts were Moritz Gottschalk (Germany), O. and M. Hausser (Germany), and Lines Bros . ( Great Britain ). | https://en.wikipedia.org/wiki/Toy_forts_and_castles |
In scientific modeling , a toy model is a deliberately simplistic model with many details removed so that it can be used to explain a mechanism concisely. It can also be useful as a simplified point of reference when describing the fuller model.
The phrase "tinker-toy model" is also used, [ citation needed ] in reference to the Tinkertoys product used for children's constructivist learning .
Examples of toy models in physics include: | https://en.wikipedia.org/wiki/Toy_model |
In scientific disciplines, a toy problem [ 1 ] [ 2 ] or a puzzlelike problem [ 3 ] is a problem that is not of immediate scientific interest, yet is used as an expository device to illustrate a trait that may be shared by other, more complicated, instances of the problem, or as a way to explain a particular, more general, problem-solving technique. A toy problem is useful to test and demonstrate methodologies. Researchers can use toy problems to compare the performance of different algorithms . They are also useful in game design .
For instance, while engineering a large system, the large problem is often broken down into many smaller toy problems which have been well understood in detail. Often these problems distill a few important aspects of complicated problems so that they can be studied in isolation. Toy problems are thus often very useful in providing intuition about specific phenomena in more complicated problems.
As an example, in the field of artificial intelligence , classical puzzles, games and problems are often used as toy problems. These include sliding-block puzzles , N-Queens problem , missionaries and cannibals problem , tic-tac-toe , chess , [ 1 ] Tower of Hanoi and others. [ 2 ] [ 3 ]
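As a concrete illustration, here is a minimal sketch of one of the classic toy problems named above, the Tower of Hanoi, solved with the standard recursive strategy: move n−1 disks out of the way, move the largest disk, then re-stack the n−1 disks on top.

```python
def hanoi(n, source="A", target="C", spare="B", moves=None):
    """Solve the Tower of Hanoi: move n disks from source to target."""
    if moves is None:
        moves = []
    if n > 0:
        hanoi(n - 1, source, spare, target, moves)  # clear the way
        moves.append((source, target))              # move the largest disk
        hanoi(n - 1, spare, target, source, moves)  # re-stack on top of it
    return moves

moves = hanoi(3)
print(len(moves))  # 7, i.e. 2**3 - 1 moves
print(moves)       # [('A','C'), ('A','B'), ('C','B'), ('A','C'), ('B','A'), ('B','C'), ('A','C')]
```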
| https://en.wikipedia.org/wiki/Toy_problem
A toy soldier is a miniature figurine that represents a soldier . The term applies to depictions of uniformed military personnel from all eras, and includes knights , cowboys , American Indians , pirates , samurai , and other subjects that involve combat -related themes. Toy soldiers vary from simple playthings to highly realistic and detailed models . The latter are of more recent development and are sometimes called model figures to distinguish them from traditional toy soldiers. Larger scale toys such as dolls and action figures may come in military uniforms, but they are not generally considered toy soldiers.
Toy soldiers are made from all types of material, but the most common mass-produced varieties are metal and plastic . There are many different kinds of toy soldiers, including tin soldiers or flats , hollow-cast metal figures, composition figures, and plastic army men . Metal toy soldiers were traditionally sold in sets; plastic figures were sold in toy shops individually in Britain and Europe and in large boxed sets in the U.S. Modern, collectable figures are often sold individually.
Scale for toy soldiers is expressed as the soldier's approximate height from head to foot in millimeters. Because many figures do not stand up straight, height is usually an approximation. Standard toy soldier scale, originally adopted by W. Britain , is 54 mm (2.25 inches) or 1:32 scale. Among different manufacturers, standard scale may range from 50 mm or 1:35 scale , to 60 mm or 1:28 scale. For gamers and miniatures enthusiasts, 25 mm and even smaller scales are available. On the larger end of the scale are American dimestore figures , and many of the toy soldiers produced in Germany, which are approximately 75 mm (3 inches) or 1:24 scale .
Tin soldiers were produced in Germany as early as the 1730s, by molding the metal between two pieces of slate. [ citation needed ] Toy soldiers became widespread during the 18th century, inspired by the military exploits of Frederick the Great . Miniature soldiers were also used in the 17th, 18th, and 19th centuries by military strategists to plan battle tactics by using the figures to show the locations of real soldiers. In 1893, the British toy company William Britain revolutionized the production of toy soldiers by devising the method of hollow casting , making soldiers that were cheaper and lighter than their German counterparts. [ 1 ]
In addition to Britains, there have been many other manufacturers of toy soldiers over the years. For example, John Hill & Company produced hollow cast lead figures in the same style and scale. Companies such as Elastolin and Lineol were well known for their composite figures made of glue and sawdust that included both military and civilian subjects. After 1950, rising production costs and the development of plastic meant that many shopkeepers preferred polythene figures, which were lighter, cheaper, and far less prone to breaking in transit. This led to greater demand for plastic toy soldiers. [ 2 ] The first American plastic soldiers were made by Beton as early as 1937. The first plastic toy soldiers produced in Great Britain were made in 1946 by Airfix before they became known for their famous model kit range.
One large historical producer in plastic was Louis Marx and Company , which produced both realistic soldiers of great detail and also historical collections of plastic men and women, including the "Presidents of the United States" collection, "Warriors of the World", "Generals of World War II", "Jesus and the Apostles", and figures from the Coronation of Queen Elizabeth II. Marx also produced boxed playsets that featured many famous battles with armies of two sides, character figures, and terrain features. Britains produced plastic figures under the brand names of Herald and Deetail . Also in England, the scale model company, Airfix produced a variety of high quality plastic sets, which were frequently painted by hobbyists. Many Airfix figures were imitated by other companies and reproduced as inexpensive, bagged plastic army men .
Timpo Toys, Britains' main competitor in terms of sales and quality in the 1960s and 70s, developed the 'over-moulding' system, in which different coloured plastics were injected into the mould at various stages, creating a fully coloured figure without the need for paint.
During the 1990s, the production of metal toy-grade painted figures and connoisseur-grade painted toy soldiers increased to serve the demands of the collectors' market. The style of many of these figures shifted from the traditional gloss-coat enamel paint to the matte-finished acrylic paint , which allows for greater detail and historical accuracy. The change was largely inspired by the introduction of very high quality painted figures from St. Petersburg , Russia. [ citation needed ]
There is a substantial hobby devoted to collecting both old and new toy soldiers, with an abundance of small manufacturers, dealers, and toy soldier shows. There are even specialty magazines devoted to the hobby, such as "Toy Soldier Collector", "Plastic Warrior" and "Toy Soldier and Model Figure". Collectors often specialize in a particular type of soldier or historical period, though some people enjoy collecting many different kinds of figures. The most popular historical periods for collecting are Napoleonic , Victorian , American Civil War , World War I , and World War II . Many collectors modify and paint plastic figures, and some even cast and paint their own metal figures.
Actor Douglas Fairbanks Jr. had a collection of 3,000 toy soldiers, which he sold in 1977. Fantasy novelist George R. R. Martin has a substantial collection of toy knights and castles. [ 3 ] The most extensive collection of toy soldiers was probably that of Malcolm Forbes , who began collecting toy soldiers in the late 1960s and amassed a collection of over 90,000 figures by the time of his death in 1990. Anne Seddon Kinsolving Brown of Providence, Rhode Island, US, began collecting miniature toy soldiers on her honeymoon to Europe in 1930, eventually amassing a collection of over 6,000 figures; these are on display at the Anne S. K. Brown Military Collection at Brown University Library in Providence.
Some of the more noteworthy annual toy soldier and historical figure shows include the Plastic Warrior Show, the oldest established show in the UK, which began in 1985 and is still held annually in Richmond, South London. Another well-known show is the London Toy Soldier Show held in central London (now owned and operated by the magazine Toy Soldier Collector), the Miniature Figure Collectors of America (MFCA) show in Valley Forge, the Chicago Toy Soldier Show (OTSN) in Illinois, the East Coast Toy Soldier Show in New Jersey, the West Coaster Toy Soldier Show in California, the Sammlerbörse (Collector's Market) in Friedberg, Germany and the biennial Zinnfigurenbörse (Tin Figure Market) in Kulmbach, Germany.
In recent years, collectors of vintage toy soldiers made of polythene PE and polypropylene PP thermoplastics as well as PC / ABS plastic blends have reported brittling and disintegration of collectible miniatures or components thereof. [ 4 ]
Different types and styles of toy soldiers have been produced over the years, depending on the cost and availability of materials, as well as manufacturing technologies. Here is a list of some of the most commonly collected varieties of toy soldiers. [ 5 ]
Prominent vintage toy soldier makers include Airfix , Barclay , Britains , Herald, Elastolin , Johillco , Lineol , Marx , Manoil , Reamsa and Timpo .
The playing of wargames with toy figures was pioneered by H. G. Wells in his 1913 book, Little Wars . [ 7 ] Wells, a pacifist, was the first to publish detailed rules for playing war games with toy soldiers. He suggested that this could provide a cathartic experience, possibly preventing future real wars. Although this was not to be, Little Wars was a predecessor to the modern hobby of miniatures wargaming . According to Wells, the idea of the game developed from a visit by his friend Jerome K. Jerome . After dinner, Jerome began shooting down toy soldiers with a toy cannon and Wells joined in to compete. [ 7 ]
A similar book titled Shambattle: How to Play with Toy Soldiers [ 8 ] was published by Harry Dowdall and Joseph Gleason in 1929.
Although people continue to play wargames with miniature figures, most contemporary wargamers use a smaller scale than that favored by collectors, typically under 25 mm.
| https://en.wikipedia.org/wiki/Toy_soldier
In mathematics , a toy theorem is a simplified instance ( special case ) of a more general theorem , which can be useful in providing a handy representation of the general theorem, or a framework for proving the general theorem. One way of obtaining a toy theorem is by introducing some simplifying assumptions in a theorem.
In many cases, a toy theorem is used to illustrate the claim of a theorem, while in other cases, studying the proofs of a toy theorem (derived from a non-trivial theorem) can provide insight that would be hard to obtain otherwise.
Toy theorems can also have educational value. For example, after presenting a theorem (with, say, a highly non-trivial proof), one can sometimes give some assurance that the theorem really holds by proving a toy version of the theorem.
A toy theorem of the Brouwer fixed-point theorem is obtained by restricting the dimension to one. In this case, the Brouwer fixed-point theorem follows almost immediately from the intermediate value theorem .
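The one-dimensional case can be written out in a few lines; here is a minimal sketch of the standard argument via the intermediate value theorem.

```latex
\textbf{Claim.} Every continuous $f\colon [0,1] \to [0,1]$ has a fixed point.

\textbf{Proof sketch.} Define $g(x) = f(x) - x$, which is continuous on $[0,1]$.
Since $f(0) \in [0,1]$, we have $g(0) = f(0) - 0 \ge 0$, and since $f(1) \in [0,1]$,
we have $g(1) = f(1) - 1 \le 0$. By the intermediate value theorem there exists
$c \in [0,1]$ with $g(c) = 0$, that is, $f(c) = c$. \qed
```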
Another example of a toy theorem is Rolle's theorem , which is obtained from the mean value theorem by equating the function values at the endpoints.
This article incorporates material from toy theorem on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Toy_theorem |
The Toyota Production System ( TPS ) is an integrated socio-technical system , developed by Toyota , that comprises its management philosophy and practices. The TPS is a management system [ 1 ] that organizes manufacturing and logistics for the automobile manufacturer, including interaction with suppliers and customers. The system is a major precursor of the more generic " lean manufacturing ". Taiichi Ohno and Eiji Toyoda , Japanese industrial engineers, developed the system between 1948 and 1975. [ 2 ]
Originally called "Just-in-time production", it builds on the approach created by the founder of Toyota, Sakichi Toyoda , his son Kiichiro Toyoda , and the engineer Taiichi Ohno . The principles underlying the TPS are embodied in The Toyota Way . [ 3 ]
The main objectives of the TPS are to design out overburden ( muri ) and inconsistency ( mura ), and to eliminate waste ( muda ). The most significant effects on process value delivery are achieved by designing a process capable of delivering the required results smoothly, that is, by designing out "mura" (inconsistency). It is also crucial to ensure that the process is as flexible as necessary without stress, or "muri" (overburden), since this generates "muda" (waste). Finally, the tactical improvements of waste reduction and the elimination of muda are very valuable. Eight kinds of muda are addressed in the TPS: overproduction; waiting (time on hand); unnecessary transport or conveyance; overprocessing or incorrect processing; excess inventory; unnecessary movement; defects; and unused employee creativity. [ 4 ]
Toyota Motor Corporation published an official description of TPS for the first time in 1992; this booklet was revised in 1998. [ 5 ] In the foreword it was said: "The TPS is a framework for conserving resources by eliminating waste. People who participate in the system learn to identify expenditures of material, effort and time that do not generate value for customers and furthermore we have avoided a 'how-to' approach. The booklet is not a manual. Rather it is an overview of the concepts, that underlie our production system. It is a reminder that lasting gains in productivity and quality are possible whenever and wherever management and employees are united in a commitment to positive change". TPS is grounded on two main conceptual pillars: just-in-time ("making only what is needed, only when it is needed, and only in the amount needed") and jidoka ("automation with a human touch", i.e., the ability to stop production when an abnormality is detected).
Toyota has developed various tools to transfer these concepts into practice and apply them to specific requirements and conditions in the company and business.
Toyota has long been recognized as a leader in the automotive manufacturing and production industry. [ 8 ]
Toyota received its inspiration for the system not from the American automotive industry (at that time the world's largest by far), but from visiting a supermarket. The idea of just-in-time production was originated by Kiichiro Toyoda , founder of Toyota. [ 9 ] The question was how to implement the idea. In reading descriptions of American supermarkets, Ohno saw the supermarket as the model for what he was trying to accomplish in the factory. A customer in a supermarket takes the desired amount of goods off the shelf and purchases them. The store restocks the shelf with enough new product to fill up the shelf space. Similarly, a work-center that needed parts would go to a "store shelf" (the inventory storage point) for the particular part and "buy" (withdraw) the quantity it needed, and the "shelf" would be "restocked" by the work-center that produced the part, making only enough to replace the inventory that had been withdrawn, as sketched in the example below. [ 4 ] [ 10 ]
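A minimal sketch of this supermarket pull mechanism in code, assuming a single part type, a fixed shelf capacity, and a producer that replenishes exactly what was withdrawn. All names and numbers are illustrative; this is a conceptual toy, not a description of Toyota's actual systems.

```python
class Shelf:
    """A 'store shelf' (inventory point) replenished by a pull signal."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.stock = capacity

    def withdraw(self, quantity):
        taken = min(quantity, self.stock)
        self.stock -= taken
        return taken  # the withdrawal itself is the production signal

    def restock(self):
        produced = self.capacity - self.stock  # make only what was withdrawn
        self.stock = self.capacity
        return produced

shelf = Shelf(capacity=10)
used = shelf.withdraw(4)        # downstream work-center "buys" 4 parts
made = shelf.restock()          # upstream work-center produces exactly 4
print(used, made, shelf.stock)  # 4 4 10
```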
While low inventory levels are a key outcome of the System, an important element of the philosophy behind its system is to work intelligently and eliminate waste so that only minimal inventory is needed. [ 9 ] Many Western businesses, having observed Toyota's factories, set out to attack high inventory levels directly without understanding what made these reductions possible. [ 11 ] The act of imitating without understanding the underlying concept or motivation may have led to the failure of those projects. [ citation needed ]
The underlying principles, called the Toyota Way, have been outlined by Toyota as follows: [ 12 ] [ 13 ]
External observers have summarized the principles of the Toyota Way as: [ 14 ]
What this means is that it is a system for thorough waste elimination. Here, waste refers to anything which does not advance the process, everything that does not increase added value. Many people settle for eliminating the waste that everyone recognizes as waste. But much remains that simply has not yet been recognized as waste or that people are willing to tolerate.
People had resigned themselves to certain problems, had become hostage to routine and abandoned the practice of problem-solving. This going back to basics, exposing the real significance of problems and then making fundamental improvements, can be witnessed throughout the Toyota Production System. [ 15 ]
The principles of the Toyota Production System have been compared to production methods in the industrialization of construction. [ 16 ]
Toyota originally began sharing TPS with its parts suppliers in the 1990s. Because of interest in the program from other organizations, Toyota began offering instruction in the methodology to others. Toyota has even "donated" its system to charities, providing its engineering staff and techniques to non-profits in an effort to increase their efficiency and thus their ability to serve people. For example, Toyota assisted the Food Bank For New York City to significantly decrease waiting times at soup kitchens, packing times at a food distribution center, and waiting times in a food pantry. [ 17 ] On June 29, 2011, Toyota announced the launch of a national program to donate its Toyota Production System expertise to nonprofit organizations, with the goal of improving their operations, extending their reach, and increasing their impact. [ 18 ] By September, less than three months later, SBP , a disaster relief organization based out of New Orleans, reported that the time to rebuild a home had been reduced from 12–18 weeks to 6 weeks. [ 19 ] Additionally, employing Toyota methods (like kaizen [ 20 ] ) had reduced construction errors by 50 percent. [ 19 ] The company included SBP among its first 20 community organizations, along with AmeriCorps . [ 18 ]
Taiichi Ohno's Workplace Management (2007) outlines in 38 chapters how to implement the TPS. Some important concepts are: | https://en.wikipedia.org/wiki/Toyota_Production_System |
Toyota TTC ( Toyota Total Clean System ) [ 1 ] is a moniker used in Japan to identify vehicles built with emission control technology. This technology was installed so that vehicles would comply with Japanese emission regulations passed in 1968. The term was introduced in Japan and included an externally mounted badge on the trunk of equipped vehicles. The technology first appeared in January 1975 on the Toyota Crown , Toyota Corona Mark II , Toyota Corona , Toyota Chaser , Toyota Carina , Toyota Corolla , and Toyota Sprinter . There were three different versions initially introduced: TTC-C for Catalyst (installing a catalytic converter ), TTC-V for Vortex (installing an exhaust gas recirculation valve), and TTC-L for Lean Burn (using a lean burn method). As Toyota's technology evolved, the three systems were eventually used in conjunction in future models.
The TTC-V was a licensed copy of Honda's CVCC system [ 2 ] and was introduced in February 1975. It was only available in the Carina and Corona lines, and only on the 19R engine, a modified 18R. From March 1976, the TTC-V system was upgraded to meet the stricter 1976 emissions standards. [ 3 ] The TTC-V engine was discontinued in 1977. The "Vortex" approach was also used with Mitsubishi's MCA-Jet technology, with Mitsubishi installing an extra valve in the cylinder head, as opposed to Honda's pre-chamber approach.
Toyota installed its emission control technology in select Daihatsu vehicles, as Toyota was a part owner. The system was labeled " DECS " ( Daihatsu Economical Cleanup System ). [ 4 ] The first version to be installed was the DECS-C (catalyst) in the Daihatsu Charmant and the Consorte . As the Japanese emissions regulations continued to be tightened, the DECS-C system was replaced by the DECS-L (lean burn) [ 5 ] [ 6 ] method, which was also installed in the Daihatsu Fellow , on the Daihatsu A-series engine , the Daihatsu Charade , and the Daihatsu Delta .
| https://en.wikipedia.org/wiki/Toyota_TTC
By the end of 2006 there were about 15 hybrid vehicles from various car makers available in the U.S. [ 1 ] Toyota sold its millionth hybrid by May 2007 and had sold a total of two million hybrids by the end of August 2009. [ 2 ]
Below is a comparison of the Toyota hybrid models. | https://en.wikipedia.org/wiki/Toyota_hybrid_vehicles |
Trabectedin , sold under the brand name Yondelis , is an antitumor chemotherapy medication for the treatment of advanced soft-tissue sarcoma and ovarian cancer . [ 3 ] [ 4 ]
The most common adverse reactions include nausea, fatigue, vomiting, constipation, decreased appetite, diarrhea, peripheral edema, dyspnea, and headache. [ 3 ] [ 4 ]
It is sold by Pharma Mar S.A. and Johnson and Johnson. It is approved for use in the European Union, Russia, South Korea and the United States. The European Commission and the U.S. Food and Drug Administration (FDA) granted orphan drug status to trabectedin for soft-tissue sarcomas and ovarian cancer .
During the 1950s and 1960s, the National Cancer Institute carried out a wide-ranging program of screening plant and marine organism material. As part of that program, extract from the sea squirt Ecteinascidia turbinata was found to have anticancer activity in 1969. [ 5 ] Separation and characterization of the active molecules had to wait many years for the development of sufficiently sensitive techniques, and the structure of one of them, Ecteinascidin 743, was determined by KL Rinehart at the University of Illinois in 1984. [ 6 ] Rinehart had collected his sea squirts by scuba diving in the reefs of the West Indies. [ 7 ] The biosynthetic pathway responsible for producing the drug has been determined to come from Candidatus Endoecteinascidia frumentensis, a microbial symbiont of the tunicate. [ 8 ] The Spanish company PharmaMar licensed the compound from the University of Illinois before 1994 [ citation needed ] and attempted to farm the sea squirt with limited success. [ 7 ] Yields from the sea squirt are extremely low: around 1,000 kilograms of animals are needed to isolate 1 gram of trabectedin, and about 5 grams were believed to be needed for a clinical trial, [ 9 ] so Rinehart asked the Harvard chemist E. J. Corey to search for a synthetic method of preparation. His group developed such a method and published it in 1996. [ 10 ] This was later followed by a simpler and more tractable method which was patented by Harvard and subsequently licensed to PharmaMar. [ 7 ] The current [ when? ] supply is based on a semisynthetic process developed by PharmaMar starting from safracin B, a chemical obtained by fermentation of the bacterium Pseudomonas fluorescens . [ 11 ] PharmaMar entered into an agreement with Johnson & Johnson to market the compound outside Europe. [ citation needed ]
Trabectedin was first trialed in humans in 1996. [ citation needed ]
In 2007, the European Commission gave authorization for the marketing of trabectedin, under the trade name Yondelis, "for the treatment of patients with advanced soft tissue sarcoma, after failure of anthracyclines and ifosfamide , or who are unsuited to receive these agents". [ 12 ] [ 4 ] The European Medicine Agency's evaluating committee, the Committee for Medicinal Products for Human Use (CHMP), observed that trabectedin had not been evaluated in an adequately designed and analyzed randomized controlled trial against current best care, and that the clinical efficacy data were mainly based on patients with liposarcoma and leiomyosarcoma . However, the pivotal study did show a significant difference between two different trabectedin treatment regimens, and due to the rarity of the disease, the CHMP considered that marketing authorization could be granted under exceptional circumstances. [ 13 ] As part of the approval PharmaMar agreed to conduct a further trial to identify whether any specific chromosomal translocations could be used to predict responsiveness to trabectedin. [ 14 ]
Trabectedin is also approved in South Korea [ 15 ] and Russia. [ citation needed ]
In 2015, (after a phase III study comparing trabectedin with dacarbazine [ 16 ] ), the US FDA approved trabectedin (Yondelis) for the treatment of liposarcoma and leiomyosarcoma that is either unresectable or has metastasized. Patients must have received prior chemotherapy with an anthracycline. [ 17 ]
In 2008, the submission was announced of a registration dossier to the European Medicines Agency and the FDA for Yondelis when administered in combination with pegylated liposomal doxorubicin (Doxil, Caelyx) for the treatment of women with relapsed ovarian cancer . In 2011, Johnson & Johnson voluntarily withdrew the submission in the United States following a request by the FDA for an additional phase III study to be done in support of the submission. [ 18 ]
Trabectedin is [ when? ] also in phase II trials for prostate, breast, and paediatric cancers. [ 19 ]
Trabectedin is composed of three tetrahydroisoquinoline moieties , eight rings including one 10-membered heterocyclic ring containing a cysteine residue, and seven chiral centers. [ 20 ] [ 21 ]
The biosynthesis of trabectedin in the tunicate symbiotic bacteria Candidatus Endoecteinascidia frumentensis starts with a fatty acid loading onto the acyl-ligase domain of the EtuA3 module. A cysteine and glycine are then loaded as canonical NRPS amino acids. A tyrosine residue is modified by the enzymes EtuH, EtuM1, and EtuM2 to add a hydroxyl at the meta position of the phenol, and adding two methyl groups at the para-hydroxyl and the meta carbon position. This modified tyrosine reacts with the original substrate via a Pictet-Spengler reaction , where the amine group is converted to an imine by deprotonation, then attacks the free aldehyde to form a carbocation that is quenched by electrons from the methyl-phenol ring. This is done in the EtuA2 T-domain. This reaction is done a second time to yield a dimer of modified tyrosine residues that have been further cyclized via Pictet-Spengler reaction, yielding a bicyclic ring moiety. The EtuO and EtuF3 enzymes continue to post-translationally modify the molecule, adding several functional groups and making a sulfide bridge between the original cysteine residue and the beta-carbon of the first tyrosine to form ET-583, ET-597, ET-596, and ET-594 which have been previously isolated. [ 8 ] A third O -methylated tyrosine is added and cyclized via Pictet-Spengler to yield the final product. [ 8 ]
The total synthesis by E.J. Corey [ 10 ] used this proposed biosynthesis to guide the synthetic strategy. The synthesis uses such reactions as the Mannich reaction , Pictet-Spengler reaction , the Curtius rearrangement , and chiral rhodium -based diphosphine - catalyzed enantioselective hydrogenation . A separate synthetic process also involved the Ugi reaction to assist in the formation of the pentacyclic core; the use of such a one-pot multicomponent reaction in the synthesis of so complex a molecule was unprecedented.
Recently, [ when? ] it has been shown that trabectedin blocks DNA binding of the oncogenic transcription factor FUS-CHOP and reverses the transcriptional program in myxoid liposarcoma . By reversing the genetic program created by this transcription factor, trabectedin promotes differentiation and reverses the oncogenic phenotype in these cells. [ 22 ]
Other than transcriptional interference, the mechanism of action of trabectedin is complex and not completely understood. The compound is known to bind and alkylate DNA at the N2 position of guanine. It is known from in vitro work that this binding occurs in the minor groove, spans approximately three to five base pairs and is most efficient with CGG sequences. Additional favorable binding sequences are TGG, AGC, or GGC. Once bound, this reversible covalent adduct bends DNA toward the major groove, interferes directly with activated transcription, poisons the transcription-coupled nucleotide excision repair complex, promotes degradation of RNA polymerase II, and generates DNA double-strand breaks. [ 22 ]
In 2024, researchers from ETH Zürich and UNIST determined that abortive transcription-coupled nucleotide excision repair of trabectedin-DNA adducts forms persistent single-strand breaks (SSBs) as the adducts block the second of the two sequential NER incisions. The researchers mapped the 3’-hydroxyl groups of SSBs originating from the first NER incision at trabectedin lesions, recording TC-NER on a genome-wide scale, which resulted in a TC-NER-profiling assay TRABI-Seq. [ 23 ]
In September 2020, the European Medicines Agency recommended that the use of trabectedin in treating ovarian cancer remain unchanged. [ 24 ] | https://en.wikipedia.org/wiki/Trabectedin |
Trabecular cartilages ( trabeculae cranii , sometimes simply trabeculae , prechordal cartilages ) are paired, rod-shaped cartilages, which develop in the head of the vertebrate embryo . They are the primordia of the anterior part of the cranial base , and are derived from the cranial neural crest cells .
The trabecular cartilages generally appear as paired, rod-shaped cartilages at the ventral side of the forebrain and the lateral side of the adenohypophysis in the vertebrate embryo. During development, their anterior ends fuse to form the trabecula communis . Their posterior ends fuse with the caudal-most parachordal cartilages .
In vertebrates, most skeletal elements are of mesodermal origin. Axial skeletal elements in particular, such as the vertebrae, are derived from the paraxial mesoderm (e.g., somites ), which is regulated by molecular signals from the notochord . Trabecular cartilages, however, originate from the neural crest , and since they are located anterior to the rostral tip of the notochord, [ 1 ] they cannot receive signals from the notochord. Because of these specializations, and their essential role in cranial development, many comparative morphologists and embryologists have debated their developmental and evolutionary origins. The general theory is that the trabecular cartilage is derived from the neural crest mesenchyme which fills the region anterior to the mandibular arch (the premandibular domain).
As clearly seen in the lamprey , cyclostomes also have a pair of cartilaginous rods in the embryonic head which are similar to the trabecular cartilages of jawed vertebrates.
However, in 1916, Alexei Nikolajevich Sewertzoff pointed out that the cranial base of the lamprey originates exclusively from the paraxial mesoderm. Then in 1948, Alf Johnels reported the details of skeletogenesis in the lamprey, and showed that the "trabecular cartilages" of the lamprey appear just beside the notochord, in a position similar to that of the parachordal cartilages of jawed vertebrates. [ 2 ] Recent experimental studies have also shown that these cartilages are derived from the head mesoderm. [ 3 ] The "trabecular cartilages" of cyclostomes are therefore no longer considered to be homologues of the trabeculae of jawed vertebrates: the true trabecular cartilages were first acquired in the gnathostome lineage.
The trabecular cartilages were first described in the grass snake by Martin Heinrich Rathke in 1839. [ 4 ] In 1874, Thomas Henry Huxley suggested that the trabecular cartilages are a modified part of the splanchnocranium : that they arose as serial homologues of the pharyngeal arches . [ citation needed ]
The vertebrate jaw is generally thought to be a modification of the mandibular arch (the first pharyngeal arch). Since the trabecular cartilages appear anterior to the mandibular arch, if the trabecular cartilages are serial homologues of the pharyngeal arches, ancestral vertebrates should have possessed one or more pharyngeal arches (so-called "premandibular arches") anterior to the mandibular arch. The existence of premandibular arches was accepted by many comparative embryologists and morphologists (e.g., Edwin Stephen Goodrich , Gavin de Beer ). Moreover, Erik Stensio reported premandibular arches and the corresponding branchiomeric nerves based on reconstructions of osteostracans (e.g., Cephalaspis ; recently this arch was reinterpreted as the mandibular arch).
However, the existence of the premandibular arch(es) has been rejected, and the trabecular cartilages are no longer assumed to be one of the pharyngeal arches. | https://en.wikipedia.org/wiki/Trabecular_cartilage |
In linear algebra , the trace of a square matrix A , denoted tr( A ) , [ 1 ] is the sum of the elements on its main diagonal , $a_{11} + a_{22} + \dots + a_{nn}$. It is only defined for square ($n \times n$) matrices.
The trace of a matrix is the sum of its eigenvalues (counted with multiplicities). Also, tr( AB ) = tr( BA ) for any matrices A and B of the same size. Thus, similar matrices have the same trace. As a consequence, one can define the trace of a linear operator mapping a finite-dimensional vector space into itself, since all matrices describing such an operator with respect to a basis are similar.
The trace is related to the derivative of the determinant (see Jacobi's formula ).
The trace of an $n \times n$ square matrix A is defined as [ 1 ] [ 2 ] [ 3 ] : 34 $\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{n} a_{ii} = a_{11} + a_{22} + \dots + a_{nn}$, where $a_{ii}$ denotes the entry on the i-th row and i-th column of A. The entries of A can be real numbers , complex numbers , or more generally elements of a field F . The trace is not defined for non-square matrices.
Let A be the matrix $\mathbf{A} = \begin{pmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 3 \\ 11 & 5 & 2 \\ 6 & 12 & -5 \end{pmatrix}$.
Then $\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^{3} a_{ii} = a_{11} + a_{22} + a_{33} = 1 + 5 + (-5) = 1$.
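A quick numerical check of this worked example; a minimal sketch assuming Python with NumPy installed:

```python
import numpy as np

A = np.array([[1, 0, 3],
              [11, 5, 2],
              [6, 12, -5]])

# Trace as the sum of the main-diagonal entries: 1 + 5 + (-5) = 1
print(np.trace(A))                     # 1
print(sum(A[i, i] for i in range(3)))  # 1, same by direct summation
```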
The trace is a linear mapping . That is, [ 1 ] [ 2 ] $\operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B})$ and $\operatorname{tr}(c\mathbf{A}) = c\,\operatorname{tr}(\mathbf{A})$ for all square matrices A and B , and all scalars c . [ 3 ] : 34
A matrix and its transpose have the same trace: [ 1 ] [ 2 ] [ 3 ] : 34 $\operatorname{tr}(\mathbf{A}) = \operatorname{tr}(\mathbf{A}^{\mathsf{T}})$.
This follows immediately from the fact that transposing a square matrix does not affect elements along the main diagonal.
The trace of a square matrix which is the product of two matrices can be rewritten as the sum of entry-wise products of their elements, i.e. as the sum of all elements of their Hadamard product . Phrased directly, if A and B are two m × n matrices, then $\operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{B}) = \operatorname{tr}(\mathbf{A}\mathbf{B}^{\mathsf{T}}) = \operatorname{tr}(\mathbf{B}^{\mathsf{T}}\mathbf{A}) = \operatorname{tr}(\mathbf{B}\mathbf{A}^{\mathsf{T}}) = \sum_{i=1}^{m}\sum_{j=1}^{n} a_{ij}b_{ij}$.
If one views any real m × n matrix as a vector of length mn (an operation called vectorization ), then the above operation on A and B coincides with the standard dot product . According to the above expression, $\operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{A})$ is a sum of squares and hence is nonnegative, equal to zero if and only if A is zero. [ 4 ] : 7 Furthermore, as noted in the above formula, $\operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{B}) = \operatorname{tr}(\mathbf{B}^{\mathsf{T}}\mathbf{A})$. These demonstrate the positive-definiteness and symmetry required of an inner product ; it is common to call $\operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{B})$ the Frobenius inner product of A and B. This is a natural inner product on the vector space of all real matrices of fixed dimensions. The norm derived from this inner product is called the Frobenius norm , and it satisfies a submultiplicative property, as can be proven with the Cauchy–Schwarz inequality : $0 \leq [\operatorname{tr}(\mathbf{A}\mathbf{B})]^{2} \leq \operatorname{tr}(\mathbf{A}^{\mathsf{T}}\mathbf{A})\,\operatorname{tr}(\mathbf{B}^{\mathsf{T}}\mathbf{B})$, if A and B are real matrices such that AB is a square matrix. The Frobenius inner product and norm arise frequently in matrix calculus and statistics .
The Frobenius inner product may be extended to a hermitian inner product on the complex vector space of all complex matrices of a fixed size, by replacing B by its complex conjugate .
The symmetry of the Frobenius inner product may be phrased more directly as follows: the matrices in the trace of a product can be switched without changing the result. If A and B are m × n and n × m real or complex matrices, respectively, then [ 1 ] [ 2 ] [ 3 ] : 34 [ note 1 ]
$\operatorname{tr}(\mathbf{A}\mathbf{B}) = \operatorname{tr}(\mathbf{B}\mathbf{A})$
This is notable both for the fact that AB does not usually equal BA , and also since the trace of either does not usually equal tr( A )tr( B ) . [ note 2 ] The similarity-invariance of the trace, meaning that $\operatorname{tr}(\mathbf{A}) = \operatorname{tr}(\mathbf{P}^{-1}\mathbf{A}\mathbf{P})$ for any square matrix A and any invertible matrix P of the same dimensions, is a fundamental consequence. This is proved by $\operatorname{tr}(\mathbf{P}^{-1}(\mathbf{A}\mathbf{P})) = \operatorname{tr}((\mathbf{A}\mathbf{P})\mathbf{P}^{-1}) = \operatorname{tr}(\mathbf{A})$. Similarity invariance is the crucial property of the trace in order to discuss traces of linear transformations as below.
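Both identities are easy to verify numerically; a sketch assuming NumPy, with randomly generated matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
B = rng.standard_normal((4, 3))

# tr(AB) = tr(BA), even though AB is 3x3 and BA is 4x4
print(np.allclose(np.trace(A @ B), np.trace(B @ A)))  # True

# Similarity invariance: tr(M) = tr(P^{-1} M P) for invertible P
M = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))  # invertible with probability 1
print(np.allclose(np.trace(M), np.trace(np.linalg.inv(P) @ M @ P)))  # True
```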
Additionally, for real column vectors $\mathbf{a} \in \mathbb{R}^{n}$ and $\mathbf{b} \in \mathbb{R}^{n}$, the trace of the outer product is equivalent to the inner product:
$\operatorname{tr}(\mathbf{b}\mathbf{a}^{\mathsf{T}}) = \mathbf{a}^{\mathsf{T}}\mathbf{b}$
More generally, the trace is invariant under circular shifts , that is,
$\operatorname{tr}(\mathbf{A}\mathbf{B}\mathbf{C}\mathbf{D}) = \operatorname{tr}(\mathbf{B}\mathbf{C}\mathbf{D}\mathbf{A}) = \operatorname{tr}(\mathbf{C}\mathbf{D}\mathbf{A}\mathbf{B}) = \operatorname{tr}(\mathbf{D}\mathbf{A}\mathbf{B}\mathbf{C})$
This is known as the cyclic property .
Arbitrary permutations are not allowed: in general, $\operatorname{tr}(\mathbf{A}\mathbf{B}\mathbf{C}\mathbf{D}) \neq \operatorname{tr}(\mathbf{A}\mathbf{C}\mathbf{B}\mathbf{D})$.
However, if products of three symmetric matrices are considered, any permutation is allowed, since $\operatorname{tr}(\mathbf{A}\mathbf{B}\mathbf{C}) = \operatorname{tr}((\mathbf{A}\mathbf{B}\mathbf{C})^{\mathsf{T}}) = \operatorname{tr}(\mathbf{C}\mathbf{B}\mathbf{A}) = \operatorname{tr}(\mathbf{A}\mathbf{C}\mathbf{B})$, where the first equality holds because the traces of a matrix and its transpose are equal. Note that this is not true in general for more than three factors.
The trace of the Kronecker product of two matrices is the product of their traces: $\operatorname{tr}(\mathbf{A} \otimes \mathbf{B}) = \operatorname{tr}(\mathbf{A})\operatorname{tr}(\mathbf{B})$.
The following three properties: $\operatorname{tr}(\mathbf{A} + \mathbf{B}) = \operatorname{tr}(\mathbf{A}) + \operatorname{tr}(\mathbf{B})$, $\operatorname{tr}(c\mathbf{A}) = c\,\operatorname{tr}(\mathbf{A})$, and $\operatorname{tr}(\mathbf{A}\mathbf{B}) = \operatorname{tr}(\mathbf{B}\mathbf{A})$, characterize the trace up to a scalar multiple in the following sense: if $f$ is a linear functional on the space of square matrices that satisfies $f(xy) = f(yx)$, then $f$ and $\operatorname{tr}$ are proportional. [ note 3 ]
For $n \times n$ matrices, imposing the normalization $f(\mathbf{I}) = n$ makes $f$ equal to the trace.
Given any n × n matrix A , we have
tr ( A ) = ∑ i = 1 n λ i {\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i=1}^{n}\lambda _{i}}
where λ 1 , ..., λ n are the eigenvalues of A counted with multiplicity. This holds true even if A is a real matrix and some (or all) of the eigenvalues are complex numbers. This may be regarded as a consequence of the existence of the Jordan canonical form , together with the similarity-invariance of the trace discussed above.
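As an illustrative check of the complex-eigenvalue case (a sketch assuming NumPy; the matrix is chosen only for this example), the real matrix below has eigenvalues ±i, whose sum is the real number 0, matching its trace:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])              # real matrix with eigenvalues +i and -i
lam = np.linalg.eigvals(A)
print(lam)                               # [0.+1.j  0.-1.j]
print(np.isclose(np.trace(A), lam.sum().real))   # True: 0 = i + (-i)
```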
When both A and B are n × n matrices, the trace of the (ring-theoretic) commutator of A and B vanishes: tr([ A , B ]) = 0 , because tr( AB ) = tr( BA ) and tr is linear. One can state this as "the trace is a map of Lie algebras gl n → k from operators to scalars", as the commutator of scalars is trivial (it is an Abelian Lie algebra ). In particular, using similarity invariance, it follows that the identity matrix is never similar to the commutator of any pair of matrices.
Conversely, any square matrix with zero trace is a linear combination of the commutators of pairs of matrices. [ note 4 ] Moreover, any square matrix with zero trace is unitarily equivalent to a square matrix with diagonal consisting of all zeros.
tr ( I n ) = n {\displaystyle \operatorname {tr} \left(\mathbf {I} _{n}\right)=n}
If A is nilpotent, then tr( A k ) = 0 for all k ≥ 1 , since all eigenvalues of A are zero. When the characteristic of the base field is zero, the converse also holds: if tr( A k ) = 0 for all k , then A is nilpotent.
The trace of an n × n {\displaystyle n\times n} matrix A {\displaystyle A} is, up to sign, the coefficient of t n − 1 {\displaystyle t^{n-1}} in the characteristic polynomial ; the sign depends on the convention used in the definition of the characteristic polynomial.
If A is a linear operator represented by a square matrix with real or complex entries and if λ 1 , ..., λ n are the eigenvalues of A (listed according to their algebraic multiplicities ), then
tr ( A ) = ∑ i λ i {\displaystyle \operatorname {tr} (\mathbf {A} )=\sum _{i}\lambda _{i}}
This follows from the fact that A is always similar to its Jordan form , an upper triangular matrix having λ 1 , ..., λ n on the main diagonal. In contrast, the determinant of A is the product of its eigenvalues; that is, det ( A ) = ∏ i λ i . {\displaystyle \det(\mathbf {A} )=\prod _{i}\lambda _{i}.}
Everything in the present section applies as well to any square matrix with coefficients in an algebraically closed field .
If ΔA is a square matrix with small entries and I denotes the identity matrix , then we have approximately
det ( I + Δ A ) ≈ 1 + tr ( Δ A ) . {\displaystyle \det(\mathbf {I} +\mathbf {\Delta A} )\approx 1+\operatorname {tr} (\mathbf {\Delta A} ).}
More precisely, this means that the trace is the derivative of the determinant function at the identity matrix. Jacobi's formula
d det ( A ) = tr ( adj ( A ) ⋅ d A ) {\displaystyle d\det(\mathbf {A} )=\operatorname {tr} {\big (}\operatorname {adj} (\mathbf {A} )\cdot d\mathbf {A} {\big )}}
is more general and describes the differential of the determinant at an arbitrary square matrix, in terms of the trace and the adjugate of the matrix.
From this (or from the connection between the trace and the eigenvalues), one can derive a relation between the trace function, the matrix exponential function, and the determinant: det ( exp ( A ) ) = exp ( tr ( A ) ) . {\displaystyle \det(\exp(\mathbf {A} ))=\exp(\operatorname {tr} (\mathbf {A} )).}
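Both relations are easy to check numerically. A minimal sketch, assuming NumPy and SciPy (scipy.linalg.expm computes the matrix exponential; the matrix and perturbation size are illustrative choices):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

# det(exp(A)) = exp(tr(A))
print(np.isclose(np.linalg.det(expm(A)), np.exp(np.trace(A))))       # True

# det(I + dA) ≈ 1 + tr(dA) for a small perturbation dA
dA = 1e-6 * A
print(np.isclose(np.linalg.det(np.eye(3) + dA), 1 + np.trace(dA)))   # True to first order
```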
A related characterization of the trace applies to linear vector fields . Given a matrix A , define a vector field F on R n by F ( x ) = Ax . The components of this vector field are linear functions (given by the rows of A ). Its divergence div F is a constant function, whose value is equal to tr( A ) .
By the divergence theorem , one can interpret this in terms of flows: if F ( x ) represents the velocity of a fluid at location x and U is a region in R n , the net flow of the fluid out of U is given by tr( A ) · vol( U ) , where vol( U ) is the volume of U .
The trace is a linear operator, hence it commutes with the derivative: d tr ( X ) = tr ( d X ) . {\displaystyle d\operatorname {tr} (\mathbf {X} )=\operatorname {tr} (d\mathbf {X} ).}
In general, given some linear map f : V → V (where V is a finite- dimensional vector space ), we can define the trace of this map by considering the trace of a matrix representation of f , that is, choosing a basis for V , describing f as a matrix relative to this basis, and taking the trace of this square matrix. The result does not depend on the basis chosen, since different bases give rise to similar matrices ; this yields a basis-independent definition of the trace of a linear map.
Such a definition can be given using the canonical isomorphism between the space End( V ) of linear maps on V and V ⊗ V * , where V * is the dual space of V . Let v be in V and let g be in V * . Then the trace of the indecomposable element v ⊗ g is defined to be g ( v ) ; the trace of a general element is defined by linearity. The trace of a linear map f : V → V can then be defined as the trace, in the above sense, of the element of V ⊗ V * corresponding to f under the above mentioned canonical isomorphism. Using an explicit basis for V and the corresponding dual basis for V * , one can show that this gives the same definition of the trace as given above.
The trace can be estimated unbiasedly by "Hutchinson's trick": [ 5 ]
Given any matrix W ∈ R n × n {\displaystyle {\boldsymbol {W}}\in \mathbb {R} ^{n\times n}} , and any random u ∈ R n {\displaystyle {\boldsymbol {u}}\in \mathbb {R} ^{n}} with E [ u u ⊺ ] = I {\displaystyle \mathbb {E} [{\boldsymbol {u}}{\boldsymbol {u}}^{\intercal }]=\mathbf {I} } , we have E [ u ⊺ W u ] = tr W {\displaystyle \mathbb {E} [{\boldsymbol {u}}^{\intercal }{\boldsymbol {W}}{\boldsymbol {u}}]=\operatorname {tr} {\boldsymbol {W}}} .
For a proof, expand the expectation directly.
Usually, the random vector is sampled from N ( 0 , I ) {\displaystyle \operatorname {N} (\mathbf {0} ,\mathbf {I} )} (normal distribution) or { ± n − 1 / 2 } n {\displaystyle \{\pm n^{-1/2}\}^{n}} ( Rademacher distribution ).
More sophisticated stochastic estimators of trace have been developed. [ 6 ]
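A minimal sketch of Hutchinson's estimator, assuming NumPy. The function name and sample count are this illustration's own choices, and ±1 Rademacher probes are used since their entries satisfy E[ u u ⊺ ] = I :

```python
import numpy as np

def hutchinson_trace(W, num_samples=1000, rng=None):
    """Unbiased Monte Carlo estimate of tr(W) using E[u^T W u] = tr(W)."""
    rng = rng or np.random.default_rng()
    n = W.shape[0]
    # Rademacher probes with entries +/-1, so that E[u u^T] = I.
    u = rng.choice([-1.0, 1.0], size=(num_samples, n))
    # Compute the quadratic form u_s^T W u_s for each sample s, then average.
    return float(np.mean(np.einsum('si,ij,sj->s', u, W, u)))

rng = np.random.default_rng(0)
W = rng.standard_normal((50, 50))
print(np.trace(W))                      # exact trace
print(hutchinson_trace(W, 5000, rng))   # stochastic estimate, close for many samples
```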
If a 2 × 2 real matrix has zero trace, its square is a diagonal matrix .
The trace of a 2 × 2 complex matrix is used to classify Möbius transformations . First, the matrix is normalized to make its determinant equal to one. Then, if the square of the trace is 4, the corresponding transformation is parabolic . If the square is in the interval [0,4) , it is elliptic . Finally, if the square is greater than 4, the transformation is loxodromic . See classification of Möbius transformations .
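The classification amounts to a short decision procedure on tr( M ) 2 after normalizing to det( M ) = 1 . A sketch assuming NumPy (the function name and tolerance are invented for this illustration, and the complex-trace case is folded into the loxodromic branch):

```python
import numpy as np

def classify_mobius(M, tol=1e-12):
    M = M / np.sqrt(np.linalg.det(M))   # normalize so that det(M) = 1
    t2 = np.trace(M) ** 2               # invariant under the sign ambiguity M -> -M
    if abs(t2.imag) > tol or t2.real > 4 + tol:
        return "loxodromic"
    if abs(t2.real - 4) <= tol:
        return "parabolic"
    return "elliptic"                   # square of the trace lies in [0, 4)

print(classify_mobius(np.array([[1, 1], [0, 1]], dtype=complex)))    # parabolic
print(classify_mobius(np.array([[0, -1], [1, 0]], dtype=complex)))   # elliptic
print(classify_mobius(np.array([[2, 0], [0, 0.5]], dtype=complex)))  # loxodromic
```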
The trace is used to define characters of group representations . Two representations A , B : G → GL ( V ) of a group G are equivalent (up to change of basis on V ) if tr( A ( g )) = tr( B ( g )) for all g ∈ G .
The trace also plays a central role in the distribution of quadratic forms .
The trace is a map of Lie algebras tr : g l n → K {\displaystyle \operatorname {tr} :{\mathfrak {gl}}_{n}\to K} from the Lie algebra g l n {\displaystyle {\mathfrak {gl}}_{n}} of linear operators on an n -dimensional space ( n × n matrices with entries in K {\displaystyle K} ) to the Lie algebra K of scalars; as K is Abelian (the Lie bracket vanishes), the fact that this is a map of Lie algebras is exactly the statement that the trace of a bracket vanishes: tr ( [ A , B ] ) = 0 for each A , B ∈ g l n . {\displaystyle \operatorname {tr} ([\mathbf {A} ,\mathbf {B} ])=0{\text{ for each }}\mathbf {A} ,\mathbf {B} \in {\mathfrak {gl}}_{n}.}
The kernel of this map consists of the matrices with trace zero; such a matrix is often said to be traceless or trace free , and these matrices form the simple Lie algebra s l n {\displaystyle {\mathfrak {sl}}_{n}} , which is the Lie algebra of the special linear group of matrices with determinant 1. The special linear group consists of the matrices which do not change volume, while the special linear Lie algebra consists of the matrices which do not alter the volume of infinitesimal sets.
In fact, there is an internal direct sum decomposition g l n = s l n ⊕ K {\displaystyle {\mathfrak {gl}}_{n}={\mathfrak {sl}}_{n}\oplus K} of operators/matrices into traceless operators/matrices and scalar operators/matrices. The projection map onto scalar operators can be expressed in terms of the trace, concretely as: A ↦ 1 n tr ( A ) I . {\displaystyle \mathbf {A} \mapsto {\frac {1}{n}}\operatorname {tr} (\mathbf {A} )\mathbf {I} .}
Formally, one can compose the trace (the counit map) with the unit map K → g l n {\displaystyle K\to {\mathfrak {gl}}_{n}} of "inclusion of scalars " to obtain a map g l n → g l n {\displaystyle {\mathfrak {gl}}_{n}\to {\mathfrak {gl}}_{n}} mapping onto scalars, and multiplying by n . Dividing by n makes this a projection, yielding the formula above.
In terms of short exact sequences , one has 0 → s l n → g l n → tr K → 0 {\displaystyle 0\to {\mathfrak {sl}}_{n}\to {\mathfrak {gl}}_{n}{\overset {\operatorname {tr} }{\to }}K\to 0} which is analogous to 1 → SL n → GL n → det K ∗ → 1 {\displaystyle 1\to \operatorname {SL} _{n}\to \operatorname {GL} _{n}{\overset {\det }{\to }}K^{*}\to 1} (where K ∗ = K ∖ { 0 } {\displaystyle K^{*}=K\setminus \{0\}} ) for Lie groups . However, the trace splits naturally (via 1 / n {\displaystyle 1/n} times scalars) so g l n = s l n ⊕ K {\displaystyle {\mathfrak {gl}}_{n}={\mathfrak {sl}}_{n}\oplus K} , but the splitting of the determinant would be as the n th root times scalars, and this does not in general define a function, so the determinant does not split and the general linear group does not decompose: GL n ≠ SL n × K ∗ . {\displaystyle \operatorname {GL} _{n}\neq \operatorname {SL} _{n}\times K^{*}.}
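The projection formula above is easy to demonstrate concretely. A small sketch, assuming NumPy, splits a random matrix into its scalar part and a traceless remainder:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
n = A.shape[0]

scalar_part = (np.trace(A) / n) * np.eye(n)   # projection of A onto scalar matrices
traceless_part = A - scalar_part              # has trace zero, i.e. lies in sl_n

print(np.isclose(np.trace(traceless_part), 0.0))      # True
print(np.allclose(A, scalar_part + traceless_part))   # True: the direct sum decomposition
```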
The bilinear form (where X , Y are square matrices) B ( X , Y ) = tr ( ad ( X ) ad ( Y ) ) {\displaystyle B(\mathbf {X} ,\mathbf {Y} )=\operatorname {tr} (\operatorname {ad} (\mathbf {X} )\operatorname {ad} (\mathbf {Y} ))}
B ( X , Y ) {\displaystyle B(\mathbf {X} ,\mathbf {Y} )} is called the Killing form ; it is used to classify Lie algebras .
The trace defines a bilinear form: ( X , Y ) ↦ tr ( X Y ) . {\displaystyle (\mathbf {X} ,\mathbf {Y} )\mapsto \operatorname {tr} (\mathbf {X} \mathbf {Y} )~.}
The form is symmetric, non-degenerate [ note 5 ] and associative in the sense that: tr ( X [ Y , Z ] ) = tr ( [ X , Y ] Z ) . {\displaystyle \operatorname {tr} (\mathbf {X} [\mathbf {Y} ,\mathbf {Z} ])=\operatorname {tr} ([\mathbf {X} ,\mathbf {Y} ]\mathbf {Z} ).}
For a complex simple Lie algebra (such as s l {\displaystyle {\mathfrak {sl}}} n ), all such bilinear forms are proportional to one another; in particular, each is proportional to the Killing form [ citation needed ] .
Two matrices X and Y are said to be trace orthogonal if tr ( X Y ) = 0. {\displaystyle \operatorname {tr} (\mathbf {X} \mathbf {Y} )=0.}
There is a generalization to a general representation ( ρ , g , V ) {\displaystyle (\rho ,{\mathfrak {g}},V)} of a Lie algebra g {\displaystyle {\mathfrak {g}}} , such that ρ {\displaystyle \rho } is a homomorphism of Lie algebras ρ : g → End ( V ) . {\displaystyle \rho :{\mathfrak {g}}\rightarrow {\text{End}}(V).} The trace form tr V {\displaystyle {\text{tr}}_{V}} on End ( V ) {\displaystyle {\text{End}}(V)} is defined as above. The bilinear form ϕ ( X , Y ) = tr V ( ρ ( X ) ρ ( Y ) ) {\displaystyle \phi (\mathbf {X} ,\mathbf {Y} )={\text{tr}}_{V}(\rho (\mathbf {X} )\rho (\mathbf {Y} ))} is symmetric and invariant due to cyclicity.
The concept of trace of a matrix is generalized to the trace class of compact operators on Hilbert spaces , and the analog of the Frobenius norm is called the Hilbert–Schmidt norm.
If K {\displaystyle K} is a trace-class operator, then for any orthonormal basis { e n } n = 1 ∞ {\displaystyle \{e_{n}\}_{n=1}^{\infty }} , the trace is given by tr ( K ) = ∑ n ⟨ e n , K e n ⟩ , {\displaystyle \operatorname {tr} (K)=\sum _{n}\left\langle e_{n},Ke_{n}\right\rangle ,} and is finite and independent of the orthonormal basis. [ 7 ]
The partial trace is another generalization of the trace that is operator-valued. The trace of a linear operator Z {\displaystyle Z} which lives on a product space A ⊗ B {\displaystyle A\otimes B} is equal to the partial traces over A {\displaystyle A} and B {\displaystyle B} : tr ( Z ) = tr A ( tr B ( Z ) ) = tr B ( tr A ( Z ) ) . {\displaystyle \operatorname {tr} (Z)=\operatorname {tr} _{A}\left(\operatorname {tr} _{B}(Z)\right)=\operatorname {tr} _{B}\left(\operatorname {tr} _{A}(Z)\right).}
For more properties and a generalization of the partial trace, see traced monoidal categories .
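A concrete matrix sketch of the consistency identity tr( Z ) = tr A (tr B ( Z )) , assuming NumPy; the operator on the product space is represented by reshaping a ( d A d B ) × ( d A d B ) matrix into a four-index array, and the dimensions are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(4)
dA, dB = 2, 3
Z = rng.standard_normal((dA * dB, dA * dB))
Z4 = Z.reshape(dA, dB, dA, dB)           # index order: (a, b, a', b')

tr_B = np.einsum('abcb->ac', Z4)         # partial trace over B: a dA x dA matrix
tr_A = np.einsum('abad->bd', Z4)         # partial trace over A: a dB x dB matrix

print(np.isclose(np.trace(Z), np.trace(tr_B)))   # True
print(np.isclose(np.trace(Z), np.trace(tr_A)))   # True
```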
If A {\displaystyle A} is a general associative algebra over a field k {\displaystyle k} , then a trace on A {\displaystyle A} is often defined to be any functional tr : A → k {\displaystyle \operatorname {tr} :A\to k} which vanishes on commutators; tr ( [ a , b ] ) = 0 {\displaystyle \operatorname {tr} ([a,b])=0} for all a , b ∈ A {\displaystyle a,b\in A} . Such a trace is not uniquely defined; it can always at least be modified by multiplication by a nonzero scalar.
A supertrace is the generalization of a trace to the setting of superalgebras .
The operation of tensor contraction generalizes the trace to arbitrary tensors.
Gomme and Klein (2011) define a matrix trace operator trm {\displaystyle \operatorname {trm} } that operates on block matrices and use it to compute second-order perturbation solutions to dynamic economic models without the need for tensor notation . [ 8 ]
Given a vector space V , there is a natural bilinear map V × V ∗ → F given by sending ( v , φ) to the scalar φ( v ) . The universal property of the tensor product V ⊗ V ∗ automatically implies that this bilinear map is induced by a linear functional on V ⊗ V ∗ . [ 9 ]
Similarly, there is a natural bilinear map V × V ∗ → Hom( V , V ) given by sending ( v , φ) to the linear map w ↦ φ( w ) v . The universal property of the tensor product, just as used previously, says that this bilinear map is induced by a linear map V ⊗ V ∗ → Hom( V , V ) . If V is finite-dimensional, then this linear map is a linear isomorphism . [ 9 ] This fundamental fact is a straightforward consequence of the existence of a (finite) basis of V , and can also be phrased as saying that any linear map V → V can be written as the sum of (finitely many) rank-one linear maps. Composing the inverse of the isomorphism with the linear functional obtained above results in a linear functional on Hom( V , V ) . This linear functional is exactly the same as the trace.
Using the definition of trace as the sum of diagonal elements, the matrix formula tr( AB ) = tr( BA ) is straightforward to prove, and was given above. In the present perspective, one is considering linear maps S and T , and viewing them as sums of rank-one maps, so that there are linear functionals φ i and ψ j and nonzero vectors v i and w j such that S ( u ) = Σ φ i ( u ) v i and T ( u ) = Σ ψ j ( u ) w j for any u in V . Then
( S ∘ T ) ( u ) = ∑ i , j ψ j ( u ) φ i ( w j ) v i {\displaystyle (S\circ T)(u)=\sum _{i,j}\psi _{j}(u)\varphi _{i}(w_{j})v_{i}}
for any u in V . The rank-one linear map u ↦ ψ j ( u ) φ i ( w j ) v i has trace ψ j ( v i ) φ i ( w j ) and so
tr ( S ∘ T ) = ∑ i , j ψ j ( v i ) φ i ( w j ) . {\displaystyle \operatorname {tr} (S\circ T)=\sum _{i,j}\psi _{j}(v_{i})\varphi _{i}(w_{j}).}
Following the same procedure with S and T reversed, one finds exactly the same formula, proving that tr( S ∘ T ) equals tr( T ∘ S ) .
The above proof can be regarded as being based upon tensor products, given that the fundamental identity of End( V ) with V ⊗ V ∗ is equivalent to the expressibility of any linear map as the sum of rank-one linear maps. As such, the proof may be written in the notation of tensor products. Then one may consider the multilinear map V × V ∗ × V × V ∗ → V ⊗ V ∗ given by sending ( v , φ , w , ψ ) to φ ( w ) v ⊗ ψ . Further composition with the trace map then results in φ ( w ) ψ ( v ) , and this is unchanged if one were to have started with ( w , ψ , v , φ ) instead. One may also consider the bilinear map End( V ) × End( V ) → End( V ) given by sending ( f , g ) to the composition f ∘ g , which is then induced by a linear map End( V ) ⊗ End( V ) → End( V ) . It can be seen that this coincides with the linear map V ⊗ V ∗ ⊗ V ⊗ V ∗ → V ⊗ V ∗ . The established symmetry upon composition with the trace map then establishes the equality of the two traces. [ 9 ]
For any finite dimensional vector space V , there is a natural linear map F → V ⊗ V ∗ ; in the language of linear maps, it assigns to a scalar c the linear map c ⋅id V . Sometimes this is called the coevaluation map , and the trace V ⊗ V ∗ → F is called the evaluation map . [ 9 ] These structures can be axiomatized to define categorical traces in the abstract setting of category theory . | https://en.wikipedia.org/wiki/Trace_(linear_algebra) |
Trace amines are an endogenous group of trace amine-associated receptor 1 (TAAR1) agonists [ 1 ] – and hence, monoaminergic neuromodulators [ 2 ] [ 3 ] [ 4 ] – that are structurally and metabolically related to classical monoamine neurotransmitters . [ 5 ] Compared to the classical monoamines, they are present in trace concentrations. [ 5 ] They are distributed heterogeneously throughout the mammalian brain and peripheral nervous tissues and exhibit high rates of metabolism . [ 5 ] [ 6 ] Although they can be synthesized within parent monoamine neurotransmitter systems, [ 7 ] there is evidence that suggests that some of them may comprise their own independent neurotransmitter systems. [ 2 ]
Trace amines play significant roles in regulating the quantity of monoamine neurotransmitters in the synaptic cleft of monoamine neurons with co-localized TAAR1 . [ 6 ] They have well-characterized presynaptic amphetamine-like effects on these monoamine neurons via TAAR1 activation; [ 3 ] [ 4 ] specifically, by activating TAAR1 in neurons they promote the release [ note 1 ] and prevent reuptake of monoamine neurotransmitters from the synaptic cleft as well as inhibit neuronal firing . [ 6 ] [ 8 ] Phenethylamine and amphetamine possess analogous pharmacodynamics in human dopamine neurons , as both compounds induce efflux from vesicular monoamine transporter 2 (VMAT2) [ 7 ] [ 9 ] and activate TAAR1 with comparable efficacy. [ 6 ]
Like dopamine , norepinephrine , and serotonin , the trace amines have been implicated in a vast array of human disorders of affect and cognition, such as ADHD , [ 3 ] [ 4 ] [ 10 ] depression , [ 3 ] [ 4 ] and schizophrenia , [ 2 ] [ 3 ] [ 4 ] among others. [ 3 ] [ 4 ] [ 10 ] Trace aminergic hypo-function is particularly relevant to ADHD, since urinary and plasma phenethylamine concentrations are significantly lower in individuals with ADHD relative to controls and the two most commonly prescribed drugs for ADHD, amphetamine and methylphenidate , increase phenethylamine biosynthesis in treatment-responsive individuals with ADHD. [ 3 ] [ 11 ] A systematic review of ADHD biomarkers also indicated that urinary phenethylamine levels could be a diagnostic biomarker for ADHD. [ 11 ]
The human trace amines include:
While not trace amines themselves, the classical monoamines norepinephrine , serotonin , and histamine are all partial agonists at the human TAAR1 receptor; [ 6 ] dopamine is a high-affinity agonist at human TAAR1. [ 8 ] [ 17 ] [ 18 ] N -Methyltryptamine and N , N -dimethyltryptamine are endogenous amines in humans; however, their human TAAR1 binding had not been determined as of 2015. [ 2 ]
Trace amines are so-named because they are present in the nervous system at trace, or very low, concentrations. [ 19 ] These concentrations are much lower than those of classical monoamine neurotransmitters like serotonin, dopamine, and norepinephrine. [ 19 ] However, the rapid metabolic turnover of trace amines, a consequence of their strong susceptibility to monoamine oxidases , suggests that they may be present at chemical synapses at much higher concentrations than predicted by steady-state measures. [ 19 ]
A thorough review of trace amine-associated receptors that discusses the historical evolution of this research particularly well is that of Grandy. [ 20 ] | https://en.wikipedia.org/wiki/Trace_amine |
Trace amine-associated receptors ( TAARs ), sometimes referred to as trace amine receptors ( TA s or TAR s), are a class of G protein-coupled receptors that were discovered in 2001. [ 1 ] [ 2 ] TAAR1 , the first of six functional human TAARs, has gained considerable interest in academic and proprietary pharmaceutical research due to its role as the endogenous receptor for the trace amines phenethylamine , tyramine , and tryptamine – metabolic derivatives of the amino acids phenylalanine , tyrosine and tryptophan , respectively – as well as for ephedrine and the synthetic psychostimulants amphetamine , methamphetamine and methylenedioxymethamphetamine ( MDMA , ecstasy). [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] In 2004, it was shown that mammalian TAAR1 is also a receptor for thyronamines , decarboxylated and deiodinated relatives of thyroid hormones . [ 5 ] TAAR2–TAAR9 function as olfactory receptors for volatile amine odorants in vertebrates . [ 9 ]
The following is a list of the TAARs contained in selected animal genomes : [ 10 ] [ 11 ]
Six human trace amine-associated receptors (hTAARs) – hTAAR1 , hTAAR2 , hTAAR5 , hTAAR6 , hTAAR8 , and hTAAR9 – have been identified and partially characterized. The table below contains summary information from literature reviews, pharmacology databases, and supplementary primary research articles on the expression profiles, signal transduction mechanisms, ligands, and physiological functions of these receptors.
Ulotaront ( SEP-363856 ) is a TAAR1 agonist in phase 3 clinical trials for schizophrenia and in earlier-stage trials for Parkinson's disease psychosis. The medicine has obtained Breakthrough Therapy designation from the US FDA. [ 30 ] [ 31 ] [ 32 ] | https://en.wikipedia.org/wiki/Trace_amine-associated_receptor |
In mathematics , trace diagrams are a graphical means of performing computations in linear and multilinear algebra . They can be represented as (slightly modified) graphs in which some edges are labeled by matrices . The simplest trace diagrams represent the trace and determinant of a matrix. Several results in linear algebra, such as Cramer's Rule and the Cayley–Hamilton theorem , have simple diagrammatic proofs. They are closely related to Penrose's graphical notation .
Let V be a vector space of dimension n over a field F (with n ≥2), and let Hom( V , V ) denote the linear transformations on V . An n -trace diagram is a graph D = ( V 1 ⊔ V 2 ⊔ V n , E ) {\displaystyle {\mathcal {D}}=(V_{1}\sqcup V_{2}\sqcup V_{n},E)} , where the sets V i ( i = 1, 2, n ) are composed of vertices of degree i , together with the following additional structures:
Note that V 2 and V n should be considered as distinct sets in the case n = 2. A framed trace diagram is a trace diagram together with a partition of the degree-1 vertices V 1 into two disjoint ordered collections called the inputs and the outputs .
The "graph" underlying a trace diagram may have the following special features, which are not always included in the standard definition of a graph:
Every framed trace diagram corresponds to a multilinear function between tensor powers of the vector space V . The degree-1 vertices correspond to the inputs and outputs of the function, while the degree- n vertices correspond to the generalized Levi-Civita symbol (which is an anti-symmetric tensor related to the determinant ). If a diagram has no output strands, its function maps tensor products to a scalar. If there are no degree-1 vertices, the diagram is said to be closed and its corresponding function may be identified with a scalar.
By definition, a trace diagram's function is computed using signed graph coloring. For each edge coloring of the graph's edges by n labels, so that no two edges adjacent to the same vertex have the same label, one assigns a weight based on the labels at the vertices and the labels adjacent to the matrix labels. These weights become the coefficients of the diagram's function.
In practice, a trace diagram's function is typically computed by decomposing the diagram into smaller pieces whose functions are known. The overall function can then be computed by re-composing the individual functions.
Several vector identities have easy proofs using trace diagrams. This section covers 3-trace diagrams. In the translation of diagrams to functions, it can be shown that the positions of ciliations at the degree-3 vertices have no influence on the resulting function, so they may be omitted.
It can be shown that the cross product and dot product of 3-dimensional vectors are represented by
In this picture, the inputs to the function are shown as vectors in yellow boxes at the bottom of the diagram. The cross product diagram has an output vector, represented by the free strand at the top of the diagram. The dot product diagram does not have an output vector; hence, its output is a scalar.
As a first example, consider the scalar triple product identity
To prove this diagrammatically, note that all of the following figures are different depictions of the same 3-trace diagram (as specified by the above definition):
Combining the above diagrams for the cross product and the dot product, one can read off the three leftmost diagrams as precisely the three leftmost scalar triple products in the above identity. It can also be shown that the rightmost diagram represents det[ u v w ]. The scalar triple product identity follows because each is a different representation of the same diagram's function.
As a second example, one can show that
(where the equality indicates that the identity holds for the underlying multilinear functions). One can show that this kind of identity does not change by "bending" the diagram or attaching more diagrams, provided the changes are consistent across all diagrams in the identity. Thus, one can bend the top of the diagram down to the bottom, and attach vectors to each of the free edges, to obtain
which reads
a well-known identity relating four 3-dimensional vectors.
The simplest closed diagrams with a single matrix label correspond to the coefficients of the characteristic polynomial , up to a scalar factor that depends only on the dimension of the matrix. One representation of these diagrams is shown below, where ∝ {\displaystyle \propto } is used to indicate equality up to a scalar factor that depends only on the dimension n of the underlying vector space.
Let G be the group of n×n matrices. If a closed trace diagram is labeled by k different matrices, it may be interpreted as a function from G k {\displaystyle G^{k}} to an algebra of multilinear functions. This function is invariant under simultaneous conjugation , that is, the function corresponding to ( g 1 , … , g k ) {\displaystyle (g_{1},\ldots ,g_{k})} is the same as the function corresponding to ( a g 1 a − 1 , … , a g k a − 1 ) {\displaystyle (ag_{1}a^{-1},\ldots ,ag_{k}a^{-1})} for any invertible a ∈ G {\displaystyle a\in G} .
Trace diagrams may be specialized for particular Lie groups by altering the definition slightly. In this context, they are sometimes called birdtracks , tensor diagrams , or Penrose graphical notation .
Trace diagrams have primarily been used by physicists as a tool for studying Lie groups . The most common applications use representation theory to construct spin networks from trace diagrams. In mathematics, they have been used to study character varieties .
Books: | https://en.wikipedia.org/wiki/Trace_diagram |
A trace element is a chemical element of a minute quantity, a trace amount, especially used in referring to a micronutrient , [ 1 ] [ 2 ] but is also used to refer to minor elements in the composition of a rock , or other chemical substance .
In nutrition , trace elements are classified into two groups: essential trace elements and non-essential trace elements. Essential trace elements are needed for many physiological and biochemical processes in both plants and animals. Not only do trace elements play a role in biological processes, they also serve as catalysts in redox (oxidation and reduction) mechanisms. [ 3 ] Trace elements of some heavy metals have a biological role as essential micronutrients .
The two types of trace element in biochemistry are classed as essential or non-essential.
An essential trace element is a dietary element , a mineral that is only needed in minute quantities for the proper growth, development, and physiology of the organism. [ 4 ] The essential trace elements are those that are required to perform vital metabolic activities in organisms. [ 5 ] Essential trace elements in the nutrition of humans and other animals include iron (Fe) (hemoglobin), copper (Cu) ( respiratory pigments ), cobalt (Co) ( Vitamin B12 ), iodine (I), manganese (Mn), chlorine (Cl), molybdenum (Mo), selenium (Se) and zinc (Zn) (enzymes). [ 5 ] [ 6 ] Although they are essential, they become toxic at high concentrations. [ 7 ]
Non-essential trace elements include silver (Ag), cadmium (Cd), mercury (Hg), and lead (Pb). They have no known biological function in mammals and have toxic effects even at low concentrations. [ 5 ]
The structural components of cells and tissues that are required in the diet in gram quantities daily are known as bulk elements . [ 8 ] | https://en.wikipedia.org/wiki/Trace_element |
A trace fossil , also called an ichnofossil ( / ˈ ɪ k n oʊ ˌ f ɒ s ɪ l / ; from Ancient Greek ἴχνος ( íkhnos ) ' trace, track ' ), is a fossil record of biological activity by lifeforms , but not the preserved remains of the organism itself. [ 1 ] Trace fossils contrast with body fossils, which are the fossilized remains of parts of organisms' bodies, usually altered by later chemical activity or by mineralization . The study of such trace fossils is ichnology – the work of ichnologists . [ 2 ]
Trace fossils may consist of physical impressions made on or in the substrate by an organism. [ 3 ] For example, burrows , borings ( bioerosion ), urolites (erosion caused by evacuation of liquid wastes), footprints , feeding marks, and root cavities may all be trace fossils.
The term in its broadest sense also includes the remains of other organic material produced by an organism; for example coprolites (fossilized droppings) or chemical markers (sedimentological structures produced by biological means; for example, the formation of stromatolites ). However, most sedimentary structures (for example those produced by empty shells rolling along the sea floor) are not produced through the behaviour of an organism and thus are not considered trace fossils.
The study of traces – ichnology – divides into paleoichnology , or the study of trace fossils, and neoichnology , the study of modern traces. Ichnological science offers many challenges, as most traces reflect the behaviour – not the biological affinity – of their makers. Accordingly, researchers classify trace fossils into form genera based on their appearance and on the implied behaviour, or ethology , of their makers.
Traces are better known in their fossilized form than in modern sediments. [ 4 ] This makes it difficult to interpret some fossils by comparing them with modern traces, even though they may be extant or even common. [ 4 ] The main difficulties in accessing extant burrows stem from finding them in consolidated sediment, and being able to access those formed in deeper water.
Trace fossils are best preserved in sandstones; [ 4 ] the grain size and depositional facies both contribute to the better preservation. They may also be found in shales and limestones. [ 4 ]
Trace fossils are generally difficult or impossible to assign to a specific maker. Only in very rare occasions are the makers found in association with their tracks. Further, entirely different organisms may produce identical tracks. Therefore, conventional taxonomy is not applicable, and a comprehensive form of taxonomy has been erected. At the highest level of the classification, five behavioral modes are recognized: [ 4 ]
Fossils are further classified into form genera, a few of which are even subdivided to a "species" level. Classification is based on shape, form, and implied behavioural mode.
To keep body and trace fossils nomenclatorially separate, ichnospecies are erected for trace fossils. Ichnotaxa are classified somewhat differently in zoological nomenclature than taxa based on body fossils (see trace fossil classification for more information). Examples include:
Trace fossils are important paleoecological and paleoenvironmental indicators, because they are preserved in situ , or in the life position of the organism that made them. [ 5 ] Because identical fossils can be created by a range of different organisms, trace fossils can only reliably inform us of two things: the consistency of the sediment at the time of its deposition, and the energy level of the depositional environment . [ 6 ] Attempts to deduce such traits as whether a deposit is marine or non-marine have been made, but shown to be unreliable. [ 6 ]
Trace fossils provide us with indirect evidence of life in the past , such as the footprints, tracks, burrows, borings, and feces left behind by animals, rather than the preserved remains of the body of the actual animal itself. Unlike most other fossils, which are produced only after the death of the organism concerned, trace fossils provide us with a record of the activity of an organism during its lifetime. Unlike body fossils, which can be transported far away from where an individual organism lived, trace fossils record the type of environment an animal actually inhabited and thus can provide a more accurate palaeoecological sample than body fossils. [ 7 ]
Trace fossils are formed by organisms performing the functions of their everyday life, such as walking, crawling, burrowing, boring, or feeding. Tetrapod footprints, worm trails and the burrows made by clams and arthropods are all trace fossils.
Perhaps the most spectacular trace fossils are the huge, three-toed footprints produced by dinosaurs and related archosaurs . These imprints give scientists clues as to how these animals lived. Although the skeletons of dinosaurs can be reconstructed, only their fossilized footprints can determine exactly how they stood and walked. Such tracks can tell much about the gait of the animal which made them, what its stride was, and whether the front limbs touched the ground or not.
However, most trace fossils are rather less conspicuous, such as the trails made by segmented worms or nematodes . Some of these worm castings are the only fossil record we have of these soft-bodied creatures. [ citation needed ]
Fossil footprints made by tetrapod vertebrates are difficult to identify to a particular species of animal, but they can provide valuable information such as the speed, weight, and behavior of the organism that made them. Such trace fossils are formed when amphibians , reptiles , mammals , or birds walked across soft (probably wet) mud or sand which later hardened sufficiently to retain the impressions before the next layer of sediment was deposited. Some fossils can even provide details of how wet the sand was when they were being produced, and hence allow estimation of paleo-wind directions. [ 8 ]
Assemblages of trace fossils occur at certain water depths, [ 4 ] and can also reflect the salinity and turbidity of the water column.
Some trace fossils can be used as local index fossils , to date the rocks in which they are found, such as the burrow Arenicolites franconicus , which occurs only in a 4 cm (1 1⁄2 in) layer of the Triassic Muschelkalk epoch, throughout wide areas in southern Germany . [ 9 ]
The base of the Cambrian period is defined by the first appearance of the trace fossil Treptichnus pedum . [ 10 ]
Trace fossils have a further utility, as many appear before the organism thought to create them, extending their stratigraphic range. [ 11 ]
Ichnofacies are assemblages of individual trace fossils that occur repeatedly in time and space. [ 12 ] Palaeontologist Adolf Seilacher pioneered the concept of ichnofacies, whereby geologists infer the state of a sedimentary system at its time of deposition by noting the fossils in association with one another. [ 4 ] The principal ichnofacies recognized in the literature are Skolithos , Cruziana , Zoophycos , Nereites , Glossifungites, Scoyenia , Trypanites , Teredolites , and Psilonichus . [ 12 ] [ 13 ] These assemblages are not random. In fact, the assortment of fossils preserved are primarily constrained by the environmental conditions in which the trace-making organisms dwelt. [ 13 ] Water depth, salinity , hardness of the substrate, dissolved oxygen, and many other environmental conditions control which organisms can inhabit particular areas. [ 12 ] Therefore, by documenting and researching changes in ichnofacies, scientists can interpret changes in environment. [ 13 ] For example, ichnological studies have been utilized across mass extinction boundaries, such as the Cretaceous–Paleogene mass extinction , to aid in understanding environmental factors involved in mass extinction events. [ 14 ] [ 15 ]
Most trace fossils are known from marine deposits. [ 16 ] Essentially, there are two types of traces, either exogenic ones, which are made on the surface of the sediment (such as tracks) or endogenic ones, which are made within the layers of sediment (such as burrows).
Surface trails on sediment in shallow marine environments stand less chance of fossilization because they are subjected to wave and current action. Conditions in quiet, deep-water environments tend to be more favorable for preserving fine trace structures.
Most trace fossils are usually readily identified by reference to similar phenomena in modern environments. However, the structures made by organisms in recent sediment have only been studied in a limited range of environments, mostly in coastal areas, including tidal flats . [ citation needed ]
The earliest complex trace fossils, not including microbial traces such as stromatolites , date to 2,000 to 1,800 million years ago . This is far too early for them to have an animal origin, and they are thought to have been formed by amoebae . [ 17 ] Putative "burrows" dating as far back as 1,100 million years may have been made by animals which fed on the undersides of microbial mats, which would have shielded them from a chemically unpleasant ocean; [ 18 ] however their uneven width and tapering ends make a biological origin so difficult to defend [ 19 ] that even the original author no longer believes they are authentic. [ 20 ]
The first evidence of burrowing which is widely accepted dates to the Ediacaran (Vendian) period, around 560 million years ago . [ 21 ] During this period the traces and burrows are basically horizontal, on or just below the seafloor surface. Such traces must have been made by motile organisms with heads, which would probably have been bilaterian animals . [ 22 ] The traces observed imply simple behaviour, and point to organisms feeding above the surface and burrowing for protection from predators. [ 23 ] Contrary to the widely circulated opinion that Ediacaran burrows are only horizontal, the vertical burrows Skolithos are also known. [ 24 ] The producers of the burrows Skolithos declinatus from the Vendian (Ediacaran) beds in Russia, dated to 555.3 million years ago, have not been identified; they might have been filter feeders subsisting on nutrients from the suspension. The density of these burrows is up to 245 burrows/dm 2 . [ 25 ] Some Ediacaran trace fossils have been found directly associated with body fossils. Yorgia and Dickinsonia are often found at the end of long pathways of trace fossils matching their shape. [ 26 ] The feeding was performed in a mechanical way; supposedly the ventral side of the body of these organisms was covered with cilia . [ 27 ] The potential mollusc relative Kimberella is associated with scratch marks, perhaps formed by a radula , [ 28 ] and further traces from 555 million years ago appear to imply active crawling or burrowing activity. [ 29 ]
As the Cambrian got underway, new forms of trace fossil appeared, including vertical burrows (e.g. Diplocraterion ) and traces normally attributed to arthropods . [ 30 ] These represent a "widening of the behavioural repertoire", [ 31 ] both in terms of abundance and complexity. [ 32 ]
Trace fossils are a particularly significant source of data from this period because they represent a data source that is not directly connected to the presence of easily fossilized hard parts, which are rare during the Cambrian. Whilst exact assignment of trace fossils to their makers is difficult, the trace fossil record seems to indicate that at the very least, large, bottom-dwelling, bilaterally symmetrical organisms were rapidly diversifying during the early Cambrian . [ 33 ]
Further, less rapid [ verification needed ] diversification has occurred since, [ verification needed ] and many traces have been converged upon independently by unrelated groups of organisms. [ 4 ]
Trace fossils also provide our earliest evidence of animal life on land. [ 34 ] Evidence of the first animals that appear to have been fully terrestrial dates to the Cambro-Ordovician and is in the form of trackways. [ 35 ] Trackways from the Ordovician Tumblagooda sandstone allow the behaviour of other terrestrial organisms to be determined. [ 8 ] The trackway Protichnites represents traces from an amphibious or terrestrial arthropod going back to the Cambrian. [ 36 ]
Less ambiguous than the above ichnogenera, are the traces left behind by invertebrates such as Hibbertopterus , a giant "sea scorpion" or eurypterid of the early Paleozoic era. This marine arthropod produced a spectacular track preserved in Scotland. [ 40 ]
Bioerosion through time has produced a magnificent record of borings, gnawings, scratchings and scrapings on hard substrates. These trace fossils are usually divided into macroborings [ 41 ] and microborings. [ 42 ] [ 43 ] Bioerosion intensity and diversity is punctuated by two events. One is called the Ordovician Bioerosion Revolution (see Wilson & Palmer, 2006) and the other was in the Jurassic. [ 44 ] For a comprehensive bibliography of the bioerosion literature, please see the External links below.
The oldest types of tetrapod tail-and-footprints date back to the late Devonian period. These vertebrate impressions have been found in Ireland , Scotland , Pennsylvania , and Australia . A sandstone slab containing the track of a tetrapod, dated to 400 million years ago , is amongst the oldest evidence of a vertebrate walking on land. [ 45 ]
Important human trace fossils are the Laetoli ( Tanzania ) footprints, imprinted in volcanic ash 3.7 Ma (million years ago) – probably by an early Australopithecus . [ 46 ]
Trace fossils are not body casts. The Ediacara biota , for instance, primarily comprises the casts of organisms in sediment. Similarly, a footprint is not a simple replica of the sole of the foot, and the resting trace of a seastar has different details than an impression of a seastar.
Early paleobotanists misidentified a wide variety of structures they found on the bedding planes of sedimentary rocks as fucoids ( Fucales , a kind of brown algae or seaweed ). However, even during the earliest decades of the study of ichnology, some fossils were recognized as animal footprints and burrows. Studies in the 1880s by A. G. Nathorst and Joseph F. James comparing 'fucoids' to modern traces made it increasingly clear that most of the specimens identified as fossil fucoids were animal trails and burrows. True fossil fucoids are quite rare.
Pseudofossils , which are not true fossils, should also not be confused with ichnofossils, which are true indications of prehistoric life.
Charles Darwin 's The Formation of Vegetable Mould through the Action of Worms [ a ] is an example of a very early work on ichnology, describing bioturbation and, in particular, the burrowing of earthworms . [ 47 ] | https://en.wikipedia.org/wiki/Trace_fossil |
Trace fossils are classified in various ways for different purposes. Traces can be classified taxonomically (by morphology), ethologically (by behavior), and toponomically , that is, according to their relationship to the surrounding sedimentary layers. Except in the rare cases where the original maker of a trace fossil can be identified with confidence, phylogenetic classification of trace fossils is an unreasonable proposition.
The taxonomic classification of trace fossils parallels the taxonomic classification of organisms under the International Code of Zoological Nomenclature . In trace fossil nomenclature a Latin binomial name is used, just as in animal and plant taxonomy , with a genus and specific epithet . However, the binomial names are not linked to an organism, but rather just a trace fossil. This is due to the rarity of association between a trace fossil and a specific organism or group of organisms. Trace fossils are therefore included in an ichnotaxon separate from Linnaean taxonomy . When referring to trace fossils, the terms ichnogenus and ichnospecies parallel genus and species respectively.
The most promising cases of phylogenetic classification are those in which similar trace fossils show details complex enough to deduce the makers, such as bryozoan borings , large trilobite trace fossils such as Cruziana , and vertebrate footprints . However, most trace fossils lack sufficiently complex details to allow such classification.
Adolf Seilacher was the first to propose a broadly accepted ethological basis for trace fossil classification. [ 1 ] [ 2 ] He recognized that most trace fossils are created by animals in one of five main behavioural activities, and named them accordingly:
Since the inception of behavioural categorization, several other ethological classes have been suggested and accepted, as follows:
Over the years several other behavioural groups have been proposed, but in general they have been quickly discarded by the ichnological community. Some of the failed proposals are listed below, with a brief description.
Fixichnia [ 10 ] is perhaps the group with the most weight as a candidate for the next accepted ethological class, being not fully described by any of the eleven currently accepted categories. There is also potential for the three plant traces (cecidoichnia, corrosichnia and sphenoichnia) to gain recognition in coming years, with little attention having been paid to them since their proposal. [ 11 ]
Another way to classify trace fossils is to look at their relation to the sediment of origin. Martinsson [ 12 ] has provided the most widely accepted of such systems, identifying four distinct classes for traces to be separated in this regard:
Other classifications have been proposed, [ 2 ] [ 13 ] [ 14 ] but none stray far from the above.
Early paleontologists originally classified many burrow fossils as the remains of marine algae , as is apparent in ichnogenera named with the -phycus suffix. Alfred Gabriel Nathorst and Joseph F. James both controversially challenged this incorrect classification, suggesting the reinterpretation of many "algae" as marine invertebrate trace fossils. [ 15 ]
Several attempts to classify trace fossils have been made throughout the history of paleontology. In 1844, Edward Hitchcock proposed two orders : Apodichnites , including footless trails, and Polypodichnites , including trails of organisms with more than four feet. [ 15 ] | https://en.wikipedia.org/wiki/Trace_fossil_classification |
Electric heat tracing , heat tape or surface heating , is a system used to maintain or raise the temperature of pipes and vessels using heat tracing cables. Trace heating takes the form of an electrical heating element run in physical contact along the length of a pipe. The pipe is usually covered with thermal insulation to reduce heat loss from the pipe. Heat generated by the element then maintains the temperature of the pipe. Trace heating may be used to protect pipes from freezing, to maintain a constant flow temperature in hot water systems, or to maintain process temperatures for piping that must transport substances that solidify at ambient temperatures. Electric trace heating cables are an alternative to steam trace heating where steam is unavailable or unwanted. [ 2 ]
Electric trace heating began in the 1930s but initially no dedicated equipment was available. Mineral insulated cables ran at high current densities to produce heat, and control equipment was adapted from other applications. [ 3 ] Mineral-insulated resistance heating cable was introduced in the 1950s, and parallel-type heating cables that could be cut to length in the field became available. Self-limiting thermoplastic cables were marketed in 1971. [ 4 ]
Control systems for trace heating systems developed from capillary filled-bulb thermostats and contactors in the 1970s to networked computerized controls in the 1990s, in large systems that require centralized control and monitoring. [ 5 ]
One paper projected that between 2000 and 2010 trace heating would account for 100 megawatts of connected load, and that trace heating and insulation would account for up to CAD $700 million capital investment in the Alberta oil sands. [ 6 ]
International standards applied in the design and installation of electric trace heating systems include IEEE standards 515 and 622, British standard BS 6351, and IEC standard 60208. [ 5 ]
The most common pipe trace heating applications include: [ citation needed ]
Other uses of trace heating cables include: [ citation needed ]
Every pipe or vessel is subject to heat loss when its temperature is greater than ambient temperature. Thermal insulation reduces the rate of heat loss but does not eliminate it. Trace heating maintains the temperature above freezing by balancing the heat lost with the heat supplied. Normally, a thermostat energises the trace heating when it measures the temperature falling below a set value – usually between 3 °C and 5 °C, often referred to as the 'setpoint'. The thermostat de-energises the trace heating when it measures the temperature rising past another set value – usually 2 °C higher than the setpoint.
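As a toy illustration of the setpoint/deadband behaviour just described (a hypothetical sketch in Python – the function name, temperatures, and margins are invented for this example and do not describe any real controller):

```python
# Hysteresis (deadband) control: energise below the setpoint,
# de-energise a fixed margin above it, hold the state in between.
def thermostat_step(temp_c, heating_on, setpoint_c=4.0, deadband_c=2.0):
    """Return the new heater state given the measured pipe temperature."""
    if temp_c < setpoint_c:
        return True                       # energise trace heating
    if temp_c >= setpoint_c + deadband_c:
        return False                      # de-energise above setpoint + deadband
    return heating_on                     # inside the deadband: keep current state

state = False
for t in [5.0, 3.5, 4.5, 6.1, 5.9]:
    state = thermostat_step(t, state)
    print(t, state)   # 5.0 False, 3.5 True, 4.5 True (held), 6.1 False, 5.9 False (held)
```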
Heat trace cable may be placed on roofs or in gutters to melt ice during winter months. When used in gutters the cable is not meant to keep the gutters free of ice or snow, but only to provide a free path for the melted water to get off the roof and down the downspout or drain piping.
Hot water service piping can also be traced, so that a circulating system is not needed to provide hot water at outlets. The combination of trace heating and the correct thermal insulation for the operating ambient temperature maintains a thermal balance where the heat output from the trace heating matches the heat loss from the pipe. Self-limiting or regulating heating tapes have been developed and are very successful in this application.
A similar principle can be applied to process piping carrying fluids which may congeal at low temperatures, for example, tars or molten sulfur. High-temperature trace heating elements can prevent blockage of pipes.
Industrial applications for trace heating include the chemical industry , oil refineries , nuclear power plants , and food factories. For example, wax is a material which starts to solidify below 70 °C, which is usually far above the temperature of the surrounding air. Therefore, the pipeline must be provided with an external source of heat to prevent the pipe and the material inside it from cooling down. Trace heating can also be done with steam, but this requires a source of steam and may be inconvenient to install and operate.
In laboratories, researchers working in the field of materials science use trace heating to heat a sample isotropically. They may use trace heating in conjunction with a variac , so as to control the heat energy delivered. This is an effective means of slowly heating an object to measure thermodynamic properties such as thermal expansion.
As heating a thick fluid decreases its viscosity, it reduces losses occurring in a pipe. Therefore, the net positive suction head (pressure difference) available can be raised, decreasing the likelihood of cavitation when pumping. However, care must be taken not to increase the vapour pressure of the fluid too much, as this would have a strong side effect on the available head, possibly outweighing any benefit. [ 7 ]
A series heating cable is made of a run of high-resistance wire, insulated and often enclosed in a protective jacket. It is powered at a specific voltage, and the resistance heat of the wire creates heat. The downside of these types of heaters is that if they are crossed over themselves they can overheat and burn out; they are provided in specific lengths and cannot be shortened in the field; and a break anywhere along the line will result in a failure of the entire cable. The upside is that they are typically inexpensive (if plastic style heaters) or, as is true with mineral insulated heating cables, they can be exposed to very high temperatures. Mineral insulated heating cables are good for maintaining high temperatures on process lines or maintaining lower temperatures on lines which can get extremely hot, such as high temperature steam lines. [ 2 ]
Typically, series elements are used for long pipeline process heating, for example long oil pipelines and quayside off-load pipes at oil refineries.
A constant wattage cable is composed of multiple constant electric power zones and is made by wrapping a fine heating element around two insulated parallel bus wires, then on alternating sides of the conductors a notch is made in the insulation. The heating element is then normally soldered to the exposed conductor wire which creates a small heating circuit; this is then repeated along the length of the cable. There is then an inner jacket which separates the bus wires from the grounding braid. In commercial and industrial cables, an additional outer jacket of rubber or Teflon is applied. [ 2 ]
The benefits of this system over series elements are that should one small element fail, the rest of the system will continue to operate; on the other hand, damaged sections of cable (usually a 3 ft span) will stay cold and possibly lead to freeze-ups on that section. This cable can also be cut to length in the field due to its parallel circuitry; however, because the circuit only runs to the last zone on the cable, when installing on site one normally has to install slightly beyond the end of the pipe work. When installing constant wattage, or any heat tracing cable, it is important not to overlap or touch the cable to itself, as it will be subject to overheating and burnout. Constant wattage cable is always installed with a thermostat to control the power output of the cable, making it a very reliable heating source.
The disadvantage of this cable is that most constant wattage cables do not have soldered connections to the bus wires but press-on type contacts, and are therefore more prone to cold circuits due to loose connections caused by cable manipulation and installation.
Self-regulating heat tracing tapes are cables whose resistance varies with temperature - low resistance for temperatures below the cable set point and high resistance for temperatures above the cable set point. When the cable temperature reaches the set point, the resistance reaches a high point, resulting in no more heat being supplied.
These cables use two parallel bus wires which carry electricity but do not create significant heat. They are encased in a semi-conductive polymer loaded with carbon; as the polymer element heats, it allows less current to flow, so the cable is inherently power-saving, delivering heat and power only where and when required by the system. The cables are manufactured and then irradiated; by varying both the carbon content and the radiation dosage, tapes with different output characteristics can be produced. The benefits of this cable are the ability to cut it to length in the field, and that it is more rugged and much more reliable than a constant wattage cable; it cannot overheat itself, so it can be crossed over, although it is bad practice to install tape in this way. Self-regulating and constant wattage heating cables have a specific maximum exposure temperature, which means that if they are subjected to higher temperatures the tape can be damaged beyond repair.
Self-limiting tapes are also subject to higher inrush currents on a cold start-up, similar to an induction motor , so a higher-rated contactor is required.
Trace heat cables may be connected to single-phase or (in groups) to three-phase power supplies. Power is controlled either by a contactor or a solid-state controller. For self-regulating cable, the supply must furnish a large warm-up current if the system is switched on from a cold starting condition. The contactor or controller may include a thermostat if accurate temperature maintenance is required, or may just shut off a freeze-protection system in mild weather.
Electrical heat tracing systems may be required to have earth leakage (ground fault or RCD ) devices for personnel and equipment protection. The system design must minimize leakage current to prevent nuisance tripping; this may limit the length of any individual heating circuit.
Three-phase systems are fed via contactors, similar to a three-phase motor 'direct on line' starter, controlled by a thermostat somewhere along the line. This keeps the temperature constant and prevents the line from overheating or underheating.
If a line freezes because the heating was switched off, it may take some time to thaw using trace heating alone. On three-phase systems, thawing is accelerated by using an autotransformer to supply a higher voltage, and consequently a higher current, making the trace heating elements run hotter. The boost system is usually on a timer and switches back to 'normal' after a set period. | https://en.wikipedia.org/wiki/Trace_heating |
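Because a resistive element's output scales with the square of the applied voltage (P = V²/R), even a modest boost voltage raises heat output substantially. The snippet below works through that arithmetic with hypothetical figures; the 10% boost is an assumption for illustration, not a standard value.

```python
# Heat output of a fixed-resistance trace heating element scales as V^2/R,
# so a modest autotransformer boost gives a more-than-proportional increase
# in heating power. All figures here are hypothetical.
NOMINAL_V = 400.0   # normal line voltage (V), assumed
BOOST_V = 440.0     # boosted voltage (V), a hypothetical 10% step-up
R_ELEMENT = 50.0    # element resistance (ohms), hypothetical

p_normal = NOMINAL_V**2 / R_ELEMENT
p_boost = BOOST_V**2 / R_ELEMENT

print(f"normal: {p_normal:.0f} W, boosted: {p_boost:.0f} W")
print(f"power increase: {100 * (p_boost / p_normal - 1):.0f} %")
# A 10% voltage boost yields a 21% power increase (1.1^2 = 1.21).
```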
In mathematics , there are many kinds of inequalities involving matrices and linear operators on Hilbert spaces . This article covers some important operator inequalities connected with traces of matrices. [ 1 ] [ 2 ] [ 3 ] [ 4 ]
Let H n {\displaystyle \mathbf {H} _{n}} denote the space of Hermitian n × n {\displaystyle n\times n} matrices, H n + {\displaystyle \mathbf {H} _{n}^{+}} denote the set consisting of positive semi-definite n × n {\displaystyle n\times n} Hermitian matrices and H n + + {\displaystyle \mathbf {H} _{n}^{++}} denote the set of positive definite Hermitian matrices. For operators on an infinite dimensional Hilbert space we require that they be trace class and self-adjoint , in which case similar definitions apply, but we discuss only matrices, for simplicity.
For any real-valued function f {\displaystyle f} on an interval I ⊆ R , {\displaystyle I\subseteq \mathbb {R} ,} one may define a matrix function f ( A ) {\displaystyle f(A)} for any operator A ∈ H n {\displaystyle A\in \mathbf {H} _{n}} with eigenvalues λ {\displaystyle \lambda } in I {\displaystyle I} by defining it on the eigenvalues and corresponding projectors P {\displaystyle P} as f ( A ) ≡ ∑ j f ( λ j ) P j , {\displaystyle f(A)\equiv \sum _{j}f(\lambda _{j})P_{j}~,} given the spectral decomposition A = ∑ j λ j P j . {\displaystyle A=\sum _{j}\lambda _{j}P_{j}.}
A function f : I → R {\displaystyle f:I\to \mathbb {R} } defined on an interval I ⊆ R {\displaystyle I\subseteq \mathbb {R} } is said to be operator monotone if for all n , {\displaystyle n,} and all A , B ∈ H n {\displaystyle A,B\in \mathbf {H} _{n}} with eigenvalues in I , {\displaystyle I,} the following holds, A ≥ B ⟹ f ( A ) ≥ f ( B ) , {\displaystyle A\geq B\implies f(A)\geq f(B),} where the inequality A ≥ B {\displaystyle A\geq B} means that the operator A − B ≥ 0 {\displaystyle A-B\geq 0} is positive semi-definite. One may check that f ( A ) = A 2 {\displaystyle f(A)=A^{2}} is, in fact, not operator monotone!
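As a concrete illustration of that last remark, the sketch below (using numpy, with matrices chosen ad hoc for this example) exhibits Hermitian matrices with A ≥ B but A² ≱ B², confirming numerically that t ↦ t² is not operator monotone.

```python
# Numerical counterexample: A >= B does not imply A^2 >= B^2.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0, 0.0],
              [0.0, 0.0]])

def is_psd(M):
    """A Hermitian matrix is positive semi-definite iff all eigenvalues >= 0."""
    return np.all(np.linalg.eigvalsh(M) >= -1e-12)

print(is_psd(A - B))                       # True: A >= B
print(is_psd(A @ A - B @ B))               # False: A^2 - B^2 is not PSD
print(np.linalg.eigvalsh(A @ A - B @ B))   # one eigenvalue is negative
```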
A function f : I → R {\displaystyle f:I\to \mathbb {R} } is said to be operator convex if for all n {\displaystyle n} and all A , B ∈ H n {\displaystyle A,B\in \mathbf {H} _{n}} with eigenvalues in I , {\displaystyle I,} and 0 < λ < 1 {\displaystyle 0<\lambda <1} , the following holds f ( λ A + ( 1 − λ ) B ) ≤ λ f ( A ) + ( 1 − λ ) f ( B ) . {\displaystyle f(\lambda A+(1-\lambda )B)\leq \lambda f(A)+(1-\lambda )f(B).} Note that the operator λ A + ( 1 − λ ) B {\displaystyle \lambda A+(1-\lambda )B} has eigenvalues in I , {\displaystyle I,} since A {\displaystyle A} and B {\displaystyle B} have eigenvalues in I . {\displaystyle I.}
A function f {\displaystyle f} is operator concave if − f {\displaystyle -f} is operator convex; that is, the inequality above for f {\displaystyle f} is reversed.
A function g : I × J → R , {\displaystyle g:I\times J\to \mathbb {R} ,} defined on intervals I , J ⊆ R {\displaystyle I,J\subseteq \mathbb {R} } is said to be jointly convex if for all n {\displaystyle n} and all A 1 , A 2 ∈ H n {\displaystyle A_{1},A_{2}\in \mathbf {H} _{n}} with eigenvalues in I {\displaystyle I} and all B 1 , B 2 ∈ H n {\displaystyle B_{1},B_{2}\in \mathbf {H} _{n}} with eigenvalues in J , {\displaystyle J,} and any 0 ≤ λ ≤ 1 {\displaystyle 0\leq \lambda \leq 1} the following holds g ( λ A 1 + ( 1 − λ ) A 2 , λ B 1 + ( 1 − λ ) B 2 ) ≤ λ g ( A 1 , B 1 ) + ( 1 − λ ) g ( A 2 , B 2 ) . {\displaystyle g(\lambda A_{1}+(1-\lambda )A_{2},\lambda B_{1}+(1-\lambda )B_{2})~\leq ~\lambda g(A_{1},B_{1})+(1-\lambda )g(A_{2},B_{2}).}
A function g {\displaystyle g} is jointly concave if − g {\displaystyle -g} is jointly convex, i.e. the inequality above for g {\displaystyle g} is reversed.
Given a function f : R → R , {\displaystyle f:\mathbb {R} \to \mathbb {R} ,} the associated trace function on H n {\displaystyle \mathbf {H} _{n}} is given by A ↦ Tr f ( A ) = ∑ j f ( λ j ) , {\displaystyle A\mapsto \operatorname {Tr} f(A)=\sum _{j}f(\lambda _{j}),} where A {\displaystyle A} has eigenvalues λ {\displaystyle \lambda } and Tr {\displaystyle \operatorname {Tr} } stands for a trace of the operator.
Let f : R → R {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} } be continuous, and let n be any integer . Then, if t ↦ f ( t ) {\displaystyle t\mapsto f(t)} is monotone increasing, so is A ↦ Tr f ( A ) {\displaystyle A\mapsto \operatorname {Tr} f(A)} on H n . Likewise, if t ↦ f ( t ) {\displaystyle t\mapsto f(t)} is convex , so is A ↦ Tr f ( A ) {\displaystyle A\mapsto \operatorname {Tr} f(A)} on H n , and it is strictly convex if f is strictly convex.
See [ 1 ] , for example, for proof and discussion.
For − 1 ≤ p ≤ 0 {\displaystyle -1\leq p\leq 0} , the function f ( t ) = − t p {\displaystyle f(t)=-t^{p}} is operator monotone and operator concave.
For 0 ≤ p ≤ 1 {\displaystyle 0\leq p\leq 1} , the function f ( t ) = t p {\displaystyle f(t)=t^{p}} is operator monotone and operator concave.
For 1 ≤ p ≤ 2 {\displaystyle 1\leq p\leq 2} , the function f ( t ) = t p {\displaystyle f(t)=t^{p}} is operator convex. Furthermore, f ( t ) = log t {\displaystyle f(t)=\log t} is operator concave and operator monotone, while f ( t ) = t log t {\displaystyle f(t)=t\log t} is operator convex.
The original proof of this theorem is due to K. Löwner , who gave a necessary and sufficient condition for f to be operator monotone. [ 5 ] An elementary proof of the theorem is discussed in [ 1 ] and a more general version of it in [ 6 ] .
For all Hermitian n × n matrices A and B and all differentiable convex functions f : R → R {\displaystyle f:\mathbb {R} \rightarrow \mathbb {R} } with derivative f ' , or for all positive-definite Hermitian n × n matrices A and B and all differentiable convex functions f : (0,∞) → R {\displaystyle \mathbb {R} } , the following inequality holds: Tr [ f ( A ) − f ( B ) − ( A − B ) f ′ ( B ) ] ≥ 0 . {\displaystyle \operatorname {Tr} [f(A)-f(B)-(A-B)f'(B)]\geq 0~.}
In either case, if f is strictly convex, equality holds if and only if A = B .
A popular choice in applications is f ( t ) = t log t , see below.
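A quick numerical sanity check of Klein's inequality with this choice f(t) = t log t is sketched below; the random positive-definite matrices are generated ad hoc for the test, and matrix functions are evaluated through the eigendecomposition, as in the definition of f(A) above.

```python
# Numerical check of Klein's inequality for f(t) = t*log(t), f'(t) = log(t) + 1.
import numpy as np

rng = np.random.default_rng(0)

def random_pd(n):
    """Random positive-definite Hermitian matrix (for testing only)."""
    X = rng.normal(size=(n, n))
    return X @ X.T + n * np.eye(n)

def mat_fun(M, f):
    """Apply a scalar function to a Hermitian matrix via its eigensystem."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(f(w)) @ V.T

f = lambda t: t * np.log(t)
fprime = lambda t: np.log(t) + 1.0

A, B = random_pd(4), random_pd(4)
lhs = np.trace(mat_fun(A, f) - mat_fun(B, f) - (A - B) @ mat_fun(B, fprime))
print(lhs >= 0, lhs)   # Klein's inequality: always True (up to rounding)
```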
Let C = A − B {\displaystyle C=A-B} so that, for t ∈ ( 0 , 1 ) {\displaystyle t\in (0,1)} , B + t C {\displaystyle B+tC} varies from B {\displaystyle B} to A {\displaystyle A} . Define F ( t ) = Tr [ f ( B + t C ) ] {\displaystyle F(t)=\operatorname {Tr} [f(B+tC)]} . By convexity and monotonicity of trace functions, F ( t ) {\displaystyle F(t)} is convex, and so for all t ∈ ( 0 , 1 ) {\displaystyle t\in (0,1)} , F ( t ) ≤ ( 1 − t ) F ( 0 ) + t F ( 1 ) , {\displaystyle F(t)\leq (1-t)F(0)+tF(1),} which is, F ( 1 ) − F ( 0 ) ≥ F ( t ) − F ( 0 ) t , {\displaystyle F(1)-F(0)\geq {\frac {F(t)-F(0)}{t}},} and, in fact, the right hand side is monotone decreasing in t {\displaystyle t} . Taking the limit t → 0 {\displaystyle t\to 0} yields, F ( 1 ) − F ( 0 ) ≥ F ′ ( 0 ) = Tr [ C f ′ ( B ) ] , {\displaystyle F(1)-F(0)\geq F'(0)=\operatorname {Tr} [Cf'(B)],} which with rearrangement and substitution is Klein's inequality: Tr [ f ( A ) − f ( B ) − ( A − B ) f ′ ( B ) ] ≥ 0 . {\displaystyle \operatorname {Tr} [f(A)-f(B)-(A-B)f'(B)]\geq 0~.}
Note that if f ( t ) {\displaystyle f(t)} is strictly convex and C ≠ 0 {\displaystyle C\neq 0} , then F ( t ) {\displaystyle F(t)} is strictly convex. The final assertion follows from this and the fact that F ( t ) − F ( 0 ) t {\displaystyle {\tfrac {F(t)-F(0)}{t}}} is monotone decreasing in t {\displaystyle t} .
In 1965, S. Golden [ 7 ] and C.J. Thompson [ 8 ] independently discovered that for any matrices A , B ∈ H n {\displaystyle A,B\in \mathbf {H} _{n}} , Tr e A + B ≤ Tr ( e A e B ) . {\displaystyle \operatorname {Tr} e^{A+B}\leq \operatorname {Tr} (e^{A}e^{B}).} This inequality can be generalized for three operators: [ 9 ] for non-negative operators A , B , C ∈ H n + {\displaystyle A,B,C\in \mathbf {H} _{n}^{+}} , Tr e ln A − ln B + ln C ≤ ∫ 0 ∞ Tr [ A ( B + t ) − 1 C ( B + t ) − 1 ] d t . {\displaystyle \operatorname {Tr} e^{\ln A-\ln B+\ln C}\leq \int _{0}^{\infty }\operatorname {Tr} \left[A(B+t)^{-1}C(B+t)^{-1}\right]\mathrm {d} t.}
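The two-operator inequality is easy to probe numerically; the sketch below draws random Hermitian matrices and compares the two sides using scipy's matrix exponential.

```python
# Numerical check of the Golden-Thompson inequality:
# Tr exp(A + B) <= Tr(exp(A) exp(B)) for Hermitian A, B.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)

def random_hermitian(n):
    X = rng.normal(size=(n, n))
    return (X + X.T) / 2

A, B = random_hermitian(4), random_hermitian(4)
lhs = np.trace(expm(A + B))
rhs = np.trace(expm(A) @ expm(B))
print(lhs <= rhs + 1e-10, lhs, rhs)   # inequality holds
```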
Let R , F ∈ H n {\displaystyle R,F\in \mathbf {H} _{n}} be such that Tr e R = 1. Defining g = Tr Fe R , we have Tr e F + R ≥ e g . {\displaystyle \operatorname {Tr} e^{F+R}\geq e^{g}.}
The proof of this inequality follows from the above combined with Klein's inequality . Take f ( x ) = exp( x ), A = R + F , and B = R + gI . [ 10 ]
Let H {\displaystyle H} be a self-adjoint operator such that e − H {\displaystyle e^{-H}} is trace class . Then for any γ ≥ 0 {\displaystyle \gamma \geq 0} with Tr γ = 1 , {\displaystyle \operatorname {Tr} \gamma =1,} Tr γ H + Tr γ ln γ ≥ − ln Tr e − H , {\displaystyle \operatorname {Tr} \gamma H+\operatorname {Tr} \gamma \ln \gamma \geq -\ln \operatorname {Tr} e^{-H},} with equality if and only if γ = exp ( − H ) / Tr exp ( − H ) . {\displaystyle \gamma =\exp(-H)/\operatorname {Tr} \exp(-H).}
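A numerical illustration of the variational principle is sketched below: for a random Hermitian H and a random density matrix γ, the free-energy functional is never smaller than −ln Tr e^{−H}, with equality at the Gibbs state.

```python
# Numerical check of the Gibbs variational principle:
# Tr(gamma H) + Tr(gamma ln gamma) >= -ln Tr exp(-H).
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)
n = 4

X = rng.normal(size=(n, n))
H = (X + X.T) / 2                       # random Hermitian "Hamiltonian"

Y = rng.normal(size=(n, n))
gamma = Y @ Y.T + 0.01 * np.eye(n)      # strictly positive definite
gamma /= np.trace(gamma)                # random density matrix (trace 1)

free_energy = np.trace(gamma @ H) + np.trace(gamma @ logm(gamma))
bound = -np.log(np.trace(expm(-H)))
print(free_energy >= bound - 1e-10)     # True

gibbs = expm(-H) / np.trace(expm(-H))   # the equality case
fe_gibbs = np.trace(gibbs @ H) + np.trace(gibbs @ logm(gibbs))
print(np.isclose(fe_gibbs, bound))      # True
```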
The following theorem was proved by E. H. Lieb in. [ 9 ] It proves and generalizes a conjecture of E. P. Wigner , M. M. Yanase , and Freeman Dyson . [ 11 ] Six years later other proofs were given by T. Ando [ 12 ] and B. Simon, [ 3 ] and several more have been given since then.
For all m × n {\displaystyle m\times n} matrices K {\displaystyle K} , and all q {\displaystyle q} and r {\displaystyle r} such that 0 ≤ q ≤ 1 {\displaystyle 0\leq q\leq 1} and 0 ≤ r ≤ 1 {\displaystyle 0\leq r\leq 1} , with q + r ≤ 1 {\displaystyle q+r\leq 1} , the real valued map on H m + × H n + {\displaystyle \mathbf {H} _{m}^{+}\times \mathbf {H} _{n}^{+}} given by F ( A , B , K ) = Tr ( K ∗ A q K B r ) {\displaystyle F(A,B,K)=\operatorname {Tr} (K^{*}A^{q}KB^{r})} is jointly concave in ( A , B ) {\displaystyle (A,B)} .
Here K ∗ {\displaystyle K^{*}} stands for the adjoint operator of K . {\displaystyle K.}
For a fixed Hermitian matrix L ∈ H n {\displaystyle L\in \mathbf {H} _{n}} , the function f ( A ) = Tr exp { L + ln A } {\displaystyle f(A)=\operatorname {Tr} \exp\{L+\ln A\}} is concave on H n + + {\displaystyle \mathbf {H} _{n}^{++}} .
The theorem and proof are due to E. H. Lieb, [ 9 ] Thm 6, where he obtains this theorem as a corollary of Lieb's concavity Theorem.
The most direct proof is due to H. Epstein; [ 13 ] see M.B. Ruskai's papers [ 14 ] [ 15 ] for a review of this argument.
T. Ando's proof [ 12 ] of Lieb's concavity theorem led to the following significant complement to it:
For all m × n {\displaystyle m\times n} matrices K {\displaystyle K} , and all 1 ≤ q ≤ 2 {\displaystyle 1\leq q\leq 2} and 0 ≤ r ≤ 1 {\displaystyle 0\leq r\leq 1} with q − r ≥ 1 {\displaystyle q-r\geq 1} , the real valued map on H m + + × H n + + {\displaystyle \mathbf {H} _{m}^{++}\times \mathbf {H} _{n}^{++}} given by ( A , B ) ↦ Tr ( K ∗ A q K B − r ) {\displaystyle (A,B)\mapsto \operatorname {Tr} (K^{*}A^{q}KB^{-r})} is convex.
For two operators A , B ∈ H n + + {\displaystyle A,B\in \mathbf {H} _{n}^{++}} define the following map R ( A ∥ B ) := Tr ( A log A ) − Tr ( A log B ) . {\displaystyle R(A\parallel B):=\operatorname {Tr} (A\log A)-\operatorname {Tr} (A\log B).}
For density matrices ρ {\displaystyle \rho } and σ {\displaystyle \sigma } , the map R ( ρ ∥ σ ) = S ( ρ ∥ σ ) {\displaystyle R(\rho \parallel \sigma )=S(\rho \parallel \sigma )} is the Umegaki's quantum relative entropy .
Note that the non-negativity of R ( A ∥ B ) {\displaystyle R(A\parallel B)} follows from Klein's inequality with f ( t ) = t log t {\displaystyle f(t)=t\log t} .
The map R ( A ∥ B ) : H n + + × H n + + → R {\displaystyle R(A\parallel B):\mathbf {H} _{n}^{++}\times \mathbf {H} _{n}^{++}\rightarrow \mathbf {R} } is jointly convex.
For all 0 < p < 1 {\displaystyle 0<p<1} , ( A , B ) ↦ Tr ( B 1 − p A p ) {\displaystyle (A,B)\mapsto \operatorname {Tr} (B^{1-p}A^{p})} is jointly concave, by Lieb's concavity theorem , and thus ( A , B ) ↦ 1 p − 1 ( Tr ( B 1 − p A p ) − Tr A ) {\displaystyle (A,B)\mapsto {\frac {1}{p-1}}\left(\operatorname {Tr} (B^{1-p}A^{p})-\operatorname {Tr} A\right)} is convex. But lim p → 1 1 p − 1 ( Tr ( B 1 − p A p ) − Tr A ) = R ( A ∥ B ) , {\displaystyle \lim _{p\to 1}{\frac {1}{p-1}}\left(\operatorname {Tr} (B^{1-p}A^{p})-\operatorname {Tr} A\right)=R(A\parallel B),} and convexity is preserved in the limit.
The proof is due to G. Lindblad. [ 16 ]
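The sketch below probes this joint convexity numerically: it computes Umegaki's relative entropy via eigendecompositions and checks the convexity inequality at the midpoint of two random pairs of density matrices.

```python
# Numerical probe of the joint convexity of quantum relative entropy
# R(A||B) = Tr(A log A) - Tr(A log B).
import numpy as np

rng = np.random.default_rng(3)

def random_density(n):
    Y = rng.normal(size=(n, n))
    rho = Y @ Y.T + 0.1 * np.eye(n)    # strictly positive definite
    return rho / np.trace(rho)

def mat_log(M):
    w, V = np.linalg.eigh(M)
    return V @ np.diag(np.log(w)) @ V.T

def rel_entropy(A, B):
    return np.trace(A @ (mat_log(A) - mat_log(B)))

A1, B1 = random_density(3), random_density(3)
A2, B2 = random_density(3), random_density(3)
lam = 0.5
mid = rel_entropy(lam * A1 + (1 - lam) * A2, lam * B1 + (1 - lam) * B2)
avg = lam * rel_entropy(A1, B1) + (1 - lam) * rel_entropy(A2, B2)
print(mid <= avg + 1e-10)   # joint convexity holds
```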
The operator version of Jensen's inequality is due to C. Davis. [ 17 ]
A continuous, real function f {\displaystyle f} on an interval I {\displaystyle I} satisfies Jensen's Operator Inequality if the following holds f ( ∑ k A k ∗ X k A k ) ≤ ∑ k A k ∗ f ( X k ) A k , {\displaystyle f\left(\sum _{k}A_{k}^{*}X_{k}A_{k}\right)\leq \sum _{k}A_{k}^{*}f(X_{k})A_{k},} for operators { A k } k {\displaystyle \{A_{k}\}_{k}} with ∑ k A k ∗ A k = 1 {\displaystyle \sum _{k}A_{k}^{*}A_{k}=1} and for self-adjoint operators { X k } k {\displaystyle \{X_{k}\}_{k}} with spectrum on I {\displaystyle I} .
See, [ 17 ] [ 18 ] for the proof of the following two theorems.
Let f be a continuous function defined on an interval I and let m and n be natural numbers. If f is convex, we then have the inequality Tr ( f ( ∑ k = 1 n A k ∗ X k A k ) ) ≤ Tr ( ∑ k = 1 n A k ∗ f ( X k ) A k ) , {\displaystyle \operatorname {Tr} \left(f\left(\sum _{k=1}^{n}A_{k}^{*}X_{k}A_{k}\right)\right)\leq \operatorname {Tr} \left(\sum _{k=1}^{n}A_{k}^{*}f(X_{k})A_{k}\right),} for all ( X 1 , ... , X n ) self-adjoint m × m matrices with spectra contained in I and all ( A 1 , ... , A n ) of m × m matrices with ∑ k = 1 n A k ∗ A k = 1. {\displaystyle \sum _{k=1}^{n}A_{k}^{*}A_{k}=1.} Conversely, if the above inequality is satisfied for some n and m , where n > 1, then f is convex.
For a continuous function f {\displaystyle f} defined on an interval I {\displaystyle I} the following conditions are equivalent: f {\displaystyle f} is operator convex; f ( ∑ k = 1 n A k ∗ X k A k ) ≤ ∑ k = 1 n A k ∗ f ( X k ) A k {\displaystyle f\left(\sum _{k=1}^{n}A_{k}^{*}X_{k}A_{k}\right)\leq \sum _{k=1}^{n}A_{k}^{*}f(X_{k})A_{k}} for all ( X 1 , … , X n ) {\displaystyle (X_{1},\ldots ,X_{n})} bounded, self-adjoint operators on an arbitrary Hilbert space H {\displaystyle {\mathcal {H}}} with spectra contained in I {\displaystyle I} and all ( A 1 , … , A n ) {\displaystyle (A_{1},\ldots ,A_{n})} on H {\displaystyle {\mathcal {H}}} with ∑ k = 1 n A k ∗ A k = 1 {\displaystyle \sum _{k=1}^{n}A_{k}^{*}A_{k}=1} ; and f ( V ∗ X V ) ≤ V ∗ f ( X ) V {\displaystyle f(V^{*}XV)\leq V^{*}f(X)V} for every isometry V {\displaystyle V} on an infinite-dimensional Hilbert space and every self-adjoint operator X {\displaystyle X} with spectrum in I {\displaystyle I} .
E. H. Lieb and W. E. Thirring proved the following inequality in [ 19 ] 1976: For any A ≥ 0 , {\displaystyle A\geq 0,} B ≥ 0 {\displaystyle B\geq 0} and r ≥ 1 , {\displaystyle r\geq 1,} Tr ( ( B A B ) r ) ≤ Tr ( B r A r B r ) . {\displaystyle \operatorname {Tr} ((BAB)^{r})~\leq ~\operatorname {Tr} (B^{r}A^{r}B^{r}).}
In 1990 [ 20 ] H. Araki generalized the above inequality to the following one: For any A ≥ 0 , {\displaystyle A\geq 0,} B ≥ 0 {\displaystyle B\geq 0} and q ≥ 0 , {\displaystyle q\geq 0,} Tr ( ( B A B ) r q ) ≤ Tr ( ( B r A r B r ) q ) , {\displaystyle \operatorname {Tr} ((BAB)^{rq})~\leq ~\operatorname {Tr} ((B^{r}A^{r}B^{r})^{q}),} for r ≥ 1 , {\displaystyle r\geq 1,} and Tr ( ( B r A r B r ) q ) ≤ Tr ( ( B A B ) r q ) , {\displaystyle \operatorname {Tr} ((B^{r}A^{r}B^{r})^{q})~\leq ~\operatorname {Tr} ((BAB)^{rq}),} for 0 ≤ r ≤ 1. {\displaystyle 0\leq r\leq 1.}
There are several other inequalities close to the Lieb–Thirring inequality, such as the following: [ 21 ] for any A ≥ 0 , {\displaystyle A\geq 0,} B ≥ 0 {\displaystyle B\geq 0} and α ∈ [ 0 , 1 ] , {\displaystyle \alpha \in [0,1],} Tr ( B A α B B A 1 − α B ) ≤ Tr ( B 2 A B 2 ) , {\displaystyle \operatorname {Tr} (BA^{\alpha }BBA^{1-\alpha }B)~\leq ~\operatorname {Tr} (B^{2}AB^{2}),} and even more generally: [ 22 ] for any A ≥ 0 , {\displaystyle A\geq 0,} B ≥ 0 , {\displaystyle B\geq 0,} r ≥ 1 / 2 {\displaystyle r\geq 1/2} and c ≥ 0 , {\displaystyle c\geq 0,} Tr ( ( B A B 2 c A B ) r ) ≤ Tr ( ( B c + 1 A 2 B c + 1 ) r ) . {\displaystyle \operatorname {Tr} ((BAB^{2c}AB)^{r})~\leq ~\operatorname {Tr} ((B^{c+1}A^{2}B^{c+1})^{r}).} The above inequality generalizes the previous one, as can be seen by exchanging A {\displaystyle A} by B 2 {\displaystyle B^{2}} and B {\displaystyle B} by A ( 1 − α ) / 2 {\displaystyle A^{(1-\alpha )/2}} with α = 2 c / ( 2 c + 2 ) {\displaystyle \alpha =2c/(2c+2)} and using the cyclicity of the trace, leading to Tr ( ( B A α B B A 1 − α B ) r ) ≤ Tr ( ( B 2 A B 2 ) r ) . {\displaystyle \operatorname {Tr} ((BA^{\alpha }BBA^{1-\alpha }B)^{r})~\leq ~\operatorname {Tr} ((B^{2}AB^{2})^{r}).}
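For integer exponents the Lieb–Thirring inequality can be checked directly with matrix arithmetic; the sketch below does so for r = 2 with random positive semi-definite matrices.

```python
# Numerical check of the Lieb-Thirring inequality for r = 2:
# Tr((BAB)^2) <= Tr(B^2 A^2 B^2).
import numpy as np

rng = np.random.default_rng(4)

def random_psd(n):
    X = rng.normal(size=(n, n))
    return X @ X.T

A, B = random_psd(4), random_psd(4)
lhs = np.trace(np.linalg.matrix_power(B @ A @ B, 2))
rhs = np.trace(B @ B @ A @ A @ B @ B)
print(lhs <= rhs + 1e-8, lhs, rhs)
```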
Additionally, building upon the Lieb-Thirring inequality the following inequality was derived: [ 23 ] For any A , B ∈ H n , T ∈ C n × n {\displaystyle A,B\in \mathbf {H} _{n},T\in \mathbb {C} ^{n\times n}} and all 1 ≤ p , q ≤ ∞ {\displaystyle 1\leq p,q\leq \infty } with 1 / p + 1 / q = 1 {\displaystyle 1/p+1/q=1} , it holds that | Tr ( T A T ∗ B ) | ≤ Tr ( T ∗ T | A | p ) 1 p Tr ( T T ∗ | B | q ) 1 q . {\displaystyle |\operatorname {Tr} (TAT^{*}B)|~\leq ~\operatorname {Tr} (T^{*}T|A|^{p})^{\frac {1}{p}}\operatorname {Tr} (TT^{*}|B|^{q})^{\frac {1}{q}}.}
E. Effros in [ 24 ] proved the following theorem.
If f ( x ) {\displaystyle f(x)} is an operator convex function, and L {\displaystyle L} and R {\displaystyle R} are commuting bounded linear operators, i.e. the commutator [ L , R ] = L R − R L = 0 {\displaystyle [L,R]=LR-RL=0} , the perspective g ( L , R ) := f ( L R − 1 ) R {\displaystyle g(L,R):=f(LR^{-1})R} is jointly convex, i.e. if L = λ L 1 + ( 1 − λ ) L 2 {\displaystyle L=\lambda L_{1}+(1-\lambda )L_{2}} and R = λ R 1 + ( 1 − λ ) R 2 {\displaystyle R=\lambda R_{1}+(1-\lambda )R_{2}} with [ L i , R i ] = 0 {\displaystyle [L_{i},R_{i}]=0} (i=1,2), 0 ≤ λ ≤ 1 {\displaystyle 0\leq \lambda \leq 1} , then g ( L , R ) ≤ λ g ( L 1 , R 1 ) + ( 1 − λ ) g ( L 2 , R 2 ) . {\displaystyle g(L,R)\leq \lambda g(L_{1},R_{1})+(1-\lambda )g(L_{2},R_{2}).}
Ebadian et al. later extended the inequality to the case where L {\displaystyle L} and R {\displaystyle R} do not commute . [ 25 ]
Von Neumann's trace inequality , named after its originator John von Neumann , states that for any n × n {\displaystyle n\times n} complex matrices A {\displaystyle A} and B {\displaystyle B} with singular values α 1 ≥ α 2 ≥ ⋯ ≥ α n {\displaystyle \alpha _{1}\geq \alpha _{2}\geq \cdots \geq \alpha _{n}} and β 1 ≥ β 2 ≥ ⋯ ≥ β n {\displaystyle \beta _{1}\geq \beta _{2}\geq \cdots \geq \beta _{n}} respectively, [ 26 ] | Tr ( A B ) | ≤ ∑ i = 1 n α i β i , {\displaystyle |\operatorname {Tr} (AB)|~\leq ~\sum _{i=1}^{n}\alpha _{i}\beta _{i}\,,} with equality if and only if A {\displaystyle A} and B † {\displaystyle B^{\dagger }} share singular vectors. [ 27 ]
A simple corollary to this is the following result: [ 28 ] For Hermitian n × n {\displaystyle n\times n} positive semi-definite complex matrices A {\displaystyle A} and B {\displaystyle B} where now the eigenvalues are sorted decreasingly ( a 1 ≥ a 2 ≥ ⋯ ≥ a n {\displaystyle a_{1}\geq a_{2}\geq \cdots \geq a_{n}} and b 1 ≥ b 2 ≥ ⋯ ≥ b n , {\displaystyle b_{1}\geq b_{2}\geq \cdots \geq b_{n},} respectively), ∑ i = 1 n a i b n − i + 1 ≤ Tr ( A B ) ≤ ∑ i = 1 n a i b i . {\displaystyle \sum _{i=1}^{n}a_{i}b_{n-i+1}~\leq ~\operatorname {Tr} (AB)~\leq ~\sum _{i=1}^{n}a_{i}b_{i}\,.} | https://en.wikipedia.org/wiki/Trace_inequality |
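Both von Neumann's inequality and its corollary are straightforward to verify numerically; the sketch below compares |Tr(AB)| against the sum of products of sorted singular values for random complex matrices.

```python
# Numerical check of von Neumann's trace inequality:
# |Tr(AB)| <= sum_i alpha_i * beta_i (singular values sorted decreasingly).
import numpy as np

rng = np.random.default_rng(5)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))

alpha = np.linalg.svd(A, compute_uv=False)   # returned in decreasing order
beta = np.linalg.svd(B, compute_uv=False)
lhs = abs(np.trace(A @ B))
rhs = np.sum(alpha * beta)
print(lhs <= rhs + 1e-10, lhs, rhs)
```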
Trace metals are the metals subset of trace elements ; that is, metals normally present in small but measurable amounts in animal and plant cells and tissues. Some of these trace metals are a necessary part of nutrition and physiology . Some biometals are trace metals. Ingestion of, or exposure to, excessive quantities can be toxic . However, insufficient plasma or tissue levels of certain trace metals can cause pathology , as is the case with iron.
Trace metals within the human body include iron , lithium , zinc , copper , chromium , nickel , cobalt , vanadium , molybdenum , manganese and others. [ 1 ] [ 2 ] [ 3 ]
Some of the trace metals are needed by living organisms to function properly and are depleted through the expenditure of energy by various metabolic processes of living organisms. They are replenished in animals through diet as well as environmental exposure, and in plants through the uptake of nutrients from the soil in which the plant grows. Human vitamin pills and plant fertilizers can be a source of trace metals.
Trace metals are sometimes referred to as trace elements, although the latter is a broader category that also includes minerals. See also Dietary mineral . Trace elements are required by the body for specific functions; sources include vitamins, sports drinks, and fresh fruits and vegetables. Taken in excessive amounts, trace elements can cause problems. For example, fluorine is required for the formation of bones and enamel on teeth, but when taken in excessive amounts it can cause a disease called fluorosis, in which bone deformations and yellowing of teeth are seen. Fluorine can occur naturally in ground water in some areas.
Iron, roughly 5 grams of which are present in the human body, is the most abundant trace metal. [ 1 ] It is absorbed in the intestine as heme or non-heme iron depending on the food source. Heme iron is derived from the digestion of hemoproteins in meat. [ 4 ] Non-heme iron is mainly derived from plants and exists as iron(II) or iron(III) ions. [ 4 ]
Iron is essential for more than 500 hemeproteins, including hemoglobin and myoglobin , which account for 80% of iron usage. [ 1 ] The other 20% is present in ferritin , hemosiderin , [ 1 ] iron-sulfur (Fe/S) proteins such as ferrochelatase , and more.
A relatively non-toxic metal to humans and the second most abundant trace metal, zinc is present in the body at 2-3 grams. [ 1 ] It can enter the body through inhalation, skin absorption, and ingestion, [ 5 ] with ingestion being the most common route. The mucosal cells of the digestive tract contain metallothionein proteins that store zinc ions. [ 1 ]
Nearly 90% of zinc is found in the bones, muscles, [ 5 ] and vesicles in the brain. [ 1 ] Zinc is a cofactor in hundreds of enzyme reactions and a major component of zinc finger proteins.
Copper is the third most abundant trace metal in the human body. [ 1 ]
It is found in cytochrome c oxidase , a protein necessary for the electron transport chain in mitochondria . [ 1 ] | https://en.wikipedia.org/wiki/Trace_metal |
Trace metal stable isotope biogeochemistry is the study of the distribution and relative abundances of trace metal isotopes in order to better understand the biological, geological, and chemical processes occurring in an environment. Trace metals are elements such as iron , magnesium , copper , and zinc that occur at low levels in the environment. Trace metals are critically important in biology and are involved in many processes that allow organisms to grow and generate energy. In addition, trace metals are constituents of numerous rocks and minerals, thus serving as an important component of the geosphere. Both stable and radioactive isotopes of trace metals exist, but this article focuses on those that are stable. Isotopic variations of trace metals in samples are used as isotopic fingerprints to elucidate the processes occurring in an environment and answer questions relating to biology, geochemistry, and medicine.
In order to study trace metal stable isotope biogeochemistry, it is necessary to compare the relative abundances of isotopes of trace metals in a given biological, geological, or chemical pool to a standard (discussed individually for each isotope system below) and monitor how those relative abundances change as a result of various biogeochemical processes. Conventional notations used to mathematically describe isotope abundances, as exemplified here for 56 Fe, include the isotope ratio ( 56 R), fractional abundance ( 56 F) and delta notation (δ 56 Fe). Furthermore, as different biogeochemical processes vary the relative abundances of the isotopes of a given trace metal, different reaction pools or substances will become enriched or depleted in specific isotopes. This partial separation of isotopes between different pools is termed isotope fractionation , and is mathematically described by fractionation factors α or ε (which express the difference in isotope ratio between two pools), or by "cap delta" (Δ; the difference between two δ values). For a more complete description of these notations, see the isotope notation section in Hydrogen isotope biogeochemistry .
In nature, variations in isotopic ratios of trace metals on the order of a few tenths to several ‰ are observed within and across diverse environments spanning the geosphere, hydrosphere and biosphere. A complete understanding of all processes that fractionate trace metal isotopes is presently lacking, but in general, isotopes of trace metals are fractionated during various chemical and biological processes due to kinetic and equilibrium isotope effects .
Certain isotopes of trace metals are preferentially oxidized or reduced; thus, transitions between redox species of the metal ions (e.g., Fe 2+ → Fe 3+ ) are fractionating, resulting in different isotopic compositions between the different redox pools in the environment. [ 1 ] Additionally, at high temperatures, metals ions can evaporate (and subsequently condense upon cooling), and the relative differences in isotope masses of a given heavy metal leads to fractionation during these evaporation and condensation processes. [ 2 ] Diffusion of isotopes through a solution or material can also result in fractionations, as the lighter mass isotopes are able to diffuse at a faster rate. [ 3 ] Additionally, isotopes can have slight variations in their solubility and other chemical and physical properties, which can also drive fractionation.
In sediments, oceans, and rivers, distinct trace metal isotope ratios exist due to biological processes such as metal ion uptake and abiotic processes such as adsorption to particulate matter that preferentially remove certain isotopes. [ 4 ] The trace metal isotopic composition of a given organism results from a combination of the isotopic compositions of source material (i.e., food and water) and any fractionations imparted during metal ion uptake, translocation and processing inside cells.
Stable isotope ratios of trace metals can be used to answer a variety of questions spanning diverse fields, including oceanography, geochemistry, biology, medicine, anthropology and astronomy. In addition to their modern applications, trace metal isotopic compositions can provide insight into ancient biogeochemical processes that operated on Earth. [ 5 ] These signatures arise because the processes that form and modify samples are recorded in the trace metal isotopic compositions of the samples. By analyzing and understanding trace metal isotopic compositions in biological, chemical or geological materials, one can answer questions such as the sources of nutrients for phytoplankton in the ocean, processes that drove the formation of geologic structures, the diets of modern or ancient organisms, and accretionary processes that took place in the early Solar System. [ 6 ] [ 7 ] [ 8 ] Trace metal stable isotope biogeochemistry is still an emerging field, yet each trace metal isotope system has clear, powerful applications to diverse and important questions. Important heavy metal isotope systems are discussed (in order of increasing atomic mass) in the following sections.
Naturally occurring iron has four stable isotopes , 54 Fe, 56 Fe, 57 Fe, and 58 Fe.
Stable iron isotopes are described as the relative abundance of each of the stable isotopes with respect to 54 Fe. The standard for iron is elemental iron, IRMM-014, and it is distributed by the Institute for Reference Materials and Measurement. The delta value is compared to this standard, and is defined as:
δ 56 / 54 F e = ( 56 F e / 54 F e ) s a m p l e ( 56 F e / 54 F e ) I R M M 014 − 1 {\displaystyle \delta ^{56/54}Fe={\frac {(^{56}Fe/^{54}Fe)_{sample}}{(^{56}Fe/^{54}Fe)_{IRMM014}}}-1}
Delta values are often reported as per mil values (‰), or part-per-thousand differences from the standard. Iron isotopic fractionation is also commonly described in units of per mil per atomic mass unit.
In many cases, the δ 56 Fe value can be related to the δ 57 Fe and δ 58 Fe values through mass-dependent fractionation: [ 9 ]
δ 57 F e = 1.5 × δ 56 F e {\displaystyle \delta ^{57}Fe=1.5\times \delta ^{56}Fe}
δ 58 F e = 2 × δ 56 F e {\displaystyle \delta ^{58}Fe=2\times \delta ^{56}Fe}
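The delta notation and the mass-dependent relations above translate directly into code; the sketch below computes δ56Fe in per mil from a hypothetical measured ratio (the IRMM-014 reference ratio used here is approximate and for illustration only, not the certified value) and predicts δ57Fe and δ58Fe under mass-dependent fractionation.

```python
# Delta notation for iron isotopes, reported in per mil (parts per thousand).
# The reference 56Fe/54Fe ratio below is approximate (from natural abundances,
# ~0.91754 / 0.05845) and stands in for the certified IRMM-014 value.
R_IRMM014 = 15.698

def delta56(r_sample: float) -> float:
    """delta-56/54 Fe of a sample, in per mil, relative to IRMM-014."""
    return (r_sample / R_IRMM014 - 1.0) * 1000.0

r_measured = 15.710          # hypothetical measured 56Fe/54Fe ratio
d56 = delta56(r_measured)

# Mass-dependent fractionation relations from the text:
d57 = 1.5 * d56
d58 = 2.0 * d56
print(f"d56Fe = {d56:+.2f} permil, d57Fe = {d57:+.2f}, d58Fe = {d58:+.2f}")
```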
One of the most prevalent features of iron chemistry is its redox chemistry . Iron has three oxidation states : metallic iron (Fe 0 ), ferrous iron (Fe 2+ ), and ferric iron (Fe 3+ ). Ferrous iron is the reduced form of iron, and ferric iron is the oxidized form of iron. In the presence of oxygen, ferrous iron is oxidized to ferric iron, thus ferric iron is the dominant redox state of iron at Earth's surface conditions. However, ferrous iron is the dominant redox state below the surface at depth. Because of this redox chemistry, iron can act as either an electron donor or receptor, making it a metabolically useful species.
Each form of iron has a specific distribution of electrons (i.e., electron configuration ): metallic iron (Fe 0 ) is [Ar]3d 6 4s 2 , ferrous iron (Fe 2+ ) is [Ar]3d 6 , and ferric iron (Fe 3+ ) is [Ar]3d 5 .
Variations in iron isotopes are caused by a number of chemical processes which result in the preferential incorporation of certain isotopes of iron into certain phases. Many of the chemical processes which fractionate iron are not well understood and are still being studied. The most well-documented chemical processes which fractionate iron isotopes relate to its redox chemistry, the evaporation and condensation of iron, and the diffusion of dissolved iron through systems. These processes are described in more detail below.
To first order, reduced iron favors isotopically light iron and oxidized iron favors isotopically heavy iron. This effect has been studied in regards to the abiotic oxidation of Fe 2+ to Fe 3+ , which results in fractionation. The mineral ferrihydrite , which forms in acidic aquatic conditions, is precipitated via the oxidation of aqueous Fe 2+ to Fe 3+ . [ 10 ] Precipitated ferrihydrite has been found to be enriched in the heavy isotopes by 0.45‰ per atomic mass unit with respect to the starting material. [ 10 ] This indicates that heavier iron isotopes are preferentially precipitated as a result of oxidizing processes. [ 10 ]
Theoretical calculations in combination with experimental data have also aimed to quantify the fractionation between Fe(III) aq and Fe(II) aq in HCl. [ 1 ] Based on modeling, the fractionation factor between the two species is temperature dependent: [ 1 ]
10 3 × ln α F e ( I I I ) − F e ( I I ) = ( 0.334 ± 0.032 ) × 10 6 T 2 − ( 0.66 ± 0.38 ) {\displaystyle 10^{3}\times \ln {\alpha _{Fe(III)-Fe(II)}}={\frac {(0.334\pm 0.032)\times 10^{6}}{T^{2}}}-(0.66\pm 0.38)}
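To see what this temperature dependence implies, the short sketch below evaluates the expression at a few temperatures, using the central values of the fitted coefficients (the quoted uncertainties are omitted for simplicity).

```python
# Equilibrium Fe(III)-Fe(II) fractionation factor as a function of
# temperature, using the central values of the fitted coefficients.
import math

def ln_alpha_x1000(T_kelvin: float) -> float:
    """10^3 * ln(alpha) for Fe(III)aq - Fe(II)aq at temperature T (K)."""
    return 0.334e6 / T_kelvin**2 - 0.66

for T in (273.15, 298.15, 373.15):
    x = ln_alpha_x1000(T)
    alpha = math.exp(x / 1000.0)
    print(f"T = {T:.2f} K: 10^3 ln(alpha) = {x:.2f}, alpha = {alpha:.5f}")
# The fractionation weakens with increasing temperature (~1/T^2 dependence).
```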
Evaporation and condensation can give rise to both kinetic and equilibrium isotope effects. While equilibrium mass fractionation is present in evaporation and condensation, it is negligible compared to kinetic effects. [ 1 ] During condensation, the condensate is enriched in the light isotope, whereas in evaporation, the gas phase is enriched in the light isotope. Using the kinetic theory of gases , a fractionation factor of α = 1.01835 is predicted for 56 Fe/ 54 Fe during the evaporation of a pool containing equimolar amounts of 56 Fe and 54 Fe. [ 1 ] In evaporation experiments, the evaporation of FeO at 1,823 K gave a fractionation factor of α = 1.01877. [ 2 ] Presently, there have been no experimental attempts to determine the 56 Fe/ 54 Fe fractionation factor of condensation. [ 1 ]
Kinetic fractionation of dissolved iron occurs as a result of diffusion . When isotopes diffuse, the lower mass isotopes diffuse more quickly than the heavier isotopes, resulting in fractionation. This difference in diffusion rates has been approximated as: [ 3 ]
D 1 D 2 = ( m 1 m 2 ) β {\displaystyle {\frac {D_{1}}{D_{2}}}=\left({\frac {m_{1}}{m_{2}}}\right)^{\beta }}
In this equation, D 1 and D 2 are the diffusivities of the isotopes, m 1 and m 2 are the masses of the isotopes, and β is an empirical exponent that can vary between 0 and 0.5 depending on the system. [ 9 ] More work is required to fully understand fractionation as a result of diffusion; studies of diffusion of iron in metal have consistently given β values of approximately 0.25. [ 11 ] Iron diffusion between silicate melts and basaltic/rhyolitic melts has given lower β values (~0.030). [ 12 ] In aqueous environments, a β value of 0.0025 has been obtained. [ 13 ]
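The sketch below plugs the 54Fe and 56Fe masses into this relation for the β values quoted above, showing how strongly the setting controls diffusive fractionation (mass numbers are used in place of exact atomic masses for simplicity).

```python
# Ratio of diffusivities of 54Fe and 56Fe for different beta values.
# Mass numbers are used instead of exact atomic masses for simplicity.
M_LIGHT, M_HEAVY = 54.0, 56.0

def diffusivity_ratio(beta: float) -> float:
    """D_light / D_heavy = (m_heavy / m_light)^beta; lighter diffuses faster."""
    return (M_HEAVY / M_LIGHT) ** beta

for beta, setting in [(0.25, "iron in metal"),
                      (0.030, "silicate melts"),
                      (0.0025, "aqueous solution")]:
    r = diffusivity_ratio(beta)
    print(f"beta = {beta:6.4f} ({setting}): D54/D56 = {r:.5f} "
          f"(~{(r - 1) * 1000:.2f} permil faster)")
```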
There may be equilibrium fractionation between coexisting minerals. This would be particularly relevant when considering the formation of planetary bodies early in the Solar System . Experiments have aimed to simulate the formation of the Earth at high temperatures using a platinum-iron alloy and an analog for the silicate earth at 1,500 °C. [ 14 ] However, the observed fractionation was very small, less than 0.2‰ per atomic mass unit. [ 14 ] More experimental work is needed to fully understand this effect.
In biology, iron plays a number of roles. It is widespread in most living organisms and essential for their function. In microbes, iron redox chemistry is utilized as an electron donor or acceptor in microbial metabolism, allowing microbes to generate energy. In the oceans, iron is essential for the growth and survival of phytoplankton, which use iron to fix nitrogen. Iron is also important in plants, given that they need iron to transfer electrons during photosynthesis. Finally, in animals, iron plays many roles, but its most essential function is to transport oxygen in the bloodstream throughout the body. Iron thus undergoes many biological processes, each of which preferentially uses certain isotopes of iron. While iron isotopic fractionations are observed in many organisms, they are still not well understood. Improvements in the understanding of the iron isotope fractionations observed in biology will enable a more complete knowledge of the enzymatic, metabolic, and other biologic pathways in different organisms. Below, the known iron isotopic variations for different classes of organisms are described.
Iron-reducing bacteria reduce ferric iron to ferrous iron under anaerobic conditions. One of the first studies of iron fractionation in iron-reducing bacteria examined the bacterium Shewanella algae . [ 15 ] S. algae was grown on a ferrihydrite substrate and allowed to reduce iron. [ 15 ] The study found that S. algae preferentially reduced 54 Fe over 56 Fe, with a δ 56/54 Fe value of -1.3‰. [ 15 ]
More recent experiments have studied the bacterium Shewanella putrefaciens and its reduction of Fe(III) in goethite . These studies have found δ 56/54 Fe values of -1.2‰ relative to the goethite. [ 16 ] The kinetics of this fractionation were also studied in this experiment, and it was suggested that the iron isotope fractionation is likely related to the kinetics of the electron transfer step. [ 16 ]
Most studies of other iron-reducing bacteria have found δ 56/54 Fe values of approximately -1.3‰. [ 17 ] [ 18 ] At high Fe(III) reduction rates, δ 56/54 Fe values of -2 to -3‰ relative to the substrate have been observed. [ 19 ] The study of iron isotopes in iron-reducing bacteria enables an improved understanding of the metabolic processes operating in these organisms.
While most iron is oxidized as a result of interaction with atmospheric oxygen or oxygenated waters, oxidation by bacteria is an active process in anoxic environments and in oxygenated, low pH (<3) environments. Studies of the acidophilic Fe(II)-oxidizing bacterium Acidithiobacillus ferrooxidans have been used to determine the fractionation caused by iron-oxidizing bacteria. In most cases, δ 56/54 Fe values between 2 and 3‰ were measured. [ 20 ] However, a Rayleigh trend with a fractionation factor of α Fe(III)aq-Fe(II)aq ~ 1.0022 was observed; this is smaller than the fractionation factor in the abiotic control experiments (α Fe(III)aq-Fe(II)aq ~ 1.0034) and has been inferred to reflect a biological isotope effect. [ 20 ] Iron isotopes thus allow an improved understanding of the metabolic processes controlling iron oxidation and energy production in these organisms.
Photoautotrophic bacteria, which oxidize Fe(II) under anaerobic conditions, have also been studied. Thiodictyon bacteria precipitate poorly crystalline hydrous ferric oxide when they oxidize iron. [ 21 ] The precipitate was enriched in 56 Fe relative to Fe(II) aq , with a δ 56/54 Fe value of +1.5 ± 0.2‰. [ 21 ]
Magnetotactic bacteria are bacteria with magnetosomes that contain magnetic crystals, usually magnetite or greigite , which allow them to orient themselves with the Earth’s magnetic field lines . These bacteria mineralize magnetite via the reduction of Fe(III), usually in microaerobic or anoxic environments. In the magnetotactic bacteria that have been studied, there was no significant iron isotope fractionation observed. [ 22 ]
Iron is important for the growth of phytoplankton . In phytoplankton, iron is used for electron transfer reactions in photosynthesis in both photosystem I and photosystem II . [ 23 ] Additionally, iron is an important component of the enzyme nitrogenase , which is used to fix nitrogen. [ 23 ] In measurements at open ocean stations, phytoplankton are isotopically light, with the fractionation resulting from biological uptake measured between -0.25‰ and -0.13‰. [ 24 ] Improved understanding of this fractionation will enable more precise characterization of phytoplankton photosynthetic processes.
Iron has many important roles in animal biology, specifically when considering oxygen transport in the bloodstream, oxygen storage in muscles, and enzymes. Known isotope variations across animal tissues have been compiled. [ 25 ] Iron isotopes could be useful tracers of iron biochemical pathways in animals, and could also be indicative of trophic levels in a food chain. [ 7 ]
Iron isotope variations in humans reflect a number of processes. Specifically, iron in the blood stream reflects dietary iron, which is isotopically lighter than iron in the geosphere. [ 26 ] Iron isotopes are distributed heterogeneously throughout the body, primarily to red blood cells, the liver, muscle, skin, enzymes, nails, and hair. Iron losses from the body (intestinal bleeding, bile, sweat, etc.) favor the loss of isotopically heavy iron, with mean losses averaging a δ 56 Fe of +10‰. [ 26 ] Iron absorption in the intestine favors lighter iron isotopes. [ 26 ] This is largely because iron is carried by transport proteins such as transferrin in kinetically controlled processes, resulting in the preferential uptake of isotopically light iron. [ 26 ]
The observed iron isotopic variations in humans and animals are particularly important as tracers. Iron isotopic signatures are utilized to determine the geographic origin of food. Additionally, anthropologists and paleontologists use iron isotope data in order to track the transfer of iron between the geosphere and the biosphere, specifically between plant foods and animals. This allows for the reconstruction of ancient dietary habits based on the variations in iron isotopes in food.
By mass, iron is the most common element on Earth, and it is the fourth most abundant element in the Earth's crust . Thus, iron is widespread throughout the geosphere and is also common on other planetary bodies. Natural variations in the iron in the geosphere are relatively small. Currently, the values of δ 56/54 Fe measured in rocks and minerals range from -2.5‰ to +1.5‰. Iron isotope composition is homogeneous in igneous rocks to ±0.05‰, indicating that much of the geologic isotopic variability is a result of the formation of rocks and minerals at low temperature. [ 17 ] This homogeneity is particularly useful when tracing processes which result in fractionation through the system. While fractionation of igneous rocks is relatively constant, there are larger variations in the iron isotopic composition of chemical sediments. [ 27 ] Thus, iron isotopes are used to determine the origin of the protolith of heavily metamorphosed rocks of a sedimentary origin. [ 27 ] Improvements in the understanding of the way in which iron isotopes fractionate in the geosphere can help to better understand geologic processes of formation.
To date, iron is one of the most widely studied trace metals, and iron isotope compositions are relatively well-documented. Based on measurements, iron isotopes exhibit minimal variation (±3‰) in the terrestrial environment. A list of iron isotopic values of different materials from different environments is presented below.
The isotopic composition of igneous rocks is remarkably constant. The mean value of δ 56 Fe of terrestrial rocks is 0.00 ± 0.05‰. [ 17 ] More precise isotopic measurements indicate that the small deviations from 0.00‰ may reflect a slight mass-dependent fractionation. [ 17 ] This mass fractionation has been proposed to be F Fe = 0.039 ± 0.008‰ per atomic mass unit relative to IRMM-014. [ 28 ] There may also be slight isotopic variations in igneous rocks depending on their composition and process of formation. The average value of δ 56 Fe for ultramafic igneous rocks is -0.06‰, whereas the average value of δ 56 Fe for mid-ocean ridge basalts (MORB) is +0.03‰. [ 17 ] Sedimentary rocks exhibit slightly larger variations in δ 56 Fe, with values between -1.6‰ and +0.9‰ relative to IRMM-014. [ 28 ] Banded iron formation δ 56 Fe values span the entire range observed on Earth, from -2.5‰ to +1‰. [ 5 ]
There are slight iron isotopic variations in the oceans relative to IRMM-014, which likely reflect variations in the biogeochemical cycling of iron within a given ocean basin. In the southeastern Atlantic, δ 56 Fe values between -0.13 and +0.21‰ have been measured. [ 29 ] In the north Atlantic, δ 56 Fe values between -1.35 and +0.80‰ have been measured. [ 30 ] [ 31 ] In the equatorial Pacific δ 56 Fe values between -0.03 and +0.58‰ have been measured. [ 32 ] [ 33 ] The supply of aerosol iron particles to the ocean have an isotopic composition of approximately 0‰. [ 1 ] Dissolved iron riverine input to the ocean is isotopically light relative to igneous rocks, with δ 56 Fe values between -1 and 0‰. [ 1 ]
Most modern marine sediments have δ 56 Fe values similar to those of igneous δ 56 Fe values. [ 1 ] Marine ferromanganese nodules have δ 56 Fe values between -0.8 and 0‰.
Hot (> 300 °C) hydrothermal fluids from mid ocean ridges are isotopically light, with δ 56 Fe between -0.2 and -0.8‰. [ 1 ] Particles in hydrothermal plumes are isotopically heavy relative to the hydrothermal fluids, with δ 56 Fe between 0.1 and 1.1‰. [ 1 ] Hydrothermal deposits have average δ 56 Fe between -1.6 and 0.3‰. [ 1 ] The sulfide minerals within these deposits have δ 56 Fe between -2.0 and 1.1‰. [ 1 ]
Variations in iron isotopic composition have been observed in meteorite samples from other planetary bodies. The Moon has variations in iron isotopes of 0.4‰ per atomic mass unit. [ 34 ] Mars has very small isotope fractionation of 0.001 ± 0.006‰ per atomic mass unit. [ 8 ] Vesta has iron fractionations of 0.010 ± 0.010‰ per atomic mass unit. [ 8 ] The chondritic reservoir exhibits fractionations of 0.069 ± 0.010‰ per atomic mass unit. [ 8 ] Isotopic variations observed on planetary bodies can help to constrain and better understand their formation and processes occurring in the early Solar System.
High precision iron isotope measurements are obtained either via thermal ionization mass spectrometry (TIMS) or multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS).
Iron isotopes have many applications in the geosciences, biology, medicine, and other fields. Their ability to act as isotopic tracers allows for their use to determine information regarding the formation of geologic units and as a potential proxy for life on Earth and other planets. Iron isotopes also have applications in anthropology and paleontology, as they are used to study the diets of ancient civilizations and animals. The widespread uses of iron in biology make its isotopes a promising frontier in biomedical research, specifically their use to prevent and treat blood conditions and other pathological blood diseases. Some of the more prevalent applications of iron isotopes are described below.
Banded iron formations (BIFs) are particularly important when considering the surface environments of the early Earth, which were significantly different from the surface environments observed today. This is manifested in the mineralogy of these formations, which are indicative of different redox conditions. [ 35 ] Additionally, BIFs are interesting in that they were deposited while major changes were occurring in the atmosphere and in the biosphere 2.8 to 1.8 billion years ago. [ 35 ] Iron isotopic studies can reveal the details of the formation of BIFs, which allows for the reconstruction of redox and climatic conditions at the time of deposition.
BIFs formed as a result of the oxidation of iron by oxygen, which was likely generated by the evolution of cyanobacteria , [ 36 ] followed by the subsequent precipitation of iron particles in the ocean. [ 36 ] Observed variations in the iron isotopic composition of BIFs span the entire range observed on Earth, with δ 56/54 Fe values between -2.5 and +1‰. [ 5 ] These variations are hypothesized to have three causes. The first relates to the varying mineralogy of the BIFs. Within the BIFs, minerals such as hematite , magnetite , siderite , and pyrite are observed. [ 5 ] These minerals each have differing isotopic fractionations, likely as a result of their structures and the kinetics of their growth. [ 5 ] Second, the isotopic composition of the BIFs is indicative of the fluids from which they precipitated, which has applications when reconstructing environmental conditions of the ancient Earth. [ 5 ] Third, it has been suggested that BIFs may be biologic in origin. The range of their δ 56/54 Fe values falls within the range of those observed to occur as a result of bacterial metabolic processes, such as those of anoxygenic phototrophic iron-oxidizing bacteria. [ 36 ] Ultimately, an improved understanding of BIFs based on iron isotope fractionations would allow for the reconstruction of past environments and constraint of the processes occurring on the ancient Earth. However, given that the values observed as a result of biogenic and abiogenic fractionation are relatively similar, the exact processes of BIF formation are still unclear. Thus, the continued study and improved understanding of biologic and abiologic fractionation effects would be beneficial in providing better details regarding BIF formation.
Iron isotopes have become particularly useful in recent years for tracing biogeochemical cycling in the oceans. Iron is an important micronutrient for living species in the ocean, particularly for the growth of phytoplankton. Iron is estimated to limit phytoplankton growth in about one half of the ocean. [ 38 ] As a result, the development of a better understanding of sources and cycling of iron in the modern oceans is important. Iron isotopes have been used to better constrain these pathways through data collected by the GEOTRACES program, which has collected iron isotopic data throughout the ocean. Based on the variations in iron isotopes, biogeochemical cycling and other processes controlling iron distribution in the ocean can be elucidated.
For example, the combination of iron concentration and iron isotope data can be used to determine the sources of oceanic iron. In the South Atlantic and in the Southern Ocean, isotopically light iron is observed in intermediate waters (200 - 1,300 meters), whereas isotopically heavy iron is observed in surface waters and deep waters (> 1,300 meters). [ 38 ] To first order, this demonstrates that there are different sources, sinks, and processes contributing to the iron cycle in varying water masses. The isotopically light iron in intermediate waters suggests that the dominant iron sources include remineralized organic matter. [ 38 ] This organic matter is isotopically light because phytoplankton preferentially take up light iron. [ 38 ] In the surface ocean, the isotopically heavy iron represents the external sources of iron, such as dust, which is isotopically heavy relative to IRMM-014, and the sink of light isotopes as a result of their preferential uptake by phytoplankton. [ 38 ] The isotopically heavy iron in the deep ocean suggests that the iron cycle is dominated by the abiotic, non-reductive release of iron, via desorption or dissolution , from particles. [ 38 ] Isotopic analyses similar to the one above are utilized throughout all of the world's oceans to better understand regional variability in the processes which control iron cycling. [ 39 ] [ 40 ] These analyses can then be synthesized to better model the global biogeochemical cycling of iron, which is particularly important when considering primary production in the ocean.
Iron isotopes have been applied for a number of purposes on planetary bodies. Their variations have been measured to more precisely determine the processes that occurred during planetary accretion . In the future, the comparison of observed biological fractionation of iron on Earth to fractionation on other planetary bodies may have astrobiological implications.
One of the primary challenges in the study of planetary accretion is the fact that many tracers of the processes occurring in the early Solar System have been eliminated as a result of subsequent geologic events. Because transition metals do not show large stable isotope fractionations as a result of these events and because iron is one of the most abundant elements in the terrestrial planets, its isotopic variability has been used as a tracer of early Solar System processes. [ 41 ] [ 8 ]
Variations in δ 57/54 Fe between samples from Vesta , Mars , the Moon , and Earth have been observed, and these variations cannot be explained by any known petrological, geochemical, or planetary processes; thus, it has been inferred that the observed fractionations are a result of planetary accretion. [ 8 ] Notably, the isotopic compositions of the Earth and the Moon are much heavier than those of Vesta and Mars. This provides strong support for the giant-impact hypothesis , as such an impact would generate enough energy to melt and vaporize iron, leading to the preferential escape of the lighter iron isotopes to space. [ 8 ] More of the heavier isotopes would remain, resulting in the heavier iron isotopic compositions observed for the Earth and the Moon. The samples from Vesta and Mars exhibit minimal fractionation, consistent with the theory of runaway growth for their formations, as this process would not yield significant fractionations. [ 8 ] Further study of the stable isotopes of iron in other planetary bodies and samples could provide further evidence and more precise constraints for planetary accretion and other processes that occurred in the early Solar System.
The use of iron isotopes may also have applications when studying potential evidence for life on other planets. The ability of microbes to utilize iron in their metabolisms makes it possible for organisms to survive in anoxic, iron-rich environments such as those on Mars. Thus, the continual improvement of knowledge regarding the biological fractionations of iron observed on Earth can have applications when studying extraterrestrial samples in the future. [ 42 ] While this field of research is still developing, it could provide evidence regarding whether a sample was generated as a result of biologic or abiologic processes, depending on the isotopic fractionation. For example, it has been hypothesized that magnetite crystals found in Martian meteorites may have formed biologically, given their striking similarity to magnetite crystals produced by magnetotactic bacteria on Earth. [ 43 ] Iron isotopes could be used to study the origin of the proposed " magnetofossils " and other rock formations on Mars. [ 43 ]
Iron plays many roles in human biology, specifically in oxygen transport , short-term oxygen storage , and metabolism . [ 44 ] Iron also plays a role in the body's immune system . [ 4 ] Current biomedical research aims to use iron isotopes to better understand the speciation of iron in the body, with hopes of eventually being able to reduce the availability of free iron, as this would help to defend against infection. [ 4 ]
Iron isotopes can also be utilized to better understand iron absorption in humans. [ 26 ] The iron isotopic composition of blood reflects an individual's long-term absorption of dietary iron. [ 26 ] This allows for the study of genetic predisposition to blood conditions, such as anemia , which will ultimately enable the prevention, identification, and resolution of blood disorders. [ 26 ] Iron isotopic data could also aid in identifying impairments of the iron absorption regulatory system in the body, which would help to prevent the development of pathological conditions related to issues with iron regulation. [ 45 ]
Copper has two naturally occurring stable isotopes : 63 Cu and 65 Cu, with natural abundances of approximately 69.2% and 30.8%, respectively.
The isotopic composition of Cu is conventionally reported in delta notation (in ‰) relative to a NIST SRM 976 standard:
δ 65 C u = [ ( 65 C u / 63 C u ) s a m p l e ( 65 C u / 63 C u ) N I S T 976 − 1 ] {\displaystyle \delta ^{65}Cu=\left[{\frac {(^{65}Cu/^{63}Cu)_{sample}}{(^{65}Cu/^{63}Cu)_{NIST976}}}-1\right]}
Copper can exist in non-ionic form (as Cu 0 ) or in one of two redox states: Cu 1+ (reduced) or Cu 2+ (oxidized). Each form of Cu has a specific distribution of electrons (i.e., electron configuration ): Cu 0 is [Ar]3d 10 4s 1 , Cu 1+ is [Ar]3d 10 , and Cu 2+ is [Ar]3d 9 .
The electronic configurations of Cu control the number and types of bonds Cu can form with other atoms (e.g., see Copper Biology section). These diverse coordination chemistries are what enable Cu to participate in many different biological and chemical reactions.
Finally, due to its full d-orbital, Cu 1+ is diamagnetic . In contrast, Cu 2+ has one unpaired electron in its d-orbital, making it paramagnetic . The different magnetic properties of the Cu ions enable determination of Cu's redox state by techniques such as electron paramagnetic resonance (EPR) spectroscopy , which can identify atoms with unpaired electrons by exciting electron spins.
Transitions between redox species Cu 1+ and Cu 2+ fractionate Cu isotopes. 63 Cu 2+ is preferentially reduced over 65 Cu 2+ , leaving the residual Cu 2+ enriched in 65 Cu. The equilibrium fractionation factor for speciation between Cu 2+ and Cu 1+ (α Cu(II)-Cu(I) ) is 1.00403 (i.e., dissolved Cu 2+ is enriched in 65 Cu by ~+4‰ relative to Cu 1+ ). [ 46 ] [ 47 ]
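The equilibrium fractionation factor quoted above converts directly into a per mil enrichment; the one-liner below makes the arithmetic explicit.

```python
# Convert the Cu(II)-Cu(I) equilibrium fractionation factor to per mil.
alpha = 1.00403                      # alpha_Cu(II)-Cu(I), from the text
epsilon = (alpha - 1.0) * 1000.0     # enrichment in per mil
print(f"dissolved Cu(II) is ~{epsilon:.2f} permil enriched in 65Cu over Cu(I)")
```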
Copper can be found in the active sites of most enzymes that catalyze redox reactions (i.e., oxidoreductases ), as it facilitates single electron transfers while reversibly oscillating between the Cu 1+ and Cu 2+ redox states. Enzymes typically contain between one (mononuclear) and four (tetranuclear) copper centers, which enable enzymes to catalyze different reactions. These copper centers coordinate with different ligands depending on the Cu redox state. Oxidized Cu 2+ preferentially coordinates with "hard donor" ligands (e.g., N- or O-containing ligands such as histidine, aspartic acid, glutamic acid or tyrosine), while reduced Cu 1+ preferentially coordinates with "soft donor" ligands (e.g., S-containing ligands such as cysteine or methionine). [ 48 ] Copper's powerful redox capability makes it critically important for biology, but comes at a cost: Cu 1+ is a highly toxic metal to cells because it readily abstracts single electrons from organic compounds and cellular material, leading to production of free radicals. Thus, cells have evolved specific strategies for carefully controlling the activity of Cu 1+ while exploiting its redox behavior.
Copper serves catalytic and structural roles in many essential enzymes in biology. In the context of catalytic activity, copper proteins function as electron or oxygen carriers, oxidases , mono - and dioxygenases and nitrite reductases . In particular, copper-containing enzymes include hemocyanins , one flavor of superoxide dismutase (SOD) , metallothionein , cytochrome c oxidase , multicopper oxidase and particulate methane monooxygenase (pMMO) .
Biological processes that fractionate Cu isotopes are not well-understood, but play an important role in driving the δ 65 Cu values of materials observed in the marine and terrestrial environments. The natural 65 Cu/ 63 Cu ratio varies according to copper's redox form and the ligand to which copper binds. Oxidized Cu 2+ preferentially coordinates with hard donor ligands (e.g., N- or O-containing ligands), while reduced Cu 1+ preferentially coordinates with soft donor ligands (e.g., S-containing ligands). [ 48 ] As 65 Cu is preferentially oxidized over 63 Cu, these isotopes tend to coordinate with hard and soft donor ligands, respectively. [ 48 ] Cu isotopes can fractionate upon Cu-bacteria interactions from processes that include Cu adsorption to cells, intracellular uptake, metabolic regulation and redox speciation. [ 46 ] [ 49 ] Fractionation of Cu isotopes upon adsorption to cellular walls appears to depend on the surface functional groups that Cu complexes with, and can span positive and negative values. [ 49 ] Furthermore, bacteria preferentially incorporate the lighter Cu isotope intracellularly and into proteins. For example, E. coli , B. subtilis and a natural consortia of microbes sequestered Cu with apparent fractionations (ε 65 Cu) ranging from ~-1.0 to -4.4‰. [ 49 ] Additionally, fractionation of Cu upon incorporation into the apoprotein of azurin was ~-1‰ in P. aeruginosa , and -1.5‰ in E. coli , while ε 65 Cu values of Cu incorporation into Cu-metallothionein and Cu-Zn-SOD in yeast were -1.7 and -1.2‰, respectively. [ 46 ]
The concentration of Cu in bulk silicate Earth is ~30 ppm, [ 50 ] slightly less than its average concentration (~72 ppm) in fresh mid-oceanic ridge basalt (MORB) glass. [ 51 ] Cu 1+ and Cu 2+ form a variety of sulfides (often in association with Fe), as well as carbonates and hydroxides (e.g., chalcopyrite , chalcocite , cuprite and malachite ). In mafic and ultramafic rocks, Cu tends to be concentrated in sulfidic materials. [ 51 ] In freshwater, the predominant form of Cu is free Cu 2+ ; in seawater, Cu complexes with carbonate ligands to form CuCO 3 and [Cu(CO 3 ) 2 ] 2− .
Before Cu isotope ratios of various materials can be measured, several steps must be taken to extract and purify the copper. The first step in the analytical pipeline is to liberate Cu from its host material. Liberation should be quantitative, otherwise fractionation may be introduced at this step. Cu-containing rocks are generally dissolved with HF ; biological materials are commonly digested with HNO 3 . Seawater samples must be concentrated due to the low (nM) concentrations of Cu in the ocean. The sample material is subsequently run through an anion-exchange column to isolate and purify Cu. This step can also introduce Cu isotope fractionation if Cu is not quantitatively recovered from the column. [ 52 ] If samples are from seawater, other ions (e.g., Na + , Mg 2+ , Ca 2+ ) must be removed in order to eliminate isobaric interferences during the isotope measurement. Prior to 1992, 65 Cu/ 63 Cu ratios were measured via thermal ionization mass spectrometry (TIMS) . [ 53 ] [ 54 ] Today, Cu isotopic compositions are measured via multi-collector inductively coupled plasma mass spectrometry (MC-ICP-MS) , which ionizes samples using inductively coupled plasma and introduces smaller errors than TIMS.
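On an MC-ICP-MS, instrumental mass bias is commonly corrected by standard-sample bracketing, in which each sample measurement is referenced to the mean of the standard runs immediately before and after it. A minimal sketch, assuming hypothetical measured 65 Cu/ 63 Cu ratios:

```python
# Minimal sketch: delta-65Cu via standard-sample bracketing (hypothetical ratios).
# The sample's measured 65Cu/63Cu ratio is referenced to the mean of the two
# bracketing measurements of the standard (e.g., NIST SRM 976).

def delta65cu(r_sample: float, r_std_before: float, r_std_after: float) -> float:
    """Return delta-65Cu in permil relative to the bracketing standard."""
    r_std = 0.5 * (r_std_before + r_std_after)
    return (r_sample / r_std - 1.0) * 1000.0

# Hypothetical 65Cu/63Cu ratios from three consecutive runs
print(f"{delta65cu(0.44571, 0.44560, 0.44564):+.2f} permil")  # ~ +0.20 permil
```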
The field of Cu isotope biogeochemistry is still in a relatively early stage, so the Cu isotope compositions of materials in the environment are not well-documented. However, based on a compilation of measurements already made, it appears that Cu isotope ratios vary measurably within and between environmental materials (e.g., plants, minerals, seawater, etc.), though as a whole these ratios span no more than about ±10‰.
In human bodies, copper is an important constituent of many essential enzymes, including ceruloplasmin (which carries Cu and oxidizes Fe 2+ in human plasma), cytochrome c oxidase , metallothionein and superoxide dismutase 1 . [ 55 ] Serum in human blood is typically 65 Cu-depleted by ~0.8‰ relative to erythrocytes (i.e., red blood cells). In a study of 49 male and female blood donors, the average δ 65 Cu value of the donors' blood serum was -0.26 ± 0.40‰, while that of their erythrocytes was +0.56 ± 0.50‰. [ 56 ] In a separate study, δ 65 Cu values of serum in 20 healthy patients ranged from -0.39 to +0.38‰, while the δ 65 Cu values of their erythrocytes ranged from +0.57 to +1.24‰. [ 57 ] To balance Cu loss due to menstruation, a large portion of Cu in the blood of menstruating women comes from their liver. Due to fractionation associated with Cu transport from the liver to the blood, the total blood of pre-menopausal women is generally 65 Cu-depleted relative to that of males and non-menstruating women. [ 58 ] The δ 65 Cu values of healthy human liver tissue in 7 patients ranged from -0.45 to -0.11‰. [ 57 ]
To first order, δ 65 Cu values in organisms are driven by the δ 65 Cu values of source materials. The δ 65 Cu values of various soils from different regions have been found to vary from -0.34 to +0.33‰ [ 59 ] [ 60 ] depending on the biogeochemical processes taking place in the soil and the ligands with which Cu complexes. Organic-rich soils generally have lighter δ 65 Cu values than mineral soils because the organic layers result from plant litter, which is isotopically light. [ 59 ]
In plants, δ 65 Cu values vary between the different components (seeds, roots, stem and leaves). The δ 65 Cu values of the roots of rice, lettuce, tomato and durum wheat plants were found to be 0.5 to 1.0‰ 65 Cu-depleted relative to their source, while their shoots were up to 0.5‰ lighter than the roots. [ 61 ] [ 62 ] [ 63 ] Seeds appear to be the most isotopically light component of plants, followed by leaves, then stems. [ 63 ]
Rivers sampled throughout the world have a range of dissolved δ 65 Cu values from +0.02 to +1.45‰. [ 64 ] The average δ 65 Cu values of the Amazon , Brahmaputra and Nile rivers are 0.69, 0.64 and 0.58‰, respectively. The average δ 65 Cu value of the Chang Jiang river is 1.32‰, while that of the Missouri river is 0.13‰. [ 64 ]
In general, igneous, metamorphic and sedimentary processes do not appear to strongly fractionate Cu isotopes, while δ 65 Cu values of Cu minerals vary widely. The average Cu isotopic composition of bulk silicate Earth has been measured as 0.06 ± 0.20‰ based on 132 different terrestrial samples. [ 67 ] MORBs and oceanic island basalts (OIBs) generally have homogeneous Cu isotopic compositions that fall around 0‰, [ 67 ] [ 68 ] [ 69 ] [ 70 ] while arc and continental basalts have more heterogeneous Cu isotope compositions that range from -0.19 to +0.47‰. [ 67 ] These Cu isotope ratios of basalts suggest that mantle partial melting imparts negligible Cu isotopic fractionation, while recycling of crustal materials leads to widely variable δ 65 Cu values. [ 67 ] The Cu isotope compositions of copper-containing minerals vary over a wide range, likely due to alteration of the primary high-temperature deposits. [ 51 ] In one study that investigated Cu isotopic compositions of various minerals from hydrothermal fields along the mid-Atlantic ridge, chalcopyrite from mafic igneous rocks had δ 65 Cu values of -0.1 to -0.2‰, while Cu minerals in black smokers (chalcopyrite, bornite, covellite and atacamite) exhibited a wider range of δ 65 Cu values from -1.0 to +4.0‰. [ 71 ] Additionally, atacamite lining the outer rims of black smokers can be up to 2.5‰ heavier than chalcopyrite contained within the black smoker. δ 65 Cu values of Cu minerals (including chrysocolla, azurite, malachite, cuprite and native copper) in low-temperature deposits have been observed to vary widely over a range of -3.0 to +5.6‰. [ 72 ] [ 71 ]
Cu is strongly cycled in the surface and deep ocean. In the deep ocean, Cu concentrations are ~5 nM in the Pacific [ 65 ] and ~1.5 nM in the Atlantic. [ 66 ] The deep/surface ratio of Cu in the ocean is typically <10, and vertical concentration profiles for Cu are roughly linear due to biological recycling and scavenging processes [ 6 ] as well as adsorption to particles.
Due to equilibrium and biological processes that fractionate Cu isotopes in the marine environment, the bulk copper isotopic composition (δ 65 Cu = +0.6 to +1.5‰) is different from the δ 65 Cu values of the riverine input (δ 65 Cu = +0.02 to +1.45‰, with discharge-weighted average δ 65 Cu = +0.68‰) to the oceans. [ 64 ] [ 66 ] [ 73 ] δ 65 Cu values of the surface layers of FeMn-nodules are fairly homogeneous throughout the oceans (average = 0.31‰), [ 69 ] suggesting low biological demand for Cu in the marine environment compared to that of Fe or Zn. Additionally, δ 65 Cu values in the Atlantic ocean do not markedly vary with depth, ranging from +0.56 to +0.72‰. [ 66 ] However, Cu isotope compositions of material collected on sediment traps at depths of 1,000 and 2,500 m in the central Atlantic ocean show seasonal variation, with the heaviest δ 65 Cu values in the spring and summer seasons, [ 68 ] suggesting seasonal preferential uptake of 63 Cu by biological processes.
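One way to see why the ocean can be isotopically heavier than its riverine input is a steady-state isotope mass balance: if each output flux removes Cu offset from seawater by some fractionation, the seawater value is the input value minus the flux-weighted sum of those offsets. The sketch below (Python) uses hypothetical flux fractions and offsets purely for illustration.

```python
# Minimal sketch: steady-state marine Cu isotope mass balance (hypothetical values).
# At steady state, the input composition equals the flux-weighted output:
#   delta_input = sum_i f_i * (delta_ocean + D_i)
# which rearranges to delta_ocean = delta_input - sum_i f_i * D_i.

delta_input = 0.68  # permil, discharge-weighted riverine delta-65Cu

# sink name: (fraction of total output flux, offset from seawater in permil)
sinks = {
    "adsorption to sinking particles": (0.7, -0.8),  # hypothetical
    "sulfide burial in sediments":     (0.3, -0.4),  # hypothetical
}

delta_ocean = delta_input - sum(f * d for f, d in sinks.values())
print(f"steady-state seawater delta-65Cu ~ {delta_ocean:+.2f} permil")  # ~ +1.36
```

With isotopically light sinks (negative offsets), the residual seawater is driven heavier than the input, consistent with the observations above.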
Equilibrium processes that fractionate Cu isotopes include high temperature ion exchange and redox speciation between mineral phases, and low temperature ion exchange between aqueous species or redox speciation between inorganic species. [ 46 ] In riverine and marine environments, 65 Cu/ 63 Cu ratios are driven by preferential adsorption of 63 Cu to particulate matter and preferential binding of 65 Cu to organic complexes. [ 64 ] As a net result, ocean sediments tend to be depleted in 65 Cu relative to the bulk ocean. For example, the downcore δ 65 Cu values of a 760 cm sedimentary core taken from the Central Pacific ocean varied from -0.94 to -2.83‰, significantly lighter than the bulk ocean. [ 51 ]
Due to its relatively short turnover time of ~6 weeks in the human body, [ 74 ] Cu serves as an important indicator of cancer and other rapidly evolving diseases. The serum of cancer patients contains significantly higher levels of Cu than that of healthy patients due to copper chelation by lactate, which is produced via anaerobic glycolysis by tumor cells. [ 55 ] These imbalances in Cu homeostasis are reflected isotopically in the serum and organ tissues of patients with various types of cancer, where the serum of cancer patients is generally 65 Cu-depleted relative to the serum of healthy patients, while organ tumors are generally 65 Cu-enriched. [ 75 ] In one study, [ 57 ] the blood components of patients with hepatocellular carcinomas (HCC) were found to be, on average, depleted in 65 Cu by 0.4‰ relative to the blood of non-cancer patients. In particular, the δ 65 Cu values of the serum in patients with HCC ranged from -0.66 to +0.47‰ (compared to serum δ 65 Cu values of -0.39 to +0.38‰ in matched control patients), and the δ 65 Cu values of the erythrocytes in the HCC patients ranged from -0.07 to +0.92‰ (compared to erythrocyte δ 65 Cu values of +0.57 to +1.24‰ in matched control patients). The liver tumor tissues in the HCC patients were 65 Cu-enriched relative to healthy liver tissue in the same patients (δ 65 Cu liver, HCC = -0.02 to +0.43‰; δ 65 Cu liver, healthy = -0.45 to -0.11‰), and the magnitude of 65 Cu-enrichment mirrored that of the 65 Cu-depletion observed in the cancer patients' serum. Though our understanding of how copper isotopes are fractionated in cancer is still limited, it is clear that copper isotope ratios may serve as a powerful biomarker of cancer presence and progression.
Zinc has five stable isotopes, listed below with their approximate natural abundances: 64 Zn (49.2%), 66 Zn (27.7%), 67 Zn (4.0%), 68 Zn (18.5%) and 70 Zn (0.6%).
The isotopic composition of Zn is reported in delta notation (in ‰):
{\displaystyle \delta ^{x}Zn=\left[{\frac {(^{x}Zn/^{64}Zn)_{sample}}{(^{x}Zn/^{64}Zn)_{std}}}-1\right]\times 1000}
where x Zn is a Zn isotope other than 64 Zn (commonly either 66 Zn or 68 Zn). Standard reference materials used for Zn isotope measurements are JMC 3-0749C, NIST-SRM 683 or NIST-SRM 682.
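Written out, the calculation is straightforward; the sketch below (Python, hypothetical ratios) simply mirrors the definition above.

```python
# Minimal sketch: delta notation for Zn isotopes (hypothetical measured ratios).
# delta-xZn = ((R_sample / R_std) - 1) * 1000 in permil, where R = xZn/64Zn
# and the standard is one of the reference materials named above.

def delta_zn(r_sample: float, r_standard: float) -> float:
    """Return delta-xZn in permil for a measured xZn/64Zn ratio pair."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical 66Zn/64Zn ratios for a sample and the standard
print(f"{delta_zn(0.56525, 0.56502):+.2f} permil")  # ~ +0.41 permil
```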
Because it has just one valence state (Zn 2+ ), zinc is a redox-inert element. The electronic configurations of Zn 0 and Zn 2+ are [Ar]3d 10 4s 2 and [Ar]3d 10 , respectively.
Zinc is present in almost 3,000 human proteins, and thus is essential for nearly all cellular functions. [ 76 ] Zn is also a key constituent of enzymes involved in cell regulation. [ 77 ] Consistent with its ubiquitous presence, total cellular Zn concentrations are typically very high (~200 μM), [ 78 ] while the concentrations of free Zn ions in the cytoplasms of cells can be as low as a few hundred picomolar, [ 79 ] [ 80 ] maintained within a narrow range to avoid deficiency and toxicity. One feature of Zn that makes it so critical in cellular biology is its flexibility in coordination to different numbers and types of ligands . Zn can coordinate with anywhere between three and six N-, O- and S-containing ligands (such as histidine, glutamic acid, aspartic acid and cysteine), resulting in a large number of possible coordination chemistries . Zn tends to bind to metal sites of proteins with relatively high affinities compared to other metal ions which, aside from its important functions in enzymatic reactions, partly explains its ubiquitous presence in cellular enzymes. [ 78 ]
Zn is present in the active sites of most hydrolytic enzymes, and is used as an electrophilic catalyst to activate water molecules that ultimately hydrolyze chemical bonds. Examples of zinc-based enzymes include superoxide dismutase (SOD) , metallothionein , carbonic anhydrase , Zn finger proteins , alcohol dehydrogenase and carboxypeptidase .
Relatively little is known about isotopic fractionation of zinc by biological processes, but several studies have shown that Zn isotopes fractionate during surface adsorption , intracellular uptake processes and speciation. Many organisms, including certain species of fish, plants and marine phytoplankton , have both high- and low-affinity Zn transport systems, which appear to fractionate Zn isotopes differently. A study by John et al. [ 81 ] observed apparent isotope effects associated with Zn uptake by the marine diatom Thalassiosira oceanica of -0.2‰ for high-affinity uptake (at low Zn concentrations) and -0.8‰ for low-affinity uptake (at high Zn concentrations). Additionally, in this study, unwashed cells were enriched in 66 Zn, indicating preferential adsorption of 66 Zn to the extracellular surfaces of T. oceanica . Results from John et al. [ 81 ] demonstrating apparent discrimination against the heavy isotope ( 66 Zn) during uptake conflict with results by Gélabert et al. [ 82 ] in which marine phytoplankton and freshwater periphytic organisms preferentially took up 66 Zn from solution. The latter authors explained these results as due to a preferential partitioning of 66 Zn into a tetrahedrally coordinated structure (i.e., with carboxylate, amine or silanol groups on or inside the cell) over an octahedral coordination with six water molecules in the aqueous phase, consistent with quantum mechanical predictions. Kafantaris and Borrok [ 83 ] grew model organisms B. subtilis , P. mendocina and E. coli , as well as a natural bacterial consortium collected from soil, on high and low concentrations of Zn. In the high [Zn] condition, the average fractionation of Zn isotopes imparted by cellular surface adsorption was +0.46‰ (i.e., 66 Zn was preferentially adsorbed), while fractionation upon intracellular incorporation varied from -0.2 to +0.5‰ depending on the bacterial species and growth phase. Empirical models of the low [Zn] condition estimated larger Zn isotope fractionation factors for surface adsorption ranging from +2 to +3‰. Overall, Zn isotope ratios in microbes appear to be driven by a number of complex factors including surface interactions, bacterial metal metabolism and metal speciation, but by understanding the relative contributions of these factors to Zn isotope signals, one can use Zn isotopes to investigate metal-binding pathways operating in natural communities of microbes.
The concentration of Zn in bulk silicate Earth is ~55 ppm, [ 50 ] while its average concentration in fresh mid-oceanic ridge basalt (MORB) glass is ~87 ppm. [ 51 ] Like Cu, Zn commonly associates with Fe to form a variety of zinc sulfide minerals such as sphalerite . Additionally, Zn associates with carbonates and hydroxides to form numerous diverse minerals (e.g., smithsonite , sweetite , etc.). In mafic and ultramafic rocks, Zn tends to concentrate in oxides such as spinel and magnetite . In freshwater, Zn predominantly complexes with water to form an octahedrally coordinated aqua ion [Zn(H 2 O) 6 ] 2+ . In seawater, Cl − ions replace up to four water molecules in the Zn aqua ion, forming [ZnCl(H 2 O) 5 ] + , [ZnCl 2 (H 2 O) 4 ] 0 and [ZnCl 4 (H 2 O) 2 ] − . [ 84 ] [ 85 ]
The analytical pipeline for preparation of sample material for Zn isotope measurements is similar to that of Cu, consisting of digestion of host material or concentration from seawater, isolation and purification via anion-exchange chromatography , removal of ions of interfering mass (in particular, 64 Ni) and isotope measurement via MC-ICP-MS (see Copper Isotope Measurement section for more details).
As with Cu, the field of Zn isotope biogeochemistry is still in a relatively early stage, so the Zn isotope compositions of materials in the environment are not well-documented. However, based on a compilation of some reported measurements, it appears that Zn isotope ratios do not vary widely among environmental materials (e.g., plants, minerals, seawater, etc.), as δ 66 Zn values of materials typically fall within a range of -1 to +1‰.
Zn isotope ratios vary between individual blood components, bones and the different organs in humans, though in general, δ 66 Zn values fall within a narrow range. In the blood of healthy individuals, the Zn isotopic composition of erythrocytes is typically ~0.3‰ lighter than that of serum, and no significant differences in erythrocyte or serum δ 66 Zn values exist between men and women. For example, in the blood of 49 healthy blood donors, the average erythrocyte δ 66 Zn value was +0.44 ± 0.33‰, while that of serum was +0.17 ± 0.26‰. [ 56 ] In a separate study on 29 donors, a similar average δ 66 Zn value of +0.29 ± 0.27‰ was obtained for the patients' serum. [ 86 ] Additionally, in a small sample set of volunteers, whole blood δ 66 Zn values were ~+0.15‰ higher for vegetarians than for omnivores, [ 87 ] suggesting diet plays an important role in driving Zn isotope compositions in the human body.
Zn isotope ratios vary on small scales throughout the terrestrial biosphere. Zn is released into soils during mineral weathering, and isotopes of Zn fractionate upon interaction with mineral and organic components in the soil. In 5 soil profiles collected from Iceland (all derived from the same parent basalt), soil δ 66 Zn values varied from +0.10 to +0.35‰, and the organic-rich layers were 66 Zn-depleted relative to the mineral-rich layers, likely due to contribution by isotopically light organic matter and Zn loss by leaching. [ 88 ]
Isotopic discrimination of Zn varies in different components of higher plants, likely due to the various processes involved in Zn uptake, binding, transport, diffusion, speciation and compartmentalization. For example, Weiss et al. [ 61 ] observed heavier δ 66 Zn values in the roots of several plants (rice, lettuce and tomato) relative to the bulk solution in which the plants were grown, and the shoots of those plants were 66 Zn-depleted relative to both their roots and bulk solution. Furthermore, Zn isotopes partition differently between different Zn-ligand complexes, so the form of Zn incorporated by organisms in the terrestrial biosphere plays a role in driving Zn isotope compositions of the organisms. In particular, based on ab initio calculations, Zn-phosphate complexes are expected to be isotopically heavier than Zn-citrates, Zn-malates and Zn-histidine complexes by 0.6 to 1‰. [ 89 ] [ 90 ]
The discharge- and [Zn]-weighted average δ 66 Zn value of rivers throughout the world is +0.33‰. [ 73 ] In particular, the average δ 66 Zn values of the Kalix and Chang Jiang rivers are +0.64 and +0.56‰, respectively. The Amazon , Missouri and Brahmaputra rivers have average δ 66 Zn values near +0.30‰, and the average δ 66 Zn value of the Nile river is +0.21‰. [ 73 ]
In general, δ 66 Zn values of various rocks and minerals do not appear to significantly vary. The δ 66 Zn value of bulk silicate Earth (BSE) is +0.28 ± 0.05‰. [ 91 ] Fractionation of Zn isotopes by igneous processes is generally insignificant, and δ 66 Zn values of basalt fall within the range of +0.2 to +0.3‰, [ 68 ] [ 69 ] [ 70 ] encompassing the value for BSE. δ 66 Zn values of clay minerals from diverse environments and of diverse ages have been found to fall within the same range as basalts, [ 69 ] suggesting negligible fractionation between the basaltic precursors and sedimentary materials. Carbonates appear to be more 66 Zn-enriched than other sedimentary and igneous rocks. For example, the δ 66 Zn value of a limestone core taken from the Central Pacific was +0.6‰ at the surface and increased to +1.2‰ with depth. [ 92 ] The Zn isotopic compositions of various ores are not well-characterized, but smithsonites and sphalerites (Zn carbonates and Zn sulfides, respectively) collected from various localities in Europe had δ 66 Zn values ranging from -0.06 to +0.69‰, with smithsonite potentially slightly heavier by 0.3‰ than sphalerite. [ 68 ] [ 69 ]
Zn is an essential biological nutrient in the oceans, and its concentration is largely controlled by uptake by phytoplankton and remineralization . In addition to its critical role in many metalloenzymes (see Zinc Biology section), Zn is an important component of the carbonate shells of foraminifera [ 93 ] and siliceous frustules in diatoms . [ 94 ] The main inputs of Zn to the ocean are thought to be from rivers and dust. [ 73 ] In some photic zones in the ocean, Zn is a limiting nutrient for phytoplankton, [ 95 ] and thus its concentration in surface waters serves as one control on marine primary productivity . Zn concentrations are extremely low in the surface ocean (<0.1 nM) but are maximal at depth (~2 nM in the deep Atlantic; ~10 nM in the deep Pacific), indicating a deep regeneration cycle. [ 65 ] [ 66 ] The deep/surface ratio of Zn is typically on the order of 100, significantly larger than that observed for Cu.
A multitude of complex processes fractionate Zn isotopes in the marine environment. As seen with copper isotopes, the bulk isotopic composition of zinc in the oceans (δ 66 Zn = +0.5‰) is heavier than that of the riverine input (δ 66 Zn = +0.3‰), reflecting equilibrium, biological and other processes that affect Zn isotope ratios in the ocean. [ 73 ] [ 66 ] In the surface ocean, phytoplankton preferentially take up 64 Zn, and as a result have average δ 66 Zn values of ~+0.16‰ (i.e., 0.34‰ lighter than the bulk ocean). [ 69 ] This preferential removal of 64 Zn by photosynthetic marine organisms in the photic zone is most prominent in the spring and summer seasons when primary productivity is highest, and the seasonal variability of Zn isotope ratios is reflected in the δ 66 Zn values of settling materials, which are heavier (e.g., by ~+0.20‰ in the Atlantic Ocean) during spring and summer than during the colder seasons. [ 69 ] Additionally, the surface layers of FeMn-nodules are 66 Zn enriched at high-latitudes (average δ 66 Zn = +1‰), while δ 66 Zn values of low-latitude samples are smaller and more variable (spanning +0.5 to +1‰). [ 69 ] This observation has been interpreted as due to high levels of Zn consumption and preferential uptake of 64 Zn above the seasonal thermocline at high latitudes during warmer seasons, and transfer of this heavy δ 66 Zn signal to the settling sedimentary Fe-Mn hydroxides. [ 69 ]
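The seasonal drawdown described above is often approximated as closed-system Rayleigh fractionation, in which the residual dissolved pool grows isotopically heavier as uptake proceeds. A schematic sketch (Python), assuming a hypothetical uptake fractionation of -0.34‰:

```python
# Minimal sketch: closed-system Rayleigh model for Zn uptake (hypothetical values).
# Residual dissolved pool: delta_res = delta_0 + eps * ln(f), where f is the
# fraction of dissolved Zn remaining and eps the uptake fractionation.

import math

delta_0 = 0.50   # permil, initial dissolved delta-66Zn
eps = -0.34      # permil, apparent fractionation on uptake (64Zn preferred)

for f in (1.0, 0.8, 0.5, 0.2):
    delta_res = delta_0 + eps * math.log(f)
    print(f"fraction remaining {f:.1f}: residual delta-66Zn = {delta_res:+.2f} permil")
```

As f decreases through a bloom, the dissolved pool, and materials that later scavenge it, record progressively heavier δ 66 Zn values, qualitatively matching the spring and summer signal in settling materials.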
Sources and sinks for Zn isotopes are further highlighted in the vertical profile of 66 Zn/ 64 Zn in the water column. In the upper 2,000 m of the Atlantic Ocean, δ 66 Zn values are highly variable near the surface (δ 66 Zn = +0.05 to +0.33‰) [ 66 ] due to biological uptake and other surface processes, then gradually increase to ~+0.50‰ at 2,000 m depth. [ 66 ] Potential sinks for light Zn isotopes, which enrich the residual bulk Zn isotope ratios in the ocean, include binding to and burial with sinking particulate matter, as well as Zn sulfide precipitation in buried sediments. [ 73 ] [ 96 ] As a result of preferential burial of 64 Zn over the heavier Zn isotopes, sediments in the ocean are generally isotopically lighter than bulk seawater. For example, δ 66 Zn values in 8 sedimentary cores from three different continental margins were depleted in 66 Zn relative to the bulk ocean (δ 66 Zn cores = -0.15 to +0.2‰), and furthermore the vertical profiles of δ 66 Zn values in the cores showed no downcore isotopic variability, [ 96 ] suggesting diagenesis does not significantly fractionate Zn isotopes.
Zn isotopes may be useful as a tracer for breast cancer. Relative to non-cancerous patients, breast cancer patients are known to have significantly higher concentrations of Zn in their breast tissue, but lower concentrations in their blood serum and erythrocytes, due to overexpression of Zn transporters in breast cancer cells. [ 97 ] [ 98 ] [ 99 ] [ 100 ] Consistent with these body-wide shifts in Zn homeostasis, δ 66 Zn values in breast cancer tumors of 5 patients were found to be anomalously light (varying from -0.9 to -0.6‰) relative to healthy tissue in 3 breast cancer patients and 1 healthy control (δ 66 Zn = -0.5 to -0.3‰). [ 101 ] In this study, δ 66 Zn values of blood and serum were not found to be significantly different between cancerous and non-cancerous patients, suggesting an unknown isotopically heavy pool of Zn must exist in cancer patients. Though results from this study are promising regarding the use of Zn isotope ratios as a biomarker for breast cancer, a mechanistic understanding of how Zn isotopes fractionate during tumor formation in breast cancer is still lacking. Fortunately, increasing attention is being devoted to the use of stable metal isotopes as tracers of cancer and other diseases, and the usefulness of these isotope systems in medical applications will become more apparent in the next few decades. | https://en.wikipedia.org/wiki/Trace_metal_stable_isotope_biogeochemistry |
A trace radioisotope is a radioisotope that occurs naturally in trace amounts (i.e. extremely small). Generally speaking, trace radioisotopes have half-lives that are short in comparison with the age of the Earth , since primordial nuclides tend to occur in larger than trace amounts. Trace radioisotopes are therefore present only because they are continually produced on Earth by natural processes. Natural processes which produce trace radioisotopes include cosmic ray bombardment of stable nuclides, ordinary alpha and beta decay of the long-lived heavy nuclides, thorium-232 , uranium-238 , and uranium-235 , spontaneous fission of uranium-238 , and nuclear transmutation reactions induced by natural radioactivity , such as the production of plutonium-239 [ 1 ] and uranium-236 [ 2 ] from neutron capture [ 3 ] by natural uranium .
The elements that occur on Earth only in traces are listed below.
Isotopes of other elements (not exhaustive):
| https://en.wikipedia.org/wiki/Trace_radioisotope
Traceability is the capability to trace something. [ 1 ] In some cases, it is interpreted as the ability to verify the history, location, or application of an item by means of documented recorded identification. [ 2 ]
Other common definitions include the capability (and implementation) of keeping track of a given set or type of information to a given degree, or the ability to chronologically interrelate uniquely identifiable entities in a way that is verifiable.
Traceability is applicable to measurement, supply chain, software development, healthcare and security.
The term measurement traceability or metrological traceability is used to refer to an unbroken chain of comparisons relating an instrument 's measurements to a known standard . Calibration to a traceable standard can be used to determine an instrument's bias, precision , and accuracy . It may also be used to show a chain of custody —from current interpretation of evidence to the actual evidence in a legal context, or history of handling of any information.
In many countries, national standards for weights and measures are maintained by a National Metrological Institute (NMI) which provides the highest level of standards for the calibration / measurement traceability infrastructure in that country. Examples of government agencies include the National Physical Laboratory, UK (NPL), the National Institute of Standards and Technology (NIST) in the USA, the Physikalisch-Technische Bundesanstalt (PTB) in Germany, the Istituto Nazionale di Ricerca Metrologica (INRiM) in Italy, and the National Research Council of Canada (NRC). As defined by NIST, "Traceability of measurement requires the establishment of an unbroken chain of comparisons to stated references each with a stated uncertainty."
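Each link in such a chain of comparisons carries a stated uncertainty, and when the contributions are independent they are commonly combined in quadrature to give the combined standard uncertainty of the final measurement. A minimal sketch (Python) with hypothetical values:

```python
# Minimal sketch: combining independent uncertainties along a calibration chain
# by root-sum-of-squares (hypothetical standard uncertainties, arbitrary units).

import math

chain_uncertainties = {
    "national standard":     0.002,
    "reference standard":    0.005,
    "working standard":      0.010,
    "instrument under test": 0.020,
}

u_combined = math.sqrt(sum(u ** 2 for u in chain_uncertainties.values()))
print(f"combined standard uncertainty ~ {u_combined:.3f}")    # ~ 0.023
print(f"expanded uncertainty (k=2)    ~ {2 * u_combined:.3f}")  # ~ 0.046
```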
A clock providing traceable time is traceable to a time standard such as Coordinated Universal Time or International Atomic Time . The Global Positioning System is a source of traceable time.
Within a product's supply chain , traceability may be both a regulatory and an ethical or environmental issue. [ 3 ] Traceability is increasingly becoming a core criterion for sustainability efforts related to supply chains wherein knowing the producer, workers and other links stands as a necessary factor that underlies credible claims of social, economic, or environmental impacts. [ 4 ] Environmentally friendly retailers may choose to make information regarding their supply chain freely available to customers, illustrating the fact that the products they sell are manufactured in factories with safe working conditions, by workers that earn a fair wage, using methods that do not damage the environment. [ 5 ]
In regard to materials , traceability refers to the capability to associate a finished part with destructive test results performed on material from the same ingot with the same heat treatment, or to associate a finished part with results of a test performed on a sample from the same melt identified by the unique lot number of the material. Destructive tests typically include chemical composition and mechanical strength tests. A heat number is usually marked on the part or raw material which identifies the ingot it came from, and a lot number may identify the group of parts that experienced the same heat treatment (i.e., were in the same oven at the same time). Material traceability is important to the aerospace, nuclear, and process industry because they frequently make use of high strength materials that look identical to commercial low strength versions. In these industries, a part made of the wrong material is called "counterfeit", even if the substitution was accidental.
This same practice extends throughout industries using military hardware, including the fastener industry. [ 6 ]
In logistics , traceability refers to the capability for tracing goods along the distribution chain on a batch number or series number basis. Traceability is an important aspect for example in the automotive industry, where it makes recalls possible, or in the food industry where it contributes to food safety .
The international standards organization EPCglobal under GS1 has ratified the EPCglobal Network standards (especially the EPC Information Services EPCIS standard) which codify the syntax and semantics for supply chain events and the secure method for selectively sharing supply chain events with trading partners. These standards for traceability have been used in successful deployments in many industries and there are now a wide range of products that are certified as being compatible with these standards.
In food processing ( meat processing , fresh produce processing), the term traceability refers to the recording, by means of barcodes , RFID tags and other tracking media, of all movement of product and steps within the production process. One of the key reasons this is such a critical point is in instances where an issue of contamination arises and a recall is required. Where traceability has been closely adhered to, it is possible to identify, by precise date/time and exact location, which goods must be recalled and which are safe, potentially saving millions of dollars in the recall process. Traceability within the food processing industry is also utilised to identify key high production and quality areas of a business, versus those of low return, and where points in the production process may be improved.
In food processing software , traceability systems imply the use of a unique piece of data (e.g., order date/time or a serialized sequence number, generally through the use of a barcode / RFID ) which can be traced through the entire production flow, linking all sections of the business, including suppliers and future sales through the supply chain. Messages and files at any point in the system can then be audited for correctness and completeness, using the traceability software to find the particular transaction and/or product within the supply chain.
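As a toy illustration of such a traceability key, the sketch below (Python, hypothetical data model, not any particular commercial system) logs production events against a lot number and retrieves the full trail on demand, as would be needed in a recall.

```python
# Minimal sketch: lot-number traceability log (hypothetical data model).
# Each production step appends an event keyed by lot number; a recall query
# then returns the complete, ordered trail for that lot.

from collections import defaultdict
from datetime import datetime, timezone

events = defaultdict(list)  # lot number -> ordered list of event records

def record(lot: str, step: str, location: str) -> None:
    events[lot].append({
        "time": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "location": location,
    })

def trace(lot: str) -> list:
    """Return the full production trail for a lot (e.g., for a recall)."""
    return events[lot]

record("LOT-2024-0001", "goods received", "supplier A dock")
record("LOT-2024-0001", "processing", "line 3")
record("LOT-2024-0001", "dispatch", "warehouse 2")
for event in trace("LOT-2024-0001"):
    print(event["time"], event["step"], event["location"])
```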
In food systems , ISO 22005, as part of the ISO 22000 family of standards , has been developed to define the principles for food traceability and specifies the basic requirements for the design and implementation of a feed and food traceability system. It can be applied by an organization operating at any step in the feed and food chain.
The European Union 's General Food Law came into force in 2002, making traceability compulsory for food and feed operators and requiring those businesses to implement traceability systems. The EU introduced its Trade Control and Expert System, or TRACES , in April 2004. The system provides a central database to track movement of animals within the EU and from third countries. [ 7 ]
Australia has its National Livestock Identification System to keep track of livestock from birth to slaughterhouse.
India has started taking initiatives for setting up traceability systems at government and corporate levels. Grapenet, [ 8 ] an initiative by the Agricultural and Processed Food Products Export Development Authority (APEDA), Ministry of Commerce, Government of India, is an example in this direction. GrapeNet is an internet-based traceability software system for monitoring fresh grapes exported from India to the European Union. GrapeNet is a first-of-its-kind initiative in India that has put in place an end-to-end system for monitoring pesticide residues, achieving product standardization and facilitating tracing back from pallets to the farm of the Indian grower, through the various stages of sampling, testing, certification and packing. Grapenet won the National Award (Gold) among the winners announced for the best e-Governance initiatives undertaken in India in 2007. [ 9 ]
The Directorate General of Foreign Trade (DGFT), Government of India, through its notification [ 10 ] dated 04.02.2009 relating to an amendment in the Foreign Trade Policy (RE2008), has mandated that export to the European Union is permitted subject to registration with APEDA, thereby making Grapenet mandatory for all exports of fresh grapes from India to Europe.
Uruguay has also designed a system called "Traceability & Electronic Information System of the Beef Industry". [ 11 ]
Within the context of supporting legal and sustainable forest supply chains , traceability has emerged in the last decade as a new tool to verify claims and assure buyers about the source of their materials. Mostly led out of Europe, and targeting countries where illegal logging has been a key problem ( FLEGT countries), timber tracking is now part of daily business for many enterprises and jurisdictions. Full traceability offers advantages for multiple partners along the supply chain beyond certification systems, including:
A number of timber tracking companies are in operation to service global demand.
Enhanced traceability aims to ensure that supply chain data is accurate from the forest to the point of export. Nowadays, there are techniques to predict the geographical provenance of wood and contribute to the fight against illegal logging. [ 12 ]
In systems and software development , the term traceability (or requirements traceability ) refers to the ability to link product requirements back to stakeholders' rationales and forward to corresponding design artifacts, code, and test cases . Traceability supports numerous software engineering activities such as change impact analysis , compliance verification or traceback of code, regression test selection, and requirements validation. It is usually accomplished in the form of a matrix created for the verification and validation of the project. Unfortunately, the practice of constructing and maintaining a requirements trace matrix (RTM) can be very arduous and over time the traces tend to erode into an inaccurate state unless date/time stamped. Alternate automated approaches for generating traces using information retrieval methods have been developed.
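A requirements trace matrix can be represented as a simple mapping from requirement IDs to design artifacts, code, and test cases; the sketch below (Python, hypothetical IDs and file names) supports the forward and backward queries described above.

```python
# Minimal sketch: a requirements trace matrix (hypothetical IDs).
# Maps each requirement to its design artifacts and test cases, supporting
# forward tracing (requirement -> tests) and change impact queries.

rtm = {
    "REQ-001": {"design": ["DES-010"], "code": ["auth.py"],   "tests": ["TC-101", "TC-102"]},
    "REQ-002": {"design": ["DES-011"], "code": ["report.py"], "tests": ["TC-103"]},
}

def tests_for(requirement: str) -> list:
    """Forward trace: which test cases verify this requirement?"""
    return rtm.get(requirement, {}).get("tests", [])

def impacted_by(artifact: str) -> list:
    """Change impact: which requirements touch this code artifact?"""
    return [req for req, links in rtm.items() if artifact in links["code"]]

print(tests_for("REQ-001"))      # ['TC-101', 'TC-102']
print(impacted_by("report.py"))  # ['REQ-002']
```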
In transaction processing software, traceability implies use of a unique piece of data (e.g., order date/time or a serialized sequence number) which can be traced through the entire software flow of all relevant application programs. Messages and files at any point in the system can then be audited for correctness and completeness, using the traceability key to find the particular transaction. This is also sometimes referred to as the transaction footprint.
Patient safety during healthcare service plays an important role in preventing delayed recovery or even mortality and in improving citizens' quality of life, and is considered an indicator of the quality of health services. [ 13 ] Maintaining patient safety is a complex task and involves factors inherent to the environment and human actions. [ 14 ] New technologies facilitate traceability tools for patients and medications. This is particularly relevant for drugs that are considered high risk and cost. [ 15 ] [ 16 ]
Recent research in the healthcare industry emphasizes the significant impact of Blockchain Technology (BCT) on improving the performance of healthcare supply chain management. [ 17 ] It highlights BCT's role in enhancing transparency, data immutability, and efficient management, leading to better cooperation among stakeholders and effective risk mitigation in healthcare services.
The World Health Organization has recognized the importance of traceability for medical products of human origin (MPHO) and urged member states "to encourage the implementation of globally consistent coding systems to facilitate national and international traceability". [ 18 ]
To prevent theft, and assist in locating stolen objects, goods may be marked indelibly or undetectably so that they may be determined to be stolen, and in some cases identified. For example, stolen banknotes can be marked with indelible dye to show that they are stolen, and identified by their unique serial numbers. In a 2016 pilot scheme, announcing that cash machines were fitted with sprayers of SmartWater , an invisible gel detectable for years that marks thieves and their clothing when the machine is broken into or tampered with, was found to reduce theft by 90%. [ 19 ]
Tracers are used in the oil industry in order to qualitatively or quantitatively gauge how fluid flows through the reservoir, [ 1 ] as well as being a useful tool for estimating residual oil saturation . Tracers can be used in either interwell tests or single well tests. [ 2 ] In interwell tests, the tracer is injected at one well along with the carrier fluid (water in a waterflood or gas in a gasflood) and detected at a producing well after some period of time, which can be anything from days to years. In single well tests , the tracer is injected into the formation from a well and then produced out the same well. The delay between a tracer that does not react with the formation (a conservative tracer) and one that does (a partitioning tracer) will give an indication of residual oil saturation, a piece of information that is difficult to acquire by other means.
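In a partitioning tracer test, the key observation is the delay (retardation) of the partitioning tracer relative to the conservative one. A minimal sketch (Python) of the standard retardation relation, with hypothetical arrival times and partition coefficient:

```python
# Minimal sketch: residual oil saturation from a partitioning tracer test
# (hypothetical numbers). Retardation factor:
#   R = t_partitioning / t_conservative = 1 + K * Sor / (1 - Sor)
# which rearranges to Sor = (R - 1) / (R - 1 + K).

def residual_oil_saturation(t_part: float, t_cons: float, k: float) -> float:
    """Sor from tracer arrival times and oil/water partition coefficient K."""
    r = t_part / t_cons
    return (r - 1.0) / (r - 1.0 + k)

# Hypothetical: conservative tracer at 30 days, partitioning tracer at 42 days, K = 4
sor = residual_oil_saturation(t_part=42.0, t_cons=30.0, k=4.0)
print(f"Sor ~ {sor:.2f}")  # ~ 0.09
```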
Tracers can be radioactive or chemical, gas or liquid and have been used extensively in the oil industry and hydrology for decades. [ 3 ]
| https://en.wikipedia.org/wiki/Tracer_use_in_the_oil_industry
Tracheal cytotoxin (TCT) is a 921 dalton glycopeptide released by Bordetella pertussis , [ 1 ] Vibrio fischeri (as a symbiosis chemical), [ 2 ] and Neisseria gonorrhoeae (among other peptidoglycan-derived cytotoxins it produces). [ 3 ] It is a soluble piece of peptidoglycan (PGN) found in the cell wall of all gram-negative bacteria , [ 4 ] but only some bacteria species release TCT due to inability to recycle this piece of anhydromuropeptide. [ 5 ]
In 1980, it was discovered that B. pertussis could attach to hamster tracheal epithelial (HTE) cells , and also, that the supernatant from the cultured bacterium could disrupt the cell cycle of uninfected cells . [ 6 ] This prompted the scientists W. E. Goldman, D. G. Klapper, and J. B. Baseman to isolate and characterize a novel substance from B. pertussis supernatant . The novel disaccharide tetrapeptide that they had purified showed toxicity for HTE cells and tracheal ring cultures. Subsequently, they named the newly sequestered molecule tracheal cytotoxin (TCT). [ 7 ]
TCT is a soluble piece of peptidoglycan (PGN) found in the cell wall of all gram-negative bacteria . [ 4 ] Like all PGNs, TCT is composed of a disaccharide and a peptide chain. The IUPAC name for TCT is N-acetylglucosaminyl-1,6-anhydro-N-acetylmuramyl-(L)-alanyl-γ-(D)-glutamyl-mesodiaminopimelyl-(D)-alanine. [ 8 ] It is classified as a DAP ( diaminopimelic acid )-type PGN because the third residue of its peptide chain is diaminopimelic acid. [ citation needed ]
The DAP residue is responsible for directly bonding to the D-alanine peptide of another PGN molecule, thus aiding TCT's attachment within the cell wall . [ 9 ]
The DAP portion of TCT also appears to be important for cytopathogenicity, as analogs lacking DAP show a significant reduction in toxicity . [ 10 ]
Most Gram-negative bacteria keep TCT within the cell wall by using a PGN-transporter protein known as AmpG . However, B. pertussis is not capable of recycling PGNs via AmpG and thus, TCT escapes into the surrounding environment. [ 11 ] [ 5 ] Also, TCT is constitutively released by B. pertussis . [ 4 ]
The first animal-model studies using TCT involved treatment of hamster tracheal cells. These experiments suggested a role for TCT in ciliostasis and the cellular extrusion of ciliated hamster cells. Also, HTE cells had a markedly reduced level of DNA synthesis after treatment with TCT.
While previous studies using murine models reported evidence of TCT causing ciliostasis, in vitro studies using human tracheal cells have shown that TCT does not affect ciliary beat frequency of living cells, but instead causes damage and eventual extrusion of ciliated cells. [ 12 ] In gonorrhea infections, vaginal ciliated epithelial cells have also displayed the same cytopathogenic effects due to TCT recognition. [ 13 ] The extensive damage to ciliated epithelial tissue caused by TCT results in major disruption to the ciliary escalator; an important asset of the host's non-specific defenses. This disruption hinders the host's ability to remove mucous and foreign microbes from the epithelial tissue. Paroxysmal cough, e.g. whooping cough , is a direct symptom of said mucous build-up due to ciliated tissue damage. [ citation needed ]
NOD-1 recognition and the presence of lipooligosaccharide (LOS) are two factors that modulate the effect of TCT. NOD-1 is a pattern recognition receptor that detects peptidoglycan. This receptor reacts weakly to TCT in humans, but robustly in mice. TCT is thought to work synergistically with LOS to mediate an inflammatory response, thus causing damage to ciliated epithelial cells. [ 14 ] Notably, the human pathogens that produce excess TCT and damage cilia ( B. pertussis and N. gonorrhoeae ) both produce LOS in their outer membranes. [ citation needed ]
TCT has been classified as an adjuvant molecule because of the stimulating effects it has on the immune system. Cellular damage associated with TCT is thought to be a result of increased levels of nitric oxide (NO) secretion by mucosal cells as part of an innate defense response to extracellular lipopolysaccharide (LPS) and TCT. [ 15 ] In humans, peptidoglycan recognition proteins, e.g. PGRPIαC, appear to bind with TCT and consequently induce the Tumor Necrosis Factor Receptor (TNFR) pathway. [ 16 ] Studies using murine macrophages have shown that TCT encourages cytokine secretion, probably through the NOD1 receptor. [ 17 ] As a pleiotropic toxin, TCT also acts as a pyrogen and as a stimulant of slow-wave sleep . [ 18 ]
Peptidoglycan recognition protein 4 ( PGLYRP4 ), in mammals (mice), interacts with TCT and reduces damage from pertussis inflammation. [ 19 ] This molecule has similar immune-eliciting properties in Drosophila , where a pair of PGRPs perform the recognition. [ 20 ] | https://en.wikipedia.org/wiki/Tracheal_cytotoxin |
Track automation or sometimes only automation refers to the recording or handling of time-based control data in time-based computer applications such as digital audio workstations , video editing software and computer animation software.
In modern DAWs, virtually every parameter can usually be automated, be it settings for a track's volume, applied filters or a software synthesizer . Example automations in this context include:
To achieve automation, the user either turns knobs or faders on a physical controller connected to the computer, sets keyframes with the mouse between which the computer interpolates , or draws entire data curves. [ 1 ]
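A minimal sketch of the keyframe approach (Python), assuming simple linear interpolation between (time, value) pairs as might be applied to a track's volume:

```python
# Minimal sketch: linear interpolation between automation keyframes
# (hypothetical data). Values outside the keyframe range clamp to the ends.

keyframes = [(0.0, 0.0), (2.0, 1.0), (5.0, 0.25)]  # (seconds, volume 0..1)

def value_at(t: float) -> float:
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            frac = (t - t0) / (t1 - t0)
            return v0 + frac * (v1 - v0)
    return keyframes[-1][1]

print(value_at(1.0))  # 0.5   (halfway up the fade-in)
print(value_at(3.5))  # 0.625 (halfway through the fade-down)
```

Real applications typically offer curved interpolation shapes as well, but the principle is the same.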
The user sets keyframes for, e.g., the position/rotation/size of an object or the position/angle/focus of a camera, and this movement data can be altered over time.
Blending between two clips: the track automation curve affects how one image changes into the other, be it slow or fast, with or without acceleration, or even back and forth if a sine -like wave is used.
A track hub is a structured directory of genomic data, such as gene expression or epigenetic data, viewable over the web with a genome browser . Track hubs are defined by the track hub standard. [ 1 ] Originally developed as part of the UCSC genome browser, they are now supported by Ensembl and BioDalliance. [ 2 ]
Track hubs are a useful and efficient tool for visualizing large data sets. Collections of wiggle plots produced by a transcriptomics study can be organized hierarchically into so-called composite tracks and super-tracks . [ 3 ]
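For orientation, a minimal track hub might consist of three small text files; the sketch below uses hypothetical labels and URLs and shows only a common subset of the settings defined by the track hub standard.

```
# hub.txt -- top-level hub description (hypothetical example)
hub exampleHub
shortLabel Example hub
longLabel Example track hub with one signal track
genomesFile genomes.txt
email user@example.org

# genomes.txt -- one stanza per genome assembly
genome hg38
trackDb hg38/trackDb.txt

# hg38/trackDb.txt -- track definitions for that assembly
track exampleSignal
bigDataUrl http://example.org/hub/hg38/signal.bw
shortLabel Example signal
longLabel Example RNA-seq coverage (bigWig)
type bigWig
visibility full
```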
| https://en.wikipedia.org/wiki/Track_hub
Track technology is an industrial material handling system which uses linear motors to move material along a track. It has been variously termed a smart conveyance system, intelligent track system, industrial transport system, independent cart technology, smart carriage technology, linear or extended or flexible transport system, or simply a conveyor or conveyance platform [ citation needed ] . They are also referred to simply as linear motors or long stator linear motors [ citation needed ] , reflecting the underlying technology of the track (stator) and shuttles.
The following is a list of commercially available track systems by product name:
Track technology is – among other technologies like machine vision and robotics – one of the key enablers for the adaptive machine .
The concept of the adaptive machine also goes beyond track technology to achieve their high levels of flexibility. One complementary technology is the industrial robot, which by definition possesses the same programmable flexibility. Of particular interest is the ability of both robots and track systems to operate safely along with humans in a collaborative environment. This recent development allows for a combination of manual and automated assembly tasks, maintenance and materials replenishment without stopping production. [ 9 ] [ 10 ]
Machine vision can play a pivotal role when integrated into an adaptive machine. Vision can identify individual shuttles and their contents in order to guide them to the appropriate workstations. Vision has long been used to automate robot guidance , inspection, orientation and related tasks.
Given the adaptive machine's flexibility to respond to consumer demand generation , Internet of Things and e-commerce technologies are complementary, providing the connection between internal production resources and commercial systems in a manufacturer's digital business model. | https://en.wikipedia.org/wiki/Track_technology |
A tracked loader or crawler loader is an engineering vehicle consisting of a tracked chassis with a front bucket for digging and loading material. The history of tracked loaders can be defined by three evolutions of their design. Each of these evolutions made the tracked loader a more viable and versatile tool in the excavation industry. These machines are capable of performing nearly every task, but are masters of none. A bulldozer , excavator , or wheeled loader will outperform a tracked loader under specific conditions, but the ability of a tracked loader to perform almost every task on a job site is why it remains a part of many companies' fleets.
The first tracked loaders were built from tracked tractors with custom-built loader buckets. The first loaders were cable-operated like the bulldozers of the era. These tracked loaders lacked the ability to dig in hard ground, but so did the bulldozers of the day. They were mostly used for moving stockpiled material and loading trucks and rail cars.
The first major design change to tracked loaders came with the integration of hydraulic systems. Using hydraulics to power the loader linkages increased the power of the loader. More importantly, the loaders could apply down pressure to the bucket, vastly increasing their ability to dig compacted ground. Most of the tracked loaders were still based on a bulldozer equivalent. The weight of the engine was still on the front half of the tracks along with the heavy loader components. This caused many problems with heavy wear of the front idler wheels and the undercarriage in general. The Caterpillar 983 tracked loader, the second largest tracked loader ever built, was notorious for heavy undercarriage wear.
The hydrostatic drive system was the second major innovation to affect the design of tracked loaders.
Tracked loaders have become sophisticated machines, using hydrostatic transmissions and electro-hydraulic controls to increase efficiency. Until the rise in popularity of excavators, tracked loaders had little competition with regard to digging and loading jobs.
| https://en.wikipedia.org/wiki/Tracked_loader
Tracking in hunting and ecology is the science and art of observing animal tracks and other signs, with the goal of gaining understanding of the landscape and the animal being tracked (the "quarry"). A further goal of tracking is the deeper understanding of the systems and patterns that make up the environment surrounding and incorporating the tracker.
The practice of tracking may focus on, but is not limited to, the patterns and systems of the local animal life and ecology. Trackers must be able to recognize and follow animals through their tracks, signs, and trails, also known as spoor . Spoor may include tracks, scat , feathers, kills, scratching posts, trails, drag marks, sounds, scents, marking posts , feeding signs, the behavior of other animals, habitat cues, and any other clues about the identity and whereabouts of the quarry.
The skilled tracker is able to discern these clues, recreate what transpired on the landscape, and make predictions about the quarry. The tracker may attempt to predict the current location of the quarry and follow the quarry's spoor to that location, in an activity known as trailing.
Prehistoric hunters used tracking principally to gather food. Even in historic times, tracking has been traditionally practiced by the majority of tribal people all across the world. The military and intelligence agencies also use tracking to find enemy combatants in the bush, land, sea, and desert. [ 1 ]
It has been suggested that the art of tracking may have been the first implementation of science , practiced by hunter-gatherers since the evolution of modern humans. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
Apart from knowledge based on direct observations of animals, trackers gain a detailed understanding of animal behavior through the interpretation of tracks and signs. In this way much information can be obtained that would otherwise remain unknown, especially on the behavior of rare or nocturnal animals that are not often seen.
Tracks and signs offer information on undisturbed, natural behavior, while direct observations often influence the animal by the mere presence of the observer. Tracking is therefore a non-invasive method of information gathering, in which potential stress caused to animals can be minimized.
Some of the most important applications of tracking are in hunting and trapping, as well as controlling poaching, ecotourism, environmental education, police investigation, search and rescue, and in scientific research.
The modern science of animal tracking is widely practiced in the fields of wildlife biology, zoology, mammalogy, conservation, and wildlife management. Tracking enables the detection of rare, endangered, and elusive species. The science of tracking is utilized in the study of forest carnivores like the Canada lynx ( Lynx canadensis ) and the wolverine (Gulo gulo). Various measurements of tracks and/or an animal's paws, and subsequent analyses of the data, can also reveal important information about animals' physiology and their behavior. For example, measurements of lynx paws demonstrate their support capacity (on snow) to be double that of the bobcat. [ 7 ]
In order to recognize a specific sign, a tracker often has a preconceived image of what a typical sign looks like. Without preconceived images many signs may be overlooked. However, with a preconceived image of a specific animal's spoor in mind, trackers will tend to 'recognize' spoor in markings made by another animal, or even in random markings. [ 2 ] Their mind will be prejudiced to see what they want to see, and in order to avoid making such errors they must be careful not to reach decisions too soon. Decisions made at a glance can often be erroneous, so when encountering new signs, trackers take their time to study signs in detail. While preconceived images may help in recognizing signs, the tracker must, however, avoid the preconditioned tendency to look for one set of things in the environment to the exclusion of all others. [ citation needed ] [ original research? ]
Trackers will always try to identify the trail positively by some distinguishing mark or mannerism in order not to lose it in any similar spoor. They will look for such features in the footprints as well as for an individual manner of walking. Often hoofs of antelope are broken or have chipped edges, or when the animal is walking it may leave a characteristic scuff mark. Experienced trackers will memorise a spoor and be able to distinguish that individual animal's spoor from others. When following a spoor, trackers will walk next to it, not on it, taking care not to spoil the trail so that it can easily be found again if the spoor is lost. [ citation needed ]
The shadows cast by ridges in the spoor show up best if the spoor is kept between the tracker and the sun. With the sun shining from behind the spoor, the shadows cast by small ridges and indentations in the spoor will be clearly visible. With the sun behind the tracker, however, these shadows will be hidden by the ridges that cast them. Tracking is easiest in the morning and late afternoon, as the shadows cast by the ridges in the spoor are longer and stand out better than at or near midday. As the sun moves higher in the sky, the shadows grow shorter. At midday the spoor may cast no shadows at all, making them difficult to see in the glare of the sunlight. [ citation needed ]
Trackers will never look down at their feet if they can help it, since this will slow them down. By looking up, well ahead of themselves, approximately five to ten meters (15–30 feet) depending on the terrain, they are able to track much faster and with more ease. Unless they need to study the spoor more closely, it is not necessary to examine every sign. If they see a sign ten meters ahead, those in between can be ignored while they look for spoor further on. Over difficult terrain it may not be possible to see signs well ahead, so trackers will have to look at the ground in front of them and move more slowly.
Trackers must also avoid concentrating all their attention on the tracks, thereby ignoring everything around them. Tracking requires varying attention, a constant refocusing between minute details of the track and the whole pattern of the environment.
Although in principle it is possible to follow a trail by simply looking for one sign after the other, this may prove so time-consuming that the tracker will never catch up with the quarry. Instead, trackers place themselves in the position of their quarry in order to anticipate the route it may have taken. [ 2 ] They will thereby be able to decide in advance where they can expect to find signs and thus not waste time looking for them.
Trackers will often look for spoor in obvious places such as openings between bushes, where the animal would most likely have moved. In thick bushes they will look for the most accessible thruways. Where the spoor crosses an open clearing, they will look in the general direction for access ways on the other side of the clearing. If the animal was moving from shade to shade, they will look for spoor in the shade ahead. If their quarry has consistently moved in a general direction, it may be possible to follow the most likely route by focusing on the terrain, and to look for signs of spoor only occasionally. They must, however, always be alert for an abrupt change in direction.
Animals usually make use of a network of paths to move from one locality to another. If it is clear that an animal was using a particular path, this can simply be followed up to the point where it forks, or to where the animal has left the path. Where one of several paths may have been used, trackers must of course determine which path that specific animal used. This may not always be easy, since many animals often use the same paths.
In areas of high animal densities that have much-used animal paths which interlink, it may seem impossible to follow tracks. However, once tracks have been located on an animal path, it is often possible for a tracker to follow the path even though no further tracks are seen. By looking to either side of the path, the tracker can establish if the animal has moved away from the path, and then follow the new trail.
In difficult terrain, where signs are sparse, trackers may have to rely extensively on anticipating the animal's movements. In order to move fast enough to overtake the animal, one may not be able to detect all the signs. Trackers sometimes identify themselves with the animal to such an extent that they follow an imaginary route which they think the animal would most likely have taken, only confirming their expectations with occasional signs. [ 2 ]
When trackers come to hard, stony ground, where tracks are virtually impossible to discern, apart from the odd small pebble that has been overturned, they may move around the patch of hard ground in order to find the spoor in softer ground.
When the trackers lose the spoor, they first search obvious places for signs, choosing several likely access ways through the bush in the general direction of movement. When several trackers work together, they can simply fan out and quarter the ground until one of them finds it. An experienced tracker may be able to predict more or less where the animal was going, and will not waste time in one spot looking for signs, but will rather look for the spoor further ahead. [ 2 ]
Knowledge of the terrain and animal behavior allows trackers to save valuable time by predicting the animal's movements. Once the general direction of movement is established and it is known that an animal path, river or any other natural boundary lies ahead, they can leave the spoor and move to these places, cutting across the trail by sweeping back and forth across the predicted direction in order to pick up tracks a considerable distance ahead. [ 2 ]
To be able to anticipate and predict the movements of an animal, trackers must know the animal and its environment so well that they can identify themselves with that animal. They must be able to visualize how the animal was moving around, and place themselves in its position. If the animal was moving in a straight line at a steady pace, and it is known that there is a waterhole or a pan further ahead, trackers should leave the spoor to look for signs of it at the waterhole or pan. While feeding, an animal will usually move into the wind, going from one bush to another. If the trackers know the animal's favored food, and know moreover how it generally moves, they need not follow its zigzag path, but can leave the spoor in places, moving in a straight course to save time, and pick up the spoor further on. [ 2 ]
Since signs may be fractional or partly obliterated, it may not always be possible to make a complete reconstruction of the animal's movements and activities on the basis of spoor evidence alone. Trackers may therefore have to create a working hypothesis in which spoor evidence is supplemented with hypothetical assumptions based not only on their knowledge of animal behavior, but also on their creative ability to solve new problems and discover new information. The working hypothesis is often a reconstruction of what the animal was doing, how fast it was moving, when it was there, where it was going to and where it might be at that time. Such a working hypothesis enables the trackers to predict the animal's movements. As new information is gathered, they may have to revise their working hypothesis, creating a better reconstruction of the animal's activities. Anticipating and predicting an animal's movements, therefore, involves a continuous process of problem-solving, creating new hypotheses and discovering new information. [ 2 ]
In order to come close to an animal, trackers must remain undetected not only by the animal, but also by other animals that may alert it. Moving as quietly as possible, trackers will avoid stepping on dry leaves and twigs, and take great care when moving through dry grass.
If the trackers are in close proximity to the animal, it is important that they remain downwind of it, that is, in a position where the wind is blowing away from the animal in the direction of the tracker. They must never be in a position where their scent could be carried in the wind towards the animal and thereby alert it. It is also important that the animal does not have the opportunity to cross their tracks, since the lingering human scent will alert it. Most animals prefer to keep the wind in their faces when traveling so that they can scent danger ahead of them. Trackers will therefore usually be downwind from them as they approach the animals from behind. The wind direction may, however, have changed. If the wind direction is unfavorable, the trackers may have to leave the spoor to search for their quarry from the downwind side. [ 2 ]
As the trackers get closer to the animal, they must make sure that they see it before it sees them. Some trackers maintain that an animal keeps looking back down its own trail, always on the alert for danger coming from behind. When the spoor is very fresh, trackers may have to leave the spoor so that the animal does not see them first. Animals usually rest facing downwind, so that they can see danger approaching from the downwind side, while they can smell danger coming from behind them. An animal may also double back on its spoor and circle downwind before settling down to rest. [ 2 ] A predator following its trail will move past the resting animal on the upwind side before realizing that the animal had doubled back, and the resting animal will smell the predator in time to make its escape.
When stalking an animal, trackers use the cover of bushes, going down on their hands and knees where necessary. In long grass they go down on their stomachs pulling themselves forward with their elbows. The most important thing is not to attract attention by sudden movements. Trackers take their time, moving slowly when the animal is not looking, and keeping still when the animal is looking in their direction. When stalking an animal, trackers must also be careful not to disturb other animals. A disturbed animal will give its alarm signal, thereby alerting all animals in the vicinity, including the animal being tracked down. | https://en.wikipedia.org/wiki/Tracking_(hunting) |
Tracklib is a Swedish music service that allows producers to sample original music and clear the samples for official use. The platform was founded with the aim of solving legal and ethical issues surrounding sampling and music clearances. The platform has previously been used to sample and clear tracks for commercial releases by J. Cole , Lil Wayne , DJ Khaled , Mary J Blige , Brockhampton , and A-Reece, among others. [ 1 ] [ 2 ] [ 3 ]
Tracklib is based in Stockholm , Sweden and was originally founded in 2014. After an invite-only beta version in 2017, the music service officially launched to the public in April 2018. In May 2020, Tracklib changed their service to a subscription model . [ 4 ] [ 5 ] [ 6 ]
On 20 May 2025, Tracklib released a mobile app for their service. [ 7 ]
The catalog of Tracklib consists of original master recordings and stems . Each track belongs to one of three tiers (Category A, B, or C), each with its own purchase and clearance costs. Users can browse and listen to all music before downloading it in WAV format to use in a digital audio workstation (DAW) such as Ableton , Reason , or FL Studio . In 2019, Tracklib developed and launched a technology for users to select and preview loops . [ 8 ] [ 9 ] Tracklib functions as an intermediary between record labels , publishers , copyright owners , and artists . This allows users to clear all music and purchase a license for official usage of the selected recording(s). The difference from other music services such as Splice and Loopmasters is that Tracklib only includes original master recordings and stems . All music is previously released and no royalty-free sounds or sample packs are available on Tracklib. [ 10 ] [ 11 ] [ 12 ]
Original master recordings on Tracklib include music from artists such as Bob James , Louis Armstrong , Billie Holiday , Sly and Robbie , and Ray Charles , across genres such as jazz , R&B/soul , reggae , classical music , rock music , and hip hop . The catalog also includes previously unreleased recordings by Isaac Hayes . [ 13 ] [ 14 ] [ 15 ] [ 16 ]
Other notable artists with songs containing Tracklib samples are Firebeatz , A-Trak , Young M.A , $NOT & Statik Selektah . [ 19 ] [ 20 ]
Tracklib's advisory board consists of producers Prince Paul , Erick Sermon , and Drumma Boy , later joined by producer Zaytoven in 2018 and Scott Storch in 2020. Former Spotify executives Petra Hansson and Niklas Ivarsson joined the advisory board in 2019. [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/Tracklib |
The Tractatus Logico-Philosophicus (widely abbreviated and cited as TLP ) is the only book-length philosophical work by the Austrian philosopher Ludwig Wittgenstein that was published during his lifetime. The project had a broad goal: to identify the relationship between language and reality, and to define the limits of science. [ 1 ] Wittgenstein wrote the notes for the Tractatus while he was a soldier during World War I and completed it during a military leave in the summer of 1918. It was originally published in German in 1921 as Logisch-Philosophische Abhandlung (Logical-Philosophical Treatise). In 1922 it was published together with an English translation and a Latin title, which was suggested by G. E. Moore as homage to Baruch Spinoza 's Tractatus Theologico-Politicus (1670).
The Tractatus is written in an austere and succinct literary style; it contains almost no arguments as such, but consists of 525 hierarchically numbered declarative statements.
The Tractatus is recognized by philosophers as one of the most significant philosophical works of the twentieth century. It was influential chiefly amongst the logical positivist philosophers of the Vienna Circle , such as Rudolf Carnap and Friedrich Waismann, and informed Bertrand Russell 's article "The Philosophy of Logical Atomism".
Wittgenstein's later works, notably the posthumously published Philosophical Investigations , criticised many of his ideas in the Tractatus . There is, however, a common thread in Wittgenstein's thinking, in spite of those criticisms of the Tractatus in later writings. Indeed, the contrast between 'early' and 'late' Wittgenstein has been countered by such scholars as Pears (1987) and Hilmy (1987). For example, a relevant, yet neglected aspect of continuity in Wittgenstein's thought concerns 'meaning' as 'use'. Connecting his early and later writings on 'meaning as use' is his appeal to direct consequences of a term or phrase, reflected, for example, in his speaking of language as a 'calculus'. These passages are crucial to Wittgenstein's view of 'meaning as use', though they have been widely neglected in scholarly literature. The centrality and importance of these passages are corroborated and augmented by renewed examination of Wittgenstein's Nachlaß , as is done in "From Tractatus to Later Writings and Back – New Implications from the Nachlass " (de Queiroz 2023).
The Tractatus employs an austere and succinct literary style. The work contains almost no arguments as such, but rather consists of declarative statements, or passages, that are meant to be self-evident. The statements are hierarchically numbered, with seven basic propositions at the primary level (numbered 1–7), with each sub-level being a comment on or elaboration of the statement at the next higher level (e.g., 1, 1.1, 1.11, 1.12, 1.13). In all, the Tractatus comprises 525 numbered statements.
When Bertrand Russell suggested to Wittgenstein that he ought to provide arguments and not merely state what he thinks, Wittgenstein replied that this would spoil the book's beauty and would be like touching a flower with muddy hands. [ 2 ]
The Tractatus is recognized by philosophers as a significant philosophical work of the twentieth century and was influential chiefly amongst the logical positivist philosophers of the Vienna Circle , such as Rudolf Carnap and Friedrich Waismann . Bertrand Russell's article "The Philosophy of Logical Atomism" is presented as a working out of ideas that he had learned from Wittgenstein. [ 3 ]
The English translation of the Tractatus was published with an introduction by Bertrand Russell. [ 4 ] Wittgenstein was unimpressed by Russell's introduction, considering it superficial and a misunderstanding of his work. [ 5 ]
There are seven main propositions in the text. These are:
1. The world is all that is the case.
2. What is the case (a fact) is the existence of states of affairs.
3. A logical picture of facts is a thought.
4. A thought is a proposition with a sense.
5. A proposition is a truth-function of elementary propositions. (An elementary proposition is a truth-function of itself.)
6. The general form of a truth-function is $[\bar{p}, \bar{\xi}, N(\bar{\xi})]$. This is the general form of a proposition.
7. Whereof one cannot speak, thereof one must be silent.
The first of these sections is very brief; proposition 1 is elaborated by only a handful of statements, such as 1.1, "The world is the totality of facts, not of things."
This, along with the beginning of proposition 2, can be taken to be the relevant parts of Wittgenstein's metaphysical view that he will use to support his picture theory of language.
These sections concern Wittgenstein's view that the sensible, changing world we perceive does not consist of substance but of facts. Proposition two begins with a discussion of objects, form and substance.
This epistemic notion is further clarified by a discussion of objects or things as metaphysical substances.
His use of the word "composite" in 2.021 can be taken to mean a combination of form and matter, in the Platonic sense .
The notion of a static unchanging Form and its identity with Substance represents the metaphysical view that has come to be held as an assumption by the vast majority of the Western philosophical tradition since Plato and Aristotle , as it was something they agreed on. "[W]hat is called a form or a substance is not generated." [ 6 ] (Z.8 1033b13)
The opposing view states that unalterable Form does not exist, or at least if there is such a thing, it contains an ever changing, relative substance in a constant state of flux. Although this view was held by Greeks like Heraclitus , it has existed only on the fringe of the Western tradition since then. It is commonly known now only in "Eastern" metaphysical views where the primary concept of substance is Qi , or something similar, which persists through and beyond any given Form. The former view is the one held by Wittgenstein.
Although Wittgenstein largely disregarded Aristotle (Ray Monk's biography suggests that he never read Aristotle at all) it seems that they shared some anti-Platonist views on the universal/particular issue regarding primary substances. He attacks universals explicitly in his Blue Book.
"The idea of a general concept being a common property of its particular instances connects up with other primitive, too simple, ideas of the structure of language. It is comparable to the idea that properties are ingredients of the things which have the properties; e.g. that beauty is an ingredient of all beautiful things as alcohol is of beer and wine, and that we therefore could have pure beauty, unadulterated by anything that is beautiful." [ 7 ]
And Aristotle agrees: "The universal cannot be a substance in the manner in which an essence is", [ 6 ] (Z.13 1038b17) as he begins to draw the line and drift away from the concepts of universal Forms held by his teacher Plato.
The concept of Essence, taken alone, is a potentiality, and its combination with matter is its actuality. "First, the substance of a thing is peculiar to it and does not belong to any other thing" [ 6 ] (Z.13 1038b10), i.e. not universal, and we know this is essence. This concept of form/substance/essence, which we have now collapsed into one, being presented as potential is also, apparently, held by Wittgenstein.
Here ends what Wittgenstein deems to be the relevant points of his metaphysical view and he begins in 2.1 to use said view to support his Picture Theory of Language. "The Tractatus's notion of substance is the modal analogue of Immanuel Kant 's temporal notion. Whereas for Kant, substance is that which 'persists' (i.e., exists at all times), for Wittgenstein it is that which, figuratively speaking, 'persists' through a 'space' of possible worlds." [ 8 ] Whether the Aristotelian notions of substance came to Wittgenstein via Kant, or via Bertrand Russell , or even whether Wittgenstein arrived at his notions intuitively, one cannot but see them.
The further thesis of 2. and 3. and their subsidiary propositions is Wittgenstein's picture theory of language. This can be summed up as follows: the world consists of a totality of facts rather than things; propositions are logical pictures of such facts; and a picture can represent a fact, truly or falsely, only because picture and fact share the same logical form.
The 4s are significant as they contain some of Wittgenstein's most explicit statements concerning the nature of philosophy and the distinction between what can be said and what can only be shown. It is here, for instance, that he first distinguishes between material and grammatical propositions.
A philosophical treatise attempts to say something where nothing can properly be said. It is predicated upon the idea that philosophy should be pursued in a way analogous to the natural sciences ; that philosophers are looking to construct true theories. This sense of philosophy does not coincide with Wittgenstein's conception of philosophy.
Wittgenstein is to be credited with the popularization of truth tables (4.31) and truth conditions (4.431), which now constitute the standard semantic analysis of first-order sentential logic. [ 9 ] [ 10 ] The philosophical significance of such a method for Wittgenstein was that it alleviated a confusion, namely the idea that logical inferences are justified by rules. If an argument form is valid, the conditional joining the conjunction of its premises to its conclusion will be a tautology, and this can be clearly seen in a truth table; it is displayed . The concept of tautology is thus central to Wittgenstein's Tractarian account of logical consequence , which is strictly deductive .
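The truth-table method can be made concrete with a minimal Python sketch (the function names and brute-force enumeration below are illustrative assumptions, not anything from the Tractatus itself): a formula is a tautology exactly when it comes out true on every row of its truth table, and a valid argument form shows up as a tautologous conditional.

```python
from itertools import product

def is_tautology(formula, num_vars):
    """True iff the formula holds on every row of its truth table (cf. TLP 4.31)."""
    return all(formula(*row) for row in product([True, False], repeat=num_vars))

def implies(a, b):
    """Material conditional: a -> b."""
    return (not a) or b

# Modus ponens as a conditional: ((p and (p -> q)) -> q) is a tautology,
# so the argument "p, p -> q, therefore q" is valid -- the table displays it.
print(is_tautology(lambda p, q: implies(p and implies(p, q), q), 2))  # True

# Affirming the consequent is not valid: its conditional is no tautology.
print(is_tautology(lambda p, q: implies(q and implies(p, q), p), 2))  # False
```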
At the beginning of Proposition 6, Wittgenstein postulates the essential form of all sentences. He uses the notation $[\bar{p}, \bar{\xi}, N(\bar{\xi})]$, where $\bar{p}$ stands for the set of all atomic propositions, $\bar{\xi}$ for any selection of propositions, and $N(\bar{\xi})$ for the negation of all the propositions making up $\bar{\xi}$.
Proposition 6 says that any logical sentence can be derived from a series of NOR operations on the totality of atomic propositions. Wittgenstein drew from Henry M. Sheffer 's logical theorem making that statement in the context of the propositional calculus . Wittgenstein's N-operator is a broader infinitary analogue of the Sheffer stroke , which applied to a set of propositions produces a proposition that is equivalent to the denial of every member of that set. Wittgenstein shows that this operator can cope with the whole of predicate logic with identity, defining the quantifiers at 5.52, and showing how identity would then be handled at 5.53–5.532.
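The following short sketch illustrates how joint denial alone suffices to recover the usual connectives. It is a finite-arity toy model only: Wittgenstein's N applies to arbitrary, possibly infinite, sets of propositions, whereas the NOR below takes finitely many arguments.

```python
from itertools import product

def N(*props):
    """Joint denial (NOR): true iff every argument is false (cf. TLP 5.502),
    here restricted to finitely many arguments."""
    return not any(props)

neg = lambda p: N(p)               # ~p      = N(p)
disj = lambda p, q: N(N(p, q))     # p or q  = N(N(p, q))
conj = lambda p, q: N(N(p), N(q))  # p and q = N(N(p), N(q))

# Check the derived connectives against Python's built-ins on every row.
for p, q in product([True, False], repeat=2):
    assert neg(p) == (not p)
    assert disj(p, q) == (p or q)
    assert conj(p, q) == (p and q)
print("joint denial alone recovers negation, disjunction and conjunction")
```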
The subsidiaries of 6. contain more philosophical reflections on logic, connecting to ideas of knowledge, thought, and the a priori and transcendental . The final passages argue that logic and mathematics express only tautologies and are transcendental, i.e. they lie outside of the metaphysical subject's world. In turn, a logically "ideal" language cannot supply meaning , it can only reflect the world, and so, sentences in a logical language cannot remain meaningful if they are not merely reflections of the facts.
From Propositions 6.4–6.54, the Tractatus shifts its focus from primarily logical considerations to what may be considered more traditionally philosophical foci (God, ethics, meta-ethics, death, the will) and, less traditionally along with these, the mystical. The philosophy of language presented in the Tractatus attempts to demonstrate just what the limits of language are – to delineate precisely what can and cannot be sensically said. Among the sensibly sayable, for Wittgenstein, are the propositions of natural science; to the nonsensical, or unsayable, belong those subjects traditionally associated with philosophy – ethics and metaphysics, for instance. [ 11 ] Curiously, on this score, the penultimate proposition of the Tractatus, proposition 6.54, states that once one understands the propositions of the Tractatus, one will recognize that they are senseless, and that they must be thrown away. Proposition 6.54, then, presents a difficult interpretative problem. If the so-called 'picture theory' of meaning is correct, and it is impossible to represent logical form, then the theory, by trying to say something about how language and the world must be for there to be meaning, is self-undermining. This is to say that the 'picture theory' of meaning itself requires that something be said about the logical form sentences must share with reality for meaning to be possible. [ 12 ] This requires doing precisely what the 'picture theory' of meaning precludes. It would appear, then, that the metaphysics and the philosophy of language endorsed by the Tractatus give rise to a paradox: for the Tractatus to be true, it will necessarily have to be nonsense by self-application; but for this self-application to render the propositions of the Tractatus nonsense (in the Tractarian sense), then the Tractatus must be true. [ 13 ]
There are three primarily dialectical approaches to solving this paradox: [ 12 ] 1) the traditionalist, or Ineffable-Truths View; [ 13 ] 2) the resolute, 'new Wittgenstein', or Not-All-Nonsense View; [ 13 ] 3) the No-Truths-At-All View. [ 13 ] The traditionalist approach to resolving this paradox is to hold that Wittgenstein accepted that philosophical statements could not be made, but that nevertheless, by appealing to the distinction between saying and showing, these truths can be communicated by showing. [ 13 ] On the resolute reading, some of the propositions of the Tractatus are withheld from self-application; they are not themselves nonsense, but point out the nonsensical nature of the Tractatus. This view often appeals to the so-called 'frame' of the Tractatus, comprising the preface and proposition 6.54. [ 12 ] The No-Truths-At-All View states that Wittgenstein held the propositions of the Tractatus to be ambiguously both true and nonsensical, at once. While the propositions could not be, by self-application of the attendant philosophy of the Tractatus, true (or even sensical), it was only the philosophy of the Tractatus itself that could render them so. This is presumably what made Wittgenstein compelled to accept the philosophy of the Tractatus as specially having solved the problems of philosophy. It is the philosophy of the Tractatus, alone, that can solve the problems. Indeed, the philosophy of the Tractatus is for Wittgenstein, on this view, problematic only when applied to itself. [ 13 ]
At the end of the text Wittgenstein uses an analogy from Arthur Schopenhauer and compares the book to a ladder that must be thrown away after it has been climbed.
As the last line in the book, proposition 7 has no supplementary propositions. It ends the book with the proposition "Whereof one cannot speak, thereof one must be silent" ( German : Wovon man nicht sprechen kann, darüber muss man schweigen ).
A prominent view set out in the Tractatus is the picture theory, sometimes called the picture theory of language . The picture theory is a proposed explanation of the capacity of language and thought to represent the world. [ 14 ] : p44 Although something need not be a proposition to represent something in the world, Wittgenstein was largely concerned with the way propositions function as representations. [ 14 ]
According to the theory, propositions can "picture" the world as being a certain way, and thus accurately represent it either truly or falsely. [ 14 ] If someone thinks the proposition, "There is a tree in the yard", then that proposition accurately pictures the world if and only if there is a tree in the yard. [ 14 ] : p53 One aspect of pictures which Wittgenstein finds particularly illuminating in comparison with language is the fact that we can directly see in the picture what situation it depicts without knowing if the situation actually obtains. This allows Wittgenstein to explain how false propositions can have meaning (a problem which Russell struggled with for many years): just as we can see directly from the picture the situation which it depicts without knowing if it in fact obtains, analogously, when we understand a proposition we grasp its truth conditions or its sense, that is, we know what the world must be like if it is true, without knowing if it is in fact true (TLP 4.024, 4.431). [ 15 ]
It is believed that Wittgenstein was inspired for this theory by the way that traffic courts in Paris reenact automobile accidents. [ 16 ] : p35 A toy car is a representation of a real car, a toy truck is a representation of a real truck, and dolls are representations of people. In order to convey to a judge what happened in an automobile accident, someone in the courtroom might place the toy cars in a position like the position the real cars were in, and move them in the ways that the real cars moved. In this way, the elements of the picture (the toy cars) are in spatial relation to one another, and this relation itself pictures the spatial relation between the real cars in the automobile accident. [ 14 ] : p45
Pictures have what Wittgenstein calls Form der Abbildung or pictorial form, which they share with what they depict. This means that all the logically possible arrangements of the pictorial elements in the picture correspond to the possibilities of arranging the things which they depict in reality. [ 17 ] Thus if the model for car A stands to the left of the model for car B, it depicts that the cars in the world stand in the same way relative to each other. This picturing relation, Wittgenstein believed, was our key to understanding the relationship a proposition holds to the world. [ 14 ] Although language differs from pictures in lacking direct pictorial mode of representation (e.g., it does not use colors and shapes to represent colors and shapes), still Wittgenstein believed that propositions are logical pictures of the world by virtue of sharing logical form with the reality which they represent (TLP 2.18–2.2). And that, he thought, explains how we can understand a proposition without its meaning having been explained to us (TLP 4.02); we can directly see in the proposition what it represents as we see in the picture the situation which it depicts just by virtue of knowing its method of depiction: propositions show their sense (TLP 4.022). [ 18 ]
However, Wittgenstein claimed that pictures cannot represent their own logical form, they cannot say what they have in common with reality but can only show it (TLP 4.12–4.121). If representation consists in depicting an arrangement of elements in logical space, then logical space itself cannot be depicted since it is itself not an arrangement of anything ; rather logical form is a feature of an arrangement of objects and thus it can be properly expressed (that is depicted) in language by an analogous arrangement of the relevant signs in sentences (which contain the same possibilities of combination as prescribed by logical syntax), hence logical form can only be shown by presenting the logical relations between different sentences. [ 19 ] [ 15 ]
Wittgenstein's conception of representation as picturing also allows him to derive two striking claims: that no proposition can be known a priori – there are no a priori truths (TLP 3.05), and that there is only logical necessity (TLP 6.37). Since all propositions, by virtue of being pictures, have sense independently of anything being the case in reality, we cannot see from the proposition alone whether it is true (as would be the case if it could be known a priori), but we must compare it to reality in order to know that it is true (TLP 4.031 "In the proposition a state of affairs is, as it were, put together for the sake of experiment"). And for similar reasons, no proposition is necessarily true except in the limiting case of tautologies, which Wittgenstein says lack sense (TLP 4.461). If a proposition pictures a state of affairs in virtue of being a picture in logical space, then a non-logical or metaphysical "necessary truth" would be a state of affairs which is satisfied by any possible arrangement of objects (since it is true for any possible state of affairs), but this means that the would-be necessary proposition would not depict anything as being so but will be true no matter what the world is actually like; but if that is the case, then the proposition cannot say anything about the world or describe any fact in it – it would not be correlated with any particular state of affairs, just like a tautology (TLP 6.37). [ 20 ] [ 21 ]
Although Wittgenstein did not use the term himself, his metaphysical view throughout the Tractatus is commonly referred to as logical atomism . While his logical atomism resembles that of Bertrand Russell , the two views are not strictly the same. [ 14 ] : p58
Russell's theory of descriptions is a way of logically analyzing sentences containing definite descriptions without presupposing the existence of an object satisfying the description. According to the theory, a statement like "There is a man to my left" should be analyzed into: "There is some x such that x is a man and x is to my left, and for any y , if y is a man and y is to my left, y is identical to x ". If the statement is true, x refers to the man to my left. [ 22 ]
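Rendered in first-order notation (the predicate abbreviations here are illustrative, not Russell's own symbols), the analysis reads:

$\exists x\,\bigl(\mathrm{Man}(x) \land \mathrm{Left}(x) \land \forall y\,[(\mathrm{Man}(y) \land \mathrm{Left}(y)) \rightarrow y = x]\bigr)$

where $\mathrm{Man}(x)$ abbreviates "x is a man" and $\mathrm{Left}(x)$ abbreviates "x is to my left"; the uniqueness clause is what lets the description pick out a single referent without presupposing that such a referent exists.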
Whereas Russell believed the names (like x ) in his theory should refer to things we can know directly by virtue of acquaintance, Wittgenstein did not believe that there are any epistemic constraints on logical analyses: the simple objects are whatever is contained in the elementary propositions which cannot be logically analyzed any further. [ 14 ] : p63
By objects , Wittgenstein did not mean physical objects in the world, but the absolute base of logical analysis, that can be combined but not divided (TLP 2.02–2.0201). [ 14 ] According to Wittgenstein's logico-atomistic metaphysical system, objects each have a "nature", which is their capacity to combine with other objects. When combined, objects form "states of affairs". A state of affairs that obtains is a "fact". Facts make up the entirety of the world; they are logically independent of one another, as are states of affairs. That is, the existence of one state of affairs (or fact) does not allow us to infer whether another state of affairs (or fact) exists or does not exist. [ 14 ] : pp58–59
Within states of affairs, objects are in particular relations to one another. [ 14 ] : p59 This is analogous to the spatial relations between toy cars discussed above. The structure of states of affairs comes from the arrangement of their constituent objects (TLP 2.032), and such arrangement is essential to their intelligibility, just as the toy cars must be arranged in a certain way in order to picture the automobile accident. [ 14 ]
A fact might be thought of as the obtaining state of affairs that Madison is in Wisconsin, and a possible (but not obtaining) state of affairs might be Madison's being in Utah. These states of affairs are made up of certain arrangements of objects (TLP 2.023). However, Wittgenstein does not specify what objects are. Madison, Wisconsin, and Utah cannot be atomic objects: they are themselves composed of numerous facts. [ 14 ] Instead, Wittgenstein believed objects to be the things in the world that would correlate to the smallest parts of a logically analyzed language, such as names like x . Our language is not sufficiently (i.e., not completely) analyzed for such a correlation, so one cannot say what an object is. [ 14 ] : p60 We can, however, talk about them as "indestructible" and "common to all possible worlds". [ 14 ] Wittgenstein believed that the philosopher's job is to discover the structure of language through analysis. [ 16 ] : p38
Anthony Kenny provides a useful analogy for understanding Wittgenstein's logical atomism : a slightly modified game of chess . [ 14 ] : pp60–61 Just like objects in states of affairs, the chess pieces alone do not constitute the game—their arrangements, together with the pieces (objects) themselves, determine the state of affairs. [ 14 ]
Through Kenny's chess analogy, we can see the relationship between Wittgenstein's logical atomism and his picture theory of representation . [ 14 ] : p61 For the sake of this analogy, the chess pieces are objects, they and their positions constitute states of affairs and therefore facts, and the totality of facts is the entire particular game of chess. [ 14 ]
We can communicate such a game of chess in the exact way that Wittgenstein says a proposition represents the world. [ 14 ] We might say "WR/KR1" to communicate a white rook's being on the square commonly labeled as king's rook 1. Or, to be more thorough, we might make such a report for every piece's position. [ 14 ]
The logical form of our reports must be the same logical form of the chess pieces and their arrangement on the board in order to be meaningful. Our communication about the chess game must have as many possibilities for constituents and their arrangement as the game itself. [ 14 ] Kenny points out that such logical form need not strictly resemble the chess game. The logical form can be had by the bouncing of a ball (for example, twenty bounces might communicate a white rook's being on the king's rook 1 square). One can bounce a ball as many times as one wishes, which means that the ball's bouncing has "logical multiplicity", and can therefore share the logical form of the game. [ 14 ] : p62 A motionless ball cannot communicate this same information, as it does not have logical multiplicity. [ 14 ]
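Kenny's point about logical multiplicity can be made concrete with a toy sketch (the piece codes, algebraic square names, and enumeration scheme below are illustrative assumptions; the "WR/KR1" report in the text uses older descriptive notation):

```python
# Two superficially different sign-systems -- a written report and a count of
# ball bounces -- can each encode any (piece, square) state of affairs,
# because each admits at least as many distinctions as there are arrangements.
pieces = ["WK", "WQ", "WR", "WB", "WN", "WP",
          "BK", "BQ", "BR", "BB", "BN", "BP"]
squares = [f + r for f in "abcdefgh" for r in "12345678"]

def report(piece: str, square: str) -> str:
    """A conventional written report, e.g. 'WR/a1'."""
    return f"{piece}/{square}"

def bounces(piece: str, square: str) -> int:
    """Encode the same fact as a bounce count via a fixed enumeration."""
    return pieces.index(piece) * len(squares) + squares.index(square) + 1

def decode(n: int) -> str:
    """Recover the report from the bounce count; the two systems are
    intertranslatable because they share logical multiplicity."""
    i, j = divmod(n - 1, len(squares))
    return report(pieces[i], squares[j])

assert decode(bounces("WR", "a1")) == "WR/a1"
```

A motionless ball, by contrast, admits only one "sign" and so lacks the multiplicity needed to picture any position at all.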
According to traditional reading of the Tractatus, Wittgenstein's views about logic and language led him to believe that some features of language and reality cannot be expressed in senseful language but only "shown" by the form of certain expressions. Thus for example, according to the picture theory, when a proposition is thought or expressed, the proposition represents reality (truly or falsely) by virtue of sharing some features with that reality in common. However, those features themselves are something Wittgenstein claimed we could not say anything about, because we cannot describe the relationship that pictures bear to what they depict, but only show it via fact-stating propositions (TLP 4.121). Thus we cannot say that there is a correspondence between language and reality; the correspondence itself can only be shown , [ 14 ] : p56 since our language is not capable of describing its own logical structure. [ 16 ] : p47
However, on the more recent "resolute" interpretation of the Tractatus (see below), the remarks on "showing" were not in fact an attempt by Wittgenstein to gesture at the existence of some ineffable features of language or reality, but rather, as Cora Diamond and James Conant have argued, [ 23 ] the distinction was meant to draw a sharp contrast between logic and descriptive discourse. On their reading, Wittgenstein indeed meant that some things are shown when we reflect on the logic of our language, but what is shown is not that something is the case, as if we could somehow think it (and thus understand what Wittgenstein tries to show us) but for some reason we just could not say it. As Diamond and Conant explain: [ 23 ]
Speaking and thinking are different from activities the practical mastery of which has no logical side; and they differ from activities like physics the practical mastery of which involves the mastery of content specific to the activity. On Wittgenstein's view ... linguistic mastery does not, as such, depend on even an inexplicit mastery of some sort of content. ... The logical articulation of the activity itself can be brought more clearly into view, without that involving our coming to awareness that anything is the case. When we speak about the activity of philosophical clarification, grammar may impose on us the use of 'that'-clauses and 'what'-constructions in the descriptions we give of the results of the activity. But, one could say, the final 'throwing away of the ladder' involves the recognition that that grammar of 'what'-ness has been pervasively misleading us, even as we read through the Tractatus. To achieve the relevant sort of increasingly refined awareness of the logic of our language is not to grasp a content of any sort.
Similarly, Michael Kremer suggested that Wittgenstein's distinction between saying and showing could be compared with Gilbert Ryle 's famous distinction between "knowing that" and "knowing how". [ 24 ] Just as practical knowledge or skill (such as riding a bike) is not reducible to propositional knowledge according to Ryle, Wittgenstein also thought that the mastery of the logic of our language is a unique practical skill that does not involve any sort of propositional "knowing that", but rather is reflected in our ability to operate with senseful sentences and grasping their internal logical relations.
At the time of its publication in 1921, Wittgenstein concluded that the Tractatus had resolved all philosophical problems, [ 25 ] leaving one free to focus on what really matters – ethics, faith, music and so on. [ 26 ] He would later recant this view, beginning in 1945, [ 27 ] leading him to begin work on what would ultimately become the Philosophical Investigations .
The book was translated into English in 1922 by C. K. Ogden with help from the teenaged Cambridge mathematician and philosopher Frank P. Ramsey . Ramsey later visited Wittgenstein in Austria. Translation issues make the concepts hard to pinpoint, especially given Wittgenstein's usage of terms and difficulty in translating ideas into words. [ 28 ]
The Tractatus caught the attention of the philosophers of the Vienna Circle (1921–1933), especially Rudolf Carnap and Moritz Schlick . The group spent many months working through the text out loud, line by line. Schlick eventually convinced Wittgenstein to meet with members of the circle to discuss the Tractatus when he returned to Vienna (he was then working as an architect). Although the Vienna Circle's logical positivists appreciated the Tractatus , they argued that the last few passages, including Proposition 7, are confused. Carnap hailed the book as containing important insights but encouraged people to ignore the concluding sentences. Wittgenstein responded to Schlick, commenting: "I cannot imagine that Carnap should have so completely misunderstood the last sentences of the book and hence the fundamental conception of the entire book." [ 29 ]
A more recent interpretation comes from The New Wittgenstein family of interpretations under development since 2000. [ 30 ] This so-called "resolute reading" is controversial and much debated. [ 31 ] The main contention of such readings is that Wittgenstein in the Tractatus does not provide a theoretical account of language that relegates ethics and philosophy to a mystical realm of the unsayable. Rather, the book has a therapeutic aim. By working through the propositions of the book the reader comes to realize that language is perfectly suited to all our needs, and that philosophy rests on a confused relation to the logic of our language. The confusion that the Tractatus seeks to dispel is not a confused theory, such that a correct theory would be a proper way to clear the confusion. Rather, the confusion lies in the notion that any theory is needed. The method of the Tractatus is to make the reader aware of the logic of our language as we are already familiar with it. Dispelling the need for a theoretical account of the logic of our language is intended to spread to other areas of philosophy. Thereby the confusion involved in putting forward ethical and metaphysical theories, for example, is cleared in the same "coup".
Wittgenstein would not meet the Vienna Circle proper, but only a few of its members, including Moritz Schlick, Rudolf Carnap, and Friedrich Waismann . Often, though, he refused to discuss philosophy, and would insist on giving the meetings over to reciting the poetry of Rabindranath Tagore with his chair turned to the wall. He largely broke off formal relations even with these members of the circle after coming to believe Carnap had used some of his ideas without permission. [ 32 ]
Alfred Korzybski credits Wittgenstein as an influence in his book, Science and Sanity: An Introduction to Non-Aristotelian Systems and General Semantics . [ 33 ]
Casimir Lewy wrote that "I do not find...any evidence in Mind that the book had much direct influence on the development of philosophy in England during the period that I am reviewing [1976]", with the exception of Frank Ramsey's paper on universals . This is in contrast to Wittgenstein's later philosophy: "with the later philosophy of Wittgenstein the story is very different". [ 34 ]
The Tractatus was the theme of a 1992 film by the Hungarian filmmaker Péter Forgács . The 32-minute production, named Wittgenstein Tractatus , features citations from the Tractatus and other works by Wittgenstein.
In 1989 the Finnish artist M. A. Numminen released a black vinyl album, The Tractatus Suite , consisting of extracts from the Tractatus set to music, on the Forward! label (GN-95). The tracks were [T. 1] "The World is...", [T. 2] "In order to tell", [T. 4] "A thought is...", [T. 5] "A proposition is...", [T. 6] "The general form of a truth-function", and [T. 7] "Wovon man nicht sprechen kann" . It was recorded at Finnvox Studios, Helsinki between February and June 1989. The "lyrics" were provided in German, English, Esperanto, French, Finnish and Swedish. [ 35 ] The music was reissued as a CD in 2003, M. A. Numminen sings Wittgenstein . [ 36 ]
The Tractatus is featured as a predominant thematic basis for the visual novel Wonderful Everyday . [ 37 ] [ 38 ]
The Tractatus is the English translation of Logisch-Philosophische Abhandlung , first published in 1921 in Wilhelm Ostwald's journal Annalen der Naturphilosophie .
A notable German edition of the works of Wittgenstein is the Suhrkamp Werkausgabe , whose first volume contains the Tractatus logico-philosophicus together with the Tagebücher 1914–1916 and the Philosophische Untersuchungen .
The first two English translations of the Tractatus , as well as the first publication in German from 1921, include an introduction by Bertrand Russell . Wittgenstein revised the Ogden translation. [ 39 ]
A manuscript of an early version of the Tractatus was discovered in Vienna in 1965 by Georg Henrik von Wright , who named it the Prototractatus and provided a historical introduction to a published facsimile with English translation: Wittgenstein, Ludwig (1971). McGuinness, B. F.; Nyberg, T.; von Wright, G. H. (eds.). Prototractatus, an Early Version of Tractatus Logico-Philosophicus . Translated by Pears, D. F.; McGuinness, B. F. London: Routledge and Kegan Paul. ISBN 9780415136679 . [ 39 ] [ 44 ]
| https://en.wikipedia.org/wiki/Tractatus_Logico-Philosophicus |
Traction , traction force or tractive force is a force used to generate motion between a body and a tangential surface, through the use of either dry friction or shear force . [ 1 ] [ 2 ] [ 3 ] [ 4 ] It has important applications in vehicles , as in tractive effort .
Traction can also refer to the maximum tractive force between a body and a surface, as limited by available friction; when this is the case, traction is often expressed as the ratio of the maximum tractive force to the normal force and is termed the coefficient of traction (similar to coefficient of friction ). It is the force which makes an object move over a surface by overcoming resisting forces such as friction , normal loads (the load acting on the tires along the negative Z axis), air resistance , rolling resistance , etc.
Traction can be defined as:
a physical process in which a tangential force is transmitted across an interface between two bodies through dry friction or an intervening fluid film resulting in motion, stoppage or the transmission of power.
In vehicle dynamics, tractive force is closely related to the terms tractive effort and drawbar pull , though all three terms have different definitions.
The coefficient of traction is defined as the usable force for traction divided by the weight on the running gear (wheels, tracks etc.) [ 6 ] [ 7 ] i.e.: usable traction = coefficient of traction × normal force .
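As a worked illustration with assumed round numbers (not taken from the cited sources): if the driving wheels carry 2000 kg and the coefficient of traction on the surface is 0.6, the maximum usable traction is

$F_{\text{usable}} = \mu_t N = 0.6 \times (2000\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2}) \approx 11.8\,\mathrm{kN}.$

Any tractive force demanded beyond this figure simply spins the wheels.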
Traction between two surfaces depends on several factors, including the composition of the two materials, their macroscopic and microscopic texture, the normal force pressing them together, and any contaminants at the interface.
In the design of wheeled or tracked vehicles, high traction between wheel and ground is more desirable than low traction, as it allows for higher acceleration (including cornering and braking) without wheel slippage. One notable exception is in the motorsport technique of drifting , in which rear-wheel traction is purposely lost during high speed cornering.
Other designs dramatically increase surface area to provide more traction than wheels can, for example in continuous track and half-track vehicles. [ citation needed ] A tank or similar tracked vehicle uses tracks to reduce the pressure on the areas of contact. A 70-ton M1A2 would sink to the point of high centering if it used round tires. The tracks spread the 70 tons over a much larger area of contact than tires would and allow the tank to travel over much softer land.
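A hedged back-of-the-envelope calculation illustrates the point (the track dimensions here are assumed for illustration, not the vehicle's actual specification). Taking a 70-short-ton (about 63,500 kg) tank whose two tracks each put roughly 0.63 m × 4.6 m on the ground, the mean ground pressure is

$p = \frac{W}{A} \approx \frac{63500\,\mathrm{kg} \times 9.81\,\mathrm{m/s^2}}{2 \times 0.63\,\mathrm{m} \times 4.6\,\mathrm{m}} \approx 107\,\mathrm{kPa},$

on the order of half the contact pressure of a typical passenger-car tire, despite the vastly greater weight.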
In some applications, there is a complicated set of trade-offs in choosing materials. For example, soft rubbers often provide better traction but also wear faster and have higher losses when flexed—thus reducing efficiency. Choices in material selection may have a dramatic effect. For example: tires used for track racing cars may have a life of 200 km, while those used on heavy trucks may have a life approaching 100,000 km. The truck tires have less traction and also thicker rubber.
Traction also varies with contaminants. A layer of water in the contact patch can cause a substantial loss of traction. This is one reason for grooves and siping of automotive tires.
The traction of trucks, agricultural tractors, wheeled military vehicles, etc. when driving on soft and/or slippery ground has been found to improve significantly by use of Tire Pressure Control Systems (TPCS). A TPCS makes it possible to reduce and later restore the tire pressure during continuous vehicle operation. Increasing traction by use of a TPCS also reduces tire wear and ride vibration. [ 9 ] | https://en.wikipedia.org/wiki/Traction_(mechanics) |
In cellular biology , traction force microscopy ( TFM ) is an experimental method for determining the tractions on the surface of a cell by obtaining measurements of the surrounding displacement field within an in vitro extracellular matrix (ECM).
The dynamic mechanical behavior of cell-ECM and cell-cell interactions is known to influence a vast range of cellular functions, including necrosis, differentiation , adhesion , migration , locomotion, and growth . TFM utilizes experimentally observed ECM displacements to calculate the traction , or stress vector, at the surface of a cell.
Before TFM, early efforts observed cellular tractions through the wrinkling of a silicone rubber substratum around cells; [ 1 ] however, accurate quantification of the tractions in such a technique is difficult due to the nonlinear and unpredictable behavior of the wrinkling. Several years later, the terminology TFM was introduced to describe a more advanced computational procedure that was created to convert measurements of substrate deformation into estimated traction stresses. [ 2 ]
In conventional TFM, cellular cultures are seeded on, or within, an optically transparent 3D ECM embedded with fluorescent microspheres (typically latex beads with diameters ranging from 0.2-1 μm ). [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] A wide range of natural and synthetic hydrogels can be used for this purpose, with the prerequisite that mechanical behavior of the material is well characterized, and the hydrogel is capable of maintaining cellular viability. The cells will exert their own forces into this substrate which will consequently displace the beads in the surrounding ECM. In some studies, a detergent , enzyme , or drug is used to disturb the cytoskeleton , thereby altering, or sometimes completely eliminating, the tractions generated by the cell.
First, a continuous displacement field is computed from a pair of images: the first image being the reference configuration of microspheres surrounding an isolated cell, and the second image being the same isolated cell surrounded by microspheres that are now displaced due to the cellular-generated tractions. Confocal fluorescence microscopy is usually employed to image the cell surface and fluorescent beads. After computing the translational displacement field between a deformed and undeformed configuration, a strain field can be calculated by often using a regularization approach, the best of which is the elastic net regularization. [ 8 ] From the strain field, the stress field surrounding the cell can be calculated with knowledge of the stress-strain behavior, or constitutive model, of the surrounding hydrogel material. It is possible to proceed one step further, and use the stress field to compute the tractions at the surface of the cell, if the normal vectors to the cell surface can be obtained from a 3D image stack . Although this procedure is a common way to obtain the cellular tractions from microsphere displacement, some studies have successfully utilized an inverse computational algorithm to yield the traction field. [ 9 ] [ 10 ] [ 11 ]
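This forward pipeline can be sketched in a few lines of NumPy. The sketch below is a minimal illustration, not any published TFM implementation: it assumes a linear elastic, plane-stress gel, a regular measurement grid, placeholder displacement data, and illustrative material constants, and it omits the regularization step entirely.

```python
import numpy as np

# Displacement field u = (ux, uy) on a regular grid, as would be obtained by
# tracking fluorescent beads between the deformed and reference images.
# Placeholder random data stands in for real measurements.
ny, nx = 64, 64
h = 1.0e-6                                   # grid spacing [m] (~1 um, typical)
rng = np.random.default_rng(0)
ux = rng.normal(0.0, 1e-8, (ny, nx))         # x-displacements [m]
uy = rng.normal(0.0, 1e-8, (ny, nx))         # y-displacements [m]

# Small-strain tensor from displacement gradients (linear elasticity).
dux_dy, dux_dx = np.gradient(ux, h)
duy_dy, duy_dx = np.gradient(uy, h)
exx, eyy = dux_dx, duy_dy
exy = 0.5 * (dux_dy + duy_dx)

# Plane-stress constitutive model for the gel (illustrative constants).
E, nu = 1000.0, 0.45                         # Young's modulus [Pa], Poisson ratio
c = E / (1.0 - nu**2)
sxx = c * (exx + nu * eyy)
syy = c * (eyy + nu * exx)
sxy = c * (1.0 - nu) * exy                   # equals 2*G*exy in plane stress

# Cauchy relation t = sigma . n gives the traction on a surface with normal n.
n = np.array([1.0, 0.0])                     # cell-surface normal (assumed known)
tx = sxx * n[0] + sxy * n[1]
ty = sxy * n[0] + syy * n[1]
print(tx.shape, ty.shape)                    # traction components on the grid
```

A real analysis would replace the placeholder displacements with measured bead displacements and apply regularization (such as the elastic net mentioned above) before differentiating, since numerical gradients amplify measurement noise.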
The spatial resolution of the traction field that can be recovered with TFM is limited by the number of displacement measurements per area. [ 12 ] The spacing of independent displacement measurements varies with experimental setups, but is usually on the order of one micrometer. The traction patterns produced by cells frequently contain local maxima and minima that are smaller. Detection of these fine variations in local cellular traction with TFM remains challenging.
In 2D TFM, cells are cultured as a monolayer on the surface of a thin substrate with a tunable stiffness, and the microspheres near the surface of the substrate undergo deformation through cell-ECM connections. 2.5D cell cultures are similarly grown on top of a thin layer of ECM, and diluted structural ECM proteins are mixed to the medium added above the cells and substrate. Although most of the seminal work in TFM was performed in 2D, or 2.5D, many cell types require the complex biophysical and biochemical cues from a 3D ECM to behave in a truly physiologically realistic manner within an in vitro environment. [ 13 ]
When the rotation or stretch of a sub volume is large, errors can be introduced into the calculation of cell surface tractions since most TFM techniques employ a computational framework based on linear elasticity. Recent advances in TFM have shown that cells are capable of exerting deformations with strain magnitudes up to 40%, which requires usage of a finite deformation theory approach to account for large strain magnitudes. [ 14 ]
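In such a finite-deformation framework, the small-strain tensor is replaced by a finite strain measure; a standard choice (not specific to any one TFM code) is the Green–Lagrange strain

$\mathbf{E} = \tfrac{1}{2}\left(\mathbf{F}^{\top}\mathbf{F} - \mathbf{I}\right), \qquad \mathbf{F} = \mathbf{I} + \nabla\mathbf{u},$

which reduces to the familiar $\boldsymbol{\varepsilon} = \tfrac{1}{2}(\nabla\mathbf{u} + \nabla\mathbf{u}^{\top})$ when the displacement gradients are small, but remains valid at the 40% strain magnitudes cited above.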
Although TFM is frequently used to observe tractions at the surface of a spatially isolated individual cell, a variation of TFM can also be used to analyze the collective behavior of multicellular systems. For example, cellular migration velocities and plithotaxis are observed alongside a computed stress variation map of a monolayer sheet of cells, in an approach termed monolayer stress microscopy. [ 15 ] The mechanical behavior of single cells versus a confluent layer of cells differ in that the monolayer experiences a "tug-of-war" state. There is also evidence of a redistribution of tractions that can take place earlier than changes in cell polarity and migration. [ 16 ]
TFM has proven particularly useful to study durotaxis as well.
TFM has recently been applied to explore the mechanics of cancer cell invasion with the hypothesis that cells which generate large tractions are more invasive than cells with lower tractions. [ 17 ] It is also hoped that recent findings from TFM will contribute to the design of optimal scaffolds for tissue engineering and regeneration of the peripheral nervous system , [ 18 ] artery grafts , [ 19 ] and epithelial skin cells. [ 20 ] | https://en.wikipedia.org/wiki/Traction_force_microscopy |
Tractor vaporising oil ( TVO ) is a fuel for petrol-paraffin engines . It is seldom made or used today. In the United Kingdom and Australia , after the Second World War , it was commonly used for tractors until diesel engines became commonplace, especially from the 1960s onward. In Australian English it was known as power kerosene .
TVO existed for at least fifteen years before it became widely used. A 1920 publication mentions it as a product of British Petroleum . [ 1 ] But it was not until the late 1930s that it first became widely used. The post war Ferguson TE20 tractor, a carefully researched and near-ideal tractor for use on British farms, was designed around a petrol (gasoline) engine, the Standard inline-four . Although there was a campaign for the reintroduction of agricultural Road Duty (tax)-free petrol, which had been curtailed during the war, this was not forthcoming. Perkins Engines supplied some conversions into diesel engines which could use untaxed red diesel . [ citation needed ]
On the early Fordson model N, the tap which changed over from petrol to TVO was marked G for gasoline and K for kerosene, reflecting that these tractors had their design origin in the USA. In the UK tractor vaporising oil was usually called TVO.
As a substitute for petrol, TVO was developed. Paraffin (kerosene) was commonly used as a domestic heating fuel and was untaxed. Paraffin has a low octane rating and would cause damaging knock in an engine built for petrol. The manufacture of paraffin involves the removal of aromatic hydrocarbons from what is now sold as heating oil . These aromatics have a high octane rating, so adding some of this otherwise-waste material back into paraffin in a controlled manner gave TVO. The resulting octane rating of TVO was somewhere between 55 and 70.
The words paraffin and kerosene are often used interchangeably, but their differing octane ratings suggest that this is not strictly correct. However, kerosene and heating oil have similar octane ratings. Paraffin, kerosene and petrol ( gasoline ) are all rather loosely defined terms. For example, gasoline may have an octane rating anywhere between 88 and 102.
Because TVO has a lower octane rating than petrol, the engine needs a lower compression ratio . On the TVO version of the Ferguson TE20 tractor, the cylinder head was re-designed to reduce the compression ratio to 4.5:1. This reduced the power output, so the cylinder bore was increased to 85 mm to restore the power. [ 2 ] The petrol version had a compression ratio of 5.77:1 and, on early versions, a cylinder bore of 80 mm.
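Since swept volume scales with the square of the bore (assuming the stroke was unchanged), the enlarged bore recovers

$\frac{V_{85}}{V_{80}} = \left(\frac{85}{80}\right)^{2} \approx 1.13,$

i.e. roughly 13% more displacement to offset the power lost to the lower compression ratio.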
In practice TVO had most of the properties of paraffin, including the need for heating to encourage vapourisation. As a result, the exhaust and inlet manifolds were adapted so that more heat from the former warmed the latter. Such a setup was called a vaporiser . To get the tractor to start from cold, a small second fuel tank was added that contained petrol. The tractor was started on the expensive petrol, then – once the engine was warm – the fuel supply switched over to TVO or paraffin. So long as the engine was working hard, as when ploughing or pulling a load, the TVO would burn well. Under light conditions, such as travelling unloaded on the highway, the engine was better on petrol.
Some tractor designs included a radiator "blind" that would restrict the flow of air over the radiator which led to the engine running hotter, which could help with starting. If the radiator blind was left shut, though, there was a risk of engine damage, especially in warm weather.
The phrase petrol-paraffin engine is often used to describe an engine that uses TVO. This can be interpreted either as an engine that runs on a petrol–paraffin blend (which is essentially what TVO is), or as an engine that starts on petrol and then runs on paraffin.
TVO was withdrawn from sale by UK suppliers in 1974. An approximation to the correct specification can be made from petrol and heating oil (burning oil). [ 4 ] In the UK there is an exception that permits the use of rebated kerosene and fuel oils in vintage vehicles. [ 5 ]
In North America a similar product, called distillate , was produced. Of lower quality than TVO, its octane rating varied between 33 and 45. Manufacture of tractors using distillate ended by 1956, when gasoline and diesel-engined tractors had captured the North American farming equipment market. [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Tractor_vaporising_oil |
The Trade Unions International of Chemical, Oil and Allied Workers was a trade union international affiliated with the World Federation of Trade Unions . It was often known by its French initials, ICPS (Union Internationale des Syndicats des Industries Chimiques, du Pétrole et Similaires).
The Union was established at a conference in Budapest , Hungary in March 1950 as the Trade Unions International of Chemical and Allied Workers. It changed its name in 1954 when it expanded its scope to include oil workers . It also represented workers in the glassware , paper and ceramic industries. [ 1 ]
In 1998 a conference was held in Havana which merged the Trade Unions International of Energy Workers (formerly known as the Trade Unions International of Miners) and the Trade Union International of Metal and Engineering Workers to form the Trade Union International of Energy, Metal, Chemical, Oil and Allied Industries. This organization was reorganized again as the Trade Unions International of Energy Workers in 2007. This left the metal workers an opportunity to create a new TUI the next year, the Trade Union International of Workers in the Mining, the Metallurgy and the Metal Industries . [ 2 ] [ 3 ]
The TUI was governed by an international trade conference held every four years. The conference drew up the group's program, made policy decisions and elected an administrative committee. The latter consisted of 25 members drawn from 20 countries and met once a year. The bureau, composed of the TUI president, vice-presidents, general secretary and secretaries, met to ensure the fulfillment of the administrative committee's decisions, and a permanent secretariat oversaw day-to-day operations. [ 4 ] Industrial commissions were also set up to deal with specific issues. In 1978 there were commissions on oil ( drilling , refining , oil pipelines , distribution ), chemicals-pharmaceuticals, paperboard , pulp , cellulose , rubber , glass and ceramics. [ 5 ]
In 1955 its headquarters were reported to be in Bucharest , Romania . [ 6 ] In 1958 its headquarters was reported to be at 17 Sztalin ter , Budapest VI , Hungary . [ 7 ] From 1978 to 1989 it was reported to be at 1415 Budapest, sometimes given the street name Benczur ut 45. [ 8 ] [ 9 ] In 1991 it was reported at EM26, H 1097. [ 10 ]
In 1958 the ICPS claimed membership in 25 countries. [ 11 ] In 1976 it claimed 7 million members in 59 affiliated unions in 37 countries. [ 12 ] In 1985 it claimed 13 million members in 100 affiliates in 50 countries. [ 13 ] [ 14 ]
In 1975 the following unions were affiliated with ICPS: [ 15 ]
The Union published an Information Bulletin and Information Sheet . [ 16 ] | https://en.wikipedia.org/wiki/Trade_Unions_International_of_Chemical,_Oil_and_Allied_Workers |
A trade name , trading name , or business name is a pseudonym used by companies that do not operate under their registered company name. [ 1 ] The term for this type of alternative name is fictitious business name . [ 1 ] Registering the fictitious name with a relevant government body is often required.
In a number of countries, the phrase " trading as " (abbreviated to t/a ) is used to designate a trade name. In the United States , the phrase " doing business as " (abbreviated to DBA , dba , d.b.a. , or d/b/a ) is used, [ 1 ] [ 2 ] among others, such as assumed business name [ 3 ] or fictitious business name . [ 4 ] In Canada , " operating as " (abbreviated to o/a ) and " trading as " are used, although " doing business as " is also sometimes used. [ 5 ]
A company typically uses a trade name to conduct business using a simpler name rather than using their formal and often lengthier name. Trade names are also used when a preferred name cannot be registered, often because it may already be registered or is too similar to a name that is already registered.
Using one or more fictitious business names does not create additional separate legal entities. [ 2 ] The distinction between a registered legal name and a fictitious business name, or trade name, is important because fictitious business names do not always identify the entity that is legally responsible .
Legal agreements (such as contracts ) are normally made using the registered legal name of the business. If a corporation fails to consistently adhere to important legal formalities such as using its registered legal name in contracts, it may be subject to piercing of the corporate veil . [ 6 ]
In English , trade names are generally treated as proper nouns . [ 7 ]
In Argentina , a trade name is known as a nombre de fantasía ('fantasy' or 'fiction' name), and the legal name of business is called a razón social (social name).
In Brazil , a trade name is known as a nome fantasia ('fantasy' or 'fiction' name), and the legal name of business is called razão social (social name).
In some Canadian jurisdictions , such as Ontario , when a businessperson writes a trade name on a contract, invoice, or cheque, they must also add the legal name of the business. [ 8 ]
Numbered companies very often operate under a name other than their legal name, since a numbered legal name is unrecognizable to the public.
In Chile , a trade name is known as a nombre de fantasía ('fantasy' or 'fiction' name), and the legal name of business is called a razón social (social name).
In Ireland , businesses are legally required to register business names where these differ from the surname(s) of the sole trader or partners, or the legal name of a company. The Companies Registration Office publishes a searchable register of such business names. [ 9 ]
In Japan , the word yagō ( 屋号 ) is used.
In Colonial Nigeria , certain tribes had members that used a variety of trading names to conduct business with the Europeans. Two examples were King Perekule VII of Bonny , who was known as Captain Pepple in trade matters, and King Jubo Jubogha of Opobo , who bore the pseudonym Captain Jaja . Both Pepple and Jaja would bequeath their trade names to their royal descendants as official surnames upon their deaths.
In Singapore , there is no filing requirement for a "trading as" name, but there are requirements for disclosure of the underlying business or company's registered name and unique entity number. [ 10 ]
In the United Kingdom , there is no filing requirement for a "business name", defined as "any name under which someone carries on business" that, for a company or limited liability partnership, "is not its registered name", but there are requirements for disclosure of the owner's true name and some restrictions on the use of certain names. [ 11 ]
A minority of U.S. states, including Washington , still use the term trade name to refer to "doing business as" (DBA) names. [ 12 ] In most U.S. states now, however, DBAs are officially referred to using other terms. Almost half of the states, including New York and Oregon , use the terms assumed business name or assumed name ; [ 13 ] [ 14 ] nearly as many, including Pennsylvania , use the term fictitious name . [ 15 ]
For consumer protection purposes, many U.S. jurisdictions require businesses operating with fictitious names to file a DBA statement, though names including the first and last name of the owner may be accepted. [ 16 ] This also reduces the possibility of two local businesses operating under the same name, although some jurisdictions do not provide exclusivity for a name, or may allow more than one party to register the same name. Note, though, that this is not a substitute for filing a trademark application. A DBA filing carries no legal weight in establishing trademark rights. [ 17 ] In the U.S., trademark rights are acquired by use in commerce, but there can be substantial benefits to filing a trademark application. [ 18 ] Sole proprietors are the most common users of DBAs. Sole proprietors are individual business owners who run their businesses themselves. Since most people in these circumstances use a business name other than their own name, [ citation needed ] it is often necessary for them to get DBAs.
Generally, a DBA must be registered with a local or state government, or both, depending on the jurisdiction. For example, California, Texas and Virginia require a DBA to be registered with each county (or independent city in the case of Virginia) where the owner does business. Maryland and Colorado have DBAs registered with a state agency. Virginia also requires corporations and LLCs to file a copy of their registration with the county or city to be registered with the State Corporation Commission.
DBA statements are often used in conjunction with a franchise . The franchisee will have a legal name under which it may sue and be sued, but will conduct business under the franchiser's brand name (which the public would recognize). A typical real-world example can be found in a well-known pricing mistake case, Donovan v. RRL Corp. (2001), [ 19 ] where the named defendant, RRL Corporation, was a Lexus car dealership doing business as " Lexus of Westminster ", but remaining a separate legal entity from Lexus, a division of Toyota Motor Sales, USA, Inc.
In California , filing a DBA statement also requires that a notice of the fictitious name be published in local newspapers for some set period of time to inform the public of the owner's intent to operate under an assumed name . The intention of the law is to protect the public from fraud, by compelling the business owner to first file or register his fictitious business name with the county clerk, and then making a further public record of it by publishing it in a newspaper. [ 20 ] Several other states, such as Illinois , require print notices as well. [ 21 ]
In Uruguay , a trade name is known as a nombre fantasía , and the legal name of business is called a razón social . | https://en.wikipedia.org/wiki/Trade_name |
A trade study or trade-off study , also known as a figure of merit analysis or a factor of merit analysis , is the activity of a multidisciplinary team to identify the most balanced technical solutions among a set of proposed viable solutions (FAA 2006). These viable solutions are judged by their satisfaction of a series of measures or cost functions . These measures describe the desirable characteristics of a solution. They may be conflicting or even mutually exclusive . Trade studies are commonly used in the design of aerospace and automotive vehicles and the software selection process (Phillips et al. 2002) to find the configuration that best meets conflicting performance requirements.
The measures are dependent on variables that characterize the different potential solutions. If the system can be characterized by a set of equations , one can write the definition of the trade study problem as: find the set of variables, x i , that give the best overall satisfaction to the measures:

\[ f_j(x_1, x_2, \ldots, x_n) = T_j , \qquad j = 1, \ldots, m \]
Where T j is a target value and f(...) denotes some functional relationship among the variables. Further, the equality between the target and the function may be a richer relationship, as will be developed below. If the equations are linear , as in the production volume example used as a starting point below, then this problem is solvable using linear programming techniques. Generally, one or more of the targets is not fixed at a specific value, and it is desired to make these T values as large or small as possible. These are generally referred to as cost functions, and the other measures are treated as constraints .
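When formal optimization is not used, such studies are often reduced in practice to a weighted figure-of-merit score over normalised measures. The following is a minimal sketch of that simplification only; the criteria, weights and scores are illustrative assumptions, not values from any real study:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class TradeStudy {
    // Weighted sum of normalised scores: each measure contributes its
    // score in [0, 1] multiplied by its weight (weights sum to 1.0).
    static double figureOfMerit(Map<String, Double> weights, Map<String, Double> scores) {
        double total = 0.0;
        for (Map.Entry<String, Double> e : weights.entrySet()) {
            total += e.getValue() * scores.getOrDefault(e.getKey(), 0.0);
        }
        return total;
    }

    public static void main(String[] args) {
        Map<String, Double> weights = new LinkedHashMap<>();
        weights.put("cost", 0.5);
        weights.put("performance", 0.3);
        weights.put("risk", 0.2);

        Map<String, Double> designA = new LinkedHashMap<>();
        designA.put("cost", 0.8);
        designA.put("performance", 0.6);
        designA.put("risk", 0.7);

        Map<String, Double> designB = new LinkedHashMap<>();
        designB.put("cost", 0.5);
        designB.put("performance", 0.9);
        designB.put("risk", 0.6);

        System.out.printf("Design A: %.2f%n", figureOfMerit(weights, designA)); // 0.72
        System.out.printf("Design B: %.2f%n", figureOfMerit(weights, designB)); // 0.64
    }
}
```

Note that this weighted-sum form hides the conflicts between measures behind the choice of weights, which is why fuller trade studies treat some measures as hard constraints instead.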
If the situation were as described above, formal optimization or linear programming methods would suffice. However, in practice, the needed information is: | https://en.wikipedia.org/wiki/Trade_study
Tradebe is a waste management company based in Barcelona that was established in 1980. It operates in Spain, France, the United Kingdom, the United States and Oman. [ 1 ] The chairman is Josep Creixell, and the Chief Executive is Victor Creixell.
Tradebe operates within the solvent recycling [ 2 ] and automated oil tank cleaning markets. [ 3 ]
The company has been prosecuted in the UK. In 2016 it was fined £38,960 after a chemical leak at its Hendon Dock plant in Sunderland . It was prosecuted in 2013 after a spillage of highly flammable liquid at a site in Knottingley . The United States Environmental Protection Agency fined it for environmental violations at the firm's hazardous waste treatment facilities in Connecticut . Its subsidiary Norlite had to pay around £15,000 for air pollution violations in Cohoes, New York , in 2016. [ 4 ]
In 2018 it was involved in the debate over clinical waste management in the UK and the collapse of Healthcare Environmental Services . It owns and operates its own treatment and incineration facilities and said that there was "sufficient waste incineration capacity within the UK to meet current market demand". [ 7 ]
It is to take over clinical waste management in Scotland in August 2019. [ 8 ] | https://en.wikipedia.org/wiki/Tradebe |
A trademark in computer security is a contract between code that verifies security properties of an object and code that requires that an object have certain security properties. As such it is useful in ensuring secure information flow. In object-oriented languages, trademarking is analogous to signing of data, but can often be implemented without cryptography.
A trademark has two operations:
ApplyTrademark!: This operation is analogous to the private key in a digital signature process, so it must not be exposed to untrusted code. It should only be applied to immutable objects , and it ensures that when VerifyTrademark? is later called on the same value, it returns true.
VerifyTrademark?: This operation is analogous to the public key in a digital signature process, so it can be exposed to untrusted code. It returns true if, and only if, ApplyTrademark! has been called with the given object.
Trademarking is the inverse of taint checking . Whereas taint checking is a black-listing approach that says that certain objects should not be trusted, trademarking is a white-listing approach that marks certain objects as having certain security properties.
Applying a trademark can be thought of as memoizing a verification process.
Sometimes a verification process does not need to be done because the fact that a value has a particular security property can be verified statically . In this case, the apply property is being used to assert that an object was produced by code that has been formally verified to only produce outputs with the particular security property.
One way of applying a trademark in Java is sketched below. | https://en.wikipedia.org/wiki/Trademark_(computer_security)
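The following is a minimal sketch only (the article's original listing is not reproduced here); the class and method names (SafeHtml, apply, get) are illustrative assumptions, not a standard API. It uses a private constructor so that only the verifying code can mint trademarked values:

```java
public final class SafeHtml {
    private final String html;

    // Private constructor: untrusted code cannot mint a SafeHtml directly,
    // so possession of a SafeHtml instance *is* the trademark.
    private SafeHtml(String html) {
        this.html = html;
    }

    // ApplyTrademark!: run the verification once and memoize the result by
    // wrapping the immutable String in the trademarked type. Only trusted
    // code should be able to reach this method in a real system.
    public static SafeHtml apply(String html) {
        if (!verify(html)) {
            throw new IllegalArgumentException("input failed the sanitization check");
        }
        return new SafeHtml(html);
    }

    // VerifyTrademark? becomes a static type check: any reference of type
    // SafeHtml was necessarily produced by apply(), so no runtime re-check
    // is needed at the point of use.
    public String get() {
        return html;
    }

    // Placeholder verification; a real HTML sanitizer is far more involved.
    private static boolean verify(String html) {
        return !html.toLowerCase().contains("<script");
    }
}
```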
The metaphor of a trading zone has been applied to collaborations in science and technology . The basis of the metaphor is anthropological studies of how different cultures are able to exchange goods, despite differences in language and culture.
Peter Galison produced the "trading zone" metaphor in order to explain how physicists from different paradigms went about collaborating with each other, and with engineers to develop particle detectors and radar .
According to Galison, "Two groups can agree on rules of exchange even if they ascribe utterly different significance to the objects being exchanged; they may even disagree on the meaning of the exchange process itself. Nonetheless, the trading partners can hammer out a local coordination, despite vast global differences. In an even more sophisticated way, cultures in interaction frequently establish contact languages, systems of discourse that can vary from the most function-specific jargons, through semispecific pidgins, to full-fledged creoles rich enough to support activities as complex as poetry and metalinguistic reflection" (Galison 1997, p. 783)
In the case of radar, for example, the physicists and engineers had to gradually develop what was effectively a pidgin or creole language involving shared concepts like ‘equivalent circuits’ that the physicists represented symbolically in terms of field theory and the engineers saw as extensions of their radio toolkit.
Exchanges across disciplinary boundaries can also be carried out with the help of an agent: namely, a person who is familiar enough with the language of two or more cultures to facilitate trade.
At one point in the development of MRI , surgeons saw a lesion where an engineer familiar with the device would have recognized an artifact produced by the way the device was being used. It took someone with expertise in both physics and surgery to see how each of the different disciplines viewed the device, and develop procedures for correcting the problem (Baird & Cohen, 1999). The ability to converse expertly in more than one discipline is called interactional expertise ( Collins & Evans, 2002).
A workshop at Arizona State University on Trading Zones, Interactional Expertise and Interdisciplinary Collaboration raised the possibility of applying these concepts to other applications like global health and service science, and also identified avenues for future research ( https://archive.today/20121215123346/http://bart.tcc.virginia.edu/Tradzoneworkshop/index.htm ). | https://en.wikipedia.org/wiki/Trading_zones |
Traditional Phenological Knowledge (TPK), which can be seen as a "subset of Indigenous Knowledge ", [ 1 ] is knowledge based on traditional observations made by Indigenous Peoples that predict seasonal changes in nature and their immediate environment. [ 1 ] [ 2 ] [ 3 ] This can be useful for the management of naturally occurring phenomena, as well as for "adaptive management" such as fire management. [ 1 ] TPK is not a novel practice and has been practiced for hundreds of years. [ 2 ] TPK encompasses history, observations and Traditional Knowledge (TK), also called Indigenous Knowledge (IK). Indigenous Knowledge is flexible and always evolving. [ 3 ] It considers the past, present and future of environmental and biological generations. [ 2 ] [ 1 ]
TPK is integrative and interactive. [ 1 ] [ 3 ] It falls under the same teachings as Traditional Ecological Knowledge (TEK). TPK and TEK have closely related definitions, with IK serving as an umbrella term for both. [ 1 ] Traditional forms of knowledge are combined with sustainable interaction with the land. Indigenous knowledge creates a relationship that is respectful and symbiotic with the natural world and promotes the passing on of hands-on experience to future generations. [ 2 ] [ 1 ]
Phenology in TPK can be qualitative and quantitative. Observations can be described and passed down through oral histories . [ 2 ] [ 1 ] [ 3 ] TPK can reinforce what is measured and recorded scientifically, and it can be a tool to help address climate change and biodiversity loss in today's climate crisis. [ 1 ] [ 2 ] [ 4 ]
TPK can be "direct" or "indirect". Direct observations of phenology in TPK can refer to species signals and timings of secondary species. [ 1 ] Direct TPK is translated through the use of belief systems, spirituality, stories, myth and ceremonial events. [ 1 ] [ 2 ] Indirect TPK is passed on through the use of language specifically. [ 2 ] [ 1 ] The use of both direct and indirect embodies, reinforces and defines the values TPK. The observation of nature timings along with stories and beliefs, pass down the knowledge from elders and family members that also contribute to the essence of TPK. [ 3 ] [ 2 ] [ 1 ]
Phenology observes the timing and seasonality of biological and weather events. [ 2 ] Plant cycles, animal behaviour, weather patterns and climate all cycle with seasonality, [ 2 ] [ 1 ] e.g. flowering . As Swartz defines it: "Phenology is the study of recurring plant, fungi and animal life cycle stages, especially as they relate to climate and weather". [ 1 ] [ 5 ]
Some of these observations can vary depending on location. [ 2 ] For instance, temperature and photoperiod can be indicators of seasonal change in parts of the Northern Hemisphere and the Southern Hemisphere . [ 2 ]
Observing plant species is an example of TPK. In temperate locations , increased temperatures will signal growth, which in turn creates an environmental response that indicates spring and/or summer. [ 2 ] Consequently, plants will flower once they have received enough "accumulated heat". [ 2 ]
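A standard way that phenologists quantify such "accumulated heat" is the growing degree day (GDD) model; the sources here do not name a specific model, so the following is offered only as a common illustration:

\[ \mathrm{GDD} = \sum_{d=1}^{n} \max\!\left(0,\ \frac{T_{\max,d} + T_{\min,d}}{2} - T_{\mathrm{base}}\right) \]

where T_max,d and T_min,d are the daily maximum and minimum temperatures and T_base is a species-specific threshold temperature; flowering is often expected once the cumulative total reaches a species-specific value.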
Phenology is described as a process that revolves around the development of an organism (plant or animal) in relation to the change of the seasons. [ 1 ] Temperature is a factor in these processes that creates changes in the cycles: for instance, a temperature increase or decrease that surpasses a threshold can create a change in the behaviour or seasonality of vegetation or other organisms. [ 1 ]
Human observation and knowledge across generations are tied to TPK. Human responses to seasonal change can create a symbiotic relationship with the nearby environment; hence Indigenous Peoples have practiced, and still practice, traditions that match the timing of seasonal changes and seasonal indicators. [ 2 ]
Time in most Indigenous communities is based on the pace of nature. Indigenous communities live synchronously with the temporal phenological events that present themselves. [ 3 ] [ 6 ] There is also interconnectivity between the natural environment and traditional Indigenous practices. [ 6 ] The epistemology may differ from group to group; however, many Indigenous groups share similarities regarding innate knowledge of seasonal timings and ecological practices on the landscape. [ 6 ]
The perception of time is unlike that of the Western world. For instance, the rainy season in some Indigenous communities may signal spring and/or fall, and the growing season signals spring and summer. [ 6 ] Seasonal timings can relate to traditional practices as well. Observations of fish behaviour and migration patterns can indicate the time windows in a season when one can fish. The spawning of salmon is also an indication of the reproduction and multiplying of the species, and timing is important for availability. [ 7 ] Fishing too early can affect spawning, which can result in a decrease in fish numbers. Fishing is a cultural practice that many Indigenous communities still practise today, and with TPK these communities know there can be variance in timing and in fish numbers from one year to the next. [ 7 ]
TPK can be used as a predictive and management tool in both Traditional Indigenous practices and Western practices. Embracing TK and continuous observations of the physical environment creates reliable information for future generations. [ 1 ] [ 7 ] It pertains to the interconnectivity of animal species, plant species and human behaviour. [ 1 ] [ 2 ] [ 8 ]
Fire management can be timed with phenological events in North American Indigenous Nations. [ 1 ] Burning shrubs over vast areas would help deer find food in the next season. [ 1 ] Burning causes more water to be retained in the soil, which promotes seedling sprouts in the spring and summer. [ 1 ] For Indigenous communities in California, there can be more grass growth, which is used for cultural "deer grass" weaving. [ 1 ] Spring burning also promotes species diversity and the cultivation of plants such as tobacco. Fire can kill fast-growing vegetation and pests, and help full-light vegetation such as oak and huckleberries grow in these areas. [ 1 ] Hence, Traditional Knowledge and TPK can help with food security and with food for wildlife. [ 1 ]
TPK and TEK are seen as sustainable practices to help fight climate change and are starting to be recognised as tools to help mitigate food insecurity and biodiversity loss. [ 6 ] [ 3 ] [ 1 ] The combination of Indigenous Knowledge and Western Knowledge is known as Two-eyed seeing . [ 8 ] For instance, TPK is a tool for fire management that Western communities have adopted to decrease the severity of fires. [ 1 ]
TPK is also passed down through stories, a form of indirect TPK. [ 1 ] It is not directly observed by the eye of the learner but rather transmitted through language by family and community members. [ 3 ]
Conservation of the land is ingrained in Indigenous Knowledge. Indigenous Knowledge practices can be useful for sustainability and for solutions to modern environmental issues such as climate change and biodiversity loss. [ 1 ] [ 3 ] TPK, TEK, TK and IK are ways of looking at landscape ecology from which scientists and the general public can also learn, and many such practices can support sustainability and the fight against climate change. [ 1 ] [ 3 ]
TPK can be a tool in understanding climate change . TPK is based on historical observations that can help climate scientists because of its records of past and current changes in environments around the world. [ 4 ] [ 6 ] TPK can provide knowledge and information regarding climate change that are not easily accessible to the Western sciences. It can be a tool for decision-making concerning ecology and conservation where information and data are lacking. [ 6 ] [ 4 ]
Climate change affects First Nations and Indigenous communities differently than Western-based communities. The Western world is impacted mostly economically, financially and ecologically, whereas Indigenous communities have certain practices and traditions that are directly tied to the land. [ 6 ] The change in climate might affect and threaten their livelihood and their relationship to the land; in other words, these communities might adapt their practices in new ways to respond to climate change. [ 6 ] In recent years, communities have noticed changes in rainy and dry periods, which can change the predictability of the timing of traditional practices. [ 6 ] Moreover, TPK itself can change and adapt due to climate change.
The dynamics of climate change in the Western world are linked to the growth of capital. This tends to lead to the exploitation of natural resources and therefore to increased greenhouse gases in the atmosphere and degradation of the environment, affecting freshwater systems, soil health and more. [ 6 ]
Changes in climate also change indicators of seasonality. [ 6 ] [ 4 ] TPK can play a role in the study of climate change and sustainability. [ 6 ]
In many parts of the world, especially at higher elevations and northern latitudes such as Alaska, communities have observed changes in phenological cycles. [ 6 ] [ 4 ] Locations such as Alaska are severely impacted because their northern position and proximity to the shore intensify the changes in climate.
The Yukon-Kuskokwim Delta Indigenous communities have observed changes in berry resources. [ 4 ] These communities have noticed a decrease in snowpack in recent winters. [ 4 ] Hotter summers and the thawing of permafrost also create an unsteady landscape, which negatively affects the vegetation in this region, for instance wild berries. [ 4 ] Berries are essential for human consumption and as food for wildlife. For Alaskan communities, berry picking provides nutrition but also indicators of seasonal change. [ 4 ] These communities have seen changes in the last ten years: variability in berry abundance from year to year, and earlier ripening. [ 4 ] This is seen in cloudberries , blueberries and crowberries . [ 4 ]
One barrier to TPK is that some institutions do not recognise it as a scientific practice because of Western ways of teaching. [ 1 ] [ 8 ] [ 3 ] [ 9 ] This can be due to the priorities of the institutions and education systems already in place. [ 1 ] [ 3 ]
Indigenous communities in the North-East of India, such as those of Tripura , can use TPK to predict weather patterns, which aids agroforestry, farming and agriculture; it is also used in anticipating natural disasters. Traditional Knowledge is shared through folklore and myths. Lore and myths about weather can be found in ancient scriptures such as the Vedang Jyotish of Maharshi Laugakshi, the Seval Samhita and the Gura Samhita, among others.
TPK can be found under two categories: theories of prediction, and observations. The use of astronomy and the observation of planetary positions such as conjunctions are important for Tripura Indigenous communities. Atmospheric observations, such as watching clouds, play a large part in predicting weather conditions. Behavioural patterns of plants and animals are also used as indicators for predicting weather.
The night-flowering jasmine ( Nyctanthes arbor-tristis L. ) helps predict abundant rainfall. It flowers year-round, and depending on the time of year it flowers, a different amount of rain is predicted. June and July are the months with the highest rainfall, accurately predicted by traditional farmers in the region and confirmed by the Meteorological Department of Narsinghgarh Bimangarh, India. [ 10 ] [ 11 ] The Indian Laburnum ( Cassia fistula L. ), also known as the Golden Shower, also predicts rain: when it flowers, it signals the beginning of the Monsoon .
In the Teso Sub-region in Eastern Uganda , the Iteso people of Teso practice TPK in agriculture and pasture. This region has a hot and humid climate, and there is a strong agricultural tradition. The main crops cultivated are cereals such as millet and corn, along with cassava and cotton. [ 12 ] The main livestock animals are pigs, cattle and chickens, among others. [ 12 ] Due to the region's geographical location, there are several lakes which permit the community to practise fishing. [ 12 ] TPK helps with predicting droughts, floods and infestations, with water conservation, and with the timing of fishing, and it plays a role in the prediction of rainfall. [ 12 ]
The use of TPK has changed in recent decades because of climate unpredictability; however, the Iteso people have adapted TPK to recent climatic events. [ 12 ] Previously, traditional communities of this region sowed millet seeds according to leaf fall and growth, because these predicted the time frame for sowing grain in time for future rainfall. [ 12 ] Now, however, this predictability varies. [ 12 ] Astronomical observations, such as the location of the moon and its colour, are used to determine the next onset of rainfall and its intensity. [ 12 ] TPK is in the hands of elders, whose phenological observations are monitored, but there is concern that younger generations may lose Traditional Knowledge of phenological events. [ 12 ] Elders notice longer droughts, increased winds, species disappearance, and a halt of fish migrations to previously plentiful rivers due to climate change, which is affecting phenological cycles at a quicker and greater rate. [ 12 ]
Traditional herders of the land-locked country of Mongolia use TPK and TEK for herding animals, for predicting the height of grasses in coming seasons, and for understanding the ever-changing land. [ 13 ] [ 14 ] Seasonal observations are embedded in storytelling and in observations made through their nomadic way of life. [ 13 ] [ 14 ]
Herders have a distinct way of understanding the plants of the landscape. Mongolia has a mix of extreme climates in which temperatures can reach 40 degrees Celsius in warmer months and fall below minus 30 degrees in colder months. [ 13 ] Mongolia is desertic and mountainous and contains grasslands; [ 13 ] its climate is considerably dry. [ 13 ] [ 14 ] Traditional herders use TPK to determine rainfall, the time of movement, the timing of vegetation growth, and the blooms of medicinal plants. [ 13 ] [ 14 ] Herders' traditional movements follow seasonality, with observations made on the relationships and behaviour of vegetation and animals. [ 13 ] These observations and this knowledge can vary from herder to herder. [ 14 ] For instance, in the more desertic parts of Mongolia , grass is predicted to grow to the same height as the winter's snow, and to the extent of the rain the area receives. [ 14 ] Additionally, numbers are associated with certain areas based on characteristics of viability, ecological zones, soil health and topography. [ 14 ] Furthermore, knowing when a plant will bloom comes from repeated observations, such as counting the joints of vegetation and noting the relative humidity of the atmosphere, e.g. for bagluur ( Anabasis brevifolia ). [ 14 ]
Some herders intertwine human existence with phenology: the seasons change, and humans change with the seasons. Humans are an important part of phenology. In the light of climate change, as the earth ages and changes, so do humans. [ 14 ]
TPK is based on knowledge and intellectual property provided by Indigenous Peoples. This intellectual property ought to be respected, acknowledged, protected and accredited. [ 15 ] | https://en.wikipedia.org/wiki/Traditional_Phenological_Knowledge
Traditional ecological knowledge ( TEK ) is a cumulative body of knowledge, practice, and belief, evolving by adaptive processes and handed down through generations by cultural transmission, about the relationship of living beings (including humans) with one another and with their environment. [ a ]
The application of TEK in the field of ecological management and science is still controversial, as methods of acquiring and collecting knowledge—although often including forms of empirical research and experimentation — may differ from those most often used to create and validate scientific ecological knowledge . [ citation needed ] Non-tribal government agencies, such as the U.S. EPA , have established integration programs with some tribal governments in order to incorporate TEK in environmental plans and climate change tracking. In contrast to the universality towards which contemporary academic pursuits often aim, TEK is not necessarily a universal concept among various societies, instead referring to a system of knowledge traditions or practices that are heavily dependent on "place". [ citation needed ]
There is a debate whether Indigenous populations retain intellectual property rights over traditional knowledge and whether use of this knowledge requires prior permission and license. [ 2 ] [ better source needed ] This is especially complicated because TEK is most frequently preserved as oral tradition and as such may lack objectively confirmed documentation . As such, the same methods that could resolve the issue of documentation to meet legal requirements may compromise the very nature of traditional knowledge.
Traditional knowledge is used by its holders to maintain ecological resources necessary for survival. [ citation needed ] While TEK and the communities which contain it are threatened in the context of rapid climate change or environmental degradation , TEK also can help to explain the impacts of those changes within the ecosystem . [ citation needed ]
"The earliest systematic studies of TEK were done by anthropologists. Ecological knowledge was studied through the lens of ethnoecology (an approach that focuses on the conceptions of ecological relationships held by a people or a culture)..." [ 3 ] in understanding how systems of knowledge were developed by a given culture. Harold Colyer Conklin , an American anthropologist took the lead in documenting indigenous ways of understanding the natural world. Conklin and others documented how traditional peoples, such as Philippine horticulturists, had detailed knowledge about the plants and animals where they resided. [ 4 ] Direct involvement in gathering, fashioning products from, and using local plants and animals created a scheme in which the biological world and the cultural world were tightly intertwined. The field of TEK encompasses a broad range of questions related to cultural ecology and ecological anthropology by emphasizing the study of human-nature relations, adaptive processes, which argues that social organization itself is an ecological adaptational response by a group to its local environment, and the practical techniques on which these relationships and culture depend.
In 1987, the report Our Common Future , [ 5 ] by the World Commission on Environment and Development, was published by the United Nations . The report points out that the successes of the 20th century (decreases in infant mortality, increases in life expectancy, increases in literacy, and global food production) have given rise to trends that have caused environmental degradation "in an ever more polluted world among ever decreasing resources." The report declared that tribal and indigenous peoples had lifestyles that could provide modern societies with lessons for the management of resources in complex forest, mountain, and dryland ecosystems.
Fulvio Mazzocchi of the Italian National Research Council 's Institute of Atmospheric Pollution outlines the characteristics of TEK as follows:
Traditional knowledge has developed a concept of the environment that emphasizes the symbiotic character of humans and nature. It offers an approach to local development that is based on co‐evolution with the environment, and on respecting the carrying capacity of ecosystems. This knowledge--based on long‐term empirical observations adapted to local conditions--ensures a sound use and control of the environment, and enables indigenous people to adapt to environmental changes. Moreover, it supplies much of the world's population with the principal means to fulfil their basic needs, and forms the basis for decisions and strategies in many practical aspects, including interpretation of meteorological phenomena, medical treatment, water management, production of clothing, navigation, agriculture and husbandry, hunting and fishing, and biological classification systems.... Beyond its obvious benefit for the people who rely on this knowledge, it might provide humanity as a whole with new biological and ecological insights; it has potential value for the management of natural resources and might be useful in conservation education as well as in development planning and environmental assessment. [ 6 ]
Some anthropologists, such as M. Petriello and A. Stronza, warn that presenting TEK as an "indigenous" construct will cause the privileging of certain types of TEK over others and restricting which groups are thought to possess TEK results in reduced understanding of and collaboration with groups such as campesinos who while not often classified as "indigenous" nevertheless possess TEK. [ 7 ] The term TEK has been criticised as a form of intellectual appropriation that modifies traditional/indigenous knowledges to better fit a conventional Western modern science framework. [ 8 ]
Nicholas Houde, in an article published in Ecology and Society , identifies six facets of traditional ecological knowledge: factual observations, management systems, past and current uses, ethics and values, culture and identity, and cosmology. [ 9 ] These aspects emphasize how "cooperative management [can] better identify areas of difference and convergence when attempting to bring two ways of thinking and knowing together." [ 9 ]
The first aspect of traditional ecological knowledge incorporates the factual, specific observations generated by recognition, naming, and classification of discrete components of the environment. This type of "empirical knowledge consists of a set of generalized observations conducted over a long period of time and reinforced by accounts of other TEK holders." [ 10 ]
The second facet refers to the ethical and sustainable use of resources in regards to management systems. More specifically, issues such as dealing with pest management, resource conversion, multiple cropping patterns, and methods for estimating the state of resources can be thought of as part of such management systems. How resource management can adapt to local environments is another crucial aspect of such considerations. [ 9 ]
The third facet refers to the time dimension of TEK, focusing on past and current uses of the environment transmitted through oral history, [ 10 ] such as land use, settlement, occupancy, and harvest levels. Oral history is used to transmit cultural heritage generation to generation about such topics as medicinal plants and the existence of historical sites, and contributes to a sense of family and community. [ 9 ]
The fourth facet refers to value statements and the connections between the belief system and the organization of facts. In regards to TEK, it refers to environmental ethics that keep exploitative abilities in check. This facet also refers to the expression of values concerning the relationship with the habitats of species and their surrounding environment - the human–environment relationship.
The fifth facet refers to the role of language and images of the past giving life to culture. This facet reflects the stories, values, and social relations that reside in places as contributing to the survival, reproduction, and evolution of aboriginal cultures, and identities while stressing "the restorative benefits of cultural landscapes as places for renewal." [ 9 ]
The sixth facet is a culturally based cosmology that is the foundation of the other aspects. This can vary greatly from one culture to the next. The term 'cosmology' relates to the assumptions and beliefs cultures have about how things work, explains the way in which things are connected, and gives principles that regulate human–animal relations and the role of humans in the world.
Ecosystem management is a multifaceted approach to natural resource management that can incorporate science and TEK to collate long-term measurements that would otherwise be unavailable. This can be achieved by scientists and researchers collaborating with Indigenous peoples through a consensus decision-making process while meeting the socioeconomic, political and cultural needs of current and future generations. Concerns over instances where indigenous knowledge has been used without consent ( cultural appropriation ), acknowledgment, or compensation have been raised by some critics. [ citation needed ]
Ecological restoration is the practice of restoring a degraded ecosystem through human intervention. There are many links between ecological restoration and ecosystem management practices involving TEK. [ 11 ] Due to the aforementioned unequal power between indigenous and non-indigenous peoples, equitable partnerships formed in these contexts can help mitigate extant social injustices, as when Indigenous Peoples lead ecological restoration projects. [ 12 ] [ opinion ]
In some areas, environmental degradation has led to a decline in traditional ecological knowledge. For example, at the Aamjiwnaang community of Anishnaabe First Nations people in Sarnia, Ontario , Canada, residents suffer from a "noticeable decrease in male birth ratio ..., which residents attribute to their proximity to petrochemical plants". [ 13 ] [ further explanation needed ]
Climate change affects indigenous people in different ways depending on the geographic region, requiring different adaptation and mitigation actions. For example, to deal immediately with these conditions, indigenous people adjust when and what they harvest, and also adjust their resource use. Climate change can change the accuracy of the information in TEK. Indigenous people have relied on indicators in nature to plan activities and even for short-term weather predictions. [ 14 ] [ better source needed ] As a result of increasingly unusual conditions, entire indigenous cultures have been disrupted and displaced, leading to a loss of the cultural ties to the lands they once resided on and a loss of the traditional ecological knowledge they had of the land there. [ 15 ] Climate change adaptations have the potential to harm indigenous rights. [ citation needed ] The US EPA promised to take traditional ecological knowledge into consideration in planning adaptations to climate change. [ 16 ]
Rising temperatures pose a threat to ecosystems, including the locations where plants grow, the times that insects emerge throughout the year, and the seasonal habitats of animals. [ 15 ] For many harvesting seasons, indigenous people have shifted their activity months earlier due to impacts from climate change, adaptations that become more important in the face of a rapidly changing climate. Climate change can therefore affect the availability and quality of environmental resources for indigenous people. [ 15 ] For example, as sea ice levels decrease, Alaska Native peoples have experienced changes in their daily lives. [ 17 ] [ how? ] Thawing permafrost has damaged buildings and roadways while clean water resources dwindle. [ 15 ] Fishing, transportation, and the social and economic aspects of their lives are destabilized. [ 18 ] [ better source needed ] Additionally, as temperatures rise, disasters such as uncontrolled wildfires become more likely. One Indigenous nation in Australia was recently given back land, and they reinstated their traditional practice of controlled burning. This was documented to increase the area's biodiversity and decrease the severity of the wildfires. [ 19 ] [ failed verification ] Traditional ecological knowledge can help provide information about climate change across generations and the geography of the actual residents in the area. [ 17 ] [ 16 ] The Natural Resources Conservation Service of the United States Department of Agriculture has used methods of the indigenous people to combat climate change conditions. [ 16 ]
Instances where TEK was recognized in the literature are included below.
Environmental sociologist Kirsten Vinyeta and tribal climate change researcher Kathy Lynn reported on the Karuk Tribe of California: "Traditional burning practices have been critical to the Karuk since time immemorial. For the Tribe, fire serves as a critical land management tool as well as a spiritual practice." [ 20 ] Environmental studies professor Tony Marks-Block, ecological researcher Frank K. Lake, and tropical forester Lisa M. Curran explained how the Karuk and the Yurok Tribes organized controlled burns and fuel reduction treatments in their ancestral territories to reduce wildfire risk and "restore ecocultural resources depleted from decades of fire exclusion". [ 21 ] Professor of sociology Kari Norgaard and Karuk tribe member William Tripp recommend "this process... be replicated and expanded to other communities throughout the western Klamath Mountains and beyond" to promote the positive outcomes seen as a result of the custodial burns of these tribes. [ 22 ] [ further explanation needed ]
Indigenous philosopher and climate/ environmental justice scholar Kyle Powys Whyte writes " Anishinaabek/Neshnabék throughout the Great Lakes region are at the forefront of native species conservation and ecological restoration projects that seek to learn from, adapt, and put into practice local human and nonhuman relationships and stories at the convergence of deep Anishinaabe history and the disruptiveness of industrial settler campaigns." [ 23 ]
Ecological scholars Paul Guernsey, Kyle Keeler and Lummi member Jeremiah Julius describe in a paper how "In 2018, the Lummi Nation dedicated itself to a Totem Pole Journey across the United States calling for the return of their relative "Lolita" (a Southern Resident Killer Whale ) to her home waters.... [additionally] asking for NOAA to collaborate in feeding the whales until the chinook runs of the Puget Sound can sustain them." [ 24 ]
In India, indigenous knowledge relating to agroforestry has been passed down for generations. [ 25 ] [ failed verification ] One paper suggests that the negative impacts of colonial-era and more recent corporate land management practices could be mitigated through a revival of traditional farming methods. [ 26 ]
One traditional farming practice is jhum , [ 27 ] also known as shifting cultivation or "slash and burn". This is a common practice in northeastern India, where sections of land are regularly burned and returned to after the soil's fertility is restored. The practice of jhum heightens carbon storage and biodiversity. [ 28 ] [ failed verification ] Jhum paired with certain plant-based pesticides was demonstrated to create an agroforestry structure that could function without dependence on industrial fertilizers and pesticides. [ 29 ] | https://en.wikipedia.org/wiki/Traditional_ecological_knowledge
Traditional engineering , also known as sequential engineering , is the process of marketing , engineering design , manufacturing , testing and production in which each stage of the development process is carried out separately, and the next stage cannot start until the previous stage is finished. The information flow is therefore in one direction only, and it is not until the end of the chain that errors, changes and corrections can be relayed back to the start of the sequence, causing costs to be underestimated.
This can cause many problems, such as time lost to the many modifications made because each stage does not take the next into account. This method is rarely used today, [ citation needed ] as the concept of concurrent engineering is more efficient.
Traditional engineering is also known as over the wall engineering as each stage blindly throws the development to the next stage over the wall .
Traditional manufacturing has been driven by sales forecasts that companies need to produce and stockpile inventory to support. Lean manufacturing is based on the concept that production should be driven by the actual customer demands and requirements. Instead of pushing product to the marketplace, it is pulled through by the customers' actual needs.
| https://en.wikipedia.org/wiki/Traditional_engineering
Traffic classification is an automated process which categorises computer network traffic according to various parameters (for example, based on port number or protocol ) into a number of traffic classes . [ 1 ] Each resulting traffic class can be treated differently in order to differentiate the service implied for the data generator or consumer.
Packets are classified to be processed differently by the network scheduler . Upon classifying a traffic flow using a particular protocol, a predetermined policy can be applied to it and other flows to either guarantee a certain quality (as with VoIP or media streaming service [ 2 ] ) or to provide best-effort delivery. This may be applied at the ingress point (the point at which traffic enters the network, typically an edge device ) with a granularity that allows traffic management mechanisms to separate traffic into individual flows and queue, police and shape them differently. [ 3 ]
Classification is achieved by various means.
Matching bit patterns of data to those of known protocols is a simple, widely used technique. An example to match the BitTorrent protocol handshaking phase would be a check to see if a packet began with character 19 which was then followed by the 19-byte string "BitTorrent protocol". [ 4 ]
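A minimal sketch of this kind of payload pattern matching follows; it checks a single packet payload only, whereas a real classifier would handle many protocols and reassembled streams:

```java
import java.nio.charset.StandardCharsets;

public class PatternClassifier {
    // The BitTorrent handshake starts with the byte 19 (the length of the
    // protocol name) followed by the 19-byte ASCII string "BitTorrent protocol".
    private static final byte[] BT_HANDSHAKE;
    static {
        byte[] name = "BitTorrent protocol".getBytes(StandardCharsets.US_ASCII);
        BT_HANDSHAKE = new byte[name.length + 1];
        BT_HANDSHAKE[0] = 19;
        System.arraycopy(name, 0, BT_HANDSHAKE, 1, name.length);
    }

    // Returns true if the payload begins with the handshake pattern.
    static boolean looksLikeBitTorrent(byte[] payload) {
        if (payload.length < BT_HANDSHAKE.length) return false;
        for (int i = 0; i < BT_HANDSHAKE.length; i++) {
            if (payload[i] != BT_HANDSHAKE[i]) return false;
        }
        return true;
    }
}
```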
A comprehensive comparison of various network traffic classifiers, which depend on Deep Packet Inspection (PACE, OpenDPI, 4 different configurations of L7-filter, NDPI, Libprotoident, and Cisco NBAR), is shown in the Independent Comparison of Popular DPI Tools for Traffic Classification. [ 5 ]
Nowadays traffic is more complex and more often encrypted, so a method is needed to classify encrypted traffic differently from the classic approach (based on IP traffic analysis by probes in the core network). One way to achieve this is to use traffic descriptors from connection traces in the radio interface to perform the classification. [ 7 ]
This same classification problem is also present in multimedia traffic. It has generally been shown that methods based on neural networks, support vector machines, statistics, and nearest neighbors are effective for this traffic classification, though in specific cases some methods are better than others; for example, neural networks work better when the whole observation set is taken into account. [ 8 ]
Both the Linux network scheduler and Netfilter contain logic to identify and mark or classify network packets.
Operators often distinguish two broad types of network traffic: time-sensitive and best-effort. [ 9 ]
Time-sensitive traffic is traffic the operator has an expectation to deliver on time. This includes VoIP , online gaming , video conferencing , and web browsing . Traffic management schemes are typically tailored in such a way that the quality of service of these selected uses is guaranteed, or at least prioritized over other classes of traffic. This can be accomplished by the absence of shaping for this traffic class, or by prioritizing sensitive traffic above other classes.
Best-effort traffic is all other kinds of traffic. This is traffic that the ISP deems isn't sensitive to quality of service metrics (jitter, packet loss, latency). A typical example would be peer-to-peer and email applications. Traffic management schemes are generally tailored so best-effort traffic gets what is left after time-sensitive traffic.
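As a sketch of this two-class policy, a strict-priority scheduler serves best-effort packets only when no time-sensitive packet is waiting. The Packet type below is an illustrative assumption; real schedulers usually also rate-limit the priority class so that best-effort traffic is not starved entirely:

```java
import java.util.ArrayDeque;
import java.util.Queue;

public class TwoClassScheduler {
    // Minimal stand-in for a queued packet.
    record Packet(String description) {}

    private final Queue<Packet> timeSensitive = new ArrayDeque<>();
    private final Queue<Packet> bestEffort = new ArrayDeque<>();

    // Classification has already happened; the flag selects the queue.
    void enqueue(Packet p, boolean isTimeSensitive) {
        (isTimeSensitive ? timeSensitive : bestEffort).add(p);
    }

    // Next packet to transmit: time-sensitive first, best-effort gets
    // whatever capacity is left over; null if both queues are empty.
    Packet next() {
        Packet p = timeSensitive.poll();
        return (p != null) ? p : bestEffort.poll();
    }
}
```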
Peer-to-peer file sharing applications are often designed to use any and all available bandwidth which impacts QoS -sensitive applications (like online gaming ) that use comparatively small amounts of bandwidth. P2P programs can also suffer from download strategy inefficiencies, namely downloading files from any available peer, regardless of link cost. The applications use ICMP and regular HTTP traffic to discover servers and download directories of available files.
In 2002, Sandvine Incorporated determined, through traffic analysis, that P2P traffic accounted for up to 60% of traffic on most networks. [ 10 ] This shows, in contrast to previous studies and forecasts, that P2P has become mainstream.
P2P protocols can be, and often are, designed so that the resulting packets are harder to identify (to avoid detection by traffic classifiers), and with enough robustness that they do not depend on specific QoS properties in the network (in-order packet delivery, jitter, etc.); typically this is achieved through increased buffering and reliable transport, with the user experiencing increased download time as a result. The encrypted BitTorrent protocol does for example rely on obfuscation and randomized packet sizes in order to avoid identification. [ 11 ] File sharing traffic can be appropriately classified as best-effort traffic. At peak times when sensitive traffic is at its height, download speeds will decrease. However, since P2P downloads are often background activities, this affects the subscriber experience little, so long as the download speeds increase to their full potential when all other subscribers hang up their VoIP phones. Exceptions are real-time P2P VoIP and P2P video streaming services that need permanent QoS and use excessive [ citation needed ] overhead and parity traffic to enforce this as far as possible.
Some P2P applications [ 12 ] can be configured to act as self-limiting sources , serving as a traffic shaper configured to the user's (as opposed to the network operator's) traffic specification.
Some vendors advocate managing clients rather than specific protocols, particularly for ISPs. By managing per-client (that is, per customer), if the client chooses to use their fair share of the bandwidth running P2P applications, they can do so, but if their application is abusive, they only clog their own bandwidth and cannot affect the bandwidth used by other customers. | https://en.wikipedia.org/wiki/Traffic_classification |
A traffic alert and collision avoidance system ( TCAS ; pronounced / ˈ t iː k æ s / TEE -kas), also known as an Airborne Collision Avoidance System ( ACAS ), [ 1 ] is an aircraft collision avoidance system designed to reduce the incidence of mid-air collision (MAC) between aircraft . It monitors the airspace around an aircraft for other aircraft equipped with a corresponding active transponder , independent of air traffic control , and warns pilots of the presence of other transponder-equipped aircraft which may present a threat of MAC. It is a type of airborne collision avoidance system mandated by the International Civil Aviation Organization to be fitted to all aircraft with a maximum take-off mass (MTOM) of over 5,700 kg (12,600 lb) or authorized to carry more than 19 passengers. In the United States, 14 CFR , Ch. I, Part 135 requires that TCAS I be installed for aircraft with 10–30 passengers and TCAS II for aircraft with more than 30 passengers. ACAS/TCAS is based on secondary surveillance radar (SSR) transponder signals, but operates independently of ground-based equipment to provide advice to the pilot on potentially conflicting aircraft.
In modern glass cockpit aircraft, the TCAS display may be integrated in the navigation display (ND) or electronic horizontal situation indicator (EHSI).
In older glass cockpit aircraft and those with mechanical instrumentation, an integrated TCAS display including an instantaneous vertical speed indicator (IVSI) may replace the mechanical IVSI, which only indicates the rate at which the aircraft is descending or climbing.
Research into collision avoidance systems has been ongoing since at least the 1950s, and the airline industry has been working with the Air Transport Association of America (ATA) since 1955 toward a collision avoidance system. ICAO and aviation authorities such as the Federal Aviation Administration (FAA) were spurred into action by the 1956 Grand Canyon mid-air collision . [ 2 ]
Although ATCRBS airborne transponders were available, it was not until the mid-1970s that research focused on using their signals as the cooperative element for a collision avoidance system. This technical approach enabled an independent collision avoidance capability on the flight deck, separate from the ground system. In 1981, the FAA decided to implement the Traffic Alert and Collision Avoidance System (TCAS), which was developed based on industry and agency efforts in the field of beacon-based collision avoidance systems and air-to-air discrete address communication techniques that used Mode S airborne transponder message formats. [ 3 ]
A short time later, prototypes of TCAS II were installed on two Piedmont Airlines Boeing 727 aircraft, and were flown on regularly scheduled flights. Although the displays were located outside the view of the flight crew and seen only by trained observers, these tests did provide valuable information on the frequency and circumstances of alerts and their potential for interaction with the ATC system. In a follow-on Phase II program, a later version of TCAS II was installed on a single Piedmont Airlines Boeing 727, and the system was certified in April 1986, then subsequently approved for operational evaluation in early 1987. Since the equipment was not developed to full standards, the system was only operated in visual meteorological conditions (VMC). Although the flight crew operated the system, the evaluation was primarily for the purpose of data collection and its correlation with flight crew and observer observation and response. [ 3 ]
Later versions of TCAS II manufactured by Bendix /King Air Transport Avionics Division were installed and approved on United Airlines airplanes in early 1988. Similar units manufactured by Honeywell were installed and approved on Northwest Airlines airplanes in late 1988. This limited installation program operated TCAS II units approved for operation as a full-time system in both visual and instrument meteorological conditions (IMC) on three different aircraft types. The operational evaluation programs continued through 1988 to validate the operational suitability of the systems. [ 3 ]
The implementation of TCAS added a safety barrier to help prevent mid-air collisions . However, further study, refinements, training and regulatory measures were still required because the limitations and misuse of the system still resulted in other incidents and fatal accidents, which include:
TCAS involves communication between all aircraft equipped with an appropriate transponder (provided the transponder is enabled and set up properly). Each TCAS-equipped aircraft interrogates all other aircraft in a determined range about their position (via the 1030 MHz radio frequency ), and all other aircraft reply to other interrogations (via 1090 MHz). This interrogation-and-response cycle may occur several times per second. [ 6 ] [ 2 ]
The TCAS system builds a three-dimensional map of aircraft in the airspace, incorporating their range (garnered from the interrogation and response round trip time), altitude (as reported by the interrogated aircraft), and bearing (by the directional antenna from the response). Then, by extrapolating current range and altitude difference to anticipated future values, it determines if a potential collision threat exists.
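As a rough illustration of the range measurement, an interrogator can convert a reply's round-trip time into slant range once the transponder's fixed turnaround delay is subtracted. The Python sketch below is a simplified calculation rather than avionics code; the 128 µs turnaround is the nominal Mode S reply delay, and the sample timing value is invented.

```python
C = 299_792_458.0             # speed of light, m/s
MODE_S_TURNAROUND_S = 128e-6  # nominal fixed reply delay (assumption: Mode S)

def slant_range_m(round_trip_s: float) -> float:
    """Range from interrogation round-trip time, net of transponder delay.
    The signal covers the distance twice, hence the division by two."""
    propagation = round_trip_s - MODE_S_TURNAROUND_S
    return C * propagation / 2.0

# Invented example: a reply arriving 228 us after interrogation
# corresponds to 100 us of two-way propagation, i.e. roughly 15 km.
print(f"{slant_range_m(228e-6) / 1000:.1f} km")  # -> 15.0 km
```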
TCAS and its variants are only able to interact with aircraft that have a correctly operating mode C or mode S transponder. A unique 24-bit identifier is assigned to each aircraft that has a mode S transponder.
The next step beyond identifying potential collisions is automatically negotiating a mutual avoidance manoeuver (currently, manoeuvers are restricted to changes in altitude and modification of climb/sink rates) between the two (or more) conflicting aircraft. These avoidance manoeuvers are communicated to the flight crew by a cockpit display and by synthesized voice instructions. [ 6 ] [ 2 ]
A protected volume of airspace surrounds each TCAS equipped aircraft. The size of the protected volume depends on the altitude, speed, and heading of the aircraft involved in the encounter. The illustration below gives an example of a typical TCAS protection volume.
A TCAS installation consists of the following components: [ 6 ] [ 2 ]
The following section describes the TCAS operation based on TCAS II, since this is the version that has been adopted as an international standard (ACAS II) by ICAO and aviation authorities worldwide. [ 6 ] [ 2 ]
TCAS II can be currently operated in the following modes: [ 6 ] [ 2 ]
TCAS works in a coordinated manner, so when an RA is issued to conflicting aircraft, a required action (e.g., "Climb, climb" ) has to be immediately performed by one of the aircraft, while the other receives a similar RA in the opposite direction (e.g., "Descend, descend" ).
TCAS II issues the following types of aural annunciations:
When a TA is issued, pilots are instructed to initiate a visual search for the traffic causing the TA. If the traffic is visually acquired, pilots are instructed to maintain visual separation from the traffic. Training programs also indicate that no horizontal maneuvers are to be made based solely on information shown on the traffic display. Slight adjustments in vertical speed while climbing or descending, or slight adjustments in airspeed while still complying with the ATC clearance are acceptable. [ 8 ]
When an RA is issued, pilots are expected to respond immediately to the RA unless doing so would jeopardize the safe operation of the flight. This means that aircraft will at times have to manoeuver contrary to ATC instructions or disregard ATC instructions. In these cases, the controller is no longer responsible for separation of the aircraft involved in the RA until the conflict is terminated.
On the other hand, ATC can potentially interfere with a pilot's response to RAs. If a conflicting ATC instruction coincides with an RA, a pilot may assume that ATC is fully aware of the situation and is providing the better resolution. But in reality, ATC is not aware of the RA until the RA is reported by the pilot. Once the RA is reported by the pilot, ATC is required not to attempt to modify the flight path of the aircraft involved in the encounter. Hence, the pilot is expected to "follow the RA" but in practice this does not always happen.
Some countries have implemented "RA downlink" which provides air traffic controllers with information about RAs posted in the cockpit. Currently, there are no ICAO provisions concerning the use of RA downlink by air traffic controllers.
The following points receive emphasis during pilot training:
An RA occurs on average every 1,000 flight hours on short/medium-haul aircraft and every 3,000 hours for long-haul aircraft.
In its December 2017 ACAS guide, Eurocontrol found that in about 25% of cases pilots followed the RA inaccurately.
Airbus offers the option of an autopilot / flight director TCAS for automatic avoidance maneuvers. [ 9 ]
Safety studies on TCAS estimate that the system improves safety in the airspace by a factor of between 3 and 5. [ 11 ]
However, it is well understood that part of the remaining risk is that TCAS may induce midair collisions: "In particular, it is dependent on the accuracy of the threat aircraft's reported altitude and on the expectation that the threat aircraft will not make an abrupt maneuver that defeats the TCAS Resolution Advisory (RA). The safety study also shows that TCAS II will induce some critical near midair collisions..." (See page 7 of Introduction to TCAS II Version 7 and 7.1 (PDF) in external links below). [ 6 ] [ 2 ]
One potential problem with TCAS II is the possibility that a recommended avoidance maneuver might direct the flight crew to descend toward terrain below a safe altitude. Recent requirements for incorporation of ground proximity mitigate this risk. Ground proximity warning alerts have priority in the cockpit over TCAS alerts.
Some pilots have been unsure how to act when their aircraft was requested to climb whilst flying at their maximum altitude. The accepted procedure is to follow the climb RA as best as possible, temporarily trading speed for height . The climb RA should quickly finish. In the event of a stall warning, the stall warning would take priority.
Both cases have been addressed by Version 7.0 of TCAS II and are currently handled by a corrective RA together with a visual indication of a green arc in the IVSI display to indicate the safe range for the climb or descent rate. However, it has been found that in some cases these indications could lead to a dangerous situation for the involved aircraft. For example, if a TCAS event occurs when two aircraft are descending one over the other for landing, the aircraft at the lower altitude will first receive a "Descend, descend" RA, and when reaching an extreme low altitude, this will change to a "Level off, level off" RA, together with a green arc indication directing the pilot to level off the aircraft. This could place the aircraft dangerously into the path of the intruder above, who is descending to land. A change proposal has been issued to correct this problem. [ 12 ]
TCAS was not designed with security in mind, even in its newest versions. With the rise of software-defined radios , security researchers have investigated wireless attacks on TCAS and have demonstrated [ 13 ] how to take full control over the collision avoidance displays and create RAs for arbitrary aircraft on a collision course. These attacks can be launched using commercial off-the-shelf hardware and could be used to instruct the pilot to climb or descend at will. However, these attacks are only possible when the attacker is close to the victim aircraft (up to a distance of 4.2 km), limiting the risk of abuse in the real world.
TCAS technology has proved to be too expensive for small business and general aviation aircraft. Manufacturers and authorities recognized the need for an alternative to TCAS; this led to the development of the Traffic Advisory System. TAS is actually a simplified version of TCAS I. The system structure, components, operation, traffic display and TA logic are identical, but the minimum operational performance standards (MOPS) of TAS allow some simplification compared to TCAS I: [ 14 ] [ 15 ]
The following documents contain all of the differences between TCAS I and TAS:
In spite of all this, most manufacturers do not take the above-mentioned opportunities to make simplified devices. As a result of market forces, many TAS systems operate just like TCAS I (with interference limiting, using TCAS I symbology, etc.), with some even having better surveillance performance (in range and number of tracked aircraft) and specifications than TCAS I.
Automatic dependent surveillance – broadcast (ADS–B) messages are transmitted from aircraft equipped with suitable transponders, containing information such as identity, location, and velocity. The signals are broadcast on the 1090 MHz radio frequency. ADS-B messages are also carried on a universal access transceiver (UAT) in the 978 MHz band. [ 16 ]
TCAS equipment which is capable of processing ADS–B messages may use this information to enhance the performance of TCAS, using techniques known as "hybrid surveillance". As currently implemented, hybrid surveillance uses reception of ADS–B messages from an aircraft to reduce the rate at which the TCAS equipment interrogates that aircraft. This reduction in interrogations reduces the use of the 1030/1090 MHz radio channel, and will over time extend the operationally useful life of TCAS technology. The ADS–B messages will also allow low cost (for aircraft) technology to provide real time traffic in the cockpit for small aircraft. [ 17 ] Currently UAT based traffic uplinks are provided in Alaska and in regions of the East coast of the US.
Hybrid surveillance does not make use of ADS–B's aircraft flight information in the TCAS conflict detection algorithms; ADS–B is used only to identify aircraft that can safely be interrogated at a lower rate.
In the future, prediction capabilities may be improved by using the state vector information present in ADS–B messages. Also, since ADS–B messages can be received at greater range than TCAS normally operates, aircraft can be acquired earlier by the TCAS tracking algorithms.
The identity information present in ADS–B messages can be used to label other aircraft on the cockpit display (where present), painting a picture similar to what an air traffic controller would see and improving situational awareness. [ 18 ] [ 19 ]
TCAS I is a cheaper but less capable system than the modern TCAS II, introduced for general aviation use after the FAA mandated TCAS II for air transport aircraft. TCAS I systems are able to monitor the traffic situation around a plane (to a range of about 40 miles) and offer information on the approximate bearing and altitude of other aircraft. It can also generate collision warnings in the form of a "Traffic Advisory" (TA). The TA warns the pilot that another aircraft is in the near vicinity, announcing "Traffic, traffic" , but does not offer any suggested remedy; it is up to the pilot to decide what to do, usually with the assistance of Air Traffic Control. When a threat has passed, the system announces "Clear of conflict" . [ 20 ]
TCAS II, first introduced in 1989, is the current generation of instrument warning TCAS, used in the majority of commercial aviation aircraft (see table below). A US Airways 737 was the first aircraft certified with the AlliedBendix (now Honeywell) TCAS II system. It offers all the benefits of TCAS I, but will also offer the pilot direct, vocalized instructions to avoid danger, known as a "Resolution Advisory" (RA). The suggested action may be "corrective", directing the pilot to change vertical speed by announcing "Descend, descend" , "Climb, climb" or "Level off, level off" (meaning reduce vertical speed). By contrast, a "preventive" RA may be issued which simply warns the pilots not to deviate from their present vertical speed, announcing "Monitor vertical speed" or "Maintain vertical speed, Maintain" . TCAS II systems coordinate their resolution advisories before issuing commands to the pilots, so that if one aircraft is instructed to descend, the other will typically be told to climb, maximising the separation between the two aircraft. [ 2 ]
As of 2006, the only implementation that meets the ACAS II standards set by ICAO [ 21 ] was Version 7.0 of TCAS II, produced by three avionics manufacturers: Rockwell Collins , Honeywell , and ACSS (Aviation Communication & Surveillance Systems; an L3Harris and Thales Avionics joint venture company).
After the 2002 Überlingen mid-air collision (July 1, 2002), studies were made to improve TCAS II capabilities. Following extensive Eurocontrol input and pressure, a revised TCAS II Minimum Operational Performance Standards (MOPS) document was jointly developed by RTCA (Special Committee SC-147 [ 22 ] ) and EUROCAE. As a result, by 2008 the standards for Version 7.1 of TCAS II had been issued [ 23 ] and published as RTCA DO-185B [ 7 ] (June 2008) and EUROCAE ED-143 (September 2008).
TCAS II Version 7.1 [ 2 ] will be able to issue RA reversals in coordinated encounters, in case one of the aircraft does not follow the original RA instructions (Change proposal CP112E). [ 24 ] Other changes in this version are the replacement of the ambiguous "Adjust Vertical Speed, Adjust" RA with the "Level off, Level off" RA, to prevent improper response by the pilots (Change proposal CP115); [ 25 ] and the improved handling of corrective/preventive annunciation and removal of the green arc display when a positive RA weakens solely due to an extreme low or high altitude condition (1000 feet AGL or below, or near the aircraft's top ceiling), to prevent incorrect and possibly dangerous guidance to the pilot (Change proposal CP116). [ 12 ] [ 26 ]
Studies conducted for Eurocontrol , using recently recorded operational data, indicate that currently [ when? ] the probability of a mid-air collision per flight hour in European airspace is 2.7 × 10⁻⁸, which equates to roughly one mid-air collision every three years at current traffic levels. When TCAS II Version 7.1 is implemented, that probability will be reduced by a factor of 4. [ 26 ]
Although ACAS III is mentioned as a future system in ICAO Annex 10, ACAS III is unlikely to materialize due to difficulties the current surveillance systems have with horizontal tracking. Currently, research is being conducted to develop a future collision avoidance system (under the working name of ACAS X). [ 27 ]
Originally designated TCAS II Enhanced, TCAS III was envisioned as an expansion of the TCAS II concept to include horizontal resolution advisory capability. TCAS III was the "next generation" of collision avoidance technology which underwent development by aviation companies such as Honeywell . TCAS III incorporated technical upgrades to the TCAS II system, and had the capability to offer traffic advisories and resolve traffic conflicts using horizontal as well as vertical manoeuvring directives to pilots. For instance, in a head-on situation, one aircraft might be directed, "turn right, climb" while the other would be directed "turn right, descend." This would act to further increase the total separation between aircraft, in both horizontal and vertical aspects. Horizontal directives would be useful in a conflict between two aircraft close to the ground where there may be little if any vertical maneuvering space. [ 28 ]
TCAS III attempted to use the TCAS directional antenna to assign a bearing to other aircraft, and thus be able to generate a horizontal maneuver (e.g. turn left or right). However, it was judged by the industry to be unfeasible due to limitations in the accuracy of the TCAS directional antennas. The directional antennas were judged not to be accurate enough to generate an accurate horizontal-plane position, and thus an accurate horizontal resolution. By 1995, years of testing and analysis determined that the concept was unworkable using available surveillance technology (due to the inadequacy of horizontal position information), and that horizontal RAs were unlikely to be invoked in most encounter geometries. Hence, all work on TCAS III was suspended and there are no plans for its implementation. The concept later evolved into and was replaced by TCAS IV. [ 29 ] [ 30 ]
TCAS IV uses additional information encoded by the target aircraft in the Mode S transponder reply (i.e. target encodes its own position into the transponder signal) to generate a horizontal resolution to an RA. In addition, some reliable source of position (such as Inertial Navigation System or GPS ) is needed on the target aircraft in order for it to be encoded.
TCAS IV had replaced the TCAS III concept by the mid-1990s. One of the results of TCAS III experience was that the directional antenna used by the TCAS processor to assign a bearing to a received transponder reply was not accurate enough to generate an accurate horizontal position, and thus a safe horizontal resolution. TCAS IV used additional position information encoded on an air-to-air data link to generate the bearing information, so that the accuracy of the directional antenna would not be a factor.
TCAS IV development continued for some years, but the appearance of new trends in data link, such as Automatic Dependent Surveillance – Broadcast ( ADS-B ), pointed out a need to re-evaluate whether a data link system dedicated to collision avoidance such as TCAS IV should be incorporated into a more generic air-to-air data link system supporting additional applications. As a result of these issues, the TCAS IV concept was abandoned as ADS-B development started. [ 30 ] [ 31 ]
Although the system occasionally suffers from false alarms, pilots are now under strict instructions to regard all TCAS messages as genuine alerts demanding an immediate, high-priority response. Only Windshear Detection and GPWS alerts and warnings have higher priority than the TCAS. The FAA , EASA and most other countries' authorities' rules state that in the case of a conflict between TCAS RA and air traffic control (ATC) instructions, the TCAS RA always takes precedence. This is mainly because of the TCAS-RA inherently possessing a more current and comprehensive picture of the situation than air traffic controllers, whose radar / transponder updates usually happen at a much slower rate than the TCAS interrogations. [ 6 ] [ 2 ] If one aircraft follows a TCAS RA and the other follows conflicting ATC instructions, a collision can occur, such as the July 1, 2002 Überlingen disaster . In this mid-air collision, both airplanes were fitted with TCAS II Version 7.0 systems which functioned properly, but one obeyed the TCAS advisory while the other ignored the TCAS and obeyed the controller; both aircraft descended into a fatal collision. [ 32 ]
This accident could have been prevented had TCAS been able to reverse the original RA for one of the aircraft upon detecting that the crew of the other was not following its original TCAS RA but conflicting ATC instructions instead. This is one of the features implemented in Version 7.1 of TCAS II. [ 23 ] [ 33 ] [ 34 ]
Implementation of TCAS II Version 7.1 was originally planned to start between 2009 and 2011 by retrofitting and forward-fitting all TCAS II equipped aircraft, with the goal that by 2014 Version 7.0 would be completely phased out and replaced by Version 7.1. The FAA and EASA have already published the TCAS II Version 7.1 Technical Standard Order (TSO-C119c [ 35 ] and ETSO-C119c, [ 36 ] respectively), effective since 2009, based on the RTCA DO-185B [ 7 ] and EUROCAE ED-143 standards. On 25 September 2009, the FAA issued Advisory Circular AC 20-151A [ 37 ] providing guidance for obtaining airworthiness approval for TCAS II systems, including the new Version 7.1. On 5 October 2009, the Association of European Airlines (AEA) published a Position Paper [ 38 ] showing the need to mandate TCAS II Version 7.1 on all aircraft as a matter of priority. On 25 March 2010, the European Aviation Safety Agency (EASA) published Notice of Proposed Amendment (NPA) No. 2010-03 pertaining to the introduction of ACAS II software Version 7.1. [ 39 ] On 14 September 2010, EASA published the Comment Response Document (CRD) to the above-mentioned NPA. [ 40 ] Separately, a proposal has been made to amend the ICAO standard to require TCAS II Version 7.1 for compliance with ACAS II SARPs.
ICAO has circulated an amendment for formal member state agreement which recommends TCAS II Change 7.1 adoption by 1 January 2014 for forward fit and 1 January 2017 for retrofit. Following the feedback and comments from airline operators, EASA has proposed the following dates for the TCAS II Version 7.1 mandate in European airspace: forward fit (for new aircraft) 1 March 2012, retrofit (for existing aircraft) 1 December 2015. These dates are proposed dates, subject to further regulatory processes, and are not final until the Implementing Rule has been published. [ 26 ]
Among the system manufacturers, by February 2010 ACSS [ 41 ] certified Change 7.1 for their TCAS 2000 and Legacy TCAS II systems, [ 42 ] and is currently offering Change 7.1 upgrade for their customers. [ 43 ] By June 2010 Honeywell published a white paper with their proposed solutions for TCAS II Version 7.1. [ 44 ] Rockwell Collins currently announces that their TCAS-94, TCAS-4000 and TSS-4100 TCAS II compliant systems are software upgradeable to Change 7.1 when available. [ 45 ]
While the safety benefits of current TCAS implementations are self-evident, the full technical and operational potential of TCAS is not fully exploited due to limitations in current implementations (most of which will need to be addressed in order to further facilitate the design and implementation of Free flight ) and NextGen :
To overcome some of these limitations, the FAA is developing a new collision avoidance logic based on dynamic programming.
In response to a series of midair collisions involving commercial airliners, Lincoln Laboratory was directed by the Federal Aviation Administration in the 1970s to participate in the development of an onboard collision avoidance system. In its current manifestation, the Traffic Alert and Collision Avoidance System is mandated worldwide on all large aircraft and has significantly improved the safety of air travel, but major changes to the airspace planned over the coming years will require substantial modification to the system. [ 48 ]
A set of new systems called ACAS X [ 49 ] will use this new logic:
The first FAA-scheduled industry meeting was held in October 2011 in Washington DC, to brief avionics manufacturers on the development plans for "ACAS X" – including flight demonstrations scheduled for fiscal 2013. The FAA says its work "will be foundational to the development of minimum operational performance standards" for ACAS X by standards developer RTCA. [ 50 ]
It is estimated that, if ACAS X is further developed and certified, it will not be commercially available before the mid-2020s. It is also said to be unclear at this stage whether ACAS X would provide any horizontal resolutions. [ 51 ]
Airbus and Honeywell have tested a proposed automated system, where if the pilot ignored a Resolution Advisory, the aircraft's autopilot would automatically take evasive action. [ 60 ] | https://en.wikipedia.org/wiki/Traffic_collision_avoidance_system |
In transportation engineering , traffic flow is the study of interactions between travellers (including pedestrians, cyclists, drivers, and their vehicles) and infrastructure (including highways, signage, and traffic control devices), with the aim of understanding and developing an optimal transport network with efficient movement of traffic and minimal traffic congestion problems.
The foundation for modern traffic flow analysis dates back to the 1920s with Frank Knight 's analysis of traffic equilibrium, further developed by Wardrop in 1952. Despite advances in computing, a universally satisfactory theory applicable to real-world conditions remains elusive. Current models blend empirical and theoretical techniques to forecast traffic and identify congestion areas, considering variables like vehicle use and land changes.
Traffic flow is influenced by the complex interactions of vehicles, displaying behaviors such as cluster formation and shock wave propagation. Key traffic stream variables include speed, flow, and density, which are interconnected. Free-flowing traffic is characterized by fewer than 12 vehicles per mile per lane, whereas higher densities can lead to unstable conditions and persistent stop-and-go traffic. Models and diagrams, such as time-space diagrams, help visualize and analyze these dynamics. Traffic flow analysis can be approached at different scales: microscopic (individual vehicle behavior), macroscopic (fluid dynamics-like models), and mesoscopic (probability functions for vehicle distributions). Empirical approaches, such as those outlined in the Highway Capacity Manual , are commonly used by engineers to model and forecast traffic flow, incorporating factors like fuel consumption and emissions.
The kinematic wave model, introduced by Lighthill and Whitham in 1955, is a cornerstone of traffic flow theory, describing the propagation of traffic waves and the impact of bottlenecks. Bottlenecks, whether stationary or moving, significantly disrupt flow and reduce roadway capacity. The Federal Highway Administration attributes 40% of congestion to bottlenecks. Classical traffic flow theories include the Lighthill-Whitham-Richards model and various car-following models that describe how vehicles interact in traffic streams. An alternative theory, Kerner's three-phase traffic theory , suggests a range of capacities at bottlenecks rather than a single value. The Newell-Daganzo merge model and car-following models further refine our understanding of traffic dynamics and are instrumental in modern traffic engineering and simulation.
Attempts to produce a mathematical theory of traffic flow date back to the 1920s, when the American economist Frank Knight first produced an analysis of traffic equilibrium, which was refined into Wardrop's first and second principles of equilibrium in 1952.
Nonetheless, even with the advent of significant computer processing power, to date there has been no satisfactory general theory that can be consistently applied to real flow conditions. Current traffic models use a mixture of empirical and theoretical techniques. These models are then developed into traffic forecasts , and take account of proposed local or major changes, such as increased vehicle use, changes in land use or changes in mode of transport (with people moving from bus to train or car, for example), and to identify areas of congestion where the network needs to be adjusted.
Traffic behaves in a complex and nonlinear way, depending on the interactions of a large number of vehicles . Due to the individual reactions of human drivers, vehicles do not interact simply following the laws of mechanics, but rather display cluster formation and shock wave propagation, [ citation needed ] both forward and backward, depending on vehicle density . Some mathematical models of traffic flow use a vertical queue assumption, in which the vehicles along a congested link do not spill back along the length of the link.
In a free-flowing network, traffic flow theory refers to the traffic stream variables of speed, flow, and concentration. These relationships are mainly concerned with uninterrupted traffic flow, primarily found on freeways or expressways. [ 1 ] Flow conditions are considered "free" when less than 12 vehicles per mile per lane are on a road. "Stable" is sometimes described as 12–30 vehicles per mile per lane. As the density reaches the maximum mass flow rate (or flux ) and exceeds the optimum density (above 30 vehicles per mile per lane), traffic flow becomes unstable, and even a minor incident can result in persistent stop-and-go driving conditions. A "breakdown" condition occurs when traffic becomes unstable and exceeds 67 vehicles per mile per lane. [ 2 ] "Jam density" refers to extreme traffic density when traffic flow stops completely, usually in the range of 185–250 vehicles per mile per lane. [ 3 ]
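For illustration, the density thresholds just quoted can be expressed as a simple classifier. This Python sketch is illustrative only; the exact boundary handling between regimes is an assumption.

```python
def flow_regime(k_veh_per_mi_per_lane: float) -> str:
    """Classify the traffic state by lane density, using the thresholds
    quoted above (boundary placement is an illustrative assumption)."""
    k = k_veh_per_mi_per_lane
    if k < 12:  return "free"
    if k <= 30: return "stable"
    if k <= 67: return "unstable"   # above the optimum density
    if k < 185: return "breakdown"  # persistent stop-and-go conditions
    return "jam"                    # flow has effectively stopped

print([flow_regime(k) for k in (8, 20, 45, 100, 200)])
# -> ['free', 'stable', 'unstable', 'breakdown', 'jam']
```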
However, calculations about congested networks are more complex and rely more on empirical studies and extrapolations from actual road counts. Because these are often urban or suburban in nature, other factors (such as road-user safety and environmental considerations) also influence the optimum conditions.
Traffic flow is generally constrained along a one-dimensional pathway (e.g. a travel lane). A time-space diagram shows graphically the flow of vehicles along a pathway over time. Time is displayed along the horizontal axis, and distance is shown along the vertical axis. Traffic flow in a time-space diagram is represented by the individual trajectory lines of individual vehicles. Vehicles following each other along a given travel lane will have parallel trajectories, and trajectories will cross when one vehicle passes another. Time-space diagrams are useful tools for displaying and analyzing the traffic flow characteristics of a given roadway segment over time (e.g. analyzing traffic flow congestion).
There are three main variables to visualize a traffic stream: speed (v), density (indicated k; the number of vehicles per unit of space), and flow [ clarification needed ] (indicated q; the number of vehicles per unit of time).
Speed is the distance covered per unit of time. One cannot track the speed of every vehicle; so, in practice, average speed is measured by sampling vehicles in a given area over a period of time. Two definitions of average speed are identified: "time mean speed" and "space mean speed".
v_t = \frac{1}{m} \sum_{i=1}^{m} v_i
where m represents the number of vehicles passing the fixed point and v i is the speed of the i th vehicle.
v_s = \left( \frac{1}{n} \sum_{i=1}^{n} \frac{1}{v_i} \right)^{-1}
where n represents the number of vehicles passing the roadway segment.
The "space mean speed" is thus the harmonic mean of the speeds. The time mean speed is never less than space mean speed: v t = v s + σ s 2 v s {\displaystyle v_{t}=v_{s}+{\frac {\sigma _{s}^{2}}{v_{s}}}} , where σ s 2 {\displaystyle \sigma _{s}^{2}} is the variance of the space mean speed [ 4 ]
In a time-space diagram, the instantaneous velocity, v = dx/dt, of a vehicle is equal to the slope along the vehicle's trajectory. The average velocity of a vehicle is equal to the slope of the line connecting the trajectory endpoints where a vehicle enters and leaves the roadway segment. The vertical separation (distance) between parallel trajectories is the vehicle spacing (s) between a leading and following vehicle. Similarly, the horizontal separation (time) represents the vehicle headway (h). A time-space diagram is useful for relating headway and spacing to traffic flow and density, respectively.
Density (k) is defined as the number of vehicles per unit length of the roadway. In traffic flow, the two most important densities are the critical density ( k c ) and jam density ( k j ). The maximum density achievable under free flow is k c , while k j is the maximum density achieved under congestion. In general, jam density is five times the critical density. The inverse of density is spacing (s), which is the center-to-center distance between two vehicles.
k = \frac{1}{s}
The density ( k ) within a length of roadway ( L ) at a given time ( t 1 ) is equal to the inverse of the average spacing of the n vehicles.
K(L, t_1) = \frac{n}{L} = \frac{1}{\bar{s}(t_1)}
In a time-space diagram, the density may be evaluated in the region A.
k(A) = \frac{n}{L} = \frac{n\,dt}{L\,dt} = \frac{tt}{|A|}
where tt is the total travel time in A .
Flow ( q ) is the number of vehicles passing a reference point per unit of time (vehicles per hour). The inverse of flow is headway ( h ), which is the time that elapses between the i th vehicle passing a reference point in space and the ( i + 1)th vehicle. In congestion, h remains constant. As a traffic jam forms, h approaches infinity.
q = kv
q = \frac{1}{h}
The flow ( q ) passing a fixed point ( x 1 ) during an interval ( T ) is equal to the inverse of the average headway of the m vehicles.
q(T, x_1) = \frac{m}{T} = \frac{1}{\bar{h}(x_1)}
In a time-space diagram, the flow may be evaluated in the region B .
q(B) = \frac{m}{T} = \frac{m\,dx}{T\,dx} = \frac{td}{|B|}
where td is the total distance traveled in B .
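These definitions chain together: spacing gives density, headway gives flow, and speed links the two through q = kv. A minimal numeric sketch with invented measurements:

```python
# Invented measurements on a single lane.
avg_spacing_mi = 1 / 40          # miles between vehicle centers
avg_headway_hr = 2.0 / 3600      # 2-second headways, expressed in hours

k = 1 / avg_spacing_mi           # density: 40 veh/mi
q = 1 / avg_headway_hr           # flow:   1800 veh/hr
v = q / k                        # space mean speed implied by q = k*v
print(k, q, v)                   # 40.0 veh/mi, 1800.0 veh/hr, 45.0 mi/hr
```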
Analysts approach the problem in three main ways, corresponding to the three main scales of observation in physics:
The engineering approach to analysis of highway traffic flow problems is primarily based on empirical analysis (i.e., observation and mathematical curve fitting). One major reference used by American planners is the Highway Capacity Manual , [ 5 ] published by the Transportation Research Board , which is part of the United States National Academy of Sciences . This recommends modelling traffic flows using the whole travel time across a link using a delay/flow function, including the effects of queuing. This technique is used in many US traffic models and in the SATURN model in Europe. [ 6 ]
In many parts of Europe, a hybrid empirical approach to traffic design is used, combining macro-, micro-, and mesoscopic features. Rather than simulating a steady state of flow for a journey, transient "demand peaks" of congestion are simulated. These are modeled by using small "time slices" across the network throughout the working day or weekend. Typically, the origins and destinations for trips are first estimated and a traffic model is generated before being calibrated by comparing the mathematical model with observed counts of actual traffic flows, classified by type of vehicle. "Matrix estimation" is then applied to the model to achieve a better match to observed link counts before any changes, and the revised model is used to generate a more realistic traffic forecast for any proposed scheme. The model would be run several times (including a current baseline, an "average day" forecast based on a range of economic parameters and supported by sensitivity analysis) in order to understand the implications of temporary blockages or incidents around the network. From the models, it is possible to total the time taken for all drivers of different types of vehicle on the network and thus deduce average fuel consumption and emissions.
Much of UK, Scandinavian, and Dutch authority practice is to use the modelling program CONTRAM for large schemes, which has been developed over several decades under the auspices of the UK's Transport Research Laboratory , and more recently with the support of the Swedish Road Administration . [ 7 ] By modelling forecasts of the road network for several decades into the future, the economic benefits of changes to the road network can be calculated, using estimates for value of time and other parameters. The output of these models can then be fed into a cost-benefit analysis program. [ 8 ]
A cumulative vehicle count curve, the N -curve, shows the cumulative number of vehicles that pass a certain location x by time t , measured from the passage of some reference vehicle. [ 9 ] This curve can be plotted if the arrival times are known for individual vehicles approaching a location x , and the departure times are also known as they leave location x . Obtaining these arrival and departure times could involve data collection: for example, one could set two point sensors at locations X 1 and X 2 , and count the number of vehicles that pass this segment while also recording the time each vehicle arrives at X 1 and departs from X 2 . The resulting plot is a pair of cumulative curves where the vertical axis ( N ) represents the cumulative number of vehicles that pass the two points: X 1 and X 2 , and the horizontal axis ( t ) represents the elapsed time from X 1 and X 2 .
If vehicles experience no delay as they travel from X 1 to X 2 , then the arrivals of vehicles at location X 1 are represented by curve N 1 and the arrivals of the vehicles at location X 2 are represented by N 2 in figure 8. More commonly, curve N 1 is known as the arrival curve of vehicles at location X 1 and curve N 2 is known as the arrival curve of vehicles at location X 2 . Using a one-lane signalized approach to an intersection as an example, where X 1 is the location of the stop bar at the approach and X 2 is an arbitrary line on the receiving lane just across the intersection: when the traffic signal is green, vehicles can travel through both points with no delay and the time it takes to travel that distance is equal to the free-flow travel time. Graphically, this is shown as the two separate curves in figure 8.
However, when the traffic signal is red, vehicles arrive at the stop bar ( X 1 ) and are delayed by the red light before crossing X 2 some time after the signal turns green. As a result, a queue builds at the stop bar as more vehicles are arriving at the intersection while the traffic signal is still red. Therefore, for as long as vehicles arriving at the intersection are still hindered by the queue, the curve N 2 no longer represents the vehicles’ arrival at location X 2 ; it now represents the vehicles’ virtual arrival at location X 2 , or in other words, it represents the vehicles' arrival at X 2 if they did not experience any delay. The vehicles' arrival at location X 2 , taking into account the delay from the traffic signal, is now represented by the curve N′ 2 in figure 9.
However, the concept of the virtual arrival curve is flawed. This curve does not correctly show the queue length resulting from the interruption in traffic (i.e. red signal). It assumes that all vehicles are still reaching the stop bar before being delayed by the red light. In other words, the virtual arrival curve portrays the stacking of vehicles vertically at the stop bar. When the traffic signal turns green, these vehicles are served in a first-in-first-out (FIFO) order. For a multi-lane approach, however, the service order is not necessarily FIFO. Nonetheless, the interpretation is still useful because of the concern with average total delay instead of total delays for individual vehicles. [ 10 ]
The traffic light example depicts N -curves as smooth functions. Theoretically, however, plotting N -curves from collected data should result in a step function (figure 10). Each step represents the arrival or departure of one vehicle at that point in time. [ 10 ] When the N -curve is drawn on a larger scale reflecting a period of time that covers several cycles, the steps for individual vehicles can be ignored, and the curve will then look like a smooth function (figure 8).
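For illustration, cumulative-count delay calculations of the kind described above can be sketched in a few lines of Python. The timestamps and free-flow travel time below are invented, and the pairing of arrivals with departures assumes the FIFO order discussed earlier.

```python
# Invented timestamps (seconds) for 5 vehicles at the two sensors.
arrivals_x1   = [0.0, 2.0, 4.0, 6.0, 8.0]    # when each vehicle passes X1
departures_x2 = [5.0, 7.0, 9.0, 12.0, 15.0]  # when it passes X2
free_flow_tt  = 5.0                           # X1 -> X2 travel time at free flow

# Under FIFO, the i-th arrival matches the i-th departure, so the
# horizontal gap between the two step curves is that vehicle's
# travel time; subtracting the free-flow time leaves its delay.
delays = [d - a - free_flow_tt
          for a, d in zip(arrivals_x1, departures_x2)]
total_delay = sum(delays)   # area between the shifted arrival curve and N'2
print(delays, total_delay)  # [0.0, 0.0, 0.0, 1.0, 2.0] 3.0
```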
The aim of traffic flow analysis is to create and implement a model which would enable vehicles to reach their destination in the shortest possible time using the maximum roadway capacity. This is a four-step process:
This cycle is repeated until the solution converges.
There are two main approaches to tackle this problem with the end objectives:
In short, a network is in system optimum (SO) when the total system cost is the minimum among all possible assignments.
System Optimum is based on the assumption that routes of all vehicles would be controlled by the system, and that rerouting would be based on maximum utilization of resources and minimum total system cost. (Cost can be interpreted as travel time.) Hence, in a System Optimum routing algorithm, all routes between a given OD pair have the same marginal cost.
In traditional transportation economics, System Optimum is determined by equilibrium of demand function and marginal cost function. In this approach, marginal cost is roughly depicted as increasing function in traffic congestion. In traffic flow approach, the marginal cost of the trip can be expressed as sum of the cost (delay time, w ) experienced by the driver and the externality ( e ) that a driver imposes on the rest of the users. [ 11 ]
Suppose there is a freeway (0) and an alternative route (1) onto which users can be diverted via an off-ramp. The operator knows the total arrival rate ( A ( t )), the capacity of the freeway ( μ 0 ), and the capacity of the alternative route ( μ 1 ). From time t 0 , when the freeway becomes congested, some users start moving to the alternative route. However, at time t 1 the alternative route also reaches capacity. The operator then decides the number of vehicles ( N ) that use the alternative route. The optimal number of vehicles ( N ) can be obtained by the calculus of variations, by making the marginal cost of each route equal. Thus, the optimal condition is T 0 = T 1 + ∆ 1 . In this graph, we can see that the queue on the alternative route should clear ∆ 1 time units before it clears from the freeway. This solution does not define how we should allocate vehicles arriving between t 1 and T 1 ; we can only conclude that the optimal solution is not unique. If the operator wants the freeway not to be congested, the operator can impose a congestion toll, e 0 − e 1 , which is the difference between the externality of the freeway and that of the alternative route. In this situation, the freeway will maintain free-flow speed, but the alternative route will be extremely congested.
In brief, a network is in user equilibrium (UE) when every driver chooses the route with the lowest cost between origin and destination, regardless of whether total system cost is minimized.
The user optimum equilibrium assumes that all users choose their own route towards their destination based on the travel time that will be consumed in the different route options, and each user chooses the route requiring the least travel time. The user optimum model is often used in simulating the impact of highway bottlenecks on traffic assignment. When congestion occurs on the highway, it extends the delay experienced in travelling through the highway and creates a longer travel time. Under the user optimum assumption, users would choose to wait until the travel time using a certain freeway is equal to the travel time using city streets, and hence equilibrium is reached. This equilibrium is called User Equilibrium, Wardrop Equilibrium or Nash Equilibrium.
The core principle of User Equilibrium is that all used routes between a given OD pair have the same travel time. An alternative route begins to be used when the actual travel time in the system reaches the free-flow travel time on that route.
For a highway user optimum model considering one alternative route, a typical process of traffic assignment is shown in figure 15. When the traffic demand stays below the highway capacity, the delay time on the highway stays zero. When the traffic demand exceeds the capacity, a queue of vehicles appears on the highway and the delay time increases. Some users will turn to the city streets when the delay time reaches the difference between the free-flow travel time on the highway and the free-flow travel time on city streets. This indicates that the users staying on the highway spend as much travel time as the ones who turn to the city streets. At this stage, the travel time on both the highway and the alternative route is the same. This situation ends when the demand falls below the road capacity, that is, when the travel time on the highway begins to decrease and all users stay on the highway. The total of areas 1 and 3 represents the benefits of providing an alternative route. The total of areas 4 and 2 shows the total delay cost in the system, in which area 4 is the total delay occurring on the highway and area 2 is the extra delay from shifting traffic to city streets.
The navigation function in Google Maps can be regarded as a typical industrial application of dynamic traffic assignment based on User Equilibrium, since it provides every user the routing option with the lowest cost (travel time).
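As a toy illustration of user equilibrium, consider two routes whose travel times grow linearly with the flow assigned to them; with both routes in use, equilibrium splits the demand so that the two travel times are equal. The cost coefficients and demand in this Python sketch are invented.

```python
# Travel time functions (minutes) for two routes, flows in veh/hr.
# Invented linear congestion effects, for illustration only.
def t_highway(x): return 10 + 0.01 * x
def t_streets(x): return 15 + 0.02 * x

demand = 1000  # total veh/hr between the OD pair

# User equilibrium with both routes used: t_highway(x) = t_streets(d - x)
# 10 + 0.01 x = 15 + 0.02 (1000 - x)  =>  0.03 x = 25  =>  x = 833.3
x_highway = (15 + 0.02 * demand - 10) / 0.03
x_streets = demand - x_highway
print(x_highway, x_streets)                        # ~833.3 and ~166.7 veh/hr
print(t_highway(x_highway), t_streets(x_streets))  # both ~18.3 minutes
```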
Both User Optimum and System Optimum can be subdivided into two categories on the basis of the approach of time delay taken for their solution:
Predictive time delay assumes that the user of the system knows exactly how long the delay ahead is going to be. Predictive delay knows when a certain congestion level will be reached and when the delay of one system will exceed that of the other, so the decision to reroute can be made in time. In the vehicle counts-time diagram, predictive delay at time t is the horizontal line segment on the right side of time t, between the arrival and departure curves, shown in figure 16; the corresponding y-coordinate is the number n of the vehicle that leaves the system at time t.
Reactive time delay occurs when the user has no knowledge of the traffic conditions ahead. The user waits to experience the point where the delay is observed, and the decision to reroute is a reaction to that experience at the moment. Predictive delay gives significantly better results than the reactive delay method. In the vehicle counts-time diagram, reactive delay at time t is the horizontal line segment on the left side of time t, between the arrival and departure curves, shown in figure 16; the corresponding y-coordinate is the number n of the vehicle that enters the system at time t.
Variable speed limits (VSL) are an emerging approach to eliminating shock waves and increasing safety for vehicles. The concept is based on the fact that the risk of an accident on a roadway increases with the speed differential between upstream and downstream vehicles. The two types of crash risk which can be reduced by VSL implementation are rear-end crashes and lane-change crashes. Variable speed limits seek to homogenize speed, leading to a more constant flow. [ 12 ] Different approaches have been implemented by researchers to build a suitable VSL algorithm.
Variable speed limits are usually enacted when sensors along the roadway detect that congestion or weather events have exceeded thresholds. The roadway speed limit is then reduced in 5-mph increments through the use of signs above the roadway (Dynamic Message Signs) controlled by the Department of Transportation. The goal of this process is both to increase safety through accident reduction and to avoid or postpone the onset of congestion on the roadway. The ideal resulting traffic flow is slower overall, but less stop-and-go, resulting in fewer instances of rear-end and lane-change crashes. The use of VSLs also regularly employs shoulder lanes permitted for travel only under congested states, which this process aims to combat. The need for a variable speed limit is shown by the flow-density diagram to the right.
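The threshold-and-increment behavior described above can be sketched as a trivial controller. In this Python sketch, the occupancy thresholds, the 5-mph step, and the speed bounds are illustrative assumptions, not any agency's deployed algorithm.

```python
def next_speed_limit(current_mph, occupancy_pct,
                     congest_thresh=25.0, clear_thresh=15.0,
                     step_mph=5, floor_mph=40, ceiling_mph=65):
    """Raise or lower the posted limit one 5-mph step per interval,
    based on a detector occupancy threshold (all values invented)."""
    if occupancy_pct > congest_thresh:    # congestion detected
        return max(current_mph - step_mph, floor_mph)
    if occupancy_pct < clear_thresh:      # flow has recovered
        return min(current_mph + step_mph, ceiling_mph)
    return current_mph                    # hold steady between thresholds

limit = 65
for occ in [10, 20, 30, 35, 30, 18, 10]:  # invented detector readings
    limit = next_speed_limit(limit, occ)
    print(occ, limit)  # limit steps down to 50, then recovers in 5-mph steps
```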
In this figure ("Flow-Speed Diagram for a Typical Roadway"), the point of the curve represents optimal traffic movement in both flow and speed. However, beyond this point the speed of travel quickly reaches a threshold and starts to decline rapidly. In order to reduce the potential risk of this rapid speed decline, variable speed limits reduce the speed at a more gradual rate (5-mph increments), allowing drivers to have more time to prepare and acclimate to the slowdown due to congestion/weather. The development of a uniform travel speed reduces the probability of erratic driver behavior and therefore crashes.
Through historical data obtained at VSL sites, it has been determined that implementation of this practice reduces accident numbers by 20-30%. [ 12 ]
In addition to safety and efficiency concerns, VSLs can also garner environmental benefits such as decreased emissions, noise, and fuel consumption. This is because vehicles are more fuel-efficient at a constant rate of travel, rather than in a state of constant acceleration and deceleration, as is usually found in congested conditions. [ 13 ]
A major consideration in road capacity relates to the design of junctions. By allowing long "weaving sections" on gently curving roads at graded intersections, vehicles can often move across lanes without causing significant interference to the flow. However, this is expensive and takes up a large amount of land, so other patterns are often used, particularly in urban or very rural areas. Most large models use crude simulations for intersections, but computer simulations are available to model specific sets of traffic lights, roundabouts, and other scenarios where flow is interrupted or shared with other types of road users or pedestrians. A well-designed junction can enable significantly more traffic flow at a range of traffic densities during the day. By matching such a model to an "Intelligent Transport System", traffic can be sent in uninterrupted "packets" of vehicles at predetermined speeds through a series of phased traffic lights.
The UK's TRL has developed junction modelling programs for small-scale local schemes that can take account of detailed geometry and sight lines; ARCADY for roundabouts, PICADY for priority intersections, and OSCADY and TRANSYT for signals. Many other junction analysis software packages [ 14 ] exist such as Sidra and LinSig and Synchro .
The kinematic wave model was first applied to traffic flow by Lighthill and Whitham in 1955. Their two-part paper first developed the theory of kinematic waves using the motion of water as an example. In the second half, they extended the theory to traffic on “crowded arterial roads.” This paper was primarily concerned with developing the idea of traffic “humps” (increases in flow) and their effects on speed, especially through bottlenecks. [ 15 ]
The authors began by discussing previous approaches to traffic flow theory. They note that at the time there had been some experimental work, but that “theoretical approaches to the subject [were] in their infancy.” One researcher in particular, John Glen Wardrop, was primarily concerned with statistical methods of examination, such as space mean speed, time mean speed, and “the effect of increase of flow on overtaking” and the resulting decrease in speed it would cause. Other previous research had focused on two separate models: one related traffic speed to traffic flow and another related speed to the headway between vehicles. [ 15 ]
The goal of Lighthill and Whitham, on the other hand, was to propose a new method of study “suggested by theories of the flow about supersonic projectiles and of flood movement in rivers.” The resulting model would capture both of the aforementioned relationships, speed-flow and speed-headway, into a single curve, which would “[sum] up all the properties of a stretch of road which are relevant to its ability to handle the flow of congested traffic.” The model they presented related traffic flow to concentration (now typically known as density). They wrote, “The fundamental hypothesis of the theory is that at any point of the road the flow q (vehicles per hour) is a function of the concentration k (vehicles per mile).” According to this model, traffic flow resembled the flow of water in that “Slight changes in flow are propagated back through the stream of vehicles along ‘kinematic waves,’ whose velocity relative to the road is the slope of the graph of flow against concentration.” The authors included an example of such a graph; this flow-versus-concentration (density) plot is still used today (see figure 3 above). [ 15 ]
The authors used this flow-concentration model to illustrate the concept of shock waves, which slow down vehicles which enter them, and the conditions that surround them. They also discussed bottlenecks and intersections, relating both to their new model. For each of these topics, flow-concentration and time-space diagrams were included. Finally, the authors noted that no agreed-upon definition for capacity existed, and argued that it should be defined as the “maximum flow of which the road is capable.” Lighthill and Whitham also recognized that their model had a significant limitation: it was only appropriate for use on long, crowded roadways, as the “continuous flow” approach only works with a large number of vehicles. [ 15 ]
The kinematic wave model of traffic flow theory is the simplest dynamic traffic flow model that reproduces the propagation of traffic waves . It is made up of three components: the fundamental diagram , the conservation equation, and initial conditions. The law of conservation is the fundamental law governing the kinematic wave model:
\frac{\partial k}{\partial t} + \frac{\partial q}{\partial x} = 0,
The fundamental diagram of the kinematic wave model relates traffic flow with density, as seen in figure 3 above. It can be written as:
q = F(k)
Finally, initial conditions must be defined to solve a problem using the model. A boundary is defined to be k(t, x), representing density as a function of time and position. These boundaries typically take two different forms, resulting in initial value problems (IVPs) and boundary value problems (BVPs). Initial value problems give the traffic density at time t = 0, such that k(0, x) = g(x), where g(x) is the given density function. Boundary value problems give some function g(t) that represents the density at the x = 0 position, such that k(t, 0) = g(t).
The model has many uses in traffic flow. One of the primary uses is in modeling traffic bottlenecks, as described in the following section.
Traffic bottlenecks are disruptions of traffic on a roadway caused by road design, traffic lights, or accidents. There are two general types of bottleneck: stationary and moving. Stationary bottlenecks arise from a fixed disturbance, such as the narrowing of a roadway or an accident. Moving bottlenecks, on the other hand, are vehicles, or instances of vehicle behaviour, that disrupt the vehicles upstream of them. Generally, moving bottlenecks are caused by heavy trucks, which are slow-moving vehicles with poorer acceleration and which may also make lane changes.
Bottlenecks are important considerations because they affect the flow of traffic and the average speeds of vehicles. The main consequence of a bottleneck is an immediate reduction in the capacity of the roadway. The Federal Highway Administration has stated that 40% of all congestion comes from bottlenecks. [ citation needed ]
The general cause of stationary bottlenecks is a lane drop, which occurs when a multilane roadway loses one or more of its lanes. This causes vehicular traffic in the ending lanes to merge onto the other lanes.
As explained above, moving bottlenecks are caused by slow-moving vehicles that disrupt the traffic around them. Moving bottlenecks can be active or inactive bottlenecks. If the reduced capacity ($q_u$) caused by a moving bottleneck is greater than the actual capacity ($\mu$) downstream of the vehicle, then this bottleneck is said to be an active bottleneck.
The generally accepted classical fundamentals and methodologies of traffic and transportation theory are as follows:
Three-phase traffic theory is an alternative theory of traffic flow created by Boris Kerner in the late 1990s. [ 24 ] [ 25 ] [ 26 ] Three-phase theory states that at any instant in time there is a range of highway capacities of free flow at a bottleneck, lying between some maximum and minimum capacity. This range of highway capacities of free flow at the bottleneck fundamentally contradicts classical traffic theories, as well as methods for traffic management and traffic control, which assume the existence at any time instant of a particular deterministic or stochastic highway capacity of free flow at the bottleneck. [ citation needed ]
In the condition where traffic flows leave two branch roadways and merge into a single flow on a single roadway, determining the flows that pass through the merging process and the state of each branch of roadway becomes an important task for traffic engineers. The Newell-Daganzo merge model is a good approach to these problems. This simple model combines Gordon Newell's description of the merging process [ 27 ] with Daganzo's cell transmission model . [ 28 ] To apply the model to determine the flows exiting the two branches of roadway and the state of each branch, one needs to know the capacities of the two input branches, the exiting capacity, the demands for each branch, and the number of lanes of the single roadway. The merge ratio is calculated in order to determine the proportion of the two input flows when both branches of roadway are operating in congested conditions.
As can be seen in a simplified model of the merging process, [ 29 ] the exiting capacity of the system is defined as $\mu$, the capacities of the two input branches of roadway are defined as $\mu_1$ and $\mu_2$, and the demands for each branch are defined as $q_1^D$ and $q_2^D$. The outputs of the model, $q_1$ and $q_2$, are the flows that pass through the merging process. The model is based on the assumption that the sum of the capacities of the two input branches of roadway does not exceed the exiting capacity of the system, $\mu_1 + \mu_2 \leq \mu$.
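A minimal sketch of this decision logic is given below, assuming demands already limited to their branch capacities and a fixed merge ratio (in practice often derived from the number of lanes); all numbers in the usage line are invented for illustration.

```python
def newell_daganzo_merge(q1d, q2d, mu, ratio=0.5):
    """Flows q1, q2 through a merge with exit capacity mu (illustrative sketch).

    q1d, q2d: demands of the two input branches (assumed within branch capacity).
    ratio: fraction of the exit capacity claimed by branch 1 when both branches
           are congested (the merge ratio).
    """
    if q1d + q2d <= mu:                  # exit can serve both demands: free flow
        return q1d, q2d
    q1_star, q2_star = ratio * mu, (1 - ratio) * mu
    if q1d <= q1_star:                   # only branch 2 is congested
        return q1d, mu - q1d
    if q2d <= q2_star:                   # only branch 1 is congested
        return mu - q2d, q2d
    return q1_star, q2_star              # both congested: split by the merge ratio

print(newell_daganzo_merge(q1d=1800, q2d=1200, mu=2400, ratio=0.5))  # -> (1200.0, 1200.0)
```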
Car-following models describe how one vehicle follows another vehicle in an uninterrupted traffic flow. They are a type of microscopic traffic flow model .
| https://en.wikipedia.org/wiki/Traffic_flow |
Traffic indication map (TIM) is a structure used in 802.11 wireless network management frames.
The traffic indication map information element is covered under section 7.3.2.6 of the 802.11-1999 standard. [ 1 ]
The IEEE 802.11 standards use a bitmap to indicate to any sleeping listening stations that the access point (AP) has buffered data waiting for them. Because each station should listen to at least one beacon during its listen interval , the AP periodically sends this bitmap in its beacons as an information element. The bit mask is called the traffic indication map and consists of 2008 bits; each bit represents the association ID (AID) of a station.
However, in most situations an AP only has data for a few stations, so only the portion of the bitmap representing those stations needs to be transmitted.
Because the bitmap is never transmitted in its entirety, it is referred to as a virtual bitmap , and the portion that is actually transmitted is referred to as a partial virtual bitmap .
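The sketch below illustrates the trimming idea: set one bit per association ID in the 2008-bit virtual bitmap, then keep only the octets between the first and last non-zero ones, recording an even octet offset. The exact field packing of the real information element is simplified away.

```python
def partial_virtual_bitmap(buffered_aids):
    """Build the virtual bitmap and trim it for transmission (simplified sketch)."""
    bitmap = bytearray(251)                  # 2008 bits = 251 octets
    for aid in buffered_aids:
        bitmap[aid // 8] |= 1 << (aid % 8)   # one bit per association ID

    nonzero = [i for i, octet in enumerate(bitmap) if octet]
    if not nonzero:
        return 0, bytes(1)                   # no buffered traffic: single zero octet
    n1 = nonzero[0] & ~1                     # first non-zero octet, rounded down to even
    n2 = nonzero[-1]                         # last non-zero octet
    return n1 // 2, bytes(bitmap[n1:n2 + 1]) # (bitmap offset, partial virtual bitmap)

offset, pvb = partial_virtual_bitmap({5, 21, 22})
print(offset, pvb.hex())                     # offset 0; only octets 0..2 are sent
```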
The structure of the TIM information element is as follows: Element ID (1 octet), Length (1 octet), DTIM Count (1 octet), DTIM Period (1 octet), Bitmap Control (1 octet), and Partial Virtual Bitmap (1 to 251 octets). [ 2 ] [ 3 ]
A delivery traffic indication message ( DTIM ) is a kind of TIM which informs the clients about the presence of buffered multicast or broadcast data on the access point . It is generated within the periodic beacon at a frequency specified by the DTIM interval . Beacons are packets sent by an access point to synchronize a wireless network. Normal TIMs, present in every beacon, signal the presence of buffered unicast data. After a DTIM, the access point will send the buffered multicast and broadcast data on the channel, following the normal channel access rules ( CSMA/CA ). This helps to minimize collisions and, in effect, increases throughput. In cases where there is not much interference, or where the number of clients is limited, the DTIM interval has little or no significance.
According to the 802.11 standards, a delivery traffic indication message ( DTIM ) period value is a number that determines how often a beacon frame includes a DTIM, and this number is included in each beacon frame. A DTIM is included in beacon frames, according to the DTIM period, to indicate to the client devices whether the access point has buffered broadcast or multicast data waiting for them. Following a beacon frame that includes a DTIM, the access point will release the buffered broadcast and multicast data, if any exists.
Since beacon frames are sent using the mandatory 802.11 algorithm for carrier-sense multiple access with collision avoidance (CSMA/CA), the access point must wait if a client device is sending a frame when the beacon is to be sent. As a result, the actual time between beacons may be longer than the beacon interval. Client devices that awaken from power-save mode may find that they have to wait longer than expected to receive the next beacon frame. Client devices, however, compensate for this inaccuracy by utilizing the time stamp found within the beacon frame.
The 802.11 standards define a power-save mode for client devices. In power-save mode, a client device may choose to sleep for one or more beacon intervals, waking for beacon frames that include DTIMs. When the DTIM period is 2, a client device in power-save mode will awaken to receive every other beacon frame. Upon entering power-save mode, a client device will transmit a notification to the access point, so that the access point will know how to handle unicast traffic destined for the client device. The client device will begin to sleep according to the DTIM period.
The higher the DTIM period, the longer a client device may sleep and therefore the more power a particular client device may save.
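The arithmetic behind this trade-off is straightforward. The sketch below assumes a typical, but not mandated, beacon interval of 102.4 ms and shows how the gap between mandatory wake-ups grows with the DTIM period.

```python
# Illustrative numbers only: 102.4 ms (100 time units) is a common beacon interval.
beacon_interval_ms = 102.4

for dtim_period in (1, 2, 4, 8):
    wake_gap_ms = dtim_period * beacon_interval_ms   # time between DTIM beacons
    wakes_per_second = 1000.0 / wake_gap_ms
    print(f"DTIM period {dtim_period}: wake every {wake_gap_ms:.1f} ms "
          f"({wakes_per_second:.1f} wake-ups/s)")
```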
Client devices in wireless networks may have conflicting requirements for power consumption and communication throughput when in power-save mode. For example, laptop computers may require relatively high communication throughput and may have low sensitivity to power consumption. Therefore, a relatively low DTIM period, for example 1, may be suitable for these devices. Pocket devices, however, may require relatively low communication throughput and may be operated by batteries of relatively low capacity. Therefore, a higher DTIM period, for example 8, may be suitable for pocket devices. However, some of these devices have medium to high communication throughput while still having small batteries, and so would benefit from a medium DTIM period, such as 4.
In the present standards, an access point is able to store only a single DTIM period. Consequently, different client devices in power-save mode will all wake up for the same beacon frames according to the DTIM period. A network manager may need to balance the conflicting requirements for power consumption and communication throughput when in power-save mode of client devices in different wireless networks when configuring the DTIM period of an access point. In the future, an access point that can serve multiple service sets (multiple SSIDs) may have a separate DTIM period for each service set. A network manager may consider the requirements of power consumption and communication throughput of client devices in a particular wireless network when determining which DTIM period to configure for which service set. A higher DTIM period may increase the potential savings in power consumption but reduce the communication throughput, and vice versa. | https://en.wikipedia.org/wiki/Traffic_indication_map |
Traffic mix is a traffic model in telecommunication engineering and teletraffic theory .
A traffic mix is a model of user behaviour. In telecommunications, user behaviour activities may be described by a number of systems, ranging from simple to complex. For example, for plain old telephone service (POTS), a sequence of connection requests to an exchange can be modelled by fitting negative exponential distributions to the average time between requests and the average duration of a connection. This in turn can be used to work out the utilisation of the line for the purposes of network planning and dimensioning.
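As a sketch of the POTS example, the code below assumes an average of one connection request every 10 minutes and a mean holding time of 3 minutes (both invented figures), computes the offered load analytically as arrival rate times mean holding time, and checks the figure by simulating exponentially distributed inter-arrival and holding times.

```python
import random

# Assumed example parameters for a single subscriber line.
mean_interarrival_s = 600.0     # one call attempt every 10 minutes on average
mean_holding_s = 180.0          # 3-minute average call duration

# Offered load in erlangs = arrival rate x mean holding time.
offered_load_erlangs = mean_holding_s / mean_interarrival_s
print(f"analytic utilisation: {offered_load_erlangs:.2f} erlangs")

# Monte Carlo check using negative exponential distributions for both quantities.
random.seed(1)
busy, horizon = 0.0, 0.0
for _ in range(100_000):
    horizon += random.expovariate(1.0 / mean_interarrival_s)
    busy += random.expovariate(1.0 / mean_holding_s)
print(f"simulated utilisation: {busy / horizon:.2f} erlangs")
```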
Traffic mix has two goals:
Both these functions are extremely important to network operators. If insufficient capacity is deployed at a node (for example, if a backbone router has 1 gigabit/sec of switching capacity and more than this is offered to it), then the risk of equipment failure increases, and customers experience poor service. However, if the network is overprovisioned, the cost in equipment can be high. Most providers therefore seek to maximise the effect of their spending by maintaining an unused overhead capacity for growth, and expanding key nodes to relieve problem areas. Identification of these areas is accomplished by network dimensioning.
| https://en.wikipedia.org/wiki/Traffic_mix |
In communications , traffic policing is the process of monitoring network traffic for compliance with a traffic contract and taking steps to enforce that contract. Traffic sources which are aware of a traffic contract may apply traffic shaping to ensure their output stays within the contract and is thus not discarded. Traffic exceeding a traffic contract may be discarded immediately, marked as non-compliant, or left as-is, depending on administrative policy and the characteristics of the excess traffic.
The recipient of traffic that has been policed will observe packet loss distributed throughout periods when incoming traffic exceeded the contract. If the source does not limit its sending rate (for example, through a feedback mechanism ), this will continue, and may appear to the recipient as if link errors or some other disruption is causing random packet loss. The received traffic, which has experienced policing en route, will typically comply with the contract, although jitter may be introduced by elements in the network downstream of the policer.
With reliable protocols, such as TCP as opposed to UDP , the dropped packets will not be acknowledged by the receiver, and therefore will be resent by the emitter, thus generating more traffic.
Sources with feedback-based congestion control mechanisms (for example TCP ) typically adapt rapidly to static policing, converging on a rate just below the policed sustained rate. [ citation needed ]
Co-operative policing mechanisms, such as packet-based discard [ 1 ] facilitate more rapid convergence, higher stability and more efficient resource sharing. As a result, it may be hard for endpoints to distinguish TCP traffic that has been merely policed from TCP traffic that has been shaped .
Where cell -level dropping is enforced (as opposed to that achieved through packet-based policing) the impact is particularly severe on longer packets. Since cells are typically much shorter than the maximum packet size, conventional policers discard cells without respect to packet boundaries, and hence the total amount of traffic dropped will typically be distributed throughout a number of packets. Almost all known packet reassembly mechanisms will respond to a missing cell by dropping the packet entirely, and consequently a very large number of packet losses can result from moderately exceeding the policed contract.
RFC 2475 describes traffic policing elements like a meter and a dropper . [ 2 ] They may also optionally include a marker . The meter measures the traffic and determines whether or not it exceeds the contract (for example by GCRA ). Where it exceeds the contract, some policy determines if any given PDU is dropped, or if marking is implemented, if and how it is to be marked. Marking can comprise setting a congestion flag (such as ECN flag of TCP or CLP bit of ATM ) or setting a traffic aggregate indication (such as Differentiated Services Code Point of IP ).
In simple implementations, traffic is classified into two categories, or "colors" : compliant (green) and in excess (red). RFC 2697 proposes a more precise classification, with three "colors". [ 3 ] In this document, the contract is described through three parameters: Committed Information Rate (CIR), Committed Burst Size (CBS), and Excess Burst Size (EBS). A packet is "green" if it doesn't exceed the CBS, "yellow" if it does exceed the CBS, but not the EBS, and "red" otherwise.
The "single-rate three-color marker" described by RFC 2697 allows for temporary bursts. The bursts are allowed when the line was under-used before they appeared. A more predictable algorithm is described in RFC 2698, which proposes a "double-rate three-color marker". [ 4 ] RFC 2698 defines a new parameter, the Peak Information Rate (PIR). RFC 2859 describes the "Time Sliding Window Three Colour Marker" which meters a traffic stream and marks packets based on measured throughput relative to two specified rates: Committed Target Rate (CTR) and Peak Target Rate (PTR). [ 5 ]
On Cisco equipment, both traffic policing and shaping are implemented through the token bucket algorithm. [ 6 ]
Traffic policing in ATM networks is known as Usage/Network Parameter Control . [ 7 ] The network can also discard non-conformant traffic in the network (using Priority Control ). The reference for both traffic policing and traffic shaping in ATM (given by the ATM Forum and the ITU-T ) is the Generic Cell Rate Algorithm ( GCRA ), which is described as a version of the leaky bucket algorithm. [ 8 ] [ 9 ]
However, comparison of the leaky bucket and token bucket algorithms shows that they are simply mirror images of one another, one adding bucket content where the other takes it away and taking away bucket content where the other adds it. Hence, given equivalent parameters, implementations of both algorithms will see exactly the same traffic as conforming and non-conforming.
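To make this concrete, here is a sketch of the virtual-scheduling form of the GCRA: a cell arriving at time t conforms if it arrives no earlier than the theoretical arrival time minus the tolerance, and each conforming cell pushes the theoretical arrival time forward by the increment (the reciprocal of the contracted rate). The parameter values in the example are arbitrary.

```python
def make_gcra(increment, limit):
    """Virtual-scheduling GCRA sketch.

    increment (T): nominal inter-arrival time per cell, i.e. 1 / contracted rate.
    limit (tau): tolerance. Returns a function that, given an arrival time,
    reports whether the cell conforms and updates the internal state.
    """
    tat = 0.0                                 # theoretical arrival time

    def arrives(t):
        nonlocal tat
        if t < tat - limit:                   # too early: non-conforming
            return False
        tat = max(t, tat) + increment         # schedule the next expected cell
        return True

    return arrives

gcra = make_gcra(increment=1.0, limit=0.5)
print([gcra(t) for t in (0.0, 0.6, 1.2, 4.0)])  # [True, True, False, True]
```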
Traffic policing requires maintenance of numerical statistics and measures for each policed traffic flow, but it does not require implementation or management of significant volumes of packet buffer. Consequently, it is significantly less complex to implement than traffic shaping .
Connection-oriented networks (for example ATM systems) can perform Connection Admission Control (CAC) based on traffic contracts. In the context of Voice over IP (VoIP), this is also known as Call Admission Control (CAC). [ 10 ]
An application that wishes to use a connection-oriented network to transport traffic must first request a connection (through signalling, for example Q.2931 ), which involves informing the network about the characteristics of the traffic and the quality of service (QoS) required by the application. [ 11 ] This information is matched against a traffic contract. If the connection request is accepted, the application is permitted to use the network to transport traffic.
This function protects the network resources from malicious connections and enforces the compliance of every connection to its negotiated traffic contract.
The difference between CAC and traffic policing is that CAC is an a priori verification (performed before the transfer occurs), while traffic policing is an a posteriori verification (performed during the transfer).
Traffic shaping is a bandwidth management technique used on computer networks which delays some or all datagrams to bring them into compliance with a desired traffic profile . [ 1 ] [ 2 ] Traffic shaping is used to optimize or guarantee performance, improve latency , or increase usable bandwidth for some kinds of packets by delaying other kinds. It is often confused with traffic policing , the distinct but related practice of packet dropping and packet marking . [ 3 ]
The most common type of traffic shaping is application-based traffic shaping. [ 4 ] [ failed verification ] In application-based traffic shaping, fingerprinting tools are first used to identify applications of interest, which are then subject to shaping policies. Some controversial cases of application-based traffic shaping include bandwidth throttling of peer-to-peer file sharing traffic. Many application protocols use encryption to circumvent application-based traffic shaping.
Another type of traffic shaping is route-based traffic shaping. Route-based traffic shaping is conducted based on previous- hop or next-hop information. [ 5 ]
If a link becomes utilized to the point where there is a significant level of congestion , latency can rise substantially. Traffic shaping can be used to prevent this from occurring and keep latency in check. Traffic shaping provides a means to control the volume of traffic being sent into a network in a specified period ( bandwidth throttling ), or the maximum rate at which the traffic is sent ( rate limiting ), or more complex criteria such as generic cell rate algorithm . This control can be accomplished in many ways and for many reasons; however traffic shaping is always achieved by delaying packets.
Traffic shaping is commonly applied at the network edges to control traffic entering the network, but can also be applied by the traffic source (for example, computer or network card [ 6 ] ) or by an element in the network.
Traffic shaping is sometimes applied by traffic sources to ensure the traffic they send complies with a contract which may be enforced in the network by traffic policing .
Shaping is widely used for teletraffic engineering , and appears in domestic ISPs' networks as one of several Internet Traffic Management Practices (ITMPs). [ 7 ] Some ISPs may use traffic shaping to limit resources consumed by peer-to-peer file-sharing networks, such as BitTorrent . [ 8 ]
Data centers use traffic shaping to maintain service level agreements for the variety of applications and the many tenants hosted as they all share the same physical network. [ 9 ]
Audio Video Bridging includes an integral traffic-shaping provision defined in IEEE 802.1Qav.
Nodes in an IP network which buffer packets before sending on a link which is at capacity produce an unintended traffic shaping effect. This can appear across, for example, a low bandwidth link, a particularly expensive WAN link or satellite hop.
A traffic shaper works by delaying metered traffic such that each packet complies with the relevant traffic contract . Metering may be implemented with, for example, the leaky bucket or token bucket algorithms (the former typically in ATM and the latter in IP networks ). Metered packets or cells are then stored in a FIFO buffer , one for each separately shaped class, until they can be transmitted in compliance with the associated traffic contract. Transmission may occur immediately (if the traffic arriving at the shaper is already compliant), after some delay (waiting in the buffer until its scheduled release time) or never (in case of packet loss ).
All traffic shaper implementations have a finite buffer, and must cope with the case where the buffer is full. A simple and common approach is to drop traffic arriving while the buffer is full, a strategy known as tail drop, which results in traffic policing as well as shaping. A more sophisticated implementation could apply a dropping algorithm such as random early detection .
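The following sketch illustrates the metering-and-delay behaviour with a token bucket meter: each packet is released as soon as FIFO order and the token count both allow. The rate, burst size, and arrival pattern are invented, each packet is assumed to fit within the bucket depth, and buffer overflow (tail drop) is omitted for brevity.

```python
def shape(arrivals, rate, burst):
    """Token-bucket traffic shaper sketch: compute each packet's release time.

    arrivals: list of (arrival_time_s, size_bytes), sorted by arrival time.
    rate: sustained rate in bytes/s; burst: bucket depth in bytes.
    Assumes each packet fits within the bucket depth; overflow handling omitted.
    """
    tokens, last = burst, 0.0
    releases = []
    for t, size in arrivals:
        now = max(t, last)                        # FIFO: wait for earlier releases
        tokens = min(burst, tokens + (now - last) * rate)
        wait = max(0.0, (size - tokens) / rate)   # delay until enough tokens accrue
        release = now + wait
        tokens = min(burst, tokens + wait * rate) - size
        last = release
        releases.append(release)
    return releases

# First packet passes immediately thanks to the burst allowance; later packets
# are spaced at size / rate = 1 s each: [0.0, 0.666..., 1.666...].
print(shape([(0.0, 1500), (0.0, 1500), (0.0, 1500)], rate=1500.0, burst=2000.0))
```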
Simple traffic shaping schemes shape all traffic uniformly. More sophisticated shapers first classify traffic. Traffic classification categorises traffic (for example, based on port number or protocol ). Different classes can then be shaped separately to achieve a desired effect.
A self-limiting source produces traffic which never exceeds some upper bound, for example media sources which cannot transmit faster than their encoded rate allows. [ 10 ] Self-limiting sources shape the traffic they generate to a greater or lesser degree. Congestion control mechanisms can also produce a traffic-shaping effect of sorts; for example, TCP's window mechanism implements a variable rate constraint related to the bandwidth-delay product .
TCP Nice, a modified version of TCP developed by researchers at the University of Texas at Austin, allows applications to request that certain TCP connections be managed by the operating system as near zero-cost background transfers, or nice flows. Such flows interfere only minimally with foreground (non-nice) flows, while reaping a large fraction of spare network bandwidth. [ 11 ]
Traffic shaping is a specific technique and one of several which combined constitute bandwidth management . [ 12 ]
Traffic shaping is of interest especially to internet service providers (ISPs). Their high-cost, high-traffic networks are their major assets, and as such, are the focus of their attentions. They sometimes use traffic shaping to optimize the use of their network, sometimes by shaping traffic according to their assessment of importance and thus discouraging use of certain applications. [ 13 ]
Most companies with remote offices are now connected via a wide area network (WAN). Applications tend to be centrally hosted at the head office, and remote offices are expected to pull data from central databases and server farms . As applications grow hungrier for bandwidth, and with prices of dedicated circuits remaining relatively high in most areas of the world, companies feel the need to properly manage their circuits, rather than increase the size of their WAN circuits, to make sure business-oriented traffic gets priority over other traffic. Traffic shaping is thus a good means for companies to avoid purchasing additional bandwidth while properly managing these resources.
Alternatives to traffic shaping in this regard are application acceleration and WAN optimization and compression, which are fundamentally different from traffic shaping. Traffic shaping defines bandwidth rules, whereas application acceleration uses multiple techniques such as a TCP performance-enhancing proxy . WAN optimization, on the other hand, compresses data streams or sends only differences in file updates. The latter is quite effective for chatty protocols like CIFS .
There are several methods to detect and measure traffic shaping. Tools have been developed to assist with detection. [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Traffic_shaping |
Traffic signs or road signs are signs erected at the side of or above roads to give instructions or provide information to road users. The earliest signs were simple wooden or stone milestones . Later, signs with directional arms were introduced, for example the fingerposts in the United Kingdom and their wooden counterparts in Saxony .
With traffic volumes increasing since the 1930s, many countries have adopted pictorial signs or otherwise simplified and standardized their signs to overcome language barriers, and enhance traffic safety. Such pictorial signs use symbols (often silhouettes) in place of words and are usually based on international protocols. Such signs were first developed in Europe, and have been adopted by most countries to varying degrees.
International conventions such as Vienna Convention on Road Signs and Signals and Geneva Convention on Road Traffic have helped to achieve a degree of uniformity in traffic signing in various countries. [ 1 ] Countries have also unilaterally (to some extent) followed other countries in order to avoid confusion.
Traffic signs can be grouped into several types. For example, Annexe 1 of the Vienna Convention on Road Signs and Signals (1968), which on 30 June 2004 had 52 signatory countries, defines eight categories of signs:
In the United States, Canada, Australia, and New Zealand signs are categorized as follows:
In the United States, the categories, placement, and graphic standards for traffic signs and pavement markings are legally defined in the Federal Highway Administration 's Manual on Uniform Traffic Control Devices , which serves as the national standard.
A rather informal distinction among the directional signs is the one between advance directional signs, interchange directional signs, and reassurance signs. Advance directional signs appear at a certain distance from the interchange, giving information for each direction. A number of countries do not give information for the road ahead (so-called "pull-through" signs), and only for the directions left and right. Advance directional signs enable drivers to take precautions for the exit (e.g., switch lanes, double check whether this is the correct exit, slow down).
They often do not appear on lesser roads, but are normally posted on expressways and motorways, as drivers would be missing exits without them. While each nation has its own system, the first approach sign for a motorway exit is mostly placed at least 1,000 metres (3,300 ft) from the actual interchange. After that sign, one or two additional advance directional signs typically follow before the actual interchange itself.
The earliest road signs were milestones , giving distance or direction; for example, the Romans erected stone columns throughout their empire giving the distance to Rome. According to Strabo, the Mauryas erected signboards at a distance of 10 stades to mark their roads. [ 2 ] In the Middle Ages , multidirectional signs at intersections became common, giving directions to cities and towns.
In 1686, the first known Traffic Regulation Act in Europe was established by King Peter II of Portugal . This act foresaw the placement of priority signs in the narrowest streets of Lisbon , stating which traffic should back up to give way. One of these signs still exists at Salvador street, in the neighborhood of Alfama . [ 3 ]
The first modern road signs erected on a wide scale were designed for riders of high or "ordinary" bicycles in the late 1870s and early 1880s. These machines were fast, silent and their nature made them difficult to control, moreover their riders travelled considerable distances and often preferred to tour on unfamiliar roads. For such riders, cycling organizations began to erect signs that warned of potential hazards ahead (particularly steep hills), rather than merely giving distance or directions to places, thereby contributing the sign type that defines "modern" traffic signs.
The development of automobiles encouraged more complex signage systems using more than just text-based notices. One of the first modern-day road sign systems was devised by the Italian Touring Club in 1895. By 1900, a Congress of the International League of Touring Organizations in Paris was considering proposals for standardization of road signage. In 1903 the British government introduced four "national" signs based on shape, but the basic patterns of most traffic signs were set at the 1908 World Road Congress in Paris . [ citation needed ] In 1909, nine European governments agreed on the use of four pictorial symbols, indicating "bump", "curve", "intersection", and "grade-level railroad crossing". The intensive work on international road signs that took place between 1926 and 1949 eventually led to the development of the European road sign system. Both Britain and the United States developed their own road signage systems, both of which were adopted or modified by many other nations in their respective spheres of influence. The UK adopted a version of the European road signs in 1964 and, over past decades, North American signage began using some symbols and graphics mixed in with English.
In the U.S., the first road signs were erected by the American Automobile Association (AAA). Starting in 1906, regional AAA clubs began paying for and installing wooden signs to help motorists find their way. In 1914, AAA started a cohesive transcontinental signage project, installing more than 4,000 signs in one stretch between Los Angeles and Kansas City alone. [ 4 ]
Over the years, change was gradual. Pre-industrial signs were stone or wood, but with the development of Darby's method of smelting iron using coke, painted cast iron became favoured in the late 18th and 19th centuries. Cast iron continued to be used until the mid-20th century, but it was gradually displaced by aluminium or other materials and processes, such as vitreous enamelled and/or pressed malleable iron, or (later) steel. Since 1945 most signs have been made from sheet aluminium with adhesive plastic coatings; these are normally retroreflective for nighttime and low-light visibility. Before the development of reflective plastics, reflectivity was provided by glass reflectors set into the lettering and symbols.
New generations of traffic signs based on electronic displays can also change their text (or, in some countries, symbols) to provide for "intelligent control" linked to automated traffic sensors or remote manual input. In over 20 countries, real-time Traffic Message Channel incident warnings are conveyed directly to vehicle navigation systems using inaudible signals carried via FM radio, 3G cellular data and satellite broadcasts. Finally, cars can pay tolls and trucks pass safety screening checks using video numberplate scanning, or RFID transponders in windshields linked to antennae over the road, in support of on-board signalling, toll collection, and travel time monitoring.
Yet another "medium" for transferring information ordinarily associated with visible signs is RIAS (Remote Infrared Audible Signage) , e.g., "talking signs" for print-handicapped (including blind/low-vision/illiterate) people. These are infra-red transmitters serving the same purpose as the usual graphic signs when received by an appropriate device such as a hand-held receiver or one built into a cell phone.
Finally, on August 5, 1914, the world's first electric traffic signal was put into place on the corner of Euclid Avenue and East 105th Street in Cleveland , Ohio. [ citation needed ]
Typefaces used on traffic signs vary by location, with some typefaces being designed specifically for the purpose of being used on traffic signs and based on attributes that aid viewing from a distance. A typeface chosen for a traffic sign is selected based on its readability, which is essential for conveying information to drivers quickly and accurately at high speeds and long distances.
Factors such as clear letterforms, lines of copy, appropriate spacing, and simplicity contribute to readability. Increased x-height and counters specifically help with letter distinction and reduce halation , which especially affects aging drivers. In cases of halation, certain letters can blur and look like others, such as a lowercase "e" appearing as an "a", "c", or "o". [ 5 ] [ 6 ]
In 1997, a design team at T.D. Larson Transportation Institute began testing Clearview , a typeface designed to improve readability and halation issues with the FHWA Standard Alphabet, also known as Highway Gothic , which is the standard typeface for highway signs in the U.S. [ 7 ] [ 8 ]
The adoption of Clearview for traffic signs over Highway Gothic has been slow since its initial proposal. Country-wide adoption faced resistance from both local governments and the Federal Highway Administration (FHWA), citing concerns about consistency and cost, along with doubts of the studies done on Clearview’s improved readability. As stated by the FHWA, "This process (of designing Clearview) did not result in a necessarily better set of letter styles for highway signing, but rather a different set of letter styles with increased letter height and different letter spacing that was not comparable to the Standard Alphabets." [ 9 ]
The FHWA allowed use of Clearview to be approved on an interim basis as opposed to national change, where local governments could decide to submit a request to the FHWA for approval to update their signs with Clearview, but in 2016 they rescinded this approval, wanting to limit confusion and inconsistency that could come from a mix of two typefaces being used. In 2018, they again allowed interim approval of Clearview, with Highway Gothic remaining the standard. [ 9 ] [ 10 ]
Cars are beginning to feature cameras with automatic traffic sign recognition , beginning in 2008 with the Opel Insignia . It mainly recognizes speed limits and no-overtaking areas. [ 11 ] It also uses GPS and a database of speed limits, which is useful in the many countries which signpost city speed limits with a city name sign rather than a speed limit sign.
Rail traffic signs often differ considerably between countries and often bear little similarity to road signs. Rail traffic is operated by professional drivers whose training is much longer than that required for ordinary road driving licences. Differences between neighboring countries cause problems for cross-border traffic and create a need for additional training for drivers.
This article is a summary of traffic signs used in each country.
Roads can be motorways , expressways or other routes. In many countries, expressways share the same colour as primary routes, but there are some exceptions where they share the colour of motorways (Austria, Liechtenstein, Hungary, Switzerland, Spain, Sweden) or have their own colour (the countries comprising former Yugoslavia employ white text on blue specifically for expressways).
When it comes to motorways and route colours, the following schemes are adopted:
Local traffic road signs usually employ black text on white. Exceptions are the Czech Republic (yellow-on-black), Finland (white-on-black), Austria and Spain (white-on-green), as well as Denmark, Iceland and Poland (blue-on-white).
Tourist sighting signs usually employ white on some shade of brown. Detours use black on a shade of yellow or orange.
Typefaces used in road signage vary across countries. Usually a country will have a standardized typeface throughout the country. In some countries, however, it is not unusual to find other typefaces in use, as well as road signs printed in the wrong typeface by manufacturers who default to some other font. The following list showcases the most standardized typeface of each country, outlining significant variations.
The rest of the world usually employs Transport, Highway Gothic or Arial for the Latin text, and a sans-serif font for the non-Latin text which may or may not have a specific name. Libya has the peculiarity of sign-posting in Arabic only and employing no Latin text.
Some countries may prefer to write cities and town names in all-uppercase (among which: Albania, Bangladesh, Burundi, the Czech Republic, Finland, France and former colonies, Ireland for place names in English, Italy, Luxembourg, Portugal, China and North Korea, Sierra Leone, Spain, Sweden, all of the former Soviet Union except for Ukraine), others instead prefer to use normal mixed case names.
Generally, road signs in African countries closely follow those used in Europe, but most African countries have not ratified the Vienna Convention on Road Signs and Signals .
Although the Trans-African Highway network exists, Trans-African route numbers are not signed at all in any African country, except Kenya and Uganda where the Mombasa – Nairobi – Kampala – Fort Portal section (or the Kampala– Kigali feeder road) of Trans-African Highway 8 is sometimes referred to as the "Trans-Africa Highway".
In member states of the Southern African Development Community , road signs are based on the SADC Road Traffic Signs Manual , [ 11 ] [ 12 ] [ 13 ] a document designed to harmonise traffic signs in these countries. However, not all member states have adopted the SADC-RTSM , and those that have may not use all signs listed in the SADC-RTSM or may use regional variations.
Road signs in Angola are largely modelled on Portuguese road signs, since Angola is a former Portuguese colony. As the country is a member of the Southern African Development Community, its road signs are to be harmonised with the traffic signs of the Community's member states according to the SADC Road Traffic Signs Manual , although they are transitional in nature. [ 14 ]
Highway signs use white text on green backgrounds.
Road signs in Burundi are similar in appearance to those used in Italy with certain distinctions. They are written in French in uppercase letters.
Road signs in the Democratic Republic of the Congo are largely derived from the Belgian road signs, since the DRC is a former Belgian colony. They are written in French.
Road signs in Egypt are regulated under the Egypt Traffic Signs Manual (ETSM). [ citation needed ] They closely follow those used in the United Kingdom with certain distinctions. They are written in Arabic and English.
Road signs in Mauritius are regulated by the Traffic Signs Regulations 1990. They are largely derived from the British road sign system since Mauritius is a former British colony.
Road signs in Nigeria do not differ greatly from those used in the rest of the African continent. However, it notably makes use of a yellow background for warning and prohibitory signs, as well as yellow text for the stop sign. [ 15 ]
Road signs in Sierra Leone are similar in appearance to those used in Italy with certain distinctions. They are written in English in uppercase letters.
Road signs in Somalia are similar in appearance to those used in Italy with certain distinctions. They are written in Arabic and Somali.
Road signs in Uganda are largely derived from the British road sign system since the country is a former British colony.
Road signs in Asia differ by country. Typically, Asian countries closely follow Europe in terms of road sign design, which means they are influenced by the Vienna Convention on Road Signs and Signals , though a number of countries' signage has been influenced from the Manual on Uniform Traffic Control Devices (MUTCD), for example Cambodia, Japan, Thailand, Myanmar, Sri Lanka, Malaysia and Indonesia.
Asian Highway Network signs are marked using white letters on a dark blue background. In Turkey and Russia, European route numbers are indicated using white characters on a green rectangle and are signposted; however this is not the case in many other Asian countries.
Road signs in Armenia are similar in design to those used in the Soviet Union before its collapse in 1991 as the country was a Soviet Socialist Republic until 1991. Modern road signs used in Armenia generally maintain the same design as those used in Russia, with the exception that inscriptions on road signs are written in both Armenian and English, including the stop sign .
Road signs in Azerbaijan are similar in design to those used in the Soviet Union before its collapse in 1991 as the country was a Soviet Socialist Republic until 1991.
In Cambodia, road signs are prescribed by the Ministry of Public Works and Transport . [ 16 ] Cambodian road signage practice closely follows those used in Europe — with the exception of warning signs which follow the American MUTCD — matching these designs used in other Asian countries like Japan, Myanmar, Thailand, Malaysia and Indonesia.
A variety of road signs are used in mainland China, specified in the Guobiao standard GB 5678–2009. Most road signs in China, such as warning signs, appear to adopt the practices of ISO standards not intended for use in traffic signage, namely ISO 3864 and ISO 7010 .
Direction signs have these colours:
Hong Kong's traffic signs are derived from the British road sign system, and are bilingual in English and Chinese (English on top, and traditional Chinese characters at the bottom).
Road signs in Macau are inherited from Portuguese road signage system prior to 1994/1998. Inscriptions are written in Chinese (traditional Chinese characters) and Portuguese.
Road signs in Taiwan are reminiscent of the early 1940s Japanese road signage, which was used in Japan itself until 1950. Overall, Taiwan is lenient towards European road signs in terms of design, but with some influences from road signs used in Japan and China, as well as the MUTCD for guide signs and temporary signs (amber rhombic warning signs).
Road signs in Georgia are mostly inherited from those used in the former Soviet Union, but with some modifications in design. Inscriptions on road signs are usually written in Georgian and English.
Road signs in the Republic of India are similar to those used in some parts of the United Kingdom, except that they are multilingual. Most urban roads, state highways, and national highways have signs in the state language and English.
Road signs in Iran mainly follow the Vienna Convention. Text is written in Persian and English.
Road signs in Iraq are regulated in Chapter 11 of the Highway Geometric Design Code . [ 17 ] They are written in Arabic and English.
Road signs in Israel mainly follow the Vienna Convention, but have some variants. Many signs are trilingual , with text written in Hebrew , Arabic and English.
Road signs in Japan are either controlled by local police authorities under Road Traffic Law ( 道路交通法 , Dōro Kōtsūhō ) or by other road-controlling entities including Ministry of Land, Infrastructure, Transport and Tourism , local municipalities, NEXCO (companies controlling expressways), under Road Law ( 道路法 , Dōrohō ) . Most of the design of the road signs in Japan are similar to the signs on the Vienna Convention, except for some significant variances, such as stop sign with a red downward triangle.
The design of road signs in Kazakhstan is largely based on that of the former Soviet Union. Inscriptions on road signs, including the names of settlements and streets, are usually written in two languages: Kazakh and Russian.
Both North Korea and South Korea developed their own road signage systems.
Road signs in South Korea are standardised and regulated by the Korean Road Traffic Authority. South Korean road signage closely follows those used in Europe, but with some influences from road signs in Japan . Similar to the road signs of Poland and Greece, warning signs are triangular with a yellow background and a red border. Like other countries, the signs use pictograms to display their meaning.
Road signs in North Korea differ by locale. Most of the time, they tend to closely follow those of China in design (but not identically), and some road signs are unique to North Korea (such as an exclamation mark drawn on another sign to indicate other dangers), so they never appear elsewhere. The font used for Latin letters appears to be the same as in China.
South Korea keeps close to the Vienna Convention on Road Signs and Signals as South Korea is an original signatory. On the other hand, North Korea is not a signatory to the convention and instead designs its own signs, creating confusion. [ 18 ]
Road signs in Kuwait are regulated under the 2011 Kuwait Manual on Traffic Control Devices . They closely follow those used in the United Kingdom with certain distinctions, with text written in Arabic and English. [ 19 ]
The design of road signs in Kyrgyzstan is largely based on that of the former Soviet Union.
The design of road signs in Mongolia is largely based on that of the former Soviet Union, despite having never been part of it. Inscriptions on road signs are usually written in Mongolian and English.
Road signs in the Philippines are standardized in the Road Signs and Pavement Markings Manual , published by the Department of Public Works and Highways . Philippine road signage practice closely follow those used in Europe, but with local adaptations and some minor influences from the US MUTCD and Australian road signs. However, some road signs may differ by locale, and mostly diverge from the national standard. For example, the Metropolitan Manila Development Authority (MMDA) has used pink and light blue in its signage for which it has been heavily criticised. [ 20 ] [ 21 ]
Regulatory road signs are generally circular, and most warning signs take the form of a triangle. Since 2012, however, a more visibly distinctive design (taken from that used for school signs in the US) has been adopted for pedestrian-related signs: these consist of a fluorescent yellow-green pentagon with black border and symbol.
The Philippines signed the Vienna Convention on Road Signs and Signals on 8 November 1968, and ratified it on 27 December 1973. [ 22 ]
Road signs in Qatar are regulated under the Qatar Traffic Control Manual (QTCM). They closely follow those used in the United Kingdom with certain distinctions. They are written in Arabic and English. [ 23 ]
Road signs in the Asian part of Russia follow the Vienna Convention, specified in the GOST standard 52290-2004 [ 24 ] (the Soviet Union was an original signatory to the convention, but only a few post-Soviet states are signatories to the convention). However, direction signs in the Asian part of Russia omit European route numbers (which are used in the European part ), replacing them with Asian route numbers, which have a dark blue background with white lettering, with a few exceptions. The same also applies to road signs used in Central Asian countries such as Kazakhstan, Kyrgyzstan, Tajikistan, Turkmenistan and Uzbekistan.
Russia signed the Vienna Convention on Road Signs and Signals on 8 November 1968 and ratified it on 7 June 1974. [ 25 ]
Road signs in Saudi Arabia are generally written in Arabic and English. A particular aspect of Saudi signage is that they indicate areas which are forbidden to non-Muslims in the cities of Mecca and Medina . [ 26 ]
Singapore's traffic signs closely follow British road sign conventions, although the government has introduced some changes to them.
Road signs in Thailand are standardised and are uniform throughout the country. Since the late twentieth century, Thai road signage practice has closely followed the designs used in the United States, Europe and Japan. Road signs are often written in the Thai language and display metric units. In tourist areas, English is also used for important public places such as tourist attractions , airports, railway stations, and immigration checkpoints . Destinations on direction signage are written in both Thai and English.
Thailand signed the Vienna Convention on Road Signs and Signals on 8 November 1968 but has yet to fully ratify it. [ 22 ]
Road signs in Turkmenistan are mostly based on those used in the Soviet Union before its dissolution in 1991. However, modern road signs in Turkmenistan are similar to those used in Turkey.
Road signs in Uzbekistan are very similar in design to those used in the Soviet Union until its dissolution in 1991, as the country was a Soviet Socialist Republic until 1991, when it declared its independence from the Soviet Union. [ 27 ] Modern road signs in Uzbekistan on the one hand follow modern road signs used in Russia from the GOST R 52290-2004 standard, but on the other hand follow road signs from European countries such as Spain, Germany and Italy. [ 27 ]
Road signs in Yemen are regulated under the Yemen Highway Design Standards (YHDS). They closely follow those used in Portugal with certain distinctions. [ citation needed ] They are written in Arabic and English.
Previously, the Colony of Aden (which later became South Yemen in 1967 prior to the Yemeni unification in 1990) used pre-Worboys road signs like many former British colonies. [ citation needed ]
The standardization of traffic signs in Europe commenced with the signing of the 1931 Geneva Convention concerning the Unification of Road Signals by several countries. [ 28 ] The 1931 Convention rules were developed in the 1949 Geneva Protocol on Road Signs and Signals [ 29 ] and a European Agreement supplementing the 1949 Protocol. [ 30 ]
In 1968, the European countries signed the Vienna Convention on Road Traffic treaty, with the aim of standardizing traffic regulations in participating countries to facilitate international road traffic and to increase road safety. Part of the treaty was the Vienna Convention on Road Signs and Signals , which defined the traffic signs and signals. As a result, in Western Europe the traffic signs are well standardized, although there are still some country-specific exceptions, mostly dating from the pre-1968 era.
The principle of the European traffic sign standard is that certain shapes and colours are to be used with consistent meanings:
The signposting of road numbers also differs greatly, except that European route numbers , if displayed, are always indicated using white characters on a green rectangle. European route numbers are, however, not signed at all in the United Kingdom, Albania, Iceland and Andorra.
The Convention recommends that certain signs – such as "STOP", "ZONE", etc. – be in English; however, use of the local language is also permitted. If a language uses non-Latin characters, a Latin-script transliteration of the names of cities and other important places should also be given. Road signs in Ireland are bilingual, using Irish and English. Wales similarly uses bilingual Welsh–English signs, while some parts of Scotland have bilingual Scottish Gaelic–English signs. Finland also uses bilingual signs, in Finnish and Swedish . Signs in Belgium are in French, Dutch, or German depending on the region. In the Brussels Capital Region, road signs are in both French and Dutch. Signs in Switzerland are in French, German , Italian, or Romansh depending on the canton.
For countries driving on the left, the convention stipulates that the traffic signs should be mirror images of those used in countries driving on the right. This practice, however, is not systematically followed in the four European countries driving on the left – the United Kingdom, Cyprus, Malta and Ireland. The convention permits the use of two background colours for danger and prohibition signs: white or yellow. Most countries use white, with a few – such as Finland, Iceland, Poland and Sweden – opting for yellow as this tends to improve the winter-time visibility of signs in areas where snow is prevalent. In some countries, such as France or Italy, white is the normal background colour for such signs, but yellow is used for temporary signage (as, for example, at road works).
European countries – with the notable exception of the United Kingdom, where distances and lengths are indicated in miles, yards, feet, and inches, and speed limits are expressed in miles per hour – use the metric system on road signs.
European traffic signs have been designed with the principles of heraldry in mind; [ citation needed ] i.e., the sign must be clear and able to be resolved at a glance. Most traffic signs conform to heraldic tincture rules, and use symbols rather than written texts for better semiotic clarity.
Albanian road signs are predominantly based on the Italian sign system, hence both follow the same convention on road sign design set out by the Vienna Convention on Road Signs and Signals.
Road signs in Andorra are similar to those set out in the Vienna Convention on Road Signs and Signals. Its direction signage is always white. Other signs, such as warning and regulatory, are identical to those used in Spain.
Unlike in other European countries, route numbers are not always shown. This can cause problems for drivers from neighbouring European countries when trying to find an international destination.
Road signs in Belarus are visually not much different from those neighboring post-Soviet countries like Russia and Ukraine. Inscriptions on road signs, including names of settlements, are written in Belarusian or Russian, most often in Belarusian.
Croatian road signs follow the Vienna Convention ( SFR Yugoslavia was the original signatory for Croatia, which is now a contracting party itself).
In the first years following Croatia's independence, its traffic signs were the same as in the rest of the former Yugoslavia. In the early 2000s, replacement of the yellow background of warning signs began, and new signs now use a white background.
While road signs in Greece do largely resemble those in use elsewhere in Europe, a notable exception to this is the use of a yellow background for warning signs.
Although Iceland is not a signatory to the Vienna Convention on Road Signs and Signals, road signs in Iceland follow the Vienna Convention guidelines. Warning and regulatory, specifically prohibitory, signs use a yellow background.
Until the partition of Ireland in 1922 and the independence of the Irish Free State (now the Republic of Ireland), British standards applied across the island. In 1926 road sign standards similar to those used in the UK at the time were adopted. [ 31 ] Law requires that the signs be written in both Irish and English.
In 1956, warning road signs in the Republic were changed from the UK standard with the adoption of US-style "diamond" signs for many road hazard warnings. [ 32 ] A number of regulatory signs were also introduced.
Directional signage is similar to current United Kingdom standards, in that the same colours and typefaces are used. However, Irish text is rendered in a unique oblique variant of the Transport typeface.
In line with the majority of Europe, Ireland uses the metric system, which has been displayed on directional signs based on the Worboys Committee standard since 1977 and, upon adopting metric speed limits , on speed limit signs since 2005.
Road signs in Liechtenstein use the same design as those used in Switzerland.
Road signs in Malta use a mixture of British and Italian designs. As Malta drives on the left, some Italian signs are therefore mirrored to reflect this system. Information signs are often bilingual, displaying text in English and Maltese.
Road signs in Moldova are in some ways similar in design to those used in the Soviet Union before its dissolution in 1991. However, modern road signs in Moldova tend to follow those used in Romania.
Road signs in the Netherlands follow the Vienna Convention. Its directional signs are unique in that blue is the only colour used for the background, regardless of the classification of the road.
Information intended for cyclists always appear on white signs with red or green letters.
The Dutch RWS (formerly ANWB ) typeface was replaced by a new font, named ANWB-Uu (also known as Redesign ), on some signs in the country. The typeface was developed in 1997 and appeared on many signs but has been discontinued since 2015. The language of the signs is typically Dutch, though bilingual signs may be used when the information is relevant for tourists.
The road signs in Poland follow the Vienna convention. Poland uses yellow as the background colour for warning signs (an alternative allowed under the convention), rather than the much more widely adopted white.
Road signs in Russia follow the Vienna Convention, specified in the GOST standard 52290-2004 [ 24 ] (the Soviet Union was an original signatory to the convention, but only a few post-Soviet states are signatories to the convention).
European route numbers are signposted on direction signs in the European part of Russia, but are replaced by Asian Highway route numbers in the Asian part .
In February 2019, the traffic police supported proposals for the introduction of reduced sizes of road signs. The idea was initiated by the Moscow government, and the smaller signs are planned to be installed throughout Russia after a successful experiment. The allowable size of signs will be reduced to 40 cm (16 inches) in diameter, and in some cases to 35 cm (14 inches), down from the current standard of 60 cm (24 inches). [ 33 ]
Modern road signs in use in Slovakia since 2019 are similar in design to those used in Germany, though the main difference is the typeface used. Road signs from before 2019 are permitted to be left in place until 2034. [ citation needed ]
Road signs in Sweden mostly follow the Vienna Convention, though it notably uses yellow for the background of its warning and prohibitory signs. City names are written in uppercase letters.
Swiss road signs mostly follow the Vienna Convention with a few adaptations and exceptions. Distances and other measurements are displayed in metric units.
Bicycle and mountain bike routes, and routes for vehicle-like transport means, use white text on a falu red background.
Road signs in Ukraine broadly conform to European norms, and they are based on the road signage systems used consistently throughout the former USSR. They are written mostly in English and Ukrainian.
Traffic signing in the UK conforms broadly to European norms, though a number of signs are unique to Britain and direction signs omit European route numbers. The current sign system, introduced on 1 January 1965, was developed in the late 1950s and early 1960s by the Anderson Committee, which established the motorway signing system, and by the Worboys Committee , which reformed signing for existing all-purpose roads.
The UK remains the only Commonwealth country to use imperial measurements for distance and speed, although "authorised weight" signs have been in metric tonnes since 1981 and there is currently a dual-unit (metric first) option for height and width restriction signage, intended for use on safety grounds. Additionally, kilometre signs are installed at intervals of 500 metres (1,600 ft) indicating the distance from the start of the motorway.
Signs are generally bilingual in all parts of Wales (English/ Welsh or Welsh/English), and similar signs are beginning to be seen in parts of the Scottish Highlands ( Scottish Gaelic /English).
All signs and their associated regulations can be found in the Traffic Signs Regulations and General Directions , and are complemented by the various chapters of the Traffic Signs Manual .
Road signs in Australia are covered by AS 1742, which is unofficially known as the Manual of Uniform Traffic Control Devices for Australia and serves a similar role to the FHWA MUTCD. [ 34 ] As a result, road signs in Australia closely follow those used in America, though some sign designs closely follow those used in the United Kingdom.
New Zealand road signs are generally influenced both by American and European practices.
Warning signs are diamond-shaped with a yellow background for permanent warnings, and an orange background for temporary warnings. They are somewhat more pictorial than their American counterparts. This is also true for Canadian and Mexican signage.
Regulatory signs also follow European practice, with a white circle with a red border indicating prohibitive actions, and a blue circle indicating mandatory actions. White rectangular signs with a red border indicate lane usage directions. Information and direction signs are rectangular, with a green background indicating a state highway , a blue background for all other roads and all services (except in some areas, where directional signage is white), and a brown background for tourist attractions.
Before 1987, most road signs had black backgrounds – diamonds indicated warnings, and rectangles indicated regulatory actions (with the exception of the Give Way sign (an inverted trapezium), and Stop sign and speed limit signs (which were the same as today)). Information signs were yellow, and direction signage was green on motorways and black everywhere else.
Road signs in Papua New Guinea are standardised and closely follow those used in Australia with certain distinctions. They are written in English. [ 35 ]
In North America (including Mexico), sign colours normally carry standard meanings, though exceptions may exist, especially outside the US.
The US Manual on Uniform Traffic Control Devices (MUTCD) prescribes four other colours. [ 37 ]
Regulatory signs are also sometimes seen with white letters on red or black signs. In Quebec , blue is often used for public services such as rest areas; many black-on-yellow signs are red-on-white instead.
Many US states and Canadian provinces now use fluorescent orange for construction signs. [ 39 ]
Highway symbols and markers
Every state in the U.S. and province in Canada has different markers for its own highways, but uses standard ones for all federal highways. Many special highways – such as the Queen Elizabeth Way , Trans-Canada Highway , and various auto trails in the U.S. – have used unique signs. Counties in the US sometimes use a pentagonal blue sign with yellow letters for numbered county roads , though the use is inconsistent even within states.
In Australia, the five states have alphanumeric markers for their own highways, based on the Great Britain road numbering scheme of 1963. Tasmania was the first state to implement this scheme in 1979. [ 40 ] "M" roads signified motorways , "A" roads signified primary highways, "B" roads signified less significant roads and "C" roads linked smaller settlements. Western Australia never implemented the alphanumeric scheme, instead retaining the shield system. [ 41 ]
Units
Distances are displayed using the metric system in all countries except for the United States, where English units are used. However, the MUTCD 2000 [ 42 ] and 2003 [ 43 ] editions developed by the Federal Highway Administration contain metric versions of the signs; these are rarely used in the US, but some do get used outside of it, in particular in Belize and Guyana.
Languages
Where signs use a language, the recognized language or languages of the area are normally used. Signs in most of the US, Canada, Australia, and New Zealand are in English. Quebec uses French. In contrast, the New Brunswick , Jacques-Cartier , and Champlain bridges, in Montreal (as well as some parts in the West Island ), use both English and French, and a number of other provinces and states, such as Ontario , Manitoba, and Vermont use bilingual French–English signs in certain localities. Mexico uses Spanish. Within a few miles of the US–Mexico border , road signs are often in English and Spanish in places like San Diego, Yuma, and El Paso. Indigenous languages, mainly Nahuatl as well as some Mayan languages , have been used as well.
In both Canada and Mexico, pictorial signs are more common than in the US, where some signs are simply written in English.
Typefaces
The typefaces predominantly used on signs in the US and Canada are the FHWA alphabet series (Series B through Series F and Series E Modified). Details of letter shape and spacing for these alphabet series are given in "Standard Alphabets for Traffic Control Devices", first published by the Bureau of Public Roads (BPR) in 1945 and subsequently updated by the Federal Highway Administration (FHWA). It is now part of Standard Highway Signs (SHS), the companion volume to the MUTCD which gives full design details for sign faces.
Initially, all the alphabet series consisted of uppercase letters and digits only, although lowercase extensions were provided for each alphabet series in a 2002 revision of SHS. Series B through Series F evolved from identically named alphabet series which were introduced in 1927.
Straight-stroke letters in the 1927 series were substantially similar to their modern equivalents, but unrounded glyphs were used for letters such as B, C, and D to permit more uniform fabrication of signs by illiterate painters. Various state highway departments and the federal BPR experimented with rounded versions of these letters in the following two decades.
The modern, rounded alphabet series was finally standardized in 1945 after rounded versions of some letters (with widths loosely appropriate for Series C or D) were specified as an option in the 1935 MUTCD and draft versions of the new typefaces had been used in 1942 for guide signs on the newly constructed Pentagon road network .
The mixed-case alphabet now called Series E Modified, which is the standard for destination legend on freeway guide signs, originally existed in two parts: an all-uppercase Series E Modified, which was essentially similar to Series E, except for a larger stroke width, and a lowercase-only alphabet. Both parts were developed by the California Division of Highways (now Caltrans ) for use on freeways in 1948–1950.
Initially, the Division used all-uppercase Series E Modified for button-reflectorized letters on ground-mounted signs and mixed-case legend (lowercase letters with Series D capitals) for externally illuminated overhead guide signs. Several Eastern turnpike authorities blended all-uppercase Series E Modified with the lowercase alphabet for destination legends on their guide signs.
Eventually, this combination was accepted for destination legend in the first manual for signing Interstate highways, which was published in 1958 by the American Association of State Highway Officials and adopted as the national standard by the BPR.
Uses of non-FHWA typefaces
The US National Park Service uses NPS Rawlinson Roadway , a serif typeface, for guide signage; it typically appears on a brown background. Rawlinson has replaced Clarendon as the official NPS typeface, but some states still use Clarendon for recreational signage.
Georgia , in the past, used uppercase Series D with a custom lowercase alphabet on its freeway guide signs; the most distinctive feature of this typeface is the lack of a dot on lowercase i and lowercase j . This was discontinued in 2012. [ 44 ] More recent installations appear to include dots. [ 45 ]
The Clearview typeface, developed by US researchers to provide improved legibility, is permitted for light legend on dark backgrounds under FHWA interim approval. Clearview has seen widespread use by state departments of transportation in Arkansas , Arizona , [ 46 ] Illinois , [ 47 ] Kentucky , Maryland , Michigan , Ohio , Pennsylvania , Texas , Vermont , and Virginia . The Kansas Turnpike Authority has also introduced Clearview typeface to some of its newer guide signs along the Kansas Turnpike , but the state of Kansas continues to use the FHWA typefaces for signage on its non-tolled Interstates and freeways.
In Canada, the Ministry of Transportation for the Province of British Columbia specifies Clearview for use on its highway guide signs, [ 48 ] and its usage has shown up in Ontario on the Don Valley Parkway and Gardiner Expressway in Toronto and on new 400-series highway installations in Hamilton , Halton and Niagara , as well as street signs in various parts of the province. The font is also being used on newer signs in Alberta , Manitoba, and Quebec .
It is common for local governments, airport authorities, and contractors to fabricate traffic signs using typefaces other than the FHWA series; Helvetica , Futura and Arial are common choices.
For road signs in Canada, the Transportation Association of Canada (TAC) publishes its own Manual of Uniform Traffic Control Devices for Canada for use by Canadian jurisdictions. [ 49 ] Although it serves a similar role to the FHWA MUTCD, it has been independently developed and has a number of key differences with its US counterpart, most notably the inclusion of bilingual (English/French) signage for jurisdictions such as New Brunswick and Ontario with significant anglophone and francophone populations, a heavier reliance on symbols rather than text legends, and metric measurements instead of imperial.
The Ministry of Transportation of Ontario (MTO) also has historically used its own MUTCD which bore many similarities to the TAC MUTCDC. However, as of approximately 2000, MTO has been developing the Ontario Traffic Manual (OTM), a series of smaller volumes each covering different aspects of traffic control (e.g., regulatory signs, warning signs, sign design principles, traffic signals, etc.).
Road signs in Central American countries are heavily influenced by the US MUTCD but use metric units instead of imperial/US units; they are regulated under the Manual Centroamericano de Dispositivos Uniformes para el Control del Transito , a Central American equivalent to the US MUTCD published by the Central American Integration System (SICA). [ 50 ]
Road signs in Mexico are influenced by road signs in America, and are published under the Manual de Dispositivos para el Control del Tránsito en Calles y Carreteras . It serves a similar role to the FHWA MUTCD, but is independently developed and has a number of key differences from the US counterpart, and the language used is Mexican Spanish . Like Canada but unlike America, Mexico has a heavier reliance on symbols than text legends, and uses metric measurements instead of imperial. [ 51 ]
Road signs in the United States are, for the most part, standardized by federal regulations, most notably in the Manual on Uniform Traffic Control Devices (MUTCD) and its companion volume the Standard Highway Signs (SHS). The MUTCD was most recently updated on 19 December 2023, when the 11th edition was released, [ 52 ] and became effective on 18 January 2024, 30 days after publication. States have two years after the effective date to do one of the following options: adopt the revised MUTCD, adopt the revised MUTCD with a state supplement, or adopt a state-specific MUTCD. [ 53 ]
Road signs in Puerto Rico share the same design as those used in the mainland United States, but with inscriptions in Spanish instead of English, since Spanish is an official language in Puerto Rico.
Road signs in the Caribbean and Latin America vary from country to country. For the most part, conventions in signage tend to resemble United States signage conventions more so than European and Asian conventions. For example, warning signs are typically diamond-shaped and yellow rather than triangular and white. Some variations include the "Parking" and "No Parking" signs, which contain either a letter E or P , depending on which word is used locally for "Parking" (Spanish estacionamiento or parqueo , Portuguese estacionamento ), as well as the Stop sign, which usually reads "Pare" or "Alto". Notable exceptions include speed limit signs, which follow the European conventions, and the "No Entry" sign, often replaced with a crossed upwards arrow.
Of all the countries in South America, only four (Brazil, Chile, Ecuador and Venezuela) have signed the Vienna Convention on Road Signs and Signals, and Chile is the only one to have ratified it.
Road signs in Bolivia are regulated by the Manuales Técnicos para el Diseño de Carreteras standard which is based on the United States' MUTCD (FHWA), Central America's Manuales Técnicos para el Diseño de Carreteras (SICA), Colombia's Manual de Señalización Vial ( Ministry of Transport ), and Chile's Manual de Carreteras . [ 54 ] Thus, road signs used in Bolivia generally have many similarities to road signs used in the United States, Central America, Colombia and neighboring Chile.
Traffic signs in Colombia are classified into three categories:
Warning signs are very similar to warning signs in the United States. They are yellow and diamond-shaped with a black symbol (the yellow colour is changed to orange in areas under construction). In certain cases, the yellow colour is shifted to fluorescent yellow (in the School area sign and Chevron sign).
Mandatory signs are similar to European signs. They are circular with a red border, a white background and a black symbol. The Stop sign and Yield sign are as in Europe, except that the word "Stop" is changed to "Pare" and the Yield sign has no letters; it is a red triangle with a white centre.
Information signs have many shapes and colours. Principally they are blue with white symbols and in many cases these signs have an information letter below the symbol.
Road signs in Cuba are very similar to those used in European countries and generally conform to the Vienna Convention on Road Signs and Signals. On September 30, 1977, Cuba acceded to the Convention. Cuba still uses a circular STOP sign, with a triangle inside, which was used in the past in several European countries.
Road signs in Ecuador are regulated in Manual Básico de Señalización Vial [ 56 ] [ 57 ] [ 58 ] and Reglamento Técnico Ecuatoriano. RTE INEN 004-1:2011. Señalización vial . [ 59 ] Signs are similar in design to those used in the United States.
Ecuador signed the Vienna Convention on Road Signs and Signals on November 8, 1968, but has yet to fully ratify it. [ 22 ]
Road signs in Guyana generally follow the same design as those in the United States and are based on the MUTCD, with the exception that some signs are reversed since the country drives on the left. [ 60 ] However, most of the current signs found in Guyana are non-compliant with MUTCD standards. [ 61 ] [ 62 ] Metric speed limit signs in km/h are found in Guyana, while in the United States such signs with speed limits in km/h are extremely rare, usually seen near the borders with Canada and Mexico, both of which use the metric system.
Road signs in Haiti are standardized road signs closely following those used in France with certain distinctions. [ citation needed ] They are written in French and Haitian Creole .
Road signs in Paraguay are regulated in the Manual de Carreteras del Paraguay standard developed by the Ministry of Public Works and Communications ( Spanish : Ministerio de Obras Públicas y Comunicaciones ). [ 63 ]
Road signs in Peru are regulated by the Manual de Dispositivos de Control del Tránsito Automotor para Calles y Carreteras , [ 64 ] developed by the Ministry of Transport and Communications of Peru. This standard is based on the United States' Manual on Uniform Traffic Control Devices (MUTCD) developed by the Federal Highway Administration, [ 65 ] Colombia's Manual de Señalización Vial and Chile's Manual de Señalización de Tránsito . [ 66 ] As a result, road signs in Peru are similar in design both to those used in the United States and to those used in neighbouring Chile and Colombia.
Road signs in Suriname are largely modelled on the signage system used in the Netherlands, since Suriname is a former Dutch colony. As Suriname drives on the left, unlike the Netherlands, certain signs are reversed to reflect this system.
Road signs in Venezuela are regulated in Manual Venezolano de Dispositivos Uniformes para el Control del Tránsito and are based on the United States ' MUTCD . [ 67 ] | https://en.wikipedia.org/wiki/Traffic_signs_by_country |
A warning sign is a type of sign which indicates a potential hazard, obstacle, or condition requiring special attention. Some are traffic signs that indicate hazards on roads that may not be readily apparent to a driver. [ 1 ]
While warning traffic sign designs vary, they usually take the shape of an equilateral triangle with a white background and thick red border. In the People's Republic of China (excluding Macau and Hong Kong ) and North Korea , they appear with a black border and a yellow background. In Sweden , Greece , Finland , Iceland , Poland , Cuba , Nigeria , South Korea and Vietnam , they have a red border with an amber background. The polar bear warning sign in Svalbard recently changed from displaying a black bear on white background to a white bear on black background (both signs are triangular with a red border). Some countries (like France , Norway and Spain ) that normally use a white background have adopted an orange or amber background for road work or construction signs.
Warning signs in some countries have a diamond shape in place of the standard triangular shape. In the United States , Canada , Mexico , Australia , Japan , Liberia , Sri Lanka , New Zealand , [ 2 ] most of Central and South America , some countries of Southeast Asia , and also Ireland (diverging from the standards of the rest of Europe) warning signs are black on a yellow background and usually diamond-shaped, while temporary signs (which are typically construction signs) are black on an orange background. Some other countries, like Argentina and Taiwan , use a combination of triangle and diamond-shaped warning signs.
Warning signs usually contain a symbol. In Europe they are based on the UNECE Vienna Convention on Road Signs and Signals . In the United States they are based on the MUTCD standard and often contain text only.
Some of the first roadside signs—ancient milestones —merely gave distance measures. Hazard warnings were rare, though occasional specimens appeared, such as the specific warning about horse-drawn vehicles backing up which was carved in stone in Lisbon's Alfama neighborhood in 1686. The early signs did not have high-contrast lettering and their messages might have been easily overlooked. Signs were written in the local language; symbolic signs, though long used on certain tradesmen's signs (like the pawnbrokers' tri-ball symbol ), came into use for traffic only much later in history.
Complex signage systems emerged with the appearance of motorcars. In 1908 the automobile association in West London erected some warning signs. In 1909, nine European governments agreed on the use of four pictorial symbols, indicating bump , curve , intersection , and railroad crossing . The intensive work on international road signs that took place between 1926 and 1949 eventually led to the development of the European road sign system.
As the 20th century progressed, and traffic volume and vehicle speeds increased, so did the importance of road sign visibility. Earlier flat-painted signs gave way to signs with embossed letters , which in turn gave way to button copy signs— round retroreflective "buttons" helped to achieve greater night visibility. Flat metal signs reappeared in the 1980s with the widespread use of surfaces covered with retroreflective sheeting materials like Scotchlite .
In Europe, the 1968 Vienna Convention on Road Signs and Signals (which became effective in 1978) tried, among other things, to standardize important signs. After the fall of the Iron Curtain and the greater ease of country-to-country driving in the EU , European countries moved toward lessening the regional differences in warning signs .
In modern regulations, U.S. warning signs are classified as Series W signs, starting with the W1 Series (curves and turns) and ending with the W25 Series (concerning extended green traffic lights ). Some U.S. warning signs are without category while others like the warning stripes at tunnel portals or plain red End of Roadway signs are classified as Object Markers (OM Series). In the U.S., Stop and speed limit signs fall under the R Series (Regulatory). Modern U.S. signs are widely standardized ; unless they are antique holdovers from an earlier era, oddities like a yellow Stop sign or a red Slippery When Wet sign would typically appear only on private property—perhaps at a hospital campus or in a shopping mall parking lot.
Street sign theft by pranksters, souvenir hunters, and scrappers has become problematic: removal of warning signs costs municipalities money to replace lost signs, and can contribute to traffic collisions. Some authorities affix theft-deterrence stickers to the back sides of signs. Some jurisdictions have criminalized unauthorized possession of road signs or have outlawed their resale to scrap metal dealers. In some cases, thieves whose sign-removal led to road fatalities have been charged with manslaughter . [ 3 ] [ 4 ] [ 5 ] Artistically inclined vandals sometimes paint additional details onto warning signs: a beer bottle, a handgun, or a boom box added to the outstretched hand of the Pedestrian Crossing person, for example.
Warning signs can indicate any potential hazard, obstacle or condition requiring special attention. Some of the most common warning signs are the following.
General warning signs are used in instances in which the particular hazard, obstacle or condition is not covered by a standard sign. In Europe, they usually comprise an exclamation mark on the standard triangular sign ( Unicode U+26A0 ⚠ WARNING SIGN ) with an auxiliary sign below in the local language identifying the hazard, obstacle or condition. In Sweden , the general warning sign has only a vertical line instead of an exclamation mark; this model was also used in many other European countries until the 1990s. In the United States and other countries using diamond-shaped signs, the explanatory language is often written directly on the diamond-shaped sign, although it may contain only a general warning such as "Caution", and pictograms may also be used.
Warning signs can be placed in advance of, next to or on a specific obstacle. Obstacles such as railway level crossings may have several warning signs beforehand, while bridge ramparts typically have reflective signs placed directly on them on either side. These signs can be specific to the shape requirements of the obstacle, for example, bridge rampart signs are often tall and skinny so as not to intrude into the lane.
These signs indicate that dangerous or unexpected bends in the road are ahead. Signs typically indicate whether the curves are to the right or to the left, the angle of the curve and whether it is one curve or a series of curves.
Chevron -shaped symbols or arrows on rectangular signs may be placed at the actual location of the bend or curve to further mark the location of the curve and to assist in the negotiation of the curve. They may also be used to indicate "merge" with other traffic, as for an on-ramp of a limited-access highway . An unusual occurrence of the rectangular arrow sign appears on the eastbound approach to Dead Man's Curve in Cleveland, Ohio , US, a curve so sharp that in places an arrow's stem is printed on one sign and the arrow's point is printed on another larger sign further down the road; from the driver's perspective at a distance, the two signs visually blend together to form one large arrow image.
Truck drivers will need to pay attention to "Steep grade" warnings (or "Down grade, use lower gear"), sometimes posted with the percent grade (e.g., 12%). Steep hills may also feature "Runaway truck escape" or "Emergency stop" areas with corresponding signs. The UK has a sign warning of "Adverse camber " on a curve.
These signs indicate when a multilane highway is being narrowed, when a passing lane is ending, or where the road is widening or a passing lane starting. Another type of sign is used to indicate a central "two-way" left-turn lane in the center of the roadway. Warning signs may also warn of "Highway ends", where the road changes class or type.
In the United States and Canada , there is special signage for lanes that are about to exit, so that drivers who wish to remain on the main road have adequate time to merge. Such lanes are sometimes indicated by special striping ("alligator stripes") and the sign, "Through Traffic Merge Left" (or right). On freeways, the green direction sign for the exit ramp may have the additional notation, "Exit Only", and should have black letters on a yellow background for emphasis.
These signs are used where traffic may be constricted to a narrow bridge, or where the bridge may have a movable span closed to vehicles while boats pass (e.g., drawbridge ). They may also be used for underpasses to indicate low overhead clearance.
These signs are used to indicate tunnels , where lights are usually required, and a general change in the light level. They may also indicate low ceiling clearance . Truck drivers should also watch for prohibited cargo signs (e.g., propane ) upon approach to tunnels.
These signs warn road users of severe road conditions, such as "Icy road", "Bump", "Loose gravel", "Soft shoulder", " Speed hump ", and "Watch for ice".
These signs may be used to indicate the hazards of fallen or falling rocks on the road ahead. They are usually pictographs, but may also include wording, such as "falling rocks". In Italy the words may be " caduta sassi " or " caduta massi "; in France " chute de pierres "; in Mexico " derrumbes ".
These signs are used to warn drivers of people walking in the street. They may also be used to warn of children playing, playgrounds, bicycle area, deaf child, blind pedestrians, and thickly settled zones where pedestrians may enter the road.
In California , United States near the Mexican border , there were warning signs showing a running family . This was to warn motorists to look out for illegal immigrants who try to escape authorities by running through freeway traffic. The symbol was created by California Department of Transportation employee John Hood in the late 1980s. [ 6 ]
These signs mark school zones (in which lower speed limits may be in place), student crossings, crossing guards or signals ahead. In the U.S. and Canada, pentagon-shaped signs are used in place of the usual diamond-shaped signs. The shape of the U.S. school zone resembles a one-room schoolhouse and is the only U.S. sign shaped this way. Some Canadian provinces use an identical sign.
Bicycle crossing signs warn that bicycles will cross at their location.
These signs warn of animals that may stray onto the road. They can be either wild (as with deer or moose) or farm animals (as with cows or ducks). In the United States, a "share the road" plaque is sometimes placed below these warning signs when used in this manner.
These signs are found where road users can encounter slow, large or non-typical vehicles such as forklifts, handcarts and military vehicles. They are more common around quarries, airports, industrial zones, military installations and rural areas.
These signs are often temporary in nature and used to indicate road work (construction), poor roads, or temporary conditions ahead on the road including flagmen, uneven pavement, etc. (Note that some "high water" signs are posted to alert drivers of a flood-prone area and do not actually mean that there is a flooded section of road ahead.) In France, Italy, Spain, Norway etc., warning (and speed limit) signs connected with road work have a yellow background in place of the usual white background on signs. In North America and Ireland, signs connected with road work have an orange background.
These warning signs indicate that traffic lights are ahead, and are often used when it is difficult to see that a traffic light may already be showing red, to warn a driver to prepare to slow down. They may be supplemented with a flashing light or a lighted sign when the light is red or turning red. Some countries also have signs warning of signals for ramp meters , fire stations, and airfields.
These signs warn of road crossings at crossroads, T-intersections, roundabouts, or Y-intersections. They may also indicate "hidden driveway" intersecting the road ahead. (Compare with bridges, overpasses , viaducts ).
As with traffic signals, some "stop" or "yield" signs may require additional warning or reminder, especially in dense areas or where the sign has been added recently.
These signs may be used to warn people of oncoming traffic; shown when a motorway becomes a dual carriageway or a normal road without a central reservation or median.
These signs are used to warn of level crossings ahead. In most countries, a red triangle warning sign is used, with various pictograms for unguarded crossings, crossings with manual gates, and automatic level crossings. In most of Europe, an old-style gate is used for a crossing with gates, and a steam locomotive for a crossing without gates. Germany uses an electric train. Similar pictograms are also used in Ireland, albeit on an amber diamond sign. In the United States the warning of all types of railway crossings is made using a circular yellow sign. The actual crossing is also marked with crossed "railroad crossing" crossbuck signs (stop, look, listen) and possibly lights, bells, and barriers .
A warning sign with the image of an aircraft in the middle of it indicates an airport or airfield, where drivers should be prepared for low-flying aircraft.
Side wind warning signs, indicated in Poland by a windsock on a red triangle or yellow diamond sign, mark locations where a strong side wind may cause the trajectory of a moving vehicle to change drastically, perhaps even pushing it across lanes and causing an accident.
Roadways that only have one entry/exit point - " dead end ", "not a through street" or "no outlet".
Signs indicating the end point of a roadway .
These signs warn of approach to where firefighters may be entering the road with fire engines or other emergency apparatus, where other drivers will have to stop and wait until they pass.
Some warning signs have flashing lights to alert drivers of conditions ahead or remind drivers to slow down. In Britain , they are called warning lights . Flashing lights can be dangerous for people with certain forms of epilepsy and/or sensory processing disorder . | https://en.wikipedia.org/wiki/Traffic_warning_sign |
Traffic waves , which are also called stop waves , ghost jams , traffic snakes or traffic shocks , are traveling disturbances in the distribution of cars on a highway . Traffic waves travel backwards relative to the cars themselves. [ 1 ] Relative to a fixed spot on the road the wave can move with, or against the traffic, or even be stationary (when the wave moves away from the traffic with exactly the same speed as the traffic). Traffic waves are a type of traffic jam . A deeper understanding of traffic waves is a goal of the physical study of traffic flow , in which traffic itself can often be seen using techniques similar to those used in fluid dynamics . It is related to the accordion effect .
It has been said [ 2 ] [ unreliable source? ] that by knowing how traffic waves are created, drivers can sometimes reduce their effects by increasing vehicle headways and reducing the use of brakes , ultimately alleviating traffic congestion for everyone in the area.
However, in other models, [ which? ] increasing headway diminishes the capacity of the travel lanes, increasing congestion. This claim has been disputed by analogy with herding sheep through gates: in that case, via human intervention, solitons are diminished simply by slapping "stuck sheep" and holding back aggressive sheep, and funnelling sheep through gates makes it possible to determine how much intervention is needed to curb bottlenecks. Similar principles can be applied to human traffic streams: if each individual knew their final destination and had a complete route plan, they would traverse a route knowing that any abrupt departure from the itinerary causes delays for those about to traverse the same route.
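The headway claim can be made concrete with a toy car-following simulation. The sketch below uses a Bando-style optimal-velocity model on a ring road; this model and all parameters are illustrative choices from the traffic-flow literature, not taken from the sources cited above. At the chosen density a small perturbation grows into a stop-and-go wave that drifts backwards relative to the cars, while lowering the density (longer headways) keeps the flow stable.

```python
import numpy as np

# Toy optimal-velocity car-following model on a ring road (illustrative
# parameters, not from the cited sources). dv/dt = a * (V(headway) - v).
N, L = 20, 500.0            # cars, ring length in metres (avg headway 25 m)
a = 1.0                     # driver sensitivity, 1/s
dt, steps = 0.1, 5000

def V(headway):
    # Bando-style optimal velocity: desired speed for a given gap.
    return 7.5 * (np.tanh(0.1 * (headway - 25.0)) + np.tanh(2.5))

x = np.linspace(0.0, L, N, endpoint=False)   # positions on the ring
v = np.full(N, V(L / N))                     # start at equilibrium speed
x[0] += 1.0                                  # small perturbation

for _ in range(steps):
    gap = (np.roll(x, -1) - x) % L           # distance to the car ahead
    v += a * (V(gap) - v) * dt
    v = np.clip(v, 0.0, None)                # pragmatic: no reversing
    x = (x + v * dt) % L

# At this density the perturbation grows into a stop-and-go wave: some
# cars crawl near 0 m/s while others run free. With fewer cars on the
# same ring (longer headways) the same code relaxes to uniform flow.
print(f"speed range after {steps} steps: {v.min():.2f} to {v.max():.2f} m/s")
```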
The earliest theoretical model of traffic shock waves was offered by Lighthill and Whitham in 1955. [ 3 ] The following year Paul Richards independently published a similar model. [ 4 ] Both papers were based on fluid dynamics and the model is known as the Lighthill-Whitham-Richards model. [ 5 ] | https://en.wikipedia.org/wiki/Traffic_wave |
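For reference, the model can be stated compactly (a standard textbook form, with notation of our choosing rather than the papers' own). The density of cars \(\rho(x, t)\) is conserved:

\[
\frac{\partial \rho}{\partial t} + \frac{\partial q(\rho)}{\partial x} = 0,
\qquad q(\rho) = \rho\, v(\rho),
\]

where \(q\) is the flow and \(v(\rho)\) the equilibrium speed at density \(\rho\). A jump between an upstream state \(\rho_1\) and a downstream state \(\rho_2\) travels at the shock speed

\[
s = \frac{q(\rho_2) - q(\rho_1)}{\rho_2 - \rho_1},
\]

which is negative whenever the denser state carries less flow: this is the sense in which a jam front travels backwards relative to the traffic.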
The tragedy of the anticommons is a type of coordination breakdown, in which a commons does not emerge, even when general access to resources or infrastructure would be a social good. It is a mirror-image of the older concept of tragedy of the commons , in which numerous rights holders' combined use exceeds the capacity of a resource and depletes or destroys it. [ 2 ] The "tragedy of the anticommons" covers a range of coordination failures, including patent thickets and submarine patents . Overcoming these breakdowns can be difficult, but there are assorted means, including eminent domain , laches , patent pools , or other licensing organizations. [ citation needed ]
The two 'tragedies' may be compared as follows: in the tragedy of the commons, a resource is overused because no rights holder can exclude the others; in the tragedy of the anticommons, a resource is underused because too many rights holders can each exclude the others.
The term originally appeared in Michael Heller 's 1998 article of the same name [ 2 ] and is the thesis of his 2008 book. [ 3 ] The model was formalized by James M. Buchanan and Yong Yoon. [ 4 ] In a 1998 Science article, Heller and Rebecca S. Eisenberg , while not disputing the role of patents in general in motivating invention and disclosure, argue that biomedical research was one of several key areas where competing patent rights could actually prevent useful and affordable products from reaching the marketplace. [ 5 ]
In early aviation, the Wright brothers held patents on certain aspects of aircraft, while Glenn Curtiss held patents on ailerons , which were an advance on the Wrights' system, but antipathy between the patent holders prevented their use. The government was forced to step in and create a patent pool during World War I . [ 6 ]
In his 1998 Harvard Law Review article, [ 2 ] Michael Heller noted that there were a lot of open air kiosks but also a lot of empty stores in many Eastern European cities after the fall of Communism . Upon investigation, he concluded that it was difficult or even impossible for a startup retailer to negotiate successfully for the use of that space because many different agencies and private parties had rights over the use of store space. Even though all the persons with ownership rights were losing money with the empty stores, and stores were in great demand, competing interests got in the way.
Heller says that the rise of the " robber barons " in medieval Germany was the result of the tragedy of the anticommons. [ 3 ] Nobles commonly attempted to collect tolls on stretches of the Rhine passing by or through their fiefs, building towers alongside the river and stretching iron chains to prevent boats from carrying cargo up and down the river without paying a fee. [ 3 ] Repeated attempts by the Holy Roman Empire , including several over the centuries by the Emperor , to regulate toll collection on the Rhine failed. It was not until the establishment of the " Rhine League ", formed by the Emperor with certain nobles and clergy, that the robber barons' control over the Rhine was crushed. River tolls on the Rhine, increasingly imposed by states rather than individual lords, remained a sticking point in relations and commerce in the Rhine basin until the establishment of the Central Commission for Navigation on the Rhine in 1815.
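The stacked-toll logic can be illustrated numerically. The sketch below is a standard "complements" exercise with a hypothetical linear demand curve; it is not drawn from Heller's or Buchanan and Yoon's papers. Two independent toll-setters on the same stretch of river end up charging more in total, and letting less traffic through, than a single owner would.

```python
# Hypothetical linear demand: traffic q = 1 - (t1 + t2) for tolls t1, t2.
# Each owner independently picks the toll maximizing t * (1 - t - other).

def best_response(other_toll):
    # From d/dt [t * (1 - t - other)] = 0  =>  t = (1 - other) / 2.
    return max(0.0, (1.0 - other_toll) / 2.0)

t1 = t2 = 0.0
for _ in range(100):                      # iterate to the Nash equilibrium
    t1, t2 = best_response(t2), best_response(t1)

traffic_two_owners = 1.0 - (t1 + t2)      # converges to 1/3
traffic_one_owner = 1.0 - 0.5             # a single owner charges t = 1/2

print(f"two owners: tolls {t1:.3f} + {t2:.3f}, traffic {traffic_two_owners:.3f}")
print(f"one owner:  toll 0.500, traffic {traffic_one_owner:.3f}")
# Fragmented ownership yields a higher combined toll (2/3 vs 1/2) and
# less use of the river (1/3 vs 1/2): the resource is underused.
```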
Heller and Rebecca S. Eisenberg are academic law professors who believe that biological patents create a "tragedy of the anticommons", "in which people underuse scarce resources because too many owners can block each other." [ 5 ] Others claim that patents have not created this "anticommons" effect on research, based on surveys of scientists. [ 7 ] [ 8 ]
The housing crisis in many western cities can also be characterized as a tragedy of the anticommons. Zoning and planning processes give neighborhood groups, environmental groups and other stakeholders significant power over whether new housing projects move forward and on what terms. This creates a situation where many stakeholders have the right to exclude others from use of a private resource. [ 9 ] In many cases zoning and planning processes make it impossible for private landholders to construct new housing, especially multi-family housing. [ 10 ] This results in high prices, reduced inventory and significant homelessness. [ 11 ]
The Institute of Medicine highlighted an example of the tragedy of the anticommons in the context of sharing clinical trial data in the medical field. Some notable challenges included protecting patient privacy, balancing conflicting interests, lack of standardization in data, ensuring data quality and integrity, and legal and regulatory barriers, all of which contributed to issues with coordination and collaboration down the line. [ 12 ] This exemplifies the tragedy of the anticommons because the potential efficiency of clinical operations, as well as the satisfaction of patients, is compromised by the underutilization of clinical data.
The ongoing legal battles between Apple Inc. and Samsung can be viewed as an example of the tragedy of the anticommons, specifically in intellectual property rights. Both Apple and Samsung own numerous patents related to mobile devices, [ 13 ] and the 10-year long legal dispute has been centred around patent infringement. This situation is a prime example of the tragedy of the anticommons, in the sense that many owners of a resource have the ability to exclude others from using it, leading to the underutilization of that resource. The limitations on Apple's and Samsung's ability to innovate are an overall detriment to them, as well as to the public.
Several patent disputes arose over the COVID-19 vaccine during its inception. Some notable cases include Pfizer vs. Moderna , [ 14 ] Bharat Biotech vs. Serum Institute of India , [ 15 ] and Moderna vs. Arbutus. [ 16 ] These three cases potentially exemplify the tragedy of the anticommons. While the COVID-19 vaccine response was relatively quick, this was primarily due to scientists, doctors, ethics approval boards, manufacturers and regulatory agencies moving quickly in the state of emergency. [ 17 ] A patent dispute, in the sense of the tragedy of the anticommons, may have slowed down the production process and limited the technology available to competing firms. Patent disputes related to the COVID-19 vaccine are complex and often involve multiple parties and legal jurisdictions. The issues involved can be highly technical and may take years to resolve.
Traian V. Chirilă (born 14 February 1948 in Arad , Romania ) [ 1 ] is a Romanian-Australian polymer and organic chemist who is the inventor of AlphaCor , an artificial cornea in current clinical use throughout the world. [ 2 ]
His past and current research has contributed in several areas of biomaterials , polymer science and bioengineering , especially in the understanding of biomaterials and biocompatibility , in the development of polymers , hydrophilic sponges, artificial cornea , artificial vitreous substitutes and in topics such as interaction of laser radiation with polymers, photoresponsive polymers, supramolecular polymers , sustained release of bioactive agents, tissue engineering and the use of polymers in genetic therapies. [ 2 ]
Chirilă was born and educated in Romania, where he obtained a BEng in polymer technology (1972) and a PhD in organic chemistry (1981) from the Polytechnic University of Timișoara . [ citation needed ]
After ten years of research in polymers and organic chemistry, he relocated to Australia. During 1984 he was a research fellow at the Curtin School of Applied Chemistry. In 1986 he joined Lions Eye Institute in Perth as a senior scientist with the task of establishing a department for research and development of polymeric biomaterials for ophthalmology . In 2005, he joined the newly founded Queensland Eye Institute in Brisbane , where he was offered a position of senior scientist to continue his research and to establish a department of ophthalmic bioengineering . He was made a fellow of Royal Australian Chemical Institute (RACI) in 1992. Currently, he holds three adjunct professorships at the School of Physical and Chemical Sciences of Queensland University of Technology , Australian Institute for Bioengineering and Nanotechnology of University of Queensland , and Faculty of Health Sciences of University of Queensland. [ citation needed ]
His research has resulted to date in 175 journal publications and 13 patents. He has contributed over 175 presentations at scientific meetings and he has been invited to present lectures in China, the United States, Japan, Romania, Italy, France, Switzerland, Korea, Germany and The Netherlands. [ citation needed ]
Since 1987, he has received 30 research grants totalling over A$ 12 million. [ citation needed ]
He spent his childhood in a small town in Transylvania, Chișineu-Criș , graduating from the local high school in 1966. He is nicknamed "Tanu". His mother died in 2019 while still living in Timișoara, Romania, the city where he graduated from university. Traian is married to Mika, who is from Japan, and they have a son, Sebastian. [ 1 ]
Trail ethics define appropriate ranges of behavior for hikers on a public trail . It is similar to both environmental ethics and human rights in that it deals with the shared interaction of humans and nature . There are multiple agencies and groups that support and encourage ethical behavior on trails. [ 1 ] [ 2 ]
Trail ethics applies to the use of trails, by pedestrians , dog walkers , hikers , backpackers , mountain bikers , equestrians , hunters , and off-road vehicles .
Sometimes conflicts can develop between different types of users of a trail or pathway, and etiquette has developed to minimize such interference.
Some cities have worked to add pathways for pedestrians and cyclists . [ 5 ] This can reduce the amount of vehicle traffic in busy urban areas, and make visiting downtown areas more pleasant. There can be difficulties when a path is used by people travelling at different speeds, such as pedestrians, joggers, and cyclists, and the appropriate etiquette is not observed. [ 6 ]
In the US, off-road vehicle use on public land has been criticized by some members of the government [ 7 ] and environmental organizations including the Sierra Club and The Wilderness Society . [ 8 ] [ 9 ] They have noted several consequences of illegal ORV use such as pollution , trail damage, erosion , land degradation , possible species extinction , [ 10 ] and habitat destruction [ 11 ] [ 12 ] which can leave hiking trails impassable. [ 13 ] ORV proponents argue that legal use taking place under planned access along with the multiple environment and trail conservation efforts by ORV groups will mitigate these issues. Groups such as the Blue-ribbon Coalition advocate Treadlightly, the responsible use of public lands for off-road activities.
Trail pheromones are semiochemicals secreted from the body of an individual to affect the behavior of another individual receiving it. Trail pheromones often serve as a multi-purpose chemical secretion that leads members of its own species towards a food source, while representing a territorial mark in the form of an allomone to organisms outside of their species. [ 1 ] Specifically, trail pheromones are often incorporated with secretions of more than one exocrine gland to produce a higher degree of specificity. [ 2 ] Considered one of the primary chemical signaling methods on which many social insects depend, trail pheromone deposition can be considered one of the main facets explaining the success of social insect communication today. Many species of ants, including those in the genus Crematogaster use trail pheromones.
In 1962, Harvard professor Edward O. Wilson published one of the first concrete studies laying the groundwork for the notion of trail pheromones. [ 2 ] By claiming that an odor trail is deposited by the sting apparatus of the hymenopteran Solenopsis saevissima , resulting in a pathway from the colony to a food source, this study encouraged further investigation of how this chemical is laid, how it affects communication within and between species, the evolution of the semiochemical , etc.
The pheromone is synthesized in the same region as venom, or other primary hormonal departments within the organism. Often, trail pheromone synthesis occurs in the ventral venom gland, poison gland, Dufour's gland, sternal gland, or hindgut. [ 3 ] When secreted, the pheromone is dropped in a blotch-like fashion from the foraging organism onto the surface leading to the food source. As the organism proceeds to the food source, the trail pheromone creates a narrow and precise pathway between the food source and the nesting location, which another organism of the same species, and often the same nest, follows precisely. Commonly, an organism, when initially laying down the trail may renew the trail a number of times to demonstrate the value of the food source while running in tandem. [ 4 ] Once the trail is laid, other members of the species will recognize the chemical signal and follow the trail, and each individually renew the trail on the way back to the home source. While this pheromone is constantly deposited by its members, the chemicals diffuse up into the environment propagating its message. Once the food source runs out the organisms will simply skip the task of renewing the trail on the way back, thus resulting in the diffusion and weakening of the pheromone. [ 5 ] Studies have shown that with quality of food, distance from nest, and amounts of food, the strength of the trail pheromone may vary. [ 1 ] Often the foraging individual may synthesize the trail pheromone as a mixture of chemicals produced by different glands which allows such specificity. [ 3 ] While members of the same species who discovered the food constantly renew this trail pathway, as the chemical is secreted into the environment as a signal for food in their umwelt , the very same chemical can often be interpreted as a territorial mark for outside species.
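The renew-or-let-fade dynamic described above can be sketched as a simple reinforcement-and-evaporation loop, in the spirit of ant-colony models; the parameters and rates below are purely illustrative, not measurements from the cited studies.

```python
# Trail strength under deposition by returning foragers and steady
# evaporation (illustrative rates, not empirical values).
EVAPORATION = 0.05    # fraction of pheromone lost per time step
DEPOSIT = 1.0         # amount laid by each forager renewing the trail

def step(trail, returning_foragers):
    trail *= (1.0 - EVAPORATION)          # diffusion / decay
    return trail + DEPOSIT * returning_foragers

trail = 0.0
for t in range(200):
    # food available for the first 100 steps, then exhausted:
    trail = step(trail, returning_foragers=5 if t < 100 else 0)

# While food lasts, deposition balances decay and the trail saturates
# near DEPOSIT * 5 / EVAPORATION = 100; once renewal stops, the signal
# decays geometrically and the trail effectively disappears.
print(f"trail strength 100 steps after exhaustion: {trail:.2f}")
```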
Ants typically use trail pheromones to coordinate roles like nest defense and foraging . [ 6 ] Ants can produce a trail of defensive secretions that trigger an alarm response within their nestmates. [ 7 ] In regards to foraging, an ant can communicate the quality of a food source to its colony; the more rewarding a food source is, the higher the concentration of the trail produced. [ 8 ] Additionally, some species, like Lasius niger ants, can "eavesdrop" on the trails produced by another species in order to procure food.
Myrmicine ants produce their trail pheromones through their poison glands. [ 9 ] The major component in the trail pheromones secreted by Pristomyrex pungens is 6- n -pentyl-2-pyrone; several monoterpenes were also found in the secretion, but they provided only marginal effects when combined with the former. [ 10 ] The major components found in the secretions of Aphaenogaster rudis include anabaseine , anabasine , and 2,3'-bipyridyl, though the third contributes less than the other two. [ 9 ] When secreted, this trail pheromone does not recruit ants directly from their nest; instead, worker ants may stumble upon the trail unintentionally and follow it thereafter to the food source.
Bees can use trail pheromones to mark food sources [ 11 ] and the entrance of their hives. [ 12 ] Oftentimes, when finding a source, bees will mark that exact location as well as secreting pheromones along the flight back to their hives. Employment of trail pheromones is extensively studied in honey bees and stingless bees , for both are highly social.
The trail pheromone of the stingless bee Trigona recursa is produced by its labial glands. [ 13 ] One of its key compounds is hexyl decanoate, and when secreted, the pheromone will recruit other bees towards the source. The stingless bee Scaptotrigona pectoralis , like ants, can utilize another colony's food trail. Specifically, they can learn foreign pheromone trails at a source, broadening their options for foraging. [ 14 ] However, in some cases of aggressive bees, like Trigona corvina , encounters between individuals from different colonies at a food source will result in fights and ultimately death amongst both parties. [ 15 ]
Termites use trail pheromones primarily as a means of foraging. They can lay pheromones along a trail as their abdomens touch the ground, specifically through their abdominal sternal glands. [ 16 ] As the other termites follow, they will continue to add to the trail.
The basal termite Mastotermes darwiniensis produces trail pheromones from at least two sternal glands despite every other species producing theirs from only one. [ 17 ] This pheromone, composed solely of a norsesquiterpene alcohol, elicits trail-following from other termites. As aforementioned, these successive termites can add to the trail, depending if it is used for foraging or recruiting workers to complete tasks. In the case of Reticulitermes santonensis , foraging trails have spotted markings throughout the path, whereas recruitment trails are more continuous from the termites dragging their bodies along the path. [ 18 ]
Trail pheromone deposition from an organism is correlated with its environment. In the event where a food source is identified and a trail pheromone is deposited, certain wildlife may flock towards or away from the trail, causing temporary aggregation or dispersal of the population or individual. With relocation of wildlife, surrounding plant life may change as well; for example, pollen attached to the migrating organism is also relocated, and may thus potentially regenerate in different patches.
A train event recorder – also called On-Train Monitoring Recorder ( OTMR ), On-Train Data Recorder ( OTDR ), Event Recorder System ( ERS ), Event Recorder Unit ( ERU ), or Juridical Recording Unit ( JRU ) – is a device that records data about the operation of train controls, the performance of the train in response to those controls, and the operation of associated control systems. It is similar in purpose to the flight data recorder or black box used on aircraft .
Because event recorders are integrated with most car-borne systems, they are an attractive target for enhanced diagnostic and control functions. Some event recorders feature outputs controlling penalty braking or emergency braking systems, as well as speedometers .
Data storage can be provided by magnetic tape , battery-backed RAM and, more recently, non-volatile EEPROM or flash memory , overwritten in a FIFO continuous loop. The data is intended for use in the investigation of accidents and other incidents, but is also used to monitor the performance of traction units, the competence of drivers, and the general state of a train over a period of time.
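The FIFO loop amounts to a fixed-capacity ring buffer: once memory is full, each new sample overwrites the oldest, so the recorder always holds the most recent window of operation. A minimal sketch follows; the capacity and record fields are hypothetical, not taken from any recorder standard or vendor format.

```python
from collections import deque

# Fixed-capacity ring buffer: appending to a full deque silently drops
# the oldest record, mimicking FIFO overwrite of recorder memory.
CAPACITY = 48 * 3600                  # e.g. one sample per second, 48 h
recorder = deque(maxlen=CAPACITY)

def record_sample(t, speed_kmh, throttle_notch, brake_pipe_kpa):
    # Field names are illustrative, not from any standard.
    recorder.append((t, speed_kmh, throttle_notch, brake_pipe_kpa))

for t in range(200_000):              # more samples than the buffer holds
    record_sample(t, speed_kmh=80.0, throttle_notch=4, brake_pipe_kpa=500.0)

oldest = recorder[0][0]
print(f"{len(recorder)} samples retained; oldest timestamp = {oldest}")
# Only the last CAPACITY samples survive: timestamps 27200 onward.
```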
A suggestion in The Times of 10 October 1853, commenting on a train collision near Portarlington station , on the Great Southern and Western Railway , on 5 October that year, called for a paper-roll recorder, keeping a log of wheel revolutions against time, to be carried in a locked box on trains, the record to be removed and stored by station masters at the destination station. [ 1 ] In 1864, a similar proposal came from Charles Babbage , inspired by his 1840 experiments for the Great Western Railway using self-inking pens on paper rolls, which were part of the equipment carried on dynamometer cars . [ 2 ] [ 3 ] No action seems to have been taken in either case. The earliest event recorders were the mechanical "TEL" [ 4 ] speed recorders of 1891, which recorded both time and speed. [ 5 ] The TEL's manufacturer, Hasler Rail of Switzerland, remains a leading producer of train event recorders. [ 6 ]
France developed the Flaman Speed Indicator and Recorder . In Germany, the Indusi train protection system included recording equipment using a ticker tape on paper. For I60R a generalized recorder system was installed (Datenspeicherkassette [DSK] / data storage cassette) that allowed for the entry of the train number, driver information and train weight, along with the driving events. The standardized DSK black box allows for approximately 30,000 km of general recording data and 90 km of detailed recording data. Later models of the DSK are electronic especially since the introduction of the computerized PZ80/PZB90 train protection generations. [ citation needed ]
Modern train event recorders follow international or national standards, such as IEEE Std. 1482.1-1999, FRA 49 CFR Part 229, and IEC 62625-1, which specify the digital and analogue data to be acquired, recorded and transmitted for further analysis. [ citation needed ] The need for event recorders to survive any accident led companies such as Grinsty Rail (UK), Faiveley (France), Hasler Rail (Switzerland), Bach-Simpson (Canada), Saira Electronics (Italy) (previously FAR Systems), and MIOS Elettronica (Italy) to develop crash-protected memory modules as a part of their event recorders. [ citation needed ] Those new-generation event recorders are in growing demand both for rapid transit systems and mainline trains. [ 7 ] [ failed verification ]
Canadian regulations provide in the "Locomotives Design Requirements (Part II)"
U.S. regulations define event recorders as follows: (CFR 49 Ch II 229.5):
The Federal Railroad Administration's (FRA) "Final Rule 49 CFR Part 229" (revised June 30, 2005) [ 10 ] [ 11 ] requires that event recorders be fitted to the leading locomotives of all US, Canadian and Mexican trains operating above 30 miles per hour (48 km/h) on the US rail network, including all freight, passenger and commuter rail locomotives, but does not apply to transit systems running on their own dedicated tracks.
The new ruling applies to locomotives either ordered after Oct 1, 2006 or placed in service after Oct 1, 2009 and includes:
All trains operating on Network Rail controlled infrastructure are required to be fitted with an event recorder complying with RIS-2472-RST-Iss-1, [ 12 ] the standard also cross references with BS EN 62625-1:2013. [ 13 ] Ireland has also adopted this regulation. RSSB (Rail Safety and Standards Board) is responsible for event recorder standards in the UK.
Crash protection requirements:
The UK approach is similar to US requirements, but the list of required signals is more comprehensive. This reflects, in part, the prevalence of passenger trains and the ever-present possibility of incidents involving access doors.
Signals to be recorded include:
Speed recording equipment has been used by Swiss Federal Railways for many years. [ citation needed ] | https://en.wikipedia.org/wiki/Train_event_recorder |
In the mathematical area of topology , a train track is a family of curves embedded on a surface , meeting the following conditions:
The main application of train tracks in mathematics is to study laminations of surfaces, that is, partitions of closed subsets of surfaces into unions of smooth curves. Train tracks have also been used in graph drawing .
A lamination of a surface is a partition of a closed subset of the surface into smooth curves. The study of train tracks was originally motivated by the following observation: If a generic lamination on a surface is looked at from a distance by a myopic person, it will look like a train track.
A switch in a train track models a point where two families of parallel curves in the lamination merge to become a single family, as shown in the illustration. Although the switch consists of three curves ending in and intersecting at a single point, the curves in the lamination do not have endpoints and do not intersect each other.
For this application of train tracks to laminations, it is often important to constrain the shapes that can be formed by connected components of the surface between the curves of the track. For instance, Penner and Harer require that each such component, when glued to a copy of itself along its boundary to form a smooth surface with cusps, have negative cusped Euler characteristic .
A train track with weights , or weighted train track or measured train track , consists of a train track with a non-negative real number , called a weight , assigned to each branch. The weights can be used to model which of the curves in a parallel family of curves from a lamination are split to which sides of the switch. Weights must satisfy the following switch condition : The weight assigned to the ingoing branch at a switch should equal the sum of the weights assigned to the branches outgoing from that switch.
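To make the switch condition concrete, here is a minimal sketch (an illustration added here, not drawn from the mathematical literature) that checks the condition for a weighted train track; the dictionary-based representation, the branch names, and the weights are assumptions made for the example.

```python
# Minimal sketch: verify the switch condition on a weighted train track.
# Representation (an assumption for illustration): each branch has a name
# and a non-negative weight; each switch pairs one ingoing branch with the
# branches outgoing from it.

weights = {"a": 5.0, "b": 2.0, "c": 3.0}      # hypothetical branch weights

switches = [
    {"ingoing": "a", "outgoing": ["b", "c"]},  # branch a splits into b and c
]

def satisfies_switch_condition(weights, switches, tol=1e-9):
    """Return True if, at every switch, the weight of the ingoing branch
    equals the sum of the weights of the outgoing branches."""
    return all(
        abs(weights[s["ingoing"]] - sum(weights[o] for o in s["outgoing"])) <= tol
        for s in switches
    )

print(satisfies_switch_condition(weights, switches))  # True, since 5 = 2 + 3
```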
Weights are closely related to the notion of carrying . A train track is said to carry a lamination if there is a train track neighborhood such that every leaf of the lamination is contained in the neighborhood and intersects each vertical fiber transversely. If each vertical fiber has nontrivial intersection with some leaf, then the lamination is fully carried by the train track. | https://en.wikipedia.org/wiki/Train_track_(mathematics) |
Trained immunity is a long-term functional modification of cells in the innate immune system which leads to an altered response to a second unrelated challenge. [ 1 ] For example, the BCG vaccine leads to a reduction in childhood mortality caused by unrelated infectious agents. [ 2 ] The term "innate immune memory" is sometimes used as a synonym for the term trained immunity, [ 3 ] [ 4 ] which was first coined by Mihai Netea in 2011. [ 5 ] The term "trained immunity" is relatively new – immunological memory has previously been considered only as a part of adaptive immunity – and refers only to changes in the innate immune memory of vertebrates . [ 6 ] [ 7 ] This type of immunity is thought to be largely mediated by epigenetic modifications . The changes to the innate immune response may last up to several months, in contrast to classical immunological memory (which may last up to a lifetime), and are usually unspecific because there is no production of specific antibodies/receptors. [ 8 ] Trained immunity has been suggested to possess a transgenerational effect; for example, the children of mothers who had received BCG vaccination had a lower mortality rate than the children of unvaccinated mothers. [ 9 ] The BRACE trial is currently assessing if BCG vaccination can reduce the impact of COVID-19 in healthcare workers. [ 10 ] Other vaccines, such as the DTPw vaccine, are also thought to induce immune training. [ 11 ]
Trained immunity is thought to be largely mediated by functional reprogramming of myeloid cells . [ 1 ] Some of the first adaptive changes described in macrophages were associated with lipopolysaccharide tolerance, which resulted in the silencing of inflammatory genes. [ 12 ] Similarly, Candida albicans and fungal β-glucan trigger changes in monocyte histone methylation; this functional reprogramming eventually provides protection against reinfection. [ 13 ] A non-specific manner of protection after training with different microbial ligands has also been shown: for example, treatment with fungal β-glucan induced protection against Staphylococcus aureus infection, [ 14 ] and CpG oligodeoxynucleotide training protected against infection with Escherichia coli . [ 15 ]
Evidence of trained immunity is found mainly in monocytes / macrophages and NK cells and, to a lesser extent, in γδ T cells and innate lymphoid cells . [ 16 ]
Monocytes/macrophages can undergo epigenetic modifications after a ligation of their pattern recognition receptors (PRRs). This ligation prepares these cells for a second encounter with the training pathogen. [ 16 ] The secondary response may be heightened not only against the training pathogen, but also against different pathogens whose antigens are recognized by the same PRRs. This effect has been observed when stimulating cells with β-glucan or Candida albicans , or by vaccination against tuberculosis with a vaccine containing BCG . [ 17 ] [ 7 ] Monocytes are very short-lived cells; however, the heightened secondary response can be spotted even several months after the primary stimulation. This shows that the immune memory is created at the level of progenitor cells , but so far it is not known how this memory is achieved. [ 7 ] Though the epigenetic modification is beneficial to the innate immune system response, it can impair macrophage resolution pathways, promoting unfavorable tissue remodeling at the inflammatory site. [ 18 ] Additionally, dendritic cells isolated from mice exposed to Cryptococcus neoformans manifested an immunological memory response, associated with strong interferon-γ production after C. neoformans reinfection. [ 19 ]
Trained immunity can shift macrophages toward a pro-inflammatory glycolytic M1 phenotype via an Akt/mTOR/HIF-1α-dependent pathway, away from the M2 phenotype in which macrophages maintain the Krebs cycle and oxidative phosphorylation. [ 20 ] [ 21 ]
The trained immunity involving NK cells looks more like classic immunological memory, because at least partially specific clones of NK cells develop. These cells have receptors on their surface against the antigens with which they came in contact during the first stimulation. [ 8 ] For example, after an encounter with cytomegalovirus , certain clones of NK cells (those that have a Ly49H receptor on their surface) expand and then show signs of immunological memory. [ 22 ] In mice, reinfection led memory NK cells to enhanced cytokine production through the Ly49H receptor, with a more specific response to the pathogen. [ 23 ] In human NK cells, this is mediated by NKG2C, a receptor with a function similar to mouse Ly49H. [ 24 ] NK cells are known for their memory specific to different pathogens. The first descriptions of an NK memory-like phenotype were made in mouse models of murine cytomegalovirus infection. [ 25 ] Other viral infections such as Herpes Simplex Virus [ 26 ] or Influenza Virus [ 27 ] also induce memory or memory-like responses. A memory or memory-like phenotype can also be caused by bacterial pathogens, for example Mycobacterium tuberculosis, [ 28 ] or eukaryotic pathogens, for example Toxoplasma gondii. [ 29 ]
Another resident cell population, group 1 innate lymphoid cells (ILC1s), was discovered in the liver; these cells expand after infection with murine cytomegalovirus and manifest transcriptional, phenotypic and epigenetic changes. For the induction of ILC1s, pro-inflammatory cytokines and antigen specificity are critical. [ 30 ] Lung-specific ILC2s showed a memory-like phenotype after allergen exposure. [ 31 ]
Trained immunity relies on epigenetic reprogramming, which leads to a stronger and more rapid response to recurrent triggers. There are multiple potential epigenetic mechanisms, such as changes in chromatin accessibility, DNA methylation or histone modifications. Long non-coding RNAs (lncRNAs) are also critical to epigenetic reprogramming, for example through their role in the assignment of H3K4me3 markers to the genome, which modulates gene expression. [ 32 ] Additionally, transcription factors, including STAT4 [ 33 ] and RUNX family transcription factors, [ 34 ] play a role in the introduction of histone modifications. Cell metabolism is a crucial mediator of trained immunity; for example, monocytes trained with β-glucan show increased aerobic glycolysis. Additionally, priming with β-glucan resulted in epigenetic upregulation of genes involved in glycolysis 1 week later. [ 35 ] Subsequently, cross-talk between the glycolysis, glutaminolysis and cholesterol synthesis pathways was demonstrated to be essential for trained immunity in β-glucan-triggered monocytes. In addition, accumulation of fumarate, caused by glutamine addition into the tricarboxylic acid cycle, led to epigenetic reprogramming similar to that of β-glucan treatment. [ 36 ] | https://en.wikipedia.org/wiki/Trained_immunity |
Trairāśika is the Sanskrit term used by Indian astronomers and mathematicians of the pre-modern era to denote what is known as the " rule of three " in elementary mathematics and algebra. In the contemporary mathematical literature, the term "rule of three" refers to the principle of cross-multiplication which states that if a b = c d {\displaystyle {\tfrac {a}{b}}={\tfrac {c}{d}}} then a d = b c {\displaystyle ad=bc} or a = b c d {\displaystyle a={\tfrac {bc}{d}}} . The antiquity of the term trairāśika is attested by its presence in the Bakhshali manuscript , a document believed to have been composed in the early centuries of the Common Era. [ 1 ]
Basically trairāśika is a rule which helps to solve the following problem: if a quantity p {\displaystyle p} produces h {\displaystyle h} , what will i {\displaystyle i} produce?
Here p {\displaystyle p} is referred to as pramāṇa ("argument"), h {\displaystyle h} as phala ("fruit") and i {\displaystyle i} as ichcā ("requisition"). The pramāṇa and icchā must be of the same denomination, that is, of the same kind or type like weights, money, time, or numbers of the same objects. Phala can be of a different denomination. It is also assumed that phala increases in proportion to pramāṇa . The unknown quantity is called icchā-phala , that is, the phala corresponding to the icchā . Āryabhaṭa gives the following solution to the problem: [ 1 ]
In modern mathematical notations, icchā-phala = phala × icchā pramāṇa . {\displaystyle {\text{icchā-phala }}={\tfrac {{\text{phala}}\times {\text{icchā}}}{\text{pramāṇa}}}.}
The four quantities can be presented in a row like this:
Then the rule to get icchā-phala can be stated thus: "Multiply the middle two and divide by the first."
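As an added illustration of this rule, the following sketch computes the icchā-phala directly from the formula; the function name and the worked values are our own.

```python
# Rule of three (trairāśika): "multiply the middle two and divide by the first".
# Hypothetical worked example: if 5 units (pramana) cost 20 coins (phala),
# what do 8 units (iccha) cost?

def rule_of_three(pramana, phala, iccha):
    """icchā-phala = phala * icchā / pramāṇa."""
    return phala * iccha / pramana

print(rule_of_three(5, 20, 8))  # 32.0 coins
```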
1. This example is taken from Bījagaṇita , a treatise on algebra by the Indian mathematician Bhāskara II (c. 1114–1185). [ 2 ]
2. This example is taken from Yuktibhāṣā , a work on mathematics and astronomy, composed by Jyesthadeva of the Kerala school of astronomy and mathematics around 1530. [ 3 ]
The four quantities associated with trairāśika are presented in a row as follows:
In trairāśika it was assumed that the phala increases with pramāṇa . If it is assumed that phala decreases with increases in pramāṇa , the rule for finding icchā-phala is called vyasta-trairāśika (or, viloma-trairāśika ) or "inverse rule of three". [ 4 ] In vyasta-trairāśika the rule for finding the icchā-phala may be stated as follows assuming that the relevant quantities are written in a row as indicated above.
In modern mathematical notations we have, icchā-phala = phala × pramāṇa icchā . {\displaystyle {\text{icchā-phala }}={\tfrac {{\text{phala}}\times {\text{pramāṇa}}}{\text{icchā}}}.}
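A corresponding sketch for the inverse rule, again with invented example values:

```python
# Inverse rule of three (vyasta-trairāśika): phala decreases as pramana grows.
# Hypothetical example: 6 workers finish a job in 10 days (phala); how long
# do 15 workers (iccha) take?

def inverse_rule_of_three(pramana, phala, iccha):
    """icchā-phala = phala * pramāṇa / icchā."""
    return phala * pramana / iccha

print(inverse_rule_of_three(6, 10, 15))  # 4.0 days
```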
This example is from Bījagaṇita : [ 2 ]
In trairāśika there is only one pramāṇa and the corresponding phala . We are required to find the phala corresponding to a given value of ichcā for the pramāṇa . The relevant quantities may also be represented in the following form:
Indian mathematicians have generalized this problem to the case where there are more than one pramāṇa . Let there be n pramāṇa -s pramāṇa -1, pramāṇa -2, . . ., pramāṇa - n and the corresponding phala . Let the iccha -s corresponding to the pramāṇa -s be iccha -1, iccha -2, . . ., iccha - n . The problem is to find the phala corresponding to these iccha -s. This may be represented in the following tabular form:
This is the problem of compound proportion. The ichcā-phala is given by ichcā-phala = ( ichcā-1 × ichcā-2 × ⋯ × ichcā-n ) × phala pramāṇa-1 × pramāṇa-2 × ⋯ × pramāṇa-n . {\displaystyle {\text{ ichcā-phala }}={\tfrac {({\text{ ichcā-1 }}\times {\text{ ichcā-2 }}\times \cdots \times {\text{ ichcā-n }})\times {\text{ phala }}}{{\text{ pramāṇa-1 }}\times {\text{ pramāṇa-2 }}\times \cdots \times {\text{ pramāṇa-n }}}}.}
Since there are 2 n + 1 {\displaystyle 2n+1} quantities, the method for solving the problem may be called the "rule of 2 n + 1 {\displaystyle 2n+1} ". In his Bījagaṇita , Bhāskara II has discussed some special cases of this general principle, like the "rule of five" ( pañcarāśika ), "rule of seven" ( saptarāśika ), "rule of nine" ( navarāśika ) and "rule of eleven" ( ekādaśarāśika ).
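The general rule can likewise be expressed in a few lines; the following sketch assumes only the formula above, and the rule-of-five example values are hypothetical.

```python
# Compound proportion ("rule of 2n+1"): several pramana/iccha pairs, one phala.
from math import prod

def compound_rule(pramanas, phala, icchas):
    """icchā-phala = (iccha_1 * ... * iccha_n) * phala
                      / (pramana_1 * ... * pramana_n)."""
    return prod(icchas) * phala / prod(pramanas)

# Hypothetical rule-of-five example: 3 workers earn 60 coins in 4 days;
# what do 5 workers earn in 6 days?
print(compound_rule([3, 4], 60, [5, 6]))  # 150.0 coins
```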
This example for the rule of nine is taken from Bījagaṇita : [ 2 ]
All Indian astronomers and mathematicians have placed the trairāśika principle on a high pedestal. For example, Bhaskara II in his Līlāvatī even compares the trairāśika to God himself! | https://en.wikipedia.org/wiki/Trairāśika |
A trajectoid is a geometric shape designed to trace a predetermined periodic trajectory when rolling under gravity. [ 1 ]
Physicists and mathematicians from the Institute for Basic Science in South Korea and the University of Geneva , in collaboration with colleagues from other institutions, developed an algorithm that links the deformation of objects to their trajectory on an inclined plane. This theoretical framework was then realized through 3D printing technology, printing two halves around a hollow spherical center that contains a metal sphere for mass. [ 2 ] [ unreliable source? ]
Unlike conventional rolling bodies such as spheres or cylinders , which follow linear or sinusoidal paths, trajectoids are mathematically engineered to follow complex, custom trajectories. The concept extends earlier studies of rolling motion in objects like sphericons and oloids , introducing greater diversity in possible rolling paths.
| https://en.wikipedia.org/wiki/Trajectoid |
A trajectory or flight path is the path that an object with mass in motion follows through space as a function of time. In classical mechanics , a trajectory is defined by Hamiltonian mechanics via canonical coordinates ; hence, a complete trajectory is defined by position and momentum , simultaneously.
The mass might be a projectile or a satellite . [ 1 ] For example, it can be an orbit — the path of a planet , asteroid , or comet as it travels around a central mass .
In control theory , a trajectory is a time-ordered set of states of a dynamical system (see e.g. Poincaré map ). In discrete mathematics , a trajectory is a sequence ( f k ( x ) ) k ∈ N {\displaystyle (f^{k}(x))_{k\in \mathbb {N} }} of values calculated by the iterated application of a mapping f {\displaystyle f} to an element x {\displaystyle x} of its source.
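For illustration, here is a short sketch of such a discrete trajectory, using the logistic map as an arbitrary example of the mapping f (our choice, not from the source):

```python
# Trajectory of an iterated map: the sequence x, f(x), f(f(x)), ...

def trajectory(f, x, n):
    """Return the first n points of the trajectory (f^k(x)) for k = 0..n-1."""
    points = []
    for _ in range(n):
        points.append(x)
        x = f(x)
    return points

logistic = lambda x: 3.2 * x * (1.0 - x)   # example mapping f
print(trajectory(logistic, 0.3, 5))
```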
A familiar example of a trajectory is the path of a projectile, such as a thrown ball or rock. In a significantly simplified model, the object moves only under the influence of a uniform gravitational force field . This can be a good approximation for a rock that is thrown for short distances, for example at the surface of the Moon . In this simple approximation, the trajectory takes the shape of a parabola . Generally when determining trajectories, it may be necessary to account for nonuniform gravitational forces and air resistance ( drag and aerodynamics ). This is the focus of the discipline of ballistics .
One of the remarkable achievements of Newtonian mechanics was the derivation of Kepler's laws of planetary motion . In the gravitational field of a point mass or a spherically-symmetrical extended mass (such as the Sun ), the trajectory of a moving object is a conic section , usually an ellipse or a hyperbola . [ a ] This agrees with the observed orbits of planets , comets , and artificial spacecraft to a reasonably good approximation, although if a comet passes close to the Sun, then it is also influenced by other forces such as the solar wind and radiation pressure , which modify the orbit and cause the comet to eject material into space.
Newton's theory later developed into the branch of theoretical physics known as classical mechanics . It employs the mathematics of differential calculus (which was also initiated by Newton in his youth). Over the centuries, countless scientists have contributed to the development of these two disciplines. Classical mechanics became a most prominent demonstration of the power of rational thought, i.e. reason , in science as well as technology. It helps to understand and predict an enormous range of phenomena ; trajectories are but one example.
Consider a particle of mass m {\displaystyle m} , moving in a potential field V {\displaystyle V} . In physical terms, mass represents inertia , and the field V {\displaystyle V} represents external forces of a particular kind known as "conservative". Given V {\displaystyle V} at every relevant position, there is a way to infer the associated force that would act at that position, say from gravity. Not all forces can be expressed in this way, however.
The motion of the particle is described by the second-order differential equation m d 2 x → ( t ) d t 2 = − ∇ V ( x → ( t ) ) {\displaystyle m{\frac {\mathrm {d} ^{2}{\vec {x}}(t)}{\mathrm {d} t^{2}}}=-\nabla V({\vec {x}}(t))} with position x → ( t ) {\displaystyle {\vec {x}}(t)} as a function of time t {\displaystyle t} .
On the right-hand side, the force is given in terms of ∇ V {\displaystyle \nabla V} , the gradient of the potential, taken at positions along the trajectory. This is the mathematical form of Newton's second law of motion : force equals mass times acceleration, for such situations.
The ideal case of motion of a projectile in a uniform gravitational field in the absence of other forces (such as air drag) was first investigated by Galileo Galilei . To neglect the action of the atmosphere in shaping a trajectory would have been considered a futile hypothesis by practical-minded investigators all through the Middle Ages in Europe . Nevertheless, by anticipating the existence of the vacuum , later to be demonstrated on Earth by his collaborator Evangelista Torricelli [ citation needed ] , Galileo was able to initiate the future science of mechanics . [ citation needed ] In a near vacuum, as it turns out for instance on the Moon , his simplified parabolic trajectory proves essentially correct.
In the analysis that follows, we derive the equation of motion of a projectile as measured from an inertial frame at rest with respect to the ground. Associated with the frame is a right-hand coordinate system with its origin at the point of launch of the projectile. The x {\displaystyle x} -axis is tangent to the ground, and the y {\displaystyle y} axis is perpendicular to it ( parallel to the gravitational field lines ). Let g {\displaystyle g} be the acceleration of gravity . Relative to the flat terrain, let the initial horizontal speed be v h = v cos ( θ ) {\displaystyle v_{h}=v\cos(\theta )} and the initial vertical speed be v v = v sin ( θ ) {\displaystyle v_{v}=v\sin(\theta )} . It will also be shown that the range is 2 v h v v / g {\displaystyle 2v_{h}v_{v}/g} , and the maximum altitude is v v 2 / 2 g {\displaystyle v_{v}^{2}/2g} . The maximum range for a given initial speed v {\displaystyle v} is obtained when v h = v v {\displaystyle v_{h}=v_{v}} , i.e. the initial angle is 45 ∘ {\displaystyle ^{\circ }} . This range is v 2 / g {\displaystyle v^{2}/g} , and the maximum altitude at the maximum range is v 2 / ( 4 g ) {\displaystyle v^{2}/(4g)} .
Assume the motion of the projectile is being measured from a free fall frame which happens to be at ( x , y ) = (0,0) at t = 0. The equation of motion of the projectile in this frame (by the equivalence principle ) would be y = x tan ( θ ) {\displaystyle y=x\tan(\theta )} . The co-ordinates of this free-fall frame, with respect to our inertial frame would be y = − g t 2 / 2 {\displaystyle y=-gt^{2}/2} . That is, y = − g ( x / v h ) 2 / 2 {\displaystyle y=-g(x/v_{h})^{2}/2} .
Now translating back to the inertial frame the co-ordinates of the projectile becomes y = x tan ( θ ) − g ( x / v h ) 2 / 2 {\displaystyle y=x\tan(\theta )-g(x/v_{h})^{2}/2} That is: y = x tan ( θ ) − g x 2 2 v 0 2 cos 2 ( θ ) {\displaystyle y=x\tan(\theta )-{\frac {gx^{2}}{2v_{0}^{2}\cos ^{2}(\theta )}}}
(where v 0 is the initial velocity, θ {\displaystyle \theta } is the angle of elevation, and g is the acceleration due to gravity).
The range , R , is the greatest distance the object travels along the x-axis in the I sector. The initial velocity , v i , is the speed at which said object is launched from the point of origin. The initial angle , θ i , is the angle at which said object is released. g is the gravitational acceleration acting on the object within a null-medium.
The height , h , is the greatest parabolic height said object reaches within its trajectory.
In terms of angle of elevation θ {\displaystyle \theta } and initial speed v {\displaystyle v} : v h = v cos θ , v v = v sin θ {\displaystyle v_{h}=v\cos \theta ,\quad v_{v}=v\sin \theta }
giving the range as R = 2 v h v v / g = v 2 sin ( 2 θ ) / g . {\displaystyle R=2v_{h}v_{v}/g=v^{2}\sin(2\theta )/g.}
This equation can be rearranged to find the angle for a required range d h {\displaystyle d_{h}} : θ = 1 2 sin − 1 ⁡ ( g d h v 2 ) {\displaystyle \theta ={\frac {1}{2}}\sin ^{-1}\left({\frac {gd_{h}}{v^{2}}}\right)}
Note that the sine function is such that there are two solutions for θ {\displaystyle \theta } for a given range d h {\displaystyle d_{h}} . The angle θ {\displaystyle \theta } giving the maximum range can be found by considering the derivative of R {\displaystyle R} with respect to θ {\displaystyle \theta } and setting it to zero: d R d θ = 2 v 2 g cos ( 2 θ ) = 0 {\displaystyle {\frac {\mathrm {d} R}{\mathrm {d} \theta }}={\frac {2v^{2}}{g}}\cos(2\theta )=0}
which has a nontrivial solution at 2 θ = π / 2 = 90 ∘ {\displaystyle 2\theta =\pi /2=90^{\circ }} , or θ = 45 ∘ {\displaystyle \theta =45^{\circ }} . The maximum range is then R max = v 2 / g {\displaystyle R_{\max }=v^{2}/g\,} . At this angle sin ( π / 2 ) = 1 {\displaystyle \sin(\pi /2)=1} , so the maximum height obtained is v 2 4 g {\displaystyle {v^{2} \over 4g}} .
To find the angle giving the maximum height for a given speed calculate the derivative of the maximum height H = v 2 sin 2 ( θ ) / ( 2 g ) {\displaystyle H=v^{2}\sin ^{2}(\theta )/(2g)} with respect to θ {\displaystyle \theta } , that is d H d θ = v 2 2 cos ( θ ) sin ( θ ) / ( 2 g ) {\displaystyle {\mathrm {d} H \over \mathrm {d} \theta }=v^{2}2\cos(\theta )\sin(\theta )/(2g)} which is zero when θ = π / 2 = 90 ∘ {\displaystyle \theta =\pi /2=90^{\circ }} . So the maximum height H m a x = v 2 2 g {\displaystyle H_{\mathrm {max} }={v^{2} \over 2g}} is obtained when the projectile is fired straight up.
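The formulas of this section can be collected into a short computational sketch; the numeric values are illustrative only, and the helper names are our own.

```python
# Projectile formulas from this section (vacuum, uniform gravity).
import math

g = 9.81  # m/s^2

def range_(v, theta):
    """R = v^2 sin(2*theta) / g."""
    return v**2 * math.sin(2 * theta) / g

def max_height(v, theta):
    """H = v^2 sin^2(theta) / (2g)."""
    return v**2 * math.sin(theta) ** 2 / (2 * g)

def angle_for_range(v, d):
    """theta = (1/2) asin(g*d / v^2); the complementary angle gives
    the second (high-arc) solution."""
    return 0.5 * math.asin(g * d / v**2)

v = 30.0                                       # launch speed, m/s
print(range_(v, math.radians(45)))             # maximum range v^2/g ≈ 91.74 m
print(max_height(v, math.radians(90)))         # straight up: v^2/(2g) ≈ 45.87 m
print(math.degrees(angle_for_range(v, 50.0)))  # low-arc solution ≈ 16.5°
```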
If instead of a uniform downwards gravitational force we consider two bodies orbiting with the mutual gravitation between them, we obtain Kepler's laws of planetary motion . The derivation of these was one of the major works of Isaac Newton and provided much of the motivation for the development of differential calculus .
If a projectile, such as a baseball or cricket ball, travels in a parabolic path, with negligible air resistance, and if a player is positioned so as to catch it as it descends, he sees its angle of elevation increasing continuously throughout its flight. The tangent of the angle of elevation is proportional to the time since the ball was sent into the air, usually by being struck with a bat. Even when the ball is really descending, near the end of its flight, its angle of elevation seen by the player continues to increase. The player therefore sees it as if it were ascending vertically at constant speed. Finding the place from which the ball appears to rise steadily helps the player to position himself correctly to make the catch. If he is too close to the batsman who has hit the ball, it will appear to rise at an accelerating rate. If he is too far from the batsman, it will appear to slow rapidly, and then to descend. | https://en.wikipedia.org/wiki/Trajectory |
In fluid mechanics , meteorology (weather) and oceanography , a trajectory traces the motion of a single point, often called a parcel , in the flow.
Trajectories are useful for tracking atmospheric contaminants, such as smoke plumes, and as constituents to Lagrangian simulations, such as contour advection or semi-Lagrangian schemes .
Suppose we have a time-varying flow field, v → ( x → , t ) {\displaystyle {\vec {v}}({\vec {x}},~t)} . The motion of a fluid parcel, or trajectory, is given by the following system of ordinary differential equations : d x → d t = v → ( x → ( t ) , t ) {\displaystyle {\frac {\mathrm {d} {\vec {x}}}{\mathrm {d} t}}={\vec {v}}({\vec {x}}(t),~t)}
While the equation looks simple, there are at least three concerns when attempting to solve it numerically . The first is the integration scheme . This is typically a Runge-Kutta , [ 1 ] although others can be useful as well, such as a leapfrog . The second is the method of determining the velocity vector, v → {\displaystyle {\vec {v}}} at a given position, x → {\displaystyle {\vec {x}}} , and time, t . Normally, it is not known at all positions and times, therefore some method of interpolation is required. If the velocities are gridded in space and time, then bilinear , trilinear or higher-dimensional linear interpolation is appropriate. Bicubic , tricubic , etc., interpolation is used as well, but is probably not worth the extra computational overhead .
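As an illustration of the first two concerns, the following sketch pairs a classical fourth-order Runge-Kutta step with a bilinear interpolator; the analytic solid-body-rotation field stands in for real gridded velocities and is purely an assumption of the example.

```python
import numpy as np

def bilinear(field, x, y, dx, dy):
    """Bilinear interpolation of a gridded 2-D field at an interior
    point (x, y); grid spacing dx, dy, grid origin at (0, 0)."""
    i, j = int(x // dx), int(y // dy)
    fx, fy = x / dx - i, y / dy - j
    return ((1 - fx) * (1 - fy) * field[i, j] + fx * (1 - fy) * field[i + 1, j]
            + (1 - fx) * fy * field[i, j + 1] + fx * fy * field[i + 1, j + 1])

grid = np.array([[0.0, 1.0], [2.0, 3.0]])    # toy 2x2 gridded field
print(bilinear(grid, 0.5, 0.5, 1.0, 1.0))    # 1.5, the mean of the corners

def velocity(pos, t):
    """Stand-in for a measured or modelled velocity field; here a steady
    solid-body rotation, chosen only for illustration."""
    x, y = pos
    return np.array([-y, x])

def rk4_step(pos, t, h):
    """One classical fourth-order Runge-Kutta step of dx/dt = v(x, t)."""
    k1 = velocity(pos, t)
    k2 = velocity(pos + 0.5 * h * k1, t + 0.5 * h)
    k3 = velocity(pos + 0.5 * h * k2, t + 0.5 * h)
    k4 = velocity(pos + h * k3, t + h)
    return pos + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

pos, t, h = np.array([1.0, 0.0]), 0.0, 0.01
for _ in range(100):            # trace the parcel for one time unit
    pos = rk4_step(pos, t, h)
    t += h
print(pos)                      # ≈ (cos 1, sin 1)
```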
Velocity fields can be determined by measurement, e.g. from weather balloons , from numerical models or especially from a combination of the two, e.g. assimilation models .
The final concern is metric corrections. These are necessary for geophysical fluid flows on a spherical Earth. The differential equations for tracing a two-dimensional, atmospheric trajectory in longitude-latitude coordinates are as follows: d θ d t = u r cos ⁡ ϕ , d ϕ d t = v r {\displaystyle {\frac {\mathrm {d} \theta }{\mathrm {d} t}}={\frac {u}{r\cos \phi }},\qquad {\frac {\mathrm {d} \phi }{\mathrm {d} t}}={\frac {v}{r}}}
where, θ {\displaystyle \theta } and ϕ {\displaystyle \phi } are, respectively, the longitude and latitude in radians , r is the radius of the Earth , u is the zonal wind and v is the meridional wind.
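A minimal sketch of these right-hand sides, assuming a spherical Earth of mean radius and illustrative wind values:

```python
import math

R_EARTH = 6.371e6  # mean Earth radius, m (an assumed constant)

def lonlat_rates(lon, lat, u, v):
    """Time derivatives of longitude and latitude (radians/s) given zonal
    wind u and meridional wind v (m/s):
    d(lon)/dt = u / (r cos(lat)),  d(lat)/dt = v / r."""
    return u / (R_EARTH * math.cos(lat)), v / R_EARTH

# Hypothetical parcel: 10 m/s westerly wind at 60° N; the cos(lat) factor
# doubles the longitudinal rate relative to the same wind at the equator.
print(lonlat_rates(0.0, math.radians(60.0), 10.0, 0.0))
```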
One problem with this formulation is the polar singularity: notice how the denominator in the first equation goes to zero when the latitude is 90 degrees—plus or minus. One means of overcoming this is to use a locally Cartesian coordinate system close to the poles. Another is to perform the integration on a pair of Azimuthal equidistant projections —one for the N. Hemisphere and one for the S. Hemisphere. [ 2 ]
Trajectories can be validated by balloons in the atmosphere and buoys in the ocean . | https://en.wikipedia.org/wiki/Trajectory_(fluid_mechanics) |
Trajectory inference or pseudotemporal ordering is a computational technique used in single-cell transcriptomics to determine the pattern of a dynamic process experienced by cells and then arrange cells based on their progression through the process. Single-cell protocols have much higher levels of noise than bulk RNA-seq , [ 1 ] so a common step in a single-cell transcriptomics workflow is the clustering of cells into subgroups. [ 2 ] Clustering can contend with this inherent variation by combining the signal from many cells, while allowing for the identification of cell types. [ 3 ] However, some differences in gene expression between cells are the result of dynamic processes such as the cell cycle , cell differentiation , or response to external stimuli. Trajectory inference seeks to characterize such differences by placing cells along a continuous path that represents the evolution of the process rather than dividing cells into discrete clusters. [ 4 ] In some methods this is done by projecting cells onto an axis called pseudotime which represents the progression through the process. [ 5 ]
Since 2015, more than 50 algorithms for trajectory inference have been created. [ 6 ] Although the approaches taken are diverse there are some commonalities to the methods. Typically, the steps in the algorithm consist of dimensionality reduction to reduce the complexity of the data, trajectory building to determine the structure of the dynamic process, and projection of the data onto the trajectory so that cells are positioned by their development through the process and cells with similar expression profiles are situated near each other. [ 6 ] Trajectory inference algorithms differ in the specific procedure used for dimensionality reduction, the kinds of structures that can be used to represent the dynamic process, and the prior information that is required or can be provided. [ 2 ]
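To illustrate the three common steps, here is a deliberately generic sketch (it is none of the published methods described below): PCA for dimensionality reduction, a straight line as the simplest possible trajectory structure, and projection onto that line as pseudotime. The toy data are random and stand in for a real cells-by-genes matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2000))       # toy cells x genes expression matrix

# 1. Dimensionality reduction: PCA via SVD, keeping the top 2 components.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T

# 2. Trajectory building: here the simplest possible linear trajectory,
#    the first principal axis (real methods fit trees, curves, or graphs).
axis = np.array([1.0, 0.0])

# 3. Projection: pseudotime = position of each cell along the trajectory.
pseudotime = Z @ axis
order = np.argsort(pseudotime)          # cells ordered by progression
print(order[:10])
```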
The data produced by single-cell RNA-seq can consist of thousands of cells each with expression levels recorded across thousands of genes. [ 7 ] In order to efficiently process data with such high dimensionality many trajectory inference algorithms employ a dimensionality reduction procedure such as principal component analysis (PCA), independent component analysis (ICA), or t-SNE as their first step. [ 8 ] The purpose of this step is to combine many features of the data into a more informative measure of the data. [ 4 ] For example, a coordinate resulting from dimensionality reduction could combine expression levels from many genes that are associated with the cell cycle into one value that represents a cell's position in the cell cycle. [ 8 ] Such a transformation corresponds to dimensionality reduction in the feature space, but dimensionality reduction can also be applied to the sample space by clustering together groups of similar cells. [ 1 ]
Many methods represent the structure of the dynamic process via a graph -based approach. In such an approach the vertices of the graph correspond to states in the dynamic process, such as cell types in cell differentiation, and the edges between the nodes correspond to transitions between the states. [ 6 ] The creation of the trajectory graph can be accomplished using k-nearest neighbors or minimum spanning tree algorithms. [ 9 ] The topology of the trajectory refers to the structure of the graph and different algorithms are limited to creation of graph topologies of a particular type such as linear , branching , or cyclic . [ 4 ]
Some methods require or allow for the input of prior information which is used to guide the creation of the trajectory. The use of prior information can lead to more accurate trajectory determination, but poor priors can lead the algorithm astray or bias results towards expectations. [ 6 ] Examples of prior information that can be used in trajectory inference are the selection of start cells that are at the beginning of the trajectory, the number of branches in the trajectory, and the number of end states for the trajectory. [ 10 ]
MARGARET employs a deep unsupervised metric learning approach for inferring the cellular latent space and cell clusters. The trajectory is modeled using a cluster-connectivity graph to capture complex trajectory topologies. MARGARET utilizes the inferred trajectory for determining terminal states and inferring cell-fate plasticity using a scalable Absorbing Markov chain model. [ 11 ]
Monocle first employs a differential expression test to reduce the number of genes then applies independent component analysis for additional dimensionality reduction. To build the trajectory Monocle computes a minimum spanning tree , then finds the longest connected path in that tree. Cells are projected onto the nearest point to them along that path. [ 5 ]
p-Creode finds the most likely path through a density-adjusted k-nearest neighbor graph . Graphs from an ensemble are scored with a graph similarity metric to select the most representative topology. p-Creode has been tested on a range of single-cell platforms, including mass cytometry , multiplex immunofluorescence, [ 12 ] and single-cell RNA-seq . No prior information is required. [ 13 ]
Slingshot takes cluster labels as input and then orders these clusters into lineages by the construction of a minimum spanning tree . Paths through the tree are smoothed by fitting simultaneous principal curves and a cell's pseudotime value is determined by its projection onto one or more of these curves. Prior information, such as initial and terminal clusters, is optional. [ 10 ]
TSCAN performs dimensionality reduction using principal component analysis and clusters cells using a mixture model . A minimum spanning tree is calculated using the centers of the clusters and the trajectory is determined as the longest connected path of that tree. TSCAN is an unsupervised algorithm that requires no prior information. [ 14 ]
Wanderlust was developed for analysis of mass cytometry data, but has been adapted for single-cell transcriptomics applications. A k-nearest neighbors algorithm is used to construct a graph which connects every cell to the cell closest to it with respect to a metric such as Euclidean distance or cosine distance . Wanderlust requires the input of a starting cell as prior information. [ 15 ]
Wishbone is built on Wanderlust and allows for a bifurcation in the graph topology, whereas Wanderlust creates a linear graph . Wishbone combines principal component analysis and diffusion maps to achieve dimensionality reduction then also creates a KNN graph. [ 16 ]
Waterfall performs dimensionality reduction via principal component analysis and uses a k-means algorithm to find cell clusters. A minimal spanning tree is built between the centers of the clusters. Waterfall is entirely unsupervised, requiring no prior information, and produces linear trajectories. [ 17 ] | https://en.wikipedia.org/wiki/Trajectory_inference |
Trajectory optimization is the process of designing a trajectory that minimizes (or maximizes) some measure of performance while satisfying a set of constraints. Generally speaking, trajectory optimization is a technique for computing an open-loop solution to an optimal control problem. It is often used for systems where computing the full closed-loop solution is not required, impractical or impossible. If a trajectory optimization problem can be solved at a rate given by the inverse of the Lipschitz constant , then it can be used iteratively to generate a closed-loop solution in the sense of Caratheodory . If only the first step of the trajectory is executed for an infinite-horizon problem, then this is known as Model Predictive Control (MPC) .
Although the idea of trajectory optimization has been around for hundreds of years ( calculus of variations , brachystochrone problem ), it only became practical for real-world problems with the advent of the computer. Many of the original applications of trajectory optimization were in the aerospace industry, computing rocket and missile launch trajectories. More recently, trajectory optimization has also been used in a wide variety of industrial process and robotics applications. [ 1 ]
Trajectory optimization first showed up in 1697, with the introduction of the Brachystochrone problem: find the shape of a wire such that a bead sliding along it will move between two points in the minimum time. [ 2 ] The interesting thing about this problem is that it is optimizing over a curve (the shape of the wire), rather than a single number. The most famous of the solutions was computed using calculus of variations .
In the 1950s, the digital computer started to make trajectory optimization practical for solving real-world problems. The first optimal control approaches grew out of the calculus of variations , based on the research of Gilbert Ames Bliss and Bryson [ 3 ] in America, and Pontryagin [ 4 ] in Russia. Pontryagin's maximum principle is of particular note. These early researchers created the foundation of what we now call indirect methods for trajectory optimization.
Much of the early work in trajectory optimization was focused on computing rocket thrust profiles, both in a vacuum and in the atmosphere. This early research discovered many basic principles that are still used today. Another successful application was the climb to altitude trajectories for the early jet aircraft. Because of the high drag associated with the transonic drag region and the low thrust of early jet aircraft, trajectory optimization was the key to maximizing climb to altitude performance. Optimal control based trajectories were responsible for some of the world records. In these situations, the pilot followed a Mach versus altitude schedule based on optimal control solutions.
One of the important early problems in trajectory optimization was that of the singular arc , where Pontryagin's maximum principle fails to yield a complete solution. An example of a problem with singular control is the optimization of the thrust of a missile flying at a constant altitude and which is launched at low speed. Here the problem is one of a bang-bang control at maximum possible thrust until the singular arc is reached. Then the solution to the singular control provides a lower variable thrust until burnout. At that point bang-bang control provides that the control or thrust go to its minimum value of zero. This solution is the foundation of the boost-sustain rocket motor profile widely used today to maximize missile performance.
There are a wide variety of applications for trajectory optimization, primarily in robotics: industry, manipulation, walking, path-planning, and aerospace. It can also be used for modeling and estimation.
Depending on the configuration, open-chain robotic manipulators require a degree of trajectory optimization. For instance, a robotic arm with 7 joints and 7 links (7-DOF) is a redundant system where one cartesian position of an end-effector can correspond to an infinite number of joint angle positions, thus this redundancy can be used to optimize a trajectory to, for example, avoid any obstacles in the workspace or minimize the torque in the joints. [ 5 ]
Trajectory optimization is often used to compute trajectories for quadrotor helicopters . These applications typically use highly specialized algorithms. [ 6 ] [ 7 ] One interesting application shown by the U.Penn GRASP Lab is computing a trajectory that allows a quadrotor to fly through a hoop as it is thrown. Another, this time by the ETH Zurich Flying Machine Arena , involves two quadrotors tossing a pole back and forth between them, with it balanced like an inverted pendulum. The problem of computing minimum-energy trajectories for a quadcopter has also been recently studied. [ 8 ]
Trajectory optimization is used in manufacturing, particularly for controlling chemical processes [ 9 ] or computing the desired path for robotic manipulators. [ 10 ]
There are a variety of different applications for trajectory optimization within the field of walking robotics. For example, one paper used trajectory optimization of bipedal gaits on a simple model to show that walking is energetically favorable for moving at a low speed and running is energetically favorable for moving at a high speed. [ 11 ] Like in many other applications, trajectory optimization can be used to compute a nominal trajectory, around which a stabilizing controller is built. [ 12 ] Trajectory optimization can be applied in detailed motion planning complex humanoid robots, such as Atlas . [ 13 ] Finally, trajectory optimization can be used for path-planning of robots with complicated dynamics constraints, using reduced complexity models. [ 14 ]
For tactical missiles , the flight profiles are determined by the thrust and lift histories. These histories can be controlled by a number of means including such techniques as using an angle of attack command history or an altitude/downrange schedule that the missile must follow. Each combination of missile design factors, desired missile performance, and system constraints results in a new set of optimal control parameters. [ 15 ]
The techniques for solving trajectory optimization problems can be divided into two categories: indirect and direct. An indirect method works by analytically constructing the necessary and sufficient conditions for optimality, which are then solved numerically. A direct method attempts a direct numerical solution by constructing a sequence of continually improving approximations to the optimal solution. [ 16 ]
The optimal control problem is an infinite-dimensional optimization problem, since the decision variables are functions, rather than real numbers. All solution techniques perform transcription, a process by which the trajectory optimization problem (optimizing over functions) is converted into a constrained parameter optimization problem (optimizing over real numbers). Generally, this constrained parameter optimization problem is a non-linear program, although in special cases it can be reduced to a quadratic program or linear program .
Single shooting is the simplest type of trajectory optimization technique. The basic idea is similar to how you would aim a cannon: pick a set of parameters for the trajectory, simulate the entire thing, and then check to see if you hit the target. The entire trajectory is represented as a single segment, with a single constraint, known as a defect constraint, requiring that the final state of the simulation matches the desired final state of the system. Single shooting is effective for problems that are either simple or have an extremely good initialization. Both the indirect and direct formulations tend to have difficulties otherwise. [ 16 ] [ 19 ] [ 20 ]
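A minimal single-shooting sketch in the spirit of the cannon analogy: pick a parameter (the launch angle), simulate the whole flight, and drive the defect at the target to zero. The ballistic problem and its closed-form flight model are assumptions chosen to keep the example short.

```python
import math
from scipy.optimize import brentq

g, v, target = 9.81, 30.0, 60.0   # gravity, launch speed, target range (m)

def miss(theta):
    """Defect constraint: landing point minus target range, from a
    closed-form simulation of the ballistic flight."""
    return v**2 * math.sin(2.0 * theta) / g - target

theta_star = brentq(miss, 0.01, math.pi / 4)   # root of the defect
print(math.degrees(theta_star))                # ≈ 20.4 degrees
```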
Multiple shooting is a simple extension to single shooting that renders it far more effective. Rather than representing the entire trajectory as a single simulation (segment), the algorithm breaks the trajectory into many shorter segments, and a defect constraint is added between each pair of consecutive segments. The result is a large sparse non-linear program, which tends to be easier to solve than the small dense programs produced by single shooting. [ 19 ] [ 20 ] The particular sparsity structure can be exploited by tailored numerical solvers as implemented in the open-source software package acados .
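The defect constraints themselves are easy to exhibit. In this sketch (toy dynamics and a hand-picked segmentation, both invented for illustration), each segment is simulated from its own guessed initial state, and a solver would then be asked to drive every defect to zero.

```python
import numpy as np

def simulate_segment(x0, t0, t1, n=10):
    """Integrate dx/dt = f(x) over one segment with forward Euler."""
    f = lambda x: np.array([x[1], -9.81])   # toy dynamics: vertical ballistics
    h = (t1 - t0) / n
    x = x0.copy()
    for _ in range(n):
        x = x + h * f(x)
    return x

def defects(segment_starts, times):
    """Defect k = (end of simulated segment k) - (start of segment k+1)."""
    return [
        simulate_segment(segment_starts[k], times[k], times[k + 1])
        - segment_starts[k + 1]
        for k in range(len(segment_starts) - 1)
    ]

starts = [np.array([0.0, 20.0]), np.array([14.7, 5.4])]  # guessed states
print(defects(starts, [0.0, 1.5, 3.0]))  # a solver drives these to zero
```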
Direct collocation methods work by approximating the state and control trajectories using polynomial splines . These methods are sometimes referred to as direct transcription. Trapezoidal collocation is a commonly used low-order direct collocation method. The dynamics, path objective, and control are all represented using linear splines, and the dynamics are satisfied using trapezoidal quadrature . Hermite-Simpson Collocation is a common medium-order direct collocation method. The state is represented by a cubic-Hermite spline , and the dynamics are satisfied using Simpson quadrature . [ 16 ] [ 20 ]
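For trapezoidal collocation, the defect on one interval can be written directly from the quadrature rule; this sketch uses a double integrator as a stand-in for the real dynamics.

```python
import numpy as np

def trapezoidal_defect(x_k, x_k1, u_k, u_k1, h, f):
    """c_k = x_{k+1} - x_k - (h/2) * (f(x_k, u_k) + f(x_{k+1}, u_{k+1}))."""
    return x_k1 - x_k - 0.5 * h * (f(x_k, u_k) + f(x_k1, u_k1))

f = lambda x, u: np.array([x[1], u])        # double integrator, for example
c = trapezoidal_defect(np.array([0.0, 0.0]), np.array([0.5, 1.0]),
                       1.0, 1.0, 1.0, f)
print(c)   # [0. 0.]: this pair of knot points satisfies the dynamics
```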
Orthogonal collocation is technically a subset of direct collocation, but the implementation details are so different that it can reasonably be considered its own set of methods. Orthogonal collocation differs from direct collocation in that it typically uses high-order splines, and each segment of the trajectory might be represented by a spline of a different order. The name comes from the use of orthogonal polynomials in the state and control splines. [ 20 ] [ 21 ]
In pseudospectral discretization the entire trajectory is represented by a collection of basis functions in the time domain (independent variable). The basis functions need not be polynomials. Pseudospectral discretization is also known as spectral collocation. [ 22 ] [ 23 ] [ 24 ] When used to solve a trajectory optimization problem whose solution is smooth, a pseudospectral method will achieve spectral (exponential) convergence. [ 25 ] If the trajectory is not smooth, the convergence is still very fast, faster than Runge-Kutta methods. [ 26 ] [ 27 ]
In 1990 Dewey H. Hodges and Robert R. Bless [ 28 ] proposed a weak Hamiltonian finite element method for optimal control problems. The idea was to derive a weak variational form of first order necessary conditions for optimality, discretise the time domain in finite intervals and use a simple zero order polynomial representation of states, controls and adjoints over each interval.
Differential dynamic programming is a bit different from the other techniques described here. In particular, it does not cleanly separate the transcription and the optimization. Instead, it performs a sequence of iterative forward and backward passes along the trajectory. Each forward pass satisfies the system dynamics, and each backward pass satisfies the optimality conditions for control. Eventually, this iteration converges to a trajectory that is both feasible and optimal. [ 29 ]
In contrast to the aforementioned classical methods, generative machine learning methods may be used to generate a desirable trajectory. In particular, diffusion models learn to iteratively reverse a destructive forward process (in which noise is added to data until it becomes noise itself) by estimating the noise to remove at every time step. Thus, given easy-to-sample random noise as input, the diffusion process will recover a plausible corresponding noise-free data point. Recent methods [ 30 ] [ 31 ] have parameterized trajectories as matrices of state-action pairs at consecutive time steps and trained a diffusion model to generate such a matrix. To address the issue of controllability of the generated samples, the Diffuser method [ 30 ] proposes two techniques to steer the generated sample, thereby reducing the optimization problem to a sampling problem. First, guided diffusion [ 32 ] [ 33 ] can be used to incorporate a cost (or reward) function into the generation process. For this purpose the gradient of the cost function modifies the mean of the estimated noise at every time step. Second, for motion planning problems in which the start and the end states of the trajectory are known, and the trajectory needs to comply with constraints to find a viable path, an inpainting approach can be used. Similar to the first technique, a prior modifies the distribution of trajectories, which in this case assigns high probability to trajectories satisfying the constraints (e.g. arriving at a state s {\displaystyle s} at time step t {\displaystyle t} ), and zero probability to all other trajectories. As a result, sampling from this distribution will produce trajectories that satisfy the constraints.
There are many techniques to choose from when solving a trajectory optimization problem. There is no best method, but some methods might do a better job on specific problems. This section provides a rough understanding of the trade-offs between methods.
When solving a trajectory optimization problem with an indirect method, you must explicitly construct the adjoint equations and their gradients. This is often difficult to do, but it gives an excellent accuracy metric for the solution. Direct methods are much easier to set up and solve, but do not have a built-in accuracy metric. [ 16 ] As a result, direct methods are more widely used, especially in non-critical applications. Indirect methods still have a place in specialized applications, particularly aerospace, where accuracy is critical.
One place where indirect methods have particular difficulty is on problems with path inequality constraints. These problems tend to have solutions for which the constraint is partially active. When constructing the adjoint equations for an indirect method, the user must explicitly write down when the constraint is active in the solution, which is difficult to know a priori. One solution is to use a direct method to compute an initial guess, which is then used to construct a multi-phase problem where the constraint is prescribed. The resulting problem can then be solved accurately using an indirect method. [ 16 ]
Single shooting methods are best used for problems where the control is very simple (or there is an extremely good initial guess). For example, a satellite mission planning problem where the only control is the magnitude and direction of an initial impulse from the engines. [ 19 ]
Multiple shooting tends to be good for problems with relatively simple control, but complicated dynamics. Although path constraints can be used, they make the resulting nonlinear program relatively difficult to solve.
Direct collocation methods are good for problems where the accuracy of the control and the state are similar. These methods tend to be less accurate than others (due to their low-order), but are particularly robust for problems with difficult path constraints.
Orthogonal collocation methods are best for obtaining high-accuracy solutions to problems where the accuracy of the control trajectory is important. Some implementations have trouble with path constraints. These methods are particularly good when the solution is smooth. | https://en.wikipedia.org/wiki/Trajectory_optimization |
In logic , finite model theory , and computability theory , Trakhtenbrot's theorem (due to Boris Trakhtenbrot ) states that the problem of validity in first-order logic on the class of all finite models is undecidable . In fact, the class of valid sentences over finite models is not recursively enumerable (though it is co-recursively enumerable ).
Trakhtenbrot's theorem implies that Gödel's completeness theorem (that is fundamental to first-order logic) does not hold in the finite case. Also it seems counter-intuitive that being valid over all structures is 'easier' than over just the finite ones.
The theorem was first published in 1950: "The Impossibility of an Algorithm for the Decidability Problem on Finite Classes". [ 1 ]
The theorem is related to Church 's result that the set of valid formulas in first-order logic is not decidable (however this set is semi-decidable ).
We follow the formulations as in Ebbinghaus and Flum. [ 2 ]
Let σ be a relational vocabulary with at least one binary relation symbol.
Remarks
This proof is taken from Chapter 10, sections 4 and 5 of Mathematical Logic by H.D. Ebbinghaus.
As in the most common proof of Gödel's First Incompleteness Theorem, which uses the undecidability of the halting problem , for each Turing machine M {\displaystyle M} there is a corresponding arithmetical sentence ϕ M {\displaystyle \phi _{M}} , effectively derivable from M {\displaystyle M} , such that it is true if and only if M {\displaystyle M} halts on the empty tape. Intuitively, ϕ M {\displaystyle \phi _{M}} asserts "there exists a natural number that is the Gödel code for the computation record of M {\displaystyle M} on the empty tape that ends with halting".
If the machine M {\displaystyle M} does halt in finite steps, then the complete computation record is also finite, then there is a finite initial segment of the natural numbers such that the arithmetical sentence ϕ M {\displaystyle \phi _{M}} is also true on this initial segment. Intuitively, this is because in this case, proving ϕ M {\displaystyle \phi _{M}} requires the arithmetic properties of only finitely many numbers.
If the machine M {\displaystyle M} does not halt in finite steps, then ϕ M {\displaystyle \phi _{M}} is false in any finite model, since there's no finite computation record of M {\displaystyle M} that ends with halting.
Thus, if M {\displaystyle M} halts, ϕ M {\displaystyle \phi _{M}} is true in some finite models. If M {\displaystyle M} does not halt, ϕ M {\displaystyle \phi _{M}} is false in all finite models. So, M {\displaystyle M} does not halt if and only if ¬ ϕ M {\displaystyle \neg \phi _{M}} is true over all finite models.
The set of machines that does not halt is not recursively enumerable, so the set of valid sentences over finite models is not recursively enumerable.
In this section we exhibit a more rigorous proof from Libkin. [ 3 ] Note in the above statement that the corollary also entails the theorem, and this is the direction we prove here.
Theorem
Proof
According to the previous lemma, we can in fact use finitely many binary relation symbols. The idea of the proof is similar to the proof of Fagin's theorem, and we encode Turing machines in first-order logic. What we want to prove is that for every Turing machine M we construct a sentence φ M of vocabulary τ such that φ M is finitely satisfiable if and only if M halts on the empty input, which is equivalent to the halting problem and therefore undecidable.
Let M= ⟨Q, Σ, δ, q 0 , Q a , Q r ⟩ be a deterministic Turing machine with a single infinite tape.
Since we are dealing with the problem of halting on an empty input we may assume w.l.o.g. that Σ={0,1} and that 0 represents a blank, while 1 represents some tape symbol. We define τ so that we can represent computations:
Where:
The sentence φ M states that (i) <, min , T i 's and H q 's are interpreted as above and (ii) that the machine eventually halts. The halting condition is equivalent to saying that H q∗ (s, t) holds for some s, t and q∗ ∈ Q a ∪ Q r and after that state, the configuration of the machine does not change. The configurations of a halting computation (a non-halting computation is not finite) can be represented as a finite τ-structure, more precisely a finite τ-structure which satisfies the sentence. The sentence φ M is: φ M ≡ α ∧ β ∧ γ ∧ η ∧ ζ ∧ θ.
We break it down by components:
Where θ 2 is:
And:
Where θ 3 is:
s-1 and t+1 are first-order definable abbreviations for the predecessor and successor according to the ordering <. The sentence θ 0 assures that the tape content in position s changes from 0 to 1, the state changes from q to q', the rest of the tape remains the same and that the head moves to s-1 (i. e. one position to the left), assuming s is not the first position in the tape. If it is, then all is handled by θ 1 : everything is the same, except the head does not move to the left but stays put.
If φ M has a finite model, then such a model represents a computation of M that starts with the empty tape (i.e. a tape containing all zeros) and ends in a halting state. If M halts on the empty input, then the set of all configurations of the halting computation of M (coded with <, T i 's and H q 's) is a model of φ M , which is finite, since the set of all configurations of a halting computation is finite. It follows that M halts on the empty input if and only if φ M has a finite model. Since halting on the empty input is undecidable, the question of whether φ M has a finite model A {\displaystyle {\mathcal {A}}} (equivalently, whether φ M is finitely satisfiable) is also undecidable (recursively enumerable, but not recursive). This concludes the proof.
Corollary
Proof
Enumerate all pairs ( A , ϕ ) {\displaystyle ({\mathcal {A}},\phi )} where A {\displaystyle {\mathcal {A}}} is finite and A ⊨ ϕ {\displaystyle {\mathcal {A}}\models \phi } .
Corollary
Proof
From the previous lemma, the set of finitely satisfiable sentences is recursively enumerable. Assume that the set of all finitely valid sentences is recursively enumerable. Since ¬φ is finitely valid if and only if φ is not finitely satisfiable, we conclude that the set of sentences which are not finitely satisfiable is recursively enumerable. If both a set A and its complement are recursively enumerable, then A is recursive. It follows that the set of finitely satisfiable sentences is recursive, which contradicts Trakhtenbrot's theorem. | https://en.wikipedia.org/wiki/Trakhtenbrot's_theorem |
In mycology , the term trama is used in two ways. In the broad sense, it is the inner, fleshy portion of a mushroom 's basidiocarp , or fruit body. It is distinct from the outer layer of tissue, known as the pileipellis or cuticle , and from the spore -bearing tissue layer known as the hymenium. In essence, the trama is the tissue that is commonly referred to as the "flesh" of mushrooms and similar fungi. [ 1 ]
The second use is more specific, and refers to the "hymenophoral trama" that supports the hymenium . It is similarly interior, connective tissue, but it is more specifically the central layer of hyphae running from the underside of the mushroom cap to the lamella or gill, upon which the hymenium rests. Various types have been classified by their structure, including trametoid, cantharelloid, boletoid, and agaricoid, with agaricoid the most common by far. In the agaricoid type, the central trama's hyphae usually run parallel to each other, with a clear boundary area called a sub-hymenium followed by the hymenium itself on the outer layer facing the environment. [ 2 ]
The word "trama" is Latin for the " weft " or " woof " yarns in the weaving of cloth. [ 3 ] This is related to the basidiocarp trama being "filler" tissue and that analogously the woof yarn in weaving is sometimes called "fill". Furthermore, the trama tends to be soft tissue, and in weaving, the woof yarn is not tightly stretched; it therefore need not as a rule be as strong as the warp yarn.
| https://en.wikipedia.org/wiki/Trama_(mycology) |
Tramadol , sold under the brand name Tramal among others, [ 17 ] [ 1 ] is an opioid pain medication and a serotonin–norepinephrine reuptake inhibitor (SNRI) used to treat moderately severe pain . [ 13 ] [ 17 ] When taken by mouth in an immediate-release formulation, the onset of pain relief usually begins within an hour. [ 13 ] It is also available by injection. [ 18 ] It is available in combination with paracetamol (acetaminophen).
As is typical of opioids, common side effects include constipation , itchiness , and nausea . [ 13 ] Serious side effects may include hallucinations , seizures , increased risk of serotonin syndrome , decreased alertness, and drug addiction . [ 13 ] A change in dosage may be recommended in those with kidney or liver problems. [ 13 ] It is not recommended in those who are at risk of suicide or in those who are pregnant . [ 13 ] [ 18 ] While not recommended in women who are breastfeeding , those who take a single dose should not generally have to stop breastfeeding. [ 19 ] Tramadol is converted in the liver to O -desmethyltramadol (desmetramadol) , an opioid with a stronger affinity for the μ-opioid receptor . [ 13 ] [ 20 ]
Tramadol was patented in 1972 and launched under the brand name Tramal in 1977 by the West German pharmaceutical company Grünenthal GmbH . [ 17 ] [ 21 ] In the mid-1990s, it was approved in the United Kingdom and the United States. [ 17 ] It is available as a generic medication and marketed under many brand names worldwide. [ 1 ] [ 13 ] In 2022, it was the 55th most commonly prescribed medication in the United States, with more than 12 million prescriptions. [ 22 ] [ 23 ]
Tramadol is used primarily to treat mild to severe pain, both acute and chronic. [ 24 ] [ 25 ] There is moderate evidence for use as a second-line treatment for fibromyalgia , but it is not FDA -approved for this use. [ 26 ] Its use is approved for treatment of fibromyalgia as a secondary painkiller by the NHS . [ 27 ]
Its analgesic effects take approximately an hour to be realized, and it takes from two to four hours to reach peak effect after oral administration with an immediate-release formulation. [ 24 ] [ 25 ] On a dose-by-dose basis, tramadol has about one-tenth the potency of morphine (thus 100 mg is commensurate with 10 mg morphine but may vary) and is practically equally potent when compared with pethidine and codeine . [ 28 ] For moderate pain, its effectiveness is roughly equivalent to that of codeine in low doses and hydrocodone at very high doses. For severe pain, it is less effective than morphine. [ 24 ]
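As a rough illustration of the dose relationship just described, the sketch below applies the approximate 10:1 tramadol-to-morphine potency ratio from the text; the function name and the program itself are illustrative only, and the ratio is an approximation that ignores individual variability:

```c
#include <stdio.h>

/* Approximate oral-morphine-equivalent dose for oral tramadol,
 * using the ~10:1 potency ratio described above. Illustrative
 * only -- not dosing guidance. */
static double morphine_equivalent_mg(double tramadol_mg) {
    const double potency_ratio = 0.1; /* tramadol ~1/10 as potent */
    return tramadol_mg * potency_ratio;
}

int main(void) {
    const double doses[] = {50.0, 100.0, 400.0};
    for (int i = 0; i < 3; i++) {
        printf("%.0f mg tramadol ~ %.0f mg morphine\n",
               doses[i], morphine_equivalent_mg(doses[i]));
    }
    return 0;
}
```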
Pain-reducing effects last approximately six hours. The potency of analgesia varies considerably as it depends on an individual's genetics. People with specific variants of CYP2D6 enzymes may not produce adequate amounts of the active metabolite (desmetramadol) for effective pain control. [ 16 ] [ 24 ]
Sleep medicine physicians sometimes prescribe tramadol (or other opioid medications) for refractory restless legs syndrome (RLS); [ 29 ] [ 30 ] that is, RLS that does not respond adequately to treatment with first-line medications such as dopamine agonists (e.g., pramipexole ) or gabapentinoids , often due to augmentation. [ 31 ]
Use of tramadol during pregnancy is generally avoided, as it may cause some reversible withdrawal effects in the newborn. [ 32 ] A small prospective study in France found, while an increased risk of miscarriages existed, no major malformations were reported in the newborn. [ 32 ] Its use during lactation is also generally advised against, but a small trial found that infants breastfed by mothers taking tramadol were exposed to about 2.88% of the dose the mothers were taking. No evidence of this dose harming the newborn was seen. [ 32 ]
Its use as an analgesic during labor is not advised due to its long onset of action (1 hour). [ 32 ] The ratio of the mean concentration of the drug in the fetus compared to that of the mother when it is given intramuscularly for labor pains has been estimated to be 1:94. [ 32 ]
Its use in children is generally advised against, although it may be done under the supervision of a specialist. [ 24 ] On 21 September 2015, the FDA started investigating the safety of tramadol in use in persons under the age of 17. The investigation was initiated because some of these people have experienced slowed or difficult breathing. [ 33 ] The FDA lists age under 12 years old as a contraindication. [ 34 ] [ 35 ]
The risk of opioid-related adverse effects such as respiratory depression , falls, cognitive impairment, and sedation is increased. [ 24 ] Tramadol may interact with other medications and increase the risk for adverse events. [ 36 ]
The drug should be used with caution in those with liver or kidney failure, due to metabolism in the liver (to the active molecule desmetramadol) and elimination by the kidneys. [ 24 ]
The most common adverse effects of tramadol include nausea , dizziness , dry mouth , indigestion , abdominal pain, vertigo , vomiting , constipation , drowsiness, and headache . [ 37 ] [ 38 ] Other side effects may result from interactions with other medications. Tramadol has the same dose-dependent adverse effects as morphine including respiratory depression. [ 39 ]
Long-term use of high doses of tramadol causes physical dependence and withdrawal syndrome. [ 40 ] These include both symptoms typical of opioid withdrawal and those associated with serotonin–norepinephrine reuptake inhibitor (SNRI) withdrawal; symptoms include numbness, tingling, paresthesia , and tinnitus. [ 41 ] Psychiatric symptoms may include hallucinations, paranoia, extreme anxiety, panic attacks, and confusion. [ 42 ] In most cases, tramadol withdrawal will set in 12–20 hours after the last dose, but this can vary. [ 41 ] Tramadol withdrawal typically lasts longer than that of other opioids. Seven days or more of acute withdrawal symptoms can occur as opposed to typically 3 or 4 days for other codeine analogs. [ 41 ]
The clinical presentation in overdose cases can vary but typically includes neurological, cardiovascular, and gastrointestinal manifestations. [ 43 ] The predominant neurological symptoms are seizures and altered levels of consciousness, ranging from somnolence to coma . Seizures are particularly notable due to tramadol's lowering of the seizure threshold , occurring in approximately half of acute poisoning cases. [ 44 ] Patients often exhibit tachycardia and mild hypertension . Gastrointestinal disturbances such as nausea and vomiting are common, and agitation, anxiety, and cold and clammy skin may also be present. [ 45 ]
While less common, severe complications like respiratory depression and serotonin syndrome can occur, particularly in polydrug overdoses involving other CNS depressants (such as benzodiazepines , opioids, and alcohol ) and agents with serotonergic activity. [ 46 ] [ 47 ] Additionally, individuals with genetic variations leading to CYP2D6 enzyme duplication (rapid metabolizers) may have an increased risk of adverse effects, due to faster conversion of tramadol to its active metabolite . [ 48 ]
Acute tramadol overdose is generally not life-threatening, with most fatalities resulting from polysubstance overdose. [ 49 ] Management includes cardiovascular monitoring, activated charcoal administration, hydration, and treatment of seizures. [ 50 ] Naloxone , an opioid antagonist, can partially reverse some effects of tramadol overdose, particularly respiratory depression. However, its use may increase the risk of seizures due to unopposed alpha-adrenergic stimulation. [ 24 ] For suspected serotonin syndrome, cyproheptadine , a serotonin antagonist , is considered an effective antidote. [ 46 ]
The incidence of tramadol-related overdose deaths has been on the rise in certain regions. For instance, Northern Ireland has reported an increased frequency of such cases. [ 51 ] In 2013, England and Wales recorded 254 tramadol-related deaths, while Florida reported 379 cases in 2011. [ 52 ] [ 53 ] In 2011, 21,649 emergency room visits in the United States were related to tramadol. [ 54 ] These observations are most likely explained by an increase in the frequency of prescribing and use, driven by the easier access that lighter regulatory scheduling allowed, [ 55 ] though this is starting to change. In 2021, Health Canada announced tramadol would be added to Schedule I of the Controlled Drugs and Substances Act and to the Narcotic Control Regulations due to tramadol being suspected of having contributed to 18 reported deaths in Canada between 2006 and 2017. [ 56 ]
Tramadol can have pharmacodynamic , pharmacokinetic , and pharmacogenetic interactions.
Tramadol is metabolized by CYP2D6 enzymes, which contribute to the metabolism of approximately 25% of all medications. [ 57 ] Any medications with the ability to inhibit or induce these enzymes may interact with tramadol. These include common antiarrhythmics, antiemetics, antidepressants ( sertraline , paroxetine , and fluoxetine in particular), [ 58 ] antipsychotics, analgesics, and tamoxifen. [ 59 ]
Due to tramadol's serotonergic effects, tramadol has the potential to contribute to the development of an acute or chronic hyper-serotonin state called serotonin syndrome when used concurrently with other pro-serotonergic medications such as antidepressants ( SSRIs , SNRIs , tricyclics , MAOIs ), antipsychotics , triptans , cold medications containing dextromethorphan , and some herbal products such as St. John's wort . [ 59 ] [ 60 ]
Concurrent use of 5-HT3 antagonists such as ondansetron , dolasetron , and palonosetron may reduce the effectiveness of both drugs. [ 61 ]
Tramadol also acts as an opioid agonist and thus can increase the risk for side effects when used with other opioid and opioid-containing analgesics (such as morphine , pethidine , tapentadol , oxycodone , fentanyl , and Tylenol 3 ). [ 62 ]
Tramadol increases the risk for seizures by lowering the seizure threshold. Using other medications that lower seizure threshold - such as antipsychotic medications , bupropion (an anti-depressant and smoking cessation drug), and amphetamines - can further increase this risk. [ 63 ]
Tramadol induces analgesic effects through a variety of different targets on the noradrenergic system , serotonergic system , and opioid receptor system. [ 64 ] Tramadol affects serotonin and norepinephrine reuptake inhibition similarly to certain antidepressants known as serotonin–norepinephrine reuptake inhibitors (SNRIs), such as venlafaxine and duloxetine . [ 65 ] Tramadol exists as a racemic mixture : the (+)-enantiomer inhibits serotonin reuptake, while the (−)-enantiomer inhibits noradrenaline reuptake, by binding to and blocking the respective transporters. [ 66 ] [ 15 ] Both enantiomers of tramadol are agonists of the μ-opioid receptor, and its M1 metabolite, O -desmetramadol , is also a μ-opioid receptor agonist, about six times more potent than tramadol itself. [ 67 ] All of these actions may work synergistically to induce analgesia.
Tramadol has been found to possess these actions: [ 69 ] [ 70 ] [ 66 ]
Tramadol acts on the opioid receptors through its major active metabolite desmetramadol , which has as much as 700-fold higher affinity for the MOR relative to tramadol. [ 20 ] Moreover, tramadol itself has been found to possess no efficacy in activating the MOR in functional activity assays, whereas desmetramadol activates the receptor with high intrinsic activity ( E max equal to that of morphine ). [ 76 ] [ 20 ] [ 93 ] As such, desmetramadol is exclusively responsible for the opioid effects of tramadol. [ 94 ] Both tramadol and desmetramadol have pronounced selectivity for the MOR over the DOR and KOR in terms of binding affinity. [ 77 ] [ 72 ] [ 74 ]
Tramadol is well-established as an SRI. [ 69 ] [ 70 ] In addition, a few studies have found that it also acts as a serotonin releasing agent (1–10 μM), similar in effect to fenfluramine . [ 95 ] [ 96 ] [ 97 ] [ 98 ] The serotonin releasing effects of tramadol could be blocked by sufficiently high concentrations of the serotonin reuptake inhibitor 6-nitroquipazine , which is in accordance with other serotonin releasing agents such as fenfluramine and MDMA . [ 95 ] [ 97 ] [ 98 ] However, two more recent studies failed to find a releasing effect of tramadol at respective concentrations up to 10 and 30 μM. [ 99 ] [ 98 ] [ 92 ] In addition to serotonergic activity, tramadol is also a norepinephrine reuptake inhibitor . [ 69 ] [ 70 ] It is not a norepinephrine releasing agent . [ 100 ] [ 101 ] [ 102 ] [ 92 ] Tramadol does not inhibit the reuptake or induce the release of dopamine . [ 100 ] [ 92 ]
A positron emission tomography imaging study found that single oral 50-mg and 100-mg doses of tramadol to human volunteers resulted in 34.7% and 50.2% respective mean occupation of the serotonin transporter (SERT) in the thalamus . [ 103 ] The estimated median effective dose (ED 50 ) for SERT occupancy hence was 98.1 mg, which was associated with a plasma tramadol level of about 330 ng/mL (1,300 nM). [ 103 ] The estimated maximum daily dosage of tramadol of 400 mg (100 mg q.i.d. ) would result in as much as 78.7% occupancy of the SERT (in association with a plasma concentration of 1,220 ng/mL or 4,632 nM). [ 103 ] This is close to that of SSRIs, which occupy the SERT by 80% or more. [ 103 ]
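The reported figures are consistent with a simple single-site saturation model, occupancy = dose / (dose + ED50). The sketch below assumes that model (the study's actual curve-fitting may have differed) and reproduces values close to those reported:

```c
#include <stdio.h>

/* Single-site saturation model for SERT occupancy:
 * occupancy = dose / (dose + ED50), with ED50 = 98.1 mg
 * as reported in the PET study. */
static double sert_occupancy(double dose_mg) {
    const double ed50_mg = 98.1;
    return dose_mg / (dose_mg + ed50_mg);
}

int main(void) {
    const double doses[] = {50.0, 100.0, 400.0};
    for (int i = 0; i < 3; i++) {
        printf("%.0f mg -> %.1f%% SERT occupancy\n",
               doses[i], 100.0 * sert_occupancy(doses[i]));
    }
    /* Prints ~33.8%, ~50.5%, ~80.3%, versus the reported
     * 34.7%, 50.2% and 78.7%. */
    return 0;
}
```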
Peak plasma concentrations during treatment with clinical dosages of tramadol have generally been found to be in the range of 70 to 592 ng/mL (266–2,250 nM) for tramadol and 55 to 143 ng/mL (221–573 nM) for desmetramadol. [ 25 ] The highest levels of tramadol were observed with the maximum oral daily dosage of 400 mg per day divided into one 100-mg dose every 6 hours (i.e., four 100-mg doses evenly spaced out per day). [ 25 ] [ 104 ] Some accumulation of tramadol occurs with chronic administration; peak plasma levels with the maximum oral daily dosage (100 mg q.i.d. ) are about 16% higher and the area-under-the-curve levels 36% higher than following a single oral 100-mg dose. [ 25 ] Positron emission tomography imaging studies have reportedly found that tramadol levels are at least four-fold higher in the brain than in plasma . [ 100 ] [ 105 ] Conversely, brain levels of desmetramadol "only slowly approach those in plasma". [ 100 ] The plasma protein binding of tramadol is only 4–20%; hence, almost all tramadol in circulation is free, thus bioactive. [ 106 ] [ 107 ] [ 108 ]
Co-administration of quinidine , a potent CYP2D6 enzyme inhibitor, with tramadol, a combination which results in markedly reduced levels of desmetramadol, was found not to significantly affect the analgesic effects of tramadol in human volunteers. [ 20 ] [ 107 ] However, other studies have found that the analgesic effects of tramadol are significantly decreased or even absent in CYP2D6 poor metabolizers. [ 20 ] [ 94 ] The analgesic effects of tramadol are only partially reversed by naloxone in human volunteers, [ 20 ] indicating that its opioid action is unlikely to be the sole factor; tramadol's analgesic effects are also partially reversed by α 2 -adrenergic receptor antagonists such as yohimbine , the 5-HT 3 receptor antagonist ondansetron , and the 5-HT 7 receptor antagonists SB-269970 and SB-258719 . [ 25 ] [ 109 ] Pharmacologically, tramadol is similar to tapentadol and methadone in that it not only binds to the MOR but also inhibits the reuptake of serotonin and norepinephrine; [ 12 ] this combined action on the noradrenergic and serotonergic systems accounts for its "atypical" opioid activity. [ 110 ]
Tramadol has inhibitory actions on the 5-HT 2C receptor. Antagonism of 5-HT 2C could be partially responsible for tramadol's reducing effect on depressive and obsessive–compulsive symptoms in patients with pain and co-morbid neurological illnesses. [ 80 ] 5-HT 2C blockade may also account for its lowering of the seizure threshold , as 5-HT 2C knockout mice display significantly increased vulnerability to epileptic seizures, sometimes resulting in spontaneous death. However, the reduction of seizure threshold could be attributed to tramadol's putative inhibition of GABA A receptors at high doses (significant inhibition at 100 μM). [ 89 ] [ 66 ] In addition, desmetramadol is a high-affinity ligand of the DOR, and activation of this receptor could be involved in tramadol's ability to provoke seizures in some individuals, as DOR agonists are well known for inducing seizures. [ 74 ]
Nausea and vomiting caused by tramadol are thought to be due to activation of the 5-HT 3 receptor via increased serotonin levels. [ 78 ] In accordance, the 5-HT 3 receptor antagonist ondansetron can be used to treat tramadol-associated nausea and vomiting. [ 78 ] Tramadol and desmetramadol themselves do not bind to the 5-HT 3 receptor. [ 78 ] [ 70 ]
Tramadol is metabolised in the liver via the cytochrome P450 isozymes CYP2B6 , CYP2D6 , and CYP3A4 , being O - and N -demethylated to five different metabolites. Of these, desmetramadol ( O -desmethyltramadol) is the most significant, since it has 200 times the μ-affinity of (+)-tramadol, and furthermore has an elimination half-life of nine hours, compared with six hours for tramadol itself. As with codeine, in the 6% of the population who have reduced CYP2D6 activity (hence reducing metabolism), a reduced analgesic effect is seen. Those with decreased CYP2D6 activity require a dose increase of 30% to achieve the same degree of pain relief as those with a normal level of CYP2D6 activity. [ 111 ] [ 112 ]
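Given the half-lives above, first-order elimination implies that the fraction of drug remaining after time t is 0.5^(t / t½). The sketch below compares parent drug and metabolite under a one-compartment, first-order model, which is an idealization of the real kinetics:

```c
#include <stdio.h>
#include <math.h>

/* Fraction remaining under first-order elimination:
 * f(t) = 0.5^(t / half_life). One-compartment idealization. */
static double fraction_remaining(double t_h, double half_life_h) {
    return pow(0.5, t_h / half_life_h);
}

int main(void) {
    const double t12_tramadol = 6.0;      /* hours, from the text */
    const double t12_desmetramadol = 9.0; /* hours, from the text */
    for (double t = 0.0; t <= 24.0; t += 6.0) {
        printf("t=%4.0f h  tramadol %5.1f%%  desmetramadol %5.1f%%\n",
               t,
               100.0 * fraction_remaining(t, t12_tramadol),
               100.0 * fraction_remaining(t, t12_desmetramadol));
    }
    return 0;
}
```

Compile with the math library (e.g., cc example.c -lm). The longer half-life of desmetramadol is one reason the metabolite's contribution to the opioid effect outlasts that of the parent drug.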
Phase II hepatic metabolism renders the metabolites water-soluble; they are excreted by the kidneys. Thus, reduced doses may be used in renal and hepatic impairment. [ 25 ]
Its volume of distribution is around 306 L after oral administration and 203 L after parenteral administration. [ 25 ]
Tramadol is marketed as a racemic mixture of both R - and S - stereoisomers , [ 12 ] because the two isomers complement each other's analgesic activities. [ 12 ] The (+)-isomer is predominantly active as an opiate with a higher affinity for the μ-opiate receptor (20 times higher affinity than the (-)-isomer). [ 7 ]
The chemical synthesis of tramadol is described in the literature. [ 113 ] Tramadol [2-(dimethylaminomethyl)-1-(3-methoxyphenyl)cyclohexanol] has two stereogenic centers at the cyclohexane ring. Thus, 2-(dimethylaminomethyl)-1-(3-methoxyphenyl)cyclohexanol may exist in four different configurational forms: the (1 R ,2 R )-, (1 S ,2 S )-, (1 R ,2 S )-, and (1 S ,2 R )-isomers.
The synthetic pathway leads to the racemate (1:1 mixture) of (1 R ,2 R )-isomer and the (1 S ,2 S )-isomer as the main products. Minor amounts of the racemic mixture of the (1 R ,2 S )-isomer and the (1 S ,2 R )-isomer are formed as well. The isolation of the (1 R ,2 R )-isomer and the (1 S ,2 S )-isomer from the diastereomeric minor racemate [(1 R ,2 S )-isomer and (1 S ,2 R )-isomer] is realized by the recrystallization of the hydrochlorides .
The drug tramadol is a racemate of the hydrochlorides of the (1 R ,2 R )-(+)- and the (1 S ,2 S )-(−)-enantiomers.
The resolution of the racemate [(1 R ,2 R )-(+)-isomer / (1 S ,2 S )-(−)-isomer] was described [ 114 ] employing ( R )-(−)- or ( S )-(+)-mandelic acid. This process does not find industrial application, since tramadol is used as a racemate, despite known different physiological effects [ 115 ] of the (1 R ,2 R )- and (1 S ,2 S )-isomers, because the racemate showed higher analgesic activity than either enantiomer in animals [ 116 ] and in humans. [ 117 ]
Tramadol and desmetramadol may be quantified in blood, plasma, serum, or saliva to monitor for abuse, confirm a diagnosis of poisoning or assist in the forensic investigation of a sudden death. Most commercial opiate immunoassay screening tests do not cross-react significantly with tramadol or its major metabolites, so chromatographic techniques must be used to detect and quantify these substances. The concentration of desmetramadol in the blood or plasma of a person who has taken tramadol is generally 10–20% that of the parent drug. [ 118 ] [ 119 ] [ 120 ]
In 2013, a team led by Michel de Waard (then at Université Joseph Fourier , Grenoble and the Grenoble Institute of Neuroscience , La Tronche [ 121 ] ) reported in Angewandte Chemie that tramadol had been found in relatively high concentrations (>1%) in the roots of the African pin cushion tree , Nauclea latifolia , concluding that it was a natural product as well as a human synthetic, and presenting a putative biosynthetic hypothesis for its origin. [ 122 ]
In 2014, Michael Spiteller ( Technische Universität Dortmund ) and collaborators reported results, also in Angewandte Chemie , that supported the conclusion that the presence of tramadol in those tree roots was the result of tramadol having been ingested by humans and having been administered to cattle (by farmers in the region); Spiteller et al. presented data that tramadol and its metabolites were present in animal excreta, which they then argue contaminated soil around the trees. [ 121 ] They further observed that tramadol and its mammalian metabolites were found in tree roots in the far north of Cameroon where the commercial drug was in use, but not in the south where it was not being administered. [ 121 ]
A news report appearing in Lab Times at the time of the latter, 2014 paper, and reporting on its contents, also reported that Michel de Waard (communicating author of the original paper) continued to contest the notion that tramadol in tree roots was the result of anthropogenic contamination. [ 123 ] The point was made that samples were taken from trees that grew in national parks, where livestock were forbidden, and it quoted de Waard extensively, who stated that "thousands and thousands of tramadol-treated cattle sitting around a single tree and urinating" would be required to produce the concentrations discovered. [ 123 ] [ better source needed ]
In 2016, Spiteller and colleagues followed up their preceding work with a radiocarbon analysis that supported their contention that the tramadol found in N. latifolia roots was of human synthetic origin rather than being plant-derived. [ 124 ]
Available dosage forms include liquids, syrups, drops, elixirs, effervescent tablets, powders for mixing with water, capsules, tablets including extended-release formulations, suppositories, compounding powder, and injections. [ 24 ]
The U.S. Food and Drug Administration (FDA) approved tramadol in March 1995, and an extended-release (ER) formulation in September 2005. [ 125 ] The ER formulation was protected by US patent nos. 6,254,887 [ 126 ] and 7,074,430. [ 127 ] [ 128 ] The FDA listed the patents' expiration as 10 May 2014. [ 127 ] However, in August 2009, the US District Court for the District of Delaware ruled the patents invalid, a decision upheld the following year by the Court of Appeals for the Federal Circuit. Manufacture and distribution of generic equivalents of Ultram ER in the United States was therefore permitted before the expiration of the patents. [ 129 ]
Effective August 2014, tramadol has been placed into Schedule IV of the federal Controlled Substances Act in the United States. [ 130 ] [ 131 ] Before that, some US states had already classified tramadol as a Schedule IV controlled substance under their respective state laws. [ 132 ] [ 133 ] [ 134 ]
Tramadol is classified in Schedule 4 (prescription only) in Australia, rather than as a Schedule 8 Controlled Drug (Possession without authority illegal) like most other opioids . [ 24 ]
Effective May 2008, Sweden classified tramadol as a controlled substance in the same category as codeine and dextropropoxyphene , but allows a normal prescription to be used. [ 135 ]
In June 2014, the United Kingdom's Home Office classified tramadol as a Class C, Schedule 3 controlled drug, but exempted it from the safe custody requirement. [ 8 ]
In October 2023, New Zealand's Medsafe reclassified tramadol as a Class C2 Controlled Drug (in addition to its existing status as a prescription only medication). [ 136 ]
Illicit use of the drug is thought to be a major factor in the success of the Boko Haram terrorist organization. [ 137 ] [ 138 ] [ 139 ] When used at higher doses, the drug "can produce similar effects to heroin." [ 137 ] One former member said, "whenever we took tramadol, nothing mattered to us anymore except what we were sent to do because it made us very high and very bold, it was impossible to go on a mission without taking it." [ 137 ] Tramadol is also used as a coping mechanism in the Gaza Strip under Israeli siege . [ 140 ] It is also abused in the United Kingdom , inspiring the title of the TV show Frankie Boyle's Tramadol Nights (2010). [ 141 ] [ 142 ]
From March 2019, the Union Cycliste Internationale (UCI) banned the drug, after riders were using the painkiller to improve their performance. [ 143 ] [ 144 ]
Tramadol may be used to treat post-operative, injury-related, and chronic (e.g., cancer-related) pain in dogs and cats as well as rabbits, coatis , many small mammals including rats and flying squirrels , guinea pigs , ferrets , and raccoons . [ 153 ] | https://en.wikipedia.org/wiki/Tramadol |
In ecology , a tramp species is an organism that has been spread globally by human activities. The term was coined by William Morton Wheeler in the bulletin of the American Museum of Natural History in 1906, used to describe ants that "have made their way as well known tramps or stow-aways [sic] to many islands". [ 1 ] The term has since widened to include non-ant organisms, but remains most popular in myrmecology . Tramp species have been noted in multiple phyla spanning both animal and plant kingdoms, including but not limited to arthropods , mollusca , bryophytes , and pteridophytes . The term "tramp species" was popularized and given a more set definition by Luc Passera in his chapter [ 2 ] of David F. Williams's 1994 book Exotic Ants: Biology, Impact, And Control Of Introduced Species . [ 3 ]
Tramp species are organisms that have stable populations outside their native ranges. [ 4 ] They are closely associated with human activities. [ 4 ] They are disturbance-specialists, [ 4 ] and are characterized by their synanthropic associations with humans, [ 5 ] as their primary mode of expansion is human-mediated dispersal. [ 6 ] That being said, tramp species are not limited to anthropogenically disturbed habitats ; they have the potential to invade pristine habitats, especially when established in a new area. [ 7 ] For example, Anoplolepis gracilipes was able to invade undisturbed forest ecosystems in Australia after being introduced and having an established population in northeast Arnhem Land . [ 8 ] It is important to note that while some tramp species are invasive , the majority of them are not. [ 9 ] Some can exist alongside native species without competing with them, simply occupying unfilled niches , as is the case with some populations of Tapinoma melanocephalum and Monomorium pharaonis , which rarely interfere with native species outside human settlement areas. [ 9 ]
Ants must meet a more rigid list of criteria to be considered "true" tramp species. The most cited body of work outlining these traits comes from Luc Passera. [ 2 ] His primary and most important criterion is that the distribution of the species must be linked to human activities, which he refers to as " anthropophilic tendency". [ 2 ] He also lists the following traits as likely common to all tramp species: small size, monomorphism of worker ants (worker ants having only one phenotype ), high rates of polygyny , unicoloniality , strong interspecific aggressiveness, worker ant sterility , and colony reproduction by budding . [ 2 ] These traits may appear with more or less intensity among considered tramp species, [ 2 ] and in fact, the literature does not currently require a tramp species to possess every single one of these attributes. [ 10 ] Ant tramp species in particular can be ecological indicators of an ecosystem's susceptibility to invasion [ 7 ] or of ecological instability. [ 2 ]
All tramp species are distributed globally as a result of human transportation . [ 6 ] [ 11 ] [ 12 ] [ 13 ] [ 9 ] [ 14 ] [ 7 ] [ 5 ] As such, they are almost always present in urban or human-settled environments, and have colonizing mechanisms that are well adapted to human cohabitation, [ 14 ] referred to as possessing "anthropogenically reinforced dispersal biology". [ 12 ] The globalization of trade and travel has contributed significantly to the dispersal of tramp species worldwide. [ 12 ] Trade activities involving the importation and exportation of cargo on ships (often containing plants, soil, wood and other biological media) are noted as an especially important method of introduction. [ 6 ] These often repeated introductions (as shipments will frequently come from the same place) fortify the genetic variability and initial population sizes of newly transplanted tramp species, which facilitates their establishment in novel environments. [ 6 ] After their human-mediated introductions, tramp species can also benefit from human disturbance of the environment. Anthropogenic forces (such as construction and agriculture ) can dramatically impact local fauna and flora , weakening the environment and making the area more susceptible to the encroachment of tramp species. [ 14 ] This phenomenon is noted as a particularly tough issue in Tropical Asia , where the monocropping practices of local rubber plant farms have decimated indigenous species assemblages and habitat structures, allowing the establishment of many problematic tramp species. [ 7 ] Another example is the Thousand Islands Archipelago in Indonesia , where the small tropical islands are especially vulnerable to human disturbance, which has facilitated the establishment of multiple tramp species. [ 14 ]
The range expansion of tramp ants is projected to increase with weather pattern changes due to climate change . [ 6 ] As many tramp species are well adapted to disturbances in their native habitat, they are particularly resilient to large-scale, unpredictable weather events (such as floods , wildfires and monsoons ), which are set to increase in frequency as anthropogenic activity continues to affect global systems. [ 6 ]
Tramp species can have similar effects to invasive species, and in some literature the term "tramp species" is used as a synonym for invasive. [ 15 ] [ 16 ] [ 7 ] [ 6 ] As such they can outcompete and displace local fauna, decreasing species richness. [ 14 ] [ 10 ] [ 9 ] They can also have direct impacts on human health, as is the case with Solenopsis geminata and Brachyponera sennaarensis . [ 10 ] Both of these venomous species have been known to sting humans, often causing severe anaphylactic reactions; this has made them known public health hazards in the regions where they are found. [ 17 ] [ 18 ] [ 19 ] Tramp species can also be nuisance pests , damaging housing structures and crops . [ 12 ] [ 10 ] However, tramp species are not always invasive, and can cohabitate without harming local environments or species assemblages. [ 9 ]
As tramp species are so diverse in their ecology , there is no universal protocol to prevent their encroachment into new territories. However, certain strategies can be employed to mitigate them. In some environments, maintaining the diversity of local species assemblages can deter certain tramp species. [ 7 ] Currently, there is a deficiency in our ability to identify potential new tramp species quickly - a phenomenon dubbed the " taxonomic impediment ", a delay in identifying invasive species threats. [ 6 ] As such, it is essential to improve identification tools for preventative action against tramp species. [ 6 ] Interdepartmental cooperation for pest management can be very effective in tramp species management, as a collaborative effort between affected stakeholders increases the likelihood of success in mitigation . [ 12 ] Direct pest management efforts have included baits with insect growth regulators to sterilize colonies, with varying degrees of success. [ 20 ] One method that can be successful for urban infestations of tramp ants specifically (depending on their biology) in temperate zones is to shut off heat sources for two weeks or more, as many are heat-adapted species. [ 9 ] [ 21 ]
Anoplolepis gracilipes [ 13 ] [ 10 ]
Brachyponera sennaarensis [ 10 ]
Cardiocondyla emeryi [ 22 ]
Cardiocondyla kagutsuchi [ 22 ]
Cardiocondyla nuda [ 14 ]
Cardiocondyla obscurior [ 23 ]
Cardiocondyla wroughtonii [ 22 ]
Hypoponera punctatissima [ 24 ]
Iridomyrmex anceps [ 10 ]
Lasius neglectus [ 25 ]
Linepithema humile [ 10 ] [ 13 ] [ 23 ]
Monomorium destructor [ 26 ] [ 10 ] [ 14 ]
Monomorium floricola [ 15 ] [ 22 ] [ 14 ]
Monomorium indicum [ 10 ]
Monomorium monomorium [ 14 ]
Monomorium pharaonis [ 9 ]
Nylanderia spp.*
Paratrechina flavipes [ 10 ]
Paratrechina jaegerskioeldi [ 10 ]
Paratrechina longicornis [ 15 ] [ 13 ] [ 27 ] [ 14 ]
Pheidole fervens [ 22 ]
Pheidole megacephala [ 13 ] [ 22 ] [ 27 ]
Pheidole teneriffana [ 10 ]
Solenopsis geminata [ 10 ] [ 15 ] [ 13 ] [ 27 ]
Solenopsis invicta [ 23 ]
Tetramorium caespitum [ 21 ]
Tetramorium bicarinatum [ 10 ] [ 22 ]
Tetramorium lanuginosum [ 21 ] [ 13 ] [ 22 ]
Tetramorium pacificum [ 21 ]
Tetramorium simillimum [ 21 ]
Tapinoma melanocephalum [ 10 ] [ 13 ] [ 22 ] [ 27 ] [ 9 ] [ 14 ]
Tapinoma simrothi [ 10 ]
Technomyrmex albipes [ 14 ]
Technomyrmex brunneus [ 22 ]
Trichomyrmex destructor [ 13 ]
Wasmannia auropunctata [ 27 ] [ 15 ]
Chondromorpha xanthotricha [ 28 ]
Glyphiulus granulatus [ 28 ]
Orthomorpha coarcata [ 28 ]
Oxidus gracilis [ 28 ]
Pseudospirobolellus avernus [ 28 ]
Trigoniulus corallinus [ 28 ]
Ctenolepisma longicaudata [ 12 ]
Cryptotermes sp. [ 16 ]
Calliscelio elegans [ 11 ]
Platygastroidea superfamily [ 29 ]
Bradybaena similaris [ 30 ]
Deroceras panormitanum [ 31 ]
Deroceras invadens [ 31 ]
Diplasiolejeunea ingekarolae [ 32 ]
Daltonia marginata [ 32 ]
Daltonia splachnoides [ 32 ]
Nephrolepis biserrata [ 32 ]
Williams and Lucky 2020 [ 6 ] provide a thorough listing of all known Nylanderia species with established populations outside their native ranges. | https://en.wikipedia.org/wiki/Tramp_species |
Trampling is the act of walking on something repeatedly by humans or animals.
Trampling on open ground can destroy the above ground parts of many plants and can compact the soil, thereby creating a distinct microenvironment that specific species may be adapted for.
It can be used as part of a wildlife management strategy alongside grazing .
When carrying out investigations like a belt transect , trampling should be avoided. At other times, it is part of the experimental design .
Trampling can be a disturbance to ecology and to archaeological sites . [ 1 ]
| https://en.wikipedia.org/wiki/Trampling |
In computer programming , the word trampoline has a number of meanings, and is generally associated with jump instructions (i.e. moving to different code paths).
Trampolines (sometimes referred to as indirect jump vectors) are memory locations holding addresses pointing to interrupt service routines, I/O routines, etc. Execution jumps into the trampoline and then immediately jumps out, or bounces, hence the term trampoline . They have many uses:
Some implementations of trampolines cause a loss of no-execute stacks (NX stack). In the GNU Compiler Collection (GCC) in particular, a nested function builds a trampoline on the stack at runtime, and then calls the nested function through the data on stack. The trampoline requires the stack to be executable.
No-execute stacks and nested functions are mutually exclusive under GCC. If a nested function is used in the development of a program, then the NX stack is silently lost. GCC offers the -Wtrampolines warning to alert of the condition.
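A minimal sketch of the situation described above, using GCC's nested-function extension (a GNU extension, not standard C; this example is illustrative and compiles with GCC specifically). Taking the address of a nested function that captures a local variable forces GCC to build an executable trampoline on the stack, which -Wtrampolines will report:

```c
#include <stdio.h>

/* Calls f on each element of xs; only the function pointer matters here. */
static void apply(const int *xs, int n, void (*f)(int)) {
    for (int i = 0; i < n; i++) f(xs[i]);
}

int main(void) {
    int total = 0;

    /* GNU C nested function capturing 'total'. Passing its address
     * to apply() below makes GCC materialize a stack trampoline,
     * which requires an executable stack. */
    void add(int x) { total += x; }

    const int xs[] = {1, 2, 3, 4};
    apply(xs, 4, add);
    printf("total = %d\n", total);
    return 0;
}
```

Compiling with gcc -Wtrampolines example.c warns that a trampoline is generated; on a system enforcing a no-execute stack, calling through such a pointer can fault, which is why the two features conflict.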
Software engineered using a secure development lifecycle often does not allow the use of nested functions due to the loss of NX stacks. [ 11 ] | https://en.wikipedia.org/wiki/Trampoline_(computing) |
tranSMART is an open-source data warehouse designed to store large amounts of clinical data from clinical trials, as well as data from basic research, so that it can be interrogated together for translational research . It is also designed to be used by many people, across organizations. It was developed by Johnson & Johnson , in partnership with Recombinant Data Corporation. The platform was released in Jan 2012 and has been governed by the tranSMART Foundation since its initiation in 2013. In May 2017, the tranSMART Foundation merged with the i2b2 Foundation to create an organization with the key mission to advance the field of precision medicine.
The tranSMART platform has been adopted and evaluated by numerous pharmaceutical companies, not-for-profits and patient advocacy groups, academics, governmental organisations and service providers. [ 3 ] [ 4 ] At the Bio-IT World industry conference both the Innovative Medicines Initiative 's U-BIOPRED project and The Michael J. Fox Foundation were awarded a Best Practices Award for their application of the platform. [ 5 ] [ 6 ]
tranSMART is built on top of the i2b2 clinical data warehouse and leverages the i2b2 star schema for modelling clinical and low-dimensional data. [ 2 ] High-dimensional omics data is stored in dedicated tables where each of the data types (e.g., gene expression, SNP or metabolomics) retains its specific data structure. Both the Oracle and PostgreSQL database management systems are supported for its data storage. [ 2 ]
Researchers reported missing functionalities in the earlier versions of tranSMART (version 16.2 and before), which restricted the capabilities of the tool and the opportunities for research. In response, the tranSMART Foundation brought four leading pharmaceutical companies – Pfizer , Sanofi , Abbvie and Roche – together in October 2016 to help sponsor a joint project to develop the new functionality in a coherent way and improve tranSMART. The Hyve BV was the IT company responsible for the execution of the tranSMART 17.1 development project.
The focus of the sponsors was to add three main functionalities:
Another goal of the project was to improve the quality of the tranSMART back-end to make it ready for the future. The back-end improvements implemented in the development project, delivered early in 2017, had a large impact on the capabilities of tranSMART, as well as on its current quality.
Functional improvements of the 17.1 version include the support for time series, samples, and cross-study concepts. This is accomplished by re-alignment with the i2b2 data model, on top of which tranSMART was built, extended with the features which make tranSMART unique: the organization in studies, the support for modelling clinical trial event grouping and high dimensional data support. Technical improvements include automated-test coverage for all Core API and REST API calls and documentation of those calls and the full database schema.
The new functionalities developed in the tranSMART 17.1 project, however, were not supported by the existing tranSMART user interface. Therefore, The Hyve BV started the development of a new, modern user interface for tranSMART 17.1 that would accommodate the added functionalities. In October 2018, Glowing Bear, the user interface built on tranSMART version 17.1 for cohort selection and exploratory analyses, was released. This user interface has been adopted by leading medical and research institutions such as the Princess Máxima Center for Pediatric Oncology, the Netherlands Twin Registry, and Leiden University Medical Center .
The 17.1 project failed to meet its objectives for compatibility with the i2b2 data model, and still lacks the functionality of the full tranSMART interface. The 17.1 code is only server-side code and was only released by the Foundation as a 'developer release' and never met the requirements for a full release. A new release of tranSMART (v19) is now in beta test, and not only meets the compatibility requirements with i2b2, but has an enhanced user interface with additional analytical workflows. This release will be supported by the Foundation, and will be the sole branch of the codebase to continue with support from the i2b2 tranSMART Foundation. | https://en.wikipedia.org/wiki/TranSMART |
Trandolapril is an ACE inhibitor used to treat high blood pressure . It may also be used to treat other conditions. It is structurally similar to ramipril but has a cyclohexane group. It is a prodrug that must be metabolized into its active form. It has a longer half-life than other agents in this class.
It was patented in 1981 and approved for medical use in 1993. [ 1 ] It is marketed by Abbott Laboratories under the brand name Mavik .
Side effects reported for trandolapril include nausea , vomiting , diarrhea , headache , dry cough, dizziness or lightheadedness when sitting up or standing , hypotension , or fatigue . [ citation needed ]
Patients also on diuretics may experience an excessive reduction of blood pressure after initiation of therapy with trandolapril. It can reduce potassium loss caused by thiazide diuretics and increase serum potassium when used alone. Therefore, hyperkalemia is a possible risk. Increased serum lithium levels can occur in patients who are also on lithium. [ citation needed ]
Trandolapril is teratogenic (US: pregnancy category D) and can cause birth defects and even death of the developing fetus. The highest risk to the fetus is during the second and third trimesters. When pregnancy is detected, trandolapril should be discontinued as soon as possible. Trandolapril should not be administered to nursing mothers. [ citation needed ]
Combination therapy with paricalcitol and trandolapril has been found to reduce fibrosis in obstructive uropathy . [ 2 ]
Trandolapril is a prodrug that is de-esterified to trandolaprilat. It is believed to exert its antihypertensive effect through the renin–angiotensin–aldosterone system . Trandolapril has a half-life of about six hours, while trandolaprilat has a half-life of about ten hours. Trandolaprilat has about eight times the activity of its parent drug. About one-third of trandolapril and its metabolites are excreted in the urine, and about two-thirds in the feces. Serum protein binding of trandolapril is about 80%. [ citation needed ]
Trandolapril acts by competitive inhibition of angiotensin-converting enzyme (ACE), a key enzyme in the renin–angiotensin system , which plays an important role in regulating blood pressure. [ citation needed ] | https://en.wikipedia.org/wiki/Trandolapril |
The Trans-European Drug Information ( TEDI ) project is a European database compiling information from different drug checking services located on the European continent . The non-governmental organizations feeding into the database are referred to as the TEDI network.
The first drug checking service in Europe opened in 1986 in Amsterdam, allowing drug users to analyze the chemical composition of illicit substances that they consume. [ 1 ] In the following years, a number of nonprofit organizations present in various other drug scenes [ 2 ] in several countries (including in Austria , France , Germany , the Netherlands , Portugal , Spain , and Switzerland ) set up drug checking services. [ 3 ]
In 2011, a database was created to centralize information from these services and to allow the sharing of alerts (for example, on new adulterants in illicit substances [ 4 ] or the circulation of novel psychoactive substances [ 5 ] ) and the monitoring of drug markets across borders. [ 6 ]
Between 2008 and 2013, organizations member of the TEDI network analyzed more than 45,000 samples of recreational drugs , showing similarities and discrepancies between areas of the European continent in terms of purity, formulation, or prices. [ 7 ]
The project and the network are hosted by the Polish nonprofit TEDI Nightlife Empowerment & Well-being Network (also known as NEW net or SaferNightlife ). [ citation needed ]
As of 2022, the TEDI network comprised 20 organizations across 13 countries (Austria, Belgium, Finland, France, Germany, Italy, Luxembourg, the Netherlands, Portugal, Slovenia, Spain, Switzerland, and the United Kingdom). [ 8 ] The TEDI project's team is made up of professionals from various fields ( substance use disorder prevention workers, pharmacists , chemists , etc.) drawn from the network's member organizations. [ 6 ]
The aims of the Trans-European Drug Information project are to collect, monitor and analyze the evolution of the European recreational drug market trends, and to regularly report the findings. Since 2011, the database has facilitated the centralization and comparison of information collected at the local level. [ citation needed ]
The TEDI database also feeds into the early warning system of the European Union Drugs Agency (EUDA, formerly EMCDDA). EUDA and the TEDI network also collaborate on the organization of conferences [ 9 ] and trainings. [ 10 ]
In 2019, the mobile application TripApp was launched by a consortium of organizations, sharing real-time alerts [ 11 ] from the TEDI database, in addition to connecting app users with local harm reduction providers. [ 12 ] The app received an award from the Council of Europe in 2021. [ 13 ]
As part of the project, guidelines and methodological recommendations have also been published. | https://en.wikipedia.org/wiki/Trans-European_Drug_Information |