Tara Louise Pukala is an Australian scientist who is a professor of biological chemistry at the University of Adelaide,[1][2] a board member of Nature Scientific Reports, a Superstar of STEM (2023–2024), and director of the Adelaide Proteomics Centre.[3][4] Pukala was awarded a PhD from the University of Adelaide in 2006 for her thesis "Structural and mechanistic studies of bioactive peptides".[5] She then moved to the University of Cambridge in a post-doctoral role, researching native mass spectrometry.[6][7] Since 2017, she has been the director of the Adelaide Proteomics Centre,[8][9] leading a multidisciplinary group of researchers.[10] Pukala is a member of the editorial board for Analytical Chemistry, and an associate editor for Frontiers in Chemistry (Medicinal and Pharmaceutical Chemistry). She is also an associate editor of Rapid Communications in Mass Spectrometry,[11] an editor of the European Journal of Mass Spectrometry,[12] and a board member of Nature Scientific Reports.[13] She was also vice-president of the Australian and New Zealand Society for Mass Spectrometry (ANZSMS).[14][15] Pukala's research sits at the intersection of chemistry and biology, working with biomolecules including proteins and DNA. She is interested in visualising the structures and shapes of various biomolecules and the ways they interact with each other. This research aids understanding in the medical and health sciences and of underlying biological and chemical processes.[16]
https://en.wikipedia.org/wiki/Tara_Pukala
Tarantism (/ˈtɛrənˌtɪzəm/ TERR-ən-tiz-əm)[1] is a form of hysteric behaviour originating in Southern Italy, popularly believed to result from the bite of the wolf spider Lycosa tarantula (distinct from the broad class of spiders also called tarantulas). A better[clarification needed] candidate cause is Latrodectus tredecimguttatus, commonly known as the Mediterranean black widow or steppe spider, although no link between such bites and the behaviour of tarantism has ever been demonstrated.[2] Historically, however, the term refers to a dancing mania – characteristic of Southern Italy – which likely had little to do with spider bites. The tarantella dance supposedly evolved from a therapy for tarantism. It was originally described in the 11th century.[3] The condition was common in Southern Italy, especially in the province of Taranto, during the 16th and 17th centuries. There were strong suggestions that there was no organic cause for the heightened excitability and restlessness that gripped the victims. The stated belief of the time was that victims needed to engage in frenzied dancing to prevent death from tarantism. Supposedly a particular kind of dance, called the tarantella, evolved from this therapy. A prime location for such outbursts was the church at Galatina, particularly at the time of the Feast of Saints Peter and Paul on 29 June.[4] "The dancing is placed under the sign of Saint Paul, whose chapel serves as a 'theatre' for the tarantulees' public meetings. The spider seems constantly interchangeable with Saint Paul; the female tarantulees dress as 'brides of Saint Paul'."[5] As a climax, "the tarantulees, after having danced for a long time, meet together in the chapel of Saint Paul and communally attain the paroxysm of their trance, ... the general and desperate agitation was dominated by the stylised cry of the tarantulees, the 'crisis cry', an ahiii uttered with various modulations".[6] Francesco Cancellieri, in his exhaustive treatise on tarantism, takes note of semi-scientific, literary, and popular observations, both recent and ancient, giving each similar weight.[7] He notes a report that in August 1693, a doctor in Naples had himself been bitten by two tarantulas, with six witnesses and a notary, but did not suffer the dancing illness. Cancellieri in part attributes this illness not only to the spiders but to the locale, since tarantism was mainly seen in Basilicata, Apulia, Sicily, and Calabria. He states: Quando uno è punto da questa mal augurata bestia, si fanno cento diverse mosse in un momento. Si piange, si balla, si vomita, si trema, si ride, s'impallidisce, si grida, si sviene, si soffre gran dolore, e finalmente dopo qualche giorno si muore, se uno non è soccorso. Il sudore, e gli antidoti sollevano l'ammalato; ma il sovrano, ed unico rimedio è la Musica. (When one is bitten by this ill-omened beast, a hundred different movements follow in a moment. One weeps, dances, vomits, trembles, laughs, turns pale, cries out, faints, and suffers great pain, and finally, after a few days, one dies if not aided. Sweat and antidotes relieve the sick, but the sovereign and only remedy is Music.)[8] He goes on to describe some specific observations of the malady, which typically afflicted peasants, alone or in groups, on hot summer days, causing indolence. He then describes how only treatment with dancing music could restore them to vitality; for example: [...]
e trovammo il misero contadino oppresso da difficile respirazione, ed osservammo inoltre, che la faccia, e le mani erano incominciate a divenir nere. E perchè il suo male era a tutti noto, si portò la Chitarra, la cui armonìa subito, che da lui fu intesa, cominciò a mover prima li piedi, poco dipoi le gambe. Si reggeva appresso sulle ginocchia. Indi a poco intervallo s'alzò passenggiando. Finalmente fra lo spazio di un quarto d'ora saltava si, che si sollevava ben tre palmi da terra. Sospirava, ma con empito sì grande, che portava terrore a' circostanti; e prima d'un'ora se gli tolse il nero dalle mani, e dal viso, riacquistando il suo natio colore. ([...] and we found the poor peasant oppressed by difficult breathing, and we observed also that his face and hands had begun to turn black. And because his illness was known to all, a guitar was brought, and as soon as he heard its harmony he began to move first his feet, and shortly afterwards his legs. He then raised himself on his knees. A little later he rose, walking about. Finally, within a quarter of an hour, he was leaping a good three palms from the ground. He sighed, but with such great force that it terrified the bystanders; and within an hour the blackness left his hands and face, and he regained his natural colour.)[9] John Crompton proposed that ancient Bacchanalian rites that had been suppressed by the Roman Senate in 186 BC went underground, reappearing under the guise of emergency therapy for bite victims.[10] Although the popular belief persists that tarantism results from a spider bite, it remains scientifically unsubstantiated. Donaldson, Cavanagh, and Rankin (1997)[11] conclude that the actual cause or causes of tarantism remain unknown. In recent years, tarantism has been defined by its connection to dance and music. In the 1990s and 2000s, people began rediscovering the genre of the tarantella, and in particular the pizzica. In 1998, Salento began hosting an annual music festival, Notte della Taranta. Musicians tour throughout the region, and the festival culminates with a large late-night concert held in Melpignano.[12] Composer and musician Ludovico Einaudi directed the festival in 2010 and 2011, and released his album Taranta Project in 2015.[13] Many historical and cultural references are associated with this disease and the ensuing "cure" – the tarantella. It is, for example, a key image in Henrik Ibsen's A Doll's House and in the spell "Tarantallegra" from the Harry Potter series. It was also mentioned in the novel 39 Clues: Superspecial Outbreak. The spider, its venom and the associated affliction are also depicted in episode 4 of the Indian television show Byomkesh Bakshi, titled "Makdi ka Ras/Makorshar Rawsh".[citation needed]
https://en.wikipedia.org/wiki/Tarantism
Tardigrade specific proteins are types of intrinsically disordered proteins specific to tardigrades. These proteins help tardigrades survive desiccation, one of the adaptations which contribute to the tardigrade's extremotolerant nature. Tardigrade specific proteins are strongly influenced by their environment, leading to adaptive malleability across a variety of extreme abiotic environments. The mechanisms of tardigrade desiccation protection were originally thought to result from high levels of the sugar trehalose. Trehalose is used by organisms such as yeast to avoid desiccation in dry environments by working with heat shock proteins[1] to keep desiccation-sensitive proteins in solution.[2][3] However, while tardigrades can accumulate small amounts of trehalose, the levels are insufficient to provide protection from extreme conditions.[4] Other molecules which help certain organisms avoid cellular desiccation include late embryogenesis abundant proteins, which provide protection to embryonic cotton seeds.[5] The proteins actually responsible for the tardigrade's hardiness, including the cytoplasmic and secreted abundant heat soluble proteins, were discovered when searching for late embryogenesis abundant proteins in tardigrades.[6] One strategy used by the tardigrade to survive in dry environments is anhydrobiosis, a process in which an organism can lose nearly all of its water and enter an ametabolic state.[7] Tardigrade specific proteins are intrinsically disordered proteins, which have no fixed three-dimensional structure. These proteins adopt many different conformations, collectively called an ensemble, moving among different structures. Because of this, intrinsically disordered proteins can react strongly to the environment they inhabit.[8] There are three families of tardigrade specific proteins, each named after where the protein is localized within a cell. These proteins are similar to late embryogenesis abundant proteins but are specific to tardigrades. The three families do not resemble each other and are expressed or enriched during desiccation. Unlike traditional proteins, intrinsically disordered proteins do not precipitate out of solution or denature under high heat.[9] Tardigrades rely on these proteins to survive extreme environments, entering a dehydrated state called a tun. In most organisms, dehydration causes problems for cells, which need a hydrated environment for their proteins to function. However, tardigrade specific proteins assist in preventing aggregation of cell contents upon dehydration, and maintain the integrity of the cell membrane upon rehydration. Cytoplasmic abundant heat soluble (CAHS) proteins are highly expressed in response to desiccation. There are two hypotheses for their function in tardigrades. The vitrification hypothesis holds that, when a tardigrade becomes desiccated, the viscosity within its cells increases to the point that protein denaturation and membrane fusion are arrested.[10] A second hypothesis, the water replacement hypothesis, posits that CAHS proteins replace water around other desiccation-sensitive proteins, protecting the hydrogen bonds normally reliant on water.[11] CAHS proteins are dispersed throughout the cell in normal conditions, but form a network of filaments during environmentally stressful conditions. This network transforms the cytoplasm into a gel-like matrix and prevents the cell from collapsing as water leaches out.
[12] This state is reversible, and the proteins disaggregate when exposed to less stressful conditions.[13] When forming the filament network, CAHS proteins have long helical domains that interact with each other in a coiled manner. These interactions are possible due to the proteins' partial disorder, with two flexible tails surrounding the helical domains.[14] CAHS proteins have been studied to observe their interactions with trehalose, a sugar used by other species to prevent desiccation. Trehalose was found to interact at higher levels with CAHS proteins than other sugars such as sucrose.[15] Trehalose averages only 1% in most species of tardigrades, and in no species more than 3%, indicating that tardigrades use other strategies to tolerate dehydration.[6] Tardigrade CAHS protein injected into mice produced no inflammatory response or hemolysis.[16] Secreted abundant heat soluble (SAHS) proteins are similar to fatty acid-binding proteins, notably in their structure, with an antiparallel beta-barrel and internal fatty acid binding pocket.[17][18] SAHS proteins are often secreted into media and associated with special extracellular structures.[19] Dried tardigrades have an abundance of secretory cells which are not found in hydrated individuals. The mechanism behind SAHS proteins has not yet been determined, but the presence of secretory cells only during desiccation suggests they are used to protect cells during periods of dehydration. Mitochondrial abundant heat soluble (MAHS) proteins are localized in mitochondria and are responsible for protecting mitochondria during desiccation.[20] Because of its role in metabolizing reactive oxygen species, the mitochondrion is an important organelle to protect in extreme environments. During dehydration, the mitochondria of tardigrades grow much smaller and lose their cristae.[5] MAHS proteins may act to replace water in the membrane of the mitochondria, preventing uneven rehydration and membrane rupture.[21] Mitochondria, and the muscle contraction they power, are essential for the tardigrade to enter the "tun" state of anhydrobiosis.[22] Dsup is a DNA-associating protein, unique to the tardigrade,[23] that suppresses the occurrence of radiation-induced DNA breaks.[24][25][26][27] Dsup localized to nuclear DNA reduces single-strand breaks and double-strand breaks when subjected to ionizing radiation.[28] Late embryogenesis abundant proteins (LEA proteins) protect against protein aggregation due to dehydration or osmotic stress; however, no LEA proteins have been found in tardigrades.[6]
https://en.wikipedia.org/wiki/Tardigrade_specific_proteins
Tarenflurbil,[1] Flurizan or R-flurbiprofen, is a single enantiomer of the racemate NSAID flurbiprofen. For several years, research and trials for the drug were conducted by Myriad Genetics to investigate its potential as a treatment for Alzheimer's disease; that investigation concluded in June 2008 when the company announced it would discontinue development of the compound.[2] At proposed therapeutic concentrations, the molecule lacks anti-inflammatory activity and does not inhibit either cyclooxygenase 1 (COX-1) or cyclooxygenase 2 (COX-2) enzymes. Only the S-enantiomers of arylpropionic acid NSAIDs can potently inhibit COX, whereas the R-enantiomers exert almost no COX activity. R-Flurbiprofen is inefficiently converted into S-flurbiprofen, with 1.5% of the R-enantiomer undergoing bioinversion to the S-form. Although the compound lacks activity against COX, studies have shown that it is a potent reducer of levels of beta amyloid,[3][4] the main constituent of amyloid plaques in Alzheimer's disease, and therefore there was interest in the drug as a therapeutic agent. In 2005, Myriad Genetics reported the results of its Phase II clinical trial of Flurizan; it was the largest Alzheimer's drug treatment trial of R-flurbiprofen to that date.[5] Patients were split into three treatment groups, receiving placebo, 400 mg, or 800 mg of R-flurbiprofen twice daily for a year. Results from this trial showed that the drug was well tolerated, and positive trends were observed with the 800 mg twice-daily dose in patients with mild Alzheimer's disease. A subgroup of patients who were diagnosed with mild disease and had high plasma drug levels showed significantly less decline in two primary behavioral outcomes: the Activities of Daily Living scale (ADCS-ADL, Alzheimer's Disease Cooperative Study – ADL) and global function (CDR-SB, Clinical Dementia Rating – Sum of Boxes). Approximately 80 patients enrolled in the optional follow-on study showed continuing benefits with R-flurbiprofen, with increasing positive trends over this period for all primary outcomes after 24 months. On March 5, 2007, Myriad reported final results of the two-year trial, showing that 42% of those 80 patients showed improvement or no decline in one or more of the three primary endpoints of cognition, global function and activities of daily living, compared to a typical 10% of patients on placebo. A Phase III clinical study evaluated 800 mg R-flurbiprofen twice daily versus placebo for 18 months exclusively in 1,800 patients with mild Alzheimer's disease.[6] This second trial concluded in February 2008, with results reported in the summer. After Phase III testing, which included nearly 1,700 patients with mild Alzheimer's disease treated for 18 months with either Flurizan or placebo, Myriad Genetics concluded that the drug did not improve thinking ability or the ability of patients to carry out daily activities significantly more than placebo. Peter Meldrum, the chief executive of Myriad, announced on June 30, 2008, that the company would no longer be developing Flurizan.[2] Prior to this termination, Myriad had sold distribution rights in the European Union to Lundbeck for an initial payment of $100 million, which Lundbeck indicated it would take as a write-down.[2][7]
https://en.wikipedia.org/wiki/Tarenflurbil
Target-mediated drug disposition (TMDD) is the process in which a drug binds with high affinity to its pharmacological target (for example, a receptor) to such an extent that this affects its pharmacokinetic characteristics. Various drug classes can exhibit TMDD; most often these are large compounds (biologics such as antibodies, cytokines or growth factors[1]), but smaller compounds can also exhibit TMDD (such as warfarin and CHK-336).[2] A typical TMDD pattern for antibodies displays non-linear clearance and is seen at concentration ranges usually described as 'mid-to-low'. In this concentration range, the target is partly saturated.[3][4]
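The non-linear kinetics of TMDD are commonly described with a compartmental model of the kind introduced by Mager and Jusko; the article itself does not specify a model, so the following is a minimal sketch using conventional parameter names, with free drug concentration C, free target R, and drug–target complex RC:

\frac{dC}{dt} = \mathrm{In}(t) - k_{el}\,C - k_{on}\,C\,R + k_{off}\,RC

\frac{dR}{dt} = k_{syn} - k_{deg}\,R - k_{on}\,C\,R + k_{off}\,RC

\frac{dRC}{dt} = k_{on}\,C\,R - (k_{off} + k_{int})\,RC

Here In(t) is the drug input, k_{el} is first-order (non-target) elimination, k_{on} and k_{off} are the binding and dissociation rates, k_{syn} and k_{deg} are target turnover rates, and k_{int} is internalization of the complex. At high drug concentrations the target is saturated and clearance appears linear; at mid-to-low concentrations elimination through the complex adds a capacity-limited pathway, producing the non-linear clearance described above.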
https://en.wikipedia.org/wiki/Target-mediated_drug_disposition
In a zinc finger protein (ZFP), certain sequences of amino acid residues are able to recognise and bind to an extended target site of four or even five nucleotides.[1] When this occurs in a ZFP in which the three-nucleotide subsites are contiguous, one zinc finger interferes with the target site of the zinc finger adjacent to it, a situation known as target-site overlap. For example, a zinc finger containing arginine at position -1 and aspartic acid at position 2 along its alpha-helix will recognise an extended sequence of four nucleotides of the form 5'-NNG(G/T)-3'. The hydrogen bond between Asp 2 and the N4 of the cytosine or adenine base paired to the guanine or thymine, respectively, defines these two nucleotides at the 3' position, defining a sequence that overlaps into the subsite of any zinc finger attached N-terminally.[2][3] Target-site overlap limits the modularity of those zinc fingers which exhibit it, by restricting the number of situations in which they can be applied. If some zinc fingers are restricted in this way, then a larger repertoire is required to address the situations in which those zinc fingers cannot be used.[3] Target-site overlap may also affect the selection of zinc fingers during phage display, in cases where amino acids on a non-randomised finger, and the bases of its associated subsite, influence the binding of residues on the adjacent finger which contains the randomised residues. Indeed, attempts to derive zinc finger proteins targeting the 5'-(A/T)NN-3' family of sequences by site-directed mutagenesis of finger two of the C7 protein were unsuccessful due to the Asp 2 of the third finger of that protein.[4] The extent to which target-site overlap occurs is largely unknown, with a variety of amino acids having shown involvement in such interactions.[1] When interpreting the zinc finger repertoires presented by investigations using ZFP phage display, it is important to appreciate the effects that the rest of the zinc finger framework may have had on these selections.[3] Since the problem appears to occur in only a limited number of cases, it is nullified in most situations, where there are a variety of suitable targets to choose from,[2] and only becomes significant if binding to one specific DNA sequence is required (e.g. blocking binding by endogenous DNA-binding proteins).
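To make the geometry concrete, the following toy model (the representation and helper names are illustrative assumptions, not from the literature) encodes each finger's specificity as a 3- or 4-letter IUPAC string. Fingers are listed N- to C-terminally, so their subsites run 3' to 5' along the recognition strand; a 4th letter is the extended position, which falls on the 5'-most base of the subsite of the N-terminally adjacent finger:

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "K": "GT", "M": "AC",
         "S": "CG", "W": "AT", "N": "ACGT"}

def check_overlap(fingers):
    """fingers: list of 3- or 4-letter IUPAC specs, N- to C-terminal.
    Returns (i, i-1) index pairs where finger i's extended position
    conflicts with the subsite of the finger N-terminal to it."""
    conflicts = []
    for i in range(1, len(fingers)):
        spec = fingers[i]
        if len(spec) == 4:
            # The overlap base constrains the 5'-most base of the
            # N-terminally adjacent finger's 3-nt subsite.
            allowed_prev = set(IUPAC[fingers[i - 1][0]])
            allowed_ovl = set(IUPAC[spec[3]])
            if not (allowed_prev & allowed_ovl):
                conflicts.append((i, i - 1))
    return conflicts

For example, check_overlap(["ACG", "NNGK", "GCG"]) returns [(1, 0)]: the middle finger's extended (G/T) specification is incompatible with the A required at the 5'-most base of the first finger's subsite, whereas replacing "ACG" with "GCG" removes the conflict.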
https://en.wikipedia.org/wiki/Target-site_overlap
Target 2035 is a global effort or movement to discover, as open science, pharmacological modulators for every protein in the human proteome by the year 2035.[1][2][3] The effort is led by the Structural Genomics Consortium, with the intention that the movement evolve organically. Target 2035 was born out of the success that chemical probes have had in elevating or de-prioritizing the therapeutic potential of protein targets. The availability of open-access pharmacological tools is a largely unmet need in drug discovery, especially for the dark proteome. The first five years will include building mechanisms (Phase 1 below) which allow researchers to find collaborators with like-minded goals towards discovering a pharmacological tool for a specific protein or protein family, and making it open access (without encumbrances due to intellectual property). One strategic goal is seeding new open science programs on components of the drug discovery pipeline, with the aim of bringing medicines to the bedside equitably, affordably and rapidly.[4] Phase 1 will also build a framework that welcomes new and (re-)emerging enabling technologies in hit-finding and characterization.[5][6][7][8] An update on the progress has been published.[9] Target 2035 will draw on successes from past and current publicly funded programs, including the National Institutes of Health (NIH) Illuminating the Druggable Genome initiative for under-explored kinases, GPCRs and ion channels, the Innovative Medicines Initiative's RESOLUTE project on human SLCs, the Innovative Medicines Initiative's Enabling and Unlocking Biology in the Open (EUbOPEN), and the Innovative Medicines Initiative's Unrestricted Leveraging of Targets for Research Advancement and Drug Discovery. The NIH recently re-iterated its commitment to making its data open, to mitigate the tens of billions of dollars lost to irreproducible data.[10] Target 2035 will collaborate with the Chemical Probes Portal and with open science platforms, e.g. Just One Giant Lab, in order to spread awareness and education of best practices for chemical modulators[11][12][13] and the benefits of open science, respectively. The following draft plan has been outlined in a white paper.[14] The first phase, from 2020 to 2025, would be structured to build the foundation for a concerted global effort, and would aim to collect, characterize and make available existing pharmacological modulators for key representatives from all protein families in the current druggable genome (~4,000 proteins), as well as to develop critical and centralized infrastructure to facilitate data collection, curation, dissemination, and mining that will power the scientific community worldwide. This phase might also create centralized facilities to provide quantitative genome-scale biochemical and cell-based profiling assays to the federated community, as well as to coordinate the development of new technologies to extend the definition of druggability. This first phase will complement and extend ongoing efforts to create chemical tools and chemogenomic libraries to blanket priority gene families, such as the kinase and epigenetics families. One year in, Target 2035 has so far yielded infrastructure to house data on chemogenomic compounds reported in the literature. A progress update was published recently.
[15] Towards the development of new technologies, Target 2035 started a new initiative, Critical Assessment of Computational Hit-Finding Experiments (CACHE), aimed at benchmarking computational methods for hit-finding.[8] The first competition – finding ligands for the WD40 domain of LRRK2 – started in March 2022, and the first round of predictions has been submitted. In the meantime, the call for the second CACHE benchmark – predicting ligands for the RNA-binding domain of Nsp13 – has been posted. The second phase, from 2025 to 2035, will apply the new technologies and infrastructure to generate a complete set of pharmacological modulators for >90% of the ~20,000 proteins encoded by the genome. "Target 2035" sounds ambitious, but its concept and practicality are on firm ground, based on a number of pilot studies that revealed several success parameters.
https://en.wikipedia.org/wiki/Target_2035
Target angle is the relative bearing of the observing station from the vehicle being observed. It may be used to compute the point of aim for a fire-control problem when vehicle range and speed can be estimated from other information. Target angle is best explained by the example of a submarine preparing to launch a straight-run (non-homing) torpedo at a moving target ship. Since the torpedo travels relatively slowly, the torpedo course must be set not toward the target, but toward where the target will be when the torpedo reaches it. Target angle is used to estimate target course.[1] The submarine observer estimating target angle pictures himself on the target ship looking back at the submarine. Relative bearing of the submarine is the clockwise angle in degrees from the heading of the target ship to a straight line drawn from the target ship to the submarine.[1] When target angle is 0° (or 360°) the target ship is coming directly toward the submarine. Target angles between 0° and 90° indicate the target ship is moving toward and to the right of the submarine. Target angles between 90° and 180° indicate the target ship is moving to the right and away from the submarine. When target angle is 180° the target ship is moving directly away from the submarine. Target angles between 180° and 270° indicate the target ship is moving away from and to the left of the submarine. Target angles between 270° and 360° indicate the target ship is moving to the left and toward the submarine. A target passing a stationary observer from left to right might have target angles progressing from 45° to 135°, with the broadside aspect of 90° marking the minimum distance between target and observer. A target moving from right to left on the same track would have target angles progressing downward from 315° to 225°, with the closest point of approach occurring at 270°. Angle on the bow is a variation of target angle used by naval submarines. Angle on the bow is measured over an arc of 180°, clockwise from the bow if viewing the starboard side of the target, or counterclockwise from the bow if viewing the port side of the target. Target angles from 0° to 180° are reported as "starboard [target angle]", while target angles from 180° to 360° are reported as "port [360° − target angle]".[2] Angle on the bow provided the basis for submarine attack decisions through the world wars. When angle on the bow was less than 90°, the submarine would continue a submerged approach toward the target, to launch torpedoes when angle on the bow increased to 90°, the minimum-range torpedo launch opportunity for the given target course and speed. Unless the target was already within torpedo range, an angle on the bow greater than 90° required the submarine to attempt to surface and run around the target beyond visual range, to submerge ahead of the target. As a practical matter, the speed differential required to run around a target meant most warships and ocean liners could not be attacked when angle on the bow was greater than 90°.[2] Estimation of target angle is based on the observer's visual identification of target features, such as distinguishing the bow from the stern. Dazzle camouflage patterns pictured in the black-and-white images illustrate a form of ship camouflage attempting to impair an observer's recognition of ship features.[3]
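As arithmetic, target angle is simply the bearing from the target to the observer, measured clockwise from the target's own heading. A minimal sketch (the function and flat east/north coordinate convention are illustrative assumptions, not from the article):

import math

def target_angle(target_pos, target_heading_deg, observer_pos):
    """Bearing of the observer as seen from the target, measured clockwise
    from the target's heading (0 deg = observer dead ahead of the target)."""
    east = observer_pos[0] - target_pos[0]
    north = observer_pos[1] - target_pos[1]
    # true bearing from target to observer, clockwise from north
    bearing = math.degrees(math.atan2(east, north)) % 360.0
    return (bearing - target_heading_deg) % 360.0

# A target at the origin heading due north, observed from a point due east,
# gives a target angle of 90 deg: the observer is broadside to starboard,
# reported as "starboard 90" in angle-on-the-bow terms.
print(target_angle((0.0, 0.0), 0.0, (1000.0, 0.0)))  # 90.0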
https://en.wikipedia.org/wiki/Target_angle
Target costing is an approach to determine a product's life-cycle cost which should be sufficient to develop specified functionality and quality, while ensuring the desired profit. It involves setting a target cost by subtracting a desired profit margin from a competitive market price.[1] A target cost is the maximum amount of cost that can be incurred on a product while still allowing the firm to earn the required profit margin from that product at a particular selling price. Target costing decomposes the target cost from the product level to the component level. Through this decomposition, target costing spreads the competitive pressure faced by the company to the product's designers and suppliers. Target costing consists of cost planning in the design phase of production as well as cost control throughout the resulting product life cycle. The cardinal rule of target costing is to never exceed the target cost. However, the focus of target costing is not to minimize costs, but to achieve a desired level of cost reduction determined by the target costing process. Target costing is defined as "a disciplined process for determining and achieving a full-stream cost at which a proposed product with specified functionality, performance, and quality must be produced in order to generate the desired profitability at the product's anticipated selling price over a specified period of time in the future."[2] This definition encompasses the principal concepts: products should be based on an accurate assessment of the wants and needs of customers in different market segments, and cost targets should be what result after a sustainable profit margin is subtracted from what customers are willing to pay at the time of product introduction and afterwards. The fundamental objective of target costing is to manage the business to be profitable in a highly competitive marketplace. In effect, target costing is a proactive cost planning, cost management, and cost reduction practice whereby costs are planned and managed out of a product and business early in the design and development cycle, rather than during the later stages of product development and production.[3] Target costing was developed independently in the United States and Japan, in different time periods.[4] Target costing was adopted earlier by American companies seeking to reduce cost and improve productivity, such as Ford Motor from the 1900s and American Motors in the 1950s–1960s. Although the ideas of target costing were also applied by a number of other American companies, including Boeing, Caterpillar and Northern Telecom, few of them applied target costing as comprehensively and intensively as top Japanese companies such as Nissan, Toyota and Nippondenso.[5] Target costing emerged in Japan from the 1960s to the early 1970s through the particular effort of the Japanese automobile industry, including Toyota and Nissan. It did not receive global attention until the late 1980s to 1990s, when authors such as Monden (1992),[6] Sakurai (1989),[7] Tanaka (1993),[8] and Cooper (1992)[9] described the way Japanese companies applied target costing to thrive in their business (IMA 1994). With superior implementation systems, Japanese manufacturers were more successful than the American companies in developing target costing.[4] The traditional cost-plus pricing strategy had long impeded productivity and profitability.
[10][11] As a newer strategy, target costing replaces the traditional cost-plus pricing strategy by maximizing customer satisfaction at an accepted level of quality and functionality while minimizing costs. The process of target costing can be divided into three sections: the first is market-driven target costing, which focuses on studying market conditions to identify a product's allowable cost in order to meet the company's long-term profit at the expected selling price; the second involves performing cost-reduction strategies with the product designers' effort and creativity to identify the product-level target cost; the third is component-level target costing, which decomposes the production cost to the functional and component levels to transmit cost responsibility to suppliers.[1] Market-driven target costing is the first section in the target costing process, focusing on studying market conditions and determining the company's profit margin in order to identify the allowable cost of a product. Market-driven costing proceeds through five steps: establish the company's long-term sales and profit objectives; develop the mix of products; identify the target selling price for each product; identify the profit margin for each product; and calculate the allowable cost of each product.[1] The company's long-term sales and profit objectives are developed from an extensive analysis of relevant information relating to customers, market and products. Only realistic plans are accepted to proceed to the next step. The product mix is designed carefully to ensure that it satisfies many customers, but also does not contain so many products that it confuses them. The company may use simulation to explore the impact of the overall profit objective on different product mixes and determine the most feasible product mix. A target selling price, target profit margin and allowable cost are identified for each product. The target selling price needs to reflect the expected market conditions at the time the product is launched. Internal factors such as the product's functionality and profit objective, and external factors such as the company's image or the expected price of competitive products, will influence the target selling price. The company's long-term profit plan and life-cycle cost are considered when determining the target profit margin. Firms might set the target profit margin based on either the actual profit margin of previous products or the target profit margin of the product line. Simulation of overall group profitability can help ensure the group target is achieved. Subtracting the target profit margin from the target selling price results in the allowable cost for each product. The allowable cost is the amount that can be spent on a product to ensure that its profit target is met if it is sold at its target price; it signals the magnitude of cost savings the team needs to achieve (a worked sketch of this arithmetic appears at the end of this passage).[1][5] Following the completion of market-driven costing, the next task of the target costing process is product-level target costing. Product-level target costing concentrates on designing products that satisfy the company's customers at the allowable cost. To achieve this goal, product-level target costing is typically divided into three steps, as shown below.[1] The first step is to set a product-level target cost. Since the allowable cost is simply obtained from external conditions, without considering the design capabilities of the company or the realistic cost of manufacturing, it may not always be achievable in practice.
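A minimal sketch of the allowable-cost arithmetic described above; the function name and figures are invented for illustration:

def allowable_cost(target_price, target_margin_pct):
    """Allowable cost = target selling price minus target profit margin."""
    return target_price * (1.0 - target_margin_pct / 100.0)

# A product with a target selling price of $250 and a 20% target profit
# margin may incur at most $200 of cost over its life cycle:
print(allowable_cost(250.0, 20.0))  # 200.0

If the estimated product cost exceeds this figure, the gap is the cost-reduction objective that product-level target costing then works to close.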
Thus, it is necessary to adjust the unachievable allowable cost to an achievable target cost, and the gap between the two must then be closed with great effort. The second step is to discipline this target costing process: monitoring the relationship between the target cost and the estimated product cost at any point during the design process, applying the cardinal rule so that the total target costs at the component level do not exceed the target cost of the product, and allowing exceptions for products violating the cardinal rule. For a product granted an exception to the cardinal rule, two analyses are often performed after the launch of the product. One involves reviewing the design process to find out why the target cost was not achieved. The other is an immediate effort to reduce the excessive cost, to ensure that the period of violation is as short as possible. Once the target cost-reduction objective is identified, product-level target costing comes to its final step: finding ways to achieve it. Engineering methods such as value engineering (VE), design for manufacture and assembly (DFMA), and quality function deployment (QFD) are commonly adopted in this step.[1] Value engineering (VE), also known as value analysis (VA),[12] plays a crucial role in the target costing process, particularly at the product level and the component level. Among the three aforementioned methods of achieving the target cost, VE is the most critical, because it not only attempts to reduce costs but also aims to improve the functionality and quality of products. There are a variety of practical VE strategies, including zero-look, first-look and second-look VE approaches, as well as teardown approaches.[1] Given the complexity of real-world problems, implementing the target costing process often relies on computer simulation to reproduce stochastic elements.[13] For example, many firms use simulation to study the complex relationship between selling prices and profit margins, the impact of individual product decisions on overall group profitability, the right mix of products to enhance overall profit, or other economic modeling to overcome organizational inertia by getting the most productive reasoning. In addition, simulation helps estimate results rapidly for dynamic process changes. The factors influencing the target costing process are broadly categorized by how a company's strategy for a product's quality, functionality and price changes over time. However, some factors play a specific role depending on what drives a company's approach to target costing. The intensity of competition and the nature of the customer affect market-driven costing.[14] Competitors introducing similar products have been shown to drive rival companies to expend energy on implementing target costing systems, as in the cases of Toyota and Nissan or Apple and Google. The costing process is also affected by the level of customer sophistication, changing requirements, and the degree to which customers' future requirements are known. The automotive and camera industries are prime examples of how customers affect target costing through their exact requirements. Product strategy and product characteristics affect product-level target costing.[1] Characteristics of product strategy such as the number of products in a line, the rate of redesign operations and the level of innovation have been shown to have an effect. A higher number of products correlates directly with the benefits of target costing.
Frequent redesigns lead to the introduction of new products, which increases the benefits of target costing. The value of historical information declines with greater innovation, thereby reducing the benefits of product-level target costing.[citation needed] The degree of complexity of the product, the level of investment required and the duration of the product development process make up the factors that affect the target costing process based on product characteristics. Product viability is determined by the aforementioned factors. In turn, the target costing process is also modified to suit the different degrees of complexity required.[1] Supplier-base strategy is the main factor that determines component-level target costing, because it plays a key role in the detail a firm has about its suppliers' capabilities.[1] Three characteristics make up the supplier-base strategy: the degree of horizontal integration, power over suppliers and the nature of supplier relations. Horizontal integration captures the fraction of product costs sourced externally. Cost pressure on suppliers can drive target costing if the buying power of the firm is high enough; in turn, this may lead to better benefits. More cooperative supplier relations have been shown to increase mutual benefits in terms of target costs, particularly at the component level. Aside from its application in manufacturing, target costing is also widely used in the following areas. An Energy Retrofit Loan Analysis Model has been developed using a Monte Carlo (MC) method for target costing in energy-efficient buildings and construction. The MC method has been shown to be effective in determining the impact of financial uncertainties on project performance.[15] The Target Value Design Decision Making Process (TVD-DMP) groups a set of energy efficiency methods at different optimization levels to evaluate the costs and uncertainties involved in the energy efficiency process. Some major design parameters specified using these methods include facility operation schedule, orientation, plug load, and HVAC and lighting systems. The entire process consists of three phases: initiation, definition and alignment. The initiation stage involves developing a business case for energy efficiency using target value design (TVD) training, organization and compensation. The definition process involves defining and validating the case with tools such as value analysis and benchmarking to determine the allowable costs. By setting targets and aligning the design process with those targets, TVD-DMP has been shown to achieve the high level of collaboration needed for energy efficiency investments. This is done using risk analysis tools, pull planning and rapid estimating processes. Target costing and target value design have applications in building healthcare facilities, including critical components such as Neonatal Intensive Care Units (NICUs). The process is influenced by unit locations, degree of comfort, number of patients per room, type of supply location and access to nature.[16] According to National Vital Statistics Reports, 12.18% of 2009 births were premature, and the cost per infant was $51,600. This created opportunities for NICUs to implement target value design when deciding whether to build single-family rooms or more open-bay NICUs. This was achieved using set-based design analysis, which challenges the designer to generate multiple alternatives for the same functionality.
Designs are evaluated keeping in mind the requirements of the various stakeholders in the NICU, including nurses, doctors, family members and administrators. Unlike linear point-based design, set-based design narrows the options to the optimal one by simultaneously eliminating alternatives ruled out by user constraints. Jacomit et al. (2008) noted that about 15% of construction projects in Japan adopted target costing for their cost planning and management.[17] In the U.S., target costing research has been carried out within the framework of lean construction as the target value design (TVD) method,[18] which has been disseminated widely across the construction industry in recent years. Research has shown that, if applied systematically, TVD can deliver a significant improvement in project performance, with an average reduction of 15% in comparison with market cost.[19] TVD in a construction project treats the final cost of the project as a design parameter, similar to the capacity and aesthetic requirements for the project. TVD requires the project team to develop a target cost from the beginning. The project team is expected not to design beyond the target cost without the owner's approval, and must use a range of skills to maintain this target cost. In some cases the cost can increase, but the project team must commit to reducing it, and must try its best to do so without affecting other functions of the project.[20] In Scotland, guidance on the use of pain share/pain gain arrangements and target cost contracting was issued to public sector construction procurers in 2017.[21] This guidance refers to reimbursement to contractors calculated in two stages. The guidance stresses the importance of "good faith and reasonableness" in calculating the target cost,[22]: Sect. 3.1 but also notes the risk which arises when target cost arrangements are used without fully understanding how they are to operate.[22]: Sect. 10.2
https://en.wikipedia.org/wiki/Target_costing
A target peptide is a short (3–70 amino acids long) peptide chain that directs the transport of a protein to a specific region in the cell, including the nucleus, mitochondria, endoplasmic reticulum (ER), chloroplast, apoplast, peroxisome and plasma membrane. Some target peptides are cleaved from the protein by signal peptidases after the proteins are transported. Almost all proteins that are destined for the secretory pathway have a sequence consisting of 5–30 hydrophobic amino acids on the N-terminus, which is commonly referred to as the signal peptide, signal sequence or leader peptide. Signal peptides form alpha-helical structures. Proteins that contain such signals are destined for either extracellular secretion, the plasma membrane, or the lumen or membrane of the ER, Golgi or endosomes. Certain membrane-bound proteins are targeted to the secretory pathway by their first transmembrane domain, which resembles a typical signal peptide. In prokaryotes, signal peptides direct the newly synthesized protein to the SecYEG protein-conducting channel, which is present in the plasma membrane. A homologous system exists in eukaryotes, where the signal peptide directs the newly synthesized protein to the Sec61 channel, which shares structural and sequence similarity with SecYEG, but is present in the endoplasmic reticulum.[1] Both the SecYEG and Sec61 channels are commonly referred to as the translocon, and transit through this channel is known as translocation. While secreted proteins are threaded through the channel, transmembrane domains may diffuse across a lateral gate in the translocon to partition into the surrounding membrane. In eukaryotes, most newly synthesized secretory proteins are transported from the ER to the Golgi apparatus. If these proteins carry the 4-amino-acid ER-lumen retention sequence KDEL on their C-terminus, they are retained in the ER's lumen or are routed back to it (in instances where they escape) via interaction with the KDEL receptor in the Golgi apparatus. If the signal is KKXX, the retention mechanism is similar, but the protein is a transmembrane protein of the ER.[2] A nuclear localization signal (NLS) is a target peptide that directs proteins to the nucleus and is often a unit consisting of five basic, positively charged amino acids. The NLS can be located anywhere on the peptide chain. A nuclear export signal (NES) is a target peptide that directs proteins from the nucleus back to the cytosol. It often consists of several hydrophobic amino acids (often leucine) interspaced by 2–3 other amino acids. Many proteins are known to shuttle constantly between the cytosol and the nucleus, and these contain both NESs and NLSs. The nucleolus within the nucleus can be targeted with a sequence called a nucleolar localization signal (abbreviated NoLS or NOS). The mitochondrial targeting signal, also known as a presequence, is a 10–70 amino acid long peptide that directs a newly synthesized protein to the mitochondria. It is found at the N-terminus and consists of an alternating pattern of hydrophobic and positively charged amino acids forming what is called an amphipathic helix. Mitochondrial targeting signals can contain additional signals that subsequently target the protein to different regions of the mitochondria, such as the mitochondrial matrix or inner membrane. In plants, an N-terminal signal (or transit peptide) targets the plastid in a similar manner.
Like most signal peptides, mitochondrial targeting signals and plastid-specific transit peptides are cleaved once targeting is complete. Some plant proteins have an N-terminal transport signal that targets both organelles, often referred to as a dual-targeted transit peptide.[3][4] Approximately 5% of total organelle proteins are predicted to be dual-targeted, though the true number could be higher given the variable degree of accumulation of passenger proteins in both organelles.[5][6] The targeting specificity of these transit peptides depends on many factors, including net charge and the affinity between transit peptides and the organelle transport machinery.[7] There are two types of target peptides directing proteins to the peroxisome, which are called peroxisomal targeting signals (PTS). One is PTS1, which is made of three amino acids on the C-terminus. The other is PTS2, which is made of a 9-amino-acid sequence often present on the N-terminus of the protein. The following notation uses the single-letter code for the protein's primary structure. A "[n]" prefix indicates the N-terminus and a "[c]" suffix indicates the C-terminus; sequences lacking either are found in the middle of the protein.[8][9]
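Several of the signals above are simple enough to scan for directly. The sketch below is a toy classifier: the KDEL and KKXX motifs are described in the article, while the PTS1 consensus used here, (S/A/C)(K/R/H)(L/M) at the C-terminus, is a commonly cited form that is an assumption, not taken from the text:

import re

def c_terminal_signals(seq):
    """Report C-terminal targeting/retention motifs found in a sequence."""
    signals = []
    if seq.endswith("KDEL"):
        signals.append("ER lumen retention (KDEL)")
    if len(seq) >= 4 and seq[-4:-2] == "KK":
        # KKXX: lysines at positions -4 and -3, any two residues after them
        signals.append("ER membrane retention (KKXX)")
    # PTS1 consensus (assumed, not from the article): (S/A/C)(K/R/H)(L/M)
    if re.fullmatch(r"[SAC][KRH][LM]", seq[-3:]):
        signals.append("peroxisomal matrix (PTS1)")
    return signals

print(c_terminal_signals("MATLKRSKL"))  # ['peroxisomal matrix (PTS1)']
print(c_terminal_signals("MAGLLKDEL"))  # ['ER lumen retention (KDEL)']

Real predictors weigh far more context than the terminal residues alone, but the sketch shows why these signals are described as short, position-dependent sequence elements.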
https://en.wikipedia.org/wiki/Target_peptide
Target proteins are functional biomolecules that are addressed and controlled by biologically active compounds. They are used in the processes of transduction, transformation and conjugation. The identification of target proteins, the investigation of signal transduction processes and the understanding of their interaction with ligands are key elements of modern biomedical research. Since interaction with target proteins is the molecular origin of the action of most drugs, their particular importance for molecular biology, molecular pharmacy and the pharmaceutical sciences is obvious. Target proteins control the action and the kinetic behavior of drugs within the organism. The elucidation of the structure, conformational signaling and catalytic properties of particular target proteins facilitates the rational design of drugs and biotechnological processes. Target proteins can also be drugs themselves, known as biologicals, when their modification and formulation is the focus within the pharmaceutical sciences. Finally, target protein–inducer interactions can be exploited in biomolecular transcription-regulating systems, for example to control gene therapeutic approaches.
https://en.wikipedia.org/wiki/Target_protein
Target selection is the process by which axons (nerve fibres) selectively target other cells for synapse formation. Synapses are structures which enable electrical or chemical signals to pass between nerves. While the mechanisms governing target specificity remain incompletely understood, it has been shown in many organisms that a combination of genetic and activity-based mechanisms governs initial target selection and refinement. The process of target selection has multiple steps, including axon pathfinding, when neurons extend processes to specific regions; cellular target selection, when neurons choose appropriate partners in a target region from a multitude of potential partners; and subcellular target selection, where axons often target particular regions of a partner neuron. As bundled axons finish navigating through various neural circuits during neural development, each growth cone must selectively choose the cells with which it will synapse. This can be particularly well observed in the visual and olfactory systems of organisms.[1] For a properly functioning nervous system to develop, there must be an extremely high degree of accuracy in which cells the growth cone forms neural connections with.[1] Although target cell selection must be highly accurate, the degree of specificity that the neural connectivity achieves varies with the neuronal circuitry system.[1] The target selection process by which an axon develops synaptic connections with specific cells can be broken down into multiple stages that are not necessarily confined to an exact chronological order.[2] The stages of targeting are as follows.[1][2] The first stage in target selection is specification of the target region, a process known as axon pathfinding. Growing neurites follow gradients of cell surface molecules that serve as chemoattractants and repellents to the growth cone. This perspective is an evolution of the chemoaffinity hypothesis posited by the neurobiologist Roger Wolcott Sperry in the 1960s. Sperry studied how the neurons in the visual systems of amphibians and goldfish form topographic maps in the brain, noting that if the optic nerve is crushed and allowed to regenerate, the axons will trace back the same patterns of connections. Sperry hypothesized that the target cells carried "identification tags" that would guide the growing axon, which we now know as recognition molecules that bind the growth cone along a gradient.[3] Neurons in sensory systems like the visual, auditory, or olfactory cortex grow into topographic maps such that neighboring neurons in the periphery correspond to adjacent target locations in the central nervous system. For example, neurons nearby on the retina will project to nearby cortical cells, creating a so-called retinotopic map. This cortical organization allows organisms to more easily decode stimuli.[1] The mechanisms governing region specification have been well studied in numerous systems. In Drosophila, numerous axon guidance molecules have been shown to be involved in precise regionalization of the ventral nerve cord.[4] Once a growing neuron has entered the target area, it must locate and enter the appropriate target cell with which to synapse. This is accomplished through sequential signaling of attractive and repulsive cues, largely neurotrophins. The axon grows along its chemoattractant gradient until approaching the target cell, when its growth is slowed by a sudden drop in the concentration of chemoattractant.
This serves as a signal to enter the target cell.[1] As the growth cone slows down, branches begin to form through one of two modalities: splitting of the growth cone, or interstitial branching. Growth cone splitting results in bifurcation of the main axon and is associated with axon guidance and the innervation of multiple faraway targets. Conversely, interstitial branching increases axonal coverage locally to define the presynaptic territory. Most mammalian CNS branches extend interstitially.[7] Branching can be caused by repulsive cues in the environment that cause the growth cone to pause and collapse, resulting in the formation of branches.[8] To ensure successful innervation, inappropriate targeting must be prevented. Once the axon has reached its target area and started to slow down and branch, it can be held within the target area by a perimeter of cues repulsive to the growth cone. Axons express patterns of cell-surface adhesion molecules that allow them to match with specific layer targets. An important family of adhesion molecules is constituted by the cadherins, whose different combinations on targeting cells allow the traction and guidance of the forming axons. A typical example of layers with combinatorial expression of these molecules is the tectal laminae in the chick tectum, where the N-cadherin molecule is present only in those layers that receive axons from the retina.[1] Matrix factors and secreted cues are also very important in the formation of layered structures, and can be divided into attractive and repulsive cues, though the same factor can have both functions under varying conditions. For example, semaphorin is a substance with a repulsive effect that has been shown to have a fundamental role in layering between different somatosensory modalities in the spinal cord system.[1] The molecular mechanism of synapse formation is a process composed of different stages that relies on complex intracellular mechanisms involving both the pre- and postsynaptic cell. When the growth cone of the growing presynaptic axon makes contact with the target cell, it loses its filopodia, while both cells start expressing adhesion molecules on their respective membranes to form tight junctions, called "puncta adherens", which are similar to an adherens junction.[5] Different classes of adhesion molecules, like SynCAM, cadherins and neuroligins/neurexins, play an important role in synapse stabilization and enable synaptic formation.[6] After the synapses have been stabilized, the pre- and postsynaptic cells undergo subcellular changes on each side of the synapse. Namely, there is an accumulation of the Golgi apparatus on the postsynaptic side, while there is an accumulation of vesicles in the presynaptic terminal. Finally, at the end of synaptogenesis, there is an apposition of extracellular matrix between the cells with the formation of a synaptic cleft. Characteristic of the postsynaptic cell is the presence of a postsynaptic density (PSD), formed by PDZ-domain-containing scaffold proteins whose function is to keep the neurotransmitter receptors clustered inside the synapse.
https://en.wikipedia.org/wiki/Target_selection
Targeted analysis sequencing (sometimes called targeted amplicon sequencing) (TAS) is a next-generation DNA sequencing technique focusing on amplicons and specific genes. [ 1 ] It is useful in population genetics since it can target a large diversity of organisms. The TAS approach incorporates bioinformatics techniques to produce a large amount of data at a fraction of the cost involved in Sanger sequencing. TAS is also useful in DNA studies because it allows the gene region of interest to be amplified via PCR, after which the amplicons are read on next-generation sequencing platforms. Next-generation sequencing uses shorter reads of 50–400 base pairs, which allows quicker sequencing of multiple specimens. Thus TAS offers a cheaper sequencing approach that is easily scalable and provides both reliability and speed. [ 2 ]
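As a rough illustration of the read-processing step that follows PCR amplification, the sketch below trims a known forward primer from amplicon reads and keeps only reads in the expected length window. It is a minimal sketch under stated assumptions: the primer sequence, length bounds, and example reads are hypothetical placeholders, not part of any particular TAS pipeline.

```python
# Minimal sketch: trim a forward PCR primer from amplicon reads and keep
# reads within the expected size range. Primer and bounds are hypothetical.

FWD_PRIMER = "ACTGCTGAGGAT"      # placeholder primer sequence
MIN_LEN, MAX_LEN = 50, 400       # short-read lengths typical of NGS amplicons

def preprocess(reads):
    """Yield primer-trimmed inserts that fall in the expected size range."""
    for read in reads:
        if read.startswith(FWD_PRIMER):          # primer found at the 5' end
            insert = read[len(FWD_PRIMER):]      # drop the primer bases
            if MIN_LEN <= len(insert) <= MAX_LEN:
                yield insert

raw_reads = [
    "ACTGCTGAGGAT" + "GATTACA" * 12,   # valid amplicon read (84-base insert)
    "TTTTACTG",                        # no primer match -> discarded
]
print(list(preprocess(raw_reads)))
```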
https://en.wikipedia.org/wiki/Targeted_analysis_sequencing
Targeted covalent inhibitors (TCIs), or targeted covalent drugs, are rationally designed inhibitors that bind and then bond to their target proteins. These inhibitors possess a bond-forming functional group of low chemical reactivity that, following binding to the target protein, is positioned to react rapidly with a proximate nucleophilic residue at the target site to form a bond. [ 1 ] Over the last 100 years covalent drugs have made a major impact on human health and have been highly successful drugs for the pharmaceutical industry. [ 2 ] These inhibitors react with their target proteins to form a covalent complex in which the protein has lost its function. The majority of these successful drugs, which include penicillin, omeprazole, clopidogrel, and aspirin, were discovered through serendipity in phenotypic screens. [ 3 ] However, key changes in screening approaches, along with safety concerns, made pharmaceutical companies reluctant to pursue covalent inhibitors in a systematic way. [ 4 ] [ 5 ] Recently, considerable attention has been paid to using rational drug design to create highly selective covalent inhibitors, called targeted covalent inhibitors. [ 6 ] The first published example of a targeted covalent drug was for the EGFR kinase, [ 7 ] [ 8 ] but the approach has since broadened to other kinases [ 9 ] [ 6 ] and other protein families. [ 10 ] [ 11 ] Aside from small molecules, covalent probes are also being derived from peptides or proteins. By incorporating a reactive group into a binding peptide or protein via posttranslational chemical modification [ 12 ] or as an unnatural amino acid, [ 13 ] a target protein can be conjugated specifically via a proximity-induced reaction. Covalent bonding can lead to potencies and ligand efficiencies that are either exceptionally high or, for irreversible covalent interactions, even essentially infinite. Covalent bonding thus allows high potency to be routinely achieved in compounds of low molecular mass, along with all the beneficial pharmaceutical properties that are associated with small size. [ 14 ] [ 15 ] Covalent inhibitors can be designed to target a nucleophile that is unique or rare across a protein family, [ 7 ] [ 6 ] [ 9 ] [ 16 ] thereby ensuring that covalent bond formation cannot occur with most other family members. This approach can lead to high selectivity against closely related proteins because, although the inhibitor might bind transiently to the active sites of such proteins, it will not covalently label them if they lack the targeted nucleophilic residue in the appropriate position. The restoration of pharmacological activity after covalent irreversible inhibition requires re-synthesis of the protein target. This has important and potentially advantageous consequences for drug pharmacodynamics, in which the level and frequency of dosing relate to the extent and duration of the resulting pharmacological effect. [ 17 ] Covalent inhibitors can be used to assess target engagement, a measure that can sometimes be used pre-clinically and clinically to assess the relationship between drug dose and efficacy or toxicity. [ 17 ] This approach was used for covalent Btk inhibitors, pre-clinically to understand the relationship between administered dose and efficacy in animal models of arthritis, and clinically to assess target occupancy in a study of healthy volunteers.
[ 18 ] The design of covalent drugs requires careful optimization of both the non-covalent binding affinity (which is reflected in K_i) and the reactivity of the electrophilic warhead (which is reflected in k_2). The initial design of TCIs involves three key steps. First, bioinformatics analysis is used to identify a nucleophilic amino acid (for example, cysteine) that is either inside or near a functionally relevant binding site on a drug target, but is rare in that protein family. Next, a reversible inhibitor is identified for which the binding mode is known. Finally, structure-based computational methods are used to guide the design of modified ligands that have electrophilic functionality and are positioned to react specifically with the nucleophilic amino acid in the target protein. [ 1 ] Targeted covalent photoisomerizable ligands (photoswitches) have been developed to remotely and reversibly control the activity of receptor proteins with light. They have been used as molecular prostheses to restore visual input in the retina [ 19 ] and auditory input in the cochlea via glutamate receptors. [ 20 ] Ligand conjugation is targeted to specific lysine residues via an affinity labeling mechanism. There has been a reluctance in modern drug discovery programs to consider covalent inhibitors due to toxicity concerns. [ 5 ] An important contributor has been the toxicity of several high-profile drugs, believed to be caused by metabolic activation of reversible drugs. [ 5 ] For example, high-dose acetaminophen can lead to the formation of the reactive metabolite N-acetyl-p-benzoquinone imine. Also, covalent inhibitors such as beta-lactam antibiotics, which contain weak electrophiles, can lead to idiosyncratic toxicities (IDTs) in some patients. It has been noted that many approved covalent inhibitors have been used safely for decades with no observed idiosyncratic toxicity, and that IDTs are not limited to drugs with a covalent mechanism of action. [ 21 ] A recent analysis has noted that the risk of idiosyncratic toxicities may be mitigated through lower doses of administered drug; doses of less than 10 mg per day rarely lead to IDTs, irrespective of the drug mechanism. [ 22 ] Despite the apparent lack of attention to covalent inhibitor drug discovery by most pharmaceutical companies, there are several examples of covalent drugs that have been approved or are progressing through late-stage clinical development. AMG 510 by Amgen is a KRAS p.G12C covalent inhibitor that has recently finished a Phase I clinical trial. [ 23 ] The drug elicited partial responses in half of evaluable patients with KRAS G12C-mutant non–small cell lung cancer, and led to stable disease in most evaluable patients with colorectal (or appendix) cancer. The second-generation EGFR inhibitors afatinib and mobocertinib have been approved for the treatment of EGFR-driven lung cancer, and dacomitinib is in late-stage clinical testing. Third-generation EGFR inhibitors target the tumor-specific mutant EGFR while sparing wild-type EGFR, and are expected to provide a wider therapeutic index. [ 24 ] The pan-ErbB inhibitor neratinib was approved in the US in 2017 and in the EU in 2018 for the extended adjuvant treatment of adult patients with early-stage HER2-overexpressed/amplified breast cancer after trastuzumab-based therapy.
[ 25 ] [ 26 ] Ibrutinib, a covalent inhibitor of Bruton's tyrosine kinase, has been approved for the treatment of chronic lymphocytic leukemia, Waldenström's macroglobulinemia and mantle cell lymphoma. Paxlovid is a covalent inhibitor of the 3CLpro (Mpro) enzyme. It is in Phase III trials for the early treatment of patients infected with SARS-CoV-2 who have not progressed to severe COVID-19 disease and who do not immediately require hospitalisation.
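The interplay between binding affinity and warhead reactivity described above is commonly summarized by a two-step kinetic scheme, sketched below in standard notation. The symbols are the conventional ones (K_I is the binding constant of the reversible complex; k_inact plays the role of the bond-forming rate constant called k_2 earlier in this article); this is a generic textbook formulation, not a derivation specific to any drug discussed here.

```latex
% Two-step scheme for covalent inhibition: reversible binding, then bond formation.
% E = enzyme, I = inhibitor, E.I = non-covalent complex, E-I = covalent adduct.
\[
  \mathrm{E} + \mathrm{I}
  \;\underset{k_{-1}}{\overset{k_{1}}{\rightleftharpoons}}\;
  \mathrm{E{\cdot}I}
  \;\xrightarrow{\;k_{\mathrm{inact}}\;}
  \mathrm{E{-}I}
\]
% Observed inactivation rate at inhibitor concentration [I]:
\[
  k_{\mathrm{obs}} = \frac{k_{\mathrm{inact}}\,[\mathrm{I}]}{K_{\mathrm{I}} + [\mathrm{I}]},
  \qquad K_{\mathrm{I}} \approx \frac{k_{-1}}{k_{1}}
\]
% Overall potency is therefore usually reported as the efficiency ratio
% k_inact / K_I rather than as a single equilibrium constant.
```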
https://en.wikipedia.org/wiki/Targeted_covalent_inhibitors
Targeted drug delivery, sometimes called smart drug delivery, [ 1 ] is a method of delivering medication to a patient in a manner that increases the concentration of the medication in some parts of the body relative to others. This means of delivery is largely founded on nanomedicine, which aims to employ nanoparticle-mediated drug delivery to overcome the shortcomings of conventional drug delivery. These nanoparticles would be loaded with drugs and targeted to specific parts of the body where there is solely diseased tissue, thereby avoiding interaction with healthy tissue. The goal of a targeted drug delivery system is to prolong, localize, and protect the drug's interaction with the diseased tissue. In a conventional drug delivery system the drug is absorbed across a biological membrane, whereas a targeted release system releases the drug in a dosage form. The advantages of the targeted release system are a reduction in the frequency of the dosages taken by the patient, a more uniform effect of the drug, reduced drug side-effects, and reduced fluctuation in circulating drug levels. Its disadvantages are high cost, which makes production more difficult, and a reduced ability to adjust the dosages. Targeted drug delivery systems have been developed to optimize regenerative techniques. The system is based on a method that delivers a certain amount of a therapeutic agent for a prolonged period of time to a targeted diseased area within the body. This helps maintain the required plasma and tissue drug levels in the body, thereby preventing any damage to the healthy tissue via the drug. The drug delivery system is highly integrated, and optimizing it requires chemists, biologists, and engineers to join forces. [ 2 ] In traditional drug delivery systems such as oral ingestion or intravascular injection, the medication is distributed throughout the body through the systemic blood circulation. For most therapeutic agents, only a small portion of the medication reaches the organ to be affected; in chemotherapy, for example, roughly 99% of the drugs administered do not reach the tumor site. [ 3 ] Targeted drug delivery seeks to concentrate the medication in the tissues of interest while reducing the relative concentration of the medication in the remaining tissues. For example, by avoiding the host's defense mechanisms and inhibiting non-specific distribution in the liver and spleen, [ 4 ] a system can reach the intended site of action in higher concentrations. Targeted delivery is believed to improve efficacy while reducing side-effects. When implementing a targeted release system, the following design criteria must be taken into account: the drug properties, side-effects of the drug, the route taken for delivery, the targeted site, and the disease. The development of novel treatments requires a controlled microenvironment, which is accomplished only through therapeutic agents whose side-effects can be avoided with targeted drug delivery. Advances in the field of targeted drug delivery to cardiac tissue will be an integral component of efforts to regenerate cardiac tissue. [ 5 ] There are two kinds of targeted drug delivery: active targeted drug delivery, such as some antibody medications, and passive targeted drug delivery, such as delivery exploiting the enhanced permeability and retention effect (EPR effect).
This ability of nanoparticles to concentrate solely in areas of diseased tissue is accomplished through one or both means of targeting: passive or active. Passive targeting is achieved by incorporating the therapeutic agent into a macromolecule or nanoparticle that passively reaches the target organ. In passive targeting, the drug's success is directly related to its circulation time, [ 6 ] which is extended by cloaking the nanoparticle with some sort of coating. Several substances can achieve this, one of them being polyethylene glycol (PEG). Adding PEG to the surface of the nanoparticle renders it hydrophilic, allowing water molecules to bind via hydrogen bonding to the oxygen atoms on PEG. The result of this bonding is a film of hydration around the nanoparticle which makes the substance antiphagocytic. The particles obtain this property because the hydrophobic interactions that the reticuloendothelial system (RES) relies on are blocked, so the drug-loaded nanoparticle is able to stay in circulation for a longer period of time. [ 7 ] Working in conjunction with this mechanism of passive targeting, nanoparticles that are between 10 and 100 nanometers in size have been found to circulate systemically for longer periods of time. [ 8 ] Active targeting of drug-loaded nanoparticles enhances the effects of passive targeting by making the nanoparticle more specific to a target site. There are several ways that active targeting can be accomplished. One way to actively target solely diseased tissue in the body is to know the nature of a receptor on the cell to which the drug will be targeted. [ 9 ] Researchers can then utilize cell-specific ligands that allow the nanoparticle to bind specifically to the cell that has the complementary receptor. This form of active targeting was found to be successful when utilizing transferrin as the cell-specific ligand. [ 9 ] The transferrin was conjugated to the nanoparticle to target tumor cells that possess transferrin-receptor-mediated endocytosis mechanisms on their membrane. This means of targeting was found to increase uptake compared with non-conjugated nanoparticles. Another cell-specific ligand is the RGD motif, which binds to the integrin αvβ3. [ 10 ] This integrin is upregulated in tumor and activated endothelial cells. [ 11 ] Conjugation of RGD to chemotherapeutic-loaded nanoparticles has been shown to increase cancer cell uptake in vitro and therapeutic efficacy in vivo. [ 10 ] Active targeting can also be achieved by utilizing magnetoliposomes, which usually serve as contrast agents in magnetic resonance imaging. [ 9 ] By grafting these liposomes with a desired drug, magnetic positioning can aid delivery to a region of the body. Furthermore, a nanoparticle could possess the capability to be activated by a trigger that is specific to the target site, such as by utilizing materials that are pH-responsive. [ 9 ] Most of the body has a consistent, neutral pH; however, some areas of the body are naturally more acidic than others, and nanoparticles can take advantage of this by releasing the drug when they encounter a specific pH. [ 9 ] Another specific triggering mechanism is based on the redox potential. One of the side effects of tumors is hypoxia, which alters the redox potential in the vicinity of the tumor. By modifying the redox potential that triggers the payload release, the vesicles can be made selective for different types of tumors.
[ 12 ] By utilizing both passive and active targeting, a drug-loaded nanoparticle has a heightened advantage over a conventional drug. It is able to circulate throughout the body for an extended period of time until it is successfully attracted to its target through the use of cell-specific ligands, magnetic positioning, or pH-responsive materials. Because of these advantages, side effects from conventional drugs should be largely reduced, since the drug-loaded nanoparticles affect only diseased tissue. [ 13 ] However, an emerging field known as nanotoxicology has raised concerns that the nanoparticles themselves could pose a threat to both the environment and human health, with side effects of their own. [ 14 ] Active targeting can also be achieved through peptide-based drug targeting systems. [ 15 ] There are different types of drug delivery vehicles, such as polymeric micelles, liposomes, lipoprotein-based drug carriers, nanoparticle drug carriers, and dendrimers. An ideal drug delivery vehicle must be non-toxic, biocompatible, non-immunogenic, and biodegradable, [ 5 ] and must avoid recognition by the host's defense mechanisms. [ 3 ] Cell surface peptides provide one way to introduce a drug into a target cell. [ 16 ] This method works by the peptide binding to a target cell's surface receptors, in a way that bypasses immune defenses that would otherwise compromise a slower delivery, without causing harm to the host. In particular, peptides such as intercellular adhesion molecule-1 have shown a great deal of binding ability to target cells. This method has shown a degree of efficacy in treating both autoimmune diseases and some forms of cancer as a result of this binding affinity. [ 17 ] Peptide-mediated delivery is also promising due to the low cost of producing the peptides as well as the simplicity of their structure. The most common vehicle currently used for targeted drug delivery is the liposome. [ 19 ] Liposomes are non-toxic, non-hemolytic, and non-immunogenic even upon repeated injections; they are biocompatible and biodegradable and can be designed to avoid clearance mechanisms (the reticuloendothelial system (RES), renal clearance, chemical or enzymatic inactivation, etc.). [ 20 ] [ 21 ] Lipid-based, ligand-coated nanocarriers can store their payload in the hydrophobic shell or the hydrophilic interior depending on the nature of the drug or contrast agent being carried. [ 5 ] The main problem with using liposomes in vivo is their immediate uptake and clearance by the RES and their relatively low stability in vitro. To combat this, polyethylene glycol (PEG) can be added to the surface of the liposomes. Increasing the mole percent of PEG on the surface of the liposomes to 4–10% significantly increased circulation time in vivo from 200 to 1,000 minutes. [ 5 ] PEGylation of the liposomal nanocarrier elongates the half-life of the construct while maintaining the passive targeting mechanism that is commonly conferred on lipid-based nanocarriers. [ 22 ] When liposomes are used as a delivery system, the ability to induce instability in the construct is commonly exploited, allowing the selective release of the encapsulated therapeutic agent in close proximity to the target tissue or cell in vivo. This nanocarrier system is commonly used in anti-cancer treatments, as the acidity of the tumour mass caused by an over-reliance on glycolysis triggers drug release.
[ 22 ] [ 23 ] [ 24 ] Additional endogenous trigger pathways have been explored through exploiting the inner and outer tumor environment, including reactive oxygen species, glutathione, enzymes, hypoxia, and adenosine-5'-triphosphate (ATP), all of which are generally highly present in and around tumors. [ 25 ] External triggers are also used, such as light, low-frequency ultrasound (LFUS), electrical fields, and magnetic fields. [ 26 ] In particular, LFUS has demonstrated high efficacy in the controlled triggering of various drugs in mice, such as cisplatin and calcein. [ 27 ] [ 28 ] Another type of drug delivery vehicle used is the polymeric micelle. Polymeric micelles are prepared from certain amphiphilic co-polymers consisting of both hydrophilic and hydrophobic monomer units. [ 2 ] They can be used to carry drugs that have poor solubility. This method offers little in terms of size control or functional malleability. Techniques have been developed that utilize reactive polymers along with a hydrophobic additive to produce larger micelles spanning a range of sizes. [ 29 ] Dendrimers are also polymer-based delivery vehicles. They have a core that branches out at regular intervals to form a small, spherical, and very dense nanocarrier. [ 30 ] Biodegradable particles have the ability to target diseased tissue as well as deliver their payload as a controlled-release therapy. [ 31 ] Biodegradable particles bearing ligands to P-selectin, endothelial selectin (E-selectin) and ICAM-1 have been found to adhere to inflamed endothelium. [ 32 ] Therefore, biodegradable particles can also be used for cardiac tissue. There are biocompatible microalgae hybrid microrobots for active drug delivery in the lungs and the gastrointestinal tract, and these microrobots proved effective in tests with mice. In the two studies, "fluorescent dye or cell membrane–coated nanoparticle functionalized algae motors were further embedded inside a pH-sensitive capsule" and "antibiotic-loaded neutrophil membrane-coated polymeric nanoparticles [were attached] to natural microalgae". [ 33 ] [ 34 ] [ 35 ] The success of DNA nanotechnology in constructing artificially designed nanostructures out of nucleic acids such as DNA, combined with the demonstration of systems for DNA computing, has led to speculation that artificial nucleic acid nanodevices could be used to target drug delivery by directly sensing their environment. These methods use DNA solely as a structural material and a chemical, and do not make use of its biological role as the carrier of genetic information. Nucleic acid logic circuits that could potentially be used as the core of a system that releases a drug only in response to a stimulus, such as a specific mRNA, have been demonstrated. [ 36 ] In addition, a DNA "box" with a controllable lid has been synthesized using the DNA origami method. This structure could encapsulate a drug in its closed state and open to release it only in response to a desired stimulus. [ 37 ] Targeted drug delivery can be used to treat many diseases, such as cardiovascular disease and diabetes. However, the most important application of targeted drug delivery is the treatment of cancerous tumors, where the passive method of targeting takes advantage of the enhanced permeability and retention (EPR) effect. This is a situation specific to tumors that results from rapidly forming blood vessels and poor lymphatic drainage.
When the blood vessels form so rapidly, large fenestrae result that are 100 to 600 nanometers in size, which allows enhanced nanoparticle entry. Further, the poor lymphatic drainage means that the large influx of nanoparticles rarely leaves, so the tumor retains more nanoparticles and successful treatment can take place. [ 8 ] The American Heart Association rates cardiovascular disease as the number one cause of death in the United States. Each year 1.5 million myocardial infarctions (MIs), also known as heart attacks, occur in the United States, with 500,000 leading to death. The costs related to heart attacks exceed $60 billion per year. There is therefore a need to devise an optimal recovery system. The key to solving this problem lies in the effective use of pharmaceutical drugs that can be targeted directly to the diseased tissue. This technique can help develop many more regenerative techniques to cure various diseases. The development of a number of regenerative strategies in recent years for curing heart disease represents a paradigm shift away from conventional approaches that aim merely to manage heart disease. [ 5 ] Stem cell therapy can be used to help regenerate myocardial tissue and return the contractile function of the heart by creating or supporting a microenvironment before the MI. Developments in targeted drug delivery to tumors have provided the groundwork for the burgeoning field of targeted drug delivery to cardiac tissue. [ 5 ] Recent developments have shown that there are different endothelial surfaces in tumors, which has led to the concept of endothelial cell adhesion molecule-mediated targeted drug delivery to tumors. Liposomes can also be used as drug delivery vehicles for the treatment of tuberculosis (TB). The traditional treatment for TB is akin to chemotherapy and is not overly effective, which may be due to the failure of chemotherapy to reach a high enough concentration at the infection site. The liposome delivery system allows for better macrophage penetration and builds a higher concentration at the infection site. [ 38 ] The drugs can be delivered intravenously or by inhalation; oral intake is not advised because the liposomes break down in the gastrointestinal tract. 3D printing is also used by doctors to investigate how to target cancerous tumors more efficiently. By printing a plastic 3D shape of the tumor and filling it with the drugs used in the treatment, the flow of the liquid can be observed, allowing the modification of the doses and the targeting location of the drugs. [ 39 ]
https://en.wikipedia.org/wiki/Targeted_drug_delivery
Targeted immunization strategies are approaches designed to increase the immunization level of populations and decrease the chances of epidemic outbreaks. [ 1 ] Though often discussed in terms of healthcare practice and the administration of vaccines to prevent biological epidemic outbreaks, [ 2 ] these strategies refer in general to immunization schemes in complex networks, whether biological, social or artificial in nature. [ 1 ] Identification of at-risk groups and of individuals with higher odds of spreading the disease often plays an important role in these strategies, since targeted immunization of high-risk groups is necessary for effective eradication efforts and has a higher return on investment than immunizing larger but lower-risk groups. [ 1 ] [ 3 ] [ 4 ] The success of vaccines in preventing major outbreaks relies on the mechanism of herd immunity, also known as community immunity, whereby the immunization of individuals provides protection not only for those individuals but also for the community at large. [ 5 ] In cases of biological contagions such as influenza, measles, and chicken pox, immunizing a critical community size can provide protection against the disease for members who cannot be vaccinated themselves (infants, pregnant women, and immunocompromised individuals). Often, however, these vaccine programmes require the immunization of a large majority of the population to provide herd immunity. [ 6 ] A few successful vaccine programmes have led to the eradication of infectious diseases like smallpox [ 7 ] and rinderpest, and the near eradication of polio, [ 8 ] which plagued the world before the second half of the 20th century. [ 9 ] [ 10 ] More recently, researchers have looked at exploiting network connectivity properties to better understand and design immunization strategies that prevent major epidemic outbreaks. [ 11 ] Many real networks like the Internet, the World Wide Web, and even sexual contact networks [ 12 ] have been shown to be scale-free networks, and as such exhibit a power-law degree distribution. In large networks this results in the vast majority of nodes (individuals in social networks) having few connections, or low degree k, while a few "hubs" have many more connections than the average ⟨k⟩. [ 13 ] This wide variability (heterogeneity) in degree motivates immunization strategies that target members of the network according to their connectivity rather than immunizing the network at random. In epidemic modeling on scale-free networks, targeted immunization schemes can considerably lower the vulnerability of a network to epidemic outbreaks relative to random immunization schemes. Typically these strategies require far fewer nodes to be immunized in order to provide the same level of protection to the entire network as random immunization. [ 1 ] [ 14 ] In circumstances where vaccines are scarce, efficient immunization strategies become necessary for preventing infectious outbreaks. [ 15 ] A common approach in targeted immunization studies of scale-free networks focuses on targeting the highest-degree nodes for immunization. These nodes are the most highly connected in the network, making them more likely to spread the contagion if infected. Immunizing this segment of the network can drastically reduce the impact of the disease on the network and requires the immunization of far fewer nodes compared with randomly selecting nodes.
[ 1 ] However, this strategy relies on knowing the global structure of the network, which may not always be practical. [ citation needed ] A more recent centrality measure, percolation centrality, introduced by Piraveenan et al., [ 16 ] is particularly useful in identifying nodes for vaccination based on the network topology. Unlike node degree, which depends on topology alone, percolation centrality takes into account both the topological importance of a node and its distance from infected nodes in deciding its overall importance. Piraveenan et al. [ 16 ] have shown that percolation centrality-based vaccination is particularly effective when the proportion of people already infected is of the same order of magnitude as the number of people who could be vaccinated before the disease spreads much further. If the spread of infection is in its infancy, then ring vaccination surrounding the source of infection is most effective, whereas if the proportion of people already infected is much higher than the number of people who could be vaccinated quickly, then vaccination will only help those who are vaccinated and herd immunity cannot be achieved. [ 6 ] Percolation centrality-based vaccination is most effective in the critical scenario where the infection has already spread too far to be completely surrounded by ring vaccination, yet has not spread so widely that it cannot be contained by strategic vaccination. Nevertheless, percolation centrality also requires the full network topology to be computed, and is thus more useful at higher levels of abstraction (for example, networks of townships rather than social networks of individuals), where the corresponding network topology can more readily be obtained. [ citation needed ] Millions of children worldwide do not receive all of the routine vaccinations per their national schedule. As immunization is a powerful public health strategy for improving child survival, it is important to determine what strategies work best to increase coverage. A Cochrane review assessed the effectiveness of intervention strategies to boost and sustain high childhood immunization coverage in low- and middle-income countries. [ 17 ] Forty-one trials were included, but most of the evidence was of low quality. [ 17 ] Providing parents and other community members with information on immunization, health education at facilities in combination with redesigned immunization reminder cards, regular immunization outreach with and without household incentives, home visits, and integration of immunization with other services may improve childhood immunization coverage in low- and middle-income countries. [ 17 ]
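As a toy illustration of why degree-targeted immunization outperforms random immunization on scale-free networks, the sketch below (using the networkx library) removes the same number of nodes either at random or by highest degree, then compares the size of the largest connected component that remains, a common proxy for how far an outbreak could spread. The Barabási–Albert graph model, network size, and 5% immunization budget are illustrative assumptions, not parameters from the studies cited above.

```python
# Toy comparison of random vs. degree-targeted immunization on a
# scale-free network. Immunized nodes are removed from the graph; the
# size of the largest remaining connected component is a rough proxy
# for the population reachable by a contagion.
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(n=5000, m=2, seed=0)  # scale-free test network
budget = 250  # number of nodes we can immunize (5% of the network)

def largest_component_after_removal(graph, nodes_to_remove):
    H = graph.copy()
    H.remove_nodes_from(nodes_to_remove)
    return max(len(c) for c in nx.connected_components(H))

# Strategy 1: immunize nodes chosen uniformly at random.
random_nodes = random.sample(list(G.nodes), budget)

# Strategy 2: immunize the highest-degree nodes (the "hubs").
hubs = sorted(G.degree, key=lambda kv: kv[1], reverse=True)[:budget]
hub_nodes = [node for node, degree in hubs]

print("random  :", largest_component_after_removal(G, random_nodes))
print("targeted:", largest_component_after_removal(G, hub_nodes))
# Targeted removal typically leaves a much smaller largest component,
# i.e., the same budget protects far more of the network.
```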
https://en.wikipedia.org/wiki/Targeted_immunization_strategies
Targeted temperature management (TTM), previously known as therapeutic hypothermia or protective hypothermia, is an active treatment that tries to achieve and maintain a specific body temperature in a person for a specific duration of time in an effort to improve health outcomes during recovery after a period of stopped blood flow to the brain. [ 1 ] This is done in an attempt to reduce the risk of tissue injury following lack of blood flow. [ 2 ] Periods of poor blood flow may be due to cardiac arrest or the blockage of an artery by a clot, as in the case of a stroke. [ 3 ] Targeted temperature management improves survival and brain function following resuscitation from cardiac arrest. [ 4 ] Evidence supports its use following certain types of cardiac arrest in which an individual does not regain consciousness. [ 1 ] The target temperature is often between 32 and 34 °C. [ 4 ] Targeted temperature management following traumatic brain injury is of unclear benefit. [ 5 ] While associated with some complications, these are generally mild. [ 6 ] Targeted temperature management is thought to prevent brain injury by several methods, including decreasing the brain's oxygen demand, reducing the production of neurotransmitters like glutamate, and reducing free radicals that might damage the brain. Body temperature may be lowered by many means, including cooling blankets, cooling helmets, cooling catheters, ice packs and ice water lavage. Targeted temperature management may be used in a number of conditions. The 2013 ILCOR and 2010 American Heart Association guidelines support the use of cooling following resuscitation from cardiac arrest. [ 1 ] [ 7 ] These recommendations were largely based on two trials from 2002 which showed improved survival and brain function when patients were cooled to 32–34 °C (90–93 °F) after cardiac arrest. [ 2 ] [ 8 ] However, more recent research suggests that there is no benefit to cooling to 33 °C (91 °F) when compared with less aggressive cooling to a near-normal temperature of 36 °C (97 °F); cooling appears to be effective because it prevents fever, a common complication after cardiac arrest. [ 9 ] There is no difference in long-term quality of life following mild compared with more severe cooling. [ 10 ] In children, following cardiac arrest, cooling does not appear useful as of 2018. [ 11 ] A recent Cochrane review summarized the available evidence and found that targeted temperature management around 33 °C may increase the chance of preventing brain damage after cardiac arrest by 40%. [ 12 ] Hypothermia therapy for neonatal encephalopathy has been proven to improve outcomes for newborn infants affected by perinatal hypoxia-ischemia, hypoxic ischemic encephalopathy or birth asphyxia. A 2013 Cochrane review found that it is useful in full-term babies with encephalopathy. [ 13 ] Whole-body or selective head cooling to 33–34 °C (91–93 °F), begun within six hours of birth and continued for 72 hours, reduces mortality and reduces cerebral palsy and neurological deficits in survivors. [ citation needed ] Targeted temperature management is used during open-heart surgery because it decreases the metabolic needs of the brain, heart, and other organs, reducing the risk of damage to them. The patient is given medication to prevent shivering. The body is then cooled to 25–32 °C (77–90 °F). The heart is stopped and an external heart-lung pump maintains circulation to the patient's body.
The heart is cooled further and is maintained at a temperature below 15 °C (59 °F) for the duration of the surgery. This very cold temperature helps the heart muscle to tolerate its lack of blood supply during the surgery. [ 14 ] [ 15 ] Possible complications may include infection, bleeding, dysrhythmias and high blood sugar. [ 16 ] One review found an increased risk of pneumonia and sepsis but no increase in the overall risk of infection. [ 17 ] Another review found a trend towards increased bleeding but no increase in severe bleeding. [ 18 ] Hypothermia induces a "cold diuresis" which can lead to electrolyte abnormalities – specifically hypokalemia, hypomagnesaemia, and hypophosphatemia – as well as hypovolemia. [ 19 ] The earliest rationale for the effects of hypothermia as a neuroprotectant focused on the slowing of cellular metabolism resulting from a drop in body temperature. For every one degree Celsius drop in body temperature, cellular metabolism slows by 5–7%. [ 20 ] Accordingly, most early hypotheses suggested that hypothermia reduces the harmful effects of ischemia by decreasing the body's need for oxygen. [ 21 ] The initial emphasis on cellular metabolism explains why the early studies almost exclusively focused on the application of deep hypothermia, as these researchers believed that the therapeutic effects of hypothermia correlated directly with the extent of the temperature decline. [ 22 ] In the special case of infants with perinatal asphyxia, it appears that apoptosis is a prominent cause of cell death and that hypothermia therapy for neonatal encephalopathy interrupts the apoptotic pathway. In general, cell death is not directly caused by oxygen deprivation, but occurs indirectly as a result of the cascade of subsequent events. Cells need oxygen to create ATP, a molecule used by cells to store energy, and cells need ATP to regulate intracellular ion levels. ATP is used to fuel both the importation of ions necessary for cellular function and the removal of ions that are harmful to cellular function. Without oxygen, cells cannot manufacture the ATP necessary to regulate ion levels and thus cannot prevent the intracellular environment from approaching the ion concentration of the outside environment. It is thus not oxygen deprivation itself that precipitates cell death, but the loss of the ATP needed to regulate ion concentrations and maintain homeostasis. [ 21 ] Notably, even a small drop in temperature encourages cell membrane stability during periods of oxygen deprivation. For this reason, a drop in body temperature helps prevent an influx of unwanted ions during an ischemic insult. By making the cell membrane more impermeable, hypothermia helps prevent the cascade of reactions set off by oxygen deprivation. Even moderate dips in temperature strengthen the cellular membrane, helping to minimize any disruption to the cellular environment. Many now postulate that it is by moderating this disruption of homeostasis, caused by a blockage of blood flow, that hypothermia minimizes the trauma resulting from ischemic injuries. [ 21 ] Targeted temperature management may also help to reduce reperfusion injury, damage caused by oxidative stress when the blood supply is restored to a tissue after a period of ischemia. Various inflammatory immune responses occur during reperfusion. These inflammatory responses cause increased intracranial pressure, which leads to cell injury and, in some situations, cell death.
Hypothermia has been shown to help moderate intracranial pressure and therefore to minimize the harmful effects of a patient's inflammatory immune responses during reperfusion. The oxidation that occurs during reperfusion also increases free radical production. Since hypothermia reduces both intracranial pressure and free radical production, this may be yet another mechanism of action for hypothermia's therapeutic effect. [ 21 ] Overactivation of N-methyl-D-aspartate (NMDA) receptors following brain injuries can lead to calcium entry, which triggers neuronal death via the mechanisms of excitotoxicity. [ 23 ] There are a number of methods through which hypothermia is induced, [ 16 ] including cooling catheters, cooling blankets, and the application of ice around the body, among others. [ 16 ] [ 24 ] As of 2013 it is unclear whether one method is any better than the others. [ 24 ] While cool intravenous fluid may be given to start the process, further methods are required to keep the person cold. [ 16 ] Core body temperature must be measured (via the esophagus, rectum, bladder in those who are producing urine, or pulmonary artery) to guide cooling. [ 16 ] A temperature below 30 °C (86 °F) should be avoided, as adverse events increase significantly. [ 24 ] The person should be kept at the goal temperature, plus or minus half a degree Celsius, for 24 hours. [ 24 ] Rewarming should be done slowly, with suggested speeds of 0.1 to 0.5 °C (0.18 to 0.90 °F) per hour. [ 24 ] Targeted temperature management should be started as soon as possible. [ 25 ] The goal temperature should be reached within 8 hours. [ 24 ] Targeted temperature management remains partially effective even when initiated as long as 6 hours after collapse. [ 26 ] Prior to the induction of targeted temperature management, pharmacological agents to control shivering must be administered. When body temperature drops below a certain threshold – typically around 36 °C (97 °F) – people may begin to shiver. [ 27 ] It appears that regardless of the technique used to induce hypothermia, people begin to shiver when temperature drops below this threshold. [ 27 ] Drugs commonly used to prevent and treat shivering in targeted temperature management include acetaminophen, buspirone, opioids including pethidine (meperidine) and fentanyl, dexmedetomidine, and propofol. [ 28 ] If shivering cannot be controlled with these drugs, patients are often placed under general anesthesia and/or given a paralytic medication like vecuronium. People should be rewarmed slowly and steadily in order to avoid harmful spikes in intracranial pressure. [ 26 ] Cooling catheters are inserted into a femoral vein. Cooled saline solution is circulated through either a metal-coated tube or a balloon in the catheter. The saline cools the person's whole body by lowering the temperature of the blood. Catheters reduce temperature at rates ranging from 1.5 to 2 °C (2.7 to 3.6 °F) per hour. Through the use of a control unit, catheters can bring body temperature to within 0.1 °C (0.18 °F) of the target level. Furthermore, catheters can raise temperature at a steady rate, which helps to avoid harmful rises in intracranial pressure. A number of studies have demonstrated that targeted temperature management via catheter is safe and effective. [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ 33 ] Adverse events associated with this invasive technique include bleeding, infection, vascular puncture, and deep vein thrombosis (DVT).
[ 34 ] Infection caused by cooling catheters is particularly harmful, as resuscitated people are highly vulnerable to the complications associated with infections. [ 35 ] Bleeding represents a significant danger, due to a decreased clotting threshold caused by hypothermia. The risk of deep vein thrombosis may be the most pressing medical complication. [ citation needed ] Deep vein thrombosis is a medical event whereby a blood clot forms in a deep vein, usually the femoral vein; it may become fatal if the clot travels to the lungs and causes a pulmonary embolism. Another potential problem with cooling catheters is the potential to block access to the femoral vein, which is a site normally used for a variety of other medical procedures, including angiography of the venous system and of the right side of the heart. However, most cooling catheters are triple-lumen catheters, and the majority of people post-arrest will require central venous access. Unlike non-invasive methods, which can be administered by nurses, the insertion of cooling catheters must be performed by a physician fully trained and familiar with the procedure. The time delay between identifying a person who might benefit from the procedure and the arrival of an interventional radiologist or other physician to perform the insertion may offset some of the benefit of invasive methods' more rapid cooling. [ citation needed ] Transnasal evaporative cooling is a method of inducing hypothermia that provides a means of continuously cooling a person throughout the early stages of targeted temperature management and during movement through the hospital environment. This technique uses two cannulae, inserted into a person's nasal cavity, to deliver a spray of coolant mist that evaporates directly underneath the brain and base of the skull. As blood passes through the cooled area, it reduces the temperature throughout the rest of the body. [ citation needed ] The method is compact enough to be used at the point of cardiac arrest, during ambulance transport, or within the hospital proper. It is intended to rapidly reduce the person's temperature to below 34 °C (93 °F) while targeting the brain as the first area of cooling. Research into the device has shown cooling rates of 2.6 °C (4.7 °F) per hour in the brain (measured through infrared tympanic thermometry) and 1.6 °C (2.9 °F) per hour for core body temperature reduction. [ 36 ] [ 37 ] With water blanket technologies, cold water circulates through a blanket, or through a torso wraparound vest and leg wraps. To lower temperature with optimal speed, 70% of a person's surface area should be covered with water blankets. This treatment represents the most well-studied means of controlling body temperature. Water blankets lower a person's temperature exclusively by cooling the skin and accordingly require no invasive procedures. [ citation needed ] Water blankets possess several undesirable qualities. They are susceptible to leaking, which may represent an electrical hazard since they are operated in close proximity to electrically powered medical equipment. [ 38 ] The Food and Drug Administration has also reported several cases of external cooling blankets causing significant burns to people's skin.
Other problems with external cooling include overshoot of the target temperature (20% of people will overshoot), slower induction time than internal cooling, increased compensatory response, decreased patient access, and the need to discontinue cooling for invasive procedures such as cardiac catheterization. [ 39 ] If therapy with water blankets is given along with two litres of cold intravenous saline, people can be cooled to 33 °C (91 °F) in 65 minutes. [ citation needed ] Most machines now come with core temperature probes. When a probe is inserted into the rectum, core body temperature is monitored, and feedback to the machine allows it to adjust the water blanket to achieve the desired set temperature. In the past, some models of cooling machines produced an overshoot of the target temperature and cooled people to levels below 32 °C (90 °F), resulting in increased adverse events. They have also rewarmed patients at too fast a rate, leading to spikes in intracranial pressure. Some newer models have software that attempts to prevent this overshoot by utilizing warmer water as the target temperature approaches. Some of the new machines also have three rates of cooling and warming; the rewarming rate on one of these machines allows a patient to be rewarmed at a very slow rate of just 0.17 °C (0.31 °F) per hour in "automatic mode", allowing rewarming from 33 °C (91 °F) to 37 °C (99 °F) over 24 hours. [ citation needed ] There are a number of non-invasive head cooling caps and helmets designed to target cooling at the brain. [ 40 ] A hypothermia cap is typically made of a synthetic material such as neoprene, silicone, or polyurethane and filled with a cooling agent such as ice or gel, which is either cooled to a very cold temperature, −25 to −30 °C (−13 to −22 °F), before application or continuously cooled by an auxiliary control unit. Their most notable uses are in preventing or reducing alopecia in chemotherapy, [ 41 ] and in preventing cerebral palsy in babies born with hypoxic ischemic encephalopathy. [ 42 ] In the continuously cooled version, coolant is chilled with the aid of a compressor and pumped through the cooling cap. Circulation is regulated by means of valves and temperature sensors in the cap. If the temperature deviates or other errors are detected, an alarm system is activated. The frozen version involves continuous application of caps filled with Crylon gel cooled to −30 °C (−22 °F) to the scalp before, during and after intravenous chemotherapy. As the caps warm on the head, multiple cooled caps must be kept on hand and applied every 20 to 30 minutes. Hypothermia has been applied therapeutically since antiquity. The Greek physician Hippocrates, the namesake of the Hippocratic Oath, advocated the packing of wounded soldiers in snow and ice. [ 21 ] The Napoleonic surgeon Baron Dominique Jean Larrey recorded that officers who were kept closer to the fire survived less often than the minimally pampered infantrymen. [ 21 ] In modern times, the first medical article concerning hypothermia was published in 1945. [ 21 ] This study focused on the effects of hypothermia on patients with severe head injury. In the 1950s, hypothermia received its first medical application, being used in intracerebral aneurysm surgery to create a bloodless field. [ 21 ] Most of the early research focused on applications of deep hypothermia, defined as a body temperature of 20–25 °C (68–77 °F).
Such an extreme drop in body temperature brings with it a whole host of side effects, which made the use of deep hypothermia impractical in most clinical situations. This period also saw sporadic investigation of milder forms of hypothermia, with mild hypothermia being defined as a body temperature of 32–34 °C (90–93 °F). In the 1950s, Dr. Rosomoff demonstrated in dogs the positive effects of mild hypothermia after brain ischemia and traumatic brain injury. [ 21 ] In the 1980s, further animal studies indicated the ability of mild hypothermia to act as a general neuroprotectant following a blockage of blood flow to the brain. These animal data were supported by two landmark human studies published simultaneously in 2002 in the New England Journal of Medicine. [ 43 ] Both studies, one occurring in Europe and the other in Australia, demonstrated the positive effects of mild hypothermia applied following cardiac arrest. [ 8 ] Responding to this research, in 2003 the American Heart Association (AHA) and the International Liaison Committee on Resuscitation (ILCOR) endorsed the use of targeted temperature management following cardiac arrest. [ 44 ] Currently, a growing percentage of hospitals around the world incorporate the AHA/ILCOR guidelines and include hypothermic therapies in their standard package of care for patients with cardiac arrest. [ 43 ] Some researchers go so far as to contend that hypothermia represents a better neuroprotectant following a blockage of blood to the brain than any known drug. [ 27 ] Over this same period, a particularly successful research effort showed that hypothermia is a highly effective treatment when applied to newborn infants following birth asphyxia. Meta-analysis of a number of large randomised controlled trials showed that hypothermia for 72 hours, started within 6 hours of birth, significantly increased the chance of survival without brain damage. [ 45 ] TTM has been studied in several use scenarios in which it has not usually been found to be helpful, or is still under investigation, despite theoretical grounds for its usefulness. [ 46 ] There is currently no evidence supporting the use of targeted temperature management in humans for stroke, and clinical trials have not been completed. [ 47 ] Most of the data concerning hypothermia's effectiveness in treating stroke is limited to animal studies. These studies have focused primarily on ischemic stroke as opposed to hemorrhagic stroke, since hypothermia is associated with a lower clotting threshold. In these animal studies, hypothermia has been shown to be an effective neuroprotectant. [ 48 ] The use of hypothermia to control intracranial pressure (ICP) after an ischemic stroke was found to be both safe and practical. [ 49 ] Animal studies have shown the benefit of targeted temperature management in traumatic central nervous system (CNS) injuries. Clinical trials have shown mixed results with regard to the optimal temperature and delay of cooling. Achieving therapeutic temperatures of 33 °C (91 °F) is thought to prevent secondary neurological injuries after severe CNS trauma. [ 50 ] A systematic review of randomised controlled trials in traumatic brain injury (TBI) suggests there is no evidence that hypothermia is beneficial. [ 51 ] A clinical trial in cardiac arrest patients showed that hypothermia improved neurological outcome and reduced mortality. [ 8 ] A retrospective study of the use of hypothermia for cardiac arrest patients showed a favorable neurological outcome and survival.
[ 52 ] Osborn waves on the electrocardiogram (ECG) are frequent during TTM after cardiac arrest, particularly in patients treated with 33 °C. [ 53 ] Osborn waves are not associated with an increased risk of ventricular arrhythmia, and may be considered a benign physiological phenomenon, associated with lower mortality in univariable analyses. [ 53 ] As of 2015, hypothermia had shown no improvement in neurological outcomes or mortality in neurosurgery. [ 54 ] TTM has been used in some cases of naegleriasis. [ 55 ] This article incorporates text from a free content work, licensed under CC BY 4.0, taken from Anatomy and Physiology, J. Gordon Betts et al., OpenStax.
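To make the metabolic rationale discussed earlier in this article concrete, the short calculation below compounds the cited 5–7% reduction in cellular metabolism per degree Celsius over a cooling from 37 °C to the commonly targeted 33 °C. The compounding (multiplicative per-degree) model is an illustrative assumption, not a claim from the cited studies.

```python
# Back-of-the-envelope: compound a 5-7% metabolic slowdown per degree C
# over a 4-degree drop (37 C -> 33 C), per the figure cited above.
for slowdown_per_degree in (0.05, 0.07):
    remaining = (1 - slowdown_per_degree) ** 4   # four 1-degree steps
    print(f"{slowdown_per_degree:.0%}/degree -> metabolism at ~{remaining:.0%} of baseline")
# ~5%/degree leaves about 81% of baseline; ~7%/degree about 75%,
# i.e., roughly a 19-25% reduction in the brain's oxygen demand.
```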
https://en.wikipedia.org/wiki/Targeted_temperature_management
The tarmac scam is a confidence trick in which criminals sell fake or shoddy tarmac (asphalt) and driveway resurfacing. It is particularly common in Europe but practiced worldwide. [ 1 ] [ 2 ] Other names include the paving scam, tarmacking, the asphalt scam, driveway fraud, and similar variants. Non-English names include "Truffa dell'asfalto" (Italian), "Teerkolonne" (German) and "faux bitumeurs" (French). [ 3 ] [ 4 ] [ 5 ] A conman typically goes door-to-door, claiming to be a builder working on a contract who has some leftover tarmac, and offering to pave a driveway at a low cost. [ 2 ] [ 6 ] The paving is in fact often simply gravel chippings covered with engine oil, [ 2 ] or not the right depth and type of materials to form a lasting road surface. [ 3 ] Milk has been used to make a fake sealant. [ 7 ] [ 8 ] The conmen may target elderly, vulnerable residents, [ 9 ] [ 10 ] [ 11 ] and claim to be official contractors working on roadworks to add credibility. [ 12 ] Reported escalation has included increasing the cost, claiming that the job has required more material than expected, and making threats. [ 13 ] [ 14 ] [ 15 ] Tarmac fraud is particularly associated with the Rathkeale Rovers and other gangs from the Irish Traveller community. [ 16 ] [ 17 ] [ 1 ] [ 18 ] The organiser of the scheme may lead a gang of low-paid workers [ 3 ] or of human trafficking victims. [ 19 ] [ 20 ] [ 8 ] Cases have been reported since the 1980s. [ 9 ] [ 21 ] [ 22 ] [ 23 ] The Irish crime reporter Eamon Dillon, an expert on the gangs involved, interviewed a builder who had worked with one gang; the builder said that they had custom-built lorries which could never do a proper job: "a proper tarring lorry will have sixty jets, our tar lorries have eight". [ 3 ] In another case, the equipment was rented in Romania and then never returned. [ 1 ] Another gang used a lorry with Highways Agency branding. [ 13 ] The relative mundanity of tarmacking may have made it a low priority for law enforcement. [ 2 ] [ 8 ] Dillon has estimated that the scheme may earn up to $140 million a year [ 2 ] and that in 2010 there were 20 gangs active in Italy alone, earning €2 million a week. [ 24 ]
https://en.wikipedia.org/wiki/Tarmac_scam
Tarmacadam, or tarmac, is a road surfacing material made by combining tar and macadam (crushed stone and sand), patented by the Welsh inventor Edgar Purnell Hooley in 1902. It is a more durable and dust-free enhancement of the simple compacted stone macadam surfaces invented by the Scottish engineer John Loudon McAdam in the early 19th century. The terms "tarmacadam" and "tarmac" are also used for a variety of other materials, including tar-grouted macadam, bituminous surface treatments and modern asphalt concrete. Macadam roads, pioneered by McAdam in the 1820s, [ 1 ] are prone to rutting and generating dust. Methods to stabilise macadam surfaces with tar date back to at least 1834, when John Henry Cassell, operating from Cassell's Patent Lava Stone Works in Millwall, England, patented "lava stone". [ 2 ] This method involved spreading tar on the subgrade, placing a typical macadam layer, and finally sealing the macadam with a mixture of tar and sand. Tar-grouted macadam was in use well before 1900 and involved scarifying the surface of an existing macadam pavement, spreading tar, and re-compacting. Although the use of tar in road construction was known in the 19th century, it was little used and was not introduced on a large scale until the motorcar arrived on the scene in the early 20th century. Ironically, although John Loudon McAdam himself had been a supplier of coke for Britain's first coal-tar factory, he never in his own lifetime advocated the use of tar as a binding agent for his road designs, preferring free-draining materials (see macadam). In 1901, Edgar Purnell Hooley was walking in Denby, Derbyshire, when he noticed a smooth stretch of road close to an ironworks. He was informed that a barrel of tar had fallen onto the road and someone had poured waste slag from the nearby furnaces to cover up the mess. [ 3 ] Hooley noticed that this unintentional resurfacing had solidified the road, with no rutting and no dust. [ 3 ] Hooley's 1902 patent for tarmac involved mechanically mixing tar and aggregate before lay-down, and then compacting the mixture with a steamroller. The tar was modified by adding small amounts of Portland cement, resin and pitch. [ 4 ] Nottingham's Radcliffe Road became the first tarmac road in the world. [ 3 ] In 1903 Hooley formed the Tar Macadam Syndicate Ltd and registered Tarmac as a trademark. [ 3 ] As petroleum production increased, the by-product bitumen became available in greater quantities and largely supplanted coal tar. The macadam construction process quickly became obsolete because of the onerous and impractical manual labour required. The somewhat similar tar-and-chip method, also known as bituminous surface treatment (BST) or chipseal, remains popular. While true tarmac pavement is uncommon in many countries today, the word is widely used to refer to generic paved areas at airports, [ 5 ] especially the apron near airport terminals, [ 6 ] although these areas are often made of concrete. Similarly, in the UK the word tarmac is much more commonly used by the public when referring to asphalt concrete.
https://en.wikipedia.org/wiki/Tarmacadam
Tarnish is a thin layer of corrosion that forms over copper , brass , aluminum , magnesium , neodymium and other similar metals as their outermost layer undergoes a chemical reaction. [ 1 ] Tarnish does not always result from the sole effects of oxygen in the air. For example, silver needs hydrogen sulfide to tarnish, although it may tarnish with oxygen over time. It often appears as a dull, gray or black film or coating over metal. Tarnish is a surface phenomenon that is self-limiting, unlike rust . Only the top few layers of the metal react. The layer of tarnish seals and protects the underlying layers from reacting. Tarnish preserves the underlying metal in outdoor use, and in this form is called chemical patina , [ 2 ] an example of which is the green or blue-green form of copper(II) carbonate known as verdigris . Unlike patina advantageous in applications such as copper roofing and copper, bronze, and brass statues and fittings exposed to the elements, a chemical patina may be considered undesirable, as on silverware, [ 2 ] or a matter of taste or convention, as in toning on coins. Tarnish is a product of a chemical reaction between a metal and a nonmetal compound , especially oxygen and sulfur dioxide . It is usually a metal oxide , the product of oxidation ; sometimes it is a metal sulfide. The metal oxide sometimes reacts with water to make the hydroxide, or with carbon dioxide to make the carbonate. It is a chemical change. There are various methods to prevent metals from tarnishing. Heavy tarnish can be mechanically removed by using tools such as a file or abrasive materials such as steel wool , sandpaper , emery paper , and heavy polishing compounds. Lighter tarnish may be abrasively removed with lighter polishing compounds or chemicals such as baking soda . Gentler abrasives, such as calcium carbonate , are often used by museums to clean tarnished silver , which will not scratch it or leave unwanted residues. [ 4 ] Objects such as silverware may have their tarnish non-destructively reversed electrochemically by resting them on a piece of aluminium foil in a pot of boiling water with a small amount of salt or baking soda. [ 5 ] [ 6 ]
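The silver case can be made concrete with the standard textbook equations; the following is a sketch (household variants of the foil method differ in detail, e.g. in the role of the salt or baking soda):

% Tarnishing of silver by trace hydrogen sulfide in air:
4\,\mathrm{Ag} + 2\,\mathrm{H_2S} + \mathrm{O_2} \longrightarrow 2\,\mathrm{Ag_2S} + 2\,\mathrm{H_2O}
% Electrochemical reversal on aluminium foil: aluminium, being the more
% reactive metal, is oxidised, while the silver in Ag2S is reduced back
% to the metal rather than being polished away:
3\,\mathrm{Ag_2S} + 2\,\mathrm{Al} \longrightarrow 6\,\mathrm{Ag} + \mathrm{Al_2S_3}

This is why the electrochemical method is described as non-destructive: unlike abrasive polishing, it returns the silver to the object instead of removing the sulfide layer.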
https://en.wikipedia.org/wiki/Tarnish
Taro Morishima ( 森嶋 太郎 , Morishima Tarō , 1903 – 1989) was a Japanese mathematician specializing in algebra who studied at the University of Tokyo in Japan . Morishima published at least thirteen papers, including his work on Fermat's Last Theorem ; [ 1 ] a collected works volume was published in 1990, after his death. [ 2 ] He also corresponded several times with the American mathematician H. S. Vandiver . [ 3 ] Granville wrote that Morishima's proof could not be accepted. [1]
https://en.wikipedia.org/wiki/Taro_Morishima
In 1936, Alfred Tarski gave an axiomatization of the real numbers and their arithmetic, consisting of only eight axioms and a mere four primitive notions : [ 1 ] the set of reals denoted R , a binary relation over R , denoted by infix <, a binary operation of addition over R , denoted by infix +, and the constant 1. Tarski's axiomatization, which is a second-order theory, can be seen as a version of the more usual definition of real numbers as the unique Dedekind-complete ordered field ; it is, however, made much more concise by avoiding multiplication altogether and using unorthodox variants of standard algebraic axioms and other subtle tricks. Tarski did not supply a proof that his axioms are sufficient or a definition for the multiplication of real numbers in his system. Tarski also studied the first-order theory of the structure ( R , +, ·, <), leading to a set of axioms for this theory and to the concept of real closed fields . Tarski stated, without proof, that these axioms turn the relation < into a total ordering . The missing component was supplied in 2008 by Stefanie Ucsnay. [ 2 ] The axioms then imply that R is a linearly ordered abelian group under addition with distinguished positive element 1 , and that this group is Dedekind-complete , divisible , and Archimedean . Tarski never proved that these axioms and primitives imply the existence of a binary operation called multiplication that has the expected properties, so that R becomes a complete ordered field under addition and multiplication. It is possible to define this multiplication operation by considering certain order-preserving homomorphisms of the ordered group ( R ,+,<). [ 3 ]
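The closing remark can be made concrete. In a Dedekind-complete ordered abelian group, the order-preserving endomorphisms behave like scalings, and multiplication can be recovered from them. The following is a sketch of that standard construction, not Tarski's own text; the notation f_a is introduced here purely for illustration.

% For each a > 0, let f_a be the unique order-preserving endomorphism of
% (R, +, <) with f_a(1) = a; Dedekind completeness and divisibility give
% existence and uniqueness. Multiplication is then defined by
a \cdot b := f_a(b) \quad (a > 0), \qquad 0 \cdot b := 0, \qquad (-a)\cdot b := -(a \cdot b).
% Distributivity follows from the additivity of each f_a, associativity
% from f_{a \cdot b} = f_a \circ f_b, and commutativity by a density
% argument on the divisible subgroup generated by 1.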
https://en.wikipedia.org/wiki/Tarski's_axiomatization_of_the_reals
Tarski's axioms are an axiom system for Euclidean geometry , specifically for that portion of Euclidean geometry that is formulable in first-order logic with identity (i.e. is formulable as an elementary theory ). As such, it does not require an underlying set theory . The only primitive objects of the system are "points" and the only primitive predicates are "betweenness" (expressing the fact that a point lies on a line segment between two other points) and "congruence" (expressing the fact that the distance between two points equals the distance between two other points). The system contains infinitely many axioms. The axiom system is due to Alfred Tarski who first presented it in 1926. [ 1 ] Other modern axiomatizations of Euclidean geometry are Hilbert's axioms (1899) and Birkhoff's axioms (1932). Using his axiom system, Tarski was able to show that the first-order theory of Euclidean geometry is consistent , complete and decidable : every sentence in its language is either provable or disprovable from the axioms, and we have an algorithm which decides for any given sentence whether it is provable or not. Early in his career Tarski taught geometry and researched set theory. His coworker Steven Givant (1999) explained Tarski's take-off point, and says that "with typical thoroughness" Tarski devised his system. Like other modern axiomatizations of Euclidean geometry, Tarski's employs a formal system consisting of symbol strings, called sentences , whose construction respects formal syntactical rules , and rules of proof that determine the allowed manipulations of the sentences. Unlike some other modern axiomatizations, such as Birkhoff's and Hilbert's , Tarski's axiomatization has no primitive objects other than points , so a variable or constant cannot refer to a line or an angle. Because points are the only primitive objects, and because Tarski's system is a first-order theory , it is not even possible to define lines as sets of points. The only primitive relations ( predicates ) are "betweenness" and "congruence" among points. Tarski's axiomatization is shorter than its rivals, in a sense Tarski and Givant (1999) make explicit. It is more concise than Pieri's because Pieri had only two primitive notions while Tarski introduced three: point, betweenness, and congruence. Such economy of primitive and defined notions means that Tarski's system is not very convenient for doing Euclidean geometry. Rather, Tarski designed his system to facilitate its analysis via the tools of mathematical logic , i.e., to facilitate deriving its metamathematical properties. Tarski's system has the unusual property that all sentences can be written in universal-existential form, a special case of the prenex normal form . This form has all universal quantifiers preceding any existential quantifiers , so that all sentences can be recast in the form ∀ u ∀ v … ∃ a ∃ b … . {\displaystyle \forall u\forall v\ldots \exists a\exists b\dots .} This fact allowed Tarski to prove that Euclidean geometry is decidable : there exists an algorithm which can determine the truth or falsity of any sentence. Tarski's axiomatization is also complete . This does not contradict Gödel's first incompleteness theorem , because Tarski's theory lacks the expressive power needed to interpret Robinson arithmetic ( Franzén 2005 , pp. 25–26).
Alfred Tarski worked on the axiomatization and metamathematics of Euclidean geometry intermittently from 1926 until his death in 1983, with Tarski (1959) heralding his mature interest in the subject. The work of Tarski and his students on Euclidean geometry culminated in the monograph Schwabhäuser, Szmielew, and Tarski (1983), which set out the 10 axioms and one axiom schema shown below, the associated metamathematics , and a fair bit of the subject. Gupta (1965) made important contributions, and Tarski and Givant (1999) discuss the history. These axioms are a more elegant version of a set Tarski devised in the 1920s as part of his investigation of the metamathematical properties of Euclidean plane geometry . This objective required reformulating that geometry as a first-order theory . Tarski did so by positing a universe of points , with lower case letters denoting variables ranging over that universe. Equality is provided by the underlying logic (see First-order logic#Equality and its axioms ). [ 2 ] Tarski then posited two primitive relations, betweenness and congruence. Betweenness captures the affine aspect (such as the parallelism of lines) of Euclidean geometry; congruence, its metric aspect (such as angles and distances). The background logic includes identity , a binary relation denoted by =. The axioms are grouped by the types of relation they invoke, then sorted, first by the number of existential quantifiers, then by the number of atomic sentences. The axioms should be read as universal closures ; hence any free variables should be taken as tacitly universally quantified . While the congruence relation x y ≡ z w {\displaystyle xy\equiv zw} is, formally, a 4-way relation among points, it may also be thought of, informally, as a binary relation between two line segments x y {\displaystyle xy} and z w {\displaystyle zw} . The "Reflexivity" and "Transitivity" axioms, combined, prove both that congruence is reflexive in the usual sense ( x y ≡ x y {\displaystyle xy\equiv xy} ) and that it is symmetric ( x y ≡ z u {\displaystyle xy\equiv zu} implies z u ≡ x y {\displaystyle zu\equiv xy} ). The "transitivity" axiom asserts that congruence is Euclidean , in that it respects the first of Euclid's " common notions ". The "Identity of Congruence" axiom states, intuitively, that if xy is congruent with a segment that begins and ends at the same point, x and y are the same point. This is closely related to the notion of reflexivity for binary relations . The only point on the line segment x x {\displaystyle xx} is x {\displaystyle x} itself. Let φ( x ) and ψ( y ) be first-order formulae containing no free instances of either a or b . Let there also be no free instances of x in ψ( y ) or of y in φ( x ). Then all instances of the following schema are axioms: ∃a ∀x ∀y [(φ( x ) ∧ ψ( y )) → Baxy ] → ∃b ∀x ∀y [(φ( x ) ∧ ψ( y )) → Bxby ]. Let r be a ray with endpoint a . Let the first order formulae φ and ψ define subsets X and Y of r , such that every point in Y is to the right of every point of X (with respect to a ). Then there exists a point b in r lying between X and Y . This is essentially the Dedekind cut construction, carried out in a way that avoids quantification over sets. Note that the formulae φ( x ) and ψ( y ) may contain parameters, i.e. free variables different from a , b , x, y . And indeed, each instance of the axiom scheme that does not contain parameters can be proven from the other axioms. [ 3 ] There exist three noncollinear points. Without this axiom, the theory could be modeled by the one-dimensional real line , a single point, or even the empty set. Three points equidistant from two distinct points form a line. Without this axiom, the theory could be modeled by three-dimensional or higher-dimensional space.
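For reference, the basic congruence and betweenness axioms discussed above, together with the segment construction axiom used below, are usually written as follows. This is a standard rendering (after Schwabhäuser, Szmielew, and Tarski 1983), given here for illustration rather than as a quotation:

\begin{aligned}
&\text{Reflexivity of Congruence:} && xy \equiv yx\\
&\text{Identity of Congruence:} && xy \equiv zz \rightarrow x = y\\
&\text{Transitivity of Congruence:} && (xy \equiv zu \wedge xy \equiv vw) \rightarrow zu \equiv vw\\
&\text{Identity of Betweenness:} && Bxyx \rightarrow x = y\\
&\text{Segment Construction:} && \exists z\,(Bxyz \wedge yz \equiv ab)
\end{aligned}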
Three variants of this axiom can be given, labeled A, B and C below. They are equivalent to each other given the remaining axioms, and indeed equivalent to Euclid's parallel postulate . Let a line segment join the midpoints of two sides of a given triangle . That line segment will be half as long as the third side. This is equivalent to the interior angles of any triangle summing to two right angles . Given any triangle , there exists a circle that includes all of its vertices. Given any angle and any point v in its interior, there exists a line segment including v , with an endpoint on each side of the angle. Each variant has an advantage over the others. Begin with two triangles , xuz and x'u'z'. Draw the line segments yu and y'u', connecting a vertex of each triangle to a point on the side opposite to the vertex. The result is two divided triangles, each made up of five segments. If four segments of one triangle are each congruent to the corresponding segment in the other triangle, then the fifth segments in both triangles must be congruent. This is equivalent to the side-angle-side rule for determining that two triangles are congruent; if the angles uxz and u'x'z' are congruent (there exist congruent triangles xuz and x'u'z' ), and the two pairs of incident sides are congruent ( xu ≡ x'u' and xz ≡ x'z' ), then the remaining pair of sides is also congruent ( uz ≡ u'z' ). For any point y , it is possible to draw in any direction (determined by x ) a line segment congruent to any segment ab . According to Tarski and Givant (1999: 192-93), none of the above axioms are fundamentally new. The first four axioms establish some elementary properties of the two primitive relations. For instance, Reflexivity and Transitivity of Congruence establish that congruence is an equivalence relation over line segments. The Identity of Congruence and of Betweenness govern the trivial case when those relations are applied to nondistinct points. The theorem xy ≡ zz ↔ x = y ↔ Bxyx extends these Identity axioms. A number of other properties of betweenness are derivable as theorems, [ 4 ] including properties that totally order the points making up a line segment. The Upper and Lower Dimension axioms together require that any model of these axioms have dimension 2, i.e. that we are axiomatizing the Euclidean plane. Suitable changes in these axioms yield axiom sets for Euclidean geometry for dimensions 0, 1, and greater than 2 (Tarski and Givant 1999: Axioms 8 (1) , 8 (n) , 9 (0) , 9 (1) , 9 (n) ). Note that solid geometry requires no new axioms, unlike the case with Hilbert's axioms . Moreover, Lower Dimension for n dimensions is simply the negation of Upper Dimension for n - 1 dimensions. When the number of dimensions is greater than 1, betweenness can be defined in terms of congruence (Tarski and Givant, 1999). First, a relation "≤" is defined (where a b ≤ c d {\displaystyle ab\leq cd} is interpreted as "the length of line segment a b {\displaystyle ab} is less than or equal to the length of line segment c d {\displaystyle cd} "). In the case of two dimensions, the intuition is as follows: For any line segment xy , consider the possible range of lengths of xv , where v is any point on the perpendicular bisector of xy . It is apparent that while there is no upper bound to the length of xv , there is a lower bound, which occurs when v is the midpoint of xy .
So if xy is shorter than or equal to zu , then the range of possible lengths of xv will be a superset of the range of possible lengths of zw , where w is any point on the perpendicular bisector of zu . Betweenness can then be defined by using the intuition that the shortest distance between any two points is a straight line. The Axiom Schema of Continuity assures that the ordering of points on a line is complete (with respect to first-order definable properties). As was pointed out by Tarski, this first-order axiom schema may be replaced by a more powerful second-order Axiom of Continuity if one allows for variables to refer to arbitrary sets of points. The resulting second-order system is equivalent to Hilbert's set of axioms. (Tarski and Givant 1999) The Axioms of Pasch and Euclid are well known. The Segment Construction axiom makes measurement and the Cartesian coordinate system possible—simply assign the length 1 to some arbitrary non-empty line segment. Indeed, it is shown in (Schwabhäuser 1983) that by specifying two distinguished points on a line, called 0 and 1, we can define an addition, multiplication and ordering, turning the set of points on that line into a real-closed ordered field . We can then introduce coordinates from this field, showing that every model of Tarski's axioms is isomorphic to the two-dimensional plane over some real-closed ordered field. The standard geometric notions of parallelism and intersection of lines (where lines are represented by two distinct points on them), right angles, congruence of angles, similarity of triangles, tangency of lines and circles (represented by a center point and a radius) can all be defined in Tarski's system. Let wff stand for a well-formed formula (or syntactically correct first-order formula) in Tarski's system. Tarski and Givant (1999: 175) proved that Tarski's system is consistent, complete, and decidable. This has the consequence that every statement of (second-order, general) Euclidean geometry which can be formulated as a first-order sentence in Tarski's system is true if and only if it is provable in Tarski's system, and this provability can be automatically checked with Tarski's algorithm. This, for instance, applies to all theorems in Euclid's Elements , Book I. An example of a theorem of Euclidean geometry which cannot be so formulated is the Archimedean property : to any two positive-length line segments S 1 and S 2 there exists a natural number n such that nS 1 is longer than S 2 . (This is a consequence of the fact that there are real-closed fields that contain infinitesimals. [ 5 ] ) Other notions that cannot be expressed in Tarski's system are the constructability with straightedge and compass and statements that talk about "all polygons" etc. [ 6 ] Gupta (1965) proved Tarski's axioms independent, excepting Pasch and Reflexivity of Congruence . Negating the Axiom of Euclid yields hyperbolic geometry , while eliminating it outright yields absolute geometry . Full (as opposed to elementary) Euclidean geometry requires giving up a first order axiomatization: replace φ( x ) and ψ( y ) in the axiom schema of Continuity with x ∈ A and y ∈ B , where A and B are universally quantified variables ranging over sets of points. Further simplifications for the fragment describing the plane Euclidean geometry of ruler and segment-transporter constructions, as well as for that of ruler and compass constructions were provided in (Pambuccian 2024). Each axiom of the axiom systems presented there is a prenex statement with at most 5 variables.
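Tarski's decidability result is, in spirit, what modern SMT solvers implement for nonlinear real arithmetic. As a small illustration, the sketch below assumes Python with the z3-solver package installed (an assumption, not part of the sources above); it decides a universally quantified first-order sentence over the reals by checking that its negation is unsatisfiable.

# Decide the sentence "for all x, y: x^2 + y^2 >= 2xy" over the real field.
# z3's nonlinear real arithmetic engine is a decision procedure in the
# tradition that began with Tarski's quantifier elimination.
from z3 import Reals, Solver, Not, unsat

x, y = Reals("x y")
s = Solver()
s.add(Not(x * x + y * y >= 2 * x * y))  # assert the negation of the claim

# "unsat" means the negation has no real solution, so the sentence is valid.
print("valid" if s.check() == unsat else "not valid")

Any sentence of elementary Euclidean geometry can, after coordinatization, be decided the same way in principle, though the practical cost grows quickly with the number of variables.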
Hilbert's axioms for plane geometry number 16, and include Transitivity of Congruence and a variant of the Axiom of Pasch. The only notion from intuitive geometry invoked in the remarks to Tarski's axioms is triangle . (Versions B and C of the Axiom of Euclid refer to "circle" and "angle," respectively.) Hilbert's axioms also require "ray," "angle," and the notion of a triangle "including" an angle. In addition to betweenness and congruence, Hilbert's axioms require a primitive binary relation "on," linking a point and a line. Hilbert uses two axioms of Continuity, and they require second-order logic . By contrast, Tarski's Axiom schema of Continuity consists of infinitely many first-order axioms. Such a schema is indispensable; Euclidean geometry in Tarski's (or equivalent) language cannot be finitely axiomatized as a first-order theory . Hilbert's system is therefore considerably stronger: every model is isomorphic to the real plane R 2 {\displaystyle \mathbb {R} ^{2}} (using the standard notions of points and lines). By contrast, Tarski's system has many non-isomorphic models: for every real-closed field F , the plane F 2 provides one such model (where betweenness and congruence are defined in the obvious way). [ 7 ] The first four groups of axioms of Hilbert's axioms for plane geometry are bi-interpretable with Tarski's axioms minus continuity.
https://en.wikipedia.org/wiki/Tarski's_axioms
Tarski's circle-squaring problem is the challenge, posed by Alfred Tarski in 1925, [ 1 ] to take a disc in the plane, cut it into finitely many pieces, and reassemble the pieces so as to get a square of equal area . It is possible, using pieces that are Borel sets , but not with pieces cut by Jordan curves . Tarski's circle-squaring problem was proven to be solvable by Miklós Laczkovich in 1990. The decomposition makes heavy use of the axiom of choice and is therefore non-constructive . Laczkovich estimated the number of pieces in his decomposition at roughly 10 50 {\displaystyle 10^{50}} . The pieces used in his decomposition are non-measurable subsets of the plane. [ 2 ] [ 3 ] Laczkovich actually proved the reassembly can be done using translations only ; rotations are not required. Along the way, he also proved that any simple polygon in the plane can be decomposed into finitely many pieces and reassembled using translations only to form a square of equal area. [ 2 ] [ 3 ] It follows from a result of Wilson (2005) that it is possible to choose the pieces in such a way that they can be moved continuously while remaining disjoint to yield the square. Moreover, this stronger statement can be proved as well to be accomplished by means of translations only. [ 4 ] A constructive solution was given by Łukasz Grabowski, András Máthé and Oleg Pikhurko in 2016 which worked everywhere except for a set of measure zero . [ 5 ] More recently, Andrew Marks and Spencer Unger gave a completely constructive solution using about 10 200 {\displaystyle 10^{200}} Borel pieces . [ 6 ] Lester Dubins , Morris W. Hirsch & Jack Karush proved it is impossible to dissect a circle and make a square using pieces that could be cut with an idealized pair of scissors (that is, having Jordan curve boundary). [ 7 ] The Bolyai–Gerwien theorem is a related but much simpler result: it states that one can accomplish such a decomposition of a simple polygon with finitely many polygonal pieces if both translations and rotations are allowed for the reassembly. [ 2 ] [ 3 ] These results should be compared with the much more paradoxical decompositions in three dimensions provided by the Banach–Tarski paradox ; those decompositions can even change the volume of a set. However, in the plane, a decomposition into finitely many pieces must preserve the sum of the Banach measures of the pieces, and therefore cannot change the total area of a set. [ 8 ]
https://en.wikipedia.org/wiki/Tarski's_circle-squaring_problem
In model theory , Tarski's exponential function problem asks whether the theory of the real numbers together with the exponential function is decidable . Alfred Tarski had previously shown that the theory of the real numbers (without the exponential function) is decidable . [ 1 ] The ordered real field R {\displaystyle \mathbb {R} } is a structure over the language of ordered rings L or = ( + , − , < , 0 , 1 ) {\displaystyle L_{\text{or}}=(+,-,<,0,1)} , with the usual interpretation given to each symbol. It was proved by Tarski that the theory of the real field , Th ⁡ ( R ) {\displaystyle \operatorname {Th} (\mathbb {R} )} , is decidable. That is, given any L or {\displaystyle L_{\text{or}}} -sentence φ {\displaystyle \varphi } there is an effective procedure for determining whether φ {\displaystyle \varphi } is true in R {\displaystyle \mathbb {R} } . He then asked whether this was still the case if one added a unary function exp {\displaystyle \exp } to the language that was interpreted as the exponential function on R {\displaystyle \mathbb {R} } , to get the structure R exp {\displaystyle \mathbb {R} _{\exp }} . The problem can be reduced to finding an effective procedure for determining whether any given exponential polynomial in n {\displaystyle n} variables and with coefficients in Z {\displaystyle \mathbb {Z} } has a solution in R n {\displaystyle \mathbb {R} ^{n}} . Macintyre & Wilkie (1996) showed that Schanuel's conjecture implies such a procedure exists, and hence gave a conditional solution to Tarski's problem. [ 2 ] Schanuel's conjecture deals with all complex numbers so would be expected to be a stronger result than the decidability of R exp {\displaystyle \mathbb {R} _{\exp }} , and indeed, Macintyre and Wilkie proved that only a real version of Schanuel's conjecture is required to imply the decidability of this theory. Even the real version of Schanuel's conjecture is not a necessary condition for the decidability of the theory. In their paper, Macintyre and Wilkie showed that an equivalent result to the decidability of Th ⁡ ( R exp ) {\displaystyle \operatorname {Th} (\mathbb {R} _{\exp })} is what they dubbed the weak Schanuel's conjecture. This conjecture states that there is an effective procedure that, given n ≥ 1 {\displaystyle n\geq 1} and exponential polynomials in n {\displaystyle n} variables with integer coefficients f 1 , … , f n , g {\displaystyle f_{1},\dots ,f_{n},g} , produces an integer η ≥ 1 {\displaystyle \eta \geq 1} that depends on n , f 1 , … , f n , g {\displaystyle n,f_{1},\dots ,f_{n},g} , and such that if α ∈ R n {\displaystyle \alpha \in \mathbb {R} ^{n}} is a non-singular solution of the system f 1 ( α ) = ⋯ = f n ( α ) = 0 {\displaystyle f_{1}(\alpha )=\cdots =f_{n}(\alpha )=0} then either g ( α ) = 0 {\displaystyle g(\alpha )=0} or | g ( α ) | > 1 η {\displaystyle |g(\alpha )|>{\tfrac {1}{\eta }}} .
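The reduction above turns decidability into a question about real zeros of exponential polynomials. Locating such a zero numerically is routine; the open problem is certifying answers exactly in all cases. The toy sketch below is plain Python, and the example f(x) = exp(x) + x is chosen here purely for illustration (it is not from the sources above); it brackets a zero by bisection.

import math

# Does the exponential polynomial f(x) = exp(x) + x have a real zero?
# Numerically yes: f(-1) < 0 < f(0), so bisection converges to a root.
# An exact decision procedure would have to certify such facts in general.
def f(x: float) -> float:
    return math.exp(x) + x

lo, hi = -1.0, 0.0  # f changes sign on [lo, hi]
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

print(f"zero near x = {(lo + hi) / 2:.12f}")  # about -0.567143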
https://en.wikipedia.org/wiki/Tarski's_exponential_function_problem
In mathematical logic , Tarski's high school algebra problem was a question posed by Alfred Tarski . It asks whether there are identities involving addition , multiplication , and exponentiation over the positive integers that cannot be proved using eleven axioms about these operations that are taught in high-school -level mathematics . The question was solved in 1980 by Alex Wilkie , who showed that such unprovable identities do exist. Tarski's problem more formally asks if the equational theory of the High School Axioms Th E q ( H S ) {\displaystyle {\text{Th}}_{Eq}(\mathrm {HS} )} (that is, the set of identities provable from them in equational logic) is equal to the equational theory of R ≥ 0 {\displaystyle \mathbb {R} _{\geq 0}} (that is, the set of all true identities)? This turns out to be analogous to Hilbert's program and Gödel's incompleteness theorem in the 1920s and 30s. First, note that Birkhoff proved with his HSP theorem that, remarkably, the equational theory of R ≥ 0 {\displaystyle \mathbb {R} _{\geq 0}} is equal to the equational theory of all commutative semirings, in particular the equational theory of N {\displaystyle \mathbb {N} } . In other words, to test if an identity is true one only needs to test it for natural numbers. Then, one can ask if the first-order theory of some finite set of axioms (that is, the set of formulas provable from them in first-order logic) is equal to the first-order theory of the natural numbers, Th ( N ) {\displaystyle {\text{Th}}(\mathbb {N} )} (that is, the set of all true formulas). In Tarski's question the goal is for Th E q ( H S ) = Th E q ( N ) {\displaystyle {\text{Th}}_{Eq}(\mathrm {HS} )={\text{Th}}_{Eq}(\mathbb {N} )} ; in Hilbert's question the goal is for a theory T {\displaystyle T} for which Th ( T ) = Th ( N ) {\displaystyle {\text{Th}}(T)={\text{Th}}(\mathbb {N} )} . In both cases this does not work out. Gödel's first incompleteness theorem, which shows that Th ( N ) {\displaystyle {\text{Th}}(\mathbb {N} )} is not computably axiomatizable, is then analogous to Wilkie and Gurevič's results that the equational theory is not finitely axiomatizable. Tarski considered the following eleven axioms about addition ( + ) {\displaystyle (+)} , multiplication ( ⋅ ) {\displaystyle (\cdot )} , and exponentiation to be standard axioms taught in high school: x + y = y + x ( x + y ) + z = x + ( y + z ) x ⋅ 1 = x x ⋅ y = y ⋅ x ( x ⋅ y ) ⋅ z = x ⋅ ( y ⋅ z ) x ⋅ ( y + z ) = x ⋅ y + x ⋅ z 1 x = 1 x 1 = x x y + z = x y ⋅ x z ( x ⋅ y ) z = x z ⋅ y z ( x y ) z = x y ⋅ z {\displaystyle {\begin{aligned}x+y&=y+x\\(x+y)+z&=x+(y+z)\\x\cdot 1&=x\\x\cdot y&=y\cdot x\\(x\cdot y)\cdot z&=x\cdot (y\cdot z)\\x\cdot (y+z)&=x\cdot y+x\cdot z\\1^{x}&=1\\x^{1}&=x\\x^{y+z}&=x^{y}\cdot x^{z}\\(x\cdot y)^{z}&=x^{z}\cdot y^{z}\\(x^{y})^{z}&=x^{y\cdot z}\end{aligned}}} These eleven axioms, sometimes called the high school identities , [ 1 ] are related to the axioms of a bicartesian closed category or an exponential ring . [ 2 ] Tarski's problem then becomes: are there identities involving only addition, multiplication, and exponentiation, that are true for all positive integers, but that cannot be proved using only the axioms 1–11? Since the axioms seem to list all the basic facts about the operations in question, it is not immediately obvious that there should be any statement, expressible using only the three operations, that is true but cannot be proved from the axioms.
However, proving seemingly innocuous statements can require long proofs using only the above eleven axioms. Consider the following proof that ( x + 1 ) 2 = x 2 + 2 ⋅ x + 1 : {\displaystyle (x+1)^{2}=x^{2}+2\cdot x+1:} ( x + 1 ) 2 = ( x + 1 ) 1 + 1 = ( x + 1 ) 1 ⋅ ( x + 1 ) 1 by 9. = ( x + 1 ) ⋅ ( x + 1 ) by two applications of 8. = ( x + 1 ) ⋅ x + ( x + 1 ) ⋅ 1 by 6. = x ⋅ ( x + 1 ) + ( x + 1 ) by 4. and 3. = ( x ⋅ x + x ⋅ 1 ) + ( x ⋅ 1 + 1 ) by 6. and 3. = x ⋅ x + ( x ⋅ 1 + x ⋅ 1 ) + 1 by two applications of 2. = x 1 ⋅ x 1 + x ⋅ ( 1 + 1 ) + 1 by 6. and two applications of 8. = x 1 + 1 + x ⋅ 2 + 1 by 9. = x 2 + 2 ⋅ x + 1 by 4. {\displaystyle {\begin{aligned}(x+1)^{2}&=(x+1)^{1+1}\\&=(x+1)^{1}\cdot (x+1)^{1}&&{\text{by 9.}}\\&=(x+1)\cdot (x+1)&&{\text{by two applications of 8.}}\\&=(x+1)\cdot x+(x+1)\cdot 1&&{\text{by 6.}}\\&=x\cdot (x+1)+(x+1)&&{\text{by 4. and 3.}}\\&=(x\cdot x+x\cdot 1)+(x\cdot 1+1)&&{\text{by 6. and 3.}}\\&=x\cdot x+(x\cdot 1+x\cdot 1)+1&&{\text{by two applications of 2.}}\\&=x^{1}\cdot x^{1}+x\cdot (1+1)+1&&{\text{by 6. and two applications of 8.}}\\&=x^{1+1}+x\cdot 2+1&&{\text{by 9.}}\\&=x^{2}+2\cdot x+1&&{\text{by 4.}}\end{aligned}}} Strictly we should not write sums of more than two terms without parentheses, and therefore a completely formal proof would prove the identity ( x + 1 ) 2 = ( x 2 + 2 ⋅ x ) + 1 {\displaystyle (x+1)^{2}=\left(x^{2}+2\cdot x\right)+1} (or ( x + 1 ) 2 = x 2 + ( 2 ⋅ x + 1 ) {\displaystyle (x+1)^{2}=x^{2}+(2\cdot x+1)} ) and would have an extra set of parentheses in each line from x ⋅ x + ( x ⋅ 1 + x ⋅ 1 ) + 1 {\displaystyle x\cdot x+(x\cdot 1+x\cdot 1)+1} onwards. The length of proofs is not an issue; proofs of similar identities to that above for things like ( x + y ) 100 {\displaystyle (x+y)^{100}} would take a lot of lines, but would really involve little more than the above proof. The list of eleven axioms can be found explicitly written down in the works of Richard Dedekind , [ 3 ] although they were obviously known and used by mathematicians long before then. Dedekind was the first, though, who seemed to be asking if these axioms were somehow sufficient to tell us everything we could want to know about the integers. The question was put on a firm footing as a problem in logic and model theory sometime in the 1960s by Alfred Tarski, [ 1 ] [ 4 ] and by the 1980s it had become known as Tarski's high school algebra problem. In 1980 Alex Wilkie proved that not every identity in question can be proved using the axioms above. [ 5 ] He did this by explicitly finding such an identity. By introducing new function symbols corresponding to polynomials that map positive numbers to positive numbers he proved this identity, and showed that these functions together with the eleven axioms above were both necessary and sufficient to prove it. The identity in question is ( ( 1 + x ) y + ( 1 + x + x 2 ) y ) x ⋅ ( ( 1 + x 3 ) x + ( 1 + x 2 + x 4 ) x ) y = ( ( 1 + x ) x + ( 1 + x + x 2 ) x ) y ⋅ ( ( 1 + x 3 ) y + ( 1 + x 2 + x 4 ) y ) x . 
{\displaystyle {\begin{aligned}&\left((1+x)^{y}+(1+x+x^{2})^{y}\right)^{x}\cdot \left((1+x^{3})^{x}+(1+x^{2}+x^{4})^{x}\right)^{y}\\={}&\left((1+x)^{x}+(1+x+x^{2})^{x}\right)^{y}\cdot \left((1+x^{3})^{y}+(1+x^{2}+x^{4})^{y}\right)^{x}.\end{aligned}}} This identity is usually denoted W ( x , y ) {\displaystyle W(x,y)} and is true for all positive integers x {\displaystyle x} and y , {\displaystyle y,} as can be seen by factoring ( 1 − x + x 2 ) x y {\displaystyle (1-x+x^{2})^{xy}} out of the second factor on each side; yet it cannot be proved true using the eleven high school axioms. Intuitively, the identity cannot be proved because the high school axioms can't be used to discuss the polynomial 1 − x + x 2 . {\displaystyle 1-x+x^{2}.} Reasoning about that polynomial and the subterm − x {\displaystyle -x} requires a concept of negation or subtraction , and these are not present in the high school axioms. Lacking this, it is then impossible to use the axioms to manipulate the polynomial and prove true properties about it. Wilkie's results from his paper show, in more formal language, that the "only gap" in the high school axioms is the inability to manipulate polynomials with negative coefficients . R. Gurevič showed in 1988 that there is no finite axiomatization for the valid equations for the positive natural numbers with 1, addition, multiplication, and exponentiation. [ 6 ] [ 7 ] Wilkie proved that there are statements about the positive integers that cannot be proved using the eleven axioms above and showed what extra information is needed before such statements can be proved. Using Nevanlinna theory it has also been proved that if one restricts the kinds of exponential one takes then the above eleven axioms are sufficient to prove every true statement. [ 8 ] Another problem stemming from Wilkie's result, which remains open, is that which asks what the smallest algebra is for which W ( x , y ) {\displaystyle W(x,y)} is not true but the eleven axioms above are. In 1985 an algebra with 59 elements was found that satisfied the axioms but for which W ( x , y ) {\displaystyle W(x,y)} was false. [ 4 ] Smaller such algebras have since been found, and it is now known that the smallest such one must have either 11 or 12 elements. [ 9 ]
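A computational spot-check of W(x, y) is straightforward with exact integer arithmetic. The sketch below (plain Python, written for this illustration) verifies small instances; of course, no amount of testing bears on provability from the eleven axioms.

# Wilkie's identity W(x, y), checked exactly on small positive integers.
# Both sides agree because (1+x)(1-x+x^2) = 1+x^3 and
# (1+x+x^2)(1-x+x^2) = 1+x^2+x^4, as noted above.
def lhs(x, y):
    return ((1 + x)**y + (1 + x + x**2)**y)**x * \
           ((1 + x**3)**x + (1 + x**2 + x**4)**x)**y

def rhs(x, y):
    return ((1 + x)**x + (1 + x + x**2)**x)**y * \
           ((1 + x**3)**y + (1 + x**2 + x**4)**y)**x

assert all(lhs(x, y) == rhs(x, y) for x in range(1, 8) for y in range(1, 8))
print("W(x, y) holds for all 1 <= x, y <= 7")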
https://en.wikipedia.org/wiki/Tarski's_high_school_algebra_problem
In mathematics , Tarski's plank problem is a question about coverings of convex regions in n -dimensional Euclidean space by "planks": regions between two hyperplanes . Alfred Tarski asked if the sum of the widths of the planks must be at least the minimum width of the convex region. The question was answered affirmatively by Thøger Bang ( 1950 , 1951 ). [ 1 ] Given a convex body C in R n and a hyperplane H , the width of C parallel to H , w ( C , H ), is the distance between the two supporting hyperplanes of C that are parallel to H . The smallest such distance (i.e. the infimum over all possible hyperplanes) is called the minimal width of C , w ( C ). The (closed) set of points P between two distinct, parallel hyperplanes in R n is called a plank, and the distance between the two hyperplanes is called the width of the plank, w ( P ). Tarski conjectured that if a convex body C of minimal width w ( C ) was covered by a collection of planks, then the sum of the widths of those planks must be at least w ( C ). That is, if P 1 ,…, P m are planks such that C ⊆ P 1 ∪ ⋯ ∪ P m , then w ( P 1 ) + ⋯ + w ( P m ) ≥ w ( C ). Bang proved this is indeed the case. The name of the problem, specifically for the sets of points between parallel hyperplanes, comes from the visualisation of the problem in R 2 . Here, hyperplanes are just straight lines and so planks become the space between two parallel lines. Thus the planks can be thought of as (infinitely long) planks of wood , and the question becomes how many planks does one need to completely cover a convex tabletop of minimal width w ? Bang's theorem shows that, for example, a circular table of diameter d feet can't be covered by fewer than d planks of wood of width one foot each.
https://en.wikipedia.org/wiki/Tarski's_plank_problem
In mathematics , Tarski's theorem , proved by Alfred Tarski ( 1924 ), states that in ZF the statement "For every infinite set A {\displaystyle A} , there is a bijective map between the sets A {\displaystyle A} and A × A {\displaystyle A\times A} " implies the axiom of choice . The opposite direction was already known, thus the statement and the axiom of choice are equivalent. Tarski told Jan Mycielski ( 2006 ) that when he tried to publish the theorem in Comptes Rendus de l'Académie des Sciences de Paris , Fréchet and Lebesgue refused to present it. Fréchet wrote that an implication between two well known propositions is not a new result. Lebesgue wrote that an implication between two false propositions is of no interest. The goal is to prove that the axiom of choice is implied by the statement "for every infinite set A : {\displaystyle A:} | A | = | A × A | {\displaystyle |A|=|A\times A|} ". It is known that the well-ordering theorem is equivalent to the axiom of choice; thus it is enough to show that the statement implies that every set B {\displaystyle B} admits a well-order . Since the collection of all ordinals such that there exists a surjective function from B {\displaystyle B} to the ordinal is a set, there exists an infinite ordinal, β , {\displaystyle \beta ,} such that there is no surjective function from B {\displaystyle B} to β . {\displaystyle \beta .} We assume without loss of generality that the sets B {\displaystyle B} and β {\displaystyle \beta } are disjoint . By the initial assumption, | B ∪ β | = | ( B ∪ β ) × ( B ∪ β ) | , {\displaystyle |B\cup \beta |=|(B\cup \beta )\times (B\cup \beta )|,} thus there exists a bijection f : B ∪ β → ( B ∪ β ) × ( B ∪ β ) . {\displaystyle f:B\cup \beta \to (B\cup \beta )\times (B\cup \beta ).} For every x ∈ B , {\displaystyle x\in B,} it is impossible that β × { x } ⊆ f [ B ] , {\displaystyle \beta \times \{x\}\subseteq f[B],} because otherwise we could define a surjective function from B {\displaystyle B} to β . {\displaystyle \beta .} Therefore, there exists at least one ordinal γ ∈ β {\displaystyle \gamma \in \beta } such that f ( γ ) ∈ β × { x } , {\displaystyle f(\gamma )\in \beta \times \{x\},} so the set S x = { γ : f ( γ ) ∈ β × { x } } {\displaystyle S_{x}=\{\gamma :f(\gamma )\in \beta \times \{x\}\}} is not empty. We can define a new function: g ( x ) = min S x . {\displaystyle g(x)=\min S_{x}.} This function is well defined since S x {\displaystyle S_{x}} is a non-empty set of ordinals, and so has a minimum. For every x , y ∈ B , x ≠ y {\displaystyle x,y\in B,x\neq y} the sets S x {\displaystyle S_{x}} and S y {\displaystyle S_{y}} are disjoint; in particular, g {\displaystyle g} is injective. Therefore, we can define a well order on B : {\displaystyle B:} for every x , y ∈ B {\displaystyle x,y\in B} we define x ≤ y ⟺ g ( x ) ≤ g ( y ) . {\displaystyle x\leq y\iff g(x)\leq g(y).} This is a well order, since the image of g , {\displaystyle g,} that is, g [ B ] , {\displaystyle g[B],} is a set of ordinals and therefore well ordered, and the injectivity of g {\displaystyle g} lets B {\displaystyle B} inherit that ordering.
https://en.wikipedia.org/wiki/Tarski's_theorem_about_choice
Tarski's undefinability theorem , stated and proved by Alfred Tarski in 1933, is an important limitative result in mathematical logic , the foundations of mathematics , and in formal semantics . Informally, the theorem states that "arithmetical truth cannot be defined in arithmetic". [ 1 ] The theorem applies more generally to any sufficiently strong formal system , showing that truth in the standard model of the system cannot be defined within the system. In 1931, Kurt Gödel published the incompleteness theorems , which he proved in part by showing how to represent the syntax of formal logic within first-order arithmetic . Each expression of the formal language of arithmetic is assigned a distinct number. This procedure is known variously as Gödel numbering , coding and, more generally, as arithmetization. In particular, various sets of expressions are coded as sets of numbers. For various syntactic properties (such as being a formula , being a sentence , etc.), these sets are computable . Moreover, any computable set of numbers can be defined by some arithmetical formula. For example, there are formulas in the language of arithmetic defining the set of codes for arithmetic sentences, and for provable arithmetic sentences. The undefinability theorem shows that this encoding cannot be done for semantic concepts such as truth. It shows that no sufficiently rich interpreted language can represent its own semantics. A corollary is that any metalanguage capable of expressing the semantics of some object language (e.g., a predicate definable in Zermelo-Fraenkel set theory expressing whether formulae in the language of Peano arithmetic are true in the standard model of arithmetic [ 2 ] ) must have expressive power exceeding that of the object language. The metalanguage includes primitive notions, axioms, and rules absent from the object language, so that there are theorems provable in the metalanguage not provable in the object language. The undefinability theorem is conventionally attributed to Alfred Tarski . Gödel also discovered the undefinability theorem in 1930, while proving his incompleteness theorems published in 1931, and well before the 1933 publication of Tarski's work (Murawski 1998). While Gödel never published anything bearing on his independent discovery of undefinability, he did describe it in a 1931 letter to John von Neumann . Tarski had obtained almost all results of his 1933 monograph " The Concept of Truth in the Languages of the Deductive Sciences " between 1929 and 1931, and spoke about them to Polish audiences. However, as he emphasized in the paper, the undefinability theorem was the only result he did not obtain earlier. According to the footnote to the undefinability theorem (Twierdzenie I) of the 1933 monograph, the theorem and the sketch of the proof were added to the monograph only after the manuscript had been sent to the printer in 1931. Tarski reports there that, when he presented the content of his monograph to the Warsaw Academy of Science on March 21, 1931, he expressed at this place only some conjectures, based partly on his own investigations and partly on Gödel's short report on the incompleteness theorems " Einige metamathematische Resultate über Entscheidungsdefinitheit und Widerspruchsfreiheit " [Some metamathematical results on the definiteness of decision and consistency], Austrian Academy of Sciences , Vienna, 1930. We will first state a simplified version of Tarski's theorem, then state and prove in the next section the theorem Tarski proved in 1933.
Let L {\displaystyle L} be the language of first-order arithmetic . This is the theory of the natural numbers , including their addition and multiplication, axiomatized by the first-order Peano axioms . This is a " first-order " theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials or the Fibonacci sequence . Let N {\displaystyle \mathbb {N} } be the standard structure for L , {\displaystyle L,} i.e. N {\displaystyle \mathbb {N} } consists of the ordinary set of natural numbers and their addition and multiplication. Each sentence in L {\displaystyle L} can be interpreted in N {\displaystyle \mathbb {N} } and then becomes either true or false. Thus ( L , N ) {\displaystyle (L,\mathbb {N} )} is the "interpreted first-order language of arithmetic". Each formula φ {\displaystyle \varphi } in L {\displaystyle L} has a Gödel number g ( φ ) . {\displaystyle g(\varphi ).} This is a natural number that "encodes" φ . {\displaystyle \varphi .} In that way, the language L {\displaystyle L} can talk about formulas in L , {\displaystyle L,} not just about numbers. Let T {\displaystyle T} denote the set of L {\displaystyle L} -sentences true in N {\displaystyle \mathbb {N} } , and T ∗ {\displaystyle T^{*}} the set of Gödel numbers of the sentences in T . {\displaystyle T.} The following theorem answers the question: Can T ∗ {\displaystyle T^{*}} be defined by a formula of first-order arithmetic? Tarski's undefinability theorem : There is no L {\displaystyle L} -formula T r u e ( n ) {\displaystyle \mathrm {True} (n)} that defines T ∗ . {\displaystyle T^{*}.} That is, there is no L {\displaystyle L} -formula T r u e ( n ) {\displaystyle \mathrm {True} (n)} such that for every L {\displaystyle L} -sentence A , {\displaystyle A,} T r u e ( g ( A ) ) ⟺ A {\displaystyle \mathrm {True} (g(A))\iff A} holds in N {\displaystyle \mathbb {N} } . Informally, the theorem says that the concept of truth of first-order arithmetic statements cannot be defined by a formula in first-order arithmetic. This implies a major limitation on the scope of "self-representation". By working in a stronger system (e.g., by adding a sort of subsets of N {\displaystyle \mathbb {N} } as in second-order arithmetic ), it is possible to define a formula T r u e ( n ) {\displaystyle \mathrm {True} (n)} which holds on exactly the set T ∗ . {\displaystyle T^{*}.} However, this formula does not define truth for the stronger system itself: it only defines a truth predicate for formulas in the original language L {\displaystyle L} (e.g., because T ∗ {\displaystyle T^{*}} doesn't contain codes for sentences quantifying over subsets of N {\displaystyle \mathbb {N} } ). To define truth in this stronger system would require ascending to an even stronger system, and so on. To prove the theorem, we proceed by contradiction and assume that an L {\displaystyle L} -formula T r u e ( n ) {\displaystyle \mathrm {True} (n)} exists which is true for the natural number n {\displaystyle n} in N {\displaystyle \mathbb {N} } if and only if n {\displaystyle n} is the Gödel number of a sentence in L {\displaystyle L} that is true in N {\displaystyle \mathbb {N} } .
We could then use T r u e ( n ) {\displaystyle \mathrm {True} (n)} to define a new L {\displaystyle L} -formula S ( m ) {\displaystyle S(m)} which is true for the natural number m {\displaystyle m} if and only if m {\displaystyle m} is the Gödel number of a formula φ ( x ) {\displaystyle \varphi (x)} (with a free variable x {\displaystyle x} ) such that φ ( m ) {\displaystyle \varphi (m)} is false when interpreted in N {\displaystyle \mathbb {N} } (i.e. the formula φ ( x ) , {\displaystyle \varphi (x),} when applied to its own Gödel number, yields a false statement). If we now consider the Gödel number g {\displaystyle g} of the formula S ( m ) {\displaystyle S(m)} , and ask whether the sentence S ( g ) {\displaystyle S(g)} is true in N {\displaystyle \mathbb {N} } , we obtain a contradiction. (This is known as a diagonal argument .) The theorem is a corollary of Post's theorem about the arithmetical hierarchy , proved some years after Tarski (1933). A semantic proof of Tarski's theorem from Post's theorem is obtained by reductio ad absurdum as follows. Assuming T ∗ {\displaystyle T^{*}} is arithmetically definable, there is a natural number n {\displaystyle n} such that T ∗ {\displaystyle T^{*}} is definable by a formula at level Σ n 0 {\displaystyle \Sigma _{n}^{0}} of the arithmetical hierarchy . However, T ∗ {\displaystyle T^{*}} is Σ k 0 {\displaystyle \Sigma _{k}^{0}} -hard for all k . {\displaystyle k.} Thus the arithmetical hierarchy collapses at level n {\displaystyle n} , contradicting Post's theorem. Tarski proved a stronger theorem than the one stated above, using an entirely syntactical method. The resulting theorem applies to any formal language with negation , and with sufficient capability for self-reference that the diagonal lemma holds. First-order arithmetic satisfies these preconditions, but the theorem applies to much more general formal systems, such as ZFC . Tarski's undefinability theorem (general form) : Let ( L , N ) {\displaystyle (L,{\mathcal {N}})} be any interpreted formal language which includes negation and has a Gödel numbering g ( φ ) {\displaystyle g(\varphi )} satisfying the diagonal lemma, i.e. for every L {\displaystyle L} -formula B ( x ) {\displaystyle B(x)} (with one free variable x {\displaystyle x} ) there is a sentence A {\displaystyle A} such that A ⟺ B ( g ( A ) ) {\displaystyle A\iff B(g(A))} holds in N {\displaystyle {\mathcal {N}}} . Then there is no L {\displaystyle L} -formula T r u e ( n ) {\displaystyle \mathrm {True} (n)} with the following property: for every L {\displaystyle L} -sentence A , {\displaystyle A,} T r u e ( g ( A ) ) ⟺ A {\displaystyle \mathrm {True} (g(A))\iff A} is true in N {\displaystyle {\mathcal {N}}} . The proof of Tarski's undefinability theorem in this form is again by reductio ad absurdum . Suppose that an L {\displaystyle L} -formula T r u e ( n ) {\displaystyle \mathrm {True} (n)} as above existed, i.e., if A {\displaystyle A} is a sentence of arithmetic, then T r u e ( g ( A ) ) {\displaystyle \mathrm {True} (g(A))} holds in N {\displaystyle {\mathcal {N}}} if and only if A {\displaystyle A} holds in N {\displaystyle {\mathcal {N}}} . Hence for all A {\displaystyle A} , the formula T r u e ( g ( A ) ) ⟺ A {\displaystyle \mathrm {True} (g(A))\iff A} holds in N {\displaystyle {\mathcal {N}}} . 
But the diagonal lemma yields a counterexample to this equivalence, by giving a "liar" formula S {\displaystyle S} such that S ⟺ ¬ T r u e ( g ( S ) ) {\displaystyle S\iff \lnot \mathrm {True} (g(S))} holds in N {\displaystyle {\mathcal {N}}} . This is a contradiction. QED. The formal machinery of the proof given above is wholly elementary except for the diagonalization which the diagonal lemma requires. The proof of the diagonal lemma is likewise surprisingly simple; for example, it does not invoke recursive functions in any way. The proof does assume that every L {\displaystyle L} -formula has a Gödel number , but the specifics of a coding method are not required. Hence Tarski's theorem is much easier to motivate and prove than the more celebrated theorems of Gödel about the metamathematical properties of first-order arithmetic. Smullyan (1991, 2001) has argued forcefully that Tarski's undefinability theorem deserves much of the attention garnered by Gödel's incompleteness theorems . That the latter theorems have much to say about all of mathematics and, more controversially, about a range of philosophical issues (e.g., Lucas 1961) is less than evident. Tarski's theorem, on the other hand, is not directly about mathematics but about the inherent limitations of any formal language sufficiently expressive to be of real interest. Such languages are necessarily capable of enough self-reference for the diagonal lemma to apply to them. The broader philosophical import of Tarski's theorem is more strikingly evident. An interpreted language is strongly-semantically-self-representational exactly when the language contains predicates and function symbols defining all the semantic concepts specific to the language. Hence the required functions include the "semantic valuation function" mapping a formula A {\displaystyle A} to its truth value | | A | | , {\displaystyle ||A||,} and the "semantic denotation function" mapping a term t {\displaystyle t} to the object it denotes. Tarski's theorem then generalizes as follows: No sufficiently powerful language is strongly-semantically-self-representational . The undefinability theorem does not prevent truth in one theory from being defined in a stronger theory. For example, the set of (codes for) formulas of first-order Peano arithmetic that are true in N {\displaystyle {\mathcal {N}}} is definable by a formula in second order arithmetic . Similarly, the set of true formulas of the standard model of second order arithmetic (or n {\displaystyle n} -th order arithmetic for any n {\displaystyle n} ) can be defined by a formula in first-order ZFC .
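The self-application at the heart of the diagonal lemma has a familiar computational analogue: a program that reproduces its own source by applying a template to a quoted copy of itself. The following minimal Python illustration is an informal analogy only, not part of Tarski's proof.

# The string s is a template; s % s applies the template to its own
# quotation, much as the diagonal lemma applies a formula to its own
# Gödel number. Running these two lines prints exactly these two lines.
s = 's = %r\nprint(s %% s)'
print(s % s)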
https://en.wikipedia.org/wiki/Tarski's_undefinability_theorem
The Alfred Tarski Lectures are an annual distinction in mathematical logic and a series of lectures held at the University of California, Berkeley . Established in tribute to Alfred Tarski on the fifth anniversary of his death, the award has been given every year since 1989. [ 1 ] [ 2 ] Following a two-year hiatus after the 2020 lecture was not given due to the COVID-19 pandemic , the lectures resumed in 2023. [ 3 ] The list of past Tarski lecturers is maintained by UC Berkeley. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Tarski_Lectures
Tartaric acid is a white, crystalline organic acid that occurs naturally in many fruits, most notably in grapes but also in tamarinds , bananas , avocados , and citrus . [ 1 ] Its salt , potassium bitartrate , commonly known as cream of tartar, develops naturally in the process of fermentation . Potassium bitartrate is commonly mixed with sodium bicarbonate and is sold as baking powder used as a leavening agent in food preparation. The acid itself is added to foods as an antioxidant (E334) and to impart its distinctive sour taste. Naturally occurring tartaric acid is a useful raw material in organic synthesis . Tartaric acid, an alpha-hydroxy- carboxylic acid , is diprotic and aldaric in acid characteristics and is a dihydroxyl derivative of succinic acid . Tartaric acid has been known to winemakers for centuries. However, the chemical process for extraction was developed in 1769 by the Swedish chemist Carl Wilhelm Scheele . [ 7 ] Tartaric acid played an important role in the discovery of chemical chirality . This property of tartaric acid was first observed in 1832 by Jean Baptiste Biot , who noted its ability to rotate polarized light . [ 8 ] [ 9 ] Louis Pasteur continued this research in 1847 by investigating the shapes of sodium ammonium tartrate crystals, which he found to be chiral. By manually sorting the differently shaped crystals, Pasteur was the first to produce a pure sample of levotartaric acid. [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ] The naturally occurring form of the acid is dextrotartaric acid or L -(+)-tartaric acid (obsolete name d -tartaric acid). Because it is available naturally, it is cheaper than its enantiomer and the meso isomer . The dextro and levo prefixes are archaic terms. [ 15 ] Modern textbooks refer to the natural form as (2 R ,3 R )-tartaric acid ( L -(+)-tartaric acid) , and its enantiomer as (2 S ,3 S )-tartaric acid ( D -(−)-tartaric acid) . The meso diastereomer is referred to as (2 R ,3 S )-tartaric acid or (2 S ,3 R )-tartaric acid. Tartaric acid in Fehling's solution binds to copper(II) ions, preventing the formation of insoluble hydroxide salts. The L -(+)-tartaric acid isomer of tartaric acid is industrially produced in the largest amounts. It is obtained from lees , a solid byproduct of fermentations. These byproducts mostly consist of potassium bitartrate ( KHC 4 H 4 O 6 ). This potassium salt is converted to calcium tartrate ( CaC 4 H 4 O 6 ) upon treatment with calcium hydroxide ( Ca(OH) 2 ): KHC 4 H 4 O 6 + Ca(OH) 2 → CaC 4 H 4 O 6 + KOH + H 2 O. [ 23 ] In practice, higher yields of calcium tartrate are obtained with the addition of calcium sulfate . Calcium tartrate is then converted to tartaric acid by treating the salt with aqueous sulfuric acid: CaC 4 H 4 O 6 + H 2 SO 4 → H 2 C 4 H 4 O 6 (tartaric acid) + CaSO 4 . Racemic tartaric acid can be prepared in a multistep reaction from maleic acid . In the first step, the maleic acid is epoxidized by hydrogen peroxide using potassium tungstate [ de ] as a catalyst. [ 23 ] In the next step, the epoxide is hydrolyzed. A mixture of racemic acid and meso -tartaric acid is formed when dextrotartaric acid is heated in water at 165 °C for about 2 days. meso -Tartaric acid can also be prepared from dibromosuccinic acid using silver hydroxide: HOOC−CHBr−CHBr−COOH + 2 AgOH → HOOC−CH(OH)−CH(OH)−COOH + 2 AgBr. [ 24 ] meso -Tartaric acid can be separated from residual racemic acid by crystallization, the racemate being less soluble. L -(+)-Tartaric acid can participate in several reactions. For example, dihydroxymaleic acid is produced upon treatment of L -(+)-tartaric acid with hydrogen peroxide in the presence of a ferrous salt.
Dihydroxymaleic acid can then be oxidized to tartronic acid with nitric acid. [ 25 ] Important derivatives of tartaric acid include its salts: cream of tartar (potassium bitartrate), Rochelle salt (potassium sodium tartrate) and tartar emetic (antimony potassium tartrate). Tartaric acid is a muscle toxin , which works by inhibiting the production of malic acid , and in high doses causes paralysis and death. [ 29 ] The median lethal dose (LD 50 ) is about 7.5 grams/kg for a human, 5.3 grams/kg for rabbits, and 4.4 grams/kg for mice. [ 30 ] Given this figure, it would take over 500 g (18 oz) to kill a person weighing 70 kg (150 lb) with 50% probability, so it may be safely included in many foods, especially sour-tasting sweets . As a food additive , tartaric acid is used as an antioxidant with E number E334 ; tartrates are other additives serving as antioxidants or emulsifiers . When cream of tartar is added to water, a suspension results which serves to clean copper coins very well, as the tartrate solution can dissolve the layer of copper(II) oxide present on the surface of the coin. The resulting copper(II)-tartrate complex is easily soluble in water. Tartaric acid may be most immediately recognizable to wine drinkers as the source of "wine diamonds", the small potassium bitartrate crystals that sometimes form spontaneously on the cork or bottom of the bottle. These "tartrates" are harmless, despite sometimes being mistaken for broken glass, and are prevented in many wines through cold stabilization (which is not always preferred since it can change the wine's profile). The tartrates remaining on the inside of aging barrels were at one time a major industrial source of potassium bitartrate. Tartaric acid plays an important role chemically, lowering the pH of fermenting "must" to a level where many undesirable spoilage bacteria cannot live, and acting as a preservative after fermentation . In the mouth, tartaric acid provides some of the tartness in the wine, although citric and malic acids also play a role. Grapes and tamarinds have the highest levels of tartaric acid concentration. Other fruits with tartaric acid are bananas , avocados , prickly pear fruit, apples , cherries , papayas , peaches , pears , pineapples , strawberries , mangoes and citrus fruits . [ 1 ] [ 31 ] Trace amounts of tartaric acid have been found in cranberries and other berries . [ 32 ] Tartaric acid is also present in the leaves and pods of Pelargonium plants and beans . Tartaric acid and its derivatives have a plethora of uses in the field of pharmaceuticals. For example, it has been used in the production of effervescent salts, in combination with citric acid, to improve the taste of oral medications. [ 25 ] The potassium antimonyl derivative of the acid known as tartar emetic is included, in small doses, in cough syrup as an expectorant . Tartaric acid also has several applications for industrial use. The acid has been observed to chelate metal ions such as calcium and magnesium. Therefore, the acid has served in the farming and metal industries as a chelating agent for complexing micronutrients in soil fertilizer and for cleaning metal surfaces consisting of aluminium, copper, iron, and alloys of these metals, respectively. [ 23 ] While tartaric acid is well-tolerated by humans and lab animals, an April 2021 letter to the editor of JAVMA hypothesized that the tartaric acid in grapes could be the cause of grape and raisin toxicity in dogs . [ 33 ] [ 34 ] Other studies have observed tartaric acid toxicity in kidney cells of dogs, but not in human kidney cells. [ 35 ]
[ 35 ] In dogs, the tartaric acid of tamarind causes acute kidney injury , which can often be fatal. [ 36 ] A review identified a relationship between grape ingestion and illness, though the specific type or quantity of grapes that cause toxicity remains unclear. Grape ingestion commonly leads to gastrointestinal and/or renal issues, with treatment depending on the symptoms; outcomes can vary. [ 37 ]
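The dose arithmetic quoted earlier (LD 50 of 7.5 g/kg implying "over 500 g" for a 70 kg person) is easy to reproduce. A minimal sketch (Python), using the LD 50 figures cited in this article; the function name is illustrative, not from any toxicology library:

```python
def dose_for_ld50(ld50_g_per_kg: float, body_mass_kg: float) -> float:
    """Total dose (in grams) at which lethality has ~50% probability."""
    return ld50_g_per_kg * body_mass_kg

# LD50 values quoted above, in grams per kilogram of body mass.
LD50 = {"human": 7.5, "rabbit": 5.3, "mouse": 4.4}

# For a 70 kg person: 7.5 g/kg * 70 kg = 525 g, i.e. "over 500 g".
print(dose_for_ld50(LD50["human"], 70))  # 525.0
```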
https://en.wikipedia.org/wiki/Tartaric_acid
Tarumitra is a nationwide students' organization that promotes ecological sensitivity in India . It has been campaigning to sensitize various sections of society on ecological issues. It was started in 1988 by students in Patna , India. [ 1 ] The U.N. has conferred Special Consultative Status on Tarumitra since 2005. It has over 200,000 members in over 1000 high schools and colleges. Tarumitra has also had several full-time volunteers from abroad. [ 2 ] Tarumitra has also set up a bio-reserve in Patna on a plot of land given to them by the Patna Jesuits. It has a rare collection of over 400 vanishing trees and plants of North India . The Centre has a large genetic nursery, a tissue culture lab and facilities to accommodate 50–100 students for nature-related camps. [ 3 ] [ 4 ] Tarumitra, meaning "Friends of Trees" in Hindi and Sanskrit, is a student movement to protect and promote a healthy environment on Earth. [ 5 ] It was through the efforts of Jesuit Fr. Robert Athickal of St. Xavier's School, Patna and students from a few schools, under the leadership of Anindo Banerjee, a class IX student from Loyola High School, Patna , that the movement came into existence in 1988. In April 1989, four high school students from Loyola High School, Anindo Banerjee, Vijay Mathur, Sanjay Pandey and Jayant Chatterjee, set out on a cycle rally across North India from Patna to New Delhi to promote awareness about the environment and the movement. They met with the then vice president of India, Shankar Dayal Sharma , to present their findings. On the way back, at Agra , Jayant fell seriously ill and, despite all efforts, could not be saved. His demise shocked the movement; however, his sacrifice for the environment moved his companions to work for and promote the movement with great zeal. Today, Jayant's sacrifice is one of the cornerstones of Tarumitra's founding history, remembered now as it was 30 years ago. [ 6 ] Invigorated by the spirit of the late Jayant Chatterjee, Tarumitra grew with assistance from Sr. Gita SND, Principal of Hartmann High School, Fr. George Manimala S.J., the then Principal of St. Xavier's High School, Patna , and Bro. Geo Pulickal, his assistant, who went out of their way to put the organisation on a firm footing. Kumar Hemant Sinha, a student of the 1990 batch of St. Xavier's High School, Patna , was chosen as its first President. Through their efforts, the present headquarters of the movement, Tarumitra Ashram, was inaugurated by the Collector of Patna, Arvind Prasad, on 20 February 1991. [ 6 ] In 1994, St Xavier's School gave Tarumitra a 10-acre plot for a plantation at Digha, Patna . [ 7 ] The strength of Tarumitra also increased rapidly. The girls of the local Hartmann High School, under the leadership of Sr. Gita SND and Sr. Roshni SND, gave the initial fillip needed by any new organisation. Unit after unit of Tarumitra sprang up in various schools in and around Patna . Many college students, social activists and journalists have also joined Tarumitra in its crusade against the destruction of the environment of India. [ 8 ] The activists of Tarumitra have taken out massive rallies, organized protest demonstrations, [ 9 ] resisted the felling of trees and forests, [ 10 ] built roadside gardens, cleaned up garbage dumps, planted rare varieties of trees, and taken up house-to-house awareness schemes, to name a few. Tarumitra has developed several garbage dumps into beautiful roadside gardens called 'Oxygen Belts' . Each garden is maintained by a school or a plant nursery.
Tarumitra maintained a total of 38 Oxygen Belts at one point in time. [ 7 ] Working with the Swiss physicist Wolfgang Scheffler, the Tarumitra activist M.M. Mathew SJ has helped to set up a plant to fabricate solar cookers, alongside traditional solar panels, to harness solar energy. The activists have taken out processions calling for a ban on the use of plastics. [ 11 ] Tarumitra has expanded from the city of Patna to remote areas of the country. It has joined hands with similar organizations to support the cause. One of the significant steps has been to set up bio-reserves like the one in Patna in other parts of the country. Similar initiatives are now coming up in Gujarat, Meghalaya, Tamil Nadu, Karnataka and Kerala. Tarumitra also takes part in international summits [ 12 ] on the environment. Tarumitra received Special Consultative Status from the Economic and Social Council of the United Nations (ECOSOC) in 2005. The organization has supported the participation of hundreds of students in the activities of the United Nations. Many have participated in U.N. meetings and conferences across the globe.
https://en.wikipedia.org/wiki/Tarumitra
Tashiro's indicator is a mixed pH indicator (transition range pH 4.4–6.2) composed of a solution of methylene blue (0.1%) and methyl red (0.03%) in ethanol [ 1 ] [ 2 ] [ 3 ] or in methanol . [ 4 ] It can be used for the titration of ammonia in Kjeldahl analysis . The methylene blue serves to change the red–yellow colour transition of methyl red into a more distinct violet–green transition.
https://en.wikipedia.org/wiki/Tashiro's_indicator
In computing , a task is a unit of execution or a unit of work. The term is ambiguous; precise alternative terms include process , light-weight process , thread (for execution), step , request , or query (for work). In the adjacent diagram, there are queues of incoming work to do and outgoing completed work, and a thread pool of threads to perform this work. Either the work units themselves or the threads that perform the work can be referred to as "tasks", and these can be referred to respectively as requests/responses/threads, incoming tasks/completed tasks/threads (as illustrated), or requests/responses/tasks. In the sense of "unit of execution", in some operating systems , a task is synonymous with a process [ citation needed ] , and in others with a thread [ citation needed ] . In non-interactive execution ( batch processing ), a task is a unit of execution within a job , [ 1 ] [ 2 ] with the task itself typically a process. The term " multitasking " primarily refers to the processing sense – multiple tasks executing at the same time – but has nuances of the work sense of multiple tasks being performed at the same time. In the sense of "unit of work", in a job (meaning "one-off piece of work") a task can correspond to a single step (the step itself, not the execution thereof), while in batch processing individual tasks can correspond to a single step of processing a single item in a batch, or to a single step of processing all items in the batch. In online systems, tasks most commonly correspond to a single request (in request–response architectures) or a query (in information retrieval ), either a single stage of handling, or the whole system-wide handling. In the Java programming language, these two concepts (unit of work and unit of execution) are conflated when working directly with threads, but clearly distinguished in the Executors framework: When you work directly with threads, a Thread serves as both a unit of work and the mechanism for executing it. In the executor framework, the unit of work and the execution mechanism are separate. The key abstraction is the unit of work, which is called a task . [ 3 ] IBM's use of the term has been influential, though underlining the ambiguity of the term, in IBM terminology, "task" has dozens of specific meanings, including: [ 4 ] In z/OS specifically, it is defined precisely as: [ 5 ] The term task in OS/360 through z/OS is roughly equivalent to light-weight process; the tasks in a job step share an address space. However, in MVS/ESA through z/OS, a task or Service Request Block (SRB) may have access to other address spaces via its access list. The term task is used in the Linux kernel (at least since v2.6.13, [ 6 ] up to and including v4.8 [ 7 ] ) to refer to a unit of execution, which may share various system resources with other tasks on the system. Depending on the level of sharing, the task may be regarded as a conventional thread or process . Tasks are brought into existence using the clone() system call, [ 8 ] where a user can specify the desired level of resource sharing. The term task for a part of a job dates to multiprogramming in the early 1960s, as in this example from 1961: The serial model has the ability to process tasks of one job in an independent manner similar to the functioning of the IBM 709 . [ 9 ] The term was popularized with the introduction of OS/360 (announced 1964), which featured Multiprogramming with a Fixed number of Tasks (MFT) and Multiprogramming with a Variable number of Tasks (MVT). 
In this case tasks were identified with light-weight processes: a job consisted of a number of tasks, and, later, tasks could have sub-tasks (in modern terminology, child processes ). Today the term "task" is used very ambiguously. For example, the Windows Task Manager manages (running) processes , while Windows Task Scheduler schedules programs to execute in the future, which is traditionally the role of a job scheduler , and uses the .job extension. By contrast, the term " task queue " is commonly used in the sense of "units of work".
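The separation that the Java quotation above draws between the unit of work and the mechanism executing it has direct analogues in other languages. A minimal sketch (Python) using the standard concurrent.futures module, in which the submitted callables are the tasks (units of work) and the pool's threads are the units of execution; the request/response naming is illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def handle_request(request_id: int) -> str:
    """A task in the 'unit of work' sense: one request to process."""
    return f"response to request {request_id}"

# The executor owns the threads (units of execution); the submitted
# callables are tasks (units of work), queued until a thread is free.
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(handle_request, i) for i in range(10)]
    results = [f.result() for f in futures]

print(results[0])  # response to request 0
```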
https://en.wikipedia.org/wiki/Task_(computing)
The IEEE Task Force on Process Mining ( TFPM ) is a non-commercial association for process mining . The IEEE (Institute of Electrical and Electronics Engineers) Task Force on Process Mining was established in October 2009 as part of the IEEE Computational Intelligence Society at the Eindhoven University of Technology . [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] The task force is supported by over 80 organizations and has around 750 members. [ 6 ] The main goal of the task force is to promote the research, development, education, and understanding of process mining . [ 7 ] In 2012, the IEEE World Congress on Computational Intelligence/ IEEE Congress on Evolutionary Computation held a session on process mining. [ citation needed ] Process mining is a field of research that combines computational intelligence and data mining with process modeling and analysis. The Task Force on Process Mining has a Steering Committee [ 8 ] and an Advisory Board . [ 9 ] The Steering Committee, chaired from its inception in 2009 by Wil van der Aalst, defined 15 action lines. These include the organization of the annual International Conference on Process Mining (ICPM) series, [ 10 ] standardization efforts leading to the IEEE XES standard for storing and exchanging event data, [ 11 ] [ 12 ] and the Process Mining Manifesto, [ 13 ] [ 14 ] which was translated into 16 languages. The Task Force on Process Mining also publishes a newsletter , provides data sets, organizes workshops and competitions, and connects researchers and practitioners. In 2016, the IEEE Standards Association published the IEEE Standard for Extensible Event Stream (XES), a file format widely accepted by the process mining community. [ 15 ] As of 2023, Boudewijn van Dongen serves as chair of the Steering Committee. Wil van der Aalst and Moe Wynn both serve as vice-chairs of the Steering Committee. [ 16 ]
https://en.wikipedia.org/wiki/Task_Force_on_Process_Mining
Task allocation and partitioning is the way that tasks are chosen, assigned, subdivided, and coordinated within a colony of social insects. Task allocation and partitioning gives rise to the division of labor often observed in social insect colonies, whereby individuals specialize on different tasks within the colony (e.g., "foragers", "nurses"). Communication is closely related to the ability to allocate tasks among individuals within a group. This entry focuses exclusively on social insects . For information on human task allocation and partitioning, see division of labour , task analysis , and workflow . Social living provides a multitude of advantages to its practitioners, including predation risk reduction, environmental buffering, food procurement, and possible mating advantages. The most advanced form of sociality is eusociality , characterized by overlapping generations, cooperative care of the young, and reproductive division of labor, which includes sterility or near-sterility of the overwhelming majority of colony members. With few exceptions, all the practitioners of eusociality are insects of the orders Hymenoptera ( ants , bees , and wasps ), Isoptera ( termites ), Thysanoptera ( thrips ), and Hemiptera ( aphids ). [ 5 ] [ 6 ] Social insects have been extraordinarily successful ecologically and evolutionarily. At its most pronounced, this success has produced colonies that 1) persist for many times the lifespan of most individuals of the colony, and 2) number thousands or even millions of individuals. Social insects can exhibit division of labor with respect to non-reproductive tasks, in addition to the aforementioned reproductive one. In some cases this takes the form of markedly different, alternative morphological development ( polymorphism ), as in the case of soldier castes in ants, termites, thrips, and aphids, while in other cases it is age-based (temporal polyethism ), as with honey bee foragers, who are the oldest members of the colony (with the exception of the queen). Evolutionary biologists are still debating the fitness advantage gained by social insects from their advanced division of labor and task allocation, but hypotheses include: increased resilience against a fluctuating environment, reduced energy costs of continuously switching tasks, increased longevity of the colony as a whole, and reduced rates of pathogen transmission. [ 7 ] [ 8 ] Division of labor, large colony sizes, temporally changing colony needs, and the value of adaptability and efficiency under Darwinian competition all form a theoretical basis favoring the existence of evolved communication in social insects. [ 9 ] [ 10 ] [ 11 ] Beyond the rationale, there is well-documented empirical evidence of communication related to tasks; examples include the waggle dance of honey bee foragers, trail marking by ant foragers such as the red harvester ants , and the propagation via pheromones of an alarm state in Africanized honey bees . One of the most well known mechanisms of task allocation is worker polymorphism, where workers within a colony have morphological differences. This difference in size is determined by the amount of food workers are fed as larvae, and is set once workers emerge from their pupae. Workers may vary just in size (monomorphism) or in size and bodily proportions (allometry). An excellent example of monomorphism is found in bumblebees ( Bombus spp.). Bumblebee workers display a large amount of body size variation which is normally distributed.
The largest workers may be ten times the mass of the smallest workers. Worker size is correlated with several tasks: larger workers tend to forage, while smaller workers tend to perform brood care and nest thermoregulation. Size also affects task efficiency. Larger workers are better at learning, have better vision, carry more weight, and fly at a greater range of temperatures. However, smaller workers are more resistant to starvation. [ 12 ] In other eusocial insects as well, worker size can determine what polymorphic role they take on. For instance, larger workers in Myrmecocystus mexicanus (a North American species of honeypot ant) tend to become repletes, or workers so engorged with food that they become immobile and act as living food storage for the rest of the colony. [ 13 ] In many ants and termites, on the other hand, workers vary in both size and bodily proportions, which have a bimodal distribution. This is present in approximately one in six ant genera. In most of these there are two developmentally distinct pathways, or castes, into which workers can develop. Typically members of the smaller caste are called minors and members of the larger caste are called majors or soldiers. There is often variation in size within each caste. The term soldiers may be apt, as in Cephalotes , but in many species members of the larger caste act primarily as foragers or food processors. In a few ant species, such as certain Pheidole species, there is a third caste, called supersoldiers. Temporal polyethism is a mechanism of task allocation, and is ubiquitous among eusocial insect colonies. Tasks in a colony are allocated among workers based on their age. Newly emerged workers perform tasks within the nest, such as brood care and nest maintenance, and progress to tasks outside the nest, such as foraging, nest defense, and corpse removal, as they age. In honeybees, the youngest workers exclusively clean cells, which is then followed by tasks related to brood care and nest maintenance from about 2–11 days of age. From 11–20 days, they transition to receiving and storing food from foragers, and at about 20 days workers begin to forage. [ 14 ] Similar temporal polyethism patterns can be seen in primitive species of wasps, such as Ropalidia marginata , as well as the eusocial wasp Vespula germanica . Young workers feed larvae, and then transition to nest building tasks, followed by foraging. [ 15 ] Many species of ants also display this pattern. [ 16 ] This pattern is not rigid, though. Workers of certain ages have strong tendencies to perform certain tasks, but may perform other tasks if there is enough need. For instance, removing young workers from the nest will cause foragers, especially younger foragers, to revert to tasks such as caring for brood. [ 17 ] These changes in task preference are caused by epigenetic changes over the life of the individual. Honeybee workers of different ages show substantial differences in DNA methylation, which causes differences in gene expression. Reverting foragers to nurses by removing younger workers causes changes in DNA methylation similar to those of younger workers. [ 18 ] The adaptive value of temporal polyethism does not lie in maximized efficiency; indeed, older workers are actually more efficient at brood care than younger workers in some ant species. [ 17 ] Rather, it allows workers with the lowest remaining life expectancy to perform the most dangerous tasks.
Older workers tend to perform riskier tasks, such as foraging, which has high risks of predation and parasitism, while younger workers perform less dangerous tasks, such as brood care. If workers experience injuries, which shorten their life expectancies, they will start foraging sooner than healthy workers of the same age. [ 19 ] A dominant theory explaining the self-organized division of labor in social insect societies such as honey bee colonies is the Response-Threshold Model (a minimal simulation sketch is given at the end of this entry). It predicts that individual worker bees have inherent thresholds to stimuli associated with different tasks. Individuals with the lowest thresholds will preferentially perform that task. [ 7 ] Stimuli could include the “search time” that elapses while a foraging bee waits to unload her nectar and pollen to a receiver bee at the hive, the smell of diseased brood cells, or any other combination of environmental inputs that an individual worker bee encounters. [ 20 ] The Response-Threshold Model only provides for effective task allocation in the honey bee colony if thresholds are varied among individual workers. This variation originates from the considerable genetic diversity among worker daughters of a colony due to the queen’s multiple matings. [ 21 ] To explain how colony-level complexity arises from the interactions of several autonomous individuals, a network -based approach has emerged as a promising area of social insect research. Social insect colonies can be viewed as a self-organized network, in which interacting elements (i.e. nodes ) communicate with each other. As decentralized networks, colonies are capable of distributing information rapidly, which facilitates robust responsiveness to their dynamic environments. [ 22 ] The efficiency of information flow is critical for colony-level flexibility because worker behavior is not controlled by a centralized leader but rather is based on local information. Social insect networks are often non-randomly distributed, wherein a few individuals act as ‘hubs,’ having disproportionately more connections to other nestmates than other workers in the colony. [ 22 ] In harvester ants, the distribution of total interactions per ant during recruitment for outside work is right-skewed, meaning that some ants are more highly connected than others. [ 23 ] Computer simulations of this particular interaction network demonstrated that inter-individual variation in connectivity patterns expedites information flow among nestmates. Task allocation within a social insect colony can be modeled using a network-based approach, in which workers are represented by nodes, which are connected by edges that signify inter-node interactions. Workers performing a common task form highly connected clusters, with weaker links across tasks. These weaker, cross-task connections are important for allowing task-switching to occur between clusters. [ 22 ] This approach is potentially problematic because connections between workers are not permanent, and some information is broadcast globally, e.g. through pheromones, and therefore does not rely on interaction networks. One alternative approach to avoid this pitfall is to treat tasks as nodes and workers as fluid connections. To demonstrate how time and space constraints of individual-level interactions affect colony function, social insect network approaches can also incorporate spatiotemporal dynamics. These effects can impose upper bounds on the information flow rate in the network.
For example, the rate of information flow through Temnothorax rugatulus ant colonies is slower than would be predicted if time spent traveling and location within the nest were not considered. [ 24 ] In Formica fusca L. ant colonies, a network analysis of spatial effects on feeding and the regulation of food storage revealed that food is distributed heterogeneously within the colony, wherein heavily loaded workers are located centrally within the nest and those storing less food are located at the periphery. [ 25 ] Studies of inter-nest pheromone trail networks maintained by super-colonies of Argentine ants ( Linepithema humile ) have shown that different colonies establish networks with very similar topologies. [ 26 ] Insights from these analyses revealed that these networks – which are used to guide workers transporting brood, workers and food between nests – are formed through a pruning process, in which individual ants initially create a complex network of trails, which are then refined to eliminate extraneous edges, resulting in a shorter, more efficient inter-nest network. Long-term stability of interaction networks has been demonstrated in Odontomachus hastatus ants, in which initially highly connected ants remain highly connected over an extended time period. [ 27 ] Conversely, Temnothorax rugatulus ant workers are not persistent in their interactive roles, which might suggest that social organization is regulated differently among different eusocial species. [ 24 ] A network is pictorially represented as a graph , but can equivalently be represented as an adjacency list or adjacency matrix . [ 28 ] Traditionally, workers are the nodes of the graph, but Fewell prefers to make the tasks the nodes, with workers as the links. [ 29 ] [ 30 ] O'Donnell has coined the term "worker connectivity" to stand for "communicative interactions that link a colony's workers in a social network and affect task performance". [ 30 ] He has pointed out that connectivity provides three adaptive advantages compared to individual direct perception of needs: [ 30 ] O'Donnell provides a comprehensive survey, with examples, of factors that have a large bearing on worker connectivity. [ 30 ] They include: Anderson, Franks, and McShea have broken down insect tasks (and subtasks) into a hierarchical taxonomy ; their focus is on task partitioning and its complexity implications. They classify tasks as individual, group, team, or partitioned; the classification of a task depends on whether there are multiple vs. individual workers, whether there is division of labor, and whether subtasks are done concurrently or sequentially. Note that in their classification, in order for an action to be considered a task, it must contribute positively to inclusive fitness; if it must be combined with other actions to achieve that goal, it is considered to be a subtask. In their simple model, they award 1, 2, or 3 points to the different tasks and subtasks, depending on this classification. Summing all task and subtask point values down through all levels of nesting allows any task to be given a score that roughly ranks the relative complexity of actions. [ 31 ] See also the review of task partitioning by Ratnieks and Anderson. [ 2 ] All models are simplified abstractions of the real-life situation. There exists a basic tradeoff between model precision and parameter precision.
A fixed amount of collected information will, if split amongst the many parameters of an overly precise model, result in at least some of the parameters being represented by inadequate sample sizes. [ 32 ] Because of the often limited quantities and limited precision of data from which to calculate parameter values in non-human behavior studies, such models should generally be kept simple. Therefore, we generally should not expect models for social insect task allocation or task partitioning to be as elaborate as human workflow ones, for example. With increased data, more elaborate metrics for division of labor within the colony become possible. Gorelick and Bertram survey the applicability of metrics taken from a wide range of other fields. They argue that a single output statistic is desirable, to permit comparisons across different population sizes and different numbers of tasks. But they also argue that the input to the function should be a matrix representation (of time spent by each individual on each task), in order to provide the function with better data. They conclude that "... normalized matrix-input generalizations of Shannon's and Simpson's index ... should be the indices of choice when one wants to simultaneously examine division of labor amongst all individuals in a population". [ 33 ] Note that these indices, used as metrics of biodiversity , now find a place measuring division of labor.
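The Response-Threshold Model described earlier can be illustrated with a simple simulation. A minimal sketch (Python) under assumed, illustrative parameters: each worker has a fixed threshold per task and, at each time step, performs the task whose stimulus most exceeds its threshold; performing a task lowers its stimulus, while unattended stimuli grow. Published formulations are usually probabilistic; this deterministic version only demonstrates the self-organizing logic:

```python
import random

N_WORKERS, N_TASKS, STEPS = 20, 3, 100
GROWTH, WORK = 1.0, 0.5  # stimulus growth per step; reduction per acting worker

# Inherent per-task thresholds vary across workers (cf. genetic diversity).
thresholds = [[random.uniform(0.0, 10.0) for _ in range(N_TASKS)]
              for _ in range(N_WORKERS)]
stimuli = [5.0] * N_TASKS

for _ in range(STEPS):
    counts = [0] * N_TASKS
    for w in range(N_WORKERS):
        # Each worker acts on the task whose stimulus most exceeds its threshold.
        excess = [stimuli[t] - thresholds[w][t] for t in range(N_TASKS)]
        best = max(range(N_TASKS), key=lambda t: excess[t])
        if excess[best] > 0:
            counts[best] += 1
    # Unattended demand grows; work performed reduces it.
    for t in range(N_TASKS):
        stimuli[t] = max(0.0, stimuli[t] + GROWTH - WORK * counts[t])

print("final stimuli:", [round(s, 2) for s in stimuli])
```

Workers with the lowest thresholds for a given task settle into performing it, reproducing the model's prediction that threshold variation alone can generate a division of labor.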
https://en.wikipedia.org/wiki/Task_allocation_and_partitioning_in_social_insects
Task skipping is an approximate computing technique that allows code blocks to be skipped according to a specific boolean condition checked at run-time . This technique is usually applied to the most computationally intensive section of the code. It relies on the fact that a tuple of sequentially computed values will be useful only if the whole tuple meets certain conditions. Knowing that one value of the tuple invalidates, or probably will invalidate, the whole tuple, it is possible to avoid the computation of the rest of the tuple. The sketch that follows illustrates the effect of task skipping applied to a simple computation.
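A minimal sketch of the idea (Python); the per-element computation and the invalidating condition are illustrative stand-ins, since the technique itself is independent of the specific code being skipped:

```python
def compute_element(i: int) -> float:
    """Stand-in for an expensive per-element computation."""
    return i * 0.75

def is_invalid(value: float) -> bool:
    """Assumed run-time boolean condition that invalidates the whole tuple."""
    return value > 3.0

def compute_tuple(n: int):
    """Task skipping: stop as soon as one value dooms the whole tuple."""
    result = []
    for i in range(n):
        value = compute_element(i)
        if is_invalid(value):
            return None  # the remaining elements are never computed
        result.append(value)
    return tuple(result)

print(compute_tuple(8))  # None: element 5 (value 3.75) invalidates the tuple
```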
https://en.wikipedia.org/wiki/Task_skipping
The vegetation in Tasmania's alpine environments is predominately woody and shrub-like. One vegetation type is coniferous shrubbery , characterised by the gymnosperm species Microcachrys tetragona , Pherosphaera hookeriana , Podocarpus lawrencei , and Diselma archeri . [ 1 ] The distribution of these species is related to abiotic factors including edaphic conditions and fire frequency, and, increasingly, climate change threatens their survival. [ 2 ] Conservation and management of coniferous shrubbery are necessary considering that the paleoendemic genera Microcachrys , Pherosphaera and Diselma have persisted in western Tasmanian environments for millions of years. [ 3 ] These coniferous shrub species are restricted to subalpine and alpine heathlands in western Tasmania, with the exception of Podocarpus lawrencei , which also occurs on the mainland. [ 4 ] The alpine environments where these conifers occur have high levels of conifer endemism and provide ecologically suitable habitat for coniferous shrub species. [ 4 ] Coniferous shrub species can be observed in Mount Field National Park in Tasmania's south west along the Tarn Shelf. All species can be observed in rocky environments with shallow soil above 1,000 m (3,300 ft). Both the alpine environment and the harsh maritime climate [ 2 ] impose the pressures and limitations of wind exposure and ice abrasion, favouring the woody, shrub-like habit of coniferous shrubbery. The lack of protective snow cover on Tasmanian mountains means that vegetation must be mechanically resistant to these elements, making such environments ecologically suitable for coniferous shrub species. This contrasts with the alps of mainland Australia or New Zealand, where prolonged snow lie leads to the development of a grassland-herbland vegetation community. [ 2 ] The low productivity of the environment is indicated by the slow growth habit of the conifers, and the effects of fire are detrimental to the species. [ 2 ] As well as this, physiological drought intolerance in conifers could influence the growth of vegetation under a changing climate. [ 5 ] The mosaic pattern of distribution is due to both fire history and the influence of precipitation and temperature. [ 1 ] The exceptionally high species richness in subalpine vegetation is caused by this mosaic of vegetation communities. [ 1 ] Taxa that make up this vegetation type include: [ 6 ] The alpine vegetation of western Tasmania is associated with paleoendemic species, i.e. species that are old and geographically confined. Microcachrys , Pherosphaera and Diselma are paleoendemic coniferous shrub clades of alpine western Tasmania. These clades were once found on other southern hemisphere continents according to the fossil record, and they are now restricted in their distribution. The environment they currently inhabit is unproductive, with infrequent fire, which is evident from the short and open canopy structure of the vegetation. The distribution of paleoendemic species gives insight into the similarities to environments in which ancestral lineages occurred and the current environmental characteristics allowing the species' survival. The persistence of these species is due to natural selection, not dispersal limitation. The conservation of certain ecological characteristics promotes survival. The table below shows clade ages and scores of paleoendemism, which are calculated by dividing the age of the clade by the square root of the area of current occupancy. Scores of >500 m −1 are considered high.
[ 3 ] Tasmania is one of five global hotspots for conifer diversity, with one of the highest rates of endemism in conifer flora in the world. Climate change threatens Tasmanian coniferous shrubbery: the conifers are physiologically intolerant of drought and sensitive to fire, and their potential for dispersal and recolonization is limited by their modes of seed dispersal and slow growth rates. [ citation needed ] In Tasmania, climate change manifests as a rise in mean annual temperature, changed rainfall seasonality, and weather events such as drought and fire. Dry lightning and drier soil conditions in western Tasmania pose a threat to coniferous shrubbery and other alpine vegetation types. Extreme weather events are more likely to cause the extinction of species than gradual rises in temperature or changes in rainfall. [ 5 ] For conservation of the Tasmanian coniferous shrubbery species, monitoring and prediction of climate events and refugia are fundamental. Field surveys and aerial photograph monitoring are in place in order to collect the required information. [ 5 ] In order to reduce the risk of fire, 'fuel stove only areas' have been implemented in the Tasmanian Wilderness World Heritage Area , where the majority of Tasmanian coniferous shrubbery is located. These measures have been introduced in the hope of preventing the loss of conifer populations in both rainforest and alpine communities and to promote their survival into the foreseeable future. [ 4 ]
https://en.wikipedia.org/wiki/Tasmanian_coniferous_shrubbery
TassDB ( TAndem Splice Site DataBase ) is a database of tandem splice sites in eight species. [ 1 ]
https://en.wikipedia.org/wiki/TassDB
In the theory of elliptic curves , Tate's algorithm takes as input an integral model of an elliptic curve E over Q {\displaystyle \mathbb {Q} } , or more generally an algebraic number field , and a prime or prime ideal p . It returns the exponent f p of p in the conductor of E , the type of reduction at p , and the local index c p = [ E ( Q p ) : E 0 ( Q p ) ] , {\displaystyle c_{p}=[E(\mathbb {Q} _{p}):E^{0}(\mathbb {Q} _{p})],} where E 0 ( Q p ) {\displaystyle E^{0}(\mathbb {Q} _{p})} is the group of Q p {\displaystyle \mathbb {Q} _{p}} -points whose reduction mod p is a non-singular point . Also, the algorithm determines whether or not the given integral model is minimal at p , and, if not, returns an integral model with integral coefficients for which the valuation at p of the discriminant is minimal. Tate's algorithm also gives the structure of the singular fibers given by the Kodaira symbol or Néron symbol, for which, see elliptic surfaces : in turn this determines the exponent f p of the conductor of E . Tate's algorithm can be greatly simplified if the characteristic of the residue class field is not 2 or 3; in this case the type and c and f can be read off from the valuations of j and Δ (defined below); a sketch of this simplified case is given below. Tate's algorithm was introduced by John Tate ( 1975 ) as an improvement of the description of the Néron model of an elliptic curve by Néron ( 1964 ). Assume that all the coefficients of the equation of the curve lie in a complete discrete valuation ring R with perfect residue field K and maximal ideal generated by a prime π. The elliptic curve is given by the equation y 2 + a 1 x y + a 3 y = x 3 + a 2 x 2 + a 4 x + a 6 . {\displaystyle y^{2}+a_{1}xy+a_{3}y=x^{3}+a_{2}x^{2}+a_{4}x+a_{6}.} Define: b 2 = a 1 2 + 4 a 2 , b 4 = 2 a 4 + a 1 a 3 , b 6 = a 3 2 + 4 a 6 , b 8 = a 1 2 a 6 + 4 a 2 a 6 − a 1 a 3 a 4 + a 2 a 3 2 − a 4 2 , c 4 = b 2 2 − 24 b 4 , c 6 = − b 2 3 + 36 b 2 b 4 − 216 b 6 , Δ = − b 2 2 b 8 − 8 b 4 3 − 27 b 6 2 + 9 b 2 b 4 b 6 , j = c 4 3 / Δ . {\displaystyle b_{2}=a_{1}^{2}+4a_{2},\ b_{4}=2a_{4}+a_{1}a_{3},\ b_{6}=a_{3}^{2}+4a_{6},\ b_{8}=a_{1}^{2}a_{6}+4a_{2}a_{6}-a_{1}a_{3}a_{4}+a_{2}a_{3}^{2}-a_{4}^{2},\ c_{4}=b_{2}^{2}-24b_{4},\ c_{6}=-b_{2}^{3}+36b_{2}b_{4}-216b_{6},\ \Delta =-b_{2}^{2}b_{8}-8b_{4}^{3}-27b_{6}^{2}+9b_{2}b_{4}b_{6},\ j=c_{4}^{3}/\Delta .} The algorithm is implemented for algebraic number fields in the PARI/GP computer algebra system, available through the function elllocalred.
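The simplification for residue characteristic at least 5 can be made concrete: the reduction type and conductor exponent are read off from the valuations of c 4 and Δ. A sketch (Python) of the standard valuation table, assuming the model is already minimal at p; this illustrates the simplified case only, not the full algorithm, which handles residue characteristics 2 and 3 by successive coordinate changes:

```python
def reduction_type(v_c4: int, v_disc: int):
    """Kodaira symbol and conductor exponent f_p at a prime p >= 5,
    from the p-adic valuations of c4 and the discriminant of a
    Weierstrass model assumed minimal at p."""
    if v_disc == 0:
        return "I0", 0                      # good reduction
    if v_c4 == 0:
        return f"I{v_disc}", 1              # multiplicative: I_n, n = v(disc)
    # Additive reduction: the conductor exponent is 2 whenever p >= 5.
    if v_c4 >= 4 and v_disc >= 12:
        raise ValueError("equation is not minimal at p")
    if v_disc == 6:
        return "I0*", 2
    if v_c4 == 2 and v_disc >= 7:
        return f"I{v_disc - 6}*", 2         # I_n* with n = v(disc) - 6
    return {2: "II", 3: "III", 4: "IV",
            8: "IV*", 9: "III*", 10: "II*"}[v_disc], 2

print(reduction_type(0, 5))  # ('I5', 1)
print(reduction_type(3, 9))  # ('III*', 2)
```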
https://en.wikipedia.org/wiki/Tate's_algorithm
In mathematics , Tate duality or Poitou–Tate duality is a duality theorem for Galois cohomology groups of modules over the Galois group of an algebraic number field or local field , introduced by John Tate ( 1962 ) and Georges Poitou ( 1967 ). For a p -adic local field k {\displaystyle k} , local Tate duality says there is a perfect pairing of the finite groups arising from Galois cohomology: H r ( k , M ) × H 2 − r ( k , M ′ ) → H 2 ( k , G m ) ≅ Q / Z {\displaystyle H^{r}(k,M)\times H^{2-r}(k,M')\to H^{2}(k,\mathbb {G} _{m})\cong \mathbb {Q} /\mathbb {Z} } where M {\displaystyle M} is a finite group scheme, M ′ {\displaystyle M'} its dual Hom ⁡ ( M , G m ) {\displaystyle \operatorname {Hom} (M,G_{m})} , and G m {\displaystyle \mathbb {G} _{m}} is the multiplicative group . For a local field of characteristic p > 0 {\displaystyle p>0} , the statement is similar, except that the pairing takes values in H 2 ( k , μ ) = ⋃ p ∤ n 1 n Z / Z {\displaystyle H^{2}(k,\mu )=\bigcup _{p\nmid n}{\tfrac {1}{n}}\mathbb {Z} /\mathbb {Z} } . [ 1 ] The statement also holds when k {\displaystyle k} is an Archimedean field , though the definition of the cohomology groups looks somewhat different in this case. Given a finite group scheme M {\displaystyle M} over a global field k {\displaystyle k} , global Tate duality relates the cohomology of M {\displaystyle M} with that of M ′ = Hom ⁡ ( M , G m ) {\displaystyle M'=\operatorname {Hom} (M,G_{m})} using the local pairings constructed above. This is done via the localization maps α r , M : H r ( k , M ) → ∏ v ′ H r ( k v , M ) , {\displaystyle \alpha _{r,M}:H^{r}(k,M)\to {\prod _{v}}'H^{r}(k_{v},M),} where v {\displaystyle v} varies over all places of k {\displaystyle k} , and where ∏ ′ {\displaystyle \prod '} denotes a restricted product with respect to the unramified cohomology groups. Summing the local pairings gives a canonical perfect pairing ∏ v ′ H r ( k v , M ) × ∏ v ′ H 2 − r ( k v , M ′ ) → Q / Z . {\displaystyle {\prod _{v}}'H^{r}(k_{v},M)\times {\prod _{v}}'H^{2-r}(k_{v},M')\to \mathbb {Q} /\mathbb {Z} .} One part of Poitou-Tate duality states that, under this pairing, the image of H r ( k , M ) {\displaystyle H^{r}(k,M)} has annihilator equal to the image of H 2 − r ( k , M ′ ) {\displaystyle H^{2-r}(k,M')} for r = 0 , 1 , 2 {\displaystyle r=0,1,2} . The map α r , M {\displaystyle \alpha _{r,M}} has a finite kernel for all r {\displaystyle r} , and Tate also constructs a canonical perfect pairing ker ⁡ ( α 1 , M ) × ker ⁡ ( α 2 , M ′ ) → Q / Z . {\displaystyle \ker(\alpha _{1,M})\times \ker(\alpha _{2,M'})\to \mathbb {Q} /\mathbb {Z} .} These dualities are often presented in the form of a nine-term exact sequence 0 → H 0 ( k , M ) → ∏ v ′ H 0 ( k v , M ) → H 2 ( k , M ′ ) ∗ → H 1 ( k , M ) → ∏ v ′ H 1 ( k v , M ) → H 1 ( k , M ′ ) ∗ → H 2 ( k , M ) → ∏ v ′ H 2 ( k v , M ) → H 0 ( k , M ′ ) ∗ → 0. Here, the asterisk denotes the Pontryagin dual of a given locally compact abelian group. All of these statements were presented by Tate in a more general form depending on a set of places S {\displaystyle S} of k {\displaystyle k} , with the above statements being the form of his theorems for the case where S {\displaystyle S} contains all places of k {\displaystyle k} . For the more general result, see e.g. Neukirch, Schmidt & Wingberg (2000 , Theorem 8.4.4). Among other statements, Poitou–Tate duality establishes a perfect pairing between certain Shafarevich groups . Given a global field k {\displaystyle k} , a set S of primes, and the maximal extension k S {\displaystyle k_{S}} which is unramified outside S , the Shafarevich groups capture, broadly speaking, those elements in the cohomology of Gal ⁡ ( k S / k ) {\displaystyle \operatorname {Gal} (k_{S}/k)} which vanish in the Galois cohomology of the local fields pertaining to the primes in S . [ 2 ] An extension to the case where the ring of S -integers O S {\displaystyle {\mathcal {O}}_{S}} is replaced by a regular scheme of finite type over Spec ⁡ O S {\displaystyle \operatorname {Spec} {\mathcal {O}}_{S}} was shown by Geisser & Schmidt (2018) . Another generalisation is due to Česnavičius, who relaxed the condition on the localising set S by using flat cohomology on smooth proper curves. [ 3 ]
https://en.wikipedia.org/wiki/Tate_duality
In number theory and algebraic geometry , the Tate twist , [ 1 ] [ 2 ] named after John Tate , is an operation on Galois modules . For example, if K is a field , G K is its absolute Galois group , and ρ : G K → Aut Q p ( V ) is a representation of G K on a finite-dimensional vector space V over the field Q p of p -adic numbers , then the Tate twist of V , denoted V (1), is the representation on the tensor product V ⊗ Q p (1), where Q p (1) is the p -adic cyclotomic character (i.e. the Tate module of the group of roots of unity in the separable closure K s of K ). More generally, if m is a positive integer , the m th Tate twist of V , denoted V ( m ), is the tensor product of V with the m -fold tensor product of Q p (1). Denoting by Q p (−1) the dual representation of Q p (1), the − m th Tate twist of V can be defined as the tensor product of V with the m -fold tensor product of Q p (−1): V ( − m ) = V ⊗ Q p ( − 1 ) ⊗ m . {\displaystyle V(-m)=V\otimes \mathbb {Q} _{p}(-1)^{\otimes m}.}
https://en.wikipedia.org/wiki/Tate_twist
In arithmetic geometry , the Tate–Shafarevich group Ш( A / K ) of an abelian variety A (or more generally a group scheme ) defined over a number field K consists of the elements of the Weil–Châtelet group W C ( A / K ) = H 1 ( G K , A ) {\displaystyle \mathrm {WC} (A/K)=H^{1}(G_{K},A)} , where G K = G a l ( K a l g / K ) {\displaystyle G_{K}=\mathrm {Gal} (K^{alg}/K)} is the absolute Galois group of K , that become trivial in all of the completions of K (i.e., the real and complex completions as well as the p -adic fields obtained from K by completing with respect to all its Archimedean and non-Archimedean valuations v ). Thus, in terms of Galois cohomology , Ш( A / K ) can be defined as Ш ( A / K ) = ker ( H 1 ( G K , A ) → ∏ v H 1 ( G K v , A ) ) . This group was introduced by Serge Lang and John Tate [ 1 ] and Igor Shafarevich . [ 2 ] Cassels introduced the notation Ш( A / K ) , where Ш is the Cyrillic letter " Sha ", [ 3 ] for Shafarevich, replacing the older notation TS or TŠ . [ 4 ] Geometrically, the non-trivial elements of the Tate–Shafarevich group can be thought of as the homogeneous spaces of A that have K v - rational points for every place v of K , but no K -rational point. Thus, the group measures the extent to which the Hasse principle fails to hold for rational equations with coefficients in the field K . Carl-Erik Lind gave an example of such a homogeneous space, by showing that the genus 1 curve x 4 − 17 = 2 y 2 has solutions over the reals and over all p -adic fields, but has no rational points. [ 5 ] Ernst S. Selmer gave many more examples, such as 3 x 3 + 4 y 3 + 5 z 3 = 0 . [ 6 ] The special case of the Tate–Shafarevich group for the finite group scheme consisting of points of some given finite order n of an abelian variety is closely related to the Selmer group . The Tate–Shafarevich conjecture states that the Tate–Shafarevich group is finite. Karl Rubin proved this for some elliptic curves of rank at most 1 with complex multiplication . [ 7 ] Victor A. Kolyvagin extended this to modular elliptic curves over the rationals of analytic rank at most 1. [ 8 ] (The modularity theorem later showed that the modularity assumption always holds.) It is known that the Tate–Shafarevich group is a torsion group , [ 9 ] [ 10 ] thus the conjecture is equivalent to stating that the group is finitely generated . The Cassels–Tate pairing is a bilinear pairing Ш( A ) × Ш( Â ) → Q / Z , where A is an abelian variety and Â is its dual. Cassels introduced this for elliptic curves , when A can be identified with Â and the pairing is an alternating form. [ 4 ] The kernel of this form is the subgroup of divisible elements, which is trivial if the Tate–Shafarevich conjecture is true. Tate extended the pairing to general abelian varieties, as a variation of Tate duality . [ 11 ] A choice of polarization on A gives a map from A to Â , which induces a bilinear pairing on Ш( A ) with values in Q / Z , but unlike the case of elliptic curves this need not be alternating or even skew symmetric. For an elliptic curve, Cassels showed that the pairing is alternating, and a consequence is that if the order of Ш is finite then it is a square. For more general abelian varieties it was sometimes incorrectly believed for many years that the order of Ш is a square whenever it is finite; this mistake originated in a paper by Swinnerton-Dyer , [ 12 ] who misquoted one of the results of Tate.
[ 11 ] Poonen and Stoll gave some examples where the order is twice a square, such as the Jacobian of a certain genus 2 curve over the rationals whose Tate–Shafarevich group has order 2, [ 13 ] and Stein gave some examples where the power of an odd prime dividing the order is odd. [ 14 ] If the abelian variety has a principal polarization then the form on Ш is skew symmetric, which implies that the order of Ш is a square or twice a square (if it is finite); if in addition the principal polarization comes from a rational divisor (as is the case for elliptic curves) then the form is alternating and the order of Ш is a square (if it is finite). On the other hand, building on the results just presented, Konstantinou showed that for any squarefree number n there is an abelian variety A defined over Q and an integer m with | Ш | = n ⋅ m 2 . [ 15 ] In particular Ш is finite in Konstantinou's examples, and these examples confirm a conjecture of Stein. Thus, modulo squares, any integer can be the order of Ш .
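Lind's curve mentioned earlier can at least be probed numerically. A minimal sketch (Python): it exhibits a real point on x 4 − 17 = 2 y 2 and fails to find any rational point of small height. This is an illustration only; the actual proof that the Hasse principle fails requires local arguments at every place:

```python
from fractions import Fraction
import math

# Real points exist: for any x with x**4 >= 17, take y = sqrt((x**4 - 17)/2).
x = 3.0
y = math.sqrt((x**4 - 17) / 2)
print(abs(x**4 - 17 - 2 * y**2) < 1e-9)  # True: (3, 5.656...) is a real point

def search_rational_x(max_height: int):
    """Look for rational x = p/q such that (x**4 - 17)/2 is a rational square."""
    for p in range(-max_height, max_height + 1):
        for q in range(1, max_height + 1):
            t = (Fraction(p, q) ** 4 - 17) / 2  # must equal y**2
            if t >= 0:
                num, den = t.numerator, t.denominator
                if math.isqrt(num) ** 2 == num and math.isqrt(den) ** 2 == den:
                    return Fraction(p, q)
    return None

print(search_rational_x(30))  # None: no rational point of small height found
```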
https://en.wikipedia.org/wiki/Tate–Shafarevich_group
A tatum is a feature of music that has been variously defined as: "the smallest time interval between successive notes in a rhythmic phrase", [ 1 ] "the shortest durational value [...] in music that [is] still more than incidentally encountered", [ 2 ] "the smallest cognitively meaningful subdivision of the main beat", [ 3 ] and "the fastest pulse present in a piece of music". [ 4 ] "In Western notation , tatums may correspond typically to sixteenth- or twenty-fourth-notes", [ 3 ] or thirty-second notes . [ 4 ] More technically, a tatum is the "lowest regular pulse train that a listener intuitively infers from the timing of perceived musical events: a time quantum. It is roughly equivalent to the time division that most highly coincides with note onsets". [ 5 ] The tatum allows a musician's deviation from an ensemble's tempo (which may be implied or explicitly played) to be quantified: mathematically, "a deviation function determines the amount of time that an event metrically falling on a particular tatum should be shifted when performed". [ 1 ] The existence of the tatum allowed human perception of music to be more closely modelled by algorithms. [ 6 ] This was important in the development of software for Echo Nest , which underlies several music streaming services. [ 6 ] The term was coined by Jeff Bilmes in an MIT Master's thesis, Timing Is of the Essence , published in 1993, and is named after the influential jazz pianist Art Tatum , "whose tatum was faster than all others". [ 1 ] [ 7 ]
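The deviation function mentioned above is straightforward to sketch. A minimal illustration (Python), assuming onset times in seconds and a uniform tatum grid; the function maps each onset to its nearest tatum index, and the residuals are the performer's deviations from the grid:

```python
def tatum_grid_fit(onsets, tatum):
    """Map onset times to (nearest tatum index, deviation in seconds).

    onsets: note onset times in seconds
    tatum:  assumed tatum duration in seconds (the fastest regular pulse)
    """
    fits = []
    for t in onsets:
        index = round(t / tatum)
        fits.append((index, t - index * tatum))
    return fits

# Sixteenth-note tatum at 120 bpm: (60 / 120) / 4 = 0.125 s per tatum.
onsets = [0.0, 0.13, 0.24, 0.38, 0.51]
for idx, dev in tatum_grid_fit(onsets, 0.125):
    print(f"tatum {idx}: deviation {dev:+.3f} s")
```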
https://en.wikipedia.org/wiki/Tatum_(music)
In probability theory , tau-leaping , or τ-leaping , is an approximate method for the simulation of a stochastic system . [ 1 ] It is based on the Gillespie algorithm , performing all reactions for an interval of length tau before updating the propensity functions. [ 2 ] By updating the rates less often, this sometimes allows for more efficient simulation and thus the consideration of larger systems. Many variants of the basic algorithm have been considered. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] The algorithm is analogous to the Euler method for deterministic systems, but instead of making a fixed change x ( t + τ ) = x ( t ) + τ x ′ ( t ) {\displaystyle x(t+\tau )=x(t)+\tau x'(t)} the change is x ( t + τ ) = x ( t ) + P ( τ x ′ ( t ) ) {\displaystyle x(t+\tau )=x(t)+P(\tau x'(t))} where P ( τ x ′ ( t ) ) {\displaystyle P(\tau x'(t))} is a Poisson distributed random variable with mean τ x ′ ( t ) {\displaystyle \tau x'(t)} . Given a state x ( t ) = { X i ( t ) } {\displaystyle \mathbf {x} (t)=\{X_{i}(t)\}} with events E j {\displaystyle E_{j}} occurring at rate R j ( x ( t ) ) {\displaystyle R_{j}(\mathbf {x} (t))} and with state change vectors v i j {\displaystyle \mathbf {v} _{ij}} (where i {\displaystyle i} indexes the state variables, and j {\displaystyle j} indexes the events), the method is as follows: (1) initialise the state x ( 0 ) {\displaystyle \mathbf {x} (0)} and set t = 0 {\displaystyle t=0} ; (2) calculate the event rates R j ( x ( t ) ) {\displaystyle R_{j}(\mathbf {x} (t))} ; (3) for each event E j {\displaystyle E_{j}} , sample a number of firings K j {\displaystyle K_{j}} from a Poisson distribution with mean R j ( x ( t ) ) τ {\displaystyle R_{j}(\mathbf {x} (t))\tau } ; (4) update the state via x ( t + τ ) = x ( t ) + ∑ j K j v j {\displaystyle \mathbf {x} (t+\tau )=\mathbf {x} (t)+\sum _{j}K_{j}\mathbf {v} _{j}} , set t ← t + τ {\displaystyle t\leftarrow t+\tau } , and return to step (2). A procedure for choosing the step size τ {\displaystyle \tau } is described by Cao et al. [ 4 ] The idea is to bound the relative change in each event rate R j {\displaystyle R_{j}} by a specified tolerance ϵ {\displaystyle \epsilon } (Cao et al. recommend ϵ = 0.03 {\displaystyle \epsilon =0.03} , although it may depend on model specifics). This is achieved by bounding the relative change in each state variable X i {\displaystyle X_{i}} by ϵ / g i {\displaystyle \epsilon /g_{i}} , where g i {\displaystyle g_{i}} depends on the rate that changes the most for a given change in X i {\displaystyle X_{i}} . Typically g i {\displaystyle g_{i}} is equal to the order of the highest-order event involving X i {\displaystyle X_{i}} , but this may be more complex in different situations (especially epidemiological models with non-linear event rates). This algorithm typically requires computing 2 N {\displaystyle 2N} auxiliary values (where N {\displaystyle N} is the number of state variables X i {\displaystyle X_{i}} ), and should only require reusing previously calculated values R j ( x ) {\displaystyle R_{j}(\mathbf {x} )} . An important factor in this is that since X i {\displaystyle X_{i}} is an integer value, there is a minimum value by which it can change, preventing the bound on the relative change in R j {\displaystyle R_{j}} from tending to 0, which would result in τ {\displaystyle \tau } also tending to 0. This computed τ {\displaystyle \tau } is then used in step (3) of the τ {\displaystyle \tau } -leaping algorithm.
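A minimal sketch of the basic fixed-step update (Python/NumPy) for a toy birth–death system; the rates, stoichiometry, and step size are illustrative, and none of the step-size selection or negative-population safeguards of the adaptive variants are included:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy birth-death system for one species X:
#   event 1 (birth): X -> X + 1, rate k1
#   event 2 (death): X -> X - 1, rate k2 * X
k1, k2 = 10.0, 0.1
v = np.array([+1, -1])  # state-change vectors for the two events

def rates(x):
    return np.array([k1, k2 * x])

def tau_leap(x0, tau, n_steps):
    x = x0
    for _ in range(n_steps):
        # Each event fires Poisson(R_j * tau) times during [t, t + tau).
        k = rng.poisson(rates(x) * tau)
        x = max(0, x + int(np.dot(k, v)))  # crude guard against negative counts
    return x

print(tau_leap(x0=0, tau=0.1, n_steps=1000))  # fluctuates near k1/k2 = 100
```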
https://en.wikipedia.org/wiki/Tau-leaping
The number 𝜏 ( / ˈ t aʊ , ˈ t ɔː , ˈ t ɒ / ⓘ ; spelled out as tau ) is a mathematical constant that is the ratio of a circle 's circumference to its radius . It is approximately equal to 6.28 and exactly equal to 2 π . 𝜏 and π are both circle constants relating the circumference of a circle to its linear dimension: the radius in the case of 𝜏 ; the diameter in the case of π . While π is used almost exclusively in mainstream mathematical education and practice, it has been proposed, most notably by Michael Hartl in 2010, that 𝜏 should be used instead. Hartl and other proponents argue that 𝜏 is the more natural circle constant and its use leads to conceptually simpler and more intuitive mathematical notation. [ 1 ] Critics have responded that the benefits of using 𝜏 over π are trivial and that given the ubiquity and historical significance of π a change is unlikely to occur. [ 2 ] The proposal did not initially gain widespread acceptance in the mathematical community, but awareness of 𝜏 has become more widespread, [ 3 ] having been added to several major programming languages and calculators. 𝜏 is commonly defined as the ratio of a circle 's circumference C {\textstyle {C}} to its radius r {\textstyle {r}} : τ = C r {\displaystyle \tau ={\frac {C}{r}}} A circle is defined as a closed curve formed by the set of all points in a plane that are a given distance from a fixed point, where the given distance is called the radius. The distance around the circle is the circumference, and the ratio C r {\textstyle {\frac {C}{r}}} is constant regardless of the circle's size. Thus, 𝜏 denotes the fixed relationship between the circumference of any circle and the fundamental defining property of that circle, the radius. When radians are used as the unit of angular measure there are 𝜏 radians in one full turn of a circle, and the radian angle is aligned with the proportion of a full turn around the circle: τ 8 {\textstyle {\frac {\tau }{8}}} rad is an eighth of a turn; 3 τ 4 {\textstyle {\frac {3\tau }{4}}} rad is three-quarters of a turn. As 𝜏 is exactly equal to 2 π it shares many of the properties of π including being both an irrational and transcendental number. The proposal to use the Greek letter 𝜏 as a circle constant representing 2 π dates to Michael Hartl's 2010 publication, The Tau Manifesto , [ a ] although the symbol had been independently suggested earlier by Joseph Lindenburg ( c. 1990), John Fisher (2004) and Peter Harremoës (2010). [ 5 ] Hartl offered two reasons for the choice of notation. First, τ is the number of radians in one turn , and both τ and turn begin with a / t / sound. Second, τ visually resembles π , whose association with the circle constant is unavoidable. There had been a number of earlier proposals for a new circle constant equal to 2 π , together with varying suggestions for its name and symbol. In 2001, Robert Palais of the University of Utah proposed that π was "wrong" as the fundamental circle constant arguing instead that 2 π was the proper value. [ 6 ] His proposal used a "π with three legs" symbol to denote the constant ( π π = 2 π {\displaystyle \pi \!\;\!\!\!\pi =2\pi } ), and referred to angles as fractions of a "turn" ( 1 4 π π = 1 4 t u r n {\displaystyle {\tfrac {1}{4}}\pi \!\;\!\!\!\pi ={\tfrac {1}{4}}\,\mathrm {turn} } ). Palais stated that the word "turn" served as both the name of the new constant and a reference to the ordinary language meaning of turn . [ 7 ] In 2008, Robert P. 
Crease proposed defining a constant as the ratio of circumference to radius, an idea supported by John Horton Conway . Crease used the Greek letter psi : ψ = 2 π {\displaystyle \psi =2\pi } . [ 8 ] The same year, Thomas Colignatus proposed the uppercase Greek letter theta , Θ, to represent 2 π due to its visual resemblance to a circle. [ 9 ] For a similar reason another proposal suggested the Phoenician and Hebrew letter teth , 𐤈 or ט, (from which the letter theta was derived), due to its connection with wheels and circles in ancient cultures. [ 10 ] [ 11 ] The symbol π {\displaystyle \pi } was not originally defined as the ratio of circumference to diameter, and at times it was used in representations of the 6.28... constant. Early works in circle geometry used the letter π to designate the perimeter (i.e., circumference ) in different fractional representations of circle constants, and in 1697 David Gregory used ⁠ π / ρ ⁠ (pi over rho) to denote the perimeter divided by the radius (6.28...). [ 12 ] [ 13 ] Subsequently π came to be used as a single symbol to represent the ratios in whole. Leonhard Euler initially used the single letter π to denote the constant 6.28... in his 1727 Essay Explaining the Properties of Air . [ 14 ] [ 15 ] Euler would later use the letter π for 3.14... in his 1736 Mechanica [ 16 ] and 1748 Introductio in analysin infinitorum , [ 17 ] though defined as half the circumference of a circle of radius 1 rather than the ratio of circumference to diameter. Elsewhere in Mechanica , Euler instead used the letter π for one-fourth of the circumference of a unit circle, or 1.57... . [ 18 ] [ 19 ] Usage of the letter π , sometimes for 3.14... and other times for 6.28..., became widespread, with the definition varying as late as 1761; [ 20 ] afterward, π was standardized as being equal to 3.14... . [ 21 ] [ 22 ] Proponents argue that while use of 𝜏 in place of 2 π does not change any of the underlying mathematics, it does lead to simpler and more intuitive notation in many areas. Michael Hartl's Tau Manifesto [ b ] gives many examples of formulas that are asserted to be clearer where τ is used instead of π . [ 23 ] [ 24 ] [ 25 ] Hartl and Robert Palais [ 7 ] have argued that 𝜏 allows radian angles to be expressed more directly and in a way that makes clear the link between the radian measure and rotation around the unit circle. For instance, ⁠ 3 τ / 4 ⁠ rad can be easily interpreted as ⁠ 3 / 4 ⁠ of a turn around the unit circle, in contrast with the numerically equal ⁠ 3 π / 2 ⁠ rad, where the meaning could be obscured, particularly for children and students of mathematics. Critics have responded that a full rotation is not necessarily the correct or fundamental reference measure for angles, and two other possibilities, the right angle and straight angle, each have historical precedent. Euclid used the right angle as the basic unit of angle, and David Butler has suggested that ⁠ τ / 4 ⁠ = ⁠ π / 2 ⁠ ≈ 1.57 , which he denotes with the Greek letter η ( eta ), should be seen as the fundamental circle constant. [ 26 ] [ 27 ] Hartl has argued that the periodic trigonometric functions are simplified using 𝜏 as it aligns the function argument (radians) with the function period: sin θ repeats with period T = τ rad, reaches a maximum at ⁠ T / 4 ⁠ = ⁠ τ / 4 ⁠ rad and a minimum at ⁠ 3T / 4 ⁠ = ⁠ 3τ / 4 ⁠ rad. Critics have argued that the formula for the area of a circle is more complicated when restated as A = ⁠ 1 / 2 ⁠ 𝜏 r 2 .
Hartl and others respond that the ⁠ 1 / 2 ⁠ factor is meaningful, arising from either integration or geometric proofs for the area of a circle as half the circumference times the radius . A common criticism of τ is that Euler's identity , e iπ + 1 = 0 , sometimes claimed to be "the most beautiful theorem in mathematics", [ 28 ] is made less elegant when rendered as e i τ /2 + 1 = 0 . [ 29 ] Hartl has asserted that e iτ = 1 (which he also called "Euler's identity") is more fundamental and meaningful. John Conway noted [ 8 ] that Euler's identity is a specific case of the general formula of the n th roots of unity , n √1 = e iτk/n (k = 1,2,..,n) , which he maintained is preferable and more economical than Euler's. The following table shows how various identities appear when τ = 2 π is used instead of π . [ 30 ] [ 6 ] For a more complete list, see List of formulae involving π . For example: S n ( r ) = 2 π r V n − 1 ( r ) {\displaystyle S_{n}(r)={\color {orangered}2\pi }rV_{n-1}(r)} becomes S n ( r ) = τ r V n − 1 ( r ) {\displaystyle S_{n}(r)={\color {orangered}\tau }rV_{n-1}(r)} . 𝜏 has made numerous appearances in culture. It is celebrated annually on June 28, known as Tau Day. [ 31 ] Supporters of 𝜏 are called tauists. [ 25 ] 𝜏 has been covered in videos by Vi Hart , [ 32 ] [ 33 ] [ 34 ] Numberphile , [ 35 ] [ 36 ] [ 37 ] SciShow , [ 38 ] Steve Mould , [ 39 ] [ 40 ] [ 41 ] Khan Academy , [ 42 ] and 3Blue1Brown , [ 43 ] [ 44 ] and it has appeared in the comics xkcd , [ 45 ] [ 46 ] Saturday Morning Breakfast Cereal , [ 47 ] [ 48 ] [ 49 ] and Sally Forth . [ 50 ] The Massachusetts Institute of Technology usually announces admissions on March 14 at 6:28 p.m., which is on Pi Day at Tau Time. [ 51 ] Peter Harremoës has used τ in a mathematical research article which was granted an Editor's Award of the Year. [ 52 ] The following table documents various programming languages that have implemented the circle constant for converting between turns and radians. All of the languages below support the name "Tau" in some casing, but Processing also supports "TWO_PI" and Raku also supports the symbol "τ" for accessing the same value. The constant τ is made available in the Google calculator and the Desmos graphing calculator , [ 53 ] and the iPhone 's Convert Angle option expresses the turn as τ . [ 54 ]
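For instance, Python has shipped the constant in its standard library since version 3.6, as math.tau. A small illustration of the turns-to-radians reading that proponents emphasize:

```python
import math

print(math.tau == 2 * math.pi)  # True

def turns_to_radians(turns: float) -> float:
    """With tau, fractions of a turn map directly onto radians."""
    return turns * math.tau

print(turns_to_radians(0.25))            # quarter turn: tau/4 ~ 1.5708 rad
print(math.sin(turns_to_radians(0.25)))  # 1.0: sine peaks a quarter turn in
```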
https://en.wikipedia.org/wiki/Tau_(2π)
Tau functions are an important ingredient in the modern mathematical theory of integrable systems, and have numerous applications in a variety of other domains. They were originally introduced by Ryogo Hirota [ 1 ] in his direct method approach to soliton equations, based on expressing them in an equivalent bilinear form. The term tau function, or τ-function, was first used systematically by Mikio Sato [ 2 ] and his students [ 3 ] [ 4 ] in the specific context of the Kadomtsev–Petviashvili (or KP) equation and related integrable hierarchies. It is a central ingredient in the theory of solitons. In this setting, given any τ-function satisfying a Hirota-type system of bilinear equations (see § Hirota bilinear residue relation for KP tau functions below), the corresponding solutions of the equations of the integrable hierarchy are explicitly expressible in terms of it and its logarithmic derivatives up to a finite order. Tau functions also appear as matrix model partition functions in the spectral theory of random matrices, [ 5 ] [ 6 ] [ 7 ] and may also serve as generating functions, in the sense of combinatorics and enumerative geometry, especially in relation to moduli spaces of Riemann surfaces, and enumeration of branched coverings, or so-called Hurwitz numbers. [ 8 ] [ 9 ] [ 10 ]

There are two notions of τ-functions, both introduced by the Sato school. The first is isospectral τ-functions of the Sato–Segal–Wilson type [ 2 ] [ 11 ] for integrable hierarchies, such as the KP hierarchy, which are parametrized by linear operators satisfying isospectral deformation equations of Lax type. The second is isomonodromic τ-functions. [ 12 ]

Depending on the specific application, a τ-function may be: 1) an analytic function of a finite or infinite number of independent, commuting flow variables, or deformation parameters; 2) a discrete function of a finite or infinite number of denumerable variables; 3) a formal power series expansion in a finite or infinite number of expansion variables, which need have no convergence domain, but serves as a generating function for certain enumerative invariants appearing as the coefficients of the series; 4) a finite or infinite (Fredholm) determinant whose entries are either specific polynomial or quasi-polynomial functions, or parametric integrals, and their derivatives; 5) the Pfaffian of a skew-symmetric matrix (either finite or infinite dimensional) with entries similarly of polynomial or quasi-polynomial type. Examples of all these types are given below.

In the Hamilton–Jacobi approach to Liouville integrable Hamiltonian systems, Hamilton's principal function, evaluated on the level surfaces of a complete set of Poisson commuting invariants, plays a role similar to the τ-function, serving both as a generating function for the canonical transformation to linearizing canonical coordinates and, when evaluated on simultaneous level sets of a complete set of Poisson commuting invariants, as a complete solution of the Hamilton–Jacobi equation.

A τ-function of isospectral type is defined as a solution of the Hirota bilinear equations (see § Hirota bilinear residue relation for KP tau functions below), from which the linear operator undergoing isospectral evolution can be uniquely reconstructed.
Geometrically, in the Sato [ 2 ] and Segal–Wilson [ 11 ] sense, it is the value of the determinant of a Fredholm integral operator, interpreted as the orthogonal projection of an element of a suitably defined (infinite dimensional) Grassmann manifold onto the origin, as that element evolves under the linear exponential action of a maximal abelian subgroup of the general linear group. It typically arises as a partition function, in the sense of statistical mechanics, many-body quantum mechanics or quantum field theory, as the underlying measure undergoes a linear exponential deformation.

Isomonodromic τ-functions for linear systems of Fuchsian type are defined below in § Fuchsian isomonodromic systems: Schlesinger equations. For the more general case of linear ordinary differential equations with rational coefficients, including irregular singularities, they are developed in reference [ 12 ].

A KP (Kadomtsev–Petviashvili) τ-function τ(t) is a function of an infinite collection t = (t₁, t₂, …) of variables (called KP flow variables) that satisfies the bilinear formal residue equation (1) identically in the δt_j variables, where res_{z=0} denotes the coefficient of z^{−1} in the formal Laurent expansion resulting from expanding all factors as Laurent series in z. As explained below in the section § Formal Baker-Akhiezer function and the KP hierarchy, every such τ-function determines a set of solutions to the equations of the KP hierarchy.

If τ(t₁, t₂, t₃, …) is a KP τ-function satisfying the Hirota residue equation (1) and the first three flow variables are identified as (x, y, t) := (t₁, t₂, t₃), it follows that the function u(x, y, t) := 2 ∂²/∂x² log τ satisfies the 2 (spatial) + 1 (time) dimensional nonlinear partial differential equation known as the Kadomtsev–Petviashvili (KP) equation (2), as shown below. This equation plays a prominent role in plasma physics and in shallow water ocean waves.

Taking further logarithmic derivatives of τ(t₁, t₂, t₃, …) gives an infinite sequence of functions that satisfy further systems of nonlinear autonomous PDEs, each involving partial derivatives of finite order with respect to a finite number of the KP flow parameters t = (t₁, t₂, …). These are collectively known as the KP hierarchy.
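For concreteness, the two relations referenced above as (1) and (2) can be written out, in one standard Sato-school normalization (a reconstruction; sign and scaling conventions vary between references), as the Hirota bilinear residue relation

\mathrm{res}_{z=0}\Bigl(e^{\sum_{i\geq 1}\delta t_{i}\,z^{i}}\;\tau\bigl(\mathbf{t}-[z^{-1}]\bigr)\,\tau\bigl(\mathbf{t}'+[z^{-1}]\bigr)\Bigr)=0,\qquad [z^{-1}]:=\Bigl(\tfrac{1}{z},\tfrac{1}{2z^{2}},\tfrac{1}{3z^{3}},\dots\Bigr),\quad \delta\mathbf{t}:=\mathbf{t}-\mathbf{t}',\qquad (1)

holding identically in t and t', and the KP equation for u(x, y, t) := 2 ∂²ₓ log τ with (x, y, t) := (t₁, t₂, t₃),

3\,u_{yy}=\partial_{x}\bigl(4\,u_{t}-6\,u\,u_{x}-u_{xxx}\bigr).\qquad (2)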
If we define the (formal) Baker-Akhiezer function ψ ( z , t ) {\displaystyle \psi (z,\mathbf {t} )} by Sato's formula [ 2 ] [ 3 ] and expand it as a formal series in the powers of the variable z {\displaystyle z} this satisfies an infinite sequence of compatible evolution equations where D i {\displaystyle {\mathcal {D}}_{i}} is a linear ordinary differential operator of degree i {\displaystyle i} in the variable x := t 1 {\displaystyle x:=t_{1}} , with coefficients that are functions of the flow variables t = ( t 1 , t 2 , … ) {\displaystyle \mathbf {t} =(t_{1},t_{2},\dots )} , defined as follows where L {\displaystyle {\mathcal {L}}} is the formal pseudo-differential operator with ∂ := ∂ ∂ x {\displaystyle \partial :={\frac {\partial }{\partial x}}} , is the wave operator and ( L i ) + {\displaystyle {\big (}{\mathcal {L}}^{i}{\big )}_{+}} denotes the projection to the part of L i {\displaystyle {\mathcal {L}}^{i}} containing purely non-negative powers of ∂ {\displaystyle \partial } ; i.e. the differential operator part of L i {\displaystyle {\mathcal {L}}^{i}} . The pseudodifferential operator L {\displaystyle {\mathcal {L}}} satisfies the infinite system of isospectral deformation equations and the compatibility conditions for both the system ( 3 ) and ( 4 ) are This is a compatible infinite system of nonlinear partial differential equations, known as the KP (Kadomtsev-Petviashvili) hierarchy , for the functions { u j ( t ) } j ∈ N {\displaystyle \{u_{j}(\mathbf {t} )\}_{j\in \mathbf {N} }} , with respect to the set t = ( t 1 , t 2 , … ) {\displaystyle \mathbf {t} =(t_{1},t_{2},\dots )} of independent variables, each of which contains only a finite number of u j {\displaystyle u_{j}} 's, and derivatives only with respect to the three independent variables ( x , t i , t j ) {\displaystyle (x,t_{i},t_{j})} . The first nontrivial case of these is the Kadomtsev-Petviashvili equation ( 2 ). Thus, every KP τ {\displaystyle \tau } -function provides a solution, at least in the formal sense, of this infinite system of nonlinear partial differential equations. Consider the overdetermined system of first order matrix partial differential equations where { N i } i = 1 , … , n {\displaystyle \{N_{i}\}_{i=1,\dots ,n}} are a set of n {\displaystyle n} r × r {\displaystyle r\times r} traceless matrices, { α i } i = 1 , … , n {\displaystyle \{\alpha _{i}\}_{i=1,\dots ,n}} a set of n {\displaystyle n} complex parameters, z {\displaystyle z} a complex variable, and Ψ ( z , α 1 , … , α m ) {\displaystyle \Psi (z,\alpha _{1},\dots ,\alpha _{m})} is an invertible r × r {\displaystyle r\times r} matrix valued function of z {\displaystyle z} and { α i } i = 1 , … , n {\displaystyle \{\alpha _{i}\}_{i=1,\dots ,n}} . These are the necessary and sufficient conditions for the based monodromy representation of the fundamental group π 1 ( P 1 ∖ { α i } i = 1 , … , n ) {\displaystyle \pi _{1}({\bf {P}}^{1}\backslash \{\alpha _{i}\}_{i=1,\dots ,n})} of the Riemann sphere punctured at the points { α i } i = 1 , … , n {\displaystyle \{\alpha _{i}\}_{i=1,\dots ,n}} corresponding to the rational covariant derivative operator to be independent of the parameters { α i } i = 1 , … , n {\displaystyle \{\alpha _{i}\}_{i=1,\dots ,n}} ; i.e. that changes in these parameters induce an isomonodromic deformation . 
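For reference, the overdetermined system just described, together with the Schlesinger equations and the isomonodromic Hamiltonians discussed in the next paragraphs, can be written out in the form standard in the isomonodromy literature (a reconstruction under the stated notation; the equation numbers follow the surrounding text, and sign conventions vary between references):

\frac{\partial \Psi}{\partial z}=\sum_{i=1}^{n}\frac{N_{i}}{z-\alpha_{i}}\,\Psi,\qquad (6)
\qquad
\frac{\partial \Psi}{\partial \alpha_{i}}=-\,\frac{N_{i}}{z-\alpha_{i}}\,\Psi,\quad i=1,\dots,n,\qquad (7)

so that the rational covariant derivative operator is \nabla_{z}:=\partial_{z}-\sum_{i=1}^{n}N_{i}/(z-\alpha_{i}). The compatibility conditions are the Schlesinger equations

\frac{\partial N_{j}}{\partial \alpha_{i}}=\frac{[N_{i},N_{j}]}{\alpha_{i}-\alpha_{j}}\ \ (i\neq j),\qquad \frac{\partial N_{i}}{\partial \alpha_{i}}=-\sum_{j\neq i}\frac{[N_{i},N_{j}]}{\alpha_{i}-\alpha_{j}},\qquad (8)

and the n functions of equation (9) are the isomonodromic Hamiltonians

H_{i}:=\sum_{j\neq i}\frac{\mathrm{tr}(N_{i}N_{j})}{\alpha_{i}-\alpha_{j}},\qquad (9)

in terms of which the isomonodromic τ-function satisfies d\log\tau=\sum_{i=1}^{n}H_{i}\,d\alpha_{i}.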
The compatibility conditions for this system are the Schlesinger equations [ 12 ] Defining n {\displaystyle n} functions the Schlesinger equations ( 8 ) imply that the differential form on the space of parameters is closed: and hence, locally exact. Therefore, at least locally, there exists a function τ ( α 1 , … , α n ) {\displaystyle \tau (\alpha _{1},\dots ,\alpha _{n})} of the parameters, defined within a multiplicative constant, such that The function τ ( α 1 , … , α n ) {\displaystyle \tau (\alpha _{1},\dots ,\alpha _{n})} is called the isomonodromic τ {\displaystyle \tau } -function associated to the fundamental solution Ψ {\displaystyle \Psi } of the system ( 6 ), ( 7 ). Defining the Lie Poisson brackets on the space of n {\displaystyle n} -tuples { N i } i = 1 , … , n {\displaystyle \{N_{i}\}_{i=1,\dots ,n}} of r × r {\displaystyle r\times r} matrices: and viewing the n {\displaystyle n} functions { H i } i = 1 , … , n {\displaystyle \{H_{i}\}_{i=1,\dots ,n}} defined in ( 9 ) as Hamiltonian functions on this Poisson space, the Schlesinger equations ( 8 ) may be expressed in Hamiltonian form as [ 13 ] [ 14 ] for any differentiable function f ( N 1 , … , N n ) {\displaystyle f(N_{1},\dots ,N_{n})} . The simplest nontrivial case of the Schlesinger equations is when r = 2 {\displaystyle r=2} and n = 3 {\displaystyle n=3} . By applying a Möbius transformation to the variable z {\displaystyle z} , two of the finite poles may be chosen to be at 0 {\displaystyle 0} and 1 {\displaystyle 1} , and the third viewed as the independent variable. Setting the sum ∑ i = 1 3 N i {\displaystyle \sum _{i=1}^{3}N_{i}} of the matrices appearing in ( 6 ), which is an invariant of the Schlesinger equations, equal to a constant, and quotienting by its stabilizer under G l ( 2 ) {\displaystyle Gl(2)} conjugation, we obtain a system equivalent to the most generic case P V I {\displaystyle P_{VI}} of the six Painlevé transcendent equations , for which many detailed classes of explicit solutions are known. [ 15 ] [ 16 ] [ 17 ] For non-Fuchsian systems, with higher order poles, the generalized monodromy data include Stokes matrices and connection matrices , and there are further isomonodromic deformation parameters associated with the local asymptotics, but the isomonodromic τ {\displaystyle \tau } -functions may be defined in a similar way, using differentials on the extended parameter space. [ 12 ] There is similarly a Poisson bracket structure on the space of rational matrix valued functions of the spectral parameter z {\displaystyle z} and corresponding spectral invariant Hamiltonians that generate the isomonodromic deformation dynamics. [ 13 ] [ 14 ] Taking all possible confluences of the poles appearing in ( 6 ) for the r = 2 {\displaystyle r=2} and n = 3 {\displaystyle n=3} case, including the one at z = ∞ {\displaystyle z=\infty } , and making the corresponding reductions, we obtain all other instances P I ⋯ P V {\displaystyle P_{I}\cdots P_{V}} of the Painlevé transcendents , for which numerous special solutions are also known. [ 15 ] [ 16 ] The fermionic Fock space F {\displaystyle {\mathcal {F}}} , is a semi-infinite exterior product space [ 18 ] defined on a (separable) Hilbert space H {\displaystyle {\mathcal {H}}} with basis elements { e i } i ∈ Z {\displaystyle \{e_{i}\}_{i\in \mathbf {Z} }} and dual basis elements { e i } i ∈ Z {\displaystyle \{e^{i}\}_{i\in \mathbf {Z} }} for H ∗ {\displaystyle {\mathcal {H}}^{*}} . 
The free fermionic creation and annihilation operators { ψ j , ψ j † } j ∈ Z {\displaystyle \{\psi _{j},\psi _{j}^{\dagger }\}_{j\in \mathbf {Z} }} act as endomorphisms on F {\displaystyle {\mathcal {F}}} via exterior and interior multiplication by the basis elements and satisfy the canonical anti-commutation relations These generate the standard fermionic representation of the Clifford algebra on the direct sum H + H ∗ {\displaystyle {\mathcal {H}}+{\mathcal {H}}^{*}} , corresponding to the scalar product with the Fock space F {\displaystyle {\mathcal {F}}} as irreducible module. Denote the vacuum state, in the zero fermionic charge sector F 0 {\displaystyle {\mathcal {F}}_{0}} , as which corresponds to the Dirac sea of states along the real integer lattice in which all negative integer locations are occupied and all non-negative ones are empty. This is annihilated by the following operators The dual fermionic Fock space vacuum state, denoted ⟨ 0 | {\displaystyle \langle 0|} , is annihilated by the adjoint operators, acting to the left Normal ordering : L 1 , ⋯ L m : {\displaystyle :L_{1},\cdots L_{m}:} of a product of linear operators (i.e., finite or infinite linear combinations of creation and annihilation operators) is defined so that its vacuum expectation value (VEV) vanishes In particular, for a product L 1 L 2 {\displaystyle L_{1}L_{2}} of a pair ( L 1 , L 2 ) {\displaystyle (L_{1},L_{2})} of linear operators, one has The fermionic charge operator C {\displaystyle C} is defined as The subspace F n ⊂ F {\displaystyle {\mathcal {F}}_{n}\subset {\mathcal {F}}} is the eigenspace of C {\displaystyle C} consisting of all eigenvectors with eigenvalue n {\displaystyle n} The standard orthonormal basis { | λ ⟩ } {\displaystyle \{|\lambda \rangle \}} for the zero fermionic charge sector F 0 {\displaystyle {\mathcal {F}}_{0}} is labelled by integer partitions λ = ( λ 1 , … , λ ℓ ( λ ) ) {\displaystyle \lambda =(\lambda _{1},\dots ,\lambda _{\ell (\lambda )})} , where λ 1 ≥ ⋯ ≥ λ ℓ ( λ ) {\displaystyle \lambda _{1}\geq \cdots \geq \lambda _{\ell (\lambda )}} is a weakly decreasing sequence of ℓ ( λ ) {\displaystyle \ell (\lambda )} positive integers, which can equivalently be represented by a Young diagram , as depicted here for the partition ( 5 , 4 , 1 ) {\displaystyle (5,4,1)} . An alternative notation for a partition λ {\displaystyle \lambda } consists of the Frobenius indices ( α 1 , … α r | β 1 , … β r ) {\displaystyle (\alpha _{1},\dots \alpha _{r}|\beta _{1},\dots \beta _{r})} , where α i {\displaystyle \alpha _{i}} denotes the arm length ; i.e. the number λ i − i {\displaystyle \lambda _{i}-i} of boxes in the Young diagram to the right of the i {\displaystyle i} 'th diagonal box, β i {\displaystyle \beta _{i}} denotes the leg length , i.e. the number of boxes in the Young diagram below the i {\displaystyle i} 'th diagonal box, for i = 1 , … , r {\displaystyle i=1,\dots ,r} , where r {\displaystyle r} is the Frobenius rank , which is the number of elements along the principal diagonal. 
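These combinatorial definitions are easy to make concrete. Below is a small, self-contained Python sketch (the function name is illustrative) that computes the Frobenius indices of a partition; for the partition (5, 4, 1) discussed above it returns arms (4, 2) and legs (2, 0), i.e. Frobenius notation (4, 2 | 2, 0):

```python
def frobenius_indices(partition):
    """Frobenius coordinates (alpha | beta) of an integer partition.

    alpha[i] = arm length lambda_{i+1} - (i+1) of the (i+1)-th diagonal box;
    beta[i]  = leg length lambda'_{i+1} - (i+1), with lambda' the conjugate
    partition (column lengths of the Young diagram).
    """
    # Conjugate partition: number of parts strictly greater than each column index.
    conj = [sum(1 for p in partition if p > j) for j in range(partition[0])]
    # Frobenius rank: number of boxes on the principal diagonal.
    r = sum(1 for i, p in enumerate(partition) if p > i)
    alpha = [partition[i] - (i + 1) for i in range(r)]
    beta = [conj[i] - (i + 1) for i in range(r)]
    return alpha, beta

print(frobenius_indices([5, 4, 1]))  # ([4, 2], [2, 0])
```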
The basis element | λ ⟩ {\displaystyle |\lambda \rangle } is then given by acting on the vacuum with a product of r {\displaystyle r} pairs of creation and annihilation operators, labelled by the Frobenius indices The integers { α i } i = 1 , … , r {\displaystyle \{\alpha _{i}\}_{i=1,\dots ,r}} indicate, relative to the Dirac sea, the occupied non-negative sites on the integer lattice while { − β i − 1 } i = 1 , … , r {\displaystyle \{-\beta _{i}-1\}_{i=1,\dots ,r}} indicate the unoccupied negative integer sites. The corresponding diagram, consisting of infinitely many occupied and unoccupied sites on the integer lattice that are a finite perturbation of the Dirac sea are referred to as a Maya diagram . [ 2 ] The case of the null (emptyset) partition | ∅ ⟩ = | 0 ⟩ {\displaystyle |\emptyset \rangle =|0\rangle } gives the vacuum state, and the dual basis { ⟨ μ | } {\displaystyle \{\langle \mu |\}} is defined by Any KP τ {\displaystyle \tau } -function can be expressed as a sum where t = ( t 1 , t 2 , … , … ) {\displaystyle \mathbf {t} =(t_{1},t_{2},\dots ,\dots )} are the KP flow variables, s λ ( t ) {\displaystyle s_{\lambda }(\mathbf {t} )} is the Schur function corresponding to the partition λ {\displaystyle \lambda } , viewed as a function of the normalized power sum variables in terms of an auxiliary (finite or infinite) sequence of variables x := ( x 1 , … , x N ) {\displaystyle \mathbf {x} :=(x_{1},\dots ,x_{N})} and the constant coefficients π λ ( w ) {\displaystyle \pi _{\lambda }(w)} may be viewed as the Plücker coordinates of an element w ∈ G r H + ( H ) {\displaystyle w\in \mathrm {Gr} _{{\mathcal {H}}_{+}}({\mathcal {H}})} of the infinite dimensional Grassmannian consisting of the orbit, under the action of the general linear group G l ( H ) {\displaystyle \mathrm {Gl} ({\mathcal {H}})} , of the subspace H + = s p a n { e − i } i ∈ N ⊂ H {\displaystyle {\mathcal {H}}_{+}=\mathrm {span} \{e_{-i}\}_{i\in \mathbf {N} }\subset {\mathcal {H}}} of the Hilbert space H {\displaystyle {\mathcal {H}}} . This corresponds, under the Bose-Fermi correspondence , to a decomposable element of the Fock space F 0 {\displaystyle {\mathcal {F}}_{0}} which, up to projectivization, is the image of the Grassmannian element w ∈ G r H + ( H ) {\displaystyle w\in \mathrm {Gr} _{{\mathcal {H}}_{+}}({\mathcal {H}})} under the Plücker map where ( w 1 , w 2 , … ) {\displaystyle (w_{1},w_{2},\dots )} is a basis for the subspace w ⊂ H {\displaystyle w\subset {\mathcal {H}}} and [ ⋯ ] {\displaystyle [\cdots ]} denotes projectivization of an element of F {\displaystyle {\mathcal {F}}} . The Plücker coordinates { π λ ( w ) } {\displaystyle \{\pi _{\lambda }(w)\}} satisfy an infinite set of bilinear relations, the Plücker relations , defining the image of the Plücker embedding into the projectivization P ( F ) {\displaystyle \mathbf {P} ({\mathcal {F}})} of the fermionic Fock space, which are equivalent to the Hirota bilinear residue relation ( 1 ). If w = g ( H + ) {\displaystyle w=g({\mathcal {H}}_{+})} for a group element g ∈ G l ( H ) {\displaystyle g\in \mathrm {Gl} ({\mathcal {H}})} with fermionic representation g ^ {\displaystyle {\hat {g}}} , then the τ {\displaystyle \tau } -function τ w ( t ) {\displaystyle \tau _{w}(\mathbf {t} )} can be expressed as the fermionic vacuum state expectation value (VEV): where is the abelian subgroup of G l ( H ) {\displaystyle \mathrm {Gl} ({\mathcal {H}})} that generates the KP flows, and are the ""current"" components. 
As seen in equation ( 9 ), every KP τ {\displaystyle \tau } -function can be represented (at least formally) as a linear combination of Schur functions , in which the coefficients π λ ( w ) {\displaystyle \pi _{\lambda }(w)} satisfy the bilinear set of Plucker relations corresponding to an element w {\displaystyle w} of an infinite (or finite) Grassmann manifold. In fact, the simplest class of (polynomial) tau functions consists of the Schur functions s λ ( t ) {\displaystyle s_{\lambda }(\mathbf {t} )} themselves, which correspond to the special element of the Grassmann manifold whose image under the Plücker map is | λ > {\displaystyle |\lambda >} . If we choose 3 N {\displaystyle 3N} complex constants { α k , β k , γ k } k = 1 , … , N {\displaystyle \{\alpha _{k},\beta _{k},\gamma _{k}\}_{k=1,\dots ,N}} with α k , β k {\displaystyle \alpha _{k},\beta _{k}} 's all distinct, γ k ≠ 0 {\displaystyle \gamma _{k}\neq 0} , and define the functions we arrive at the Wronskian determinant formula which gives the general N {\displaystyle N} -soliton τ {\displaystyle \tau } -function. [ 3 ] [ 4 ] [ 19 ] Let X {\displaystyle X} be a compact Riemann surface of genus g {\displaystyle g} and fix a canonical homology basis a 1 , … , a g , b 1 , … , b g {\displaystyle a_{1},\dots ,a_{g},b_{1},\dots ,b_{g}} of H 1 ( X , Z ) {\displaystyle H_{1}(X,\mathbf {Z} )} with intersection numbers Let { ω i } i = 1 , … , g {\displaystyle \{\omega _{i}\}_{i=1,\dots ,g}} be a basis for the space H 1 ( X ) {\displaystyle H^{1}(X)} of holomorphic differentials satisfying the standard normalization conditions where B {\displaystyle B} is the Riemann matrix of periods. The matrix B {\displaystyle B} belongs to the Siegel upper half space The Riemann θ {\displaystyle \theta } function on C g {\displaystyle \mathbf {C} ^{g}} corresponding to the period matrix B {\displaystyle B} is defined to be Choose a point p ∞ ∈ X {\displaystyle p_{\infty }\in X} , a local parameter ζ {\displaystyle \zeta } in a neighbourhood of p ∞ {\displaystyle p_{\infty }} with ζ ( p ∞ ) = 0 {\displaystyle \zeta (p_{\infty })=0} and a positive divisor of degree g {\displaystyle g} For any positive integer k ∈ N + {\displaystyle k\in \mathbf {N} ^{+}} let Ω k {\displaystyle \Omega _{k}} be the unique meromorphic differential of the second kind characterized by the following conditions: Denote by U k ∈ C g {\displaystyle \mathbf {U} _{k}\in \mathbf {C} ^{g}} the vector of b {\displaystyle b} -cycles of Ω k {\displaystyle \Omega _{k}} : Denote the image of D {\displaystyle {\mathcal {D}}} under the Abel map A : S g ( X ) → C g {\displaystyle {\mathcal {A}}:{\mathcal {S}}^{g}(X)\to \mathbf {C} ^{g}} with arbitrary base point p 0 {\displaystyle p_{0}} . Then the following is a KP τ {\displaystyle \tau } -function: [ 20 ] Let d μ 0 ( M ) {\displaystyle d\mu _{0}(M)} be the Lebesgue measure on the N 2 {\displaystyle N^{2}} dimensional space H N × N {\displaystyle {\mathbf {H} }^{N\times N}} of N × N {\displaystyle N\times N} complex Hermitian matrices. Let ρ ( M ) {\displaystyle \rho (M)} be a conjugation invariant integrable density function Define a deformation family of measures for small t = ( t 1 , t 2 , ⋯ ) {\displaystyle \mathbf {t} =(t_{1},t_{2},\cdots )} and let be the partition function for this random matrix model . [ 21 ] [ 5 ] Then τ N , ρ ( t ) {\displaystyle \tau _{N,\rho }(\mathbf {t} )} satisfies the bilinear Hirota residue equation ( 1 ), and hence is a τ {\displaystyle \tau } -function of the KP hierarchy. 
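As a concrete cross-check of the formalism, the simplest nontrivial soliton τ-function can be verified against the KP equation symbolically. The sketch below assumes the normalization of the KP equation given earlier (equation (2) as reconstructed above) and uses the one-soliton tau function τ = 1 + exp(a x + a² y + a³ t), a special case of the Wronskian formula with N = 1:

```python
import sympy as sp

x, y, t, a = sp.symbols('x y t a', real=True)

# One-soliton tau function: tau = 1 + exp(a*x + a**2*y + a**3*t)
theta = a*x + a**2*y + a**3*t
tau = 1 + sp.exp(theta)

# u = 2 * d^2/dx^2 log(tau)
u = 2*sp.diff(sp.log(tau), x, 2)

# KP equation in the normalization assumed above:
#   3 u_yy = d/dx (4 u_t - 6 u u_x - u_xxx)
lhs = 3*sp.diff(u, y, 2)
rhs = sp.diff(4*sp.diff(u, t) - 6*u*sp.diff(u, x) - sp.diff(u, x, 3), x)

print(sp.simplify(lhs - rhs))  # -> 0, so tau solves the KP equation
```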
[ 22 ] Let { r i } i ∈ Z {\displaystyle \{r_{i}\}_{i\in \mathbf {Z} }} be a (doubly) infinite sequence of complex numbers. For any integer partition λ = ( λ 1 , … , λ ℓ ( λ ) ) {\displaystyle \lambda =(\lambda _{1},\dots ,\lambda _{\ell (\lambda )})} define the content product coefficient where the product is over all pairs ( i , j ) {\displaystyle (i,j)} of positive integers that correspond to boxes of the Young diagram of the partition λ {\displaystyle \lambda } , viewed as positions of matrix elements of the corresponding ℓ ( λ ) × λ 1 {\displaystyle \ell (\lambda )\times \lambda _{1}} matrix. Then, for every pair of infinite sequences t = ( t 1 , t 2 , … ) {\displaystyle \mathbf {t} =(t_{1},t_{2},\dots )} and s = ( s 1 , s 2 , … ) {\displaystyle \mathbf {s} =(s_{1},s_{2},\dots )} of complex variables, viewed as (normalized) power sums t = [ x ] , s = [ y ] {\displaystyle \mathbf {t} =[\mathbf {x} ],\ \mathbf {s} =[\mathbf {y} ]} of the infinite sequence of auxiliary variables defined by: the function is a double KP τ {\displaystyle \tau } -function, both in the t {\displaystyle \mathbf {t} } and the s {\displaystyle \mathbf {s} } variables, known as a τ {\displaystyle \tau } -function of hypergeometric type . [ 23 ] In particular, choosing for some small parameter β {\displaystyle \beta } , denoting the corresponding content product coefficient as r λ β {\displaystyle r_{\lambda }^{\beta }} and setting the resulting τ {\displaystyle \tau } -function can be equivalently expanded as where { H d ( λ ) } {\displaystyle \{H_{d}(\lambda )\}} are the simple Hurwitz numbers , which are 1 n ! {\displaystyle {\frac {1}{n!}}} times the number of ways in which an element k λ ∈ S n {\displaystyle k_{\lambda }\in {\mathcal {S}}_{n}} of the symmetric group S n {\displaystyle {\mathcal {S}}_{n}} in n = | λ | {\displaystyle n=|\lambda |} elements, with cycle lengths equal to the parts of the partition λ {\displaystyle \lambda } , can be factorized as a product of d {\displaystyle d} 2 {\displaystyle 2} -cycles and is the power sum symmetric function. Equation ( 12 ) thus shows that the (formal) KP hypergeometric τ {\displaystyle \tau } -function ( 11 ) corresponding to the content product coefficients r λ β {\displaystyle r_{\lambda }^{\beta }} is a generating function, in the combinatorial sense, for simple Hurwitz numbers. [ 8 ] [ 9 ] [ 10 ]
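The combinatorial definition of the simple Hurwitz numbers can be checked directly in tiny cases. A brute-force Python sketch (the function name is illustrative, and the enumeration is only feasible for very small n and d):

```python
from itertools import product
from math import factorial
from sympy.combinatorics import Permutation

def simple_hurwitz(d, lam):
    """H_d(lam): (1/n!) times the number of ordered factorizations of a
    fixed permutation of cycle type lam into d transpositions in S_n."""
    n = sum(lam)
    # Build a permutation with cycle lengths lam in array form,
    # e.g. lam = (2, 1) -> [1, 0, 2], i.e. the cycle (0 1).
    arr, start = [], 0
    for part in lam:
        block = list(range(start, start + part))
        arr.extend(block[1:] + block[:1])
        start += part
    k = Permutation(arr)
    transpositions = [Permutation([[i, j]], size=n)
                      for i in range(n) for j in range(i + 1, n)]
    count = 0
    for taus in product(transpositions, repeat=d):
        prod = Permutation(list(range(n)))  # identity of S_n
        for tr in taus:
            prod = prod * tr
        if prod == k:
            count += 1
    return count / factorial(n)

# The transposition (0 1) in S_2 factors into one transposition in
# exactly one way, so H_1((2)) = 1/2! = 0.5:
print(simple_hurwitz(1, (2,)))
```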
https://en.wikipedia.org/wiki/Tau_function_(integrable_systems)
The Taub–NUT metric ( / t ɔː b n ʌ t / , [ 1 ] /- ˌ ɛ n . j uː ˈ t iː / ) is an exact solution to Einstein's equations. It may be considered a first attempt at finding the metric of a spinning black hole. It is sometimes also used in homogeneous but anisotropic cosmological models formulated in the framework of general relativity. [ citation needed ]

The underlying Taub space was found by Abraham Haskel Taub (1951), and extended to a larger manifold by Ezra T. Newman, Louis A. Tamburino, and Theodore W. J. Unti (1963), whose initials form the "NUT" of "Taub–NUT". Taub's solution is an empty space solution of Einstein's equations with topology R × S³ and a metric (or equivalently line element) of the form sketched below, in which m and l are positive constants. Taub's metric has coordinate singularities at U = 0, i.e. t = m ± (m² + l²)^{1/2}, and Newman, Tamburino and Unti showed how to extend the metric across these surfaces.

When Roy Kerr developed the Kerr metric for spinning black holes in 1963, he ended up with a four-parameter solution, of which one parameter was the mass and another the angular momentum of the central body. One of the two remaining parameters was the NUT parameter, which he dropped from his solution because he found it nonphysical, since it causes the metric not to be asymptotically flat; [ 2 ] [ 3 ] other sources interpret it either as a gravomagnetic monopole parameter of the central mass, [ 4 ] or as a twisting property of the surrounding spacetime. [ 5 ] A simplified 1+1-dimensional version of the Taub–NUT spacetime is the Misner spacetime.
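For reference, the Taub line element is usually quoted in the following form (a reconstruction from the standard literature, with ψ, θ, φ Euler angles on the 3-sphere; coordinate and sign conventions vary between references):

ds^{2}=-\frac{dt^{2}}{U(t)}+4l^{2}\,U(t)\,(d\psi+\cos\theta\,d\phi)^{2}+(t^{2}+l^{2})\,(d\theta^{2}+\sin^{2}\theta\,d\phi^{2}),\qquad U(t)=\frac{2mt+l^{2}-t^{2}}{t^{2}+l^{2}},

whose zeros U = 0 occur exactly at t = m ± (m² + l²)^{1/2}, matching the coordinate singularities noted above.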
https://en.wikipedia.org/wiki/Taub–NUT_space
A Tauc plot [ 1 ] is used to determine the optical bandgap, or Tauc bandgap, of either disordered [ 2 ] or amorphous [ 3 ] semiconductors. In his original work, Jan Tauc ( / t aʊ t s / ) showed that the optical absorption spectrum of amorphous germanium resembles the spectrum of the indirect transitions in crystalline germanium (plus a tail due to localized states at lower energies), and proposed an extrapolation to find the optical bandgap of these crystalline-like states. [ 4 ] Typically, a Tauc plot shows the photon energy E (= hν) on the abscissa (x-coordinate) and the quantity (αE)^{1/2} on the ordinate (y-coordinate), where α is the absorption coefficient of the material. The resulting plot has a distinct linear regime that denotes the onset of absorption; extrapolating this linear region to the abscissa yields the energy of the optical bandgap of the amorphous material.

A similar procedure is adopted to determine the optical bandgap of crystalline semiconductors. [ 5 ] In this case, however, the ordinate is given by (αE)^{1/r}, in which the exponent 1/r denotes the nature of the transition; [ 6 ] [ 7 ] [ 8 ] conventionally, r = 1/2 for direct allowed transitions, r = 3/2 for direct forbidden transitions, r = 2 for indirect allowed transitions, and r = 3 for indirect forbidden transitions. Again, the resulting plot (quite often incorrectly identified as a Tauc plot) has a distinct linear region that, extrapolated to the abscissa, yields the energy of the optical bandgap of the material. [ 9 ]
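In practice the extrapolation is a straight-line fit over the linear regime. A minimal sketch in Python (synthetic data; in real measurements the fitting window over which the plot is linear must be chosen by inspection):

```python
import numpy as np

# Synthetic absorption data for an amorphous film with a 1.7 eV Tauc gap:
# modeled so that (alpha*E)^(1/2) is proportional to (E - Eg) above the gap.
E = np.linspace(1.2, 3.0, 200)                          # photon energy, eV
alpha = np.where(E > 1.7, 8e4 * (E - 1.7)**2 / E, 0.0)  # cm^-1 (model)

# Tauc ordinate for an amorphous semiconductor: y = (alpha*E)^(1/2)
y = np.sqrt(alpha * E)

# Fit the linear regime (here 1.9-2.6 eV) and extrapolate to y = 0.
mask = (E > 1.9) & (E < 2.6)
slope, intercept = np.polyfit(E[mask], y[mask], 1)
Eg = -intercept / slope
print(f"Tauc gap: {Eg:.3f} eV")                         # ~1.700
```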
https://en.wikipedia.org/wiki/Tauc_plot
The Tauc–Lorentz model is a mathematical formula for the frequency dependence of the complex-valued relative permittivity, sometimes referred to as the dielectric function. The model has been used to fit the complex refractive index of amorphous semiconductor materials at frequencies greater than their optical band gap. The dispersion relation bears the names of Jan Tauc and Hendrik Lorentz, whose previous works [ 1 ] were combined by G. E. Jellison and F. A. Modine to create the model. [ 2 ] [ 3 ] The model was inspired, in part, by shortcomings of the Forouhi–Bloomer model, which is aphysical due to its incorrect asymptotic behavior and non-Hermitian character. Despite the inspiration, the Tauc–Lorentz model is itself aphysical due to being non-Hermitian and non-analytic in the upper half-plane. Further researchers have modified the model to address these shortcomings. [ 4 ] [ 5 ] [ 6 ]

The general form of the model expresses the susceptibility χ^{TL}(E) as a function of the photon energy E in terms of the fitting parameters A (the transition strength), E₀ (the peak transition energy), C (a broadening term), and E_g (the optical band gap). The imaginary component of χ^{TL}(E) is formed as the product of the imaginary component of the Lorentz oscillator model and a model developed by Jan Tauc for the imaginary component of the relative permittivity near the bandgap of a material. [ 1 ] The real component of χ^{TL}(E) is obtained via the Kramers–Kronig transform of its imaginary component. [ 2 ] Computing the Kramers–Kronig transform [ 3 ] gives the real component, in terms of auxiliary quantities α, γ, ζ⁴, a_ln and a_atan defined in the original papers, as

\begin{aligned}
\operatorname{Re}\chi^{TL}(E)\;=\;&\frac{AC}{\pi\zeta^{4}}\,\frac{a_{\mathrm{ln}}}{2\alpha E_{0}}\,\ln\!\left(\frac{E_{0}^{2}+E_{g}^{2}+\alpha E_{g}}{E_{0}^{2}+E_{g}^{2}-\alpha E_{g}}\right)\\
&-\frac{A}{\pi\zeta^{4}}\,\frac{a_{\mathrm{atan}}}{E_{0}}\left[\pi-\arctan\!\left(\frac{\alpha+2E_{g}}{C}\right)+\arctan\!\left(\frac{\alpha-2E_{g}}{C}\right)\right]\\
&+2\,\frac{AE_{0}}{\pi\zeta^{4}\alpha}\,E_{g}\left(E^{2}-\gamma^{2}\right)\left[\pi+2\arctan\!\left(2\,\frac{\gamma^{2}-E_{g}^{2}}{\alpha C}\right)\right]\\
&-\frac{AE_{0}C}{\pi\zeta^{4}}\,\frac{E^{2}+E_{g}^{2}}{E}\,\ln\!\left(\frac{\left|E-E_{g}\right|}{E+E_{g}}\right)\\
&+2\,\frac{AE_{0}C}{\pi\zeta^{4}}\,E_{g}\,\ln\!\left[\frac{\left|E-E_{g}\right|\left(E+E_{g}\right)}{\sqrt{\left(E_{0}^{2}-E_{g}^{2}\right)^{2}+E_{g}^{2}C^{2}}}\right].
\end{aligned}
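For context, the imaginary component that seeds this transform has the following closed form in the Jellison–Modine parametrization (a reconstruction from the published model; prefactor conventions may differ between references):

\operatorname{Im}\chi^{TL}(E)=\begin{cases}\dfrac{1}{E}\cdot\dfrac{A\,E_{0}\,C\,(E-E_{g})^{2}}{\left(E^{2}-E_{0}^{2}\right)^{2}+C^{2}E^{2}}, & E>E_{g},\\[1ex] 0, & E\leq E_{g},\end{cases}

which vanishes below the band gap E_g and approaches a Lorentz oscillator lineshape far above it.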
https://en.wikipedia.org/wiki/Tauc–Lorentz_model
Tauri is an open-source software framework designed to create cross-platform desktop and mobile applications on Linux, macOS, Windows, Android and iOS using a web frontend. The framework pairs a Rust back-end with a JavaScript front-end [ 1 ] that runs on the platform's local WebView, using the libraries Tao (window creation) and Wry (WebView rendering). [ 2 ] [ 3 ] Tauri aims to provide a more lightweight alternative to similar existing frameworks such as Electron. [ 4 ] [ 5 ] Tauri is governed by the Tauri Foundation within the Dutch non-profit Commons Conservancy. [ 6 ] As of 2024, Tauri is licensed and distributed under both the MIT and Apache 2.0 licenses. [ 7 ]

Tauri 1.0 was released in June 2022. In early 2024, Tauri v2 Beta was released, which included mobile support for iOS and Android. [ 8 ] On 2 October 2024, Tauri v2 was released as a stable release. [ 9 ]

Central to Tauri's architecture are core components such as the Tauri crate, which serves as a hub for managing various functionalities like runtimes, macros, utilities, and APIs. The toolkit also includes essential tooling such as bundlers, CLI interfaces, and scaffolding kits to streamline the development and deployment processes. Tauri supports cross-platform application window creation (TAO) and WebView rendering (WRY), which allows compatibility across macOS, Linux and Windows platforms. Tauri is built using Rust, a programming language emphasizing performance, type safety, and memory safety. It also lets developers switch individual APIs on and off, [ 10 ] and provides an isolation pattern to prevent untrusted scripts from accessing the back-end from a WebView. [ 11 ]
https://en.wikipedia.org/wiki/Tauri_(software_framework)
Taurocholic acid , known also as cholaic acid , cholyltaurine , or acidum cholatauricum , is a deliquescent yellowish crystalline bile acid involved in the emulsification of fats . It occurs as a sodium salt in the bile of mammals . It is a conjugate of cholic acid with taurine . In medical use, it is administered as a cholagogue and choleretic . [ 1 ] Hydrolysis of taurocholic acid yields taurine . For commercial use, taurocholic acid is manufactured from cattle bile, a byproduct of the meat-processing industry. [ 2 ] This acid is also one of the many molecules in the body that has cholesterol as its precursor. [ citation needed ] In a large prospective study (involving 569 incident colon cancer cases and 569 matched controls) it was found that prediagnostic concentrations of circulating taurocholic acid, as well as six other bile acids, were statistically significantly associated with increased colon cancer risk. [ 3 ] The median lethal dose of taurocholic acid in newborn rats is 380 mg/kg. [ citation needed ]
https://en.wikipedia.org/wiki/Taurocholic_acid
A tautochrone curve or isochrone curve (from Ancient Greek ταὐτό (tauto-) 'same' or ἴσος (isos-) 'equal', and χρόνος (chronos) 'time') is the curve for which the time taken by an object sliding without friction in uniform gravity to its lowest point is independent of its starting point on the curve. The curve is a cycloid, and the time is equal to π times the square root of the ratio of the generating circle's radius to the acceleration of gravity. The tautochrone curve is related to the brachistochrone curve, which is also a cycloid.

It was in the left hand try-pot of the Pequod, with the soapstone diligently circling round me, that I was first indirectly struck by the remarkable fact, that in geometry all bodies gliding along the cycloid, my soapstone for example, will descend from any point in precisely the same time.
– Herman Melville, Moby-Dick

The tautochrone problem, the attempt to identify this curve, was solved by Christiaan Huygens in 1659. He proved geometrically in his Horologium Oscillatorium, originally published in 1673, that the curve is a cycloid.

On a cycloid whose axis is erected on the perpendicular and whose vertex is located at the bottom, the times of descent, in which a body arrives at the lowest point at the vertex after having departed from any point on the cycloid, are equal to each other ... [ 1 ]

The cycloid is given by a point on a circle of radius r tracing a curve as the circle rolls along the x axis:

x = r(θ − sin θ)
y = r(1 − cos θ).

Huygens also proved that the time of descent is equal to the time a body takes to fall vertically the same distance as the diameter of the generating circle, multiplied by π/2. In modern terms, this means that the time of descent is π√(r/g), where r is the radius of the circle which generates the cycloid, and g is the gravity of Earth, or more accurately, the Earth's gravitational acceleration. This solution was later used to solve the problem of the brachistochrone curve: Johann Bernoulli solved that problem in a paper (Acta Eruditorum, 1697).

The tautochrone problem was studied by Huygens more closely when it was realized that a pendulum, which follows a circular path, is not isochronous, and thus his pendulum clock would keep different time depending on how far the pendulum swung. After determining the correct path, Christiaan Huygens attempted to create pendulum clocks that used a string to suspend the bob and curb cheeks near the top of the string to change the path to the tautochrone curve. These attempts proved unhelpful for a number of reasons. First, the bending of the string causes friction, changing the timing. Second, there were much more significant sources of timing errors that overwhelmed any theoretical improvements from traveling on the tautochrone curve. Finally, the "circular error" of a pendulum decreases as the length of the swing decreases, so better clock escapements could greatly reduce this source of inaccuracy. Later, the mathematicians Joseph Louis Lagrange and Leonhard Euler provided an analytical solution to the problem.

For a simple harmonic oscillator released from rest, regardless of its initial displacement, the time it takes to reach the lowest potential energy point is always a quarter of its period, independent of its amplitude.
Therefore, the Lagrangian of a simple harmonic oscillator is isochronous.

In the tautochrone problem, if the particle's position is parametrized by the arclength s(t) from the lowest point, the kinetic energy is proportional to ṡ², and the potential energy is proportional to the height h(s). One way the curve in the tautochrone problem can be an isochrone is if the Lagrangian is mathematically equivalent to that of a simple harmonic oscillator; that is, the height of the curve must be proportional to the arclength squared,

h(s) = s²/(8r),

where the constant of proportionality is 1/(8r). Compared to the simple harmonic oscillator's Lagrangian, the equivalent spring constant is k = mg/(4r), and the time of descent is

T/4 = (π/2)√(m/k) = π√(r/g).

However, the physical meaning of the constant r is not clear until we determine the exact analytical equation of the curve. To solve for the analytical equation of the curve, note that the differential form of the above relation is

dh = s ds/(4r),

which eliminates s (using s² = 8rh together with ds² = dx² + dh²) and leaves a differential equation for dx and dh. This is the differential equation for a cycloid when the vertical coordinate h is counted from its vertex (the point with a horizontal tangent) instead of the cusp. To find the solution, integrate for x in terms of h,

x = −∫ √((2r − h)/h) dh = −4r ∫ √(1 − u²) du,

where u = √(h/(2r)), and the minus sign reflects that the height decreases as the particle moves forward, dx/dh < 0. This integral is the area under a circle, which can be done with another substitution, u = cos(t/2), and yields

x = r(t − sin t),   h = r(1 + cos t).

This is the standard parameterization of a cycloid with h = 2r − y. It is interesting to note that the arc length squared is equal to the height difference multiplied by the full arch length 8r.

The simplest solution to the tautochrone problem is to note a direct relation between the angle of an incline and the gravity felt by a particle on the incline. A particle on a 90° vertical incline undergoes full gravitational acceleration g, while a particle on a horizontal plane undergoes zero gravitational acceleration. At intermediate angles, the acceleration due to "virtual gravity" felt by the particle is g sin θ. Note that θ is measured between the tangent to the curve and the horizontal, with angles above the horizontal being treated as positive angles. Thus, θ varies from −π/2 to π/2.

The position of a mass measured along a tautochrone curve, s(t), must obey the following differential equation:

d²s/dt² = −ω²s,

which, along with the initial conditions s(0) = s₀ and s′(0) = 0, has solution

s(t) = s₀ cos(ωt).

It can be easily verified both that this solution solves the differential equation and that a particle will reach s = 0 at time π/(2ω) from any starting position s₀. The problem is now to construct a curve that will cause the mass to obey the above motion.
Newton's second law shows that the force of gravity and the acceleration of the mass are related by

−g sin θ = d²s/dt² = −ω²s.

The explicit appearance of the distance, s, is troublesome, but we can differentiate to obtain a more manageable form,

g cos θ dθ = ω² ds.

This equation relates the change in the curve's angle to the change in the distance along the curve. We now use trigonometry to relate the angle θ to the differential lengths dx, dy and ds:

dx = ds cos θ,   dy = ds sin θ.

Replacing ds with dx/cos θ in the above equation lets us solve for x in terms of θ:

g cos θ dθ = ω² dx/cos θ   ⟹   dx = (g/ω²) cos²θ dθ   ⟹   x = (g/(4ω²))(2θ + sin 2θ) + Cₓ.

Likewise, we can also express ds in terms of dy and solve for y in terms of θ:

dy = (g/ω²) sin θ cos θ dθ   ⟹   y = (g/(4ω²))(1 − cos 2θ) + constant.

Substituting φ = 2θ and r = g/(4ω²), we see that these parametric equations for x and y are those of a point on a circle of radius r rolling along a horizontal line (a cycloid), with the circle center at the coordinates (Cₓ + rφ, C_y):

x = Cₓ + r(φ + sin φ),   y = C_y − r cos φ.

Note that φ ranges from −π to π. It is typical to set Cₓ = 0 and C_y = r so that the lowest point on the curve coincides with the origin. Therefore,

x = r(φ + sin φ),   y = r(1 − cos φ).

Solving for ω and remembering that T = π/(2ω) is the time required for descent, being a quarter of a whole cycle, we find the descent time in terms of the radius r:

r = g/(4ω²)   ⟹   ω = (1/2)√(g/r)   ⟹   T = π√(r/g).

(Based loosely on Proctor, pp. 135–139.)

Niels Henrik Abel attacked a generalized version of the tautochrone problem (Abel's mechanical problem), namely, given a function T(y) that specifies the total time of descent for a given starting height, find an equation of the curve that yields this result. The tautochrone problem is a special case of Abel's mechanical problem when T(y) is a constant.

Abel's solution begins with the principle of conservation of energy – since the particle is frictionless, and thus loses no energy to heat, its kinetic energy at any point is exactly equal to the difference in gravitational potential energy from its starting point. The kinetic energy is ½mv², and since the particle is constrained to move along a curve, its velocity is simply dℓ/dt, where ℓ is the distance measured along the curve. Likewise, the gravitational potential energy gained in falling from an initial height y₀ to a height y is mg(y₀ − y); thus,

½m(dℓ/dt)² = mg(y₀ − y)   ⟹   dt = −(1/√(2g(y₀ − y))) (dℓ/dy) dy.

In the last equation, we have anticipated writing the distance remaining along the curve as a function of height, ℓ(y); recognized that the distance remaining must decrease as time increases (thus the minus sign); and used the chain rule in the form dℓ = (dℓ/dy) dy.
Now we integrate from y = y₀ to y = 0 to get the total time required for the particle to fall:

T(y₀) = (1/√(2g)) ∫₀^{y₀} (dℓ/dy) dy/√(y₀ − y).

This is called Abel's integral equation and allows us to compute the total time required for a particle to fall along a given curve (for which dℓ/dy would be easy to calculate). But Abel's mechanical problem requires the converse – given T(y₀), we wish to find f(y) = dℓ/dy, from which an equation for the curve would follow in a straightforward manner. To proceed, we note that the integral on the right is the convolution of dℓ/dy with 1/√y, and thus take the Laplace transform of both sides with respect to the variable y:

L[T(y₀)] = (1/√(2g)) L[1/√y] F(s),

where F(s) = L[dℓ/dy]. Since L[1/√y] = √(π/s), we now have an expression for the Laplace transform of dℓ/dy in terms of the Laplace transform of T(y₀):

F(s) = √(2g/π) √s L[T(y₀)].

This is as far as we can go without specifying T(y₀). Once T(y₀) is known, we can compute its Laplace transform, calculate the Laplace transform of dℓ/dy and then take the inverse transform (or try to) to find dℓ/dy.

For the tautochrone problem, T(y₀) = T₀ is constant. Since the Laplace transform of 1 is 1/s, i.e., L[T(y₀)] = T₀/s, we find the shape function f(y) = dℓ/dy:

F(s) = √(2g/π) √s (T₀/s) = √(2g/π) T₀ s^{−1/2}.

Making use again of the Laplace transform above, we invert the transform and conclude

dℓ/dy = (T₀√(2g)/π) (1/√y).

It can be shown that the cycloid obeys this equation. It needs one step further, the integral with respect to y, to obtain the expression of the path shape. (Simmons, Section 54.)
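The constancy of the descent time is also easy to confirm numerically. A small Python sketch (quadrature of the descent-time integral along the cycloid x = r(φ + sin φ), y = r(1 − cos φ), starting from rest at parameter φ₀; every start should give π√(r/g)):

```python
import numpy as np
from scipy.integrate import quad

g, r = 9.81, 1.0

def descent_time(phi0):
    """Time to slide from rest at parameter phi0 to the bottom (phi = 0)
    of the cycloid x = r(phi + sin phi), y = r(1 - cos phi)."""
    def integrand(phi):
        # Arc length element ds = 2 r cos(phi/2) dphi;
        # speed from energy conservation v = sqrt(2 g (y0 - y)).
        ds = 2 * r * np.cos(phi / 2)
        v = np.sqrt(2 * g * r * (np.cos(phi) - np.cos(phi0)))
        return ds / v
    t, _ = quad(integrand, 0, phi0)
    return t

for phi0 in (0.3, 1.0, 2.0, 3.0):
    print(f"phi0 = {phi0:.1f}:  T = {descent_time(phi0):.6f} s")
print("pi*sqrt(r/g) =", np.pi * np.sqrt(r / g))  # ~1.003 s for every start
```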
https://en.wikipedia.org/wiki/Tautochrone_curve
In chemistry, tautomers ( / ˈ t ɔː t ə m ər / ) [ 1 ] are structural isomers (constitutional isomers) of chemical compounds that readily interconvert. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The chemical reaction interconverting the two is called tautomerization. This conversion commonly results from the relocation of a hydrogen atom within the compound. The phenomenon of tautomerization is called tautomerism, also called desmotropism. Tautomerism is, for example, relevant to the behavior of amino acids and nucleic acids, two of the fundamental building blocks of life.

Care should be taken not to confuse tautomers with depictions of "contributing structures" in chemical resonance. Tautomers are distinct chemical species that can be distinguished by their differing atomic connectivities, molecular geometries, and physicochemical and spectroscopic properties, [ 6 ] whereas resonance forms are merely alternative Lewis structure (valence bond theory) depictions of a single chemical species, whose true structure is a quantum superposition, essentially the "average" of the idealized, hypothetical geometries implied by these resonance forms.

The term tautomer is derived from Ancient Greek ταὐτό (tautó) 'the same' and μέρος (méros) 'part'.

Tautomerization is pervasive in organic chemistry. [ 2 ] [ 7 ] It is typically associated with polar molecules and ions containing functional groups that are at least weakly acidic. Most common tautomers exist in pairs, which means that the hydrogen is located at one of two positions, and even more specifically the most common form involves a hydrogen changing places with a double bond: H−X−Y=Z ⇌ X=Y−Z−H. Common tautomeric pairs include, among others, the keto–enol pair. [ 3 ] [ 4 ] Prototropy is the most common form of tautomerism and refers to the relocation of a hydrogen atom. [ 7 ] Prototropic tautomerism may be considered a subset of acid-base behavior. Prototropic tautomers are sets of isomeric protonation states with the same empirical formula and total charge. Tautomerizations are catalyzed by both acids and bases. [ 4 ] Two further subcategories of prototropic tautomerization are annular tautomerism and ring–chain tautomerism.

Valence tautomerism is a type of tautomerism in which single and/or double bonds are rapidly formed and ruptured, without migration of atoms or groups. [ 9 ] It is distinct from prototropic tautomerism, and involves processes with rapid reorganisation of bonding electrons. A pair of valence tautomers with formula C₆H₆O are benzene oxide and oxepin. [ 9 ] [ 10 ] Other examples of this type of tautomerism can be found in bullvalene, and in open and closed forms of certain heterocycles, such as organic azides and tetrazoles, [ 11 ] or mesoionic münchnone and acylamino ketene. Valence tautomerism requires a change in molecular geometry and should not be confused with canonical resonance structures or mesomers. In inorganic extended solids, valence tautomerism can manifest itself in the change of oxidation states and their spatial distribution upon a change of macroscopic thermodynamic conditions. Such effects have been called charge ordering or valence mixing to describe the behavior in inorganic oxides. [ 12 ]

The existence of multiple possible tautomers for individual chemical substances can lead to confusion. For example, samples of 2-pyridone and 2-hydroxypyridine do not exist as separate isolatable materials: the two tautomeric forms are interconvertible, and the proportion of each depends on factors such as temperature, solvent, and additional substituents attached to the main ring.
[ 8 ] [ 13 ] Historically, each form of the substance was entered into databases such as those maintained by the Chemical Abstracts Service and given separate CAS Registry Numbers . [ 14 ] 2-Pyridone was assigned [142-08-5] [ 15 ] and 2-hydroxypyridine [109-10-4]. [ 16 ] The latter is now a "replaced" registry number so that look-up by either identifier reaches the same entry. The facility to automatically recognise such potential tautomerism and ensure that all tautomers are indexed together has been greatly facilitated by the creation of the International Chemical Identifier (InChI) and associated software. [ 17 ] [ 18 ] [ 19 ] Thus the standard InChI for either tautomer is InChI=1S/C5H5NO/c7-5-3-1-2-4-6-5/h1-4H,(H,6,7) . [ 20 ]
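This collapsing of tautomers into a single identifier can be reproduced with cheminformatics tooling. A sketch using RDKit (assuming a build with InChI support; the SMILES strings are the usual depictions of the two tautomers):

```python
from rdkit import Chem

# 2-hydroxypyridine and 2-pyridone as SMILES
hydroxypyridine = Chem.MolFromSmiles("Oc1ccccn1")
pyridone = Chem.MolFromSmiles("O=c1cccc[nH]1")

# Standard InChI folds the mobile hydrogen into the (H,6,7) layer,
# so both tautomers should map to the same identifier.
inchi_a = Chem.MolToInchi(hydroxypyridine)
inchi_b = Chem.MolToInchi(pyridone)
print(inchi_a)
print(inchi_a == inchi_b)  # expected: True
```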
https://en.wikipedia.org/wiki/Tautomer
Taxadienone ((+)-taxa-4(5),11(12)-dien-2-one) is an organic compound and a taxane. The compound is of some academic interest as a potential precursor to Taxol, an important anti-cancer drug, in a commercially viable process. [ 1 ] A total synthesis of taxadienone was reported in 2012, together with its conversion to the next Taxol precursor, taxadiene. [ 2 ] A multigram synthetic method was reported in 2015. [ 3 ]
https://en.wikipedia.org/wiki/Taxadienone
In mathematics, the n-th taxicab number, typically denoted Ta(n) or Taxicab(n), is defined as the smallest integer that can be expressed as a sum of two positive integer cubes in n distinct ways. [ 1 ] The most famous taxicab number is 1729 = Ta(2) = 1³ + 12³ = 9³ + 10³, also known as the Hardy–Ramanujan number. [ 2 ] [ 3 ]

The name is derived from a conversation ca. 1919 involving mathematicians G. H. Hardy and Srinivasa Ramanujan. As told by Hardy:

I remember once going to see him [Ramanujan] when he was lying ill at Putney. I had ridden in taxi-cab No. 1729, and remarked that the number seemed to be rather a dull one, and that I hoped it was not an unfavourable omen. "No," he replied, "it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways." [ 4 ] [ 5 ]

The pairs of summands of the Hardy–Ramanujan number Ta(2) = 1729 were first mentioned by Bernard Frénicle de Bessy, who published his observation in 1657. 1729 was made famous as the first taxicab number in the early 20th century by the story in which Srinivasa Ramanujan claimed it to be the smallest such number for the particular case of two summands. In 1938, G. H. Hardy and E. M. Wright proved that such numbers exist for all positive integers n, and their proof is easily converted into a program to generate such numbers. However, the proof makes no claims at all about whether the thus-generated numbers are the smallest possible, and so it cannot be used to find the actual value of Ta(n).

The taxicab numbers subsequent to 1729 were found with the help of computers. John Leech obtained Ta(3) in 1957. E. Rosenstiel, J. A. Dardis and C. R. Rosenstiel found Ta(4) in 1989. [ 6 ] J. A. Dardis found Ta(5) in 1994 and it was confirmed by David W. Wilson in 1999. [ 7 ] [ 8 ] Ta(6) was announced by Uwe Hollerbach on the NMBRTHRY mailing list on March 9, 2008, [ 9 ] following a 2003 paper by Calude et al. that gave a 99% probability that the number was actually Ta(6). [ 10 ] Upper bounds for Ta(7) to Ta(12) were found by Christian Boyer in 2006. [ 11 ]

The restriction of the summands to positive numbers is necessary, because allowing negative numbers allows for more (and smaller) instances of numbers that can be expressed as sums of cubes in n distinct ways (sequence A293647 in the OEIS). The concept of a cabtaxi number has been introduced to allow for alternative, less restrictive definitions of this nature. In a sense, the specification of two summands and third powers is also restrictive; a generalized taxicab number allows for these values to be other than two and three, respectively.
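Hardy and Wright's existence proof aside, small taxicab numbers are easy to find by direct search. A minimal Python sketch (the helper name is illustrative; a finite search bound can in principle miss representations whose summands exceed it, so the results are candidates to be confirmed with a larger bound):

```python
from collections import defaultdict

def taxicab_candidates(limit):
    """Group n = a^3 + b^3 (1 <= a <= b <= limit) by value and return the
    sums with at least two distinct representations, in increasing order."""
    sums = defaultdict(list)
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            sums[a**3 + b**3].append((a, b))
    return sorted((n, reps) for n, reps in sums.items() if len(reps) >= 2)

print(taxicab_candidates(20)[0])  # (1729, [(1, 12), (9, 10)])
```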
So far, the following six taxicab numbers are known:

$$
\begin{aligned}
\operatorname{Ta}(1) = {} & 2 \\
&= 1^3 + 1^3 \\[6pt]
\operatorname{Ta}(2) = {} & 1729 \\
&= 1^3 + 12^3 \\
&= 9^3 + 10^3 \\[6pt]
\operatorname{Ta}(3) = {} & 87539319 \\
&= 167^3 + 436^3 \\
&= 228^3 + 423^3 \\
&= 255^3 + 414^3 \\[6pt]
\operatorname{Ta}(4) = {} & 6963472309248 \\
&= 2421^3 + 19083^3 \\
&= 5436^3 + 18948^3 \\
&= 10200^3 + 18072^3 \\
&= 13322^3 + 16630^3 \\[6pt]
\operatorname{Ta}(5) = {} & 48988659276962496 \\
&= 38787^3 + 365757^3 \\
&= 107839^3 + 362753^3 \\
&= 205292^3 + 342952^3 \\
&= 221424^3 + 336588^3 \\
&= 231518^3 + 331954^3 \\[6pt]
\operatorname{Ta}(6) = {} & 24153319581254312065344 \\
&= 582162^3 + 28906206^3 \\
&= 3064173^3 + 28894803^3 \\
&= 8519281^3 + 28657487^3 \\
&= 16218068^3 + 27093208^3 \\
&= 17492496^3 + 26590452^3 \\
&= 18289922^3 + 26224366^3
\end{aligned}
$$

For the following taxicab numbers, upper bounds are known:

$$
\begin{aligned}
\operatorname{Ta}(7) \leq {} & 24885189317885898975235988544 \\
&= 2648660966^3 + 1847282122^3 \\
&= 2685635652^3 + 1766742096^3 \\
&= 2736414008^3 + 1638024868^3 \\
&= 2894406187^3 + 860447381^3 \\
&= 2915734948^3 + 459531128^3 \\
&= 2918375103^3 + 309481473^3 \\
&= 2919526806^3 + 58798362^3 \\[6pt]
\operatorname{Ta}(8) \leq {} & 50974398750539071400590819921724352 \\
&= 299512063576^3 + 288873662876^3 \\
&= 336379942682^3 + 234604829494^3 \\
&= 341075727804^3 + 224376246192^3 \\
&= 347524579016^3 + 208029158236^3 \\
&= 367589585749^3 + 109276817387^3 \\
&= 370298338396^3 + 58360453256^3 \\
&= 370633638081^3 + 39304147071^3 \\
&= 370779904362^3 + 7467391974^3 \\[6pt]
\operatorname{Ta}(9) \leq {} & 136897813798023990395783317207361432493888 \\
&= 41632176837064^3 + 40153439139764^3 \\
&= 46756812032798^3 + 32610071299666^3 \\
&= 47409526164756^3 + 31188298220688^3 \\
&= 48305916483224^3 + 28916052994804^3 \\
&= 51094952419111^3 + 15189477616793^3 \\
&= 51471469037044^3 + 8112103002584^3 \\
&= 51518075693259^3 + 5463276442869^3 \\
&= 51530042142656^3 + 4076877805588^3 \\
&= 51538406706318^3 + 1037967484386^3 \\[6pt]
\operatorname{Ta}(10) \leq {} & 7335345315241855602572782233444632535674275447104 \\
&= 15695330667573128^3 + 15137846555691028^3 \\
&= 17627318136364846^3 + 12293996879974082^3 \\
&= 17873391364113012^3 + 11757988429199376^3 \\
&= 18211330514175448^3 + 10901351979041108^3 \\
&= 19262797062004847^3 + 5726433061530961^3 \\
&= 19404743826965588^3 + 3058262831974168^3 \\
&= 19422314536358643^3 + 2059655218961613^3 \\
&= 19426825887781312^3 + 1536982932706676^3 \\
&= 19429379778270560^3 + 904069333568884^3 \\
&= 19429979328281886^3 + 391313741613522^3 \\[6pt]
\operatorname{Ta}(11) \leq {} & 2818537360434849382734382145310807703728251895897826621632 \\
&= 11410505395325664056^3 + 11005214445987377356^3 \\
&= 12815060285137243042^3 + 8937735731741157614^3 \\
&= 12993955521710159724^3 + 8548057588027946352^3 \\
&= 13239637283805550696^3 + 7925282888762885516^3 \\
&= 13600192974314732786^3 + 6716379921779399326^3 \\
&= 14004053464077523769^3 + 4163116835733008647^3 \\
&= 14107248762203982476^3 + 2223357078845220136^3 \\
&= 14120022667932733461^3 + 1497369344185092651^3 \\
&= 14123302420417013824^3 + 1117386592077753452^3 \\
&= 14125159098802697120^3 + 657258405504578668^3 \\
&= 14125594971660931122^3 + 284485090153030494^3 \\[6pt]
\operatorname{Ta}(12) \leq {} & 73914858746493893996583617733225161086864012865017882136931801625152 \\
&= 33900611529512547910376^3 + 32696492119028498124676^3 \\
&= 38073544107142749077782^3 + 26554012859002979271194^3 \\
&= 38605041855000884540004^3 + 25396279094031028611792^3 \\
&= 39334962370186291117816^3 + 23546015462514532868036^3 \\
&= 40406173326689071107206^3 + 19954364747606595397546^3 \\
&= 41606042841774323117699^3 + 12368620118962768690237^3 \\
&= 41912636072508031936196^3 + 6605593881249149024056^3 \\
&= 41950587346428151112631^3 + 4448684321573910266121^3 \\
&= 41960331491058948071104^3 + 3319755565063005505892^3 \\
&= 41965847682542813143520^3 + 1952714722754103222628^3 \\
&= 41965889731136229476526^3 + 1933097542618122241026^3 \\
&= 41967142660804626363462^3 + 845205202844653597674^3
\end{aligned}
$$

A more restrictive taxicab problem requires that the taxicab number be cubefree , which means that it is not divisible by any cube other than 1^3. When a cubefree taxicab number T is written as T = x^3 + y^3, the numbers x and y must be relatively prime . Among the taxicab numbers Ta( n ) listed above, only Ta(1) and Ta(2) are cubefree taxicab numbers. The smallest cubefree taxicab number with three representations was discovered by Paul Vojta (unpublished) in 1981 while he was a graduate student:

$$
\begin{aligned}
15170835645 &= 517^3 + 2468^3 \\
&= 709^3 + 2456^3 \\
&= 1733^3 + 2152^3
\end{aligned}
$$

The smallest cubefree taxicab number with four representations was discovered by Stuart Gascoigne and independently by Duncan Moore in 2003:

$$
\begin{aligned}
1801049058342701083 &= 92227^3 + 1216500^3 \\
&= 136635^3 + 1216102^3 \\
&= 341995^3 + 1207602^3 \\
&= 600259^3 + 1165884^3
\end{aligned}
$$

(sequence A080642 in the OEIS ).
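The definition suggests a direct, if naive, computation: enumerate sums of two cubes, bucket them by value, and look for values that occur several times. The following Python sketch is illustrative only (the function name and the bound handling are our own, not any reference implementation); it reproduces Ta(2) and Ta(3) quickly, whereas the larger known values were found with far more sophisticated searches.

```python
from collections import defaultdict

def smallest_with_n_representations(n, limit):
    """Smallest number expressible as a sum of two positive cubes in at
    least n ways, searching parts a <= b <= limit.  Any representation of
    a sum s <= limit**3 must use parts <= limit, so restricting candidates
    to s <= limit**3 keeps the enumeration exhaustive for those sums."""
    reps = defaultdict(list)
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            reps[a**3 + b**3].append((a, b))
    candidates = [s for s, pairs in reps.items()
                  if len(pairs) >= n and s <= limit**3]
    if not candidates:
        return None  # limit too small for this n
    best = min(candidates)
    return best, reps[best]

print(smallest_with_n_representations(2, 20))
# (1729, [(1, 12), (9, 10)])
print(smallest_with_n_representations(3, 500))
# (87539319, [(167, 436), (228, 423), (255, 414)])
```

The hashing approach trades memory for time; serious searches for Ta(7) and beyond instead rely on number-theoretic structure rather than brute enumeration.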
https://en.wikipedia.org/wiki/Taxicab_number
A taximeter or fare meter is a mechanical or electronic device installed in taxicabs and auto rickshaws that calculates passenger fares based on a combination of distance travelled and waiting time. Its shortened form, "taxi", is also a metonym for the hired cars that use them. [ 1 ] [ 2 ] The modern taximeter was invented by German Friedrich Wilhelm Gustav Bruhn in 1891, [ 3 ] and the Daimler Victoria —the world's first meter-equipped (and gasoline-powered) taxicab—was built by Gottlieb Daimler in 1897. [ 4 ] Taximeters were originally mechanical and mounted outside the cab, above the driver's side front wheel. Meters were soon relocated inside the taxi, and in the 1980s electronic meters were introduced. The k constant, expressed in pulses per kilometre, is the number of pulses the taximeter must receive in order to correctly indicate a distance travelled of one kilometre. [ 5 ] When a taximeter is installed in a taxi, the k constant must be adjusted to the vehicle. As the car moves, it generates signals that are transmitted to the taximeter; dividing the number of signals received by the k constant gives the distance travelled. The fare is then calculated by multiplying the travel data by the pre-installed tariff values, as sketched below. Taximeters can include several accessories, or act as components in larger dispatching/control systems. During normal operation, taximeters cycle through several stages.
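The fare arithmetic just described reduces to one division and a few multiplications. Below is a minimal Python sketch under invented numbers: the k constant, the tariff values, and the function name are all hypothetical, and real meters apply regulated rounding and tariff-switching rules that are omitted here.

```python
def taximeter_fare(pulses, k_constant, base_fare,
                   rate_per_km, wait_seconds, rate_per_minute):
    """Compute a fare from raw meter inputs.

    pulses          -- distance signals received from the vehicle's sensor
    k_constant      -- calibrated pulses per kilometre
    base_fare       -- flagfall charged at the start of the hire
    rate_per_km     -- tariff per kilometre travelled
    wait_seconds    -- accumulated waiting time
    rate_per_minute -- tariff per minute of waiting
    """
    distance_km = pulses / k_constant  # pulses / (pulses per km) = km
    return (base_fare
            + distance_km * rate_per_km
            + (wait_seconds / 60) * rate_per_minute)

# Hypothetical calibration and tariff: k = 10000 pulses/km,
# 4.00 flagfall, 2.50 per km, 0.50 per waiting minute.
print(taximeter_fare(pulses=25000, k_constant=10000, base_fare=4.00,
                     rate_per_km=2.50, wait_seconds=120, rate_per_minute=0.50))
# 4.00 + 2.5 km * 2.50 + 2 min * 0.50 = 11.25
```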
https://en.wikipedia.org/wiki/Taximeter
A taxis (from Ancient Greek τάξις (táxis) ' arrangement, order ' ; [ 1 ] pl. : taxes / ˈ t æ k s iː z / ) [ 2 ] [ 3 ] [ 4 ] is the movement of an organism in response to a stimulus such as light or the presence of food. Taxes are innate behavioural responses. A taxis differs from a tropism (turning response, often growth towards or away from a stimulus) in that in the case of taxis, the organism has motility and demonstrates guided movement towards or away from the stimulus source. [ 5 ] [ 6 ] It is sometimes distinguished from a kinesis , a non-directional change in activity in response to a stimulus. Taxis can be positive (moving towards the stimulus) or negative (moving away from the stimulus). Taxes are classified based on the type of stimulus, and on whether the organism's response is to move towards or away from the stimulus: if the organism moves towards the stimulus the taxis is positive, while if it moves away the taxis is negative. For example, flagellate protozoans of the genus Euglena move towards a light source. This behaviour is called positive phototaxis, since phototaxis refers to a response to light and the organism is moving towards the stimulus. Many types of taxis have been identified. Depending on the type of sensory organs present, a taxis can be classified as a klinotaxis , where an organism continuously samples the environment to determine the direction of a stimulus; a tropotaxis , where bilateral sense organs are used to determine the stimulus direction; and a telotaxis , where a single organ suffices to establish the orientation of the stimulus. Taxes can also be divided into five types based on the movement of the organism.
https://en.wikipedia.org/wiki/Taxis
A taxocene (from Greek τάξις and κοινός) is a taxonomically related set of species within a community . [ 1 ] An example of a taxocene would be " fishes in a pond ," as the fishes are closely related to one another (i.e., the fishes are more closely related, by evolutionary descent, to each other than any fish is related to any other type of pond organism) and fulfill similar roles within the pond community. Alternatively, it can be defined as a group of species that belong to a particular supraspecific taxon and occur together in the same association. [ 2 ]
https://en.wikipedia.org/wiki/Taxocene
In biology , a taxon ( back-formation from taxonomy ; pl. : taxa ) is a group of one or more populations of an organism or organisms seen by taxonomists to form a unit. Although neither is required, a taxon is usually known by a particular name and given a particular ranking , especially if and when it is accepted or becomes established. It is very common, however, for taxonomists to remain at odds over what belongs to a taxon and the criteria used for inclusion, especially in the context of rank-based (" Linnaean ") nomenclature (much less so under phylogenetic nomenclature ). [ 1 ] If a taxon is given a formal scientific name , its use is then governed by one of the nomenclature codes specifying which scientific name is correct for a particular grouping. Initial attempts at classifying and ordering organisms (plants and animals) were presumably made in prehistoric times by hunter-gatherers, as suggested by fairly sophisticated folk taxonomies. Much later, Aristotle, and later still European scientists such as Magnol , [ 2 ] Tournefort [ 3 ] and Carl Linnaeus with his system in Systema Naturae , 10th edition (1758), [ 4 ] as well as Bernard and Antoine Laurent de Jussieu in an unpublished work, contributed to this field. The idea of a unit-based system of biological classification was first made widely available in 1805 in the introduction of Jean-Baptiste Lamarck 's Flore françoise and Augustin Pyramus de Candolle 's Principes élémentaires de botanique . Lamarck set out a system for the "natural classification" of plants. Since then, systematists have continued to construct accurate classifications encompassing the diversity of life; today, a "good" or "useful" taxon is commonly taken to be one that reflects evolutionary relationships . [ note 1 ] Many modern systematists, such as advocates of phylogenetic nomenclature , use cladistic methods that require taxa to be monophyletic (all descendants of some ancestor). Therefore, their basic unit, the clade , is equivalent to the taxon, assuming that taxa should reflect evolutionary relationships. Similarly, among those contemporary taxonomists working with the traditional Linnaean (binomial) nomenclature, few propose taxa they know to be paraphyletic . [ 5 ] An example of a long-established taxon that is not also a clade is the class Reptilia , the reptiles; birds and mammals are the descendants of animals traditionally classed as reptiles, but neither is included in the Reptilia (birds are traditionally placed in the class Aves , and mammals in the class Mammalia ). [ 6 ] The term taxon was first used in 1926 by Adolf Meyer-Abich for animal groups, as a back-formation from the word taxonomy ; the word taxonomy had been coined a century before from the Greek components τάξις ( táxis ), meaning "arrangement", and νόμος ( nómos ), meaning " method ". [ 7 ] [ 8 ] For plants, the term was proposed by Herman Johannes Lam in 1948, and it was adopted at the VII International Botanical Congress , held in 1950. [ 9 ] The glossary of the International Code of Zoological Nomenclature (1999) provides a formal definition of the term. [ 10 ] A taxon can be assigned a taxonomic rank , usually (but not necessarily) when it is given a formal name. [ citation needed ] " Phylum " applies formally to any biological domain , but traditionally it was always used for animals, whereas "division" was traditionally often used for plants , fungi , etc. [ citation needed ] A prefix is used to indicate a ranking of lesser importance.
The prefix super- indicates a rank above, and the prefix sub- indicates a rank below. In zoology , the prefix infra- indicates a rank below sub- . For instance, among the additional ranks of class are superclass, subclass and infraclass. [ citation needed ] Rank is relative, and restricted to a particular systematic schema. For example, liverworts have been grouped, in various systems of classification, as a family, order, class, or division (phylum). The use of a narrow set of ranks is challenged by users of cladistics ; for example, the mere 10 ranks traditionally used between animal families (governed by the International Code of Zoological Nomenclature (ICZN)) and animal phyla (usually the highest relevant rank in taxonomic work) often cannot adequately represent the evolutionary history as more about a lineage's phylogeny becomes known. [ citation needed ] In addition, the class rank is quite often not an evolutionary group but a phenetic or paraphyletic one and, unlike the ranks governed by the ICZN (family-level, genus-level and species -level taxa), usually cannot be made monophyletic by exchanging the taxa contained therein. This has given rise to phylogenetic taxonomy and the ongoing development of the PhyloCode , which has been proposed as an alternative to Linnaean classification, governing the application of names to clades . Many cladists do not see any need to depart from traditional nomenclature as governed by the ICZN, the International Code of Nomenclature for algae, fungi, and plants , etc. [ citation needed ]
https://en.wikipedia.org/wiki/Taxon
Taxon cycles refer to a biogeographical theory of how species evolve through range expansions and contractions over time associated with adaptive shifts in the ecology and morphology of species. The taxon cycle concept was explicitly formulated by biologist E. O. Wilson in 1961 [ 1 ] after he surveyed the distributions, habitats , behavior and morphology of ant species in the Melanesian archipelago. [ 2 ] Wilson categorized species into evolutionary "stages", which today are commonly described in the outline by Ricklefs & Cox (1972). [ 3 ] However, with the advent of molecular techniques to construct time-calibrated phylogenetic relationships between species, the taxon cycle concept was further developed to include well-defined temporal scales [ 4 ] and combined with concepts from ecological succession and speciation cycle theories. [ 5 ] Taxon cycles have mainly been described in island settings (archipelagos), where the distributions and movements of species are readily recognized, [ 6 ] but may also occur in continental biota. The ecology and evolution of the Melanesian ants that originally inspired Wilson's hypothesis have since been shown to be consistent with the taxon cycle predictions using modern methods. [ 8 ] Ricklefs & Bermingham (2002) [ 6 ] estimated that taxon cycles take place over periods of 0.1-10 million years in different bird groups of the Lesser Antilles islands. Pepke et al. (2019) [ 5 ] used the difference in mean age of late- and early-stage species as a lower estimate (4.7 million years) of the tempo of taxon cycling in an Indo-Pacific bird family .
https://en.wikipedia.org/wiki/Taxon_cycle
In bacteriology , a taxon in disguise is a species , genus or higher unit of biological classification whose evolutionary history reveals that it has evolved from another unit of a similar or lower rank, making the parent unit paraphyletic . [ 1 ] [ 2 ] That happens when rapid evolution makes a new species appear so radically different from the ancestral group that it is not (initially) recognised as belonging to the parent phylogenetic group, which is left as an evolutionary grade . While the term is from bacteriology, parallel examples are found throughout the tree of life. For example, four-footed animals have evolved from piscine ancestors, but since they are not generally considered fish , they can be said to be "fish in disguise". In many cases, the paraphyly can be resolved by reclassifying the taxon in question under the parent group. However, in bacteriology, renaming groups may have serious consequences by causing confusion over the identity of pathogens , so it is generally avoided for some groups. The bacterial genus Shigella is the cause of bacillary dysentery , a potentially severe infection that kills over a million people every year. [ 3 ] The four species of the genus ( S. dysenteriae , S. flexneri , S. boydii , S. sonnei ) have evolved from the common intestinal bacterium Escherichia coli , which renders that species paraphyletic. E. coli itself can also cause serious dysentery, [ 4 ] but differences in genetic makeup between E. coli and Shigella cause different medical conditions and symptoms. [ 2 ] Escherichia coli is a badly classified species, since some strains share only 20% of their genome; it is so diverse that it should be given a higher taxonomic ranking. However, because of the medical conditions associated with E. coli itself and Shigella , the current classification is left unchanged to avoid confusion in medical contexts. Shigella will thus remain " E. coli in disguise". Similarly, the Bacillus species of the B. cereus -group ( B. anthracis , B. cereus , B. thuringiensis , B. mycoides , B. pseudomycoides , B. weihenstephanensis and B. medusa ) have 99–100% similar 16S rRNA sequences (97% is a commonly cited adequate species limit) and should be considered a single species. [ 5 ] Some members of the group appear to have arisen from other Bacillus strains by acquiring a protein-coding plasmid , and the group may thus be polyphyletic. For medical reasons, such as anthrax , the current arrangement of separate species has remained intact. [ 5 ]
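Since the 16S rRNA similarity threshold does the quantitative work in the B. cereus example, it may help to see how such a figure is computed. The Python sketch below is a toy: it assumes two already-aligned fragments of equal length, whereas genuine analyses align the full-length 16S gene and apply more careful distance corrections.

```python
def percent_identity(seq_a, seq_b):
    """Naive percent identity between two aligned sequences of equal
    length.  Real 16S comparisons align the full gene first and often
    correct for gaps and multiple substitutions."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be pre-aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return 100 * matches / len(seq_a)

# Invented aligned fragments; 97% identity is the commonly cited
# species cutoff mentioned above.
print(percent_identity("ACGTACGTAC", "ACGTACGTAT"))  # 90.0 -> below the cutoff
```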
https://en.wikipedia.org/wiki/Taxon_in_disguise
The term boundary paradox refers to the conflict between traditional, rank-based classification of life and evolutionary thinking. In the hierarchy of ranked categories it is implicitly assumed that the morphological gap grows along with increasing rank: two species from the same genus are more similar than two species from different genera in the same family , and these latter two species are more similar than any two species from different families of the same order , and so on. However, this requirement can only be satisfied for the classification of contemporary organisms; difficulties arise if we wish to classify descendants together with their ancestors. Theoretically, such a classification necessarily involves segmentation of the spatio-temporal continuum of populations into groups with crisp boundaries. However, the problem is not only that many parent populations would separate at species level from their offspring. The truly paradoxical situation is that some between-species boundaries would necessarily coincide with between-genus boundaries, and a few between-genus boundaries with borders between families, and so on. [ 1 ] [ 2 ] This ambiguity cannot be resolved in Linnaean systems ; resolution is only possible if classification is cladistic (see below). Jean-Baptiste Lamarck , in Philosophie zoologique (1809), was the first to question the objectivity of rank-based classification of life, saying: …classes, orders, families, genera and nomenclatures are weapons of our own invention. We could not do without them, but we must use them with discretion. …among her productions nature has not really formed either classes, orders, families, genera or constant species, but only individuals who succeed one another and resemble those from which they sprung. Half a century later, Charles Darwin explained that the sharp separation of groups of organisms observed at present becomes less obvious if we go back into the past: The most common case, especially with respect to very distinct groups, such as fish and reptiles, seems to be, that supposing them to be distinguished at the present day from each other by a dozen characters, the ancient members of the same two groups would be distinguished by a somewhat lesser number of characters, so that the two groups, though formerly quite distinct, at that period made some small approach to each other. In his book on orchids , Darwin also warned that the system of ranks would not work if we knew more details about past life: To make a perfect gradation, all the extinct forms which have ever existed, along many lines of descent converging to the common progenitor of the order, would have to be called into life. It is due to their absence, and to the consequent wide gaps in the series, that we are enabled to divide the existing species into definable groups, such as genera, families, and tribes. Finally, Richard Dawkins has more recently argued that If we assume, as almost every anthropologist today accepts, that all members of the genus Homo are descended from ancestors belonging to the genus we call Australopithecus, it necessarily follows that, somewhere along the chain of descent from one species to the other, there must have been at least one individual who sat exactly on the borderline. and Indeed, on the evolutionary view, the conferring of discrete names should actually become impossible if only the fossil record were more complete. In one way, it is fortunate that fossils are so rare.
If we had a continuous and unbroken fossil record, the granting of distinct names to species and genera would become impossible, or at least very problematical. with the following conclusion: The [Linnaean] system works, as long as we don’t try to classify the dead antecedents. But as soon as we include our hypothetically complete fossil record, all the neat separations break down. The paradox may be best illustrated by model diagrams similar to Darwin’s single evolutionary tree in On the Origin of Species . [ 4 ] In these tree graphs , dots represent populations and edges correspond to parent-offspring relations. The trees are placed into a coordinate system which is one-dimensional (time) for a single lineage, and two-dimensional (differentiation vs. time) for cladogenesis or evolution with divergence. In the single lineage model we consider a sequence of populations along an extremely long time axis, say several hundred million years, with the last dot representing an extant population. In the figure there is space for only a few dots, and edges between adjacent populations are hidden. We could use a second axis to express differentiation, but it is not necessary for our purposes. Here we assume that there is no extinction, and all branching events are disregarded (if there were no branches at all, then the changes would correspond to a typical anagenesis ). Classification of organisms along this sequence into species is shown by small ellipses. If the differences between certain species are judged to be large enough to justify classification into distinct genera, then the generic separators must each coincide with a between-species boundary. If differences reach family-level differentiation, which is easy to imagine over the very long time we consider here, the consequence is that a family-level border must overlap with a between-genus and, in turn, a between-species border (gray arrow in the figure). One cannot imagine, however, that a parent and its offspring are so distinct that they should be classified into different families, or even genera – that would be paradoxical. This illustrates Dawkins’ above argumentation on human ancestry at the level of genera, Homo and Australopithecus . Darwin placed emphasis on divergence, that is, when a parent population splits and the offspring populations diverge gradually, each following its own anagenetic sequence, potentially with further divergence events. In this case, evolutionary (say, morphological) divergence is expressed on a new, horizontal axis and time becomes the vertical axis. At time point 1 an imaginary taxonomist judges populations A and B to belong to different species, but within the same genus. Their respective descendants, C and D, are observed at time 2 and considered to represent two separate genera because their morphological difference is large. The paradox is that A and C, as well as B and D, remain within generic limits, but C and D do not, so ancestors cannot be meaningfully classified together with their descendants in a Linnaean system. This illustrates the problem Darwin discussed in the fish and reptile example. Let us consider a hypothetical evolutionary tree with four recent species, A to D, classified into two genera that are fairly distant from each other morphologically.
We assume, further, that from the fossil record we only know their common ancestor, E, representing yet another genus for a taxonomist because it occupies an “intermediate” position between the other two – yet is considerably different from both. All other forms went extinct; we therefore have a classification of these five species into three genera, which would be illogical if more fossils were known. This illustrates Darwin’s and Dawkins’ examples of the role of gaps in the fossil record in classification – and nomenclature. As demonstrated, given a Darwinian evolutionary model, descendants and their ancestors cannot be classified together within the system of Linnaean ranks. A solution is provided by cladistic classification, in which each group is composed of an ancestor and all of its descendant populations, a condition called monophyly . In the above models, monophyletic groups may be obtained by cutting a branch (subtree) from the tree at places where, for instance, new apomorphic (evolutionarily derived) characters appear. For these groups there is no need to consider how much change occurred between members of one group as compared to those of the other.
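The cladistic remedy – taking an ancestor together with all of its descendants – is mechanical enough to express in a few lines of code. Here is a minimal Python sketch, assuming a hypothetical parent-to-children mapping loosely modelled on the divergence example above (E ancestral to A and B, which lead to C and D); it is an illustration of the idea of monophyly, not any published algorithm.

```python
def clade(tree, ancestor):
    """Return the monophyletic group rooted at `ancestor`: the ancestor
    plus all of its descendants, found by walking parent -> children edges."""
    members = set()
    stack = [ancestor]
    while stack:
        node = stack.pop()
        members.add(node)
        stack.extend(tree.get(node, []))
    return members

# Hypothetical tree mirroring the divergence model above.
tree = {"E": ["A", "B"], "A": ["C"], "B": ["D"]}
print(clade(tree, "A"))  # {'A', 'C'} -- an ancestor with all its descendants
print(clade(tree, "E"))  # the whole tree: {'A', 'B', 'C', 'D', 'E'}
```

Any such subtree is monophyletic by construction, which is why the question of how much change separates its members never has to be asked.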
https://en.wikipedia.org/wiki/Taxonomic_boundary_paradox
A taxonomic database is a database created to hold information on biological taxa – for example groups of organisms organized by species name or other taxonomic identifier – for efficient data management and information retrieval . Taxonomic databases are routinely used for the automated construction of biological checklists such as floras and faunas , both for print publication and online; to underpin the operation of web-based species information systems; as a part of biological collection management (for example in museums and herbaria ); as well as providing, in some cases, the taxon management component of broader science or biology information systems. They are also a fundamental contribution to the discipline of biodiversity informatics . Taxonomic databases digitize scientific biodiversity data and provide access to taxonomic data for research. [ 1 ] Taxonomic databases vary in breadth of the groups of taxa and geographical space they seek to include, for example: beetles in a defined region, mammals globally, or all described taxa in the tree of life. [ 2 ] A taxonomic database may incorporate organism identifiers (scientific name, author, and – for zoological taxa – year of original publication), synonyms, taxonomic opinions, literature sources or citations, illustrations or photographs, and biological attributes for each taxon (such as geographic distribution, ecology, descriptive information, threatened or vulnerable status, etc.). [ 2 ] [ 3 ] [ 4 ] [ 5 ] Some databases, such as the Global Biodiversity Information Facility (GBIF) database and the Barcode of Life Data System , store the DNA barcode of a taxon if one exists (also called the Barcode Index Number (BIN) which may be assigned, for example, by the International Barcode of Life project (iBOL) or UNITE, a database for fungal DNA barcoding ). [ 6 ] [ 7 ] A taxonomic database aims to accurately model the characteristics of interest that are relevant to the organisms which are in scope for the intended coverage and usage of the system. [ 8 ] For example, databases of fungi , algae , bryophytes and vascular plants ("higher plants") encode conventions from the International Code of Botanical Nomenclature while their counterparts for animals and most protists encode equivalent rules from the International Code of Zoological Nomenclature . Modelling the relevant taxonomic hierarchy for any taxon is a natural fit with the relational model employed in almost all database systems. [ citation needed ] Scientific consensus is not reached for all taxon groups, and new species continue to be described; therefore, another goal of taxonomic databases is to aid in resolving conflicts of scientific opinion and unify taxonomy. [ 2 ] Possibly the earliest documented management of taxonomic information in computerised form comprised the taxonomic coding system developed by Richard Swartz et al. at the Virginia Institute of Marine Science for the Biota of Chesapeake Bay and described in a published report in 1972. [ 9 ] This work led directly or indirectly to other projects with greater profile including the NODC Taxonomic Code system [ 10 ] which went through 8 versions before being discontinued in 1996, to be subsumed and transformed into the still current Integrated Taxonomic Information System (ITIS). 
A number of other taxonomic databases specializing in particular groups of organisms, which appeared from the 1970s through to the present, jointly contribute to the Species 2000 project, which since 2001 has been partnering with ITIS to produce a combined product, the Catalogue of Life . While the Catalogue of Life currently concentrates on assembling basic name information as a global species checklist, numerous other taxonomic database projects, such as Fauna Europaea and the Australian Faunal Directory, [ 11 ] supply rich ancillary information including descriptions, illustrations, and maps. Many taxonomic database projects are currently listed at the TDWG "Biodiversity Information Projects of the World" site. [ 12 ] The representation of taxonomic information in machine-encodable form raises a number of issues not encountered in other domains, such as variant ways to cite the same species or other taxon name, the same name used for multiple taxa ( homonyms ), multiple non-current names for the same taxon ( synonyms ), changes in name and taxon concept definition through time, and more. [ 8 ] [ 2 ] [ 1 ] Non-standardized categories and metadata in taxonomic databases hamper the ability of researchers to analyze the data. [ 3 ] One forum that has promoted discussion and possible solutions to these and related problems since 1985 is Biodiversity Information Standards (TDWG) , originally called the Taxonomic Database Working Group. While online databases have great benefits (for example, increased access to taxonomic information), they also have issues such as data integrity risks due to parallel on- and off-line versions and continuous updates, technical access issues due to server or internet outages, and differing capacities for complex queries to extract taxonomic data into lists. [ 2 ] As the quantity of information in online taxonomic databases rapidly expands, data aggregation, and the integration and alignment of non-standardized data across databases, is a major challenge in taxonomy and biodiversity informatics. [ 1 ]
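To make the relational fit mentioned above concrete, here is a toy sketch in Python using the standard-library sqlite3 module. The schema, table names, and example rows are invented for illustration and do not reflect the design of ITIS, the Catalogue of Life, or any other real system: a self-referencing taxa table carries the hierarchy, and a synonyms table maps non-current names onto accepted taxa.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE taxa (
        id        INTEGER PRIMARY KEY,
        name      TEXT NOT NULL,
        rank      TEXT NOT NULL,
        parent_id INTEGER REFERENCES taxa(id)   -- self-reference = hierarchy
    );
    CREATE TABLE synonyms (
        name     TEXT NOT NULL,                 -- non-current name
        taxon_id INTEGER NOT NULL REFERENCES taxa(id)
    );
""")
con.executemany("INSERT INTO taxa VALUES (?, ?, ?, ?)", [
    (1, "Canidae", "family", None),
    (2, "Vulpes", "genus", 1),
    (3, "Vulpes vulpes", "species", 2),
])
# Linnaeus's original combination for the red fox, kept as a synonym.
con.execute("INSERT INTO synonyms VALUES (?, ?)", ("Canis vulpes", 3))

# A recursive query walks the hierarchy upwards from a species.
for row in con.execute("""
    WITH RECURSIVE lineage(id, name, rank, parent_id) AS (
        SELECT id, name, rank, parent_id FROM taxa
        WHERE name = 'Vulpes vulpes'
        UNION ALL
        SELECT t.id, t.name, t.rank, t.parent_id
        FROM taxa t JOIN lineage l ON t.id = l.parent_id
    )
    SELECT rank, name FROM lineage
"""):
    print(row)
# ('species', 'Vulpes vulpes'), ('genus', 'Vulpes'), ('family', 'Canidae')
```

A real system would add the nomenclatural fields discussed above (authorship, year, taxon concepts, literature sources) and constraints encoding the rules of the relevant code.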
https://en.wikipedia.org/wiki/Taxonomic_database
Conservationists, ecologists, biodiversity scientists, lawmakers, and many others rely heavily on taxonomic information to manage, conserve, use, and share our biodiversity. The world-wide shortage of this important taxonomic information, the gaps in our taxonomic knowledge, and the shortage of trained taxonomists and curators to fill this need have come to be known as the taxonomic impediment . The importance of this phenomenon was recognized by the Convention on Biological Diversity , signed at the 1992 Rio Earth Summit , [ 1 ] and while initiatives have followed, they have not yet solved the problem. [ 2 ] The greatest contributions of taxonomy to science and humanity are yet to come. Against formidable odds and with minimal funding, equipment, infrastructure, organization and encouragement, taxonomists have discovered, described, and classified nearly 1.8 million species. While increasing attention is being paid to making this substantial amount of accumulated taxonomic information more easily accessible, comparatively little attention has been paid to opening access to the research resources required by taxonomists themselves. Benefits associated with ease of access to museum records (e.g. the Global Biodiversity Information Facility ) or 'known' species (e.g. the Encyclopedia of Life ) are seriously restricted when such information is untested for validity or is simply unavailable, as is the case for three-quarters or more of the species on Earth. We act as if taxonomy is done, but nothing could be farther from the truth. The first documented use of the term "taxonomic impediment" in any context was in 1976, [ 3 ] though this and a few later uses were made with regard to "aspects of taxonomic poverty other than lack of taxonomic expertise." [ 4 ] It was not until the Conference of the Parties to the Convention on Biological Diversity (COP 2) meeting in Jakarta in 1995 that the term "taxonomic impediment" was first used in the modern sense, referring explicitly to a shortage of taxonomists and a lack of support for their research, [ 4 ] and it was subsequently first formally published in the broader scientific literature in 1996. [ 5 ] The causes of the current crisis in taxonomy have been ascribed to a loss of perspective in ecology [ 6 ] and evolutionary biology as the modern evolutionary synthesis developed during the 1930s and 40s: a conflation of "pattern with process", [ 7 ] "confusing the methods and goals of the emerging science of population genetics with those of the established science of taxonomy", [ 7 ] which caused traditional fundamental taxonomy to be disparaged, and consequently underfunded. It is argued that some initiatives that aim to bypass the bottleneck of insufficient taxonomic expertise continue to draw funds away from solving the fundamental problem. [ 8 ] [ 9 ] [ 10 ]
https://en.wikipedia.org/wiki/Taxonomic_impediment
Taxonomic inflation is a pejorative term for what is perceived to be an excessive increase in the number of recognised taxa in a given context, due not to the discovery of new taxa but rather to putatively arbitrary changes to how taxa are delineated. The best-known case is the elevation of a group of subspecies to species rank through the arbitrary decision that the differences between the various taxa warrant distinguishing them at species rank. The rise of molecular genetics is also correlated with the delineation of species based on a small number of genetic differences. Taxonomic inflation is often claimed to occur for conservation reasons. It may be difficult to make a case for the protection of an isolated and unusual population of a common and widespread species , but it becomes much easier to do so if that population is recognised as a rare subspecies or species. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Taxonomic_inflation
In biology , taxonomic rank (which some authors prefer to call nomenclatural rank [ 1 ] because ranking is part of nomenclature rather than taxonomy proper, according to some definitions of these terms) is the relative or absolute level of a group of organisms (a taxon ) in a hierarchy that reflects evolutionary relationships. Thus, the most inclusive clades (such as Eukarya and Animalia ) have the highest ranks, whereas the least inclusive ones (such as Homo sapiens or Bufo bufo ) have the lowest ranks. Ranks can either be relative, denoted by an indented taxonomy in which the level of indentation reflects the rank, or absolute, in which various terms, such as species , genus , family , order , class , phylum , kingdom , and domain , designate rank. This page emphasizes absolute ranks, which the rank-based codes (the Zoological Code , the Botanical Code , the Code for Cultivated Plants , the Prokaryotic Code , and the Code for Viruses ) require. However, absolute ranks are not required in all nomenclatural systems; for instance, the PhyloCode , [ 2 ] the code of phylogenetic nomenclature , does not require them. Taxa are hierarchical groups of organisms, and their ranks describe their position in this hierarchy. High-ranking taxa (e.g. those considered to be domains or kingdoms) include more sub-taxa than low-ranking taxa (e.g. those considered genera, species or subspecies). The rank of these taxa reflects inheritance of traits or molecular features from common ancestors. The names of species and genera are basic, which means that to identify a particular organism, it is usually not necessary to specify names at ranks other than these first two, within a set of taxa covered by a given rank-based code. [ 3 ] However, this is not true globally, because most rank-based codes are independent of each other, so there are many inter-code homonyms (the same name used for different organisms, often for an animal and for a taxon covered by the botanical code). For this reason, attempts were made at creating a BioCode that would regulate all taxon names, [ 4 ] but this attempt has so far failed [ 5 ] because of firmly entrenched traditions in each community. [ 6 ] Consider a particular species, the red fox , Vulpes vulpes : in the context of the Zoological Code , the specific epithet vulpes (small v ) identifies a particular species in the genus Vulpes (capital V ), which comprises all the "true" foxes. Their close relatives are all in the family Canidae , which includes dogs, wolves, jackals, and all foxes; the next higher major taxon, Carnivora (considered an order), includes caniforms (bears, seals, weasels, skunks, raccoons and all those mentioned above) and feliforms (cats, civets, hyenas, mongooses). Carnivorans are one group of the hairy, warm-blooded, nursing members of the class Mammalia , which are classified among animals with notochords in the phylum Chordata , and with them among all animals in the kingdom Animalia . Finally, at the highest rank all of these are grouped together with all other organisms possessing cell nuclei in the domain Eukarya . The International Code of Zoological Nomenclature defines rank as: "The level, for nomenclatural purposes, of a taxon in a taxonomic hierarchy (e.g. all families are for nomenclatural purposes at the same rank, which lies between superfamily and subfamily)."
[ 7 ] Note that the discussions on this page generally assume that taxa are clades ( monophyletic groups of organisms), but this is required neither by the International Code of Zoological Nomenclature nor by the Botanical Code , and some experts on biological nomenclature do not think that this should be required; [ 8 ] in that case, the hierarchy of taxa (hence, their ranks) does not necessarily reflect the hierarchy of clades . While older approaches to taxonomic classification were phenomenological, forming groups on the basis of similarities in appearance, organic structure and behavior, two important new methods developed in the second half of the 20th century drastically changed taxonomic practice. One is the advent of cladistics , which stemmed from the work of the German entomologist Willi Hennig . [ 9 ] Cladistics is a method of classifying life forms according to the proportion of characteristics that they have in common (called synapomorphies ). It is assumed that the higher the proportion of characteristics that two organisms share, the more recently they diverged from a common ancestor. The second is molecular systematics, based on genetic analysis , which can provide much additional data that prove especially useful when few phenotypic characters can resolve relationships, as, for instance, in many viruses , bacteria [ 10 ] and archaea , or to resolve relationships between taxa that arose in a fast evolutionary radiation that occurred long ago, such as the main taxa of placental mammals . [ 11 ] In his landmark publications, such as the Systema Naturae , Carl Linnaeus used a ranking scale limited to kingdom, class, order, genus, species, and one rank below species. Today, nomenclature is regulated by the nomenclature codes . There are seven main taxonomic ranks: kingdom, phylum or division, class, order, family, genus, and species. In addition, domain (proposed by Carl Woese ) is now widely used as a fundamental rank, although it is not mentioned in any of the nomenclature codes and is a synonym for dominion ( Latin : dominium ), introduced by Moore in 1974. [ 12 ] [ 13 ] A taxon is usually assigned a rank when it is given its formal name. The basic ranks are species and genus. When an organism is given a species name, it is assigned to a genus, and the genus name is part of the species name. The species name is also called a binomial , that is, a two-term name. For example, the zoological name for the human species is Homo sapiens . This is usually italicized in print or underlined when italics are not available. In this case, Homo is the generic name and is capitalized; sapiens indicates the species and is not capitalized. While not always used, some species names include a subspecific epithet. For instance, modern humans are Homo sapiens sapiens , or H. sapiens sapiens . In zoological nomenclature, higher taxon names are normally not italicized, but the Botanical Code , the Prokaryotic Code , the Code for Viruses , the draft BioCode [ 4 ] and the PhyloCode [ 2 ] all recommend italicizing all taxon names (of all ranks). There are rules applying to the following taxonomic ranks in the International Code of Zoological Nomenclature : superfamily, family, subfamily, tribe, subtribe, genus, subgenus, species, subspecies. [ 14 ] The International Code of Zoological Nomenclature divides names into "family-group names", "genus-group names" and "species-group names".
The Code explicitly mentions the following ranks for these categories: [ 14 ] : §29–31 The rules in the Code apply to the ranks of superfamily to subspecies, and only to some extent to those above the rank of superfamily. Among "genus-group names" and "species-group names" no further ranks are officially allowed, which creates problems when naming taxa in these groups in speciose clades, such as Rana . [ 15 ] Zoologists sometimes use additional terms such as species group , species subgroup , species complex and superspecies for convenience as extra, but unofficial, ranks between the subgenus and species levels in taxa with many species, e.g. the genus Drosophila . (Note the potentially confusing use of "species group" as both a category of ranks and an unofficial rank itself. For this reason, Alain Dubois has been using the alternative expressions "nominal-series", "family-series", "genus-series" and "species-series" (among others) at least since 2000. [ 16 ] [ 15 ] ) At higher ranks (family and above) a lower level may be denoted by adding the prefix " infra ", meaning lower , to the rank: for example, infraorder (below suborder) or infrafamily (below subfamily). Botanical ranks categorize organisms based (often) on their relationships ( monophyly is not required by that code, which does not even mention the word, nor the word " clade "). They start with kingdom, then move to division (or phylum), [ 17 ] class, order, family, genus, and species. Taxa at each rank generally possess shared characteristics and evolutionary history. Understanding these ranks aids taxonomy and the study of biodiversity. There are definitions of the following taxonomic categories in the International Code of Nomenclature for Cultivated Plants : cultivar group , cultivar , grex . The rules in the ICN apply primarily to the ranks of family and below, and only to some extent to those above the rank of family. (See also descriptive botanical name .) Taxa at the rank of genus and above have a botanical name in one part (unitary name); those at the rank of species and above (but below genus) have a botanical name in two parts ( binary name ); all taxa below the rank of species have a botanical name in three parts (an infraspecific name ). To indicate the rank of the infraspecific name, a "connecting term" is needed. Thus Poa secunda subsp. juncifolia , where "subsp." is an abbreviation for "subspecies", is the name of a subspecies of Poa secunda . [ 19 ] Hybrids can be specified either by a "hybrid formula" that specifies the parentage, or may be given a name. For hybrids receiving a hybrid name , the same ranks apply, prefixed with notho (Greek: 'bastard'), with nothogenus as the highest permitted rank. [ 20 ] If a different term for the rank was used in an old publication, but the intention is clear, botanical nomenclature specifies certain substitutions: [ citation needed ] Classifications of five species follow: the fruit fly familiar in genetics laboratories ( Drosophila melanogaster ), humans ( Homo sapiens ), the peas used by Gregor Mendel in his discovery of genetics ( Pisum sativum ), the "fly agaric" mushroom Amanita muscaria , and the bacterium Escherichia coli . The eight major ranks are given in bold; a selection of minor ranks is given as well. Taxa above the genus level are often given names based on the type genus , with a standard termination. The terminations used in forming these names depend on the kingdom (and sometimes the phylum and class), as set out in the table below.
Pronunciations given are the most Anglicized. More Latinate pronunciations are also common, particularly / ɑː / rather than / eɪ / for stressed a . There is an indeterminate number of ranks, as a taxonomist may invent a new rank at will, at any time, if they feel this is necessary. In doing so, there are some restrictions, which will vary with the nomenclature code that applies. [ citation needed ] The following is an artificial synthesis, solely for purposes of demonstrating absolute rank (but see notes), from most general to most specific. [ 34 ] Ranks are assigned based on subjective dissimilarity, and do not fully reflect the gradational nature of variation within nature. These problems were already identified by Willi Hennig , who advocated dropping them in 1969, [ 40 ] and this position gathered support from Graham C. D. Griffiths only a few years later. [ 41 ] In fact, these ranks were proposed in a fixist context, and the advent of evolution sapped the foundations of this system, as was recognised long ago; the introduction of The Code of Nomenclature and Check-list of North American Birds Adopted by the American Ornithologists' Union published in 1886 states "No one appears to have suspected, in 1842 [when the Strickland code was drafted], that the Linnaean system was not the permanent heritage of science, or that in a few years a theory of evolution was to sap its very foundations, by radically changing men's conceptions of those things to which names were to be furnished." [ 42 ] Such ranks are used simply because they are required by the rank-based codes; because of this, some systematists prefer to call them nomenclatural ranks . [ 1 ] [ 6 ] In most cases, higher taxonomic groupings arise further back in time, simply because the most inclusive taxa necessarily appeared first. [ 43 ] Furthermore, the diversity in some major taxa (such as vertebrates and angiosperms ) is better known than that of others (such as fungi , arthropods and nematodes ), not because they are more diverse than other taxa, but because they are more easily sampled and studied, or because they attract more interest and funding for research. [ 44 ] [ 45 ] Of these many ranks, many systematists consider that the most basic (or important) is the species, but this opinion is not universally shared. [ 46 ] [ 47 ] [ 48 ] Thus, species are not necessarily more sharply defined than taxa at any other rank, and in fact, given the phenotypic gaps created by extinction, in practice, the reverse is often the case. [ 6 ] Ideally, a taxon is intended to represent a clade , that is, the phylogeny of the organisms under discussion, but this is not a requirement of the zoological and botanical codes. [ 6 ] A classification in which all taxa have formal ranks cannot adequately reflect knowledge about phylogeny. Since taxon names are dependent on ranks in rank-based (Linnaean) nomenclature, taxa without ranks cannot be given names. Alternative approaches, such as phylogenetic nomenclature , [ 49 ] [ 50 ] as implemented under the PhyloCode and supported by the International Society for Phylogenetic Nomenclature , [ 51 ] or the use of circumscriptional names , avoid this problem. [ 52 ] [ 53 ] The theoretical difficulty of superimposing taxonomic ranks over evolutionary trees is manifested as the boundary paradox , which may be illustrated by Darwinian evolutionary models.
There are no rules for how many species should make a genus, a family, or any other higher taxon (that is, a taxon in a category above the species level). [ 54 ] [ 55 ] It should be a natural group (that is, non-artificial, non- polyphyletic ), as judged by a biologist, using all the information available to them. Equally ranked higher taxa in different phyla are not necessarily equivalent in terms of time of origin, phenotypic distinctiveness or number of lower-ranking included taxa (e.g., it is incorrect to assume that families of insects are in some way evolutionarily comparable to families of mollusks). [ 55 ] [ 56 ] [ 6 ] Of all the criteria proposed for ranking taxa, age of origin has been the most frequently advocated. Willi Hennig proposed it in 1966, [ 9 ] but he concluded in 1969 [ 40 ] that this system was unworkable and suggested dropping absolute ranks. However, the idea of ranking taxa using the age of origin (either as the sole criterion, or as one of the main ones) persists under the name of time banding, and is still advocated by several authors. [ 57 ] [ 58 ] [ 59 ] [ 60 ] For animals, at least the phylum rank is usually associated with a certain body plan , which is also, however, an arbitrary criterion. [ citation needed ] Enigmatic taxa are taxonomic groups whose broader relationships are unknown or undefined. [ 61 ] (See Incertae sedis .) There are several mnemonics intended to help memorise the taxonomic hierarchy, such as "King Phillip Came Over For Great Spaghetti" [ 62 ] (Kingdom(s), Phylum/Phyla, Class(es), Order(s), Family/Families, Genus, Species). (See taxonomy mnemonic .)
https://en.wikipedia.org/wiki/Taxonomic_rank
Taxonomic sequence (also known as systematic , phyletic or taxonomic order ) is the sequence followed when listing taxa; it aids ease of use and roughly reflects the evolutionary relationships among the taxa. Taxonomic sequences can exist for taxa within any rank ; that is, lists of families , genera or species can each have a sequence. Early biologists used the concept of "age" or "primitiveness" of the groups in question to derive an order of arrangement, with "older" or more "primitive" groups being listed first and more recent or "advanced" ones last. A modern understanding of evolutionary biology has brought about a more robust framework for the taxonomic ordering of lists. A list may be seen as a rough one-dimensional representation of a phylogenetic tree . Taxonomic sequences are essentially heuristic devices that help in the arrangement of linear systems such as books and information retrieval systems. Since phylogenetic relationships are complex and non-linear, there is no unique way to define the sequence, although the more basal taxa are generally listed first, with species that cluster in a tight group placed next to each other. [ 1 ] [ 2 ] The organization of field guides and taxonomic monographs may either follow or prescribe the taxonomic sequence; changes in these sequences are often introduced by new publications. [ 3 ] [ 4 ]
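Because a taxonomic sequence is a linear ordering imposed on related taxa, applying one to a checklist is simply a sort with a custom key. A minimal sketch follows; the family sequence and the records are invented for illustration:

```python
# Minimal sketch: order a species checklist by a (hypothetical) taxonomic
# sequence of families, keeping congeners together within each family.
family_sequence = ["Anatidae", "Phasianidae", "Columbidae"]  # example order
rank = {fam: i for i, fam in enumerate(family_sequence)}

records = [
    ("Columba livia", "Columbidae"),
    ("Anas platyrhynchos", "Anatidae"),
    ("Gallus gallus", "Phasianidae"),
    ("Anser anser", "Anatidae"),
]

# Sort first by the family's position in the sequence, then by name so
# that closely related species end up adjacent in the list.
ordered = sorted(records, key=lambda r: (rank[r[1]], r[0]))
for species, family in ordered:
    print(family, species)
```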
https://en.wikipedia.org/wiki/Taxonomic_sequence
A taxonomic treatment is a section in a scientific publication documenting the features of a related group of organisms or taxa . [ 1 ] Treatments have been the building blocks of how data about taxa are provided, ever since the beginning of modern taxonomy with Linnaeus, 1753 for plants [ 2 ] and 1758 for animals. [ 3 ] Each scientifically described taxon has at least one taxonomic treatment. In today's publishing, a taxonomic treatment tag [ 4 ] is used to delimit such a section, [ 5 ] making it findable, accessible, interoperable and reusable ( FAIR ) data. This is implemented in the Biodiversity Literature Repository , where upon deposition of the treatment a persistent DataCite digital object identifier (DOI) is minted. This includes metadata about the treatment, the source publication and other cited resources, such as figures cited in the treatment. This DOI allows a link from a taxonomic name usage to the respective scientific evidence provided by the author(s), for both human and machine consumption. Treatments are considered data, so copyright is not applicable, [ 6 ] and they can thus be made available even from closed-access publications. The term taxonomic treatment was coined because the term description has two meanings in species or taxonomic descriptions : one is equivalent to treatment; the second is a subsection within treatments describing the taxon, complementing the diagnosis, materials examined, distribution, conservation and other subsections. [ 7 ] The term was introduced during a national US NSF digital library project, [ 8 ] and has been further developed into Taxpub, [ 9 ] a taxonomy-specific version of the Journal Article Tag Suite , by Plazi , the National Center for Biotechnology Information , and Pensoft Publishers . It was prototyped by the taxonomic journal ZooKeys , [ 10 ] which adopted Taxpub from its volume 50 onwards, followed by PhytoKeys . [ 11 ] Taxpub is now used by journals published by Pensoft Publishers, the European Journal of Taxonomy , [ 12 ] the Consortium of European Taxonomic Facilities (CETAF), and the National Museum of Natural History, France . [ 13 ] The TreatmentBank [ 14 ] service provided by Plazi to convert taxonomic publications into FAIR data provides access to over 500,000 taxonomic treatments, [ 15 ] including over 7,700 treatments for species newly described in 2020. [ 16 ] They will eventually become accessible in the BLR after passing quality control to avoid artifacts arising from the complex conversion of unstructured, mainly PDF-based, publications.
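On the machine-consumption side, a minted treatment DOI can be resolved to its metadata record. The sketch below assumes DataCite's public REST API at api.datacite.org; the DOI shown is a placeholder (the request will fail until it is replaced with a real treatment DOI), and the response fields should be verified against the current API documentation:

```python
# Sketch only: fetch the DataCite metadata record for a treatment DOI.
# Replace the placeholder DOI with a real one before running; the field
# names follow DataCite's JSON:API response layout as an assumption.
import json
import urllib.request

doi = "10.5281/zenodo.0000000"  # placeholder, not a real treatment DOI
url = f"https://api.datacite.org/dois/{doi}"

with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

attrs = record["data"]["attributes"]
print(attrs.get("titles"))      # treatment title metadata
print(attrs.get("publisher"))   # e.g. the depositing repository
```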
https://en.wikipedia.org/wiki/Taxonomic_treatment
Taxonomic vandalism is a term used in biology to describe the practice of publishing numerous [ 1 ] scientifically unfounded or poorly-justified taxonomic names , often without adequate research [ 2 ] or peer review. [ 3 ] [ 4 ] This phenomenon has been observed across various fields of taxonomy, but has been particularly prevalent in herpetology . [ 5 ] [ 6 ] [ 7 ]
https://en.wikipedia.org/wiki/Taxonomic_vandalism
Taylor's power law is an empirical law in ecology that relates the variance of the number of individuals of a species per unit area of habitat to the corresponding mean by a power law relationship. [ 1 ] It is named after the ecologist who first proposed it in 1961, Lionel Roy Taylor (1924–2007). [ 2 ] Taylor's original name for this relationship was the law of the mean. [ 1 ] The name Taylor's law was coined by Southwood in 1966. [ 2 ] This law was originally defined for ecological systems, specifically to assess the spatial clustering of organisms. For a population count Y with mean μ and variance var(Y), Taylor's law is written var(Y) = aμ^b, where a and b are both positive constants. Taylor proposed this relationship in 1961, suggesting that the exponent b be considered a species-specific index of aggregation. [ 1 ] This power law has subsequently been confirmed for many hundreds of species. [ 3 ] [ 4 ] Taylor's law has also been applied to assess the time-dependent changes of population distributions. [ 3 ] Related variance-to-mean power laws have also been demonstrated in several non-ecological systems. The first use of a double logarithmic (log-log) plot was by Reynolds in 1879, on thermal aerodynamics. [ 17 ] Pareto used a similar plot to study the proportion of a population and their income. [ 18 ] The term variance was coined by Fisher in 1918. [ 19 ] Pearson [ 20 ] in 1921 proposed an equation of this type (also studied by Neyman [ 21 ] ). Smith in 1938, while studying crop yields, proposed a relationship similar to Taylor's. [ 22 ] This relationship was V_x = V_1 / x^b, where V_x is the variance of yield for plots of x units, V_1 is the variance of yield per unit area and x is the size of the plots. The slope (b) is the index of heterogeneity. The value of b in this relationship lies between 0 and 1. Where the yields are highly correlated, b tends to 0; when they are uncorrelated, b tends to 1. Bliss [ 23 ] in 1941, Fracker and Brischle [ 24 ] in 1941 and Hayman & Lowe [ 25 ] in 1961 also described what is now known as Taylor's law, but in the context of data from single species. Taylor's 1961 paper used data from 24 papers, published between 1936 and 1960, that considered a variety of biological settings: virus lesions, macro-zooplankton , worms and symphylids in soil , insects in soil, on plants and in the air, mites on leaves , ticks on sheep and fish in the sea. [ 1 ] The b values lay between 1 and 3. Taylor proposed the power law as a general feature of the spatial distribution of these species. He also proposed a mechanistic hypothesis to explain this law. Initial attempts to explain the spatial distribution of animals had been based on approaches like Bartlett's stochastic population models and the negative binomial distribution that could result from birth–death processes . [ 26 ] Taylor's explanation was based on the assumption of a balanced migratory and congregatory behavior of animals. [ 1 ] His hypothesis was initially qualitative, but as it evolved it became semi-quantitative and was supported by simulations. [ 27 ] Many alternative hypotheses for the power law have been advanced. Hanski proposed a random walk model, modulated by the presumed multiplicative effect of reproduction. [ 28 ] Hanski's model predicted that the power law exponent would be constrained to range closely about the value of 2, which seemed inconsistent with many reported values. [ 3 ] [ 4 ]
Anderson et al formulated a simple stochastic birth, death, immigration and emigration model that yielded a quadratic variance function. [ 29 ] In response to this model, Taylor argued that such a Markov process would predict that the power law exponent would vary considerably between replicate observations, and that such variability had not been observed. [ 30 ] Adrienne W. Kemp reviewed a number of discrete stochastic models based on the negative binomial, Neyman type A , and Polya–Aeppli distributions that, with suitable adjustment of parameters, could produce a variance to mean power law. [ 31 ] Kemp, however, did not explain the parameterizations of her models in mechanistic terms. Other relatively abstract models for Taylor's law followed. [ 6 ] [ 32 ] Statistical concerns were raised regarding Taylor's law, based on the difficulty with real data of distinguishing between Taylor's law and other variance to mean functions, as well as the inaccuracy of standard regression methods. [ 33 ] [ 34 ] Taylor's law has been applied to time series data, and Perry showed, using simulations, that chaos theory could yield Taylor's law. [ 35 ] Taylor's law has been applied to the spatial distribution of plants [ 36 ] and bacterial populations. [ 37 ] As with the observations of Tobacco necrosis virus mentioned earlier, these observations were not consistent with Taylor's animal behavioral model. A variance to mean power function had also been applied to non-ecological systems, under the rubric of Taylor's law. As a more general explanation for the range of manifestations of the power law, a hypothesis has been proposed based on the Tweedie distributions , [ 38 ] a family of probabilistic models that express an inherent power function relationship between the variance and the mean. [ 11 ] [ 13 ] [ 39 ] The Lewontin–Cohen growth model [ 40 ] is another proposed explanation. The possibility that observations of a power law might reflect more mathematical artifact than a mechanistic process has also been raised. [ 41 ] However, variation in the exponents of Taylor's law applied to ecological populations cannot be explained or predicted on statistical grounds alone. [ 42 ] Research has shown that variation within the Taylor's law exponents for the North Sea fish community varies with the external environment, suggesting that ecological processes at least partially determine the form of Taylor's law. [ 43 ] In the physics literature, Taylor's law has been referred to as fluctuation scaling. Eisler et al, in a further attempt to find a general explanation for fluctuation scaling, proposed a process they called impact inhomogeneity, in which frequent events are associated with larger impacts. [ 44 ] In appendix B of the Eisler article, however, the authors noted that the equations for impact inhomogeneity yielded the same mathematical relationships as found with the Tweedie distributions.
Another group of physicists, Fronczak and Fronczak, derived Taylor's power law for fluctuation scaling from principles of equilibrium and non-equilibrium statistical physics . [ 45 ] Their derivation was based on assumptions of physical quantities like free energy and an external field that caused the clustering of biological organisms. Direct experimental demonstration of these postulated physical quantities in relation to animal or plant aggregation has, however, yet to be achieved. Shortly thereafter, an analysis of Fronczak and Fronczak's model was presented that showed their equations directly lead to the Tweedie distributions, a finding that suggested that Fronczak and Fronczak had possibly provided a maximum entropy derivation of these distributions. [ 14 ] Taylor's law has been shown to hold for prime numbers not exceeding a given real number. [ 46 ] This result has been shown to hold for the first 11 million primes. If the Hardy–Littlewood twin primes conjecture is true, then this law also holds for twin primes. About the time that Taylor was substantiating his ecological observations, MCK Tweedie , a British statistician and medical physicist, was investigating a family of probabilistic models that are now known as the Tweedie distributions . [ 47 ] [ 48 ] As mentioned above, these distributions are all characterized by a variance to mean power law mathematically identical to Taylor's law. The Tweedie distribution most applicable to ecological observations is the compound Poisson-gamma distribution , which represents the sum of N independent and identically distributed random variables with a gamma distribution , where N is a random variable distributed in accordance with a Poisson distribution. In the additive form its cumulant generating function (CGF) is K*(s) = λ[κ_b(θ + s) − κ_b(θ)], where κ_b(θ) is the cumulant function, b the Tweedie exponent, s the generating function variable, and θ and λ the canonical and index parameters, respectively. [ 38 ] These last two parameters are analogous to the scale and shape parameters used in probability theory . The cumulants of this distribution can be determined by successive differentiations of the CGF and then substituting s = 0 into the resultant equations. The first and second cumulants are the mean and variance, respectively, and thus the compound Poisson-gamma CGF yields Taylor's law with the proportionality constant a = λ^{1−b}. The compound Poisson-gamma cumulative distribution function has been verified for limited ecological data through the comparison of the theoretical distribution function with the empirical distribution function . [ 39 ] A number of other systems, demonstrating variance to mean power laws related to Taylor's law, have been similarly tested for the compound Poisson-gamma distribution. [ 12 ] [ 13 ] [ 14 ] [ 16 ] The main justification for the Tweedie hypothesis rests with the mathematical convergence properties of the Tweedie distributions. [ 13 ] The Tweedie convergence theorem requires the Tweedie distributions to act as foci of convergence for a wide range of statistical processes. [ 49 ] As a consequence of this convergence theorem, processes based on the sum of multiple independent small jumps will tend to express Taylor's law and obey a Tweedie distribution. A limit theorem for independent and identically distributed variables, as with the Tweedie convergence theorem, might then be considered as being fundamental relative to the ad hoc population models, or models proposed on the basis of simulation or approximation. [ 14 ] [ 16 ]
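To make the compound Poisson-gamma connection concrete, the sketch below simulates such sums and recovers the exponent. The parameter algebra (Poisson rate, gamma shape and scale derived from a chosen exponent p and dispersion σ²) is the textbook compound Poisson-gamma representation of Tweedie models, not taken from the cited papers, and NumPy is assumed available:

```python
# Sketch: simulate compound Poisson-gamma (Tweedie) variables, i.e. a
# Poisson number of gamma-distributed terms, and check that the group
# variances scale as a power of the group means.
import numpy as np

rng = np.random.default_rng(7)
p, sigma2 = 1.6, 1.0               # chosen Tweedie exponent and dispersion
alpha = (2 - p) / (p - 1)          # gamma shape implied by p

means, variances = [], []
for mu in np.linspace(1, 50, 25):
    lam = mu ** (2 - p) / (sigma2 * (2 - p))   # Poisson rate
    gam = sigma2 * (p - 1) * mu ** (p - 1)     # gamma scale
    n = rng.poisson(lam, size=500)             # number of gamma terms
    totals = np.array([rng.gamma(alpha, gam, size=k).sum() for k in n])
    means.append(totals.mean())
    variances.append(totals.var(ddof=1))

b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"fitted exponent b = {b:.2f}  (target p = {p})")
```

The fitted slope should land near the chosen p, illustrating why this family reproduces Taylor's law with 1 < b < 2.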
This hypothesis remains controversial; more conventional population dynamic approaches seem to be preferred amongst ecologists, despite the fact that the Tweedie compound Poisson distribution can be directly applied to population dynamic mechanisms. [ 6 ] One difficulty with the Tweedie hypothesis is that the value of b does not range between 0 and 1. Values of b < 1 are rare but have been reported. [ 50 ] In symbols, Taylor's law is s_i² = a m_i^b, where s_i² is the variance of the density of the i-th sample, m_i is the mean density of the i-th sample and a and b are constants. In logarithmic form, log s_i² = log a + b log m_i. The exponent in Taylor's law is scale invariant: if the unit of measurement is changed by a constant factor c, the exponent (b) remains unchanged. To see this, let y = cx. Then Taylor's law expressed in the original variable (x) is σ₁² = a μ₁^b, and in the rescaled variable (y), since σ₂² = c²σ₁² and μ₂ = cμ₁, it is σ₂² = c²aμ₁^b = c²a(μ₂/c)^b = (ac^{2−b})μ₂^b. Thus, σ₂² is still proportional to μ₂^b (even though the proportionality constant has changed). It has been shown that Taylor's law is the only relationship between the mean and variance that is scale invariant. [ 51 ] A refinement in the estimation of the slope b has been proposed by Rayner, [ 52 ] in which r is the Pearson moment correlation coefficient between log(s²) and log m, f is the ratio of sample variances in log(s²) and log m, and φ is the ratio of the errors in log(s²) and log m. Ordinary least squares regression assumes that φ = ∞. This tends to underestimate the value of b because the estimates of both log(s²) and log m are subject to error. An extension of Taylor's law has been proposed by Ferris et al for when multiple samples are taken, [ 53 ] in which s² and m are the variance and mean respectively, b, c and d are constants and n is the number of samples taken. To date, this proposed extension has not been verified to be as applicable as the original version of Taylor's law. An extension to this law for small samples has been proposed by Hanski. [ 54 ] For small samples the Poisson variation (P) - the variation that can be ascribed to sampling variation - may be significant. Let S be the total variance and let V be the biological (real) variance. Then S = P + V. Assuming the validity of Taylor's law, we have V = a m^b. Because in the Poisson distribution the mean equals the variance, we have P = m. This gives us S = m + a m^b. This closely resembles Bartlett's original suggestion. Slope values (b) significantly > 1 indicate clumping of the organisms. In Poisson-distributed data, b = 1. [ 30 ] If the population follows a lognormal or gamma distribution , then b = 2. For populations that are experiencing constant per capita environmental variability, the regression of log( variance ) versus log( mean abundance ) should give a line with b = 2. Most populations that have been studied have b < 2 (usually 1.5–1.6), but values of 2 have been reported. [ 55 ] Occasionally cases with b > 2 have been reported. [ 3 ] Values of b below 1 are uncommon but have also been reported ( b = 0.93 ). [ 50 ] It has been suggested that the exponent of the law (b) is proportional to the skewness of the underlying distribution. [ 56 ] This proposal has been criticised; additional work seems to be indicated. [ 57 ] [ 58 ]
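In practice a and b are estimated by ordinary least squares on the logarithmic form above; Hanski's small-sample correction amounts to subtracting the Poisson component from the total variance before fitting. A minimal sketch with synthetic overdispersed counts (NumPy assumed; the negative binomial generator is only a convenient way to produce clustered data, not a claim about mechanism):

```python
# Sketch: fit Taylor's law by OLS on the log-log scale, using Hanski's
# correction V = S - m to remove Poisson sampling variation first.
import numpy as np

rng = np.random.default_rng(1)

means, variances = [], []
for mu in np.linspace(2, 100, 25):
    k = 2.0                                        # clumping parameter
    counts = rng.negative_binomial(k, k / (k + mu), size=200)
    m = counts.mean()
    S = counts.var(ddof=1)                         # total variance
    V = S - m                                      # biological variance
    means.append(m)
    variances.append(V)

b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
print(f"a = {np.exp(log_a):.2f}, b = {b:.2f}")     # b near 2 here
```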
The origin of the slope (b) in this regression remains unclear. Two hypotheses have been proposed to explain it. One suggests that b arises from the species' behavior and is a constant for that species. The alternative suggests that it is dependent on the sampled population. Despite the considerable number of studies carried out on this law (over 1000), this question remains open. It is known that both a and b are subject to change due to age-specific dispersal, mortality and sample unit size. [ 59 ] This law may be a poor fit if the values are small. For this reason an extension to Taylor's law has been proposed by Hanski which improves the fit of Taylor's law at low densities. [ 54 ] A form of Taylor's law applicable to binary data in clusters (e.g., quadrats) has been proposed. [ 60 ] In a binomial distribution, the theoretical variance is var_bin = p(1 − p)/n, where var_bin is the binomial variance, n is the sample size per cluster, and p is the proportion of individuals with a trait (such as disease), an estimate of the probability of an individual having that trait. One difficulty with binary data is that the mean and variance, in general, have a particular relationship: as the mean proportion of individuals infected increases above 0.5, the variance decreases. It is now known that the observed variance (var_obs) changes as a power function of var_bin. [ 60 ] Hughes and Madden noted that if the distribution is Poisson, the mean and variance are equal. [ 60 ] As this is clearly not the case in many observed proportion samples, they instead assumed a binomial distribution. They replaced the mean in Taylor's law with the binomial variance and then compared this theoretical variance with the observed variance. For random binomial data var_obs = var_bin; with overdispersion, var_obs > var_bin. In symbols, Hughes and Madden's modification to Taylor's law was var_obs = a(var_bin)^b. In logarithmic form this relationship is log var_obs = log a + b log var_bin. This latter version is known as the binary power law. A key step in the derivation of the binary power law by Hughes and Madden was the observation made by Patil and Stiteler [ 61 ] that the variance-to-mean ratio used for assessing over-dispersion of unbounded counts in a single sample is actually the ratio of two variances: the observed variance and the theoretical variance for a random distribution. For unbounded counts, the random distribution is the Poisson. Thus, the Taylor power law for a collection of samples can be considered as a relationship between the observed variance and the Poisson variance. More broadly, Madden and Hughes [ 60 ] considered the power law as the relationship between two variances, the observed variance and the theoretical variance for a random distribution. With binary data, the random distribution is the binomial (not the Poisson). Thus the Taylor power law and the binary power law are two special cases of a general power-law relationship for heterogeneity. When both a and b are equal to 1, a small-scale random spatial pattern is suggested, which is best described by the binomial distribution. When b = 1 and a > 1, there is over-dispersion (small-scale aggregation). When b is > 1, the degree of aggregation varies with p. Turechek et al [ 62 ] have shown that the binary power law describes numerous data sets in plant pathology. In general, b is greater than 1 and less than 2. The fit of this law has been tested by simulations. [ 63 ]
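A sketch of fitting the binary power law to synthetic incidence data follows (NumPy assumed; the beta-binomial sampler is used only as a convenient way to generate overdispersed proportions):

```python
# Sketch: fit the binary power law by regressing log observed variance
# of disease incidence against log binomial variance across groups.
import numpy as np

rng = np.random.default_rng(11)
n = 20                                  # individuals per cluster (quadrat)

log_vbin, log_vobs = [], []
for p_true in np.linspace(0.05, 0.6, 15):
    # Beta-binomial counts give overdispersed incidence data.
    p_i = rng.beta(p_true * 8, (1 - p_true) * 8, size=100)
    counts = rng.binomial(n, p_i)
    props = counts / n
    p_hat = props.mean()
    log_vbin.append(np.log(p_hat * (1 - p_hat) / n))   # binomial variance
    log_vobs.append(np.log(props.var(ddof=1)))         # observed variance

b, log_a = np.polyfit(log_vbin, log_vobs, 1)
print(f"a = {np.exp(log_a):.2f}, b = {b:.2f}")
```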
These results suggest that rather than a single regression line for the data set, a segmental regression may be a better model for genuinely random distributions. However, this segmentation only occurs for very short-range dispersal distances and large quadrat sizes. [ 62 ] The break in the line occurs only at p very close to 0. An extension to this law has been proposed. [ 64 ] The original form of this law is symmetrical but it can be extended to an asymmetrical form. [ 64 ] Using simulations, the symmetrical form fits the data when there is positive correlation of disease status of neighbors. Where there is a negative correlation between the likelihood of neighbours being infected, the asymmetrical version is a better fit to the data. Because of the ubiquitous occurrence of Taylor's law in biology it has found a variety of uses, some of which are listed here. It has been recommended, based on simulation studies, [ 65 ] that in applications testing the validity of Taylor's law to a data sample: (1) the total number of organisms studied be > 15; (2) the minimum number of groups of organisms studied be > 5; (3) the density of the organisms should vary by at least 2 orders of magnitude within the sample. It is commonly assumed (at least initially) that a population is randomly distributed in the environment. If a population is randomly distributed then the mean (m) and variance (s²) of the population are equal, and the proportion of samples that contain at least one individual (p) is p = 1 − e^{−m}. When a species with a clumped pattern is compared with one that is randomly distributed with equal overall densities, p will be less for the species having the clumped distribution pattern. Conversely, when comparing a uniformly and a randomly distributed species at equal overall densities, p will be greater for the randomly distributed population. This can be graphically tested by plotting p against m. Wilson and Room developed a binomial model that incorporates Taylor's law. [ 66 ] The basic relationship is p = 1 − e^{−m ln(s²/m)/(s²/m − 1)}, where the log is taken to the base e. Incorporating Taylor's law this relationship becomes p = 1 − e^{−m ln(a m^{b−1})/(a m^{b−1} − 1)}. The common dispersion parameter (k) of the negative binomial distribution is k = m²/(s² − m), where m is the sample mean and s² is the variance. [ 67 ] If 1/k is > 0 the population is considered to be aggregated; if 1/k = 0 (s² = m) the population is considered to be randomly (Poisson) distributed; and if 1/k is < 0 the population is considered to be uniformly distributed. No comment on the distribution can be made if k = 0. Wilson and Room, assuming that Taylor's law applied to the population, gave an alternative estimator for k: [ 66 ] k = m²/(a m^b − m), where a and b are the constants from Taylor's law. Jones, [ 68 ] using the estimate for k above along with the relationship Wilson and Room developed for the probability of finding a sample having at least one individual, [ 66 ] derived an estimator, P(x), for the probability of a sample containing x individuals per sampling unit, where k is estimated from the Wilson and Room equation and m is the sample mean. The probability of finding zero individuals, P(0), is estimated with the negative binomial distribution as P(0) = (1 + m/k)^{−k}. Jones also gives confidence intervals for these probabilities, in terms of CI, the confidence interval, t, the critical value taken from the t distribution, and N, the total sample size.
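The Wilson-Room relationship with Taylor's law substituted in, as reconstructed above, packages naturally as a function; the coefficients below are arbitrary illustrative values:

```python
# Sketch: Wilson & Room's incidence-density relationship with Taylor's
# law plugged in (a, b are Taylor's law coefficients).
import math

def prop_occupied(m: float, a: float, b: float) -> float:
    """Probability a sampling unit contains at least one individual."""
    ratio = a * m ** (b - 1)          # variance-to-mean ratio a*m^(b-1)
    if abs(ratio - 1) < 1e-9:         # Poisson limit: p = 1 - exp(-m)
        return 1 - math.exp(-m)
    return 1 - math.exp(-m * math.log(ratio) / (ratio - 1))

def k_from_taylor(m: float, a: float, b: float) -> float:
    """Negative binomial k implied by Taylor's law: m^2/(a*m^b - m)."""
    return m ** 2 / (a * m ** b - m)

for m in (0.5, 2.0, 10.0):
    print(m, round(prop_occupied(m, a=1.5, b=1.8), 3))
```

Note that as the variance-to-mean ratio approaches 1 the expression collapses to the Poisson result p = 1 − e^{−m}, which is a useful internal consistency check.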
Katz proposed a family of distributions (the Katz family ) with 2 parameters (w₁, w₂). [ 69 ] This family of distributions includes the Bernoulli , geometric , Pascal and Poisson distributions as special cases. The mean and variance of a Katz distribution are m = w₁/(1 − w₂) and s² = w₁/(1 − w₂)², where m is the mean and s² is the variance of the sample. The parameters can be estimated by the method of moments, from which we have w₁ = m²/s² and w₂ = 1 − m/s². For a Poisson distribution w₂ = 0 and w₁ = λ, the parameter of the Poisson distribution. This family of distributions is also sometimes known as the Panjer family of distributions. The Katz family is related to the Sundt-Jewel family of distributions, [ 70 ] whose only members are the Poisson, binomial, negative binomial (Pascal), extended truncated negative binomial and logarithmic series distributions . If the population obeys a Katz distribution then the coefficients of Taylor's law are a = 1/(1 − w₂) and b = 1. Katz also introduced a statistical test, [ 69 ] J_n = √(n/2)(s²/m − 1), where J_n is the test statistic, s² is the variance of the sample, m is the mean of the sample and n is the sample size. J_n is asymptotically normally distributed with a zero mean and unit variance. If the sample is Poisson distributed J_n = 0; values of J_n < 0 and > 0 indicate under- and over-dispersion respectively. Overdispersion is often caused by latent heterogeneity - the presence of multiple sub-populations within the population the sample is drawn from. This statistic is related to the Neyman–Scott statistic, which is known to be asymptotically normal, and to the conditional chi-squared statistic (Poisson dispersion test), which is known to have an asymptotic chi-squared distribution with n − 1 degrees of freedom when the population is Poisson distributed. If the population obeys Taylor's law then J_n = √(n/2)(a m^{b−1} − 1). If Taylor's law is assumed to apply it is possible to determine the mean time to local extinction. This model assumes a simple random walk in time and the absence of density-dependent population regulation. [ 71 ] Let N_{t+1} = rN_t, where N_{t+1} and N_t are the population sizes at time t + 1 and t respectively and r is a parameter equal to the annual increase (or decrease) in population; the resulting expressions involve var(r), the variance of r. Letting K be a measure of the species abundance (organisms per unit area), the model yields the mean time to local extinction, T_E, and from it the probability of extinction by time t. If a population is lognormally distributed then the harmonic mean of the population size (H) is related to the arithmetic mean (m). [ 72 ] Given that H must be > 0 for the population to persist, rearranging this relationship gives a minimum size of population below which the species cannot persist. The assumption of a lognormal distribution appears to apply to about half of a sample of 544 species, [ 73 ] suggesting that it is at least a plausible assumption. The degree of precision (D) is defined to be s/m, where s is the standard deviation and m is the mean. The degree of precision is known as the coefficient of variation in other contexts. In ecology research it is recommended that D be in the range 10–25%. [ 74 ] The desired degree of precision is important in estimating the required sample size where an investigator wishes to test if Taylor's law applies to the data.
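A sketch of the Katz test in the form reconstructed above (the counts are invented):

```python
# Sketch: Katz's J_n test for departure from Poisson dispersion, using
# J_n = sqrt(n/2) * (s^2/m - 1); approximately N(0, 1) under Poisson.
import math
import statistics

def katz_test(counts):
    n = len(counts)
    m = statistics.mean(counts)
    s2 = statistics.variance(counts)       # sample variance
    return math.sqrt(n / 2) * (s2 / m - 1)

clustered = [0, 0, 1, 7, 0, 9, 2, 0, 0, 11, 3, 0, 5, 0, 8]
print(f"J_n = {katz_test(clustered):.2f}")  # >> 0 suggests overdispersion
```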
The required sample size has been estimated for a number of simple distributions, but where the population distribution is not known or cannot be assumed, more complex formulae may be needed to determine the required sample size. Where the population is Poisson distributed the sample size (n) needed is n = t²/(D²m), where t is the critical level of the t distribution for the type 1 error, with the degrees of freedom with which the mean (m) was calculated. If the population is distributed as a negative binomial distribution then the required sample size is n = (t²/D²)(1/m + 1/k), where k is the parameter of the negative binomial distribution. A more general sample size estimator has also been proposed: [ 75 ] n = (t²/D²) a m^{b−2}, where a and b are derived from Taylor's law. An alternative has been proposed by Southwood: [ 76 ] n = a m^{b−2}/D², where n is the required sample size, a and b are the Taylor's law coefficients and D is the desired degree of precision. Karandinos proposed two similar estimators for n. [ 77 ] The first was modified by Ruesink to incorporate Taylor's law: [ 78 ] n = (z_{α/2}/d)² a m^{b−2}, where d is the ratio of half the desired confidence interval (CI) to the mean; in symbols, d = (CI/2)/m. The second estimator is used in binomial (presence-absence) sampling. The desired sample size (n) is n = (z_{α/2}/d_p)²(q/p), where d_p is the ratio of half the desired confidence interval to the proportion of sample units with individuals, p is the proportion of samples containing individuals and q = 1 − p; in symbols, d_p = (CI/2)/p. For binary (presence/absence) sampling, Schulthess et al modified Karandinos' equation; in their version N is the required sample size, p is the proportion of units containing the organisms of interest, t is the chosen level of significance and D_ip is a parameter derived from Taylor's law. [ 79 ] Sequential analysis is a method of statistical analysis where the sample size is not fixed in advance. Instead, samples are taken in accordance with a predefined stopping rule . Taylor's law has been used to derive a number of stopping rules. A formula for fixed precision in serial sampling to test Taylor's law was derived by Green in 1970: [ 80 ] T ≥ (D² n^{b−1}/a)^{1/(b−2)}, where T is the cumulative sample total, D is the level of precision, n is the sample size and a and b are obtained from Taylor's law. As an aid to pest control, Wilson et al developed a test that incorporated a threshold level where action should be taken. [ 81 ] The required sample size is n = t² a m^b/|m − T|², where a and b are the Taylor coefficients, |·| is the absolute value , m is the sample mean, T is the threshold level and t is the critical level of the t distribution. The authors also provided a similar test for binomial (presence-absence) sampling, where p is the probability of finding a sample with pests present and q = 1 − p. Green derived another sampling formula for sequential sampling based on Taylor's law, [ 82 ] in which D is the degree of precision, a and b are the Taylor's law coefficients, n is the sample size and T is the total number of individuals sampled. Serra et al have proposed a stopping rule based on Taylor's law, [ 83 ] expressed in terms of a and b, the parameters from Taylor's law, D, the desired level of precision, and T_n, the total sample size. Serra et al also proposed a second stopping rule based on Iwao's regression, in terms of α and β, the parameters of the regression line, D, the desired level of precision, and T_n, the total sample size. The authors recommended that D be set at 0.1 for studies of population dynamics and D = 0.25 for pest control.
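Green's fixed-precision stop line, in the form reconstructed above, decides when to stop inspecting sampling units. A sketch follows; the coefficients and counts are invented and chosen so that the rule actually triggers:

```python
# Sketch: Green's fixed-precision stop line. Sampling stops once the
# cumulative count crosses the line T >= (D^2 * n^(b-1) / a)^(1/(b-2)).
def green_stop_line(n: int, a: float, b: float, D: float) -> float:
    """Cumulative-count threshold after n sampling units."""
    return (D ** 2 * n ** (b - 1) / a) ** (1 / (b - 2))

a, b, D = 1.5, 1.3, 0.3             # illustrative Taylor coefficients
counts = [3, 0, 7, 2, 5, 9, 1, 4]   # counts as units are inspected
total = 0
for n, c in enumerate(counts, start=1):
    total += c
    line = green_stop_line(n, a, b, D)
    print(f"n={n} T={total} stop line={line:.1f}")
    if total >= line:
        print("precision reached; stop sampling")
        break
```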
It is considered to be good practice to estimate at least one additional index of aggregation (other than Taylor's law) because the use of only a single index may be misleading. [ 84 ] Although a number of other methods for detecting relationships between the variance and mean in biological samples have been proposed, to date none have achieved the popularity of Taylor's law. The most popular analysis used in conjunction with Taylor's law is probably Iwao's patchiness regression test, but all the methods listed here have been used in the literature. Bartlett in 1936 [ 85 ] and later Iwao independently in 1968 [ 86 ] both proposed an alternative relationship between the variance and the mean. In symbols, s_i² = a m_i + b m_i², where s_i² is the variance in the i-th sample and m_i is the mean of the i-th sample. When the population follows a negative binomial distribution , a = 1 and b = 1/k (where k is the exponent of the negative binomial distribution). This alternative formulation has not been found to be as good a fit as Taylor's law in most studies. Nachman proposed a relationship between the mean density and the proportion of samples with zero counts: [ 87 ] p₀ = exp(−a m^b), where p₀ is the proportion of the sample with zero counts, m is the mean density, a is a scale parameter and b is a dispersion parameter. If a = b = 1, the distribution is random. This relationship is usually tested in its logarithmic form, ln(−ln p₀) = ln a + b ln m. Allsop used this relationship along with Taylor's law to derive an expression for the proportion of infested units in a sample, [ 88 ] in terms of D², the degree of precision desired, z_{α/2}, the upper α/2 point of the normal distribution, a and b, the Taylor's law coefficients, c and d, the Nachman coefficients, n, the sample size, and N, the number of infested units. Binary sampling is not uncommonly used in ecology. In 1958 Kono and Sugino derived an equation that relates the proportion of samples without individuals to the mean density of the samples: [ 89 ] m = a(−ln p₀)^b, where p₀ is the proportion of the sample with no individuals, m is the mean sample density and a and b are constants. Like Taylor's law, this equation has been found to fit a variety of populations, including ones that obey Taylor's law. Unlike the negative binomial distribution, this model is independent of the mean density. The derivation of this equation is straightforward: let the proportion of empty units be p₀ and assume that these are distributed exponentially, so that p₀ = exp(−(m/a)^{1/b}). Taking logs twice and rearranging, we obtain the equation above. This model is the same as that proposed by Nachman. The advantage of this model is that it does not require counting the individuals but only recording their presence or absence. Counting individuals may not be possible in many cases, particularly where insects are the matter of study. The equation was derived while examining the relationship between the proportion P of a series of rice hills infested and the mean severity of infestation m. The model studied was m = a(−ln(1 − P))^b, where a and b are empirical constants. Based on this model the constants a and b were derived and a table prepared relating the values of P and m. The predicted estimates of m from this equation are subject to bias, [ 90 ] and it is recommended that an adjusted mean (m_a) be used instead, [ 91 ] where the adjustment involves var, the variance of the sample unit means m_i, and m, the overall mean. An alternative adjustment to the mean estimates involves MSE, the mean square error of the regression. [ 91 ] This model may also be used to estimate stop lines for enumerative (sequential) sampling.
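A sketch of the Kono-Sugino relationship used in the estimation direction, from incidence to mean density; the coefficients are placeholders for values that would be fitted to real calibration data:

```python
# Sketch: estimate a mean density from presence/absence data with the
# Kono-Sugino relationship m = a * (-ln p0)^b, as reconstructed above.
import math

def mean_from_incidence(p0: float, a: float, b: float) -> float:
    """Mean density predicted from the proportion of empty units p0."""
    return a * (-math.log(p0)) ** b

a, b = 2.0, 1.2   # hypothetical coefficients fitted elsewhere
for p0 in (0.8, 0.5, 0.2, 0.05):
    print(f"p0={p0:0.2f} -> estimated mean {mean_from_incidence(p0, a, b):.2f}")
```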
The variance of the estimated means is given [ 92 ] in terms of MSE, the mean square error of the regression, α and β, the constant and slope of the regression respectively, s_β², the variance of the slope of the regression, N, the number of points in the regression, n, the number of sample units, and p, the mean value of p₀ in the regression. The parameters a and b are estimated from Taylor's law: s² = a m^b. Hughes and Madden have proposed testing a similar relationship applicable to binary observations in clusters, where each cluster contains from 0 to n individuals: [ 60 ] var_obs = a p^b (1 − p)^c, where a, b and c are constants, var_obs is the observed variance, and p is the proportion of individuals with a trait (such as disease), an estimate of the probability of an individual having that trait. In logarithmic form, this relationship is log var_obs = log a + b log p + c log(1 − p). In most cases, it is assumed that b = c, leading to the simple model var_obs = a[p(1 − p)]^b. This relationship has been subjected to less extensive testing than Taylor's law. However, it has accurately described over 100 data sets, and there are no published examples reporting that it does not work. [ 62 ] A variant of this equation was proposed by Shiyomi et al, [ 93 ] who suggested testing the regression var_obs = a[p(1 − p)/n]^b, where var_obs is the variance, a and b are the constants of the regression, n here is the sample size (not the number per cluster) and p is the probability of a sample containing at least one individual. A negative binomial model has also been proposed. [ 94 ] The dispersion parameter (k) using the method of moments is m²/(s² − m), and p_i is the proportion of samples with counts > 0. The s² used in the calculation of k are the values predicted by Taylor's law. p_i is plotted against 1 − (k(k + m)^{−1})^k, and the fit of the data is visually inspected. Perry and Taylor have proposed an alternative estimator of k based on Taylor's law. [ 95 ] A better estimate of the dispersion parameter can be made with the method of maximum likelihood . For the negative binomial it can be estimated from the equation [ 67 ] N ln(1 + m/k) = Σ_x A_x/(k + x), where A_x is the total number of samples with more than x individuals, N is the total number of samples, x is the number of individuals in a sample, m is the mean number of individuals per sample and k is the exponent. The value of k has to be estimated numerically. Goodness of fit of this model can be tested in a number of ways, including using the chi-square test. As these may be biased by small samples, an alternative is the U statistic - the difference between the variance expected under the negative binomial distribution and that of the sample. The expected variance of this distribution is m + m²/k, so U = s² − (m + m²/k), where s² is the sample variance, m is the sample mean and k is the negative binomial parameter. The variance of U is given [ 67 ] in terms of p = m/k, q = 1 + p, R = p/q and N, the total number of individuals in the sample. The expected value of U is 0. For large sample sizes U is distributed normally. Note: the negative binomial is actually a family of distributions defined by the relation of the mean to the variance σ² = μ + aμ^p, where a and p are constants. When a = 0 this defines the Poisson distribution. With p = 1 and p = 2, the distribution is known as the NB1 and NB2 distribution respectively. This model is a version of that proposed earlier by Bartlett. The dispersion parameter (k) [ 67 ] is k = m²/(s² − m), where m is the sample mean and s² is the variance.
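The maximum-likelihood equation for k, as reconstructed above, has to be solved numerically. A sketch using SciPy's root finder (SciPy assumed available; the counts are invented and the bracketing interval is a pragmatic choice):

```python
# Sketch: solve the maximum-likelihood equation for the negative
# binomial k, N*ln(1 + m/k) = sum_x A_x/(k + x).
import math
from scipy.optimize import brentq

counts = [0, 0, 1, 2, 0, 5, 9, 0, 3, 1, 0, 7, 2, 0, 4]
N = len(counts)
m = sum(counts) / N
max_count = max(counts)
# A_x = number of samples containing more than x individuals.
A = [sum(1 for c in counts if c > x) for x in range(max_count)]

def score(k: float) -> float:
    return N * math.log(1 + m / k) - sum(A[x] / (k + x)
                                         for x in range(max_count))

k_hat = brentq(score, 0.01, 100)   # root-find on a bracketing interval
print(f"ML estimate of k = {k_hat:.2f}")
```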
If k⁻¹ is > 0 the population is considered to be aggregated; if k⁻¹ = 0 the population is considered to be random; and if k⁻¹ is < 0 the population is considered to be uniformly distributed. Southwood has recommended regressing k against the mean and a constant, [ 76 ] k_i = a + b m_i, where k_i and m_i are the dispersion parameter and the mean of the i-th sample respectively, to test for the existence of a common dispersion parameter (k_c). A slope (b) value significantly > 0 indicates the dependence of k on the mean density. An alternative method was proposed by Elliot, who suggested plotting (s² − m) against (m² − s²/n). [ 96 ] k_c is equal to 1/slope of this regression. This coefficient (C) is defined as C = 100√(s² − m)/m. If the population can be assumed to be distributed in a negative binomial fashion, then C = 100 (1/k)^{0.5}, where k is the dispersion parameter of the distribution. This index (I_c) is defined as [ 97 ] I_c = s²/m. The usual interpretation of this index is as follows: values of I_c < 1, = 1, or > 1 are taken to mean a uniform distribution, a random distribution or an aggregated distribution, respectively. Because s² = (Σx² − (Σx)²/n)/(n − 1), the index can also be written in terms of the raw sums Σx and Σx². If Taylor's law can be assumed to hold, then I_c = a m^{b−1}. Lloyd's index of mean crowding (IMC) is the average number of other points contained in the sample unit that contains a randomly chosen point: [ 98 ] IMC = m + s²/m − 1, where m is the sample mean and s² is the variance. Lloyd's index of patchiness (IP) [ 98 ] is IP = IMC/m. It is a measure of pattern intensity that is unaffected by thinning (random removal of points). This index was also proposed by Pielou in 1988 and is sometimes known by that name as well. Because an estimate of the variance of IP is extremely difficult to obtain from the formula itself, Lloyd suggested fitting a negative binomial distribution to the data. This method gives a parameter k, from which the standard error of the index of patchiness, SE(IP), can be computed in terms of var(k), the variance of the parameter k, and q, the number of quadrats sampled. If the population obeys Taylor's law then IP = 1 + a m^{b−2} − 1/m. Iwao proposed a patchiness regression to test for clumping. [ 99 ] [ 100 ] Let y_i be Lloyd's index of mean crowding for the i-th sample, [ 98 ] y_i = m_i + s_i²/m_i − 1, and perform an ordinary least squares regression of y_i against m_i. In this regression the value of the slope (b) is an indicator of clumping: the slope = 1 if the data are Poisson-distributed. The constant (a) is the number of individuals that share a unit of habitat at infinitesimal density and may be < 0, 0 or > 0. These values represent regularity, randomness and aggregation of populations in spatial patterns respectively. A value of a < 1 is taken to mean that the basic unit of the distribution is a single individual. Where the statistic s²/m is not constant, it has been recommended instead to regress Lloyd's index against am + bm², where a and b are constants. [ 101 ] The sample size (n) for a given degree of precision (D) for this regression is given by [ 101 ] n = (t/D)²((a + 1)/m + b − 1), where a is the constant in this regression, b is the slope, m is the mean and t is the critical value of the t distribution. Iwao has proposed a sequential sampling test based on this regression. [ 102 ] The upper and lower limits of this test are based on critical densities m_c where control of a pest requires action to be taken: N_u = i m_c + t√(i((a + 1)m_c + (b − 1)m_c²)) and N_l = i m_c − t√(i((a + 1)m_c + (b − 1)m_c²)), where N_u and N_l are the upper and lower bounds respectively, a is the constant from the regression, b is the slope, i is the number of samples and t is the critical value of the t distribution.
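A sketch of Iwao's patchiness regression on synthetic negative binomial data (NumPy assumed); for a negative binomial with fixed k the regression should give an intercept near 0 and a slope near 1 + 1/k:

```python
# Sketch: Iwao's patchiness regression, i.e. an OLS fit of Lloyd's mean
# crowding index y_i = m_i + s_i^2/m_i - 1 against the mean m_i.
import numpy as np

rng = np.random.default_rng(5)

m_vals, y_vals = [], []
for mu in np.linspace(2, 40, 15):
    counts = rng.negative_binomial(3, 3 / (3 + mu), size=200)
    m = counts.mean()
    s2 = counts.var(ddof=1)
    m_vals.append(m)
    y_vals.append(m + s2 / m - 1)      # Lloyd's index of mean crowding

beta, alpha = np.polyfit(m_vals, y_vals, 1)
print(f"intercept alpha = {alpha:.2f}, slope beta = {beta:.2f}")
# slope > 1 suggests clumping; slope = 1 is consistent with Poisson.
```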
Kuno has proposed an alternative sequential stopping test, also based on this regression: [ 103 ] T_n = (a + 1)/(D² − (b − 1)/n), where T_n is the total sample size, D is the degree of precision, n is the number of sample units, and a and b are the constant and the slope from the regression respectively. Kuno's test is subject to the condition that n ≥ (b − 1)/D². Parrella and Jones have proposed an alternative but related stop line, [ 104 ] in which a and b are the parameters from the regression, N is the maximum number of sampled units and n is the individual sample size. Masaaki Morisita 's index of dispersion (I_m) is the scaled probability that two points chosen at random from the whole population are in the same sample. [ 105 ] Higher values indicate a more clumped distribution. An alternative formulation is I_m = n(Σx² − Σx)/((Σx)² − Σx), where n is the total sample size, m is the sample mean and the x are the individual values, with the sums taken over the whole sample. It is also equal to I_m = n·IMC/(nm − 1), where IMC is Lloyd's index of crowding. [ 98 ] This index is relatively independent of the population density but is affected by the sample size. Values > 1 indicate clumping; values < 1 indicate a uniformity of distribution; and a value of 1 indicates a random sample. Morisita showed that the statistic [ 105 ] I_m(Σx − 1) + n − Σx is distributed as a chi-squared variable with n − 1 degrees of freedom. An alternative significance test for this index has been developed for large samples, [ 106 ] in which m is the overall sample mean, n is the number of sample units and z is the normal distribution abscissa ; significance is tested by comparing the value of z against the values of the normal distribution . A function for its calculation is available in the statistical R language in the vegan package . Note: this index should not be confused with Morisita's overlap index . Smith-Gill developed a statistic based on Morisita's index which is independent of both sample size and population density and is bounded by −1 and +1. This statistic is calculated as follows. [ 107 ] First determine Morisita's index (I_d) in the usual fashion. Then let k be the number of units the population was sampled from. Calculate the two critical values M_u and M_c, where χ² is the chi-square value for n − 1 degrees of freedom at the 97.5% and 2.5% levels of confidence. The standardised index (I_p) is then calculated from one of four formulae, chosen according to whether I_d ≥ M_c > 1, M_c > I_d ≥ 1, 1 > I_d ≥ M_u, or 1 > M_u > I_d. I_p ranges between +1 and −1 with 95% confidence intervals of ±0.5. I_p has the value of 0 if the pattern is random; if the pattern is uniform, I_p < 0; and if the pattern shows aggregation, I_p > 0. Southwood's index of spatial aggregation (k) is defined as k = m/(m* − m), where m is the mean of the sample and m* is Lloyd's index of crowding. [ 76 ] Fisher's index of dispersion [ 108 ] [ 109 ] may be used to test for over-dispersion of the population. It is recommended that in applications n > 5 [ 110 ] and that the sample total divided by the number of samples is > 3. In symbols, the index is Σ(x − m)²/m, where x is an individual sample value. The expectation of the index is approximately n − 1, and it is distributed as the chi-square distribution with n − 1 degrees of freedom when the population is Poisson distributed. [ 110 ] It is equal to the scale parameter when the population obeys the gamma distribution . It can be applied both to the overall population and to the individual areas sampled individually. The use of this test on the individual sample areas should also include the use of a Bonferroni correction factor. If the population obeys Taylor's law then the index equals (n − 1)a m^{b−1}.
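A sketch computing Morisita's index and the chi-squared statistic reconstructed above (the counts are invented):

```python
# Sketch: Morisita's index of dispersion and its chi-squared statistic.
def morisita(counts):
    n = len(counts)
    sx = sum(counts)
    sx2 = sum(c * c for c in counts)
    i_m = n * (sx2 - sx) / (sx * sx - sx)       # index of dispersion
    chi2 = i_m * (sx - 1) + n - sx              # ~ chi-squared, n-1 df
    return i_m, chi2

clumped = [0, 0, 12, 1, 0, 15, 2, 0, 0, 9]
i_m, chi2 = morisita(clumped)
print(f"I_m = {i_m:.2f}, chi2 = {chi2:.1f} on {len(clumped) - 1} df")
```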
The index of cluster size (ICS) was created by David and Moore: [ 111 ] ICS = s²/m − 1, where s² is the variance and m is the mean. Under a random (Poisson) distribution the ICS is expected to equal 0. Positive values indicate a clumped distribution; negative values indicate a uniform distribution. If the population obeys Taylor's law, ICS = a m^{b−1} − 1. The ICS is also equal to Katz's test statistic divided by (n/2)^{1/2}, where n is the sample size. It is also related to Clapham's test statistic. It is also sometimes referred to as the clumping index. Green's index (GI) is a modification of the index of cluster size that is independent of n, the number of sample units: [ 112 ] GI = (s²/m − 1)/(nm − 1). This index equals 0 if the distribution is random, 1 if it is maximally aggregated, and −1/(nm − 1) if it is uniform. The distribution of Green's index is not currently known, so statistical tests have been difficult to devise for it. If the population obeys Taylor's law, GI = (a m^{b−1} − 1)/(nm − 1). Binary sampling (presence/absence) is frequently used where it is difficult to obtain accurate counts. The dispersal index (D) is used when the study population is divided into a series of equal samples (number of units = N; number of units per sample = n; total population size = n × N). [ 113 ] The theoretical variance of a sample from a population with a binomial distribution is var_bin = np(1 − p), where n is the number of units sampled and p is the mean proportion of sampling units with at least one individual present. The dispersal index (D) is defined as the ratio of observed variance to the expected variance; in symbols, D = var_obs/var_bin, where var_obs is the observed variance and var_bin is the expected variance. The expected variance is calculated with the overall mean of the population. Values of D > 1 are considered to suggest aggregation. D(n − 1) is distributed as the chi-squared variable with n − 1 degrees of freedom, where n is the number of units sampled. An alternative test is the C test, [ 114 ] in which D is the dispersal index, n is the number of units per sample and N is the number of samples; C is distributed normally, and a statistically significant value of C indicates overdispersion of the population. D is also related to the intraclass correlation (ρ), which is defined in terms of T, the number of organisms per sample, p, the likelihood of the organism having the sought-after property (diseased, pest-free, etc.), and x_i, the number of organisms in the i-th unit with this property; T must be the same for all sampled units. In this case, with n constant, D = 1 + (n − 1)ρ. If the data can be fitted with a beta-binomial distribution then [ 115 ] ρ = θ/(1 + θ), where θ is the parameter of the distribution. [ 114 ] Ma has proposed a parameter (m₀) - the population aggregation critical density - to relate population density to Taylor's law. [ 116 ] A number of statistical tests are known that may be of use in applications. A related statistic suggested by de Oliveria [ 117 ] is the difference of the variance and the mean. [ 118 ] If the population is Poisson distributed then O_T = (s² − m)/√(2t²/(n − 1)), where t is the Poisson parameter, s² is the variance, m is the mean and n is the sample size. The expected value of s² − m is zero. This statistic is distributed normally. [ 119 ] If the Poisson parameter in this equation is estimated by putting t = m, after a little manipulation this statistic can be written O_T = √((n − 1)/2)(s²/m − 1). This is almost identical to Katz's statistic with (n − 1) replacing n. Again, O_T is normally distributed with mean 0 and unit variance for large n.
This statistic is the same as the Neyman-Scott statistic. de Oliveria actually suggested that the variance of s² − m was (1 − 2t^{1/2} + 3t)/n, where t is the Poisson parameter. He suggested that t could be estimated by putting it equal to the mean (m) of the sample. Further investigation by Bohning [ 118 ] showed that this estimate of the variance was incorrect. Bohning's correction is given in the equations above. In 1936 Clapham proposed using the ratio of the variance to the mean as a test statistic (the relative variance): [ 120 ] in symbols, θ = s²/m. For a Poisson distribution this ratio equals 1. To test for deviations from this value he proposed testing its value against the chi-squared distribution with n degrees of freedom, where n is the number of sample units. The distribution of this statistic was studied further by Blackman, [ 121 ] who noted that it was approximately normally distributed with a mean of 1 and a variance (V_θ) for which he gave an expression. The derivation of the variance was re-analysed by Bartlett, [ 122 ] who arrived at a slightly different form; for large samples these two formulae are in approximate agreement. This test is related to the later Katz's J_n statistic. If the population obeys Taylor's law then θ = a m^{b−1}. A refinement on this test has also been published. [ 123 ] These authors noted that the original test tends to detect overdispersion at higher scales even when this was not present in the data. They noted that the use of the multinomial distribution may be more appropriate than the use of a Poisson distribution for such data. The statistic θ is then given a distribution in terms of N, the number of sample units, n, the total number of samples examined, and x_i, the individual data values, together with expressions for the expectation and variance of θ. For large N, E(θ) is approximately 1, and if the number of individuals sampled (n) is large this estimate of the variance is in agreement with those derived earlier. However, for smaller samples these latter estimates are more precise and should be used.
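Several of the indices discussed above reduce to simple functions of the sample mean and variance; a combined helper makes this concrete (the counts are invented):

```python
# Sketch: Clapham's relative variance, the index of cluster size, and
# Green's index computed from one set of counts, using the formulas
# reconstructed above.
def dispersion_indices(counts):
    n = len(counts)
    m = sum(counts) / n
    s2 = sum((c - m) ** 2 for c in counts) / (n - 1)
    theta = s2 / m                 # relative variance; 1 under Poisson
    ics = theta - 1                # index of cluster size; 0 under Poisson
    gi = ics / (n * m - 1)         # Green's index; 0 random, 1 maximal
    return theta, ics, gi

print(dispersion_indices([0, 0, 12, 1, 0, 15, 2, 0, 0, 9]))
```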
https://en.wikipedia.org/wiki/Taylor's_law
In calculus , Taylor's theorem gives an approximation of a k {\textstyle k} -times differentiable function around a given point by a polynomial of degree k {\textstyle k} , called the k {\textstyle k} -th-order Taylor polynomial . For a smooth function , the Taylor polynomial is the truncation at the order k {\textstyle k} of the Taylor series of the function. The first-order Taylor polynomial is the linear approximation of the function, and the second-order Taylor polynomial is often referred to as the quadratic approximation . [ 1 ] There are several versions of Taylor's theorem, some giving explicit estimates of the approximation error of the function by its Taylor polynomial. Taylor's theorem is named after the mathematician Brook Taylor , who stated a version of it in 1715, [ 2 ] although an earlier version of the result was already mentioned in 1671 by James Gregory . [ 3 ] Taylor's theorem is taught in introductory-level calculus courses and is one of the central elementary tools in mathematical analysis . It gives simple arithmetic formulas to accurately compute values of many transcendental functions such as the exponential function and trigonometric functions . It is the starting point of the study of analytic functions , and is fundamental in various areas of mathematics, as well as in numerical analysis and mathematical physics . Taylor's theorem also generalizes to multivariate and vector valued functions. It provided the mathematical basis for some landmark early computing machines: Charles Babbage 's Difference Engine calculated sines, cosines, logarithms, and other transcendental functions by numerically integrating the first 7 terms of their Taylor series. If a real-valued function f ( x ) {\textstyle f(x)} is differentiable at the point x = a {\textstyle x=a} , then it has a linear approximation near this point. This means that there exists a function h 1 ( x ) such that f ( x ) = f ( a ) + f ′ ( a ) ( x − a ) + h 1 ( x ) ( x − a ) , lim x → a h 1 ( x ) = 0. {\displaystyle f(x)=f(a)+f'(a)(x-a)+h_{1}(x)(x-a),\quad \lim _{x\to a}h_{1}(x)=0.} Here P 1 ( x ) = f ( a ) + f ′ ( a ) ( x − a ) {\displaystyle P_{1}(x)=f(a)+f'(a)(x-a)} is the linear approximation of f ( x ) {\textstyle f(x)} for x near the point a , whose graph y = P 1 ( x ) {\textstyle y=P_{1}(x)} is the tangent line to the graph y = f ( x ) {\textstyle y=f(x)} at x = a . The error in the approximation is: R 1 ( x ) = f ( x ) − P 1 ( x ) = h 1 ( x ) ( x − a ) . {\displaystyle R_{1}(x)=f(x)-P_{1}(x)=h_{1}(x)(x-a).} As x tends to a, this error goes to zero much faster than ( x − a ) {\displaystyle (x-a)} , making f ( x ) ≈ P 1 ( x ) {\displaystyle f(x)\approx P_{1}(x)} a useful approximation. For a better approximation to f ( x ) {\textstyle f(x)} , we can fit a quadratic polynomial instead of a linear function: P 2 ( x ) = f ( a ) + f ′ ( a ) ( x − a ) + f ″ ( a ) 2 ( x − a ) 2 . {\displaystyle P_{2}(x)=f(a)+f'(a)(x-a)+{\frac {f''(a)}{2}}(x-a)^{2}.} Instead of just matching one derivative of f ( x ) {\textstyle f(x)} at x = a {\textstyle x=a} , this polynomial has the same first and second derivatives, as is evident upon differentiation. Taylor's theorem ensures that the quadratic approximation is, in a sufficiently small neighborhood of x = a {\textstyle x=a} , more accurate than the linear approximation. Specifically, f ( x ) = P 2 ( x ) + h 2 ( x ) ( x − a ) 2 , lim x → a h 2 ( x ) = 0. 
{\displaystyle f(x)=P_{2}(x)+h_{2}(x)(x-a)^{2},\quad \lim _{x\to a}h_{2}(x)=0.} Here the error in the approximation is R 2 ( x ) = f ( x ) − P 2 ( x ) = h 2 ( x ) ( x − a ) 2 , {\displaystyle R_{2}(x)=f(x)-P_{2}(x)=h_{2}(x)(x-a)^{2},} which, given the limiting behavior of h 2 {\displaystyle h_{2}} , goes to zero faster than ( x − a ) 2 {\displaystyle (x-a)^{2}} as x tends to a . Similarly, we might get still better approximations to f if we use polynomials of higher degree, since then we can match even more derivatives with f at the selected base point. In general, the error in approximating a function by a polynomial of degree k will go to zero much faster than ( x − a ) k {\displaystyle (x-a)^{k}} as x tends to a . However, there are functions, even infinitely differentiable ones, for which increasing the degree of the approximating polynomial does not increase the accuracy of approximation: we say such a function fails to be analytic at x = a : it is not (locally) determined by its derivatives at this point. Taylor's theorem is of asymptotic nature: it only tells us that the error R k {\textstyle R_{k}} in an approximation by a k {\textstyle k} -th order Taylor polynomial P k tends to zero faster than any nonzero k {\textstyle k} -th degree polynomial as x → a {\textstyle x\to a} . It does not tell us how large the error is in any concrete neighborhood of the center of expansion, but for this purpose there are explicit formulas for the remainder term (given below) which are valid under some additional regularity assumptions on f . These enhanced versions of Taylor's theorem typically lead to uniform estimates for the approximation error in a small neighborhood of the center of expansion, but the estimates do not necessarily hold for neighborhoods which are too large, even if the function f is analytic . In that situation one may have to select several Taylor polynomials with different centers of expansion to have reliable Taylor-approximations of the original function (see animation on the right.) There are several ways we might use the remainder term: The precise statement of the most basic version of Taylor's theorem is as follows: Taylor's theorem [ 4 ] [ 5 ] [ 6 ] — Let k ≥ 1 be an integer and let the function f : R → R be k times differentiable at the point a ∈ R . Then there exists a function h k : R → R such that f ( x ) = ∑ i = 0 k f ( i ) ( a ) i ! ( x − a ) i + h k ( x ) ( x − a ) k , {\displaystyle f(x)=\sum _{i=0}^{k}{\frac {f^{(i)}(a)}{i!}}(x-a)^{i}+h_{k}(x)(x-a)^{k},} and lim x → a h k ( x ) = 0. {\displaystyle \lim _{x\to a}h_{k}(x)=0.} This is called the Peano form of the remainder . The polynomial appearing in Taylor's theorem is the k {\textstyle {\boldsymbol {k}}} -th order Taylor polynomial P k ( x ) = f ( a ) + f ′ ( a ) ( x − a ) + f ″ ( a ) 2 ! ( x − a ) 2 + ⋯ + f ( k ) ( a ) k ! ( x − a ) k {\displaystyle P_{k}(x)=f(a)+f'(a)(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+\cdots +{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}} of the function f at the point a . The Taylor polynomial is the unique "asymptotic best fit" polynomial in the sense that if there exists a function h k : R → R and a k {\textstyle k} -th order polynomial p such that f ( x ) = p ( x ) + h k ( x ) ( x − a ) k , lim x → a h k ( x ) = 0 , {\displaystyle f(x)=p(x)+h_{k}(x)(x-a)^{k},\quad \lim _{x\to a}h_{k}(x)=0,} then p = P k . 
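Both the construction of P_k and the Peano remainder property can be checked numerically. A sketch using SymPy (assumed available); the function sin x and the expansion point 0 are arbitrary choices:

```python
# Sketch: build the k-th order Taylor polynomial of a function at a point
# and watch the Peano remainder vanish faster than (x - a)^k.
import sympy as sp

x = sp.symbols("x")
f = sp.sin(x)
a, k = 0, 3

# P_k(x) = sum of f^(i)(a)/i! * (x - a)^i for i = 0..k
P = sum(sp.diff(f, x, i).subs(x, a) / sp.factorial(i) * (x - a) ** i
        for i in range(k + 1))
print(sp.expand(P))               # x - x**3/6 for sin at 0, k = 3

for h in (0.1, 0.01, 0.001):      # remainder / h^k -> 0 as h -> 0
    r = float((f - P).subs(x, a + h))
    print(h, r / h ** k)
```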
Taylor's theorem describes the asymptotic behavior of the remainder term R k ( x ) = f ( x ) − P k ( x ) , {\displaystyle R_{k}(x)=f(x)-P_{k}(x),} which is the approximation error when approximating f with its Taylor polynomial. Using the little-o notation , the statement in Taylor's theorem reads as R k ( x ) = o ( | x − a | k ) , x → a . {\displaystyle R_{k}(x)=o(|x-a|^{k}),\quad x\to a.} Under stronger regularity assumptions on f there are several precise formulas for the remainder term R k of the Taylor polynomial, the most common ones being the following. Mean-value forms of the remainder — Let f : R → R be k + 1 times differentiable on the open interval between a {\textstyle a} and x {\textstyle x} with f ( k ) continuous on the closed interval between a {\textstyle a} and x {\textstyle x} . [ 7 ] Then R k ( x ) = f ( k + 1 ) ( ξ L ) ( k + 1 ) ! ( x − a ) k + 1 {\displaystyle R_{k}(x)={\frac {f^{(k+1)}(\xi _{L})}{(k+1)!}}(x-a)^{k+1}} for some real number ξ L {\textstyle \xi _{L}} between a {\textstyle a} and x {\textstyle x} . This is the Lagrange form [ 8 ] of the remainder. Similarly, R k ( x ) = f ( k + 1 ) ( ξ C ) k ! ( x − ξ C ) k ( x − a ) {\displaystyle R_{k}(x)={\frac {f^{(k+1)}(\xi _{C})}{k!}}(x-\xi _{C})^{k}(x-a)} for some real number ξ C {\textstyle \xi _{C}} between a {\textstyle a} and x {\textstyle x} . This is the Cauchy form [ 9 ] of the remainder. Both can be thought of as specific cases of the following result: Consider p > 0 {\displaystyle p>0} R k ( x ) = f ( k + 1 ) ( ξ S ) k ! ( x − ξ S ) k + 1 − p ( x − a ) p p {\displaystyle R_{k}(x)={\frac {f^{(k+1)}(\xi _{S})}{k!}}(x-\xi _{S})^{k+1-p}{\frac {(x-a)^{p}}{p}}} for some real number ξ S {\textstyle \xi _{S}} between a {\textstyle a} and x {\textstyle x} . This is the Schlömilch form of the remainder (sometimes called the Schlömilch- Roche ). The choice p = k + 1 {\textstyle p=k+1} is the Lagrange form, whilst the choice p = 1 {\textstyle p=1} is the Cauchy form. These refinements of Taylor's theorem are usually proved using the mean value theorem , whence the name. Additionally, notice that this is precisely the mean value theorem when k = 0 {\textstyle k=0} . Also other similar expressions can be found. For example, if G ( t ) is continuous on the closed interval and differentiable with a non-vanishing derivative on the open interval between a {\textstyle a} and x {\textstyle x} , then R k ( x ) = f ( k + 1 ) ( ξ ) k ! ( x − ξ ) k G ( x ) − G ( a ) G ′ ( ξ ) {\displaystyle R_{k}(x)={\frac {f^{(k+1)}(\xi )}{k!}}(x-\xi )^{k}{\frac {G(x)-G(a)}{G'(\xi )}}} for some number ξ {\textstyle \xi } between a {\textstyle a} and x {\textstyle x} . This version covers the Lagrange and Cauchy forms of the remainder as special cases, and is proved below using Cauchy's mean value theorem . The Lagrange form is obtained by taking G ( t ) = ( x − t ) k + 1 {\displaystyle G(t)=(x-t)^{k+1}} and the Cauchy form is obtained by taking G ( t ) = t − a {\displaystyle G(t)=t-a} . The statement for the integral form of the remainder is more advanced than the previous ones, and requires understanding of Lebesgue integration theory for the full generality. However, it holds also in the sense of Riemann integral provided the ( k + 1)th derivative of f is continuous on the closed interval [ a , x ]. Integral form of the remainder [ 10 ] — Let f ( k ) {\textstyle f^{(k)}} be absolutely continuous on the closed interval between a {\textstyle a} and x {\textstyle x} . Then R k ( x ) = ∫ a x f ( k + 1 ) ( t ) k ! ( x − t ) k d t . 
{\displaystyle R_{k}(x)=\int _{a}^{x}{\frac {f^{(k+1)}(t)}{k!}}(x-t)^{k}\,dt.} Due to the absolute continuity of f ( k ) on the closed interval between a {\textstyle a} and x {\textstyle x} , its derivative f ( k +1) exists as an L 1 -function, and the result can be proven by a formal calculation using the fundamental theorem of calculus and integration by parts . It is often useful in practice to be able to estimate the remainder term appearing in the Taylor approximation, rather than having an exact formula for it. Suppose that f is ( k + 1) -times continuously differentiable in an interval I containing a . Suppose that there are real constants q and Q such that q ≤ f ( k + 1 ) ( x ) ≤ Q {\displaystyle q\leq f^{(k+1)}(x)\leq Q} throughout I . Then the remainder term satisfies the inequality [ 11 ] q ( x − a ) k + 1 ( k + 1 ) ! ≤ R k ( x ) ≤ Q ( x − a ) k + 1 ( k + 1 ) ! , {\displaystyle q{\frac {(x-a)^{k+1}}{(k+1)!}}\leq R_{k}(x)\leq Q{\frac {(x-a)^{k+1}}{(k+1)!}},} if x > a , and a similar estimate if x < a . This is a simple consequence of the Lagrange form of the remainder. In particular, if | f ( k + 1 ) ( x ) | ≤ M {\displaystyle |f^{(k+1)}(x)|\leq M} on an interval I = ( a − r , a + r ) with some r > 0 {\displaystyle r>0} , then | R k ( x ) | ≤ M | x − a | k + 1 ( k + 1 ) ! ≤ M r k + 1 ( k + 1 ) ! {\displaystyle |R_{k}(x)|\leq M{\frac {|x-a|^{k+1}}{(k+1)!}}\leq M{\frac {r^{k+1}}{(k+1)!}}} for all x ∈( a − r , a + r ). The second inequality is called a uniform estimate , because it holds uniformly for all x on the interval ( a − r , a + r ). Suppose that we wish to find the approximate value of the function f ( x ) = e x {\textstyle f(x)=e^{x}} on the interval [ − 1 , 1 ] {\textstyle [-1,1]} while ensuring that the error in the approximation is no more than 10 −5 . In this example we pretend that we only know the following properties of the exponential function: From these properties it follows that f ( k ) ( x ) = e x {\textstyle f^{(k)}(x)=e^{x}} for all k {\textstyle k} , and in particular, f ( k ) ( 0 ) = 1 {\textstyle f^{(k)}(0)=1} . Hence the k {\textstyle k} -th order Taylor polynomial of f {\textstyle f} at 0 {\textstyle 0} and its remainder term in the Lagrange form are given by P k ( x ) = 1 + x + x 2 2 ! + ⋯ + x k k ! , R k ( x ) = e ξ ( k + 1 ) ! x k + 1 , {\displaystyle P_{k}(x)=1+x+{\frac {x^{2}}{2!}}+\cdots +{\frac {x^{k}}{k!}},\qquad R_{k}(x)={\frac {e^{\xi }}{(k+1)!}}x^{k+1},} where ξ {\textstyle \xi } is some number between 0 and x . Since e x is increasing by ( ★ ), we can simply use e x ≤ 1 {\textstyle e^{x}\leq 1} for x ∈ [ − 1 , 0 ] {\textstyle x\in [-1,0]} to estimate the remainder on the subinterval [ − 1 , 0 ] {\displaystyle [-1,0]} . To obtain an upper bound for the remainder on [ 0 , 1 ] {\displaystyle [0,1]} , we use the property e ξ < e x {\textstyle e^{\xi }<e^{x}} for 0 < ξ < x {\textstyle 0<\xi <x} to estimate e x = 1 + x + e ξ 2 x 2 < 1 + x + e x 2 x 2 , 0 < x ≤ 1 {\displaystyle e^{x}=1+x+{\frac {e^{\xi }}{2}}x^{2}<1+x+{\frac {e^{x}}{2}}x^{2},\qquad 0<x\leq 1} using the second order Taylor expansion. Then we solve for e x to deduce that e x ≤ 1 + x 1 − x 2 2 = 2 1 + x 2 − x 2 ≤ 4 , 0 ≤ x ≤ 1 {\displaystyle e^{x}\leq {\frac {1+x}{1-{\frac {x^{2}}{2}}}}=2{\frac {1+x}{2-x^{2}}}\leq 4,\qquad 0\leq x\leq 1} simply by maximizing the numerator and minimizing the denominator . Combining these estimates for e x we see that | R k ( x ) | ≤ 4 | x | k + 1 ( k + 1 ) ! ≤ 4 ( k + 1 ) ! 
, − 1 ≤ x ≤ 1 , {\displaystyle |R_{k}(x)|\leq {\frac {4|x|^{k+1}}{(k+1)!}}\leq {\frac {4}{(k+1)!}},\qquad -1\leq x\leq 1,} so the required precision is certainly reached, when 4 ( k + 1 ) ! < 10 − 5 ⟺ 4 ⋅ 10 5 < ( k + 1 ) ! ⟺ k ≥ 9. {\displaystyle {\frac {4}{(k+1)!}}<10^{-5}\quad \Longleftrightarrow \quad 4\cdot 10^{5}<(k+1)!\quad \Longleftrightarrow \quad k\geq 9.} (See factorial or compute by hand the values 9 ! = 362880 {\textstyle 9!=362880} and 10 ! = 3628800 {\textstyle 10!=3628800} .) As a conclusion, Taylor's theorem leads to the approximation e x = 1 + x + x 2 2 ! + ⋯ + x 9 9 ! + R 9 ( x ) , | R 9 ( x ) | < 10 − 5 , − 1 ≤ x ≤ 1. {\displaystyle e^{x}=1+x+{\frac {x^{2}}{2!}}+\cdots +{\frac {x^{9}}{9!}}+R_{9}(x),\qquad |R_{9}(x)|<10^{-5},\qquad -1\leq x\leq 1.} For instance, this approximation provides a decimal expression e ≈ 2.71828 {\displaystyle e\approx 2.71828} , correct up to five decimal places. Let I ⊂ R be an open interval . By definition, a function f : I → R is real analytic if it is locally defined by a convergent power series . This means that for every a ∈ I there exists some r > 0 and a sequence of coefficients c k ∈ R such that ( a − r , a + r ) ⊂ I and f ( x ) = ∑ k = 0 ∞ c k ( x − a ) k = c 0 + c 1 ( x − a ) + c 2 ( x − a ) 2 + ⋯ , | x − a | < r . {\displaystyle f(x)=\sum _{k=0}^{\infty }c_{k}(x-a)^{k}=c_{0}+c_{1}(x-a)+c_{2}(x-a)^{2}+\cdots ,\qquad |x-a|<r.} In general, the radius of convergence of a power series can be computed from the Cauchy–Hadamard formula 1 R = lim sup k → ∞ | c k | 1 k . {\displaystyle {\frac {1}{R}}=\limsup _{k\to \infty }|c_{k}|^{\frac {1}{k}}.} This result is based on comparison with a geometric series , and the same method shows that if the power series based on a converges for some b ∈ R , it must converge uniformly on the closed interval [ a − r b , a + r b ] {\textstyle [a-r_{b},a+r_{b}]} , where r b = | b − a | {\textstyle r_{b}=\left\vert b-a\right\vert } . Here only the convergence of the power series is considered, and it might well be that ( a − R , a + R ) extends beyond the domain I of f . The Taylor polynomials of the real analytic function f at a are simply the finite truncations P k ( x ) = ∑ j = 0 k c j ( x − a ) j , c j = f ( j ) ( a ) j ! {\displaystyle P_{k}(x)=\sum _{j=0}^{k}c_{j}(x-a)^{j},\qquad c_{j}={\frac {f^{(j)}(a)}{j!}}} of its locally defining power series, and the corresponding remainder terms are locally given by the analytic functions R k ( x ) = ∑ j = k + 1 ∞ c j ( x − a ) j = ( x − a ) k h k ( x ) , | x − a | < r . {\displaystyle R_{k}(x)=\sum _{j=k+1}^{\infty }c_{j}(x-a)^{j}=(x-a)^{k}h_{k}(x),\qquad |x-a|<r.} Here the functions h k : ( a − r , a + r ) → R h k ( x ) = ( x − a ) ∑ j = 0 ∞ c k + 1 + j ( x − a ) j {\displaystyle {\begin{aligned}&h_{k}:(a-r,a+r)\to \mathbb {R} \\[1ex]&h_{k}(x)=(x-a)\sum _{j=0}^{\infty }c_{k+1+j}\left(x-a\right)^{j}\end{aligned}}} are also analytic, since their defining power series have the same radius of convergence as the original series. Assuming that [ a − r , a + r ] ⊂ I and r < R , all these series converge uniformly on ( a − r , a + r ) . Naturally, in the case of analytic functions one can estimate the remainder term R k ( x ) {\textstyle R_{k}(x)} by the tail of the sequence of the derivatives f′ ( a ) at the center of the expansion, but using complex analysis also another possibility arises, which is described below . The Taylor series of f will converge in some interval in which all its derivatives are bounded and do not grow too fast as k goes to infinity. 
(However, even if the Taylor series converges, it might not converge to f , as explained below; f is then said to be non- analytic .) One might think of the Taylor series f ( x ) ≈ ∑ k = 0 ∞ c k ( x − a ) k = c 0 + c 1 ( x − a ) + c 2 ( x − a ) 2 + ⋯ {\displaystyle f(x)\approx \sum _{k=0}^{\infty }c_{k}(x-a)^{k}=c_{0}+c_{1}(x-a)+c_{2}(x-a)^{2}+\cdots } of an infinitely many times differentiable function f : R → R as its "infinite order Taylor polynomial" at a . Now the estimates for the remainder imply that if, for any r , the derivatives of f are known to be bounded over ( a − r , a + r ), then for any order k and for any r > 0 there exists a constant M k,r > 0 such that for every x ∈ ( a − r , a + r ). Sometimes the constants M k,r can be chosen in such way that M k,r is bounded above, for fixed r and all k . Then the Taylor series of f converges uniformly to some analytic function T f : ( a − r , a + r ) → R T f ( x ) = ∑ k = 0 ∞ f ( k ) ( a ) k ! ( x − a ) k {\displaystyle {\begin{aligned}&T_{f}:(a-r,a+r)\to \mathbb {R} \\&T_{f}(x)=\sum _{k=0}^{\infty }{\frac {f^{(k)}(a)}{k!}}\left(x-a\right)^{k}\end{aligned}}} (One also gets convergence even if M k,r is not bounded above as long as it grows slowly enough.) The limit function T f is by definition always analytic, but it is not necessarily equal to the original function f , even if f is infinitely differentiable. In this case, we say f is a non-analytic smooth function , for example a flat function : f : R → R f ( x ) = { e − 1 x 2 x > 0 0 x ≤ 0. {\displaystyle {\begin{aligned}&f:\mathbb {R} \to \mathbb {R} \\&f(x)={\begin{cases}e^{-{\frac {1}{x^{2}}}}&x>0\\0&x\leq 0.\end{cases}}\end{aligned}}} Using the chain rule repeatedly by mathematical induction , one shows that for any order k , f ( k ) ( x ) = { p k ( x ) x 3 k ⋅ e − 1 x 2 x > 0 0 x ≤ 0 {\displaystyle f^{(k)}(x)={\begin{cases}{\frac {p_{k}(x)}{x^{3k}}}\cdot e^{-{\frac {1}{x^{2}}}}&x>0\\0&x\leq 0\end{cases}}} for some polynomial p k of degree 2( k − 1). The function e − 1 x 2 {\displaystyle e^{-{\frac {1}{x^{2}}}}} tends to zero faster than any polynomial as x → 0 {\textstyle x\to 0} , so f is infinitely many times differentiable and f ( k ) (0) = 0 for every positive integer k . The above results all hold in this case: However, as k increases for fixed r , the value of M k,r grows more quickly than r k , and the error does not go to zero . Taylor's theorem generalizes to functions f : C → C which are complex differentiable in an open subset U ⊂ C of the complex plane . However, its usefulness is dwarfed by other general theorems in complex analysis . Namely, stronger versions of related results can be deduced for complex differentiable functions f : U → C using Cauchy's integral formula as follows. Let r > 0 such that the closed disk B ( z , r ) ∪ S ( z , r ) is contained in U . Then Cauchy's integral formula with a positive parametrization γ ( t ) = z + re it of the circle S ( z , r ) with t ∈ [ 0 , 2 π ] {\displaystyle t\in [0,2\pi ]} gives f ( z ) = 1 2 π i ∫ γ f ( w ) w − z d w , f ′ ( z ) = 1 2 π i ∫ γ f ( w ) ( w − z ) 2 d w , … , f ( k ) ( z ) = k ! 2 π i ∫ γ f ( w ) ( w − z ) k + 1 d w . {\displaystyle f(z)={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{w-z}}\,dw,\quad f'(z)={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{(w-z)^{2}}}\,dw,\quad \ldots ,\quad f^{(k)}(z)={\frac {k!}{2\pi i}}\int _{\gamma }{\frac {f(w)}{(w-z)^{k+1}}}\,dw.} Here all the integrands are continuous on the circle S ( z , r ), which justifies differentiation under the integral sign. 
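The contour-integral formulas above are easy to check numerically. The following sketch (an illustration written for this article, assuming Python with numpy; the sample function and parameters are arbitrary) approximates the integral for f^(k)(z) by the trapezoidal rule on the circle S(z, r), which is very accurate for periodic integrands:

import math
import numpy as np

def cauchy_derivative(f, z, k, r=1.0, n=256):
    # nodes on the positively oriented circle w = z + r*exp(i t)
    t = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    w = z + r * np.exp(1j * t)
    dw = 1j * r * np.exp(1j * t)                      # dw/dt
    integral = np.mean(f(w) / (w - z)**(k + 1) * dw) * 2.0 * np.pi
    return math.factorial(k) / (2j * np.pi) * integral

# the third derivative of exp at z = 0 should equal exp(0) = 1
print(cauchy_derivative(np.exp, 0.0, 3))              # ~ (1+0j)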
In particular, if f is once complex differentiable on the open set U , then it is actually infinitely many times complex differentiable on U . One also obtains Cauchy's estimate [ 12 ] | f ( k ) ( z ) | ≤ k ! 2 π ∫ γ M r | w − z | k + 1 d w = k ! M r r k , M r = max | w − c | = r | f ( w ) | {\displaystyle |f^{(k)}(z)|\leq {\frac {k!}{2\pi }}\int _{\gamma }{\frac {M_{r}}{|w-z|^{k+1}}}\,dw={\frac {k!M_{r}}{r^{k}}},\quad M_{r}=\max _{|w-c|=r}|f(w)|} for any z ∈ U and r > 0 such that B ( z , r ) ∪ S ( c , r ) ⊂ U . The estimate implies that the complex Taylor series T f ( z ) = ∑ k = 0 ∞ f ( k ) ( c ) k ! ( z − c ) k {\displaystyle T_{f}(z)=\sum _{k=0}^{\infty }{\frac {f^{(k)}(c)}{k!}}(z-c)^{k}} of f converges uniformly on any open disk B ( c , r ) ⊂ U {\textstyle B(c,r)\subset U} with S ( c , r ) ⊂ U {\textstyle S(c,r)\subset U} into some function T f . Furthermore, using the contour integral formulas for the derivatives f ( k ) ( c ), T f ( z ) = ∑ k = 0 ∞ ( z − c ) k 2 π i ∫ γ f ( w ) ( w − c ) k + 1 d w = 1 2 π i ∫ γ f ( w ) w − c ∑ k = 0 ∞ ( z − c w − c ) k d w = 1 2 π i ∫ γ f ( w ) w − c ( 1 1 − z − c w − c ) d w = 1 2 π i ∫ γ f ( w ) w − z d w = f ( z ) , {\displaystyle {\begin{aligned}T_{f}(z)&=\sum _{k=0}^{\infty }{\frac {(z-c)^{k}}{2\pi i}}\int _{\gamma }{\frac {f(w)}{(w-c)^{k+1}}}\,dw\\&={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{w-c}}\sum _{k=0}^{\infty }\left({\frac {z-c}{w-c}}\right)^{k}\,dw\\&={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{w-c}}\left({\frac {1}{1-{\frac {z-c}{w-c}}}}\right)\,dw\\&={\frac {1}{2\pi i}}\int _{\gamma }{\frac {f(w)}{w-z}}\,dw\\&=f(z),\end{aligned}}} so any complex differentiable function f in an open set U ⊂ C is in fact complex analytic . All that is said for real analytic functions here holds also for complex analytic functions with the open interval I replaced by an open subset U ∈ C and a -centered intervals ( a − r , a + r ) replaced by c -centered disks B ( c , r ). In particular, the Taylor expansion holds in the form f ( z ) = P k ( z ) + R k ( z ) , P k ( z ) = ∑ j = 0 k f ( j ) ( c ) j ! ( z − c ) j , {\displaystyle f(z)=P_{k}(z)+R_{k}(z),\quad P_{k}(z)=\sum _{j=0}^{k}{\frac {f^{(j)}(c)}{j!}}(z-c)^{j},} where the remainder term R k is complex analytic. Methods of complex analysis provide some powerful results regarding Taylor expansions. For example, using Cauchy's integral formula for any positively oriented Jordan curve γ {\textstyle \gamma } which parametrizes the boundary ∂ W ⊂ U {\textstyle \partial W\subset U} of a region W ⊂ U {\textstyle W\subset U} , one obtains expressions for the derivatives f ( j ) ( c ) as above, and modifying slightly the computation for T f ( z ) = f ( z ) , one arrives at the exact formula R k ( z ) = ∑ j = k + 1 ∞ ( z − c ) j 2 π i ∫ γ f ( w ) ( w − c ) j + 1 d w = ( z − c ) k + 1 2 π i ∫ γ f ( w ) d w ( w − c ) k + 1 ( w − z ) , z ∈ W . {\displaystyle R_{k}(z)=\sum _{j=k+1}^{\infty }{\frac {(z-c)^{j}}{2\pi i}}\int _{\gamma }{\frac {f(w)}{(w-c)^{j+1}}}\,dw={\frac {(z-c)^{k+1}}{2\pi i}}\int _{\gamma }{\frac {f(w)\,dw}{(w-c)^{k+1}(w-z)}},\qquad z\in W.} The important feature here is that the quality of the approximation by a Taylor polynomial on the region W ⊂ U {\textstyle W\subset U} is dominated by the values of the function f itself on the boundary ∂ W ⊂ U {\textstyle \partial W\subset U} . 
Similarly, applying Cauchy's estimates to the series expression for the remainder, one obtains the uniform estimates | R k ( z ) | ≤ ∑ j = k + 1 ∞ M r | z − c | j r j = M r r k + 1 | z − c | k + 1 1 − | z − c | r ≤ M r β k + 1 1 − β , | z − c | r ≤ β < 1. {\displaystyle |R_{k}(z)|\leq \sum _{j=k+1}^{\infty }{\frac {M_{r}|z-c|^{j}}{r^{j}}}={\frac {M_{r}}{r^{k+1}}}{\frac {|z-c|^{k+1}}{1-{\frac {|z-c|}{r}}}}\leq {\frac {M_{r}\beta ^{k+1}}{1-\beta }},\qquad {\frac {|z-c|}{r}}\leq \beta <1.} The function f : R → R f ( x ) = 1 1 + x 2 {\displaystyle {\begin{aligned}&f:\mathbb {R} \to \mathbb {R} \\&f(x)={\frac {1}{1+x^{2}}}\end{aligned}}} is real analytic , that is, locally determined by its Taylor series. This function was plotted above to illustrate the fact that some elementary functions cannot be approximated by Taylor polynomials in neighborhoods of the center of expansion which are too large. This kind of behavior is easily understood in the framework of complex analysis. Namely, the function f extends into a meromorphic function f : C ∪ { ∞ } → C ∪ { ∞ } f ( z ) = 1 1 + z 2 {\displaystyle {\begin{aligned}&f:\mathbb {C} \cup \{\infty \}\to \mathbb {C} \cup \{\infty \}\\&f(z)={\frac {1}{1+z^{2}}}\end{aligned}}} on the compactified complex plane. It has simple poles at z = i {\textstyle z=i} and z = − i {\textstyle z=-i} , and it is analytic elsewhere. Now its Taylor series centered at z 0 converges on any disc B ( z 0 , r ) with r < | z − z 0 |, where the same Taylor series converges at z ∈ C . Therefore, Taylor series of f centered at 0 converges on B (0, 1) and it does not converge for any z ∈ C with | z | > 1 due to the poles at i and − i . For the same reason the Taylor series of f centered at 1 converges on B ( 1 , 2 ) {\textstyle B(1,{\sqrt {2}})} and does not converge for any z ∈ C with | z − 1 | > 2 {\textstyle \left\vert z-1\right\vert >{\sqrt {2}}} . A function f : R n → R is differentiable at a ∈ R n if and only if there exists a linear functional L : R n → R and a function h : R n → R such that f ( x ) = f ( a ) + L ( x − a ) + h ( x ) ‖ x − a ‖ , lim x → a h ( x ) = 0. {\displaystyle f({\boldsymbol {x}})=f({\boldsymbol {a}})+L({\boldsymbol {x}}-{\boldsymbol {a}})+h({\boldsymbol {x}})\lVert {\boldsymbol {x}}-{\boldsymbol {a}}\rVert ,\qquad \lim _{{\boldsymbol {x}}\to {\boldsymbol {a}}}h({\boldsymbol {x}})=0.} If this is the case, then L = d f ( a ) {\textstyle L=df({\boldsymbol {a}})} is the (uniquely defined) differential of f at the point a . Furthermore, then the partial derivatives of f exist at a and the differential of f at a is given by d f ( a ) ( v ) = ∂ f ∂ x 1 ( a ) v 1 + ⋯ + ∂ f ∂ x n ( a ) v n . {\displaystyle df({\boldsymbol {a}})({\boldsymbol {v}})={\frac {\partial f}{\partial x_{1}}}({\boldsymbol {a}})v_{1}+\cdots +{\frac {\partial f}{\partial x_{n}}}({\boldsymbol {a}})v_{n}.} Introduce the multi-index notation | α | = α 1 + ⋯ + α n , α ! = α 1 ! ⋯ α n ! , x α = x 1 α 1 ⋯ x n α n {\displaystyle |\alpha |=\alpha _{1}+\cdots +\alpha _{n},\quad \alpha !=\alpha _{1}!\cdots \alpha _{n}!,\quad {\boldsymbol {x}}^{\alpha }=x_{1}^{\alpha _{1}}\cdots x_{n}^{\alpha _{n}}} for α ∈ N n and x ∈ R n . 
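In code, the multi-index bookkeeping amounts to a few lines. The sketch below (standard-library Python, written for this article) enumerates the indices with |α| ≤ k and evaluates α! and x^α, which is exactly the data needed to assemble the multivariate Taylor polynomial discussed next:

from itertools import product
from math import factorial, prod

def multi_indices(n, k):
    # all alpha in N^n with |alpha| = alpha_1 + ... + alpha_n <= k
    return [a for a in product(range(k + 1), repeat=n) if sum(a) <= k]

def alpha_factorial(alpha):
    return prod(factorial(ai) for ai in alpha)

def x_pow(x, alpha):
    return prod(xi**ai for xi, ai in zip(x, alpha))

print(multi_indices(2, 2))    # [(0, 0), (0, 1), (0, 2), (1, 0), (1, 1), (2, 0)]
print(alpha_factorial((2, 1)), x_pow((2.0, 3.0), (2, 1)))    # 2 12.0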
If all the k {\textstyle k} -th order partial derivatives of f : R n → R are continuous at a ∈ R n , then by Clairaut's theorem , one can change the order of mixed derivatives at a , so the short-hand notation D α f = ∂ | α | f ∂ x α = ∂ α 1 + … + α n f ∂ x 1 α 1 ⋯ ∂ x n α n {\displaystyle D^{\alpha }f={\frac {\partial ^{|\alpha |}f}{\partial {\boldsymbol {x}}^{\alpha }}}={\frac {\partial ^{\alpha _{1}+\ldots +\alpha _{n}}f}{\partial x_{1}^{\alpha _{1}}\cdots \partial x_{n}^{\alpha _{n}}}}} for the higher order partial derivatives is justified in this situation. The same is true if all the ( k − 1 )-th order partial derivatives of f exist in some neighborhood of a and are differentiable at a . [ 13 ] Then we say that f is k times differentiable at the point a . Using notations of the preceding section, one has the following theorem. Multivariate version of Taylor's theorem [ 14 ] — Let f : R n → R be a k -times continuously differentiable function at the point a ∈ R n . Then there exist functions h α : R n → R , where | α | = k , {\displaystyle |\alpha |=k,} such that f ( x ) = ∑ | α | ≤ k D α f ( a ) α ! ( x − a ) α + ∑ | α | = k h α ( x ) ( x − a ) α , and lim x → a h α ( x ) = 0. {\displaystyle {\begin{aligned}&f({\boldsymbol {x}})=\sum _{|\alpha |\leq k}{\frac {D^{\alpha }f({\boldsymbol {a}})}{\alpha !}}({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha }+\sum _{|\alpha |=k}h_{\alpha }({\boldsymbol {x}})({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha },\\&{\mbox{and}}\quad \lim _{{\boldsymbol {x}}\to {\boldsymbol {a}}}h_{\alpha }({\boldsymbol {x}})=0.\end{aligned}}} If the function f : R n → R is k + 1 times continuously differentiable in a closed ball B = { y ∈ R n : ‖ a − y ‖ ≤ r } {\displaystyle B=\{\mathbf {y} \in \mathbb {R} ^{n}:\left\|\mathbf {a} -\mathbf {y} \right\|\leq r\}} for some r > 0 {\displaystyle r>0} , then one can derive an exact formula for the remainder in terms of ( k +1 )-th order partial derivatives of f in this neighborhood. [ 15 ] Namely, f ( x ) = ∑ | α | ≤ k D α f ( a ) α ! ( x − a ) α + ∑ | β | = k + 1 R β ( x ) ( x − a ) β , R β ( x ) = | β | β ! ∫ 0 1 ( 1 − t ) | β | − 1 D β f ( a + t ( x − a ) ) d t . {\displaystyle {\begin{aligned}&f({\boldsymbol {x}})=\sum _{|\alpha |\leq k}{\frac {D^{\alpha }f({\boldsymbol {a}})}{\alpha !}}({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha }+\sum _{|\beta |=k+1}R_{\beta }({\boldsymbol {x}})({\boldsymbol {x}}-{\boldsymbol {a}})^{\beta },\\&R_{\beta }({\boldsymbol {x}})={\frac {|\beta |}{\beta !}}\int _{0}^{1}(1-t)^{|\beta |-1}D^{\beta }f{\big (}{\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}}){\big )}\,dt.\end{aligned}}} In this case, due to the continuity of ( k +1 )-th order partial derivatives in the compact set B , one immediately obtains the uniform estimates | R β ( x ) | ≤ 1 β ! max | α | = | β | max y ∈ B | D α f ( y ) | , x ∈ B . {\displaystyle \left|R_{\beta }({\boldsymbol {x}})\right|\leq {\frac {1}{\beta !}}\max _{|\alpha |=|\beta |}\max _{{\boldsymbol {y}}\in B}|D^{\alpha }f({\boldsymbol {y}})|,\qquad {\boldsymbol {x}}\in B.} For example, the third-order Taylor polynomial of a smooth function f : R 2 → R {\displaystyle f:\mathbb {R} ^{2}\to \mathbb {R} } is, denoting x − a = v {\displaystyle {\boldsymbol {x}}-{\boldsymbol {a}}={\boldsymbol {v}}} , P 3 ( x ) = f ( a ) + ∂ f ∂ x 1 ( a ) v 1 + ∂ f ∂ x 2 ( a ) v 2 + ∂ 2 f ∂ x 1 2 ( a ) v 1 2 2 ! + ∂ 2 f ∂ x 1 ∂ x 2 ( a ) v 1 v 2 + ∂ 2 f ∂ x 2 2 ( a ) v 2 2 2 ! + ∂ 3 f ∂ x 1 3 ( a ) v 1 3 3 ! + ∂ 3 f ∂ x 1 2 ∂ x 2 ( a ) v 1 2 v 2 2 ! 
+ ∂ 3 f ∂ x 1 ∂ x 2 2 ( a ) v 1 v 2 2 2 ! + ∂ 3 f ∂ x 2 3 ( a ) v 2 3 3 ! {\displaystyle {\begin{aligned}P_{3}({\boldsymbol {x}})=f({\boldsymbol {a}})+{}&{\frac {\partial f}{\partial x_{1}}}({\boldsymbol {a}})v_{1}+{\frac {\partial f}{\partial x_{2}}}({\boldsymbol {a}})v_{2}+{\frac {\partial ^{2}f}{\partial x_{1}^{2}}}({\boldsymbol {a}}){\frac {v_{1}^{2}}{2!}}+{\frac {\partial ^{2}f}{\partial x_{1}\partial x_{2}}}({\boldsymbol {a}})v_{1}v_{2}+{\frac {\partial ^{2}f}{\partial x_{2}^{2}}}({\boldsymbol {a}}){\frac {v_{2}^{2}}{2!}}\\&+{\frac {\partial ^{3}f}{\partial x_{1}^{3}}}({\boldsymbol {a}}){\frac {v_{1}^{3}}{3!}}+{\frac {\partial ^{3}f}{\partial x_{1}^{2}\partial x_{2}}}({\boldsymbol {a}}){\frac {v_{1}^{2}v_{2}}{2!}}+{\frac {\partial ^{3}f}{\partial x_{1}\partial x_{2}^{2}}}({\boldsymbol {a}}){\frac {v_{1}v_{2}^{2}}{2!}}+{\frac {\partial ^{3}f}{\partial x_{2}^{3}}}({\boldsymbol {a}}){\frac {v_{2}^{3}}{3!}}\end{aligned}}} Let [ 16 ] h k ( x ) = { f ( x ) − P ( x ) ( x − a ) k x ≠ a 0 x = a {\displaystyle h_{k}(x)={\begin{cases}{\frac {f(x)-P(x)}{(x-a)^{k}}}&x\not =a\\0&x=a\end{cases}}} where, as in the statement of Taylor's theorem, P ( x ) = f ( a ) + f ′ ( a ) ( x − a ) + f ″ ( a ) 2 ! ( x − a ) 2 + ⋯ + f ( k ) ( a ) k ! ( x − a ) k . {\displaystyle P(x)=f(a)+f'(a)(x-a)+{\frac {f''(a)}{2!}}(x-a)^{2}+\cdots +{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}.} It is sufficient to show that lim x → a h k ( x ) = 0. {\displaystyle \lim _{x\to a}h_{k}(x)=0.} The proof here is based on repeated application of L'Hôpital's rule . Note that, for each j = 0 , 1 , . . . , k − 1 {\textstyle j=0,1,...,k-1} , f ( j ) ( a ) = P ( j ) ( a ) {\displaystyle f^{(j)}(a)=P^{(j)}(a)} . Hence each of the first k − 1 {\textstyle k-1} derivatives of the numerator in h k ( x ) {\displaystyle h_{k}(x)} vanishes at x = a {\displaystyle x=a} , and the same is true of the denominator. Also, since the condition that the function f {\textstyle f} be k {\textstyle k} times differentiable at a point requires differentiability up to order k − 1 {\textstyle k-1} in a neighborhood of said point (this is true, because differentiability requires a function to be defined in a whole neighborhood of a point), the numerator and its k − 2 {\textstyle k-2} derivatives are differentiable in a neighborhood of a {\textstyle a} . Clearly, the denominator also satisfies said condition, and additionally, doesn't vanish unless x = a {\textstyle x=a} , therefore all conditions necessary for L'Hôpital's rule are fulfilled, and its use is justified. So lim x → a f ( x ) − P ( x ) ( x − a ) k = lim x → a d d x ( f ( x ) − P ( x ) ) d d x ( x − a ) k = ⋯ = lim x → a d k − 1 d x k − 1 ( f ( x ) − P ( x ) ) d k − 1 d x k − 1 ( x − a ) k = 1 k ! lim x → a f ( k − 1 ) ( x ) − P ( k − 1 ) ( x ) x − a = 1 k ! ( f ( k ) ( a ) − P ( k ) ( a ) ) = 0 {\displaystyle {\begin{aligned}\lim _{x\to a}{\frac {f(x)-P(x)}{(x-a)^{k}}}&=\lim _{x\to a}{\frac {{\frac {d}{dx}}(f(x)-P(x))}{{\frac {d}{dx}}(x-a)^{k}}}\\[1ex]&=\cdots \\[1ex]&=\lim _{x\to a}{\frac {{\frac {d^{k-1}}{dx^{k-1}}}(f(x)-P(x))}{{\frac {d^{k-1}}{dx^{k-1}}}(x-a)^{k}}}\\[1ex]&={\frac {1}{k!}}\lim _{x\to a}{\frac {f^{(k-1)}(x)-P^{(k-1)}(x)}{x-a}}\\[1ex]&={\frac {1}{k!}}(f^{(k)}(a)-P^{(k)}(a))=0\end{aligned}}} where the second-to-last equality follows by the definition of the derivative at x = a {\textstyle x=a} . Let f ( x ) {\displaystyle f(x)} be any real-valued continuous function to be approximated by the Taylor polynomial. Step 1: Let F {\textstyle F} and G {\textstyle G} be functions. 
Set F {\textstyle F} and G {\textstyle G} to be F ( x ) = f ( x ) − ∑ k = 0 n − 1 f ( k ) ( a ) k ! ( x − a ) k {\displaystyle {\begin{aligned}F(x)=f(x)-\sum _{k=0}^{n-1}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}\end{aligned}}} G ( x ) = ( x − a ) n {\displaystyle {\begin{aligned}G(x)=(x-a)^{n}\end{aligned}}} Step 2: Properties of F {\textstyle F} and G {\textstyle G} : F ( a ) = f ( a ) − f ( a ) − f ′ ( a ) ( a − a ) − . . . − f ( n − 1 ) ( a ) ( n − 1 ) ! ( a − a ) n − 1 = 0 G ( a ) = ( a − a ) n = 0 {\displaystyle {\begin{aligned}F(a)&=f(a)-f(a)-f'(a)(a-a)-...-{\frac {f^{(n-1)}(a)}{(n-1)!}}(a-a)^{n-1}=0\\G(a)&=(a-a)^{n}=0\end{aligned}}} Similarly, F ′ ( a ) = f ′ ( a ) − f ′ ( a ) − f ″ ( a ) ( 2 − 1 ) ! ( a − a ) ( 2 − 1 ) − . . . − f ( n − 1 ) ( a ) ( n − 2 ) ! ( a − a ) n − 2 = 0 {\displaystyle {\begin{aligned}F'(a)=f'(a)-f'(a)-{\frac {f''(a)}{(2-1)!}}(a-a)^{(2-1)}-...-{\frac {f^{(n-1)}(a)}{(n-2)!}}(a-a)^{n-2}=0\end{aligned}}} G ′ ( a ) = n ( a − a ) n − 1 = 0 ⋮ G ( n − 1 ) ( a ) = F ( n − 1 ) ( a ) = 0 {\displaystyle {\begin{aligned}G'(a)&=n(a-a)^{n-1}=0\\&\qquad \vdots \\G^{(n-1)}(a)&=F^{(n-1)}(a)=0\end{aligned}}} Step 3: Use Cauchy Mean Value Theorem Let f 1 {\displaystyle f_{1}} and g 1 {\displaystyle g_{1}} be continuous functions on [ a , b ] {\displaystyle [a,b]} . Since a < x < b {\displaystyle a<x<b} so we can work with the interval [ a , x ] {\displaystyle [a,x]} . Let f 1 {\displaystyle f_{1}} and g 1 {\displaystyle g_{1}} be differentiable on ( a , x ) {\displaystyle (a,x)} . Assume g 1 ′ ( x ) ≠ 0 {\displaystyle g_{1}'(x)\neq 0} for all x ∈ ( a , b ) {\displaystyle x\in (a,b)} . Then there exists c 1 ∈ ( a , x ) {\displaystyle c_{1}\in (a,x)} such that f 1 ( x ) − f 1 ( a ) g 1 ( x ) − g 1 ( a ) = f 1 ′ ( c 1 ) g 1 ′ ( c 1 ) {\displaystyle {\begin{aligned}{\frac {f_{1}(x)-f_{1}(a)}{g_{1}(x)-g_{1}(a)}}={\frac {f_{1}'(c_{1})}{g_{1}'(c_{1})}}\end{aligned}}} Note: G ′ ( x ) ≠ 0 {\displaystyle G'(x)\neq 0} in ( a , b ) {\displaystyle (a,b)} and F ( a ) , G ( a ) = 0 {\displaystyle F(a),G(a)=0} so F ( x ) G ( x ) = F ( x ) − F ( a ) G ( x ) − G ( a ) = F ′ ( c 1 ) G ′ ( c 1 ) {\displaystyle {\begin{aligned}{\frac {F(x)}{G(x)}}={\frac {F(x)-F(a)}{G(x)-G(a)}}={\frac {F'(c_{1})}{G'(c_{1})}}\end{aligned}}} for some c 1 ∈ ( a , x ) {\displaystyle c_{1}\in (a,x)} . This can also be performed for ( a , c 1 ) {\displaystyle (a,c_{1})} : F ′ ( c 1 ) G ′ ( c 1 ) = F ′ ( c 1 ) − F ′ ( a ) G ′ ( c 1 ) − G ′ ( a ) = F ″ ( c 2 ) G ″ ( c 2 ) {\displaystyle {\begin{aligned}{\frac {F'(c_{1})}{G'(c_{1})}}={\frac {F'(c_{1})-F'(a)}{G'(c_{1})-G'(a)}}={\frac {F''(c_{2})}{G''(c_{2})}}\end{aligned}}} for some c 2 ∈ ( a , c 1 ) {\displaystyle c_{2}\in (a,c_{1})} . This can be continued to c n {\displaystyle c_{n}} . This gives a partition in ( a , b ) {\displaystyle (a,b)} : a < c n < c n − 1 < ⋯ < c 1 < x {\displaystyle a<c_{n}<c_{n-1}<\dots <c_{1}<x} with F ( x ) G ( x ) = F ′ ( c 1 ) G ′ ( c 1 ) = ⋯ = F ( n ) ( c n ) G ( n ) ( c n ) . {\displaystyle {\frac {F(x)}{G(x)}}={\frac {F'(c_{1})}{G'(c_{1})}}=\dots ={\frac {F^{(n)}(c_{n})}{G^{(n)}(c_{n})}}.} Set c = c n {\displaystyle c=c_{n}} : F ( x ) G ( x ) = F ( n ) ( c ) G ( n ) ( c ) {\displaystyle {\frac {F(x)}{G(x)}}={\frac {F^{(n)}(c)}{G^{(n)}(c)}}} Step 4: Substitute back F ( x ) G ( x ) = f ( x ) − ∑ k = 0 n − 1 f ( k ) ( a ) k ! 
( x − a ) k ( x − a ) n = F ( n ) ( c ) G ( n ) ( c ) {\displaystyle {\begin{aligned}{\frac {F(x)}{G(x)}}={\frac {f(x)-\sum _{k=0}^{n-1}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}}{(x-a)^{n}}}={\frac {F^{(n)}(c)}{G^{(n)}(c)}}\end{aligned}}} By the power rule, repeated differentiation of ( x − a ) n {\displaystyle (x-a)^{n}} gives G ( n ) ( c ) = n ( n − 1 ) ⋯ 1 = n ! {\displaystyle G^{(n)}(c)=n(n-1)\cdots 1=n!} , so: {\displaystyle {\frac {F^{(n)}(c)}{G^{(n)}(c)}}={\frac {f^{(n)}(c)}{n(n-1)\cdots 1}}={\frac {f^{(n)}(c)}{n!}}.} This leads to: {\displaystyle {\begin{aligned}f(x)-\sum _{k=0}^{n-1}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}={\frac {f^{(n)}(c)}{n!}}(x-a)^{n}\end{aligned}}.} By rearranging, we get {\displaystyle {\begin{aligned}f(x)=\sum _{k=0}^{n-1}{\frac {f^{(k)}(a)}{k!}}(x-a)^{k}+{\frac {f^{(n)}(c)}{n!}}(x-a)^{n}\end{aligned}},} which is Taylor's theorem with the Lagrange form of the remainder (the statement given earlier, with n = k + 1 {\displaystyle n=k+1} ). Let G be any real-valued function, continuous on the closed interval between a {\textstyle a} and x {\textstyle x} and differentiable with a non-vanishing derivative on the open interval between a {\textstyle a} and x {\textstyle x} , and define F ( t ) = f ( t ) + f ′ ( t ) ( x − t ) + ⋯ + f ( k ) ( t ) k ! ( x − t ) k {\displaystyle F(t)=f(t)+f'(t)(x-t)+{\frac {f''(t)}{2!}}(x-t)^{2}+\cdots +{\frac {f^{(k)}(t)}{k!}}(x-t)^{k}} for t ∈ [ a , x ] {\displaystyle t\in [a,x]} . Then, by Cauchy's mean value theorem , F ′ ( ξ ) G ′ ( ξ ) = F ( x ) − F ( a ) G ( x ) − G ( a ) ( ★★★ ) {\displaystyle {\frac {F'(\xi )}{G'(\xi )}}={\frac {F(x)-F(a)}{G(x)-G(a)}}\qquad (\star \star \star )} for some ξ {\textstyle \xi } on the open interval between a {\textstyle a} and x {\textstyle x} . Note that here the numerator F ( x ) − F ( a ) = R k ( x ) {\textstyle F(x)-F(a)=R_{k}(x)} is exactly the remainder of the Taylor polynomial for y = f ( x ) {\textstyle y=f(x)} . Compute {\displaystyle {\begin{aligned}F'(t)={}&f'(t)+{\big (}f''(t)(x-t)-f'(t){\big )}+\left({\frac {f^{(3)}(t)}{2!}}(x-t)^{2}-{\frac {f^{(2)}(t)}{1!}}(x-t)\right)+\cdots \\&\cdots +\left({\frac {f^{(k+1)}(t)}{k!}}(x-t)^{k}-{\frac {f^{(k)}(t)}{(k-1)!}}(x-t)^{k-1}\right)={\frac {f^{(k+1)}(t)}{k!}}(x-t)^{k},\end{aligned}}} where the sum telescopes; plug it into ( ★★★ ) and rearrange terms to find that R k ( x ) = f ( k + 1 ) ( ξ ) k ! ( x − ξ ) k G ( x ) − G ( a ) G ′ ( ξ ) . {\displaystyle R_{k}(x)={\frac {f^{(k+1)}(\xi )}{k!}}(x-\xi )^{k}{\frac {G(x)-G(a)}{G'(\xi )}}.} This is the form of the remainder term mentioned after the actual statement of Taylor's theorem with remainder in the mean value form. The Lagrange form of the remainder is found by choosing G ( t ) = ( x − t ) k + 1 {\displaystyle G(t)=(x-t)^{k+1}} and the Cauchy form by choosing G ( t ) = t − a {\displaystyle G(t)=t-a} . Remark. Using this method one can also recover the integral form of the remainder by choosing G ( t ) = ∫ a t f ( k + 1 ) ( s ) k !
( x − s ) k d s , {\displaystyle G(t)=\int _{a}^{t}{\frac {f^{(k+1)}(s)}{k!}}(x-s)^{k}\,ds,} but the requirements for f needed for the use of mean value theorem are too strong, if one aims to prove the claim in the case that f ( k ) is only absolutely continuous . However, if one uses Riemann integral instead of Lebesgue integral , the assumptions cannot be weakened. Due to the absolute continuity of f ( k ) {\displaystyle f^{(k)}} on the closed interval between a {\textstyle a} and x {\textstyle x} , its derivative f ( k + 1 ) {\displaystyle f^{(k+1)}} exists as an L 1 {\displaystyle L^{1}} -function, and we can use the fundamental theorem of calculus and integration by parts . This same proof applies for the Riemann integral assuming that f ( k ) {\displaystyle f^{(k)}} is continuous on the closed interval and differentiable on the open interval between a {\textstyle a} and x {\textstyle x} , and this leads to the same result as using the mean value theorem. The fundamental theorem of calculus states that f ( x ) = f ( a ) + ∫ a x f ′ ( t ) d t . {\displaystyle f(x)=f(a)+\int _{a}^{x}\,f'(t)\,dt.} Now we can integrate by parts and use the fundamental theorem of calculus again to see that f ( x ) = f ( a ) + ( x f ′ ( x ) − a f ′ ( a ) ) − ∫ a x t f ″ ( t ) d t = f ( a ) + x ( f ′ ( a ) + ∫ a x f ″ ( t ) d t ) − a f ′ ( a ) − ∫ a x t f ″ ( t ) d t = f ( a ) + ( x − a ) f ′ ( a ) + ∫ a x ( x − t ) f ″ ( t ) d t , {\displaystyle {\begin{aligned}f(x)&=f(a)+{\Big (}xf'(x)-af'(a){\Big )}-\int _{a}^{x}tf''(t)\,dt\\&=f(a)+x\left(f'(a)+\int _{a}^{x}f''(t)\,dt\right)-af'(a)-\int _{a}^{x}tf''(t)\,dt\\&=f(a)+(x-a)f'(a)+\int _{a}^{x}\,(x-t)f''(t)\,dt,\end{aligned}}} which is exactly Taylor's theorem with remainder in the integral form in the case k = 1 {\displaystyle k=1} . The general statement is proved using induction . Suppose that Integrating the remainder term by parts we arrive at ∫ a x f ( k + 1 ) ( t ) k ! ( x − t ) k d t = − [ f ( k + 1 ) ( t ) ( k + 1 ) k ! ( x − t ) k + 1 ] a x + ∫ a x f ( k + 2 ) ( t ) ( k + 1 ) k ! ( x − t ) k + 1 d t = f ( k + 1 ) ( a ) ( k + 1 ) ! ( x − a ) k + 1 + ∫ a x f ( k + 2 ) ( t ) ( k + 1 ) ! ( x − t ) k + 1 d t . {\displaystyle {\begin{aligned}\int _{a}^{x}{\frac {f^{(k+1)}(t)}{k!}}(x-t)^{k}\,dt=&-\left[{\frac {f^{(k+1)}(t)}{(k+1)k!}}(x-t)^{k+1}\right]_{a}^{x}+\int _{a}^{x}{\frac {f^{(k+2)}(t)}{(k+1)k!}}(x-t)^{k+1}\,dt\\=&\ {\frac {f^{(k+1)}(a)}{(k+1)!}}(x-a)^{k+1}+\int _{a}^{x}{\frac {f^{(k+2)}(t)}{(k+1)!}}(x-t)^{k+1}\,dt.\end{aligned}}} Substituting this into the formula in ( eq1 ) shows that if it holds for the value k {\displaystyle k} , it must also hold for the value k + 1 {\displaystyle k+1} . Therefore, since it holds for k = 1 {\displaystyle k=1} , it must hold for every positive integer k {\displaystyle k} . We prove the special case, where f : R n → R {\displaystyle f:\mathbb {R} ^{n}\to \mathbb {R} } has continuous partial derivatives up to the order k + 1 {\displaystyle k+1} in some closed ball B {\displaystyle B} with center a {\displaystyle {\boldsymbol {a}}} . The strategy of the proof is to apply the one-variable case of Taylor's theorem to the restriction of f {\displaystyle f} to the line segment adjoining x {\displaystyle {\boldsymbol {x}}} and a {\displaystyle {\boldsymbol {a}}} . 
[ 17 ] Parametrize the line segment between a {\displaystyle {\boldsymbol {a}}} and x {\displaystyle {\boldsymbol {x}}} by u ( t ) = a + t ( x − a ) {\displaystyle {\boldsymbol {u}}(t)={\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}})} We apply the one-variable version of Taylor's theorem to the function g ( t ) = f ( u ( t ) ) {\displaystyle g(t)=f({\boldsymbol {u}}(t))} : f ( x ) = g ( 1 ) = g ( 0 ) + ∑ j = 1 k 1 j ! g ( j ) ( 0 ) + ∫ 0 1 ( 1 − t ) k k ! g ( k + 1 ) ( t ) d t . {\displaystyle f({\boldsymbol {x}})=g(1)=g(0)+\sum _{j=1}^{k}{\frac {1}{j!}}g^{(j)}(0)\ +\ \int _{0}^{1}{\frac {(1-t)^{k}}{k!}}g^{(k+1)}(t)\,dt.} Applying the chain rule for several variables gives g ( j ) ( t ) = d j d t j f ( u ( t ) ) = d j d t j f ( a + t ( x − a ) ) = ∑ | α | = j ( j α ) ( D α f ) ( a + t ( x − a ) ) ( x − a ) α {\displaystyle {\begin{aligned}g^{(j)}(t)&={\frac {d^{j}}{dt^{j}}}f({\boldsymbol {u}}(t))\\&={\frac {d^{j}}{dt^{j}}}f({\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}}))\\&=\sum _{|\alpha |=j}\left({\begin{matrix}j\\\alpha \end{matrix}}\right)(D^{\alpha }f)({\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}}))({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha }\end{aligned}}} where ( j α ) {\displaystyle {\tbinom {j}{\alpha }}} is the multinomial coefficient . Since 1 j ! ( j α ) = 1 α ! {\displaystyle {\tfrac {1}{j!}}{\tbinom {j}{\alpha }}={\tfrac {1}{\alpha !}}} , we get: f ( x ) = f ( a ) + ∑ 1 ≤ | α | ≤ k 1 α ! ( D α f ) ( a ) ( x − a ) α + ∑ | α | = k + 1 k + 1 α ! ( x − a ) α ∫ 0 1 ( 1 − t ) k ( D α f ) ( a + t ( x − a ) ) d t . {\displaystyle f({\boldsymbol {x}})=f({\boldsymbol {a}})+\sum _{1\leq |\alpha |\leq k}{\frac {1}{\alpha !}}(D^{\alpha }f)({\boldsymbol {a}})({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha }+\sum _{|\alpha |=k+1}{\frac {k+1}{\alpha !}}({\boldsymbol {x}}-{\boldsymbol {a}})^{\alpha }\int _{0}^{1}(1-t)^{k}(D^{\alpha }f)({\boldsymbol {a}}+t({\boldsymbol {x}}-{\boldsymbol {a}}))\,dt.}
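A small numerical sketch of the multivariate statement for k = 2 (assuming Python with numpy; the example function f(x, y) = e^x sin(y) and the center a = (0, 0) are arbitrary choices, with the gradient and Hessian at a entered by hand):

import numpy as np

def f(p):
    x, y = p
    return np.exp(x) * np.sin(y)

a = np.array([0.0, 0.0])
grad = np.array([0.0, 1.0])            # (df/dx, df/dy) at a
hess = np.array([[0.0, 1.0],           # matrix of second partials at a
                 [1.0, 0.0]])

def P2(p):
    v = p - a
    return f(a) + grad @ v + 0.5 * v @ hess @ v

for h in (1e-1, 1e-2, 1e-3):
    p = a + np.array([h, h])           # so |v|^2 = 2 h^2
    err = abs(f(p) - P2(p))
    print(f"h = {h:.0e},  |R_2| / |v|^2 = {err / (2 * h * h):.3e}")

The printed ratio tends to zero with h, which is exactly what the vanishing of the functions h_α in the theorem encodes.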
https://en.wikipedia.org/wiki/Taylor's_theorem
Taylor Hobson is an English company founded in 1886 and located in Leicester, England . Originally a manufacturer of still camera and cine lenses, the company now manufactures precision metrology instruments, in particular profilometers for the analysis of surface textures and forms. Taylor Hobson is now part of Ametek 's Ultra Precision Technologies Group. [ 1 ] William Taylor was convinced that, to be a leader in optical lenses, he needed the best understanding and control of the surface quality of his lenses. As a result, he began to design instruments capable of helping him evaluate surface texture and roundness. When manufacturers in other fields became aware of these instruments, they asked to purchase them. He refused, since the instruments were crucial to Taylor Hobson's supremacy in optical lenses. Once the company later decided to market the instruments, Taylor Hobson became a major instrument manufacturer.
https://en.wikipedia.org/wiki/Taylor_Hobson
Taylor dispersion or Taylor diffusion is an apparent or effective diffusion of some scalar field arising on the large scale due to the presence of a strong, confined, zero-mean shear flow on the small scale. Essentially, the shear acts to smear out the concentration distribution in the direction of the flow, enhancing the rate at which it spreads in that direction. [ 1 ] [ 2 ] [ 3 ] The effect is named after the British fluid dynamicist G. I. Taylor , who described the shear-induced dispersion for large Peclet numbers . The analysis was later generalized by Rutherford Aris for arbitrary values of the Peclet number , and hence the process is sometimes also referred to as Taylor-Aris dispersion . The canonical example is that of a simple diffusing species in uniform Poiseuille flow through a uniform circular pipe with no-flux boundary conditions, but the effect is relevant in many other contexts, including the spread of pollutants in rivers and of drugs in blood flow [ 4 ] and rivulet flow. [ 5 ] We use z as an axial coordinate and r as the radial coordinate, and assume axisymmetry. The pipe has radius a , and the fluid velocity is the Poiseuille profile w ( r ) = 2 w ¯ ( 1 − r 2 / a 2 ) {\displaystyle w(r)=2{\bar {w}}\left(1-r^{2}/a^{2}\right)} , where w ¯ {\displaystyle {\bar {w}}} is the cross-sectionally averaged velocity. The concentration of the diffusing species is denoted c and its diffusivity is D . The concentration is assumed to be governed by the linear advection–diffusion equation : ∂ c ∂ t + w ∂ c ∂ z = D ∇ 2 c . {\displaystyle {\frac {\partial c}{\partial t}}+w{\frac {\partial c}{\partial z}}=D\nabla ^{2}c.} The concentration and velocity are written as the sum of a cross-sectional average (indicated by an overbar) and a deviation (indicated by a prime), thus: c = c ¯ + c ′ , w = w ¯ + w ′ . {\displaystyle c={\bar {c}}+c',\qquad w={\bar {w}}+w'.} Under some assumptions (see below), it is possible to derive an equation just involving the average quantities: ∂ c ¯ ∂ t + w ¯ ∂ c ¯ ∂ z = D eff ∂ 2 c ¯ ∂ z 2 . {\displaystyle {\frac {\partial {\bar {c}}}{\partial t}}+{\bar {w}}{\frac {\partial {\bar {c}}}{\partial z}}=D_{\text{eff}}{\frac {\partial ^{2}{\bar {c}}}{\partial z^{2}}}.} Observe how the effective diffusivity multiplying the derivative on the right hand side is greater than the original value of the diffusion coefficient, D . The effective diffusivity is often written as D eff = D ( 1 + P e 2 48 ) , {\displaystyle D_{\text{eff}}=D\left(1+{\frac {{\mathit {Pe}}^{2}}{48}}\right),} where P e = a w ¯ / D {\displaystyle {\mathit {Pe}}=a{\bar {w}}/D} is the Péclet number , based on the channel radius a {\displaystyle a} . The interesting result is that for large values of the Péclet number, the effective diffusivity is inversely proportional to the molecular diffusivity. The effect of Taylor dispersion is therefore more pronounced at higher Péclet numbers. In a frame moving with the mean velocity, i.e., by introducing ξ = z − w ¯ t {\displaystyle \xi =z-{\bar {w}}t} , the dispersion process becomes a purely diffusive process, with diffusivity given by the effective diffusivity. The assumption is that c ′ ≪ c ¯ {\displaystyle c'\ll {\bar {c}}} for given z {\displaystyle z} , which is the case if the length scale in the z {\displaystyle z} direction is long enough to smooth out the gradient in the r {\displaystyle r} direction. This can be translated into the requirement that the length scale L {\displaystyle L} in the z {\displaystyle z} direction satisfies L / a ≫ P e {\displaystyle L/a\gg {\mathit {Pe}}} , i.e., that axial advection across L is slow compared with radial diffusion across the pipe. Dispersion is also a function of channel geometry. An interesting phenomenon, for example, is that the dispersion of a flow between two infinite flat plates and that in an infinitely thin rectangular channel differ by a factor of approximately 8.75. Here the very small side walls of the rectangular channel have an enormous influence on the dispersion. While the exact formula will not hold in more general circumstances, the mechanism still applies, and the effect is stronger at higher Péclet numbers. Taylor dispersion is of particular relevance for flows in porous media modelled by Darcy's law . [ 6 ] One may derive the Taylor equation using the method of averages, first introduced by Aris. The result can also be derived from large-time asymptotics, which is more intuitively clear.
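As a rough numerical illustration (not from the literature), the random-walk sketch below, assuming Python with numpy and entirely made-up parameter values, advects tracer particles with the local Poiseuille velocity while they diffuse, then estimates the effective axial diffusivity from the growth of the axial variance; the result should land near D(1 + Pe²/48):

import numpy as np

rng = np.random.default_rng(0)
a, D, wbar = 1.0, 1.0, 50.0             # radius, molecular diffusivity, mean speed
Pe = a * wbar / D                       # Peclet number (= 50 here)
dt, nsteps, npart = 4e-4, 10000, 4000   # run to t = 4 radial diffusion times

# tracers start at z = 0, spread uniformly over the cross-section
angle = 2 * np.pi * rng.random(npart)
r0 = a * np.sqrt(rng.random(npart))
x, y = r0 * np.cos(angle), r0 * np.sin(angle)
z = np.zeros(npart)

s = np.sqrt(2 * D * dt)                 # diffusive step size
for _ in range(nsteps):
    r2 = x * x + y * y
    z += 2 * wbar * (1 - r2 / a**2) * dt + s * rng.standard_normal(npart)
    x += s * rng.standard_normal(npart)
    y += s * rng.standard_normal(npart)
    r = np.sqrt(x * x + y * y)
    outside = r > a                     # reflect escapers back across the wall
    scale = np.where(outside, (2 * a - r) / np.maximum(r, 1e-12), 1.0)
    x, y = x * scale, y * scale

t = nsteps * dt
print("measured D_eff :", np.var(z) / (2 * t))   # <(z - <z>)^2> ~ 2 D_eff t
print("Taylor theory  :", D * (1 + Pe**2 / 48))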
In the dimensional coordinate system ( x ′ , r ′ , θ ) {\displaystyle (x',r',\theta )} , consider the fully-developed Poiseuille flow u = 2 U [ 1 − ( r ′ / a ) 2 ] {\displaystyle u=2U[1-(r'/a)^{2}]} flowing inside a pipe of radius a {\displaystyle a} , where U {\displaystyle U} is the average velocity of the fluid. A species of concentration c {\displaystyle c} with some arbitrary initial distribution is to be released somewhere inside the pipe at time t ′ = 0 {\displaystyle t'=0} . As long as this initial distribution is compact (for instance, the solute is not released everywhere with a finite concentration level), the species will be convected along the pipe with the mean velocity U {\displaystyle U} . In a frame moving with the mean velocity, and scaled with the non-dimensional variables x = ( x ′ − U t ′ ) / a {\displaystyle x=(x'-Ut')/a} , r = r ′ / a {\displaystyle r=r'/a} and t = t ′ D / a 2 {\displaystyle t=t'D/a^{2}} , where a 2 / D {\displaystyle a^{2}/D} is the time required for the species to diffuse in the radial direction, D {\displaystyle D} is the diffusion coefficient of the species and P e = U a / D {\displaystyle Pe=Ua/D} is the Peclet number , the governing equation is ∂ c ∂ t + P e ( 1 − 2 r 2 ) ∂ c ∂ x = 1 r ∂ ∂ r ( r ∂ c ∂ r ) + ∂ 2 c ∂ x 2 . {\displaystyle {\frac {\partial c}{\partial t}}+Pe\,(1-2r^{2}){\frac {\partial c}{\partial x}}={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial c}{\partial r}}\right)+{\frac {\partial ^{2}c}{\partial x^{2}}}.} Thus in this moving frame, at times t ∼ 1 {\displaystyle t\sim 1} (in dimensional variables, t ′ ∼ a 2 / D {\displaystyle t'\sim a^{2}/D} ), the species will diffuse radially. It is clear then that when t ≫ 1 {\displaystyle t\gg 1} (in dimensional variables, t ′ ≫ a 2 / D {\displaystyle t'\gg a^{2}/D} ), diffusion in the radial direction will make the concentration uniform across the pipe, although the species continues to diffuse in the x {\displaystyle x} direction. Taylor dispersion quantifies this axial diffusion process for large t {\displaystyle t} . Suppose t ∼ 1 / ϵ ≫ 1 {\displaystyle t\sim 1/\epsilon \gg 1} (i.e., times large in comparison with the radial diffusion time a 2 / D {\displaystyle a^{2}/D} ), where ϵ ≪ 1 {\displaystyle \epsilon \ll 1} is a small number. Then at these times, the concentration will have spread to an axial extent x ∼ t ∼ 1 / ϵ ≫ 1 {\displaystyle x\sim {\sqrt {t}}\sim {\sqrt {1/\epsilon }}\gg 1} . To quantify the large-time behavior, the rescalings [ 7 ] ξ = ϵ x {\displaystyle \xi ={\sqrt {\epsilon }}\,x} and τ = ϵ t {\displaystyle \tau =\epsilon t} can be introduced, so that the equation becomes ϵ ∂ c ∂ τ + ϵ P e ( 1 − 2 r 2 ) ∂ c ∂ ξ = 1 r ∂ ∂ r ( r ∂ c ∂ r ) + ϵ ∂ 2 c ∂ ξ 2 . {\displaystyle \epsilon {\frac {\partial c}{\partial \tau }}+{\sqrt {\epsilon }}\,Pe\,(1-2r^{2}){\frac {\partial c}{\partial \xi }}={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial c}{\partial r}}\right)+\epsilon {\frac {\partial ^{2}c}{\partial \xi ^{2}}}.} If the pipe walls do not absorb or react with the species, then the boundary condition ∂ c / ∂ r = 0 {\displaystyle \partial c/\partial r=0} must be satisfied at r = 1 {\displaystyle r=1} . Due to symmetry, ∂ c / ∂ r = 0 {\displaystyle \partial c/\partial r=0} at r = 0 {\displaystyle r=0} . Since ϵ ≪ 1 {\displaystyle \epsilon \ll 1} , the solution can be expanded in an asymptotic series, c = c 0 + ϵ c 1 + ϵ c 2 + ⋯ {\displaystyle c=c_{0}+{\sqrt {\epsilon }}c_{1}+\epsilon c_{2}+\cdots } Substituting this series into the governing equation and collecting terms of different orders leads to a series of equations. At leading order, the equation obtained is 1 r ∂ ∂ r ( r ∂ c 0 ∂ r ) = 0. {\displaystyle {\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial c_{0}}{\partial r}}\right)=0.} Integrating this equation with the boundary conditions defined before, one finds c 0 = c 0 ( ξ , τ ) {\displaystyle c_{0}=c_{0}(\xi ,\tau )} . At this order, c 0 {\displaystyle c_{0}} is still an unknown function. The fact that c 0 {\displaystyle c_{0}} is independent of r {\displaystyle r} is an expected result since, as already said, at times t ′ ≫ a 2 / D {\displaystyle t'\gg a^{2}/D} , radial diffusion dominates first and makes the concentration uniform across the pipe.
Terms of order ϵ {\displaystyle {\sqrt {\epsilon }}} lead to the equation P e ( 1 − 2 r 2 ) ∂ c 0 ∂ ξ = 1 r ∂ ∂ r ( r ∂ c 1 ∂ r ) . {\displaystyle Pe\,(1-2r^{2}){\frac {\partial c_{0}}{\partial \xi }}={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial c_{1}}{\partial r}}\right).} Integrating this equation with respect to r {\displaystyle r} using the boundary conditions leads to c 1 = c 1 a + P e ( r 2 4 − r 4 8 ) ∂ c 0 ∂ ξ , {\displaystyle c_{1}=c_{1a}+Pe\left({\frac {r^{2}}{4}}-{\frac {r^{4}}{8}}\right){\frac {\partial c_{0}}{\partial \xi }},} where c 1 a {\displaystyle c_{1a}} is the value of c 1 {\displaystyle c_{1}} at r = 0 {\displaystyle r=0} , an unknown function at this order. Terms of order ϵ {\displaystyle \epsilon } lead to the equation ∂ c 0 ∂ τ + P e ( 1 − 2 r 2 ) ∂ c 1 ∂ ξ = 1 r ∂ ∂ r ( r ∂ c 2 ∂ r ) + ∂ 2 c 0 ∂ ξ 2 . {\displaystyle {\frac {\partial c_{0}}{\partial \tau }}+Pe\,(1-2r^{2}){\frac {\partial c_{1}}{\partial \xi }}={\frac {1}{r}}{\frac {\partial }{\partial r}}\left(r{\frac {\partial c_{2}}{\partial r}}\right)+{\frac {\partial ^{2}c_{0}}{\partial \xi ^{2}}}.} This equation can also be integrated with respect to r {\displaystyle r} , but what is required is its solvability condition. The solvability condition is obtained by multiplying the above equation by 2 r d r {\displaystyle 2rdr} and integrating the whole equation from r = 0 {\displaystyle r=0} to r = 1 {\displaystyle r=1} , which is the same as averaging the equation over the radial direction. Using the boundary conditions and the results obtained in the previous two orders, the solvability condition leads to ∂ c 0 ∂ τ = ( 1 + P e 2 48 ) ∂ 2 c 0 ∂ ξ 2 . {\displaystyle {\frac {\partial c_{0}}{\partial \tau }}=\left(1+{\frac {Pe^{2}}{48}}\right){\frac {\partial ^{2}c_{0}}{\partial \xi ^{2}}}.} This is the required diffusion equation. Going back to the laboratory frame and dimensional variables, the equation becomes ∂ c 0 ∂ t ′ + U ∂ c 0 ∂ x ′ = D ( 1 + P e 2 48 ) ∂ 2 c 0 ∂ x ′ 2 . {\displaystyle {\frac {\partial c_{0}}{\partial t'}}+U{\frac {\partial c_{0}}{\partial x'}}=D\left(1+{\frac {Pe^{2}}{48}}\right){\frac {\partial ^{2}c_{0}}{\partial x'^{2}}}.} By the way in which this equation is derived, it can be seen that it is valid for t ′ ≫ a 2 / D {\displaystyle t'\gg a^{2}/D} , in which c 0 {\displaystyle c_{0}} changes significantly over a length scale x ′ ≫ a {\displaystyle x'\gg a} (or more precisely on a scale x ′ ∼ D t ′ {\displaystyle x'\sim {\sqrt {Dt'}}} ). At the same time scale t ′ ≫ a 2 / D {\displaystyle t'\gg a^{2}/D} , at any small length scale about some location that moves with the mean flow, say x ′ − U t ′ = x s ′ − U t ′ {\displaystyle x'-Ut'=x_{s}'-Ut'} , i.e., on the length scale x ′ − x s ′ ∼ a {\displaystyle x'-x_{s}'\sim a} , the concentration is no longer independent of r {\displaystyle r} , but is given by c = c 0 + ϵ c 1 . {\displaystyle c=c_{0}+{\sqrt {\epsilon }}c_{1}.} Integrating the equations obtained at the second order, we find an expression for c 2 {\displaystyle c_{2}} in which c 2 a ( ξ , τ ) {\displaystyle c_{2a}(\xi ,\tau )} is an unknown at this order. Collecting terms of order ϵ ϵ {\displaystyle \epsilon {\sqrt {\epsilon }}} and imposing the solvability condition of the resulting equation then yields the governing equation for c 1 a ( ξ , τ ) {\displaystyle c_{1a}(\xi ,\tau )} .
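The coefficient 1/48 can be reproduced symbolically. The sketch below (assuming Python with sympy; the variable names are ours) solves the cell problem for the radial profile driven by the velocity deviation w′ = w̄(1 − 2r²/a²), here in dimensional variables, and averages w′ times that profile over the cross-section, recovering the enhancement a²w̄²/(48D) and hence D_eff = D(1 + Pe²/48):

import sympy as sp

r, s, a, D, wbar = sp.symbols('r s a D wbar', positive=True)
wprime = wbar * (1 - 2 * r**2 / a**2)     # velocity deviation from the mean

# cell problem: D * (1/r) * d/dr( r * dg/dr ) = w'(r), regular at r = 0
gprime = sp.integrate(wprime.subs(r, s) * s / D, (s, 0, r)) / r
g = sp.integrate(gprime.subs(r, s), (s, 0, r))

# the no-flux condition g'(a) = 0 at the wall comes out automatically
assert sp.simplify(gprime.subs(r, a)) == 0

# enhancement: D_eff - D = -(2/a^2) * integral_0^a w'(r) g(r) r dr
enhancement = sp.simplify(-(2 / a**2) * sp.integrate(wprime * g * r, (r, 0, a)))
print(enhancement)                        # a**2*wbar**2/(48*D)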
https://en.wikipedia.org/wiki/Taylor_dispersion
In fluid dynamics , the Taylor microscale , which is sometimes called the turbulence length scale , is a length scale used to characterize a turbulent fluid flow. [ 1 ] This microscale is named after Geoffrey Ingram Taylor . The Taylor microscale is the intermediate length scale at which fluid viscosity significantly affects the dynamics of turbulent eddies in the flow. This length scale is traditionally applied to turbulent flow which can be characterized by a Kolmogorov spectrum of velocity fluctuations. In such a flow, length scales which are larger than the Taylor microscale are not strongly affected by viscosity. These larger length scales in the flow are generally referred to as the inertial range . Below the Taylor microscale the turbulent motions are subject to strong viscous forces and kinetic energy is dissipated into heat. These shorter length scale motions are generally termed the dissipation range . Calculation of the Taylor microscale is not entirely straightforward, requiring formation of certain flow correlation function(s), [ 2 ] then expanding in a Taylor series and using the first non-zero term to characterize an osculating parabola . The Taylor microscale is proportional to Re − 1 / 2 {\displaystyle {\text{Re}}^{-1/2}} , while the Kolmogorov microscale is proportional to Re − 3 / 4 {\displaystyle {\text{Re}}^{-3/4}} , where Re {\displaystyle {\text{Re}}} is the integral scale Reynolds number . A turbulence Reynolds number calculated based on the Taylor microscale λ {\displaystyle \lambda } is given by Re λ = ⟨ v ′ ⟩ r m s λ ν , {\displaystyle {\text{Re}}_{\lambda }={\frac {\langle \mathbf {v'} \rangle _{rms}\lambda }{\nu }},} where ⟨ v ′ ⟩ r m s = 1 3 ( v 1 ′ ) 2 + ( v 2 ′ ) 2 + ( v 3 ′ ) 2 {\displaystyle \langle \mathbf {v'} \rangle _{rms}={\frac {1}{\sqrt {3}}}{\sqrt {(v'_{1})^{2}+(v'_{2})^{2}+(v'_{3})^{2}}}} is the root mean square of the velocity fluctuations. The Taylor microscale is given as λ = 15 ν ϵ ⟨ v ′ ⟩ r m s , {\displaystyle \lambda ={\sqrt {15{\frac {\nu }{\epsilon }}}}\,\langle \mathbf {v'} \rangle _{rms},} where ν {\displaystyle \nu } is the kinematic viscosity , and ϵ {\displaystyle \epsilon } is the rate of energy dissipation. A relation with the turbulence kinetic energy k = 3 2 ⟨ v ′ ⟩ r m s 2 {\displaystyle k={\tfrac {3}{2}}\langle \mathbf {v'} \rangle _{rms}^{2}} can be derived as λ = 10 ν k ϵ . {\displaystyle \lambda ={\sqrt {\frac {10\nu k}{\epsilon }}}.} The Taylor microscale gives a convenient estimate for the magnitude of the fluctuating strain rate field, of order ⟨ v ′ ⟩ r m s / λ {\displaystyle \langle \mathbf {v'} \rangle _{rms}/\lambda } . The Taylor microscale falls in between the large-scale eddies and the small-scale eddies, which can be seen by calculating the ratios between λ {\displaystyle \lambda } and the Kolmogorov microscale η {\displaystyle \eta } . Given the length scale of the larger eddies l ∝ k 3 / 2 ϵ {\displaystyle l\propto {\frac {k^{3/2}}{\epsilon }}} , and the turbulence Reynolds number Re l {\displaystyle {\text{Re}}_{l}} referred to these eddies, the following relations can be obtained: [ 3 ] λ l ∼ 10 Re l − 1 / 2 , λ η ∼ 10 Re l 1 / 4 . {\displaystyle {\frac {\lambda }{l}}\sim {\sqrt {10}}\,{\text{Re}}_{l}^{-1/2},\qquad {\frac {\lambda }{\eta }}\sim {\sqrt {10}}\,{\text{Re}}_{l}^{1/4}.}
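A back-of-the-envelope sketch of these formulas (plain Python; the flow values are illustrative assumptions, not measurements):

import math

nu = 1.5e-5      # kinematic viscosity of air [m^2/s]
eps = 1.0        # dissipation rate [m^2/s^3], assumed
u_rms = 0.5      # rms velocity fluctuation [m/s], assumed

lam = u_rms * math.sqrt(15 * nu / eps)    # Taylor microscale
eta = (nu**3 / eps) ** 0.25               # Kolmogorov microscale
Re_lam = u_rms * lam / nu                 # Taylor-scale Reynolds number

print(f"lambda = {lam * 1e3:.1f} mm, eta = {eta * 1e6:.0f} um, "
      f"lambda/eta = {lam / eta:.1f}, Re_lambda = {Re_lam:.0f}")

For these numbers the Taylor microscale (a few millimetres) indeed sits between the Kolmogorov scale (a fraction of a millimetre) and any laboratory-sized integral scale.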
https://en.wikipedia.org/wiki/Taylor_microscale
In fluid dynamics , the Taylor number ( Ta ) is a dimensionless quantity that characterizes the importance of centrifugal "forces" or so-called inertial forces due to rotation of a fluid about an axis, relative to viscous forces . [ 1 ] In 1923 Geoffrey Ingram Taylor introduced this quantity in his article on the stability of flow. [ 2 ] The typical context of the Taylor number is the characterization of Couette flow between rotating coaxial cylinders or rotating concentric spheres. In the case of a system which is not rotating uniformly, such as the case of cylindrical Couette flow where the outer cylinder is stationary and the inner cylinder is rotating, inertial forces will often tend to destabilize a system, whereas viscous forces tend to stabilize a system and damp out perturbations and turbulence. On the other hand, in other cases the effect of rotation can be stabilizing. For example, in the case of cylindrical Couette flow with positive Rayleigh discriminant, there are no axisymmetric instabilities. Another example is a bucket of water that is rotating uniformly (i.e., undergoing solid body rotation). Here the fluid is subject to the Taylor-Proudman theorem , which says that small motions will tend to produce purely two-dimensional perturbations to the overall rotational flow. However, in this case the effects of rotation and viscosity are usually characterized by the Ekman number and the Rossby number rather than by the Taylor number. There are various definitions of the Taylor number which are not all equivalent, but most commonly it is given by T a = 4 Ω 2 R 4 ν 2 , {\displaystyle \mathrm {Ta} ={\frac {4\Omega ^{2}R^{4}}{\nu ^{2}}},} where Ω {\displaystyle \Omega } is a characteristic angular velocity, R is a characteristic linear dimension perpendicular to the rotation axis, and ν {\displaystyle \nu } is the kinematic viscosity . In the case of inertial instability such as Taylor–Couette flow , the Taylor number is mathematically analogous to the Grashof number , which characterizes the strength of buoyant forces relative to viscous forces in convection. When the former exceeds the latter by a critical ratio, convective instability sets in. Likewise, in various systems and geometries, when the Taylor number exceeds a critical value, inertial instabilities set in, sometimes known as Taylor instabilities, which may lead to Taylor vortices or cells. A Taylor–Couette flow describes the fluid behavior between two concentric cylinders in rotation. A textbook definition of the Taylor number is [ 3 ] T a = Ω 1 2 R 1 ( R 2 − R 1 ) 3 ν 2 , {\displaystyle \mathrm {Ta} ={\frac {\Omega _{1}^{2}R_{1}(R_{2}-R_{1})^{3}}{\nu ^{2}}},} where Ω 1 {\displaystyle \Omega _{1}} is the angular velocity of the inner cylinder, R 1 is the external radius of the internal cylinder, and R 2 is the internal radius of the external cylinder. The critical Ta is about 1700.
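A quick sketch of the textbook definition (plain Python; the apparatus dimensions and rotation rate are invented for illustration):

import math

nu = 1.0e-6                 # kinematic viscosity of water [m^2/s]
R1, R2 = 0.05, 0.055        # inner and outer radii [m], assumed
rpm = 10.0                  # inner cylinder rotation rate, assumed
Omega1 = 2 * math.pi * rpm / 60.0

Ta = Omega1**2 * R1 * (R2 - R1)**3 / nu**2
print(f"Ta = {Ta:.3g}")
print("unstable: expect Taylor vortices" if Ta > 1700 else "stable")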
https://en.wikipedia.org/wiki/Taylor_number
In fluid dynamics , Taylor scraping flow is a type of two-dimensional corner flow occurring when one of two intersecting planar walls slides over the other with constant velocity, named after G. I. Taylor . [ 1 ] [ 2 ] [ 3 ] Consider a plane wall located at θ = 0 {\displaystyle \theta =0} in the cylindrical coordinates ( r , θ ) {\displaystyle (r,\theta )} , moving with a constant velocity U {\displaystyle U} towards the left. Consider another plane wall (the scraper) in an inclined position, making an angle α {\displaystyle \alpha } with the positive x {\displaystyle x} direction, and let the point of intersection be at r = 0 {\displaystyle r=0} . This description is equivalent to moving the scraper towards the right with velocity U {\displaystyle U} . The problem is singular at r = 0 {\displaystyle r=0} because at the origin the velocities are discontinuous, and thus the velocity gradient is infinite there. Taylor noticed that the inertial terms are negligible as long as the region of interest lies within r ≪ ν / U {\displaystyle r\ll \nu /U} (or, equivalently, the Reynolds number R e = U r / ν ≪ 1 {\displaystyle Re=Ur/\nu \ll 1} ), so that within this region the flow is essentially a Stokes flow . For example, George Batchelor gives a typical value for lubricating oil with velocity U = 10 cm / s {\displaystyle U=10{\text{ cm}}/{\text{s}}} as r ≪ 0.4 cm {\displaystyle r\ll 0.4{\text{ cm}}} . [ 4 ] Then for the two-dimensional planar problem, the stream function satisfies the biharmonic equation ∇ 4 ψ = 0 , {\displaystyle \nabla ^{4}\psi =0,} where v = ( u r , u θ ) = ( 1 r ∂ ψ ∂ θ , − ∂ ψ ∂ r ) {\displaystyle \mathbf {v} =(u_{r},u_{\theta })=\left({\frac {1}{r}}{\frac {\partial \psi }{\partial \theta }},-{\frac {\partial \psi }{\partial r}}\right)} is the velocity field and ψ {\displaystyle \psi } is the stream function . The boundary conditions are the no-slip conditions u r = − U , u θ = 0 {\displaystyle u_{r}=-U,\ u_{\theta }=0} on the moving wall θ = 0 {\displaystyle \theta =0} and u r = u θ = 0 {\displaystyle u_{r}=u_{\theta }=0} on the scraper θ = α {\displaystyle \theta =\alpha } . Attempting a separable solution of the form ψ = U r f ( θ ) {\displaystyle \psi =Urf(\theta )} reduces the problem to f ′′′′ + 2 f ″ + f = 0 {\displaystyle f''''+2f''+f=0} with boundary conditions f ( 0 ) = 0 , f ′ ( 0 ) = − 1 , f ( α ) = 0 , f ′ ( α ) = 0. {\displaystyle f(0)=0,\quad f'(0)=-1,\quad f(\alpha )=0,\quad f'(\alpha )=0.} The solution of this boundary value problem is given in Taylor's paper; [ 5 ] the corresponding velocity field is ( u r , u θ ) = ( U f ′ ( θ ) , − U f ( θ ) ) {\displaystyle (u_{r},u_{\theta })=(Uf'(\theta ),-Uf(\theta ))} . The pressure can then be obtained through integration of the momentum equation. The tangential stress σ t {\displaystyle \sigma _{t}} and the normal stress σ n {\displaystyle \sigma _{n}} on the scraper due to pressure and viscous forces are both proportional to μ U / r {\displaystyle \mu U/r} . The same scraper stress can be resolved according to Cartesian coordinates, parallel and perpendicular to the lower plate, i.e. σ x = − σ t cos ⁡ α + σ n sin ⁡ α , σ y = σ t sin ⁡ α + σ n cos ⁡ α . {\displaystyle \sigma _{x}=-\sigma _{t}\cos \alpha +\sigma _{n}\sin \alpha ,\ \sigma _{y}=\sigma _{t}\sin \alpha +\sigma _{n}\cos \alpha .} As noted earlier, all the stresses become infinite at r = 0 {\displaystyle r=0} , because the velocity gradient is infinite there. In real life, there will be a very large pressure near that point, which depends on the geometry of the contact. The stresses are shown in the figure of Taylor's original paper. The stress in the direction parallel to the lower wall decreases as α {\displaystyle \alpha } increases, and reaches its minimum value σ x = 2 μ U / r {\displaystyle \sigma _{x}=2\mu U/r} at α = π {\displaystyle \alpha =\pi } . Taylor says: "The most interesting and perhaps unexpected feature of the calculations is that σ y {\displaystyle \sigma _{y}} does not change sign in the range 0 < α < π {\displaystyle 0<\alpha <\pi } . In the range π / 2 < α < π {\displaystyle \pi /2<\alpha <\pi } the contribution to σ y {\displaystyle \sigma _{y}} due to normal stress is of opposite sign to that due to tangential stress, but the latter is the greater. The palette knives used by artists for removing paint from their palettes are very flexible scrapers.
They can therefore only be used at such an angle that σ n {\displaystyle \sigma _{n}} is small and as will be seen in the figure this occurs only when α {\displaystyle \alpha } is nearly 180 ∘ {\displaystyle 180^{\circ }} . In fact artists instinctively hold their palette knives in this position." Further he adds "A plasterer on the other hand holds a smoothing tool so that α {\displaystyle \alpha } is small. In that way he can get the large values of σ y / σ x {\displaystyle \sigma _{y}/\sigma _{x}} which are needed in forcing plaster from protuberances to hollows." Since scraping applications are important for non-Newtonian fluids (for example, scraping paint, nail polish, cream, butter, or honey), it is essential to consider this case. The analysis was carried out by J. Riedler and Wilhelm Schneider in 1983, and they were able to obtain self-similar solutions for power-law fluids satisfying the relation for the apparent viscosity [ 6 ] where m z {\displaystyle m_{z}} and n {\displaystyle n} are constants. The solution for the streamfunction of the flow created by the plate moving towards the right is given by where and where C {\displaystyle C} is the root of J 2 ( α ) = 0 {\displaystyle {\mathcal {J}}_{2}(\alpha )=0} . It can be verified that this solution reduces to that of Taylor's for Newtonian fluids, i.e., when n = 1 {\displaystyle n=1} .
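The separable Stokes solution outlined above can also be constructed numerically: the general solution of f⁗ + 2f″ + f = 0 is f = A sin θ + B cos θ + C θ sin θ + D θ cos θ, so the four boundary conditions form a 4×4 linear system. The sketch below (Python) assumes the sign convention stated above (lower wall moving toward the corner, so f′(0) = −1); the scraper angle is an arbitrary illustrative choice:

```python
import numpy as np

# Taylor scraping flow: psi = U r f(theta), where f solves
#   f'''' + 2 f'' + f = 0,  f = A sin t + B cos t + C t sin t + D t cos t.
# Boundary conditions (assumed convention): f(0)=0, f'(0)=-1, f(a)=f'(a)=0.

def scraping_coeffs(alpha):
    s, c = np.sin(alpha), np.cos(alpha)
    # Rows enforce f(0), f'(0), f(alpha), f'(alpha) on (A, B, C, D).
    M = np.array([
        [0.0, 1.0, 0.0,           0.0          ],   # f(0)      = 0
        [1.0, 0.0, 0.0,           1.0          ],   # f'(0)     = -1
        [s,   c,   alpha * s,     alpha * c    ],   # f(alpha)  = 0
        [c,  -s,   s + alpha * c, c - alpha * s],   # f'(alpha) = 0
    ])
    return np.linalg.solve(M, np.array([0.0, -1.0, 0.0, 0.0]))

alpha = np.pi / 2                    # scraper held perpendicular to the wall
A, B, C, D = scraping_coeffs(alpha)
theta = np.linspace(0.0, alpha, 5)
f = A*np.sin(theta) + B*np.cos(theta) + C*theta*np.sin(theta) + D*theta*np.cos(theta)
print("f(theta) samples:", np.round(f, 4))   # vanishes at both walls
```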
https://en.wikipedia.org/wiki/Taylor_scraping_flow
In fluid dynamics , the Taylor–Couette flow consists of a viscous fluid confined in the gap between two rotating cylinders. For low angular velocities, measured by the Reynolds number Re , the flow is steady and purely azimuthal . This basic state is known as circular Couette flow , after Maurice Marie Alfred Couette , who used this experimental device as a means to measure viscosity . Sir Geoffrey Ingram Taylor investigated the stability of Couette flow in a ground-breaking paper. [ 1 ] Taylor's paper became a cornerstone in the development of hydrodynamic stability theory and demonstrated that the no-slip condition , which was in dispute within the scientific community at the time, was the correct boundary condition for viscous flows at a solid boundary. Taylor showed that when the angular velocity of the inner cylinder is increased above a certain threshold, Couette flow becomes unstable and a secondary steady state characterized by axisymmetric toroidal vortices, known as Taylor vortex flow, emerges. Subsequently, upon increasing the angular speed of the cylinder, the system undergoes a progression of instabilities which lead to states with greater spatio-temporal complexity, the next state being called wavy vortex flow . If the two cylinders rotate in opposite senses then spiral vortex flow arises. Beyond a certain Reynolds number there is the onset of turbulence . Circular Couette flow has wide applications ranging from desalination to magnetohydrodynamics and also viscometric analysis. Different flow regimes have been categorized over the years, including twisted Taylor vortices and wavy outflow boundaries. It is a well-researched and documented flow in fluid dynamics. [ 2 ] A simple Taylor–Couette flow is a steady flow created between two rotating infinitely long coaxial cylinders. [ 3 ] Since the cylinder lengths are infinitely long, the flow is essentially unidirectional in steady state. If the inner cylinder with radius R 1 {\displaystyle R_{1}} is rotating at constant angular velocity Ω 1 {\displaystyle \Omega _{1}} and the outer cylinder with radius R 2 {\displaystyle R_{2}} is rotating at constant angular velocity Ω 2 {\displaystyle \Omega _{2}} as shown in the figure, then the azimuthal velocity component is given by [ 4 ] v θ = A r + B / r {\displaystyle v_{\theta }=Ar+B/r} , where A = Ω 1 ( μ − η ² )/(1 − η ² ) {\displaystyle A=\Omega _{1}{\frac {\mu -\eta ^{2}}{1-\eta ^{2}}}} and B = Ω 1 R 1 ² (1 − μ )/(1 − η ² ) {\displaystyle B=\Omega _{1}R_{1}^{2}{\frac {1-\mu }{1-\eta ^{2}}}} , with μ = Ω 2 /Ω 1 {\displaystyle \mu =\Omega _{2}/\Omega _{1}} and η = R 1 /R 2 {\displaystyle \eta =R_{1}/R_{2}} . Lord Rayleigh [ 5 ] [ 6 ] studied the stability of the problem with the inviscid assumption, i.e., perturbing the Euler equations . The criterion states that, in the absence of viscosity, the necessary and sufficient condition for a distribution of azimuthal velocity v θ ( r ) {\displaystyle v_{\theta }(r)} to be stable is [ 7 ] Φ(r) ≡ (1/r³) d((rv θ )²)/dr ≥ 0 {\displaystyle \Phi (r)\equiv {\frac {1}{r^{3}}}{\frac {d}{dr}}(rv_{\theta })^{2}\geq 0} everywhere in the interval; and, further, that the distribution is unstable if ( r v θ ) 2 {\displaystyle (rv_{\theta })^{2}} should decrease anywhere in the interval. Since | r v θ | {\displaystyle |rv_{\theta }|} represents the angular momentum per unit mass of a fluid element about the axis of rotation, an alternative way of stating the criterion is: a stratification of angular momentum about an axis is stable if and only if it increases monotonically outward. Applying this criterion to the Taylor–Couette flow indicates that the flow is stable if μ > η 2 {\displaystyle \mu >\eta ^{2}} , i.e., for stability, the outer cylinder must rotate (in the same sense) with an angular speed greater than η 2 {\displaystyle \eta ^{2}} -times that of the inner cylinder. Rayleigh's criterion is violated ( Φ < 0 {\displaystyle \Phi <0} ) throughout the whole fluid when 0 < μ < η 2 {\displaystyle 0<\mu <\eta ^{2}} .
On the other hand, when the cylinders rotate in opposite directions, i.e., when μ < 0 {\displaystyle \mu <0} , Rayleigh's criterion is violated only in the inner region, i.e., Φ ( r ) < 0 {\displaystyle \Phi (r)<0} for η < r / R 2 < η 0 {\displaystyle \eta <r/R_{2}<\eta _{0}} where η 0 = η [ ( 1 + | μ | ) / ( η 2 + | μ | ) ] 1 / 2 {\displaystyle \eta _{0}=\eta [(1+|\mu |)/(\eta ^{2}+|\mu |)]^{1/2}} . In a seminal work, G. I. Taylor found the criterion for instability in the presence of viscous forces, both experimentally and theoretically. In general, viscous forces are found to postpone the onset of instability predicted by Rayleigh's criterion. The stability is characterized by three parameters, namely η {\displaystyle \eta } , μ {\displaystyle \mu } and a Taylor number. The first result pertains to the fact that the flow is stable for μ > η 2 {\displaystyle \mu >\eta ^{2}} , consistent with Rayleigh's criterion. However, there are also stable cases in certain parametric ranges for μ < η 2 {\displaystyle \mu <\eta ^{2}} . Taylor obtained an explicit criterion for the narrow gap, in which the annular gap R 2 − R 1 {\displaystyle R_{2}-R_{1}} is small compared with the mean radius ( R 1 + R 2 ) / 2 {\displaystyle (R_{1}+R_{2})/2} , or in other words, 1 − η ≪ ( 1 + η ) / 2 ≈ 1 {\displaystyle 1-\eta \ll (1+\eta )/2\approx 1} . A better definition of the Taylor number is available in this thin-gap approximation; in terms of this Taylor number a critical condition for same-sense rotation was found, and as μ → 1 {\displaystyle \mu \rightarrow 1} the critical Taylor number approaches the value given below. Taylor vortices (also named after Sir Geoffrey Ingram Taylor ) are vortices formed in rotating Taylor–Couette flow when the Taylor number ( T a {\displaystyle \mathrm {Ta} } ) of the flow exceeds a critical value T a c {\displaystyle \mathrm {Ta_{c}} } . Below this value instabilities are not present: perturbations to the flow are damped out by viscous forces, and the flow is steady. But, as T a {\displaystyle \mathrm {Ta} } exceeds T a c {\displaystyle \mathrm {Ta_{c}} } , axisymmetric instabilities appear. The nature of these instabilities is that of an exchange of stabilities (rather than an overstability), and the result is not turbulence but rather a stable secondary flow pattern in which large toroidal vortices form in the flow, stacked one on top of the other. These are the Taylor vortices. While the original base flow is unstable when T a > T a c {\displaystyle \mathrm {Ta} >\mathrm {Ta_{c}} } , the new flow, called Taylor–Couette flow , with the Taylor vortices present, is actually steady until the flow reaches a large Reynolds number , at which point the flow transitions to unsteady "wavy vortex" flow, presumably indicating the presence of non-axisymmetric instabilities. The idealized mathematical problem is posed by choosing a particular value of μ {\displaystyle \mu } , η {\displaystyle \eta } , and T a {\displaystyle \mathrm {Ta} } . As η → 1 {\displaystyle \eta \rightarrow 1} and μ → 0 {\displaystyle \mu \rightarrow 0} from below, the critical Taylor number is T a c ≃ 1708 {\displaystyle \mathrm {Ta_{c}} \simeq 1708} . [ 4 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ] In 1975, J. P. Gollub and H. L. Swinney published a paper on the onset of turbulence in rotating fluid. In a Taylor–Couette flow system, they observed that, as the rotation rate increases, the fluid stratifies into a pile of "fluid donuts".
With further increases in the rotation rate, the donuts oscillate and twist and finally become turbulent. [ 12 ] Their study helped establish the Ruelle–Takens scenario in turbulence, [ 13 ] an important contribution by Floris Takens and David Ruelle towards understanding how hydrodynamic systems transition from stable flow patterns into turbulence. While the principal governing factor for this transition is the Reynolds number , there are other important influencing factors: whether the flow is open (meaning there is a lateral upstream and downstream) or closed (the flow is laterally bounded; e.g. rotating), and bounded (influenced by wall effects) or unbounded (not influenced by wall effects). According to this classification, the Taylor–Couette flow is an example of a flow pattern forming in a closed, bounded flow system.
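The circular Couette profile and Rayleigh's criterion quoted above can be checked directly. A minimal sketch (Python; the radii and rotation rates are illustrative) computes the coefficients A and B from the no-slip conditions and evaluates the Rayleigh discriminant Φ(r) across the gap:

```python
import numpy as np

# Circular Couette flow between coaxial cylinders: v_theta = A r + B / r,
# with A and B fixed by no-slip at r = R1 (omega1) and r = R2 (omega2).
def couette_profile(omega1, omega2, r1, r2):
    mu, eta = omega2 / omega1, r1 / r2
    A = omega1 * (mu - eta**2) / (1 - eta**2)
    B = omega1 * r1**2 * (1 - mu) / (1 - eta**2)
    return A, B

# Rayleigh discriminant Phi(r) = (1/r^3) d(r v_theta)^2 / dr;
# with v_theta = A r + B / r this is 4 A (A r^2 + B) / r^2.
def rayleigh_discriminant(r, A, B):
    return 4 * A * (A * r**2 + B) / r**2

omega1, omega2 = 1.0, 0.0          # outer cylinder at rest, so mu = 0 < eta^2
r1, r2 = 1.0, 2.0
A, B = couette_profile(omega1, omega2, r1, r2)
r = np.linspace(r1, r2, 5)
print("Phi(r):", rayleigh_discriminant(r, A, B))   # negative: inviscidly unstable
```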
https://en.wikipedia.org/wiki/Taylor–Couette_flow
In fluid dynamics , Taylor–Culick flow describes the axisymmetric flow inside a long slender cylinder with one end closed, supplied by a constant flow injection through the sidewall. The flow is named after Geoffrey Ingram Taylor and F. E. C. Culick. [ 1 ] In 1956, Taylor showed that when a fluid is forced into a porous sheet shaped as a cone or wedge, a favorable longitudinal pressure gradient is set up in the direction of the flow inside the cone or wedge and the flow is rotational; this is in contrast to the opposite case, in which the fluid is forced out of the cone or wedge sheet from inside: there the flow is uniform inside the cone or wedge and is obviously potential. Taylor also obtained solutions for the velocity in the limiting case where the cone or the wedge degenerates into a circular tube or parallel plates. Later, in 1966, Culick found the solution corresponding to the tube problem, in a problem applied to solid-propellant rocket combustion. [ 2 ] Here the thermal expansion of the gas due to combustion occurring at the inner surface of the combustion chamber (a long slender cylinder) generates a flow directed towards the axis. The axisymmetric inviscid flow is governed by the Hicks equation , which reduces, when no swirl is present (i.e., zero circulation ), to ∂²ψ/∂r² − (1/r)∂ψ/∂r + ∂²ψ/∂z² = −r²f(ψ) {\displaystyle {\frac {\partial ^{2}\psi }{\partial r^{2}}}-{\frac {1}{r}}{\frac {\partial \psi }{\partial r}}+{\frac {\partial ^{2}\psi }{\partial z^{2}}}=-r^{2}f(\psi )} (with lengths scaled by the cylinder radius and ψ by the injection flux), where ψ {\displaystyle \psi } is the stream function , r {\displaystyle r} is the radial distance from the axis, and z {\displaystyle z} is the axial distance measured from the closed end of the cylinder. The function f ( ψ ) = π 2 ψ {\displaystyle f(\psi )=\pi ^{2}\psi } is found to predict the correct solution. The solution satisfying the required boundary conditions is given by ψ = U a z sin(πr²/2a²) {\displaystyle \psi =Uaz\sin(\pi r^{2}/2a^{2})} , so that u z = (πUz/a) cos(πr²/2a²) and u r = −(Ua/r) sin(πr²/2a²) {\displaystyle u_{z}=(\pi Uz/a)\cos(\pi r^{2}/2a^{2}),\ u_{r}=-(Ua/r)\sin(\pi r^{2}/2a^{2})} , where a {\displaystyle a} is the radius of the cylinder and U {\displaystyle U} is the injection velocity at the wall. Despite the simple-looking formula, the solution has been experimentally verified to be accurate. [ 3 ] The solution is wrong for distances of order z ∼ a {\displaystyle z\sim a} since boundary layer separation at z = 0 {\displaystyle z=0} is inevitable; that is, the Taylor–Culick profile is correct for z ≫ a {\displaystyle z\gg a} . The Taylor–Culick profile with injection at the closed end of the cylinder can also be solved analytically. [ 4 ] Although the solution is derived for the inviscid equation, it satisfies the no-slip condition at the wall since, as Taylor argued, any boundary layer at the sidewall will be blown off by the flow injection. Hence, the flow is referred to as quasi-viscous.
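The quoted profile can be verified symbolically. The following sketch (Python with SymPy) assumes the stream function ψ = U a z sin(πr²/2a²) reconstructed above and checks that it satisfies the swirl-free Hicks equation in the dimensional form E²ψ = −(π²/a⁴) r² ψ, the wall-injection condition, and the closed-end condition:

```python
import sympy as sp

r, z, a, U = sp.symbols('r z a U', positive=True)

# Assumed Taylor-Culick stream function (see the text above):
psi = U * a * z * sp.sin(sp.pi * r**2 / (2 * a**2))

u_z = sp.simplify(psi.diff(r) / r)        # axial velocity  (1/r) d(psi)/dr
u_r = sp.simplify(-psi.diff(z) / r)       # radial velocity -(1/r) d(psi)/dz

# Stokes operator E^2 psi = psi_rr - psi_r / r + psi_zz
E2 = sp.simplify(psi.diff(r, 2) - psi.diff(r) / r + psi.diff(z, 2))

print(sp.simplify(E2 + sp.pi**2 * r**2 * psi / a**4))  # 0: Hicks equation holds
print(sp.simplify(u_r.subs(r, a)))                     # -U: wall injection
print(u_z.subs(z, 0))                                  # 0: closed head end
```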
https://en.wikipedia.org/wiki/Taylor–Culick_flow
The Taylor–Goldstein equation is an ordinary differential equation used in the fields of geophysical fluid dynamics , and more generally in fluid dynamics , in the presence of quasi-two-dimensional flows. [ 1 ] It describes the dynamics of the Kelvin–Helmholtz instability , subject to buoyancy forces (e.g. gravity), for stably stratified fluids in the dissipation-less limit . More generally, it describes the dynamics of internal waves in the presence of a (continuous) density stratification and shear flow . The Taylor–Goldstein equation is derived from the 2D Euler equations , using the Boussinesq approximation . [ 2 ] The equation is named after G.I. Taylor and S. Goldstein , who derived the equation independently from each other in 1931. The third independent derivation, also in 1931, was made by B. Haurwitz. [ 2 ] The equation is derived by solving a linearized version of the governing equations, in the presence of gravity g {\displaystyle g} and a mean density gradient (with gradient-length L ρ {\displaystyle L_{\rho }} ), for the perturbation velocity field u = ( U ( z ) + u x ′ , 0 , u z ′ ) {\displaystyle \mathbf {u} =(U(z)+u_{x}',0,u_{z}')} , where ( U ( z ) , 0 , 0 ) {\displaystyle (U(z),0,0)} is the unperturbed or basic flow. The perturbation velocity has the wave-like solution u ′ ∝ exp ⁡ ( i α ( x − c t ) ) {\displaystyle \mathbf {u} '\propto \exp(i\alpha (x-ct))} ( real part understood). Using this knowledge, and the streamfunction representation u x ′ = d ϕ ~ / d z , u z ′ = − i α ϕ ~ {\displaystyle u_{x}'=d{\tilde {\phi }}/dz,u_{z}'=-i\alpha {\tilde {\phi }}} for the flow, the following dimensional form of the Taylor–Goldstein equation is obtained: ( U − c )( d²φ̃/dz² − α²φ̃ ) − U″φ̃ + N²φ̃/( U − c ) = 0 {\displaystyle (U-c)\left({\frac {d^{2}{\tilde {\phi }}}{dz^{2}}}-\alpha ^{2}{\tilde {\phi }}\right)-U''{\tilde {\phi }}+{\frac {N^{2}}{U-c}}{\tilde {\phi }}=0} , where N = g L ρ {\displaystyle N={\sqrt {g \over L_{\rho }}}} denotes the Brunt–Väisälä frequency . The eigenvalue parameter of the problem is c {\displaystyle c} . If the imaginary part of the wave speed c {\displaystyle c} is positive, then the flow is unstable, and the small perturbation introduced to the system is amplified in time. Note that a purely imaginary Brunt–Väisälä frequency N {\displaystyle N} results in a flow which is always unstable. This instability is known as the Rayleigh–Taylor instability . The relevant boundary conditions are, in case of the no-slip boundary conditions at the channel top and bottom z = z 1 {\displaystyle z=z_{1}} and z = z 2 {\displaystyle z=z_{2}} : φ̃ = 0 at z = z 1 and z = z 2 {\displaystyle {\tilde {\phi }}=0{\text{ at }}z=z_{1}{\text{ and }}z=z_{2}} , so that the vertical velocity perturbation vanishes at the walls.
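The eigenvalue problem can be attacked with a standard finite-difference discretization. Multiplying the equation through by (U − c) makes it quadratic in c, which can be linearized into a companion matrix. The sketch below (Python with NumPy) is a minimal illustration under assumed conditions: a tanh shear layer, uniform stratification, and arbitrary parameters; spurious discrete eigenvalues should be filtered against resolution in any serious use:

```python
import numpy as np

# Taylor-Goldstein as a quadratic eigenvalue problem in c:
#   (U-c)^2 (phi'' - a^2 phi) + (N^2 - (U-c) U'') phi = 0,  phi(z1)=phi(z2)=0.
# Assumed illustrative profile: U = tanh(z), constant N^2.
n, alpha, N2 = 200, 0.4, 0.01
z = np.linspace(-10.0, 10.0, n + 2)
h = z[1] - z[0]
zi = z[1:-1]                                   # interior grid points
U = np.tanh(zi)
Upp = -2 * np.tanh(zi) / np.cosh(zi)**2        # U'' for the tanh profile

# Second-derivative matrix with Dirichlet ends; L = D2 - alpha^2 I
D2 = (np.diag(np.full(n - 1, 1.0), -1) - 2 * np.eye(n)
      + np.diag(np.full(n - 1, 1.0), 1)) / h**2
L = D2 - alpha**2 * np.eye(n)

# (A0 + c A1 + c^2 A2) phi = 0, from expanding the quadratic form above
A0 = (U**2)[:, None] * L + np.diag(N2 - U * Upp)
A1 = -2 * U[:, None] * L + np.diag(Upp)
A2 = L

# Companion linearization: eigenvalues of M are the wave speeds c
Linv = np.linalg.inv(A2)
M = np.vstack([np.hstack([np.zeros((n, n)), np.eye(n)]),
               np.hstack([-Linv @ A0, -Linv @ A1])])
c = np.linalg.eigvals(M)
print("max growth rate alpha*Im(c):", alpha * c.imag.max())
```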
https://en.wikipedia.org/wiki/Taylor–Goldstein_equation
In fluid dynamics, the Taylor–Green vortex is an unsteady flow of a decaying vortex , which has an exact closed-form solution of the incompressible Navier–Stokes equations in Cartesian coordinates . It is named after the British physicist and mathematician Geoffrey Ingram Taylor and his collaborator A. E. Green . [ 1 ] In the original work of Taylor and Green, [ 1 ] a particular flow is analyzed in three spatial dimensions, with the three velocity components v = ( u , v , w ) {\displaystyle \mathbf {v} =(u,v,w)} at time t = 0 {\displaystyle t=0} specified by u = A cos ax sin by sin cz, v = B sin ax cos by sin cz, w = C sin ax sin by cos cz {\displaystyle u=A\cos ax\sin by\sin cz,\ v=B\sin ax\cos by\sin cz,\ w=C\sin ax\sin by\cos cz} . The continuity equation ∇ ⋅ v = 0 {\displaystyle \nabla \cdot \mathbf {v} =0} determines that A a + B b + C c = 0 {\displaystyle Aa+Bb+Cc=0} . The small-time behavior of the flow is then found through simplification of the incompressible Navier–Stokes equations using the initial flow to give a step-by-step solution as time progresses. An exact solution in two spatial dimensions is known, and is presented below. The incompressible Navier–Stokes equations in the absence of body force , and in two spatial dimensions, are given by ∂u/∂x + ∂v/∂y = 0, ∂u/∂t + u∂u/∂x + v∂u/∂y = −(1/ρ)∂p/∂x + ν∇²u, ∂v/∂t + u∂v/∂x + v∂v/∂y = −(1/ρ)∂p/∂y + ν∇²v {\displaystyle {\frac {\partial u}{\partial x}}+{\frac {\partial v}{\partial y}}=0,\quad {\frac {\partial u}{\partial t}}+u{\frac {\partial u}{\partial x}}+v{\frac {\partial u}{\partial y}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial x}}+\nu \nabla ^{2}u,\quad {\frac {\partial v}{\partial t}}+u{\frac {\partial v}{\partial x}}+v{\frac {\partial v}{\partial y}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial y}}+\nu \nabla ^{2}v} . The first of the above equations represents the continuity equation and the other two represent the momentum equations. In the domain 0 ≤ x , y ≤ 2 π {\displaystyle 0\leq x,y\leq 2\pi } , the solution is given by u = cos x sin y F(t), v = −sin x cos y F(t) {\displaystyle u=\cos x\sin y\,F(t),\ v=-\sin x\cos y\,F(t)} , where F ( t ) = e − 2 ν t {\displaystyle F(t)=e^{-2\nu t}} , ν {\displaystyle \nu } being the kinematic viscosity of the fluid. Following the analysis of Taylor and Green [ 1 ] for the two-dimensional situation, with A = a = b = 1 {\displaystyle A=a=b=1} , gives agreement with this exact solution if the exponential is expanded as a Taylor series , i.e. F ( t ) = 1 − 2 ν t + O ( t 2 ) {\displaystyle F(t)=1-2\nu t+O(t^{2})} . The pressure field p {\displaystyle p} can be obtained by substituting the velocity solution in the momentum equations and is given by p = −(ρ/4)(cos 2x + cos 2y)F²(t) {\displaystyle p=-{\frac {\rho }{4}}(\cos 2x+\cos 2y)F^{2}(t)} . The stream function of the Taylor–Green vortex solution, i.e. the function which satisfies v = ∇ × ψ {\displaystyle \mathbf {v} =\nabla \times {\boldsymbol {\psi }}} for flow velocity v {\displaystyle \mathbf {v} } , is ψ = −cos x cos y F(t) e z {\displaystyle {\boldsymbol {\psi }}=-\cos x\cos y\,F(t)\,\mathbf {e} _{z}} . Similarly, the vorticity , which satisfies ω = ∇ × v {\displaystyle {\boldsymbol {\mathbf {\omega } }}=\nabla \times \mathbf {v} } , is given by ω = −2 cos x cos y F(t) e z {\displaystyle {\boldsymbol {\omega }}=-2\cos x\cos y\,F(t)\,\mathbf {e} _{z}} . The Taylor–Green vortex solution may be used for testing and validation of the temporal accuracy of Navier–Stokes algorithms. [ 2 ] [ 3 ] A generalization of the Taylor–Green vortex solution in three dimensions is described in [ 4 ] .
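Because the two-dimensional solution is exact, it can be verified symbolically. The sketch below (Python with SymPy) assumes the sign convention used in the reconstruction above and confirms that continuity and both momentum equations are satisfied identically:

```python
import sympy as sp

x, y, t, nu, rho = sp.symbols('x y t nu rho', positive=True)
F = sp.exp(-2 * nu * t)

# 2-D Taylor-Green vortex (sign convention as in the text above)
u = sp.cos(x) * sp.sin(y) * F
v = -sp.sin(x) * sp.cos(y) * F
p = -rho / 4 * (sp.cos(2 * x) + sp.cos(2 * y)) * F**2

continuity = sp.simplify(u.diff(x) + v.diff(y))
mom_x = sp.simplify(u.diff(t) + u * u.diff(x) + v * u.diff(y)
                    + p.diff(x) / rho - nu * (u.diff(x, 2) + u.diff(y, 2)))
mom_y = sp.simplify(v.diff(t) + u * v.diff(x) + v * v.diff(y)
                    + p.diff(y) / rho - nu * (v.diff(x, 2) + v.diff(y, 2)))
print(continuity, mom_x, mom_y)   # all three should print 0
```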
https://en.wikipedia.org/wiki/Taylor–Green_vortex
Taylor–Maccoll flow refers to the steady flow behind a conical shock wave that is attached to a solid cone. The flow is named after G. I. Taylor and J. W. Maccoll, who described the flow in 1933, guided by an earlier work of Theodore von Kármán . [ 1 ] [ 2 ] [ 3 ] Consider a steady supersonic flow past a solid cone that has a semi-vertical angle χ {\displaystyle \chi } . A conical shock wave can form in this situation, with the vertex of the shock wave lying at the vertex of the solid cone. If it were a two-dimensional problem, i.e., for a supersonic flow past a wedge, then the incoming stream would have deflected through an angle χ {\displaystyle \chi } upon crossing the shock wave, so that streamlines behind the shock wave would be parallel to the wedge sides. Such a simple turnover of streamlines is not possible for the three-dimensional case. After passing through the shock wave, the streamlines are curved and approach the generators of the cone only asymptotically. The curving of streamlines is accompanied by a gradual increase in density and decrease in velocity, in addition to those increments/decrements effected at the shock wave. [ 4 ] The direction and magnitude of the velocity immediately behind the oblique shock wave is given by the weak branch of the shock polar . This particularly suggests that for each value of the incoming Mach number M 1 {\displaystyle M_{1}} , there exists a maximum value χ m a x {\displaystyle \chi _{\mathrm {max} }} beyond which the shock polar does not provide a solution, in which case the conical shock wave will have detached from the solid surface (see Mach reflection ). These detached cases are not considered here. The flow immediately behind the oblique conical shock wave is typically supersonic, although when χ {\displaystyle \chi } is close to χ m a x {\displaystyle \chi _{\mathrm {max} }} it can be subsonic. The supersonic flow behind the shock wave will become subsonic as it evolves downstream. Since all incident streamlines intersect the conical shock wave at the same angle, the intensity of the shock wave is constant. In particular, this means that the entropy jump across the shock wave is also constant throughout. In this case, the flow behind the shock wave is a potential flow . [ 4 ] Hence we can introduce the velocity potential φ {\displaystyle \varphi } such that v = ∇ φ {\displaystyle \mathbf {v} =\nabla \varphi } . Since the problem does not have any length scale and is clearly axisymmetric, the velocity field v {\displaystyle \mathbf {v} } and the pressure field p {\displaystyle p} will turn out to be functions of the polar angle θ {\displaystyle \theta } only (the origin of the spherical coordinates ( r , θ , ϕ ) {\displaystyle (r,\theta ,\phi )} is taken to be located at the vertex). This means that we have φ = r f ( θ ) {\displaystyle \varphi =rf(\theta )} , so that v r = f ( θ ) and v θ = f ′ ( θ ) {\displaystyle v_{r}=f(\theta ){\text{ and }}v_{\theta }=f'(\theta )} . The steady potential flow is governed by the equation [ 4 ] c²∇⋅v = v⋅∇(v²/2) {\displaystyle c^{2}\nabla \cdot \mathbf {v} =\mathbf {v} \cdot \nabla (v^{2}/2)} , where the sound speed c = c ( v ) {\displaystyle c=c(v)} is expressed as a function of the velocity magnitude v 2 = ( ∇ φ ) 2 {\displaystyle v^{2}=(\nabla \varphi )^{2}} only. Substituting the assumed form for the velocity field into the governing equation, we obtain the general Taylor–Maccoll equation. The equation is simplified greatly for a polytropic gas, for which c 2 = ( γ − 1 ) ( h 0 − v 2 / 2 ) {\displaystyle c^{2}=(\gamma -1)(h_{0}-v^{2}/2)} , [ 4 ] where γ {\displaystyle \gamma } is the specific heat ratio and h 0 {\displaystyle h_{0}} is the stagnation enthalpy .
Introducing this formula into the general Taylor–Maccoll equation and introducing a non-dimensional function F ( θ ) = f ( θ ) / v m a x {\displaystyle F(\theta )=f(\theta )/v_{\mathrm {max} }} , where v m a x = 2 h 0 {\displaystyle v_{\mathrm {max} }={\sqrt {2h_{0}}}} (the speed the flow attains when it flows out into a vacuum), we obtain, for the polytropic gas, the Taylor–Maccoll equation . The solution must satisfy the condition F ′ ( χ ) = 0 {\displaystyle F'(\chi )=0} (no penetration on the solid surface) and must also correspond to conditions behind the shock wave at θ = ψ {\displaystyle \theta =\psi } , where ψ {\displaystyle \psi } is the half-angle of the shock cone, which must be determined as part of the solution for a given incoming flow Mach number M {\displaystyle M} and γ {\displaystyle \gamma } . The Taylor–Maccoll equation has no known explicit solution, and it is integrated numerically. When the cone angle is very small, the flow is nearly parallel everywhere, in which case an exact solution can be found, as shown by Theodore von Kármán and Norton B. Moore in 1932. [ 2 ] The solution is more apparent in the cylindrical coordinates ( ρ , ϖ , z ) {\displaystyle (\rho ,\varpi ,z)} (the ρ {\displaystyle \rho } here is the radial distance from the z {\displaystyle z} -axis, and not the density). If U {\displaystyle U} is the speed of the incoming flow, then we write φ = U z + ϕ {\displaystyle \varphi =Uz+\phi } , where ϕ {\displaystyle \phi } is a small correction and M = U / c ∞ {\displaystyle M=U/c_{\infty }} is the Mach number of the incoming flow. We expect the velocity components to depend only on θ {\displaystyle \theta } , i.e., on ρ / z = tan ⁡ θ {\displaystyle \rho /z=\tan \theta } in cylindrical coordinates, which means that we must have ϕ = z g ( ξ ) {\displaystyle \phi =zg(\xi )} , where ξ = ρ / z {\displaystyle \xi =\rho /z} is a self-similar coordinate; the governing equation then reduces to an ordinary differential equation for g ( ξ ) {\displaystyle g(\xi )} . On the surface of the cone ξ = tan ⁡ χ ≈ χ {\displaystyle \xi =\tan \chi \approx \chi } , we must have v ρ / v z = ( ∂ ϕ / ∂ ρ ) / ( U + ∂ ϕ / ∂ z ) ≈ ( 1 / U ) ∂ ϕ / ∂ ρ = χ {\displaystyle v_{\rho }/v_{z}=(\partial \phi /\partial \rho )/(U+\partial \phi /\partial z)\approx (1/U)\partial \phi /\partial \rho =\chi } and consequently g ′ ( χ ) = U χ {\displaystyle g'(\chi )=U\chi } . In the small-angle approximation, the weak shock cone is given by z = β ρ {\displaystyle z=\beta \rho } , where β = M 2 − 1 {\displaystyle \beta ={\sqrt {M^{2}-1}}} . The trivial solution for g {\displaystyle g} describes the uniform flow upstream of the shock cone, whereas the non-trivial solution satisfying the boundary condition on the solid surface behind the shock wave [ 4 ] exhibits a logarithmic singularity as ρ → 0 {\displaystyle \rho \to 0} . The pressure on the surface of the cone p s {\displaystyle p_{s}} is found to be p s − p ∞ = ρ ∞ U 2 χ 2 [ ln ⁡ ( 2 / β χ ) − 1 / 2 ] {\displaystyle p_{s}-p_{\infty }=\rho _{\infty }U^{2}\chi ^{2}[\ln(2/\beta \chi )-1/2]} (in this formula, ρ ∞ {\displaystyle \rho _{\infty }} is the density of the incoming gas).
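The numerical integration mentioned above can be sketched using standard oblique-shock relations for the post-shock state and the Taylor–Maccoll equation written as a first-order system, in the form common in compressible-flow textbooks (e.g., Anderson). Everything below is illustrative: the incoming Mach number and shock angle are arbitrary inputs, and the cone half-angle is recovered as the angle at which the polar velocity component vanishes:

```python
import numpy as np
from scipy.integrate import solve_ivp

gamma = 1.4

def postshock(M1, theta_s):
    """Oblique-shock relations: flow deflection and post-shock Mach number."""
    Mn1 = M1 * np.sin(theta_s)
    delta = np.arctan(2 / np.tan(theta_s) * (Mn1**2 - 1)
                      / (M1**2 * (gamma + np.cos(2 * theta_s)) + 2))
    Mn2 = np.sqrt((1 + 0.5 * (gamma - 1) * Mn1**2)
                  / (gamma * Mn1**2 - 0.5 * (gamma - 1)))
    return delta, Mn2 / np.sin(theta_s - delta)

def rhs(theta, y):
    """Taylor-Maccoll equation as a first-order system for (V_r, V_theta),
    with velocities non-dimensionalized by v_max."""
    Vr, Vt = y
    A = 0.5 * (gamma - 1) * (1 - Vr**2 - Vt**2)
    dVt = (Vr * Vt**2 - A * (2 * Vr + Vt / np.tan(theta))) / (A - Vt**2)
    return [Vt, dVt]

def v_theta_zero(theta, y):      # event: V_theta = 0 on the solid cone
    return y[1]
v_theta_zero.terminal = True

M1, theta_s = 3.0, np.radians(30.0)      # illustrative Mach number, shock angle
delta, M2 = postshock(M1, theta_s)
V = 1 / np.sqrt(1 + 2 / ((gamma - 1) * M2**2))   # V' = V / v_max behind shock
y0 = [V * np.cos(theta_s - delta), -V * np.sin(theta_s - delta)]

sol = solve_ivp(rhs, [theta_s, 1e-3], y0, events=v_theta_zero, max_step=1e-3)
print(f"cone half-angle: {np.degrees(sol.t_events[0][0]):.2f} deg")
```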
https://en.wikipedia.org/wiki/Taylor–Maccoll_flow
In fluid mechanics , the Taylor–Proudman theorem (after Geoffrey Ingram Taylor and Joseph Proudman ) states that when a solid body is moved slowly within a fluid that is steadily rotated with a high angular velocity Ω {\displaystyle \Omega } , the fluid velocity will be uniform along any line parallel to the axis of rotation. Ω {\displaystyle \Omega } must be large compared to the movement of the solid body in order to make the Coriolis force large compared to the acceleration terms. The Navier–Stokes equations for steady flow, with zero viscosity and a body force corresponding to the Coriolis force, are ρ(u⋅∇)u = F − ∇p {\displaystyle \rho (\mathbf {u} \cdot \nabla )\mathbf {u} =\mathbf {F} -\nabla p} , where u {\displaystyle {\mathbf {u} }} is the fluid velocity, ρ {\displaystyle \rho } is the fluid density, p {\displaystyle p} the pressure, and F = − 2 ρ Ω × u {\displaystyle \mathbf {F} =-2\rho \mathbf {\Omega } \times {\mathbf {u} }} . If the advective term on the left may be neglected (reasonable if the Rossby number is much less than unity) and the flow is incompressible (density is constant), the equations become 2ρΩ × u = −∇p {\displaystyle 2\rho \mathbf {\Omega } \times {\mathbf {u} }=-\nabla p} , where Ω {\displaystyle \Omega } is the angular velocity vector; the Coriolis force density is then the gradient of a scalar. If the curl of this equation is taken, the result is the Taylor–Proudman theorem: ( Ω ⋅ ∇ ) u = 0 {\displaystyle (\mathbf {\Omega } \cdot \nabla ){\mathbf {u} }=0} . To derive this, one needs the vector identities ∇ × ( A × B ) = A(∇⋅B) − B(∇⋅A) + (B⋅∇)A − (A⋅∇)B {\displaystyle \nabla \times (\mathbf {A} \times \mathbf {B} )=\mathbf {A} (\nabla \cdot \mathbf {B} )-\mathbf {B} (\nabla \cdot \mathbf {A} )+(\mathbf {B} \cdot \nabla )\mathbf {A} -(\mathbf {A} \cdot \nabla )\mathbf {B} } and ∇ × ∇ p = 0 {\displaystyle \nabla \times \nabla p=0} (because the curl of the gradient is always equal to zero), together with the incompressibility condition ∇ ⋅ u = 0 {\displaystyle \nabla \cdot \mathbf {u} =0} . Note that ∇ ⋅ Ω = 0 {\displaystyle \nabla \cdot {\mathbf {\Omega } }=0} is also needed (angular velocity is divergence-free). The vector form of the Taylor–Proudman theorem is perhaps better understood by expanding the dot product: Ω x ∂u/∂x + Ω y ∂u/∂y + Ω z ∂u/∂z = 0 {\displaystyle \Omega _{x}{\partial \mathbf {u} \over \partial x}+\Omega _{y}{\partial \mathbf {u} \over \partial y}+\Omega _{z}{\partial \mathbf {u} \over \partial z}=0} . In coordinates for which Ω x = Ω y = 0 {\displaystyle \Omega _{x}=\Omega _{y}=0} , the equations reduce to ∂u/∂z = 0 {\displaystyle {\partial \mathbf {u} \over \partial z}=0} if Ω z ≠ 0 {\displaystyle \Omega _{z}\neq 0} . Thus, all three components of the velocity vector are uniform along any line parallel to the z-axis. The Taylor column is an imaginary cylinder projected above and below a real cylinder that has been placed parallel to the rotation axis (anywhere in the flow, not necessarily in the center). The flow will curve around the imaginary cylinders just like the real one, due to the Taylor–Proudman theorem, which states that the flow in a rotating, homogeneous, inviscid fluid is two-dimensional in the plane orthogonal to the rotation axis, so that there is no variation in the flow along the Ω → {\displaystyle {\vec {\Omega }}} axis, often taken to be the z ^ {\displaystyle {\hat {z}}} axis. The Taylor column is a simplified, experimentally observed effect of what transpires in the Earth's atmosphere and oceans. The result known as the Taylor–Proudman theorem was first derived by Sydney Samuel Hough (1870–1923), a mathematician at Cambridge University, in 1897. [ 1 ] : 506 [ 2 ] Proudman published another derivation in 1916 and Taylor in 1917; the effect was then demonstrated experimentally by Taylor in 1923. [ 3 ] : 648 [ 4 ] : 245 [ 5 ] [ 6 ]
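The key vector-identity step in the derivation can be checked symbolically. The sketch below (Python with SymPy) verifies, for a constant rotation vector Ω = Ω ẑ and a generic velocity field, that ∇ × (Ω × u) = Ω(∇ ⋅ u) − (Ω ⋅ ∇)u, which is what reduces the curl of the balance 2ρΩ × u = −∇p to the theorem once ∇ ⋅ u = 0:

```python
import sympy as sp
from sympy.vector import CoordSys3D, curl, divergence

R = CoordSys3D('R')
x, y, z = R.x, R.y, R.z
Om = sp.symbols('Omega')                       # constant rotation rate about z

# A generic smooth velocity field u = (f, g, h)(x, y, z)
f, g, h = [sp.Function(n)(x, y, z) for n in 'fgh']
u = f * R.i + g * R.j + h * R.k
Omega = Om * R.k

lhs = curl(Omega.cross(u))                     # curl(Omega x u)
rhs = Omega * divergence(u) - Om * u.diff(z)   # Omega (div u) - (Omega . grad) u
diff = lhs - rhs
print(sp.simplify(diff.dot(R.i)),
      sp.simplify(diff.dot(R.j)),
      sp.simplify(diff.dot(R.k)))              # 0 0 0: identity holds
```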
https://en.wikipedia.org/wiki/Taylor–Proudman_theorem
Taylor–von Neumann–Sedov blast wave (or sometimes referred to as Sedov–von Neumann–Taylor blast wave ) refers to a blast wave induced by a strong explosion. The blast wave was described by a self-similar solution independently by G. I. Taylor , John von Neumann and Leonid Sedov during World War II . [ 1 ] [ 2 ] G. I. Taylor was told by the British Ministry of Home Security that it might be possible to produce a bomb in which a very large amount of energy would be released by nuclear fission and was asked to report the effect of such weapons. Taylor presented his results on June 27, 1941. [ 3 ] At exactly the same time, in the United States , John von Neumann was working on the same problem, and he presented his results on June 30, 1941. [ 4 ] It was said that Leonid Sedov was also working on the problem around the same time in the USSR , although Sedov never confirmed any exact dates. [ 5 ] The complete solution was published first by Sedov in 1946. [ 6 ] von Neumann published his results in August 1947 in the Los Alamos scientific laboratory report "Blast Wave", although that report was distributed only in 1958. [ 7 ] Taylor got clearance to publish his results in 1949 and published his work in two papers in 1950. [ 8 ] [ 9 ] In the second paper, Taylor calculated the energy of the atomic bomb used in the Trinity nuclear test using the similarity, just by looking at the series of blast wave photographs, which had a length scale and time stamps, published by Julian E. Mack in 1947. [ 10 ] This calculation of energy caused, in Taylor's own words, 'much embarrassment' (according to Grigory Barenblatt ) in US government circles, since the number was then still classified although the photographs published by Mack were not. Taylor's biographer George Batchelor writes: This estimate of the yield of the first atom bomb explosion caused quite a stir... G.I. was mildly admonished by the US Army for publishing his deductions from their (unclassified) photographs . [ 11 ] Consider a strong explosion (such as a nuclear bomb) that releases a large amount of energy E {\displaystyle E} in a small volume during a short time interval. This will create a strong spherical shock wave propagating outwards from the explosion center. The self-similar solution tries to describe the flow when the shock wave has moved through a distance that is extremely large when compared to the size of the explosive. At these large distances, the information about the size and duration of the explosion will be forgotten; only the energy released E {\displaystyle E} will have influence on how the shock wave evolves. To a very high degree of accuracy, it can then be assumed that the explosion occurred at a point (say the origin r = 0 {\displaystyle r=0} ) instantaneously at time t = 0 {\displaystyle t=0} . The shock wave in the self-similar region is assumed to be still very strong, such that the pressure behind the shock wave p 1 {\displaystyle p_{1}} is very large in comparison with the pressure (atmospheric pressure) in front of the shock wave p 0 {\displaystyle p_{0}} , which can be neglected from the analysis. Although the pressure of the undisturbed gas is negligible, the density of the undisturbed gas ρ 0 {\displaystyle \rho _{0}} cannot be neglected, since the density jump across strong shock waves is finite as a direct consequence of the Rankine–Hugoniot conditions .
This approximation is equivalent to setting p 0 = 0 {\displaystyle p_{0}=0} and the corresponding sound speed c 0 = 0 {\displaystyle c_{0}=0} , but keeping the density non-zero, i.e., ρ 0 ≠ 0 {\displaystyle \rho _{0}\neq 0} . [ 12 ] The only parameters available at our disposal are the energy E {\displaystyle E} and the undisturbed gas density ρ 0 {\displaystyle \rho _{0}} . The properties behind the shock wave, such as p 1 , ρ 1 {\displaystyle p_{1},\,\rho _{1}} , are derivable from those in front of the shock wave. The only non-dimensional combination available from r , t , ρ 0 {\displaystyle r,\,t,\,\rho _{0}} and E {\displaystyle E} is ξ = r(ρ 0 /Et²)^{1/5} {\displaystyle \xi =r(\rho _{0}/Et^{2})^{1/5}} . It is reasonable to assume that the evolution in r {\displaystyle r} and t {\displaystyle t} of the shock wave depends only on the above variable. This means that the shock wave location r = R ( t ) {\displaystyle r=R(t)} itself will correspond to a particular value, say β {\displaystyle \beta } , of this variable, i.e., R(t) = β(Et²/ρ 0 )^{1/5} {\displaystyle R(t)=\beta (Et^{2}/\rho _{0})^{1/5}} . The detailed analysis that follows will, at the end, reveal that the factor β {\displaystyle \beta } is quite close to unity, thereby demonstrating (for this problem) the quantitative predictive capability of the dimensional analysis in determining the shock-wave location as a function of time. The propagation velocity of the shock wave is D = dR/dt = 2R/5t {\displaystyle D=\mathrm {d} R/\mathrm {d} t=2R/5t} . With the approximation described above, the Rankine–Hugoniot conditions determine the gas velocity immediately behind the shock front v 1 {\displaystyle v_{1}} , together with p 1 {\displaystyle p_{1}} and ρ 1 {\displaystyle \rho _{1}} , for an ideal gas as follows: v 1 = 2D/(γ+1), p 1 = 2ρ 0 D²/(γ+1), ρ 1 = ρ 0 (γ+1)/(γ−1) {\displaystyle v_{1}={\frac {2D}{\gamma +1}},\quad p_{1}={\frac {2\rho _{0}D^{2}}{\gamma +1}},\quad \rho _{1}=\rho _{0}{\frac {\gamma +1}{\gamma -1}}} , where γ {\displaystyle \gamma } is the specific heat ratio . Since ρ 0 {\displaystyle \rho _{0}} is a constant, the density immediately behind the shock wave is not changing with time, whereas v 1 {\displaystyle v_{1}} and p 1 {\displaystyle p_{1}} decrease as t − 3 / 5 {\displaystyle t^{-3/5}} and t − 6 / 5 {\displaystyle t^{-6/5}} , respectively. The gas motion behind the shock wave is governed by the Euler equations . For an ideal polytropic gas with spherical symmetry, the equations for the fluid variables radial velocity v ( r , t ) {\displaystyle v(r,t)} , density ρ ( r , t ) {\displaystyle \rho (r,t)} and pressure p ( r , t ) {\displaystyle p(r,t)} are given by ∂ρ/∂t + ∂(ρv)/∂r + 2ρv/r = 0, ∂v/∂t + v∂v/∂r = −(1/ρ)∂p/∂r, (∂/∂t + v∂/∂r)(p/ρ^γ) = 0 {\displaystyle {\frac {\partial \rho }{\partial t}}+{\frac {\partial (\rho v)}{\partial r}}+{\frac {2\rho v}{r}}=0,\quad {\frac {\partial v}{\partial t}}+v{\frac {\partial v}{\partial r}}=-{\frac {1}{\rho }}{\frac {\partial p}{\partial r}},\quad \left({\frac {\partial }{\partial t}}+v{\frac {\partial }{\partial r}}\right){\frac {p}{\rho ^{\gamma }}}=0} . At r = R ( t ) {\displaystyle r=R(t)} , the solutions should approach the values given by the Rankine–Hugoniot conditions defined in the previous section. The variable pressure can be replaced by the sound speed c ( r , t ) {\displaystyle c(r,t)} , since pressure can be obtained from the formula c 2 = γ p / ρ {\displaystyle c^{2}=\gamma p/\rho } . The following non-dimensional self-similar variables are introduced, [ 13 ] [ 14 ] v = (2r/5t)V(ξ), c² = (4r²/25t²)Z(ξ), ρ = ρ 0 G(ξ) {\displaystyle v={\frac {2r}{5t}}V(\xi ),\quad c^{2}={\frac {4r^{2}}{25t^{2}}}Z(\xi ),\quad \rho =\rho _{0}G(\xi )} . The conditions at the shock front ξ = 1 {\displaystyle \xi =1} become V(1) = 2/(γ+1), Z(1) = 2γ(γ−1)/(γ+1)², G(1) = (γ+1)/(γ−1) {\displaystyle V(1)={\frac {2}{\gamma +1}},\quad Z(1)={\frac {2\gamma (\gamma -1)}{(\gamma +1)^{2}}},\quad G(1)={\frac {\gamma +1}{\gamma -1}}} . Substituting the self-similar variables into the governing equations leads to three ordinary differential equations. Solving these differential equations analytically is laborious, as shown by Sedov in 1946 and von Neumann in 1947. G. I. Taylor integrated these equations numerically to obtain the desired results. The relation between Z {\displaystyle Z} and V {\displaystyle V} can be deduced directly from energy conservation. Since the energy associated with the undisturbed gas is neglected by setting p 0 = 0 {\displaystyle p_{0}=0} , the total energy of the gas within the shock sphere must be equal to E {\displaystyle E} .
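The scaling law and strong-shock jump conditions reconstructed above are easy to evaluate numerically. The sketch below (Python) uses illustrative assumptions: an energy of the order of a 20 kt explosion, sea-level air density, and β = 1.033, the γ = 7/5 value quoted later in the text:

```python
gamma = 1.4
E, rho0 = 8.4e13, 1.2      # assumed: ~20 kt of TNT in joules; air in kg/m^3
beta = 1.033               # gamma = 7/5 value quoted in the text

for t in (1e-4, 1e-3, 1e-2):                   # seconds after detonation
    R = beta * (E * t**2 / rho0) ** 0.2        # R(t) = beta (E t^2 / rho0)^{1/5}
    D = 2 * R / (5 * t)                        # shock speed dR/dt
    v1 = 2 * D / (gamma + 1)                   # gas velocity behind the front
    p1 = 2 * rho0 * D**2 / (gamma + 1)         # pressure behind the front
    rho1 = rho0 * (gamma + 1) / (gamma - 1)    # density behind the front (constant)
    print(f"t = {t:7.4f} s: R = {R:7.1f} m, D = {D:9.1f} m/s, p1 = {p1:.3e} Pa")
```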
Due to self-similarity, it is clear that not only is the total energy within a sphere of radius ξ = 1 {\displaystyle \xi =1} constant, but so is the total energy within a sphere of any radius ξ < 1 {\displaystyle \xi <1} (in dimensional form, it says that the total energy within a sphere of radius r {\displaystyle r} that moves outwards with a velocity v n = 2 r / 5 t {\displaystyle v_{n}=2r/5t} must be constant). The amount of energy that leaves the sphere of radius r {\displaystyle r} in time d t {\displaystyle dt} due to the gas velocity v {\displaystyle v} is 4 π r 2 ρ v ( h + v 2 / 2 ) d t {\displaystyle 4\pi r^{2}\rho v(h+v^{2}/2)\mathrm {d} t} , where h {\displaystyle h} is the specific enthalpy of the gas. In that time, the radius of the sphere increases with the velocity v n {\displaystyle v_{n}} and the energy of the gas in this extra increased volume is 4 π r 2 ρ v n ( e + v 2 / 2 ) d t {\displaystyle 4\pi r^{2}\rho v_{n}(e+v^{2}/2)\mathrm {d} t} , where e {\displaystyle e} is the specific energy of the gas. Equating these expressions and substituting e = c 2 / γ ( γ − 1 ) {\displaystyle e=c^{2}/\gamma (\gamma -1)} and h = c 2 / ( γ − 1 ) {\displaystyle h=c^{2}/(\gamma -1)} , which are valid for an ideal polytropic gas, leads to the required relation. The continuity and energy equations reduce to a pair of equations in the self-similar variables. Expressing d V / d ln ⁡ ξ {\displaystyle \mathrm {d} V/\mathrm {d} \ln \xi } and d ln ⁡ G / d V {\displaystyle \mathrm {d} \ln G/\mathrm {d} V} as functions of V {\displaystyle V} only, using the relation obtained earlier, and integrating once yields the solution in implicit form. The constant β {\displaystyle \beta } that determines the shock location can then be determined from the conservation of energy. For air, γ = 7 / 5 {\displaystyle \gamma =7/5} and β = 1.033 {\displaystyle \beta =1.033} . The solution for γ = 7 / 5 {\displaystyle \gamma =7/5} is shown in the figure by graphing the curves of ρ / ρ 1 = G ( γ − 1 ) / ( γ + 1 ) {\displaystyle \rho /\rho _{1}=G(\gamma -1)/(\gamma +1)} , v / v 1 = ξ V ( γ + 1 ) / 2 {\displaystyle v/v_{1}=\xi V(\gamma +1)/2} , p / p 1 = ξ 2 G Z ( γ + 1 ) / ( 2 γ ) {\displaystyle p/p_{1}=\xi ^{2}GZ(\gamma +1)/(2\gamma )} and T / T 1 = ξ 2 Z ( γ + 1 ) 2 / [ 2 γ ( γ − 1 ) ] , {\displaystyle T/T_{1}=\xi ^{2}Z(\gamma +1)^{2}/[2\gamma (\gamma -1)],} where T {\displaystyle T} is the temperature. The asymptotic behavior of the central region can be investigated by taking the limit ξ → 0 {\displaystyle \xi \rightarrow 0} . From the figure, it can be observed that the density falls to zero very rapidly behind the shock wave. The entire mass of the gas, which was initially spread out uniformly in a sphere of radius R {\displaystyle R} , is now contained in a thin layer behind the shock wave; that is to say, all the mass is driven outwards by the acceleration imparted by the shock wave. Thus, most of the region is basically empty. The pressure ratio also drops rapidly to attain a constant central value p c {\displaystyle p_{c}} . The temperature ratio follows from the ideal gas law ; since the density ratio decays to zero and the pressure ratio is constant, the temperature ratio must become infinite. The limiting form of the density near the center follows accordingly. Note that the density ρ 1 {\displaystyle \rho _{1}} is time-independent, whereas p 1 ∼ t − 6 / 5 {\displaystyle p_{1}\sim t^{-6/5}} , which means that the actual pressure is in fact time-dependent.
It becomes clear if the above forms are rewritten in dimensional units. The velocity ratio has a linear behavior in the central region, the velocity itself behaving as v ≈ 2 r /(5 γ t ) {\displaystyle v\approx 2r/(5\gamma t)} near the center. As the shock wave evolves in time, its strength decreases. The self-similar solution described above breaks down when p 1 {\displaystyle p_{1}} becomes comparable to p 0 {\displaystyle p_{0}} (more precisely, when p 1 ∼ [ ( γ + 1 ) / ( γ − 1 ) ] p 0 {\displaystyle p_{1}\sim [(\gamma +1)/(\gamma -1)]p_{0}} ). At this later stage of the evolution, p 0 {\displaystyle p_{0}} (and consequently c 0 {\displaystyle c_{0}} ) cannot be neglected. This means that the evolution is not self-similar, because one can form a length scale ( E / p 0 ) 1 / 3 {\displaystyle (E/p_{0})^{1/3}} and a time scale ( E / p 0 ) 1 / 3 / c 0 {\displaystyle (E/p_{0})^{1/3}/c_{0}} to describe the problem. The governing equations are then integrated numerically, as was done by H. Goldstine and John von Neumann , [ 15 ] Brode, [ 16 ] and Okhotsimskii et al. [ 17 ] Furthermore, in this stage, the compressing shock wave is necessarily followed by a rarefaction wave behind it; the waveform is empirically fitted by the Friedlander waveform . The analogous problem in cylindrical geometry, corresponding to an axisymmetric blast wave such as that produced by a lightning discharge, can be solved analytically. This problem was solved independently by Leonid Sedov , A. Sakurai [ 18 ] and S. C. Lin. [ 19 ] In cylindrical geometry, the non-dimensional combination involving the radial coordinate r {\displaystyle r} (this is different from the r {\displaystyle r} in the spherical geometry), the time t {\displaystyle t} , the total energy released per unit axial length E {\displaystyle E} (this is different from the E {\displaystyle E} used in the previous section) and the ambient density ρ 0 {\displaystyle \rho _{0}} is found to be r(ρ 0 /Et²)^{1/4} {\displaystyle r(\rho _{0}/Et^{2})^{1/4}} .
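Inverting the same spherical scaling law reproduces, in outline, Taylor's famous yield estimate from the Trinity photographs. The sketch below (Python) uses illustrative values of the same order as the published photograph data, not Mack's actual measurements:

```python
# Inverting R = beta (E t^2 / rho0)^{1/5} gives E = rho0 R^5 / (beta^5 t^2).
rho0, beta = 1.2, 1.033      # sea-level air density; gamma = 7/5 constant
R, t = 140.0, 0.025          # illustrative radius (m) and time (s)

E = rho0 * R**5 / (beta**5 * t**2)
print(f"estimated yield: {E:.2e} J  (~{E / 4.184e12:.1f} kt TNT)")
```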
https://en.wikipedia.org/wiki/Taylor–von_Neumann–Sedov_blast_wave
Terbium(III) oxide , also known as terbium sesquioxide , is a sesquioxide of the rare earth metal terbium , having chemical formula Tb 2 O 3 . It is a p-type semiconductor and proton conductor, whose conductivity is enhanced when doped with calcium . [ 3 ] It may be prepared by the reduction of Tb 4 O 7 in hydrogen at 1300 °C for 24 hours. [ 4 ] It is a basic oxide and dissolves easily in dilute acids, forming almost colourless terbium(III) salts. The crystal structure is cubic and the lattice constant is a = 1057 pm. [ 5 ]
https://en.wikipedia.org/wiki/Tb2O3
Terbium(III,IV) oxide , occasionally called tetraterbium heptaoxide , has the formula Tb 4 O 7 , though some texts refer to it as TbO 1.75 . There is some debate as to whether it is a discrete compound, or simply one phase in an interstitial oxide system. Tb 4 O 7 is one of the main commercial terbium compounds, and the only such product containing at least some Tb(IV) (terbium in the +4 oxidation state), along with the more stable Tb(III). It is produced by heating the metal oxalate, and it is used in the preparation of other terbium compounds. It is also used in electronics and data storage, green energy technologies, medical imaging and diagnosis, and chemical processes. [ 1 ] Terbium forms three other major oxides : Tb 2 O 3 , TbO 2 , and Tb 6 O 11 . Tb 4 O 7 is most often produced by ignition of the oxalate or the sulfate in air. [ 2 ] The oxalate route (at 1000 °C) is generally preferred, since the sulfate requires a higher temperature and produces an almost black product contaminated with Tb 6 O 11 or other oxygen-rich oxides. Terbium(III,IV) oxide loses O 2 when heated at high temperatures; at more moderate temperatures (ca. 350 °C) it reversibly loses oxygen, as shown by exchange with 18 O 2 . This property, also seen in Pr 6 O 11 and V 2 O 5 , allows it to work like V 2 O 5 as a redox catalyst in reactions involving oxygen. It was found as early as 1916 that hot Tb 4 O 7 catalyses the reaction of coal gas ( CO + H 2 ) with air, leading to incandescence and often ignition. [ 3 ] Tb 4 O 7 reacts with atomic oxygen to produce TbO 2 , but more convenient preparations are available. [ 4 ] Tb 4 O 7 reacts with hot concentrated acids to produce terbium(III) salts. For example, reaction with sulfuric acid gives terbium(III) sulfate . Terbium oxide reacts slowly with hydrochloric acid to form terbium(III) chloride solution, and elemental chlorine. At ambient temperature, complete dissolution might require a month; in a hot water bath, about a week. Anhydrous terbium(III) chloride can be produced by the ammonium chloride route. [ 5 ] [ 6 ] [ 7 ] In the first step, terbium oxide is heated with ammonium chloride to produce the ammonium salt of the pentachloride; in the second step, the ammonium chloride salt is converted to the trichloride by heating in a vacuum at 350–400 °C.
https://en.wikipedia.org/wiki/Tb4O7
Terbium(III) chloride ( Tb Cl 3 ) is a chemical compound . In the solid state TbCl 3 has the YCl 3 layer structure. [ 2 ] Terbium(III) chloride frequently forms a hexahydrate. The hexahydrate of terbium(III) chloride can be obtained by the reaction of terbium(III) oxide and hydrochloric acid: [ 3 ] Tb 2 O 3 + 6 HCl → 2 TbCl 3 + 3 H 2 O. It can also be obtained by direct reaction of the elements: [ 4 ] 2 Tb + 3 Cl 2 → 2 TbCl 3 . Terbium(III) chloride is a white, hygroscopic powder. [ 5 ] It crystallizes in an orthorhombic plutonium(III) bromide crystal structure with space group Cmcm (No. 63). [ 6 ] [ 7 ] It can form a complex Tb(gly) 3 Cl 3 ·3H 2 O with glycine. [ 8 ] The hexahydrate plays an important role as an activator of green phosphors in color TV tubes and is also used in specialty lasers and as a dopant in solid-state devices . [ 9 ] Terbium(III) chloride causes hyperemia of the iris . [ 10 ] Conditions/substances to avoid are: heat, acids and acid fumes.
https://en.wikipedia.org/wiki/TbCl3
Technetium(VII) oxide is the chemical compound with the formula Tc 2 O 7 . This yellow volatile solid is a rare example of a molecular binary metal oxide, the other examples being RuO 4 , OsO 4 , and the unstable Mn 2 O 7 . It adopts a centrosymmetric corner-shared bi-tetrahedral structure in which the terminal and bridging Tc−O bonds are 167 pm and 184 pm respectively and the Tc−O−Tc angle is 180°. [ 2 ] Technetium(VII) oxide is prepared by the oxidation of technetium at 450–500 °C: [ 3 ] 4 Tc + 7 O 2 → 2 Tc 2 O 7 . It is the anhydride of pertechnetic acid ( Tc 2 O 7 + H 2 O → 2 HTcO 4 ) and the precursor to sodium pertechnetate ( Tc 2 O 7 + 2 NaOH → 2 NaTcO 4 + H 2 O ).
https://en.wikipedia.org/wiki/Tc2O7
Technetium trichloride is an inorganic compound of technetium and chlorine with the formula TcCl 3 . Two polymorphs of technetium trichloride are known. The α-polymorph is prepared as a black solid from ditechnetium(III) tetraacetate dichloride and hydrogen chloride at 300 °C. It has a bioctahedral structure, consisting of triangular Tc 3 Cl 9 units with C 3v symmetry, with each Tc atom coordinated to two Tc neighbors and five chloride ligands (Tc-Tc bond length 2.44 Å). The Tc-Tc distances are indicative of double-bonded Tc atoms. Tc 3 Cl 9 is isostructural to its rhenium homologue, trirhenium nonachloride . [ 1 ] β-TcCl 3 is obtained by the reaction between technetium metal and chlorine gas. Its structure consists of infinite layers of edge-sharing octahedra, similar to MoCl 3 and ReCl 3 , with distances that also indicate metal-metal bonding. It is less stable than α-TcCl 3 and slowly transforms into it. [ 1 ]
https://en.wikipedia.org/wiki/TcCl3
Technetium(IV) chloride is the inorganic compound with the formula TcCl 4 . It was discovered in 1957 as the first binary halide of technetium. It is the binary chloride of technetium with the highest oxidation state that has been isolated as a solid. It is volatile at elevated temperatures, and its volatility has been used for separating technetium from other metal chlorides. [ 2 ] Colloidal solutions of technetium(IV) chloride are oxidized to form Tc(VII) ions when exposed to gamma rays . [ 3 ] Technetium tetrachloride can be synthesized from the reaction of Cl 2 with technetium metal at elevated temperatures between 300 and 500 °C: [ 4 ] Tc + 2 Cl 2 → TcCl 4 . Technetium tetrachloride has also been prepared from the reaction of technetium(VII) oxide with carbon tetrachloride in a sealed vessel at elevated temperature. [ 5 ] At 450 °C under vacuum, TcCl 4 decomposes to TcCl 3 and TcCl 2 . [ 6 ] As verified by X-ray crystallography , the compound is an inorganic polymer consisting of interconnected TcCl 6 octahedra.
https://en.wikipedia.org/wiki/TcCl4
Technetium hexafluoride or technetium(VI) fluoride ( Tc F 6 ) is a yellow inorganic compound with a low melting point . It was first identified in 1961. [ 3 ] In this compound, technetium has an oxidation state of +6, the highest oxidation state found in the technetium halides . In this respect, technetium differs from rhenium, which forms a heptafluoride, ReF 7 . [ 4 ] Technetium hexafluoride occurs as an impurity in uranium hexafluoride , as technetium is a fission product of uranium ( spontaneous fission in natural uranium , possible contamination from induced fission inside the reactor in reprocessed uranium ). The fact that the boiling points of the hexafluorides of uranium and technetium are very close to each other presents a problem in using fluoride volatility in nuclear reprocessing . Technetium hexafluoride is prepared by heating technetium metal with an excess of F 2 at 400 °C: [ 3 ] Tc + 3 F 2 → TcF 6 . Technetium hexafluoride is a golden-yellow solid at room temperature. Its melting point is 37.4 °C and its boiling point is 55.3 °C. [ 1 ] Technetium hexafluoride undergoes a solid-phase transition at −4.54 °C. Above this temperature (measured at 10 °C), the solid structure is cubic . Lattice parameters are a = 6.16 Å. There are two formula units (in this case, discrete molecules) per unit cell , giving a density of 3.02 g·cm −3 . Below this temperature (measured at −19 °C), the solid structure is orthorhombic , space group Pnma . Lattice parameters are a = 9.55 Å, b = 8.74 Å, and c = 5.02 Å. There are four formula units (in this case, discrete molecules) per unit cell , giving a density of 3.38 g·cm −3 . At −140 °C, the solid structure is still orthorhombic, but the lattice parameters are now a = 9.360 Å, b = 8.517 Å, and c = 4.934 Å, giving a density of 3.58 g·cm −3 . [ 2 ] The TcF 6 molecule itself (the form important for the liquid or gas phase) has octahedral molecular geometry , with point group O h . The Tc–F bond length is 1.812 Å. [ 2 ] Its magnetic moment has been measured to be 0.45 μ B . [ 5 ] TcF 6 is octahedral , as shown by infrared and Raman spectra . [ 6 ] [ 7 ] Its low-temperature orthorhombic form converts to the higher-symmetry body-centred cubic form at room temperature, like other metal hexafluorides such as RhF 6 and OsF 6 . [ 8 ] Preliminary measurements of the magnetic moment yield a value of 0.45 μB, which is lower than expected for a d 1 octahedral compound. [ 9 ] TcF 6 reacts with alkali chlorides in iodine pentafluoride (IF 5 ) solution to form hexafluorotechnetates. [ 10 ] [ 11 ] TcF 6 disproportionates on hydrolysis with aqueous NaOH to form a black precipitate of TcO 2 . [ 3 ] In hydrogen fluoride solution, TcF 6 reacts with hydrazinium fluoride to yield N 2 H 6 TcF 6 or N 2 H 6 (TcF 6 ) 2 . [ 12 ]
https://en.wikipedia.org/wiki/TcF6
The Dragon Database for Human Transcription Co-Factors and Transcription Factor Interacting Proteins ( TcoF-DB ) is a database that facilitates the exploration of human proteins involved in the regulation of transcription: transcription factors , which bind to regulatory DNA regions, and transcription co-factors , which interact with transcription factors but do not bind to regulatory DNA regions. [ 1 ] The database describes a total of 529 (potential) human transcription co-factors interacting with a total of 1365 human transcription factors.
https://en.wikipedia.org/wiki/TcoF-DB
TCR-Seq ( T-cell Receptor Sequencing ) is a method used to identify and track specific T cells and their clones. [ 1 ] TCR-Seq utilizes the unique nature of a T-cell receptor (TCR) as a ready-made molecular barcode. [ 1 ] This technology can be applied in both single-cell sequencing technologies and high-throughput screens. [ 1 ] T cells are a part of the adaptive immune system and play a critical role in protecting the body from foreign pathogens . [ 2 ] T-cell receptors (TCRs) are a group of membrane proteins found on the surface of T cells which can bind to foreign antigens. [ 3 ] TCRs interact with major histocompatibility complexes (MHC) on cell surfaces to recognize antigens . They are heterodimers made up predominantly of α and β chains (or, more rarely, δ and γ chains) [ 4 ] and consist of a variable region and a constant region. Variable regions are produced through a process called VDJ recombination , which results in unique amino acid sequences for the α, β, and γ chains. The result is that each TCR is unique and recognizes a specific antigen. [ 4 ] Complementarity determining regions (CDRs) are a part of the TCR and play an essential role in TCR-MHC interactions. CDR1 and CDR2 are encoded by V genes, while CDR3 is made from the region between V and J genes or between D and J genes (termed " VDJ genes" when referred to together). [ 4 ] CDR3 is the most variable of the CDRs, and is in direct contact with the antigen. [ 4 ] [ 5 ] As such, CDR3 is used as the "barcode region" to identify unique T cell populations, as it is highly unlikely for two T cells to have the same CDR3 sequence unless they came from the same parental T cell. [ 4 ] [ 5 ] VDJ recombination produces such a vast number of unique TCRs that many receptors never encounter the antigen they are best suited for. When a foreign antigen is present in the body, the few T cells that recognize that antigen are positively selected, so that the body has an adequate number of T cells to mount an effective immune response. [ 6 ] The selected T cells rapidly divide and differentiate into effector T cells through a process called clonal expansion, [ 7 ] which retains the TCR sequence (including the CDR3 sequence) that originally recognized the antigen. [ 8 ] TCR-Seq uses the unique nature of the TCR, in particular CDR3, as a molecular barcode to track T cells through a variety of processes like differentiation and proliferation, [ 9 ] which can be used for a wide variety of purposes. TCR sequencing can be performed on pooled cell populations ("bulk sequencing") or single cells (" single cell sequencing "). [ 4 ] Bulk sequencing is useful to explore entire TCR repertoires (all the TCRs within an individual or a sample) and to generate comparisons between repertoires of different individuals. [ 4 ] This method can sequence millions of cells in a single experiment. [ 5 ] However, one major disadvantage is that bulk sequencing cannot determine which TCR chains pair together, only the frequency within the repertoire. The large number of TCRs sampled also means that lower-abundance TCRs may not be detected. [ 5 ] Single cell sequencing can determine TCR chain pairs, making it more useful for identifying specific TCRs.
[ 4 ] [ 5 ] Some major disadvantages of this technique are its high costs, [ 4 ] limited capacity of a few thousand cells, [ 5 ] and the necessity of live cells, which may be more challenging to obtain. [ 4 ] Any TCR chain can be sequenced, although the α and β chains are more commonly chosen due to their abundance in the T cell population. [ 4 ] In particular, the β chain is of interest due to its higher diversity and specificity compared to other chains; the presence of a D gene component in the β chain, which is not present in the α chain, allows more diverse combinations. [ 4 ] [ 10 ] In addition, β chains are unique to each T cell, which can be used to identify distinct T cell populations within a sample. [ 4 ] [ 5 ] [ 10 ] To perform TCR sequencing, polymerase chain reaction (PCR) amplification is performed on the CDR3 region as a measure of unique T cells within a population. [ 4 ] [ 10 ] The CDR3 region is chosen over CDR1 and CDR2 as it is directly responsible for antigen interactions [ 4 ] [ 10 ] and is generally unique to TCRs from the same lineage, which allows identification of distinct populations. [ 10 ] The goal of this step is to generate a library of transcripts to be sequenced. There are three major ways of generating a library for TCR sequencing. Multiplex PCR can be employed on both genomic DNA (gDNA) or RNA which has been converted to double-stranded complementary DNA (cDNA) . [ 4 ] Primer pools with primer pairs targeting J and V alleles are used to amplify the CDR3 region of the TCR transcript. [ 4 ] [ 10 ] The transcript goes through two or more rounds of PCR to amplify the region of interest, then adaptors are ligated onto either end of the resulting transcript. [ 10 ] This method is among the most used in the generation of libraries for TCR-seq, as it can capture a great deal of the diversity of the TCR through the primer pool. [ 4 ] [ 10 ] However, as it is near-impossible to optimize PCR conditions for all the primers in the pool, multiplex PCR can result in amplification bias, where some CDR3 regions with primers that bind poorly may not be amplified. [ 5 ] [ 10 ] This means the abundance of amplified segments may not correspond with the actual abundance within the sample. [ 5 ] [ 10 ] Target enrichment can use gDNA or RNA converted to cDNA. [ 4 ] [ 10 ] The starting material is first processed to generate DNA or cDNA transcripts with indexed adaptors on the 5' and 3' ends. [ 4 ] These transcripts are then incubated with RNA baits designed to bind to regions of interest, generally the CDR3 region. [ 4 ] These baits, which are normally bound to magnetic beads, can be isolated using a magnet. This allows the isolation of transcripts of the CDR3 region, which can then be amplified using PCR. [ 4 ] Target enrichment using RNA baits requires fewer PCR amplification steps, which may decrease amplification bias. [ 4 ] However, the efficiency of the capture by magnets may affect the diversity of the amplified transcripts. Rapid Amplification of cDNA Ends (RACE) is a method that uses RNA transcripts for generation of the library. [ 4 ] [ 10 ] [ 11 ] Although RACE can be applied at the 3' or the 5' end, the 5' end is more commonly used for TCR-seq. [ 4 ] This method revolves around the addition of a common 5' adaptor sequence to the transcript, which can be done in a few different ways. [ 11 ] [ 12 ] One method is to add on the adapter following reverse transcription .
During the generation of the reverse DNA strand from the RNA template, a forward primer adds a sequence complementary to the 5' adapter, leading to template switching. [ 4 ] [ 10 ] [ 11 ] This allows a 5' adapter to be incorporated into the cDNA when the complementary sequence is generated. [ 4 ] [ 10 ] [ 11 ] [ 12 ] Primers can be designed to amplify the entire region from the adaptor to the constant region, [ 4 ] [ 10 ] [ 11 ] then adaptor ligation can be performed in a second PCR reaction. As all the different transcripts now share an identical adapter, they can be amplified using a single primer pair. As such, this method decreases amplification bias and improves the ability to detect more uncommon TCR populations with greater certainty. [ 4 ] [ 10 ] [ 11 ] However, as TCR transcription levels differ between cells, this method cannot provide an accurate measurement of the number of different T cell types in the sample based on the level of RNA transcripts alone. [ 4 ] Following generation of the library, the products can be sequenced, generally via Next Generation Sequencing (NGS) . [ 4 ] [ 10 ] [ 11 ] The use of machines that are capable of longer reads and that maintain read quality at the 3' end is important, as the CDR3 region is at the 3' end of an approximately 500 base pair transcript. [ 10 ] The error rate of NGS presents a challenge for analysis of TCR repertoires. [ 4 ] [ 10 ] [ 11 ] Small variations in the TCR can change its specificity towards antigens, [ 10 ] and as such may be of interest to researchers. However, errors in sequencing can generate a minor change that may be interpreted as a low-frequency, distinct TCR population, [ 10 ] which is a problem when analyzing changes in TCR repertoires. Efforts have been made to establish thresholds to remove low-abundance reads from analysis, as well as to develop algorithms to correct these errors. [ 10 ] Generally, the data collected from TCR-seq is used to compare TCR repertoires, either between the same patient at different timepoints, or between different patients. [ 4 ] [ 10 ] [ 11 ] Recent studies examined the characteristics of a healthy repertoire, and found a high degree of variation in TCR β chain levels and types, though a subset is shared across different individuals. [ 10 ] However, this diversity has yet to be shown to strongly correlate with any conditions of interest, such as rates of infection or chance of cancer relapse, [ 10 ] suggesting that further research is necessary. Clonal expansion of T cells allows the immune system to deal with a variety of infectious diseases with high specificity. [ 13 ] Thus, understanding changes that occur to the T cell repertoire following infection can aid early diagnosis, disease monitoring, and therapeutic development. [ 5 ] Acquired Immunodeficiency Syndrome (AIDS) is a devastating disease caused by Human Immunodeficiency Virus (HIV) infection, which results in the death of CD4+ T cells [ 14 ] and dysfunctional CD8+ T cells. [ 5 ] Recent studies have suggested that increased TCR diversity may decrease HIV diversity and limit disease progression. [ 5 ] [ 15 ] Sequencing of the TCR would also increase understanding of the progression of AIDS and predict morbidity. [ 5 ] Additionally, sequencing the TCR repertoire of individuals with natural defense against HIV infection [ 16 ] could help the development of a vaccine to limit further spread of the disease. [ 5 ] Cancer is the uncontrolled proliferation of malignant cells which can spread throughout the body.
Generally, the data collected from TCR-seq are used to compare TCR repertoires, either within the same patient at different timepoints or between different patients. [ 4 ] [ 10 ] [ 11 ] Recent studies have examined the characteristics of a healthy repertoire and found a high degree of variation in TCR β chain levels and types, though a subset is shared across different individuals. [ 10 ] However, this diversity has yet to be shown to strongly correlate with any conditions of interest, such as rates of infection or chance of cancer relapse, [ 10 ] suggesting further research is necessary. Clonal expansion of T cells allows the immune system to deal with a variety of infectious diseases with high specificity. [ 13 ] Thus, understanding the changes that occur in the T cell repertoire following infection can aid early diagnosis, disease monitoring, and therapeutic development. [ 5 ] Acquired Immunodeficiency Syndrome (AIDS) is a devastating disease caused by Human Immunodeficiency Virus (HIV) infection, which results in the death of CD4+ T cells [ 14 ] and dysfunction of CD8+ T cells. [ 5 ] Recent studies have suggested that increased TCR diversity may decrease HIV diversity and limit disease progression. [ 5 ] [ 15 ] Sequencing of the TCR would also increase understanding of the progression of AIDS and help predict morbidity. [ 5 ] Additionally, sequencing the TCR repertoire of individuals with natural defenses against HIV infection [ 16 ] could help in the development of a vaccine to limit further spread of the disease. [ 5 ] Cancer is the uncontrolled proliferation of malignant cells, which can spread throughout the body. [ 17 ] This is caused by mutations within the cancer cell, which often lead to the expression of mutant proteins termed neoantigens . [ 5 ] [ 17 ] Identification of these neoantigens has great therapeutic benefit, as they can be exploited to target cancer cells without harming normal cells. As CD8+ T cells can recognize some neoantigens through their TCR, sequencing of TCR repertoires can help identify potential cancer biomarkers . [ 5 ] In addition to biomarker identification, sequencing of the TCR repertoire can also track changes during cancer progression, assess responses to immunotherapy , [ 18 ] and evaluate the tumour microenvironment for conditions that may make it permissive to cancer growth. [ 5 ]
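To make the repertoire comparison described above concrete, the following sketch computes a common pair of summary statistics, Shannon diversity and clonality, from a repertoire's clonotype frequencies. The numeric example is invented, and these are only two of several statistics used in the literature.

```python
import math

def shannon_diversity(frequencies):
    """Shannon entropy (natural log) of clonotype frequencies."""
    return -sum(f * math.log(f) for f in frequencies if f > 0)

def clonality(frequencies):
    """1 minus normalized entropy: 0 = evenly diverse, 1 = monoclonal."""
    n = len(frequencies)
    if n <= 1:
        return 1.0
    return 1.0 - shannon_diversity(frequencies) / math.log(n)

# Invented example: an even repertoire vs. one dominated by a single
# clone, e.g. before and after a clonal expansion.
baseline = [0.25, 0.25, 0.25, 0.25]
expanded = [0.85, 0.05, 0.05, 0.05]
print(clonality(baseline))  # 0.0
print(clonality(expanded))  # ~0.58
```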
https://en.wikipedia.org/wiki/Tcr-seq
Te-Tzu Chang or T. T. Chang ( simplified Chinese : 张德慈 ; traditional Chinese : 張德慈 ; pinyin : Zhāng Décí ; 1927–2006) was a prominent Chinese agricultural and environmental scientist. Chang was born in Shanghai on April 3, 1927, into a "scholar-gentry" family. Chang's father graduated from Saint John's University in Shanghai, won a scholarship under the Boxer Indemnity Scholarship Program, and completed his studies in the United States . Chang had three older sisters and one younger brother. [ 1 ] Chang finished his secondary education at the Saint John's School (a middle school affiliated with Saint John's University) in Shanghai. Chang initially studied agricultural science at Saint John's University, his father's alma mater . After about one year, he transferred to the University of Nanking in Nanjing , where he majored in agriculture and horticulture . Chang graduated from the University of Nanking with a BSA in 1949. [ 1 ] After graduation, Chang worked for the Council of Agriculture in Guangzhou , the capital city of Guangdong Province . During this period, Shen Tsung-han (1895–1980, 沈宗瀚 ; born in Ningbo , Zhejiang ; died in Taipei , Taiwan ) was one of his mentors. Shen was the second Director-general of the Council of Agriculture. [ 1 ] In 1950, Chang moved to Taiwan and served as a technician in the Ministry of Agriculture. On Shen's recommendation, Chang went in 1952 to study plant genetics at Cornell University , which was also Shen's alma mater (Shen received his PhD from Cornell). Chang obtained his MSc from Cornell in 1954 and continued his studies at the University of Minnesota , where he earned a PhD in plant genetics in 1959. [ 1 ] Chang returned to Taiwan in 1959; however, after two years there, he moved to the Philippines to work for the International Rice Research Institute (IRRI) in Los Baños , Laguna . From 1962 to 1991, Chang managed the International Rice Germplasm Center. The T. T. Chang Genetic Resources Center is named after him. [ 1 ] Chang received numerous awards and honors. [ 2 ]
https://en.wikipedia.org/wiki/Te-Tzu_Chang
Tritellurium dichloride is the inorganic compound with the formula Te 3 Cl 2 . It is one of the more stable lower chlorides of tellurium. Te 3 Cl 2 is a gray solid. Its structure consists of a long chain of Te atoms in which every third Te center carries two chloride ligands, giving the repeat unit -Te-Te-TeCl 2 -. [ 2 ] It is a semiconductor with a band gap of 1.52 eV, which is larger than that of elemental Te (0.34 eV). [ 3 ] It is prepared by heating Te with the appropriate stoichiometry of chlorine, as shown below. [ 4 ]
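For concreteness, the "appropriate stoichiometry" corresponds to the balanced equation below; this is a straightforward balancing from the formula, not a quotation from the cited preparation.

\[
3\,\mathrm{Te} + \mathrm{Cl_2} \xrightarrow{\ \Delta\ } \mathrm{Te_3Cl_2}
\]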
https://en.wikipedia.org/wiki/Te3Cl2
Tellurium tetrachloride is the inorganic compound with the empirical formula TeCl 4 . The compound is volatile, subliming at 200 °C at 0.1 mmHg. [ 2 ] Molten TeCl 4 is ionic, dissociating into TeCl 3 + and Te 2 Cl 10 2− . [ 2 ] TeCl 4 is monomeric in the gas phase, with a structure similar to that of SF 4 . [ 3 ] In the solid state, it is a tetrameric cubane-type cluster , consisting of a Te 4 Cl 4 core and three terminal chloride ligands for each Te. Alternatively, this tetrameric structure can be considered as a Te 4 tetrahedron with face-capping chlorines and three terminal chlorines per tellurium atom, giving each tellurium atom a distorted octahedral environment. TeCl 4 is prepared by chlorination of tellurium powder: Te + 2 Cl 2 → TeCl 4 . The reaction is initiated with heat, and the product is isolated by distillation. [ 4 ] Crude TeCl 4 can be purified by distillation under an atmosphere of chlorine. [ 1 ] Alternatively, TeCl 4 can be prepared using sulfuryl chloride (SO 2 Cl 2 ) as a chlorine source. [ 1 ] Yet another method involves the reaction of tellurium with sulfur monochloride (S 2 Cl 2 ) at room temperature; this exothermic reaction rapidly forms white needle-like crystals of TeCl 4 . [ 5 ] Tellurium tetrachloride is the gateway compound for high-valent organotellurium compounds . Arylation gives, depending on conditions, Te(C 6 H 4 R) 2 Cl 2 , [Te(C 6 H 4 R) 5 ] − , or [Te(C 6 H 4 R) 6 ] 2− . [ 6 ] TeCl 4 has few applications in organic synthesis: its equivalent weight is high, and the toxicity of organotellurium compounds is problematic. Possible applications of tellurium tetrachloride in organic synthesis have nevertheless been reported. [ 7 ] It adds to alkenes to give Cl-C-C-TeCl 3 derivatives, from which the Te can subsequently be removed with sodium sulfide. Electron-rich arenes react to give aryl Te compounds; thus, anisole gives TeCl 2 (C 6 H 4 OMe) 2 , which can be reduced to the diaryl telluride. TeCl 4 is also a precursor to tellurium-containing heterocycles such as tellurophenes . [ 1 ] Heating a mixture of TeCl 4 and metallic tellurium gives tellurium dichloride (TeCl 2 ). [ 8 ] In moist air, TeCl 4 forms tellurium oxychloride (TeOCl 2 ), which further decomposes with excess water to form tellurous acid (H 2 TeO 3 ), as written out below. [ 8 ] As is the case for other tellurium compounds, TeCl 4 is toxic. It also releases HCl upon hydrolysis. [ 1 ]
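The hydrolysis sequence just described can be written out as balanced equations; the following is a reconstruction from the prose above, not a quotation from the cited source.

\[
\mathrm{TeCl_4} + \mathrm{H_2O} \rightarrow \mathrm{TeOCl_2} + 2\,\mathrm{HCl}
\]
\[
\mathrm{TeOCl_2} + 2\,\mathrm{H_2O} \rightarrow \mathrm{H_2TeO_3} + 2\,\mathrm{HCl}
\]

Both steps account for the HCl release noted in the safety remark.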
https://en.wikipedia.org/wiki/Te4Cl16
Tellurium tetrabromide ( TeBr 4 ) is an inorganic chemical compound . It has a tetrameric structure similar to that of TeCl 4 . [ 3 ] It can be made by reacting bromine and tellurium. [ 4 ] In the vapour, TeBr 4 dissociates into TeBr 2 and Br 2 . [ 3 ] It is a conductor when molten, dissociating into the ions TeBr 3 + and Br − ; both equilibria are written out below. When dissolved in benzene or toluene , TeBr 4 is present as the unionized tetramer Te 4 Br 16 . [ 3 ] In solvents with donor properties, such as acetonitrile (CH 3 CN), ionic complexes are formed which make the solution conducting.
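For reference, the two dissociation modes described above can be summarized as the following equilibria; the molten-state ionization follows directly from the text, while the vapour-phase equation is the standard dissociation pattern of the tellurium tetrahalides rather than a quotation from the cited source.

\[
\mathrm{TeBr_4\,(g)} \rightleftharpoons \mathrm{TeBr_2\,(g)} + \mathrm{Br_2\,(g)}
\]
\[
\mathrm{TeBr_4\,(l)} \rightleftharpoons \mathrm{TeBr_3^{+}} + \mathrm{Br^{-}}
\]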
https://en.wikipedia.org/wiki/TeBr4
Tellurium dichloride is a chloride of tellurium with the chemical formula TeCl 2 . Tellurium dichloride can be produced by reacting tellurium with dichlorodifluoromethane (CCl 2 F 2 ). [ 2 ] [ 3 ] It can also be produced by the comproportionation of tellurium and tellurium tetrachloride . [ 4 ] Tellurium dichloride is a black solid that reacts with water. It melts into a black liquid and vaporises into a purple gas. [ 1 ] [ 5 ] The gas consists of monomeric TeCl 2 molecules with Te–Cl bond lengths of 2.329 Å and a Cl–Te–Cl bond angle of 97.0°. [ 5 ] Tellurium dichloride (TeCl 2 ) is unstable with respect to disproportionation . [ 5 ] Several of its complexes are known and well characterized. They are prepared by treating tellurium dioxide with hydrochloric acid in the presence of thioureas . The thiourea serves both as a ligand and as a reductant, converting Te(IV) to Te(II).
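The comproportionation synthesis and the disproportionation instability mentioned above are the two directions of a single balanced equilibrium, reconstructed here from the prose:

\[
\mathrm{Te} + \mathrm{TeCl_4} \rightleftharpoons 2\,\mathrm{TeCl_2}
\]

Read left to right, this is the comproportionation route; read right to left, it is the disproportionation that makes TeCl 2 unstable.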
https://en.wikipedia.org/wiki/TeCl2