Trivialism is the logical theory that all statements (also known as propositions ) are true and, consequently, that all contradictions of the form "p and not p" (e.g. the ball is red and not red) are true. In accordance with this, a trivialist is a person who believes everything is true. [ 1 ] [ 2 ] In classical logic , trivialism is in direct violation of Aristotle 's law of noncontradiction . In philosophy , trivialism is considered by some to be the complete opposite of skepticism . Paraconsistent logics may use "the law of non-triviality" to abstain from trivialism in logical practices that involve true contradictions . Theoretical arguments and anecdotes have been offered for trivialism to contrast it with theories such as modal realism , dialetheism and paraconsistent logics. Trivialism , as a term, is derived from the Latin word trivialis, meaning commonplace, in turn derived from the trivium , the three introductory educational topics (grammar, logic, and rhetoric) expected to be learned by all freemen . In logic , from this meaning, a "trivial" theory is something regarded as defective in the face of a complex phenomenon that needs to be completely represented. Thus, literally, the trivialist theory is something expressed in the simplest possible way. [ 3 ] In symbolic logic , trivialism may be expressed as ∀p Tp. [ 4 ] This would be read as "given any proposition, it is a true proposition" through universal quantification (∀). A claim of trivialism may always apply its fundamental truth, otherwise known as a truth predicate : p ↔ Tp. This would be read as "a proposition if and only if a true proposition", meaning that all propositions are believed to be inherently proven as true. Without consistent use of this concept, a claim of advocating trivialism may not be seen as genuine and complete trivialism; to claim a proposition is true but deny it as probably true may be considered inconsistent with the assumed theory. [ 4 ] Luis Estrada-González in "Models of Possibilism and Trivialism" lists four types of trivialism through the concept of possible worlds , with a "world" being a possibility and "the actual world" being reality. It is theorized a trivialist simply designates a value to all propositions in equivalence to seeing all propositions and their negations as true. This taxonomy is used to demonstrate the different strengths and plausibility of trivialism in this context. The consensus among the majority of philosophers is descriptively a denial of trivialism, termed non-trivialism or anti-trivialism. [ 3 ] This is due to it being unable to produce a sound argument through the principle of explosion and it being considered an absurdity ( reductio ad absurdum ). [ 2 ] [ 4 ] Aristotle 's law of noncontradiction and other arguments are considered to be against trivialism. Luis Estrada-González in "Models of Possibilism and Trivialism" has interpreted Aristotle's Metaphysics Book IV as such: "A family of arguments between 1008a26 and 1007b12 of the form 'If trivialism is right, then X is the case, but if X is the case then all things are one. But it is impossible that all things are one, so trivialism is impossible.' ... these Aristotelian considerations are the seeds of virtually all subsequent suspicions against trivialism: Trivialism has to be rejected because it identifies what should not be identified, and is undesirable from a logical point of view because it identifies what is not identical, namely, truth and falsehood." 
[ 3 ] Graham Priest considers trivialism untenable: "a substantial case can be made for dialetheism; belief in [trivialism], though, would appear to be grounds for certifiable insanity". [ 5 ] He formulated the "law of non-triviality" as a replacement for the law of non-contradiction in paraconsistent logic and dialetheism. [ 6 ] There are theoretical arguments for trivialism argued from the position of a devil's advocate : Paul Kabay has argued for trivialism in "On the Plenitude of Truth" from the following: Above, possibilism ( modal realism ; related to possible worlds ) is the oft-debated theory that every proposition is possible. With this assumed to be true, trivialism can be assumed to be true as well according to Kabay. The liar's paradox , Curry's paradox , and the principle of explosion all can be asserted as valid and not required to be resolved and used to defend trivialism. [ 2 ] [ 4 ] In Paul Kabay's comparison of trivialism to schools of philosophical skepticism (in "On the Plenitude of Truth")—such as Pyrrhonism —who seek to attain a form of ataraxia , or state of imperturbability; it is purported the figurative trivialist inherently attains this state. This is claimed to be justified by the figurative trivialist seeing every state of affairs being true, even in a state of anxiety. Once universally accepted as true, the trivialist is free from any further anxieties regarding whether any state of affairs is true. Kabay compares the Pyrrhonian skeptic to the figurative trivialist and claims that as the skeptic reportedly attains a state of imperturbability through a suspension of belief , the trivialist may attain such a state through an abundance of belief. In this case—and according to independent claims by Graham Priest—trivialism is considered the complete opposite of skepticism . [ 2 ] [ 4 ] [ 7 ] However, insofar as the trivialist affirms all states of affairs as universally true, the Pyrrhonist neither affirms nor denies the truth (or falsity) of such affairs. [ 8 ] It is asserted by both Priest and Kabay that it is impossible for a trivialist to truly choose and thus act. [ 6 ] [ 9 ] Priest argues this by the following in Doubt Truth to Be a Liar : "One cannot intend to act in such a way as to bring about some state of affairs, s , if one believes s already to hold. Conversely, if one acts with the purpose of bringing s about, one cannot believe that s already obtains." Due to their suspension of determination upon striking equipollence between claims, the Pyrrhonist has also remained subject to apraxia charges. [ 10 ] [ 11 ] [ 12 ] Paul Kabay, an Australian philosopher, in his book A Defense of Trivialism has argued that various philosophers in history have held views resembling trivialism, although he stops short of calling them trivialists. He mentions various pre-Socratic Greek philosophers as philosophers holding views resembling trivialism. He mentions that Aristotle in his book Metaphysics appears to suggest that Heraclitus and Anaxagoras advocated trivialism. He quotes Anaxagoras as saying that all things are one. Kabay also suggests Heraclitus' ideas are similar to trivialism because Heraclitus believed in a union of opposites, shown in such quotes as "the way up and down is the same". 
[ 4 ] : 32–35 Kabay also mentions a fifteenth century Roman Catholic cardinal, Nicholas of Cusa , stating that what Cusa wrote in De Docta Ignorantia is interpreted as stating that God contained every fact, which Kabay argues would result in trivialism, but Kabay admits that mainstream Cusa scholars would not agree with interpreting Cusa as a trivialist. [ 4 ] : 36–37 Kabay also mentions Spinoza as a philosopher whose views resemble trivialism. Kabay argues Spinoza was a trivialist because Spinoza believed everything was made of one substance which had infinite attributes. [ 4 ] : 37–40 Kabay also mentions Hegel as a philosopher whose views resemble trivialism, quoting Hegel as stating in The Science of Logic "everything is inherently contradictory." [ 4 ] : 40–41 Jody Azzouni is a purported advocate of trivialism in his article The Strengthened Liar by claiming that natural language is trivial and inconsistent through the existence of the liar paradox ("This sentence is false"), and claiming that natural language has developed without central direction. Azzouni implies that every sentence in any natural language is true. "According to Azzouni, natural language is trivial, that is to say, every sentence in natural language is true...And, of course, trivialism follows straightforwardly from the triviality of natural language: after all, 'trivialism is true' is a sentence in natural language." [ 4 ] : 42 [ 13 ] [ 14 ] The Greek philosopher Anaxagoras is suggested as a possible trivialist by Graham Priest in his 2005 book Doubt Truth to Be a Liar . Priest writes, "He held that, at least at one time, everything was all mixed up so that no predicate applied to any one thing more than a contrary predicate." [ 6 ] Luis Estrada-González in "Models of Possibilism and Trivialism" lists eight types of anti-trivialism (or non-trivialism) through the use of possible worlds :
https://en.wikipedia.org/wiki/Trivialism
In mathematics , the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or a particularly simple object possessing a given structure (e.g., group , topological space ). [ 1 ] [ 2 ] The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which distinguished it from the more difficult quadrivium curriculum. [ 1 ] [ 3 ] The opposite of trivial is nontrivial , which is commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove. [ 2 ] Triviality does not have a rigorous definition in mathematics. It is subjective , and often determined in a given situation by the knowledge and experience of those considering the case. In mathematics, the term "trivial" is often used to refer to objects (e.g., groups, topological spaces) with a very simple structure. These include, among others, the trivial group, which contains only the identity element, and the trivial topology, in which the only open sets are the empty set and the space itself. "Trivial" can also be used to describe solutions to an equation that have a very simple structure, but for the sake of completeness cannot be omitted. These solutions are called the trivial solutions . For example, consider the differential equation y′ = y, where y = y(x) is a function whose derivative is y′. The trivial solution is the zero function y(x) = 0, while a nontrivial solution is the exponential function y(x) = e^x. The differential equation f″(x) = −λf(x) with boundary conditions f(0) = f(L) = 0 is important in mathematics and physics, as it can be used to describe a particle in a box in quantum mechanics, or a standing wave on a string. It always includes the solution f(x) = 0, which is considered obvious and hence is called the "trivial" solution. In some cases, there may be other solutions ( sinusoids ), which are called "nontrivial" solutions. [ 4 ] Similarly, mathematicians often describe Fermat's last theorem as asserting that there are no nontrivial integer solutions to the equation a^n + b^n = c^n , where n is greater than 2. Clearly, there are some solutions to the equation. For example, a = b = c = 0 is a solution for any n , but such solutions are obvious and obtainable with little effort, and hence "trivial". Trivial may also refer to any easy case of a proof, which for the sake of completeness cannot be ignored. For instance, proofs by mathematical induction have two parts: the "base case", which shows that the theorem is true for a particular initial value (such as n = 0 or n = 1), and the inductive step, which shows that if the theorem is true for a certain value of n , then it is also true for the value n + 1. The base case is often trivial and is identified as such, although there are situations where the base case is difficult but the inductive step is trivial. Similarly, one might want to prove that some property is possessed by all the members of a certain set. The main part of the proof will consider the case of a nonempty set and examine the members in detail; in the case where the set is empty, the property is trivially possessed by all the members of the empty set, since there are none (see vacuous truth for more). 
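The trivial/nontrivial distinction in the examples above can be checked mechanically. The following is a minimal sketch, assuming Python with the sympy library (neither appears in the article): it verifies that the zero function and e^x both solve y′ = y, and that sinusoids solve f″ = −λf with f(0) = f(L) = 0 for the eigenvalues λ = (nπ/L)².

```python
# Verify the trivial and nontrivial solutions discussed above (illustrative sketch).
import sympy as sp

x = sp.symbols('x')

# y' = y: the zero function is the trivial solution, e^x a nontrivial one.
y_trivial = sp.Integer(0)
y_nontrivial = sp.exp(x)
assert sp.diff(y_trivial, x) - y_trivial == 0
assert sp.simplify(sp.diff(y_nontrivial, x) - y_nontrivial) == 0

# f''(x) = -lambda*f(x) with f(0) = f(L) = 0: f = 0 is always a solution (trivial),
# while the sinusoids f(x) = sin(n*pi*x/L) solve it for the eigenvalues lambda = (n*pi/L)^2.
L = sp.symbols('L', positive=True)
n = sp.symbols('n', positive=True, integer=True)
lam = (n * sp.pi / L) ** 2
f = sp.sin(n * sp.pi * x / L)                        # a nontrivial solution
assert sp.simplify(sp.diff(f, x, 2) + lam * f) == 0  # satisfies the differential equation
assert f.subs(x, 0) == 0 and f.subs(x, L) == 0       # satisfies both boundary conditions
```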
The judgement of whether a situation under consideration is trivial or not depends on who considers it: it may be obvious to someone with sufficient knowledge or experience of it, yet hard even to understand, and so far from trivial, for someone who has never encountered it before. There can also be disagreement about how quickly and easily a problem should be recognized for it to be treated as trivial. The following examples show the subjectivity and ambiguity of the triviality judgement. Triviality also depends on context. A proof in functional analysis would probably, given a number, trivially assume the existence of a larger number. However, when proving basic results about the natural numbers in elementary number theory , the proof may very well hinge on the remark that any natural number has a successor – a statement which should itself be proved or be taken as an axiom, and so is not trivial (for more, see Peano's axioms ). In some texts, a trivial proof refers to a statement involving a material implication P → Q where the consequent Q is always true. [ 5 ] Here, the proof follows immediately from the definition of material implication: the implication is true regardless of the truth value of the antecedent P whenever the consequent is fixed as true. [ 5 ] A related concept is a vacuous truth , where the antecedent P in a material implication P → Q is false. [ 5 ] In this case, the implication is always true regardless of the truth value of the consequent Q – again by virtue of the definition of material implication. [ 5 ]
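Since the passage above turns on the truth table of material implication, a short illustration may help. This is a hedged sketch in Python; the helper name implies is ours, not from the text. It checks that P → Q holds for every P once Q is fixed as true (a "trivial proof") and for every Q once P is fixed as false (a "vacuous truth").

```python
# Material implication: P -> Q is false only when P is true and Q is false.
def implies(p: bool, q: bool) -> bool:
    return (not p) or q

# Trivial proof: if the consequent Q is always true, P -> Q holds whatever P is.
assert all(implies(p, True) for p in (False, True))

# Vacuous truth: if the antecedent P is false, P -> Q holds whatever Q is.
assert all(implies(False, q) for q in (False, True))
```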
https://en.wikipedia.org/wiki/Triviality_(mathematics)
TRIzol is a widely used [ 1 ] chemical solution for the extraction of DNA , RNA , and proteins from cells. The solution was first used and published by Piotr Chomczyński and Nicoletta Sacchi in 1987. [ 2 ] TRIzol is the brand name of guanidinium thiocyanate from the Ambion part of Life Technologies , [ 3 ] and Tri-Reagent is the brand name from MRC, [ 4 ] which was founded by Chomczynski. The correct name of the method is guanidinium thiocyanate-phenol-chloroform extraction . The use of TRIzol can result in DNA yields comparable to other extraction methods, and it gives RNA yields more than 50% higher. [ 5 ] [ 6 ] An alternative method for RNA extraction is phenol extraction and TCA/acetone precipitation. Chloroform should be replaced with 1-bromo-3-chloropropane when using the new-generation TRI Reagent. DNA and RNA from TRIzol and TRI Reagent can also be extracted using the Direct-zol Miniprep kit by Zymo Research . [ 7 ] This method eliminates the use of chloroform and 1-bromo-3-chloropropane completely, bypassing the phase-separation and precipitation steps. [ 8 ] TRIzol is light-sensitive and is often stored in a dark-colored glass container covered in foil. It is stored at room temperature. In use it is bright pink and resembles cough syrup, and the smell of the phenol is extremely strong. TRIzol works by maintaining RNA integrity during tissue homogenization, while at the same time disrupting and breaking down cells and cell components. Vigilant caution should be taken while using TRIzol (due to the phenol and chloroform ). The manufacturer's safety data sheet labels TRIzol for acute oral, dermal, and inhalation toxicity as well as skin corrosion/irritation. [ 9 ] Exposure to TRIzol can be a serious health hazard: it can lead to serious chemical burns, permanent scarring and kidney failure. Experiments should be performed under a chemical hood, with a lab coat, nitrile gloves and a plastic apron. [ 10 ] [ 11 ] TRIzol waste should never be mixed with bleach or acids: the guanidinium thiocyanate in TRIzol reacts to form highly toxic gases.
https://en.wikipedia.org/wiki/Trizol
troff ( / ˈ t iː r ɒ f / ), short for "typesetter roff", is the major component of a document processing system developed by Bell Labs for the Unix operating system. troff and the related nroff were both developed from the original roff . While nroff was intended to produce output on terminals and line printers, troff was intended to produce output on typesetting systems, specifically the Graphic Systems CAT , which had been introduced in 1972. Both used the same underlying markup language , and a single source file could normally be used by nroff or troff without change. troff features commands to designate fonts, spacing, paragraphs, margins, footnotes and more. Unlike many other text formatters, troff can position characters arbitrarily on a page, even overlapping them, and has a fully programmable input language. Separate preprocessors are used for more convenient production of tables, diagrams, and mathematics. Inputs to troff are plain text files and can be created by any text editor. Extensive macro packages have been created for various document styles. A typical distribution of troff includes the me macros for formatting research papers, man and mdoc macros for creating Unix man pages , mv macros for creating mountable transparencies , and the ms and mm macros for letters, books, technical memoranda, and reports. troff' s origins can be traced to a text-formatting program called RUNOFF , which was written by Jerome H. Saltzer for MIT 's CTSS operating system in the mid-1960s. (The name allegedly came from the phrase I'll run off a document .) Bob Morris ported it to the GE 635 architecture and called the program roff (an abbreviation of runoff ). [ citation needed ] It was rewritten as rf for the PDP-7 , and at the same time (1969), Doug McIlroy rewrote an extended and simplified version of roff in the BCPL programming language . The first version of Unix was developed on a PDP-7 which was sitting around Bell Labs . In 1971 the developers wanted to get a PDP-11 for further work on the operating system. In order to justify the cost for this system, they proposed that they would implement a document-formatting system for the Bell Labs patents department. [ 1 ] This first formatting program was a reimplementation of McIllroy's roff , written by Joe F. Ossanna . When they needed a more flexible language, a new version of roff called nroff ( newer "roff" ) was written, which provided the basis for all future versions. When they got a Graphic Systems CAT phototypesetter , Ossanna modified nroff to support multiple fonts and proportional spacing . Dubbed troff , for typesetter roff , its sophisticated output amazed the typesetter manufacturer and confused peer reviewers , who thought that manuscripts using troff had been published before. [ 2 ] [ 3 ] As such, the name troff is pronounced / ˈ t iː r ɒ f / rather than * / ˈ t r ɒ f / . With troff came nroff (they were actually almost the same program), which was for producing output for line printers and character terminals . It understood everything troff did, and ignored the commands which were not applicable, e.g., font changes. Ossanna's troff was written in PDP-11 assembly language and produced output specifically for the CAT phototypesetter . He rewrote it in C , although it was now 7000 lines of uncommented code and still dependent on the CAT. As the CAT became less common, and was no longer supported by the manufacturer, the need to make it support other devices became a priority. 
Ossanna died before this task was completed, so Brian Kernighan took on the task of rewriting troff . The newly rewritten version produced a device-independent code which was very easy for post-processors to read and translate to the appropriate printer codes. Also, this new version of troff (often called ditroff for device independent troff ) had several extensions, which included drawing functions. [ 4 ] The program's documentation defines the output format of ditroff , which is used by many modern troff clones like GNU groff . In 1983, troff was one of several UNIX tools available for Charles River Data Systems' UNOS operating system under Bell Laboratories license. [ 5 ] The troff collection of tools (including pre- and post-processors) was eventually called Documenter's WorkBench (DWB), and was under continuous development in Bell Labs and later at the spin-off Unix System Laboratories (USL) through 1994. At that time, SoftQuad took over the maintenance, although Brian Kernighan continued to improve troff on his own. Thus, at least several variants of the original Bell Labs troff remain in use. While troff has been supplanted by other programs such as Interleaf , FrameMaker , and LaTeX , it is still being used quite extensively. It remains the default formatter for the UNIX documentation . The software was reimplemented as groff for the GNU system beginning in 1990. In addition, due to the open sourcing of Ancient UNIX systems, as well as modern successors such as the ditroff-based open-sourced versions found on OpenSolaris and Plan 9 from Bell Labs , there are several versions of AT&T troff (CAT and ditroff-based [ 6 ] ) available under various open-source licenses. In general, one was not encouraged to use troff directly, but rather to go through some easier-to-use interface. [ 7 ] [ 8 ] Troff includes macros that are run before starting to process the document. These macros include setting up page headers and footers, defining new commands, and influencing how the output will be formatted. The command-line argument for including a macro set is -m name , which has led to many macro sets being known as the base filename with a leading m . [ 9 ] The standard macro sets, with the leading m , are the man, mdoc, me, ms, mm, and mv packages described above. The ms macros were the first of these, developed at AT&T, before they were supplanted by the mm macros. [ 18 ] One goal of the mm macros was that they be usable by the typing pool at Bell Labs and, over time, this happened and the mm macros became a standard at Bell Labs. [ 19 ] AT&T made the mm macros commercially available for System V Unix. [ 18 ] In contrast, the me macros were developed at Berkeley. [ 18 ] A simple business letter prepared with the mm macros needs only a handful of requests, such as those setting the date, author, document type, formal closing, and signature, wrapped around the ordinary text of the letter. A comprehensive list of the macros available is usually given in a tmac(5) manual page . [ 14 ] As troff evolved, since there are several things which cannot be done easily in troff , several preprocessors were developed. These programs transform certain parts of a document into troff input, fitting naturally into the use of "pipelines" in Unix: sending the output of one program as the input to another (see pipes and filters ). Typically, each preprocessor translates only sections of the input file that are specially marked, passing the rest of the file through unchanged. The embedded preprocessing instructions are written in a simple application-specific programming language, which provides a high degree of power and flexibility. 
Three preprocessors provide troff with drawing capabilities by defining a domain-specific language for describing the picture. In addition, there is a command soelim that removes .so inclusion directives from the input text. [ 24 ] A typical structure of the pipeline might be: soelim file | refer | ideal | pic | tbl | eqn | troff Yet more preprocessors allow the drawing of more complex pictures by generating output for pic . Several other front-ends have been developed that are intended to be friendlier interfaces to troff. One of them is Sanscribe , originally developed at Berkeley and then enhanced during the 1980s by several users including Intel and InterACT . Used for writing memos, reports, documents, Sanscribe is built upon basic troff commands as well as the me macros and various pre- and post-processors such as soelim, eqn, tbl, grap, and pic. However it is a main program binary, not a preprocessor. The conditional inclusion capability renders it especially useful for maintaining multi-platform reference manuals. However, Sanscribe is fragile and prone to giving cryptic errors or producing weirdly formatted results. [ 28 ] A special-purpose front-end is vgrind , which generates nicely formatted source program listings, with such features as putting comments in italics, keywords in bold, and function names highlighted in margins. It can run either as a filter or as a main program with its output being passed to troff. It has support for the languages in use at Bell Labs facilities, including not just Fortran , C , and C++ but also domain-specific tools such as Bourne shell and yacc as well as those further afield such as Emacs Lisp and Icon . [ 24 ] A different approach is employed by the CADiZ suite of tools for the Z notation . Rather than the cadiz program being a preprocessor in the front of the pipeline, it interacts multiple times with troff as both input and output, using saved files rather than a pipe. CADiZ also contains its own set of macros, called .ZA through .ZZ . [ 29 ]
https://en.wikipedia.org/wiki/Troff
In astronomy , a trojan is a small celestial body (mostly asteroids) that shares the orbit of a larger body, remaining in a stable orbit approximately 60° ahead of or behind the main body near one of its Lagrangian points L 4 and L 5 . Trojans can share the orbits of planets or of large moons . Trojans are one type of co-orbital object . In this arrangement, a star and a planet orbit about their common barycenter , which is close to the center of the star because it is usually much more massive than the orbiting planet. In turn, a much smaller mass than both the star and the planet, located at one of the Lagrangian points of the star–planet system, is subject to a combined gravitational force that acts through this barycenter. Hence the smallest object orbits around the barycenter with the same orbital period as the planet, and the arrangement can remain stable over time. [ 1 ] In the Solar System, most known trojans share the orbit of Jupiter . They are divided into the Greek camp at L 4 (ahead of Jupiter) and the Trojan camp at L 5 (trailing Jupiter). More than a million Jupiter trojans larger than one kilometer are thought to exist, [ 2 ] of which more than 7,000 are currently catalogued. In other planetary orbits only nine Mars trojans , 31 Neptune trojans , two Uranus trojans , two Earth trojans , and one Saturn trojan have been found to date. A temporary Venus trojan is also known. Numerical orbital dynamics stability simulations indicate that Saturn probably does not have any primordial trojans. [ 3 ] The same arrangement can appear when the primary object is a planet and the secondary is one of its moons, whereby much smaller trojan moons can share its orbit. All known trojan moons are part of the Saturn system . Telesto and Calypso are trojans of Tethys , and Helene and Polydeuces of Dione . In 1772, the Italian–French mathematician and astronomer Joseph-Louis Lagrange obtained two constant-pattern solutions (collinear and equilateral) of the general three-body problem . [ 4 ] In the restricted three-body problem, with one mass negligible (which Lagrange did not consider), the five possible positions of that mass are now termed Lagrange points . The term "trojan" originally referred to the "trojan asteroids" ( Jovian trojans ) that orbit close to the Lagrangian points of Jupiter. These have long been named for figures from the Trojan War of Greek mythology . By convention, the asteroids orbiting near the L 4 point of Jupiter are named for the characters from the Greek side of the war, whereas those orbiting near the L 5 of Jupiter are from the Trojan side. There are two exceptions, named before the convention was adopted: 624 Hektor in the L4 group, and 617 Patroclus in the L5 group. [ 5 ] Astronomers estimate that the Jovian trojans are about as numerous as the asteroids of the asteroid belt . [ 6 ] Later on, objects were found orbiting near the Lagrangian points of Neptune , Mars , Earth , [ 7 ] Uranus , and Venus . Minor planets at the Lagrangian points of planets other than Jupiter may be called Lagrangian minor planets. [ 8 ] Whether or not a system of star, planet, and trojan is stable depends on how large the perturbations are to which it is subject. If, for example, the planet is the mass of Earth, and there is also a Jupiter-mass object orbiting that star, the trojan's orbit would be much less stable than if the second planet had the mass of Pluto. 
As a rule of thumb, the system is likely to be long-lived if m₁ > 100 m₂ > 10,000 m₃ (in which m₁, m₂, and m₃ are the masses of the star, planet, and trojan). More formally, in a three-body system with circular orbits, the stability condition is 27(m₁m₂ + m₂m₃ + m₃m₁) < (m₁ + m₂ + m₃)². So if the trojan is a mote of dust, m₃ → 0, this imposes a lower bound on m₁/m₂ of (25 + √621)/2 ≈ 24.9599. And if the star were hyper-massive, m₁ → +∞, then under Newtonian gravity the system is stable whatever the planet and trojan masses. And if m₁/m₂ = m₂/m₃, then both ratios must exceed 13 + √168 ≈ 25.9615. However, this all assumes a three-body system; once other bodies are introduced, even if distant and small, stability of the system requires even larger ratios.
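The inequality above can be checked numerically. The sketch below, in Python, uses rounded textbook masses for the Sun and Jupiter and an arbitrary trojan mass; these values are assumptions for illustration, not figures from the article.

```python
from math import sqrt

def is_stable(m1: float, m2: float, m3: float) -> bool:
    """Circular-orbit stability condition quoted above for the L4/L5 configuration."""
    return 27 * (m1 * m2 + m2 * m3 + m3 * m1) < (m1 + m2 + m3) ** 2

# Rounded textbook masses in kg (assumed for illustration).
m_sun, m_jupiter, m_trojan = 1.989e30, 1.898e27, 1.0e15

print(is_stable(m_sun, m_jupiter, m_trojan))   # True: Jupiter trojans satisfy the criterion
print((25 + sqrt(621)) / 2)                    # ~24.96, the critical m1/m2 for a massless trojan
print(m_sun / m_jupiter)                       # ~1048, comfortably above that bound
```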
https://en.wikipedia.org/wiki/Trojan_(celestial_body)
In physics , a trojan wave packet is a wave packet that is nonstationary and nonspreading. It is part of an artificially created system that consists of a nucleus and one or more electron wave packets, and that is highly excited under a continuous electromagnetic field . Its discovery as one of significant contributions to the quantum mechanics was awarded the 2022 Wigner Medal for Iwo Bialynicki-Birula [ 1 ] [ clarification needed ] The strong, polarized electromagnetic field, holds or "traps" each electron wave packet in an intentionally selected orbit (energy shell). [ 2 ] [ 3 ] They derive their names from the trojan asteroids in the Sun–Jupiter system. [ 4 ] Trojan asteroids orbit around the Sun in Jupiter 's orbit at its Lagrange points L4 and L5, where they are phase-locked and protected from collision with each other, and this phenomenon is analogous to the way the wave packet is held together. The concept of the trojan wave packet is derived from manipulating atoms and ions at the atomic level creating ion traps . Ion traps allow the manipulation of atoms and are used to create new states of matter including ionic liquids , Wigner crystals and Bose–Einstein condensates . [ 5 ] This ability to manipulate the quantum properties directly is key to the development of applicable nanodevices such as quantum dots and microchip traps. In 2004 it was shown that it is possible to create a trap which is actually a single atom. Within the atom, the behavior of an electron can be manipulated. [ 6 ] During experiments in 2004 using lithium atoms in an excited state, researchers were able to localize an electron in a classical orbit for 15,000 orbits (900 ns). It was neither spreading nor dispersing. This "classical atom" was synthesized by "tethering" the electron using a microwave field to which its motion is phase locked. The phase lock of the electrons in this unique atomic system is, as mentioned above, analogous to the phase locked asteroids of Jupiter's orbit. [ 7 ] The techniques explored in this experiment are a solution to a problem that dates back to 1926. Physicists at that time realized that any initially localized wave packet will inevitably spread around the orbit of the electrons. Physicists noticed that "the wave equation is dispersive for the atomic Coulomb potential." In the 1980s several groups of researchers proved this to be true. The wave packets spread all the way around the orbits and coherently interfered with themselves. Recently the real world innovation realized with experiments such as trojan wave packets, is localizing the wave packets, i.e., with no dispersion. Applying a polarized circular EM field, at microwave frequencies, synchronized with an electron wave packet, intentionally keeps the electron wave packets in a Lagrange type orbit. [ 8 ] [ 9 ] The trojan wave packet experiments built on previous work with lithium atoms in an excited state. These are atoms, which respond sensitively to electric and magnetic fields, have decay periods that are relatively prolonged, and electrons, which for all intents and purposes actually operate in classical orbits. The sensitivity to electric and magnetic fields is important because this allows control and response by the polarized microwave field. [ 10 ] The next logical step is to attempt to move from single electron wave packets to more than one electron wave packet . This had already been accomplished in barium atoms, with two electron wave packets. These two were localized. 
However, these eventually dispersed after colliding near the nucleus. Another technique employed a nondispersive pair of electrons, but one of these had to occupy a localized orbit close to the nucleus. The demonstration of nondispersive two-electron trojan wave packets changes all that. These are the next-step analogue of the one-electron trojan wave packets, designed for excited helium atoms. [ 12 ] [ 13 ] As of July 2005, atoms with coherent, stable, nondispersing two-electron wave packets had been created. These are excited helium-like atoms, or quantum dot helium (in solid-state applications), and are atomic (quantum) analogues of the three-body problem of Newton's classical physics , which includes today's astrophysics . Circularly polarized electromagnetic and magnetic fields acting in tandem stabilize the two-electron configuration in the helium atom or the quantum dot helium (with an impurity center). The stability is maintained over a broad spectrum , and because of this, the configuration of two electron wave packets is considered to be truly nondispersive. For example, with quantum dot helium configured to confine electrons in two spatial dimensions, there now exists a variety of trojan wave packet configurations with two electrons, and as of 2005, only one in three dimensions. [ 14 ] In 2012 an essential experimental step was taken: trojan wave packets were not only generated but also locked to an adiabatically changing frequency, expanding the atoms as once predicted by Kalinski and Eberly . [ 15 ] This should make it possible to create two-electron Langmuir trojan wave packets in helium by sequential excitation in an adiabatic Stark field, producing the circular one-electron aureola over He + first and then placing the second electron in a similar state. [ 16 ]
https://en.wikipedia.org/wiki/Trojan_wave_packet
The Trolox equivalent antioxidant capacity ( TEAC ) assay measures the antioxidant capacity of a given substance, as compared to the standard, Trolox . Most commonly, antioxidant capacity is measured using the ABTS Decolorization Assay. Other antioxidant capacity assays which use Trolox as a standard include the diphenylpicrylhydrazyl ( DPPH ), oxygen radical absorbance capacity ( ORAC ) and ferric reducing ability of plasma ( FRAP ) assays. The TEAC assay is often used to measure the antioxidant capacity of foods, beverages and nutritional supplements. [ 1 ]
https://en.wikipedia.org/wiki/Trolox_equivalent_antioxidant_capacity
A Trombe wall is a massive equator-facing wall that is painted a dark color in order to absorb thermal energy from incident sunlight and covered with a glass on the outside with an insulating air-gap between the wall and the glaze. A Trombe wall is a passive solar building design strategy that adopts the concept of indirect-gain, where sunlight first strikes a solar energy collection surface in contact with a thermal mass of air. The sunlight absorbed by the mass is converted to thermal energy (heat) and then transferred into the living space. Trombe walls may also be referred to as a mass wall , [ 1 ] solar wall , [ 2 ] or thermal storage wall . [ 3 ] However, due to the extensive work of professor and architect Félix Trombe in the design of passively heated and cooled solar structure, they are often called Trombe Walls. [ 2 ] This system is similar to the air heater (as a simple glazed box on the south wall with a dark absorber, air space, and two sets of vents at top and bottom) created by professor Edward S. Morse a hundred years ago. [ 4 ] [ 5 ] [ 6 ] In 1920s, the idea of solar heating began in Europe. In Germany, housing projects were designed to take advantage of the sun. The research and accumulated solar design experience was then spread across the Atlantic by architects such as Walter Gropius and Marcel Breuer. Apart from these early examples, heating homes with the sun made slow progress until the 1930s, when several different American architects started to explore the potential of solar heating. The pioneering work of these American architects, the influence of immigrant Europeans, and the memory of wartime fuel shortages made solar heating very popular during the initial housing boom at the end of World War II. [ 7 ] Later in the 1970s, before and after the international oil crisis of 1973, some European architectural periodicals were critical of standard construction methods and architecture of the time. They described how architects and engineers reacted to the crisis, proposing new techniques and projects in order to intervene innovatively in the built environment, using energy and natural resources more efficiently. [ 8 ] Moreover, the depletion of natural resources generated interest in renewable energy sources, such as solar energy . [ 9 ] Parallel to global population growth, energy consumption and environmental issues have become a global concern - especially while the building sector is consuming the highest energy in the world and most of the energy is used for heating, ventilation and air conditioning systems. [ 10 ] For these reasons, today's buildings are expected to achieve both energy efficiency and environmental-friendly design through the use of renewable energy partly or completely instead of fossil energy for heating and cooling. In this direction, the integration of passive solar systems in buildings is one strategy for sustainable development and increasingly encouraged by international regulations. [ 11 ] Today's low-energy buildings with Trombe walls often improve on an ancient technique that incorporates a thermal storage and delivery system people have already used: thick walls of adobe or stone to trap the sun's heat during the day and release it slowly and evenly at night to heat their building. [ 12 ] Today, the Trombe wall continues to serve as an effective strategy of passive solar design. The first well-known example of a Trombe wall system was used in the Trombe house of Odeillo, France in 1967. 
[ 13 ] [ 3 ] The black painted wall is constructed of approximately 2 foot thick concrete with an air space and a double glazing on its exterior side. The house is primarily heated by radiation and convection from the inner surface of the concrete wall and the results from studies show that 70% of this building's yearly heating needs are supplied by solar energy. Therefore, the efficiency of the system is comparable to a good active solar heating system. PV, Photovoltaic for electrical production converts 15%-20% radiation to energy. Meaning its energy efficiency is low - 85% of the sun's radiation is lost. Whereas the solar thermal collector, Trombe Wall is able to convert 70%-80% of the suns radiation to heat, meaning, it is far more energy efficient and its heat production is powerful. Another passive collector-distributor Trombe Wall system was built in 1970, in Montmedy, France. The house with 280 cubic metres (9,900 cu ft) living space required 7000 kWh for space heating annually. At Montmedy-between 49° and 50° North latitude-5400 kWh were supplied by solar heating and the remainder from an auxiliary electrical system. The annual heating cost for electricity was approximately $225 when compared to an estimated $750 for a home entirely heated by electricity in the same area. This yields to a 77% reduction in heating load and a 70% reduction in the cost for winter heating requirements. [ 14 ] In 1974, the first example of Trombe wall system was used in the Kelbaugh House in Princeton, New Jersey. [ 4 ] The house is located along the northern boundary of the site to maximize the unshaded access to available sunlight. The two-story building has 600 square feet (56 m 2 ) of thermal storage wall which is constructed of concrete and painted with a selective black paint over a masonry sealer. Although the main heating is accomplished by radiation and convection from the inner face of the wall, two vents in the wall also allow daytime heating by the natural convection loop. According to data collected in the winters of 1975-1976 and 1976–1977, the Trombe wall system reduced the heating costs respectively by 76% and 84%. [ 3 ] Unlike an active solar system that employs hardware and mechanical equipment to collect or transport heat, a Trombe wall is a passive solar-heating system where the thermal energy flows in the system by natural means such as radiation, conduction, and natural convection. As a consequence, the wall works by absorbing sunlight on its outer face and then transferring this heat through the wall by conduction. Heat conducted through the wall is then distributed to the living space by radiation, and to some degree by convection, from the wall's inner surface. [ 3 ] The greenhouse effect helps this system by trapping the solar radiation between the glazing and the thermal mass. Heat from the sun, in the form of shorter-wavelength radiation, passes through the glazing largely unimpeded. When this radiation strikes the dark colored surface of the thermal mass facing the sun, the energy is absorbed and then re-emitted in the form of longer-wavelength radiation that cannot pass through the glazing as readily. Hence heat becomes trapped and builds up in the air space between the high heat capacity thermal mass and the glazing that faces the sun. [ 15 ] Another phenomenon that plays a role in the Trombe wall's operation is the time lag caused by the heat capacity of the materials. 
Since Trombe walls are quite thick and made of high heat capacity materials, the heat-flow from the warmer outer surface to the cooler inner surface is slower than other materials with less heat capacity. This delayed heat-flow phenomenon is known as time lag and it causes the heat gained during the day to reach the interior surface of the thermal mass later. This property of the mass helps to heat the living space in the evenings as well. [ 7 ] So, if there is enough mass, the wall can act as a radiant heater all night long. On the other hand, if the mass is too thick, it takes too long to transmit the thermal energy it collects, thus, the living space does not receive enough heat during the evening hours when it is needed the most. Likewise, if the thermal mass is too thin, it transmits the heat too quickly, resulting in overheating of the living space during the day and little energy left for the evening. Also, Trombe walls using water as a thermal mass collect and distribute heat to a space in the same way, but they transfer the heat through the wall components (tubes, bottles, barrels, drums, etc.) by convection rather than by conduction and the convection performance of the water walls differs according to their different heat capacities. [ 1 ] Larger storage volumes provide a greater and longer-term heat storage capacity, while smaller contained volumes provide greater heat exchange surfaces and thus faster distribution. Trombe walls are often designed to serve as a load-bearing function as well as to collect and store the sun's energy and to help enclose the building's interior spaces. [ 2 ] The requirements of a Trombe Wall are glazing areas faced toward the equator for maximum winter solar gain and a thermal mass, located 4 inches or more directly behind the glass, which serves for heat storage and distribution. Also, there are many factors, such as color, thickness, or additional thermal control devices that have an impact on the design and the effectiveness of Trombe Walls. [ 3 ] Equatorial, which is Southward in the Northern Hemisphere and Northward in the Southern Hemisphere, is the best rotation for passive solar strategies because they collect much more sun during the day than they lose during the night, and collect much more sun in the winter than in the summer. [ 7 ] The first design strategy to increase the effectiveness of Trombe Walls is painting the outside surface of the wall to black (or a dark color) for the best possible absorption of sunlight. Moreover, a selective coating to a Trombe wall improves its performance by reducing the amount of infrared energy radiated back through the glass. The selective surface consists of a sheet of metal foil glued to the outside surface of the wall and it absorbs almost all the radiation in the visible portion of the solar spectrum and emits very little in the infrared range. High absorbency turns the sunlight into heat at the wall's surface, and low emittance prevents the heat from radiating back towards the glass. [ 16 ] Although the Trombe walls are usually made of solid materials, such as concrete, brick, stone, or adobe, they can also be made of water. The advantage of using water as a thermal mass is that water stores considerably more heat per volume (has a greater heat capacity) than masonry. [ 2 ] The developer of this water wall, Steve Baer, names this system “Drum Wall”. [ 14 ] He painted the steel containers similar to oil drums and filled them almost full of water, leaving some room for the thermal expansion. 
Then stacked the containers horizontally behind an equator-facing double glazing with the blackened bottoms facing outside. This water wall involves the same principles as the Trombe walls but employs a different storage material and different methods of containing that material. [ 1 ] Like the dark colored thermal mass of the Trombe walls, the containers that store the water are also frequently painted with dark colors to increase their absorptivity, but it is also common to leave them transparent or translucent to allow some daylight to pass through. Another critical part of Trombe wall design is choosing the proper thermal mass material and thickness. The optimum thickness of the thermal mass is dependent on the heat capacity and the thermal conductivity of the material used. There are some rules to follow while sizing the thermal mass. [ 3 ] The optimum thickness of a masonry wall increases as the thermal conductivity of the wall material increases. For instance, to compensate for a rapid heat transfer through a highly conductive material, the wall needs to be thicker. Accordingly, since the thicker wall absorbs and stores more heat to use at night, the efficiency of the wall increases as the conductivity and thickness of the wall increase. There is an optimum thickness range for the masonry materials. The efficiency of the water wall increases as the thickness of the wall increases. However, it is hard to notice a considerable performance increase as the walls get thicker than 6 inches. Likely, a water wall thinner than 6 inches is also not enough to act as a proper thermal mass that stores the heat during the day. In the early Trombe wall design, there are vents on the walls to distribute the heat by natural convection (thermocirculation) from the exterior face of the wall, but only during the daytime and early evening. [ 3 ] Solar radiation passing through the glass is absorbed by the wall heating its surface to temperature as high as 150 °F (66 °C). This heat is transferred to the air in the air space between the wall and the glass. Through openings or vents located at the top of the wall, warm air rising in the air space enters the room while simultaneously drawing cool room air through the low vents in the wall. In this way additional heat can be supplied to the living space during periods of sunny weather. However, it is now clear that the vents do not work well in either summer or winter. [ 7 ] It becomes more common to design a half Trombe Wall then combine it with a direct gain system. The direct gain part delivers heat early in the day while the Trombe wall stores heat for the nighttime use. Moreover, unlike a full Trombe wall, the direct gain part allows views and the delight of winter sunshine. To minimize the possible drawbacks of the Trombe wall system, there are additional thermal control strategies to employ to the wall design. For instance, the minimum 4-inch distance between the glass and the mass allows cleaning the glazing and the insertion of a roll-down radiant barrier as needed. [ 7 ] Adding a radiant barrier or night insulation between the glazing and the thermal mass reduces nighttime heat losses and summer daytime heat gains. However, to prevent overheating in summers, combining this strategy with an outdoor shading device like shutter, a roof overhang, or an interior shading to block excessive solar radiation from heating the Trombe wall would be the best. 
[ 17 ] Another strategy that helps to benefit from solar collection without some of the drawbacks of the Trombe wall is to use exterior mirror-like reflectors. [ 7 ] The additional reflected area helps Trombe walls gain more from the sunlight, with the flexibility of removing or rotating the reflector device when solar collection is undesired. When three different Trombe wall facades with single glass, double glass, and an integrated semi-transparent PV module are compared in a hot and humid climate, the single glass provides the highest solar radiation gain due to its higher solar heat gain efficiency. [ 18 ] However, it is recommended to use the single glass with a shutter in the evening and at night, to offset its heat losses. High-transmission glazing maximizes the solar gains of the Trombe wall while still letting the dark brick, natural stone, water containers, or other attractive thermal mass system behind the glazing be seen. However, from an aesthetic perspective, sometimes it is not desirable to see the black thermal mass. As an architectural detail, patterned glass can be used to limit the exterior visibility of the dark wall without sacrificing transmissivity. [ 16 ] The largest Trombe wall in the Northeastern United States is located in NJIT's Mechanical Engineering Building, at 200 Central Avenue, Newark, NJ. The Kachadorian floor overcomes the disadvantages of the Trombe wall by orienting it horizontally instead of vertically. The Barra system combines actual Trombe walls with a ventilated slab like the Kachadorian floor.
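The time-lag behaviour described earlier (heat taking several hours to conduct from the sunlit face to the room-side face) can be estimated with the standard one-dimensional result for periodic heat conduction into a thick wall. The sketch below is illustrative only: the concrete diffusivity and the roughly one-foot thickness are assumed typical values rather than figures from the article, and a real wall also exchanges heat with the room and with the glazing air gap.

```python
from math import pi, sqrt, exp

alpha = 7e-7            # thermal diffusivity of dense concrete, m^2/s (assumed value)
thickness = 0.30        # wall thickness, m (~12 in, assumed value)
omega = 2 * pi / 86400  # angular frequency of the daily temperature cycle, rad/s

# For a periodic surface temperature, the phase lag at depth x is x/delta radians,
# with delta = sqrt(2*alpha/omega), so the time lag is x / sqrt(2*alpha*omega).
time_lag_hours = thickness / sqrt(2 * alpha * omega) / 3600
damping = exp(-thickness / sqrt(2 * alpha / omega))  # amplitude reduction at the inner face

print(f"estimated time lag: {time_lag_hours:.1f} h")  # ~8 h: heat absorbed at midday arrives in the evening
print(f"amplitude damping factor: {damping:.2f}")     # ~0.1: the inner face sees a much smaller daily swing
```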
https://en.wikipedia.org/wiki/Trombe_wall
In mathematics , the Trombi–Varadarajan theorem , introduced by Trombi and Varadarajan ( 1971 ), gives an isomorphism between a certain space of spherical functions on a semisimple Lie group , and a certain space of holomorphic functions defined on a tubular neighborhood of the dual of a Cartan subalgebra .
https://en.wikipedia.org/wiki/Trombi–Varadarajan_theorem
A trommel screen , also known as a rotary screen , is a mechanical screening machine used to separate materials, mainly in the mineral and solid-waste processing industries . [ 1 ] It consists of a perforated cylindrical drum that is normally elevated at an angle at the feed end. [ 2 ] Physical size separation is achieved as the feed material spirals down the rotating drum, where the undersized material smaller than the screen apertures passes through the screen, while the oversized material exits at the other end of the drum. [ 3 ] The name "trommel" comes from the German word for "drum". [ 4 ] Trommel screens can be used in a variety of applications such as classification of solid waste and recovery of valuable minerals from raw materials. Trommels come in many designs such as concentric screens, series or parallel arrangement and each component has a few configurations. However depending on the application required, trommels have several advantages and limitations over other screening processes such as vibrating screens , grizzly screens , roller screens , curved screens and gyratory screen separators . Some of the main governing equations for a trommel screen include the screening rate, screening efficiency and residence time of particles in the screen. These equations could be applied in the rough calculation done in initial phases of a design process. However, design is largely based on heuristics . Therefore, design rules are often used in place of the governing equations in the design of a trommel screen. When designing a trommel screen, the main factors affecting the screening efficiency and production rate are the rotational velocity of the drum, mass flow rate of feed particles, size of the drum, and inclination of trommel screen. Depending on desired application of trommel screen, a balance has to be made between the screening efficiency and production rate. Trommel screens are used by the municipal waste industry in the screening process to classify sizes of solid waste. [ 5 ] Besides that, it can also be used to improve the recovery of fuel-derived solid waste. This is done by removing inorganic materials such as moisture and ash from the air-classified light fraction segregated from shredded solid waste, thereby increasing the quality of the product fuel. [ 6 ] In addition, trommel screens are used for the treatment of wastewater. For this particular application, solids from the entering flow will settle onto the screen mesh and the drum will rotate once the liquid reaches a certain level. The clean area of the screen is submerged into the liquid while the trapped solids fall onto a conveyor which will be further processed before removal. [ 7 ] Trommel screens are also used for the grading of raw materials to recover valuable minerals. The screen will segregate minuscule materials which are not in the suitable range of size to be used in the crushing stage. It also helps to get rid of dust particles which will otherwise impair the performance of the subsequent machineries in the downstream processes. [ 8 ] Other applications of trommel screens can be seen in the screening process of composts as an enhancement technique. It selects composts of variable size fractions to get rid of contaminants and incomplete composted residues, forming end products with a variety of uses. [ 9 ] Besides this, the food industries use trommel screens to sort dry food of different sizes and shapes. 
The classification process will help to achieve the desired mass or heat transfer rate and avoid under or over-processing. It also screens tiny food such as peas and nuts that are strong enough to resist the rotational force of the drum. [ 10 ] One of the available designs of trommel screens is concentric screens with the coarsest screen located at the innermost section. It can also be designed in parallel in which objects exit one stream and enter the following. [ 10 ] A trommel in series is a single drum whereby each section has different apertures size arranged from the finest to the coarsest [ 11 ] The trommel screen has many different configurations. For the drum component, an internal screw is fitted when the placement of the drum is flat or elevated at an angle less than 5°. The internal screw facilitates the movement of objects through the drum by forcing them to spiral. For an inclined drum, objects are being lifted and then dropped with the help of lifter bars to move it further down the drum which the objects will otherwise roll down slower. Furthermore, the lifter bars shake the objects to segregate them. Lifter bars will not be considered in the presence of heavy objects as they may break the screen. As for the screens, perforated plate screens or mesh screens are usually used. Perforated plate screen are rolled and welded for strength. This design contains fewer ridges which makes it easier for the cleaning process. On the other hand, mesh screen are replaceable as it is susceptible to wear and tear compared to perforated screen. In addition, screw cleaning work for this design is more intensive as objects tend to get wedged in the mesh ridges. [ 12 ] The screen's aperture comes in either square or round shape which is determined by many operating factors [ 12 ] such as: Trommel screens are cheaper to produce than vibrating screens. They are vibration free which causes less noise than vibrating screens. Trommel screens are more mechanically robust than vibrating screens allowing it to last longer under mechanical stress. [ 11 ] [ 13 ] However more material can be screened at once for a vibrating screen compared to a trommel screen. This is because only one part of the screen area of the trommel screen is utilised during the screening process whilst the entire screen is used for a vibrating screen. Trommel screens are also more susceptible to plugging and blinding, especially when different sized screen apertures are in series. [ 11 ] Plugging is when material larger than the aperture may become stuck or wedged into the apertures and then may be forced through which is undesirable. [ 13 ] Blinding is when wet material clump up and stick to the surface of the screen. [ 14 ] The vibrations in the vibrating screens reduce the risk of plugging and blinding. [ 14 ] A grizzly screen is a grid or set of parallel metal bars set in an inclined stationary frame. The slope and the path of the material are usually parallel to the length of the bars. The length of the bar may be up to 3 m and the spacing between the bars ranges from 50 to 200 mm. Grizzly screens are typically used in mining to limit the size of material passing into a conveyance or size reduction stage. The material of construction of the bars is usually manganese steel to reduce wear. Usually, the bar is shaped in such a way that its top is wider than the bottom, and hence the bars can be made fairly deep for strength without being choked by lumps passing partway through them. 
A coarse feed (say from a primary crusher) is fed at the upper end of the grizzly. Large chunks roll and slide to the lower end (tail discharge), while small lumps with sizes less than the openings between the bars fall through the grid into a separate collector. Roller screens are preferred to trommel screens when the required feed rate is high. They also cause less noise than trommel screens and require less headroom. Viscous and sticky materials are more easily separated with a roller screen than with a trommel screen. [ 11 ] Curved screens are able to separate finer particles (200–3000 μm) than trommel screens. However, blinding may occur if the particle size is less than 200 μm, [ 15 ] which will affect the separation efficiency. The screening rate of a curved screen is also much higher than that of a trommel screen, as the whole surface area of the screen is utilised. [ 16 ] Furthermore, for curved screens, the feed flows parallel to the apertures. This allows any loose material to break away from the jagged surface of the larger materials, resulting in more undersized particles passing through. [ 17 ] The gyratory separator can separate finer particle sizes (down to 40 μm) than a trommel screen. [ 11 ] The size of the gyratory screen separator can be adjusted through removable trays, whereas the trommel screen is usually fixed. [ 18 ] Like trommel screens, gyratory separators can separate both dry and wet materials. However, it is common for a gyratory separator to handle either dry or wet materials only, because the best separation efficiency requires different operating parameters for each. Two separators would therefore be required for the separation of dry and wet materials, whereas one trommel screen would be able to do the same job. [ 17 ] One of the main process characteristics of interest is the screening rate of the trommel. The screening rate is related to the probability of the undersized particles passing through the screen apertures upon impact. [ 6 ] Based on the assumption that the particle falls perpendicularly onto the screen surface, the probability of passage, P, is given by equation ( 1 ), [ 19 ] where d refers to the particle size, a refers to the size of the aperture (diameter or length) and Q refers to the ratio of aperture area to the total screen area. Equation ( 1 ) holds for both square and circular apertures. However, for rectangular apertures the equation takes a modified form, [ 19 ] in which a and A refer to the two rectangular dimensions of the aperture. After determining the probability of passage of a given size interval of particles through the screen, the fraction of particles remaining in the screen, V, can be found from equation ( 3 ), [ 6 ] where n is the number of impingements of the particles on the screen. After making the assumption that the number of impingements per unit time, σ t , is constant, equation ( 3 ) can be rewritten as a function of time. [ 6 ] An alternative way of expressing the fraction of particles remaining in the screen is in terms of the particle weight, [ 6 ] where W ( t ) is the weight of a given size interval of particles remaining in the screen at any given time t and W ( 0 ) is the initial weight of the feed.
Therefore, from equations ( 4 ) and ( 5 ), an expression for the screening rate follows. [ 6 ] Screening efficiency can be calculated from a mass balance over the screen as E = \frac{c(f-u)(1-u)(c-f)}{f(c-u)^{2}(1-f)}, where f, c and u are the mass fractions of the material of interest in the feed, the coarse (oversize) product and the undersize product, respectively. Apart from the screening rate, another characteristic of interest is the separation efficiency of the trommel screen. Assuming that the size distribution function of the undersized particles to be removed, f ( x ), is known, the cumulative probability of all particles ranging from x 0 to x m being separated after n impingements follows directly. [ 19 ] Furthermore, the total number fraction of particles within this size range in the feed can be expressed, [ 19 ] and therefore the separation efficiency, defined as the ratio of the fraction of particles removed to the total fraction of particles in the feed, can be determined. [ 19 ] A number of factors affect the separation efficiency of the trommel. [ 20 ] Two simplifying assumptions are made in the equation presented in this section for the residence time of materials in a rotating screen. First, it is assumed that there is no slippage of particles on the screen. [ 6 ] In addition, the particles dislodging from the screen are in free fall. When the drum rotates, particles are kept in contact with the rotating wall by centrifugal force. [ 6 ] As the particles near the top of the drum, the gravitational force acting in the radial direction overcomes the centrifugal force , causing the particles to fall from the drum in a cataracting motion. [ 2 ] The force components acting on the particle at the point of departure are illustrated in Figure 6. The departure angle, α, can be determined through a force balance, [ 6 ] in terms of r, the drum radius, ω t , the rotational velocity in radians per second, g, the gravitational acceleration, and β, the angle of inclination of the drum. Hence, the residence time of particles in the rotating screen can be determined [ 6 ] as a function of L, the screen length, n, the rotational speed of the screen in revolutions per minute, and α, the departure angle in degrees. Trommel screens are used widely in industry for their efficiency in material size separation. The trommel screening system is governed by the rotational velocity of the drum, the mass flow rate of feed particles, the size of the drum and the inclination of the trommel screen. [ 21 ] When the mesh apertures of the rotating drum are larger than the particle sizes, as shown in Figure 7, the particle velocity V can be resolved into a vertical component V y and a horizontal component V x . Denoting θ as the angle between the direction of particle motion and the vertical, the components can be written as V y = V cos θ and V x = V sin θ. When V y > V x , the particles escape through the mesh of the rotating drum; if V y < V x , the particles are retained within it. Larger granules are retained inside the trommel screen until they reach apertures of suitable size, where they follow the same behaviour.
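The relationships above can be illustrated with a short numerical sketch in Python. The screening efficiency function implements the mass-balance formula given above; the forms used for the probability of passage and the fraction remaining, P = Q(1 − d/a)² and V = (1 − P)ⁿ, are commonly quoted expressions consistent with the verbal descriptions in this section, but the original equations are not reproduced here, so they should be read as assumptions. All numerical values are purely illustrative.

# Illustrative sketch of the trommel screening relationships (Python).
# P and V below use assumed textbook forms consistent with the text;
# only E comes directly from the efficiency formula stated above.

def passage_probability(d, a, open_area_ratio):
    """Probability that a particle of size d passes a square or circular
    aperture of size a on a single impingement, assuming P = Q*(1 - d/a)**2,
    where Q is the ratio of aperture area to total screen area."""
    if d >= a:
        return 0.0
    return open_area_ratio * (1.0 - d / a) ** 2

def fraction_remaining(p, n_impingements):
    """Fraction of particles of a given size still on the screen after n
    impingements, assuming V = (1 - P)**n."""
    return (1.0 - p) ** n_impingements

def screening_efficiency(f, c, u):
    """Screen efficiency E = c(f-u)(1-u)(c-f) / [f(c-u)^2(1-f)], with f, c
    and u the mass fractions of the material of interest in the feed,
    coarse (oversize) product and undersize product respectively."""
    return c * (f - u) * (1 - u) * (c - f) / (f * (c - u) ** 2 * (1 - f))

# Example: 8 mm particles on a screen with 10 mm apertures and 60% open area.
p = passage_probability(d=8.0, a=10.0, open_area_ratio=0.6)
print(f"single-impingement passage probability: {p:.3f}")
print(f"fraction remaining after 20 impingements: {fraction_remaining(p, 20):.3f}")

# Example mass fractions of oversize material in the feed, coarse and fine streams.
print(f"screening efficiency: {screening_efficiency(f=0.45, c=0.85, u=0.10):.3f}")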
With varying rotational velocities, the screening efficiency and production rate vary according to the type of particle motion in the drum. These motion mechanisms include slumping, cataracting and centrifuging. [ 22 ] Slumping occurs when the rotational velocity of the drum is low. The particles are lifted slightly from the bottom of the drum before tumbling down the free surface, as shown in Figure 8. As only smaller-sized filter granules near the wall of the trommel body can be screened, this results in a lower screening efficiency. As rotational velocity increases, slumping transitions to cataracting motion, in which particles detach near the top of the rotating drum, as shown in Figure 9. Larger granules segregate near the inner surface due to the Brazil nut effect , while smaller granules stay near the screen surface, allowing smaller filter granules to pass through. [ 3 ] This motion generates turbulent flow of particles, resulting in a higher screening efficiency compared to slumping. As the rotational velocity increases further, cataracting motion transitions to centrifuging motion, which results in a lower screening efficiency because particles are held against the wall of the rotating drum by centrifugal force, as shown in Figure 10. According to Ottino and Khakhar, [ 22 ] increasing the feed flow rate of particles results in a decrease in screening efficiency. The reason is not fully understood, but it is suggested that the effect is influenced by the thickness of the bed of filter granules packed in the trommel body. At higher feed flow rates, smaller-sized particles in the lower layer of the packed bed can be screened at the apertures, while the remaining small particles adhere to larger particles; at lower feed rates, it is easier for smaller-sized particles to pass through the full thickness of the granule bed. Increasing the area of material exposed to screening allows more particles to be filtered out, so features that increase the exposed screen surface area result in a much higher screening efficiency and production rate. When designing the trommel screen, it should be taken into account that a higher inclination angle results in a higher production rate, owing to the increase in particle velocity V illustrated in Figure 7, but at the cost of a lower screening efficiency. Conversely, decreasing the inclination angle results in a much longer residence time of particles within the trommel system, which increases the screening efficiency. Since screening efficiency is directly proportional to the length of the trommel, a shorter trommel screen is needed at a smaller inclination angle to achieve a desired screening efficiency. It is suggested that the inclination angle should not be below 2°, because the efficiency and production rate are not well characterised below this point. Below 2°, for a given set of operating conditions, decreasing the inclination angle increases the bed depth, which lowers the screening efficiency, but it simultaneously increases the residence time, which raises it; it is unclear which effect dominates at inclination angles of less than 2°.
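The transition from cataracting to centrifuging described above can be estimated with a simple force balance that is not given in this article: particles remain pinned to the drum wall once the centrifugal acceleration at the wall exceeds gravity, giving a critical angular velocity of roughly ω_c = √(g/r) for a horizontal drum. The sketch below computes the corresponding speed in revolutions per minute for an assumed drum radius; the number is illustrative only, and trommels are normally operated well below it so that particles cataract rather than centrifuge.

import math

def critical_speed_rpm(drum_radius_m, g=9.81):
    """Rotational speed above which centrifuging begins, from the force
    balance omega**2 * r = g at the drum wall (horizontal drum assumed)."""
    omega_c = math.sqrt(g / drum_radius_m)   # rad/s
    return omega_c * 60.0 / (2.0 * math.pi)  # convert to rev/min

# Example: a drum of 1.0 m radius (assumed value for illustration).
print(f"critical speed: {critical_speed_rpm(1.0):.1f} rpm")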
[ 3 ] In the wastewater treatment industry, the solids that exit the trommel are compressed and dewatered as they travel along the conveyor. Most often, a post-washing treatment such as a jet wash is used after the trommel screen to break down faecal and other unwanted semi-solid matter. The volume of the solids decreases by up to 40%, depending on their properties, before removal. [ 7 ]
https://en.wikipedia.org/wiki/Trommel_screen
Trophallaxis ( / ˌ t r oʊ f ə ˈ l æ k s ɪ s / ) is the transfer of food or other fluids among members of a community through mouth-to-mouth ( stomodeal ) or anus-to-mouth ( proctodeal ) feeding. Along with nutrients, trophallaxis can involve the transfer of molecules such as pheromones, organisms such as symbionts , and information to serve as a form of communication. [ 1 ] Trophallaxis is used by some birds , gray wolves , vampire bats , and is most highly developed in eusocial insects such as ants , wasps , bees , and termites . Tropho- (prefix or suffix) is derived from the Greek trophé, meaning 'nourishment'. The Greek 'allaxis' means 'exchange'. [ 2 ] The word was introduced by the entomologist William Morton Wheeler in 1918. [ 3 ] Trophallaxis was used in the past to support theories on the origin of sociality in insects. [ 4 ] [ 5 ] The Swiss psychologist and entomologist Auguste Forel also believed that food sharing was key to ant society and he used an illustration of it as the frontispiece for his book The Social World of the Ants Compared with that of Man . [ 6 ] Proctodeal trophallaxis allowed termites to transfer cellulolytic flagellates that made the digestion of wood possible and efficient. [ 7 ] Besides sociality, trophallaxis has evolved within many species as a method of nourishment for adults and/or juveniles, [ 8 ] kin survival, [ 8 ] transfer of symbionts, [ 9 ] transfer of immunity, [ 10 ] colony recognition [ 11 ] and foraging communication. [ 12 ] Trophallaxis has even evolved as a parasitic strategy in some species to obtain food from their host. [ 13 ] Trophallaxis can also result in the spreading of chemicals, such as pheromones , throughout a colony, which is significant in social colony functioning. [ 14 ] Species have evolved anatomy to allow them to participate in trophallaxis, such as the proventriculus in the crops of Formica fusca ants. [ 14 ] This structure acts as a valve to enhance food storage capacity. [ 14 ] Likewise, the honey bee Apis mellifera is able to protrude their proboscis and sip nectar from the open mandibles of the donor bee. [ 12 ] Certain mechanisms have also evolved to initiate food sharing, such as the sensory exploitation strategy that has evolved in the common cuckoo brood parasites. [ 15 ] These birds have evolved brightly coloured gapes that stimulate the host to transfer food. [ 15 ] Trophallaxis is a form of social feeding in many insects that contributes to the formation of social bonds. [ 5 ] Trophallaxis serves as a means of communication , at least in bees , like M. genalis , [ 16 ] and ants. [ 17 ] Trophallaxis in M. genalis is part of a social exchange system, where dominant bees are usually the recipients of food. [ 16 ] It increases longevity of bees that have less access to food and decreases aggression between nest mates. [ 16 ] In the red fire ant, colony members store food in their crops and regularly exchange this food with other colony members and larvae to form a sort of "communal stomach" for the colony. [ 17 ] This is also true for certain species of Lasioglossum , such as the sweat bee Lasioglossum hemichalceum . L. hemichalceum will often exchange food with other members regardless of whether they are nestmates or not. [ 18 ] This is because cooperation among non-relatives offers more benefit than cost to the group. [ 18 ] Many wasps, like Protopolybia exigua and Belonogaster petiolata , exhibit foraging behavior where adults perform trophallaxis with adults and between adults and larvae. [ 19 ] [ 8 ] P. 
exigua carry nectar, wood pulp and macerated prey in its crop from the field to the nest for transfer; for larvae survival they carry amounts of prey proportional to the amount of larvae in the nest. [ 19 ] Voluntary trophallaxis in Xylocopa pubescens bees has led to the nest guarding behavior that the species is known for. [ 20 ] This bee species allows one adult to forage and bring nectar back for the rest of the nest population as a way to continually defend the nest while obtaining nutrients for all members of the colony. [ 20 ] In termites, [ 9 ] proctodeal trophallaxis is crucial for replacing the gut endosymbionts that are lost after every molt. Gut symbionts are also transferred by anal trophallaxis in wood-eating termites and cockroaches. [ 21 ] Transfer of gut symbionts in these species is essential to digest wood as their food source. Carpenter ants transfer immunity through trophallaxis by the direct transfer of antimicrobial substances, increasing disease resistance and social immunity of the colony. [ 10 ] [ 20 ] In some species of ants, it may play a role in spreading the colony odour that identifies members. [ 22 ] Honey bee foragers use trophallaxis in associative learning to form long-term olfactory memories, in order to teach nest mates foraging behavior and where to search for food. [ 12 ] In addition, Vespula austriaca wasps also engage in trophallaxis as a form of parasitism with its host to obtain nutrients. [ 23 ] V. austriaca is an obligate parasite species that invades the nests of host species and obtains food by constraining the host with their legs and forcing trophallaxis. [ 23 ] Vertebrates such as some bird species, gray wolves , and vampire bats also feed their young through regurgitation of food as a form of trophallaxis. Food sharing in vertebrates is a form of reciprocity demonstrated by many social vertebrates. [ 24 ] Wild wolves transport food in their stomach to pups and/or breeding females and share it by regurgitation, as a form of trophallaxis. [ 25 ] The recipient wolves often lick or sniff the donor wolf's muzzle to activate regurgitation and receive nutrients. [ 25 ] Vampire bats share blood with kin by regurgitation as a means of increasing their fitness through kin selection. [ 24 ] Birds regurgitate food and directly transfer it into the mouths of their offspring as a part of parental care, such as the " crop milk " that is transferred by mother ring doves into the mouths of their young. [ 26 ] The cuckoo brood parasite is another bird species that engages in trophallaxis. The cuckoo bird uses mimicry, such as mimicking the eggshell colors and patterns of the host's eggs, to place their young in the nest of host species where they will be fed and reared at no expense to the cuckoo mother. [ 15 ] The cuckoo young can often mimic the begging call of an entire nest of the host species' young and have evolved intensely colored gapes ; both of which act as supernormal stimuli , inducing the host bird to deliver food to them over their own young via trophallaxis. [ 15 ]
https://en.wikipedia.org/wiki/Trophallaxis
A trophic egg is an egg whose function is not reproduction but nutrition ; in essence, the trophic egg serves as food for offspring hatched from viable eggs. In most species that produce them, a trophic egg is usually an unfertilised egg. The production of trophic eggs has been observed in a highly diverse range of species, including fish , amphibians , spiders and insects . The function is not limited to any particular level of parental care , but occurs in some sub-social species of insects, the spider A. ferox , and a few other species like the frogs Leptodactylus fallax and Oophaga , and the catfish Bagrus meridionalis . Parents of some species deliver trophic eggs directly to their offspring, whereas some other species simply produce the trophic eggs after laying the viable eggs; they then leave the trophic eggs where the viable offspring are likely to find them. The mackerel sharks present the most extreme example of proximity between reproductive eggs and trophic eggs; their viable offspring feed on trophic eggs in utero . Despite the diversity of species and life strategies in which trophic eggs occur, all trophic egg functions are similarly derived from similar ancestral functions, which once amounted to the sacrifice of potential future offspring in order to provide food for the survival of rival (usually earlier) offspring. In more derived examples the trophic eggs are not viable, being neither fertilised, nor even fully formed in some cases, so they do not represent actually potential offspring, although they still represent parental investment corresponding to the amount of food it took to produce them. Trophic eggs are not always morphologically distinct from normal reproductive eggs; however if there is no physical distinction there tends to be some kind of specialised behaviour in the way that trophic eggs are delivered by the parents. In some beetles, trophic eggs are paler in colour and softer in texture than reproductive eggs, with a smoother surface on the chorion . [ 1 ] It has also been found that trophic eggs in ants have a less pronounced reticulate pattern on the chorion. [ 2 ] The morphological differences may arise due to the fact that mothers invest less energy in the production of trophic eggs than viable eggs. [ 3 ] The behaviour of trophic egg-laying species depends highly on their environment and can be modified via adaptive plasticity in response to environmental variation. The ratio of trophic to viable eggs is determined by the availability of resources, although the absolute number of trophic eggs does not always change. [ 4 ] The production of fewer viable eggs ensures that each hatched nymph will have a larger provision of trophic eggs; and therefore give each individual an enhanced chance of survival when external resources are limited. Females can adaptively adjust the egg ratio in response to environmental drivers prior to oviposition. When resources are limited, the presence of trophic eggs greatly increases the maturation and survival rates of offspring. When other suitable sources of food are plentiful, feeding on trophic eggs generally has little effect on brood success. [ 5 ] However, there are some species such as the subsocial burrower bug Canthophorus niveimarginatus ( Heteroptera : Cydnidae ) whose offspring cannot survive at all without the provision of trophic eggs. The nymphs starve to death because trophic eggs are the only thing they are able to feed on. 
[ 6 ] Sibling cannibalism, common in many spider species, is not affected by the proportion of trophic eggs, since viable eggs are oviposited and hatch synchronously, before trophic eggs are laid. In the spider Amaurobius ferox , trophic eggs are laid the day after spiderlings emerge from their egg sac. The mother's reproductive behaviour is modified by the behaviour of her offspring, and their presence inhibits the second generation of eggs from maturing; instead they are released as infertile trophic eggs. Converting the second generation into food for the first ultimately boosts the mother's reproductive success. [ 7 ] There are no concrete explanations for the evolution of trophic eggs, and two main conflicting arguments have been proposed. If trophic eggs have evolved (and are now distinct) from functionless by-products of failed reproduction, then they should be more easily available and provide more nutrients to the offspring than their evolutionary predecessors. There seems to be clear evidence of this adaptation in many species; it can be seen in mothers making an effort to distribute trophic eggs to their offspring, or in eggs which are specialised for the nutritional needs of the offspring. However, in many species, the two types of egg are indistinguishable. Various hypotheses could potentially be tested to determine whether trophic eggs are indeed an evolved phenotype. [ 3 ] It has been suggested that trophic egg-laying evolved as a consequence of limited egg size , since larger eggs with a greater nutrient supply would require the mother to have a larger body size; the mother instead produces more eggs, some of which are not intended to reach maturity. It is relatively simple for the mother to adjust the ratio of fertilised to unfertilised eggs in response to environmental conditions. An alternative to trophic egg-laying is sibling cannibalism; however, this requires the mother to regulate the synchrony of hatching times , and eggs which are not eaten would continue to develop. If it is difficult for the mother to achieve this synchrony, trophic eggs are a sensible alternative for ensuring that the offspring that hatch will be fed sufficiently.
https://en.wikipedia.org/wiki/Trophic_egg
The trophic level of an organism is the position it occupies in a food web . Within a food web, a food chain is a succession of organisms that eat other organisms and may, in turn, be eaten themselves. The trophic level of an organism is the number of steps it is from the start of the chain. A food web starts at trophic level 1 with primary producers such as plants, can move to herbivores at level 2, carnivores at level 3 or higher, and typically finish with apex predators at level 4 or 5. The path along the chain can form either a one-way flow or a part of a wider food "web". Ecological communities with higher biodiversity form more complex trophic paths. The word trophic derives from the Greek τροφή (trophē) referring to food or nourishment. [ 1 ] The concept of trophic level was developed by Raymond Lindeman (1942), based on the terminology of August Thienemann (1926): "producers", "consumers", and "reducers" (modified to "decomposers" by Lindeman). [ 2 ] [ 3 ] The three basic ways in which organisms get food are as producers, consumers, and decomposers. Trophic levels can be represented by numbers, starting at level 1 with plants. Further trophic levels are numbered subsequently according to how far the organism is along the food chain. In real-world ecosystems , there is more than one food chain for most organisms, since most organisms eat more than one kind of food or are eaten by more than one type of predator. A diagram that sets out the intricate network of intersecting and overlapping food chains for an ecosystem is called its food web . [ 6 ] Decomposers are often left off food webs, but if included, they mark the end of a food chain. [ 6 ] Thus food chains start with primary producers and end with decay and decomposers. Since decomposers recycle nutrients, leaving them so they can be reused by primary producers, they are sometimes regarded as occupying their own trophic level. [ 7 ] [ 8 ] The trophic level of a species may vary if it has a choice of diet. Virtually all plants and phytoplankton are purely phototrophic and are at exactly level 1.0. Many worms are at around 2.1; insects 2.2; jellyfish 3.0; birds 3.6. [ 9 ] A 2013 study estimates the average trophic level of human beings at 2.21, similar to pigs or anchovies. [ 10 ] This is only an average, and plainly both modern and ancient human eating habits are complex and vary greatly. For example, a traditional Inuit living on a diet consisting primarily of seals would have a trophic level of nearly 5. [ 11 ] In general, each trophic level relates to the one below it by absorbing some of the energy it consumes, and in this way can be regarded as resting on, or supported by, the next lower trophic level. Food chains can be diagrammed to illustrate the amount of energy that moves from one feeding level to the next in a food chain. This is called an energy pyramid . The energy transferred between levels can also be thought of as approximating to a transfer in biomass , so energy pyramids can also be viewed as biomass pyramids, picturing the amount of biomass that results at higher levels from biomass consumed at lower levels. However, when primary producers grow rapidly and are consumed rapidly, the biomass at any one moment may be low; for example, phytoplankton (producer) biomass can be low compared to the zooplankton (consumer) biomass in the same area of ocean. [ 12 ] The efficiency with which energy or biomass is transferred from one trophic level to the next is called the ecological efficiency . 
Consumers at each level convert on average only about 10% of the chemical energy in their food to their own organic tissue (the ten-per cent law ). For this reason, food chains rarely extend for more than 5 or 6 levels. At the lowest trophic level (the bottom of the food chain), plants convert about 1% of the sunlight they receive into chemical energy. It follows from this that the total energy originally present in the incident sunlight that is finally embodied in a tertiary consumer is about 0.001% [ 7 ] Both the number of trophic levels and the complexity of relationships between them evolve as life diversifies through time, the exception being intermittent mass extinction events. [ 13 ] Food webs largely define ecosystems, and the trophic levels define the position of organisms within the webs. But these trophic levels are not always simple integers, because organisms often feed at more than one trophic level. [ 14 ] [ 15 ] For example, some carnivores also eat plants, and some plants are carnivores. A large carnivore may eat both smaller carnivores and herbivores; the bobcat eats rabbits, but the mountain lion eats both bobcats and rabbits. Animals can also eat each other; the bullfrog eats crayfish and crayfish eat young bullfrogs. The feeding habits of a juvenile animal, and, as a consequence, its trophic level, can change as it grows up. The fisheries scientist Daniel Pauly sets the values of trophic levels to one in plants and detritus, two in herbivores and detritivores (primary consumers), three in secondary consumers, and so on. The definition of the trophic level, TL, for any consumer species is: [ 8 ] where T L j {\displaystyle TL_{j}} is the fractional trophic level of the prey j , and D C i j {\displaystyle DC_{ij}} represents the fraction of j in the diet of i . That is, the consumer trophic level is one plus the weighted average of how much different trophic levels contribute to its food. In the case of marine ecosystems, the trophic level of most fish and other marine consumers takes a value between 2.0 and 5.0. The upper value, 5.0, is unusual, even for large fish, [ 16 ] though it occurs in apex predators of marine mammals, such as polar bears and orcas. [ 17 ] In addition to observational studies of animal behavior, and quantification of animal stomach contents, trophic level can be quantified through stable isotope analysis of animal tissues such as muscle , skin , hair , bone collagen . This is because there is a consistent increase in the nitrogen isotopic composition at each trophic level caused by fractionations that occur with the synthesis of biomolecules; the magnitude of this increase in nitrogen isotopic composition is approximately 3–4‰. [ 18 ] [ 19 ] In fisheries, the mean trophic level for the fisheries catch across an entire area or ecosystem is calculated for year y as: where Y i y {\displaystyle Y_{iy}} is the annual catch of the species or group i in year y , and T L i {\displaystyle \ TL_{i}\ } is the trophic level for species i as defined above. [ 8 ] Fish at higher trophic levels usually have a higher economic value, which can result in overfishing at the higher trophic levels. Earlier reports found precipitous declines in mean trophic level of fisheries catch, in a process known as fishing down the food web . 
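The fractional trophic level definition above, TL_i = 1 + Σ_j DC_ij · TL_j, and the catch-weighted mean trophic level of a fishery can be computed directly. The short Python sketch below does both; the species names, diet fractions and catch tonnages are invented for illustration and are not data from this article.

def trophic_level(diet, levels):
    """Trophic level of a consumer: one plus the diet-weighted average of
    the trophic levels of its prey (diet maps prey name -> diet fraction)."""
    return 1.0 + sum(frac * levels[prey] for prey, frac in diet.items())

levels = {"phytoplankton": 1.0, "detritus": 1.0}          # primary producers / detritus
levels["zooplankton"] = trophic_level({"phytoplankton": 1.0}, levels)
levels["anchovy"] = trophic_level({"zooplankton": 0.8, "phytoplankton": 0.2}, levels)
levels["tuna"] = trophic_level({"anchovy": 0.9, "zooplankton": 0.1}, levels)
print(levels)   # e.g. anchovy ~2.8, tuna ~3.7

# Mean trophic level of a year's catch: the catch-weighted average of the
# trophic levels of the species landed (tonnages are made up).
catch = {"anchovy": 800.0, "tuna": 50.0}
mean_tl = sum(catch[s] * levels[s] for s in catch) / sum(catch.values())
print(f"mean trophic level of catch: {mean_tl:.2f}")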
[ 20 ] However, more recent work finds no relation between economic value and trophic level; [ 21 ] and that mean trophic levels in catches, surveys and stock assessments have not in fact declined, suggesting that fishing down the food web is not a global phenomenon. [ 22 ] However Pauly et al . note that trophic levels peaked at 3.4 in 1970 in the northwest and west-central Atlantic, followed by a subsequent decline to 2.9 in 1994. They report a shift away from long-lived, piscivorous, high-trophic-level bottom fishes, such as cod and haddock, to short-lived, planktivorous, low-trophic-level invertebrates (e.g., shrimp) and small, pelagic fish (e.g., herring). This shift from high-trophic-level fishes to low-trophic-level invertebrates and fishes is a response to changes in the relative abundance of the preferred catch. They consider that this is part of the global fishery collapse, [ 17 ] [ 23 ] which finds an echo in the overfished Mediterranean Sea. [ 24 ] Humans have a mean trophic level of about 2.21, about the same as a pig or an anchovy. [ 25 ] [ 26 ] Since biomass transfer efficiencies are only about 10%, it follows that the rate of biological production is much greater at lower trophic levels than it is at higher levels. Fisheries catch, at least, to begin with, will tend to increase as the trophic level declines. At this point the fisheries will target species lower in the food web. [ 23 ] In 2000, this led Pauly and others to construct a "Fisheries in Balance" index, usually called the FiB index. [ 27 ] The FiB index is defined, for any year y , by [ 8 ] where Y y {\displaystyle Y_{y}} is the catch at year y , T L y {\displaystyle TL_{y}} is the mean trophic level of the catch at year y , Y 0 {\displaystyle Y_{0}} is the catch, T L 0 {\displaystyle TL_{0}} the mean trophic level of the catch at the start of the series being analyzed, and T E {\displaystyle TE} is the transfer efficiency of biomass or energy between trophic levels. The FiB index is stable (zero) over periods of time when changes in trophic levels are matched by appropriate changes in the catch in the opposite direction. The index increases if catches increase for any reason, e.g. higher fish biomass, or geographic expansion. [ 8 ] Such decreases explain the "backward-bending" plots of trophic level versus catch originally observed by Pauly and others in 1998. [ 23 ] One aspect of trophic levels is called tritrophic interaction. Ecologists often restrict their research to two trophic levels as a way of simplifying the analysis; however, this can be misleading if tritrophic interactions (such as plant–herbivore–predator) are not easily understood by simply adding pairwise interactions (plant-herbivore plus herbivore–predator, for example). Significant interactions can occur between the first trophic level (plant) and the third trophic level (a predator) in determining herbivore population growth, for example. Simple genetic changes may yield morphological variants in plants that then differ in their resistance to herbivores because of the effects of the plant architecture on enemies of the herbivore. [ 28 ] Plants can also develop defenses against herbivores such as chemical defenses. [ 29 ]
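The Fisheries in Balance (FiB) index referred to above can be written out explicitly. The form used in this sketch, FiB_y = log₁₀[Y_y·(1/TE)^TL_y] − log₁₀[Y_0·(1/TE)^TL_0], is the commonly cited definition and matches the variables listed in the text, but the exact expression is not reproduced in this extract, so treat it as an assumption; the catch series below is invented.

import math

def fib_index(catch_y, tl_y, catch_0, tl_0, te=0.10):
    """'Fisheries in Balance' index for year y relative to a baseline year,
    assuming FiB = log10[Yy*(1/TE)**TLy] - log10[Y0*(1/TE)**TL0], with TE
    the transfer efficiency between trophic levels (10% by default)."""
    return (math.log10(catch_y * (1.0 / te) ** tl_y)
            - math.log10(catch_0 * (1.0 / te) ** tl_0))

# Illustrative series in which the catch rises while the mean trophic level
# of the catch falls; the index stays near zero when the two roughly balance.
baseline = (1000.0, 3.4)
for year, (catch, tl) in enumerate([(1000.0, 3.4), (1500.0, 3.2), (2200.0, 3.0)]):
    print(year, round(fib_index(catch, tl, *baseline), 3))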
https://en.wikipedia.org/wiki/Trophic_level
The trophic level index ( TLI ) is used in New Zealand as a measure of the nutrient status of lakes. [ 1 ] It is similar to the trophic state index but was proposed as an alternative suited to New Zealand. [ 2 ] The system uses four criteria, weighted equally: phosphorus and nitrogen concentrations, visual clarity, and algal biomass. [ 3 ]
https://en.wikipedia.org/wiki/Trophic_level_index
Trophic mutualism is a key type of ecological mutualism . Specifically, "trophic mutualism" refers to the transfer of energy and nutrients between two species . This is also sometimes known as resource-to-resource mutualism. Trophic mutualism often occurs between an autotroph and a heterotroph . [ 1 ] Although there are many examples of trophic mutualisms, the heterotroph is generally a fungus or a bacterium. This mutualism can be either obligate or opportunistic. Ecologists first began to understand trophic mutualisms in the mid-20th century with the investigation of nutrient abundance and distribution. One of the first trophic mutualisms, discovered in 1958 by Professor Leonard Muscatine of UCLA, was the relationship between endozoic algae and coral. [ 7 ] In this relationship, the algae provide the coral with a carbon source to develop its CaCO 3 skeleton, and the coral secretes a protective, nutrient-rich mucus which benefits the algae. Perhaps one of the most famous discoveries made by Muscatine in the field of trophic mutualism came about 10 years later in another aquatic system: the relationship between algae and the water hydra. [ 8 ] This work was significant in establishing the presence of mutualistic relationships in both aquatic and terrestrial environments. Perhaps the most widely acclaimed example of a trophic mutualism is that of the leafcutter ants , which engage in trophic mutualism with a fungus. [ 9 ] These ants cultivate a certain type of fungus by providing it with leaves and other nutrients; in turn, the ants feed on a special nutrient that is created only by the fungus they nurture. This trophic mutualism has been studied in detail since the 1970s.
https://en.wikipedia.org/wiki/Trophic_mutualism
Trophic species are a scientific grouping of organisms according to their shared trophic (feeding) positions in a food web or food chain . Trophic species have identical prey and a shared set of predators in the food web. This means that members of a trophic species share many of the same kinds of ecological functions. [ 1 ] [ 2 ] The idea of trophic species was first devised by Frederic Briand and Joel Cohen in 1984 when investigating scaling laws applying to food webs. [ 3 ] The category may include species of plants, animals, a combination of plants and animals, and biological stages of an organism. When groups are assigned in this trophic manner, the relationships are linear in scale, which allowed the same authors to predict the proportion of different trophic links in food webs. [ 4 ] Furthermore, grouping similar species according to feeding habit rather than genetics results in a ratio of predator to prey that is generally 1:1 in food webs. [ 5 ]
https://en.wikipedia.org/wiki/Trophic_species
The Trophic State Index ( TSI ) is a classification system designed to rate water bodies based on the amount of biological productivity they sustain. [ 1 ] Although the term "trophic index" is commonly applied to lakes, any surface water body may be indexed. The TSI of a water body is rated on a scale from zero to one hundred. [ 1 ] Under the TSI scale, water bodies may be defined as: [ 1 ] The quantities of nitrogen , phosphorus , and other biologically useful nutrients are the primary determinants of a water body's TSI. Nutrients such as nitrogen and phosphorus tend to be limiting resources in standing water bodies, so increased concentrations tend to result in increased plant growth, followed by corollary increases in subsequent trophic levels . [ a ] Consequently, trophic index may sometimes be used to make a rough estimate of biological condition of water bodies. [ 2 ] Carlson's index was proposed by Robert Carlson in his 1977 seminal paper, "A trophic state index for lakes". [ 3 ] It is one of the more commonly used trophic indices and is the trophic index used by the United States Environmental Protection Agency . [ 2 ] The trophic state is defined as the total weight of biomass in a given water body at the time of measurement. Because they are of public concern, the Carlson index uses the algal biomass as an objective classifier of a lake or other water body's trophic status. [ 3 ] According to the US EPA, the Carlson Index should only be used with lakes that have relatively few rooted plants and non-algal turbidity sources. [ 2 ] Because they tend to correlate, three independent variables can be used to calculate the Carlson Index: chlorophyll pigments , total phosphorus and Secchi depth . Of these three, chlorophyll will probably yield the most accurate measures, as it is the most accurate predictor of biomass. Phosphorus may be a more accurate estimation of a water body's summer trophic status than chlorophyll if the measurements are made during the winter. Finally, the Secchi depth is probably the least accurate measure, but also the most affordable and expedient one. Consequently, citizen monitoring programs and other volunteer or large-scale surveys will often use the Secchi depth. By translating the Secchi transparency values to a log base 2 scale, each successive doubling of biomass is represented as a whole integer index number. [ 4 ] The Secchi depth, which measures water transparency, indicates the concentration of dissolved and particulate material in the water, which in turn can be used to derive the biomass. This relationship is expressed in the following equation: A lake is usually classified as being in one of three possible classes: oligotrophic , mesotrophic or eutrophic . Lakes with extreme trophic indices may also be considered hyperoligotrophic or hypereutrophic (also "hypertrophic"). The table below demonstrates how the index values translate into trophic classes. Oligotrophic lakes generally host very little or no aquatic vegetation and are relatively clear, while eutrophic lakes tend to host large quantities of organisms, including algal blooms. Each trophic class supports different types of fish and other organisms, as well. If the algal biomass in a lake or other water body reaches too high a concentration (say >80 TSI), massive fish die-offs may occur as decomposing biomass deoxygenates the water. Limnologists use the term " oligotrophic " or "hipotrophic" to describe lakes that have low primary productivity due to nutrient deficiency. 
(This contrasts against eutrophic lakes, which are highly productive due to an ample supply of nutrients, as can arise from human activities such as agriculture in the watershed.) Oligotrophic lakes are most common in cold, sparsely developed regions that are underlain by crystalline igneous , granitic bedrock. Due to their low algal production, these lakes consequently have very clear waters, with high drinking-water quality. Lakes that have intermixing of their layers are classified into the category of holomictic , whereas lakes that do not have interlayer mixing are permanently stratified and thus are termed meromictic . Generally, in a holomictic lake, during the fall, the cooling of the epilimnion reduces lake stratification, thereby allowing for mixing to occur. Winds aid in this process. [ 5 ] Thus it is the deep mixing of lakes (which occurs most often during the fall and early winter, in holomictic lakes of the monomictic subtype) that allows oxygen to be transported from the epilimnion to the hypolimnion. [ 6 ] [ 7 ] [ 8 ] In this way, oligotrophic lakes can have significant oxygen down to the depth to which the aforementioned seasonal mixing occurs, but they will be oxygen deficient below this depth. Therefore, oligotrophic lakes often support fish species such as lake trout , which require cold, well- oxygenated waters. The oxygen content of these lakes is a function of their seasonally mixed hypolimnetic volume. Hypolimnetic volumes that are anoxic will result in fish congregating in areas where oxygen is sufficient for their needs. [ 6 ] Anoxia is more common in the hypolimnion during the summer when mixing does not occur. [ 5 ] In the absence of oxygen from the epilimnion, decomposition can cause hypoxia in the hypolimnion. [ 9 ] Mesotrophic lakes are lakes with an intermediate level of productivity. These lakes are commonly clear water lakes and ponds with beds of submerged aquatic plants and medium levels of nutrients. The term mesotrophic is also applied to terrestrial habitats. Mesotrophic soils have moderate nutrient levels. A eutrophic water body, commonly a lake or pond, has high biological productivity. Due to excessive nutrients, especially nitrogen and phosphorus, these water bodies are able to support an abundance of aquatic plants. Usually, the water body will be dominated either by aquatic plants or algae. When aquatic plants dominate, the water tends to be clear. When algae dominate, the water tends to be darker. The algae engage in photosynthesis which supplies oxygen to the fish and biota which inhabit these waters. Occasionally, an excessive algal bloom will occur and can ultimately result in fish death, due to respiration by algae and bottom-living bacteria. The process of eutrophication can occur naturally and by human impact on the environment . Eutrophic comes from the Greek eutrophos meaning "well-nourished", from eu meaning good and trephein meaning "to nourish". [ 10 ] Hypertrophic or hypereutrophic lakes are very nutrient-rich lakes characterized by frequent and severe nuisance algal blooms and low transparency. Hypereutrophic lakes have a visibility depth of less than 90 centimetres (3 feet), they have greater than 40 micrograms/litre total chlorophyll and greater than 100 micrograms/litre phosphorus . The excessive algal blooms can also significantly reduce oxygen levels and prevent life from functioning at lower depths creating dead zones beneath the surface. 
Likewise, large algal blooms can cause biodilution to occur, which is a decrease in the concentration of a pollutant with an increase in trophic level . This is opposed to biomagnification and is due to a decreased concentration from increased algal uptake. Both natural and anthropogenic factors can influence a lake or other water body's trophic index. A water body situated in a nutrient-rich region with high net primary productivity may be naturally eutrophic. Nutrients carried into water bodies from non-point sources such as agricultural runoff, residential fertilisers, and sewage will all increase the algal biomass , and can easily cause an oligotrophic lake to become hypereutrophic. [ 11 ] [ 12 ] [ 13 ] Although there is no absolute consensus as to which nutrients contribute the most to increasing primary productivity, phosphorus concentration is thought to be the main limiting factor in freshwater lakes. [ 14 ] [ 15 ] [ 16 ] This is likely due to the prevalence of nitrogen-fixing microorganisms in these systems, which can compensate for a lack of readily available fixed nitrogen. [ 16 ] In some coastal marine ecosystems, research has found nitrogen to be the key limiting nutrient, driving primary production independently of phosphorus. [ 17 ] [ 18 ] Nitrogen fixation cannot adequately supply these marine ecosystems, because the nitrogen fixing microbes are themselves limited by the availability of various abiotic factors like sunlight and dissolved oxygen. [ 19 ] However, marine ecosystems are too broad a range of environments for one nutrient to limit all marine primary productivity. The limiting nutrient may vary in different marine environments according to a variety of factors like depth, distance from shore, or availability of organic matter. [ 20 ] [ 19 ] Often, the desired trophic index differs between stakeholders. Water-fowl enthusiasts (e.g. duck hunters) may want a lake to be eutrophic so that it will support a large population of waterfowl. Residents, though, may want the same lake to be oligotrophic, as this is more pleasant for swimming and boating. Natural resource agencies are generally responsible for reconciling these conflicting uses and determining what a water body's trophic index should be.
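For reference, the Carlson index values discussed throughout this article can be estimated from any of the three variables. The regression forms used in the sketch below (Secchi depth in metres, chlorophyll a and total phosphorus in micrograms per litre) are the commonly cited versions of Carlson's 1977 equations; they are not reproduced in this extract, so the coefficients should be treated as assumptions, and the sample lake values are invented.

import math

def tsi_secchi(sd_m):
    """TSI from Secchi depth (m), assuming TSI = 60 - 14.41*ln(SD)."""
    return 60.0 - 14.41 * math.log(sd_m)

def tsi_chlorophyll(chl_ug_l):
    """TSI from chlorophyll a (ug/L), assuming TSI = 9.81*ln(Chl) + 30.6."""
    return 9.81 * math.log(chl_ug_l) + 30.6

def tsi_phosphorus(tp_ug_l):
    """TSI from total phosphorus (ug/L), assuming TSI = 14.42*ln(TP) + 4.15."""
    return 14.42 * math.log(tp_ug_l) + 4.15

# Example lake (invented values): 2 m Secchi depth, 12 ug/L chlorophyll a,
# 35 ug/L total phosphorus; all three estimates land near the
# mesotrophic/eutrophic boundary, as expected when the lake behaves typically.
print(round(tsi_secchi(2.0), 1), round(tsi_chlorophyll(12.0), 1), round(tsi_phosphorus(35.0), 1))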
https://en.wikipedia.org/wiki/Trophic_state_index
Trophobiosis is a symbiotic association between organisms where food is obtained or provided. The provider of food in the association is referred to as a trophobiont. The name is derived from the Ancient Greek τροφή ( trophē ), meaning "nourishment", and -βίωσις ( -biosis ), which is short for the English word symbiosis . [ 1 ] Among the more noted trophobiotic groups are ants and members of a number of hemipteran families. A number of ant genera are recorded as tending groups of hemipterans to varying degrees. In most cases the ants collect and transport the honeydew secretions from the hemipterans back to the nest for consumption. Not all examples of ant trophobiotic interactions are mutualistic , with instances such as ants attracted to Cacopsylla pyricola feeding on both the honeydew and the C. pyricola individuals. This interaction has been recorded in Ancient Chinese writings and is noted as one of the oldest instances of biological pest control. [ 1 ] In mutualistic relationships, the production of honeydew by trophobionts is rewarded by removal of dead hemipterans and protection from a variety of predators by the attendant ants. In some relationships the ants will build shelters for the farmed trophobionts, either to protect them or keep them from leaving the area. Some species of ants construct underground rooms to house the trophobionts and carry them between the host plant and housing area daily. In more complex obligate relationships (where both symbionts entirely depend on each other for survival) the ants will nest with the partner trophobionts in silk constructed leaf shelters or in underground colonies. Several species of migratory ants are noted to bring the trophobiont species with them when they move, transporting the trophobionts to new feeding areas and acting as a quick escape method if danger arises. While aphids, mealybugs and other more sedentary hemipterans are most often used as trophobionts, occasional instances of more active hemipterans such as leafhoppers have been recorded. [ 1 ] [ 2 ] In such instances in southern Africa, larger ant genera such as Camponotus are more successful at herding and containment of the leafhoppers. Smaller ant genera have been observed to tend younger or smaller leafhoppers for short periods, and in some cases, small ant genera were observed visiting herds tended by large ant genera. In these cases it is suggested the small ant genera may have been stealing honeydew droplets from the herd. [ 2 ] Ants of the entirely subterranean genus Acropyga have a noted trophobiotic relationship with mealybugs , being considered obligate coccidophiles and living in the same nests with their trophobionts. Queens of at least eleven living Acropyga species have been observed carrying a "seed" trophobiont in their mandibles during the mating flight, and it is suggested the seed is then used to start the mealybug colony in the queen's new nest. [ 3 ] The level of dependency between Acropyga and their trophobiont is suggested to be such that neither can survive without the other. An experiment using a captive colony of A. epedana showed that even when the colony was starved the ant refused offered food alternatives. This specific behavior has also been documented in Dominican amber fossils dating back 15 million years ago , with queens of the fossil species Acropyga glaesaria being found preserved with species of the extinct mealybug genus Electromyrmococcus . 
[ 4 ] [ 5 ] Older trophobiotic associations have been suggested for the Eocene fossil ant species Ctenobethylus goepperti [ 6 ] based on a Baltic amber fossil entombing thirteen C. goepperti workers intermingled with a number of aphids. [ 7 ] Convergent behavior to that of Acropyga is displayed by the arboreal ant Tetraponera binghami . This species lives in hollow internodes of giant bamboos and new queens will also carry a seed mealybug during the mating flight. [ 1 ]
https://en.wikipedia.org/wiki/Trophobiosis
Desmarestia tropica , sometimes called tropical acidweed , is a species of seaweed in the family Desmarestiaceae . It is critically endangered , possibly extinct , and one of only fifteen protists evaluated by IUCN . [ 1 ] Endemic to the Galápagos Islands , [ 1 ] the specific epithet tropica alludes to its tropical habitat, rare for members of Desmarestiales . [ 1 ] The common name acidweed applies to members of the genus Desmarestia , [ 2 ] generally characterized by fronds containing vacuoles of concentrated sulfuric acid , [ 3 ] but it is unclear if this species also produces acid. [ 4 ] D. tropica was first collected by William Randolph Taylor on 19 January 1935, and twice more later that month. [ 5 ] He published a description of the species ten years later in May 1945. [ 5 ] The organism was last collected in 1972, and not seen since despite efforts to search the sighted locations and other possible habitats in the archipelago . [ 1 ] Because of its preference for deep, cold water in a tropical location, it was likely severely affected by El Niño , especially the 1982–83 El Niño event . [ 1 ] This event killed much of the macroalgae in the area, [ 1 ] and D. tropica likely declined from overgrazing by herbivores resulting from El Niño and overfishing of predator fish. [ 6 ] There is a small possibility that the species still lives in deeper water in a cryptic gametophyte stage, but if so it has yet to return to the more visible sporophyte stage. [ 1 ] The gametophyte has also never been observed. [ 4 ] The thallus of D. tropica can be about 40 centimetres (16 in) tall and is soft, bushy, and light brown in color. [ 5 ] The holdfast is tiny and not very differentiated. [ 5 ] The stipe is 3 millimetres (0.12 in) in diameter and short, fleshy, and firm. [ 5 ] It continues up as the rachis where it flattens out, only visible underneath the blade, and widens to 5–8 millimetres (0.20–0.31 in). [ 5 ] Opposite branching starts 1–2 centimetres (0.39–0.79 in) from the base with wide-angled branches every 1–3 centimetres (0.39–1.18 in), and continues from each branch for several degrees. [ 5 ] The blades have short, broad teeth which on the younger blades include short brown filaments. [ 5 ] These filaments are oppositely branched. [ 5 ] D. tropica differs from D. latifrons by being more bushy and having more of a gradation in branches from the apex to the base. [ 5 ] The branches are also more expansive in D. tropica than in D. latifrons or similar species. [ 5 ] D. tropica is in the section Herbacea of Desmarestia , but compared to other North American species it is less membranous. [ 5 ] Tropical acidweed has been found in only two locations: Post Office Bay off Floreana Island [ 5 ] and Caleta Tagus (Tagus Cove) off Isabela Island . [ 1 ] At the former site it was found at depths from 14–60 metres (46–197 ft). [ 5 ] It was once thought to be found off the mainland coast of Peru , but these specimens are considered instead to be D. firma . [ 4 ] [ 1 ] Despite its tropical range, it prefers the cold, deep water [ 1 ] of upwelling areas of the lower sublittoral zone . [ 4 ]
https://en.wikipedia.org/wiki/Tropical_acid_weed
In algebraic geometry , a tropical compactification is a compactification ( projective completion ) of a subvariety of an algebraic torus , introduced by Jenia Tevelev. [ 1 ] [ 2 ] Given an algebraic torus and a connected closed subvariety of that torus, a compactification of the subvariety is defined as its closure in a toric variety of the original torus. The concept of a tropical compactification arises when trying to make compactifications as "nice" as possible. For a torus T and a toric variety P of T, the compactification X̄ ⊂ P is tropical when the multiplication (torus action) map Φ : T × X̄ → P, (t, x) ↦ t · x, is faithfully flat and X̄ is proper.
https://en.wikipedia.org/wiki/Tropical_compactification
Tropical cyclone engineering, or hurricane engineering, is a specialist sub-discipline of civil engineering that encompasses planning, analysis, design, response, and recovery of civil engineering systems and infrastructure for hurricane hazards. Hurricane engineering is a relatively new and emerging discipline within the field of civil engineering. It is an integration of many recognized branches of engineering, such as structural engineering, wind engineering, coastal engineering, and forensic engineering, with other recognized sciences and planning functions such as climatology, oceanography, architecture, emergency management and preparedness, hazard mitigation, and hazard vulnerability analysis. Hurricane engineering aims to minimize risks to human safety, the natural and built environment, and business processes. As a result of the tremendous threats to life safety and the economic disruption caused by the 2004 and 2005 hurricane seasons, governmental organizations, such as the United States National Science Foundation, have recognized the need to better understand hurricane threats and further establish this discipline. In September 2006, the National Science Board released recommendations to the United States Congress calling for major new investments in hurricane science and engineering. Accredited university engineering programs, such as the Louisiana State University civil engineering department and the University of Notre Dame Department of Civil Engineering and Geological Sciences, are establishing programs to better understand these catastrophic storms and their interaction with the environment. The LSU Hurricane Center has begun to offer hurricane engineering courses focused on educating students about the unique threats posed by hurricanes. The past two decades have witnessed exponential growth in damage due to hurricanes, and the situation continues to deteriorate. The most vulnerable areas, the coastal counties along the Gulf and Atlantic seaboards, are experiencing greater population growth and development than anywhere else in the United States. If the trend of rapidly increasing losses caused by hurricanes is to be reversed, a whole new philosophy of understanding, planning, and preparedness is required. The hurricane engineering curriculum is the result of a multidisciplinary project aimed at giving engineering students a comprehensive understanding of the hazards associated with hurricanes. In the Northwest Pacific, where the term for strong tropical cyclones is typhoon, the very similar concept of typhoon engineering is being proposed.
https://en.wikipedia.org/wiki/Tropical_cyclone_engineering
Tropical deserts are located in regions between 15 and 30 degrees latitude. The environment is very extreme, and they have the highest average monthly temperatures on Earth. Rainfall is sporadic; in some years no precipitation falls at all. In addition to these extreme environmental and climate conditions, most tropical deserts are covered with sand and rocks, and are thus too flat and too bare of vegetation to block out the wind. Wind may erode and transport sand, rocks and other materials; these are known as eolian processes. Landforms caused by wind erosion vary greatly in characteristics and size. Representative landforms include depressions and pans, yardangs, inverted topography and ventifacts. No significant populations can survive in tropical deserts due to extreme aridity, heat and the paucity of vegetation; only specific flora and fauna with special behavioral and physical mechanisms are supported. Although tropical deserts are considered to be harsh and barren, they are in fact important sources of natural resources and play a significant role in economic development. Besides the equatorial deserts, there are many hot deserts situated in the tropical zone. Tropical deserts are located in both continental interiors and coastal areas between the Tropic of Cancer and Tropic of Capricorn. Representative deserts include the Sahara Desert in North Africa, the Australian Desert in Western and Southern Australia, the Arabian Desert and Syrian Desert in Western Asia, the Kalahari Desert in Southern Africa, the Sonoran Desert in the United States and Mexico, the Mojave Desert in the United States, the Thar Desert in India and Pakistan, the Dasht-e Margo and Registan Desert in Afghanistan, and the Dasht-e Kavir and Dasht-e Loot in Iran. A belt around the equator, from roughly latitude 3 degrees north to 3 degrees south, is called the Intertropical Convergence Zone. Tropical heat generates unstable air in this area, and air masses become extremely dry due to the loss of moisture during the process of tropical ascent. [ 1 ] Hadley cells are another significant determinant of tropical desert climate: they concentrate precipitation in the hot, humid, low-pressure equatorial zone, leaving the relatively cooler, higher-pressure desert belts with very little precipitation. [ 2 ] Tropical deserts have the highest average daily temperature on the planet, as both the energy input during the day and the loss of heat at night are large. This phenomenon causes an extremely large daily temperature range. Specifically, temperatures in a low-elevation inland desert can reach 40°C to 50°C during the day and drop to approximately 5°C at night; the daily range is around 30 to 40°C. [ 3 ] There are other reasons for significant changes in temperature in tropical deserts. For instance, a lack of water and vegetation on the ground can enhance the absorption of heat due to insolation. Subsiding air from dominant high-pressure areas in a cloud-free sky can also lead to large amounts of insolation, while a cloudless sky allows the heat gained during the day to escape rapidly at night. [ 3 ] Precipitation is very irregular in tropical deserts. The average annual precipitation in low-latitude deserts is less than 250 mm. Relative humidity is very low – only 10% to 30% in interior locations – and even the dewpoints are typically very low, often well below the freezing mark. Some deserts, located far from the ocean, may go without rainfall for years at a time.
High-pressure cells and high temperatures can also increase the level of aridity. [ 3 ] Wind greatly contributes to aridity in tropical deserts. If wind speed exceeds 80 km/h, it can generate dust storms and sandstorms and erode the rocky surface. [ 4 ] Wind therefore plays an important role in shaping various landforms, a phenomenon known as the eolian process. There are two types of eolian process: deflation and abrasion. First, deflation may cause a slight lowering of the ground surface, leading to deflation hollows, plains, basins, blowouts, wind-eroded plains and parabolic dunes. [ 5 ] Second, the eolian process leads to abrasion, which forms distinctive landforms with a significant undercut. [ 5 ] Various landforms are found in tropical deserts as a result of the different kinds of eolian process. The major landforms are dunes, depressions and pans, yardangs, and inverted topography. There are various kinds of dune in tropical deserts. Representative dunes include dome dunes, transverse dunes, barchans, star dunes, shadow dunes, linear dunes and longitudinal dunes. [ 6 ] A desert depression is caused by polygenetic factors such as wind erosion, broad shallow warping and block faulting, stream erosion, karst activity, salt weathering, mass wasting, and zoogenic processes; representative examples are the large enclosed basins in Africa, such as Farafra, Baharia, Dakhla, Qattara, Siwa and Kargha. [ 7 ] Pans are widespread in southern and western Australia, southern Africa and the high plains of the United States deserts. The factors responsible for pans include a vegetation-free surface and low humidity, a low water table and poorly consolidated sediment, and a large amount of fine-grained sandstone and shale. Feedback mechanisms also play a significant role in enlarging the pan: salts are left behind as water accumulates in depressions, which inhibits later weathering-related sedimentation and the growth of vegetation. This affects both erosional and depositional processes in pans. [ 7 ] Yardangs can be observed in orbital and aerial images of Mars and Earth. Yardangs usually develop in arid regions, predominantly due to wind processes. The classic forms are streamlined and elongated ridges; they may also appear with flat tops or with stubby and short profiles. Their length-to-width ratios range from 3:1 to 10:1; this is determined by the wind direction, the duration of exposure to wind, and the rock material. [ 7 ] Inverted topography forms in areas previously at a low elevation, such as deltaic distributary systems and river systems. They are left at higher relief due to their relative resistance to wind erosion. Inverted topography is frequently observed in yardang fields, such as the raised channels in Egypt, Oman and China and on Mars. [ 7 ] The environment in tropical deserts is harsh as well as barren; only certain plants and animals with special behavioral and physical mechanisms can live there. For flora, general adaptations include transforming leaves into spines for protection. With the reduction in leaf area, the stem develops as the major photosynthetic structure, which is also responsible for storing water. A common example is the cactus, which has a specific means of storing and conserving water, along with few or no leaves to minimize transpiration. [ 8 ] In addition to the protection provided by spines, chemical defences are also very common.
Desert plants grow slowly as less photosynthesis takes place, allowing them to invest more in defence. [ 8 ] Another adaptation is the development of extremely long roots that allow the flora to acquire moisture at the water table. Furthermore, some desert plants exhibit behavioural adaptations; for instance, some flora live for only one season or one year, and desert perennials can survive by staying dormant during extremely dry periods, becoming active again when the environment receives more moisture. [ 9 ] For fauna, the simplest strategy is to stay away from the desert surface as much as possible to avoid the heat and aridity. As a result of the scarcity of water, most animals in these regions get their water from eating succulent plants and seeds, or from the tissues and blood of their prey. [ 8 ] They also have specific ways to store water and to prevent water from leaving their bodies. Some animals live in underground burrows, which are not too hot and are relatively humid; they stay in their burrows during the heat of the day and only come out to seek food at night. Examples of these animals include kangaroo rats and lizards. [ 8 ] Other animals, such as wolf spiders and scorpions, have a thick outer covering that minimizes moisture loss. Animals in tropical deserts have also been found to concentrate their urine in their kidneys to excrete less water. [ 8 ] Representative desert plants include the barrel cactus, brittlebush and chain fruit cholla. It is also common to see crimson hedgehog cactus, common saltbush, desert ironwood, fairy duster and Joshua tree. In some deserts, Mojave aster, ocotillo, organ pipe cactus and pancake prickly pear cactus can be found. Furthermore, paloverde, saguaro cactus, soaptree yucca, cholla guera, triangle-leaf bursage, tumbleweed and velvet mesquite can also be found in these regions. [ 10 ] Representative fauna in tropical deserts include the armadillo lizard, banded Gila monster, bobcat, cactus wren and cactus ferruginous pygmy owl. Other desert animals include the coyote, desert bighorn sheep, desert kangaroo rat, desert tortoise, javelina, Mojave rattlesnake and cougar. Different tropical deserts host different species; for example, the Sonoran Desert toad and the Sonoran pronghorn antelope are typical of the Sonoran Desert. [ 10 ] Rich and sometimes unique mineral resources are located in tropical deserts. Representative minerals include borax, sodium nitrate, sodium, iodine, calcium, bromine, and strontium compounds. These minerals are created when the water in desert lakes evaporates. [ 11 ] Borax is a natural cleaner and freshener, also known as a detergent booster. Boric acid is derived from borax and can be used to manufacture agricultural chemicals such as herbicides and insecticides. It is also used widely in fire retardants, glass, ceramics, water softeners, pharmaceuticals, paint, enamel, cosmetics and coated paper. Billions of dollars' worth of borax has been mined in the northern Mojave Desert since 1881. [ 11 ] Borax is also a key ingredient for slime-making, a trend that was popular around 2016–2017. Sodium nitrate forms through the evaporation of water in desert areas. The richest cache of sodium nitrate is located in South America; approximately 3 million metric tons were mined during World War I. It was one of the earliest food preservatives, and is still used today to cure fish and meat to produce bacon, ham, sausage and deli meats.
It is also used in the manufacture of pharmaceuticals, fertilizers, dyes, explosives, flares and enamels. [ 11 ] Natural gas and oil are complex hydrocarbons that formed millions of years ago from the decomposition of animals and plants. They are the world's primary energy source and exist in viscous, solid, liquid or gaseous forms. The five largest oil fields are in Saudi Arabia, Iraq and Kuwait. The largest petroleum-producing region in the world is the Arabian Desert. Most major kinds of mineral deposits formed by groundwater are located in deserts. For example, some valuable metallic minerals, such as gold, silver, iron, zinc, and uranium, are found in the Western Desert in Australia. This is due to special geological processes, while climate factors in the desert can preserve and enhance the mineral deposits. [ 11 ] Tropical deserts contain various semi-precious and precious gemstones. Some common semi-precious gemstones include chalcedony, opal, quartz, turquoise, jade, amethyst, petrified wood, and topaz. Precious gemstones such as diamonds are used in jewellery and decoration. Although some gemstones can also be found in temperate zones throughout the world, turquoise can only be found in tropical deserts. Turquoise is a very valuable and popular opaque gemstone, with a beautiful blue-green or sky-blue colour and exquisite veins. [ 11 ]
https://en.wikipedia.org/wiki/Tropical_desert
Tropical forests are forested ecoregions with tropical climates – that is, land areas approximately bounded by the tropics of Cancer and Capricorn, but possibly affected by other factors such as prevailing winds. Some tropical forest types are difficult to categorize. While forests in temperate areas are readily categorized on the basis of tree canopy density, such schemes do not work well in tropical forests. [ 1 ] There is no single scheme that defines what a forest is, in tropical regions or elsewhere. [ 1 ] [ 2 ] Because of these difficulties, information on the extent of tropical forests varies between sources. However, tropical forests are extensive, making up just under half the world's forests. [ 3 ] The tropical domain has the largest proportion of the world's forests (45 percent), followed by the boreal, temperate and subtropical domains. [ 4 ] More than 3.6 million hectares of virgin tropical forest were lost in 2018. [ 5 ] [ 6 ] The first tropical rainforests appeared during the Devonian, characterized mainly by Pseudosporochnalean and Archaeopteridalean plants. [ 7 ] Other canopy forests expanded north and south of the equator during the Paleogene epoch, around 40 million years ago, as a result of the emergence of drier, cooler climates. The tropical forest was originally identified as a specific type of biome in 1949. [ 8 ] Tropical forests are often thought of as evergreen rainforests [ 2 ] and moist forests, but these account for only a portion of them, depending on how they are defined. The remaining tropical forests are a diversity of many different forest types, including Eucalyptus open forest, tropical coniferous forests, savanna woodland (e.g. Sahelian forest), and mountain forests [ 9 ] (the higher elevations of which are cloud forests). Over even relatively short distances the boundaries between these biomes may be unclear, with ecotones between the main types. The nature of tropical forests in any given area is affected by several factors. The Global 200 scheme, promoted by the World Wildlife Fund, classifies three main tropical forest habitat types (biomes), grouping together tropical and sub-tropical areas. A number of tropical forests have been designated High-Biodiversity Wilderness Areas, but they remain subject to a wide range of disturbances, including more localized pressures such as habitat loss and degradation and anthropogenic climate change. Studies have also shown that ongoing climate change is increasing the frequency and intensity of some climate extremes (e.g. droughts, heatwaves and hurricanes) which, in combination with other local human disturbances, are driving unprecedented negative ecological consequences for tropical forests around the world. [ 14 ] All tropical forests have experienced at least some level of disturbance. [ 15 ] Current deforestation in the biodiversity hotspots of northern South America, sub-Saharan Africa, South-East Asia and the Pacific can be attributed to the export of commodities such as beef, soy, coffee, cacao, palm oil, and timber; there is a requirement for "strong transnational efforts ... by improving supply chain transparency [and] public–private engagement". [ 16 ] A study in Borneo describes how, between 1973 and 2018, the old-growth forest had been reduced from 76% to 50% of the island, mostly due to fire and agricultural expansion.
[ 17 ] A widely-held view is that placing a value on the ecosystem services these forests provide may bring about more sustainable policies. However, clear monitoring and evaluation mechanisms for environmental, social and economic outcomes are needed. For example, a study in Vietnam indicated that poor and inconsistent data combined with a lack of human resources and political interest (thus lack of financial support) are hampering efforts to improve forest land allocation and a Payments for Forest Environmental Services scheme. [ 18 ]
https://en.wikipedia.org/wiki/Tropical_forest
Salt ponds are a natural feature of both temperate and tropical coastlines. These ponds form a vital buffer zone between terrestrial and marine ecosystems. Contaminants such as sediment, nitrates and phosphates are filtered out by salt ponds before they can reach the ocean. The depth, salinity and overall chemistry of these dynamic salt ponds fluctuate depending on temperature, rainfall, and anthropogenic influences such as nutrient runoff. The flora and fauna of tropical salt ponds differ markedly from those of temperate ponds. Mangrove trees are the dominant vegetation of tropical salt pond ecosystems, which also serve as vital feeding and breeding grounds for shore birds. Tropical salt ponds form as bays are gradually closed off with berms of rubble from the reef. Mangroves grow atop the berms, which gradually close off the area to create a salt pond. [ 1 ] These typically form at the base of watersheds with steep slopes, as sediments transported during storm events begin to fill in and cover up the rubble berm. Mangroves may grow over the berm, also contributing to the isolation of the salt pond. [ 1 ] Typically, the ponds communicate with the open sea through ground seepage. Evaporation and precipitation cycles in salt ponds create variable environments with wide ranges of salinity and depth. [ 1 ] Because of these fluctuations, a salt pond may be classified as hyposaline (3–20 ppt), mesosaline (20–50 ppt), or hypersaline (greater than 50 ppt). [ 1 ] Another important aspect of salt ponds is their permanence. [ 2 ] Salt ponds can eventually become filled in over time and transition into an extension of the land. [ 3 ] Some are intermittent ponds due to predictable dry and wet seasons, while others are episodic (if the region has highly unpredictable weather). [ 4 ] Organisms typically found in and around tropical salt ponds include cyanobacteria, marine invertebrates, birds, algae and mangrove trees; a typical Caribbean salt pond is the permanent or part-time home of many such organisms. [ 1 ] [ 3 ] There are 110 species of mangrove found worldwide, all with special adaptations that allow them to inhabit salt ponds. Mangroves are often found near or around salt ponds because of their ability to exist in an ecosystem with high salinity, low dissolved oxygen levels, brackish water, and extreme temperatures. Mangroves' unique prop roots function as a barrier to the salt water, limiting water loss and acting as a snorkel for oxygen and nutrients. Mangrove seeds have also evolved to be buoyant and to germinate while still attached to the parent, increasing the chance of survival in difficult environments. The presence of mangroves augments and helps maintain many of the benefits provided by salt ponds. [ 3 ] [ 5 ] Caribbean salt ponds commonly host three types of mangrove. Salt ponds provide a number of important ecosystem services. Salt ponds act as natural sediment traps that limit the amount of sedimentation and pollutants that would otherwise end up in the ocean, potentially harming other ecosystems. Salt ponds are home to dense benthic mats of bacteria which also trap nutrients such as nitrogen that would otherwise greatly contribute to detrimental marine eutrophication. [ 1 ] Coral reefs are particularly vulnerable to sedimentation, siltation, and eutrophication processes.
[ 6 ] Salt ponds and their mangrove systems act as a buffer from storm surges associated with hurricanes and greatly dissipate wave energy that could cause erosion, including even large, rare waves such as tsunamis. [ 7 ] In addition to these ecosystem services, salt ponds also produce a variety of useful products. Artemia , one of the primary food organisms for aquaculture systems, are cultured in salt ponds. Halophilic green algae can also be cultured in salt ponds to produce glycerol, dried protein that can be fed to livestock, and β–carotene used in dietary supplements. Spirulina is a salt-loving cyanobacterium with a protein content even higher than meat (60%), and it can be cultured in salt ponds. Other halophilic bacteria can be used to produce components used in highly technological processes. Photosynthetic pigment found in Halobacterium halobium is produced commercially and used for optical data processing, non-linear optics and as light sensors. Halophilic bacteria could also be used to produce polyhydroxyalkanoates (PHA) which are biodegradable, water resistant thermoplastics. [ 7 ] Both anthropogenic and natural threats affect tropical salt ponds. Natural threats include hurricanes and other large storms, salinity changes, runoff, sedimentation, and grazing and predation. Hurricanes and other large storms can damage salt pond organisms as well as cause seawater overwash, leading to potentially detrimental salinity changes and physical damage. Salinity may also be reduced by precipitation, which can alter community composition by restricting the number and type of species adapted for these conditions. Furthermore, increased evapotranspiration can increase salinity and diminish species diversity. Local conditions, such as annual rainfall and slope aspect, can determine runoff amounts. Influxes of runoff can cause sediment deposition in salt ponds, eventually causing infill of the pond to occur. Natural grazing and predation around salt ponds can trample vegetation, increase local erosion, and introduce nutrients to the ecosystem. [ 1 ] Anthropogenic threats to salt ponds include development and altered hydrology, pollution, erosion, and livestock and agricultural operations. Salt ponds may be filled, dredged, or removed for marinas, harbors, buildings, or other uses. Construction in upland areas also affects salt ponds by causing increased erosion and sedimentation. [ 8 ] Pollution is also a major threat to salt ponds. These areas are frequent dumping sites for trash, wastewater, and solid waste. Livestock grazing can not only increase erosion through soil compaction and deforestation, but also introduces fertilizers. Agriculture can also introduce fertilizers and pesticides, causing algal blooms and reduced water quality. Anthropogenic activities, such as fossil fuel burning, can cause increased global temperatures and could lead to the drying of salt ponds. As many of the biological functions of salt ponds are unknown, it would be wise to mitigate potential human impact on these vulnerable ecosystems. [ 1 ]
https://en.wikipedia.org/wiki/Tropical_salt_pond_ecosystem
Tropical vegetation is any vegetation in tropical latitudes . Plant life that occurs in climates that are warm year-round is in general more biologically diverse than in other latitudes. Some tropical areas may receive abundant rain the whole year round, but others have long dry seasons which last several months and may vary in length and intensity with geographic location. These seasonal droughts have a great impact on the vegetation, such as in the Madagascar spiny forests . [ 1 ] Rainforest vegetation is categorized by five layers. The top layer being the emergents, or the upper tree layer. Here you will find the largest and widest trees in all the forest, commonly 165 feet (fifty meters) and higher. These trees tend to have very large canopies so they can be fully exposed to sunlight. A layer below that is the canopy, or middle tree layer, averaging 98 to 130 feet (30 to 40 meters) in height. Here you will find more compact trees and vegetation. These trees tend to be more skinny as they are trying to gain any sunlight they can. The third layer is the lower tree area. These trees tend to be around five to ten meters (16 to 33 feet) high and tightly compacted. The trees found in the third layer include young trees trying to grow into the larger canopy trees, and "palmoids" or "Corner Model Trees". The fourth layer is the shrub layer beneath the tree canopy. This layer is mainly populated by sapling trees, shrubs, and seedlings. The fifth and final layer is the herb layer which is the forest floor. The forest floor is mainly bare except for various plants, mosses , Lycopods and ferns . The forest floor is much more dense than above because of little sunlight and air movement. [ 2 ] Plant species native to the tropics found in tropical ecosystems are known as tropical plants. Some examples of tropical ecosystems are the Guinean Forests of West Africa , the Madagascar dry deciduous forests and the broadleaf forests of the Thai highlands and the El Yunque National Forest in Puerto Rico . Dr. Ghillean Prance has estimated that, as of 1979, there are 155,000 known species of tropical plants, with 90,000 species in the Neotropics , 35,000 in southern Asia and the East Indies and 30,000 in Africa , [ 3 ] about half of those in Madagascar . There are also 50,000 Neotropical Fungi and about 20,000 fungal species each from Asia and Africa. [ 4 ] The term "tropical vegetation" is frequently used in the sense of lush and luxuriant, but not all the vegetation of the areas of the Earth in tropical climates can be defined as such. Despite lush vegetation, often the soils of tropical forests are low in nutrients making them quite vulnerable to slash-and-burn deforestation techniques, which are sometimes an element of shifting cultivation agricultural systems. [ 5 ] Tropical vegetation may include the following habitat types: Tropical rainforest ecosystems include significant areas of biodiversity , often coupled with high species endemism . [ 6 ] Rainforests are home to half of all the living animal and plant species on the planet and roughly two-thirds of all flowering plants can be found in rainforests. [ 7 ] [ 8 ] The most representative are the Borneo rainforest, one of the oldest rainforests in the world, the Brazilian and Venezuelan Amazon Rainforest , as well as the eastern Costa Paulon rainforests. Seasonal tropical forests generally receive high total rainfall, averaging more than 1000 mm per year, but with a distinct dry season . 
[ 9 ] They include the Congolian forests, a broad belt of highland tropical moist broadleaf forest which extends across the basin of the Congo River; the Central American tropical forests of Panama and Nicaragua; and the seasonal forests that predominate across much of the Indian subcontinent, Indochina, and northern Australia (Queensland). Tropical dry broadleaf forests are territories with a forest cover that is not very dense and often has an unkempt, irregular appearance, especially in the dry season. [ 10 ] This type of forest often includes bamboo and teak as the dominant large tree species, such as in the Phi Pan Nam Range, part of the Central Indochina dry forests. [ 11 ] They are affected by often long seasonal dry periods and, though less biologically diverse than rainforests, tropical dry forests are home to a wide variety of wildlife. Tropical grasslands, savannas, and shrublands [ 12 ] are spread over a large area of the tropics, with vegetation made up mainly of low shrubs and grasses, often including sclerophyll species. [ 12 ] Some of the most representative are the Western Zambezian grasslands in Zambia and Angola, as well as the Einasleigh upland savanna in Australia and the Everglades in the United States. Tree species such as Acacia and baobab may be present in these ecosystems depending on the region.
https://en.wikipedia.org/wiki/Tropical_vegetation
A tropical year or solar year (or tropical period ) is the time that the Sun takes to return to the same position in the sky – as viewed from the Earth or another celestial body of the Solar System – thus completing a full cycle of astronomical seasons . For example, it is the time from vernal equinox to the next vernal equinox, or from summer solstice to the next summer solstice. It is the type of year used by tropical solar calendars . The tropical year is one type of astronomical year and particular orbital period . Another type is the sidereal year (or sidereal orbital period), which is the time it takes Earth to complete one full orbit around the Sun as measured with respect to the fixed stars , resulting in a duration of 20 minutes longer than the tropical year, because of the precession of the equinoxes . Since antiquity, astronomers have progressively refined the definition of the tropical year. The entry for "year, tropical" in the Astronomical Almanac Online Glossary states: [ 1 ] the period of time for the ecliptic longitude of the Sun to increase 360 degrees . Since the Sun's ecliptic longitude is measured with respect to the equinox, the tropical year comprises a complete cycle of seasons, and its length is approximated in the long term by the civil (Gregorian) calendar. The mean tropical year is approximately 365 days, 5 hours, 48 minutes, 45 seconds. An equivalent, more descriptive, definition is "The natural basis for computing passing tropical years is the mean longitude of the Sun reckoned from the precessionally moving equinox (the dynamical equinox or equinox of date). Whenever the longitude reaches a multiple of 360 degrees the mean Sun crosses the vernal equinox and a new tropical year begins". [ 2 ] The mean tropical year in 2000 was 365.24219 ephemeris days , each ephemeris day lasting 86,400 SI seconds. [ 3 ] This is 365.24217 mean solar days . [ 4 ] For this reason, the calendar year is an approximation of the solar year: the Gregorian calendar (with its rules for catch-up leap days ) is designed so as to resynchronize the calendar year with the solar year at regular intervals. The word "tropical" comes from the Greek tropikos meaning "turn". [ 5 ] Thus, the tropics of Cancer and Capricorn mark the extreme north and south latitudes where the Sun can appear directly overhead, and where it appears to "turn" in its annual seasonal motion. Because of this connection between the tropics and the seasonal cycle of the apparent position of the Sun, the word "tropical" was lent to the period of the seasonal cycle . The early Chinese, Hindus, Greeks, and others made approximate measures of the tropical year. In the 2nd century BC Hipparchus measured the time required for the Sun to travel from an equinox to the same equinox again. He reckoned the length of the year to be 1/300 of a day less than 365.25 days (365 days, 5 hours, 55 minutes, 12 seconds, or 365.24667 days). Hipparchus used this method because he was better able to detect the time of the equinoxes, compared to that of the solstices. [ 6 ] Hipparchus also discovered that the equinoctial points moved along the ecliptic (plane of the Earth's orbit, or what Hipparchus would have thought of as the plane of the Sun's orbit about the Earth) in a direction opposite that of the movement of the Sun, a phenomenon that came to be named "precession of the equinoxes". He reckoned the value as 1° per century, a value that was not improved upon until about 1000 years later, by Islamic astronomers . 
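A rough consistency check on the roughly 20-minute difference quoted above, using the modern general precession rate of about 50.3″ per year (a value assumed here, not stated in this article): because the equinox precesses westward by about 50.3″ per year, the Sun meets it about 50.3″ of ecliptic longitude earlier than it would complete a full sidereal revolution. The time saved is about (50.3″ / 1 296 000″) × 365.256 days ≈ 0.0142 days ≈ 20.4 minutes, matching the difference between the sidereal and tropical years.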
Since this discovery a distinction has been made between the tropical year and the sidereal year. [ 6 ] During the Middle Ages and Renaissance a number of progressively better tables were published that allowed computation of the positions of the Sun, Moon and planets relative to the fixed stars. An important application of these tables was the reform of the calendar . The Alfonsine Tables , published in 1252, were based on the theories of Ptolemy and were revised and updated after the original publication. The length of the tropical year was given as 365 solar days 5 hours 49 minutes 16 seconds (≈ 365.24255 days). This length was used in devising the Gregorian calendar of 1582. [ 7 ] In Uzbekistan , Ulugh Beg 's Zij-i Sultani was published in 1437 and gave an estimate of 365 solar days 5 hours 49 minutes 15 seconds (365.242535 days). [ 8 ] In the 16th century Copernicus put forward a heliocentric cosmology . Erasmus Reinhold used Copernicus' theory to compute the Prutenic Tables in 1551, and gave a tropical year length of 365 solar days, 5 hours, 55 minutes, 58 seconds (365.24720 days), based on the length of a sidereal year and the presumed rate of precession. This was actually less accurate than the earlier value of the Alfonsine Tables. Major advances in the 17th century were made by Johannes Kepler and Isaac Newton . In 1609 and 1619 Kepler published his three laws of planetary motion. [ 9 ] In 1627, Kepler used the observations of Tycho Brahe and Waltherus to produce the most accurate tables up to that time, the Rudolphine Tables . He evaluated the mean tropical year as 365 solar days, 5 hours, 48 minutes, 45 seconds (365.24219 days). [ 7 ] Newton's three laws of dynamics and theory of gravity were published in his Philosophiæ Naturalis Principia Mathematica in 1687. Newton's theoretical and mathematical advances influenced tables by Edmond Halley published in 1693 and 1749 [ 10 ] and provided the underpinnings of all solar system models until Albert Einstein 's theory of General relativity in the 20th century. From the time of Hipparchus and Ptolemy, the year was based on two equinoxes (or two solstices) a number of years apart, to average out both observational errors and periodic variations (caused by the gravitational pull of the planets, and the small effect of nutation on the equinox). These effects did not begin to be understood until Newton's time. To model short-term variations of the time between equinoxes (and prevent them from confounding efforts to measure long-term variations) requires precise observations and an elaborate theory of the apparent motion of the Sun. The necessary theories and mathematical tools came together in the 18th century due to the work of Pierre-Simon de Laplace , Joseph Louis Lagrange , and other specialists in celestial mechanics . They were able to compute periodic variations and separate them from the gradual mean motion. They could express the mean longitude of the Sun in a polynomial such as: where T is the time in Julian centuries. The derivative of this formula is an expression of the mean angular velocity, and the inverse of this gives an expression for the length of the tropical year as a linear function of T . Two equations are given in the table. Both equations estimate that the tropical year gets roughly a half second shorter each century. Newcomb's tables were sufficiently accurate that they were used by the joint American-British Astronomical Almanac for the Sun, Mercury , Venus , and Mars through 1983. 
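The polynomial referred to above is not reproduced in this text. In the form commonly quoted for Newcomb's theory of the Sun (shown here for illustration; the coefficients should be checked against the original tables), the Sun's mean longitude is L = 279° 41′ 48″.04 + 129 602 768″.13 T + 1″.089 T², with T counted in Julian centuries from Newcomb's 1900 epoch. Differentiating gives a mean motion of roughly 129 602 768″ per Julian century; dividing a full circle of 1 296 000″ by this rate and multiplying by 36 525 days per century gives (1 296 000 / 129 602 768) × 36 525 ≈ 365.2422 days for the mean tropical year around 1900.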
[ 12 ] The length of the mean tropical year is derived from a model of the Solar System, so any advance that improves the solar system model potentially improves the accuracy of the mean tropical year. Many new observing instruments became available, including The complexity of the model used for the Solar System must be limited to the available computation facilities. In the 1920s punched card equipment came into use by L. J. Comrie in Britain. For the American Ephemeris an electromagnetic computer, the IBM Selective Sequence Electronic Calculator was used since 1948. When modern computers became available, it was possible to compute ephemerides using numerical integration rather than general theories; numerical integration came into use in 1984 for the joint US-UK almanacs. [ 16 ] Albert Einstein 's General Theory of Relativity provided a more accurate theory, but the accuracy of theories and observations did not require the refinement provided by this theory (except for the advance of the perihelion of Mercury) until 1984. Time scales incorporated general relativity beginning in the 1970s. [ 17 ] A key development in understanding the tropical year over long periods of time is the discovery that the rate of rotation of the earth, or equivalently, the length of the mean solar day , is not constant. William Ferrel in 1864 and Charles-Eugène Delaunay in 1865 predicted that the rotation of the Earth is being retarded by tides. This could be verified by observation only in the 1920s with the very accurate Shortt-Synchronome clock and later in the 1930s when quartz clocks began to replace pendulum clocks as time standards. [ 18 ] Apparent solar time is the time indicated by a sundial , and is determined by the apparent motion of the Sun caused by the rotation of the Earth around its axis as well as the revolution of the Earth around the Sun. Mean solar time is corrected for the periodic variations in the apparent velocity of the Sun as the Earth revolves in its orbit. The most important such time scale is Universal Time , which is the mean solar time at 0 degrees longitude (the IERS Reference Meridian ). Civil time is based on UT (actually UTC ), and civil calendars count mean solar days. However the rotation of the Earth itself is irregular and is slowing down, with respect to more stable time indicators: specifically, the motion of planets, and atomic clocks. Ephemeris time (ET) is the independent variable in the equations of motion of the Solar System, in particular, the equations from Newcomb's work, and this ET was in use from 1960 to 1984. [ 19 ] These ephemerides were based on observations made in solar time over a period of several centuries, and as a consequence represent the mean solar second over that period. The SI second , defined in atomic time, was intended to agree with the ephemeris second based on Newcomb's work, which in turn makes it agree with the mean solar second of the mid-19th century. [ 20 ] ET as counted by atomic clocks was given a new name, Terrestrial Time (TT), and for most purposes ET = TT = International Atomic Time + 32.184 SI seconds. Since the era of the observations, the rotation of the Earth has slowed down and the mean solar second has grown somewhat longer than the SI second. As a result, the time scales of TT and UT1 build up a growing difference: the amount that TT is ahead of UT1 is known as Δ T , or Delta T . [ 21 ] As of 5 July 2022, [update] TT is ahead of UT1 by 69.28 seconds. 
[ 22 ] [ 23 ] [ 24 ] As a consequence, the tropical year following the seasons on Earth as counted in solar days of UT is increasingly out of sync with expressions for equinoxes in ephemerides in TT. As explained below, long-term estimates of the length of the tropical year were used in connection with the reform of the Julian calendar , which resulted in the Gregorian calendar. Participants in that reform were unaware of the non-uniform rotation of the Earth, but now this can be taken into account to some degree. The table below gives Morrison and Stephenson's estimates and standard errors ( σ ) for ΔT at dates significant in the process of developing the Gregorian calendar. [ 25 ] The low-precision extrapolations are computed with an expression provided by Morrison and Stephenson: [ 25 ] where t is measured in Julian centuries from 1820. The extrapolation is provided only to show Δ T is not negligible when evaluating the calendar for long periods; [ 27 ] Borkowski cautions that "many researchers have attempted to fit a parabola to the measured Δ T values in order to determine the magnitude of the deceleration of the Earth's rotation. The results, when taken together, are rather discouraging." [ 27 ] One definition of the tropical year would be the time required for the Sun, beginning at a chosen ecliptic longitude, to make one complete cycle of the seasons and return to the same ecliptic longitude. Before considering an example, the equinox must be examined. There are two important planes in solar system calculations: the plane of the ecliptic (the Earth's orbit around the Sun), and the plane of the celestial equator (the Earth's equator projected into space). These two planes intersect in a line. One direction points to the so-called vernal, northward, or March equinox which is given the symbol ♈︎ (the symbol looks like the horns of a ram because it used to be toward the constellation Aries ). The opposite direction is given the symbol ♎︎ (because it used to be toward Libra ). Because of the precession of the equinoxes and nutation these directions change, compared to the direction of distant stars and galaxies, whose directions have no measurable motion due to their great distance (see International Celestial Reference Frame ). The ecliptic longitude of the Sun is the angle between ♈︎ and the Sun, measured eastward along the ecliptic. This creates a relative and not an absolute measurement, because as the Sun is moving, the direction the angle is measured from is also moving. It is convenient to have a fixed (with respect to distant stars) direction to measure from; the direction of ♈︎ at noon January 1, 2000, fills this role and is given the symbol ♈︎ 0 . There was an equinox on March 20, 2009, 11:44:43.6 TT. The 2010 March equinox was March 20, 17:33:18.1 TT, which gives an interval - and a duration of the tropical year - of 365 days 5 hours 48 minutes 34.5 seconds. [ 28 ] While the Sun moves, ♈︎ moves in the opposite direction. When the Sun and ♈︎ met at the 2010 March equinox, the Sun had moved east 359°59'09" while ♈︎ had moved west 51" for a total of 360° (all with respect to ♈︎ 0 [ 29 ] ). This is why the tropical year is 20 min. shorter than the sidereal year. When tropical year measurements from several successive years are compared, variations are found which are due to the perturbations by the Moon and planets acting on the Earth, and to nutation. 
Meeus and Savoie provided the following examples of intervals between March (northward) equinoxes: [ 7 ] Until the beginning of the 19th century, the length of the tropical year was found by comparing equinox dates that were separated by many years; this approach yielded the mean tropical year. [ 11 ] If a different starting longitude for the Sun is chosen than 0° ( i.e. ♈︎), then the duration for the Sun to return to the same longitude will be different. This is a second-order effect of the circumstance that the speed of the Earth (and conversely the apparent speed of the Sun) varies in its elliptical orbit: faster in the perihelion , slower in the aphelion . The equinox moves with respect to the perihelion (and both move with respect to the fixed sidereal frame). From one equinox passage to the next, or from one solstice passage to the next, the Sun completes not quite a full elliptic orbit. The time saved depends on where it starts in the orbit. If the starting point is close to the perihelion (such as the December solstice), then the speed is higher than average, and the apparent Sun saves little time for not having to cover a full circle: the "tropical year" is comparatively long. If the starting point is near aphelion, then the speed is lower and the time saved for not having to run the same small arc that the equinox has precessed is longer: that tropical year is comparatively short. The "mean tropical year" is based on the mean sun , and is not exactly equal to any of the times taken to go from an equinox to the next or from a solstice to the next. The following values of time intervals between equinoxes and solstices were provided by Meeus and Savoie for the years 0 and 2000. [ 11 ] These are smoothed values which take account of the Earth's orbit being elliptical, using well-known procedures (including solving Kepler's equation ). They do not take into account periodic variations due to factors such as the gravitational force of the orbiting Moon and gravitational forces from the other planets. Such perturbations are minor compared to the positional difference resulting from the orbit being elliptical rather than circular. [ 30 ] The mean tropical year on January 1, 2000, was 365.242 189 7 or 365 ephemeris days , 5 hours, 48 minutes, 45.19 seconds. This changes slowly; an expression suitable for calculating the length of a tropical year in ephemeris days, between 8000 BC and 12000 AD is where T is in Julian centuries of 36,525 days of 86,400 SI seconds measured from noon January 1, 2000, TT. [ 31 ] Modern astronomers define the tropical year as time for the Sun's mean longitude to increase by 360°. The process for finding an expression for the length of the tropical year is to first find an expression for the Sun's mean longitude (with respect to ♈︎), such as Newcomb's expression given above, or Laskar's expression. [ 32 ] When viewed over a one-year period, the mean longitude is very nearly a linear function of Terrestrial Time. To find the length of the tropical year, the mean longitude is differentiated, to give the angular speed of the Sun as a function of Terrestrial Time, and this angular speed is used to compute how long it would take for the Sun to move 360°. [ 11 ] [ 33 ] The above formulae give the length of the tropical year in ephemeris days (equal to 86,400 SI seconds), not solar days . It is the number of solar days in a tropical year that is important for keeping the calendar in synch with the seasons (see below). 
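The expression referred to above is likewise not reproduced in this text. A commonly quoted form (rounded, and given only for illustration) is 365.242 1897 − 6.15 × 10⁻⁶ T − 7.3 × 10⁻¹⁰ T² + 2.6 × 10⁻¹⁰ T³ ephemeris days, with T in Julian centuries from J2000.0 as defined above. The linear term corresponds to a decrease of about 6.15 × 10⁻⁶ day, i.e. roughly 0.53 seconds, per century, consistent with the rate quoted in the following paragraph.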
The Gregorian calendar , as used for civil and scientific purposes, is an international standard. It is a solar calendar that is designed to maintain synchrony with the mean tropical year. [ 34 ] It has a cycle of 400 years (146,097 days). Each cycle repeats the months, dates, and weekdays. The average year length is 146,097/400 = 365 + 97 ⁄ 400 = 365.2425 days per year, a close approximation to the mean tropical year of 365.2422 days. [ 35 ] The Gregorian calendar is a reformed version of the Julian calendar organized by the Catholic Church and enacted in 1582. By the time of the reform, the date of the vernal equinox had shifted about 10 days, from about March 21 at the time of the First Council of Nicaea in 325, to about March 11. The motivation for the change was the correct observance of Easter. The rules used to compute the date of Easter used a conventional date for the vernal equinox (March 21), and it was considered important to keep March 21 close to the actual equinox. [ 36 ] If society in the future still attaches importance to the synchronization between the civil calendar and the seasons, another reform of the calendar will eventually be necessary. According to Blackburn and Holford-Strevens (who used Newcomb's value for the tropical year) if the tropical year remained at its 1900 value of 365.242 198 781 25 days the Gregorian calendar would be 3 days, 17 min, 33 s behind the Sun after 10,000 years. Aggravating this error, the length of the tropical year (measured in Terrestrial Time) is decreasing at a rate of approximately 0.53 s per century and the mean solar day is getting longer at a rate of about 1.5 ms per century. These effects will cause the calendar to be nearly a day behind in 3200. The number of solar days in a "tropical millennium" is decreasing by about 0.06 per millennium (neglecting the oscillatory changes in the real length of the tropical year). [ 37 ] This means there should be fewer and fewer leap days as time goes on. One possible reform that has been proposed involves omitting the leap day in 3200, keeping 3600 and 4000 as leap years, and making all centennial years common except 4500, 5000, 5500, 6000, etc. (i.e. making centennial leap years occur once every 500 years instead of 400 starting from the year 4000), but the quantity ΔT is not sufficiently predictable to form more precise proposals. [ 38 ]
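Two quick arithmetic checks of the figures quoted above, using only the numbers as given: (1) the Gregorian average year of 365.2425 days exceeds the 1900 tropical year of 365.242 198 781 25 days by about 0.000 301 day per year, which over 10,000 years accumulates to about 3.01 days – that is, 3 days plus roughly 17½ minutes, matching the quoted 3 days, 17 min, 33 s. (2) For the long-term behaviour of ΔT, the Morrison and Stephenson parabola mentioned earlier is usually quoted as ΔT ≈ −20 + 32 t² seconds, with t in Julian centuries from 1820 (reproduced here for illustration only); at the year 4000 (t ≈ 21.8) it gives roughly 15,000 seconds, about four hours, underlining why ΔT cannot be ignored in calendar proposals reaching that far ahead.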
https://en.wikipedia.org/wiki/Tropical_year
Tropinone is an alkaloid, famously synthesised in 1917 by Robert Robinson as a synthetic precursor to atropine, a scarce commodity during World War I. [ 2 ] [ 3 ] Tropinone and the alkaloids cocaine and atropine all share the same tropane core structure. Its conjugate acid, the major species at pH 7.3, is known as tropiniumone. [ 4 ] The first synthesis of tropinone was by Richard Willstätter in 1901. It started from the seemingly related cycloheptanone, but required many steps to introduce the nitrogen bridge; the overall yield for the synthesis path is only 0.75%. [ 5 ] Willstätter had previously synthesized cocaine from tropinone, in what was the first synthesis and elucidation of the structure of cocaine. [ 6 ] The 1917 synthesis by Robinson is considered a classic in total synthesis [ 8 ] due to its simplicity and biomimetic approach. Tropinone is a bicyclic molecule, but the reactants used in its preparation are fairly simple: succinaldehyde, methylamine and acetonedicarboxylic acid (or even acetone). The synthesis is a good example of a biomimetic reaction or biogenetic-type synthesis, because biosynthesis makes use of the same building blocks. It also demonstrates a tandem reaction in a one-pot synthesis. Furthermore, the yield of the synthesis was initially 17% and with subsequent improvements exceeded 90%. [ 5 ] This reaction is described as an intramolecular "double Mannich reaction" for obvious reasons. It is not unique in this regard, as others have also attempted it in piperidine synthesis. [ 9 ] [ 10 ] Acetonedicarboxylic acid, used in place of acetone, is known as its "synthetic equivalent"; the 1,3-dicarboxylic acid groups are so-called "activating groups" that facilitate the ring-forming reactions. The calcium salt is there as a "buffer", as it is claimed that higher yields are possible if the reaction is conducted at "physiological pH". Some authors have actually tried to retain one of the CO 2 H groups. [ 11 ] The CO 2 R-tropinone has four stereoisomers, although the corresponding ecgonidine alkyl ester has only a pair of enantiomers. IBX dehydrogenation (oxidation) of cycloheptanone (suberone) to 2,6-cycloheptadienone [1192-93-4], followed by reaction with an amine, is a versatile way of forming tropinones. [ 12 ] [ 13 ] The mechanism invoked is clearly delineated to be a double Michael reaction (i.e. conjugate addition). [ 14 ] The reduction of tropinone is mediated by NADPH-dependent reductase enzymes, which have been characterized in multiple plant species. [ 15 ] These plant species all contain two types of the reductase enzyme, tropinone reductase I and tropinone reductase II. TRI produces tropine and TRII produces pseudotropine. Owing to the differing kinetic and pH/activity characteristics of the enzymes and the 25-fold higher activity of TRI over TRII, the majority of the tropinone reduction proceeds via TRI to form tropine. [ 16 ]
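As a check on the building blocks listed above, the overall stoichiometry of the Robinson route balances as succinaldehyde (C4H6O2) + methylamine (CH5N) + acetonedicarboxylic acid (C5H6O5) → tropinone (C8H13NO) + 2 CO2 + 2 H2O. The atom counts balance on both sides, though the single equation is of course a simplification of the stepwise double Mannich condensation followed by loss of the two carboxyl groups.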
https://en.wikipedia.org/wiki/Tropinone
Tropospheric ozone depletion events are phenomena that reduce the concentration of ozone in the Earth's lower atmosphere. Ozone (O 3 ) is a trace gas which has been of concern because of its unique dual role in different layers of the atmosphere. [ 1 ] Apart from absorbing UV-B radiation and converting solar energy into heat in the stratosphere, ozone in the troposphere acts as a greenhouse gas and controls the oxidation capacity of the atmosphere. [ 1 ] Ozone in the troposphere is determined by photochemical production and destruction, dry deposition, and cross-tropopause transport of ozone from the stratosphere. [ 2 ] In the Arctic troposphere, transport and photochemical reactions involving nitrogen oxides and volatile organic compounds (VOCs) from human emissions also produce ozone, resulting in a background mixing ratio of 30 to 50 nmol/mol (ppb). [ 3 ] Nitrogen oxides play a key role in recycling active free radicals (such as reactive halogens) in the atmosphere and indirectly affect ozone depletion. [ 4 ] Ozone depletion events (ODEs) are phenomena associated with the sea ice zone. They are routinely observed at coastal locations when incoming winds have traversed sea-ice-covered areas. [ 5 ] During springtime in the polar regions of Earth, unique photochemistry converts inert halide salt ions (e.g. Br−) into reactive halogen species (e.g. Br atoms and BrO) that episodically deplete ozone in the atmospheric boundary layer to near-zero levels. [ 6 ] These processes are favored by light and low-temperature conditions. [ 4 ] Since their discovery in the late 1980s, research on these ozone depletion events has shown the central role of bromine photochemistry. The exact sources and mechanisms that release bromine are still not fully understood, but concentrated sea salt in a condensed-phase substrate appears to be a prerequisite. [ 7 ] Shallow boundary layers are also likely to be beneficial, since they enhance the speed of autocatalytic bromine release by confining the released bromine to a smaller space. [ 3 ] Under these conditions, and with sufficient acidity, gaseous hypobromous acid (HOBr) can react with condensed sea salt bromide and produce bromine that is then released to the atmosphere. Subsequent photolysis of this bromine generates bromine radicals that can react with and destroy ozone. [ 7 ] Because of the autocatalytic nature of the reaction mechanism, it has been called the "bromine explosion". It is still not fully understood how salts are transported from the ocean and oxidized to become reactive halogen species in the air. Other halogens (chlorine and iodine) are also activated through mechanisms coupled to bromine chemistry. [ 6 ] The main consequences of halogen activation are the chemical destruction of ozone, which removes the primary precursor of atmospheric oxidation, and the generation of reactive halogen atoms and oxides that become the primary oxidizing species. [ 6 ] The oxidizing capacity normally governed by ozone is weakened, and the halogen species instead take over that role. This changes the reaction cycles and final products of many atmospheric reactions. During ozone depletion events, the enhanced halogen chemistry can effectively oxidize reactive gaseous elements. [ 4 ] The different reactivity of halogens as compared to OH and ozone has broad impacts on atmospheric chemistry.
These include the near-complete removal and deposition of mercury, alteration of the oxidation fates of organic gases, and export of bromine into the free troposphere. [ 6 ] The deposition of reactive gaseous mercury (RGM) in snow, following oxidation by the enhanced halogens, increases the bioavailability of mercury. [ 4 ] Recent changes in the climate of the Arctic and in the state of the Arctic sea ice cover are likely to have strong effects on halogen activation and ozone depletion events. Human-induced climate change affects the quantity of snow and ice cover in the Arctic, altering the intensity of nitrogen oxide emissions. [ 4 ] An increase in background levels of nitrogen oxides apparently strengthens both the consumption of ozone and the enhancement of halogens.
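The autocatalytic bromine chemistry described above is often summarized by a simplified reaction cycle (a schematic common in the literature, not quoted from the cited sources): Br + O3 → BrO + O2; BrO + HO2 → HOBr + O2; HOBr + Br−(aq) + H+(aq) → Br2 + H2O; Br2 + hν → 2 Br. Each pass through the cycle consumes one gas-phase Br atom but returns two, drawing an extra bromide ion out of the condensed phase while destroying ozone; this doubling at every cycle is what makes the "bromine explosion" autocatalytic.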
https://en.wikipedia.org/wiki/Tropospheric_ozone_depletion_events
The Trost ligand is a diphosphine used in the palladium - catalyzed Trost asymmetric allylic alkylation . Other C 2 -symmetric ligands derived from trans -1,2-diaminocyclohexane (DACH) have been developed, such as the ( R , R )-DACH- naphthyl ligand derived from 2-diphenylphosphino-1-naphthalenecarboxylic acid. Related bidentate phosphine -containing ligands derived from other chiral diamines and 2-diphenylphosphinobenzoic acid have also been developed for applications in asymmetric synthesis .
https://en.wikipedia.org/wiki/Trost_ligand
In thermodynamics , Trouton's rule states that the (molar) entropy of vaporization has almost the same value, about 85–88 J/(K·mol), for various kinds of liquids at their boiling points . [ 1 ] The entropy of vaporization is defined as the ratio between the enthalpy of vaporization and the boiling temperature. It is named after Frederick Thomas Trouton . Expressed as a function of the gas constant R, the rule reads ΔS_vap = ΔH_vap / T_b ≈ 10.5 R. A similar way of stating this ( Trouton's ratio ) is that the latent heat is connected to the boiling point roughly as L_vap / T_b ≈ 85–88 J/(K·mol). Trouton's rule can be explained by applying Boltzmann's definition of entropy to the relative change in free volume (that is, space available for movement) between the liquid and vapour phases . [ 2 ] [ 3 ] It is valid for many liquids; for instance, the entropy of vaporization of toluene is 87.30 J/(K·mol), that of benzene is 89.45 J/(K·mol), and that of chloroform is 87.92 J/(K·mol). Because of its convenience, the rule is used to estimate the enthalpy of vaporization of liquids whose boiling points are known. The rule, however, has some exceptions. For example, the entropies of vaporization of water , ethanol , formic acid and hydrogen fluoride are far from the predicted values. The entropy of vaporization of XeF6 at its boiling point has the extraordinarily high value of 136.9 J/(K·mol), or 16.5 R . [ 4 ] The characteristic shared by liquids to which Trouton's rule cannot be applied is a special interaction between their molecules, such as hydrogen bonding . The entropies of vaporization of water and ethanol show positive deviation from the rule; this is because the hydrogen bonding in the liquid phase lessens the entropy of that phase. In contrast, the entropy of vaporization of formic acid shows negative deviation. This fact indicates the existence of an orderly structure in the gas phase; it is known that formic acid forms a dimer structure even in the gas phase. Negative deviation can also occur as a result of a small gas-phase entropy owing to a low population of excited rotational states in the gas phase, particularly in small molecules such as methane: a small moment of inertia I gives rise to a large rotational constant B , with correspondingly widely separated rotational energy levels and, according to the Maxwell–Boltzmann distribution , a small population of excited rotational states, and hence a low rotational entropy. The validity of Trouton's rule can be increased by considering the modified form ΔS_vap ≈ 4.5 R + R ln T. [ citation needed ] Here, if T = 400 K , the right-hand side of the equation equals 10.5 R , and we find the original formulation for Trouton's rule.
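As a quick illustration of how the rule is used to estimate an enthalpy of vaporization from a known boiling point (benzene is chosen purely as an example, and the numbers are rounded):

ΔH_vap ≈ ΔS_vap × T_b ≈ 87 J/(K·mol) × 353 K ≈ 30.7 kJ/mol,

which is close to the experimentally measured value of about 30.7–30.8 kJ/mol for benzene at its normal boiling point.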
https://en.wikipedia.org/wiki/Trouton's_rule
The trp operon is a group of genes that are transcribed together, encoding the enzymes that produce the amino acid tryptophan in bacteria. The trp operon was first characterized in Escherichia coli , and it has since been discovered in many other bacteria. [ 1 ] The operon is regulated so that, when tryptophan is present in the environment, the genes for tryptophan synthesis are repressed. The trp operon contains five structural genes: trpE , trpD , trpC , trpB , and trpA , which encode the enzymes needed to synthesize tryptophan. It also contains a repressive regulator gene called trpR . When tryptophan is present, the trpR protein binds to the operator , blocking transcription of the trp operon by RNA polymerase. This operon is an example of repressible negative regulation of gene expression. The repressor protein binds to the operator in the presence of tryptophan (repressing transcription ) and is released from the operon when tryptophan is absent (allowing transcription to proceed). The trp operon additionally uses attenuation to control expression of the operon, a second negative feedback control mechanism. The trp operon is well studied and is commonly used as an example of gene regulation in bacteria alongside the lac operon . As noted above, the trp operon contains five structural genes, whose products together carry out the successive steps of tryptophan biosynthesis. The operon operates by a negative repressible feedback mechanism. The repressor for the trp operon is produced upstream by the trpR gene, which is constitutively expressed at a low level. Synthesized trpR monomers associate into dimers. When tryptophan is present, these tryptophan repressor dimers bind to tryptophan, causing a change in the repressor conformation, allowing the repressor to bind to the operator . This prevents RNA polymerase from binding to and transcribing the operon, so tryptophan is not produced from its precursor. When tryptophan is not present, the repressor is in its inactive conformation and cannot bind the operator region, so transcription is not inhibited by the repressor. Attenuation is a second mechanism of negative feedback in the trp operon. The repression system targets the intracellular trp concentration whereas the attenuation responds to the concentration of charged tRNA-Trp. [ 2 ] Thus, the repressor (the trpR protein) decreases gene expression by altering the initiation of transcription, while attenuation does so by altering the process of transcription that is already in progress. [ 2 ] While the TrpR repressor decreases transcription by a factor of 70, attenuation can further decrease it by a factor of 10, thus allowing accumulated repression of about 700-fold. [ 3 ] Attenuation is made possible by the fact that in prokaryotes (which have no nucleus ), the ribosomes begin translating the mRNA while RNA polymerase is still transcribing the DNA sequence. This allows the process of translation to affect transcription of the operon directly. At the beginning of the transcribed genes of the trp operon is a sequence of at least 130 nucleotides termed the leader transcript (trpL; P0AD92 ). [ 4 ] Lee and Yanofsky (1977) found that the attenuation efficiency is correlated with the stability of a secondary structure embedded in trpL, [ 5 ] and the two constituent hairpins of the terminator structure were later elucidated by Oxender et al. (1979). [ 6 ] This transcript includes four short sequences designated 1–4, each of which is partially complementary to the next one. Thus, three distinct secondary structures ( hairpins ) can form: 1–2, 2–3 or 3–4.
The hybridization of sequences 1 and 2 to form the 1–2 structure is rare because the RNA polymerase waits for a ribosome to attach before continuing transcription past sequence 1; however, if the 1–2 hairpin were to form, it would prevent the formation of the 2–3 structure (but not 3–4). The formation of a hairpin loop between sequences 2–3 prevents the formation of hairpin loops between both 1–2 and 3–4. The 3–4 structure is a transcription termination sequence (abundant in G/C and immediately followed by several uracil residues); once it forms, RNA polymerase will dissociate from the DNA and transcription of the structural genes of the operon cannot occur (see below for a more detailed explanation). The functional importance of the second hairpin for transcriptional termination is illustrated by the reduced transcription termination frequency observed in experiments destabilizing the central G+C pairing of this hairpin. [ 5 ] [ 7 ] [ 8 ] [ 9 ] Part of the leader transcript codes for a short polypeptide of 14 amino acids, termed the leader peptide. This peptide contains two adjacent tryptophan residues, which is unusual, since tryptophan is a fairly uncommon amino acid (about one in a hundred residues in a typical E. coli protein is tryptophan). Strand 1 in trpL encompasses the region encoding the trailing residues of the leader peptide: Trp, Trp, Arg, Thr, Ser; [ 2 ] conservation is observed in these five codons, whereas mutating the upstream codons does not alter the operon expression. [ 2 ] [ 10 ] [ 11 ] [ 12 ] If the ribosome attempts to translate this peptide while tryptophan levels in the cell are low, it will stall at either of the two trp codons. While it is stalled, the ribosome physically shields sequence 1 of the transcript, preventing the formation of the 1–2 secondary structure. Sequence 2 is then free to hybridize with sequence 3 to form the 2–3 structure, which then prevents the formation of the 3–4 termination hairpin, which is why the 2–3 structure is called an anti-termination hairpin. In the presence of the 2–3 structure, RNA polymerase is free to continue transcribing the operon. Mutational analysis and studies involving complementary oligonucleotides demonstrate that the stability of the 2–3 structure corresponds to the operon expression level. [ 10 ] [ 13 ] [ 14 ] [ 15 ] If tryptophan levels in the cell are high, the ribosome will translate the entire leader peptide without interruption and will only stall during translation termination at the stop codon . At this point the ribosome physically shields both sequences 1 and 2. Sequences 3 and 4 are thus free to form the 3–4 structure, which terminates transcription. This terminator structure forms when no ribosome stalls in the vicinity of the Trp tandem (i.e. the Trp or Arg codons): either the leader peptide is not translated or the translation proceeds smoothly along strand 1 with abundant charged tRNA-Trp. [ 2 ] [ 10 ] Moreover, the ribosome is proposed to block only about 10 nucleotides downstream, so ribosome stalling at either the upstream Gly or the further downstream Thr codon does not seem to affect the formation of the termination hairpin. [ 2 ] [ 10 ] The end result is that the operon will be transcribed only when tryptophan is unavailable for the ribosome, while the trpL transcript is constitutively expressed. This attenuation mechanism is experimentally supported. First, the translation of the leader peptide and ribosomal stalling are directly evidenced to be necessary for inhibiting the transcription termination.
[ 13 ] Moreover, mutational analyses destabilizing or disrupting the base-pairing of the antiterminator hairpin result in a several-fold increase in termination; consistent with the attenuation model, these mutations fail to relieve attenuation even under tryptophan starvation. [ 10 ] [ 13 ] In contrast, complementary oligonucleotides targeting strand 1 increase the operon expression by promoting antiterminator formation. [ 10 ] [ 14 ] Furthermore, in the histidine operon, compensatory mutations show that the pairing ability of strands 2–3 matters more than their primary sequence in inhibiting attenuation. [ 10 ] [ 15 ] In attenuation, where the translating ribosome stalls determines whether the termination hairpin will be formed. [ 10 ] In order for the transcribing polymerase to concomitantly capture the alternative structure, the time scale of the structural modulation must be comparable to that of the transcription. [ 2 ] To ensure that the ribosome binds and begins translation of the leader transcript immediately following its synthesis, a pause site exists in the trpL sequence. Upon reaching this site, RNA polymerase pauses transcription and apparently waits for translation to begin. This mechanism allows for synchronization of transcription and translation, a key element in attenuation. A similar attenuation mechanism regulates the synthesis of histidine , phenylalanine and threonine . The arrangement of the trp operon in E. coli and Bacillus subtilis differs. There are five structural genes in E. coli that are found under a single transcriptional unit. In Bacillus subtilis , there are six structural genes that are situated within a supraoperon. Three of these genes are found upstream while the other three genes are found downstream of the trp operon. [ 16 ] There is a seventh gene in Bacillus subtilis 's operon, called trpG or pabA , whose product takes part in the synthesis of both tryptophan and folate . [ 17 ] Regulation of trp operons in both organisms depends on the amount of tryptophan present in the cell. However, the primary regulation of tryptophan biosynthesis in B. subtilis is via attenuation, rather than repression, of transcription. [ 18 ] In B. subtilis , tryptophan binds to the eleven-subunit tryptophan-activated RNA-binding attenuation protein (TRAP), which activates TRAP's ability to bind to the trp leader RNA. [ 19 ] [ 20 ] Binding of trp-activated TRAP to leader RNA results in the formation of a terminator structure that causes transcription termination. [ 18 ] In addition, the activated TRAP inhibits the initiation of translation of the trpP, trpE, trpG and ycbK genes. The gene trpP plays a role in trp transport, while the gene trpG is utilized in the folate operon, and the gene ycbK is involved in synthesis of an efflux protein. The activated TRAP protein is regulated by an anti-TRAP protein (AT). AT can inactivate TRAP to lower the transcription of the tryptophan genes. [ 21 ]
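As an illustration only, the attenuation logic described above can be caricatured as a simple decision rule. This is a toy sketch, not a quantitative model, and the function name and arguments are invented for this example:

```python
def attenuation_outcome(leader_translated: bool, charged_trp_trna_available: bool) -> str:
    """Toy sketch of trp-operon attenuation logic as described in the text.

    A ribosome stalled at the tandem Trp codons (low charged tRNA-Trp) shields
    sequence 1, so the 2-3 antiterminator hairpin forms and transcription of the
    structural genes continues. Smooth translation of the leader, or no
    translation at all, lets the 3-4 terminator hairpin form instead.
    """
    if leader_translated and not charged_trp_trna_available:
        return "read-through: 2-3 antiterminator forms, structural genes transcribed"
    return "termination: 3-4 terminator forms, transcription stops after the leader"


# Tryptophan starvation with an engaged ribosome allows read-through;
# abundant charged tRNA-Trp (or no translation) leads to termination.
print(attenuation_outcome(leader_translated=True, charged_trp_trna_available=False))
print(attenuation_outcome(leader_translated=True, charged_trp_trna_available=True))
print(attenuation_outcome(leader_translated=False, charged_trp_trna_available=True))
```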
https://en.wikipedia.org/wiki/Trp_operon
TruCluster is a closed-source high-availability clustering solution for the Tru64 UNIX operating system . It was originally developed by Digital Equipment Corporation (DEC), but was transferred to Compaq in 1998 when Digital was acquired by the company, which then later merged with Hewlett-Packard (HP).
https://en.wikipedia.org/wiki/TruCluster
A truck scale (US), weighbridge (non-US) or railroad scale is a large set of scales , usually mounted permanently on a concrete foundation, that is used to weigh entire rail or road vehicles and their contents. By weighing the vehicle both empty and when loaded, the load carried by the vehicle can be calculated. The key components that a weighbridge uses to make the weight measurement are load cells . Commercial scales have to be National Type Evaluation Program (NTEP) approved or certified. The certification is issued by the National Conference on Weights and Measures (NCWM), in accordance with the National Institute of Standards and Technology (NIST) "Handbook 44" specifications and tolerances, [ 1 ] through Conformity Assessment and the Verified Conformity Assessment Program (VCAP). [ 2 ] Handbook 44: General Code paragraph G-A.1. and the NIST Handbook 130 (Uniform Weights and Measures Law; Section 1.13.) define Commercial Weighing and Measuring Equipment as follows: weights and measures and weighing and measuring devices commercially used or employed in establishing the size, quantity, extent, area, or measurement of quantities, things, produce, or articles for distribution or consumption, purchased, offered, or submitted for sale, hire, or award, or in computing any basic charge or payment for services rendered on the basis of weight or measure. NTEP approved scales are generally considered those scales which are intended by the manufacturer for use in commercial applications where products are sold by weight. NTEP Approved is also known as Legal for Trade or compliant with Handbook 44. NTEP scales are commonly used for applications ranging from weighing cold cuts at the deli and fruit at the roadside farm stand, to determining shipping costs at shipping centers, to weighing gold and silver, and more. [ 3 ] A rail weighbridge [ 4 ] is used to weigh rolling stock , including railroad cars , goods wagons and locomotives , empty or loaded. When loaded, the net weight of the cargo is the gross weight less the tare weight, when known. It is also used to weigh trams . There are different types, but all of them have electronic sensors built into the track that measure the weight. All designs have in common that there must be a sufficient approach and departure distance in front of and behind the respective scale. All of them can measure independently of the direction of travel and whether the train is being pushed or pulled. In principle, a distinction is made between three different types of construction: [ 5 ] The dynamic weighbridge consists of one or more weighbridges that can be connected together. The construction of the weighbridge is similar to the static track scales, with load cells and a weighing platform. The rails are mounted on the weighing platform and are designed with rail bevelling. Rail switches are integrated into the rails to detect the position of the wagons on the scale. Together with the weighing terminal and the software, the weight of the individual wagons or bogies is determined dynamically during the passage at up to 10 km/h. For dynamic track scales with force sensors, several force sensors are drilled and pressed into the track. When a train passes over the scales at up to 30 km/h, the rail is deformed by the mass of the vehicle. The change in material stress deforms the sensor, in which strain gauges are mounted as in a classic load cell.
Thus, the weight of the individual wheelset or bogie can be calculated from the specific deformation behaviour of the rail. A dynamic track scale based on weighing sleepers is, like the strain-gauge-in-rail scale, a gapless construction without rail cuts. In simple terms, several sleepers are removed from the track and replaced by weighing sleepers. Load cells are installed in these sleepers. Compared to the weighbridge design, the gapless (and thus force-coupled) construction means that the scale cannot be statically adjusted, but can only operate purely dynamically. This requires a very stable substructure without a jump in stiffness. The difference from the strain-gauge-in-rail scale is that calibratable sensors can be used in this variant, and the scale is therefore calibratable. Truck scales can be surface mounted, with a ramp leading up a short distance and the weighing equipment underneath, or they can be pit mounted, with the weighing equipment and platform in a pit so that the weighing surface is level with the road. They are typically built from steel or concrete and by nature are extremely robust. In earlier versions the bridge is installed over a rectangular pit that contains levers that ultimately connect to a balance mechanism. The most complex portion of this type is the arrangement of levers underneath the weighbridge, since the response of the scale must be independent of the distribution of the load. [ 6 ] Modern devices use multiple load cells that connect to electronic equipment that totalizes the sensor inputs. In either type of semi-permanent scale the weight readings are typically recorded in a nearby hut or office. Many weighbridges are now linked to a PC which runs truck scale software capable of printing tickets and providing reporting features. Truck scales are used for two main purposes. They are used in industries that manufacture or move bulk items, such as in mines or quarries, garbage dumps / recycling centers, bulk liquid and powder movement, household goods, and electrical equipment. Since the weight of the vehicle carrying the goods is known (and can be ascertained quickly, if it is not known, by the simple expedient of weighing the empty vehicle), they are a quick and easy way to measure the flow of bulk goods in and out of different locations. A single axle truck scale or axle weighing system can be used to check individual axle weights and gross vehicle weights to determine whether the vehicle is safe to travel on the public highway without being stopped and fined by the authorities for being overloaded. Similar to the full-size truck scale, these systems can be pit mounted with the weighing surface flush with the level of the roadway, or surface mounted. For many uses (such as at police over-the-road truck weigh stations or temporary road intercepts), weighbridges have been largely supplanted by simple and thin electronic weigh cells , over which a vehicle is slowly driven. A computer records the output of the cell and accumulates the total vehicle weight. By weighing the force on each axle it can be assured that the vehicle is within statutory limits, which typically impose a total vehicle weight limit, a maximum weight within a given axle span, and an individual axle limit. The former two limits ensure the safety of bridges while the latter protects the road surface. Portable truck scales can also be found in use around the world.
A portable truck scale will have a lower framework that can be placed on non-typical surfaces such as dirt. These scales retain the same level of accuracy as a pit-type scale, with accuracy of up to ±1%. The first recorded portable truck scales in the US were units operated by the Weight Patrol of the Los Angeles Motor Patrol in 1929. Four such weighing units were used, with one under each of the truck's wheels. Each unit could record up to 15,000 pounds (6,800 kg). [ 7 ] Digital Load Cells : Digital load cells have replaced traditional analog ones due to their superior accuracy, faster response times, and better resistance to environmental factors. These load cells offer real-time weight data with reduced signal interference. Weighbridge Software Integration : Weighbridge software has been developed to streamline data collection, analysis, and reporting. This software simplifies integration with other business systems, improving compliance tracking, inventory management, and billing. Remote Monitoring & Connectivity : Weighbridges now feature remote monitoring capabilities, allowing users to access weight data and system status in real time from a distance. This feature enhances efficiency by providing preventive maintenance and troubleshooting capabilities. In-Motion Weighing : In-motion weighbridge systems have revolutionized truck weighing by allowing vehicles to be weighed while moving slowly over the scale. This eliminates the need to stop for weighing, improving traffic flow and saving time. RFID Technology : RFID technology is being integrated into weighbridge systems to automate the identification of vehicles and goods. This improves data accuracy, speeds up the weighing process, and reduces errors. Imaging : Advanced camera systems capture images of vehicles and their loads during the weighing process. This visual evidence can be useful in dispute resolution, record-keeping, and verification. Data Analytics & Reporting : Weighbridge technology now includes powerful data analytics tools that help organizations draw insights from weight data. These insights can aid in making informed operational decisions, identifying patterns, and optimizing load distribution. Mobile Apps & Cloud Integration : Mobile applications allow users to interact with weighbridge systems remotely and access reports, alerts, and real-time weight data. Integration with the cloud ensures secure data storage and cross-platform accessibility. Sustainability : Weighbridge designs now incorporate solar-powered systems and energy-saving components to minimize their environmental impact. Enhanced Durability & Construction : Weighbridge construction materials have advanced to withstand heavy usage, harsh weather conditions, and corrosive environments, resulting in longer lifespans and reduced maintenance requirements.
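A minimal sketch of the arithmetic a weighbridge terminal performs: summing the load-cell readings to obtain the gross weight, subtracting a known tare to obtain the net weight, and flagging axles that exceed a statutory limit. The function names, the sample readings and the per-axle limit below are illustrative only and are not taken from any particular standard:

```python
def gross_and_net_weight(load_cell_readings_kg, tare_kg):
    """Gross weight is the totalized sum of all load-cell readings; net = gross - tare."""
    gross = sum(load_cell_readings_kg)
    return gross, gross - tare_kg


def overloaded_axles(axle_weights_kg, axle_limit_kg=10_000):
    """Return the indices of axles exceeding an illustrative per-axle limit."""
    return [i for i, w in enumerate(axle_weights_kg) if w > axle_limit_kg]


# Example: a four-cell platform scale weighing a loaded three-axle truck.
gross, net = gross_and_net_weight([11_200, 10_800, 9_950, 10_050], tare_kg=14_500)
print(f"gross = {gross} kg, net = {net} kg")
print("overloaded axles:", overloaded_axles([6_500, 10_400, 9_800]))
```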
https://en.wikipedia.org/wiki/Truck_scale
The Truckee–Carson Irrigation District (TCID) is a public enterprise organized in the State of Nevada , which operates dams at Lake Tahoe , diversion dams on the Truckee River in Washoe County , and the Lake Lahontan reservoir. TCID also operates 380 miles (610 km) of canals , and 340 miles (550 km) of drains, in support of agriculture in Lyon County and Churchill County , western Nevada. The excess irrigation water eventually drains into the endorheic Lake Lahontan Basin. Diversion of water by the TCID from the Truckee River has caused a reduction in the level of natural Pyramid Lake , resulting in the endemic species of fish that live in it becoming endangered species . In the mid-1980s the United States Environmental Protection Agency initiated development of the DSSAM Model to analyze effects of variable Truckee River flow rates and water quality upon these endangered fish species.
https://en.wikipedia.org/wiki/Truckee–Carson_Irrigation_District
In mathematical analysis , Trudinger's theorem or the Trudinger inequality (also sometimes called the Moser–Trudinger inequality ) is a result of functional analysis on Sobolev spaces . It is named after Neil Trudinger (and Jürgen Moser ). It provides an inequality between a certain Sobolev space norm and an Orlicz space norm of a function. The inequality is a limiting case of Sobolev embedding and can be stated as the following theorem: Let Ω be a bounded domain in {\displaystyle \mathbb {R} ^{n}} satisfying the cone condition . Let mp = n and p > 1. Set {\displaystyle A(t):=\exp \left(t^{n/(n-m)}\right)-1.} Then there exists the embedding {\displaystyle W^{m,p}(\Omega )\hookrightarrow L_{A}(\Omega ),} where {\displaystyle L_{A}(\Omega )} is the Orlicz space of measurable functions u on Ω such that {\displaystyle \int _{\Omega }A\left(|u|/k\right)\,dx<\infty } for some k > 0. The space {\displaystyle L_{A}(\Omega )} is an example of an Orlicz space .
https://en.wikipedia.org/wiki/Trudinger's_theorem
True Corporation Public Company Limited (stylized as true ) is a communications conglomerate in Thailand . It is a joint venture between Charoen Pokphand Group and Telenor , formed by the merger of the original True Corporation and DTAC as an equal partnership to create a new telecommunications company intended to meet the needs of the digital age. True controls Thailand's largest cable TV provider, TrueVisions , [ 3 ] Thailand's largest internet service provider, True Online , [ citation needed ] and Thailand's mobile operators TrueMove H and DTAC TriNet , which are second and third only to AIS , [ 4 ] as well as entertainment media including television, internet, online games, and mobile phones under the True Digital brand. As of August 2014, True, along with True Telecommunications Growth Infrastructure Fund, had a combined market capitalization of US$10 billion. [ citation needed ] TrueMove is also a partner of Vodafone Group . [ 5 ] Charoen Pokphand Group and Telenor hold equal ownership stakes of 30% of True's shares as of March 2023. [ 6 ] It operates fixed-line (as a concessionaire of NT (formerly known as TOT)), wireless , cable TV , IPTV and broadband services. True Corporation was established on 13 November 1990 as TelecomAsia . [ 7 ] The company had a partnership with Verizon . [ 7 ] The company was listed on the Stock Exchange of Thailand on 22 December 1993. [ 8 ] In 2001, TelecomAsia set up mobile phone subsidiary TA Orange with Orange SA . Orange sold off its stake in 2003 but the Orange brand was used until 2006. [ 9 ] In an effort to converge TelecomAsia's telecommunication business into a single brand, the company renamed itself True Corporation in 2004, [ 10 ] and streamlined its operations with subsidiaries Asia Infonet (renamed True Internet [ citation needed ] ) and Orange (renamed True Move in 2006 [ 11 ] ). In 2005, True took a higher stake in UBC , Thailand's largest cable television provider at that time, [ citation needed ] and renamed the company UBC-True. [ 12 ] On 24 January 2007, UBC-True was renamed TrueVisions . [ citation needed ] On 8 May 2013, TrueMove H became Thailand's first mobile operator to provide 4G LTE commercial service on the 2100 MHz band. [ 13 ] On 11 September 2014, it was announced that China Mobile had agreed to purchase 18 percent of its shares for US$881 million. [ 14 ] [ 15 ] On 13 November 2014, TrueMove H announced that it had allocated 10 billion baht to expand its 4G LTE network in Thailand to cover 80 percent of the country's population. [ 16 ] In June 2015 Suphachai Chearavanont , True's President and CEO, was presented with the "2015 Frost & Sullivan Asia Pacific Telecom CEO of the Year" award in Singapore for his leadership and achievements in developing the telecommunications industry in the Asia Pacific region. [ citation needed ] In the same month, Chearavanont was elected president of The Telecommunications Association of Thailand. [ citation needed ] On 22 November 2021, Charoen Pokphand and Telenor officially announced that they had agreed to explore a USD 8.6 billion merger plan between Thailand's second and third largest telecom operators (by subscribers), True Corporation (TRUE) and Total Access Communication (DTAC); the proposed merger was subject to regulatory approvals and was expected to be completed by late September 2022. [ 17 ] [ 18 ] The merger was "acknowledged" by the regulator NBTC at a meeting on 20 October 2022.
[ 19 ] The newly merged company retained the True Corporation name; it was founded on 1 March 2023 and listed on the Stock Exchange of Thailand under the stock ticker symbol TRUE on 3 March 2023. [ 20 ] In 2024, True Corporation is moving forward to become Thailand's leading telco-tech company [ 21 ] through transparent and sustainable operation, enhancing its competitive capabilities, and contributing to Thai social transformation. True is modernizing its network and leveraging digital technology and AI to drive operations and innovation. This includes advancements like the "Mari" chatbot, now evolved into a "voicebot" for human-like customer interactions, while prioritizing data privacy and security. True is also advancing digital technology for modern living, supporting digital transformation in various industries, and fostering an AI-first culture. Additionally, True remains committed to sustainability, having been ranked as a world-class organization on the Dow Jones Sustainability Indices (DJSI) for the sixth consecutive year according to S&P Global's announcement on 8 December 2023. True is pursuing a comprehensive strategy for systemic change, emphasizing AI innovation, business efficiency, energy reduction, digital skills development, and using digital technology to reduce educational disparities. The company is also expanding business processes that uphold human rights throughout the supply chain, promoting transparent and accountable AI governance, and combating all forms of corruption. True Corporation includes the following subsidiaries: [ 22 ] Thai activists have charged that True, Thailand's largest ISP, shared dissidents' internet account details with the junta in the aftermath of the 2014 Thai coup d'état . It is impossible to corroborate that True shared dissidents' data with law enforcement, but Thai governments since 2007 have sought to curb online criticism by passing legislation that compels ISPs to deploy online surveillance and censorship technologies. [ 23 ] True's privacy policy allows it to share data with law-enforcement authorities. [ 24 ]
https://en.wikipedia.org/wiki/True_Corporation
True DC is a type of switch disconnect (isolator) used in solar photovoltaic installations, in accordance with EN 60364–7–712. [ 1 ] [ 2 ] It was created by UK-based IMO Precision Controls Ltd, and later adopted by other manufacturers such as Senton and ABB . The isolator design ensures a very fast break/make action by incorporating a user-independent switching action. As the handle is moved, it interacts with a spring mechanism causing the contacts to "snap" over upon reaching a set point. This mechanism means that, on disconnection of the load circuits, the electrical arc produced by a constant DC load is normally extinguished in a maximum of 5 milliseconds (ms), using the specific pole suppression chambers incorporated into the design. Many alternative solutions, particularly those based upon an AC switch disconnect design using bridge contacts, have been modified and rated for DC operation. These types of products have a switching speed that is directly linked to the operator speed; therefore, slow operation of the handle results in slow separation of the contacts, which can produce arcing times of 100 ms or more. Additionally, in these switches the contact surface is also the surface upon which electrical arcs tend to form; therefore, any surface damage or sooting caused by arcing is likely to have a detrimental effect on the isolator's contact resistance and its longevity. True DC solar isolators use a rotary knife contact mechanism, so when the unit is operated, the handle movement gives a double make/break per contact set. As DC load switching creates electrical arcing , the design is such that this only occurs on the corners of the switching parts, meaning that the main contact is made on an area where no arcing has occurred. The rotary contact mechanism used in True DC solar isolators means that when the isolator is operated, a self-cleaning action occurs on the arcing points and contact surfaces, thereby producing vibration-resistant contact integrity with reduced contact resistance. This contact system ensures that power loss per pole is kept as low as possible, and is consistent over the life of the product. The overall design of a True DC solar isolator is satisfactory for use in installations classified as either DC-21A, DC-21B or DC-22A, and so is suitable for a high number of "off load" operations (without current) and also a high number of operating cycles "on load" (with current). A further advantage of the True DC mechanism is that, in the event of a supply-to-earth failure, the high short-circuit current pulls the contacts together, thereby giving a high short-circuit withstand current of up to 2400 A (product dependent). Residential photovoltaic installations are typically 1000 V DC; however, the majority of the True DC isolators available on the market today already have the capability to operate up to 1500 V DC. In the move towards safer installations of PV systems, [ 3 ] whether in a domestic or industrial environment, consideration often has to be given to the materials used and the risk of fire hazard [ 4 ] [ 5 ] that they pose. Ratings referred to under the UL 94 category are deemed generally acceptable for compliance with this requirement, as this covers tests for flammability of polymeric materials used for parts in devices and appliances.
Although there are 12 flame classifications specified in UL 94, there are 6 which relate to materials commonly used in manufacturing enclosures, structural parts and insulators found in consumer electronic products. These are 5VA, 5VB, V-0, V-1, V-2 and HB. With the advent of more worldwide installations, and the requirements laid down in many countries' national wiring publications for the use of DC switches in PV installations, True DC solar isolators are assessed and tested under the latest UL standard, UL508I, which has been specifically written to cover the use of "Manual Disconnect Switches intended for use in Photovoltaic Systems". This UL508I standard specifically covers switches rated up to 1500 V that are intended for use in ambient temperatures of −20 °C to +60 °C, and that are suitable for use on the load side of PV branch protection devices. [ 6 ]
https://en.wikipedia.org/wiki/True_DC
In celestial mechanics , true anomaly is an angular parameter that defines the position of a body moving along a Keplerian orbit . It is the angle between the direction of periapsis and the current position of the body, as seen from the main focus of the ellipse (the point around which the object orbits). The true anomaly is usually denoted by the Greek letters ν or θ , or the Latin letter f , and is usually restricted to the range 0–360° (0–2π rad). The true anomaly f is one of three angular parameters ( anomalies ) that can be used to define a position along an orbit, the other two being the eccentric anomaly and the mean anomaly . For elliptic orbits, the true anomaly ν can be calculated from orbital state vectors as {\displaystyle \nu =\arccos {\frac {\mathbf {e} \cdot \mathbf {r} }{\left|\mathbf {e} \right|\,\left|\mathbf {r} \right|}}} (if the dot product of the position and velocity vectors is negative, ν is replaced by 2π − ν), where e is the eccentricity vector and r is the orbital position vector. For circular orbits the true anomaly is undefined, because circular orbits do not have a uniquely determined periapsis. Instead the argument of latitude u is used: {\displaystyle u=\arccos {\frac {\mathbf {n} \cdot \mathbf {r} }{\left|\mathbf {n} \right|\,\left|\mathbf {r} \right|}}} (if the z-component of r is negative, u is replaced by 2π − u), where n is the vector pointing towards the ascending node. For circular orbits with zero inclination the argument of latitude is also undefined, because there is no uniquely determined line of nodes. One uses the true longitude l instead: {\displaystyle l=\arccos {\frac {r_{x}}{\left|\mathbf {r} \right|}}} (if the x-component of the orbital velocity is positive, l is replaced by 2π − l), where r x is the x-component of the orbital position vector. The relation between the true anomaly ν and the eccentric anomaly E is: {\displaystyle \cos \nu ={\frac {\cos E-e}{1-e\cos E}}} or using the sine [ 1 ] and tangent: {\displaystyle \sin \nu ={\frac {{\sqrt {1-e^{2}}}\,\sin E}{1-e\cos E}},\qquad \tan \nu ={\frac {{\sqrt {1-e^{2}}}\,\sin E}{\cos E-e}},} or equivalently: {\displaystyle \tan {\frac {\nu }{2}}={\sqrt {\frac {1+e}{1-e}}}\,\tan {\frac {E}{2}},} so {\displaystyle \nu =2\arctan \left({\sqrt {\frac {1+e}{1-e}}}\,\tan {\frac {E}{2}}\right).} Alternatively, a form of this equation was derived [ 2 ] that avoids numerical issues when the arguments are near ±π, as the two tangents become infinite. Additionally, since E/2 and ν/2 are always in the same quadrant, there will not be any sign problems: {\displaystyle \nu =E+2\arctan {\frac {\beta \sin E}{1-\beta \cos E}},\qquad \beta ={\frac {e}{1+{\sqrt {1-e^{2}}}}}={\frac {1-{\sqrt {1-e^{2}}}}{e}}.} The true anomaly can be calculated directly from the mean anomaly M via a Fourier expansion in terms of the Bessel functions J n and the parameter {\displaystyle \beta ={\frac {1-{\sqrt {1-e^{2}}}}{e}}} . [ 3 ] Omitting all terms of order e 4 or higher (indicated by {\displaystyle \operatorname {\mathcal {O}} \left(e^{4}\right)} ), it can be written as [ 3 ] [ 4 ] [ 5 ] {\displaystyle \nu =M+\left(2e-{\frac {1}{4}}e^{3}\right)\sin M+{\frac {5}{4}}e^{2}\sin 2M+{\frac {13}{12}}e^{3}\sin 3M+\operatorname {\mathcal {O}} \left(e^{4}\right).} Note that for reasons of accuracy this approximation is usually limited to orbits where the eccentricity e is small. The expression ν − M is known as the equation of the center , where more details about the expansion are given. The radius (distance between the focus of attraction and the orbiting body) is related to the true anomaly by the formula {\displaystyle r=a\,{\frac {1-e^{2}}{1+e\cos \nu }},} where a is the orbit's semi-major axis . In celestial mechanics , the projective anomaly is an angular parameter that defines the position of a body moving along a Keplerian orbit . It is the angle between the direction of periapsis and the current position of the body in projective space. The projective anomaly is usually denoted by θ and is usually restricted to the range 0–360° (0–2π rad). The projective anomaly θ is one of four angular parameters ( anomalies ) that define a position along an orbit, the other three being the eccentric anomaly , the true anomaly and the mean anomaly . In projective geometry, the circle, ellipse, parabola and hyperbola are treated as the same kind of quadratic curve.
An orbit type is classified by two projective parameters α and β as follows: {\displaystyle \alpha ={\frac {(1+e)(q-p)+{\sqrt {(1+e)^{2}(q+p)^{2}+4e^{2}}}}{2}},\qquad \beta ={\frac {2e}{(1+e)(q+p)+{\sqrt {(1+e)^{2}(q+p)^{2}+4e^{2}}}}},} with {\displaystyle q=(1-e)a,\qquad p={\frac {1}{Q}}={\frac {1}{(1+e)a}},} where a is the semi-major axis , e is the eccentricity , q is the perihelion distance, and Q is the aphelion distance. The position and heliocentric distance of the planet, x , y and r , can be calculated as functions of the projective anomaly θ: {\displaystyle x={\frac {-\beta +\alpha \cos \theta }{1+\alpha \beta \cos \theta }},\qquad y={\frac {{\sqrt {\alpha ^{2}-\beta ^{2}}}\,\sin \theta }{1+\alpha \beta \cos \theta }},\qquad r={\frac {\alpha -\beta \cos \theta }{1+\alpha \beta \cos \theta }}.} The projective anomaly θ can be calculated from the eccentric anomaly u as follows. For the elliptic case, {\displaystyle \tan {\frac {\theta }{2}}={\sqrt {\frac {1+\alpha \beta }{1-\alpha \beta }}}\,\tan {\frac {u}{2}},\qquad u-e\sin u=M=\left({\frac {1-\alpha ^{2}\beta ^{2}}{\alpha (1+\beta ^{2})}}\right)^{3/2}k(t-T_{0}).} For the parabolic case, {\displaystyle {\frac {s^{3}}{3}}+{\frac {\alpha ^{2}-1}{\alpha ^{2}+1}}\,s={\frac {2k(t-T_{0})}{\sqrt {\alpha (\alpha ^{2}+1)^{3}}}},\qquad s=\tan {\frac {\theta }{2}}.} For the hyperbolic case, {\displaystyle \tan {\frac {\theta }{2}}={\sqrt {\frac {\alpha \beta +1}{\alpha \beta -1}}}\,\tanh {\frac {u}{2}},\qquad e\sinh u-u=M=\left({\frac {\alpha ^{2}\beta ^{2}-1}{\alpha (1+\beta ^{2})}}\right)^{3/2}k(t-T_{0}).} The above equations are called Kepler's equation . For an arbitrary constant λ, the generalized anomaly Θ is related to the eccentric anomaly u by {\displaystyle \tan {\frac {\Theta }{2}}=\lambda \tan {\frac {u}{2}}.} The eccentric anomaly, the true anomaly, and the projective anomaly are the cases λ = 1, {\displaystyle \lambda ={\sqrt {\frac {1+e}{1-e}}}} , and {\displaystyle \lambda ={\sqrt {\frac {1+\alpha \beta }{1-\alpha \beta }}}} , respectively.
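For the ordinary elliptic case, the relations above are typically used numerically: Kepler's equation M = E − e sin E is solved for the eccentric anomaly (for example by Newton's method), and the result is converted to the true anomaly with the half-angle relation. A minimal illustrative sketch:

```python
import math

def true_anomaly_from_mean(M, e, tol=1e-12, max_iter=50):
    """Solve Kepler's equation M = E - e*sin(E) for the elliptic case (0 <= e < 1)
    by Newton's method, then convert the eccentric anomaly E to the true anomaly."""
    E = M if e < 0.8 else math.pi  # a common starting guess
    for _ in range(max_iter):
        delta = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
        E -= delta
        if abs(delta) < tol:
            break
    # tan(nu/2) = sqrt((1+e)/(1-e)) * tan(E/2), evaluated with atan2 for robustness
    nu = 2.0 * math.atan2(math.sqrt(1.0 + e) * math.sin(E / 2.0),
                          math.sqrt(1.0 - e) * math.cos(E / 2.0))
    return nu % (2.0 * math.pi)

# Example: eccentricity 0.1 and a mean anomaly of 60 degrees.
print(math.degrees(true_anomaly_from_mean(math.radians(60.0), 0.1)))
```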
https://en.wikipedia.org/wiki/True_anomaly
In mathematical logic , true arithmetic is the set of all true first-order statements about the arithmetic of natural numbers . [ 1 ] This is the theory associated with the standard model of the Peano axioms in the language of the first-order Peano axioms. True arithmetic is occasionally called Skolem arithmetic, though this term usually refers to the different theory of natural numbers with multiplication . The signature of Peano arithmetic includes the addition, multiplication, and successor function symbols, the equality and less-than relation symbols, and a constant symbol for 0. The (well-formed) formulas of the language of first-order arithmetic are built up from these symbols together with the logical symbols in the usual manner of first-order logic . The structure 𝒩 is defined to be a model of Peano arithmetic in the usual way: its domain is the set of natural numbers, and the symbols of the signature receive their ordinary arithmetic interpretations. This structure is known as the standard model or intended interpretation of first-order arithmetic. A sentence in the language of first-order arithmetic is said to be true in 𝒩 if it is true in the structure just defined. The notation 𝒩 ⊨ φ is used to indicate that the sentence φ is true in 𝒩. True arithmetic is defined to be the set of all sentences in the language of first-order arithmetic that are true in 𝒩, written Th(𝒩). This set is, equivalently, the (complete) theory of the structure 𝒩. [ 2 ] The central result on true arithmetic is the undefinability theorem of Alfred Tarski (1936). It states that the set Th(𝒩) is not arithmetically definable. This means that there is no formula φ(x) in the language of first-order arithmetic such that, for every sentence θ in this language, {\displaystyle {\mathcal {N}}\models \theta \iff {\mathcal {N}}\models \varphi ({\underline {\#(\theta )}}).} Here {\displaystyle {\underline {\#(\theta )}}} is the numeral of the canonical Gödel number of the sentence θ. Post's theorem is a sharper version of the undefinability theorem that shows a relationship between the definability of Th(𝒩) and the Turing degrees , using the arithmetical hierarchy . For each natural number n , let Th n (𝒩) be the subset of Th(𝒩) consisting of only sentences that are {\displaystyle \Sigma _{n}^{0}} or lower in the arithmetical hierarchy. Post's theorem shows that, for each n , Th n (𝒩) is arithmetically definable, but only by a formula of complexity higher than {\displaystyle \Sigma _{n}^{0}} . Thus no single formula can define Th(𝒩), because {\displaystyle \operatorname {Th} ({\mathcal {N}})=\bigcup _{n}\operatorname {Th} _{n}({\mathcal {N}}),} but no single formula can define Th n (𝒩) for arbitrarily large n . As discussed above, Th(𝒩) is not arithmetically definable, by Tarski's theorem. A corollary of Post's theorem establishes that the Turing degree of Th(𝒩) is 0 (ω) , and so Th(𝒩) is neither decidable nor recursively enumerable . Th(𝒩) is closely related to the theory Th(ℛ) of the recursively enumerable Turing degrees , in the signature of partial orders .
[ 3 ] In particular, there are computable functions S and T such that a sentence φ in the signature of partial orders belongs to Th(ℛ) if and only if S(φ) belongs to Th(𝒩), and a sentence ψ in the language of first-order arithmetic belongs to Th(𝒩) if and only if T(ψ) belongs to Th(ℛ). True arithmetic is an unstable theory , and so has 2^κ models for each uncountable cardinal κ. As there are continuum many types over the empty set, true arithmetic also has 2^ℵ₀ countable models. Since the theory is complete , all of its models are elementarily equivalent . The true theory of second-order arithmetic consists of all the sentences in the language of second-order arithmetic that are satisfied by the standard model of second-order arithmetic, whose first-order part is the structure 𝒩 and whose second-order part consists of every subset of ℕ. The true theory of first-order arithmetic, Th(𝒩), is a subset of the true theory of second-order arithmetic, and Th(𝒩) is definable in second-order arithmetic. However, the generalization of Post's theorem to the analytical hierarchy shows that the true theory of second-order arithmetic is not definable by any single formula in second-order arithmetic. Simpson (1977) has shown that the true theory of second-order arithmetic is computably interpretable with the theory of the partial order of all Turing degrees , in the signature of partial orders, and vice versa .
https://en.wikipedia.org/wiki/True_arithmetic
True north is the direction along Earth 's surface towards the place where the imaginary rotational axis of the Earth intersects the surface of the Earth on its northern half , the True North Pole . True south is the direction opposite to true north. It is important to make the distinction from magnetic north, which points towards an ever-changing location close to the True North Pole determined by Earth's magnetic field . Due to fundamental limitations in map projection , true north also differs from grid north, which is marked by the direction of the grid lines on a typical printed map. However, the longitude lines on a globe lead to the true poles, because the three-dimensional representation avoids those limitations. The celestial pole is the location on the imaginary celestial sphere where an imaginary extension of the rotational axis of the Earth intersects the celestial sphere. Within a margin of error of 1°, the true north direction can be approximated by the position of the pole star Polaris , which currently appears very close to that intersection, tracing a tiny circle in the sky each sidereal day . Due to the axial precession of Earth, true north rotates in an arc with respect to the stars that takes approximately 25,000 years to complete. Around 2101–2103, Polaris will make its closest approach to the celestial north pole (extrapolated from recent Earth precession ). [ 1 ] [ 2 ] [ 3 ] The visible star nearest the north celestial pole 5,000 years ago was Thuban . [ 4 ] On maps published by the United States Geological Survey (USGS) and the United States Armed Forces , true north is marked with a line terminating in a five-pointed star. [ 5 ] The east and west edges of the USGS topographic quadrangle maps of the United States are meridians of longitude , thus indicating true north (so they are not exactly parallel). Maps issued by the United Kingdom Ordnance Survey contain a diagram showing the difference between true north, grid north, and magnetic north at a point on the sheet; the edges of the map are likely to follow grid directions rather than true, and the map will thus be truly rectangular/square. [ citation needed ]
https://en.wikipedia.org/wiki/True_north
True vapor pressure (TVP) is a common measure of the volatility of petroleum distillate fuels. It is defined as the equilibrium partial pressure exerted by a volatile organic liquid as a function of temperature as determined by the test method ASTM D 2879. [ 1 ] The true vapor pressure (TVP) at 100 °F differs slightly from the Reid vapor pressure (RVP) (per definition also at 100 °F), as it excludes dissolved fixed gases such as air. Conversions between the two can be found in AP 42, Fifth Edition, Volume I, Chapter 7: Liquid Storage Tanks (p. 7.1-54 and onwards).
https://en.wikipedia.org/wiki/True_vapor_pressure
In particle physics , a truly neutral particle is a subatomic particle whose charge quantum numbers are all zero; such a particle is its own antiparticle . [ 1 ] : 131 In other words, it remains itself under charge conjugation , which replaces particles with their corresponding antiparticles. [ 1 ] : 135 All charges of a truly neutral particle must be equal to zero . This requires the particle not only to be electrically neutral , but also to have all of its other charges (such as the colour charge ) equal to zero. Known examples of such elementary particles include photons , Z bosons , and Higgs bosons , along with the hypothetical neutralinos , sterile neutrinos , and gravitons . For a spin-½ particle such as the neutralino, being truly neutral implies being a Majorana fermion . By way of contrast, neutrinos are not truly neutral since they have a weak isospin of ±1/2 , or equivalently a non-zero weak hypercharge , both of which are charge-like quantum numbers.
https://en.wikipedia.org/wiki/Truly_neutral_particle
In analytic geometry , a truncus is a curve in the Cartesian plane consisting of all points ( x , y ) satisfying an equation of the form {\displaystyle f(x)={a \over (x+b)^{2}}+c} where a , b , and c are given constants. The two asymptotes of a truncus are parallel to the coordinate axes. The basic truncus y = 1 / x 2 has asymptotes at x = 0 and y = 0, and every other truncus can be obtained from this one through a combination of translations and dilations . For the general truncus form above, the constant a dilates the graph by a factor of a from the x -axis; that is, the graph is stretched vertically when a > 1 and compressed vertically when 0 < a < 1. When a < 0 the graph is reflected in the x -axis as well as being stretched vertically. The constant b translates the graph horizontally left b units when b > 0, or right when b < 0. The constant c translates the graph vertically up c units when c > 0 or down when c < 0. The asymptotes of a truncus are found at x = -b (for the vertical asymptote) and y = c (for the horizontal asymptote). This function is more commonly known as a reciprocal squared function, particularly the basic example 1 / x 2 . [ 1 ]
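For example (an illustrative case only): taking a = 2, b = −3 and c = 1 gives f(x) = 2/(x − 3)² + 1, which is the basic truncus stretched vertically by a factor of 2, translated 3 units to the right and 1 unit up, so its asymptotes are the lines x = 3 and y = 1.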
https://en.wikipedia.org/wiki/Truncus_(mathematics)
Trunks Integrated Record Keeping System ( TIRKS ) is an operations support system from Telcordia Technologies (since acquired by Ericsson , Inc.), originally developed by the Bell System during the late 1970s. It was developed for inventory and order control management of interoffice trunk circuits that interconnect telephone switches. It grew to encompass and automate many functions required to build the ever-expanding data transport network. Supporting circuits from POTS and 150 baud modems up through T1, DS3, SONET and DWDM , it continues to evolve today, and unlike many software technologies today, provides complete backward compatibility. TIRKS was recently updated with a Java GUI, XML API, and WORD Sketch, which provides graphical views of the TIRKS Work Order Record and Details Document as well as SONET and DWDM networks. When TIRKS became a registered trademark in 1987, it became technically improper to use it as an acronym. TIRKS was one of many OSS technologies transferred to Bell Communications Research as part of the Modification of Final Judgment related to the AT&T divestiture on January 1, 1984. In the 1990s, the Facility and Equipment Planning System (FEPS) and Planning Workstation System (PWS) products were incorporated into the Telcordia TIRKS CE System. TIRKS is still in use at AT&T, Verizon, CenturyLink/ Lumen Technologies , and altafiber.
https://en.wikipedia.org/wiki/Trunks_Integrated_Record_Keeping_System
Truscon Laboratories was a research and development chemical laboratory of the Trussed Concrete Steel Company ("Truscon") of Detroit, Michigan . [ 1 ] It made waterproofing liquid chemical products that went into or on cement and plaster. The products' goals were to provide damp-proofing and waterproofing finishes for concrete and Truscon steel, to guard against the disintegrating action of water and air. [ 2 ] From Truscon Laboratories' viewpoint, waterproofing covered the methods and means of protecting underground construction such as foundations and footings. It also pertained to structures intended for retaining water, like water tanks, and for containing water under hydrostatic conditions, as in water pipes , tunnels , reservoirs , and cisterns . Damp-proofing was considered the set of methods for keeping dampness out of the main part of concrete buildings. It involved the methods of treating exposed walls above ground level to avoid the entrance of moisture into the building. These definitions then qualified the various products as serving particular needs. [ 3 ] From the initial idea of protecting against water damage developed the science of integral waterproofing: the introduction of some element into the wet cement during mixing, producing a high degree of impermeability and imperviousness. Water in masonry does harm structurally because of its solvent properties and because it expands when frozen, breaking concrete. Water is absorbed into concrete walls, much as a sponge absorbs water, through capillary action. A wet wall produces damp and clammy conditions that promote and spread disease. While structural damp-proofing and waterproofing to prevent decay was a motive of the Truscon Laboratories chemicals, a side benefit was that they provided better hygienic conditions. [ 3 ] Some of the products developed for damp-proofing were Por-Seal, Stone-Tex, Stone-Backing, and Plaster Bond. [ 4 ] Water Proofing Paste, an ingredient used in the making of stucco cement and plaster, [ 5 ] was developed for waterproofing. [ 4 ] Another waterproofing product was named Water Proofed Cement Stucco. [ 6 ] Truscon Laboratories' waterproofing protection products were for residential housing, apartment buildings, office buildings, hotels, hospitals, and manufacturing plants. [ 7 ] They included enamels and interior finishes. [ 8 ] The products were coatings to provide a dustless, waterproof, washable surface for cement floors and walls. [ 9 ] The Truscon Laboratories slogan was "Waterproof is Weatherproof". [ 10 ] Some of the brand names of these specialized products were Asepticote, Sno-Wite, Industrial Enamel, Hospital Enamel, Dairy Enamel, Floor Finish, Edelweiss and Alkali-Proof Wall Size. [ 11 ] Its Asepticote waterproofing product was used in houses, hospitals, and hotels for its eye-soothing finish. [ 12 ] Truscon's "Waterproofing Paste" was mixed integrally into cement and used in floors, plaster, and stucco to waterproof walls and floors. [ 13 ] "Industrial White" was used in the interiors of mills and factories because of its white brightness qualities. [ 8 ] Truscon's "Granatex Floor Varnish", a stain-resistant product, was a transparent waterproofing agent that was used on concrete and wood floors. [ 8 ] Truscon's Agatex was a popular chemical that hardened cement floors. It was a liquid applied to the surface of concrete like a paint or varnish. The product reacted chemically with the concrete and its dust to resemble agate, resulting in a hardened, dustless surface when dry.
The company claimed that the resultant treated surface was so hard that it would ring under a hammer like an anvil. [ 14 ] Truscon's "Stone-Tex" product was used on all kinds of masonry building exterior walls as a water-resistant protective coating. It was used on concrete blocks, cement walls, stucco and brick, [ 15 ] and also to make buildings more attractive. [ 16 ] Some of the thousands of Truscon Laboratories product users were The Cincinnati Enquirer, American Tobacco Company, R. J. Reynolds Tobacco Company, Atlantic Petroleum, Haynes Automobile Company, Pennsylvania Railroad, E. W. Bliss Company, United States Military Academy, United States Marine Barracks, United States Shipping Board, Ferry-Morse Seed Company, Lowell Mills, Arlington Mills, H. J. Heinz Company, Dow Chemical laboratories, Winterhaven Citrus Growers' Association, Savage Arms, Curtiss Aeroplane and Motor Company, Liggett-Myers Tobacco Company and Sinclair Oil Corporation, [ 17 ] as well as the St. Johns County School District on its Fullerwood Grade School building. Buildings that used Truscon's products were the Packard automobile factory plant building number 10, Highland Park Ford Plant, Fisher Building, Fisher Body, Frederick Stearns Building, Youth's Companion Building, Midland Packing Building, Majestic Theater (Detroit, Michigan), Beech-Nut, Waco High School, Kalamazoo Paper, Detroit Crosstown Garage, Pennsylvania Rubber Company building, Minneapolis High School, Detroit Athletic Club, Detroit News building, and Milton Bradley building. [ 18 ] Truscon Laboratories' iron and steel protection products were for priming structural steel in structures such as iron industrial building frames, factories, bridges, viaducts, stacks, and boilers. [ 19 ] They were also used with brewing coils, ice-making coils, fireproofing, and acid-proofing. [ 20 ] These paint-on products were waterproofing and rust-preventing agents. [ 21 ] [ 22 ] Many of these products went under the brand name of Bar-Ox and were given numbers that related to specific applications. [ 23 ] Examples were Bar-Ox No. 7 for coating exposed structural steel, Bar-Ox No. 14 for brine and condenser pipes, Bar-Ox No. 21 for stack enamel and boiler front enamel, Bar-Ox No. 28 for acid-proofing, Bar-Ox No. 35 for guarding against alkaline conditions, Bar-Ox No. 42 for conduit coating, and Bar-Ox No. 49 for gas holder tanks. [ 24 ] The Detroit operation's employees were organized by District 50 of the United Mine Workers. [ 25 ]
https://en.wikipedia.org/wiki/Truscon_Laboratories
A truss is an assembly of members such as beams , connected by nodes , that creates a rigid structure. [ 1 ] In engineering, a truss is a structure that "consists of two-force members only, where the members are organized so that the assemblage as a whole behaves as a single object". [ 2 ] A two-force member is a structural component where force is applied to only two points. Although this rigorous definition allows the members to have any shape connected in any stable configuration, architectural trusses typically comprise five or more triangular units constructed with straight members whose ends are connected at joints referred to as nodes . In this typical context, external forces and reactions to those forces are considered to act only at the nodes and result in forces in the members that are either tensile or compressive . For straight members, moments ( torques ) are explicitly excluded because, and only because, all the joints in a truss are treated as revolutes , as is necessary for the links to be two-force members. A planar truss is one where all members and nodes lie within a two-dimensional plane, while a space frame has members and nodes that extend into three dimensions . The top beams in a truss are called top chords and are typically in compression , and the bottom beams are called bottom chords , and are typically in tension . The interior beams are called webs , and the areas inside the webs are called panels , [ 3 ] or from graphic statics (see Cremona diagram ) polygons . [ 4 ] Truss derives from the Old French word trousse , from around 1200 AD, which means "collection of things bound together". [ 5 ] [ 6 ] The term truss has often been used to describe any assembly of members such as a cruck frame [ 7 ] [ 8 ] or a couple of rafters. [ 9 ] [ 10 ] A truss consists of typically (but not necessarily) straight members connected at joints, traditionally termed panel points . Trusses are typically (but not necessarily [ 11 ] ) composed of triangles because of the structural stability of that shape and design. A triangle is the simplest geometric figure that will not change shape when the lengths of the sides are fixed. [ 12 ] In comparison, both the angles and the lengths of a four-sided figure must be fixed for it to retain its shape. The simplest form of a truss is one single triangle. This type of truss is seen in a framed roof consisting of rafters and a ceiling joist , [ 13 ] and in other mechanical structures such as bicycles and aircraft. Because of the stability of this shape and the methods of analysis used to calculate the forces within it, a truss composed entirely of triangles is known as a simple truss. [ 14 ] However, a simple truss is often defined more restrictively by demanding that it can be constructed through successive addition of pairs of members, each connected to two existing joints and to each other to form a new joint, and this definition does not require a simple truss to comprise only triangles. [ 11 ] The traditional diamond-shape bicycle frame, which uses two conjoined triangles, is an example of a simple truss. [ 15 ] A planar truss lies in a single plane . [ 14 ] Planar trusses are typically used in parallel to form roofs and bridges. [ 16 ] The depth of a truss, or the height between the upper and lower chords, is what makes it an efficient structural form. A solid girder or beam of equal strength would have substantial weight and material cost as compared to a truss. 
For a given span, a deeper truss will require less material in the chords and greater material in the verticals and diagonals. An optimum depth of the truss will maximize the efficiency. [ 17 ] A space frame truss is a three-dimensional framework of members pinned at their ends. A tetrahedron shape is the simplest space truss, consisting of six members that meet at four joints. [ 14 ] Large planar structures may be composed from tetrahedrons with common edges, and they are also employed in the base structures of large free-standing power-line pylons. There are two basic types of truss, the pitched (or common) truss and the parallel-chord (or flat) truss; a combination of the two is a truncated truss, used in hip roof construction. A metal-plate-connected wood truss is a roof or floor truss whose wood members are connected with metal connector plates. In a Warren truss, the members form a series of equilateral triangles, alternating up and down. In an octet truss, the members are made up of all equivalent equilateral triangles; the minimum composition is two regular tetrahedrons along with an octahedron, and they fill up three-dimensional space in a variety of configurations. The Pratt truss was patented in 1844 by two Boston railway engineers, [ 18 ] Caleb Pratt and his son Thomas Willis Pratt. [ 19 ] The design uses vertical members for compression and diagonal members to respond to tension. The Pratt truss design remained popular as bridge designers switched from wood to iron, and from iron to steel. [ 20 ] This continued popularity of the Pratt truss is probably due to the fact that the configuration of the members means that longer diagonal members are only in tension for gravity load effects. This allows these members to be used more efficiently, as slenderness effects related to buckling under compression loads (which are compounded by the length of the member) will typically not control the design. Therefore, for a given planar truss with a fixed depth, the Pratt configuration is usually the most efficient under static, vertical loading. The Southern Pacific Railroad bridge in Tempe, Arizona, is a 393-meter-long (1,289 ft) truss bridge built in 1912. [ 21 ] [ 22 ] The structure, still in use today, consists of nine Pratt truss spans of varying lengths. The Wright Flyer used a Pratt truss in its wing construction, as the minimization of compression member lengths allowed for lower aerodynamic drag. [ 23 ] American architect Ithiel Town designed Town's Lattice Truss as an alternative to heavy-timber bridges. His design, patented in 1820 and 1835, uses easy-to-handle planks arranged diagonally with short spaces in between them to form a lattice. Named for their shape, bowstring trusses were first used for arched truss bridges, often confused with tied-arch bridges. Thousands of bowstring trusses were used during World War II for holding up the curved roofs of aircraft hangars and other military buildings. Many variations exist in the arrangements of the members connecting the nodes of the upper arc with those of the lower, straight sequence of members, from nearly isosceles triangles to a variant of the Pratt truss. One of the simplest truss styles to implement, the king post consists of two angled supports leaning into a common vertical support. The queen post truss, sometimes queenpost or queenspost, is similar to a king post truss in that the outer supports are angled towards the centre of the structure. The primary difference is the horizontal extension at the centre which relies on beam action to provide mechanical stability.
This truss style is only suitable for relatively short spans. [ 24 ] Lenticular trusses, patented in 1878 by William Douglas (although the Gaunless Bridge of 1823 was the first of the type), have the top and bottom chords of the truss arched, forming a lens shape. A lenticular pony truss bridge is a bridge design that involves a lenticular truss extending above and below the roadbed. The members of a Vierendeel structure are not triangulated but form rectangular openings. The structure has a frame with fixed joints that are capable of transferring and resisting bending moments. As such, it does not fit the definition of a truss, since it contains non-two-force members: regular trusses comprise members that are commonly assumed to have pinned joints, with the implication that no moments exist at the jointed ends. This style of structure was named after the Belgian engineer Arthur Vierendeel, [ 26 ] who developed the design in 1896. It is rarely used for bridges because of higher costs compared to a triangulated truss, but in buildings it has the advantage that a large amount of the exterior envelope remains unobstructed and it can therefore be used for windows and door openings. In some applications this is preferable to a braced-frame system, which would leave some areas obstructed by the diagonal braces. A truss that is assumed to comprise members that are connected by means of pin joints, and which is supported at both ends by means of hinged joints and rollers, is described as being statically determinate. Newton's laws apply to the structure as a whole, as well as to each node or joint. In order for any node that may be subject to an external load or force to remain static in space, the following conditions must hold: the sums of all (horizontal and vertical) forces, as well as all moments acting about the node, equal zero. Analysis of these conditions at each node yields the magnitude of the compression or tension forces. Trusses that are supported at more than two positions are said to be statically indeterminate, and the application of Newton's laws alone is not sufficient to determine the member forces. In order for a truss with pin-connected members to be stable, it does not need to be entirely composed of triangles. [ 11 ] In mathematical terms, the following necessary condition for stability of a simple truss exists: m + r ≥ 2j, where m is the total number of truss members, j is the total number of joints and r is the number of reactions (equal to 3 generally) in a 2-dimensional structure. When m = 2j − 3, the truss is said to be statically determinate, because the m + 3 internal member forces and support reactions can then be completely determined by 2j equilibrium equations, once the external loads and the geometry of the truss are known. Given a certain number of joints, this is the minimum number of members, in the sense that if any member is taken out (or fails), then the truss as a whole fails. While this relation is necessary, it is not sufficient for stability, which also depends on the truss geometry, support conditions and the load carrying capacity of the members. Some structures are built with more than this minimum number of truss members. Those structures may survive even when some of the members fail. Their member forces depend on the relative stiffness of the members, in addition to the equilibrium condition described.
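As a quick, hand-rolled illustration of the counting rule above (not taken from the source article; the function name and the example numbers are mine), the following Python sketch compares unknowns with available equilibrium equations to classify a planar pin-jointed truss:

```python
# Minimal sketch: classify a planar pin-jointed truss by comparing unknowns
# (m member forces + r reactions) with the 2j joint equilibrium equations.

def classify_planar_truss(members: int, joints: int, reactions: int = 3) -> str:
    """Apply the necessary condition m + r >= 2j for a planar truss.

    The check is necessary but not sufficient: geometry, support layout and
    member capacity can still make a truss unstable even when it passes.
    """
    unknowns = members + reactions      # m member forces + r support reactions
    equations = 2 * joints              # two equilibrium equations per joint
    if unknowns < equations:
        return "unstable (a mechanism: too few members or reactions)"
    if unknowns == equations:
        return "statically determinate (if the geometry is sound)"
    return "statically indeterminate (redundant members)"

# A single triangle on a pin and a roller: m = 3, j = 3, r = 3 -> determinate.
print(classify_planar_truss(members=3, joints=3))
# The flat truss mentioned later in the article: 9 joints, 15 members, 3 reactions.
print(classify_planar_truss(members=15, joints=9))
```

As the text notes, passing this count is only a necessary condition; a badly arranged geometry or support layout can still leave the truss unstable.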
Because the forces in each of its two main girders are essentially planar, a truss is usually modeled as a two-dimensional plane frame. However, if there are significant out-of-plane forces, the structure must be modeled as a three-dimensional space frame. The analysis of trusses often assumes that loads are applied to joints only and not at intermediate points along the members. The weight of the members is often insignificant compared to the applied loads and so is often omitted; alternatively, half of the weight of each member may be applied to its two end joints. Provided that the members are long and slender, the moments transmitted through the joints are negligible, and the junctions can be treated as "hinges" or "pin-joints". Under these simplifying assumptions, every member of the truss is then subjected to pure compression or pure tension forces – shear, bending moment, and other more-complex stresses are all practically zero. Trusses are physically stronger than other ways of arranging structural elements, because nearly every material can resist a much larger load in tension or compression than in shear, bending, torsion, or other kinds of force. These simplifications make trusses easier to analyze. Structural analysis of trusses of any type can readily be carried out using a matrix method such as the direct stiffness method, the flexibility method, or the finite-element method. Illustrated is a simple, statically determinate flat truss with 9 joints and (2 × 9) − 3 = 15 members. External loads are concentrated in the outer joints. Since this is a symmetrical truss with symmetrical vertical loads, the reactive forces at A and B are vertical, equal, and half the total load. The internal forces in the members of the truss can be calculated in a variety of ways, including graphical methods: A truss can be thought of as a beam where the web consists of a series of separate members instead of a continuous plate. In the truss, the lower horizontal member (the bottom chord) and the upper horizontal member (the top chord) carry tension and compression, fulfilling the same function as the flanges of an I-beam. Which chord carries tension and which carries compression depends on the overall direction of bending. In the truss pictured above right, the bottom chord is in tension, and the top chord in compression. The diagonal and vertical members form the truss web, and carry the shear stress. Individually, they are also in tension and compression; the exact arrangement of forces depends on the type of truss and again on the direction of bending. In the truss shown above right, the vertical members are in tension, and the diagonals are in compression. In addition to carrying the static forces, the members serve additional functions of stabilizing each other, preventing buckling. In the adjacent picture, the top chord is prevented from buckling by the presence of bracing and by the stiffness of the web members. The inclusion of the elements shown is largely an engineering decision based upon economics, being a balance between the costs of raw materials, off-site fabrication, component transportation, on-site erection, the availability of machinery, and the cost of labor. In other cases the appearance of the structure may take on greater importance and so influence the design decisions beyond mere matters of economics. Modern materials such as prestressed concrete and fabrication methods such as automated welding have significantly influenced the design of modern bridges.
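To make the joint-by-joint equilibrium procedure described above concrete, here is a minimal NumPy sketch for a single-triangle truss on a pin and a roller. The geometry, the 10 kN load, the support layout and every name in it are invented purely for illustration; the sign convention is positive for tension.

```python
# Illustrative "method of joints": assemble the 2j joint equilibrium equations
# for a statically determinate planar truss and solve them with NumPy.
import numpy as np

nodes = {"A": (0.0, 0.0), "B": (4.0, 0.0), "C": (2.0, 3.0)}
members = [("A", "B"), ("A", "C"), ("B", "C")]
reactions = [("A", 0), ("A", 1), ("B", 1)]   # pin at A (x and y), roller at B (y)
loads = {"C": (0.0, -10.0)}                  # 10 kN downward at the apex, in kN

node_index = {name: i for i, name in enumerate(nodes)}
n_unknowns = len(members) + len(reactions)
A = np.zeros((2 * len(nodes), n_unknowns))
b = np.zeros(2 * len(nodes))

# A member in tension pulls each of its end joints toward the other end.
for k, (i, j) in enumerate(members):
    (xi, yi), (xj, yj) = nodes[i], nodes[j]
    length = np.hypot(xj - xi, yj - yi)
    ux, uy = (xj - xi) / length, (yj - yi) / length
    A[2 * node_index[i], k] += ux
    A[2 * node_index[i] + 1, k] += uy
    A[2 * node_index[j], k] -= ux
    A[2 * node_index[j] + 1, k] -= uy

# Each unknown support reaction acts as a unit force at its node and direction.
for k, (name, direction) in enumerate(reactions, start=len(members)):
    A[2 * node_index[name] + direction, k] = 1.0

# The sum of forces at every joint must be zero, so external loads move to the RHS.
for name, (fx, fy) in loads.items():
    b[2 * node_index[name]] -= fx
    b[2 * node_index[name] + 1] -= fy

solution = np.linalg.solve(A, b)
for (i, j), force in zip(members, solution[: len(members)]):
    state = "tension" if force > 0 else "compression"
    print(f"{i}{j}: {force:+.2f} kN ({state})")
```

Run as-is, it reports the bottom chord in tension and the two sloping members in compression, matching the qualitative description of chord and web behaviour given above.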
Once the force on each member is known, the next step is to determine the cross section of the individual truss members. For members under tension the cross-sectional area A can be found using A = F × γ / σ y , where F is the force in the member, γ is a safety factor (typically 1.5 but dependent on building codes) and σ y is the yield tensile strength of the steel used. The members under compression also have to be designed to be safe against buckling. The weight of a truss member depends directly on its cross section—that weight partially determines how strong the other members of the truss need to be. Giving one member a larger cross section than on a previous iteration requires giving other members a larger cross section as well, to hold the greater weight of the first member—one needs to go through another iteration to find exactly how much greater the other members need to be. Sometimes the designer goes through several iterations of the design process to converge on the "right" cross section for each member. On the other hand, reducing the size of one member from the previous iteration merely makes the other members have a larger (and more expensive) safety factor than is technically necessary, but does not require another iteration to find a buildable truss. The effect of the weight of the individual truss members in a large truss, such as a bridge, is usually insignificant compared to the force of the external loads. After determining the minimum cross section of the members, the last step in the design of a truss would be detailing of the bolted joints, e.g., involving shear stress of the bolt connections used in the joints. Based on the needs of the project, truss internal connections (joints) can be designed as rigid, semi-rigid, or hinged. Rigid connections can allow transfer of bending moments, leading to development of secondary bending moments in the members. Component connections are critical to the structural integrity of a framing system. In buildings with large, clear-span wood trusses, the most critical connections are those between the truss and its supports. In addition to gravity-induced forces (a.k.a. bearing loads), these connections must resist shear forces acting perpendicular to the plane of the truss and uplift forces due to wind. Depending upon overall building design, the connections may also be required to transfer bending moment. Wood posts enable the fabrication of strong, direct, yet inexpensive connections between large trusses and walls. Exact details for post-to-truss connections vary from designer to designer, and may be influenced by post type. Solid-sawn timber and glulam posts are generally notched to form a truss-bearing surface. The truss is rested on the notches and bolted into place. A special plate/bracket may be added to increase connection load-transfer capabilities. With mechanically laminated posts, the truss may rest on a shortened outer ply or on a shortened inner ply. The latter scenario places the bolts in double shear and is a very effective connection.
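The tension-sizing rule A = F × γ / σ y quoted at the start of this passage is straightforward to mechanize. In the sketch below, the 1.5 safety factor, the S355-like yield strength and the example force are illustrative defaults rather than values taken from any particular code:

```python
# Illustrative sketch of the tension-member sizing rule A = F * gamma / sigma_y.

def required_area_mm2(force_kn: float, safety_factor: float = 1.5,
                      yield_strength_mpa: float = 355.0) -> float:
    """Minimum cross-sectional area (mm^2) for a member in pure tension."""
    force_n = force_kn * 1e3                              # kN -> N
    return force_n * safety_factor / yield_strength_mpa   # N / (N/mm^2) = mm^2

print(f"{required_area_mm2(120.0):.0f} mm^2")   # about 507 mm^2 for a 120 kN member
```

Members in compression would additionally need a buckling check, which depends on length and cross-sectional shape rather than on area alone.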
https://en.wikipedia.org/wiki/Truss
A truss connector plate, or gang plate, is a kind of tie. Truss plates are light gauge metal plates used to connect prefabricated light frame wood trusses. They are produced by punching light gauge galvanized steel to create teeth on one side. The teeth are embedded in and hold the wooden frame components to the plate and each other. Nail plates are used to connect timber of the same thickness in the same plane. When used on trusses, they are pressed into the side of the timber using tools such as a hydraulic press or a roller. As the plate is pressed in, the teeth are all driven into the wood fibers simultaneously, and the compression between adjacent teeth reduces the tendency of the wood to split. A truss connector plate is manufactured from ASTM A653/A653M, A591, A792/A792M, or A167 structural quality steel and is protected with zinc or zinc-aluminum alloy coatings or their stainless steel equivalent. Metal connector plates are manufactured with varying length, width and thickness (or gauge) and are designed to laterally transmit loads in wood. They are also known as stud ties, metal connector plates, mending plates, or nail plates. However, not all types of nail plates are approved for use in trusses and other structurally critical placements. [ 1 ] John Calvin Jureit invented the truss connector plate and patented it in 1955. [ 2 ] He formed the company Gang-Nails, Inc., which was later renamed Automated Building Components, Inc. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Truss_connector_plate
Intel Trust Domain Extensions (TDX) is a CPU-level technology proposed by Intel in May 2021 for implementing a trusted execution environment in which virtual machines (called "Trust Domains", or TDs) are hardware-isolated from the host's Virtual Machine Monitor (VMM), hypervisor, and other software on the host. This hardware isolation is intended to prevent threat actors with administrative access or physical access to the virtual machine host from compromising aspects of the TD virtual machine's confidentiality and integrity. TDX also supports a remote attestation feature which allows users to determine that a remote system has TDX protections enabled prior to sending it sensitive data. [ 1 ] Intel TDX is of particular use for cloud providers, as it increases isolation of customer virtual machines and provides a higher level of assurance that the cloud provider cannot access the customer's data. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Intel TDX is described in pending US patent application number 20210141658A1. [ 6 ] TDX consists of multiple components including Virtual Machine Extensions (VMX) instruction set architecture (ISA) extensions, a technology for memory encryption, and a new CPU operation mode called SEAM ("Secure Arbitration Mode"), which hosts the TDX module. [ 7 ] TDX defines two classes of memory: shared memory and private memory. Shared memory is intended to be used for communicating with the TD host and may receive some TDX protections. Private memory receives full TDX confidentiality and integrity protections. TDX implements memory protection by encrypting the TD's memory with a per-TD AES-XTS 128-bit key. To avoid leaking ciphertext, memory access is limited to being from the SEAM mode and direct memory access is unavailable. If memory integrity protections are enabled, a MAC using SHA-3-256 is generated for the private memory, and if the MAC validation fails, the TD VM is terminated. TD VM registers are also kept confidential by storing them in a per-TD save state and scrubbing them when the TD returns control to the VMM. [ 1 ] [ 8 ] TDX provides hardware isolation of TD VMs by brokering all VMM-to-TD communication through the TDX module and preventing the VMM from accessing the TD's data. The VMM communicates with the TDX module using new SEAMCALL and SEAMRET CPU instructions. SEAMCALL is used by the VMM to invoke the TDX module to create, delete, or execute a TD. SEAMRET is used by the TDX module to return execution back to the VMM. [ 1 ] [ 9 ] [ 10 ] TDX's remote attestation feature builds on the SGX technology to allow someone to determine that a remote TD has TDX protections enabled prior to sending it sensitive data. The remote attestation report can be generated by the TDX module calling the SEAMREPORT instruction. The SEAMREPORT instruction generates a MAC-signed "Report" structure which includes information such as the version numbers of the TDX's components. The VMM would then use SGX enclaves to convert that "Report" structure into a remotely verifiable "Quote", which it would send to the system requesting attestation. [ 1 ] TDX is available for 5th generation Intel Xeon processors (codename Emerald Rapids) and Edge Enhanced Compute variants of 4th generation Xeon processors (codename Sapphire Rapids). [ 11 ] First patches to support TDX technology in the Linux kernel were posted to the Linux kernel mailing list around June 2021, [ 12 ] were merged on May 24, 2022, and were included in the mainline Linux kernel version 5.19. [ 13 ]
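The report-then-quote flow described above can be modelled very loosely in ordinary code. The sketch below is purely conceptual: every class, field, key and function name is hypothetical, the HMAC stands in for hardware-held keys, and real TDX reports and quotes are fixed binary structures produced by the CPU and the SGX-based quoting enclave, not Python objects.

```python
# Toy model of the attestation flow: TD report -> quoting step -> remote verifier.
import hashlib
import hmac
from dataclasses import dataclass

PLATFORM_MAC_KEY = b"hardware-held key (illustrative only)"

@dataclass
class Report:                     # stands in for the MAC-protected SEAMREPORT output
    module_version: str
    td_measurement: str           # hash of the TD's initial contents (hypothetical)
    user_data: str                # e.g. a verifier-supplied nonce
    mac: bytes

def seamreport(version: str, measurement: str, nonce: str) -> Report:
    body = f"{version}|{measurement}|{nonce}".encode()
    return Report(version, measurement, nonce,
                  hmac.new(PLATFORM_MAC_KEY, body, hashlib.sha256).digest())

def make_quote(report: Report) -> dict:
    """Toy stand-in for the quoting enclave: check the local MAC, then emit
    something a remote party could inspect (here just a dict)."""
    body = f"{report.module_version}|{report.td_measurement}|{report.user_data}".encode()
    expected = hmac.new(PLATFORM_MAC_KEY, body, hashlib.sha256).digest()
    if not hmac.compare_digest(report.mac, expected):
        raise ValueError("report MAC check failed")
    return {"version": report.module_version,
            "measurement": report.td_measurement,
            "nonce": report.user_data}

def verifier_accepts(quote: dict, expected_measurement: str, nonce: str) -> bool:
    """Remote side: only release secrets to a TD whose measurement and nonce match."""
    return quote["measurement"] == expected_measurement and quote["nonce"] == nonce

nonce = "57a1-example-nonce"                          # chosen by the verifier
quote = make_quote(seamreport("1.5", "abc123", nonce))
print(verifier_accepts(quote, expected_measurement="abc123", nonce=nonce))
```

In a real deployment the quote is signed with an attestation key rooted in Intel's infrastructure rather than checked against a shared secret; the sketch only illustrates the ordering of the steps.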
Microsoft Azure announced that, as of April 24, 2023, its new DCesv5-series and ECesv5-series virtual machines would support Intel TDX. [ 14 ] Microsoft has also published information on how to use Intel TDX as part of Microsoft Azure Attestation. [ 15 ] TDX is somewhat similar to SGX, in that both are implementations of trusted execution environments. However, they differ significantly in the scope of their protections, and SGX requires that applications be rewritten to support it, while TDX only requires support at the hardware and operating system levels. [ 16 ] On the VMM host, TDX involves the use of SGX enclaves to enable support for remote attestation. Additionally, even an operating system which does not support running as a TD VM can be protected by being launched as a nested VM within a TD VM. [ 1 ]
https://en.wikipedia.org/wiki/Trust_Domain_Extensions
Trust boundary is a term used in computer science and security which describes a boundary where program data or execution changes its level of "trust," or where two principals with different capabilities exchange data or commands. The term refers to any distinct boundary where within a system all sub-systems (including data) have equal trust. [ 1 ] An example of an execution trust boundary would be where an application attains an increased privilege level (such as root ). [ 2 ] A data trust boundary is a point where data comes from an untrusted source--for example, user input or a network socket. [ 3 ] A "trust boundary violation" refers to a vulnerability where computer software trusts data that has not been validated before crossing a boundary. [ 4 ]
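A minimal sketch of the idea (my example, not from the cited sources): data arriving from an untrusted source is validated at the boundary, so code on the trusted side never consumes unchecked input.

```python
# Minimal illustration of validating data as it crosses a trust boundary.

def parse_port(raw: str) -> int:
    """Trust boundary: 'raw' comes from user input or the network, so it is
    validated here; code past this point may assume a sane value."""
    try:
        port = int(raw)
    except ValueError:
        raise ValueError("port must be an integer") from None
    if not 1 <= port <= 65535:
        raise ValueError("port out of range")
    return port

# Skipping this check and passing raw input straight to trusted code would be
# a trust boundary violation in the sense described above.
print(parse_port("8080"))
```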
https://en.wikipedia.org/wiki/Trust_boundary
Trust in Numbers: The Pursuit of Objectivity in Science and Public Life is a book by Theodore Porter, published in 1995 by Princeton University Press, [ 1 ] that proposes that quantification in public life is driven by bureaucratic necessities to obtain legitimacy through objectivity. In Trust in Numbers: The Pursuit of Objectivity in Science and Public Life, Theodore Porter reverses the classic notion that quantification descends from the successes of the natural sciences being adopted by other disciplines, to investigate instead the opposite movement, whereby quantification is driven by political, administrative and bureaucratic necessities to standardize, communicate, and obtain legitimacy through objectivity. [ 2 ] The appeal of numbers is especially compelling to bureaucratic officials who lack the mandate of a popular election or divine right (p. 8). [ 1 ] After noting how officials fear being criticized for arbitrariness and bias, he concludes that "a decision made by the numbers (or by explicit rules of some other sort) has at least the appearance of being fair and impersonal" (p. 8). [ 1 ] Thus, "trust may sometimes be based less on the solidity of the numbers themselves than on the needs of expert and client communities". [ 3 ] Described as the book that comes closest to establishing a common theoretical language for the sociology of quantification, [ 4 ] Porter's work adopts a historical and sociological style of analysis that is indebted to Bruno Latour and Steven Shapin. [ 2 ] An important element of Porter's analysis concerns the meaning of objectivity, how it has arisen historically, and what role numbers have played in its construction. [ 2 ] For Porter, 'mechanical objectivity' is sought and obtained via quantitative methods that ensure procedural forms of accountability. He calls these procedures 'technologies of distance' that ensure compliance with impersonal rules excluding bias and personal preferences. [ 5 ] Based on a number of case studies in different countries — actuaries in the UK and US, engineers in France and in the US — Porter demonstrates that the allure of quantitative and standardized measures does not derive from their success in the natural sciences, but arises from the need of professional groups to "respond to external social and political pressures demanding accountability". [ 5 ] The author traces the history of cost-benefit analysis in a way that makes evident the bureaucratic and political conflicts whereby actuaries and experts of different disciplines fought to maintain structures of power and privilege within national styles and contexts. In the US, tensions existed between the Bureau of Reclamation and the Army Corps of Engineers. Each institution produced cost-benefit analyses that were designed to favor the respective interests of the two institutions and their respective stakeholders. In Victorian England, actuaries and accountants fought to thwart attempts by the authorities to introduce standards of accounting, so as to defend the nuanced expertise of their respective crafts. [ 3 ] In a sense, Porter's work makes clear how objectivity is an alternative to personal trust. [ 6 ] He illustrates the point by comparing the practices and contexts of the Army Corps of Engineers in the US versus those of Les Ingénieurs des Ponts et Chaussées in France (pp. 114–190). [ 1 ] The very last chapter of Trust in Numbers shows — following a critical path opened by Sharon Traweek, p.
222 [ 1 ] — that in the most highly developed and leading research communities, for example among high-energy physicists, numbers and quantification are not center stage; that place is taken by a community of trust, in which a "personal knowledge" is at play that ensures the creativity and vitality of the discipline, a point made by other STS scholars. [ 7 ] [ 8 ] As noted by the author, the quantitative element of 'mechanical objectivity' is more present in academic fields like economics, sociology and psychology than it is in physics. This chapter has been suggested as the most relevant for practicing research scientists. [ 3 ] For sociologist Trevor Pinch, the most important aspect of this book is that it demystifies the notion that the more mathematical the science, the higher its prestige, and that it achieves this through a comparative investigation of how different sciences make use of mathematics in different contexts. [ 2 ] Trust in Numbers has been suggested as being of particular relevance to the field of Digital Humanities. [ 6 ] In 1997, Porter was awarded the Ludwik Fleck Prize for Trust in Numbers. [ 5 ] More than 40 reviews have been written about the book, including by Michel Callon, Philip Mirowski, Sheila Jasanoff, Roy MacLeod, Mary S. Morgan, Trevor Pinch, Jerry Ravetz, Jessica Riskin, E. Roy Weintraub, and many others. [ 5 ]
https://en.wikipedia.org/wiki/Trust_in_Numbers
In information systems and information technology, trust management is an abstract system that processes symbolic representations of social trust, usually to aid an automated decision-making process. Such representations, e.g. in the form of cryptographic credentials, can link the abstract system of trust management with results of trust assessment. Trust management is popular in implementing information security, specifically access control policies. The concept of trust management was introduced by Matt Blaze [ 1 ] [ 2 ] to aid the automated verification of actions against security policies. In this concept, actions are allowed if the requester demonstrates sufficient credentials, irrespective of their actual identity, separating the symbolic representation of trust from the actual person. Trust management can be best illustrated through the everyday experience of tickets. One can buy a ticket that entitles them, for example, to enter a stadium. The ticket acts as a symbol of trust, stating that the bearer of the ticket has paid for their seat and is entitled to enter. However, once bought, the ticket can be transferred to someone else, thus transferring such trust in a symbolic way. At the gate, only the ticket will be checked, not the identity of the bearer. Trust management can be seen as a symbol-based automation of social decisions related to trust, [ 3 ] where social agents instruct their technical representations how to act while meeting technical representations of other agents. Further automation of this process can lead to automated trust negotiations (e.g. see Winslett [ 4 ] ) where technical devices negotiate trust by selectively disclosing credentials, according to rules defined by the social agents that they represent. The definition and perspective on trust management was expanded in 2000 to include concepts of honesty, truthfulness, competence and reliability, in addition to trust levels, the nature of the trust relationship and the context. [ 5 ] Web Services Trust Language (WS-Trust) [ 6 ] brings trust management into the environment of web services. The core proposition remains generally unchanged: the web service (verifier) accepts a request only if the request contains proofs of claims (credentials) that satisfy the policy of the web service. It is also possible to let technical agents monitor each other's behaviour and respond accordingly by increasing or decreasing trust. Such systems are collectively called Trust-Based Access Control (TBAC) [ 7 ] and their applicability has been studied for several different application areas. [ 8 ] An alternative view on trust management [ 9 ] questions the possibility of technically managing trust, and focuses on supporting the proper assessment of the extent of trust one person has in the other. Trust management is also studied in specific IT-related fields such as transportation. [ 10 ] Trust management is currently an important topic in online social networks. [ 11 ]
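The ticket analogy can be sketched in a few lines of code. This is only an illustration of the principle that the decision depends on the credentials presented rather than on who presents them; the policy format is invented and is not the syntax of PolicyMaker, KeyNote or WS-Trust.

```python
# Illustrative credential-based authorization in the spirit of trust management:
# a request is allowed if the credentials presented satisfy the policy,
# regardless of who the bearer is -- like checking a ticket at the gate.

POLICY = {
    "enter_stadium": {"ticket"},                  # action -> credentials required
    "enter_vip_lounge": {"ticket", "vip_pass"},
}

def is_allowed(action: str, presented_credentials: set[str]) -> bool:
    """Allow the action only if every required credential is presented."""
    required = POLICY.get(action)
    return required is not None and required <= presented_credentials

print(is_allowed("enter_stadium", {"ticket"}))        # True: the ticket suffices
print(is_allowed("enter_vip_lounge", {"ticket"}))     # False: missing vip_pass
```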
https://en.wikipedia.org/wiki/Trust_management_(information_system)
Trust on first use (TOFU), or trust upon first use (TUFU), is an authentication scheme [ 1 ] used by client software which needs to establish a trust relationship with an unknown or not-yet-trusted endpoint. In a TOFU model, the client will try to look up the endpoint's identifier, usually either the public identity key of the endpoint or the fingerprint of said identity key, in its local trust database. If no identifier exists yet for the endpoint, the client software will either prompt the user to confirm they have verified that the purported identifier is authentic, or, if manual verification is not assumed to be possible in the protocol, the client will simply trust the identifier which was given and record the trust relationship into its trust database. If in a subsequent connection a different identifier is received from the opposing endpoint, the client software will consider it to be untrusted. In the SSH protocol, most client software (though not all [ 2 ] ) will, upon connecting to a not-yet-trusted server, display the server's public key fingerprint and prompt the user to verify they have indeed authenticated it using an authenticated channel. The client will then record the trust relationship into its trust database. A new identifier will cause a blocking warning that requires manual removal of the currently stored identifier. The XMPP client Conversations uses Blind Trust Before Verification, [ 3 ] where all identifiers are blindly trusted until the user demonstrates the will and ability to authenticate endpoints by scanning the QR-code representation of the identifier. After the first identifier has been scanned, the client will display a shield symbol for messages from authenticated endpoints, and a red background for others. In Signal the endpoints initially blindly trust the identifier and display non-blocking warnings when it changes. The identifier can be verified either by scanning a QR-code, or by exchanging the decimal representation of the identifier (called a Safety Number) over an authenticated channel. The identifier can then be marked as verified. This changes the nature of identifier change warnings from non-blocking to blocking. [ 4 ] In Jami and Ricochet, for example, the identifier is the user's call-sign itself. The ID can be exchanged over any channel, but until the identifier is verified over an authenticated channel, it is effectively blindly trusted. An identifier change also requires an account change, thus a MITM attack on the same account requires access to the endpoint's private key. In WhatsApp the endpoint initially blindly trusts the identifier, and by default no warning is displayed when the identifier changes. If the user demonstrates the will and ability to authenticate endpoints by accessing the key fingerprint (called the Security Code), the client will prompt the user to enable non-blocking warnings when the identifier changes. The WhatsApp client does not allow the user to mark the identifier as verified. In Telegram's optional secret chats the endpoints blindly trust the identifier. A changed identifier spawns a new secret chat window instead of displaying any warning. The identifiers can be verified by comparing the visual or hexadecimal representation of the identifier. The Telegram client does not allow the user to mark the identifier as verified.
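The generic TOFU check described at the top of this entry can be sketched as follows; the storage format, prompts and function names are invented, loosely modelled on SSH known-hosts behaviour rather than on any specific client's code.

```python
# Illustrative TOFU check against a local trust database of pinned fingerprints.

trust_db: dict[str, str] = {}          # endpoint -> pinned key fingerprint

def tofu_check(endpoint: str, fingerprint: str, interactive: bool = True) -> bool:
    pinned = trust_db.get(endpoint)
    if pinned is None:                                   # first use
        if interactive:
            answer = input(f"New endpoint {endpoint} with fingerprint "
                           f"{fingerprint}. Trust it? (yes/no) ")
            if answer.strip().lower() != "yes":
                return False
        trust_db[endpoint] = fingerprint                 # record the relationship
        return True
    if pinned == fingerprint:                            # matches the pinned value
        return True
    print(f"WARNING: fingerprint for {endpoint} changed; possible MITM. "
          f"Remove the stored entry manually if the change is legitimate.")
    return False                                         # blocking-style warning
```

A first connection pins the fingerprint (after an optional manual confirmation), later connections succeed silently while the fingerprint matches, and a mismatch produces the blocking-style warning discussed above.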
In Keybase the clients can cross-sign each other's keys, which means trusting a single identifier allows verification of multiple identifiers. Keybase acts as a trusted third party that verifies a link between a Keybase account and the account's signature chain that contains the identifier history. The identifier used in Keybase is either the hash of the root of the user's signature chain, or the Keybase account name tied to it. Until the user verifies the authenticity of the signature chain's root hash (or the Keybase account) over an authenticated channel, the account and its associated identifiers are essentially blindly trusted, and the user is susceptible to a MITM attack. [ citation needed ] The single largest strength of any TOFU-style model is that a human being must initially validate every interaction. A common application of this model is the use of ssh-rpc 'bot' users between computers, whereby public keys are distributed to a set of computers for automated access from centralized hosts. The TOFU aspect of this application forces a sysadmin (or other trusted user) to validate the remote server's identity upon first connection. For end-to-end encrypted communication, the TOFU model allows authenticated encryption without the complex procedure of obtaining a personal certificate, which is vulnerable to CA compromise. Compared to the Web of Trust, TOFU has less maintenance overhead. The largest weakness of TOFU models that require manual verification is their inability to scale for large groups or computer networks. The maintenance overhead of keeping track of identifiers for every endpoint can quickly scale beyond the capabilities of the users. In environments where the authenticity of the identifier cannot be verified easily enough (for example, the IT staff of a workplace or educational facility might be hard to reach), the users tend to blindly trust the identifier of the opposing endpoint. Accidentally approved identifiers of attackers may also be hard to detect if the man-in-the-middle attack persists. As a new endpoint always involves a new identifier, no warning about a potential attack is displayed. This has caused a misconception among users that it is safe to proceed without verifying the authenticity of the initial identifier, regardless of whether the identifier is presented to the user or not. Warning fatigue has pushed many messaging applications to remove blocking warnings to prevent users from reverting to less secure applications that do not feature end-to-end encryption in the first place. Out-of-sight identifier verification mechanisms reduce the likelihood that secure authentication practices are discovered and adopted by the users. The first known formal use of the term TOFU or TUFU was by CMU researchers Dan Wendlandt, David Andersen, and Adrian Perrig in their research paper "Perspectives: Improving SSH-Style Host Authentication With Multi-Path Probing", published in 2008 at the Usenix Annual Technical Conference. [ 5 ] Moxie Marlinspike mentioned Perspectives and the term TOFU in the DEF CON 18 proceedings, with reference to comments made by Dan Kaminsky during the panel discussion "An Open Letter, A Call to Action". An audience suggestion was raised implying the superiority of the SSH public key infrastructure (PKI) model over the SSL/TLS PKI model, and the following exchange ensued: Anonymous : "...so, if we dislike the certificate model in the (tls) PKI, but we like, say, the SSH PKI, which seems to work fairly well, basically the fundamental thing is: if I give my data to someone, I trust them with the data. So I should be remembering their certificate.
If someone else comes in with a different certificate, signed by a different authority, I still don't trust them. And if we did it that way, then that would solve a lot of the problems- it would solve the problems of rogue CAs, to some extent, it wouldn't help you with the initial bootstrapping but the initial bootstrapping would use the initial model, and then for continued interaction with the site you would use the ssh model which would allow you continued strength beyond what we have now. So the model we have now can be continued to be re-used, for only the initial acceptance. So why don't we do this?" Dan : "So, I'm a former SSH developer, and let me walk very quickly, every time there's an error in the ssh key generation, the user is asked, 'please type yes to trusting this new key', or, 'please go into your known hosts file and delete that value', and every last time they do it, because it's always the fault of a server misconfiguration. The SSH model is cool, it don't scale". Moxie : " And I would just add, what you're talking about is called 'Trust on First Use', or 'tofu' , and there's a project that I'm involved in called perspectives, that tries to leverage that to be less confusing than the pure SSH model, and I think it's a really great project and you should check it out if you're interested in alternatives to the CA system." The topics of trust, validation, non-repudiation are fundamental to all work in the field of cryptography and digital security .
https://en.wikipedia.org/wiki/Trust_on_first_use
In computing , a trusted client is a device or program controlled by the user of a service, but with restrictions designed to prevent its use in ways not authorized by the provider of the service. That is, the client is a device that vendors trust and then sell to the consumers, whom they do not trust. Examples include video games played over a computer network or the Content Scramble System (CSS) in DVDs . Trusted client software is considered fundamentally insecure: once the security is broken by one user, the break is trivially copyable and available to others. As computer security specialist Bruce Schneier states, "Against the average user, anything works; there's no need for complex security software. Against the skilled attacker, on the other hand, nothing works." [ 1 ] Trusted client hardware is somewhat more secure, but not a complete solution. [ 2 ] Trusted clients are attractive to business as a form of vendor lock-in : sell the trusted client at a loss and charge more than would be otherwise economically viable for the associated service. One early example was radio receivers that were subsidized by broadcasters, but restricted to receiving only their radio station. Modern examples include video recorders being forced by law to include Macrovision copy protection, the DVD region code system and region-coded video game consoles . Trusted computing aims to create computer hardware which assists in the implementation of such restrictions in software , and attempts to make circumvention of these restrictions more difficult.
https://en.wikipedia.org/wiki/Trusted_client
A trusted execution environment (TEE) is a secure area of a main processor. It helps code and data loaded inside it to be protected with respect to confidentiality and integrity. Data confidentiality prevents unauthorized entities from outside the TEE from reading data, while code integrity prevents code in the TEE from being replaced or modified by unauthorized entities, which may also be the computer owner itself, as in certain DRM schemes described in Intel SGX. This is done by implementing unique, immutable, and confidential architectural security, which offers hardware-based memory encryption that isolates specific application code and data in memory. This allows user-level code to allocate private regions of memory, called enclaves, which are designed to be protected from processes running at higher privilege levels. [ 1 ] [ 2 ] [ 3 ] A TEE as an isolated execution environment provides security features such as isolated execution, integrity of applications executing with the TEE, and confidentiality of their assets. In general terms, the TEE offers an execution space that provides a higher level of security for trusted applications running on the device than a rich operating system (OS) and more functionality than a 'secure element' (SE). The Open Mobile Terminal Platform (OMTP) first defined TEE in their "Advanced Trusted Environment: OMTP TR1" standard, defining it as a "set of hardware and software components providing facilities necessary to support applications" which had to meet the requirements of one of two defined security levels. The first security level, Profile 1, was targeted against only software attacks, while Profile 2 was targeted against both software and hardware attacks. [ 4 ] Commercial TEE solutions based on ARM TrustZone technology, conforming to the TR1 standard, were later launched, such as Trusted Foundations developed by Trusted Logic. [ 5 ] Work on the OMTP standards ended in mid-2010 when the group transitioned into the Wholesale Applications Community (WAC). [ 6 ] The OMTP standards, including those defining a TEE, are hosted by GSMA. [ 7 ] The TEE typically consists of a hardware isolation mechanism plus a secure operating system running on top of that isolation mechanism, although the term has been used more generally to mean a protected solution. [ 8 ] [ 9 ] [ 10 ] [ 11 ] Whilst a GlobalPlatform TEE requires hardware isolation, others, such as EMVCo, use the term TEE to refer to both hardware and software-based solutions. [ 12 ] FIDO uses the concept of TEE in the restricted operating environment for TEEs based on hardware isolation. [ 13 ] Only trusted applications running in a TEE have access to the full power of a device's main processor, peripherals, and memory, while hardware isolation protects these from user-installed apps running in a main operating system. Software and cryptographic isolation inside the TEE protect the trusted applications contained within from each other. [ 14 ] Service providers, mobile network operators (MNOs), operating system developers, application developers, device manufacturers, platform providers, and silicon vendors are the main stakeholders contributing to the standardization efforts around the TEE. To prevent the simulation of hardware with user-controlled software, a so-called "hardware root of trust" is used. This is a set of private keys that are embedded directly into the chip during manufacturing; one-time programmable memory such as eFuses is usually used on mobile devices.
These keys cannot be changed, even after the device resets, and their public counterparts reside in a manufacturer database, together with a non-secret hash of a public key belonging to the trusted party (usually a chip vendor), which is used to sign trusted firmware alongside the circuits doing cryptographic operations and controlling access. The hardware is designed in a way which prevents all software not signed by the trusted party's key from accessing the privileged features. The public key of the vendor is provided at runtime and hashed; this hash is then compared to the one embedded in the chip. If the hash matches, the public key is used to verify a digital signature of trusted vendor-controlled firmware (such as a chain of bootloaders on Android devices or 'architectural enclaves' in SGX). The trusted firmware is then used to implement remote attestation. [ 15 ] When an application is attested, its untrusted component loads its trusted component into memory; the trusted application is protected from modification by untrusted components with hardware. A nonce is requested by the untrusted party from the verifier's server and is used as part of a cryptographic authentication protocol, proving integrity of the trusted application. The proof is passed to the verifier, which verifies it. A valid proof cannot be computed in simulated hardware (e.g. QEMU) because in order to construct it, access to the keys baked into hardware is required; only trusted firmware has access to these keys and/or the keys derived from them or obtained using them. Because only the platform owner is meant to have access to the data recorded in the foundry, the verifying party must interact with the service set up by the vendor. If the scheme is implemented improperly, the chip vendor can track which applications are used on which chip and selectively deny service by returning a message indicating that authentication has not passed. [ 16 ] To simulate hardware in a way which enables it to pass remote authentication, an attacker would have to extract keys from the hardware, which is costly because of the equipment and technical skill required to execute it. For example, using focused ion beams, scanning electron microscopes, microprobing, and chip decapsulation [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] is difficult, or even impossible, if the hardware is designed in such a way that reverse-engineering destroys the keys. In most cases, the keys are unique for each piece of hardware, so that a key extracted from one chip cannot be used by others (for example physically unclonable functions [ 23 ] [ 24 ] ). Though deprivation of ownership is not an inherent property of TEEs (it is possible to design the system in a way that allows only the user who has obtained ownership of the device first to control the system by burning a hash of their own key into e-fuses), in practice all such systems in consumer electronics are intentionally designed so as to allow chip manufacturers to control access to attestation and its algorithms. It allows manufacturers to grant access to TEEs only to software developers who have a (usually commercial) business agreement with the manufacturer, monetizing the user base of the hardware, to enable such use cases as tivoization and DRM, and to allow certain hardware features to be used only with vendor-supplied software, forcing users to use it despite its antifeatures, like ads, tracking and use case restriction for market segmentation.
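The boot-time verification chain described above (hash the runtime-supplied vendor key, compare it with the hash fused into the chip, and only then trust it to check the firmware signature) can be sketched as follows. This is a conceptual model only: it uses the third-party Python `cryptography` package, Ed25519 is an arbitrary choice, and the keys, fuse value and firmware bytes are generated on the spot purely for illustration.

```python
# Conceptual sketch of verifying vendor-signed firmware against a fused key hash.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

# --- "manufacturing time" (all values invented for the example) ---------------
vendor_key = Ed25519PrivateKey.generate()
vendor_pub_bytes = vendor_key.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
FUSED_PUBKEY_HASH = hashlib.sha256(vendor_pub_bytes).digest()   # burned into eFuses
firmware = b"trusted firmware image"
firmware_sig = vendor_key.sign(firmware)

# --- "boot time" --------------------------------------------------------------
def verify_firmware(pubkey_bytes: bytes, image: bytes, signature: bytes) -> bool:
    # 1. The runtime-provided public key must hash to the value in the fuses.
    if hashlib.sha256(pubkey_bytes).digest() != FUSED_PUBKEY_HASH:
        return False
    # 2. Only then is the key trusted to verify the firmware's signature.
    try:
        Ed25519PublicKey.from_public_bytes(pubkey_bytes).verify(signature, image)
    except InvalidSignature:
        return False
    return True

print(verify_firmware(vendor_pub_bytes, firmware, firmware_sig))           # True
print(verify_firmware(vendor_pub_bytes, b"tampered image", firmware_sig))  # False
```

Real implementations perform these checks in boot ROM and trusted firmware, and the attestation keys described in the text are derived from, or protected by, the same hardware-held secrets.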
There are a number of use cases for the TEE. Though not all possible use cases exploit the deprivation of ownership, TEE is usually used exactly for this. Note: Much TEE literature covers this topic under the definition "premium content protection", which is the preferred nomenclature of many copyright holders. Premium content protection is a specific use case of digital rights management (DRM) and is controversial among some communities, such as the Free Software Foundation. [ 25 ] It is widely used by copyright holders to restrict the ways in which end users can consume content such as 4K high-definition films. The TEE is a suitable environment for protecting digitally encoded information (for example, HD films or audio) on connected devices such as smartphones, tablets, and HD televisions. This suitability comes from the ability of the TEE to deprive the owner of the device of access to stored secrets, and the fact that there is often a protected hardware path between the TEE and the display and/or subsystems on devices. The TEE is used to protect the content once it is on the device. While the content is protected during transmission or streaming by the use of encryption, the TEE protects the content once it has been decrypted on the device by ensuring that decrypted content is not exposed to an environment not approved by the app developer or platform vendor. Mobile commerce applications such as mobile wallets, peer-to-peer payments, contactless payments or using a mobile device as a point of sale (POS) terminal often have well-defined security requirements. TEEs can be used, often in conjunction with near-field communication (NFC), SEs, and trusted backend systems, to provide the security required to enable financial transactions to take place. In some scenarios, interaction with the end user is required, and this may require the user to expose sensitive information such as a PIN, password, or biometric identifier to the mobile OS as a means of authenticating the user. The TEE optionally offers a trusted user interface which can be used to construct user authentication on a mobile device. With the rise of cryptocurrency, TEEs are increasingly used to implement crypto-wallets, as they offer the ability to store tokens more securely than regular operating systems, and can provide the necessary computation and authentication applications. [ 26 ] The TEE is well-suited for supporting biometric identification methods (facial recognition, fingerprint sensor, and voice authorization), which may be easier to use and harder to steal than PINs and passwords. The authentication process is generally split into three main stages. A TEE is a good area within a mobile device to house the matching engine and the associated processing required to authenticate the user. The environment is designed to protect the data and establish a buffer against the non-secure apps located in mobile OSes. This additional security may help to satisfy the security needs of service providers in addition to keeping the costs low for handset developers. The TEE can be used by governments, enterprises, and cloud service providers to enable the secure handling of confidential information on mobile devices and on server infrastructure. The TEE offers a level of protection against software attacks generated in the mobile OS and assists in the control of access rights. It achieves this by housing sensitive, 'trusted' applications that need to be isolated and protected from the mobile OS and any malicious malware that may be present.
Through utilizing the functionality and security levels offered by the TEE, governments and enterprises can be assured that employees using their own devices are doing so in a secure and trusted manner. Likewise, server-based TEEs help defend against internal and external attacks against backend infrastructure. With the rise of software assets and reuse, modular programming is the most productive way to design software architecture, decoupling functionality into small, independent modules. As each module contains everything necessary to execute its desired functionality, the TEE allows the organization of a complete system featuring a high level of reliability and security, while preventing each module from being exposed to vulnerabilities of the others. In order for the modules to communicate and share data, the TEE provides means to securely send and receive payloads between the modules, using mechanisms such as object serialization in conjunction with proxies (see component-based software engineering). The following hardware technologies can be used to support TEE implementations:
https://en.wikipedia.org/wiki/Trusted_execution_environment
A trusted service manager (TSM) is a role in a near field communication ecosystem. It acts as a neutral broker that sets up business agreements and technical connections with mobile network operators, phone manufacturers or other entities controlling the secure element on mobile phones. The trusted service manager enables service providers to distribute and manage their contactless applications remotely by allowing access to the secure element in NFC-enabled handsets. The term is a standardized name used by the GSM Association, [ 1 ] the European Payments Council, [ 2 ] and the NFC Forum. [ 3 ] These functions can be performed by mobile network operators, service providers or third parties, and all or part of them can be delegated by one party to another.
https://en.wikipedia.org/wiki/Trusted_service_manager
The term trustworthy computing ( TwC ) has been applied to computing systems that are inherently secure , available, and reliable. It is particularly associated with the Microsoft initiative of the same name, launched in 2002. Until 1995, there were restrictions on commercial traffic over the Internet . [ 1 ] [ 2 ] [ 3 ] [ 4 ] On May 26, 1995, Bill Gates sent the "Internet Tidal Wave" memorandum to Microsoft executives assigning "...the Internet this highest level of importance..." [ 5 ] but Microsoft's Windows 95 was released without a web browser, as Microsoft had not yet developed one. The success of the web had caught them by surprise, [ 6 ] but by mid-1995 they were testing their own web server, [ 7 ] and on August 24, 1995, launched a major online service , MSN . [ 8 ] The National Research Council recognized that the rise of the Internet simultaneously increased societal reliance on computer systems while increasing the vulnerability of such systems to failure, and produced an important report in 1999, "Trust in Cyberspace". [ 9 ] This report reviews the cost of un-trustworthy systems and identifies actions required for improvement. Bill Gates launched Microsoft's "Trustworthy Computing" initiative with a January 15, 2002 memo, [ 10 ] referencing an internal whitepaper by Microsoft CTO and Senior Vice President Craig Mundie . [ 11 ] The move was reportedly prompted by the fact that Microsoft "...had been under fire from some of its larger customers–government agencies, financial companies and others–about the security problems in Windows, issues that were being brought front and center by a series of self-replicating worms and embarrassing attacks", [ 12 ] such as Code Red , Nimda , Klez and Slammer . Four key areas were identified for the initiative: Security, Privacy, Reliability, and Business Integrity, [ 11 ] and despite some initial scepticism, at its 10-year anniversary it was generally accepted as having "...made a positive impact on the industry...". [ 13 ] [ 14 ] The Trustworthy Computing campaign was the main reason why Easter eggs disappeared from Windows , Office and other Microsoft products. [ 15 ]
https://en.wikipedia.org/wiki/Trustworthy_computing
Truth or verity is the property of being in accord with fact or reality . [ 1 ] In everyday language, it is typically ascribed to things that aim to represent reality or otherwise correspond to it, such as beliefs , propositions , and declarative sentences . [ 2 ] True statements are usually held to be the opposite of false statements . The concept of truth is discussed and debated in various contexts, including philosophy , art , theology , law , and science . Most human activities depend upon the concept, where its nature as a concept is assumed rather than being a subject of discussion, including journalism and everyday life. Some philosophers view the concept of truth as basic, and unable to be explained in any terms that are more easily understood than the concept of truth itself. [ 3 ] Most commonly, truth is viewed as the correspondence of language or thought to a mind-independent world. This is called the correspondence theory of truth . [ 4 ] Various theories and views of truth continue to be debated among scholars, philosophers, and theologians. [ 2 ] [ 5 ] There are many different questions about the nature of truth which are still the subject of contemporary debates. These include the question of defining truth; whether it is even possible to give an informative definition of truth; identifying things as truth-bearers capable of being true or false; if truth and falsehood are bivalent , or if there are other truth values; identifying the criteria of truth that allow us to identify it and to distinguish it from falsehood; the role that truth plays in constituting knowledge ; and, if truth is always absolute or if it can be relative to one's perspective. [ 6 ] The English word truth is derived from Old English triewth , Middle English trewthe , as a -th nominalisation of the adjective true (Old English treowe ). In ordinary usage, the word may denote either a quality of "faithfulness, fidelity, loyalty, sincerity, veracity", [ 7 ] [ 1 ] or that of "agreement with fact or reality". The adjective, cognate with German treu 'faithful', stems from Proto-Germanic *trewwj- 'having good faith ', and perhaps ultimately from Proto-Indo-European *dru- 'tree', on the notion of "steadfast as an oak" ( cf. Sanskrit dā́ru '(piece of) wood'; [ 8 ] Old Norse trú 'faith, word of honour, belief'). [ 9 ] The question of what is a proper basis for deciding how words, symbols, ideas and beliefs may properly be considered true, whether by a single person or an entire society, is dealt with by the five most prevalent substantive theories of truth listed below. Each presents perspectives that are widely shared by published scholars. [ 10 ] [ 11 ] [ 12 ] : 309–330 Theories other than the most prevalent substantive theories are also discussed. According to a survey of professional philosophers and others on their philosophical views which was carried out in November 2009 (taken by 3226 respondents, including 1803 philosophy faculty members and/or PhDs and 829 philosophy graduate students) 45% of respondents accept or lean toward correspondence theories, 21% accept or lean toward deflationary theories and 14% epistemic theories . [ 13 ] Correspondence theories emphasize that true beliefs and true statements correspond to the actual state of affairs. [ 14 ] This type of theory stresses a relationship between thoughts or statements on one hand, and things or objects on the other. It is a traditional model tracing its origins to ancient Greek philosophers such as Socrates , Plato , and Aristotle . 
[ 15 ] This class of theories holds that the truth or the falsity of a representation is determined in principle entirely by how it relates to "things" according to whether it accurately describes those "things". A classic example of correspondence theory is the statement by the thirteenth century philosopher and theologian Thomas Aquinas : " Veritas est adaequatio rei et intellectus " ("Truth is the adequation of things and intellect "), which Aquinas attributed to the ninth century Neoplatonist Isaac Israeli . [ 16 ] [ 17 ] [ 18 ] Aquinas also restated the theory as: "A judgment is said to be true when it conforms to the external reality". [ 19 ] Correspondence theory centres around the assumption that truth is a matter of accurately copying what is known as " objective reality " and then representing it in thoughts, words, and other symbols. [ 20 ] Many modern theorists have stated that this ideal cannot be achieved without analysing additional factors. [ 10 ] [ 21 ] For example, language plays a role in that all languages have words to represent concepts that are virtually undefined in other languages. The German word Zeitgeist is one such example: one who speaks or understands the language may "know" what it means, but any translation of the word apparently fails to accurately capture its full meaning (this is a problem with many abstract words, especially those derived in agglutinative languages ). Thus, some words add an additional parameter to the construction of an accurate truth predicate . Among the philosophers who grappled with this problem is Alfred Tarski , whose semantic theory is summarized further on. [ 22 ] For coherence theories in general, truth requires a proper fit of elements within a whole system. Very often, coherence is taken to imply something more than simple logical consistency; often there is a demand that the propositions in a coherent system lend mutual inferential support to each other. So, for example, the completeness and comprehensiveness of the underlying set of concepts is a critical factor in judging the validity and usefulness of a coherent system. [ 23 ] A central tenet of coherence theories is the idea that truth is primarily a property of whole systems of propositions, and can be ascribed to an individual proposition only in virtue of its relationship to that system as a whole. Among the assortment of perspectives commonly regarded as coherence theory, theorists differ on the question of whether coherence entails many possible true systems of thought or only a single absolute system. [ 24 ] Some variants of coherence theory are claimed to describe the essential and intrinsic properties of formal systems in logic and mathematics. [ 25 ] Formal reasoners are content to contemplate axiomatically independent and sometimes mutually contradictory systems side by side, for example, the various alternative geometries . On the whole, coherence theories have been rejected for lacking justification in their application to other areas of truth, especially with respect to assertions about the natural world , empirical data in general, assertions about practical matters of psychology and society, especially when used without support from the other major theories of truth. [ 26 ] Coherence theories distinguish the thought of rationalist philosophers, particularly of Baruch Spinoza , Gottfried Wilhelm Leibniz , and Georg Wilhelm Friedrich Hegel , along with the British philosopher F. H. Bradley . 
[ 27 ] They have found a resurgence also among several proponents of logical positivism , notably Otto Neurath and Carl Hempel . Three influential forms of the pragmatic theory of truth were introduced around the turn of the 20th century by Charles Sanders Peirce , William James , and John Dewey . Although there are wide differences in viewpoint among these and other proponents of pragmatic theory, they all hold that truth is verified and confirmed by the results of putting one's concepts into practice. [ 28 ] Peirce defines it: "Truth is that concordance of an abstract statement with the ideal limit towards which endless investigation would tend to bring scientific belief, which concordance the abstract statement may possess by virtue of the confession of its inaccuracy and one-sidedness, and this confession is an essential ingredient of truth." [ 29 ] This statement stresses Peirce's view that ideas of approximation, incompleteness, and partiality, what he describes elsewhere as fallibilism and "reference to the future", are essential to a proper conception of truth. Although Peirce uses words like concordance and correspondence to describe one aspect of the pragmatic sign relation , he is also quite explicit in saying that definitions of truth based on mere correspondence are no more than nominal definitions, which he accords a lower status than real definitions. James' version of pragmatic theory, while complex, is often summarized by his statement that "the 'true' is only the expedient in our way of thinking, just as the 'right' is only the expedient in our way of behaving." [ 30 ] By this, James meant that truth is a quality , the value of which is confirmed by its effectiveness when applying concepts to practice (thus, "pragmatic"). Dewey, less broadly than James but more broadly than Peirce, held that inquiry , whether scientific, technical, sociological, philosophical, or cultural, is self-corrective over time if openly submitted for testing by a community of inquirers in order to clarify, justify, refine, and/or refute proposed truths. [ 31 ] Though not widely known, a new variation of the pragmatic theory was defined and wielded successfully from the 20th century forward. Defined and named by William Ernest Hocking , this variation is known as "negative pragmatism". Essentially, what works may or may not be true, but what fails cannot be true because the truth always works. [ 32 ] Philosopher of science Richard Feynman also subscribed to it: "We never are definitely right, we can only be sure we are wrong." [ 33 ] This approach incorporates many of the ideas from Peirce, James, and Dewey. For Peirce, the idea of "endless investigation would tend to bring about scientific belief" fits negative pragmatism in that a negative pragmatist would never stop testing. As Feynman noted, an idea or theory "could never be proved right, because tomorrow's experiment might succeed in proving wrong what you thought was right." [ 33 ] Similarly, James and Dewey's ideas also ascribe truth to repeated testing which is "self-corrective" over time. Pragmatism and negative pragmatism are also closely aligned with the coherence theory of truth in that any testing should not be isolated but rather incorporate knowledge from all human endeavors and experience. The universe is a whole and integrated system, and testing should acknowledge and account for its diversity. As Feynman said, "... if it disagrees with experiment, it is wrong." 
[ 33 ] : 150 Social constructivism holds that truth is constructed by social processes, is historically and culturally specific, and that it is in part shaped through the power struggles within a community. Constructivism views all of our knowledge as "constructed," because it does not reflect any external "transcendent" realities (as a pure correspondence theory might hold). Rather, perceptions of truth are viewed as contingent on convention, human perception, and social experience. It is believed by constructivists that representations of physical and biological reality, including race , sexuality , and gender , are socially constructed. [ 34 ] Giambattista Vico was among the first to claim that history and culture were man-made. Vico's epistemological orientation unfolds in one axiom: verum ipsum factum —"truth itself is constructed". Hegel and Marx were among the other early proponents of the premise that truth is, or can be, socially constructed. Marx, like many critical theorists who followed, did not reject the existence of objective truth, but rather distinguished between true knowledge and knowledge that has been distorted through power or ideology. For Marx, scientific and true knowledge is "in accordance with the dialectical understanding of history" and ideological knowledge is "an epiphenomenal expression of the relation of material forces in a given economic arrangement". [ 35 ] Consensus theory holds that truth is whatever is agreed upon, or in some versions, might come to be agreed upon, by some specified group. Such a group might include all human beings, or a subset thereof consisting of more than one person. [ 36 ] Among the current advocates of consensus theory as a useful accounting of the concept of "truth" is the philosopher Jürgen Habermas . [ 37 ] Habermas maintains that truth is what would be agreed upon in an ideal speech situation . [ 38 ] Among the current strong critics of consensus theory is the philosopher Nicholas Rescher . [ 39 ] Modern developments in the field of philosophy have resulted in the rise of a new thesis: that the term truth does not denote a real property of sentences or propositions. This thesis is in part a response to the common use of truth predicates (e.g., that some particular thing "... is true") which was particularly prevalent in philosophical discourse on truth in the first half of the 20th century. From this point of view, to assert that "'2 + 2 = 4' is true" is logically equivalent to asserting that "2 + 2 = 4", and the phrase "is true" is—philosophically, if not practically (see: "Michael" example, below)—completely dispensable in this and every other context. In common parlance, truth predicates are not commonly heard, and it would be interpreted as an unusual occurrence were someone to utilize a truth predicate in an everyday conversation when asserting that something is true. Newer perspectives that take this discrepancy into account, and work with sentence structures as actually employed in common discourse, can be broadly described: Whichever term is used, deflationary theories can be said to hold in common that "the predicate 'true' is an expressive convenience, not the name of a property requiring deep analysis." [ 10 ] Once we have identified the truth predicate's formal features and utility, deflationists argue, we have said all there is to be said about truth. 
Among the theoretical concerns of these views is to explain away those special cases where it does appear that the concept of truth has peculiar and interesting properties. (See, e.g., Semantic paradoxes , and below.) The scope of deflationary principles is generally limited to representations that resemble sentences. They do not encompass a broader range of entities that are typically considered true or otherwise. In addition, some deflationists point out that the concept employed in "... is true" formulations does enable us to express things that might otherwise require infinitely long sentences; for example, one cannot express confidence in Michael's accuracy by asserting the endless sentence: This assertion can instead be succinctly expressed by saying: What Michael says is true . [ 41 ] An early variety of deflationary theory is the redundancy theory of truth , so-called because—in examples like those above, e.g. "snow is white [is true]"—the concept of "truth" is redundant and need not have been articulated; that is, it is merely a word that is traditionally used in conversation or writing, generally for emphasis, but not a word that actually equates to anything in reality. This theory is commonly attributed to Frank P. Ramsey , who held that the use of words like fact and truth was nothing but a roundabout way of asserting a proposition, and that treating these words as separate problems in isolation from judgment was merely a "linguistic muddle". [ 10 ] [ 42 ] [ 43 ] A variant of redundancy theory is the "disquotational" theory, which uses a modified form of the logician Alfred Tarski 's schema : proponents observe that to say that "'P' is true" is to assert "P". A version of this theory was defended by C. J. F. Williams (in his book What is Truth? ). Yet another version of deflationism is the prosentential theory of truth, first developed by Dorothy Grover, Joseph Camp, and Nuel Belnap as an elaboration of Ramsey's claims. They argue that utterances such as "that's true", when said in response to (e.g.) "it's raining", are " prosentences "—expressions that merely repeat the content of other expressions. In the same way that it means the same as my dog in the statement "my dog was hungry, so I fed it", that's true is supposed to mean the same as it's raining when the former is said in reply to the latter. [ 44 ] As noted above, proponents of these ideas do not necessarily follow Ramsey in asserting that truth is not a property; rather, they can be understood to say that, for instance, the assertion "P" may well involve a substantial truth—it is only the redundancy involved in statements such as "that's true" (i.e., a prosentence) which is to be minimized. [ 10 ] Attributed to philosopher P. F. Strawson is the performative theory of truth which holds that to say "'Snow is white' is true" is to perform the speech act of signaling one's agreement with the claim that snow is white (much like nodding one's head in agreement). The idea that some statements are more actions than communicative statements is not as odd as it may seem. For example, when a wedding couple says "I do" at the appropriate time in a wedding, they are performing the act of taking the other to be their lawful wedded spouse. They are not describing themselves as taking the other, but actually doing so (perhaps the most thorough analysis of such "illocutionary acts" is J. L. Austin , most notably in How to Do Things With Words ). 
[ 45 ] Strawson holds that a similar analysis is applicable to all speech acts, not just illocutionary ones: "To say a statement is true is not to make a statement about a statement, but rather to perform the act of agreeing with, accepting, or endorsing a statement. When one says 'It's true that it's raining,' one asserts no more than 'It's raining.' The function of [the statement] 'It's true that ...' is to agree with, accept, or endorse the statement that 'it's raining. ' " [ 46 ] Philosophical skepticism is generally any doubt of one or more items of knowledge or belief which ascribe truth to their assertions and propositions. [ 47 ] [ 48 ] The primary target of philosophical skepticism is epistemology , but it can be applied to any domain, such as the supernatural , morality ( moral skepticism ), and religion (skepticism about the existence of God). [ 49 ] Philosophical skepticism comes in various forms. Radical forms of skepticism deny that knowledge or rational belief is possible and urge us to suspend judgment regarding ascription of truth on many or all controversial matters. More moderate forms of skepticism claim only that nothing can be known with certainty, or that we can know little or nothing about the "big questions" in life, such as whether God exists or whether there is an afterlife. Religious skepticism is "doubt concerning basic religious principles (such as immortality, providence, and revelation)". [ 50 ] Scientific skepticism concerns testing beliefs for reliability, by subjecting them to systematic investigation using the scientific method , to discover empirical evidence for them. Several of the major theories of truth hold that there is a particular property the having of which makes a belief or proposition true. Pluralist theories of truth assert that there may be more than one property that makes propositions true: ethical propositions might be true by virtue of coherence. Propositions about the physical world might be true by corresponding to the objects and properties they are about. [ 51 ] Some of the pragmatic theories, such as those by Charles Peirce and William James , included aspects of correspondence, coherence and constructivist theories. [ 29 ] [ 30 ] Crispin Wright argued in his 1992 book Truth and Objectivity that any predicate which satisfied certain platitudes about truth qualified as a truth predicate. In some discourses, Wright argued, the role of the truth predicate might be played by the notion of superassertibility. [ 52 ] Michael Lynch , in a 2009 book Truth as One and Many , argued that we should see truth as a functional property capable of being multiply manifested in distinct properties like correspondence or coherence. [ 53 ] Logic is concerned with the patterns in reason that can help tell if a proposition is true or not. Logicians use formal languages to express the truths they are concerned with, and as such there is only truth under some interpretation or truth within some logical system . [ 54 ] A logical truth (also called an analytic truth or a necessary truth) is a statement that is true in all logically possible worlds [ 55 ] or under all possible interpretations, as contrasted to a fact (also called a synthetic claim or a contingency ), which is only true in this world as it has historically unfolded. A proposition such as "If p and q, then p" is considered to be a logical truth because of the meaning of the symbols and words in it and not because of any fact of any particular world. 
They are such that they could not be untrue. Degrees of truth in logic may be represented using two or more discrete values, as with bivalent logic (or binary logic ), three-valued logic , and other forms of finite-valued logic . [ 56 ] [ 57 ] Truth in logic can be represented using numbers comprising a continuous range, typically between 0 and 1, as with fuzzy logic and other forms of infinite-valued logic . [ 58 ] [ 59 ] In general, the concept of representing truth using more than two values is known as many-valued logic . [ 60 ] There are two main approaches to truth in mathematics. They are the model theory of truth and the proof theory of truth . [ 61 ] Historically, with the nineteenth century development of Boolean algebra , mathematical models of logic began to treat "truth", also represented as "T" or "1", as an arbitrary constant. "Falsity" is also an arbitrary constant, which can be represented as "F" or "0". In propositional logic , these symbols can be manipulated according to a set of axioms and rules of inference , often given in the form of truth tables . In addition, from at least the time of Hilbert's program at the turn of the twentieth century to the proof of Gödel's incompleteness theorems and the development of the Church–Turing thesis in the early part of that century, true statements in mathematics were generally assumed to be those statements that are provable in a formal axiomatic system. [ 62 ] The works of Kurt Gödel , Alan Turing , and others shook this assumption, with the development of statements that are true but cannot be proven within the system. [ 63 ] Two examples of the latter can be found in Hilbert's problems . Work on Hilbert's 10th problem led in the late twentieth century to the construction of specific Diophantine equations for which it is undecidable whether they have a solution, [ 64 ] or even if they do, whether they have a finite or infinite number of solutions. More fundamentally, Hilbert's first problem was on the continuum hypothesis . [ 65 ] Gödel and Paul Cohen showed that this hypothesis cannot be proved or disproved using the standard axioms of set theory . [ 66 ] In the view of some, then, it is equally reasonable to take either the continuum hypothesis or its negation as a new axiom. Gödel thought that the ability to perceive the truth of a mathematical or logical proposition is a matter of intuition , an ability he admitted could be ultimately beyond the scope of a formal theory of logic or mathematics [ 67 ] [ 68 ] and perhaps best considered in the realm of human comprehension and communication. But he commented, "The more I think about language, the more it amazes me that people ever understand each other at all". [ 69 ] Tarski's theory of truth (named after Alfred Tarski ) was developed for formal languages, such as formal logic . Here he restricted it in this way: no language could contain its own truth predicate, that is, the expression is true could only apply to sentences in some other language. The latter he called an object language , the language being talked about. (It may, in turn, have a truth predicate that can be applied to sentences in still another language.) The reason for his restriction was that languages that contain their own truth predicate will contain paradoxical sentences such as, "This sentence is not true". As a result, Tarski held that the semantic theory could not be applied to any natural language, such as English, because they contain their own truth predicates. 
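The contrast drawn earlier in this passage between two-valued logic and degrees of truth can be made concrete with a small sketch. Python is used purely for illustration, and the min/max/complement operators shown are only one common convention for fuzzy conjunction, disjunction, and negation; other definitions are in use.

# Illustrative sketch: truth values treated as constants that can be combined.
# Classical (bivalent) logic uses only 0 and 1; fuzzy logic admits any degree
# of truth in the interval [0, 1].

def conj(a: float, b: float) -> float:
    return min(a, b)        # "and"

def disj(a: float, b: float) -> float:
    return max(a, b)        # "or"

def neg(a: float) -> float:
    return 1.0 - a          # "not"

# Restricted to the values 0 and 1, these recover the classical truth table:
for p in (0.0, 1.0):
    for q in (0.0, 1.0):
        print(p, q, conj(p, q), disj(p, q), neg(p))

# With intermediate values they express degrees of truth instead:
tall, warm = 0.7, 0.4
print(conj(tall, warm))     # 0.4: "tall and warm" holds only to degree 0.4
print(neg(tall))            # approximately 0.3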
Donald Davidson used Tarski's semantic theory as the foundation of his truth-conditional semantics and linked it to radical interpretation in a form of coherentism . [ 70 ] Bertrand Russell is credited with noticing the existence of such paradoxes even in the best symbolic formations of mathematics in his day, in particular the paradox that came to be named after him, Russell's paradox . Russell and Whitehead attempted to solve these problems in Principia Mathematica by putting statements into a hierarchy of types , wherein a statement cannot refer to itself, but only to statements lower in the hierarchy. This in turn led to new orders of difficulty regarding the precise natures of types and the structures of conceptually possible type systems that have yet to be resolved to this day. [ 71 ] Kripke's theory of truth (named after Saul Kripke ) contends that a natural language can in fact contain its own truth predicate without giving rise to contradiction. He showed how to construct one as follows: begin with a subset of sentences that contains no occurrences of the expression "is true" (or "is false"), and define truth just for the sentences in that subset; then extend the definition of truth to sentences that predicate truth or falsity of sentences in that original subset, and repeat the process so that truth becomes defined for ever larger sets of sentences. Truth never gets defined for sentences like This sentence is false , since the latter was not in the original subset and does not predicate truth of any sentence in the original or any subsequent set. In Kripke's terms, these are "ungrounded." Since these sentences are never assigned either truth or falsehood even if the process is carried out infinitely, Kripke's theory implies that some sentences are neither true nor false. This contradicts the principle of bivalence : every sentence must be either true or false. Since this principle is a key premise in deriving the liar paradox , the paradox is dissolved. [ 72 ] Kripke's semantics are related to the use of topoi and other concepts from category theory in the study of mathematical logic . [ 73 ] They provide a choice of formal semantics for intuitionistic logic . The truth predicate " P is true" has great practical value in human language, allowing efficient endorsement or impeachment of claims made by others, emphasis of the truth or falsity of a statement, and various indirect ( Gricean ) conversational implications. [ 74 ] Individuals or societies will sometimes punish "false" statements to deter falsehoods; [ 75 ] the oldest surviving law text, the Code of Ur-Nammu , lists penalties for false accusations of sorcery or adultery, as well as for committing perjury in court. Even four-year-old children can pass simple " false belief " tests and successfully assess that another individual's belief diverges from reality in a specific way; [ 76 ] by adulthood there are strong implicit intuitions about "truth" that form a "folk theory" of truth. These intuitions include: [ 77 ] Like many folk theories, the folk theory of truth is useful in everyday life but, upon deep analysis, turns out to be technically self-contradictory; in particular, any formal system that fully obeys "capture and release" semantics for truth (also known as the T-schema ), and that also respects classical logic, is provably inconsistent and succumbs to the liar paradox or to a similar contradiction. [ 78 ] Socrates ', Plato 's and Aristotle 's ideas about truth are seen by some as consistent with correspondence theory . In his Metaphysics , Aristotle stated: "To say of what is that it is not, or of what is not that it is, is false, while to say of what is that it is, and of what is not that it is not, is true". [ 79 ] The Stanford Encyclopedia of Philosophy proceeds to say of Aristotle: [ 79 ] ...
Aristotle sounds much more like a genuine correspondence theorist in the Categories (12b11, 14b14), where he talks of "underlying things" that make statements true and implies that these "things" (pragmata) are logically structured situations or facts (viz., his sitting, his not sitting). Most influential is his claim in De Interpretatione (16a3) that thoughts are "likenesses" (homoiosis) of things. Although he nowhere defines truth in terms of a thought's likeness to a thing or fact, it is clear that such a definition would fit well into his overall philosophy of mind. ... Similar statements can also be found in Plato's dialogues ( Cratylus 385b2, Sophist 263b). [ 79 ] Some Greek philosophers maintained that truth was either not accessible to mortals, or of greatly limited accessibility, forming early philosophical skepticism . Among these were Xenophanes , Democritus , and Pyrrho , the founder of Pyrrhonism , who argued that there was no criterion of truth. The Epicureans believed that all sense perceptions were true, [ 80 ] [ 81 ] and that errors arise in how we judge those perceptions. The Stoics conceived truth as accessible from impressions via cognitive grasping . [ 82 ] In early Islamic philosophy , Avicenna (Ibn Sina) defined truth in his work The Book of Healing , Book I, Chapter 8, as: What corresponds in the mind to what is outside it. [ 83 ] Avicenna elaborated on his definition of truth later in Book VIII, Chapter 6: The truth of a thing is the property of the being of each thing which has been established in it. [ 84 ] This definition is but a rendering of the medieval Latin translation of the work by Simone van Riet. [ 85 ] A modern translation of the original Arabic text states: Truth is also said of the veridical belief in the existence [of something]. [ 86 ] Reevaluating Avicenna, and also Augustine and Aristotle, Thomas Aquinas stated in his Disputed Questions on Truth : A natural thing, being placed between two intellects, is called true insofar as it conforms to either. It is said to be true with respect to its conformity with the divine intellect insofar as it fulfills the end to which it was ordained by the divine intellect ... With respect to its conformity with a human intellect, a thing is said to be true insofar as it is such as to cause a true estimate about itself. [ 87 ] Thus, for Aquinas, the truth of the human intellect (logical truth) is based on the truth in things (ontological truth). [ 88 ] Following this, he wrote an elegant re-statement of Aristotle's view in his Summa I.16.1 : Veritas est adæquatio intellectus et rei. (Truth is the conformity of the intellect and things.) Aquinas also said that real things participate in the act of being of the Creator God who is Subsistent Being, Intelligence, and Truth. Thus, these beings possess the light of intelligibility and are knowable. These things (beings; reality ) are the foundation of the truth that is found in the human mind, when it acquires knowledge of things, first through the senses , then through the understanding and the judgement done by reason . For Aquinas, human intelligence ("intus", within and "legere", to read) has the capability to reach the essence and existence of things because it has a non-material, spiritual element, although some moral, educational, and other elements might interfere with its capability. 
[ 89 ] Richard Firth Green examined the concept of truth in the later Middle Ages in his A Crisis of Truth , and concludes that roughly during the reign of Richard II of England the very meaning of the concept changes. The idea of the oath, which was so much part and parcel of for instance Romance literature , [ 90 ] changes from a subjective concept to a more objective one (in Derek Pearsall 's summary). [ 91 ] Whereas truth (the "trouthe" of Sir Gawain and the Green Knight ) was first "an ethical truth in which truth is understood to reside in persons", in Ricardian England it "transforms ... into a political truth in which truth is understood to reside in documents". [ 92 ] Immanuel Kant endorses a definition of truth along the lines of the correspondence theory of truth. [ 79 ] Kant writes in the Critique of Pure Reason : "The nominal definition of truth, namely that it is the agreement of cognition with its object, is here granted and presupposed". [ 93 ] He denies that this correspondence definition of truth provides us with a test or criterion to establish which judgements are true. He states in his logic lectures: [ 94 ] ... Truth, it is said, consists in the agreement of cognition with its object. In consequence of this mere nominal definition, my cognition, to count as true, is supposed to agree with its object. Now I can compare the object with my cognition, however, only by cognizing it . Hence my cognition is supposed to confirm itself, which is far short of being sufficient for truth. For since the object is outside me, the cognition in me, all I can ever pass judgement on is whether my cognition of the object agrees with my cognition of the object. The ancients called such a circle in explanation a diallelon . And actually the logicians were always reproached with this mistake by the sceptics, who observed that with this definition of truth it is just as when someone makes a statement before a court and in doing so appeals to a witness with whom no one is acquainted, but who wants to establish his credibility by maintaining that the one who called him as witness is an honest man. The accusation was grounded, too. Only the solution of the indicated problem is impossible without qualification and for every man. ... This passage makes use of his distinction between nominal and real definitions. A nominal definition explains the meaning of a linguistic expression. A real definition describes the essence of certain objects and enables us to determine whether any given item falls within the definition. [ 95 ] Kant holds that the definition of truth is merely nominal and, therefore, we cannot employ it to establish which judgements are true. According to Kant, the ancient skeptics were critical of the logicians for holding that, by means of a merely nominal definition of truth, they can establish which judgements are true. They were trying to do something that is "impossible without qualification and for every man". [ 94 ] G. W. F. Hegel distanced his philosophy from empiricism by presenting truth as a self-moving process, rather than a matter of merely subjective thoughts. Hegel's truth is analogous to an organism in that it is self-determining according to its own inner logic: "Truth is its own self-movement within itself." [ 96 ] For Arthur Schopenhauer , [ 97 ] a judgment is a combination or separation of two or more concepts . If a judgment is to be an expression of knowledge , it must have a sufficient reason or ground by which the judgment could be called true. 
Truth is the reference of a judgment to something different from itself which is its sufficient reason (ground) . Judgments can have material, formal, transcendental, or metalogical truth. A judgment has material truth if its concepts are based on intuitive perceptions that are generated from sensations. If a judgment has its reason (ground) in another judgment, its truth is called logical or formal . If a judgment, of, for example, pure mathematics or pure science, is based on the forms (space, time, causality) of intuitive, empirical knowledge, then the judgment has transcendental truth. [ 97 ] When Søren Kierkegaard , as his character Johannes Climacus , ends his writings: My thesis was, subjectivity, heartfelt is the truth , he does not advocate for subjectivism in its extreme form (the theory that something is true simply because one believes it to be so), but rather that the objective approach to matters of personal truth cannot shed any light upon that which is most essential to a person's life. Objective truths are concerned with the facts of a person's being, while subjective truths are concerned with a person's way of being. Kierkegaard agrees that objective truths for the study of subjects like mathematics, science, and history are relevant and necessary, but argues that objective truths do not shed any light on a person's inner relationship to existence. At best, these truths can only provide a severely narrowed perspective that has little to do with one's actual experience of life. [ 98 ] While objective truths are final and static, subjective truths are continuing and dynamic. The truth of one's existence is a living, inward, and subjective experience that is always in the process of becoming. The values, morals, and spiritual approaches a person adopts, while not denying the existence of objective truths of those beliefs, can only become truly known when they have been inwardly appropriated through subjective experience. Thus, Kierkegaard criticizes all systematic philosophies which attempt to know life or the truth of existence via theories and objective knowledge about reality. As Kierkegaard claims, human truth is something that is continually occurring, and a human being cannot find truth separate from the subjective experience of one's own existing, defined by the values and fundamental essence that consist of one's way of life. [ 99 ] Friedrich Nietzsche believed the search for truth, or 'the will to truth', was a consequence of the will to power of philosophers. He thought that truth should be used as long as it promoted life and the will to power , and he thought untruth was better than truth if it had this life enhancement as a consequence. As he wrote in Beyond Good and Evil , "The falseness of a judgment is to us not necessarily an objection to a judgment ... The question is to what extent it is life-advancing, life-preserving, species-preserving, perhaps even species-breeding ..." (aphorism 4). He proposed the will to power as a truth only because, according to him, it was the most life-affirming and sincere perspective one could have. Robert Wicks discusses Nietzsche's basic view of truth as follows: [ 100 ] ... Some scholars regard Nietzsche's 1873 unpublished essay, "On Truth and Lies in a Nonmoral Sense" (" Über Wahrheit und Lüge im außermoralischen Sinn ") as a keystone in his thought. In this essay, Nietzsche rejects the idea of universal constants, and claims that what we call "truth" is only "a mobile army of metaphors, metonyms, and anthropomorphisms." 
His view at this time is that arbitrariness completely prevails within human experience: concepts originate via the very artistic transference of nerve stimuli into images; "truth" is nothing more than the invention of fixed conventions for merely practical purposes, especially those of repose, security and consistence. ... Separately Nietzsche suggested that an ancient, metaphysical belief in the divinity of Truth lies at the heart of and has served as the foundation for the entire subsequent Western intellectual tradition : "But you will have gathered what I am getting at, namely, that it is still a metaphysical faith on which our faith in science rests—that even we knowers of today, we godless anti-metaphysicians still take our fire too, from the flame lit by the thousand-year old faith, the Christian faith which was also Plato's faith, that God is Truth; that Truth is 'Divine' ..." [ 101 ] [ 102 ] Moreover, Nietzsche challenges the notion of objective truth, arguing that truths are human creations and serve practical purposes. He wrote, "Truths are illusions about which one has forgotten that this is what they are." [ 103 ] He argues that truth is a human invention, arising from the artistic transference of nerve stimuli into images, serving practical purposes like repose, security, and consistency; formed through metaphorical and rhetorical devices, shaped by societal conventions and forgotten origins: [ 104 ] "What, then, is truth? A mobile army of metaphors, metonyms, and anthropomorphisms – in short, a sum of human relations which have been enhanced, transposed, and embellished poetically and rhetorically ..." Nietzsche argues that truth is always filtered through individual perspectives and shaped by various interests and biases. In "On the Genealogy of Morality," he asserts, "There are no facts, only interpretations." [ 105 ] He suggests that truth is subject to constant reinterpretation and change, influenced by shifting cultural and historical contexts as he writes in "Thus Spoke Zarathustra" that "I say unto you: one must still have chaos in oneself to be able to give birth to a dancing star." [ 106 ] In the same book, Zarathustra proclaims, "Truths are illusions which we have forgotten are illusions; they are metaphors that have become worn out and have been drained of sensuous force, coins which have lost their embossing and are now considered as metal and no longer as coins." [ 107 ] Other philosophers take this common meaning to be secondary and derivative. According to Martin Heidegger , the original meaning and essence of truth in Ancient Greece was unconcealment, or the revealing or bringing of what was previously hidden into the open, as indicated by the original Greek term for truth, aletheia . [ 108 ] [ 109 ] On this view, the conception of truth as correctness is a later derivation from the concept's original essence, a development Heidegger traces to the Latin term veritas . Owing to the primacy of ontology in Heidegger's philosophy, he considered this truth to lie within Being itself, and already in Being and Time (1927) had identified truth with " being-truth " or the "truth of Being" and partially with the Kantian thing-in-itself in an epistemology essentially concerning a mode of Dasein . 
[ 110 ] In Being and Nothingness (1943), partially following Heidegger, Jean-Paul Sartre identified our knowledge of the truth as a relation between the in-itself and for-itself of being - yet simultaneously closely connected in this vein to the data available to the material personhood, in the body, of an individual in their interaction with the world and others - with Sartre's description that "the world is human" allowing him to postulate all truth as strictly understood by self-consciousness as self-consciousness of something, [ 111 ] a view also preceded by Henri Bergson in Time and Free Will (1889), the reading of which Sartre had credited for his interest in philosophy. [ 112 ] This first existentialist theory, more fully fleshed out in Sartre's essay Truth and Existence (1948), which already demonstrates a more radical departure from Heidegger in its emphasis on the primacy of the idea, already formulated in Being and Nothingness , of existence as preceding essence in its role in the formulation of truth, has nevertheless been critically examined as idealist rather than materialist in its departure from more traditional idealist epistemologies such as those of Ancient Greek philosophy in Plato and Aristotle, and staying as does Heidegger with Kant. [ 113 ] Later, in the Search for a Method (1957), in which Sartre used a unification of existentialism and Marxism that he would later formulate in the Critique of Dialectical Reason (1960), Sartre, with his growing emphasis on the Hegelian totalisation of historicity , posited a conception of truth still defined by its process of relation to a container giving it material meaning, but with specific reference to a role in this broader totalisation, for "subjectivity is neither everything nor nothing; it represents a moment in the objective process (that in which externality is internalised), and this moment is perpetually eliminated only to be perpetually reborn": "For us, truth is something which becomes, it has and will have become. It is a totalisation which is forever being totalised. Particular facts do not signify anything; they are neither true nor false so long as they are not related, through the mediation of various partial totalities, to the totalisation in process." Sartre describes this as a " realistic epistemology", developed out of Marx 's ideas but with such a development only possible in an existentialist light, as with the theme of the whole work. [ 114 ] [ 115 ] In an early segment of the lengthy two-volume Critique of 1960, Sartre continued to describe truth as a "totalising" "truth of history" to be interpreted by a "Marxist historian", whilst his break with Heidegger's epistemological ideas is finalised in the description of a seemingly antinomous " dualism of Being and Truth" as the essence of a truly Marxist epistemology. [ 116 ] The well-regarded French philosopher Albert Camus wrote in his famous essay, The Myth of Sisyphus (1942), that "there are truths but no truth", in fundamental agreement with Nietzsche's perspectivism , and favourably cites Kierkegaard in posing that "no truth is absolute or can render satisfactory an existence that is impossible in itself".
[ 117 ] Later, in The Rebel (1951), he declared, akin to Sartre, that "the very lowest form of truth" is "the truth of history", [ 118 ] but describes this in the context of its abuse, and like Kierkegaard in the Concluding Unscientific Postscript he criticizes Hegel for holding a historical attitude "which consists of saying: 'This is truth, which appears to us, however, to be error, but which is true precisely because it happens to be error. As for proof, it is not I, but history, at its conclusion, that will furnish it.'" [ 119 ] Pragmatists like C. S. Peirce take truth to have some manner of essential relation to human practices for inquiring into and discovering truth, with Peirce himself holding that truth is what human inquiry would find out on a matter, if our practice of inquiry were taken as far as it could profitably go: "The opinion which is fated to be ultimately agreed to by all who investigate, is what we mean by the truth ..." [ 120 ] According to Kitaro Nishida , "knowledge of things in the world begins with the differentiation of unitary consciousness into knower and known and ends with self and things becoming one again. Such unification takes form not only in knowing but in the valuing (of truth) that directs knowing, the willing that directs action, and the feeling or emotive reach that directs sensing." [ 121 ] Erich Fromm finds that trying to discuss truth as "absolute truth" is sterile and that emphasis ought to be placed on "optimal truth". He considers truth as stemming from the survival imperative of grasping one's environment physically and intellectually, whereby young children instinctively seek truth so as to orient themselves in "a strange and powerful world". The accuracy of their perceived approximation of the truth will therefore have direct consequences on their ability to deal with their environment. Fromm can be understood to define truth as a functional approximation of reality. His vision of optimal truth is described below: [ 122 ] ... the dichotomy between 'absolute = perfect' and 'relative = imperfect' has been superseded in all fields of scientific thought, where "it is generally recognized that there is no absolute truth but nevertheless that there are objectively valid laws and principles". [...] In that respect, "a scientifically or rationally valid statement means that the power of reason is applied to all the available data of observation without any of them being suppressed or falsified for the sake of the desired result". The history of science is "a history of inadequate and incomplete statements, and every new insight makes possible the recognition of the inadequacies of previous propositions and offers a springboard for creating a more adequate formulation." [...] As a result "the history of thought is the history of an ever-increasing approximation to the truth. Scientific knowledge is not absolute but optimal; it contains the optimum of truth attainable in a given historical period." Fromm furthermore notes that "different cultures have emphasized various aspects of the truth" and that increasing interaction between cultures allows for these aspects to reconcile and integrate, increasing further the approximation to the truth. Truth, says Michel Foucault , is problematic when any attempt is made to see truth as an "objective" quality. He prefers not to use the term truth itself but "Regimes of Truth". In his historical investigations he found truth to be something that was itself a part of, or embedded within, a given power structure.
Thus Foucault's view shares much in common with the concepts of Nietzsche . Truth for Foucault is also something that shifts through various episteme throughout history. [ 123 ] Jean Baudrillard considered truth to be largely simulated, that is pretending to have something, as opposed to dissimulation, pretending to not have something. He took his cue from iconoclasts whom he claims knew that images of God demonstrated that God did not exist. [ 124 ] Baudrillard wrote in "Precession of the Simulacra": Some examples of simulacra that Baudrillard cited were: that prisons simulate the "truth" that society is free; scandals (e.g., Watergate ) simulate that corruption is corrected; Disney simulates that the U.S. itself is an adult place. Though such examples seem extreme, such extremity is an important part of Baudrillard's theory. For a less extreme example, movies usually end with the bad being punished, humiliated, or otherwise failing, thus affirming for viewers the concept that the good end happily and the bad unhappily, a narrative which implies that the status quo and established power structures are largely legitimate. [ 124 ] Truthmaker theory is "the branch of metaphysics that explores the relationships between what is true and what exists ". [ 127 ] It is different from substantive theories of truth in the sense that it does not aim at giving a definition of what truth is. Instead, it has the goal of determining how truth depends on being. [ 128 ]
https://en.wikipedia.org/wiki/Truth
A truth-bearer is an entity that is said to be either true or false and nothing else. The thesis that some things are true while others are false has led to different theories about the nature of these entities. Since there is divergence of opinion on the matter, the term truth-bearer is used to be neutral among the various theories . Truth-bearer candidates include propositions , sentences , sentence-tokens , statements , beliefs , thoughts , intuitions , utterances , and judgements , but different authors exclude one or more of these, deny their existence, argue that they are true only in a derivative sense, assert or assume that the terms are synonymous, [ 1 ] or seek to avoid addressing their distinction or do not clarify it. [ 2 ] Some distinctions and terminology used in this article, based on Wolfram 1989 [ 3 ] (Chapter 2, Section 1), follow. It should be understood that the terminology described is not always used in the ways set out, and it is introduced solely for the purposes of discussion in this article. Use is made of the type–token and use–mention distinctions. Reflection on occurrences of numerals might be helpful. [ 4 ] In grammar a sentence can be a declaration, an exclamation, a question, or a command. In logic a declarative sentence is considered to be a sentence that can be used to communicate truth. Some sentences which are grammatically declarative are not logically so. A character [ nb 1 ] is a typographic character (printed or written) etc. A word-token [ nb 2 ] is a pattern of characters. Two word-tokens are of the same word-type [ nb 3 ] if they are identical patterns of characters. A meaningful-word-token [ nb 4 ] is a meaningful pattern of characters. Two word-tokens which mean the same are of the same word-meaning. [ nb 5 ] A sentence-token [ nb 6 ] is a pattern of word-tokens. A meaningful-sentence-token [ nb 7 ] is a meaningful sentence-token or a meaningful pattern of meaningful-word-tokens. Two sentence-tokens are of the same sentence-type if they are identical patterns of word-tokens. [ nb 8 ] A declarative-sentence-token is a sentence-token that can be used to communicate truth or convey information. [ nb 9 ] A meaningful-declarative-sentence-token is a meaningful declarative-sentence-token. [ nb 10 ] Two meaningful-declarative-sentence-tokens are of the same meaningful-declarative-sentence-type [ nb 11 ] if they are identical patterns of word-tokens. A nonsense-declarative-sentence-token [ nb 12 ] is a declarative-sentence-token which is not a meaningful-declarative-sentence-token. A meaningful-declarative-sentence-token-use [ nb 13 ] occurs when and only when a meaningful-declarative-sentence-token is used declaratively. A referring-expression [ nb 14 ] is an expression that can be used to pick out or refer to a particular entity. A referential success [ nb 15 ] is a referring-expression's success in identifying a particular entity. A referential failure [ nb 16 ] is a referring-expression's failure to identify a particular entity. A referentially-successful-meaningful-declarative-sentence-token-use [ nb 17 ] is a meaningful-declarative-sentence-token-use containing no referring-expression that fails to identify a particular entity. As Aristotle pointed out, since some sentences are questions, commands, or meaningless, not all can be truth-bearers.
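The type–token vocabulary introduced above can be illustrated with a short sketch. Python is used here only for illustration; representing tokens as individual string occurrences and types as classes of identically spelled tokens is one simple way to model the distinction, and is not part of the source terminology.

from collections import defaultdict

# Illustrative sketch of the type-token distinction: each list entry is a
# separate sentence-token (a particular inscription); tokens spelled
# identically count as occurrences of the same sentence-type.
tokens = [
    "Snow is white.",   # token 1
    "Snow is white.",   # token 2, same type as token 1
    "I'm Spartacus.",   # token 3
    "I'm Spartacus.",   # token 4, same type as token 3, although its truth
                        # may depend on who uses it (an issue discussed below)
]

types = defaultdict(list)
for position, token in enumerate(tokens):
    types[token].append(position)   # group tokens into types by identical spelling

print(len(tokens), "tokens,", len(types), "types")
for sentence_type, positions in types.items():
    print(repr(sentence_type), "has tokens at positions", positions)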
If in the proposal "What makes the sentence Snow is white true is the fact that snow is white" it is assumed that sentences like Snow is white are truth-bearers, then it would be more clearly stated as "What makes the meaningful-declarative-sentence Snow is white true is the fact that snow is white". Theory 1a: All and only meaningful-declarative-sentence-types [ nb 18 ] are truth-bearers. Criticisms of theory 1a: Some meaningful-declarative-sentence-types will be both true and false, contrary to our definition of truth-bearer, for example, (i) in liar-paradox sentences such as "This sentence is false" (see Fisher 2008 [ 5 ] ), and (ii) in time-, place-, and person-dependent sentences such as "It is noon", "This is London", and "I'm Spartacus". Anyone may ... ascribe truth and falsity to the deterministic propositional signs we here call utterances. But if he takes this line, he must, like Leibniz, recognise that truth cannot be an affair solely of actual utterances, since it makes sense to talk of the discovery of previously un-formulated truths. (Kneale, W&M (1962)) [ 6 ] Revision to Theory 1a, by making a distinction between type and token: To escape the time-, place- and person-dependent criticism, the theory can be revised, making use of the type–token distinction , [ 7 ] as follows. Theory 1b: All and only meaningful-declarative-sentence-tokens are truth-bearers. Quine argued that the primary truth-bearers are utterances. [ nb 19 ] Having now recognised in a general way that what are true are sentences, we must turn to certain refinements. What are best seen as primarily true or false are not sentences but events of utterances. If a man utters the words 'It is raining' in the rain, or the words 'I am hungry' while hungry, his verbal performance counts as true. Obviously one utterance of a sentence may be true and another utterance of the same sentence be false. Source: Quine 1970, [ 8 ] page 13. Criticisms of theory 1b: (i) Theory 1b prevents sentences which are meaningful-declarative-sentence-types from being truth-bearers. If all meaningful-declarative-sentence-types typographically identical to "The whole is greater than the part" are true then it surely follows that the meaningful-declarative-sentence-type "The whole is greater than the part" is true (just as the fact that all meaningful-declarative-sentence-tokens typographically identical to "The whole is greater than the part" are English entails that the meaningful-declarative-sentence-type "The whole is greater than the part" is English). (ii) Some meaningful-declarative-sentence-tokens will be both true and false, or neither, contrary to our definition of truth-bearer. E.g. a token, t, of the meaningful-declarative-sentence-type ‘P: I'm Spartacus’, written on a placard. The token t would be true when used by Spartacus, false when used by Bertrand Russell, and neither true nor false when mentioned by Spartacus or when being neither used nor mentioned. Theory 1b.1: All meaningful-declarative-sentence-token-uses are truth-bearers; some meaningful-declarative-sentence-types are truth-bearers. To allow that at least some meaningful-declarative-sentence-types can be truth-bearers, Quine allowed so-called "eternal sentences" [ nb 20 ] to be truth-bearers. In Peirce's terminology, utterances and inscriptions are tokens of the sentence or other linguistic expression concerned; and this linguistic expression is the type of those utterances and inscriptions. In Frege's terminology, truth and falsity are the two truth values .
Succinctly then, an eternal sentence is a sentence whose tokens have the same truth values.... What are best regarded as true and false are not propositions but sentence tokens, or sentences if they are eternal. (Quine 1970, [ 9 ] pages 13–14)
Theory 1c: All and only meaningful-declarative-sentence-token-uses are truth-bearers.
Arguments for theory 1c: By respecting the use–mention distinction, Theory 1c avoids criticism (ii) of Theory 1b.
Criticisms of theory 1c: (i) Theory 1c does not avoid criticism (i) of Theory 1b. (ii) Meaningful-declarative-sentence-token-uses are events (located in particular positions in time and space) and entail a user. This implies that (a) nothing (no truth-bearer) exists, and hence nothing is true (or false), at any time or place where no such use occurs, and (b) nothing (no truth-bearer) exists, and hence nothing is true (or false), in the absence of a user. This implies that (a) nothing was true before the evolution of users capable of using meaningful-declarative-sentence-tokens and (b) nothing is true (or false) except when being used (asserted) by a user. Intuitively the truth (or falsity) of 'The tree continues to be in the quad' continues in the absence of an agent to assert it.
Referential failure: A problem of some antiquity is the status of sentences such as U: The King of France is bald; V: The highest prime has no factors; W: Pegasus did not exist. Such sentences purport to refer to entities which do not exist (or do not always exist). They are said to suffer from referential failure. We are obliged to choose either (a) that they are not truth-bearers and consequently neither true nor false, or (b) that they are truth-bearers and per se are either true or false.
Theory 1d: All and only referentially-successful-meaningful-declarative-sentence-token-uses are truth-bearers. Theory 1d takes option (a) above by declaring that meaningful-declarative-sentence-token-uses that fail referentially are not truth-bearers.
Theory 1e: All referentially-successful-meaningful-declarative-sentence-token-uses are truth-bearers; some meaningful-declarative-sentence-types are truth-bearers.
Arguments for theory 1e: Theory 1e has the same advantages as Theory 1d. Theory 1e allows for the existence of truth-bearers (i.e., meaningful-declarative-sentence-types) in the absence of users and between uses. If, for any x, where x is a use of a referentially successful token of a meaningful-declarative-sentence-type y, x is a truth-bearer, then y is a truth-bearer; otherwise y is not a truth-bearer. E.g. if all uses of all referentially successful tokens of the meaningful-declarative-sentence-type 'The whole is greater than the part' are truth-bearers (i.e. true or false), then the meaningful-declarative-sentence-type 'The whole is greater than the part' is a truth-bearer. If some but not all uses of some referentially successful tokens of the meaningful-declarative-sentence-type 'I am Spartacus' are true, then the meaningful-declarative-sentence-type 'I am Spartacus' is not a truth-bearer.
Criticisms of theory 1e: Theory 1e makes implicit use of the concept of an agent or user capable of using (i.e. asserting) a referentially-successful-meaningful-declarative-sentence-token. Although Theory 1e does not depend on the actual existence (now, in the past or in the future) of such users, it does depend on the possibility and cogency of their existence. Consequently, the concept of truth-bearer under Theory 1e is dependent upon giving an account of the concept of a 'user'.
In so far as referentially-successful-meaningful-declarative-sentence-tokens are particulars (locatable in time and space), the definition of truth-bearer just in terms of referentially-successful-meaningful-declarative-sentence-tokens is attractive to those who are (or would like to be) nominalists. The introduction of 'use' and 'users' threatens the introduction of intentions, attitudes, minds &c. as less-than-welcome ontological baggage. In classical logic a sentence in a language is true or false under (and only under) an interpretation and is therefore a truth-bearer. For example, a language in the first-order predicate calculus might include one or more predicate symbols and one or more individual constants and one or more variables. The interpretation of such a language would define a domain (universe of discourse); assign an element of the domain to each individual constant; assign the denotation in the domain of some property to each unary (one-place) predicate symbol. [ 10 ] For example, if a language L consisted of the individual constant a, two unary predicate letters F and G, and the variable x, then an interpretation I of L might define the domain D as animals, assign Socrates to a, the denotation of the property being a man to F, and the denotation of the property being mortal to G. Under the interpretation I of L, Fa would be true if, and only if, Socrates is a man, and the sentence ∀x(Fx → Gx) would be true if, and only if, all men (in the domain) are mortal. In some texts an interpretation is said to give "meaning" to the symbols of the language. Since Fa has the value true under some (but not all) interpretations, it is not the sentence-type Fa which is said to be true but only some sentence-tokens of Fa under particular interpretations. A token of Fa without an interpretation is neither true nor false. Some sentences of a language like L are said to be true under all interpretations of the sentence, e.g. ∀x(Fx ∨ ¬Fx); such sentences are termed logical truths, but again such sentences are neither true nor false in the absence of an interpretation. A number of authors [ 11 ] use the term proposition for truth-bearers. There is no single definition or usage. [ 12 ] [ 13 ] Sometimes it is used to mean a meaningful declarative sentence itself; sometimes it is used to mean the meaning of a meaningful declarative sentence. [ 14 ] This provides two possible definitions for the purposes of discussion as below.
Theory 2a: All and only meaningful-declarative-sentences are propositions.
Theory 2b: A meaningful-declarative-sentence-token expresses a proposition; two meaningful-declarative-sentence-tokens which have the same meaning express the same proposition; two meaningful-declarative-sentence-tokens with different meanings express different propositions. (cf Wolfram 1989, [ 15 ] p. 21)
Proposition is not always used in one or other of these ways.
Criticisms of theory 2a.
Criticisms of Theory 2b.
Many authors consider statements as truth-bearers, though as with the term "proposition" there is divergence in definition and usage of that term. Sometimes 'statements' are taken to be meaningful-declarative-sentences; sometimes they are thought to be what is asserted by a meaningful-declarative-sentence. It is not always clear in which sense the word is used. This provides two possible definitions for the purposes of discussion as below.
A particular concept of a statement was introduced by Strawson in the 1950s. [ 20 ] [ 21 ] [ 22 ] Consider the following: on the assumption that the same person wrote Waverley and Ivanhoe, the two distinct patterns of characters (meaningful-declarative-sentences) I and J make the same statement but express different propositions. The pairs of meaningful-declarative-sentences (K, L) & (M, N) have different meanings, but they are not necessarily contradictory, since K & L may have been asserted by different people, and M & N may have been asserted about different conductors. What these examples show is that we cannot identify that which is true or false (the statement) with the sentence used in making it; for the same sentence may be used to make different statements, some of them true and some of them false. (Strawson, P.F. (1952) [ 22 ] ) This suggests:
Theory 3a: All and only statements are meaningful-declarative-sentences.
Theory 3b: All and only meaningful-declarative-sentences can be used to make statements.
Statement is not always used in one or other of these ways.
Arguments for theory 3a.
Criticisms of theory 3a.
Frege (1919) argued that an indicative sentence in which we communicate or state something contains both a thought and an assertion: it expresses the thought, and the thought is the sense of the sentence. [ 23 ]
https://en.wikipedia.org/wiki/Truth-bearer
In formal semantics, truth-value semantics is an alternative to Tarskian semantics. It has been primarily championed by Ruth Barcan Marcus, [ 1 ] H. Leblanc, and J. Michael Dunn and Nuel Belnap. [ 2 ] It is also called the substitution interpretation (of the quantifiers) or substitutional quantification. The idea of these semantics is that a universal (respectively, existential) quantifier may be read as a conjunction (respectively, disjunction) of formulas in which constants replace the variables in the scope of the quantifier. For example, ∀xPx may be read (Pa ∧ Pb ∧ Pc ∧ …) where a, b, c are individual constants replacing all occurrences of x in Px. The main difference between truth-value semantics and the standard semantics for predicate logic is that there are no domains for truth-value semantics. Only the truth clauses for atomic and for quantificational formulas differ from those of the standard semantics. Whereas in standard semantics atomic formulas like Pb or Rca are true if and only if (the referent of) b is a member of the extension of the predicate P, respectively, if and only if the pair (c, a) is a member of the extension of R, in truth-value semantics the truth-values of atomic formulas are basic. A universal (existential) formula is true if and only if all (some) ground substitution instances of the unquantified subformula are true. Compare this with the standard semantics, which says that a universal (existential) formula is true if and only if the formula holds for all (some) members of the domain; for example, ∀xA is true (under an interpretation) if and only if for all k in the domain D, A(k/x) is true (where A(k/x) is the result of substituting k for all occurrences of x in A). (Here we are assuming that constants are names for themselves, i.e. they are also members of the domain.) Truth-value semantics is not without its problems. First, the strong completeness theorem and compactness fail. To see this consider the set {F(1), F(2), …}. Clearly the formula ∀xF(x) is a logical consequence of the set, but it is not a consequence of any finite subset of it (and hence it is not deducible from it). It follows immediately that both compactness and the strong completeness theorem fail for truth-value semantics. This is rectified by a modified definition of logical consequence as given in Dunn and Belnap 1968. [ 2 ] Another problem occurs in free logic. Consider a language with one individual constant c that is nondesignating and a predicate F standing for 'does not exist'. Then ∃xFx should be false even though a substitution instance (in fact every such instance under this interpretation) of it is true. To solve this problem we simply add the proviso that an existentially quantified statement is true under an interpretation only if it has at least one true substitution instance in which the constant designates something that exists.
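To make the substitutional reading of the quantifiers concrete, here is a minimal Python sketch of how a universal or existential formula is evaluated directly from stipulated truth values of its ground instances; the predicate name, the constants a, b, c, and the particular truth-value assignment are illustrative assumptions, not taken from any source cited above.

```python
# Minimal sketch of substitutional (truth-value) evaluation of quantifiers.
# The predicate name, constants, and truth-value assignment are illustrative.

CONSTANTS = ["a", "b", "c"]

# In truth-value semantics the truth values of atomic sentences are basic:
# we simply stipulate them, with no underlying domain of objects.
atomic_truth = {
    ("P", "a"): True,
    ("P", "b"): True,
    ("P", "c"): False,
}

def forall(pred):
    """'∀x P(x)' read as the conjunction P(a) ∧ P(b) ∧ ... over the constants."""
    return all(atomic_truth[(pred, k)] for k in CONSTANTS)

def exists(pred):
    """'∃x P(x)' read as the disjunction P(a) ∨ P(b) ∨ ... over the constants."""
    return any(atomic_truth[(pred, k)] for k in CONSTANTS)

print(forall("P"))  # False, because the instance P(c) is false
print(exists("P"))  # True, because e.g. the instance P(a) is true
```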
https://en.wikipedia.org/wiki/Truth-value_semantics
In logic, a truth function [ 1 ] is a function that accepts truth values as input and produces a unique truth value as output. In other words: the input and output of a truth function are all truth values; a truth function will always output exactly one truth value, and inputting the same truth value(s) will always output the same truth value. The typical example is in propositional logic, wherein a compound statement is constructed using individual statements connected by logical connectives; if the truth value of the compound statement is entirely determined by the truth value(s) of the constituent statement(s), the compound statement is called a truth function, and any logical connectives used are said to be truth functional. [ 2 ] Classical propositional logic is a truth-functional logic, [ 3 ] in that every statement has exactly one truth value which is either true or false, and every logical connective is truth functional (with a corresponding truth table), thus every compound statement is a truth function. [ 4 ] On the other hand, modal logic is non-truth-functional. A logical connective is truth-functional if the truth-value of a compound sentence is a function of the truth-value of its sub-sentences. A class of connectives is truth-functional if each of its members is. For example, the connective "and" is truth-functional since a sentence like "Apples are fruits and carrots are vegetables" is true if, and only if, each of its sub-sentences "apples are fruits" and "carrots are vegetables" is true, and it is false otherwise. Some connectives of a natural language, such as English, are not truth-functional. Connectives of the form "x believes that ..." are typical examples of connectives that are not truth-functional. If e.g. Mary mistakenly believes that Al Gore was President of the USA on April 20, 2000, but she does not believe that the moon is made of green cheese, then the sentence "Mary believes that Al Gore was President of the USA on April 20, 2000" is true while "Mary believes that the moon is made of green cheese" is false. In both cases, each component sentence (i.e. "Al Gore was president of the USA on April 20, 2000" and "the moon is made of green cheese") is false, but each compound sentence formed by prefixing the phrase "Mary believes that" differs in truth-value. That is, the truth-value of a sentence of the form "Mary believes that..." is not determined solely by the truth-value of its component sentence, and hence the (unary) connective (or simply operator since it is unary) is non-truth-functional. The class of classical logic connectives (e.g. &, →) used in the construction of formulas is truth-functional. Their values for various truth-values as argument are usually given by truth tables. Truth-functional propositional calculus is a formal system whose formulae may be interpreted as either true or false. In two-valued logic, there are sixteen possible truth functions, also called Boolean functions, of two inputs P and Q. Any of these functions corresponds to a truth table of a certain logical connective in classical logic, including several degenerate cases such as a function not depending on one or both of its arguments. Truth and falsehood are denoted as 1 and 0, respectively, in the following truth tables for the sake of brevity. Because a function may be expressed as a composition, a truth-functional logical calculus does not need to have dedicated symbols for all of the above-mentioned functions to be functionally complete. This is expressed in a propositional calculus as logical equivalence of certain compound statements.
For example, classical logic has ¬P ∨ Q equivalent to P → Q. The conditional operator "→" is therefore not necessary for a classical-based logical system if "¬" (not) and "∨" (or) are already in use. A minimal set of operators that can express every statement expressible in the propositional calculus is called a minimal functionally complete set. A minimally complete set of operators is achieved by NAND alone {↑} and NOR alone {↓}. The following are the minimal functionally complete sets of operators whose arities do not exceed 2: [ 5 ] Some truth functions possess properties which may be expressed in the theorems containing the corresponding connective. Some of those properties that a binary truth function (or a corresponding logical connective) may have are: A set of truth functions is functionally complete if and only if for each of the following five properties it contains at least one member lacking it: A concrete function may be also referred to as an operator. In two-valued logic there are 2 nullary operators (constants), 4 unary operators, 16 binary operators, 256 ternary operators, and 2^(2^n) n-ary operators. In three-valued logic there are 3 nullary operators (constants), 27 unary operators, 19683 binary operators, 7625597484987 ternary operators, and 3^(3^n) n-ary operators. In k-valued logic, there are k nullary operators, k^k unary operators, k^(k^2) binary operators, k^(k^3) ternary operators, and k^(k^n) n-ary operators. An n-ary operator in k-valued logic is a function from Z_k^n → Z_k. Therefore, the number of such operators is |Z_k|^|Z_k^n| = k^(k^n), which is how the above numbers were derived. However, some of the operators of a particular arity are actually degenerate forms that perform a lower-arity operation on some of the inputs and ignore the rest of the inputs. Out of the 256 ternary Boolean operators cited above, C(3,2)·16 − C(3,1)·4 + C(3,0)·2 of them are such degenerate forms of binary or lower-arity operators, using the inclusion–exclusion principle. The ternary operator f(x, y, z) = ¬x is one such operator which is actually a unary operator applied to one input, and ignoring the other two inputs. "Not" is a unary operator; it takes a single term (¬P). The rest are binary operators, taking two terms to make a compound statement (P ∧ Q, P ∨ Q, P → Q, P ↔ Q). The set of logical operators Ω may be partitioned into disjoint subsets as follows: In this partition, Ω_j is the set of operator symbols of arity j. In the more familiar propositional calculi, Ω is typically partitioned as follows: Instead of using truth tables, logical connective symbols can be interpreted by means of an interpretation function and a functionally complete set of truth-functions (Gamut 1991), as detailed by the principle of compositionality of meaning.
Let I be an interpretation function, let Φ, Ψ be any two sentences and let the truth function f_nand be defined as: Then, for convenience, f_not, f_or, f_and and so on are defined by means of f_nand; or, alternatively, f_not, f_or, f_and and so on are defined directly. Then etc. Thus if S is a sentence that is a string of symbols consisting of logical symbols v_1 ... v_n representing logical connectives, and non-logical symbols c_1 ... c_n, then if and only if I(v_1) ... I(v_n) have been provided interpreting v_1 to v_n by means of f_nand (or any other set of functionally complete truth-functions), the truth-value of I(S) is determined entirely by the truth-values of c_1 ... c_n, i.e. of I(c_1) ... I(c_n). In other words, as expected and required, S is true or false only under an interpretation of all its non-logical symbols. Using the functions defined above, we can give a formal definition of a proposition's truth function. [ 6 ] Let PROP be the set of all propositional variables. We define a truth assignment to be any function φ : PROP → {T, F}. A truth assignment is therefore an association of each propositional variable with a particular truth value. This is effectively the same as a particular row of a proposition's truth table. For a truth assignment φ, we define its extended truth assignment, φ̄, as follows. This extends φ to a new function φ̄ which has domain equal to the set of all propositional formulas. The range of φ̄ is still {T, F}. Finally, now that we have defined the extended truth assignment, we can use this to define the truth-function of a proposition. For a proposition A, its truth function, f_A, has domain equal to the set of all truth assignments, and range equal to {T, F}. It is defined, for each truth assignment φ, by f_A(φ) = φ̄(A). The value given by φ̄(A) is the same as the one displayed in the final column of the truth table of A, on the row identified with φ. Logical operators are implemented as logic gates in digital circuits. Practically all digital circuits (the major exception is DRAM) are built up from NAND, NOR, NOT, and transmission gates. NAND and NOR gates with 3 or more inputs rather than the usual 2 inputs are fairly common, although they are logically equivalent to a cascade of 2-input gates. All other operators are implemented by breaking them down into a logically equivalent combination of 2 or more of the above logic gates. The "logical equivalence" of "NAND alone", "NOR alone", and "NOT and AND" is similar to Turing equivalence. The fact that all truth functions can be expressed with NOR alone is demonstrated by the Apollo guidance computer.
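Since the explicit clauses for f_nand and for the extended truth assignment are not reproduced in the text above, the following Python sketch shows one way the construction is usually set out: the other connectives are obtained by composing f_nand (illustrating the functional completeness of NAND), and a truth assignment on variables is extended recursively to whole formulas. The tuple encoding of formulas and the function names are assumptions made for this example, not a standard API.

```python
# Sketch of a truth-functional interpretation built from NAND alone, and of an
# extended truth assignment evaluated recursively over formulas.

def f_nand(p, q):
    return not (p and q)

# Other connectives defined by composing f_nand, illustrating functional
# completeness of {NAND}.
def f_not(p):      return f_nand(p, p)
def f_and(p, q):   return f_not(f_nand(p, q))
def f_or(p, q):    return f_nand(f_not(p), f_not(q))
def f_impl(p, q):  return f_or(f_not(p), q)

def extend(phi):
    """Extend a truth assignment phi (dict: variable -> bool) to all formulas.

    Formulas are nested tuples such as ("->", ("not", "P"), "Q"); a bare string
    is a propositional variable."""
    ops = {"not": f_not, "and": f_and, "or": f_or, "->": f_impl, "nand": f_nand}
    def value(formula):
        if isinstance(formula, str):
            return phi[formula]
        op, *args = formula
        return ops[op](*(value(a) for a in args))
    return value

phi = {"P": True, "Q": False}
print(extend(phi)(("->", "P", "Q")))            # False
print(extend(phi)(("or", ("not", "P"), "Q")))   # False: ¬P ∨ Q agrees with P → Q
```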
https://en.wikipedia.org/wiki/Truth_function
In formal theories of truth, a truth predicate is a fundamental concept based on the sentences of a formal language as interpreted logically. That is, it formalizes the concept that is normally expressed by saying that a sentence, statement or idea "is true." Following Chomsky's definition, a language is assumed to be a countable set of sentences, each of finite length, and constructed out of a countable set of symbols. A theory of syntax is assumed to introduce symbols, and rules to construct well-formed sentences. A language is called fully interpreted if meanings are attached to its sentences so that they all are either true or false. A fully interpreted language L which does not have a truth predicate can be extended to a fully interpreted language Ľ that contains a truth predicate T, i.e., the sentence A ↔ T(⌈A⌉) is true for every sentence A of Ľ, where T(⌈A⌉) stands for "the sentence (denoted by) A is true". The main tools to prove this result are ordinary and transfinite induction, recursion methods, and ZF set theory (cf. [ 1 ] and [ 2 ] ).
https://en.wikipedia.org/wiki/Truth_predicate
Trygve Helgaker (born August 11, 1953, in Porsgrunn , Norway ) is professor of chemistry, Department of Chemistry, University of Oslo , Norway. He is a member of the International Academy of Quantum Molecular Science , 2005. He has written more than 200 scientific papers, and the book, Molecular Electronic-Structure Theory (Trygve Helgaker, Poul Jørgensen , and Jeppe Olsen, Wiley, Chichester, 2000). He is one of the main authors of the DALTON program.
https://en.wikipedia.org/wiki/Trygve_Helgaker
Tryptic soy-serum-bacitracin-vancomycin ( TSBV ) is a type of agar plate medium used in microbiological testing to select for Aggregatibacter actinomycetemcomitans ( A. a. ). [ 1 ] It was described by Jørgen Slots in 1982, who also discovered the role of A.a. in periodontitis . [ 2 ] Per litre, TSBV contains: [ 3 ]
https://en.wikipedia.org/wiki/Tryptic_soy-serum-bacitracin-vancomycin
Tryptophan-rich sensory proteins ( TspO ) are a family of proteins that are involved in transmembrane signalling . In either prokaryotes or mitochondria they are localized to the outer membrane , and have been shown to bind and transport dicarboxylic tetrapyrrole intermediates of the haem biosynthetic pathway . [ 1 ] [ 2 ] They are associated with the major outer membrane porins (in prokaryotes) and with the voltage-dependent anion channel (in mitochondria). [ 3 ] TspO of Rhodobacter sphaeroides is involved in signal transduction, functioning as a negative regulator of the expression of some photosynthesis genes (PpsR/AppA repressor/antirepressor regulon ). This down-regulation is believed to be in response to oxygen levels. TspO works through (or modulates) the PpsR/AppA system and acts upstream of the site of action of these regulatory proteins . [ 4 ] It has been suggested that the TspO regulatory pathway works by regulating the efflux of certain tetrapyrrole intermediates of the haem/bacteriochlorophyll biosynthetic pathways in response to the availability of molecular oxygen, thereby causing the accumulation of a biosynthetic intermediate that serves as a corepressor for the regulated genes . [ 5 ] A homologue of the TspO protein in Sinorhizobium meliloti is involved in regulating expression of the ndi locus in response to stress conditions. [ 6 ] In animals, the peripheral benzodiazepine receptor is a mitochondrial protein (located in the outer mitochondrial membrane ) characterised by its ability to bind with nanomolar affinity to a variety of benzodiazepine -like drugs, as well as to dicarboxylic tetrapyrrole intermediates of the haem biosynthetic pathway. Depending upon the tissue, it was shown to be involved in steroidogenesis , haem biosynthesis , apoptosis , cell growth and differentiation, mitochondrial respiratory control , and immune and stress response , but the precise function of the PBR remains unclear. The role of PBR in the regulation of cholesterol transport from the outer to the inner mitochondrial membrane, the rate-determining step in steroid biosynthesis, has been studied in detail. PBR is required for the binding, uptake and release, upon ligand activation, of the substrate cholesterol. [ 7 ] PBR forms a multimeric complex with the voltage-dependent anion channel (VDAC) [ 3 ] and adenine nucleotide carrier. [ 1 ] Molecular modeling of PBR suggested that it might function as a channel for cholesterol. Indeed, cholesterol uptake and transport by bacterial cells was induced upon PBR expression . Mutagenesis studies identified a cholesterol recognition/interaction motif (CRAC) in the cytoplasmic C terminus of PBR. [ 8 ] [ 9 ] In complementation experiments , rat PBR (pk18) protein functionally substitutes for its homologue TspO in R. sphaeroides, negatively affecting transcription of specific photosynthesis genes . [ 10 ] This suggests that PBR may function as an oxygen sensor, transducing an oxygen-triggered signal leading to an adaptive cellular response. These observations suggest that fundamental aspects of this receptor and the downstream signal transduction pathway are conserved in bacteria and higher eukaryotic mitochondria. The alpha-3 subdivision of the purple bacteria is considered to be a likely source of the endosymbiont that ultimately gave rise to the mitochondrion. Therefore, it is possible that the mammalian PBR remains both evolutionarily and functionally related to the TspO of R. sphaeroides.
https://en.wikipedia.org/wiki/Tryptophan-rich_sensory_protein
Tryptophan tryptophylquinone ( TTQ ) [ 1 ] is an enzyme cofactor , generated by posttranslational modification of amino acids within the protein. Methylamine dehydrogenase (MADH), an amine dehydrogenase , requires TTQ for its catalytic function . [ 2 ]
https://en.wikipedia.org/wiki/Tryptophan_tryptophylquinone
A ternary /ˈtɜːrnəri/ numeral system (also called base 3 or trinary [ 1 ] ) has three as its base. Analogous to a bit, a ternary digit is a trit (trinary digit). One trit is equivalent to log₂ 3 (about 1.58496) bits of information. Although ternary most often refers to a system in which the three digits are all non-negative numbers, specifically 0, 1, and 2, the adjective also lends its name to the balanced ternary system, comprising the digits −1, 0 and +1, used in comparison logic and ternary computers. Representations of integer numbers in ternary do not get uncomfortably lengthy as quickly as in binary. For example, decimal 365₁₀ or senary 1405₆ corresponds to binary 1 0110 1101₂ (nine bits) and to ternary 111 112₃ (six digits). However, they are still far less compact than the corresponding representations in bases such as decimal; see below for a compact way to codify ternary using nonary (base 9) and septemvigesimal (base 27). As for rational numbers, ternary offers a convenient way to represent 1/3, as does senary (as opposed to its cumbersome representation as an infinite string of recurring digits in decimal); but a major drawback is that, in turn, ternary does not offer a finite representation for 1/2 (nor for 1/4, 1/8, etc.), because 2 is not a prime factor of the base; as with base two, one-tenth (decimal 1/10, senary 1/14) is not representable exactly (that would need e.g. decimal); nor is one-sixth (senary 1/10, decimal 1/6). The value of a binary number with n bits that are all 1 is 2^n − 1. Similarly, for a number N(b, d) with base b and d digits, all of which are the maximal digit value b − 1, we can write N(b, d) = (b − 1)·b^(d−1) + (b − 1)·b^(d−2) + ... + (b − 1)·b^0, and hence N(b, d) = b^d − 1. For a three-digit ternary number, N(3, 3) = 3^3 − 1 = 26 = 2 × 3^2 + 2 × 3^1 + 2 × 3^0 = 18 + 6 + 2. Nonary /ˈnɒnəri/ (base 9, each digit is two ternary digits) or septemvigesimal (base 27, each digit is three ternary digits) can be used for compact representation of ternary, similar to how octal and hexadecimal systems are used in place of binary. In certain analog logic, the state of the circuit is often expressed ternary. This is most commonly seen in CMOS circuits, and also in transistor–transistor logic with totem-pole output. The output is said to either be low (grounded), high, or open (high-Z). In this configuration the output of the circuit is actually not connected to any voltage reference at all. Where the signal is usually grounded to a certain reference, or at a certain voltage level, the state is said to be high impedance because it is open and serves its own reference. Thus, the actual voltage level is sometimes unpredictable. A rare "ternary point" in common use is for defensive statistics in American baseball (usually just for pitchers), to denote fractional parts of an inning. Since the team on offense is allowed three outs, each out is considered one third of a defensive inning and is denoted as .1. For example, if a player pitched all of the 4th, 5th and 6th innings, plus achieving 2 outs in the 7th inning, his innings pitched column for that game would be listed as 3.2, the equivalent of 3 + 2⁄3 (which is sometimes used as an alternative by some record keepers). In this usage, only the fractional part of the number is written in ternary form. [ 2 ] [ 3 ] Ternary numbers can be used to convey self-similar structures like the Sierpinski triangle or the Cantor set conveniently.
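As a quick check of the representation figures quoted above (decimal 365 written in binary, ternary and senary), a short conversion routine suffices; the helper below is an illustrative sketch, not a standard library function.

```python
# Check the representations quoted above: decimal 365 is 111112 in ternary,
# 1405 in senary, and 101101101 in binary.

def to_base(n, base):
    """Return the base-`base` representation of a non-negative integer n."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        n, r = divmod(n, base)
        digits.append("0123456789"[r])
    return "".join(reversed(digits))

print(to_base(365, 3))  # 111112
print(to_base(365, 6))  # 1405
print(to_base(365, 2))  # 101101101
```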
Additionally, it turns out that the ternary representation is useful for defining the Cantor set and related point sets, because of the way the Cantor set is constructed. The Cantor set consists of the points from 0 to 1 that have a ternary expression that does not contain any instance of the digit 1. [ 4 ] [ 5 ] Any terminating expansion in the ternary system is equivalent to the expression that is identical up to the term preceding the last non-zero term followed by the term one less than the last non-zero term of the first expression, followed by an infinite tail of twos. For example: 0.1020 is equivalent to 0.1012222... because the expansions are the same until the "two" of the first expression, the two was decremented in the second expansion, and trailing zeros were replaced with trailing twos in the second expression. Ternary is the integer base with the lowest radix economy , followed closely by binary and quaternary . This is due to its proximity to the mathematical constant e . It has been used for some computing systems because of this efficiency. It is also used to represent three-option trees , such as phone menu systems, which allow a simple path to any branch. A form of redundant binary representation called a binary signed-digit number system, a form of signed-digit representation , is sometimes used in low-level software and hardware to accomplish fast addition of integers because it can eliminate carries . [ 6 ] Simulation of ternary computers using binary computers, or interfacing between ternary and binary computers, can involve use of binary-coded ternary (BCT) numbers, with two or three bits used to encode each trit. [ 7 ] [ 8 ] BCT encoding is analogous to binary-coded decimal (BCD) encoding. If the trit values 0, 1 and 2 are encoded 00, 01 and 10, conversion in either direction between binary-coded ternary and binary can be done in logarithmic time . [ 9 ] A library of C code supporting BCT arithmetic is available. [ 10 ] Some ternary computers such as the Setun defined a tryte to be six trits [ 11 ] or approximately 9.5 bits (holding more information than the de facto binary byte ). [ 12 ]
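A minimal sketch of the ternary characterisation of the Cantor set described above: a point of [0, 1] belongs to the set exactly when some ternary expansion of it avoids the digit 1. The helper names are illustrative, the check only inspects a finite prefix of the expansion, and the subtlety of dual expansions (e.g. 0.1 = 0.0222... in ternary) is ignored here.

```python
# Ternary-expansion test for Cantor-set membership (approximate: finite prefix).

def ternary_digits(x, n=20):
    """First n digits of a ternary expansion of x in [0, 1)."""
    digits = []
    for _ in range(n):
        x *= 3
        d = int(x)
        digits.append(d)
        x -= d
    return digits

def looks_like_cantor_point(x, n=20):
    return 1 not in ternary_digits(x, n)

print(looks_like_cantor_point(1/4))   # True:  1/4 = 0.020202... in ternary
print(looks_like_cantor_point(1/2))   # False: 1/2 = 0.111111... in ternary
```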
https://en.wikipedia.org/wiki/Tryte
The Tsai–Hill failure criterion is one of the phenomenological material failure theories, widely used for anisotropic composite materials which have different strengths in tension and compression. The Tsai–Hill criterion predicts failure when the failure index in a laminate reaches 1. The Tsai–Hill criterion is based on an energy theory with interactions between stresses. Ply rupture is predicted when a quadratic combination of the in-plane stresses, each normalised by the corresponding strength, reaches 1; [ 1 ] [ 2 ] the commonly quoted plane-stress form is sketched after this paragraph. The Tsai–Hill criterion is interactive, i.e. the stresses in different directions are not decoupled and do affect the failure simultaneously. [ 2 ] Furthermore, it is a failure-mode-independent criterion, as it does not predict the way in which the material will fail, as opposed to mode-dependent criteria such as the Hashin criterion or the Puck failure criterion. This can be important as some types of failure can be more critical than others.
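The formula itself is not reproduced in the text above. For orientation, the plane-stress form most commonly quoted for the Tsai–Hill criterion is sketched below, with σ1 and σ2 the normal stresses along and transverse to the fibres, τ12 the in-plane shear stress, and X, Y, S the corresponding strengths; this is a standard textbook form and may differ in notation from the cited references.

```latex
\frac{\sigma_1^{2}}{X^{2}}
\;-\; \frac{\sigma_1 \sigma_2}{X^{2}}
\;+\; \frac{\sigma_2^{2}}{Y^{2}}
\;+\; \frac{\tau_{12}^{2}}{S^{2}}
\;\ge\; 1
\qquad \text{(failure predicted when the left-hand side reaches 1)}
```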
https://en.wikipedia.org/wiki/Tsai-Hill_failure_criterion
The Tsakane Clay Grassland is a rare South African vegetation type supporting a unique grassland ecosystem. It is named after the township of Tsakane in Ekurhuleni , Gauteng , in which it is the dominant natural vegetation type. This ecosystem is characterized by its clay-rich soil, which supports a diverse array of flora and fauna, including several endemic and threatened species. The Tsakane Clay Grassland is an important conservation area, as it plays a crucial role in maintaining biodiversity and providing ecosystem services to the surrounding human populations. The Tsakane Clay Grassland is the main vegetation type within the Suikerbosrand Nature Reserve , with a smaller occurrence of the Andesite Mountain Bushveld (SVcb11) vegetation unit. [ 1 ] The altitude varies between 1,545 and 1,917 meters above sea level. The grassland extends from Soweto to the town of Springs in Gauteng and is distributed in patches southwards to Nigel and Vereeniging . The vegetation unit also occurs in parts of Mpumalanga between Balfour and Standerton and also in the northern side of the Vaal Dam . [ 1 ] The landscape is flat to slightly undulating, with low hills also present in some areas of the grassland. The Tsakane Clay Grassland is home to a diverse range of plant species, including important taxa such as Andropogon schirensis , Eragrostis racemosa , Senecio inornatus , and Anthospermum rigidum subsp. pumilum . [ 1 ] These species are adapted to the clay-rich soil conditions found in the grassland. The ecosystem also supports a variety of animal species, including mammals, birds, reptiles, and insects, many of which rely on the unique plant species for food and habitat. The Tsakane Clay Grassland is an important conservation area due to its high levels of biodiversity and the presence of several threatened and endemic species. Efforts to conserve the ecosystem include the establishment of protected areas, as well as ongoing research and monitoring programs to better understand and manage the unique flora and fauna. These conservation efforts aim to preserve the ecological integrity of the grassland and ensure the long-term survival of its species. The Tsakane Clay Grassland faces several threats, [ 2 ] primarily from human activities such as urbanization, agriculture, and mining. [ 3 ] The expansion of nearby towns and cities has led to habitat loss and fragmentation, which can negatively impact the ecosystem's biodiversity. Additionally, the introduction of non-native species and pollution from various sources can further degrade the grassland and threaten its native species.
https://en.wikipedia.org/wiki/Tsakane_Clay_Grassland
In statistics, a Tsallis distribution is a probability distribution derived from the maximization of the Tsallis entropy under appropriate constraints. There are several different families of Tsallis distributions, yet different sources may reference an individual family as "the Tsallis distribution". The q-Gaussian is a generalization of the Gaussian in the same way that Tsallis entropy is a generalization of standard Boltzmann–Gibbs entropy or Shannon entropy. [ 1 ] Similarly, if the domain of the variable is constrained to be positive in the maximum entropy procedure, the q-exponential distribution is derived. The Tsallis distributions have been applied to problems in the fields of statistical mechanics, geology, anatomy, astronomy, economics, finance, and machine learning. The distributions are often used for their heavy tails. Note that Tsallis distributions are obtained as Box–Cox transformation [ 2 ] over usual distributions, with deformation parameter λ = 1 − q. This deformation transforms exponentials into q-exponentials. In a similar procedure to how the normal distribution can be derived using the standard Boltzmann–Gibbs entropy or Shannon entropy, the q-Gaussian can be derived from a maximization of the Tsallis entropy subject to the appropriate constraints. [ 3 ] [ 4 ] See q-Gaussian. See q-exponential distribution. See q-Weibull distribution.
https://en.wikipedia.org/wiki/Tsallis_distribution
In physics, the Tsallis entropy is a generalization of the standard Boltzmann–Gibbs entropy. It is proportional to the expectation of the q-logarithm of a distribution. The concept was introduced in 1988 by Constantino Tsallis [ 1 ] as a basis for generalizing the standard statistical mechanics and is identical in form to Havrda–Charvát structural α-entropy, [ 2 ] introduced in 1967 within information theory. Given a discrete set of probabilities {p_i} with the condition Σ_i p_i = 1, and q any real number, the Tsallis entropy is defined as S_q = (k/(q − 1)) (1 − Σ_i p_i^q), where q is a real parameter sometimes called the entropic index and k a positive constant. In the limit as q → 1, the usual Boltzmann–Gibbs entropy is recovered, namely S = −k Σ_i p_i ln p_i, where one identifies k with the Boltzmann constant k_B. For continuous probability distributions, we define the entropy as S_q = (k/(q − 1)) (1 − ∫ p(x)^q dx), where p(x) is a probability density function. The cross-entropy pendant is the expectation of the negative q-logarithm with respect to a second distribution, r. So (1/(q − 1)) (1 − Σ_i p_i^q · r_i/p_i). Using t = q − 1, this may be written (1 − E_r[p^t])/t. For smaller t, the values p_i^t all tend towards 1. The limit q → 1 computes the negative of the slope of E_r[p^t] at t = 0 and one recovers −Σ_i r_i ln p_i. So for fixed small t, raising this expectation relates to log-likelihood maximization. A logarithm can be expressed in terms of a slope through (d/dx) p^x = p^x ln p, resulting in the following formula for the standard entropy: S = −k (d/dx) Σ_i p_i^x evaluated at x = 1. Likewise, the discrete Tsallis entropy satisfies S_q = −k D_q Σ_i p_i^x evaluated at x = 1, where D_q is the q-derivative with respect to x. Given two independent systems A and B, for which the joint probability density satisfies p(A, B) = p(A) p(B), the Tsallis entropy of this system satisfies S_q(A, B) = S_q(A) + S_q(B) + ((1 − q)/k) S_q(A) S_q(B). From this result, it is evident that the parameter |1 − q| is a measure of the departure from additivity. In the limit when q = 1, S(A, B) = S(A) + S(B), which is what is expected for an additive system. This property is sometimes referred to as "pseudo-additivity". Many common distributions like the normal distribution belong to the statistical exponential families. The Tsallis entropy for an exponential family can be written [ 3 ] in terms of a log-normalizer F and a term k indicating the carrier measure. For the multivariate normal, the term k is zero, and therefore the Tsallis entropy is in closed form. The Tsallis entropy has been used along with the principle of maximum entropy to derive the Tsallis distribution. In scientific literature, the physical relevance of the Tsallis entropy has been debated. [ 4 ] [ 5 ] [ 6 ] However, from the years 2000 on, an increasingly wide spectrum of natural, artificial and social complex systems have been identified which confirm the predictions and consequences that are derived from this nonadditive entropy, such as nonextensive statistical mechanics, [ 7 ] which generalizes the Boltzmann–Gibbs theory.
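A small numerical check of the definition quoted above and of its q → 1 limit, with k set to 1; the function name and the example distribution are illustrative choices.

```python
import math

# Tsallis entropy S_q = (1/(q-1)) * (1 - sum(p_i^q)), with the Shannon /
# Boltzmann-Gibbs form recovered as q -> 1 (k is set to 1 here).

def tsallis_entropy(p, q, k=1.0):
    if q == 1.0:
        return -k * sum(pi * math.log(pi) for pi in p if pi > 0)
    return k / (q - 1.0) * (1.0 - sum(pi ** q for pi in p))

p = [0.5, 0.3, 0.2]
print(tsallis_entropy(p, 2.0))       # 1 - sum(p_i^2) = 0.62
print(tsallis_entropy(p, 1.000001))  # close to the Shannon value below
print(tsallis_entropy(p, 1.0))       # -sum(p_i ln p_i) ≈ 1.0297
```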
Among the various experimental verifications and applications presently available in the literature, the following ones deserve a special mention: Among the various available theoretical results which clarify the physical conditions under which Tsallis entropy and associated statistics apply, the following ones can be selected: For further details a bibliography is available at http://tsallis.cat.cbpf.br/biblio.htm Several interesting physical systems [ 28 ] abide by entropic functionals that are more general than the standard Tsallis entropy. Therefore, several physically meaningful generalizations have been introduced. The two most general of these are notably: Superstatistics , introduced by C. Beck and E. G. D. Cohen in 2003 [ 29 ] and Spectral Statistics , introduced by G. A. Tsekouras and Constantino Tsallis in 2005. [ 30 ] Both these entropic forms have Tsallis and Boltzmann–Gibbs statistics as special cases; Spectral Statistics has been proven to at least contain Superstatistics and it has been conjectured to also cover some additional cases. [ citation needed ]
https://en.wikipedia.org/wiki/Tsallis_entropy
The term Tsallis statistics usually refers to the collection of mathematical functions and associated probability distributions that were originated by Constantino Tsallis. Using that collection, it is possible to derive Tsallis distributions from the optimization of the Tsallis entropic form. A continuous real parameter q can be used to adjust the distributions, so that distributions which have properties intermediate to that of Gaussian and Lévy distributions can be created. The parameter q represents the degree of non-extensivity of the distribution. Tsallis statistics are useful for characterising complex, anomalous diffusion. The q-deformed exponential and logarithmic functions were first introduced in Tsallis statistics in 1994. [ 1 ] However, the q-logarithm is the Box–Cox transformation for q = 1 − λ, proposed by George Box and David Cox in 1964. [ 2 ] The q-exponential is a deformation of the exponential function using the real parameter q. [ 3 ] Note that the q-exponential in Tsallis statistics is different from a version used elsewhere. The q-logarithm is the inverse of the q-exponential and a deformation of the logarithm using the real parameter q. [ 3 ] These functions have the property that The q → 1 limits of the above expressions can be understood by considering (1 + x/N)^N ≈ e^x for the exponential function and N(x^(1/N) − 1) ≈ log(x) for the logarithm.
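The text above refers to the q-exponential and q-logarithm without reproducing their formulas. The sketch below uses the definitions most commonly given in the Tsallis literature, exp_q(x) = [1 + (1 − q)x]^(1/(1−q)) (set to 0 when the bracket is non-positive) and ln_q(x) = (x^(1−q) − 1)/(1 − q); treat these as assumptions rather than as the exact conventions of the cited sources.

```python
import math

# q-deformed exponential and logarithm, with the ordinary exp and log
# recovered as q -> 1.  Definitions assumed as stated in the lead-in above.

def exp_q(x, q):
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def ln_q(x, q):
    if q == 1.0:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

print(exp_q(ln_q(5.0, 1.5), 1.5))   # ~5.0: the two functions are inverses
print(exp_q(1.0, 0.9999))           # ~e, approaching the ordinary exponential
```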
https://en.wikipedia.org/wiki/Tsallis_statistics
The Tscherniak-Einhorn reaction is an organic chemistry name reaction initiated in 1901 by Joseph Tscherniak. It involves the condensation of N-hydroxymethylphthalimide with varied aromatic compounds. This process was later expanded upon in 1905 by Alfred Einhorn to include the condensation of N-hydroxymethylchloroacetamide and benzoic or cinnamic acid. [ 1 ] [ 2 ] This reaction entails an acid-catalyzed electrophilic aromatic substitution amidoalkylation, utilizing an N-hydroxymethylamide or N-hydroxymethylimide, which are also known as the Tscherniak-Einhorn reagents. The reaction is catalyzed by potent acids such as 85-100% sulfuric acid, p-toluenesulfonic acid, methanesulfonic acid, or trifluoroacetic acid. [ 2 ] N-hydroxymethylamides may be prepared by the condensation of corresponding amides with an aqueous formaldehyde solution in dioxane, in the presence of sodium hydroxide. [ 3 ] In the first step, the N-hydroxymethylamide is protonated by the acid. After elimination of water, a mesomerically stabilized cation forms. This reacts with the aromatic compound in line with an electrophilic aromatic substitution process. [ 2 ] The Tscherniak-Einhorn reaction is used to synthesize some alkaloid derivatives. [ 2 ]
https://en.wikipedia.org/wiki/Tscherniak-Einhorn_reaction
In mathematics, a Tschirnhaus transformation, also known as Tschirnhausen transformation, is a type of mapping on polynomials developed by Ehrenfried Walther von Tschirnhaus in 1683. [ 1 ] Simply, it is a method for transforming a polynomial equation of degree n ≥ 2 with some nonzero intermediate coefficients, a_1, ..., a_(n−1), such that some or all of the transformed intermediate coefficients, a'_1, ..., a'_(n−1), are exactly zero. For example, finding a substitution y(x) = k_1 x^2 + k_2 x + k_3 for a cubic equation of degree n = 3, f(x) = x^3 + a_2 x^2 + a_1 x + a_0, such that substituting x = x(y) yields a new equation f'(y) = y^3 + a'_2 y^2 + a'_1 y + a'_0 such that a'_1 = 0, a'_2 = 0, or both. More generally, it may be defined conveniently by means of field theory, as the transformation on minimal polynomials implied by a different choice of primitive element. This is the most general transformation of an irreducible polynomial that takes a root to some rational function applied to that root. For a generic nth-degree reducible monic polynomial equation f(x) = 0 of the form f(x) = g(x)/h(x), where g(x) and h(x) are polynomials and h(x) does not vanish at f(x) = 0, f(x) = x^n + a_1 x^(n−1) + a_2 x^(n−2) + ... + a_(n−1) x + a_n = 0, the Tschirnhaus transformation is the function y = k_1 x^(n−1) + k_2 x^(n−2) + ... + k_(n−1) x + k_n, such that the new equation in y, f'(y), has certain special properties, most commonly such that some coefficients, a'_1, ..., a'_(n−1), are identically zero. [ 2 ] [ 3 ] In Tschirnhaus' 1683 paper, [ 1 ] he solved the equation f(x) = x^3 − p x^2 + q x − r = 0 using the Tschirnhaus transformation y(x; a) = x − a, i.e. x(y; a) = x = y + a. Substituting yields the transformed equation f'(y; a) = y^3 + (3a − p) y^2 + (3a^2 − 2pa + q) y + (a^3 − p a^2 + q a − r) = 0, or: a'_1 = 3a − p, a'_2 = 3a^2 − 2pa + q, a'_3 = a^3 − p a^2 + q a − r. Setting a'_1 = 0 yields 3a − p = 0, so a = p/3, and finally the Tschirnhaus transformation y = x − p/3, which may be substituted into f'(y; a) to yield an equation of the form f'(y) = y^3 − q' y − r'.
Tschirnhaus went on to describe how a Tschirnhaus transformation of the form x^2(y; a, b) = x^2 = b x + y + a may be used to eliminate two coefficients in a similar way. In detail, let K be a field, and P(t) a polynomial over K. If P is irreducible, then the quotient ring of the polynomial ring K[t] by the principal ideal generated by P is a field extension of K. We have K[t]/(P) = K(α), where α is t modulo (P). That is, any element of L is a polynomial in α, which is thus a primitive element of L. There will be other choices β of primitive element in L: for any such choice of β we will have, by definition, β = F(α) and α = G(β), with polynomials F and G over K. Now if Q is the minimal polynomial for β over K, we can call Q a Tschirnhaus transformation of P. Therefore the set of all Tschirnhaus transformations of an irreducible polynomial is to be described as running over all ways of changing P, but leaving L the same. This concept is used in reducing quintics to Bring–Jerrard form, for example. There is a connection with Galois theory, when L is a Galois extension of K. The Galois group may then be considered as all the Tschirnhaus transformations of P to itself. In 1683, Ehrenfried Walther von Tschirnhaus published a method for rewriting a polynomial of degree n > 2 such that the x^(n−1) and x^(n−2) terms have zero coefficients. In his paper, Tschirnhaus referenced a method by René Descartes to reduce a quadratic polynomial (n = 2) such that the x term has zero coefficient. In 1786, this work was expanded by Erland Samuel Bring who showed that any generic quintic polynomial could be similarly reduced. In 1834, George Jerrard further expanded Tschirnhaus' work by showing a Tschirnhaus transformation may be used to eliminate the x^(n−1), x^(n−2), and x^(n−3) terms for a general polynomial of degree n > 3. [ 3 ]
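As a numerical illustration of the worked cubic example above: substituting x = y + p/3 into f(x) = x^3 − p x^2 + q x − r removes the quadratic term, and the helper below computes the coefficients of the resulting depressed cubic and checks them on a factored example. The function names are illustrative, not from any particular library.

```python
# Depressing a cubic via the Tschirnhaus substitution x = y + p/3.

def depressed_cubic(p, q, r):
    """Coefficients (c1, c0) of f'(y) = y^3 + c1*y + c0 after x = y + p/3."""
    a = p / 3.0
    c1 = 3 * a ** 2 - 2 * p * a + q        # = q - p^2/3
    c0 = a ** 3 - p * a ** 2 + q * a - r   # = p*q/3 - 2*p^3/27 - r
    return c1, c0

def f(x, p, q, r):
    """The original cubic f(x) = x^3 - p x^2 + q x - r."""
    return x ** 3 - p * x ** 2 + q * x - r

p, q, r = 6.0, 11.0, 6.0                   # f(x) = (x - 1)(x - 2)(x - 3)
c1, c0 = depressed_cubic(p, q, r)
x = 3.0                                    # a root of f
y = x - p / 3.0                            # corresponding root of the transformed cubic
print(abs(f(x, p, q, r)))                  # 0.0: x really is a root of f
print(abs(y ** 3 + c1 * y + c0))           # 0.0: y solves the depressed cubic f'(y) = 0
```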
https://en.wikipedia.org/wiki/Tschirnhaus_transformation
Tsiklon (meaning cyclone , Russian : Циклон ) is the first Soviet satellite navigation system , developed in the former Soviet Union . From 1967 to 1978 a total of 31 Zaliv satellites were launched onboard Kosmos-3 and Kosmos-3M rockets, from the Kapustin Yar and Plesetsk launch sites. [ 1 ] The project was conceived in the 1950s and the draft proposal was approved in 1962, but was not made operational until 1972 due to delays. [ 2 ] The successor satellites to Tsiklon were Parus and Sfera . [ 2 ] Currently, Russia operates the GLONASS system.
https://en.wikipedia.org/wiki/Tsiklon_(satellite_navigation_system)
Tsit Yuen Lam ( Chinese : 林節玄 ; Jyutping : lam4 zit3 jyun4 ; [ 1 ] born 6 February 1942 [ 2 ] ) is a Hong Kong-American mathematician specializing in algebra , especially ring theory and quadratic forms . Lam earned his bachelor's degree at the University of Hong Kong in 1963 and his Ph.D. at Columbia University in 1967 under Hyman Bass , with a thesis titled On Grothendieck Groups . [ 3 ] Subsequently, he was an instructor at the University of Chicago and since 1968 he has been at the University of California, Berkeley , where he became assistant professor in 1969, associate professor in 1972, and full professor in 1976. He served as assistant department head several times. From 1995 to 1997 he was Deputy Director of the Mathematical Sciences Research Institute in Berkeley, California . [ 2 ] Among his doctoral students is Richard Elman . [ 3 ] From 1972 to 1974 he was a Sloan Fellow ; in 1978–79 a Miller Research Professor; and in 1981–82 a Guggenheim Fellow . In 1982 he was awarded the Leroy P. Steele Prize for his textbooks. [ 4 ] In 2012 he became a fellow of the American Mathematical Society . [ 5 ]
https://en.wikipedia.org/wiki/Tsit_Yuen_Lam
Tsix is a non-coding RNA gene that is antisense to the Xist RNA . Tsix binds Xist during X chromosome inactivation . The name Tsix comes from the reverse of Xist, which stands for X-inactive specific transcript. [ 3 ] Female mammals have two X chromosomes and males have one X and one Y chromosome . The X chromosome has many active genes. This leads to dosage compensation problems: the two X chromosomes in the female will create twice as many gene products as the one X in the male. To mitigate this, one of the X chromosomes is inactivated in females, so that each sex only has one set of X chromosome genes. The inactive X chromosome in cells of females is visible as a Barr body under the microscope. Males do not have Barr bodies, as they only have one X chromosome. [ 3 ] Xist is only expressed from the future inactive X chromosome in females and is able to "coat" the chromosome from which it was produced. Many copies of Xist RNA bind the future inactivated X chromosome. Tsix prevents the accumulation of Xist on the future active female X chromosome to maintain the active euchromatin state of the chosen chromosome. [ 3 ] [ 4 ] In the extra-embryonic lineage in mice and some other mammals, all female individuals have two X chromosomes. However, during embryonic development, an X chromosome is deactivated, while the other X chromosome is left untouched, in a process called imprinted X-inactivation . Xist inactivates an X chromosome at random in female mice by condensing the chromatin , via histone methylation among other mechanisms that are currently being studied. This inactivation happens at random in each individual cell, allowing for a different X chromosome to be inactivated in each cell. Female mammals are therefore called genetic mosaics, for having two different X chromosomes expressed throughout their body. Tsix binds complementary Xist RNA and renders it non-functional. After binding it, Xist is made inactive through dicer . [ 4 ] Thus, Xist does not condense chromatin on the other X chromosome, letting it remain active. This does not occur on the other chromosome, and Xist proceeds to inactivate that chromosome. [ 5 ] Tsix also functions to silence transcription of Xist through epigenetic regulation . [ 4 ] Tsix and Xist regulate X chromosome protein production in female mice to prevent early embryonic mortality. [ 6 ] X inactivation allows for equal dosage of X-linked genes for both males and females by inactivating the extra X chromosome in the females. [ 7 ] Mutation of the maternal Tsix gene can cause over accumulation of Xist on both X chromosomes, silencing both X chromosomes in females and the single X chromosome in male. This can cause early mortality. However, if the paternal Tsix allele is active, it can rescue female embryos from the over-accumulation of Xist. [ 8 ] When one allele of Tsix in mice is null, the inactivation is skewed toward the mutant X chromosome. This is due to an accumulation of Xist that is not countered by Tsix, and causes the mutant chromosome to be inactivated. When both alleles of Tsix are null ( homozygous mutant), the results are low fertility, lower proportion of female births and a reversion to random X inactivation rather than gene imprinting . [ 9 ] In development, X chromosome inactivation is a part of cellular differentiation . This is accomplished by normal Xist function. To confer pluripotency in an embryonic stem cell, factors inhibit Xist transcription.
These factors also upregulate transcription of Tsix, which serves to inhibit Xist further. This cell is then able to remain pluripotent as X inactivation is not accomplished. [ 10 ] The marker Rex1 and other members of the pluripotency network are recruited to the Tsix promoter, and transcription elongation of Tsix occurs. [ 10 ] Along with Tsix and other proteins, the factor PRDM14 has been shown to be necessary for the return to pluripotency. Assisted by Tsix, PRDM14 can associate with Xist and remove the inactivation of an X chromosome. [ 11 ] X chromosome inactivation is random in human females, and imprinting does not occur. The deletion of a CpG island, a site involved in epigenetic regulation, in the human Tsix gene prevents Tsix from imprinting on the X chromosomes. Instead, the human Tsix gene is coexpressed with the human Xist gene on the inactivated X chromosome, indicating that it does not play an important role in random X chromosome inactivation. [ 12 ] An autosome may be a more likely candidate for regulating this process in humans. The presence of Tsix in humans may be an evolutionary vestige, a sequence that no longer has a function in humans. Alternatively, it may be necessary to study cells closer to the X inactivation stage rather than older cells in order to accurately locate Tsix expression and function. [ 5 ]
https://en.wikipedia.org/wiki/Tsix
The Tsuji–Trost reaction (also called the Trost allylic alkylation or allylic alkylation ) is a palladium - catalysed substitution reaction involving a substrate that contains a leaving group in an allylic position. The palladium catalyst first coordinates with the allyl group and then undergoes oxidative addition , forming the π -allyl complex. This allyl complex can then be attacked by a nucleophile , resulting in the substituted product. [ 1 ] This work was first pioneered by Jirō Tsuji in 1965 [ 2 ] and, later, adapted by Barry Trost in 1973 with the introduction of phosphine ligands. [ 3 ] The scope of this reaction has been expanded to many different carbon, nitrogen, and oxygen-based nucleophiles, many different leaving groups, many different phosphorus, nitrogen, and sulfur-based ligands, and many different metals (although palladium is still preferred). [ 4 ] The introduction of phosphine ligands led to improved reactivity and numerous asymmetric allylic alkylation strategies. Many of these strategies are driven by the advent of chiral ligands , which are often able to provide high enantioselectivity and high diastereoselectivity under mild conditions. This modification greatly expands the utility of this reaction for many different synthetic applications. The ability to form carbon-carbon, carbon-nitrogen, and carbon-oxygen bonds under these conditions, makes this reaction very appealing to the fields of both medicinal chemistry and natural product synthesis. In 1962, Smidt published work on the palladium-catalysed oxidation of alkenes to carbonyl groups. In this work, it was determined that the palladium catalyst activated the alkene for the nucleophilic attack of hydroxide . [ 5 ] Gaining insight from this work, Tsuji hypothesized that a similar activation could take place to form carbon-carbon bonds. In 1965, Tsuji reported work that confirmed his hypothesis. By reacting an allylpalladium chloride dimer with the sodium salt of diethyl malonate , the group was able to form a mixture of monoalkylated and dialkylated product. [ 6 ] The scope of the reaction was expanded only gradually until Trost discovered the next big breakthrough in 1973. While attempting to synthesize acyclic sesquiterpene homologs, Trost ran into problems with the initial procedure and was not able to alkylate his substrates. These problems were overcome with the addition of triphenylphosphine to the reaction mixture. These conditions were then tested out for other substrates and some led to "essentially instantaneous reaction at room temperature." Soon after, he developed a way to use these ligands for asymmetric synthesis. [ 7 ] Not surprisingly, this spurred on many other investigations of this reaction and has led to the important role that this reaction now holds in synthetic chemistry. Starting with a zerovalent palladium species and a substrate containing a leaving group in the allylic position, the Tsuji–Trost reaction proceeds through the catalytic cycle outlined below. First, the palladium coordinates to the alkene, forming a η 2 π -allyl- Pd 0 Π complex . The next step is oxidative addition in which the leaving group is expelled with inversion of configuration and a η 3 π -allyl- Pd II is created (also called ionization). The nucleophile then adds to the allyl group regenerating the η 2 π -allyl-Pd 0 complex. At the completion of the reaction, the palladium detaches from the alkene and can start again in the catalytic cycle . 
[ 8 ] The nucleophiles used are typically generated from precursors (pronucleophiles) in situ after their deprotonation with base. [ 9 ] These nucleophiles are then subdivided into "hard" and "soft" nucleophiles using a paradigm for describing nucleophiles that largely rests on the pKas of their conjugate acids . "Hard" nucleophiles typically have conjugate acids with pKas greater than 25, while "soft" nucleophiles typically have conjugate acids with pKas less than 25. [ 10 ] This descriptor is important because of the impact these nucleophiles have on the stereoselectivity of the product. Stabilized or "soft" nucleophiles invert the stereochemistry of the π -allyl complex. This inversion in conjunction with the inversion in stereochemistry associated with the oxidative addition of palladium yields a net retention of stereochemistry. Unstabilized or "hard" nucleophiles, on the other hand, retain the stereochemistry of the π -allyl complex, resulting in a net inversion of stereochemistry. [ 11 ] This trend is explained by examining the mechanisms of nucleophilic attack. "Soft" nucleophiles attack the carbon of the allyl group, while "hard" nucleophiles attack the metal center, followed by reductive elimination. [ 12 ] Phosphine ligands, such as triphenylphosphine or the Trost ligand , have been used to greatly expand the scope of the Tsuji–Trost reaction. These ligands can modulate the properties of the palladium catalyst such as steric bulk as well as the electronic properties. Importantly, these ligands can also instill chirality to the final product, making it possible for these reactions to be carried out asymmetrically as shown below. The enantioselective version of the Tsuji–Trost reaction is called the Trost asymmetric allylic alkylation (Trost AAA) or simply, asymmetric allylic alkylation (AAA). These reactions are often used in asymmetric synthesis. [ 13 ] [ 14 ] [ 15 ] The reaction was originally developed with a palladium catalyst supported by the Trost ligand , although suitable conditions have greatly expanded since then. Enantioselectivity can be imparted to the reaction during any of the steps aside from the decomplexation of the palladium from the alkene since the stereocenter is already set at that point. Five main ways have been conceptualized to take advantage of these steps and yield enantioselective reaction conditions. These methods of enantiodiscrimination were previously reviewed by Trost: The favored method for enantiodiscrimination is largely dependent on the substrate of interest, and in some cases, the enantioselectivity may be influenced by several of these factors. Many different nucleophiles have been reported to be effective for this reaction. Some of the most common nucleophiles include malonates , enolates , primary alkoxides , carboxylates , phenoxides , amines , azide , sulfonamides , imides , and sulfones . The scope of leaving groups has also been expanded to include a number of different leaving groups, although carbonates , phenols , phosphates , halides and carboxylates are the most widely used. Recent work has demonstrated that the scope of "soft" nucleophiles can be expanded to include some pronucleophiles that have much higher pKas than ~ 25. Some of these "soft" nucleophiles have pKas ranging all the way to 32, [ 16 ] and even more basic pronucleophiles (~44) have been shown to act as soft nucleophiles with the addition of Lewis acids that help to facilitate deprotonation. 
[ 17 ] The improved pKa range of "soft" nucleophiles is critical because these nucleophiles are the only ones that have been explored [ 18 ] [ 19 ] for enantioselective reactions until very recently [ 20 ] (although non-enantioselective reactions of "hard" nucleophiles have been known for some time [ 21 ] ). By increasing the scope of pronucleophiles that act as "soft" nucleophiles, these substrates can also be incorporated into enantioselective reactions using previously reported and well characterized methods. Building on the reactivity of the triphenylphosphine ligand, the structure of ligands used for the Tsuji–Trost reaction quickly became more complex. Today, these ligands may contain phosphorus, sulfur, nitrogen or some combination of these elements, but most studies have concentrated on the mono- and diphosphine ligands. These ligands can be further classified based on the nature of their chirality, with some ligands containing central chirality on the phosphorus or carbon atoms, some containing biaryl axial chirality , and others containing planar chirality . Diphosphine ligands with central chirality emerged as an effective type of ligand (particularly for asymmetric allylic alkylation procedures) with the Trost Ligand being one such example. [ 22 ] Phosphinooxazolines (PHOX) ligands have been employed in the AAA, particularly with carbon-based nucleophiles. [ 23 ] The reaction substrate has also been extended to allenes . In this specific ring expansion the AAA reaction is also accompanied by a Wagner–Meerwein rearrangement : [ 24 ] [ 25 ] The ability to form carbon-carbon, carbon-nitrogen, and carbon-oxygen bonds enantioselectively under mild conditions makes the Trost asymmetric allylic alkylation extremely appealing for the synthesis of complex molecules. An example of this reaction is the synthesis of an intermediate in the combined total synthesis of galantamine and morphine [ 26 ] with 1 mol% [pi-allylpalladium chloride dimer], 3 mol% ( S,S ) Trost ligand , and triethylamine in dichloromethane at room temperature . These conditions result in the formation of the (−)-enantiomer of the aryl ether in 72% chemical yield and 88% enantiomeric excess . Another Tsuji–Trost reaction was used during the initial stages of the synthesis of (−)- neothiobinupharidine . This recent work demonstrates the ability of this reaction to give highly diastereoselective (10:1) and enantioselective (97.5:2.5) products from achiral starting material with only a small amount of catalyst ( 1% ). [ 27 ] Aside from the practical application of this reaction in medicinal chemistry and natural product synthesis, recent work has also used the Tsuji–Trost reaction to detect palladium in various systems. This detection system is based on a non- fluorescent fluorescein -derived sensor (longer-wavelength sensors have also recently been developed for other applications [ 28 ] ) that becomes fluorescent only in the presence of palladium or platinum. This palladium/platinum sensing ability is driven by the Tsuji–Trost reaction. The sensor contains an allyl group with the fluorescein functioning as the leaving group. The π -allyl complex is formed and after a nucleophile attacks, the fluorescein is released, yielding a dramatic increase in fluorescence. [ 29 ] [ 30 ] This simple, high-throughput method to detect palladium by monitoring fluorescence has been shown to be useful in monitoring palladium levels in metal ores , [ 31 ] pharmaceutical products , [ 32 ] and even in living cells . 
[ 33 ] With the ever-increasing popularity of palladium catalysis , this type of quick detection should be very useful in reducing the contamination of pharmaceutical products and preventing the pollution of the environment with palladium and platinum.
https://en.wikipedia.org/wiki/Tsuji–Trost_reaction
The Tsuji–Wilkinson decarbonylation reaction is a method for the decarbonylation of aldehydes and some acyl chlorides. The reaction name recognizes Jirō Tsuji, whose team first reported the use of Wilkinson's catalyst (RhCl(PPh 3 ) 3 ) for these reactions. Although decarbonylation can be effected by several transition metal complexes, Wilkinson's catalyst has proven the most effective. [ 1 ] Strictly speaking, this reaction results in the formation of a rhodium carbonyl complex rather than free carbon monoxide. The catalytic cycle is assumed to involve oxidative addition of the aldehyde (or acid chloride) to give a 16-electron acyl Rh(III) hydride intermediate, which undergoes migratory extrusion of CO to form an 18-electron d6 Rh(III) carbonyl complex. Reductive elimination produces the decarbonylated product. In the catalytic variant of the Tsuji–Wilkinson decarbonylation, RhCl(CO)(PPh 3 ) 2 evolves CO above 200 °C, thereby regenerating RhCl(PPh 3 ) n . Otherwise, the reaction halts at this thermodynamically stable carbonyl complex. [ 2 ] The Tsuji–Wilkinson decarbonylation proceeds under mild conditions and is highly stereospecific. In addition to aliphatic, aromatic, and α,β-unsaturated aldehydes, acyl nitriles and 1,2-diketones are also suitable substrates. Few methods exist for decarbonylation. One illustrative application is the synthesis of the core nucleus of FR-900482; [ 3 ] note that the ester is unaffected by the rhodium reagent. The Tsuji–Wilkinson decarbonylation is also employed in the penultimate step of the synthesis of (–)-presilphiperfolan-8-ol. [ 4 ] The authors comment: “Of note in these final steps, separate reduction and oxidation steps proceeded in inferior yield in generating 38 (70% versus 93%), while the Rh(PPh 3 ) 3 Cl operation proceeded smoothly when conducted on small scale (~15 mg). In total, the synthesis required 13 steps from commercial starting material, and ~15 mg of [(–)-presilphiperfolan-8-ol] has been prepared with spectral properties and optical rotations matching that of the natural isolate.” Unfortunately, the Tsuji–Wilkinson decarbonylation is stoichiometric. The product, bis(triphenylphosphine)rhodium carbonyl chloride, is not readily converted back to a CO-free reagent. Above 200 °C, RhCl(CO)(PPh 3 ) 2 does lose carbon monoxide, [ 2 ] however these high temperatures are often prohibitive. The ideal Tsuji–Wilkinson decarbonylation would be catalytic and proceed near ambient temperature. The reaction has been carried out under flow conditions, in which a biphasic liquid–gas flow decarbonylation was developed employing N 2 as a gas carrier; [ 5 ] however, the temperature required is still 200 °C. Significant improvements of the Tsuji–Wilkinson decarbonylation have been made by using cationic rhodium complexes with chelating bisphosphines. [ 6 ]
https://en.wikipedia.org/wiki/Tsuji–Wilkinson_decarbonylation_reaction
Japan has a nationwide Tsunami Warning system ( Japanese : 大津波警報 ・ 津波警報 ・ 津波注意報 ). The system usually issues warnings a few minutes after an Earthquake Early Warning (EEW) is issued, should waves be expected, [ 1 ] [ 2 ] usually when a combination of high magnitude, seaward epicenter and vertical focal mechanism is observed. The tsunami warning was issued within 3 minutes with the most serious rating on its warning scale during the 2011 Tōhoku earthquake and tsunami ; it was rated as a "major tsunami", being at least 3 m (9.8 ft) high. [ 2 ] [ 3 ] An improved system was unveiled on March 7, 2013, following the 2011 disaster to better assess imminent tsunamis. [ 4 ] [ 5 ] When an earthquake occurs, the Japan Meteorological Agency (JMA) estimates the possibility of tsunami generation from seismic observation data. If disastrous waves are expected in coastal regions, JMA issues a Tsunami Warning/Advisory for each region expected to be affected based on estimated tsunami heights. JMA also issues information on tsunami details such as estimated arrival times and heights. [ 6 ] After an earthquake occurs, JMA issues Tsunami Warnings/Advisories and Tsunami Information bulletins if a tsunami strike is expected. Major Tsunami Warnings are issued in the classification of Emergency Warnings as of 30 August 2013. [ 7 ] When an earthquake occurs that could generate a disastrous tsunami in coastal regions of Japan, JMA issues Major Tsunami Warnings, Tsunami Warnings and/or Tsunami Advisories for individual regions based on estimated tsunami heights around three minutes after the quake (or as early as two minutes in some cases [ N 1 ] ). [ 7 ] Immediately after an earthquake occurs, JMA promptly establishes its location, magnitude and the related tsunami risk. However, it takes time to determine the exact scale of earthquakes with a magnitude of 8 or more. In such cases, JMA issues an initial warning based on the predefined maximum magnitude to avoid underestimation. [ 7 ] When such values are used, estimated maximum tsunami heights are expressed in qualitative terms such as "Huge" and "High" in initial warnings rather than as quantitative expressions. Once the exact magnitude is determined, JMA updates the warning with estimated maximum tsunami heights expressed in quantitative terms. [ 7 ] The color scheme above has been standardized in response to the 2011 Tōhoku earthquake and tsunami , with all alerting agencies and broadcasters now using this scheme on their tsunami alert maps. Major Tsunami Warnings are issued in the classification of Emergency Warnings. Detailed information on Emergency Warnings is provided on the Emergency Warning System article. This article incorporates text available under the CC BY 4.0 license.
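The height-based categories and the two-stage (qualitative, then quantitative) bulletins described above can be summarised in a small sketch. This is an illustration only: the ≥ 3 m threshold for a "major tsunami" comes from this article, while the 1 m and 0.2 m boundaries, the M ≥ 8 rule of thumb, and the function names are assumptions made for the example rather than JMA's published criteria.

```python
# Illustrative sketch of the warning categories described above (not JMA's actual criteria).
def classify_tsunami(estimated_height_m):
    if estimated_height_m >= 3.0:
        return "Major Tsunami Warning"   # issued in the Emergency Warning classification
    if estimated_height_m >= 1.0:        # assumed boundary for the example
        return "Tsunami Warning"
    if estimated_height_m >= 0.2:        # assumed boundary for the example
        return "Tsunami Advisory"
    return "No warning"

def initial_bulletin(magnitude, estimated_height_m, magnitude_final):
    # For very large quakes the exact magnitude takes time to determine, so the
    # first bulletin uses qualitative wording to avoid underestimation.
    category = classify_tsunami(estimated_height_m)
    if magnitude >= 8.0 and not magnitude_final:
        height_text = "Huge" if category == "Major Tsunami Warning" else "High"
    else:
        height_text = f"{estimated_height_m:.0f} m"
    return category, height_text

print(initial_bulletin(9.0, 10.0, magnitude_final=False))
# -> ('Major Tsunami Warning', 'Huge')
```

Once the exact magnitude is fixed, the same bulletin would be reissued with `magnitude_final=True`, replacing the qualitative wording with a numerical height, mirroring the update step described in the text.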
https://en.wikipedia.org/wiki/Tsunami_Warning_(Japan)
A tsunami warning system ( TWS ) is used to detect tsunamis in advance and issue warnings to prevent loss of life and damage to property. It is made up of two equally important components: a network of sensors to detect tsunamis and a communications infrastructure to issue timely alarms to permit evacuation of the coastal areas. There are two distinct types of tsunami warning systems: international and regional. When operating, seismic alerts are used to instigate the watches and warnings; then, data from observed sea level height (either shore-based tide gauges or DART buoys) are used to verify the existence of a tsunami. Other systems have been proposed to augment the warning procedures; for example, it has been suggested that the duration and frequency content of t-wave energy (which is earthquake energy trapped in the ocean SOFAR channel ) is indicative of an earthquake's tsunami potential. [ 1 ] The first rudimentary system to alert communities of an impending tsunami was attempted in Hawaii in the 1920s. More advanced systems were developed in the wake of the April 1, 1946 tsunami (caused by the 1946 Aleutian Islands earthquake ) and the May 23, 1960 tsunami (caused by the 1960 Valdivia earthquake ), which caused massive devastation in Hilo, Hawaii. While tsunamis travel at between 500 and 1,000 km/h (around 0.14 and 0.28 km/s) in open water, earthquakes can be detected almost at once, as seismic waves travel with a typical speed of 4 km/s (around 14,400 km/h). This gives time for a possible tsunami forecast to be made and warnings to be issued to threatened areas, if warranted. Until a reliable model is able to predict which earthquakes will produce significant tsunamis, this approach will produce many more false alarms than verified warnings. Tsunami warnings ( SAME code: TSW ) for most of the Pacific Ocean are issued by the Pacific Tsunami Warning Center (PTWC), operated by the United States NOAA in Ewa Beach, Hawaii. NOAA's National Tsunami Warning Center (NTWC) in Palmer, Alaska issues warnings for North America, including Alaska, British Columbia, Oregon, California, the Gulf of Mexico, and the East Coast. The PTWC was established in 1949, following the 1946 Aleutian Islands earthquake and a tsunami that resulted in 165 casualties in Hawaii and Alaska; the NTWC was founded in 1967. International coordination is achieved through the International Coordination Group for the Tsunami Warning System in the Pacific, established by the Intergovernmental Oceanographic Commission of UNESCO . [ 2 ] In 2005, Chile started to implement the Integrated Plate Boundary Observatory Chile (IPOC), [ 3 ] which in the following years became a network of 14 multiparameter stations monitoring the 600 km stretch of the plate boundary between Antofagasta and Arica. Each station was provided with a broadband seismometer, an accelerometer and a GPS antenna. At four stations, a short-base tiltmeter (pendulum) was also installed. Some stations were located underground at a depth of 3–4 meters. The network complemented the tide gauges of the Hydrographic and Oceanographic Service of the Chilean Navy. [ 4 ] The long-base tiltmeters (LBTs) and the STS2 seismometers of the IPOC recorded a series of long-period signals some days after the 2010 Maule earthquake. The same effect was registered by broadband seismometers in India and Japan some days after the 2004 Indian Ocean earthquake and tsunami.
Simulations held in 2013 on historical data highlighted "tiltmeters and broadband seismometers are thus valuable instruments for monitoring tsunamis in complement with tide gauge arrays." In the case of the 2010 Maule earthquake, tilt-sensors observed a discriminating signal "starting 20 min before the arrival time of the tsunami at the nearest point on the coastline." [ 4 ] After the 2004 Indian Ocean Tsunami which killed almost 250,000 people, a United Nations conference was held in January 2005 in Kobe , Japan , and decided that as an initial step towards an International Early Warning Programme , the UN should establish an Indian Ocean Tsunami Warning System . This resulted in a warning system for Indonesia and other affected areas. Indonesia's system fell out of service in 2012 because the detection buoys were no longer operational. [ citation needed ] Tsunami prediction was then limited to detection of seismic activity, with no system to predict tsunamis based on volcanic eruptions. Indonesia was hit by tsunamis in September and December 2018. The December 2018 tsunami was caused by a volcano. [ 5 ] Sea level sensors were then installed by the Indonesian government to fill the prediction gap. [ 6 ] The First United Session of the Inter-governmental Coordination Group for the Tsunami Early Warning and Mitigation System in the North Eastern Atlantic, the Mediterranean and connected Seas (ICG/NEAMTWS), established by the Intergovernmental Oceanographic Commission of UNESCO Assembly during its 23rd Session in June 2005, through Resolution XXIII.14, took place in Rome on 21 and 22 November 2005. The meeting, hosted by the Government of Italy (the Italian Ministry of Foreign Affairs and the Italian Ministry for the Environment and Protection of Land and Sea ), was attended by more than 150 participants from 24 countries, 13 organizations and numerous observers. A Caribbean-wide tsunami warning system was planned to be instituted by the year 2010, by representatives of Caribbean nations who met in Panama City in March 2008. Panama 's last major tsunami killed 4,500 people in 1882. [ 7 ] Barbados has said it will review or test its tsunami protocol in February 2010 as a regional pilot. [ 8 ] [ needs update ] Regional (or local) warning system centers use seismic data about nearby recent earthquakes to determine if there is a possible local threat of a tsunami. Such systems are capable of issuing warnings to the general public (via public address systems and sirens) in less than 15 minutes. Although the epicenter and moment magnitude of an underwater quake and the probable tsunami arrival times can be quickly calculated, it is almost always impossible to know whether underwater ground shifts have occurred which will result in tsunami waves. As a result, false alarms can occur with these systems, but the disruption is small, which makes sense due to the highly localized nature of these extremely quick warnings, in combination with how difficult it would be for a false alarm to affect more than a small area of the system. Real tsunamis would affect more than just a small portion. [ citation needed ] Japan has a nationwide tsunami warning system. The system usually issues the warning minutes after an Earthquake Early Warning (EEW) is issued, should there be expected waves. 
[ 9 ] [ 10 ] The tsunami warning was issued within 3 minutes with the most serious rating on its warning scale during the 2011 Tōhoku earthquake and tsunami; it was rated as a "major tsunami", being at least 3 m (9.8 ft) high. [ 10 ] [ 11 ] An improved system was unveiled on March 7, 2013, following the 2011 disaster, to better assess imminent tsunamis. [ 12 ] [ 13 ] India is one of the five countries with the most advanced tsunami warning systems in the world. [ 14 ] In 2004, immediately after the earthquake off Sumatra, a massive tsunami devastated the coasts of India, [ 15 ] prompting the Government of India to set up INCOIS (the Indian National Centre for Ocean Information Services). [ 16 ] The center is an autonomous organization of the Government of India, under the Ministry of Earth Sciences, located in Pragathi Nagar, Hyderabad, India. It offers ocean information and advisory services to society, industry and government bodies in areas such as tsunami warning, ocean state forecasts, fishing zones and more. [ 17 ] The center receives data from over 35 sea level tide gauges at intervals of 5 minutes. [ 18 ] Along with this, it receives data from wave rider buoys, bottom pressure recorders (BPRs) and a network of seismographs that have been installed at various locations in the IOR (Indian Ocean Region). The Indian Tsunami Buoy Type 1 System [ 19 ] consists of two units – a surface buoy and a bottom pressure recorder (BPR). Communication between the BPR and the surface buoy is through acoustic modems, and the surface buoys use the INSAT satellite system to communicate readings back to shore stations. The tsunami warning station collates information from 17 seismic stations of the India Meteorological Department (IMD), 10 stations of the Wadia Institute of Himalayan Geology (WIHG) [ 20 ] and more than 300 international stations. INDOFOS (INDian Ocean FOrecasting System) is a service that forecasts the ocean state and is capable of predicting surface and subsurface features and states of the Indian Ocean. [ 21 ] These forecasts are made accessible through information centers, radio, local digital sign boards, websites, TV channels and subscription services. The Oceansat-2 system is a collection of Earth observation satellites operated by ISRO [ 22 ] in conjunction with the Oceansat ground station, which covers an area of 5,000 km radius around India and is capable of monitoring sea flora and fauna along with oceanic features such as meandering patterns, eddies, rings and upwelling. Oceansat-2 was successfully deployed to predict the landfall and mitigate the effects of Cyclone Phailin in October 2013. [ 23 ] Detection and prediction of tsunamis is only half the work of the system. Of equal importance is the ability to warn the populations of the areas that will be affected. All tsunami warning systems feature multiple lines of communication (such as Cell Broadcast , SMS , e-mail , fax , radio , texting and telex , often using hardened dedicated systems) [ citation needed ] enabling emergency messages to be sent to the emergency services and armed forces , as well as to population-alerting systems (e.g. sirens ) and systems like the Emergency Alert System . [ 24 ] Given the speed at which tsunami waves travel through open water, no system can protect against a very sudden tsunami, where the coast in question is too close to the epicenter. A devastating tsunami occurred off the coast of Hokkaidō in Japan as a result of an earthquake on July 12, 1993.
As a result, 202 people on the small island of Okushiri, Hokkaido lost their lives, and hundreds more were missing or injured. [ citation needed ] This tsunami struck just three to five minutes after the quake, and most victims were caught while fleeing for higher ground and secure places after surviving the earthquake. This was also the case in Aceh , Indonesia. [ citation needed ] While there remains the potential for sudden devastation from a tsunami, warning systems can be effective. For example, if there were a very large subduction zone earthquake ( moment magnitude 9.0) off the west coast of the United States, people in Japan would have more than 12 hours (and likely warnings from warning systems in Hawaii and elsewhere) before any tsunami arrived, giving them some time to evacuate areas likely to be affected; a rough travel-time estimate is sketched below.
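As an order-of-magnitude check on that lead time, using the 500–1,000 km/h open-water speeds quoted earlier in this article (the ~8,000 km trans-Pacific distance is an approximate figure introduced only for this illustration):

$$t = \frac{d}{v} \approx \frac{8{,}000\ \text{km}}{1{,}000\ \text{km/h}} \;\text{to}\; \frac{8{,}000\ \text{km}}{500\ \text{km/h}} \approx 8\ \text{to}\ 16\ \text{hours},$$

which is consistent with the "more than 12 hours" figure for the slower end of the quoted speed range.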
https://en.wikipedia.org/wiki/Tsunami_warning_system
The program ttcp (Test TCP ) is a utility for measuring network throughput , popular on Unix systems. It measures the network throughput between two systems using the TCP or optionally UDP protocols. [ 1 ] It was written by Mike Muuss and Terry Slattery at BRL sometime before December 1984, [ 2 ] to compare the performance of TCP stacks by the Computer Systems Research Group (CSRG) of the University of California, Berkeley and Bolt, Beranek and Newman (BBN) to help DARPA decide which version to place in 4.3BSD . Many compatible implementations and derivatives exist including the widely used Iperf . [ 3 ] Testing can be done from any platform to any other platform, for example from a Windows machine to a Linux machine, as long as they both have a ttcp application installed. For normal use, ttcp is installed on two machines – one will be the sender, the other the receiver. The receiver is started first and waits for a connection. Once the two connect, the sending machine sends data to the receiver and displays the overall throughput of the network they traverse. The amount of data sent and other options are configurable through command line parameters. The statistics output covers TCP/UDP payload only (not protocol overhead) and is generally displayed by default in KiB/s (kibi Bytes per second) instead of kb/s (kilo bits per second), but it can be configured to be displayed in other ways on some implementations. The reported throughput is more accurately calculated on the receive side than the transmit side, since the transmit operation may complete before all bytes actually have been transmitted. Originally designed for Unix systems, ttcp has since been ported to and reimplemented on many other systems such as Windows . [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] The original Unix implementation developed by Mike Muuss and Terry Slattery, version 1.10 dated 1987-09-02. Uses port 2000 by default unless another one is specified with the -p switch. [ 3 ] Developed at Silicon Graphics , the nttcp implementation made several changes that remain in future implementations such as by default using port 5001 instead of 2000, reversing the meaning of the -s switch to sink data by default, and adding the -w window size switch [ 3 ] Developed at Laboratory for Computational Physics and Fluid Dynamics at Naval Research Lab (LCP & FD at NRL). Provides additional information related to the data transfer such as user, system, and wall-clock time, transmitter and receiver CPU utilization, and loss percentage (for UDP transfers). [ 8 ] Developed by the Distributed Applications Support Team (DAST) at the National Laboratory for Applied Network Research (NLANR). Widely used and ported implementation including additions such as the option for bidirectional traffic. Developed by Microsoft, used to profile and measure Windows networking performance. NTttcp is one of the primary tools Microsoft engineering teams leverage to validate network function and utility. Developed by Shihua Xiao at Microsoft, used to profile and measure Linux networking performance. Provided multiple threading to exchange data in test, and potentially can interop with Windows version of ntttcp. [ 9 ] Native Windows version developed by PCAUSA. [ 7 ] Cisco IOS routers include ttcp as a hidden command that can be set up as either the sender or receiver in version 11.2 or higher and feature sets IP Plus (is- images) or Service Provider (p- images). 
[ 10 ] Many EnGenius branded wireless access points include an Iperf-based implementation accessible as Speed Test under Diagnostics in the web and command line user interfaces. [ 11 ]
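To make the sender/receiver workflow described above concrete, here is a minimal throughput-measurement sketch in the same spirit as ttcp. It is not the ttcp source code: the port (5001, the default adopted by nttcp and later implementations), the buffer size, the total byte count and the script name are illustrative choices; only the -t/-r transmit/receive convention mirrors ttcp's own options.

```python
# throughput.py - illustrative ttcp-style throughput test (not the real ttcp).
# Start the receiver first:   python throughput.py -r
# Then run the transmitter:   python throughput.py -t <receiver-host>
import socket
import sys
import time

PORT = 5001               # default port used by nttcp and later derivatives
BUF = 16 * 1024           # bytes per write, analogous to ttcp's buffer length
TOTAL = 64 * 1024 * 1024  # total payload to send for the test

def receive():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, _addr = srv.accept()
    got = 0
    start = time.time()
    while True:
        data = conn.recv(BUF)
        if not data:
            break
        got += len(data)
    elapsed = time.time() - start
    # Payload-only statistic, reported on the receive side (the more accurate side)
    print(f"received {got} bytes in {elapsed:.2f} s = {got / 1024 / elapsed:.1f} KiB/s")
    conn.close()
    srv.close()

def transmit(host):
    sock = socket.create_connection((host, PORT))
    payload = b"\x00" * BUF
    sent = 0
    start = time.time()
    while sent < TOTAL:
        sock.sendall(payload)
        sent += len(payload)
    elapsed = time.time() - start
    sock.close()
    print(f"sent {sent} bytes in {elapsed:.2f} s = {sent / 1024 / elapsed:.1f} KiB/s")

if __name__ == "__main__":
    if len(sys.argv) == 2 and sys.argv[1] == "-r":
        receive()
    elif len(sys.argv) == 3 and sys.argv[1] == "-t":
        transmit(sys.argv[2])
    else:
        print("usage: throughput.py -r | -t <receiver-host>")
```

As in ttcp, the receive-side figure is the more trustworthy one, because the transmitter's send loop can finish before the last bytes have actually crossed the network.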
https://en.wikipedia.org/wiki/Ttcp
Tuanku Zara Salim ( Jawi : توانكو زارا سليم; born Zara Salim Davidson ; 22 March 1973) is the Raja Permaisuri ( Queen consort ) of Perak as the wife of Sultan Nazrin Muizzuddin Shah , the current Sultan of the Malaysian state of Perak . A chemical engineer by training, she was heading an oil and gas consultancy firm based in Kuala Lumpur before her marriage to the Sultan. She and the Sultan, who had been the country's most eligible royal bachelor for decades, have known each other since the mid-1990s. She was officially installed as the Raja Permaisuri of Perak during Sultan Nazrin's enthronement ceremony as the 35th Sultan of Perak on 6 May 2015. Born in the city of Ipoh on 22 March 1973, she is the youngest of four children of William Stanley Walker Davidson (Salim Davidson), an Irishman, and his ethnic Malay wife of mixed Arab and Thai descent, Sharifah Azaliah binti Syed Omar Shahabudin, who is from Alor Setar , Kedah . She is also an extended member of the Kedah royal family on her maternal side and has three elder brothers. Her father is a prominent lawyer in Perak and Kuala Lumpur . Zara, who has a strong interest in foreign languages , studied at SMK Convent Ipoh and represented her school in squash and tennis from 1988 to 1990. She also represented Perak in swimming between 1981 and 1987. After completing her A-Levels at Prime College in 1992, she left for the United Kingdom to study chemical engineering at the University of Nottingham and graduated with first class honours in July 1995. She also won the top student award for her final-year project. Coincidentally, her father-in-law, the late Sultan Azlan Shah , read law at the same university and was conferred the Bachelor of Laws degree in 1953 before being admitted to the English Bar in 1954. Zara joined the Business Evaluation Department in the Corporate Planning Unit of Petronas in December 1995 and was part of the team responsible for the successful establishment of the Kertih and Kuantan integrated petrochemical complexes, whose foreign partners included BP , BASF , Dow Chemicals and Mitsubishi . She then became a project analyst in the Petronas Petrochemical Business Unit and was part of the core team developing the Petronas brand essence, which now forms part of the Petronas global branding strategy. Between February 1999 and October 2000, she was a product manager at Petlin (Malaysia) Sdn Bhd, a Petronas joint venture with DSM of the Netherlands and Sasol of South Africa . She was also part of the Petronas project team to operationalise the largest single-train low-density polyethylene (LDPE) plant in the world at Kertih. Zara left Petronas in November 2001 to become an account manager at Formis Network Services Sdn Bhd and then assumed the post of vice-president of partnerships and alliances at Formis (Malaysia) Berhad, a technology-based company listed on Bursa Malaysia, between 2003 and 2005. Between 2005 and 2007, Zara, who is a certified life-saver and adventure sports enthusiast, was the managing director of Forthwave Consulting Sdn Bhd, a hydrocarbon technical engineering and software development company in Kuala Lumpur . According to Utusan Malaysia, Zara Salim Davidson is a great-grandchild of Kedah's 24th ruler, Sultan Abdul Hamid Halim Shah . The Sultan ruled Kedah for 62 years from 1881 to 1943, and one of the Sultan's sons was Tunku Abdul Rahman , Malaysia's first prime minister.
Therefore, she is also the grandniece of the Tunku and a niece of Sultan Abdul Halim Mu'adzam Shah and of the current sultan, Sultan Sallehuddin . According to Zara's uncle Syed Mohd Aldinuri Syed Omar, Zara's mother Sharifah Azaliah Syed Omar was the daughter of Tunku Aminah, who was the daughter of Sultan Abdul Hamid and Che Manjalara. From the marriage to Che Manjalara, Sultan Abdul Hamid had seven children, including Tunku Abdul Rahman Putra Al-Haj and Tunku Aminah. Zara also had a close relationship with the late Sultan of Kedah, Sultan Abdul Halim Mu'adzam Shah, the son of Sultan Badlishah, who was the grandson of Sultan Abdul Hamid and a nephew to Tunku Abdul Rahman. She is also the granddaughter of the former Kedah Menteri Besar Tan Sri Syed Omar Syed Abdullah Shahabudin. Tunku Aminah married Syed Omar, who served as Kedah's second Menteri Besar after independence, from July 1959 to December 1967. Zara's wedding to Raja Nazrin Shah was held at Istana Iskandariah on 17 May 2007. A day after the solemnisation of their vows, there was a special proclamation ceremony to bestow upon her the honorific prefix of Tuanku and the official title of Raja Puan Besar (Crown Princess) of Perak , a title reserved for the royal wife of the Raja Muda ( Crown Prince ) of Perak that had been vacant since April 1987. The royal wedding reception took place on 19 May 2007. The couple's first child, a son named Raja Azlan Muzzaffar Shah, the Raja Kecil Besar of Perak, was born on 14 March 2008. Their second child, a daughter named Raja Nazira Safya, was born on 2 August 2011. On July 9, 2007, a purple hybrid orchid was named Dendrobium Tuanku Zara Salim in honour of her visit to the 7th Ipoh International Orchid Festival. [ 1 ] She is the current Chancellor of the Sultan Idris Education University (UPSI), having been proclaimed on 1 January 2012. She was awarded an Honorary Fellowship by the Institution of Chemical Engineers (IChemE) in April 2014 [ 2 ] in recognition of her interest in the continued improvement of Malaysia's education and academic performance. She was then made the first Royal Patron of the IChemE in Malaysia in October 2016. [ 3 ] On accepting the honour, Tuanku Zara said that chemical engineering can be a challenging career, but a rewarding one as well: chemical engineers are highly employable, have wide and diverse career options, and can make a big impact on society and the environment. She intends, as Royal Patron of the IChemE, to do her part to inspire the younger generation to consider chemical engineering as a viable career choice. [ 4 ] She is also a patron of the Family Health Society of Perak, Convent Girls Alumni and Perak Girl Guides . [ 5 ]
https://en.wikipedia.org/wiki/Tuanku_Zara_Salim
Tube-and-fabric construction is a method of building airframes , which include the fuselages and wings of airplanes. It consists of making a framework of metal tubes (generally welded together) and then covering the framework with an aircraft fabric covering . The tubes are usually of steel or aluminum . The advantages of tube-and-fabric construction over other methods of airframe construction (such as wood and sheet metal) are lower cost and faster speed of construction. [ 1 ] This aircraft-related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Tube-and-fabric_construction
Tube-based nanostructures are nanolattices made of connected tubes and exhibit nanoscale organization above the molecular level. [ 1 ] Lattices are structures formed of arrays of uniformly sized cells. Ceramic lattice nanostructures have been formed using hollow tubes of titanium nitride (TiN). Using vertex-connected, tessellated octahedra with 7-nm hollow struts with elliptical cross-sections and a wall thickness of 75 nm produced approximately cubic cells 100 nm on a side at a scale of up to 1 cubic millimeter. The material's relative density was of the order of 0.013 (similar to aerogels ). [ 2 ] Compression experiments with multiple deformation cycles revealed tensile strengths of 1.75 GPa without failure. The material was constructed from a digital design with direct laser writing onto a photopolymer using 2-photon lithography, followed by conformal deposition of TiN using atomic layer deposition and a final etching to remove the polymer. [ 2 ] An earlier metallic tube lattice produced hollow-tube nickel microlattices with a density of 0.9 milligrams per cubic centimeter and complete recovery after compression exceeding 50% strain, with energy absorption similar to elastomers . Young's modulus E scales with density as E ~ ρ², in contrast to the E ~ ρ³ scaling observed for ultralight aerogels and carbon nanotube nanofoams with stochastic architecture (see the worked comparison below). Hardness of 6 GPa and a modulus of 210 GPa were measured by nanoindentation and hollow tube compression experiments, respectively. These materials are fabricated by starting with a template formed by self-propagating photopolymer waveguide prototyping, coating the template by electroless nickel plating, and subsequently etching away the template. [ 3 ] [ 4 ] Nanostructured hollow multilayered tubes can be created by combining layer-by-layer (LbL) assembly and template leaching. Such materials are of particular interest for tissue engineering since they allow the precise control of physical and biochemical cues of implantable devices. The tubes are based on polyelectrolyte multilayer films. The final tubular structures can be characterized by differential scanning calorimetry (DSC), Fourier transform infrared spectroscopy (FTIR), microscopy, swelling and mechanical tests, including dynamic mechanical analysis (DMA) in simulated physiological conditions. More robust films could be produced via chemical cross-linking with genipin . Water uptake decreases from about 390% to 110% after cross-linking. The cross-linked tubes are more suitable structures for cell adhesion and spreading. Potential applications include tissue engineering . [ 5 ]
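To see what the quadratic density scaling implies in practice (a worked comparison, not a measurement from the cited studies): reducing the relative density by a factor of ten costs a factor of one hundred in stiffness for the hollow-tube lattice, but a factor of one thousand for a stochastic architecture.

$$\frac{E(\rho/10)}{E(\rho)} = 10^{-2}\ \ (E \propto \rho^{2}) \qquad \text{versus} \qquad \frac{E(\rho/10)}{E(\rho)} = 10^{-3}\ \ (E \propto \rho^{3})$$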
https://en.wikipedia.org/wiki/Tube-based_nanostructure
A tube , or tubing , is a long hollow cylinder used for moving fluids ( liquids or gases ) or to protect electrical or optical cables and wires. The terms " pipe " and "tube" are almost interchangeable, although minor distinctions exist — generally, a tube has tighter engineering requirements than a pipe. Both pipe and tube imply a level of rigidity and permanence, whereas a hose is usually portable and flexible. Tube and pipe may be specified by standard pipe size designations, e.g. nominal pipe size, or by nominal outside or inside diameter and/or wall thickness. The actual dimensions of pipe are usually not the nominal dimensions: a 1-inch pipe will not actually measure 1 inch in either outside or inside diameter, whereas many types of tubing are specified by actual inside diameter, outside diameter, or wall thickness. There are three classes of manufactured tubing: seamless, [ 1 ] as-welded or electric resistance welded (ERW), and drawn-over-mandrel (DOM). There are many industry and government standards for pipe and tube manufacture. ASTM material specifications generally cover a variety of grades or types that indicate a specific material composition. In installations using hydrogen , copper and stainless steel tubing must be factory pre-cleaned (ASTM B 280) and/or certified as instrument grade. This is due to hydrogen's particular propensities: to explode in the presence of oxygen , sources of oxygen, or contaminants; to leak due to its atomic size; and to cause embrittlement of metals, particularly under pressure. As an example, for a tube of silicone rubber [ 2 ] with a tensile strength of 10 MPa, an 8 mm outside diameter and 2 mm thick walls, the maximum pressure can be estimated from the thin-wall (Barlow) relation P = 2St/D = (2 × 10 MPa × 2 mm) / 8 mm, which gives a bursting pressure of 5 MPa. Applying a safety factor reduces the allowable working pressure well below this bursting value; a short worked version of the calculation follows at the end of this article. Tubes are essential components in heat exchange systems to assist with cooling down motors and other instruments. Industrial applications use pressure-resistant tubes to safely contain gases and liquids under pressure without leading to air leakage or malfunction. Moreover, devices powered by ultrasound often employ special types of thin-walled tubes that can generate vibration when exposed to an electric field. Finally, tube components provide efficient energy conservation in housing insulation materials such as silicone rubber foam insulation and polyethylene foam insulation.
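A short worked version of that calculation is given below. The 10 MPa strength, 2 mm wall and 8 mm outside diameter are the values quoted above; the safety factor of 3 is an arbitrary illustrative choice, since the factor in the original text is not recoverable.

```python
# Bursting-pressure estimate for the silicone-rubber tube described above,
# using Barlow's formula P = 2*S*t/D.
tensile_strength_mpa = 10.0   # S, from the text
wall_thickness_mm    = 2.0    # t, from the text
outside_diameter_mm  = 8.0    # D, from the text

burst_pressure_mpa = 2 * tensile_strength_mpa * wall_thickness_mm / outside_diameter_mm
print(f"bursting pressure = {burst_pressure_mpa:.1f} MPa")        # 5.0 MPa

safety_factor = 3.0           # assumed for illustration only
working_pressure_mpa = burst_pressure_mpa / safety_factor
print(f"allowable working pressure = {working_pressure_mpa:.2f} MPa")
```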
https://en.wikipedia.org/wiki/Tube_(fluid_conveyance)
In structural engineering , the tube is a system where, to resist lateral loads (wind, seismic, impact), a building is designed to act like a hollow cylinder, cantilevered perpendicular to the ground. This system was introduced by Fazlur Rahman Khan while at the architectural firm Skidmore, Owings & Merrill (SOM), in their Chicago office. [ 1 ] The first example of the tube's use is the 43-story Khan-designed DeWitt-Chestnut Apartment Building, since renamed Plaza on DeWitt , in Chicago , Illinois , finished in 1966. [ 2 ] The system can be built using steel , concrete , or composite construction (the discrete use of both steel and concrete). It can be used for office , apartment , and mixed-use buildings. Most buildings of over 40 stories built since the 1960s are of this structural type. The tube system concept is based on the idea that a building can be designed to resist lateral loads by designing it as a hollow cantilever perpendicular to the ground. In the simplest incarnation of the tube, the perimeter of the exterior consists of closely spaced columns that are tied together with deep spandrel beams through moment connections. This assembly of columns and beams forms a rigid frame that amounts to a dense and strong structural wall along the exterior of the building. [ 3 ] This exterior framing is designed sufficiently strong to resist all lateral loads on the building, thereby allowing the interior of the building to be simply framed for gravity loads. Interior columns are comparatively few and located at the core. The distance between the exterior and the core frames is spanned with beams or trusses and can be column-free. This maximizes the effectiveness of the perimeter tube by transferring some of the gravity loads within the structure to it, and increases its ability to resist overturning via lateral loads. By 1963, a new structural system of framed tubes had appeared in skyscraper design and construction . Fazlur Rahman Khan , a structural engineer from Bangladesh (then called East Pakistan ) who worked at Skidmore, Owings & Merrill , defined the framed tube structure as "a three dimensional space structure composed of three, four, or possibly more frames, braced frames, or shear walls, joined at or near their edges to form a vertical tube-like structural system capable of resisting lateral forces in any direction by cantilevering from the foundation." [ 4 ] Closely spaced interconnected exterior columns form the tube. Lateral or horizontal loads (wind, seismic, impact) are supported by the structure as a whole. About half the exterior surface is available for windows. Framed tubes require fewer interior columns, and so allow more usable floor space. Where larger openings like garage doors are needed, the tube frame must be interrupted, with transfer girders used to maintain structural integrity. Khan's tube concept was inspired by his hometown in Dhaka , Bangladesh. His hometown did not have any buildings taller than three stories. He also did not see his first skyscraper in person until the age of 21 years old, and he had not stepped inside a mid-rise building until he moved to the United States for graduate school. Despite this, the environment of his hometown in Dhaka later influenced his tube building concept, which was inspired by the bamboo that sprouted around Dhaka. He found that a hollow tube, like the bamboo in Dhaka, lent a high-rise vertical durability. 
[ 5 ] The first building to apply the tube-frame construction was the DeWitt-Chestnut Apartment Building which Khan designed and which was finished in Chicago by 1963. [ 6 ] This laid the foundations for the tube structural design of many later skyscrapers, including his own John Hancock Center and Sears Tower , and the construction of the World Trade Center , the Petronas Towers , the Jin Mao Building , and most other tall skyscrapers since the 1960s, including the world's tallest building as of 2020 [update] , the Burj Khalifa . [ 7 ] From its conception, the tube has been varied to suit different structural needs. This is the simplest incarnation of the tube. It can appear in a variety of floor plan shapes, including square, rectangular, circular, and freeform. This design was first used in Chicago's DeWitt-Chestnut Apartment Building, designed by Khan and finished in 1965, but the most notable examples are the Aon Center and the original World Trade Center towers. The trussed tube, also termed braced tube , is similar to the simple tube but with comparatively fewer and farther-spaced exterior columns. Steel bracings or concrete shear walls are introduced along the exterior walls to compensate for the fewer columns by tying them together. The most notable examples incorporating steel bracing are the John Hancock Center , the Citigroup Center , and the Bank of China Tower . These structures have a core tube inside the structure, holding the elevator and other services, and another tube around the exterior. Most of the gravity and lateral loads are normally taken by the outer tube because of its greater strength. The 780 Third Avenue 50-story concrete frame office building in Manhattan uses concrete shear walls for bracing and an off-center core to allow column-free interiors. [ 8 ] Instead of one tube, a building consists of several tubes tied together to resist lateral forces. Such buildings have interior columns along the perimeters of the tubes when they fall within the building envelope. Notable examples include Willis Tower , One Magnificent Mile , and the Newport Tower . Beside being efficient structurally and economically, the bundled tube was "innovative in its potential for versatile formulation of architectural space. Efficient towers no longer had to be box-like; the tube-units could take on various shapes and could be bundled together in different sorts of groupings." [ 9 ] The bundled tube structure meant that "[buildings] could become sculpture." [ 10 ] Hybrids include a varied category of structures where the basic concept of tube is used, and supplemented by other structural support(s). This method is used where a building is so thin that one system cannot provide adequate strength or stiffness. The last major buildings engineered by Khan were the One Magnificent Mile and Onterie Center in Chicago, which employed his bundled tube and trussed tube system designs respectively. In contrast to his earlier buildings, which were mainly steel, his last two buildings were concrete. His earlier DeWitt-Chestnut Apartments building, built in 1963 in Chicago, was also a concrete building with a tube structure. [ 7 ] Trump Tower in New York City is also another example that adapted this system. [ 11 ] Some lattice towers have steel tube elements, like the guyed Warsaw Radio Mast or free-standing 3803 KM towers.
https://en.wikipedia.org/wiki/Tube_(structure)
Tube bending is any metal forming processes used to permanently form pipes or tubing . Tube bending may be form-bound or use freeform-bending procedures, and it may use heat supported or cold forming procedures. Form bound bending procedures like “press bending” or “rotary draw bending” are used to form the work piece into the shape of a die . Straight tube stock can be formed using a bending machine to create a variety of single or multiple bends and to shape the piece into the desired form. These processes can be used to form complex shapes out of different types of ductile metal tubing. [ 1 ] Freeform-bending processes, like three-roll-pushbending, shape the workpiece kinematically, thus the bending contour is not dependent on the tool geometry. Generally, round stock is used in tube bending. However, square and rectangular tubes and pipes may also be bent to meet job specifications. Other factors involved in the bending process are the wall thickness, tooling and lubricants needed by the pipe and tube bender to best shape the material, and the different ways the tube may be used (tube, pipe wires). A tube can be bent in multiple directions and angles. Common simple bends consist of forming elbows, which are 90° bends, and U-bends, which are 180° bends. More complex geometries include multiple two-dimensional (2D) bends and three-dimensional (3D) bends. A 2D tube has the openings on the same plane; a 3D has openings on different planes. A two plane bend or compound bend is defined as a compound bend that has a bend in the plan view and a bend in the elevation. When calculating a two plane bend, one must know the bend angle and rotation (dihedral angle). One side effect of bending the workpiece is the wall thickness changes; the wall along the inner radius of the tube becomes thicker and the outer wall becomes thinner. To reduce this the tube may be supported internally and or externally to preserve the cross section . Depending on the bend angle, wall thickness, and bending process the inside of the wall may wrinkle. Tube bending as a process starts with loading a tube into a tube or pipe bender and clamping it into place between two dies, the clamping block and the forming die. The tube is also loosely held by two other dies, the wiper die and the pressure die. The process of tube bending involves using mechanical force to push stock material pipe or tubing against a die, forcing the pipe or tube to conform to the shape of the die. Often, stock tubing is held firmly in place while the end is rotated and rolled around the die. Other forms of processing including pushing stock through rollers that bend it into a simple curve. [ 2 ] For some tube bending processing, a mandrel is placed inside the tube to prevent collapsing. The tube is held in tension by a wiper die to prevent any creasing during stress. A wiper die is usually made of a softer alloy such as aluminum or brass to avoid scratching or damaging the material being bent. Much of the tooling is made of hardened steel or tool steel to maintain and prolong the tool's life. However, when there is a concern of scratching or gouging the work piece, a softer material such as aluminum or bronze is utilized. For example, the clamping block, rotating form block and pressure die are often formed from hardened steel because the tubing is not moving past these parts of the machine. The pressure die and the wiping die are formed from aluminum or bronze to maintain the shape and surface of the work piece as it slides by. 
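One small calculation that comes up when laying out the elbows and U-bends described above is the developed (arc) length of tube consumed by a bend. This is simple geometry rather than anything specific to a particular bending process; the helper below is an illustrative sketch, and the 50 mm centre-line (bend) radius is an arbitrary example value, not a figure from this article.

```python
# Developed length of a bend along the tube centreline: L = radius * angle (in radians).
import math

def bend_arc_length(centerline_radius_mm, bend_angle_deg):
    """Arc length of tube consumed by a bend of the given angle and centreline radius."""
    return centerline_radius_mm * math.radians(bend_angle_deg)

print(bend_arc_length(50, 90))    # 90-degree elbow  -> ~78.5 mm
print(bend_arc_length(50, 180))   # 180-degree U-bend -> ~157.1 mm
```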
Pipe bending machines are typically human powered, pneumatic powered, hydraulic assisted, hydraulic driven, or electric servomotor. Press bending is probably the first bending process used on cold pipes and tubing. [ clarification needed ] In this process a die in the shape of the bend is pressed against the pipe forcing the pipe to fit the shape of the bend. Because the pipe is not supported internally there is some deformation of the shape of the pipe, resulting in an oval cross section. This process is used where a consistent cross section of the pipe is not required. Although a single die can produce various shapes, it only works for one size tube and radius. Rotary draw bending (RDB) is a precise technology, since it bends using tooling or "die sets" which have a constant center line radius (CLR), alternatively indicated as mean bending radius (Rm). Rotary draw benders can be programmable to store multiple bend jobs with varying degrees of bending. Often a positioning index table (IDX) is attached to the bender allowing the operator to reproduce complex bends which can have multiple bends and differing planes. Rotary draw benders are the most popular machines for use in bending tube, pipe and solids for applications like: handrails , frames, motor vehicle roll cages , handles, lines and much more. Rotary draw benders create aesthetically pleasing bends when the right tooling is matched to the application. CNC rotary draw bending machines can be very complex and use sophisticated tooling to produce severe bends with high quality requirements. The complete tooling is required only for high-precision bending of difficult-to-bend tubes with relatively large OD/t (diameter/thickness) ratio and relatively small ratio between the mean bending radius Rm and OD. [ 3 ] The use of axial boosting either on the tube free end or on the pressure die is useful to prevent excessive thinning and collapse of the extrados of the tube. The mandrel, with or without ball with spherical links, is mostly used to prevent wrinkles and ovalization. For relatively easy bending processes (that is, as the difficulty factor BF decreases), the tooling can be progressively simplified, eliminating the need for the axial assist, the mandrel, and the wiper die (which mostly prevents wrinkling). Furthermore, in some particular cases, the standard tooling must be modified in order to meet specific requirements of the products. During the roll bending process the pipe, extrusion, or solid is passed through a series of rollers (typically three) that apply pressure to the pipe gradually changing the bend radius in the pipe. The pyramid style roll benders have one moving roll, usually the top roll. Double pinch type roll benders have two adjustable rolls, usually the bottom rolls, and a fixed top roll. This method of bending causes very little deformation in the cross section of the pipe. This process is suited to producing coils of pipe as well as long gentle bends like those used in truss systems. Three-roll push bending (TRPB) is the most commonly used freeform-bending process to manufacture bending geometries consisting of several plane bending curves. Nevertheless, 3D-shaping is possible. The profile is guided between bending-roll and supporting-roll(s), while being pushed through the tools. The position of the forming-roll defines the bending radius. The bending point is the tangent-point between tube and bending-roll. To change the bending plane, the pusher rotates the tube around its longitudinal axis. 
Generally, a TRPB tool kit can be applied on a conventional rotary draw bending machine. The process is very flexible, since with a single tool set several bending radii Rm can be obtained, although the geometrical precision of the process is not comparable to rotary draw bending. [ 4 ] Bending contours defined as spline or polynomial functions can be manufactured. [ 5 ] Three-roll bending of tubes and open profiles can also be performed with simpler machines, often semi-automatic and not CNC controlled, able to feed the tube into the bending zone by friction. These machines often have a vertical layout, i.e. the three rolls lie on a vertical plane. In induction bending, an induction coil is placed around a small section of the pipe at the bend point. It is then induction heated to between 800 and 2,200 degrees Fahrenheit (430 and 1,200 °C). While the pipe is hot, pressure is placed on the pipe to bend it. The pipe can then be quenched with an air or water spray or be cooled in ambient air. Induction bending is used to produce bends for a wide range of applications, such as (thin-walled) pipe lines for the upstream and downstream, onshore and offshore segments of the petrochemical industry, large-radius structural parts for the construction industry, and thick-walled, short-radius bends for the power generating industry and city heating systems. Induction bending offers several advantages. In another process, the pipe is filled with a water solution, frozen, and bent while cold. The solute (soap can be used) makes the ice flexible. This technique is used to make trombones. [ 6 ] A similar technique using pitch was formerly used, but was discontinued because the pitch was hard to clean out without excessive heat. [ 6 ] In the sand packing process the pipe is filled with fine sand and the ends are capped. The filled pipe is heated in a furnace to 1,600 °F (870 °C) or higher. Then it is placed on a slab with pins set in it, and bent around the pins using a winch, crane, or some other mechanical force. The sand in the pipe minimizes distortion of the pipe cross section. A mandrel is a steel rod or linked-ball assembly inserted into the tube while it is being bent to give the tube extra support and reduce wrinkling and breaking of the tube during the process. The different types of mandrels are as follows. In production of a product where the bend is not critical, a plug mandrel can be used. A form type tapers the end of the mandrel to provide more support in the bend of the tube. When precise bending is needed, a ball mandrel (or ball mandrel with steel cable) should be used. The conjoined ball-like disks are inserted into the tubing to allow for bending while maintaining the same diameter throughout. Other styles include using sand, Cerrobend, or frozen water. These allow for a somewhat constant diameter while providing an inexpensive alternative to the aforementioned styles. Performance automotive or motorcycle exhaust pipe is a common application for a mandrel. Bending springs are strong but flexible springs inserted into a pipe to support the pipe walls during manual bending. They have diameters only slightly less than the internal diameter of the pipe to be bent. They are only suitable for bending 15 and 22 mm (0.6 and 0.9 in) soft copper pipe (typically used in household plumbing) or PVC pipe. [NB these sizes and comments apply in the UK - elsewhere different sizes apply.] The spring is pushed into the pipe until its center is roughly where the bend is to be.
A length of flexible wire can be attached to the end of the spring to facilitate its removal. The pipe is generally held against the flexed knee, and the ends of the pipe are pulled up to create the bend. To make it easier to retrieve the spring from the pipe, it is a good idea to bend the pipe slightly more than required, and then slacken it off a little. Springs are less cumbersome than rotary benders, but are not suitable for bending short lengths of piping when it is difficult to get the required leverage on the pipe ends. Bending springs for smaller diameter pipes (10 mm copper pipe) slide over the pipe instead of inside.
https://en.wikipedia.org/wiki/Tube_bending
Tube cleaning describes the activity of, or device for, the cleaning and maintenance of fouled tubes. [ 1 ] The need for cleaning arises because the medium that is transported through the tubes may cause deposits and eventually even obstructions. In systems engineering and in industry, particular demands are placed upon surface roughness or heat transfer. In the food and pharmaceutical industries as well as in medical technology, the requirements are germ-proofness and freedom from foreign matter, for example after the installation of the tube or after a change of product. Another source of trouble may be corrosion due to deposits, which may also cause tube failure. [ citation needed ] Depending on the application, conveying medium and tube material, the following methods of tube cleaning are available: In the medical field, "lost" tubes are tubes which have to be replaced after a single use. This is not genuine tube cleaning in the proper sense, and it is very often applied in the medical sector, for instance with the cannulas of syringes, infusion needles or medical appliances such as the kidney machines used in dialysis. The reasons for single use are primarily the elimination of infection risks, but also the fact that cleaning would be very expensive and, particularly with cheap mass-produced products, out of all proportion in terms of cost. Single use is therefore common practice with tubes of up to 20 mm diameter. For the same reasons as in the medical sector, single use may also be applicable in food and pharmaceutical process technology; however, in these sectors the tube diameters may exceed 20 mm. [ citation needed ] In other fields (e.g., in heat exchangers), tubing may also sometimes need to be replaced (or removed, plugged, etc.), but this typically occurs only after prolonged use, when the tube develops serious flaws (e.g., due to corrosion). [ citation needed ] Chemical tube cleaning is understood to be the use of cleaning liquids or chemicals for removing layers and deposits. A typical example is the descaling of a coffee maker, where scale is removed by means of acetic acid or citric acid. Depending on the field of application and the tube material, special cleaning liquids may be used which also require a multi-stage treatment. This method of cleaning calls for a shutdown of the relevant system, which causes undesired standstill periods. To safeguard a continuous production operation it may be necessary to install several systems. Another disadvantage is that in the field of large-scale technology (reactor, heat exchanger, condenser, etc.), huge quantities of cleaning liquids would be required, which would cause major disposal problems. A further problem occurs in the food industry through the possible toxicity of the cleaning liquid. Only strict observance of rinsing instructions and exact control of the admissible residue tolerances can remedy this. This in turn requires expensive detection methods. Generally, chemical tube cleaning is applicable for any diameter; however, practical limits of use arise from the volume of the pipeline. [ citation needed ] A mechanical tube cleaning system is a cleaning body that is moved through the tube in order to remove deposits from the tube wall. In the simplest case it is a brush that is moved through the tube by means of a rod or a flexible spring (device). In large-scale technology and the industrial sector, however, several processes have been developed which necessitate a more detailed definition.
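As a small worked illustration of the descaling example above (lime scale removed with acetic or citric acid), the sketch below estimates how much acid is needed to dissolve a given mass of calcium carbonate scale. The reaction stoichiometry is standard chemistry; the assumption that the scale is pure CaCO3 and the sample numbers are illustrative only.

```python
# Descaling illustration: CaCO3 + 2 CH3COOH -> Ca(CH3COO)2 + H2O + CO2
# Assumes the scale is pure calcium carbonate, which real deposits rarely are.

M_CACO3 = 100.09   # g/mol, calcium carbonate
M_ACETIC = 60.05   # g/mol, acetic acid
M_CITRIC = 192.12  # g/mol, citric acid (3 acidic protons)

def acid_needed(scale_g: float) -> dict:
    """Stoichiometric acid mass (grams) to dissolve scale_g of CaCO3."""
    mol_scale = scale_g / M_CACO3
    return {
        # 2 mol acetic acid per mol CaCO3
        "acetic_acid_g": 2 * mol_scale * M_ACETIC,
        # 2/3 mol citric acid per mol CaCO3 (3 protons per molecule)
        "citric_acid_g": (2 / 3) * mol_scale * M_CITRIC,
    }

# Example: 50 g of scale in a heat-exchanger tube
print(acid_needed(50.0))  # about 60 g acetic acid or 64 g citric acid
```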
An off-line process is characterized by the fact that the system to be cleaned has to be taken out of operation in order to inject the cleaning body (or bodies) and to execute the cleaning procedure. An additional distinction must be made between active and passive cleaning bodies. [ citation needed ] Passive cleaning bodies may be brushes or special constructions like scrapers or so-called "pigs", for instance, which are conveyed through the tubes by means of pressurized air, water, or other media. In most cases, cleaning relies on the cleaning bodies being slightly oversized compared to the tube inner diameter. The types range from brushes with bristles of plastic or steel to scrapers (for smaller tube diameters) and more expensive designs with spraying nozzles for pipelines. This method is applied for tube and pipe diameters from around 5 mm to several metres. Also belonging to this field is the cleaning of obstructed soil pipes of domestic sewage systems, which is done by means of a rotating, flexible shaft. [ citation needed ] Active cleaning bodies are more or less remote-controlled robots that move through the tubes and fulfill their cleaning task, pulling along with them not only cables for power supply and communication but also hoses for the cleaning liquid. Measuring devices or cameras may also be carried along to monitor the operation. To date, such devices have required minimum diameters of around 300 mm, although further reduction is being worked on. The reasonable maximum diameter for this kind of device is 2 m, because above this diameter an inspection of the pipe would certainly be less expensive. For such large diameters a robot application is conceivable only if health-hazardous chemicals are in use. [ citation needed ] In the on-line process, the cleaning body moves through the tubes with the conveying medium and cleans them by means of its oversize compared to the tube diameter. In the range of diameters up to 50 mm these cleaning bodies consist of sponge rubber; in larger diameters, up to the size of oil pipelines, scrapers or so-called pigs are used. Sponge rubber balls are applied mainly for cooling water, such as sea, river, or cooling tower water. For the chemical or pharmaceutical industry, specially adapted cleaning bodies are conceivable, but the conveying media flows are so weak that off-line processes are employed in most cases. Given that the cleaning bodies are not allowed to remain in the conveying medium, they have to be collected after passing through the tubes. In the case of sponge rubber balls this is done through special strainer sections; for scrapers or pigs an outward transfer station is provided. According to the Taprogge process, the sponge rubber balls are re-injected upstream of the system to be cleaned by a corresponding ball recirculating unit, whereas the scraper or pig is mostly taken out by hand and re-injected at another collector. Sponge rubber balls therefore provide continuous cleaning, while the scraper or pig system is discontinuous. [ citation needed ] In thermal tube cleaning, the layer or deposit is dried by heating, whereby it flakes off due to embrittlement and is then discharged, either by the conveying medium or by a rinsing liquid. Depending on the required temperature, the heating can be either a parallel tube heating or an induction heating. This process is an off-line process. Occasionally it is also used for the sterilization of tubes in the pharmaceutical or food industry.
A diameter range cannot be indicated here, because this method can be applied only for certain processes; a technical limitation of the heating is given only by the materials and the required amount of heat. [ citation needed ] Special types of tube cleaning are all those types which are partly at the experimental stage only and do not fall under the process types mentioned before.
https://en.wikipedia.org/wiki/Tube_cleaning
A tube furnace is an electric heating device used to conduct syntheses and purifications of inorganic compounds and occasionally in organic synthesis. One possible design consists of a cylindrical cavity surrounded by heating coils that are embedded in a thermally insulating matrix. Temperature can be controlled via feedback from a thermocouple. More elaborate tube furnaces have two (or more) heating zones useful for transport experiments. Some digital temperature controllers provide an RS-232 interface and permit the operator to program segments for uses like ramping, soaking, and sintering. Advanced materials in the heating elements, such as molybdenum disilicide (MoSi 2 ) offered in certain models, can now produce working temperatures up to 1800 °C. This facilitates more sophisticated applications. [ 1 ] Common materials for the reaction tubes include alumina, Pyrex, and fused quartz; in the case of corrosive materials, molybdenum or tungsten tubes can be used. The tube furnace was invented in the first decade of the 20th century and was originally used to manufacture ceramic filaments for Nernst lamps and glowers. [ 2 ] An example of a material prepared using a tube furnace is the superconductor YBa 2 Cu 3 O 7 . A mixture of finely powdered CuO, BaO, and Y 2 O 3 , in the appropriate molar ratio, contained in a platinum or alumina "boat," is heated in a tube furnace at several hundred degrees under flowing oxygen. Similarly, tantalum disulfide is prepared in a tube furnace and then purified, also in a tube furnace, using the technique of chemical vapor transport. [ 3 ] Because of the availability of tube furnaces, chemical vapor transport has become a popular technique not only in industry (see van Arkel–de Boer process ) but also in the research laboratory. Tube furnaces can also be used for thermolysis reactions, involving either organic or inorganic reactants. One such example is the preparation of ketenes, which may employ a tube furnace in the 'ketene lamp'. Flash vacuum pyrolysis often utilizes a fused quartz tube, usually packed with quartz or ceramic beads, which is heated at high temperatures.
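Since the controllers mentioned above allow ramp and soak segments to be programmed, the following minimal Python sketch illustrates the idea of a segment-based temperature profile. It is only an illustration of ramp/soak logic; the class names, segment values, and the linear-ramp assumption are hypothetical and not tied to any real controller's interface.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    target_c: float    # setpoint at the end of the segment, in degrees C
    minutes: float     # duration of the segment

def setpoint_at(profile: list[Segment], start_c: float, t_min: float) -> float:
    """Return the programmed setpoint t_min minutes into a ramp/soak profile.

    A segment whose target equals the previous setpoint is a soak;
    otherwise the setpoint ramps linearly over the segment's duration.
    """
    current = start_c
    elapsed = 0.0
    for seg in profile:
        if t_min <= elapsed + seg.minutes:
            frac = (t_min - elapsed) / seg.minutes
            return current + frac * (seg.target_c - current)
        current = seg.target_c
        elapsed += seg.minutes
    return current  # profile finished; hold the last setpoint

# Ramp to 900 C over 90 min, soak 120 min, cool to 200 C over 180 min
profile = [Segment(900, 90), Segment(900, 120), Segment(200, 180)]
print(setpoint_at(profile, start_c=25, t_min=150))  # mid-soak -> 900.0
```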
https://en.wikipedia.org/wiki/Tube_furnace
In mathematics, particularly topology, the tube lemma, also called Wallace's theorem, is a useful tool in order to prove that the finite product of compact spaces is compact. The lemma uses the following terminology: Tube Lemma — Let $X$ and $Y$ be topological spaces with $Y$ compact, and consider the product space $X \times Y$. If $N$ is an open set containing a slice in $X \times Y$, then there exists a tube in $X \times Y$ containing this slice and contained in $N$. Using the concept of closed maps, this can be rephrased concisely as follows: if $X$ is any topological space and $Y$ a compact space, then the projection map $X \times Y \to X$ is closed. Generalized Tube Lemma 1 — Let $X$ and $Y$ be topological spaces and consider the product space $X \times Y$. Let $A$ be a compact subset of $X$ and $B$ be a compact subset of $Y$. If $N$ is an open set containing $A \times B$, then there exist $U$ open in $X$ and $V$ open in $Y$ such that $A \times B \subseteq U \times V \subseteq N$. Generalized Tube Lemma 2 — Let $X_i$, $i \in I$, be topological spaces and consider the product space $\prod_{i \in I} X_i$. For each $i \in I$, let $A_i$ be a compact subset of $X_i$. If $N$ is an open set containing $\prod_{i \in I} A_i$, then there exist $U_i$ open in $X_i$, with $U_i = X_i$ for all but finitely many $i \in I$, such that $\prod_{i \in I} A_i \subseteq \prod_{i \in I} U_i \subseteq N$. 1. Consider $\mathbb{R} \times \mathbb{R}$ in the product topology, that is, the Euclidean plane, and the open set $N = \{(x, y) \in \mathbb{R} \times \mathbb{R} : |xy| < 1\}$. The open set $N$ contains $\{0\} \times \mathbb{R}$, but contains no tube, so in this case the tube lemma fails. Indeed, if $W \times \mathbb{R}$ is a tube containing $\{0\} \times \mathbb{R}$ and contained in $N$, then $W$ must be a subset of $(-1/x, 1/x)$ for all $x > 0$, which means $W = \{0\}$, contradicting the fact that $W$ is open in $\mathbb{R}$ (because $W \times \mathbb{R}$ is a tube). This shows that the compactness assumption is essential. 2. The tube lemma can be used to prove that if $X$ and $Y$ are compact spaces, then $X \times Y$ is compact, as follows: Let $\{G_a\}$ be an open cover of $X \times Y$.
For each $x \in X$, cover the slice $\{x\} \times Y$ by finitely many elements of $\{G_a\}$ (this is possible since $\{x\} \times Y$ is compact, being homeomorphic to $Y$). Call the union of these finitely many elements $N_x$. By the tube lemma, there is an open set of the form $W_x \times Y$ containing $\{x\} \times Y$ and contained in $N_x$. The collection of all $W_x$ for $x \in X$ is an open cover of $X$ and hence has a finite subcover $\{W_{x_1}, \dots, W_{x_n}\}$. Thus the finite collection $\{W_{x_1} \times Y, \dots, W_{x_n} \times Y\}$ covers $X \times Y$. Using the fact that each $W_{x_i} \times Y$ is contained in $N_{x_i}$ and each $N_{x_i}$ is the finite union of elements of $\{G_a\}$, one gets a finite subcollection of $\{G_a\}$ that covers $X \times Y$. 3. By part 2 and induction, one can show that the finite product of compact spaces is compact. 4. The tube lemma cannot be used to prove the Tychonoff theorem, which generalizes the above to infinite products. The tube lemma follows from the generalized tube lemma by taking $A = \{x\}$ and $B = Y$. It therefore suffices to prove the generalized tube lemma. By the definition of the product topology, for each $(a, b) \in A \times B$ there are open sets $U_{a,b} \subseteq X$ and $V_{a,b} \subseteq Y$ such that $(a, b) \in U_{a,b} \times V_{a,b} \subseteq N$. For any $a \in A$, $\{V_{a,b} : b \in B\}$ is an open cover of the compact set $B$, so this cover has a finite subcover; namely, there is a finite set $B_0(a) \subseteq B$ such that $V_a := \bigcup_{b \in B_0(a)} V_{a,b}$ contains $B$, where we observe that $V_a$ is open in $Y$. For every $a \in A$, let $U_a := \bigcap_{b \in B_0(a)} U_{a,b}$, which is a set open in $X$ since $B_0(a)$ is finite. Moreover, the construction of $U_a$ and $V_a$ implies that $\{a\} \times B \subseteq U_a \times V_a \subseteq N$. We now essentially repeat the argument to drop the dependence on $a$. Let $A_0 \subseteq A$ be a finite subset such that $U := \bigcup_{a \in A_0} U_a$ contains $A$, and set $V := \bigcap_{a \in A_0} V_a$. It then follows by the above reasoning that $A \times B \subseteq U \times V \subseteq N$ and that $U \subseteq X$ and $V \subseteq Y$ are open, which completes the proof.
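The rephrasing in terms of closed maps mentioned above follows from the tube lemma by a short argument; since the article states the equivalence without spelling it out, a sketch of the forward direction (tube lemma implies closedness of the projection) is given below.

```latex
% Tube lemma  =>  the projection \pi : X \times Y \to X is closed (Y compact).
% Let C \subseteq X \times Y be closed and take x \notin \pi(C).
% Then the slice \{x\} \times Y misses C, so it lies in the open set
% N := (X \times Y) \setminus C.  By the tube lemma there is an open
% W \ni x with W \times Y \subseteq N, hence W \cap \pi(C) = \emptyset,
% so the complement of \pi(C) is open, i.e. \pi(C) is closed.
\[
  x \notin \pi(C)
  \;\Longrightarrow\;
  \{x\} \times Y \subseteq (X \times Y) \setminus C
  \;\overset{\text{tube lemma}}{\Longrightarrow}\;
  \exists\, W \ni x \text{ open},\; W \times Y \subseteq (X \times Y)\setminus C
  \;\Longrightarrow\;
  W \cap \pi(C) = \emptyset .
\]
```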
https://en.wikipedia.org/wiki/Tube_lemma
Tube sound (or valve sound ) is the characteristic sound associated with a vacuum tube amplifier (valve amplifier in British English), a vacuum tube -based audio amplifier. [ 1 ] At first, the concept of tube sound did not exist, because practically all electronic amplification of audio signals was done with vacuum tubes and other comparable methods were not known or used. After the introduction of solid state amplifiers, tube sound appeared as the logical complement of transistor sound, which had some negative connotations due to crossover distortion in early transistor amplifiers. [ 2 ] [ 3 ] However, solid state amplifiers have since been developed to the point that their sound is regarded as essentially neutral compared to tube amplifiers, so the term tube sound now largely means 'euphonic distortion.' [ 4 ] The audible significance of tube amplification on audio signals is a subject of continuing debate among audio enthusiasts. [ 5 ] Many electric guitar, electric bass, and keyboard players in several genres also prefer the sound of tube instrument amplifiers or preamplifiers. Tube amplifiers are also preferred by some listeners for stereo systems. Before the commercial introduction of transistors in the 1950s, electronic amplifiers used vacuum tubes (known in the United Kingdom as "valves"). By the 1960s, solid state (transistorized) amplification had become more common because of its smaller size, lighter weight, lower heat production, and improved reliability. Tube amplifiers have retained a loyal following amongst some audiophiles and musicians. Some tube designs command very high prices, and tube amplifiers have been going through a revival since Chinese and Russian markets have opened to global trade—tube production never went out of vogue in these countries. [ further explanation needed ] Many transistor-based audio power amplifiers use MOSFET (metal–oxide–semiconductor field-effect transistor) devices in their power sections, because their distortion curve is more tube-like. [ 6 ] Some musicians [ 7 ] prefer the distortion characteristics of tubes over transistors for electric guitar, bass, and other instrument amplifiers. In this case, generating deliberate (and in the case of electric guitars often considerable) audible distortion or overdrive is usually the goal. The term can also be used to describe the sound created by specially designed transistor amplifiers or digital modeling devices that try to closely emulate the characteristics of the tube sound. The tube sound is often subjectively described as having a "warmth" and "richness", but the source of this is by no means agreed on. Possible explanations mention non-linear clipping, or the higher levels of second-order harmonic distortion in single-ended designs, resulting from the tube interacting with the inductance of the output transformer. Triodes (and MOSFETs ) produce a monotonically decaying harmonic distortion spectrum. Even-order and odd-order harmonics are both natural-number multiples of the input frequency. Psychoacoustic analysis indicates that high-order harmonics are more offensive to the ear than low-order ones. For this reason, distortion measurements should weight audible high-order harmonics more than low-order ones. The importance of high-order harmonics suggests that distortion should be regarded in terms of the complete series, or of the composite wave-form that this series represents. It has been shown that weighting the harmonics by the square of the order correlates well with subjective listening tests.
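As a rough illustration of the order-squared weighting just described, the sketch below compares a plain THD figure with a weighted one for two hypothetical harmonic spectra. The example amplitudes and the normalization by the fundamental are illustrative assumptions, not a standardized measurement procedure.

```python
import math

def thd(harmonics: dict[int, float], fundamental: float = 1.0) -> float:
    """Conventional THD: RMS of harmonic amplitudes relative to the fundamental."""
    return math.sqrt(sum(a * a for a in harmonics.values())) / fundamental

def weighted_thd(harmonics: dict[int, float], fundamental: float = 1.0) -> float:
    """Distortion with each harmonic weighted by the square of its order,
    so that high-order products count for much more than low-order ones."""
    return math.sqrt(sum((n * n * a) ** 2 for n, a in harmonics.items())) / fundamental

# Hypothetical spectra (amplitudes relative to a fundamental of 1.0):
# a 'tube-like' spectrum dominated by low-order harmonics, and a spectrum
# with roughly the same plain THD but more energy in high-order harmonics.
low_order  = {2: 0.030, 3: 0.010, 4: 0.003, 5: 0.001}
high_order = {3: 0.005, 5: 0.010, 7: 0.020, 9: 0.021}

print(thd(low_order), weighted_thd(low_order))    # similar plain THD ...
print(thd(high_order), weighted_thd(high_order))  # ... very different weighted figure
```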
Weighting the distortion wave-form proportionally to the square of the frequency gives a measure of the reciprocal of the radius of curvature of the wave-form, and is therefore related to the sharpness of any corners on it. [ 8 ] Based on this finding, highly sophisticated methods of weighting distortion harmonics have been developed. [ 9 ] Since they concentrate on the origins of the distortion, they are mostly useful for the engineers who develop and design audio amplifiers, but they may be difficult to use for reviewers who only measure the output. [ 10 ] A major issue is that objective measurements (for example, those indicating the magnitude of scientifically quantifiable variables such as current, voltage, power, THD, dB, and so on) fail to address subjective preferences. This is a considerable issue especially when designing or reviewing instrument amplifiers, because their design goals differ widely from those of, for example, HiFi amplifiers. HiFi design largely concentrates on improving the performance of objectively measurable variables, while instrument amplifier design largely concentrates on subjective issues, such as the "pleasantness" of a certain type of tone. Distortion and frequency response are good examples: HiFi design tries to minimize distortion, focuses on eliminating "offensive" harmonics, and aims for an ideally flat response, whereas musical instrument amplifier design deliberately introduces distortion and great non-linearities in frequency response. The "offensiveness" of certain types of harmonics then becomes a highly subjective topic, along with preferences towards certain types of frequency responses (whether flat or un-flat). [ citation needed ] Push–pull amplifiers use two nominally identical gain devices in tandem. One consequence of this is that all even-order harmonic products cancel, allowing only odd-order distortion. [ 11 ] This is because a push–pull amplifier has a symmetric ( odd symmetry ) transfer characteristic. Power amplifiers are usually of the push-pull type to avoid the inefficiency of class-A amplifiers. A single-ended amplifier will generally produce even as well as odd harmonics. [ 12 ] [ 13 ] [ 14 ] A particularly famous piece of research about "tube sound" compared a selection of single-ended tube microphone preamplifiers to a selection of push-pull transistorized microphone preamplifiers. [ 15 ] The difference in harmonic patterns of these two topologies has since often been incorrectly attributed to a difference between tube and solid-state devices (or even the amplifier class). Push–pull tube amplifiers can be run in class A (rarely), AB, or B. A class-B amplifier may also have crossover distortion, which is typically of high order and thus sonically very undesirable. [ 16 ] The distortion content of class-A circuits (SE or PP) typically reduces monotonically as the signal level is reduced, asymptotic to zero during quiet passages of music. [ 17 ] For this reason class-A amplifiers are especially desired for classical and acoustic music, since the distortion relative to the signal decreases as the music gets quieter. Class-A amplifiers measure best at low power; class-AB and B amplifiers measure best just below maximum rated power. [ citation needed ] Loudspeakers present a reactive load to an amplifier ( capacitance, inductance and resistance ). This impedance may vary in value with signal frequency and amplitude.
This variable loading affects the amplifier's performance both because the amplifier has nonzero output impedance (it cannot keep its output voltage perfectly constant when the speaker load varies) and because the phase of the speaker load can change the stability margin of the amplifier. The influence of the speaker impedance is different between tube amplifiers and transistor amplifiers. The reason is that tube amplifiers normally use output transformers and cannot use much negative feedback due to phase problems in transformer circuits. Notable exceptions are various "OTL" (output-transformerless) tube amplifiers, pioneered by Julius Futterman in the 1950s, or somewhat rarer tube amplifiers that replace the impedance matching transformer with additional (often, though not necessarily, transistorized) circuitry in order to eliminate parasitics and musically unrelated magnetic distortions. [ 18 ] In addition, many solid-state amplifiers designed specifically to amplify electric instruments such as guitars or bass guitars employ current feedback circuitry. This circuitry increases the amplifier's output impedance, resulting in a response similar to that of tube amplifiers. [ citation needed ] The design of speaker crossover networks and other electro-mechanical properties may result in a speaker with a very uneven impedance curve: a nominal 8 Ω speaker may present as little as 6 Ω at some frequencies and as much as 30–50 Ω elsewhere in the curve. An amplifier with little or no negative feedback will always perform poorly when faced with a speaker where little attention was paid to the impedance curve. [ citation needed ] There has been considerable debate over the characteristics of tubes versus bipolar junction transistors. Triodes and MOSFETs have certain similarities in their transfer characteristics. Later forms of the tube, the tetrode and pentode, have quite different characteristics that are in some ways similar to the bipolar transistor. Yet MOSFET amplifier circuits typically do not reproduce tube sound any more than typical bipolar designs, because of circuit differences between a typical tube design and a typical MOSFET design. A characteristic feature of most tube amplifiers is a high input impedance, typically 100 kΩ or more in modern designs and as much as 1 MΩ in classic designs. [ 19 ] The input impedance of the amplifier is a load for the source device. Even for some modern music reproduction devices the recommended load impedance is over 50 kΩ. [ 20 ] [ 21 ] This implies that the input of an average tube amplifier is a problem-free load for music signal sources. By contrast, some transistor amplifiers for home use have lower input impedances, as low as 15 kΩ. [ 22 ] Since the high input impedance makes it possible to use high output impedance source devices, other factors may need to be accounted for, such as cable capacitance and microphonics. Audio amplifiers are usually loaded by loudspeakers. In audio history, nearly all loudspeakers have been electrodynamic loudspeakers; there is also a minority of electrostatic loudspeakers and some other, more exotic types. Electrodynamic loudspeakers transform electric current into force and force into acceleration of the diaphragm, which causes sound pressure. Due to this principle, most loudspeaker drivers ought to be driven by an electric current signal. A current signal drives the electrodynamic speaker more accurately, causing less distortion than a voltage signal.
[ 23 ] [ 24 ] [ 25 ] In an ideal current or transconductance amplifier the output impedance approaches infinity. Practically all commercial audio amplifiers, however, are voltage amplifiers. [ 26 ] [ 27 ] Their output impedances have been intentionally developed to approach zero. Due to the nature of vacuum tubes and audio transformers, the output impedance of an average tube amplifier is usually considerably higher than that of modern audio amplifiers produced completely without vacuum tubes or audio transformers. Most tube amplifiers, with their higher output impedance, are therefore less ideal voltage amplifiers than solid state voltage amplifiers with their smaller output impedance. Soft clipping is a very important aspect of tube sound, especially for guitar amplifiers. A hi-fi amplifier should not normally ever be driven into clipping. The harmonics added to the signal are of lower energy with soft clipping than with hard clipping. However, soft clipping is not exclusive to tubes. It can be simulated in transistor circuits (below the point that real hard clipping would occur). (See "Intentional distortion" section.) Large amounts of global negative feedback are not available in tube circuits, due to phase shift in the output transformer and a lack of sufficient gain without large numbers of tubes. With lower feedback, distortion is higher and predominantly of low order. The onset of clipping is also gradual. Large amounts of feedback, allowed by transformerless circuits with many active devices, lead to numerically lower distortion but with more high harmonics, and a harder transition to clipping. As the input increases, the feedback uses the extra gain to ensure that the output follows it accurately until the amplifier has no more gain to give and the output saturates. However, phase shift is largely an issue only with global feedback loops. Design architectures with local feedback can be used to compensate for the lack of global negative feedback magnitude. Design "selectivism" is again a trend to observe: designers of sound-producing devices may find the lack of feedback and the resulting higher distortion beneficial, while designers of sound-reproducing devices aiming for low distortion have often employed local feedback loops. Soft clipping is also not a product of the lack of feedback alone: tubes have different characteristic curves, and factors such as bias affect the load line and clipping characteristics. Fixed and cathode-biased amplifiers behave and clip differently under overdrive. The type of phase inverter circuitry can also greatly affect the softness (or lack of it) of clipping: a long-tailed pair circuit, for example, has a softer transition to clipping than a cathodyne. The coupling of the phase inverter and power tubes is also important, since certain types of coupling arrangements (e.g. transformer coupling) can drive power tubes to class AB2, while some other types cannot. In the recording industry, and especially with microphone amplifiers, it has been shown that amplifiers are often overloaded by signal transients. Russell O. Hamm, an engineer working for Walter Sear at Sear Sound Studios, wrote in 1973 that there is a major difference between the harmonic distortion components of a signal with greater than 10% distortion that had been amplified with three methods: tubes, transistors, or operational amplifiers. [ 15 ] [ 28 ] Mastering engineer R. Steven Mintz wrote a rebuttal to Hamm's paper, saying that the circuit design was of paramount importance, more than tubes vs. solid state components.
[ 29 ] Hamm's paper was also countered by Dwight O. Monteith Jr and Richard R. Flowers in their article "Transistors Sound Better Than Tubes", which presented a transistor mic preamplifier design that reacted to transient overloading in a manner similar to the limited selection of tube preamplifiers tested by Hamm. [ 30 ] Monteith and Flowers said: "In conclusion, the high voltage transistor preamplifier presented here supports the viewpoint of Mintz: 'In the field analysis, the characteristics of a typical system using transistors depends on the design, as is the case in tube circuits. A particular 'sound' may be incurred or avoided at the designer's pleasure no matter what active devices he uses.'" [ 30 ] In other words, soft clipping is not exclusive to vacuum tubes, nor even an inherent property of them. In practice the clipping characteristics are largely dictated by the entire circuit, and so they can range from very soft to very hard. The same applies to both vacuum tube and solid-state based circuitry. For example, solid-state circuitry such as operational transconductance amplifiers operated open loop, or MOSFET cascades of CMOS inverters, are frequently used in commercial applications to generate softer clipping than what is provided by generic triode gain stages. In fact, generic triode gain stages can be observed to clip rather "hard" if their output is scrutinized with an oscilloscope. Early tube amplifiers often had limited response bandwidth, in part due to the characteristics of the inexpensive passive components then available. In power amplifiers most limitations come from the output transformer; low frequencies are limited by primary inductance and high frequencies by leakage inductance and capacitance. Another limitation is the combination of high output impedance, coupling capacitor and grid resistor, which acts as a high-pass filter. If interconnections are made with long cables (for example guitar to amp input), a high source impedance combined with high cable capacitance will act as a low-pass filter. Modern premium components make it easy to produce amplifiers that are essentially flat over the audio band, with less than 3 dB attenuation at 6 Hz and 70 kHz, well outside the audible range. [ citation needed ] Typical (non-OTL) tube power amplifiers could not use as much negative feedback (NFB) as transistor amplifiers, due to the large phase shifts caused by the output transformers and their lower stage gains. While the absence of NFB greatly increases harmonic distortion, it avoids instability, as well as the slew rate and bandwidth limitations imposed by dominant-pole compensation in transistor amplifiers. However, the effects of using low feedback principally apply only to circuits where significant phase shifts are an issue (e.g. power amplifiers). In preamplifier stages, high amounts of negative feedback can easily be employed. Such designs are commonly found in many tube-based applications aiming at higher fidelity. On the other hand, the dominant-pole compensation in transistor amplifiers is precisely controlled: exactly as much of it can be applied as needed to strike a good compromise for the given application. The effect of dominant-pole compensation is that gain is reduced at higher frequencies. There is increasingly less NFB at high frequencies due to the reduced loop gain.
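The remark above that loop gain, and hence available feedback, falls with frequency in a dominant-pole-compensated amplifier can be illustrated with a small sketch. The numbers (open-loop gain, pole frequency, feedback fraction) are arbitrary illustrative values, not measurements of any particular amplifier.

```python
import math

def feedback_db(f_hz: float, a0: float = 100_000.0, f_pole: float = 10.0,
                beta: float = 0.01) -> float:
    """Available negative feedback, in dB, at frequency f_hz for a
    single-pole (dominant-pole) open-loop response.

    a0     : DC open-loop gain
    f_pole : dominant pole frequency in Hz
    beta   : fraction of the output fed back to the input
    """
    # Open-loop gain magnitude of a single-pole response
    a = a0 / math.sqrt(1.0 + (f_hz / f_pole) ** 2)
    # Feedback (gain reduction) is 20*log10|1 + A*beta|
    return 20.0 * math.log10(abs(1.0 + a * beta))

for f in (100, 1_000, 10_000, 20_000):
    print(f, round(feedback_db(f), 1))
# The available feedback drops as frequency rises past the dominant pole,
# which is why less correction is applied to high-frequency distortion.
```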
In audio amplifiers, the bandwidth limitations introduced by compensation are still far beyond the audio frequency range, and the slew rate limitations can be configured such that a full-amplitude 20 kHz signal can be reproduced without encountering slew rate distortion, which is more than actual audio material requires. Early tube amplifiers had power supplies based on rectifier tubes. These supplies were unregulated, a practice which continues to this day in transistor amplifier designs. The typical anode supply was a rectifier, perhaps half-wave, a choke ( inductor ) and a filter capacitor. When the tube amplifier was operated at high volume, due to the high impedance of the rectifier tubes, the power supply voltage would dip as the amplifier drew more current (assuming class AB), reducing power output and causing signal modulation. The dipping effect is known as "sag." Sag may be a desirable effect for some electric guitarists when compared with hard clipping. As the amplifier load or output increases, this voltage drop will increase distortion of the output signal. Sometimes this sag effect is desirable for guitar amplification. With added resistance in series with the high-voltage supply, silicon rectifiers can emulate the voltage sag of a tube rectifier. The resistance can be switched in when required. [ 31 ] Electric guitar amplifiers often use a class-AB 1 amplifier. In a class-A stage the average current drawn from the supply is constant with signal level; consequently, it does not cause supply line sag until the clipping point is reached. Other audible effects due to using a tube rectifier with this amplifier class are unlikely. Unlike their solid-state equivalents, tube rectifiers require time to warm up before they can supply B+/HT voltages. This delay can protect rectifier-supplied vacuum tubes from cathode damage due to the application of B+/HT voltages before the tubes' built-in heaters have brought them to their correct operating temperature. [ 32 ] The benefit of all class-A amplifiers is the absence of crossover distortion. This crossover distortion was found especially annoying after the first silicon-transistor class-B and class-AB transistor amplifiers arrived on the consumer market. Earlier germanium-based designs, with the much lower turn-on voltage of that technology and the non-linear response curves of the devices, had not shown large amounts of crossover distortion. Although crossover distortion is very fatiguing to the ear and perceptible in listening tests, it is also almost invisible (until looked for) in the traditional total harmonic distortion (THD) measurements of that epoch. [ 33 ] It should be pointed out that this reference is somewhat ironic given its publication date of 1952. As such, it most certainly refers to "ear fatigue" distortion commonly found in existing tube-type designs; the world's first prototype transistorized hi-fi amplifier did not appear until 1955. [ 34 ] A class-A push–pull amplifier produces low distortion for any given level of applied feedback, and also cancels the flux in the transformer cores, so this topology is often seen by HiFi-audio enthusiasts and do-it-yourself builders as the ultimate engineering approach to the tube Hi-fi amplifier for use with normal speakers. Output power as high as 15 watts can be achieved even with classic tubes such as the 2A3 [ 35 ] or 18 watts from the type 45. Classic pentodes such as the EL34 and KT88 can output as much as 60 and 100 watts respectively.
Special types such as the V1505 can be used in designs rated at up to 1100 watts. See "An Approach to Audio Frequency Amplifier Design", a collection of reference designs originally published by G.E.C. SET amplifiers measure poorly for distortion into a resistive load, have low output power, are inefficient, and have poor damping factors, but they perform somewhat better in dynamic and impulse response. The triode, despite being the oldest signal amplification device, can also (depending on the device in question) have a more linear no-feedback transfer characteristic than more advanced devices such as beam tetrodes and pentodes. All amplifiers, regardless of class, components, or topology, have some measure of distortion. In these amplifiers the distortion is mainly harmonic, with a simple and monotonically decaying series of harmonics dominated by modest levels of second harmonic. The result is like adding the same tone one octave higher in the case of second-order harmonics, and one octave plus one fifth higher for third-order harmonics. The added harmonic tone is lower in amplitude, at about 1–5% or less in a no-feedback amp at full power, and it decreases rapidly at lower output levels. Hypothetically, a single-ended power amplifier's second harmonic distortion might reduce similar harmonic distortion in a single-driver loudspeaker, if their harmonic distortions were equal and the amplifier was connected to the speaker so that the distortions would neutralize each other. [ 36 ] [ 37 ] [ 38 ] SETs usually produce only about 2 watts (W) for a 2A3 tube amp to 8 W for a 300B, up to the practical maximum of 40 W for an 805 tube amp. The resulting sound pressure level depends on the sensitivity of the loudspeaker and the size and acoustics of the room, as well as the amplifier power output. Their low power also makes them ideal for use as preamps. SET amps have a power consumption of a minimum of 8 times the stated stereo power. For example, a 10 W stereo SET uses a minimum of 80 W, and typically 100 W. The special feature among tetrodes and pentodes is the possibility of obtaining ultra-linear or distributed-load operation with an appropriate output transformer. In practice, in addition to loading the plate terminal, distributed loading (of which the ultra-linear circuit is a specific form) distributes the load also to the cathode and screen terminals of the tube. An ultra-linear connection and distributed loading are both in essence negative feedback methods, which enable less harmonic distortion along with other characteristics associated with negative feedback. Ultra-linear topology has mostly been associated with amplifier circuits based on research by D. Hafler and H. Keroes of Dynaco fame. Distributed loading (in general and in various forms) has been employed by the likes of McIntosh and Audio Research. The majority of modern commercial Hi-fi amplifier designs have until recently used class-AB topology (with more or less pure low-level class-A capability depending on the standing bias current used), in order to deliver greater power and efficiency, typically 12–25 watts and higher. Contemporary designs normally include at least some negative feedback. However, class-D topology (which is vastly more efficient than class B) is more and more frequently applied where a traditional design would use class AB, because of its advantages in both weight and efficiency.
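The relationship mentioned above between amplifier power, loudspeaker sensitivity and the resulting sound pressure level can be made concrete with a short sketch. It uses the standard relation SPL = sensitivity + 10·log10(power) at 1 m and ignores room effects and power compression; the example figures are illustrative only.

```python
import math

def spl_at_1m(sensitivity_db: float, power_w: float) -> float:
    """Approximate on-axis SPL at 1 m for a speaker of given sensitivity
    (dB SPL for 1 W at 1 m) driven with power_w watts.
    Ignores room gain, multiple drivers, and power compression."""
    return sensitivity_db + 10.0 * math.log10(power_w)

# A 2 W SET into a high-sensitivity (100 dB/W/m) horn speaker ...
print(round(spl_at_1m(100.0, 2.0), 1))   # about 103 dB SPL
# ... versus a 40 W amplifier into a typical 87 dB/W/m speaker
print(round(spl_at_1m(87.0, 40.0), 1))   # also about 103 dB SPL
```

This is one reason low-power single-ended amplifiers are usually paired with high-sensitivity loudspeakers.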
Class-AB push–pull topology is nearly universally used in tube amps for electric guitar applications that produce power of more than about 10 watts. Some individual characteristics of the tube sound, such as the waveshaping on overdrive, are straightforward to produce in a transistor circuit or digital filter. For more complete simulations, engineers have been successful in developing transistor amplifiers that produce a sound quality very similar to the tube sound. Usually this involves using a circuit topology similar to that used in tube amplifiers. More recently, a researcher has introduced the asymmetric cycle harmonic injection (ACHI) method to emulate tube sound with transistors. [ 39 ] Using modern passive components, modern sources (whether digital or analogue), and wide-band loudspeakers, it is possible to build tube amplifiers with the characteristic wide bandwidth of modern transistor amplifiers, including using push–pull circuits, class AB, and feedback. Some enthusiasts, such as Nelson Pass, have built amplifiers using transistors and MOSFETs that operate in class A, including single-ended designs, and these often have the "tube sound." [ 40 ] Tubes are added to solid-state amplifiers to impart characteristics that many people find audibly pleasant, such as Musical Fidelity 's use of Nuvistors (tiny triode tubes) to control large bipolar transistors in their NuVista 300 power amp. In America, Moscode and Studio Electric use this method, but use MOSFET transistors for power, rather than bipolar. Pathos, an Italian company, has developed an entire line of hybrid amplifiers. To demonstrate one aspect of this effect, one may use a light bulb in the feedback loop of an infinite gain multiple feedback (IGMF) circuit. The slow response of the light bulb's resistance (which varies according to temperature) can thus be used to moderate the sound and attain a tube-like "soft limiting" of the output, though other aspects of the "tube sound" would not be duplicated in this exercise. Tube sound can also be reproduced without using tubes at all. It is possible to reproduce the warm and rich sound of vacuum tubes using solid-state systems, and even by incorporating fast computers and synthesizers to enhance the effect. One advantage of this approach is the increased reliability of a solid state system compared to a vacuum tube system. Some techniques and methods for achieving this are listed below, followed by a short illustrative sketch. 1. Tube emulation circuits: special electronic circuits based on transistors and other analog components can be used to mimic the nonlinear characteristics of vacuum tubes. 2. DSP (digital signal processing): digital signal processing allows the accurate reproduction of harmonic distortions. Fast processors and advanced algorithms can be used to simulate the characteristic sound of vacuum tubes in real time. 3. Synthesizers: digital synthesizers can generate warm tones through internal processing and adjustable parameters that mimic the properties of vacuum tubes. 4. Mathematical models: mathematical models have been developed to simulate the behavior of tubes and their effects on sound. These include models of harmonic distortions and real-time response models. 5. Hybrid analog-digital components: combining analog and digital circuits can provide the best of both worlds. The analog circuit can provide unique distortions and responsiveness, while the digital circuit allows for advanced signal processing.
Using these methods and technologies, it is possible to create audio systems that provide the warm and rich sound characteristic of tubes while maintaining the accuracy and reliability of solid-state systems.
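As a minimal illustration of the DSP and tube-emulation approaches listed above, the sketch below applies an asymmetric soft-clipping waveshaper to a sine wave and inspects its harmonic content. The specific transfer function, bias value, and signal levels are illustrative choices, not a published tube model; the point is only that an asymmetric soft curve yields both even- and odd-order harmonics with a gentle onset of clipping, as discussed earlier in the article.

```python
import math

def soft_clip(x: float, bias: float = 0.2) -> float:
    """Asymmetric soft clipper: a tanh curve with a small input offset.
    The offset makes positive and negative half-cycles clip differently,
    which introduces even-order as well as odd-order harmonics."""
    return math.tanh(x + bias) - math.tanh(bias)

def harmonic_levels(drive: float, n_harmonics: int = 5, n: int = 4096):
    """Crude single-cycle DFT of a driven sine passed through the waveshaper."""
    samples = [soft_clip(drive * math.sin(2 * math.pi * k / n)) for k in range(n)]
    levels = []
    for h in range(1, n_harmonics + 1):
        re = sum(s * math.cos(2 * math.pi * h * k / n) for k, s in enumerate(samples))
        im = sum(s * math.sin(2 * math.pi * h * k / n) for k, s in enumerate(samples))
        levels.append(2.0 * math.hypot(re, im) / n)
    return levels

quiet = harmonic_levels(drive=0.2)
loud = harmonic_levels(drive=2.0)
print("quiet:", [round(v, 4) for v in quiet])  # almost pure fundamental
print("loud: ", [round(v, 4) for v in loud])   # 2nd and 3rd harmonics appear
```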
https://en.wikipedia.org/wiki/Tube_sound
Tube tools are tools used to service tubing (material) in industrial applications including, but not limited to: HVAC or industrial heating and air (hospitals and universities, for example), OEMs ( original equipment manufacturers ), defense contractors, the automotive industry, process industries, aluminum smelting facilities, food and sugar production plants, oil refineries, and power plants. Tube tools can be categorized by function and application: Tube cleaners - tube cleaning demands vary widely by application. Shell and tube heat exchangers, condensers and chillers : deposits are typically sediment from impurities in the water circulating through the tubes. Manual scrubbing with a wire brush attached to a long rod is the traditional method of cleaning. Modern methods involve pneumatic or electric motors to pulse the brush automatically, with a medium-pressure water jet to further clean out residual deposits. Tube testers - use air pressure or vacuum to test for leaks, cracks, and material failures in a tube. Both manually activated and air-activated models are available. To test for leaks in tubes, two operators are required, with an operator at each end of the vessel. Step 1 - Seal the tube at both ends. Step 2 - Build air pressure or vacuum in the tube. Step 3 - Observe the gauge to see if air pressure is dropping or vacuum is not holding. Tube plugs (repair) - to regain efficiency of a heat exchanger or chiller, tube plugs are installed to take leaky tubes out of service. The rule of thumb is that a vessel will need to be retubed after approximately 10% of the tubes have been plugged. Tubes need to be plugged at both ends of the pressure vessel. It is good practice to install a plug made of the same or a compatible material as the tube and tube sheet, and to puncture the leaky tube using a one-revolution tube cutter to relieve back pressure. Some contractors weld the tube plugs to the tube sheet after they are installed to permanently secure the plugs in place. Tube removers - one or a combination of the three methods below is used to remove a tube from a boiler or chiller vessel. Tube installation (tube expanders) - tube expanding is the art of reducing a tube wall by compressing the outer diameter of the tube against a fixed container, such as rolling tubes into tube sheets, drums, ferrules or flanges. Construction of heat exchanger, boiler, and surface condenser tubes is mainly limited to copper, steel, stainless steel, and cast iron, with exceptions such as the use of titanium in ultra-high-pressure vessel applications. To assure a proper tube joint, the tube wall must be reduced by a predetermined percentage, dependent upon the material the tube is constructed of. Pneumatic or hydraulic torque-rolling devices with an expander (consisting of a mandrel, a cage with rolls, and a case assembly with a thrust collar) are used to expand the end of the tube so it seals against the tube sheet of the vessel. It is important to note that the tool has to be paired not only with the tube material but also with the inner and outer dimensions of the tube. The thickness of the tube sheet (what each individual tube is inserted into) also has to be taken into consideration during tube removal or installation procedures.
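The predetermined percentage of wall reduction mentioned above is commonly worked out from the tube and tube-sheet dimensions. The sketch below shows one common way of computing the target inside diameter for a chosen wall reduction; the formula is a widely used field approximation, and the example dimensions and the 5% target are illustrative only, not values from the source.

```python
def target_rolled_id(tube_od: float, wall: float, hole_id: float,
                     pct_reduction: float) -> float:
    """Target tube inside diameter after rolling, for a given wall reduction.

    tube_od       : tube outside diameter (same units throughout)
    wall          : original tube wall thickness
    hole_id       : tube sheet hole diameter
    pct_reduction : desired wall thinning, in percent (e.g. 5.0)

    Model: the tube is first expanded until metal-to-metal contact with the
    hole (the ID grows by the diametral clearance); further rolling thins the
    wall, and the ID grows by twice the wall reduction.
    """
    tube_id = tube_od - 2.0 * wall
    clearance = hole_id - tube_od
    id_at_contact = tube_id + clearance
    return id_at_contact + 2.0 * wall * (pct_reduction / 100.0)

# Example: 19.05 mm (3/4 in) OD tube, 1.65 mm wall, 19.30 mm hole, 5% reduction
print(round(target_rolled_id(19.05, 1.65, 19.30, 5.0), 3))  # about 16.165 mm
```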
https://en.wikipedia.org/wiki/Tube_tool
Tuberculin, also known as purified protein derivative, is a combination of proteins that are used in the diagnosis of tuberculosis. [ 1 ] This use is referred to as the tuberculin skin test and is recommended only for those at high risk. [ 2 ] Reliable administration of the skin test requires large amounts of training, supervision, and practice. Injection is done into the skin. [ 2 ] After 48 to 72 hours, if there is more than a five to ten millimeter area of swelling, the test is considered positive. [ 2 ] Common side effects include redness, itchiness (pruritus), and pain at the site of injection. [ 1 ] Allergic reactions may occasionally occur. [ 1 ] The test may be falsely positive in those who have been previously vaccinated with BCG or have been infected by other types of mycobacteria. [ 2 ] The test may be falsely negative within ten weeks of infection, in those less than six months old, and in those who have been infected for many years. [ 2 ] Use is safe in pregnancy. [ 2 ] Tuberculin was discovered in 1890 by Robert Koch. [ 3 ] Koch, best known for his work on the etiology (cause, origin) of tuberculosis (TB), laid down various rigorous guidelines for establishing the link between a pathogen and the specific disease it causes; these were later named Koch's postulates. [ 4 ] Although he initially believed it would cure tuberculosis, this was later disproved. [ 3 ] Tuberculin is made from an extract of Mycobacterium tuberculosis. [ 1 ] It is on the World Health Organization's List of Essential Medicines. [ 5 ] The test used in the United States at present is referred to as the Mantoux test. An alternative test called the Heaf test was used in the United Kingdom until 2005, although the UK now uses the Mantoux test in line with the rest of the world. Both of these tests use the tuberculin derivative PPD (purified protein derivative). [ citation needed ] Tuberculin was invented by the German scientist and physician Robert Koch in 1890. The original tuberculin was a glycerine extract of the tubercle bacilli and was developed as a remedy for tuberculosis. It was originally considered a cure for tuberculosis, given to patients as subcutaneous doses of a brownish, transparent liquid obtained from culture filtrates. [ 6 ] However, the treatment did not result in the anticipated reduction of deaths. [ citation needed ] When the tuberculin treatment was first given to patients in 1891, a febrile reaction that lasted between four and five hours was recorded in most patients. The symptoms of these reactions included a fever that was accompanied by vomiting, rigors, or other constitutional symptoms. [ 6 ] As these symptoms recurred in patients, Koch noted that increasing the dosage over time appeared to result in quicker and more effective healing in mild cases of tuberculosis, while in the more serious cases progression was slower yet still continued. [ 6 ] British efforts to set up "dispensaries" for the examination, diagnosis and treatment of poor citizens achieved better results, as the protocol of the Edinburgh System encompassed treatment of the homes and all contacts of the patients with TB. [ 7 ] As an example, Dr Hilda Clark's dispensary at Street, Somerset was especially noted for its efficacious treatment of the less severe cases.
[ 7 ] Clemens von Pirquet , an Austrian physician, discovered that patients who had previously received injections of horse serum or smallpox vaccine had quicker, more severe reactions to a second injection, and he coined the word allergy to describe this hypersensitivity reaction. Soon after, he discovered that the same type of reaction took place in those infected with tuberculosis. His observations led to the development of the tuberculin skin test. Individuals with active tuberculosis were usually tuberculin positive, but many of those with disseminated and rapidly progressive disease were negative. This led to the widespread but erroneous belief that tuberculin reactivity is an indicator of immunity to tuberculosis. [ citation needed ] In Koch's time, close to one in seven Germans died of tuberculosis. For that reason, the public reacted euphorically to the discovery of the pathogen since it sparked hope for a cure. Until that time, the only effective remedy for an infectious disease was quinine , which was used to treat malaria. [ citation needed ] At the Tenth International Medical Congress held in 1890 in Berlin, Koch unexpectedly introduced a cure for tuberculosis, which he called tuberculin. He did not reveal its composition, which was not unusual as it was not then customary to patent medicine, Phenazone being the only exception. The public trusted the famous physician and reacted enthusiastically. Koch was awarded the Grand Cross of the Order of the Red Eagle . [ citation needed ] The social hygienist Alfred Grotjahn described the arrival of tuberculin in Greifswald : "Finally the great day also arrived for Greifswald on which the Clinic for Internal Medicine was to carry out the first inoculations with tuberculin. It was celebrated like the laying of a foundation stone or the unveiling of a monument. Doctors, nurses and patients dressed in snowy white and the director, garbed in a black frock coat, stood out against a background of laurel trees: ceremonial address by the internist, execution of the vaccination on selected patients, a thunderous cheer for Robert Koch!" [ 8 ] Koch attempted to profit from his discovery, which was held against him since he had conducted his research at a public institution using public money. He demanded that the Ministry of Culture finance an institute to be used exclusively for tuberculin production, and estimated the annual profit at 4.5 million marks . Koch also hinted that he had received offers from the US. [ 9 ] At the time, regulations for testing medicines did not yet exist. According to Koch, he had tested tuberculin on animals, but he was unable to produce the guinea pigs which had allegedly been cured. [ 10 ] : 106 He seemed unconcerned by the evidence that humans had a more dramatic reaction to tuberculin versus his laboratory animals, exhibiting fever, pains in their joints, and nausea. [ 10 ] : 101 In addition to other test subjects, he tested tuberculin on Hedwig Freiberg (his mistress and later wife), who was 16 years old at the time. She relates in her memoirs that Koch had told her that she could "possibly get quite sick" but that she was "not likely to die". [ 9 ] In February 1891, a medical trial was performed on 1769 patients to whom tuberculin was administered, and the results made clear that it was not a true cure. 
Tuberculin failed to provide any real protective or curative action: only 1% of people in the trial were cured, 34% showed only slight improvement, 55% showed little to no change in their health, and 4% died, the treatment having had no effect. [ 6 ] After tuberculin was on the market, articles reporting successful treatments appeared in professional publications and the public media, only to be followed by the first reports of deaths. At first, the negative reports were not viewed with alarm, as the doctors were, after all, experimenting on seriously ill patients. [ 10 ] : 133f After performing autopsies, Rudolf Virchow showed that tuberculin not only failed to kill the bacteria but even activated latent bacteria. [ 10 ] : 136 When Robert Koch was forced to reveal the composition of his "secret cure", it was discovered that he himself did not precisely know what it contained. Before tuberculin was released to the public, Koch had initially tested the treatment on himself to determine its toxicity to the human body, a method no longer considered reliable or acceptable for establishing drug safety. [ 6 ] It was an extract of tuberculosis pathogens in glycerine, and the presence of the dead pathogens themselves could also be confirmed. [ clarification needed ] Koch asked the Prussian Minister of Culture for time off and went to Egypt, which was interpreted as an attempt to escape from the German public. A heated debate took place in the Prussian parliament in May 1891. Koch remained convinced of the value of his cure. In 1897, he presented a modified form of tuberculin, which was also ineffective as a therapeutic agent. This presentation, and numerous other indications, suggest that he did not intend to commit a "tuberculin scam" (a common accusation), but that he had deluded himself. [ citation needed ] The medical historian Christoph Gradmann has reconstructed Koch's beliefs regarding the function of tuberculin: the medicine did not kill the bacteria but rather initiated a necrosis of the tubercular tissue, thus "starving" the tuberculosis pathogen. [ 10 ] : 100f This idea lay outside customary medical theories then, as it does today. [ citation needed ] The tuberculin scandal was understood as a cautionary tale with regard to testing medicine. Emil von Behring 's introduction of his diphtheria antitoxin in 1893 had been preceded by lengthy clinical testing, and the serum was only slowly introduced into practical use, accompanied by a critical discussion among qualified experts. [ 11 ] Paul Ehrlich also proceeded with conspicuous caution in 1909 when introducing the first synthetically produced chemotherapeutic agent, Salvarsan , as a cure for an infectious disease, syphilis . [ citation needed ] In 1907, Clemens von Pirquet further developed tuberculin as a testing agent for diagnosing tuberculosis, but this was his own achievement, independent of any of Robert Koch's ideas. The company Meister Lucius & Brüning AG (later Hoechst AG ) in Frankfurt/Höchst purchased the large leftover stocks of tuberculin and the company later began production under the leadership of Koch's student Arnold Libbertz. [ 12 ] When Koch first discovered and released the testing process for tuberculosis, there was no realization of how widely this type of diagnostic test would be used.
Clinical trials and observations of the differing responses to tuberculin in patients with and without tuberculosis led to new testing methods built on the same principle. Methods that reduced the systemic symptoms caused by a local reaction at the injection site allowed further medical advances. These included the Pirquet cutaneous test, the Moro percutaneous patch test, the Mantoux intracutaneous test, and the Calmette conjunctival test. [ 6 ] The experience gained from the tuberculin skin test over the greater part of the twentieth century means that much of the current body of medical knowledge and later advances trace back to Robert Koch's work. Through the failures and successes of tuberculin, more is known than ever before about the causes and symptoms of tuberculosis and the measures to prevent it. The tuberculin skin test also paved the way to the understanding of many other mycobacterial infections as well as certain fungal infections. [ 13 ] As the idea of skin testing broadened, it prompted deeper research into the immune systems of humans and animals. An in-depth understanding of such diagnostic tests was not possible until the tuberculin skin test was discovered. [ 13 ]
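The opening paragraph of this entry describes how the modern skin test is read: the injection site is examined after 48 to 72 hours and the size of the reaction is compared with a millimetre cutoff. The sketch below is a minimal illustration in Python; the risk-stratified cutoffs of 5, 10 and 15 mm are commonly cited guideline values assumed here purely for illustration and are not taken from the text above, and real interpretation should follow current clinical guidance.

```python
# Minimal sketch of tuberculin skin test (Mantoux) interpretation.
# The 5/10/15 mm cutoffs are commonly cited guideline values assumed here
# for illustration; they are not taken from the article above.

CUTOFFS_MM = {
    "high_risk": 5,       # e.g. immunosuppressed patients or recent close contacts
    "moderate_risk": 10,  # e.g. healthcare workers, arrivals from high-burden areas
    "low_risk": 15,       # no known risk factors
}

def read_tst(induration_mm: float, risk_group: str, hours_since_injection: float) -> str:
    """Classify a tuberculin skin test reading against the assumed cutoffs."""
    if not 48 <= hours_since_injection <= 72:
        return "invalid: read the test 48-72 hours after injection"
    cutoff = CUTOFFS_MM[risk_group]
    return "positive" if induration_mm >= cutoff else "negative"

print(read_tst(12, "moderate_risk", 60))  # positive (12 mm >= 10 mm cutoff)
print(read_tst(12, "low_risk", 60))       # negative (12 mm < 15 mm cutoff)
```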
https://en.wikipedia.org/wiki/Tuberculin
The Tubular Exchanger Manufacturers Association (also known as TEMA ) is an association of fabricators of shell and tube type heat exchangers . [ 1 ] TEMA has established and maintains a set of construction standards for heat exchangers, known as the TEMA Standard. [ 2 ] TEMA also produces software for evaluation of flow-induced vibration and of flexible shell elements ( expansion joints ). TEMA was founded in 1939, and is based in Tarrytown, New York . [ 3 ] The association meets regularly to revise and update the standards, respond to inquiries, and discuss topics related to the industry. The current edition of the TEMA Standard is the Tenth Edition, published in 2019. [ 4 ] Worldwide, the TEMA Standard is used as the construction standard for most shell and tube heat exchangers . [ 5 ] [ 6 ] [ 7 ] The standard is composed of ten sections: [ 8 ] TEMA's standard recognizes three separate classifications of exchangers. [ 9 ] [ 10 ] [ 11 ] Each class has different mechanical construction requirements, based on the expected service. Those classes are: In general, Class C is the least restrictive class, and Class R is the most stringent, ensuring more robust designs for longer life in harsher service conditions. [ 12 ] Because heat exchangers can be configured in many different ways, TEMA has standardized the nomenclature of exchanger types. [ 13 ] A letter designation is used for the front head type, shell type, and rear head type of an exchanger. For example, a fixed tubesheet exchanger with bolted removable bonnets is designated as a 'BEM' type. A kettle type reboiler with a removable U-tube bundle is a 'BKU' type. Many different letter combinations are possible (see the sketch at the end of this entry). The member companies of TEMA must demonstrate high-quality exchanger fabrication standards, and possess in-house engineering capability for mechanical and thermal design of shell and tube type heat exchangers. Companies may fabricate other equipment in addition to heat exchangers. The current member companies of TEMA (in alphabetical order) are:
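The type designation described above can be read as a simple composition of one letter each for the front head, the shell, and the rear head. The following Python sketch illustrates the idea; the letter descriptions are a small, informal subset given only for illustration and should be checked against the TEMA Standard itself.

```python
# Illustrative sketch of how a TEMA three-letter type designation is read.
# The letter meanings below are an informal, partial subset assumed for
# illustration; the TEMA Standard is the authoritative reference.

FRONT_HEADS = {"A": "channel and removable cover", "B": "bonnet (integral cover)"}
SHELLS = {"E": "one-pass shell", "K": "kettle-type reboiler shell"}
REAR_HEADS = {"M": "fixed tubesheet (like 'B' stationary head)", "U": "U-tube bundle"}

def describe(designation: str) -> str:
    """Expand a designation such as 'BEM' or 'BKU' into plain words."""
    front, shell, rear = designation.upper()
    return (f"{designation}: "
            f"{FRONT_HEADS.get(front, 'front head ' + front)}, "
            f"{SHELLS.get(shell, 'shell ' + shell)}, "
            f"{REAR_HEADS.get(rear, 'rear head ' + rear)}")

print(describe("BEM"))  # fixed tubesheet exchanger with bonnet-type heads
print(describe("BKU"))  # kettle reboiler with a removable U-tube bundle
```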
https://en.wikipedia.org/wiki/Tubular_Exchanger_Manufacturers_Association
The tubular pinch effect is a phenomenon in fluid mechanics which has importance in membrane technology . This effect describes the tendency of suspended particles flowing through a pipe to reach an equilibrium distribution in which the region of highest particle concentration lies between the central axis and the wall of the pipe. Mark C. Porter first suspected that the pinch effect was responsible for the return of separated particles from the membrane into the core flow. The effect was first demonstrated in 1956 by G. Segré and A. Silberberg, who had been working with dilute suspensions of spherical particles in pipelines. While flowing through the pipeline, a particle appeared to migrate away from both the pipe axis and the pipe wall and to reach equilibrium at an eccentric radial position. If: then the pinch effect follows the relation: This effect is of importance in cross-flow filtration and especially in dialysis . It is especially significant for particles with a diameter of about 5 μm under laminar flow conditions; it slows down the formation of filter cake , which prolongs service life and keeps the filtration rate permanently high.
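Although the original quantitative relation is not reproduced above, the equilibrium position can be illustrated numerically. The sketch below assumes the commonly quoted Segré–Silberberg observation that neutrally buoyant spheres in laminar pipe flow settle at roughly 0.6 pipe radii from the axis; this value, the laminar Reynolds-number threshold of about 2300, and the function names are illustrative assumptions rather than statements from the article.

```python
def equilibrium_radius(pipe_radius_m: float) -> float:
    """Approximate equilibrium radial position of a suspended sphere.

    Assumes the commonly quoted Segre-Silberberg value of ~0.6 R; the exact
    position also depends on particle size and Reynolds number.
    """
    return 0.6 * pipe_radius_m

def is_laminar(velocity_m_s: float, pipe_diameter_m: float,
               kinematic_viscosity_m2_s: float) -> bool:
    """Rough laminar-flow check via the pipe Reynolds number (Re < ~2300)."""
    reynolds = velocity_m_s * pipe_diameter_m / kinematic_viscosity_m2_s
    return reynolds < 2300

# Example: water (nu ~ 1e-6 m^2/s) in a 10 mm pipe flowing at 0.05 m/s
print(is_laminar(0.05, 0.01, 1e-6))  # True -> laminar regime, pinch effect relevant
print(equilibrium_radius(0.005))     # ~0.003 m from the pipe axis
```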
https://en.wikipedia.org/wiki/Tubular_pinch_effect
In organometallic chemistry , a tuck-in complex usually refers to derivatives of Cp* ligands wherein a methyl group is deprotonated and the resulting methylene attaches to the metal. The C 5 –CH 2 –M angle is acute. The term "tucked in" was coined to describe derivatives of organotungsten complexes . [ 1 ] Although most "tucked-in" complexes are derived from Cp* ligands, other pi-bonded rings undergo similar reactions. The "tuck-in" process is related to ortho-metalation in the sense that it is an intramolecular cyclometalation. [ 2 ] Tuck-in complexes derived from Cp* ligands are derivatives of tetramethyl fulvene , sometimes abbreviated Me 4 Fv. A variety of complexes are known for Me 4 Fv and related ligands. In these complexes, the Fv can serve as a 4-electron or as a 6-electron ligand. The original example proceeded via sequential loss of two equivalents of H 2 from decamethyltungstocene dihydride, Cp* 2 WH 2 . The first dehydrogenation step affords a simple tuck-in complex: [ 1 ] The second dehydrogenation step affords a double tuck-in complex: In organouranium chemistry , both tuck-in and tuck-over complexes are recognized, for example in the dihydrido diuranium complex [Cp* 3 ( η 7 -C 5 Me 3 (CH 2 ))U 2 H 2 ]. In this complex the two methylene groups bind to different uranium centers. [ 3 ] The tuck-over mode is binding of the Cp* methylene to a metal center elsewhere in the molecule rather than the one coordinated to that Cp* ligand. Tuck-in complexes retain nucleophilicity at the methylene carbon. They can be activated by Lewis acids to generate active catalysts for use in Ziegler–Natta catalysis . The Lewis acid attaches to the CH 2 group, exposing a vacant site on the electrophilic Zr(IV) centre. [ 4 ]
https://en.wikipedia.org/wiki/Tuck-in_complex
In mathematics , Tucker's lemma is a combinatorial analog of the Borsuk–Ulam theorem , named after Albert W. Tucker . Let T be a triangulation of the closed n-dimensional ball B_n. Assume T is antipodally symmetric on the boundary sphere S_{n−1}. That means that the subset of simplices of T which are in S_{n−1} provides a triangulation of S_{n−1} where if σ is a simplex then so is −σ. Let L : V(T) → {+1, −1, +2, −2, ..., +n, −n} be a labeling of the vertices of T which is an odd function on S_{n−1}, i.e., L(−v) = −L(v) for every vertex v ∈ S_{n−1}. Then Tucker's lemma states that T contains a complementary edge: an edge (a 1-simplex) whose vertices are labelled by the same number but with opposite signs. [ 1 ] The first proofs were non-constructive, by way of contradiction. [ 2 ] Later, constructive proofs were found, which also supplied algorithms for finding the complementary edge. [ 3 ] [ 4 ] Basically, the algorithms are path-based: they start at a certain point or edge of the triangulation, then go from simplex to simplex according to prescribed rules, until it is not possible to proceed any more. It can be proved that the path must end in a simplex which contains a complementary edge. An easier proof of Tucker's lemma uses the more general Ky Fan lemma , which has a simple algorithmic proof. The following description illustrates the algorithm for n = 2. [ 5 ] Note that in this case B_n is a disc and there are 4 possible labels: −2, −1, 1, 2. Start outside the ball and consider the labels of the boundary vertices. Because the labeling is an odd function on the boundary, the boundary must have both positive and negative labels: Select a (+1, −2) edge and go through it. There are three cases: The last case can take you outside the ball. However, since the number of (+1, −2) edges on the boundary must be odd, there must be a new, unvisited (+1, −2) edge on the boundary. Go through it and continue. This walk must end inside the ball, either in a (+1, −2, +2) or in a (+1, −2, −1) simplex. Done. The run-time of the algorithm described above is polynomial in the triangulation size. This is considered bad, since the triangulations might be very large. It would be desirable to find an algorithm which is logarithmic in the triangulation size. However, the problem of finding a complementary edge is PPA-complete even for n = 2 dimensions. This implies that there is not too much hope for finding a fast algorithm. [ 6 ] There are several fixed-point theorems which come in three equivalent variants: an algebraic topology variant, a combinatorial variant and a set-covering variant. Each variant can be proved separately using totally different arguments, but each variant can also be reduced to the other variants in its row. Additionally, each result in the top row can be deduced from the one below it in the same column. [ 7 ]
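The conclusion of the lemma can be checked mechanically on a small labelled triangulation. The Python sketch below performs a brute-force search for a complementary edge on a hypothetical toy triangulation of the disc; it illustrates what a complementary edge is rather than the path-following algorithm described above, and the triangulation and labelling are made-up example data.

```python
from itertools import combinations

def find_complementary_edge(triangles, labels):
    """Return an edge (u, v) with labels[u] == -labels[v], or None.

    triangles: iterable of vertex-index triples forming the triangulation.
    labels:    dict mapping vertex index -> nonzero label in {+1, -1, ..., +n, -n}.
    This exhaustive check illustrates the conclusion of Tucker's lemma;
    constructive proofs instead walk a path of simplices to reach such an edge.
    """
    for tri in triangles:
        for u, v in combinations(tri, 2):
            if labels[u] == -labels[v]:
                return (u, v)
    return None

# Hypothetical toy triangulation of a disc (n = 2): vertex 0 is the centre,
# vertices 1-4 lie on the boundary with antipodal pairs (1, 3) and (2, 4).
triangles = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 1)]
labels = {0: 1, 1: 1, 2: 2, 3: -1, 4: -2}  # odd on the boundary: L(3) = -L(1), L(4) = -L(2)
print(find_complementary_edge(triangles, labels))  # (0, 3): labels +1 and -1
```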
https://en.wikipedia.org/wiki/Tucker's_lemma
Tucotuzumab celmoleukin is an anti-cancer drug. It is a fusion protein of a humanized monoclonal antibody (tucotuzumab) and an interleukin-2 (celmoleukin). [ 1 ] [ 2 ] [ 3 ] This drug was developed by EMD Pharmaceuticals .
https://en.wikipedia.org/wiki/Tucotuzumab_celmoleukin
The Tudor rose (sometimes called the Union rose ) is the traditional floral heraldic emblem of England and takes its name and origins from the House of Tudor , which united the House of Lancaster and the House of York . The Tudor rose consists of five white inner petals, representing the House of York, and five red outer petals to represent the House of Lancaster. In the Battle of Bosworth Field (1485), Henry VII , of the House of Lancaster, took the crown of England from Richard III , of the House of York. He thus brought to an end the retrospectively dubbed " Wars of the Roses ". Kings of the House of Lancaster had sometimes used a red or gold rose as a badge; and the House of York had used a white rose as a badge. Henry's father was Edmund Tudor , and his mother was Margaret Beaufort from the House of Lancaster; in January 1486 he married Elizabeth of York to bring the two factions together. (In battle, Richard III fought under the banner of the boar, [ 1 ] and Henry under the banner of the dragon of his native Wales.) The white rose versus red rose juxtaposition was mostly Henry's invention, created to exploit his appeal as a 'peacemaker king'. [ 2 ] The historian Thomas Penn writes: The "Lancastrian" red rose was an emblem that barely existed before Henry VII. Lancastrian kings used the rose sporadically, but when they did it was often gold rather than red; Henry VI , the king who presided over the country's descent into civil war, preferred his badge of the antelope . Contemporaries certainly did not refer to the traumatic civil conflict of the 15th century as the "Wars of the Roses". For the best part of a quarter-century, from 1461 to 1485, there was only one royal rose, and it was white: the badge of Edward IV. The roses were actually created after the war by Henry VII. [ 2 ] On his marriage, Henry VII adopted the Tudor rose badge conjoining the Red Rose of Lancaster and the White Rose of York . The Tudor rose is occasionally seen divided in quarters (heraldically as "quartered") and vertically (in heraldic terms per pale ) red and white. [ 3 ] More often, the Tudor rose is depicted as a double rose , [ 4 ] white on red and is always described, heraldically, as " proper " (that is, naturally-coloured, despite not actually existing in nature). Henry VII was reserved in his usage of the Tudor rose. He regularly used the Lancastrian rose by itself, as it was the house from which he descended. His successor Henry VIII , descended from the House of York as well through his mother, would use the rose more often. [ 5 ] When Arthur, Prince of Wales , died in 1502, his tomb in Worcester Cathedral used both roses; thereby asserting his royal descent from both the houses of Lancaster and York. [ 5 ] During his reign, Henry VIII had the legendary " Round Table " at Winchester Castle – then believed to be genuine – repainted. [ 6 ] The new paint scheme included a Tudor rose in the centre. Previous to this, his father Henry VII had built the Henry VII Chapel at Westminster Abbey (it was later used for the site of his tomb) and it was decorated principally with the Tudor rose and the Beaufort portcullis – as a form of propaganda to define his claim to the throne. The Tudor rose badge may appear slipped and crowned : shown as a cutting with a stem and leaves beneath a crown; this badge appears in Nicholas Hilliard 's "Pelican Portrait" of Elizabeth I and since an Order in Council (dated 5 November 1800), has served as the royal floral emblem of England .
The Tudor rose may also appear dimidiated (cut in half and combined with half another emblem) to form a compound badge. The Westminster Tournament Roll includes a badge of Henry and his first wife Catherine of Aragon with a slipped Tudor rose conjoined with Catherine's personal badge, the Spanish pomegranate ; [ 7 ] their daughter Mary I bore the same badge. [ 8 ] Following his accession to the English throne, James VI of Scotland and I of England used a badge consisting of a Tudor rose dimidiated with a Scottish thistle and surmounted by a royal crown. [ 9 ] The crowned and slipped Tudor rose is used as the plant badge of England, as Scotland uses the thistle , Wales uses the leek , and Ireland uses the shamrock (Northern Ireland sometimes using flax instead). As such, it is seen on the dress uniforms of the Yeomen Warders at the Tower of London , and of the Yeomen of the Guard . It features in the design of the 20-pence coin minted between 1982 and 2008, and in the royal coat of arms of the United Kingdom . It also features on the coat of arms of Canada . As part of the badge of the Supreme Court of the United Kingdom , the Tudor rose represents England alongside the floral badges of the other constituent parts of Great Britain and Northern Ireland. The heraldic badge of the Royal Navy's current flagship aircraft carrier HMS Queen Elizabeth uses a Tudor rose with colours divided vertically ( per pale ), inheriting the heraldry of the early twentieth-century super-dreadnought oil-fired fast battleship HMS Queen Elizabeth . The Tudor rose makes up part of the cap badge of the Intelligence Corps of the British Army . The Tudor rose is used as the emblem of The Nautical Training Corps , a uniformed youth organisation founded in Brighton in 1944 with 20 units in South East England . The corps badge has the Tudor Rose on the shank of an anchor with the motto "For God, Queen and Country". It is also used as part of the Corps' cap badge. The Tudor rose is also prominent in a number of towns and cities. The Royal Town of Sutton Coldfield uses the emblem frequently, due to the town being given Royal Town status by Henry VIII. The Tudor rose appears on the coat of arms of Oxford . It is also notably used (albeit in a monochromatic form) as the symbol of VisitEngland , England's tourist board . [ 10 ] A half-and-half design was used as the "Border Rose" in some parts of Todmorden , a conurbation that was historically bisected by the Yorkshire-Lancashire border. [ 11 ] The borough and county of Queens in New York City uses a Tudor rose on its flag and seal. [ 12 ] The flag and seal of Annapolis, Maryland , feature a Tudor rose and a thistle surmounted with a crown. The city of York, South Carolina is nicknamed "The White Rose City", and the nearby city of Lancaster, South Carolina is nicknamed "The Red Rose City". York, Pennsylvania and Lancaster, Pennsylvania are similarly nicknamed, using stylized white and red roses in their emblems, respectively. There are ten Tudor roses present on the crest of the England national football team .
https://en.wikipedia.org/wiki/Tudor_rose