In mathematics, two-center bipolar coordinates form a coordinate system based on two coordinates that give the distances from two fixed centers c_1 and c_2. [1] This system is very useful in some scientific applications (e.g. calculating the electric field of a dipole on a plane). [2][3] When the centers are at (+a, 0) and (−a, 0), the Cartesian coordinates (x, y) can be obtained from the two-center bipolar coordinates (r_1, r_2), and when x > 0 a corresponding transformation to polar coordinates exists as well; in both cases 2a is the distance between the poles (the coordinate system centers). Polar plotters use two-center bipolar coordinates to describe the drawing paths required to draw a target image.
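The Cartesian transformation can be rederived from the distance definitions above. The sketch below is a minimal reconstruction assuming r_1 is measured from the pole at (−a, 0) and r_2 from the pole at (+a, 0); the function name is illustrative, not taken from the article.

```python
import math

def bipolar_to_cartesian(r1, r2, a, upper=True):
    """Convert two-center bipolar (r1, r2) to Cartesian (x, y).

    Assumes r1 is the distance from the pole at (-a, 0) and r2 the distance
    from the pole at (+a, 0); swap the arguments for the opposite convention.
    """
    # Subtracting r2^2 = (x - a)^2 + y^2 from r1^2 = (x + a)^2 + y^2 gives 4*a*x.
    x = (r1**2 - r2**2) / (4 * a)
    y_sq = r1**2 - (x + a)**2
    if y_sq < -1e-12:
        raise ValueError("r1, r2 are inconsistent with a pole separation of 2a")
    y = math.sqrt(max(y_sq, 0.0))
    return x, (y if upper else -y)

# A point equidistant from both poles lies on the y-axis:
print(bipolar_to_cartesian(math.sqrt(2), math.sqrt(2), 1.0))  # (0.0, 1.0)
```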
https://en.wikipedia.org/wiki/Two-center_bipolar_coordinates
In molecular biology , a two-component regulatory system serves as a basic stimulus-response coupling mechanism to allow organisms to sense and respond to changes in many different environmental conditions. [ 1 ] Two-component systems typically consist of a membrane -bound histidine kinase that senses a specific environmental stimulus , and a corresponding response regulator that mediates the cellular response, mostly through differential expression of target genes . [ 2 ] Although two-component signaling systems are found in all domains of life , they are most common by far in bacteria , particularly in Gram-negative and cyanobacteria ; both histidine kinases and response regulators are among the largest gene families in bacteria. [ 3 ] They are much less common in archaea and eukaryotes ; although they do appear in yeasts , filamentous fungi , and slime molds , and are common in plants , [ 1 ] two-component systems have been described as "conspicuously absent" from animals . [ 3 ] Two-component systems accomplish signal transduction through the phosphorylation of a response regulator (RR) by a histidine kinase (HK). Histidine kinases are typically homodimeric transmembrane proteins containing a histidine phosphotransfer domain and an ATP binding domain, though there are reported examples of histidine kinases in the atypical HWE and HisKA2 families that are not homodimers. [ 4 ] Response regulators may consist only of a receiver domain, but usually are multi-domain proteins with a receiver domain and at least one effector or output domain, often involved in DNA binding . [ 3 ] Upon detecting a particular change in the extracellular environment, the HK performs an autophosphorylation reaction, transferring a phosphoryl group from adenosine triphosphate (ATP) to a specific histidine residue. The cognate response regulator (RR) then catalyzes the transfer of the phosphoryl group to an aspartate residue on the response regulator's receiver domain . [ 5 ] [ 6 ] This typically triggers a conformational change that activates the RR's effector domain, which in turn produces the cellular response to the signal, usually by stimulating (or repressing) expression of target genes . [ 3 ] Many HKs are bifunctional and possess phosphatase activity against their cognate response regulators, so that their signaling output reflects a balance between their kinase and phosphatase activities. Many response regulators also auto-dephosphorylate, [ 7 ] and the relatively labile phosphoaspartate can also be hydrolyzed non-enzymatically. [ 1 ] The overall level of phosphorylation of the response regulator ultimately controls its activity. [ 1 ] [ 8 ] Some histidine kinases are hybrids that contain an internal receiver domain. In these cases, a hybrid HK autophosphorylates and then transfers the phosphoryl group to its own internal receiver domain, rather than to a separate RR protein. The phosphoryl group is then shuttled to histidine phosphotransferase (HPT) and subsequently to a terminal RR, which can evoke the desired response. [ 9 ] [ 10 ] This system is called a phosphorelay . Almost 25% of bacterial HKs are of the hybrid type, as are the large majority of eukaryotic HKs. [ 3 ] Two-component signal transduction systems enable bacteria to sense, respond, and adapt to a wide range of environments, stressors, and growth conditions. 
[ 11 ] These pathways have been adapted to respond to a wide variety of stimuli, including nutrients , cellular redox state, changes in osmolarity , quorum signals , antibiotics , temperature , chemoattractants , pH and more. [ 12 ] [ 13 ] The average number of two-component systems in a bacterial genome has been estimated as around 30, [ 14 ] or about 1–2% of a prokaryote's genome. [ 15 ] A few bacteria have none at all – typically endosymbionts and pathogens – and others contain over 200. [ 16 ] [ 17 ] All such systems must be closely regulated to prevent cross-talk, which is rare in vivo . [ 18 ] In Escherichia coli , the osmoregulatory EnvZ/OmpR two-component system controls the differential expression of the outer membrane porin proteins OmpF and OmpC. [ 19 ] The KdpD sensor kinase proteins regulate the kdpFABC operon responsible for potassium transport in bacteria including E. coli and Clostridium acetobutylicum . [ 20 ] The N-terminal domain of this protein forms part of the cytoplasmic region of the protein, which may be the sensor domain responsible for sensing turgor pressure. [ 21 ] Signal transducing histidine kinases are the key elements in two-component signal transduction systems. [ 22 ] [ 23 ] Examples of histidine kinases are EnvZ, which plays a central role in osmoregulation , [ 24 ] and CheA, which plays a central role in the chemotaxis system. [ 25 ] Histidine kinases usually have an N-terminal ligand -binding domain and a C-terminal kinase domain, but other domains may also be present. The kinase domain is responsible for the autophosphorylation of the histidine with ATP, the phosphotransfer from the kinase to an aspartate of the response regulator, and (with bifunctional enzymes) the phosphotransfer from aspartyl phosphate to water . [ 26 ] The kinase core has a unique fold, distinct from that of the Ser/Thr/Tyr kinase superfamily . HKs can be roughly divided into two classes: orthodox and hybrid kinases. [ 27 ] [ 28 ] Most orthodox HKs, typified by the E. coli EnvZ protein, function as periplasmic membrane receptors and have a signal peptide and transmembrane segment(s) that separate the protein into a periplasmic N-terminal sensing domain and a highly conserved cytoplasmic C-terminal kinase core. Members of this family, however, have an integral membrane sensor domain. Not all orthodox kinases are membrane bound, e.g., the nitrogen regulatory kinase NtrB (GlnL) is a soluble cytoplasmic HK. [ 6 ] Hybrid kinases contain multiple phosphodonor and phosphoacceptor sites and use multi-step phospho-relay schemes instead of promoting a single phosphoryl transfer. In addition to the sensor domain and kinase core, they contain a CheY-like receiver domain and a His-containing phosphotransfer (HPt) domain. The number of two-component systems present in a bacterial genome is highly correlated with genome size as well as ecological niche ; bacteria that occupy niches with frequent environmental fluctuations possess more histidine kinases and response regulators. [ 3 ] [ 29 ] New two-component systems may arise by gene duplication or by lateral gene transfer , and the relative rates of each process vary dramatically across bacterial species. [ 30 ] In most cases, response regulator genes are located in the same operon as their cognate histidine kinase; [ 3 ] lateral gene transfers are more likely to preserve operon structure than gene duplications. [ 30 ] Two-component systems are rare in eukaryotes . 
They appear in yeasts , filamentous fungi , and slime molds , and are relatively common in plants , but have been described as "conspicuously absent" from animals . [ 3 ] Two-component systems in eukaryotes likely originate from lateral gene transfer , often from endosymbiotic organelles, and are typically of the hybrid kinase phosphorelay type. [ 3 ] For example, in the yeast Candida albicans , genes found in the nuclear genome likely originated from endosymbiosis and remain targeted to the mitochondria . [ 31 ] Two-component systems are well-integrated into developmental signaling pathways in plants, but the genes probably originated from lateral gene transfer from chloroplasts . [ 3 ] An example is the chloroplast sensor kinase (CSK) gene in Arabidopsis thaliana , derived from chloroplasts but now integrated into the nuclear genome. CSK function provides a redox -based regulatory system that couples photosynthesis to chloroplast gene expression ; this observation has been described as a key prediction of the CoRR hypothesis , which aims to explain the retention of genes encoded by endosymbiotic organelles. [ 32 ] [ 33 ] It is unclear why canonical two-component systems are rare in eukaryotes, with many similar functions having been taken over by signaling systems based on serine , threonine , or tyrosine kinases; it has been speculated that the chemical instability of phosphoaspartate is responsible, and that increased stability is needed to transduce signals in the more complex eukaryotic cell. [ 3 ] Notably, cross-talk between signaling mechanisms is very common in eukaryotic signaling systems but rare in bacterial two-component systems. [ 34 ] Because of their sequence similarity and operon structure, many two-component systems – particularly histidine kinases – are relatively easy to identify through bioinformatics analysis. (By contrast, eukaryotic kinases are typically easily identified, but they are not easily paired with their substrates .) [ 3 ] A database of prokaryotic two-component systems called P2CS has been compiled to document and classify known examples, and in some cases to make predictions about the cognates of "orphan" histidine kinase or response regulator proteins that are genetically unlinked to a partner. [ 35 ] [ 36 ]
https://en.wikipedia.org/wiki/Two-component_regulatory_system
A two-cube calendar is a desk calendar consisting of two cubes with faces marked by the digits 0 through 9. Each face of each cube is marked with a single digit, and it is possible to arrange the cubes so that any chosen day of the month (from 01, 02, ... through 31) is visible on the two front faces. A puzzle about the two-cube calendar was described in Gardner's column in Scientific American. [1][2] In the puzzle discussed in Mathematical Circus (1992), two visible faces of one cube have digits 1 and 2 on them, and three visible faces of another cube have digits 3, 4, 5 on them. The cubes are arranged so that their front faces indicate the 25th day of the current month. The problem is to determine the digits hidden on the seven invisible faces. [1] Gardner wrote he saw a two-cube desk calendar in a store window in New York. [1] According to a letter received by Gardner from John S. Singleton (England), Singleton patented the calendar in 1957, [3] but the patent lapsed in 1965. [4][5] A number of variations are manufactured and sold as souvenirs, differing in appearance and in the presence of additional bars or cubes for setting the current month and the day of the week. Digits 1 and 2 need to be placed on both cubes to allow the numbers 11 and 22. That leaves four faces on each cube (eight in total) for another eight digits. However, digit 0 needs to be combined with all other digits, so it must also be placed on both cubes. That means the remaining seven digits (3 through 9) must fit on the remaining six faces. The solution is possible because the digit 6 doubles as an inverted 9. Therefore, the solution of the problem is: C_1 := {0, 1, 2, 3, 4, 5}, C_2 := {0, 1, 2, 6/9, 7, 8}. If the problem is based on another given set of visible digits, the last three digits of each cube could be shuffled between the cubes. A variation with three cubes providing English abbreviations for the twelve months is discussed in a Scientific American column in December 1977. [6] One solution of this variation allows displaying the first three letters of any month and relies on the fact that the lower-case letters u and n, and also p and d, are inverses of each other. [7] C_1 := {e, g, j, o, r, y}, C_2 := {a, c, f, n/u, s, v}, C_3 := {b, d/p, l, m, u/n, t}. Polish 3-letter month abbreviations (informal but commonly used for date rubber stamps – sty, lut, mar, kwi, maj, cze, lip, sie, wrz, paź, lis, gru) are also feasible, both in lower and upper case: C_1 := {a, e, g, l, w, y}, C_2 := {c, i, j, r, t, ź}, C_3 := {k, m, p, s, u, z}. Using four cubes for a two-digit day number from 01 to 31 and a two-digit month number from 01 to 12, and assuming that the digits 6 and 9 are indistinguishable, it is possible to represent all days of the year.
One possible solution is: C_1 := {0, 1, X, 3, 4, 5}, C_2 := {0, 1, 2, 3, 4, 5}, C_3 := {0, 1, 2, 6/9, 7, 8}, C_4 := {0, 1, 2, 6/9, 7, 8}. The X could be any digit. The last three digits of each cube could be shuffled between the cubes such that each digit from 3 to 9 is placed on at least two different cubes. Under the assumption that 6 and 9 are distinguishable characters, it is impossible to represent all days of the year, because the necessary number of faces would be 25 while four cubes have only 24 faces. However, it is possible to represent almost all days of the year. There is a family of best solutions that excludes only one day, namely Nov 11, e.g.: C_1 := {0, 2, 3, 4, 5, 6}, C_2 := {0, 1, 3, 4, 5, 6}, C_3 := {0, 1, 2, 7, 8, 9}, C_4 := {0, 1, 2, 7, 8, 9}. The last four digits of each cube could be shuffled between the cubes such that no cube has two identical faces (especially the 2's). [8] Maintaining the condition that the four visible faces of the four cubes should combine to display any possible weekday-day-month-year combination clearly, the 6 faces of each cube are divided into 4 quarters each, giving 24 spaces (i.e. 6×4) per cube (a total of 96 spaces) for writing the weekday (Sun, Mon, Tues, Wed, Thurs, Fri and Sat), the day (1 to 31: days of the month), the month (Jan, Feb, Mar, Apr, May, Jun, Jul, Aug, Sep, Oct, Nov and Dec), and the year (N to N+23; N being replaced by any year). This setup works for 24 years. For the weekday-day-month-year display format, the weekday and the month are written adjacent to the right edge of the cube, and the day and the year are written adjacent to the left edge of the cube in a cyclical manner, so that the four visible faces of the cubes combine to display all possible day-date combinations clearly and distinctly. The weekday and day can be written in any order as long as all 7 days (Sunday through Saturday) are written on two of the four cubes and the 31 dates (1 through 31) are split into two groups of 15 and 16 numbers on those two cubes. The month and year can also be written in any order as long as all 12 months (January through December) are written on the other two cubes and the 24 years (N through N+23) are split into two groups of 12 on those two cubes. The weekday-day and month-year parts work independently of each other, meaning that one part can be removed without affecting the other (as shown on the picture above).
One of the possible solutions is: C_1 := {1/2/3/4, 5/6/7/8, 9/10/11/12, 13/14/15/16, sun/mon/tue/wed, thu/fri/sat/blank}, C_2 := {17/18/19/20, 21/22/23/24, 25/26/27/28, 29/30/31/blank, sun/mon/tue/wed, thu/fri/sat/blank}, C_3 := {N/N+1/N+2/N+3, N+4/N+5/N+6/N+7, N+8/N+9/N+10/N+11, jan/feb/mar/apr, may/jun/jul/aug, sep/oct/nov/dec}, C_4 := {N+12/N+13/N+14/N+15, N+16/N+17/N+18/N+19, N+20/N+21/N+22/N+23, jan/feb/mar/apr, may/jun/jul/aug, sep/oct/nov/dec}. The original video explaining how to build the weekday-day-month-year cubes can be found on this YouTube channel. [9]
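As a quick check of the two-cube digit placement given earlier (C_1 = {0, 1, 2, 3, 4, 5} and C_2 = {0, 1, 2, 6, 7, 8}, with 6 doubling as an inverted 9), the short sketch below brute-forces all day numbers from 01 to 31; the function name is illustrative, not taken from the article.

```python
from itertools import product

def can_show_all_days(cube1, cube2):
    """Check that every day 01..31 can be shown, treating 6 and 9 as one face."""
    def faces(cube):
        s = set(cube)
        if 6 in s or 9 in s:
            s |= {6, 9}  # a face with 6 can be turned upside down to read 9
        return s
    f1, f2 = faces(cube1), faces(cube2)
    for day in range(1, 32):
        tens, units = divmod(day, 10)
        if not ((tens in f1 and units in f2) or (tens in f2 and units in f1)):
            return False
    return True

# The solution quoted in the article:
print(can_show_all_days({0, 1, 2, 3, 4, 5}, {0, 1, 2, 6, 7, 8}))  # True
```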
https://en.wikipedia.org/wiki/Two-cube_calendar
Two-dimensional chromatography is a type of chromatographic technique in which the injected sample is separated by passing through two different separation stages. Two different chromatographic columns are connected in sequence, and the effluent from the first system is transferred onto the second column. [ 1 ] Typically the second column has a different separation mechanism, so that bands that are poorly resolved from the first column may be completely separated in the second column. (For instance, a C18 reversed-phase chromatography column may be followed by a phenyl column.) Alternately, the two columns might run at different temperatures. During the second stage of separation the rate at which the separation occurs must be faster than the first stage, since there is still only a single detector. The plane surface is amenable to sequential development in two directions using two different solvents. Modern two-dimensional chromatographic techniques are based on the results of the early developments of paper chromatography and thin-layer chromatography (TLC) which involved liquid mobile phases and solid stationary phases. These techniques would later generate modern gas chromatography (GC) and liquid chromatography (LC) analysis. Different combinations of one-dimensional GC and LC produced the analytical chromatographic technique that is known as two-dimensional chromatography. The earliest form of 2D-chromatography came in the form of a multi-step TLC separation in which a thin sheet of cellulose is used first with one solvent in one direction, then, after the paper has been dried, another solvent is run in a direction at right angles to the first. This methodology first appeared in the literature with a 1944 publication by A. J. P. Martin and coworkers detailing an efficient method for separating amino acids – "...but the two-dimensional chromatogram is especially convenient, in that it shows at a glance information that can be gained otherwise only as the result of numerous experiments" (Biochem J., 1944, 38, 224). Two-dimensional separations can be carried out in gas chromatography or liquid chromatography . Various different coupling strategies have been developed to "resample" from the first column into the second. Some important hardware for two-dimensional separations are Deans' switch and Modulator, which selectively transfer the first dimension eluent to second dimension column. [ 2 ] The chief advantage of two-dimensional techniques is that they offer a large increase in peak capacity, without requiring extremely efficient separations in either column. (For instance, if the first column offers a peak capacity (k 1 )of 100 for a 10-minute separation, and the second column offers a peak capacity of 5 (k 2 ) in a 5-second separation, then the combined peak capacity may approach k 1 × k 2 =500, with the total separation time still ~ 10 minutes). 2D separations have been applied to the analysis of gasoline and other petroleum mixtures, and more recently to protein mixtures. [ 3 ] [ 4 ] Tandem mass spectrometry (Tandem MS or MS/MS) uses two mass analyzers in sequence to separate more complex mixtures of analytes. The advantage of tandem MS is that it can be much faster than other two-dimensional methods, with times ranging from milliseconds to seconds. [ 5 ] Because there is no dilution with solvents in MS, there is less probability of interference, so tandem MS can be more sensitive and have a higher signal-to-noise ratio compared to other two-dimensional methods. 
The main disadvantage associated with tandem MS is the high cost of the instrumentation needed. Prices can range from $500,000 to over $1 million. [ 6 ] Many form of tandem MS involve a mass selection step and a fragmentation step. The first mass analyzer can be programmed to only pass molecules of a specific mass-to-charge ratio. Then the second mass analyzer can fragment the molecule to determine its identity. This can be especially useful for separating molecules of the same mass (i.e. proteins of the same mass or molecular isomers). Different types of mass analyzers can be coupled to achieve varying effects. One example would be a TOF - Quadrupole system. Ions can be sequentially fragmented and/or analyzed in a quadrupole as they leave the TOF in order of increasing m/z. Another prevalent tandem mass spectrometer is the quadrupole-quadrupole-quadrupole (Q-Q-Q) analyzer. The first quadrupole separates by mass, collisions take place in the second quadrupole, and the fragments are separated by mass in the third quadrupole. Gas chromatography-mass spectrometry (GC-MS) is a two-dimensional chromatography technique that combines the separation technique of gas chromatography with the identification technique of mass spectrometry . GC-MS is the single most important analytical tool for the analysis of volatile and semi- volatile organic compounds in complex mixtures. [ 7 ] It works by first injecting the sample into the GC inlet where it is vaporized and pushed through a column by a carrier gas, typically helium. The analytes in the sample are separated based upon their interaction with the coating of the column, or the stationary phase, and the carrier gas, or the mobile phase. [ 8 ] The compounds eluted from the column are converted into ions via electron impact (EI) or chemical ionization (CI) before traveling through the mass analyzer. [ 9 ] The mass analyzer serves to separate the ions on a mass-to-charge basis. Popular choices perform the same function but differ in the way that they accomplish the separation. [ 10 ] The analyzers typically used with GC-MS are the time-of-flight mass analyzer and the quadrupole mass analyzer . [ 8 ] After leaving the mass analyzer, the analytes reach the detector and produce a signal that is read by a computer and used to create a gas chromatogram and mass spectrum. Sometimes GC-MS utilizes two gas chromatographers in particularly complex samples to obtain considerable separation power and be able to unambiguously assign the specific species to the appropriate peaks in a technique known as GCxGC-(MS). [ 11 ] Ultimately, GC-MS is a technique utilized in many analytical laboratories and is a very effective and adaptable analytical tool. Liquid chromatography-mass spectrometry (LC/MS) couples high resolution chromatographic separation with MS detection.  As the system adopts the high separation of HPLC, analytes which are in the liquid mobile phase are often ionized by various soft ionization methods including atmospheric pressure chemical ionization (APCI), electrospray ionization (ESI) or matrix-assisted laser desorption/ionization (MALDI), which attains the gas phase ionization required for the coupling with MS. [ citation needed ] These ionization methods allow the analysis of a wider range of biological molecules, including those with larger masses, thermally unstable or nonvolatile compounds where GC-MS is typically incapable of analyzing. LC-MS provides high selectivity as unresolved peaks can be isolated by selecting a specific mass. 
Furthermore, better identification is also attained by mass spectra and the user does not have to rely solely on the retention time of analytes. As a result, molecular mass and structural information as well as quantitative data can all be obtained via LC-MS. [ 9 ] LC-MS can therefore be applied to various fields, such as impurity identification and profiling in drug development and pharmaceutical manufacturing, since LC provides efficient separation of impurities and MS provides structural characterization for impurity profiling. [ 12 ] Common solvents used in normal or reversed phase LC such as water, acetonitrile, and methanol are all compatible with ESI, yet a LC grade solvent may not be suitable for MS. Furthermore, buffers containing inorganic ions should be avoided as they may contaminate the ion source. [ 13 ] Nonetheless, the problem can be resolved by 2D LC-MS, as well as other various issues including analyte coelution and UV detection responses. [ 14 ] Two-dimensional liquid chromatography (2D-LC) combines two separate analyses of liquid chromatography into one data analysis. Modern 2D liquid chromatography has its origins in the late 1970s to early 1980s. During this time, the hypothesized principles of 2D-LC were being proven via experiments conducted along with supplementary conceptual and theoretical work. It was shown that 2D-LC could offer quite a bit more resolving power compared to the conventional techniques of one-dimensional liquid chromatography. In the 1990s, the technique of 2D-LC played an important role in the separation of extremely complex substances and materials found in the proteomics and polymer fields of study. Unfortunately, the technique had been shown to have a significant disadvantage when it came to analysis time. Early work with 2D-LC was limited to small portion of liquid phase separations due to the long analysis time of the machinery. Modern 2D-LC techniques tackled that disadvantage head on, and have significantly reduced what was once a damaging feature. Modern 2D-LC has an instrumental capacity for high resolution separations to be completed in an hour or less. Due to the growing need for instrumentation to perform analysis on substances of growing complexity with better detection limits, the development of 2D-LC pushes forward. Instrumental parts have become a mainstream industry focus and are much easier to attain then before. Prior to this, 2D-LC was performed using components from 1D-LC instruments, and would lead to results of varying degrees in both accuracy and precision. The reduced stress on instrumental engineering has allowed for pioneering work in the field and technique of 2D-LC. The purpose of employing this technique is to separate mixtures that one-dimensional liquid chromatography otherwise cannot separate effectively. Two-dimensional liquid chromatography is better suited to analyzing complex mixtures samples such as urine, environmental substances and forensic evidence such as blood. Difficulties in separating mixtures can be attributed to the complexity of the mixture in the sense that separation cannot occur due to the number of different effluents in the compound. Another problem associated with one-dimensional liquid chromatography involves the difficulty associated to resolving closely related compounds. Closely related compounds have similar chemical properties that may prove difficult to separate based on polarity, charge, etc. 
[ 15 ] Two-dimensional liquid chromatography provides separation based on more than one chemical or physical property. Using an example from Nagy and Vekey, a mixture of peptides can be separated based on their basicity, but similar peptides may not elute well. Using a subsequent LC technique, the similar basicity between the peptides can be further separated by employing differences in apolar character. [ 16 ] As a result, to be able to separate mixtures more efficiently, a subsequent LC analysis must employ very different separation selectivity relative to the first column. Another requirement to effectively use 2D liquid chromatography, according to Bushey and Jorgenson, is to employ highly orthogonal techniques which means that the two separation techniques must be as different as possible. [ 17 ] There are two major classifications of 2D liquid chromatography. These include: Comprehensive 2D liquid chromatography (LCxLC) and Heart-cutting 2D liquid chromatography (LC-LC). [ 18 ] In comprehensive 2D-LC, all the peaks from a column elution are fully sampled, but it has been deemed unnecessary to transfer the entire sample from the first to the second column. A portion of the sample is sent to waste while the rest is sent to the sampling valve. In heart-cutting 2D-LC specific peaks are targeted with only a small portion of the peak being injected onto a second column. Heart-cutting 2D-LC has proven to be quite useful for sample analysis of substances that are not very complex provided they have similar retention behavior. Compared to comprehensive 2D-LC, heart-cutting 2D-LC provides an effective technique with much less system setup and a much lower operating cost. Multiple heart-cutting (mLC-LC) may be utilized to sample multiple peaks from first dimensional analysis without risking temporary overlap of second dimensional analysis. [ 18 ] Multiple heart-cutting (mLC-LC) utilizes a setup of multiple sampling loops. For 2D-LC, peak capacity is a very important issue. This can be generated using gradient elution separation with much greater efficiency than an isocratic separation given a reasonable amount of time. While isocratic elution is much easier on a fast time scale, it is preferable to perform a gradient elution separation in the second dimension. The mobile phase strength is varied from a weak eluent composition to a stronger one. Based on linear solvent strength theory (LSST) of gradient elution for reversed phase chromatography, the relationship between retention time, instrumental variables and solute parameters is shown below. [ 18 ] While a lot of pioneering work has been completed in the years since 2D-LC became a major analytical chromatographic technique, there are still many modern problems to be considered. Large amounts of experimental variables have yet to be decided on, and the technique is constantly in a state of development. Comprehensive two-dimensional gas chromatography is an analytical technique that separates and analyzes complex mixtures. It has been utilized in fields such as: flavor, fragrance, environmental studies, pharmaceuticals, petroleum products and forensic science. GCxGC provides a high range of sensitivity and produces a greater separation power due to the increased peak capacity.
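The multiplicative peak-capacity estimate quoted earlier (first-dimension capacity times second-dimension capacity as an upper bound) is simple enough to state as code; the sketch below merely reproduces the article's own numbers, and the function name is illustrative.

```python
def combined_peak_capacity(n1, n2):
    """Upper bound on 2D peak capacity: the product of the two 1D capacities.

    Real systems fall short of this limit when the two separations are not
    fully orthogonal or the first dimension is undersampled.
    """
    return n1 * n2

# Article example: first-dimension capacity 100, second-dimension capacity 5.
print(combined_peak_capacity(100, 5))  # 500
```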
https://en.wikipedia.org/wiki/Two-dimensional_chromatography
Two dimensional correlation analysis is a mathematical technique that is used to study changes in measured signals. As mostly spectroscopic signals are discussed, sometime also two dimensional correlation spectroscopy is used and refers to the same technique. In 2D correlation analysis, a sample is subjected to an external perturbation while all other parameters of the system are kept at the same value. This perturbation can be a systematic and controlled change in temperature, pressure, pH, chemical composition of the system, or even time after a catalyst was added to a chemical mixture. As a result of the controlled change (the perturbation ), the system will undergo variations which are measured by a chemical or physical detection method. The measured signals or spectra will show systematic variations that are processed with 2D correlation analysis for interpretation. When one considers spectra that consist of few bands, it is quite obvious to determine which bands are subject to a changing intensity. Such a changing intensity can be caused for example by chemical reactions. However, the interpretation of the measured signal becomes more tricky when spectra are complex and bands are heavily overlapping. Two dimensional correlation analysis allows one to determine at which positions in such a measured signal there is a systematic change in a peak, either continuous rising or drop in intensity. 2D correlation analysis results in two complementary signals, which referred to as the 2D synchronous and 2D asynchronous spectrum. These signals allow amongst others [ 1 ] [ 2 ] [ 3 ] 2D correlation analysis originated from 2D NMR spectroscopy . Isao Noda developed perturbation based 2D spectroscopy in the 1980s. [ 4 ] This technique required sinusoidal perturbations to the chemical system under investigation. This specific type of the applied perturbation severely limited its possible applications. Following research done by several groups of scientists, perturbation based 2D spectroscopy could be developed to a more extended and generalized broader base. Since the development of generalized 2D correlation analysis in 1993 based on Fourier transformation of the data, 2D correlation analysis gained widespread use. Alternative techniques that were simpler to calculate, for example the disrelation spectrum , were also developed simultaneously. Because of its computational efficiency and simplicity, the Hilbert transform is nowadays used for the calculation of the 2D spectra. To date, 2D correlation analysis is used for the interpretation of many types of spectroscopic data (including XRF , UV/VIS spectroscopy , fluorescence , infrared , and Raman spectra), although its application is not limited to spectroscopy. 2D correlation analysis is frequently used for its main advantage: increasing the spectral resolution by spreading overlapping peaks over two dimensions and as a result simplification of the interpretation of one-dimensional spectra that are otherwise visually indistinguishable from each other. [ 4 ] Further advantages are its ease of application and the possibility to make the distinction between band shifts and band overlap. [ 3 ] Each type of spectral event, band shifting, overlapping bands of which the intensity changes in the opposite direction, band broadening, baseline change, etc. has a particular 2D pattern. See also the figure with the original dataset on the right and the corresponding 2D spectrum in the figure below. 
2D synchronous and asynchronous spectra are basically 3D-datasets and are generally represented by contour plots. X- and y-axes are identical to the x-axis of the original dataset, whereas the different contours represent the magnitude of correlation between the spectral intensities. The 2D synchronous spectrum is symmetric relative to the main diagonal. The main diagonal thus contains positive peaks. As the peaks at ( x , y ) in the 2D synchronous spectrum are a measure for the correlation between the intensity changes at x and y in the original data, these main diagonal peaks are also called autopeaks and the main diagonal signal is referred to as autocorrelation signal . The off-diagonal cross-peaks can be either positive or negative. On the other hand, the asynchronous spectrum is asymmetric and never has peaks on the main diagonal. Generally contour plots of 2D spectra are oriented with rising axes from left to right and top to down. Other orientations are possible, but interpretation has to be adapted accordingly. [ 5 ] Suppose the original dataset D contains the n spectra in rows. The signals of the original dataset are generally preprocessed. The original spectra are compared to a reference spectrum. By subtracting a reference spectrum, often the average spectrum of the dataset, so called dynamic spectra are calculated which form the corresponding dynamic dataset E . The presence and interpretation may be dependent on the choice of reference spectrum. The equations below are valid for equally spaced measurements of the perturbation. A 2D synchronous spectrum expresses the similarity between spectral of the data in the original dataset. In generalized 2D correlation spectroscopy this is mathematically expressed as covariance (or correlation ). [ 6 ] where: Orthogonal spectra to the dynamic dataset E are obtained with the Hilbert-transform: where: The values of N , N j, k are determined as follows: where: Interpretation of two-dimensional correlation spectra can be considered to consist of several stages. [ 4 ] As real measurement signals contain a certain level of noise, the derived 2D spectra are influenced and degraded with substantial higher amounts of noise. Hence, interpretation begins with studying the autocorrelation spectrum on the main diagonal of the 2D synchronous spectrum. In the 2D synchronous main diagonal signal on the right 4 peaks are visible at 10, 20, 30, and 40 (see also the 4 corresponding positive autopeaks in the 2D synchronous spectrum on the right). This indicates that in the original dataset 4 peaks of changing intensity are present. The intensity of peaks on the autocorrelation spectrum are directly proportional to the relative importance of the intensity change in the original spectra. Hence, if an intense band is present at position x , it is very likely that a true intensity change is occurring and the peak is not due to noise. Additional techniques help to filter the peaks that can be seen in the 2D synchronous and asynchronous spectra. [ 7 ] It is not always possible to unequivocally determine the direction of intensity change, such as is for example the case for highly overlapping signals next to each other and of which the intensity changes in the opposite direction. 
This is where the off-diagonal peaks in the synchronous 2D spectrum are used. As can be seen in the 2D synchronous spectrum on the right, the intensity changes of the peaks at 10 and 30 are related, and the intensities of the peaks at 10 and 30 change in opposite directions (negative cross-peak at (10, 30)). The same is true for the peaks at 20 and 40. Most importantly, with the sequential order rules, also referred to as Noda's rules, the sequence of the intensity changes can be determined. [4] By carefully interpreting the signs of the 2D synchronous and asynchronous cross peaks with these rules, the sequence of spectral events during the experiment can be determined. Following the rules, it can be derived that the changes at 10 and 30 occur simultaneously and that the changes in intensity at 20 and 40 occur simultaneously as well. Because of the positive asynchronous cross-peak at (10, 20), the changes at 10 and 30 (predominantly) occur before the intensity changes at 20 and 40. In some cases Noda's rules cannot be so readily applied, predominantly when spectral features are not caused by simple intensity variations. This may occur when band shifts occur, or when a very erratic intensity variation is present in a given frequency range.
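The synchronous and asynchronous spectra described above can be computed directly from a stack of spectra using the generalized 2D correlation formulas (covariance for the synchronous part, a Hilbert–Noda transformation for the asynchronous part). The sketch below is a minimal NumPy implementation under those standard assumptions; the function name is illustrative.

```python
import numpy as np

def generalized_2d_correlation(spectra):
    """Noda-style generalized 2D correlation analysis.

    spectra: array of shape (n_perturbation_steps, n_channels), measured at
             equally spaced values of the perturbation.
    Returns (synchronous, asynchronous), both of shape (n_channels, n_channels).
    """
    D = np.asarray(spectra, dtype=float)
    n = D.shape[0]
    # Dynamic spectra: deviation from the mean (reference) spectrum.
    E = D - D.mean(axis=0)
    # Synchronous spectrum: covariance of the intensity variations.
    synchronous = E.T @ E / (n - 1)
    # Hilbert-Noda transformation matrix: zero diagonal, 1/(pi*(k-j)) off-diagonal.
    j, k = np.indices((n, n))
    N = np.zeros((n, n))
    off_diag = j != k
    N[off_diag] = 1.0 / (np.pi * (k - j)[off_diag])
    # Asynchronous spectrum: correlation with the orthogonal (Hilbert-transformed) data.
    asynchronous = E.T @ (N @ E) / (n - 1)
    return synchronous, asynchronous
```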
https://en.wikipedia.org/wiki/Two-dimensional_correlation_analysis
The two-dimensional critical Ising model is the critical limit of the Ising model in two dimensions. It is a two-dimensional conformal field theory whose symmetry algebra is the Virasoro algebra with the central charge c = 1/2. Correlation functions of the spin and energy operators are described by the (4, 3) minimal model. While the minimal model has been exactly solved (see Ising critical exponents), the solution does not cover other observables such as connectivities of clusters. The Kac table of the (4, 3) minimal model shows that the space of states is generated by three primary states, which correspond to three primary fields or operators. [1] The space of states decomposes into irreducible representations of the product of the left- and right-moving Virasoro algebras, where R_Δ denotes the irreducible highest-weight representation of the Virasoro algebra with conformal dimension Δ. In particular, the Ising model is diagonal and unitary. The characters of the three representations of the Virasoro algebra that appear in the space of states are written in terms of η(q), the Dedekind eta function, and θ_i(0|q), theta functions of the nome q = e^{2πiτ}, for example θ_3(0|q) = Σ_{n∈ℤ} q^{n²/2}. [1] The modular S-matrix, i.e. the matrix 𝒮 such that χ_i(−1/τ) = Σ_j 𝒮_ij χ_j(τ), is written with the fields ordered as 1, ϵ, σ. [1] The partition function is modular invariant. The fusion rules of the model are invariant under the ℤ_2 symmetry σ → −σ. Knowing the fusion rules and three-point structure constants, it is possible to write operator product expansions, in which Δ_1, Δ_σ, Δ_ϵ are the conformal dimensions of the primary fields and the omitted terms O(z) are contributions of descendant fields. Any one-, two- and three-point function of primary fields is determined by conformal symmetry up to a multiplicative constant. This constant is set to one for one- and two-point functions by a choice of field normalizations. The only non-trivial dynamical quantities are the three-point structure constants, which appear in the operator product expansions, with z_ij = z_i − z_j. The three non-trivial four-point functions are of the type ⟨σ⁴⟩, ⟨σ²ϵ²⟩, ⟨ϵ⁴⟩.
For a four-point function ⟨∏_{i=1}^{4} V_i(z_i)⟩, let F_j^(s) and F_j^(t) be the s- and t-channel Virasoro conformal blocks, which respectively correspond to the contributions of V_j(z_2) (and its descendants) in the operator product expansion V_1(z_1)V_2(z_2), and of V_j(z_4) (and its descendants) in the operator product expansion V_1(z_1)V_4(z_4). Let x = z_12 z_34 / (z_13 z_24) be the cross-ratio. In the case of ⟨ϵ⁴⟩, the fusion rules allow only one primary field in all channels, namely the identity field. [2] In the case of ⟨σ²ϵ²⟩, the fusion rules allow only the identity field in the s-channel, and the spin field in the t-channel. [2] In the case of ⟨σ⁴⟩, the fusion rules allow two primary fields in all channels: the identity field and the energy field. [2] In this case the conformal blocks are written for (z_1, z_2, z_3, z_4) = (x, 0, ∞, 1) only: the general case is obtained by inserting the prefactor x^{1/24}(1−x)^{1/24} ∏_{1≤i<j≤4} z_ij^{−1/24}, and identifying x with the cross-ratio. From the representation of the model in terms of Dirac fermions, it is possible to compute correlation functions of any number of spin or energy operators. [1] These formulas have generalizations to correlation functions on the torus, which involve theta functions. [1] The two-dimensional Ising model is mapped to itself by a high-low temperature duality. The image of the spin operator σ under this duality is a disorder operator μ, which has the same left and right conformal dimensions, (Δ_μ, Δ̄_μ) = (Δ_σ, Δ̄_σ) = (1/16, 1/16). Although the disorder operator does not belong to the minimal model, correlation functions involving the disorder operator can be computed exactly. [1] The Ising model has a description as a random cluster model due to Fortuin and Kasteleyn. In this description, the natural observables are connectivities of clusters, i.e. probabilities that a number of points belong to the same cluster. The Ising model can then be viewed as the case q = 2 of the q-state Potts model, whose parameter q can vary continuously and is related to the central charge of the Virasoro algebra. In the critical limit, connectivities of clusters have the same behaviour under conformal transformations as correlation functions of the spin operator.
Nevertheless, connectivities do not coincide with spin correlation functions: for example, the three-point connectivity does not vanish, while ⟨σσσ⟩ = 0. There are four independent four-point connectivities, and their sum coincides with ⟨σσσσ⟩. [3] Other combinations of four-point connectivities are not known analytically. In particular, they are not related to correlation functions of the minimal model, [4] although they are related to the q → 2 limit of spin correlators in the q-state Potts model. [3]
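For reference, the standard conformal data usually quoted for the (4, 3) minimal model can be summarized as follows; this is a reconstruction from well-known results, not copied from the cited references.

```latex
% Conformal dimensions of the three primary fields (Kac table values 0, 1/16, 1/2):
\begin{align}
(\Delta_{\mathbf 1},\bar\Delta_{\mathbf 1}) = (0,0), \qquad
(\Delta_{\epsilon},\bar\Delta_{\epsilon}) = (\tfrac12,\tfrac12), \qquad
(\Delta_{\sigma},\bar\Delta_{\sigma}) = (\tfrac1{16},\tfrac1{16}).
\end{align}
% Fusion rules, invariant under the Z_2 symmetry sigma -> -sigma:
\begin{align}
\epsilon\times\epsilon = \mathbf 1, \qquad
\sigma\times\epsilon = \sigma, \qquad
\sigma\times\sigma = \mathbf 1 + \epsilon.
\end{align}
% Modular S-matrix in the basis (1, epsilon, sigma):
\begin{align}
\mathcal S = \frac12
\begin{pmatrix} 1 & 1 & \sqrt2 \\ 1 & 1 & -\sqrt2 \\ \sqrt2 & -\sqrt2 & 0 \end{pmatrix}.
\end{align}
```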
https://en.wikipedia.org/wiki/Two-dimensional_critical_Ising_model
In fluid mechanics , a two-dimensional flow is a form of fluid flow where the flow velocity at every point is parallel to a fixed plane. The velocity at any point on a given normal to that fixed plane should be constant. Considering a two dimensional flow in the X − Y {\displaystyle X-Y} plane, the flow velocity at any point ( x , y , z ) {\displaystyle (x,y,z)} at time t {\displaystyle t} can be expressed as – Considering a two dimensional flow in the r − θ {\displaystyle r-\theta } plane, the flow velocity at a point ( r , θ , z ) {\displaystyle (r,\theta ,z)} at a time t {\displaystyle t} can be expressed as – Vorticity in two dimensional flows in the X − Y {\displaystyle X-Y} plane can be expressed as – Vorticity in two dimensional flows in the r − θ {\displaystyle r-\theta } plane can be expressed as – A line source is a line from which fluid appears and flows away on planes perpendicular to the line. When we consider 2-D flows on the perpendicular plane, a line source appears as a point source. By symmetry, we can assume that the fluid flows radially outward from the source. The strength of a source can be given by the volume flow rate Q {\displaystyle Q} that it generates. Similar to a line source, a line sink is a line which absorbs fluid flowing towards it, from planes perpendicular to it. When we consider 2-D flows on the perpendicular plane, it appears as a point sink. By symmetry, we assume the fluid flows radially inwards towards the sink. The strength of a sink is given by the volume flow rate Q {\displaystyle Q} of the fluid it absorbs. A radially symmetrical flow field directed outwards from a common point is called a source flow. The central common point is the line source described above. Fluid is supplied at a constant rate Q {\displaystyle Q} from the source. As the fluid flows outward, the area of flow increases. As a result, to satisfy continuity equation , the velocity decreases and the streamlines spread out. The velocity at all points at a given distance from the source is the same. The velocity of fluid flow can be given as - We can derive the relation between flow rate and velocity of the flow. Consider a cylinder of unit height, coaxial with the source. The rate at which the source emits fluid should be equal to the rate at which fluid flows out of the surface of the cylinder. The stream function associated with source flow is – The steady flow from a point source is irrotational, and can be derived from velocity potential . The velocity potential is given by – Sink flow is the opposite of source flow. The streamlines are radial, directed inwards to the line source. As we get closer to the sink, area of flow decreases. In order to satisfy the continuity equation , the streamlines get bunched closer and the velocity increases as we get closer to the source. As with source flow, the velocity at all points equidistant from the sink is equal. The velocity of the flow around the sink can be given by – The stream function associated with sink flow is – The flow around a line sink is irrotational and can be derived from velocity potential. The velocity potential around a sink can be given by – A vortex is a region where the fluid flows around an imaginary axis. For an irrotational vortex, the flow at every point is such that a small particle placed there undergoes pure translation and does not rotate. Velocity varies inversely with radius in this case. 
The velocity tends to infinity at r = 0, which is why the center is a singular point. Since the fluid flows around an axis, the stream function and velocity potential for irrotational vortices follow from this velocity field. For a closed curve enclosing the origin, the circulation (the line integral of the velocity field) is Γ = K, and for any other closed curve Γ = 0. A doublet can be thought of as a combination of a source and a sink of equal strengths kept an infinitesimally small distance apart; thus the streamlines can be seen to start and end at the same point. The strength of a doublet made from a source and sink of strength Q separated by a distance ds is defined in terms of Q and ds, and the corresponding velocity field, equations and plot apply in the limiting condition ds → 0. The concept of a doublet is very similar to that of electric dipoles and magnetic dipoles in electrodynamics.
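For reference, the standard textbook expressions for these elementary plane flows in polar coordinates (r, θ) are given below, with Q the source strength (volume flow rate per unit depth) and Γ = K the circulation; this is a reconstruction under the usual sign conventions, not copied from the article.

```latex
\begin{align}
\text{source:}\quad & v_r = \frac{Q}{2\pi r}, & \psi &= \frac{Q\,\theta}{2\pi}, & \phi &= \frac{Q}{2\pi}\ln r,\\
\text{sink:}\quad & v_r = -\frac{Q}{2\pi r}, & \psi &= -\frac{Q\,\theta}{2\pi}, & \phi &= -\frac{Q}{2\pi}\ln r,\\
\text{free vortex:}\quad & v_\theta = \frac{\Gamma}{2\pi r}, & \psi &= -\frac{\Gamma}{2\pi}\ln r, & \phi &= \frac{\Gamma\,\theta}{2\pi}.
\end{align}
```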
https://en.wikipedia.org/wiki/Two-dimensional_flow
A two-dimensional gas is a collection of objects constrained to move in a planar or other two-dimensional space in a gaseous state. The objects can be: classical ideal gas elements such as rigid disks undergoing elastic collisions ; elementary particles , or any ensemble of individual objects in physics which obeys laws of motion without binding interactions. The concept of a two-dimensional gas is used either because: While physicists have studied simple two body interactions on a plane for centuries, the attention given to the two-dimensional gas (having many bodies in motion) is a 20th-century pursuit. Applications have led to better understanding of superconductivity , [ 1 ] gas thermodynamics , certain solid state problems and several questions in quantum mechanics . Research at Princeton University in the early 1960s [ 2 ] posed the question of whether the Maxwell–Boltzmann statistics and other thermodynamic laws could be derived from Newtonian laws applied to multi-body systems rather than through the conventional methods of statistical mechanics . While this question appears intractable from a three-dimensional closed form solution , the problem behaves differently in two-dimensional space. In particular an ideal two-dimensional gas was examined from the standpoint of relaxation time to equilibrium velocity distribution given several arbitrary initial conditions of the ideal gas. Relaxation times were shown to be very fast: on the order of mean free time . In 1996 a computational approach was taken to the classical mechanics non-equilibrium problem of heat flow within a two-dimensional gas. [ 3 ] This simulation work showed that for N>1500, good agreement with continuous systems is obtained. While the principle of the cyclotron to create a two-dimensional array of electrons has existed since 1934, the tool was originally not really used to analyze interactions among the electrons (e.g. two-dimensional gas dynamics ). An early research investigation explored cyclotron resonance behavior and the de Haas–van Alphen effect in a two-dimensional electron gas. [ 4 ] The investigator was able to demonstrate that for a two-dimensional gas, the de Haas–van Alphen oscillation period is independent of the short-range electron interactions. In 1991 a theoretical proof was made that a Bose gas can exist in two dimensions. [ 5 ] In the same work an experimental recommendation was made that could verify the hypothesis. In general, 2D molecular gases are experimentally observed on weakly interacting surfaces such as metals, graphene etc. at a non-cryogenic temperature and a low surface coverage. As a direct observation of individual molecules is not possible due to fast diffusion of molecules on a surface, experiments are either indirect (observing an interaction of a 2D gas with surroundings, e.g. condensation of a 2D gas) or integral (measuring integral properties of 2D gases, e.g. by diffraction methods). An example of the indirect observation of a 2D gas is the study of Stranick et al. who used a scanning tunnelling microscope in ultrahigh vacuum (UHV) to image an interaction of a two-dimensional benzene gas layer in contact with a planar solid interface at 77 kelvins . [ 6 ] The experimenters were able to observe mobile benzene molecules on the surface of Cu(111), to which a planar monomolecular film of solid benzene adhered. Thus the scientists could witness the equilibrium of the gas in contact with its solid state. 
Integral methods that are able to characterize a 2D gas usually fall into a category of diffraction (see for example study of Kroger et al. [ 7 ] ). The exception is the work of Matvija et al. who used a scanning tunneling microscope to directly visualize a local time-averaged density of molecules on a surface. [ 8 ] This method is of special importance as it provides an opportunity to probe local properties of 2D gases; for instance it enables to directly visualize a pair correlation function of a 2D molecular gas in a real space. If the surface coverage of adsorbates is increased, a 2D liquid is formed, [ 9 ] followed by a 2D solid. It was shown that the transition from a 2D gas to a 2D solid state can be controlled by a scanning tunneling microscope which can affect the local density of molecules via an electric field. [ 10 ] A multiplicity of theoretical physics research directions exist for study via a two-dimensional gas, such as: [ citation needed ]
https://en.wikipedia.org/wiki/Two-dimensional_gas
Two-dimensional gel electrophoresis , abbreviated as 2-DE or 2-D electrophoresis , is a form of gel electrophoresis commonly used to analyze proteins . Mixtures of proteins are separated by two properties in two dimensions on 2D gels. 2-DE was independently introduced in 1969 by Macko and Stegemann [ 1 ] (working with potato proteins) and Dale and Latner [ 2 ] (working with serum). 2-D electrophoresis begins with electrophoresis in the first dimension and then separates the molecules perpendicularly from the first to create an electropherogram in the second dimension. In electrophoresis in the first dimension, molecules are separated linearly according to their isoelectric point. In the second dimension, the molecules are then separated at 90 degrees from the first electropherogram according to molecular mass. Since it is unlikely that two molecules will be similar in two distinct properties, molecules are more effectively separated in 2-D electrophoresis than in 1-D electrophoresis. [ citation needed ] The two dimensions that proteins are separated into using this technique can be isoelectric point , protein complex mass in the native state, or protein mass . [ citation needed ] The result of this is a gel with proteins spread out on its surface. These proteins can then be detected by a variety of means, but the most commonly used stains are silver and Coomassie brilliant blue staining. In the former case, a silver colloid is applied to the gel. The silver binds to cysteine groups within the protein. The silver is darkened by exposure to ultra-violet light. The amount of silver can be related to the darkness, and therefore the amount of protein at a given location on the gel. This measurement can only give approximate amounts, but is adequate for most purposes. Silver staining is 100x more sensitive than Coomassie brilliant blue with a 40-fold range of linearity. [ 3 ] Molecules other than proteins can be separated by 2D electrophoresis. In supercoiling assays, coiled DNA is separated in the first dimension and denatured by a DNA intercalator (such as ethidium bromide or the less carcinogenic chloroquine ) in the second. This is comparable to the combination of native PAGE/SDS-PAGE in protein separation. [ citation needed ] A common technique is to use an Immobilized pH gradient (IPG) in the first dimension. This technique is referred to as IPG-DALT . The sample is first separated onto IPG gel (which is commercially available) then the gel is cut into slices for each sample which is then equilibrated in SDS-mercaptoethanol and applied to an SDS-PAGE gel for resolution in the second dimension. Typically IPG-DALT is not used for quantification of proteins due to the loss of low molecular weight components during the transfer to the SDS-PAGE gel. [ 4 ] See Isoelectric focusing In quantitative proteomics , these tools primarily analyze bio-markers by quantifying individual proteins, and showing the separation between one or more protein "spots" on a scanned image of a 2-DE gel. Additionally, these tools match spots between gels of similar samples to show, for example, proteomic differences between early and advanced stages of an illness. While this technology is widely utilized, the intelligence has not been perfected. For example, some software may tend to agree on the quantification and analysis of well-defined well-separated protein spots, they deliver different results and analysis tendencies with less-defined less-separated spots. 
[ 5 ] Comparative studies have previously been published to guide researchers on the "best" software for their analysis. [ 6 ] Challenges for automatic software-based analysis include incompletely separated (overlapping) spots, weak spots / noise (e.g., "ghost spots"), running differences between gels (e.g., a protein migrates to different positions on different gels), unmatched/undetected spots leading to missing values , [ 7 ] mismatched spots, errors in quantification (several distinct spots may be erroneously detected as a single spot by the software, and parts of a spot may be excluded from quantification), and differences in software algorithms and therefore in analysis tendencies. Generated picking lists can be used for the automated in-gel digestion of protein spots, and subsequent identification of the proteins by mass spectrometry . Mass spectrometry analysis provides precise mass measurements along with sequencing of peptides ranging from 1000 to 4000 atomic mass units. [ 8 ]
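The two orthogonal separations described above (isoelectric point in the first dimension, molecular mass in the second) can be illustrated with a toy mapping of proteins onto normalised gel coordinates. The protein values, pH range, and mass range below are placeholders chosen purely for illustration:

```python
import numpy as np

# Toy model of where proteins would sit on a 2-D gel: x follows the isoelectric
# point (first dimension, IEF), y follows log10(molecular mass), since SDS-PAGE
# migration is roughly linear in the logarithm of the mass.
proteins = {
    # name: (isoelectric point, molecular mass in Da) -- illustrative values
    "protein_A": (5.2, 66_000),
    "protein_B": (7.1, 25_000),
    "protein_C": (9.0, 12_000),
}

PH_MIN, PH_MAX = 3.0, 10.0              # pH range of the IPG strip
MASS_MIN, MASS_MAX = 10_000, 250_000    # resolvable mass range of the gel

def spot_position(pi, mass):
    """Map (pI, mass) to normalised (x, y) gel coordinates in [0, 1]."""
    x = (pi - PH_MIN) / (PH_MAX - PH_MIN)
    # heavier proteins stay near the top of the gel, so y grows as mass decreases
    y = (np.log10(MASS_MAX) - np.log10(mass)) / (np.log10(MASS_MAX) - np.log10(MASS_MIN))
    return x, y

for name, (pi, mass) in proteins.items():
    x, y = spot_position(pi, mass)
    print(f"{name}: x = {x:.2f}, y = {y:.2f}")
```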
https://en.wikipedia.org/wiki/Two-dimensional_gel_electrophoresis
A two-dimensional liquid ( 2D liquid ) is a collection of objects constrained to move in a planar space or other two-dimensional space in a liquid state. The movement of the particles in a 2D liquid is similar to that in 3D, but with fewer degrees of freedom. For example, rotational motion can be limited to rotation about only one axis, in contrast to a 3D liquid, where rotation of molecules about two or three axes is possible. The same is true for translational motion: the particles in a 2D liquid can move in a 2D plane, whereas the particles in a 3D liquid can move in three directions inside the 3D volume. Vibrational motion is in most cases not constrained in comparison to 3D. The relations with other states of aggregation (see below) are also analogous in 2D and 3D. 2D liquids are related to 2D gases . If the density of a 2D liquid is decreased, a 2D gas is formed. This was observed by scanning tunnelling microscopy under ultra-high vacuum (UHV) conditions for molecular adsorbates. [ 1 ] 2D liquids are related to 2D solids . If the density of a 2D liquid is increased, the rotational degree of freedom is frozen and a 2D solid is created. [ 2 ]
https://en.wikipedia.org/wiki/Two-dimensional_liquid
A two-dimensional polymer ( 2DP ) is a sheet-like monomolecular macromolecule consisting of laterally connected repeat units with end groups along all edges. [ 1 ] [ 2 ] This recent definition of 2DP is based on Hermann Staudinger 's polymer concept from the 1920s. [ 3 ] [ 4 ] [ 5 ] [ 6 ] According to this, covalent long chain molecules ("Makromoleküle") do exist and are composed of a sequence of linearly connected repeat units and end groups at both termini. Moving from one dimension to two offers access to surface morphologies such as increased surface area, porous membranes, and possibly in-plane pi orbital-conjugation for enhanced electronic properties. They are distinct from other families of polymers because 2D polymers can be isolated as multilayer crystals or as individual sheets. [ 7 ] The term 2D polymer has also been used more broadly to include linear polymerizations performed at interfaces, layered non-covalent assemblies, or to irregularly cross-linked polymers confined to surfaces or layered films. [ 8 ] 2D polymers can be organized based on these methods of linking (monomer interaction): covalently linked monomers, coordination polymers and supramolecular polymers. 2D polymers containing pores are also known as porous polymers . Topologically, 2DPs may thus be understood as structures made up from regularly tessellated regular polygons (the repeat units). Figure 1 displays the key features of a linear and a 2DP according to this definition. For usage of the term "2D polymer" in a wider sense, see "History". 2DPs include the individual layers or sheets of graphite (called graphenes ), MoS2 , (BN)x and layered covalent organic frameworks . As required by the above definition, these sheets have a periodic internal structure. Graphene has a honeycomb lattice of carbon atoms that exhibit semiconducting properties. A potential repeat unit of graphene is a sp2-hybridized carbon atom. Individual sheets can in principle be obtained by exfoliation procedures, though in reality this is a non-trivial enterprise. Molybdenum disulfide can be produced as a sheet-like structure by exfoliation . Such sheets represent two-dimensional polymers. Porphyrins are an additional class of conjugated, heterocyclic macrocycles. Control of monomer assembly through covalent assembly has also been demonstrated using covalent interactions with porphyrins. Upon thermal activation of porphyrin building blocks, covalent bonds form to create a conductive polymer, a versatile route for bottom-up construction of electronic circuits has been demonstrated. [ 9 ] Two dimensional covalent organic frameworks (COFs) are one type of microporous coordination polymer that can be fabricated as a 2DP. The dimensionality and topology of the 2D COFs result from both the shape of the monomers and the relative and dimensional orientations of their reactive groups. These materials contain desirable properties in fields of materials chemistry including thermal stability, tunable porosity, high specific surface area, and the low density of organic material. By careful selection of organic building units, long range π-orbital overlap parallel to the stacking direction of certain organic frameworks can be achieved. [ 7 ] It is possible to synthesize COFs using both dynamic covalent and non-covalent chemistry. The kinetic approach involves a stepwise process of polymerizing pre-assembled 2D-monomer while thermodynamic control exploits reversible covalent chemistry to allow simultaneous monomer assembly and polymerization. 
Under thermodynamic control, bond formation and crystallization also occur simultaneously. Covalent organic frameworks formed by dynamic covalent bond formation involve chemical reactions carried out reversibly under conditions of equilibrium control. [ 7 ] Because the formation of COFs by dynamic covalent chemistry occurs under thermodynamic control, product distributions depend only on the relative stabilities of the final products. Covalent assembly to form 2D COFs has previously been done using boronate esters from catechol acetonides in the presence of a Lewis acid (BF 3 ·OEt 2 ). [ 10 ] 2D polymerization under kinetic control relies on non-covalent interactions and monomer assembly prior to bond formation. The monomers can be held together in a pre-organized position by non-covalent interactions, such as hydrogen bonding or van der Waals forces. [ 11 ] Supramolecular assembly requires non-covalent interactions directing the formation of 2D polymers by relying on electrostatic interactions such as hydrogen bonding and van der Waals forces. Designing artificial assemblies capable of high selectivity requires correct manipulation of the energetic and stereochemical features of non-covalent forces. [ 11 ] Benefits of non-covalent interactions are their reversible nature and their response to external factors such as temperature and concentration. [ 12 ] The mechanism of non-covalent polymerization in supramolecular chemistry is highly dependent on the interactions during the self-assembly process. The degree of polymerization depends strongly on temperature and concentration. The mechanisms may be divided into three categories: isodesmic, ring-chain, and cooperative. [ 12 ] One example of isodesmic association in supramolecular aggregates is the (CA·M) system shown in Figure 7, in which cyanuric acid (CA) and melamine (M) interact and assemble through hydrogen bonding. [ 13 ] Hydrogen bonding has been used to guide the assembly of molecules into two-dimensional networks that can then serve as new surface templates and offer an array of pores of sufficient capacity to accommodate large guest molecules. [ 14 ] An example of utilizing surface structures through non-covalent assembly uses adsorbed monolayers to create binding sites for target molecules through hydrogen bonding interactions. Hydrogen bonding is used to guide the assembly of two different molecules into a 2D honeycomb porous network under ultra-high vacuum, as seen in Figure 8. [ 14 ] 2D polymers based on DNA have also been reported. [ 15 ] 2DPs, as two-dimensional sheet macromolecules, have a crystal lattice; that is, they consist of monomer units that repeat in two dimensions. Therefore, a clear diffraction pattern from their crystal lattice should be observed as a proof of crystallinity. The internal periodicity is supported by electron microscopy imaging , electron diffraction and Raman-spectroscopic analysis . 2DPs should in principle also be obtainable by, e.g., an interfacial approach; proving the internal structure in that case, however, is more challenging and has not yet been achieved. [ 16 ] [ 17 ] [ 18 ] In 2014 a 2DP was reported that was synthesised from a trifunctional photoreactive anthracene-derived monomer, preorganised in a lamellar crystal and photopolymerised in a [4+4] cycloaddition. [ 19 ] Another reported 2DP also involved an anthracene-derived monomer. [ 20 ] 2DPs are expected to be superb membrane materials because of their defined pore sizes. 
Furthermore, they can serve as ultrasensitive pressure sensors, as precisely defined catalyst supports, for surface coatings and patterning, as ultrathin supports for cryo-TEM , and in many other applications. Since 2D polymers provide large surface areas and uniform sheets, they have also found useful applications in areas such as selective gas adsorption and separation. [ 7 ] Metal organic frameworks have become popular recently due to the variability of their structures and topologies, which provide tunable pore structures and electronic properties. There are also ongoing efforts to create nanocrystals of MOFs and incorporate them into nanodevices. [ 21 ] Additionally, metal-organic surfaces have been synthesized with cobalt dithiolene catalysts for efficient hydrogen production through the reduction of water, an important strategy for renewable energy. [ 22 ] Two-dimensional, porous covalent organic frameworks have also been synthesized for use as storage media for hydrogen, methane and carbon dioxide in clean energy applications. [ 23 ] The first attempts to synthesize 2DPs date back to the 1930s, when Gee reported interfacial polymerizations at the air/water interface in which a monolayer of an unsaturated fatty acid derivative was laterally polymerized to give a 2D cross-linked material. [ 24 ] [ 25 ] [ 26 ] Since then, a number of important attempts at cross-linking polymerization of monomers confined to layered templates or various interfaces have been reported. [ 1 ] [ 27 ] These approaches provide easy access to sheet-like polymers. However, the sheets' internal network structures are intrinsically irregular and the term "repeat unit" is not applicable (see for example: [ 28 ] [ 29 ] [ 30 ] ). In organic chemistry, the creation of 2D periodic network structures has been a dream for decades. [ 31 ] Another noteworthy approach is "on-surface polymerization" , [ 32 ] [ 33 ] whereby 2DPs with lateral dimensions not exceeding some tens of nanometers have been reported. [ 34 ] [ 35 ] [ 36 ] Laminar crystals are readily available, each layer of which can ideally be regarded as a latent 2DP. There have been a number of attempts to isolate the individual layers by exfoliation techniques (see for example: [ 37 ] [ 38 ] [ 39 ] ).
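The isodesmic mechanism of supramolecular polymerization mentioned earlier, in which every monomer addition is assumed to occur with the same association constant K, admits a simple closed-form estimate of how the number-average degree of polymerization grows with concentration. The sketch below uses that standard equal-K mass balance; the constants are placeholders, not measured values:

```python
import math

def isodesmic_dp(K, c_total):
    """Number-average degree of polymerization for isodesmic self-assembly.

    Assumes every addition step has the same association constant K (per molar)
    so that the n-mer concentration is c_n = K**(n-1) * c1**n.  The mass balance
    c_total = c1 / (1 - K*c1)**2 is solved for x = K*c1, giving DP_N = 1/(1 - x).
    """
    kc = K * c_total
    x = (2 * kc + 1 - math.sqrt(4 * kc + 1)) / (2 * kc)   # K*c1, always in [0, 1)
    return 1.0 / (1.0 - x)

# DP_N grows roughly as sqrt(K * c_total) once K*c_total >> 1, illustrating the
# strong concentration dependence of non-covalent polymerization noted in the text.
for c in (1e-6, 1e-4, 1e-2):
    print(f"c_total = {c:.0e} M  ->  DP_N ≈ {isodesmic_dp(1e6, c):.1f}")
```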
https://en.wikipedia.org/wiki/Two-dimensional_polymer
A two-dimensional semiconductor (also known as 2D semiconductor ) is a type of natural semiconductor with thicknesses on the atomic scale. Geim and Novoselov et al. initiated the field in 2004 when they reported a new semiconducting material graphene , a flat monolayer of carbon atoms arranged in a 2D honeycomb lattice . [ 1 ] A 2D monolayer semiconductor is significant because it exhibits stronger piezoelectric coupling than traditionally employed bulk forms. This coupling could enable applications. [ 2 ] One research focus is on designing nanoelectronic components by the use of graphene as electrical conductor , hexagonal boron nitride as electrical insulator , and a transition metal dichalcogenide as semiconductor . [ 3 ] [ 4 ] Graphene, consisting of single sheets of carbon atoms, has high electron mobility and high thermal conductivity . One issue regarding graphene is its lack of a band gap , which poses a problem in particular with digital electronics because it is unable to switch off field-effect transistors (FETs). [ 3 ] Monolayer hexagonal boron nitride (h-BN) is an insulator with a high energy gap (5.97 eV). [ 5 ] However, it can also function as a semiconductor with enhanced conductivity due to its zigzag sharp edges and vacancies. h-BN is often used as substrate and barrier due to its insulating property. h-BN also has a large thermal conductivity. Transition-metal dichalcogenide monolayers (TMDs or TMDCs) are a class of two-dimensional materials that have the chemical formula MX 2 , where M represents transition metals from group IV, V and VI, and X represents a chalcogen such as sulfur , selenium or tellurium . [ 6 ] MoS 2 , MoSe 2 , MoTe 2 , WS 2 and WSe 2 are TMDCs. TMDCs have layered structure with a plane of metal atoms in between two planes of chalcogen atoms as shown in Figure 1. Each layer is bonded strongly in plane, but weakly in interlayers. Therefore, TMDCs can be easily exfoliated into atomically thin layers through various methods. TMDCs show layer-dependent optical and electrical properties. When exfoliated into monolayers, the band gaps of several TMDCs change from indirect to direct, [ 7 ] which lead to broad applications in nanoelectronics, [ 3 ] optoelectronics, [ 8 ] [ 9 ] and quantum computing . [ 10 ] While exfoliated TMDC monolayers exhibit promising optoelectronic properties, they are often limited by intrinsic and extrinsic defects, [ 11 ] such as sulfur vacancies and grain boundaries, which can negatively affect their performance. To address these issues, various chemical passivation techniques, including the use of superacids and thiol molecules, [ 12 ] have been developed to enhance their photoluminescence and charge transport properties. Additionally, phase [ 13 ] and strain engineering [ 14 ] have emerged as powerful strategies to further optimize the electronic characteristics of TMDCs, making them more suitable for advanced applications in nanoelectronics and quantum computing. Another class of 2D semiconductors are III-VI chalcogenides. These materials have the chemical formula MX, where M is a metal from group 13 ( Ga , In ) and X is a chalcogen atom ( S , Se , Te ). Typical members of this group are InSe and GaSe , both of which have shown high electronic mobilities and band gaps suitable for a wide range of electronic applications. [ 15 ] [ 16 ] 2D semiconductor materials are often synthesized using a chemical vapor deposition (CVD) method. 
Because CVD can provide large-area, high-quality, and well-controlled layered growth of 2D semiconductor materials, it also allows the synthesis of two-dimensional heterojunctions . [ 17 ] When building devices by stacking different 2D materials, mechanical exfoliation followed by transfer is often used. [ 4 ] [ 6 ] Other possible synthesis methods include electrochemical deposition , [ 18 ] [ 19 ] chemical exfoliation, hydrothermal synthesis, and thermal decomposition . In 2008, quasi-2D cadmium selenide (CdSe) platelets were first synthesized by a colloidal method, with thicknesses of several atomic layers and lateral sizes up to dozens of nanometers. [ 20 ] Modifications of the procedure made it possible to obtain other nanoparticles with different compositions (like CdTe, [ 21 ] HgSe, [ 22 ] CdSe x S 1−x alloys, [ 23 ] core/shell [ 24 ] and core/crown [ 25 ] heterostructures) and shapes (such as scrolls, [ 26 ] nanoribbons, [ 27 ] etc.). The unique crystal structures of 2D semiconductor materials often yield unusual mechanical properties, especially in the monolayer limit, such as high stiffness and strength in the 2D atomic plane but low flexural rigidity. [ 28 ] Testing these materials is more challenging than testing their bulk counterparts, with methods employing scanning probe techniques such as atomic force microscopy (AFM). These experimental methods are typically performed on 2D materials suspended over holes in a substrate. The tip of the AFM is then used to press into the flake and measure the response of the material. From this, mechanical properties such as the Young's modulus, yield strain, and flexural strength can be extracted. With a Young's modulus of almost 1 TPa, [ 29 ] graphene is exceptionally stiff and strong due to the strength of its carbon-carbon bonding. Graphene, however, has a fracture toughness of about 4 MPa·√m, making it brittle and easy to crack . [ 30 ] The same group that measured its fracture toughness later showed that graphene has a remarkable ability to distribute applied forces, about ten times that of steel. [ 31 ] Monolayer boron nitride has a fracture strength and Young's modulus of 70.5 GPa and 0.865 TPa, respectively. Boron nitride also maintains its high Young's modulus and fracture strength with increasing thickness. [ 32 ] 2D transition metal dichalcogenides are often used in applications such as flexible and stretchable electronics, where an understanding of their mechanical properties and of the operational impact of mechanical changes to the materials is paramount for device performance. Under strain, TMDs change their electronic band gap structure in both the direct-gap monolayer and the indirect-gap few-layer cases, indicating that applied strain can serve as a tuning parameter. [ 33 ] Monolayer MoS 2 has a Young's modulus of 270 GPa and a maximum strain of about 10% before yielding. [ 34 ] In comparison, bilayer MoS 2 has a Young's modulus of 200 GPa, attributed to interlayer slip. [ 34 ] As the layer number is increased further, interlayer slip is overshadowed by the bending rigidity, and the Young's modulus rises to 330 GPa. [ 35 ] Applications include electronic devices, [ 37 ] photonic and energy harvesting devices, and flexible and transparent substrates. [ 3 ] Other applications include quantum computing qubit devices, [ 10 ] solar cells, [ 38 ] and flexible electronics. 
[ 6 ] Theoretical work has predicted that the hybridization of the band edges in some van der Waals heterostructures can be controlled via electric fields, and has proposed its usage in quantum bit devices, considering the ZrSe 2 /SnSe 2 heterobilayer as an example. [ 10 ] Further experimental work has confirmed these predictions for the case of the MoS 2 /WS 2 heterobilayer. [ 39 ] 2D layered magnetic materials are attractive building blocks for nanoelectromechanical systems (NEMS): while they share high stiffness and strength and low mass with other 2D materials, they are magnetically active. Among the large class of newly emerged 2D layered magnetic materials, of particular interest is few-layer CrI 3 , whose magnetic ground state consists of antiferromagnetically coupled ferromagnetic (FM) monolayers with an out-of-plane easy axis. Because the interlayer exchange interaction is relatively weak, a magnetic field on the order of 0.5 T in the out-of-plane (z) direction can induce a spin-flip transition in bilayer CrI 3 . Remarkable phenomena and device concepts based on detecting and controlling the interlayer magnetic state have recently been demonstrated, including spin-filter giant magnetoresistance, magnetic switching by electric field or electrostatic doping, and spin transistors. The coupling between the magnetic and mechanical properties in atomically thin materials, the basis for 2D magnetic NEMS, however, remains elusive, although NEMS made of thicker magnetic materials or coated with FM metals have been studied.
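The AFM indentation measurements on suspended flakes described earlier are commonly analysed with a point-load membrane model of the (assumed) form F ≈ σ0·π·δ + E2D·q³·δ³/a², where δ is the centre deflection, a the hole radius, σ0 the 2D pretension, E2D the 2D elastic modulus, and q a constant set by the Poisson ratio. A minimal fitting sketch on synthetic data; every numerical value below is an illustrative, graphene-like placeholder rather than a measurement:

```python
import numpy as np

nu = 0.165                       # Poisson ratio (graphene-like, illustrative)
a = 0.5e-6                       # radius of the hole in the substrate (m)
q = 1.0 / (1.05 - 0.15 * nu - 0.16 * nu**2)

# Synthetic force-deflection data standing in for an AFM measurement.
rng = np.random.default_rng(1)
d = np.linspace(5e-9, 100e-9, 40)                     # centre deflection (m)
sigma0_true, e2d_true = 0.3, 340.0                    # pretension and 2D modulus (N/m)
F = sigma0_true * np.pi * d + e2d_true * q**3 * d**3 / a**2
F += rng.normal(scale=0.02 * F.max(), size=F.size)    # measurement noise

# The model is linear in the basis {d, d**3}, so ordinary least squares
# recovers the pretension and the 2D elastic modulus.
X = np.column_stack([d, d**3])
coef, *_ = np.linalg.lstsq(X, F, rcond=None)
sigma0_fit = coef[0] / np.pi
e2d_fit = coef[1] * a**2 / q**3
print(f"pretension ≈ {sigma0_fit:.2f} N/m, 2D modulus ≈ {e2d_fit:.0f} N/m")
# Dividing the 2D modulus by an effective thickness (~0.335 nm for graphene)
# gives a 3D Young's modulus on the order of 1 TPa, as quoted in the text.
```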
https://en.wikipedia.org/wiki/Two-dimensional_semiconductor
Windowing is a process where an index-limited sequence has its maximum energy concentrated in a finite frequency interval. This can be extended to an N -dimension where the N -D window has the limited support and maximum concentration of energy in a separable or non-separable N -D passband. The design of an N -dimensional window particularly a 2-D window finds applications in various fields such as spectral estimation of multidimensional signals , design of circularly symmetric and quadrantally symmetric non-recursive 2D filters , [ 1 ] design of optimal convolution functions, image enhancement so as to reduce the effects of data-dependent processing artifacts, optical apodization and antenna array design. [ 2 ] Due to the various applications of multi-dimensional signal processing , the various design methodologies of 2-D windows is of critical importance in order to facilitate these applications mentioned above, respectively. Consider a two-dimensional window function (or window array ) w ( n 1 , n 2 ) {\displaystyle w(n_{1},n_{2})} with its Fourier transform denoted by W ( w 1 , w 2 ) {\displaystyle W(w_{1},w_{2})} . Let i ( n 1 , n 2 ) {\displaystyle i(n_{1},n_{2})} and I ( w 1 , w 2 ) {\displaystyle I(w_{1},w_{2})} denote the impulse and frequency response of an ideal filter and h ( n 1 , n 2 ) {\displaystyle h(n_{1},n_{2})} and H ( w 1 , w 2 ) {\displaystyle H(w_{1},w_{2})} denote the impulse and frequency response of a filter approximating the ideal filter, then we can approximate I ( w 1 , w 2 ) {\displaystyle I(w_{1},w_{2})} by h ( n 1 , n 2 ) {\displaystyle h(n_{1},n_{2})} . Since i ( n 1 , n 2 ) {\displaystyle i(n_{1},n_{2})} has an infinite extent it can be approximated as a finite impulse response by multiplying with a window function as shown below and in the Fourier domain The problem is to choose a window function with an appropriate shape such that H ( w 1 , w 2 ) {\displaystyle H(w_{1},w_{2})} is close to I ( w 1 , w 2 ) {\displaystyle I(w_{1},w_{2})} and in any region surrounding a discontinuity of I ( w 1 , w 2 ) {\displaystyle I(w_{1},w_{2})} , H ( w 1 , w 2 ) {\displaystyle H(w_{1},w_{2})} shouldn't contain excessive ripples due to the windowing. There are four approaches for generating 2-D windows using a one-dimensional window as a prototype. [ 3 ] Approach I One of the methods of deriving the 2-D window is from the outer product of two 1-D windows, i.e., w ( n 1 , n 2 ) = w 1 ( n 1 ) w 2 ( n 2 ) . {\displaystyle w(n_{1},n_{2})=w_{1}(n_{1})w_{2}(n_{2}).} The property of separability is exploited in this approach. The window formed has a square region of support and is separable in the two variables. In order to understand this approach, [ 4 ] consider 1-D Kaiser window whose window function is given by then the corresponding 2-D function is given by where: The Fourier transform of w ( n 1 , n 2 ) {\displaystyle w(n_{1},n_{2})} is the outer product of the Fourier transforms of w 1 ( n 1 ) and w 2 ( n 2 ) {\displaystyle w_{1}(n_{1}){\text{ and }}w_{2}(n_{2})} . Hence W ( w 1 , w 2 ) = W 1 ( w 1 ) W 2 ( w 2 ) {\displaystyle W(w_{1},w_{2})=W_{1}(w_{1})W_{2}(w_{2})} . [ 5 ] Approach II Another method of extending the 1-D window design to a 2-D design is by sampling a circularly rotated 1-D continuous window function. [ 2 ] A function is said to possess circular symmetry if it can be written as a function of its radius, independent of θ {\displaystyle \theta } i.e. f ( r , θ ) = f ( r ) . 
{\displaystyle f(r,\theta )=f(r).} If w ( n ) denotes a good 1-D even symmetric window then the corresponding 2-D window function [ 2 ] is (where a {\displaystyle a} is a constant) and The transformation of the Fourier transform of the window function from rectangular co-ordinates to polar co-ordinates results in a Fourier–Bessel transform expression, which is called the Hankel transform . Hence the Hankel transform is used to compute the Fourier transform of the 2-D window functions. If this approach is used to find the 2-D window from the 1-D window function then their Fourier transforms have the relation where: and This is the most widely used approach to design 2-D windows. 2-D filter design by windowing using window formulations obtained from the above two approaches will result in the same filter order. This results in an advantage for the second approach, since its circular region of support has fewer non-zero samples than the square region of support obtained from the first approach, which in turn results in computational savings due to the reduced number of coefficients of the 2-D filter. But the disadvantage of this approach is that the frequency characteristics of the 1-D window are not well preserved in 2-D cases by this rotation method. [ 3 ] It was also found that the mainlobe width and sidelobe level of the 2-D windows are not as well behaved and predictable as those of their 1-D prototypes. [ 4 ] While designing a 2-D window there are two features that have to be considered for the rotation. Firstly, the 1-D window is only defined for integer values of n {\displaystyle n} , but the value n 1 2 + n 2 2 {\displaystyle {\sqrt {n_{1}^{2}+n_{2}^{2}}}} is not an integer in general. To overcome this, the method of interpolation can be used to define values for w ( n 1 , n 2 ) {\displaystyle w(n_{1},n_{2})} for any arbitrary w ( n 1 2 + n 2 2 ) . {\displaystyle w\left({\sqrt {n_{1}^{2}+n_{2}^{2}}}\right).} Secondly, the 2-D FFT must be applicable to the 2-D window. Approach III Another approach is to obtain 2-D windows by rotating the frequency response of a 1-D window in Fourier space followed by the inverse Fourier transform. [ 6 ] In approach II, the spatial-domain signal is rotated, whereas in this approach the 1-D window is rotated in a different domain (e.g., the frequency domain). Thus the Fourier transform of the 2-D window function is given by The 2-D window function w 2 ( n 1 , n 2 ) {\displaystyle w_{2}(n_{1},n_{2})} can be obtained by computing the inverse Fourier transform of W 2 ( w 1 , w 2 ) {\displaystyle W_{2}(w_{1},w_{2})} . Another way to show the type-preserving rotation is when the relation W 1 ( w 1 ) = W 2 ( w 1 , w 2 ) at w 2 = 0 {\displaystyle W_{1}(w_{1})=W_{2}(w_{1},w_{2})\ at\ w_{2}=0} is satisfied. This implies that a slice of the frequency response of the 2-D window is equal to that of the 1-D window, where the orientation of ( w 1 , w 2 ) {\displaystyle (w_{1},w_{2})} is arbitrary. In the spatial domain, this relation is given by w 1 ( n ) = ∫ − ∞ ∞ w 2 ( n 1 , n 2 ) d n 2 {\displaystyle w_{1}(n)=\int _{-\infty }^{\infty }\!w_{2}(n_{1},n_{2})\,dn_{2}} . This implies that a slice of the frequency response W 2 ( w 1 , w 2 ) {\displaystyle W_{2}(w_{1},w_{2})} is the same as the Fourier transform of the one-directional integration of the 2-D window w 2 ( n 1 , n 2 ) {\displaystyle w_{2}(n_{1},n_{2})} . 
The advantage of this approach is that the individual features of 1-D window response W 1 ( w 1 ) {\displaystyle W_{1}(w_{1})} are well preserved in the obtained 2-D window response W 2 ( w 1 , w 2 ) {\displaystyle W_{2}(w_{1},w_{2})} . Also, the circular symmetry is improved considerably in a discrete system. The drawback is that it's computationally inefficient due to the requirement of 2-D inverse Fourier transform and hence less useful in practice. [ 3 ] Approach IV A new method was proposed to design a 2-D window by applying the McClellan transformation to a 1-D window. [ 7 ] Each coefficient of the resulting 2-D window is the linear combination of coefficients of the corresponding 1-D window with integer or power of 2 weighting. Consider a case of even length, then the frequency response of the 1-D window of length N can be written as Consider the McClellan transformation: which is equivalent to Substituting the above, we get the frequency response of the corresponding 2-D window From the above equation, the coefficients of the 2-D window can be obtained. To illustrate this approach, consider the Tseng window. The 1-D Tseng window of 2 N {\displaystyle 2N} weights can be written as By implementing this approach, the frequency response of the 2-D McClellan-transformed Tseng window is given by where w ( n 1 , n 2 ) {\displaystyle w(n_{1},n_{2})} are the 2-D Tseng window coefficients. This window finds applications in antenna array design for the detection of AM signals. [ 8 ] The advantages include simple and efficient design, nearly circularly symmetric frequency response of the 2-D window, preserving of the 1-D window prototype features. However, when this approach is used for FIR filter design it was observed that the 2-D filters designed were not as good as those originally proposed by McClellan. Using the above approaches, the 2-D window functions for few of the 1-D windows are as shown below. When Hankel transform is used to find the frequency response of the window function, it is difficult to represent it in a closed form. Except for rectangular window and Bartlett window , the other window functions are represented in their original integral form. The two-dimensional window function is represented as w ( r ) {\displaystyle w(r)} with a region of support given by | r | < a {\displaystyle |r|<a} where the window is set to unity at origin and w ( r ) = 0 {\displaystyle w(r)=0} for | r | > a . {\displaystyle |r|>a.} Using the Hankel transform , the frequency response of the window function is given by where J 0 {\displaystyle J_{0}} is Bessel function identity. The two-dimensional version of a circularly symmetric rectangular window is as given below [ 9 ] The window is cylindrical with the height equal to one and the base equal to 2a. The vertical cross-section of this window is a 1-D rectangular window. The frequency response of the window after substituting the window function as defined above, using the Hankel transform , is as shown below The two-dimensional mathematical representation of a Bartlett window is as shown below [ 9 ] The window is cone-shaped with its height equal to 1 and the base is a circle with its radius 2a. The vertical cross-section of this window is a 1-D triangle window. The Fourier transform of the window using the Hankel transform is as shown below The 2-D Kaiser window is represented by [ 9 ] The cross-section of the 2-D window gives the response of a 1-D Kaiser Window function. 
The Fourier transform of the window using the Hankel transform is as shown below
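The separable construction of Approach I, the rotation of Approach II, and a numerical check of a window's frequency response (useful when the Hankel-transform expression has no convenient closed form) can all be sketched in a few lines of NumPy. The prototype window, window size, and FFT length below are illustrative choices, not values from the design methods cited above:

```python
import numpy as np

N = 33                                   # window size (odd, so there is a centre sample)
beta = 6.0                               # Kaiser shape parameter
w1d = np.kaiser(N, beta)                 # 1-D prototype window

# Approach I: separable 2-D window as the outer product of two 1-D windows
# (square region of support).
w_sep = np.outer(w1d, w1d)

# Approach II: rotate the 1-D window, w(n1, n2) = w(sqrt(n1^2 + n2^2)),
# interpolating because the radius is generally not an integer
# (circular region of support, hence fewer non-zero coefficients).
half = (N - 1) / 2.0
n1, n2 = np.meshgrid(np.arange(N) - half, np.arange(N) - half, indexing="ij")
radius = np.hypot(n1, n2)
w_rot = np.interp(radius, np.arange(N) - half, w1d, right=0.0)
w_rot[radius > half] = 0.0
print(np.count_nonzero(w_sep), np.count_nonzero(w_rot))   # rotated window has fewer samples

# Numerical frequency response via a zero-padded 2-D FFT, normalised to a 0 dB peak.
def freq_response_db(window, n_fft=512):
    W = np.fft.fftshift(np.fft.fft2(window, s=(n_fft, n_fft)))
    mag = np.abs(W)
    return 20.0 * np.log10(mag / mag.max() + 1e-12)

resp = freq_response_db(w_rot)
centre = resp.shape[0] // 2
slice_db = resp[centre, centre:]                 # radial slice along one frequency axis
first_rise = np.argmax(np.diff(slice_db) > 0)    # end of the mainlobe (first null)
print(f"highest sidelobe along the slice ≈ {slice_db[first_rise:].max():.1f} dB")
```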
https://en.wikipedia.org/wiki/Two-dimensional_window_design
The two-domain system is a biological classification by which all organisms in the tree of life are classified into two domains , Bacteria and Archaea . [ 1 ] [ 2 ] [ 3 ] It emerged from the development of knowledge of archaeal diversity and challenges the widely accepted three-domain system that classifies life into Bacteria, Archaea, and Eukarya . [ 4 ] It was preceded by the eocyte hypothesis of James A. Lake in the 1980s, [ 5 ] which was largely superseded by the three-domain system, due to the evidence available at the time. [ 6 ] Better understanding of archaea, especially of their roles in the origin of eukaryotes through symbiogenesis with bacteria, led to the revival of the eocyte hypothesis in the 2000s. [ 7 ] [ 8 ] The two-domain system became more widely accepted after the discovery of a large kingdom of archaea called Promethearchaeati in 2017, [ 9 ] which evidence suggests is the evolutionary root of eukaryotes, thereby making eukaryotes members of the domain Archaea. [ 10 ] While the features of promethearchaea do not completely rule out the three-domain system, [ 11 ] [ 12 ] the notion that eukaryotes originated within Archaea has been strengthened by genetic and proteomic studies. [ 13 ] Under the three-domain system, Eukarya is mainly distinguished by the presence of "eukaryotic signature proteins" that are not found in Archaea and Bacteria. However, promethearchaea contain genes that code for multiple such proteins. [ 3 ] Classification of life into two main divisions is not a new concept, the first such proposal having been made by the French biologist Édouard Chatton in 1938. Chatton distinguished organisms into two groups, the Eucaryotes and the Procaryotes. These were later named empires, and Chatton's classification became known as the two-empire system . [ 15 ] Chatton used the name Eucaryotes only for protozoans, excluded other eukaryotes, and published in limited circulation, so his work was not widely recognised. His classification was rediscovered in 1961 by the Canadian bacteriologist Roger Yates Stanier of the University of California, Berkeley, while he was at the Pasteur Institute in Paris. [ 14 ] The next year, Stanier and his colleague Cornelis Bernardus van Niel published in Archiv für Mikrobiologie (now Archives of Microbiology ) Chatton's classification, with the Eucaryotes elaborated to include higher algae, protozoans, fungi, plants, and animals. [ 16 ] It became a popular system of classification, as John O. Corliss wrote in 1986: "[The] Chatton-Stanier concept of a kingdom (better, superkingdom) Prokaryota for bacteria (in the broadest sense) and a second superkingdom Eukaryota for all other organisms has been widely accepted with enthusiasm." [ 17 ] In 1977, Carl Woese and George E. Fox classified prokaryotes into two groups (kingdoms), Archaebacteria (for methanogens , the first known archaea) and Eubacteria, based on their 16S ribosomal RNA (16S rRNA) genes. [ 18 ] In 1984, James A. Lake , Michael W. Clark, Eric Henderson , and Melanie Oakes of the University of California, Los Angeles described what was known as "a group of sulfur-dependent bacteria" as a new group of organisms called eocytes (for "dawn cells") and created a new kingdom, Eocyta. With it they proposed the existence of four kingdoms, based on the structure and composition of the ribosomal subunits, namely Archaebacteria, Eubacteria, Eukaryota and Eocyta. [ 19 ] Lake further analysed the rRNA sequences of the four groups and suggested that eukaryotes originated from eocytes, and not archaebacteria, as was generally assumed. [ 20 ] This was the basis of the eocyte hypothesis . 
[ 6 ] In 1988, he proposed the division of all life forms into two taxonomic groups. [ 5 ] In 1990, Woese, Otto Kandler , and Mark Wheelis showed that archaea are a distinct group of organisms and that eocytes (renamed Crenarchaeota as a phylum of Archaea, [ 22 ] and renamed again as Thermoproteota in 2021 [ 23 ] ) are Archaea. They introduced the major division of life into the three-domain system comprising domain Eucarya, domain Bacteria, and domain Archaea. [ 24 ] With a number of revisions of details and discoveries of several new archaeal lineages, Woese's classification gradually gained acceptance as "arguably the best-developed and most widely-accepted scientific hypotheses [with the five-kingdom classification] regarding the evolutionary history of life." [ 25 ] The three-domain concept did not, however, resolve the issues with the relationship between Archaea and eukaryotes. [ 12 ] [ 26 ] As Ford Doolittle , then at Dalhousie University, put it in 2020: "[The] three-domain tree wrongly represents evolutionary relationships, presenting a misleading view about how eukaryotes evolved from prokaryotes. The three-domain tree does recognize a specific archaeal–eukaryotic affinity, but it would have the latter arising independently, not from within, the former." [ 4 ] The two-domain system relies mainly on two key concepts that define eukaryotes as members of the domain Archaea and not as a separate domain: eukaryotes originated within Archaea, and promethearchaea represent the origin of eukaryotes. [ 27 ] [ 28 ] The three-domain system presumes that eukaryotes are more closely related to archaea than to Bacteria and are a sister group to Archaea; thus, it treats them as a separate domain. [ 29 ] As more new archaea were discovered in the early 2000s, this distinction became doubtful as eukaryotes became deeply nested within Archaea. The origin of eukaryotes from Archaea, meaning the two are part of the same larger group, came to be supported by studies based on ribosomal protein sequencing and phylogenetic analyses in 2004. [ 30 ] [ 31 ] Phylogenomic analysis of about 6000 gene sets from 185 bacterial, archaeal, and eukaryotic genomes in 2007 also suggested the origin of eukaryotes from Methanobacteriota (specifically the Thermoplasmatales ). [ 32 ] In 2008, researchers from the Natural History Museum, London and Newcastle University reported a comprehensive analysis of 53 genes from archaea, bacteria, and eukaryotes that included essential components of the nucleic acid replication, transcription, and translation machineries. The conclusion was that eukaryotes evolved from archaea, specifically Crenarchaeota (eocytes), and the results "favor a topology that supports the eocyte hypothesis rather than archaebacterial monophyly and the 3-domains tree of life." [ 26 ] A study around the same time also found several genes common to eukaryotes and Crenarchaeota. [ 33 ] This accumulating evidence supports the two-domain system. [ 22 ] In 2019, research led by Gergely J. Szöllősi, assistant professor at ELTE, also concluded that two domains are the correct system. The studies conducted used simulations of more than 3,000 gene families. The study concluded that eukaryotes probably evolved from a bacterium entering a Promethearchaeati host (probably from the phylum Heimdallarchaeota ). 
[ 34 ] [ 35 ] [ 36 ] One of the distinctions of the domain Eukarya in the three-domain system is that eukaryotes have unique proteins such as actin ( cytoskeletal microfilament involved in cell motility), tubulin (component of the large cytoskeleton, microtubule ), and the ubiquitin system (protein degradation and recycling) that are not found in prokaryotes. However, these so-called "eukaryotic signature proteins" [ 3 ] are encoded in genomes of Thermoproteati (comprising the phyla Thaumarchaeota , Aigarchaeota , Crenarchaeota and Korarchaeota ) archaea, but not encoded in other archaea genomes. [ 22 ] The first eukaryotic proteins identified in Crenarchaeota were actin and actin-related proteins (Arp) 2 and 3, perhaps explaining the origin of eukaryotes by symbiogenic phagocytosis , in which an ancient archaeal host had an actin-based mechanism by which to envelop other cells, like protomitochondrial bacteria. [ 37 ] Tubulin-like proteins named artubulins are found in the genomes of several ammonium-oxidising Thaumarchaeota. [ 38 ] Endosomal sorting complexes, required for transport ( ESCRT III), involved in eukaryotic cell division, are found in all Thermoproteati groups. [ 39 ] The ESCRT-III-like proteins constitute the primary cell division system in these archaea. [ 40 ] [ 41 ] Genes encoding the ubiquitin system are known from multiple genomes of Aigarchaeota. [ 42 ] Ubiquitin-related protein called Urm1 is also present in Crenarchaeota. [ 43 ] DNA replication system (GINS proteins) in Crenarchaeota and Halobacteria are similar to the CMG (CDC45, MCM, GINS) complex of eukaryotes. [ 44 ] The presence of these eukaryotic proteins in Archaea indicates their direct relationship and that eukaryotes emerged from Archaea. [ 22 ] [ 45 ] The discovery of Promethearchaeati, described as "eukaryote-like archaea", [ 46 ] in 2012 [ 47 ] [ 48 ] and the following phylogenetic analyses have strengthened the two-domain view of life. [ 49 ] Promethearchaea called Lokiarchaeota contain even more eukaryotic protein-genes than the Thermoproteati kingdom. Initial genetic analysis and later reanalysis showed that out of over 31 selected eukaryotic genes in the archaea, 75% of them directly support eukaryote-archaea grouping, meaning a single domain of Archaea including eukaryotes; [ 50 ] [ 51 ] although the findings did not completely rule out the three-domain system. [ 52 ] As more Promethearchaeati groups were subsequently discovered including Thorarchaeota , Odinarchaeota , and Heimdallarchaeota, their relationships with eukaryotes became better established. Phylogenetic analyses using ribosomal RNA genes indicated that eukaryotes stemmed from promethearchaea, and that Heimdallarchaeota are the closest relatives of eukaryotes. [ 9 ] [ 53 ] Eukaryotic origin from Heimdallarchaeota is also supported by phylogenomic study in 2020. [ 13 ] A new group of Promethearchaeati found in 2021 (provisionally named Wukongarchaeota) also indicated a deep root for eukaryotic origin. [ 54 ] A report in 2022 of another Promethearchaeati, named Njordarchaeota, indicates that Heimdallarchaeota-Wukongarchaeota branch is possibly the origin group for eukaryotes. [ 55 ] The promethearchaea contain at least 80 genes for eukaryotic signature proteins. 
[ 56 ] In addition to the actin, tubulin, ubiquitin, and ESCRT proteins found in Thermoproteati archaea, promethearchaea contain functional genes for several other eukaryotic proteins such as profilins , [ 57 ] the ubiquitin system (E1-like, E2-like and small-RING finger (srfp) proteins), [ 58 ] membrane-trafficking systems (such as Sec23/24 and TRAPP domains), a variety of small GTPases [ 49 ] (including Gtr/Rag family GTPase orthologues [ 59 ] ), and gelsolins . [ 60 ] Although this information does not completely resolve the three-domain versus two-domain controversy, [ 46 ] it is generally considered to favour the two-domain system. [ 3 ] [ 13 ] [ 61 ] The two-domain system classifies all known cellular life forms into two domains, Bacteria and Archaea, and no longer recognises the domain Eukaryota of the three-domain classification as one of the main domains. In contrast to the eocyte hypothesis, which proposed two major groups of life (similar to domains) and posited that Archaea could be divided into both bacterial and eukaryotic groups, it merges Archaea and eukaryotes into a single domain, with Bacteria entirely in a separate domain. [ 4 ] The domain Bacteria consists of all bacteria, which are prokaryotes (lacking a nucleus); it is thus made up solely of prokaryotic organisms. [ 62 ] [ 63 ] The domain Archaea comprises both prokaryotic and eukaryotic organisms: [ 68 ] archaea themselves are prokaryotic organisms, while eukaryotes, which have a nucleus in their cells, are also included within the domain.
https://en.wikipedia.org/wiki/Two-domain_system
In condensed matter physics , the two-fluid model is a macroscopic model to explain superfluidity . The idea was suggested by László Tisza in 1938 and reformulated by Lev Landau in 1941 to explain the behavior of superfluid helium-4 . [ 1 ] [ 2 ] This model states that liquid helium below its lambda point (the temperature at which the superfluid forms) consists of two components: a normal fluid component and an ideal (superfluid) component. Each component has its own density, and their sum gives the total density, which remains constant. The ratio of the superfluid density to the total density increases as the temperature approaches absolute zero. The two-fluid model can be described by a system of coupled inviscid and viscous fluid equations; in the low-velocity limit the equations are given by [ 3 ] where P {\displaystyle P} is the pressure, T {\displaystyle T} is the temperature, η {\displaystyle \eta } is the viscosity of the normal component, σ {\displaystyle \sigma } is the entropy per unit mass, and ρ = ρ s + ρ n {\displaystyle \rho =\rho _{\rm {s}}+\rho _{\rm {n}}} is the total density, the sum of the densities of the two components, such that it follows a continuity equation in which the total flow is the sum of the flows of the two components. These correspond to the Navier-Stokes equations (normal component) coupled to the Euler equations (ideal superfluid component). The term two-fluid model also refers to a macroscopic traffic flow model representing traffic in a town, city, or metropolitan area, put forward in the 1970s by Ilya Prigogine and Robert Herman . [ 4 ] It was inspired by the superfluid model. [ 4 ]
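The temperature dependence of the two densities can be illustrated with an often-quoted empirical approximation for helium-4, ρn/ρ ≈ (T/Tλ)^5.6 below the lambda point (a rough fit to experimental data, not Landau's microscopic result); the sketch below simply tabulates the corresponding fractions:

```python
import numpy as np

T_LAMBDA = 2.17  # K, lambda point of helium-4

def normal_fraction(T, exponent=5.6):
    """Approximate normal-fluid fraction rho_n / rho of He-4 below the
    lambda point, using the empirical fit (T / T_lambda)**5.6."""
    T = np.asarray(T, dtype=float)
    frac = np.clip((T / T_LAMBDA) ** exponent, 0.0, 1.0)
    return np.where(T >= T_LAMBDA, 1.0, frac)

# The superfluid fraction rho_s / rho grows toward 1 as T approaches absolute zero.
for T in (0.5, 1.0, 1.5, 2.0, T_LAMBDA):
    rn = float(normal_fraction(T))
    print(f"T = {T:4.2f} K   rho_n/rho ≈ {rn:.3f}   rho_s/rho ≈ {1 - rn:.3f}")
```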
https://en.wikipedia.org/wiki/Two-fluid_model
In mathematics , a two-graph is a set of unordered triples chosen from a finite vertex set X , such that every unordered quadruple from X contains an even number of triples of the two-graph. A regular two-graph has the property that every pair of vertices lies in the same number of triples of the two-graph. Two-graphs have been studied because of their connection with equiangular lines and, for regular two-graphs, strongly regular graphs , and also finite groups because many regular two-graphs have interesting automorphism groups . A two-graph is not a graph and should not be confused with other objects called 2-graphs in graph theory , such as 2-regular graphs . On the set of vertices {1,...,6} the following collection of unordered triples is a two-graph: This two-graph is a regular two-graph since each pair of distinct vertices appears together in exactly two triples. Given a simple graph G = ( V , E ), the set of triples of the vertex set V whose induced subgraph has an odd number of edges forms a two-graph on the set V . Every two-graph can be represented in this way. [ 1 ] This example is referred to as the standard construction of a two-graph from a simple graph. As a more complex example, let T be a tree with edge set E . The set of all triples of E that are not contained in a path of T form a two-graph on the set E . [ 2 ] A two-graph is equivalent to a switching class of graphs and also to a (signed) switching class of signed complete graphs . Switching a set of vertices in a (simple) graph means reversing the adjacencies of each pair of vertices, one in the set and the other not in the set: thus the edge set is changed so that an adjacent pair becomes nonadjacent and a nonadjacent pair becomes adjacent. The edges whose endpoints are both in the set, or both not in the set, are not changed. Graphs are switching equivalent if one can be obtained from the other by switching. An equivalence class of graphs under switching is called a switching class . Switching was introduced by van Lint & Seidel (1966) and developed by Seidel; it has been called graph switching or Seidel switching , partly to distinguish it from switching of signed graphs . In the standard construction of a two-graph from a simple graph given above, two graphs will yield the same two-graph if and only if they are equivalent under switching, that is, they are in the same switching class. Let Γ be a two-graph on the set X . For any element x of X , define a graph with vertex set X having vertices y and z adjacent if and only if { x , y , z } is in Γ. In this graph, x will be an isolated vertex. This construction is reversible; given a simple graph G , adjoin a new element x to the set of vertices of G , retaining the same edge set, and apply the standard construction above. This two-graph is called the extension of G by x in design theoretic language . [ 3 ] In a given switching class of graphs of a regular two-graph, let Γ x be the unique graph having x as an isolated vertex (this always exists, just take any graph in the class and switch the open neighborhood of x ) without the vertex x . That is, the two-graph is the extension of Γ x by x . In the first example above of a regular two-graph, Γ x is a 5-cycle for any choice of x . [ 4 ] To a graph G there corresponds a signed complete graph Σ on the same vertex set, whose edges are signed negative if in G and positive if not in G . Conversely, G is the subgraph of Σ that consists of all vertices and all negative edges. 
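The standard construction above, and the defining even-quadruple property, are easy to check computationally. A minimal sketch (the function names and the choice of a 5-cycle plus an isolated vertex, which yields a regular two-graph of the kind discussed in the example above, are illustrative):

```python
from itertools import combinations

def two_graph_from_graph(vertices, edges):
    """Standard construction: the triples of vertices whose induced subgraph
    has an odd number of edges form a two-graph on the vertex set."""
    edge_set = {frozenset(e) for e in edges}
    return {
        frozenset(t)
        for t in combinations(vertices, 3)
        if sum(frozenset(p) in edge_set for p in combinations(t, 2)) % 2 == 1
    }

def is_two_graph(vertices, triples):
    """Defining property: every 4-element subset contains an even number of triples."""
    return all(
        sum(frozenset(t) in triples for t in combinations(q, 3)) % 2 == 0
        for q in combinations(vertices, 4)
    )

# A 5-cycle on {1,...,5} together with the isolated vertex 6.
vertices = range(1, 7)
five_cycle = [(1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
triples = two_graph_from_graph(vertices, five_cycle)
print(len(triples), is_two_graph(vertices, triples))   # 10 triples, property holds
```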
The two-graph of G can also be defined as the set of triples of vertices that support a negative triangle (a triangle with an odd number of negative edges) in Σ. Two signed complete graphs yield the same two-graph if and only if they are equivalent under switching. Switching of G and of Σ are related: switching the same vertices in both yields a graph H and its corresponding signed complete graph. The adjacency matrix of a two-graph is the adjacency matrix of the corresponding signed complete graph; thus it is symmetric , is zero on the diagonal, and has entries ±1 off the diagonal. If G is the graph corresponding to the signed complete graph Σ, this matrix is called the (0, −1, 1)-adjacency matrix or Seidel adjacency matrix of G . The Seidel matrix has zero entries on the main diagonal, −1 entries for adjacent vertices and +1 entries for non-adjacent vertices. If graphs G and H are in a same switching class, the multisets of eigenvalues of the two Seidel adjacency matrices of G and H coincide, since the matrices are similar. [ 5 ] A two-graph on a set V is regular if and only if its adjacency matrix has just two distinct eigenvalues ρ 1 > 0 > ρ 2 say, where ρ 1 ρ 2 = 1 − | V |. [ 6 ] Every two-graph is equivalent to a set of lines in some dimensional euclidean space each pair of which meet in the same angle. The set of lines constructed from a two graph on n vertices is obtained as follows. Let −ρ be the smallest eigenvalue of the Seidel adjacency matrix , A , of the two-graph, and suppose that it has multiplicity n − d . Then the matrix ρ I + A is positive semi-definite of rank d and thus can be represented as the Gram matrix of the inner products of n vectors in euclidean d -space. As these vectors have the same norm (namely, ρ {\displaystyle {\sqrt {\rho }}} ) and mutual inner products ±1, any pair of the n lines spanned by them meet in the same angle φ where cos φ = 1/ρ. Conversely, any set of non-orthogonal equiangular lines in a euclidean space can give rise to a two-graph (see equiangular lines for the construction). [ 7 ] With the notation as above, the maximum cardinality n satisfies n ≤ d (ρ 2 − 1)/(ρ 2 − d ) and the bound is achieved if and only if the two-graph is regular. The two-graphs on X consisting of all possible triples of X and no triples of X are regular two-graphs and are considered to be trivial two-graphs. For non-trivial two-graphs on the set X , the two-graph is regular if and only if for some x in X the graph Γ x is a strongly regular graph with k = 2μ (the degree of any vertex is twice the number of vertices adjacent to both of any non-adjacent pair of vertices). If this condition holds for one x in X , it holds for all the elements of X . [ 8 ] It follows that a non-trivial regular two-graph has an even number of points. If G is a regular graph whose two-graph extension is Γ having n points, then Γ is a regular two-graph if and only if G is a strongly regular graph with eigenvalues k , r and s satisfying n = 2( k − r ) or n = 2( k − s ). [ 9 ]
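The Seidel adjacency matrix, the invariance of its spectrum under switching, and the two-eigenvalue criterion for regular two-graphs can likewise be verified numerically. A sketch using the same 5-cycle-plus-isolated-vertex example (vertex labels 0 to 5 and helper names are illustrative):

```python
import numpy as np

def seidel_matrix(n, edges):
    """Seidel (0, -1, +1) adjacency matrix of a graph on vertices 0..n-1:
    zero diagonal, -1 for adjacent pairs, +1 for non-adjacent pairs."""
    S = np.ones((n, n)) - np.eye(n)
    for i, j in edges:
        S[i, j] = S[j, i] = -1.0
    return S

def switch(n, edges, subset):
    """Seidel switching: reverse adjacency between `subset` and its complement."""
    edge_set = {frozenset(e) for e in edges}
    new_edges = []
    for i in range(n):
        for j in range(i + 1, n):
            crossing = (i in subset) != (j in subset)
            adjacent = frozenset((i, j)) in edge_set
            if adjacent != crossing:          # adjacency flips exactly on crossing pairs
                new_edges.append((i, j))
    return new_edges

n = 6
c5_plus_isolated = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # vertex 5 is isolated
S = seidel_matrix(n, c5_plus_isolated)

# This graph lies in the switching class of a regular two-graph, so S has just two
# distinct eigenvalues r1 > 0 > r2 with r1 * r2 = 1 - |V| = -5 (here +/- sqrt(5)).
print(np.round(np.linalg.eigvalsh(S), 3))

# The Seidel spectrum is a switching-class invariant.
S_switched = seidel_matrix(n, switch(n, c5_plus_isolated, subset={0, 2}))
print(np.allclose(np.linalg.eigvalsh(S), np.linalg.eigvalsh(S_switched)))   # True
```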
https://en.wikipedia.org/wiki/Two-graph
Two-hybrid screening (originally known as yeast two-hybrid system or Y2H ) is a molecular biology technique used to discover protein–protein interactions (PPIs) [ 1 ] and protein–DNA interactions [ 2 ] [ 3 ] by testing for physical interactions (such as binding) between two proteins or a single protein and a DNA molecule, respectively. The premise behind the test is the activation of downstream reporter gene (s) by the binding of a transcription factor onto an upstream activating sequence (UAS). For two-hybrid screening, the transcription factor is split into two separate fragments, called the DNA-binding domain (DBD or often also abbreviated as BD) and activating domain (AD). The BD is the domain responsible for binding to the UAS and the AD is the domain responsible for the activation of transcription . [ 1 ] [ 2 ] The Y2H is thus a protein-fragment complementation assay . Pioneered by Stanley Fields and Ok-Kyu Song in 1989, the technique was originally designed to detect protein–protein interactions using the Gal4 transcriptional activator of the yeast Saccharomyces cerevisiae . The Gal4 protein activated transcription of a gene involved in galactose utilization, which formed the basis of selection. [ 4 ] Since then, the same principle has been adapted to describe many alternative methods, including some that detect protein–DNA interactions or DNA-DNA interactions , as well as methods that use different host organisms such as Escherichia coli or mammalian cells instead of yeast. [ 3 ] [ 5 ] The key to the two-hybrid screen is that in most eukaryotic transcription factors, the activating and binding domains are modular and can function in proximity to each other without direct binding. [ 6 ] This means that even though the transcription factor is split into two fragments, it can still activate transcription when the two fragments are indirectly connected. The most common screening approach is the yeast two-hybrid assay. In this approach the researcher knows where each prey is located on the used medium (agar plates). Millions of potential interactions in several organisms have been screened in the latest decade using high-throughput screening systems (often using robots) and over thousands of interactions have been detected and categorized in databases as BioGRID . [ 7 ] [ 8 ] This system often utilizes a genetically engineered strain of yeast in which the biosynthesis of certain nutrients (usually amino acids or nucleic acids ) is lacking. When grown on media that lacks these nutrients, the yeast fail to survive. This mutant yeast strain can be made to incorporate foreign DNA in the form of plasmids . In yeast two-hybrid screening, separate bait and prey plasmids are simultaneously introduced into the mutant yeast strain or a mating strategy is used to get both plasmids in one host cell. [ 9 ] The second high-throughput approach is the library screening approach. In this set up the bait and prey harboring cells are mated in a random order. After mating and selecting surviving cells on selective medium the scientist will sequence the isolated plasmids to see which prey (DNA sequence) is interacting with the used bait. This approach has a lower rate of reproducibility and tends to yield higher amounts of false positives compared to the matrix approach. 
[ 9 ] Plasmids are engineered to produce a protein product in which the DNA-binding domain (BD) fragment is fused onto a protein while another plasmid is engineered to produce a protein product in which the activation domain (AD) fragment is fused onto another protein. The protein fused to the BD may be referred to as the bait protein, and is typically a known protein the investigator is using to identify new binding partners. The protein fused to the AD may be referred to as the prey protein and can be either a single known protein or a library of known or unknown proteins. In this context, a library may consist of a collection of protein-encoding sequences that represent all the proteins expressed in a particular organism or tissue, or may be generated by synthesising random DNA sequences. [ 3 ] Regardless of the source, they are subsequently incorporated into the protein-encoding sequence of a plasmid, which is then transfected into the cells chosen for the screening method. [ 3 ] This technique, when using a library, assumes that each cell is transfected with no more than a single plasmid and that, therefore, each cell ultimately expresses no more than a single member from the protein library. If the bait and prey proteins interact (i.e., bind), then the AD and BD of the transcription factor are indirectly connected, bringing the AD in proximity to the transcription start site and transcription of reporter gene(s) can occur. If the two proteins do not interact, there is no transcription of the reporter gene. In this way, a successful interaction between the fused protein is linked to a change in the cell phenotype. [ 1 ] The challenge of separating cells that express proteins that happen to interact with their counterpart fusion proteins from those that do not, is addressed in the following section. In any study, some of the protein domains, those under investigation, will be varied according to the goals of the study whereas other domains, those that are not themselves being investigated, will be kept constant. For example, in a two-hybrid study to select DNA-binding domains, the DNA-binding domain, BD, will be varied while the two interacting proteins, the bait and prey, must be kept constant to maintain a strong binding between the BD and AD. There are a number of domains from which to choose the BD, bait and prey and AD, if these are to remain constant. In protein–protein interaction investigations, the BD may be chosen from any of many strong DNA-binding domains such as Zif268 . [ 2 ] A frequent choice of bait and prey domains are residues 263–352 of yeast Gal11P with a N342V mutation [ 2 ] and residues 58–97 of yeast Gal4, [ 2 ] respectively. These domains can be used in both yeast- and bacterial-based selection techniques and are known to bind together strongly. [ 1 ] [ 2 ] The AD chosen must be able to activate transcription of the reporter gene, using the cell's own transcription machinery. Thus, the variety of ADs available for use in yeast-based techniques may not be suited to use in their bacterial-based analogues. The herpes simplex virus-derived AD, VP16 and yeast Gal4 AD have been used with success in yeast [ 1 ] whilst a portion of the α-subunit of E. coli RNA polymerase has been utilised in E. coli -based methods. [ 2 ] [ 3 ] Whilst powerfully activating domains may allow greater sensitivity towards weaker interactions, conversely, a weaker AD may provide greater stringency. 
A number of engineered genetic sequences must be incorporated into the host cell to perform two-hybrid analysis or one of its derivative techniques. The considerations and methods used in the construction and delivery of these sequences differ according to the needs of the assay and the organism chosen as the experimental background. There are two broad categories of hybrid library: random libraries and cDNA-based libraries. A cDNA library is constituted by the cDNA produced through reverse transcription of mRNA collected from specific cells of types of cell. This library can be ligated into a construct so that it is attached to the BD or AD being used in the assay. [ 1 ] A random library uses lengths of DNA of random sequence in place of these cDNA sections. A number of methods exist for the production of these random sequences, including cassette mutagenesis . [ 2 ] Regardless of the source of the DNA library, it is ligated into the appropriate place in the relevant plasmid/phagemid using the appropriate restriction endonucleases . [ 2 ] By placing the hybrid proteins under the control of IPTG -inducible lac promoters , they are expressed only on media supplemented with IPTG. Further, by including different antibiotic resistance genes in each genetic construct, the growth of non-transformed cells is easily prevented through culture on media containing the corresponding antibiotics. This is particularly important for counter selection methods in which a lack of interaction is needed for cell survival. [ 2 ] The reporter gene may be inserted into the E. coli genome by first inserting it into an episome , a type of plasmid with the ability to incorporate itself into the bacterial cell genome [ 2 ] with a copy number of approximately one per cell. [ 10 ] The hybrid expression phagemids can be electroporated into E. coli XL-1 Blue cells which after amplification and infection with VCS-M13 helper phage , will yield a stock of library phage. These phage will each contain one single-stranded member of the phagemid library. [ 2 ] Once the selection has been performed, the primary structure of the proteins which display the appropriate characteristics must be determined. This is achieved by retrieval of the protein-encoding sequences (as originally inserted) from the cells showing the appropriate phenotype. The phagemid used to transform E. coli cells may be "rescued" from the selected cells by infecting them with VCS-M13 helper phage. The resulting phage particles that are produced contain the single-stranded phagemids and are used to infect XL-1 Blue cells. [ 2 ] The double-stranded phagemids are subsequently collected from these XL-1 Blue cells, essentially reversing the process used to produce the original library phage. Finally, the DNA sequences are determined through dideoxy sequencing . [ 2 ] The Escherichia coli -derived Tet-R repressor can be used in line with a conventional reporter gene and can be controlled by tetracycline or doxicycline (Tet-R inhibitors). Thus the expression of Tet-R is controlled by the standard two-hybrid system but the Tet-R in turn controls (represses) the expression of a previously mentioned reporter such as HIS3 , through its Tet-R promoter. Tetracycline or its derivatives can then be used to regulate the sensitivity of a system utilising Tet-R. [ 1 ] Sensitivity may also be controlled by varying the dependency of the cells on their reporter genes. 
For example, this may be affected by altering the concentration of histidine in the growth medium for his3 -dependent cells and altering the concentration of streptomycin for aadA dependent cells. [ 2 ] [ 3 ] Selection-gene-dependency may also be controlled by applying an inhibitor of the selection gene at a suitable concentration. 3-Amino-1,2,4-triazole (3-AT) for example, is a competitive inhibitor of the HIS3 -gene product and may be used to titrate the minimum level of HIS3 expression required for growth on histidine-deficient media. [ 2 ] Sensitivity may also be modulated by varying the number of operator sequences in the reporter DNA. A third, non-fusion protein may be co-expressed with two fusion proteins. Depending on the investigation, the third protein may modify one of the fusion proteins or mediate or interfere with their interaction. [ 1 ] Co-expression of the third protein may be necessary for modification or activation of one or both of the fusion proteins. For example, S. cerevisiae possesses no endogenous tyrosine kinase. If an investigation involves a protein that requires tyrosine phosphorylation, the kinase must be supplied in the form of a tyrosine kinase gene. [ 1 ] The non-fusion protein may mediate the interaction by binding both fusion proteins simultaneously, as in the case of ligand-dependent receptor dimerization. [ 1 ] For a protein with an interacting partner, its functional homology to other proteins may be assessed by supplying the third protein in non-fusion form, which then may or may not compete with the fusion-protein for its binding partner. Binding between the third protein and the other fusion protein will interrupt the formation of the reporter expression activation complex and thus reduce reporter expression, leading to the distinguishing change in phenotype. [ 1 ] One limitation of classic yeast two-hybrid screens is that they are limited to soluble proteins. It is therefore impossible to use them to study the protein–protein interactions between insoluble integral membrane proteins . The split-ubiquitin system provides a method for overcoming this limitation. [ 11 ] In the split-ubiquitin system, two integral membrane proteins to be studied are fused to two different ubiquitin moieties: a C-terminal ubiquitin moiety ("Cub", residues 35–76) and an N-terminal ubiquitin moiety ("Nub", residues 1–34). These fused proteins are called the bait and prey, respectively. In addition to being fused to an integral membrane protein, the Cub moiety is also fused to a transcription factor (TF) that can be cleaved off by ubiquitin specific proteases . Upon bait–prey interaction, Nub and Cub-moieties assemble, reconstituting the split-ubiquitin. The reconstituted split-ubiquitin molecule is recognized by ubiquitin specific proteases, which cleave off the transcription factor, allowing it to induce the transcription of reporter genes . [ 12 ] Zolghadr and co-workers presented a fluorescent two-hybrid system that uses two hybrid proteins that are fused to different fluorescent proteins as well as LacI, the lac repressor . The structure of the fusion proteins looks like this: FP2-LacI-bait and FP1-prey where the bait and prey proteins interact and bring the fluorescent proteins (FP1 = GFP , FP2= mCherry ) in close proximity at the binding site of the LacI protein in the host cell genome. [ 13 ] The system can also be used to screen for inhibitors of protein–protein interactions. 
[ 14 ] While the original Y2H system used a reconstituted transcription factor, other systems create enzymatic activities to detect PPIs. For instance, the KInase Substrate Sensor ("KISS") is a mammalian two-hybrid approach designed to map intracellular PPIs. Here, a bait protein is fused to a kinase -containing portion of TYK2 and a prey is coupled to a gp130 cytokine receptor fragment. When bait and prey interact, TYK2 phosphorylates STAT3 docking sites on the prey chimera, which ultimately leads to activation of a reporter gene . [ 15 ] The one-hybrid variation of this technique is designed to investigate protein–DNA interactions and uses a single fusion protein in which the AD is linked directly to the binding domain. The binding domain in this case, however, is not necessarily of fixed sequence as in two-hybrid protein–protein analysis but may be constituted by a library. This library can be selected against the desired target sequence, which is inserted in the promoter region of the reporter gene construct. In a positive-selection system, a binding domain that successfully binds the UAS and allows transcription is thus selected. [ 1 ] Note that selection of DNA-binding domains is not necessarily performed using a one-hybrid system, but may also be performed using a two-hybrid system in which the binding domain is varied and the bait and prey proteins are kept constant. [ 2 ] [ 3 ] RNA-protein interactions have been investigated through a three-hybrid variation of the two-hybrid technique. In this case, a hybrid RNA molecule serves to adjoin together the two protein fusion domains—which are not intended to interact with each other but rather with the intermediary RNA molecule (through their RNA-binding domains). [ 1 ] Techniques involving non-fusion proteins that perform a similar function, as described in the 'non-fusion proteins' section above, may also be referred to as three-hybrid methods. Simultaneous use of the one- and two-hybrid methods (that is, simultaneous protein–protein and protein–DNA interaction) is known as a one-two-hybrid approach and is expected to increase the stringency of the screen. [ 1 ] Although theoretically any living cell might be used as the background to a two-hybrid analysis, there are practical considerations that dictate which is chosen. The chosen cell line should be relatively cheap and easy to culture and sufficiently robust to withstand application of the investigative methods and reagents. [ 1 ] The latter is especially important for doing high-throughput studies . Therefore, the yeast S. cerevisiae has been the main host organism for two-hybrid studies. However, it is not always the ideal system to study interacting proteins from other organisms. [ 16 ] Yeast cells often do not have the same post-translational modifications, have a different codon usage, or lack certain proteins that are important for the correct expression of the proteins. To cope with these problems, several novel two-hybrid systems have been developed. Depending on the system used, agar plates or a specific growth medium are used to grow the cells and allow selection for interaction. The most commonly used method is agar plating, where cells are plated on selective medium to see if interaction takes place. Cells harboring no interacting proteins should not survive on this selective medium. [ 7 ] [ 17 ] The yeast S. cerevisiae was the model organism used during the two-hybrid technique's inception. It is commonly known as the Y2H system. 
It has several characteristics that make it a robust organism to host the interaction, including the ability to form tertiary protein structures, neutral internal pH, enhanced ability to form disulfide bonds and reduced-state glutathione among other cytosolic buffer factors, to maintain a hospitable internal environment. [ 1 ] The yeast model can be manipulated through non-molecular techniques and its complete genome sequence is known. [ 1 ] Yeast systems are tolerant of diverse culture conditions and harsh chemicals that could not be applied to mammalian tissue cultures. [ 1 ] A number of yeast strains have been created specifically for Y2H screens, e.g. Y187 [ 18 ] and AH109 , [ 19 ] both produced by Clontech . Yeast strains R2HMet and BK100 have also been used. [ 20 ] C. albicans is a yeast with a particular feature: it translates the CUG codon into serine rather than leucine. Due to this different codon usage it is difficult to use the model system S. cerevisiae as a Y2H to check for protein-protein interactions using C. albicans genes. To provide a more native environment a C. albicans two-hybrid (C2H) system was developed. With this system protein-protein interactions can be studied in C. albicans itself. [ 21 ] [ 22 ] A recent addition was the creation of a high-throughput system. [ 23 ] [ 24 ] [ 25 ] Bacterial two hybrid methods (B2H or BTH) are usually carried out in E. coli and have some advantages over yeast-based systems. For instance, the higher transformation efficiency and faster rate of growth lends E. coli to the use of larger libraries (in excess of 10 8 ). [ 2 ] The absence of requirements for a nuclear localisation signal to be included in the protein sequence and the ability to study proteins that would be toxic to yeast may also be major factors to consider when choosing an experimental background organism. [ 2 ] The methylation activity of certain E. coli DNA methyltransferase proteins may interfere with some DNA-binding protein selections. If this is anticipated, the use of an E. coli strain that is defective for a particular methyltransferase may be an obvious solution. [ 2 ] The B2H may not be ideal when studying eukaryotic protein-protein interactions (e.g. human proteins) as proteins may not fold as in eukaryotic cells or may lack other processing. In recent years a mammalian two hybrid (M2H) system has been designed to study mammalian protein-protein interactions in a cellular environment that closely mimics the native protein environment. [ 26 ] Transiently transfected mammalian cells are used in this system to find protein-protein interactions. [ 27 ] [ 28 ] Using a mammalian cell line to study mammalian protein-protein interactions gives the advantage of working in a more native context. [ 5 ] The post-translational modifications, phosphorylation, acylation and glycosylation are similar. The intracellular localization of the proteins is also more correct compared to using a yeast two hybrid system. [ 29 ] [ 30 ] It is also possible with the mammalian two-hybrid system to study signal inputs. [ 31 ] Another big advantage is that results can be obtained within 48 hours after transfection. [ 5 ] In 2005 a two hybrid system in plants was developed. Using protoplasts of A. thaliana protein-protein interactions can be studied in plants. This way the interactions can be studied in their native context. In this system the GAL4 AD and BD are under the control of the strong 35S promoter. Interaction is measured using a GUS reporter. 
To enable high-throughput screening, the vectors were made Gateway compatible . The system is known as the protoplast two-hybrid (P2H) system. [ 32 ] The sea hare A. californica is a model organism in neurobiology, used to study, among other things, the molecular mechanisms of long-term memory. To study interactions important in neurology in a more native environment, a two-hybrid system has been developed in A. californica neurons. A GAL4 AD and BD are used in this system. [ 33 ] [ 34 ] An insect two-hybrid (I2H) system was developed in a silkworm cell line from the larva or caterpillar of the domesticated silk moth, Bombyx mori (BmN4 cells). This system uses the GAL4 BD and the activation domain of mouse NF-κB P65. Both are under the control of the OpIE2 promoter. [ 35 ] By changing specific amino acids by mutating the corresponding DNA base-pairs in the plasmids used, the importance of those amino acid residues in maintaining the interaction can be determined. [ 1 ] After using a bacterial cell-based method to select DNA-binding proteins, it is necessary to check the specificity of these domains as there is a limit to the extent to which the bacterial cell genome can act as a sink for domains with an affinity for other sequences (or indeed, a general affinity for DNA). [ 2 ] Protein–protein signalling interactions are suitable therapeutic targets due to their specificity and pervasiveness. The random drug discovery approach uses compound banks that comprise random chemical structures, and requires a high-throughput method to test these structures against their intended target. [ 1 ] [ 17 ] The cell chosen for the investigation can be specifically engineered to mirror the molecular aspect that the investigator intends to study and then used to identify new human or animal therapeutics or anti-pest agents. [ 1 ] [ 17 ] By determining the interaction partners of unknown proteins, the possible functions of these new proteins may be inferred. [ 1 ] This can be done using a single known protein against a library of unknown proteins or, conversely, by selecting from a library of known proteins using a single protein of unknown function. [ 1 ] To select zinc finger proteins (ZFPs) for protein engineering , methods adapted from the two-hybrid screening technique have been used with success. [ 2 ] [ 3 ] A ZFP is itself a DNA-binding protein used in the construction of custom DNA-binding domains that bind to a desired DNA sequence. [ 36 ] By using a selection gene with the desired target sequence included in the UAS, and randomising the relevant amino acid sequences to produce a ZFP library, cells that host a DNA-ZFP interaction with the required characteristics can be selected. Each ZFP typically recognises only 3–4 base pairs, so to prevent recognition of sites outside the UAS, the randomised ZFP is engineered into a 'scaffold' consisting of another two ZFPs of constant sequence. The UAS is thus designed to include the target sequence of the constant scaffold in addition to the sequence for which a ZFP is selected. [ 2 ] [ 3 ] A number of other DNA-binding domains may also be investigated using this system. [ 2 ] Two-hybrid screens are, however, prone to a high error rate, producing both false positives and false negatives; the reasons lie in the characteristics of the screen itself, and each source of error alone can give rise to false results. Due to the combined effects of all error sources, yeast two-hybrid results have to be interpreted with caution. 
The probability of generating false positives means that all interactions should be confirmed by a high-confidence assay, for example co-immunoprecipitation of the endogenous proteins, which is difficult for large-scale protein–protein interaction data. Alternatively, Y2H data can be verified using multiple Y2H variants [ 38 ] or bioinformatics techniques. The latter test whether interacting proteins are expressed at the same time, share some common features (such as gene ontology annotations or certain network topologies ), or have homologous interactions in other species. [ 39 ]
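As one illustration of the bioinformatic checks mentioned above, a candidate pair's shared gene ontology annotations can serve as a rough plausibility filter. The Python sketch below computes a Jaccard overlap between hand-picked GO term sets; the annotations and the threshold are illustrative assumptions, not part of any standard pipeline.

```python
# Minimal sketch of one bioinformatic plausibility filter: interacting proteins
# often share Gene Ontology (GO) annotations, so a low annotation overlap can
# flag a candidate Y2H pair for closer scrutiny. The GO term sets below are
# hypothetical placeholders, not curated annotations.

def jaccard(a, b):
    """Jaccard similarity between two sets of GO terms."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

go_terms = {
    "proteinA": {"GO:0006355", "GO:0005634", "GO:0003677"},
    "proteinB": {"GO:0006355", "GO:0005634", "GO:0046983"},
    "proteinC": {"GO:0016020", "GO:0005215"},
}

candidate_pairs = [("proteinA", "proteinB"), ("proteinA", "proteinC")]
for p, q in candidate_pairs:
    score = jaccard(go_terms[p], go_terms[q])
    flag = "plausible" if score >= 0.2 else "review"   # threshold is arbitrary
    print(f"{p}-{q}: overlap={score:.2f} -> {flag}")
```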
https://en.wikipedia.org/wiki/Two-hybrid_screening
Two-level game theory is a political model , derived from game theory , that illustrates the domestic-international interactions between states. It was originally introduced in 1988 by Robert D. Putnam in his publication "Diplomacy and Domestic Politics: The Logic of Two-Level Games". [ 1 ] Putnam had been involved in research around the G7 summits between 1976 and 1979. However, at the fourth summit , held in Bonn in 1978, he observed a qualitative shift in how the negotiations worked. He noted that attending countries agreed to adopt policies in contrast to what they might have in the absence of their international counterparts. However, the agreement was only viable due to strong domestic influence - within each international government - in favour of implementing the agreement internationally. This culminated in international policy co-ordination as a result of the entanglement of international and domestic agendas. [ 1 ] The model views international negotiations between states as consisting of simultaneous negotiations at two levels. At the international level, the national government (i.e., chief negotiator) seeks an agreement, with an opposing country, relating to topics of concern. At the domestic level, societal actors pressure the chief negotiator for favourable policies. The chief negotiator absorbs the concern of societal actors and builds coalitions with them. Simultaneously, the chief negotiator then seeks to maximise the domestic concerns, yet minimise the impact of any contrary views from the opposing country. [ 1 ] At the international level, countries will approach negotiations with a defined set of objectives. It is expected that chief negotiators of both states arrive at a range of outcomes where their objectives overlap. However, before committing to this, the chief negotiator must seek approval from domestic actors. This ratification can be in the form of both formal voting requirements or informal methods, such as public opinion polls. [ 3 ] Due to a potential difference in domestic concerns, the full range of agreement outcomes at the international level may not necessarily be approved. As such, the possible agreement outcomes at the international level that are accepted by domestic interest groups is defined as a state's "win-set". [ 1 ] International agreements only occur when there is an overlap between the win-sets of the states involved in the international negotiations. [ 4 ] Win-set size plays an important role in determining the success of negotiations at the international level. Naturally, the larger the win-set, the more likely the win-sets will overlap, potentially leading to successful negotiations. Conversely, negotiations are more likely to fail when opposing state's win-sets are smaller. [ 1 ] The perceived win-set size, however, is just as important as the actual win-set size. If a state's win-set size is perceived to be large, the opposing state will, therefore, have greater bargaining power . Alternatively, if a state's win-set is perceived to be rather small, this can lead to them attaining an advantage in negotiations, whereby they can influence the opposing state to concede more in order for negotiations to be a success. [ 1 ] In the context of climate change, all countries are negatively affected. But, when compared to the costs of steps taken to mitigate this, the majority of states benefit from the actions of a minority of large contributors. 
This lop-sided cost-to-benefit ratio creates an incentive for some states to neglect their responsibilities and free-ride on the actions taken by others. [ 5 ] As a classic case of the Prisoner's Dilemma , states are therefore more incentivised to do nothing rather than to contribute to mitigating climate change. This unequal burden-sharing has led to varying conceptions between states of what is fair under the Paris Agreement , resulting in both small and large countries utilising their negotiating assets to arrive at an agreement. [ 5 ] However, as with any two-level game, domestic forces influence a state's win-set, which affects the ability to negotiate an outcome at the international level. A recent example of this is the United States withdrawal from the Paris Agreement , which was supported by many Republicans as well as domestic interest groups aligned with the first Trump Administration . [ 6 ] In the period leading up to the Falklands War , Anglo-Argentine negotiations resulted in several tentative agreements. The failure of domestic political forces to ratify these agreements meant the win-sets of the two countries did not overlap. [ 1 ] This political science article is a stub . You can help Wikipedia by expanding it .
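The win-set logic described above can be made concrete with a toy calculation: treat each state's win-set as an interval of acceptable outcomes on a single negotiating dimension and check whether the intervals intersect. The following Python sketch uses invented numbers purely for illustration.

```python
# Minimal sketch of the win-set overlap idea: model each state's win-set as a
# range of acceptable agreement outcomes on a single negotiating dimension
# (e.g. an emissions-reduction percentage). Numbers are invented.

def win_set_overlap(a, b):
    """Return the overlapping range of two win-sets (lo, hi), or None."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

state_1 = (10, 40)   # domestic ratification accepts cuts of 10-40%
state_2 = (30, 60)   # domestic ratification accepts cuts of 30-60%

print("agreement possible in:", win_set_overlap(state_1, state_2))  # (30, 40)

# Shrinking a win-set (e.g. hardening domestic opposition) can eliminate the
# overlap, and with it the possibility of agreement at the international level.
print(win_set_overlap((10, 25), state_2))   # None
```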
https://en.wikipedia.org/wiki/Two-level_game_theory
In decision theory , economics , and finance , a two-moment decision model is a model that describes or prescribes the process of making decisions in a context in which the decision-maker is faced with random variables whose realizations cannot be known in advance, and in which choices are made based on knowledge of two moments of those random variables. The two moments are almost always the mean—that is, the expected value , which is the first moment about zero—and the variance , which is the second moment about the mean (or the standard deviation , which is the square root of the variance). The most well-known two-moment decision model is that of modern portfolio theory , which gives rise to the decision portion of the Capital Asset Pricing Model ; these employ mean-variance analysis , and focus on the mean and variance of a portfolio's final value. Suppose that all relevant random variables are in the same location-scale family , meaning that the distribution of every random variable is the same as the distribution of some linear transformation of any other random variable. Then for any von Neumann–Morgenstern utility function , using a mean-variance decision framework is consistent with expected utility maximization, [ 1 ] [ 2 ] as illustrated in example 1: Example 1: [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] Let there be one risky asset with random return r {\displaystyle r} , and one riskfree asset with known return r f {\displaystyle r_{f}} , and let an investor's initial wealth be w 0 {\displaystyle w_{0}} . If the amount q {\displaystyle q} , the choice variable, is to be invested in the risky asset and the amount w 0 − q {\displaystyle w_{0}-q} is to be invested in the safe asset, then, contingent on q {\displaystyle q} , the investor's random final wealth will be w = ( w 0 − q ) r f + q r {\displaystyle w=(w_{0}-q)r_{f}+qr} . Then for any choice of q {\displaystyle q} , w {\displaystyle w} is distributed as a location-scale transformation of r {\displaystyle r} . If we define random variable x {\displaystyle x} as equal in distribution to w − μ w σ w , {\displaystyle {\tfrac {w-\mu _{w}}{\sigma _{w}}},} then w {\displaystyle w} is equal in distribution to μ w + σ w x {\displaystyle \mu _{w}+\sigma _{w}x} , where μ represents an expected value and σ represents a random variable's standard deviation (the square root of its second moment). Thus we can write expected utility in terms of two moments of w {\displaystyle w} : where u ( ⋅ ) {\displaystyle u(\cdot )} is the von Neumann–Morgenstern utility function , f ( x ) {\displaystyle f(x)} is the density function of x {\displaystyle x} , and v ( ⋅ , ⋅ ) {\displaystyle v(\cdot ,\cdot )} is the derived mean-standard deviation choice function , which depends in form on the density function f . The von Neumann–Morgenstern utility function is assumed to be increasing, implying that more wealth is preferred to less, and it is assumed to be concave, which is the same as assuming that the individual is risk averse . It can be shown that the partial derivative of v with respect to μ w is positive, and the partial derivative of v with respect to σ w is negative; thus more expected wealth is always liked, and more risk (as measured by the standard deviation of wealth) is always disliked. A mean-standard deviation indifference curve is defined as the locus of points ( σ w , μ w ) with σ w plotted horizontally, such that E u ( w ) has the same value at all points on the locus. 
Then the derivatives of v imply that every indifference curve is upward sloped: that is, along any indifference curve dμ w / d σ w > 0. Moreover, it can be shown [ 3 ] that all such indifference curves are convex: along any indifference curve, d 2 μ w / d (σ w ) 2 > 0. Example 2: The portfolio analysis in example 1 can be generalized. If there are n risky assets instead of just one, and if their returns are jointly elliptically distributed , then all portfolios can be characterized completely by their mean and variance—that is, any two portfolios with identical mean and variance of portfolio return have identical distributions of portfolio return—and all possible portfolios have return distributions that are location-scale-related to each other. [ 11 ] [ 12 ] Thus portfolio optimization can be implemented using a two-moment decision model. Example 3: Suppose that a price-taking , risk-averse firm must commit to producing a quantity of output q before observing the market realization p of the product's price. [ 13 ] Its decision problem is to choose q so as to maximize the expected utility of profit: where E is the expected value operator, u is the firm's utility function, c is its variable cost function , and g is its fixed cost . All possible distributions of the firm's random revenue pq , based on all possible choices of q , are location-scale related; so the decision problem can be framed in terms of the expected value and variance of revenue. If the decision-maker is not an expected utility maximizer , decision-making can still be framed in terms of the mean and variance of a random variable if all alternative distributions for an unpredictable outcome are location-scale transformations of each other. [ 14 ]
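Under the location-scale assumption used in Example 1, the reduction of expected utility to a function of the mean and standard deviation of final wealth can be written compactly as below; the firm's objective from Example 3 is included in the same sketch. These are the standard forms implied by the definitions above, stated here for reference.

```latex
% Expected utility written in terms of the mean and standard deviation of
% final wealth w, given w = mu_w + sigma_w x with x having density f
% (the location-scale assumption of Example 1):
\[
  \mathrm{E}\,u(w) \;=\; \int_{-\infty}^{\infty} u\!\left(\mu_{w} + \sigma_{w}\,x\right) f(x)\,dx
  \;\equiv\; v\!\left(\mu_{w}, \sigma_{w}\right).
\]
% The price-taking firm's problem of Example 3: choose output q before the
% random price p is observed, with variable cost c(q) and fixed cost g:
\[
  \max_{q}\; \mathrm{E}\,u\!\left(p\,q - c(q) - g\right).
\]
```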
https://en.wikipedia.org/wiki/Two-moment_decision_model
In fluid mechanics , two-phase flow is a flow of gas and liquid — a particular example of multiphase flow . Two-phase flow can occur in various forms, such as flows transitioning from pure liquid to vapor as a result of external heating , separated flows, and dispersed two-phase flows where one phase is present in the form of particles, droplets , or bubbles in a continuous carrier phase (i.e. gas or liquid). The widely accepted method of categorizing two-phase flows is to consider the velocity of each phase as if no other phase were present. The resulting parameter is a hypothetical quantity called the superficial velocity . Historically, probably the most commonly studied cases of two-phase flow are in large-scale power systems. Coal and gas-fired power stations used very large boilers to produce steam for use in turbines . In such cases, pressurised water is passed through heated pipes and it changes to steam as it moves through the pipe. The design of boilers requires a detailed understanding of two-phase flow heat-transfer and pressure drop behaviour, which is significantly different from the single-phase case. Even more critically, nuclear reactors use water to remove heat from the reactor core using two-phase flow. A great deal of study has been performed on the nature of two-phase flow in such cases, so that engineers can design against possible failures in pipework, loss of pressure, and so on (a loss-of-coolant accident (LOCA)). [ 1 ] Another case where two-phase flow can occur is in pump cavitation . Here a pump is operating close to the vapor pressure of the fluid being pumped. If pressure drops further, which can happen locally near the vanes of the pump, for example, then a phase change can occur and gas will be present in the pump. Similar effects can also occur on marine propellers; wherever it occurs, it is a serious problem for designers. When the vapor bubble collapses, it can produce very large pressure spikes, which over time will cause damage to the propeller or turbine. The above two-phase flow cases are for a single fluid occurring by itself as two different phases, such as steam and water. The term 'two-phase flow' is also applied to mixtures of different fluids having different phases, such as air and water, or oil and natural gas. Sometimes even three -phase flow is considered, such as in oil and gas pipelines where there might be a significant fraction of solids. Although oil and water are not strictly distinct phases (since they are both liquids) they are sometimes considered as a two-phase flow; and the combination of oil, gas and water (e.g. the flow from an offshore oil well) may also be considered a three-phase flow. Other interesting areas where two-phase flow is studied include water electrolysis , [ 2 ] climate systems such as clouds , [ 1 ] and groundwater flow, in which the movement of water and air through the soil is studied. Other examples of two-phase flow include bubbles , rain , waves on the sea , foam , fountains , mousse , cryogenics , and oil slicks . One final example is the electrical explosion of metal. Several features make two-phase flow an interesting and challenging branch of fluid mechanics. Additional exhaustive information, including applied mathematical models, can be found in the literature. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] Gurgling is a characteristic sound made by unstable two-phase fluid flow, for example, as liquid is poured from a bottle, or during gargling . Modelling of two-phase flow is still under development, and a number of different methods are in use.
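As a small illustration of the superficial-velocity idea mentioned above, the sketch below computes the superficial velocity of each phase from its volumetric flow rate and the pipe cross-section; the pipe diameter and flow rates are arbitrary illustrative values.

```python
# Minimal sketch of the superficial-velocity idea: each phase's superficial
# velocity is the velocity it would have if it flowed alone through the full
# pipe cross-section. Numbers are illustrative only.
import math

pipe_diameter = 0.05                      # m
area = math.pi * pipe_diameter**2 / 4     # pipe cross-sectional area, m^2

q_gas = 0.002      # gas volumetric flow rate, m^3/s
q_liquid = 0.001   # liquid volumetric flow rate, m^3/s

j_gas = q_gas / area        # superficial gas velocity, m/s
j_liquid = q_liquid / area  # superficial liquid velocity, m/s
j_total = j_gas + j_liquid  # mixture superficial velocity, m/s

# Fraction of the volumetric flow carried by the gas phase.
beta = q_gas / (q_gas + q_liquid)

print(f"j_gas = {j_gas:.2f} m/s, j_liquid = {j_liquid:.2f} m/s, "
      f"j_total = {j_total:.2f} m/s, gas flow fraction = {beta:.2f}")
```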
https://en.wikipedia.org/wiki/Two-phase_flow
Two-photon excitation microscopy ( TPEF or 2PEF ) is a fluorescence imaging technique that is particularly well-suited to image scattering living tissue of up to about one millimeter in thickness. Unlike traditional fluorescence microscopy , where the excitation wavelength is shorter than the emission wavelength, two-photon excitation requires simultaneous excitation by two photons with longer wavelength than the emitted light. The laser is focused onto a specific location in the tissue and scanned across the sample to sequentially produce the image. Due to the non-linearity of two-photon excitation, mainly fluorophores in the micrometer-sized focus of the laser beam are excited, which results in the spatial resolution of the image. This contrasts with confocal microscopy , where the spatial resolution is produced by the interaction of excitation focus and the confined detection with a pinhole. Two-photon excitation microscopy typically uses near-infrared (NIR) excitation light which can also excite fluorescent dyes . Using infrared light minimizes scattering in the tissue because infrared light is scattered less in typical biological tissues. Due to the multiphoton absorption, the background signal is strongly suppressed. Both effects lead to an increased penetration depth for this technique. Two-photon excitation can be a superior alternative to confocal microscopy due to its deeper tissue penetration, efficient light detection, and reduced photobleaching . [ 1 ] [ 2 ] Two-photon excitation employs two-photon absorption , a concept first described by Maria Goeppert Mayer (1906–1972) in her doctoral dissertation in 1931, [ 3 ] and first observed in 1961 in a CaF 2 :Eu 2+ crystal using laser excitation by Wolfgang Kaiser . [ 4 ] Isaac Abella showed in 1962 in caesium vapor that two-photon excitation of single atoms is possible. [ 5 ] Two-photon excitation fluorescence microscopy has similarities to other confocal laser microscopy techniques such as laser scanning confocal microscopy and Raman microscopy . These techniques use focused laser beams scanned in a raster pattern to generate images, and both have an optical sectioning effect. Unlike confocal microscopes, multiphoton microscopes do not contain pinhole apertures that give confocal microscopes their optical sectioning quality. The optical sectioning produced by multiphoton microscopes is a result of the point spread function of the excitation. The concept of two-photon excitation is based on the idea that two photons, of comparably lower photon energy than needed for one-photon excitation, can also excite a fluorophore in one quantum event. Each photon carries approximately half the energy necessary to excite the molecule. The emitted photon is at a higher energy (shorter wavelength) than either of the two exciting photons. The probability of the near-simultaneous absorption of two photons is extremely low. Therefore, a high peak flux of excitation photons is typically required, usually generated by femtosecond pulsed laser . For example, the same average laser power but without pulsing results in no detectable fluorescence compared to fluorescence generated by the pulsed laser via the two-photon effect. The longer wavelength, lower energy (typically infrared) excitation lasers of multiphoton microscopes are well-suited to use in imaging live cells as they cause less damage than the short-wavelength lasers typically used for single-photon excitation, so living tissues may be observed for longer periods with fewer toxic effects. 
The most commonly used fluorophores have excitation spectra in the 400–500 nm range, whereas the laser used to excite the two-photon fluorescence lies in the ~700–1100 nm (infrared) range produced by Ti-sapphire lasers . If the fluorophore absorbs two infrared photons simultaneously, it will absorb enough energy to be raised into the excited state. The fluorophore will then emit a single photon with a wavelength that depends on the type of fluorophore used (typically in the visible spectrum ). Because two photons are absorbed during the excitation of the fluorophore, the probability of fluorescent emission from the fluorophores increases quadratically with the excitation intensity. Therefore, much more two-photon fluorescence is generated where the laser beam is tightly focused than where it is more diffuse. Effectively, excitation is restricted to the tiny focal volume (~1 femtoliter), resulting in a high degree of rejection of out-of-focus objects. This localization of excitation is the key advantage compared to single-photon excitation microscopes, which need to employ elements such as pinholes to reject out-of-focus fluorescence. The fluorescence from the sample is then collected by a high-sensitivity detector, such as a photomultiplier tube. This observed light intensity becomes one pixel in the eventual image; the focal point is scanned throughout a desired region of the sample to form all the pixels of the image. Two-photon microscopy was pioneered and patented by Winfried Denk and James Strickler in the lab of Watt W. Webb at Cornell University in 1990. They combined the idea of two-photon absorption with the use of a laser scanner. [ 1 ] [ 6 ] In two-photon excitation microscopy an infrared laser beam is focused through an objective lens. The Ti-sapphire laser normally used has a pulse width of approximately 100 femtoseconds (fs) and a repetition rate of about 80 MHz , allowing the high photon density and flux required for two-photon absorption, and is tunable across a wide range of wavelengths. The use of infrared light to excite fluorophores in light-scattering tissue has added benefits. [ 7 ] Longer wavelengths are scattered to a lesser degree than shorter ones, which is a benefit to high-resolution imaging. In addition, these lower-energy photons are less likely to cause damage outside the focal volume. Compared to a confocal microscope, photon detection is much more effective since even scattered photons contribute to the usable signal. These benefits for imaging in scattering tissues were only recognized several years after the invention of two-photon excitation microscopy. [ 8 ] There are several caveats to using two-photon microscopy: The pulsed lasers needed for two-photon excitation are much more expensive than the continuous wave (CW) lasers used in confocal microscopy. The two-photon absorption spectrum of a molecule may vary significantly from its one-photon counterpart. Higher-order photodamage becomes a problem and bleaching scales with the square of the laser power, whereas it is linear for single-photon (confocal). For very thin objects such as isolated cells, single-photon (confocal) microscopes can produce images with higher optical resolution due to their shorter excitation wavelengths. In scattering tissue, on the other hand, the superior optical sectioning and light detection capabilities of the two-photon microscope result in better performance. 
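A rough back-of-the-envelope calculation shows why the pulsed excitation described above matters so much for a quadratic process. Using the quoted pulse parameters (~100 fs pulses at ~80 MHz) and an arbitrary illustrative average power, the sketch below estimates the peak power and the approximate two-photon signal enhancement relative to an unpulsed beam of the same average power.

```python
# Back-of-the-envelope sketch: two-photon signal scales with the square of the
# instantaneous intensity, so concentrating the same average power into short
# pulses boosts the signal enormously. Pulse parameters are those quoted above
# (~100 fs, ~80 MHz); the average power is an arbitrary illustration.

pulse_width = 100e-15      # s
rep_rate = 80e6            # Hz
average_power = 0.01       # W (illustrative)

duty_cycle = pulse_width * rep_rate         # fraction of time the laser is "on"
peak_power = average_power / duty_cycle     # W during a pulse

# For a quadratic (two-photon) process, the time-averaged signal gained by
# pulsing, relative to a CW beam of the same average power, is ~1/duty_cycle.
two_photon_gain = 1.0 / duty_cycle

print(f"duty cycle = {duty_cycle:.1e}")            # ~8e-6
print(f"peak power = {peak_power:.1f} W")          # ~1250 W
print(f"two-photon signal enhancement ~ {two_photon_gain:.0f}x")  # ~125,000x
```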
Two-photon microscopy has been applied in numerous fields, including physiology, neurobiology, embryology and tissue engineering . Even thin, nearly transparent tissues (such as skin cells) have been visualized with clear detail due to this technique. [ 9 ] Two-photon microscopy's high-speed imaging capabilities may also be utilized in noninvasive optical biopsy. [ 10 ] Two-photon microscopy has been aptly used for producing localized chemical reactions, [ 8 ] an effect that has also been used for two-photon-based lithography . Using two-photon fluorescence and second-harmonic generation –based microscopy, it was shown that organic porphyrin -type molecules can have different transition dipole moments for two-photon fluorescence and second harmonic generation, [ 11 ] which are otherwise thought to occur from the same transition dipole moment. [ 12 ] Non-degenerate two-photon excitation, i.e. using two photons of unequal wavelengths, was shown to increase the fluorescence of all tested small molecules and fluorescent proteins. [ 13 ] 2PEF has also proven to be very valuable for characterizing skin cancer , [ 14 ] in addition to monitoring breast cancer in vitro. [ 15 ] [ 16 ] It has also been shown to reveal tumor cell arrest, tumor cell–platelet interaction, tumor cell–leukocyte interaction and metastatic colonization processes. [ 17 ] 2PEF has been shown to be advantageous over other techniques, such as confocal microscopy, when it comes to long-term live-cell imaging of mammalian embryos. [ 18 ] 2PEF has also been used in the visualization of difficult-to-access cell types, especially kidney cells. [ 19 ] It has been used to better understand fluid dynamics and filtration. [ 20 ] 2PEF has also proven to be a valuable tool for monitoring correlates of viral ( SARS-CoV-2 ) infection in cell culture using a 2P-active Ca 2+ sensitive dye. [ 21 ] 2PEF, as well as the extension of this method to 3PEF, is used to characterize intact neural tissues in the brain of living and even behaving animals. In particular, the method is advantageous for calcium imaging of a neuron or populations of neurons, [ 22 ] for photopharmacology including localized uncaging of components such as glutamate, [ 23 ] GABA [ 24 ] or isomerization of photoswitchable drugs, [ 25 ] [ 26 ] and for the imaging of other genetically encoded sensors that report the concentration of neurotransmitters. [ 27 ] Currently, two-photon microscopy is widely used to image the live firing of neurons in model organisms including fruit flies ( Drosophila melanogaster ) , rats , songbirds , primates , ferrets , mice ( Mus musculus ) , and zebrafish . [ 28 ] [ 29 ] [ 30 ] The animals are typically head-fixed due to the size of the microscope and scan devices, but miniaturized microscopes are also being developed that enable imaging of neurons in moving, freely behaving animals. [ 31 ] [ 32 ] Simultaneous absorption of three or more photons is also possible, allowing for higher-order multiphoton excitation microscopy. [ 33 ] So-called "three-photon excitation fluorescence microscopy" (3PEF) is the most widely used technique after 2PEF, to which it is complementary. Localized isomerization of photoswitchable drugs in vivo using three-photon excitation has also been reported. [ 34 ] In general, all commonly used fluorescent proteins (CFP, GFP, YFP, RFP) and dyes can be excited in two-photon mode. 
Two-photon excitation spectra are often considerably broader, making it more difficult to excite fluorophores selectively by switching excitation wavelengths. Several green-, red- and NIR-emitting dyes (probes and reactive labels) with extremely high two-photon absorption cross-sections have been reported. [ 35 ] Due to their donor–acceptor–donor type structure, squaraine dyes such as Seta-670 , Seta-700 and Seta-660 exhibit very high two-photon absorption (2PA) efficiencies in comparison to other dyes. [ 35 ] [ 36 ] [ 37 ] SeTau-647 and SeTau-665 , a new type of squaraine-rotaxane , exhibit extremely high two-photon action cross-sections of up to 10,000 GM in the near-IR region, unsurpassed by any other class of organic dyes. [ 35 ]
https://en.wikipedia.org/wiki/Two-photon_excitation_microscopy
Two-point tensors , or double vectors , are tensor -like quantities which transform as Euclidean vectors with respect to each of their indices. They are used in continuum mechanics to transform between reference ("material") and present ("configuration") coordinates. [ 1 ] Examples include the deformation gradient and the first Piola–Kirchhoff stress tensor . As with many applications of tensors, Einstein summation notation is frequently used. To clarify this notation, capital indices are often used to indicate reference coordinates and lowercase for present coordinates. Thus, a two-point tensor will have one capital and one lower-case index; for example, A jM . A conventional tensor can be viewed as a transformation of vectors in one coordinate system to other vectors in the same coordinate system. In contrast, a two-point tensor transforms vectors from one coordinate system to another. That is, a conventional tensor actively transforms a vector u to a vector v such that v = Q u {\displaystyle v=Qu} , where v and u are measured in the same space and their coordinate representation is with respect to the same basis (denoted by the " e "). In contrast, a two-point tensor G will be written as G = G p M e p ⊗ E M {\displaystyle G=G_{pM}e_{p}\otimes E_{M}} and will transform a vector U in the E system to a vector v in the e system as v = G U {\displaystyle v=GU} . Suppose we have two coordinate systems, one primed and another unprimed, and that a vector's components transform between them as v p ′ = Q p q v q {\displaystyle v'_{p}=Q_{pq}v_{q}} . For tensors we then have T p q ′ = Q p r Q q s T r s {\displaystyle T'_{pq}=Q_{pr}Q_{qs}T_{rs}} , which is the routine tensor transformation. But a two-point tensor between these systems has one index referred to each basis, F = F p q e p ′ ⊗ e q {\displaystyle F=F_{pq}e'_{p}\otimes e_{q}} , and transforms with only a single factor of Q . The most mundane example of a two-point tensor is the transformation tensor, the Q in the above discussion. Writing an arbitrary vector out in full with respect to both bases and using the definition of the tensor product shows that Q can itself be written as Q = Q p q e p ′ ⊗ e q {\displaystyle Q=Q_{pq}e'_{p}\otimes e_{q}} , so that the transformation tensor is itself a two-point tensor between the primed and unprimed bases.
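As a short check of the closing claim above, the sketch below verifies that a tensor of the form Q_{pq} e'_p ⊗ e_q reproduces the component transformation rule for vectors (orthonormal bases assumed).

```latex
% Check that Q_{pq} e'_p (x) e_q acts as the component-transformation matrix.
% For any vector u = u_q e_q, with both bases orthonormal:
\[
  \left( Q_{pq}\, \mathbf{e}'_{p} \otimes \mathbf{e}_{q} \right) \mathbf{u}
  \;=\; Q_{pq}\, (\mathbf{e}_{q} \cdot \mathbf{u})\, \mathbf{e}'_{p}
  \;=\; Q_{pq}\, u_{q}\, \mathbf{e}'_{p} ,
\]
% so the components of the image in the primed basis are v'_p = Q_{pq} u_q,
% which is exactly the vector transformation rule quoted above.
```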
https://en.wikipedia.org/wiki/Two-point_tensor
The two-second rule is a rule of thumb by which a driver may maintain a safe trailing distance at any speed. [ 1 ] [ 2 ] The rule is that a driver should ideally stay at least two seconds behind any vehicle that is directly in front of his or her vehicle. It is intended for automobiles, although its general principle applies to other types of vehicles. Some areas recommend a three-second rule instead of a two-second rule to give an additional buffer. The rule is not a guide to safe stopping distance; it is more a guide to reaction times. The two-second rule tells a defensive driver the minimum distance needed to reduce the risk of collision under ideal driving conditions. The allotted two seconds are a safety buffer, allowing the following driver time to respond. The practice has been shown to considerably reduce the risk of collision and also the severity of any injuries if a collision occurs. It also helps to avoid tailgating and road rage for all drivers. A large risk of tailgating is that the collision avoidance time becomes much less than the driver reaction time. Driving instructors advocate that drivers always use the "two-second rule" regardless of speed or the type of road. During adverse weather , downhill slopes, or hazardous conditions such as black ice , it is important to maintain an even greater distance. The two-second rule is useful as it can be applied to any speed. Drivers can find it difficult to estimate the correct distance from the car in front, let alone remember the stopping distances that are required for a given speed, or to compute the equation on the fly. The two-second rule provides a simpler way of perceiving the distance. To estimate the time, a driver can wait until the rear end of the vehicle in front passes any distinct and fixed point on the roadway—e.g. a road sign, mailbox, line/crack/patch in the road. After the car ahead passes a given fixed point, the front of one's car should pass the same point no less than two seconds later. If the elapsed time is less than this, one should increase the distance, then repeat the method until the time is at least two seconds. One can count the duration of time simply by saying "zero... one... two". Some instructors suggest that drivers say "only a fool breaks the two-second rule". [ 3 ] At a normal speaking rate, this sentence takes approximately two seconds to say and serves as a reminder to the driver of the importance of the rule itself. The TailGuardian distance advisory decals recently adopted by Stagecoach Buses in the UK use the two-second rule in their calibration. [ 4 ] Advisory decals for 30, 50 and 70 mph are calibrated to be invisible outside those safe distances, only rendering themselves visible once the following car has entered the safety zone for the speed at which it is travelling. Some authorities regard two seconds as inadequate, and recommend a three-second rule. [ 5 ] German law requires a minimum 0.9-second distance, but when tested under relaxed conditions, [ 6 ] researchers found that their test subjects spent 41% of the test time at following distances under 0.9 seconds. The United States National Safety Council suggests that a three-second rule—with increases of one second per factor of driving difficulty—is more appropriate. 
Factors that make driving more difficult include poor lighting conditions (dawn and dusk are the most common); inclement weather (ice, rain, snow, fog, etc.), adverse traffic mix (heavy vehicles, slow vehicles, impaired drivers, pedestrians, bicyclists, etc.), and personal condition (fatigue, sleepiness, drug-related loss of response time, distracting thoughts, etc.). For example, a fatigued driver piloting a car in rainy weather at dusk would do well to observe a six-second following distance, rather than the basic three-second gap. [ 7 ]
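The gap implied by a fixed time headway grows linearly with speed, which is why the same counting rule works at any speed. The short sketch below tabulates the two-second and three-second gaps for a few illustrative speeds.

```python
# Worked arithmetic for the two-second (and three-second) rule: the gap that
# corresponds to a given time headway grows linearly with speed.

def following_distance(speed_kmh, headway_s):
    """Distance in metres covered in `headway_s` seconds at `speed_kmh`."""
    speed_ms = speed_kmh / 3.6
    return speed_ms * headway_s

for speed in (50, 80, 100, 120):              # km/h, illustrative values
    d2 = following_distance(speed, 2)
    d3 = following_distance(speed, 3)
    print(f"{speed:>3} km/h: 2 s gap = {d2:5.1f} m, 3 s gap = {d3:5.1f} m")
# e.g. at 100 km/h the two-second gap is roughly 56 m.
```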
https://en.wikipedia.org/wiki/Two-second_rule
A two-state trajectory (also termed two-state time trajectory or a trajectory with two states ) is a dynamical signal that fluctuates between two distinct values: ON and OFF, open and closed, + / − {\displaystyle +/-} , etc. Mathematically, the signal X ( t ) {\displaystyle X(t)} has, for every t , {\displaystyle t,} either the value X ( t ) = c o f f {\displaystyle X(t)=c_{\mathrm {off} }} or X ( t ) = c o n {\displaystyle X(t)=c_{\mathrm {on} }} . In most applications, the signal is stochastic ; nevertheless, it can have deterministic ON-OFF components. A completely deterministic two-state trajectory is a square wave . There are many ways one can create a two-state signal, e.g. flipping a coin repeatedly. A stochastic two-state trajectory is among the simplest stochastic processes. Extensions include three-state trajectories, higher discrete state trajectories, and continuous trajectories in any dimension. [ 1 ] Two-state trajectories are very common. Here, we focus on relevant trajectories in scientific experiments: these are seen in measurements in chemistry, physics, and the biophysics of individual molecules [ 2 ] [ 3 ] (e.g. measurements of protein dynamics and DNA and RNA dynamics , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] activity of ion channels , [ 9 ] [ 10 ] enzyme activity , [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] quantum dots [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] ). From these experiments, one aims at finding the correct model explaining the measured process. [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] Various relevant systems are described in what follows. Since an ion channel is either open or closed, recording the number of ions that pass through the channel as time elapses yields a two-state trajectory of current versus time. Several kinds of experiment on the activity of individual enzymes also produce a two-state signal. For example, one can use a substrate that becomes fluorescent (when excited with a laser pulse) only upon enzymatic turnover. So, each time the enzyme acts, we see a burst of photons during the time period that the product molecule is in the laser area. Structural changes of molecules are observed in various types of experiment; Förster resonance energy transfer is an example. In many cases one sees a time trajectory that fluctuates among several clearly defined states. Another system that fluctuates between an on state and an off state is a quantum dot . Here, the fluctuations arise because the dot is either in a state that emits photons or in a dark state that does not emit photons (the dynamics among the states are also influenced by its interactions with the surroundings).
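The simplest stochastic model behind such signals is a two-state (telegraph) process in which the dwell times in each state are exponentially distributed. The sketch below generates such a trajectory in Python; the switching rates are arbitrary illustrative values, and real systems need not have exponential (Markovian) dwell times.

```python
# Minimal sketch of a stochastic two-state (ON/OFF) trajectory: alternate
# between the two states with exponentially distributed dwell times, the
# simplest (Markovian) model of such a signal. Rates are illustrative.
import random

def two_state_trajectory(k_on, k_off, t_total, c_on=1.0, c_off=0.0):
    """Return (switch_times, states): the state value held from each switch time."""
    t, state = 0.0, c_off
    times, values = [0.0], [state]
    while True:
        rate = k_on if state == c_off else k_off      # rate of leaving current state
        t += random.expovariate(rate)                 # exponentially distributed dwell
        if t >= t_total:
            break
        state = c_on if state == c_off else c_off     # switch to the other state
        times.append(t)
        values.append(state)
    return times, values

random.seed(0)
times, values = two_state_trajectory(k_on=2.0, k_off=1.0, t_total=10.0)
print(list(zip(times[:5], values[:5])))
```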
https://en.wikipedia.org/wiki/Two-state_trajectory
The two-state vector formalism ( TSVF ) is a description of quantum mechanics in terms of a causal relation in which the present is caused by quantum states of the past and of the future taken in combination. The two-state vector formalism is one example of a time-symmetric interpretation of quantum mechanics (see Interpretations of quantum mechanics ). Time-symmetric interpretations of quantum mechanics were first suggested by Walter Schottky in 1921, [ 1 ] and later by several other scientists. The two-state vector formalism was first developed by Satosi Watanabe [ 2 ] in 1955, who named it the Double Inferential state-Vector Formalism (DIVF). Watanabe proposed that information given by forwards evolving quantum states is not complete; rather, both forwards and backwards evolving quantum states are required to describe a quantum state: a first state vector that evolves from the initial conditions towards the future, and a second state vector that evolves backwards in time from future boundary conditions. Past and future measurements, taken together, provide complete information about a quantum system. Watanabe's work was later rediscovered by Yakir Aharonov , Peter Bergmann and Joel Lebowitz in 1964, who later renamed it the Two-State Vector Formalism (TSVF). [ 3 ] Conventional prediction , as well as retrodiction , can be obtained formally by separating out the initial conditions (or, conversely, the final conditions) by performing sequences of coherence-destroying operations, thereby cancelling out the influence of the two state vectors. [ 4 ] The two-state vector is represented by: ⟨ Φ | | Ψ ⟩ {\displaystyle \langle \Phi |\ \ \ |\Psi \rangle } where the state ⟨ Φ | {\displaystyle \langle \Phi |} evolves backwards from the future and the state | Ψ ⟩ {\displaystyle |\Psi \rangle } evolves forwards from the past. In the example of the double-slit experiment , the first state vector evolves from the electron leaving its source, the second state vector evolves backwards from the final location of the electron on the detection screen, and the combination of forwards and backwards evolving state vectors determines what occurs when the electron passes the slits. The two-state vector formalism provides a time-symmetric description of quantum mechanics, and is constructed such as to be time-reversal invariant . [ 5 ] It can be employed in particular for analyzing pre- and post-selected quantum systems. Building on the notion of two-state, Reznik and Aharonov constructed a time-symmetric formulation of quantum mechanics that encompasses probabilistic observables as well as nonprobabilistic weak observables. [ 6 ] In view of the TSVF approach, and in order to allow information to be obtained about quantum systems that are both pre- and post-selected, Yakir Aharonov, David Albert and Lev Vaidman developed the theory of weak values . In TSVF, causality is time-symmetric; that is, the usual chain of causality is not simply reversed. Rather, TSVF combines causality both from the past (forward causation) and the future (backwards causation, or retrocausality ). Similarly as the de Broglie–Bohm theory , TSVF yields the same predictions as standard quantum mechanics. [ 7 ] Lev Vaidman emphasizes that TSVF fits very well with Hugh Everett 's many-worlds interpretation , [ 8 ] with the difference that initial and final conditions single out one branch of wavefunctions (our world). 
[ 9 ] The two-state vector formalism has similarities with the transactional interpretation of quantum mechanics proposed by John G. Cramer in 1986, although Ruth Kastner has argued that the two interpretations (Transactional and Two-State Vector) have important differences as well. [ 10 ] [ 11 ] It shares the property of time symmetry with the Wheeler–Feynman absorber theory by Richard Feynman and John Archibald Wheeler and with the time-symmetric theories of Kenneth B. Wharton and Michael B. Heaney [ 12 ]
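For reference, the weak value mentioned above and the Aharonov–Bergmann–Lebowitz (ABL) probability rule for pre- and post-selected ensembles are commonly written as follows; this is a summary of the standard expressions, stated in terms of the pre-selected state |Ψ⟩ and post-selected state ⟨Φ| used above.

```latex
% Weak value of an observable A for a system pre-selected in |Psi> and
% post-selected in <Phi| (the commonly quoted expression):
\[
  A_{w} \;=\; \frac{\langle \Phi | A | \Psi \rangle}{\langle \Phi | \Psi \rangle}.
\]
% Aharonov-Bergmann-Lebowitz (ABL) rule: probability of obtaining eigenvalue
% a_n (projector P_n) in an ideal intermediate measurement on a pre- and
% post-selected ensemble:
\[
  \Pr(a_{n}) \;=\;
  \frac{\left|\langle \Phi | P_{n} | \Psi \rangle\right|^{2}}
       {\sum_{m} \left|\langle \Phi | P_{m} | \Psi \rangle\right|^{2}}.
\]
```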
https://en.wikipedia.org/wiki/Two-state_vector_formalism
Two-tone testing is a means of testing electronic components and systems, particularly radio systems, for intermodulation distortion . It consists of simultaneously injecting two sinusoidal signals of different frequencies (tones) into the component or system. Intermodulation distortion usually occurs in active components like amplifiers , but can also occur in some circumstances in passive items such as cable connectors, especially at high power. Measurement in two-tone testing is most commonly done by examining the output of the device under test (DUT) with a spectrum analyser with which intermodulation products can be directly observed. Sometimes this is not possible with complete systems and instead the consequences of intermodulation are observed. For instance, in a radar system the result of intermodulation might be the generation of false targets. An electronic device can be tested by applying a single frequency to its input and measuring the response at its output. If there is any non-linearity in the device, this will cause harmonic distortion at the output. This kind of distortion consists of whole-number multiples of the applied signal frequency, as well as the original frequency being present at the device output. Intermodulation distortion can produce outputs at other frequencies. The new frequencies created by intermodulation are the sum and difference of the injected frequencies and the harmonics of these. Intermodulation effects cannot be detected with single-tone testing, but they may be just as, or more undesirable than harmonic distortion depending on their frequency and level . [ 1 ] Two-tone testing can also be used to determine the discrimination of a radio receiver. That is, the ability of the receiver to distinguish between transmissions close in frequency. [ 2 ] Circuit components such as amplifiers can be tested using the two-tone method with a test setup like that shown in the figure. Two signal generators , set to two different frequencies F1 and F2, are fed into a power combiner through circulators . The combiner needs to have good isolation to prevent the signal from one generator being sent to the output of the other. If this happens, intermodulation can occur in the non-linear parts of the generator internal circuit. The resulting intermodulation products will give a false result to the test. The circulators are there to provide even more isolation between the generators and isolation between any signal that might get reflected back from the device under test (DUT) and the generator. The circulators have one port connected to a resistive load so that they act as isolators . Low-pass filters may also be provided at the generator outputs to remove any harmonic distortion. These harmonics could cause unexpected intermodulation products in the DUT, again giving misleading results. The output of the DUT is fed to a spectrum analyser where the results are observed, possibly via an attenuator to reduce the signal to a level the instrument can cope with. [ 3 ] Passive components such as cables, connectors and antennas, are generally expected to be linear and therefore not liable to generate any intermodulation. However, especially at high power, a number of effects can lead to non-linearity through formation of a metal–semiconductor junction at what is supposed to be a metal-metal junction. These effects include corrosion, surface oxidisation, dirtiness, and simple failure to fully make mechanical contact. Some passive materials are intrinsically non-linear. 
These include ferrites , ferrous metals , and carbon-fibre composites . [ 4 ] Intermodulation distortion is a particularly difficult problem at the base stations of mobile phone cellular networks . These have to deal with multiple transmissions at closely spaced frequencies and it is necessary to ensure that these do not interact with each other. A typical specification is that intermodulation products should not exceed −125 dBm in the presence of 40 dBm transmissions. This equates to a requirement for a signal-to-intermodulation ratio of 165 dB , an exceedingly stringent specification. To achieve this, materials and components must be chosen with great care and installation and maintenance done to a high standard. Likewise, two-tone testing of these components needs to be done with great care and precision since intermodulation products at these low levels can easily be generated within the test setup accidentally. [ 5 ] There is an international standard, IEC 62037 "Passive RF and microwave devices, intermodulation level measurement", for measuring intermodulation distortion of passive components. Testing to the standard ensures that measurements from different manufacturers are made under the same conditions and can be compared with each other. [ 6 ] Militaries will typically use their own standards for testing. For instance US procurement contracts may specify MIL-STD-461 . [ 7 ] A test setup suitable for testing receivers at microwave frequencies is shown in the figure. The two signal generators, F1 and F2, are combined using a directional coupler in reverse. That is, the two generators are connected to what would normally be the coupled and transmitted output ports respectively. The combined signal appears at what would normally be the input port. The advantage of using a directional coupler rather than a simple summing circuit is that the directional coupler provides isolation between the two generators. As with the component testing, another signal being injected into the output of a signal generator can cause intermodulation distortion within the generator. Isolators are included in the test setup as with the component testing. [ 10 ] The combined test signal can be injected directly into the receiver if the antenna is removable. A second directional coupler, connected in the conventional configuration, can be used to provide a feed of the input to a spectrum analyser. This allows confirmation that the input signal is free of intermodulation products. If the test signal cannot be directly injected, for instance, because the receiver uses an active antenna , then the test signal is transmitted through its own transmitting antenna. A feed for a spectrum analyser can be provided by connecting a receiving antenna to its input. Tests done by the latter method are normally performed in an anechoic chamber to avoid broadcasting the test signal to the world at large. [ 11 ] The consequences of intermodulation distortion depend on the nature and purpose of the receiver. For a set receiving audio, it can manifest itself as an interfering signal making the wanted station unintelligible. In a radar receiver, it can manifest as a false detection of a target. [ 12 ] For transmitters that are designed for the transmission of speech or music, two frequencies within the audio band can be injected into the normal input of the transmitter. The output of the transmitter can be examined with a spectrum analyser to look for intermodulation products. 
This kind of end-to-end testing tests all parts of the transmitter for non-linearity: from the audio stage, through the mixing and IF amplifier , to the final RF power amplifier . Likewise, a transmitter used for passing data can be injected with two frequencies within the baseband of the data stream. In some cases, there is no accessible input to a transmitter. Radar transmitters, for instance, do not take an input; the circuitry generating the radar signal is internal to the transmitter. In such cases the tones must be injected at some internal point of the device, or else the amplifiers and other stages must be tested as separate components. [ 13 ] A dummy load may be connected to the output of the transmitter to prevent it actually broadcasting, and a directional coupler, possibly together with an attenuator, used to provide a feed to the spectrum analyser. [ 14 ] The spacing in frequency between the two tones is of some significance in transmitter testing. The spacing determines whether intermodulation products are going to be in-band or out-of-band . That is, whether or not they occur within the band in which the transmitter is designed to operate. In-band intermodulation is problematic because it interferes with the operation of the transmitter. However, out-of-band intermodulation can be an even greater problem. In most countries the telecommunications authority licenses the operator to use specific frequencies, and out-of-band signals are required to be suppressed almost entirely. However, the greater frequency difference between the wanted and unwanted signal makes out-of-band intermodulation products relatively easy to remove with filters . [ 15 ] Just as two tones provide a more realistic test than a single tone, multi-tone testing can be used to simulate the behaviour of a real signal even better. The idea is to spread the tones over the bandwidth of the real signal with a similar frequency power density. For accurate results, it is important that the phase of the tones relative to each other is considered. It is usually undesirable for the tones to be in a synchronised phase relationship, as this can give misleading results. For this reason, multi-tone testing usually aims to generate tones with random relative phases. [ 16 ]
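The arithmetic behind two-tone testing is simple enough to sketch in a few lines of code. The following Python snippet uses two arbitrarily chosen tone frequencies (illustrative assumptions, not values taken from the article) to enumerate the second- and third-order intermodulation products m·F1 + n·F2, and it also reproduces the base-station figure quoted above, where −125 dBm products alongside +40 dBm carriers correspond to a 165 dB signal-to-intermodulation ratio.

# Illustrative sketch only: enumerate low-order intermodulation products of two tones.
# The frequencies below are assumed example values, not taken from the article.
f1, f2 = 900.0, 901.5   # tone frequencies in MHz

max_order = 3
products = []
for m in range(-max_order, max_order + 1):
    for n in range(-max_order, max_order + 1):
        order = abs(m) + abs(n)
        if order < 2 or order > max_order:
            continue                      # skip the fundamentals themselves
        f = m * f1 + n * f2
        if f > 0:
            products.append((order, m, n, f))

for order, m, n, f in sorted(products, key=lambda p: p[3]):
    print(f"order {order}: {m:+d}*F1 {n:+d}*F2 = {f:7.1f} MHz")

# The base-station example from the text: -125 dBm products next to +40 dBm carriers
# is a signal-to-intermodulation ratio of 165 dB.
print(40 - (-125), "dB")

Note how the third-order products 2F1 − F2 and 2F2 − F1 fall immediately beside the two tones, which is why they are the products of most concern in practice.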
https://en.wikipedia.org/wiki/Two-tone_testing
In mathematical logic and computer science , two-variable logic is the fragment of first-order logic where formulae can be written using only two different variables . [ 1 ] This fragment is usually studied without function symbols . Some important problems about two-variable logic, such as satisfiability and finite satisfiability , are decidable . [ 2 ] This result generalizes results about the decidability of fragments of two-variable logic, such as certain description logics ; however, some fragments of two-variable logic enjoy a much lower computational complexity for their satisfiability problems. By contrast, for the three-variable fragment of first-order logic without function symbols, satisfiability is undecidable. [ 3 ] The two-variable fragment of first-order logic with no function symbols is known to be decidable even with the addition of counting quantifiers , [ 4 ] and thus of uniqueness quantification . This is a more powerful result, as counting quantifiers for high numerical values are not expressible in that logic. Counting quantifiers actually improve the expressiveness of finite-variable logics as they allow to say that there is a node with n {\displaystyle n} neighbors, namely Φ = ∃ x ∃ ≥ n y E ( x , y ) {\displaystyle \Phi =\exists x\exists ^{\geq n}yE(x,y)} . Without counting quantifiers n + 1 {\displaystyle n+1} variables are needed for the same formula. There is a strong connection between two-variable logic and the Weisfeiler-Leman (or color refinement ) algorithm. Given two graphs, then any two nodes have the same stable color in color refinement if and only if they have the same C 2 {\displaystyle C^{2}} type, that is, they satisfy the same formulas in two-variable logic with counting. [ 5 ]
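To make the connection with color refinement concrete, here is a minimal Python sketch of the algorithm (a plausible illustrative rendering rather than a reference implementation; the small example graph at the end is an assumption chosen for demonstration). Two nodes that end up with the same stable color are exactly those that, per the result cited above, satisfy the same formulas of two-variable logic with counting.

from collections import Counter

def color_refinement(adj):
    """Iteratively refine node colors until stable (1-dimensional Weisfeiler-Leman)."""
    colors = {v: 0 for v in adj}                    # start with every node the same color
    while True:
        # a node's signature is its current color plus the multiset of its neighbors' colors
        signatures = {
            v: (colors[v], tuple(sorted(Counter(colors[u] for u in adj[v]).items())))
            for v in adj
        }
        relabel = {sig: i for i, sig in enumerate(sorted(set(signatures.values())))}
        new_colors = {v: relabel[signatures[v]] for v in adj}
        if new_colors == colors:                    # stable coloring reached
            return colors
        colors = new_colors

# illustrative example: a path on four vertices; the two end vertices share one
# stable color and the two middle vertices share another
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(color_refinement(path))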
https://en.wikipedia.org/wiki/Two-variable_logic
A two-vector or bivector [ 1 ] is a tensor of type ( 2 0 ) {\displaystyle \scriptstyle {\binom {2}{0}}} and it is the dual of a two-form , meaning that it is a linear functional which maps two-forms to the real numbers (or more generally, to scalars). The tensor product of a pair of vectors is a two-vector. Then, any two-vector can be expressed as a linear combination of tensor products of pairs of vectors, especially a linear combination of tensor products of pairs of basis vectors. If f is a two-vector, then [ 2 ] f = f α β e α ⊗ e β {\displaystyle f=f^{\alpha \beta }\,e_{\alpha }\otimes e_{\beta }} where the f α β are the components of the two-vector. Notice that both indices of the components are contravariant . This is always the case for two-vectors, by definition. A bivector may operate on a one-form, yielding a vector, although a question arises as to which of the upper indices of the bivector to contract with. (This problem does not arise with mixed tensors because only one of such a tensor's indices is upper.) However, if the bivector is symmetric then the choice of index to contract with is indifferent. An example of a bivector is the stress–energy tensor . Another one is the orthogonal complement [ 3 ] of the metric tensor . If one assumes that vectors may only be represented as column matrices and covectors as row matrices, then, since a square matrix operating on a column vector must yield a column vector, it follows that square matrices can only represent mixed tensors. However, there is nothing in the abstract algebraic definition of a matrix that says such assumptions must be made. Dropping that assumption, matrices can be used to represent bivectors as well as two-forms. Example: {\displaystyle {\begin{pmatrix}f^{00}&&f^{01}&&f^{02}&&f^{03}\\f^{10}&&f^{11}&&f^{12}&&f^{13}\\f^{20}&&f^{21}&&f^{22}&&f^{23}\\f^{30}&&f^{31}&&f^{32}&&f^{33}\end{pmatrix}}{\begin{pmatrix}u_{0}\\u_{1}\\u_{2}\\u_{3}\end{pmatrix}}={\begin{pmatrix}f^{00}u_{0}+f^{01}u_{1}+f^{02}u_{2}+f^{03}u_{3}\\f^{10}u_{0}+f^{11}u_{1}+f^{12}u_{2}+f^{13}u_{3}\\f^{20}u_{0}+f^{21}u_{1}+f^{22}u_{2}+f^{23}u_{3}\\f^{30}u_{0}+f^{31}u_{1}+f^{32}u_{2}+f^{33}u_{3}\end{pmatrix}}={\begin{pmatrix}v^{0}\\v^{1}\\v^{2}\\v^{3}\end{pmatrix}}\iff f^{\alpha \beta }u_{\beta }=v^{\alpha }} {\displaystyle {\begin{pmatrix}u_{0}&&u_{1}&&u_{2}&&u_{3}\end{pmatrix}}{\begin{pmatrix}f^{00}&&f^{01}&&f^{02}&&f^{03}\\f^{10}&&f^{11}&&f^{12}&&f^{13}\\f^{20}&&f^{21}&&f^{22}&&f^{23}\\f^{30}&&f^{31}&&f^{32}&&f^{33}\end{pmatrix}}={\begin{pmatrix}u_{0}f^{00}+u_{1}f^{10}+u_{2}f^{20}+u_{3}f^{30}&&u_{0}f^{01}+u_{1}f^{11}+u_{2}f^{21}+u_{3}f^{31}&&u_{0}f^{02}+u_{1}f^{12}+u_{2}f^{22}+u_{3}f^{32}&&u_{0}f^{03}+u_{1}f^{13}+u_{2}f^{23}+u_{3}f^{33}\end{pmatrix}}={\begin{pmatrix}w^{0}&&w^{1}&&w^{2}&&w^{3}\end{pmatrix}}\iff u_{\alpha }f^{\alpha \beta }=f^{\alpha \beta }u_{\alpha }=w^{\beta }} or f β α u β = w α {\displaystyle f^{\beta \alpha }u_{\beta }=w^{\alpha }} . 
If f is symmetric, i.e., f α β = f β α {\displaystyle f^{\alpha \beta }=f^{\beta \alpha }} , then v α = w α {\displaystyle v^{\alpha }=w^{\alpha }} .
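The contractions written out in the matrix example above are easy to check numerically. The following NumPy sketch uses made-up component values (an illustrative assumption, not data from the article) and contracts a symmetric bivector with a one-form on either index; because the bivector is symmetric, the two results agree, as stated above.

import numpy as np

# components f^{ab} of a symmetric bivector (arbitrary illustrative values)
f = np.array([[1., 2., 0., 0.],
              [2., 3., 1., 0.],
              [0., 1., 4., 5.],
              [0., 0., 5., 6.]])
# components u_b of a one-form (arbitrary illustrative values)
u = np.array([1., -1., 2., 0.])

v = f @ u          # v^a = f^{ab} u_b : contract on the second index
w = u @ f          # w^b = u_a f^{ab} : contract on the first index

print(v, w)
print(np.allclose(v, w))   # True, because f is symmetric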
https://en.wikipedia.org/wiki/Two-vector
The two envelopes problem , also known as the exchange paradox , is a paradox in probability theory . It is of special interest in decision theory and for the Bayesian interpretation of probability theory . It is a variant of an older problem known as the necktie paradox . The problem is typically introduced by formulating a hypothetical challenge like the following example: Imagine you are given two identical envelopes , each containing money. One contains twice as much as the other. You may pick one envelope and keep the money it contains. Having chosen an envelope at will, but before inspecting it, you are given the chance to switch envelopes. Should you switch? Since the situation is symmetric, it seems obvious that there is no point in switching envelopes. On the other hand, a simple calculation using expected values suggests the opposite conclusion, that it is always beneficial to swap envelopes, since the person stands to gain twice as much money if they switch, while the only risk is halving what they currently have. [ 1 ] A person is given two indistinguishable envelopes, each of which contains a sum of money. One envelope contains twice as much as the other. The person may pick one envelope and keep whatever amount it contains. They pick one envelope at random but before they open it they are given the chance to take the other envelope instead. [ 1 ] Now suppose the person reasons as follows: The puzzle is to find the flaw in the line of reasoning in the switching argument. This includes determining exactly why and under what conditions that step is not correct, to be sure not to make this mistake in a situation where the misstep may not be so obvious. In short, the problem is to solve the paradox. The puzzle is not solved by finding another way to calculate the probabilities that does not lead to a contradiction. The envelope paradox dates back at least to 1943, when Belgian mathematician Maurice Kraitchik proposed a puzzle in his book Recreational Mathematics concerning two men who meet and compare their fine neckties. [ 2 ] [ 3 ] Each of them knows what his own necktie is worth and agrees for the winner to give his necktie to the loser as consolation. Kraitchik also discusses a variant in which the two men compare the contents of their purses. He assumes that each purse is equally likely to contain 1 up to some large number x of pennies, the total number of pennies minted to date. [ 2 ] The puzzle is also mentioned in a 1953 book on elementary mathematics and mathematical puzzles by the mathematician John Edensor Littlewood , who credited it to the physicist Erwin Schrödinger , where it concerns a pack of cards, each card has two numbers written on it, the player gets to see a random side of a random card, and the question is whether one should turn over the card. Littlewood's pack of cards is infinitely large and his paradox is a paradox of improper prior distributions. Martin Gardner popularized Kraitchik's puzzle in his 1982 book Aha! Gotcha , in the form of a wallet game: Two people, equally rich, meet to compare the contents of their wallets. Each is ignorant of the contents of the two wallets. The game is as follows: whoever has the least money receives the contents of the wallet of the other (in the case where the amounts are equal, nothing happens). One of the two men can reason: "I have the amount A in my wallet. That's the maximum that I could lose. If I win (probability 0.5), the amount that I'll have in my possession at the end of the game will be more than 2 A . 
Therefore the game is favourable to me." The other man can reason in exactly the same way. In fact, by symmetry, the game is fair. Where is the mistake in the reasoning of each man? Gardner confessed that though, like Kraitchik, he could give a sound analysis leading to the right answer (there is no point in switching), he could not clearly put his finger on what was wrong with the reasoning for switching, and Kraitchik did not give any help in this direction, either. In 1988 and 1989, Barry Nalebuff presented two different two-envelope problems, each with one envelope containing twice what is in the other, and each with computation of the expectation value 5 A /4. The first paper just presents the two problems. The second discusses many solutions to both of them. The second of his two problems is nowadays the more common, and is presented in this article. According to this version, the two envelopes are filled first, then one is chosen at random and called Envelope A. Martin Gardner independently mentioned this same version in his 1989 book Penrose Tiles to Trapdoor Ciphers and the Return of Dr Matrix . Barry Nalebuff's asymmetric variant, often known as the Ali Baba problem, has one envelope filled first, called Envelope A, and given to Ali. Then a fair coin is tossed to decide whether Envelope B should contain half or twice that amount, and only then given to Baba. Broome in 1995 called a probability distribution 'paradoxical' if for any given first-envelope amount x , the expectation of the other envelope conditional on x is greater than x . The literature contains dozens of commentaries on the problem, much of which observes that a distribution of finite values can have an infinite expected value. [ 4 ] There have been many solutions proposed, and commonly one writer proposes a solution to the problem as stated, after which another writer shows that altering the problem slightly revives the paradox. Such sequences of discussions have produced a family of closely related formulations of the problem, resulting in voluminous literature on the subject. [ 5 ] No proposed solution is widely accepted as definitive. [ 6 ] Despite this, it is common for authors to claim that the solution to the problem is easy, even elementary. [ 7 ] Upon investigating these elementary solutions, however, they often differ from one author to the next. Suppose that the total amount in both envelopes is a constant c = 3 x {\displaystyle c=3x} , with x {\displaystyle x} in one envelope and 2 x {\displaystyle 2x} in the other. If you select the envelope with x {\displaystyle x} first you gain the amount x {\displaystyle x} by swapping. If you select the envelope with 2 x {\displaystyle 2x} first you lose the amount x {\displaystyle x} by swapping. So you gain on average G = 1 2 ( x ) + 1 2 ( − x ) = 1 2 ( x − x ) = 0 {\displaystyle G={1 \over 2}(x)+{1 \over 2}(-x)={1 \over 2}(x-x)=0} by swapping. So on this supposition that the total amount is fixed, swapping is not better than keeping. The expected value E = 1 2 2 x + 1 2 x = 3 2 x {\displaystyle \operatorname {E} ={\frac {1}{2}}2x+{\frac {1}{2}}x={\frac {3}{2}}x} is the same for both the envelopes. Thus no contradiction exists. [ 8 ] The famous mystification is evoked by confusing the situation where the total amount in the two envelopes is fixed with the situation where the amount in one envelope is fixed and the other can be either double or half that amount. 
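The symmetry argument in the preceding paragraphs can be illustrated with a short simulation. The sketch below (the value of x is an arbitrary assumption) fixes the total at 3x, hands the player one of the two envelopes at random, and confirms that the average content of the chosen envelope is 3x/2 while the average gain from swapping is zero.

import random

x = 10.0                  # the smaller amount; the other envelope holds 2x, the total is 3x
trials = 100_000
kept = 0.0
swap_gain = 0.0
for _ in range(trials):
    a, b = random.choice([(x, 2 * x), (2 * x, x)])   # envelope A is assigned at random
    kept += a
    swap_gain += b - a                               # what switching would have changed
print(kept / trials)       # close to 1.5 * x
print(swap_gain / trials)  # close to 0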
In the problem as presented, the two envelopes are filled and sealed before the player chooses, with one envelope containing twice the amount of the other. Step 6 boldly claims "Thus the other envelope contains 2A with probability 1/2 and A/2 with probability 1/2", but for envelopes that have already been filled and sealed this claim does not hold for any particular A, nor for any "average" A; it applies only to the Nalebuff asymmetric variant (see below). The other envelope can contain 2A only in the specific case where envelope A happens to hold the smaller amount Total 3 {\displaystyle {\frac {\text{Total}}{3}}} , and it can contain A/2 only in the specific case where envelope A happens to hold the larger amount 2 Total 3 {\displaystyle 2{\frac {\text{Total}}{3}}} . The difference between the two sealed envelopes is always Total 3 {\displaystyle {\frac {\text{Total}}{3}}} , and no "average amount A" can serve as the starting point for an expected value calculation, which is why the switching argument does not get to the heart of the problem. [ 9 ] A widely discussed way to resolve the paradox, both in the popular literature and in part of the academic literature, especially in philosophy, is to assume that the 'A' in step 7 is intended to be the expected value in envelope A and that we intended to write down a formula for the expected value in envelope B. Step 7 states that the expected value in B = 1/2(2A + A/2). It is pointed out that the 'A' in the first part of the formula is the expected value in A, given that envelope A contains less than envelope B, but the 'A' in the second part of the formula is the expected value in A, given that envelope A contains more than envelope B. The flaw in the argument is that the same symbol is used with two different meanings in the two parts of the same calculation, yet is assumed to have the same value in both cases. This line of argument is introduced by McGrew, Shier and Silverstein (1997). [ 10 ] A correct calculation would be: E(B) = E(B | A < B) P(A < B) + E(B | A > B) P(A > B). If we then take the sum in one envelope to be x and the sum in the other to be 2 x , the expected value calculation becomes: E(B) = 1/2 (2 x ) + 1/2 ( x ) = 3 x /2, which is equal to the expected sum in A. In non-technical language, what goes wrong is that, in the scenario provided, the mathematics uses the relative values of A and B (that is, it assumes that one would gain more money if A is less than B than one would lose if the opposite were true). However, the two values of money are fixed (one envelope contains, say, $20 and the other $40). If the values of the envelopes are restated as x and 2 x , it is much easier to see that, if A were greater, one would lose x by switching and, if B were greater, one would gain x by switching. One does not gain a greater amount of money by switching because the total T of A and B (3 x ) remains the same, and the difference x is fixed to T/3 . 
Line 7 should have been worked out more carefully as follows: E ⁡ ( B ) = E ⁡ ( B ∣ A < B ) P ( A < B ) + E ⁡ ( B ∣ A > B ) P ( A > B ) = E ⁡ ( 2 A ∣ A < B ) 1 2 + E ⁡ ( 1 2 A ∣ A > B ) 1 2 = E ⁡ ( A ∣ A < B ) + 1 4 E ⁡ ( A ∣ A > B ) {\displaystyle {\begin{aligned}\operatorname {E} (B)&=\operatorname {E} (B\mid A<B)P(A<B)+\operatorname {E} (B\mid A>B)P(A>B)\\&=\operatorname {E} (2A\mid A<B){\frac {1}{2}}+\operatorname {E} \left({\frac {1}{2}}A\mid A>B\right){\frac {1}{2}}\\&=\operatorname {E} (A\mid A<B)+{\frac {1}{4}}\operatorname {E} (A\mid A>B)\end{aligned}}} A will be larger when A is larger than B, than when it is smaller than B. So its average values (expectation values) in those two cases are different. And the average value of A is not the same as A itself, anyway. Two mistakes are being made: the writer forgot he was taking expectation values, and he forgot he was taking expectation values under two different conditions. It would have been easier to compute E(B) directly. Denoting the lower of the two amounts by x , and taking it to be fixed (even if unknown) we find that E ⁡ ( B ) = 1 2 2 x + 1 2 x = 3 2 x {\displaystyle \operatorname {E} (B)={\frac {1}{2}}2x+{\frac {1}{2}}x={\frac {3}{2}}x} We learn that 1.5 x is the expected value of the amount in Envelope B. By the same calculation it is also the expected value of the amount in Envelope A. They are the same hence there is no reason to prefer one envelope to the other. This conclusion was, of course, obvious in advance; the point is that we identified the false step in the argument for switching by explaining exactly where the calculation being made there went off the rails. We could also continue from the correct but difficult to interpret result of the development in line 7: E ⁡ ( B ) = E ⁡ ( A ∣ A < B ) + 1 4 E ⁡ ( A ∣ A > B ) = x + 1 4 2 x = 3 2 x {\displaystyle \operatorname {E} (B)=\operatorname {E} (A\mid A<B)+{\frac {1}{4}}\operatorname {E} (A\mid A>B)=x+{\frac {1}{4}}2x={\frac {3}{2}}x} so (of course) different routes to calculate the same thing all give the same answer. Tsikogiannopoulos presented a different way to do these calculations. [ 12 ] It is by definition correct to assign equal probabilities to the events that the other envelope contains double or half that amount in envelope A. So the "switching argument" is correct up to step 6. Given that the player's envelope contains the amount A, he differentiates the actual situation in two different games: The first game would be played with the amounts (A, 2A) and the second game with the amounts (A/2, A). Only one of them is actually played but we don't know which one. These two games need to be treated differently. If the player wants to compute his/her expected return (profit or loss) in case of exchange, he/she should weigh the return derived from each game by the average amount in the two envelopes in that particular game. In the first case the profit would be A with an average amount of 3A/2, whereas in the second case the loss would be A/2 with an average amount of 3A/4. So the formula of the expected return in case of exchange, seen as a proportion of the total amount in the two envelopes, is: E = 1 2 ⋅ + A 3 A / 2 + 1 2 ⋅ − A / 2 3 A / 4 = 0 {\displaystyle E={\frac {1}{2}}\cdot {\frac {+A}{3A/2}}+{\frac {1}{2}}\cdot {\frac {-A/2}{3A/4}}=0} This result means yet again that the player has to expect neither profit nor loss by exchanging his/her envelope. 
We could actually open our envelope before deciding on switching or not and the above formula would still give us the correct expected return. For example, if we opened our envelope and saw that it contained 100 euros then we would set A=100 in the above formula and the expected return in case of switching would be: E = 1 2 ⋅ + 100 150 + 1 2 ⋅ − 50 75 = 0 {\displaystyle E={\frac {1}{2}}\cdot {\frac {+100}{150}}+{\frac {1}{2}}\cdot {\frac {-50}{75}}=0} The mechanism by which the amounts of the two envelopes are determined is crucial for the decision of the player to switch her envelope. [ 12 ] [ 13 ] Suppose that the amounts in the two envelopes A and B were not determined by first fixing the contents of two envelopes E1 and E2, and then naming them A and B at random (for instance, by the toss of a fair coin [ 14 ] ). Instead, we start right at the beginning by putting some amount in envelope A and then fill B in a way which depends both on chance (the toss of a coin) and on what we put in A. Suppose that first of all the amount a in envelope A is fixed in some way or other, and then the amount in Envelope B is fixed, dependent on what is already in A, according to the outcome of a fair coin. If the coin fell Heads then 2 a is put in Envelope B, if the coin fell Tails then a /2 is put in Envelope B. If the player was aware of this mechanism, and knows that she holds Envelope A, but do not know the outcome of the coin toss, and do not know a , then the switching argument is correct and she is recommended to switch envelopes. This version of the problem was introduced by Nalebuff (1988) and is often called the Ali-Baba problem. Notice that there is no need to look in envelope A in order to decide whether or not to switch. Many more variants of the problem have been introduced. Nickerson and Falk systematically survey a total of 8. [ 14 ] The simple resolution above assumed that the person who invented the argument for switching was trying to calculate the expectation value of the amount in Envelope A, thinking of the two amounts in the envelopes as fixed ( x and 2 x ). The only uncertainty is which envelope has the smaller amount x . However, many mathematicians and statisticians interpret the argument as an attempt to calculate the expected amount in Envelope B, given a real or hypothetical amount "A" in Envelope A. One does not need to look in the envelope to see how much is in there, in order to do the calculation. If the result of the calculation is an advice to switch envelopes, whatever amount might be in there, then it would appear that one should switch anyway, without looking. In this case, at Steps 6, 7 and 8 of the reasoning, "A" is any fixed possible value of the amount of money in the first envelope. This interpretation of the two envelopes problem appears in the first publications in which the paradox was introduced in its present-day form, Gardner (1989) and Nalebuff (1988). [ 15 ] ) It is common in the more mathematical literature on the problem. It also applies to the modification of the problem (which seems to have started with Nalebuff) in which the owner of envelope A does actually look in his envelope before deciding whether or not to switch; though Nalebuff does also emphasize that there is no need to have the owner of envelope A look in his envelope. If he imagines looking in it, and if for any amount which he can imagine being in there, he has an argument to switch, then he will decide to switch anyway. 
Finally, this interpretation was also the core of earlier versions of the two envelopes problem (Littlewood's, Schrödinger's, and Kraitchik's switching paradoxes); see the history section . This kind of interpretation is often called "Bayesian" because it assumes the writer is also incorporating a prior probability distribution of possible amounts of money in the two envelopes in the switching argument. The simple resolution depended on a particular interpretation of what the writer of the argument is trying to calculate: namely, it assumed he was after the (unconditional) expectation value of what's in Envelope B. In the mathematical literature on Two Envelopes Problem, a different interpretation is more common, involving the conditional expectation value (conditional on what might be in Envelope A). To solve this and related interpretations or versions of the problem, most authors use the Bayesian interpretation of probability, which means that probability reasoning is not only applied to truly random events like the random pick of an envelope, but also to our knowledge (or lack of knowledge) about things which are fixed but unknown, like the two amounts originally placed in the two envelopes, before one is picked at random and called "Envelope A". Moreover, according to a long tradition going back at least to Laplace and his principle of insufficient reason one is supposed to assign equal probabilities when one has no knowledge at all concerning the possible values of some quantity. Thus the fact that we are not told anything about how the envelopes are filled can already be converted into probability statements about these amounts. No information means that probabilities are equal. In steps 6 and 7 of the switching argument, the writer imagines that envelope A contains a certain amount a , and then seems to believe that given that information, the other envelope would be equally likely to contain twice or half that amount. That assumption can only be correct, if prior to knowing what was in Envelope A, the writer would have considered the following two pairs of values for both envelopes equally likely: the amounts a /2 and a ; and the amounts a and 2 a . (This follows from Bayes' rule in odds form: posterior odds equal prior odds times likelihood ratio). But now we can apply the same reasoning, imagining not a but a/2 in Envelope A. And similarly, for 2 a . And similarly, ad infinitum, repeatedly halving or repeatedly doubling as many times as you like. [ 16 ] Suppose for the sake of argument, we start by imagining an amount of 32 in Envelope A. In order that the reasoning in steps 6 and 7 is correct whatever amount happened to be in Envelope A, we apparently believe in advance that all the following ten amounts are all equally likely to be the smaller of the two amounts in the two envelopes: 1, 2, 4, 8, 16, 32, 64, 128, 256, 512 (equally likely powers of 2 [ 16 ] ). But going to even larger or even smaller amounts, the "equally likely" assumption starts to appear a bit unreasonable. Suppose we stop, just with these ten equally likely possibilities for the smaller amount in the two envelopes. In that case, the reasoning in steps 6 and 7 was entirely correct if envelope A happened to contain any of the amounts 2, 4, ... 512: switching envelopes would give an expected (average) gain of 25%. If envelope A happened to contain the amount 1, then the expected gain is actually 100%. 
But if it happened to contain the amount 1024, a massive loss of 50% (of a rather large amount) would have been incurred. That only happens once in twenty times, but it is exactly enough to balance the expected gains in the other 19 out of 20 times. Alternatively, we do go on ad infinitum but now we are working with a quite ludicrous assumption, implying for instance, that it is infinitely more likely for the amount in envelope A to be smaller than 1, and infinitely more likely to be larger than 1024, than between those two values. This is a so-called improper prior distribution : probability calculus breaks down; expectation values are not even defined. [ 16 ] Many authors have also pointed out that if a maximum sum that can be put in the envelope with the smaller amount exists, then it is very easy to see that Step 6 breaks down, since if the player holds more than the maximum sum that can be put into the "smaller" envelope they must hold the envelope containing the larger sum, and are thus certain to lose by switching. This may not occur often, but when it does, the heavy loss the player incurs means that, on average, there is no advantage in switching. Some writers consider that this resolves all practical cases of the problem. [ 17 ] But the problem can also be resolved mathematically without assuming a maximum amount. Nalebuff, [ 17 ] Christensen and Utts, [ 18 ] Falk and Konold, [ 16 ] Blachman, Christensen and Utts, [ 19 ] Nickerson and Falk, [ 14 ] pointed out that if the amounts of money in the two envelopes have any proper probability distribution representing the player's prior beliefs about the amounts of money in the two envelopes, then it is impossible that whatever the amount A=a in the first envelope might be, it would be equally likely, according to these prior beliefs, that the second contains a /2 or 2 a . Thus step 6 of the argument, which leads to always switching , is a non-sequitur, also when there is no maximum to the amounts in the envelopes. The first two resolutions discussed above (the "simple resolution" and the "Bayesian resolution") correspond to two possible interpretations of what is going on in step 6 of the argument. They both assume that step 6 indeed is "the bad step". But the description in step 6 is ambiguous. Is the author after the unconditional (overall) expectation value of what is in envelope B (perhaps - conditional on the smaller amount, x ), or is he after the conditional expectation of what is in envelope B, given any possible amount a which might be in envelope A? Thus, there are two main interpretations of the intention of the composer of the paradoxical argument for switching, and two main resolutions. A large literature has developed concerning variants of the problem. [ 20 ] [ 21 ] The standard assumption about the way the envelopes are set up is that a sum of money is in one envelope, and twice that sum is in another envelope. One of the two envelopes is randomly given to the player ( envelope A ). The originally proposed problem does not make clear exactly how the smaller of the two sums is determined, what values it could possibly take and, in particular, whether there is a minimum or a maximum sum it might contain. [ 22 ] [ 23 ] However, if we are using the Bayesian interpretation of probability, then we start by expressing our prior beliefs as to the smaller amount in the two envelopes through a probability distribution. Lack of knowledge can also be expressed in terms of probability. 
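Returning for a moment to the truncated example above, in which the ten amounts 1, 2, 4, ..., 512 are taken to be equally likely to be the smaller sum, the balancing of gains and losses can be checked by direct enumeration. The sketch below is written for illustration rather than taken from any source; it computes the expected gain from switching conditional on each possible content of envelope A and confirms that the large loss when A = 1024 exactly cancels the gains in all the other cases.

from collections import defaultdict
from fractions import Fraction

smaller = [2 ** k for k in range(10)]                   # 1, 2, 4, ..., 512
pairs = [(Fraction(x), Fraction(2 * x)) for x in smaller]

prob = defaultdict(Fraction)                            # P(A = a)
gain = defaultdict(Fraction)                            # contribution to E[B - A] from A = a
for x, y in pairs:
    p = Fraction(1, len(pairs)) / 2                     # each (pair, labelling) is equally likely
    for a, b in ((x, y), (y, x)):                       # envelope A holds a, envelope B holds b
        prob[a] += p
        gain[a] += p * (b - a)

total = Fraction(0)
for a in sorted(prob):
    cond = gain[a] / prob[a]                            # expected gain from switching, given A = a
    total += gain[a]
    print(f"A = {a}: conditional expected gain = {cond} ({float(cond / a):+.0%} of A)")
print("overall expected gain from always switching:", total)   # 0

The printout shows a +100% gain at A = 1, +25% at every amount from 2 to 512, and −50% at A = 1024, with the overall expected gain from always switching equal to zero.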
A first variant within the Bayesian version is to come up with a proper prior probability distribution of the smaller amount of money in the two envelopes, such that when Step 6 is performed properly, the advice is still to prefer Envelope B, whatever might be in Envelope A. So though the specific calculation performed in step 6 was incorrect (there is no proper prior distribution such that, given what is in the first envelope A, the other envelope is always equally likely to be larger or smaller), a correct calculation, depending on what prior we are using, does lead to the result E ( B | A = a ) > a {\displaystyle E(B|A=a)>a} for all possible values of a . [ 24 ] In these cases, it can be shown that the expected sum in both envelopes is infinite. There is no gain, on average, in swapping. Though Bayesian probability theory can resolve the first mathematical interpretation of the paradox above, it turns out that examples can be found of proper probability distributions, such that the expected value of the amount in the second envelope, conditioned on the amount in the first, does exceed the amount in the first, whatever it might be. The first such example was already given by Nalebuff. [ 17 ] See also Christensen and Utts (1992). [ 18 ] [ 25 ] [ 26 ] [ 27 ] Denote again the amount of money in the first envelope by A and that in the second by B . We think of these as random. Let X be the smaller of the two amounts and Y = 2 X be the larger. Notice that once we have fixed a probability distribution for X then the joint probability distribution of A, B is fixed, since A, B = X, Y or Y, X each with probability 1/2, independently of X, Y . The bad step 6 in the "always switching" argument led us to the finding E(B|A=a) > a for all a , and hence to the recommendation to switch, whether or not we know a . Now, it turns out that one can quite easily invent proper probability distributions for X , the smaller of the two amounts of money, such that this bad conclusion is still true. One example is analyzed in more detail, in a moment. As mentioned before, it cannot be true that whatever a , given A = a , B is equally likely to be a /2 or 2 a , but it can be true that whatever a , given A = a , B is larger in expected value than a . Suppose for example that the envelope with the smaller amount actually contains 2 n {\displaystyle 2^{n}} dollars with probability 2 n / 3 n + 1 {\displaystyle 2^{n}/3^{n+1}} where n = 0, 1, 2, ... These probabilities sum to 1, hence the distribution is a proper prior (for subjectivists) and a completely decent probability law also for frequentists. [ 28 ] Imagine what might be in the first envelope. A sensible strategy would certainly be to swap when the first envelope contains 1, as the other must then contain 2. Suppose on the other hand the first envelope contains 2. In that case, there are two possibilities: the envelope pair in front of us is either {1, 2} or {2, 4}. All other pairs are impossible. The conditional probability that we are dealing with the {1, 2} pair, given that the first envelope contains 2, is {\displaystyle {\begin{aligned}P(\{1,2\}\mid 2)&={\frac {P(\{1,2\})/2}{P(\{1,2\})/2+P(\{2,4\})/2}}\\&={\frac {P(\{1,2\})}{P(\{1,2\})+P(\{2,4\})}}\\&={\frac {1/3}{1/3+2/9}}=3/5,\end{aligned}}} and consequently the probability it's the {2, 4} pair is 2/5, since these are the only two possibilities. 
In this derivation, P ( { 1 , 2 } ) / 2 {\displaystyle P(\{1,2\})/2} is the probability that the envelope pair is the pair 1 and 2, and envelope A happens to contain 2; P ( { 2 , 4 } ) / 2 {\displaystyle P(\{2,4\})/2} is the probability that the envelope pair is the pair 2 and 4, and (again) envelope A happens to contain 2. Those are the only two ways that envelope A can end up containing the amount 2. It turns out that these proportions hold in general unless the first envelope contains 1. Denote by a the amount we imagine finding in Envelope A, if we were to open that envelope, and suppose that a = 2 n for some n ≥ 1. In that case the other envelope contains a /2 with probability 3/5 and 2 a with probability 2/5. So either the first envelope contains 1, in which case the conditional expected amount in the other envelope is 2, or the first envelope contains a > 1, and though the second envelope is more likely to be smaller than larger, its conditionally expected amount is larger: the conditionally expected amount in Envelope B is 3 5 a 2 + 2 5 2 a = 11 10 a {\displaystyle {\frac {3}{5}}{\frac {a}{2}}+{\frac {2}{5}}2a={\frac {11}{10}}a} which is more than a . This means that the player who looks in envelope A would decide to switch whatever he saw there. Hence there is no need to look in envelope A to make that decision. This conclusion is just as clearly wrong as it was in the preceding interpretations of the Two Envelopes Problem. But now the flaws noted above do not apply; the a in the expected value calculation is a constant and the conditional probabilities in the formula are obtained from a specified and proper prior distribution. Most writers think that the new paradox can be defused, although the resolution requires concepts from mathematical economics. [ 29 ] Suppose E ( B | A = a ) > a {\displaystyle E(B|A=a)>a} for all a {\displaystyle a} . It can be shown that this is possible for some probability distributions of X (the smaller amount of money in the two envelopes) only if E ( X ) = ∞ {\displaystyle E(X)=\infty } . That is, only if the mean of all possible values of money in the envelopes is infinite. To see why, compare the series described above in which the probability of each X is 2/3 as likely as the previous X with one in which the probability of each X is only 1/3 as likely as the previous X . When the probability of each subsequent term is greater than one-half of the probability of the term before it (and each X is twice that of the X before it) the mean is infinite, but when the probability factor is less than one-half, the mean converges. In the cases where the probability factor is less than one-half, E ( B | A = a ) < a {\displaystyle E(B|A=a)<a} for all a other than the first, smallest a , and the total expected value of switching converges to 0. In addition, if an ongoing distribution with a probability factor greater than one-half is made finite by, after any number of terms, establishing a final term with "all the remaining probability," that is, 1 minus the probability of all previous terms, the expected value of switching with respect to the probability that A is equal to the last, largest a will exactly negate the sum of the positive expected values that came before, and again the total expected value of switching drops to 0 (this is the general case of setting out an equal probability of a finite set of values in the envelopes described above). 
Thus, the only distributions that seem to point to a positive expected value for switching are those in which E ( X ) = ∞ {\displaystyle E(X)=\infty } . Averaging over a , it follows that E ( B ) = E ( A ) = ∞ {\displaystyle E(B)=E(A)=\infty } (because A and B have identical probability distributions, by symmetry, and both A and B are greater than or equal to X ). If we do not look into the first envelope, then clearly there is no reason to switch, since we would be exchanging one unknown amount of money ( A ), whose expected value is infinite, for another unknown amount of money ( B ), with the same probability distribution and infinite expected value. However, if we do look into the first envelope, then for all values observed ( A = a {\displaystyle A=a} ) we would want to switch because E ( B | A = a ) > a {\displaystyle E(B|A=a)>a} for all a . As noted by David Chalmers , this problem can be described as a failure of dominance reasoning. [ 30 ] Under dominance reasoning, the fact that we strictly prefer B to A for all possible observed values a should imply that we strictly prefer B to A without observing a ; however, as already shown, that is not true because E ( B ) = E ( A ) = ∞ {\displaystyle E(B)=E(A)=\infty } . To salvage dominance reasoning while allowing E ( B ) = E ( A ) = ∞ {\displaystyle E(B)=E(A)=\infty } , one would have to replace expected value as the decision criterion, thereby employing a more sophisticated argument from mathematical economics. For example, we could assume the decision maker is an expected utility maximizer with initial wealth W whose utility function, u ( w ) {\displaystyle u(w)} , is chosen to satisfy E ( u ( W + B ) | A = a ) < u ( W + a ) {\displaystyle E(u(W+B)|A=a)<u(W+a)} for at least some values of a (that is, holding onto A = a {\displaystyle A=a} is strictly preferred to switching to B for some a ). Although this is not true for all utility functions, it would be true if u ( w ) {\displaystyle u(w)} had an upper bound, β < ∞ {\displaystyle \beta <\infty } , as w increases toward infinity (a common assumption in mathematical economics and decision theory). [ 31 ] Michael R. Powers provides necessary and sufficient conditions for the utility function to resolve the paradox, and notes that neither u ( w ) < β {\displaystyle u(w)<\beta } nor E ( u ( W + A ) ) = E ( u ( W + B ) ) < ∞ {\displaystyle E(u(W+A))=E(u(W+B))<\infty } is required. [ 32 ] Some writers would prefer to argue that in a real-life situation, u ( W + A ) {\displaystyle u(W+A)} and u ( W + B ) {\displaystyle u(W+B)} are bounded simply because the amount of money in an envelope is bounded by the total amount of money in the world ( M ), implying u ( W + A ) ≤ u ( W + M ) {\displaystyle u(W+A)\leq u(W+M)} and u ( W + B ) ≤ u ( W + M ) {\displaystyle u(W+B)\leq u(W+M)} . From this perspective, the second paradox is resolved because the postulated probability distribution for X (with E ( X ) = ∞ {\displaystyle E(X)=\infty } ) cannot arise in a real-life situation. Similar arguments are often used to resolve the St. Petersburg paradox . As mentioned above, any distribution producing this variant of the paradox must have an infinite mean. So before the player opens an envelope the expected gain from switching is "∞ − ∞", which is not defined. In the words of David Chalmers , this is "just another example of a familiar phenomenon, the strange behavior of infinity". 
[ 30 ] Chalmers suggests that decision theory generally breaks down when confronted with games having a diverging expectation, and compares it with the situation generated by the classical St. Petersburg paradox . However, Clark and Shackel argue that this blaming it all on "the strange behavior of infinity" does not resolve the paradox at all; neither in the single case nor the averaged case. They provide a simple example of a pair of random variables both having infinite mean but where it is clearly sensible to prefer one to the other, both conditionally and on average. [ 33 ] They argue that decision theory should be extended so as to allow infinite expectation values in some situations. The logician Raymond Smullyan questioned if the paradox has anything to do with probabilities at all. [ 34 ] He did this by expressing the problem in a way that does not involve probabilities. The following plainly logical arguments lead to conflicting conclusions: A number of solutions have been put forward. Careful analyses have been made by some logicians. Though solutions differ, they all pinpoint semantic issues concerned with counterfactual reasoning. We want to compare the amount that we would gain by switching if we would gain by switching, with the amount we would lose by switching if we would indeed lose by switching. However, we cannot both gain and lose by switching at the same time. We are asked to compare two incompatible situations. Only one of them can factually occur, the other is a counterfactual situation—somehow imaginary. To compare them at all, we must somehow "align" the two situations, providing some definite points in common. James Chase argues that the second argument is correct because it does correspond to the way to align two situations (one in which we gain, the other in which we lose), which is preferably indicated by the problem description. [ 35 ] Also, Bernard Katz and Doris Olin argue this point of view. [ 36 ] In the second argument, we consider the amounts of money in the two envelopes as being fixed; what varies is which one is first given to the player. Because that was an arbitrary and physical choice, the counterfactual world in which the player, counterfactually, got the other envelope to the one he was actually (factually) given is a highly meaningful counterfactual world and hence the comparison between gains and losses in the two worlds is meaningful. This comparison is uniquely indicated by the problem description, in which two amounts of money are put in the two envelopes first, and only after that is one chosen arbitrarily and given to the player. In the first argument, however, we consider the amount of money in the envelope first given to the player as fixed and consider the situations where the second envelope contains either half or twice that amount. This would only be a reasonable counterfactual world if in reality the envelopes had been filled as follows: first, some amount of money is placed in the specific envelope that will be given to the player; and secondly, by some arbitrary process, the other envelope is filled (arbitrarily or randomly) either with double or with half of that amount of money. Byeong-Uk Yi, on the other hand, argues that comparing the amount you would gain if you would gain by switching with the amount you would lose if you would lose by switching is a meaningless exercise from the outset. [ 37 ] According to his analysis, all three implications (switch, indifferent, do not switch) are incorrect. 
He analyses Smullyan's arguments in detail, showing that intermediate steps are being taken, and pinpointing exactly where an incorrect inference is made according to his formalization of counterfactual inference. An important difference with Chase's analysis is that he does not take account of the part of the story where we are told that the envelope called envelope A is decided completely at random. Thus, Chase puts probability back into the problem description in order to conclude that arguments 1 and 3 are incorrect, argument 2 is correct, while Yi keeps the "two envelopes problem without probability" completely free of probability and comes to the conclusion that there are no reasons to prefer any action. This corresponds to the view of Albers et al., that without a probability ingredient, there is no way to argue that one action is better than another, anyway. Bliss argues that the source of the paradox is that when one mistakenly believes in the possibility of a larger payoff that does not, in actuality, exist, one is mistaken by a larger margin than when one believes in the possibility of a smaller payoff that does not actually exist. [ 38 ] If, for example, the envelopes contained $5.00 and $10.00 respectively, a player who opened the $10.00 envelope would expect the possibility of a $20.00 payout that simply does not exist. Were that player to open the $5.00 envelope instead, he would believe in the possibility of a $2.50 payout, which constitutes a smaller deviation from the true value; this results in the paradoxical discrepancy. Albers, Kooi, and Schaafsma consider that without adding probability (or other) ingredients to the problem, [ 21 ] Smullyan's arguments do not give any reason to swap or not to swap, in any case. Thus, there is no paradox. This dismissive attitude is common among writers from probability and economics: Smullyan's paradox arises precisely because he takes no account whatever of probability or utility. As an extension to the problem, consider the case where the player is allowed to look in envelope A before deciding whether to switch. In this "conditional switching" problem, it is often possible to generate a gain over the "never switching" strategy, depending on the probability distribution of the envelopes. [ 39 ]
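For the example prior discussed earlier in the article, in which the smaller amount is 2^n with probability 2^n/3^(n+1), the conditional probabilities 3/5 and 2/5 and the factor 11/10 can be verified with a few lines of exact arithmetic. The snippet below is a sketch written for illustration only; the truncation at 40 terms in the last line is an assumption made purely to keep the enumeration finite and does not affect the conditional expectations being checked.

from fractions import Fraction

# Prior from the example in the text: the smaller amount X equals 2**n
# with probability (2**n) / 3**(n+1), for n = 0, 1, 2, ...
def p_smaller(n):
    return Fraction(2 ** n, 3 ** (n + 1))

# Conditional expectation of envelope B given that A = a = 2**n, for n >= 1.
def expected_B_given_A(n):
    a = 2 ** n
    p_lower = p_smaller(n - 1) / 2      # pair {a/2, a} and A happens to hold a
    p_upper = p_smaller(n) / 2          # pair {a, 2a} and A happens to hold a
    p_half = p_lower / (p_lower + p_upper)
    return p_half * Fraction(a, 2) + (1 - p_half) * Fraction(2 * a), p_half

for n in (1, 2, 5):
    e_b, p_half = expected_B_given_A(n)
    a = 2 ** n
    print(f"A = {a}: P(other is A/2) = {p_half}, E(B | A) = {e_b} = {e_b / a} * A")

# The mean of X diverges: the partial sums of E(X) keep growing without bound.
partial = sum(Fraction(2 ** n) * p_smaller(n) for n in range(40))
print("partial sum of E(X) after 40 terms:", float(partial))

For every amount of 2 or more in envelope A, the other envelope is the smaller one with probability 3/5, yet the conditional expectation of the other envelope works out to 11/10 of A, exactly as derived above, while the partial sums of E(X) confirm that the distribution has infinite mean.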
https://en.wikipedia.org/wiki/Two_envelopes_problem
Twyman's law states that "Any figure that looks interesting or different is usually wrong", [ 1 ] following the principle that "the more unusual or interesting the data, the more likely they are to have been the result of an error of one kind or another". It is named after the media and market researcher Tony Twyman and has been described as one of the most important laws of data analysis . [ 2 ] [ 3 ] [ 4 ] The law is based on the fact that errors in data measurement and analysis can lead to observed quantities that are wildly different from typical values. These errors are usually more common than real changes of similar magnitude in the underlying process being measured. For example, if an analyst at a software company notices that the number of users has doubled overnight, the most likely explanation is a bug in logging , rather than a true increase in users. [ 3 ] The law can also be extended to situations where the underlying data is influenced by unexpected factors that differ from what was intended to be measured. For example, when schools show unusually large improvements in test scores , subsequent investigation often reveals that those scores were driven by fraud . [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Twyman's_law
The Ty5 is a type of retrotransposon native to the Saccharomyces cerevisiae organism. Ty5 is one of five [ 1 ] endogenous retrotransposons native to the model organism Saccharomyces cerevisiae , all of which target integration to gene poor regions. Endogenous retrotransposons are hypothesized to target gene poor chromosomal targets in order to reduce the chance of inactivating host genes. [ 2 ] Ty1-Ty4 integrate upstream of Pol III promoters, while Ty5 targets integration to loci bound in heterochromatin . [ 3 ] In the case of Ty5, this likely occurs by means of an interaction between the C-terminus of integrase and a target protein. [ 4 ] The tight targeting patterns seen for the Ty elements are thought to be a means to limit damage to its host, which has a very gene dense genome. [ 5 ] Ty5 was discovered in the mid 1990s in the laboratory of Daniel Voytas at Iowa State University . [ 6 ] Ty5 is used as a model system by which to understand the biology of the telomere and heterochromatin. The Ty5 retrotransposon is used as a genetic model to study the architecture and dynamics of the telomeres and heterochromatin. [ 7 ] Heterochromatin in S. cerevisiae is composed of a wide array of proteins and plays several roles. The first stage of heterochromatin formation requires DNA binding proteins, which interact with specific cis DNA sequences at the telomeres, rDNA and HM loci. These proteins, including Rap1p and the origin recognition complex (ORC), serve as a platform for other proteins to bind, condense the DNA, and modify neighboring histones . Some of these proteins, notably Rap1p, also play other roles, including initiation of transcription. The first known step in the formation of dedicated heterochromatin is the binding of Sir4p to Rap1p (Luo, Vega-Palas et al. 2002). Sir4p is one of four ‘Silent Information Regulator’ proteins that also include Sir1p, Sir2p and Sir3p. Of these, Sir2p, Sir3p and Sir4p form the core of heterochromatin. [ 8 ] Sir4p serves as a binding site for Sir2p, which is the next to bind. Sir2p deacetylates adjacent histones, which is thought to further condense the chromatin and prevent the binding of other transcription promoting histone modification enzymes. [ 9 ] Sir3p binding follows, further condensing the heterochromatin. Sir1p plays a role in the initiation of silencing at the HM loci. A large number of other proteins act in both a synergistic and antagonistic manner. [ 10 ] Early work characterizing Ty5 targeted transposition focused on two fronts: identifying the component of Ty5 responsible for targeting and identifying the factor with which it interacted. Due to the central role of the Sir proteins in heterochromatin formation, they were initially considered as potential targeting signals. Because integration is mediated by the retrotransposon integrase enzyme, it was speculated to contain a component that would recognize heterochromatin. The C-terminus of the Ty retrotransposon’s integrase contains an extension not seen in the retroviruses. This region is also not conserved between Ty1 and Ty5, whereas the rest of the integrase is, suggesting that this divergence could be responsible for the different targeting of the yeast Ty elements. A mutation was identified in the integrase C-terminus that randomized Ty5 integration, suggesting that this region of integrase was in fact involved in targeted transposition. [ 4 ] Ty5 is a relative of the Retroviridae family of retroviruses , which includes the human pathogen HIV . 
Ty5 is a tractable system in which to study the biology of retrovirus integration.
https://en.wikipedia.org/wiki/Ty5_retrotransposon
Tyche / ˈ t aɪ k i / is a hypothetical gas giant located in the Solar System 's Oort cloud , first proposed in 1999 by astrophysicists John Matese, Patrick Whitman and Daniel Whitmire of the University of Louisiana at Lafayette . [ 1 ] [ 2 ] They argued that evidence of Tyche's existence could be seen in a supposed bias in the points of origin for long-period comets . More recently, Matese [ 3 ] and Whitmire [ 4 ] re-evaluated the comet data and noted that Tyche, if it existed, would be detectable in the archive of data that was collected by NASA 's Wide-field Infrared Survey Explorer (WISE) telescope. [ 5 ] In 2014, NASA announced that the WISE survey had ruled out any object with Tyche's characteristics, indicating that Tyche as hypothesized by Matese, Whitman, and Whitmire does not exist. [ 6 ] [ 7 ] [ 8 ] Matese, Whitmire and their colleague Patrick Whitman first proposed the existence of this planet in 1999, [ 9 ] based on observations of the orbits of long-period comets. Most astronomers agree that long-period comets (those with orbits of thousands to millions of years) have a roughly isotropic distribution; that is, they arrive at random from every point in the sky. [ 10 ] Because comets are volatile and dissipate over time, astronomers suspect that they must be held in a spherical cloud tens of thousands of AU distant (known as the Oort cloud ) for most of their existence. [ 10 ] However, Matese and Whitmire claimed that rather than arriving from random points across the sky as is commonly thought, comet orbits were in fact clustered in a band inclined to the orbital plane of the planets . Such clustering could be explained if they were disturbed by an unseen object at least as large as Jupiter , possibly a brown dwarf , located in the outer part of the Oort cloud . [ 11 ] [ 12 ] They also suggested that such an object might explain the trans-Neptunian object Sedna 's peculiar orbit. [ 13 ] However, the sample size of Oort comets was small and the results were inconclusive. [ 14 ] Whitmire and Matese speculated that Tyche's orbit would lie at approximately 500 times Neptune 's distance, some 15,000 AU (2.2 × 10¹² km) from the Sun , a little less than one quarter of a light year . This is well within the Oort cloud , whose boundary is estimated to be beyond 50,000 AU. It would have an orbital period of roughly 1.8 million years. [ 15 ] A failed search of older IRAS data suggests that an object of 5 Jupiter masses would need to be at a distance greater than 10,000 AU. [ 7 ] Such a planet would orbit in a plane different from the ecliptic, [ 16 ] and would probably have been in a wide-binary orbit at the time of its formation. [ 7 ] Wide binaries may form through capture during the dissolution of a star's birth cluster . [ 7 ] In 2011, Whitmire and Matese speculated that the hypothesized planet could be up to four times the mass of Jupiter and have a relatively high temperature of approximately 200 K (−73 °C; −100 °F), [ 7 ] due to residual heat from its formation and Kelvin–Helmholtz heating . [ citation needed ] It would be insufficiently massive to undergo nuclear fusion reactions in its interior, a process that occurs in objects above roughly 13 Jupiter masses . Although more massive than Jupiter, Tyche would be about Jupiter's size since degenerate pressure causes massive gas giants to increase only in density, not in size, relative to their mass. [ a ] If Tyche were to be found, it was expected to be found by the end of 2013 and to be only 1–2 Jupiter masses. 
[ 19 ] Tyche ( τύχη , meaning "fortune" or "luck" in Greek ) was the Greek goddess of fortune and prosperity. The name was chosen to avoid confusion with an earlier similar hypothesis that the Sun has a dim companion named Nemesis , whose gravity triggers influxes of comets into the inner Solar System, leading to mass-extinctions on Earth . Tyche was the name of the "good sister" of Nemesis . [ 7 ] This name was first used for an outer Oort cloud object by J. Davy Kirkpatrick at the Infrared Processing and Analysis Center of the California Institute of Technology. [ 20 ] The Wide-field Infrared Survey Explorer (WISE) space telescope has completed an all-sky infrared survey that includes areas where Whitmire and Matese anticipate that Tyche may be found. [ 7 ] On March 14, 2012, the first-pass all-sky survey catalog of the WISE mission was released. [ 21 ] The co-added (AllWISE) post-cryo second survey of the sky was released at the end of 2013. [ 22 ] On March 7, 2014, NASA reported that the WISE telescope had ruled out the possibility of a Saturn-sized object out to 10,000—28,000 AU, and a Jupiter-sized or larger object out to 26,000—82,000 AU (0.4 light-years ). [ 6 ] [ 23 ]
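As a rough consistency check on the distance and period quoted earlier (an illustrative calculation, not part of the original article), Kepler's third law in solar units gives the orbital period directly from the semi-major axis in AU, assuming a near-circular orbit and a mass negligible compared with the Sun's:

P \approx a^{3/2}\ \text{yr} = (15\,000)^{3/2}\ \text{yr} \approx 1.8\times 10^{6}\ \text{yr},

in agreement with the roughly 1.8-million-year period stated for a distance of about 15,000 AU.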
https://en.wikipedia.org/wiki/Tyche_(hypothetical_planet)
The Tycho Brahe Medal is awarded by the European Astronomical Society . It is named after the Danish Renaissance astronomer Tycho Brahe . Inaugurated in 2008, the prize is awarded annually in recognition of the pioneering development of European astronomical instrumentation, or of major discoveries based largely on such instruments. [ 1 ] Originally conferred as a prize, it has been awarded as a medal since 2019. [ 1 ] The following persons have received the Tycho Brahe Prize/Medal: [ 1 ]
https://en.wikipedia.org/wiki/Tycho_Brahe_Medal
In mathematics , Tychonoff's theorem states that the product of any collection of compact topological spaces is compact with respect to the product topology . The theorem is named after Andrey Nikolayevich Tikhonov (whose surname sometimes is transcribed Tychonoff ), who proved it first in 1930 for powers of the closed unit interval and in 1935 stated the full theorem along with the remark that its proof was the same as for the special case. The earliest known published proof is contained in a 1935 article by Tychonoff, "Über einen Funktionenraum" . [ 1 ] Tychonoff's theorem is often considered as perhaps the single most important result in general topology (along with Urysohn's lemma ). [ 2 ] The theorem is also valid for topological spaces based on fuzzy sets . [ 3 ] The theorem depends crucially upon the precise definitions of compactness and of the product topology ; in fact, Tychonoff's 1935 paper defines the product topology for the first time. Conversely, part of its importance is to give confidence that these particular definitions are the most useful (i.e. most well-behaved) ones. Indeed, the Heine–Borel definition of compactness—that every covering of a space by open sets admits a finite subcovering—is relatively recent. More popular in the 19th and early 20th centuries was the Bolzano-Weierstrass criterion that every bounded infinite sequence admits a convergent subsequence, now called sequential compactness . These conditions are equivalent for metrizable spaces , but neither one implies the other in the class of all topological spaces. It is almost trivial to prove that the product of two sequentially compact spaces is sequentially compact—one passes to a subsequence for the first component and then a subsubsequence for the second component. An only slightly more elaborate "diagonalization" argument establishes the sequential compactness of a countable product of sequentially compact spaces. However, the product of continuum many copies of the closed unit interval (with its usual topology) fails to be sequentially compact with respect to the product topology, even though it is compact by Tychonoff's theorem (e.g., see Wilansky 1970 , p. 134). This is a critical failure: if X is a completely regular Hausdorff space , there is a natural embedding from X into [0,1] C ( X ,[0,1]) , where C ( X ,[0,1]) is the set of continuous maps from X to [0,1]. The compactness of [0,1] C ( X ,[0,1]) thus shows that every completely regular Hausdorff space embeds in a compact Hausdorff space (or, can be "compactified".) This construction is the Stone–Čech compactification . Conversely, all subspaces of compact Hausdorff spaces are completely regular Hausdorff, so this characterizes the completely regular Hausdorff spaces as those that can be compactified. Such spaces are now called Tychonoff spaces . Tychonoff's theorem has been used to prove many other mathematical theorems. These include theorems about compactness of certain spaces such as the Banach–Alaoglu theorem on the weak-* compactness of the unit ball of the dual space of a normed vector space , and the Arzelà–Ascoli theorem characterizing the sequences of functions in which every subsequence has a uniformly convergent subsequence. They also include statements less obviously related to compactness, such as the De Bruijn–Erdős theorem stating that every minimal k -chromatic graph is finite, and the Curtis–Hedlund–Lyndon theorem providing a topological characterization of cellular automata . 
As a rule of thumb, any sort of construction that takes as input a fairly general object (often of an algebraic, or topological-algebraic nature) and outputs a compact space is likely to use Tychonoff: e.g., the Gelfand space of maximal ideals of a commutative C*-algebra , the Stone space of maximal ideals of a Boolean algebra , and the Berkovich spectrum of a commutative Banach ring . 1) Tychonoff's 1930 proof used the concept of a complete accumulation point . 2) The theorem is a quick corollary of the Alexander subbase theorem . More modern proofs have been motivated by the following considerations: the approach to compactness via convergence of subsequences leads to a simple and transparent proof in the case of countable index sets. However, the approach to convergence in a topological space using sequences is sufficient when the space satisfies the first axiom of countability (as metrizable spaces do), but generally not otherwise. However, the product of uncountably many metrizable spaces, each with at least two points, fails to be first countable. So it is natural to hope that a suitable notion of convergence in arbitrary spaces will lead to a compactness criterion generalizing sequential compactness in metrizable spaces that will be as easily applied to deduce the compactness of products. This has turned out to be the case. 3) The theory of convergence via filters, due to Henri Cartan and developed by Bourbaki in 1937, leads to the following criterion: assuming the ultrafilter lemma , a space is compact if and only if each ultrafilter on the space converges. With this in hand, the proof becomes easy: the (filter generated by the) image of an ultrafilter on the product space under any projection map is an ultrafilter on the factor space, which therefore converges, to at least one x i . One then shows that the original ultrafilter converges to x = ( x i ). In his textbook, Munkres gives a reworking of the Cartan–Bourbaki proof that does not explicitly use any filter-theoretic language or preliminaries. 4) Similarly, the Moore–Smith theory of convergence via nets, as supplemented by Kelley's notion of a universal net , leads to the criterion that a space is compact if and only if each universal net on the space converges. This criterion leads to a proof (Kelley, 1950) of Tychonoff's theorem, which is, word for word, identical to the Cartan/Bourbaki proof using filters, save for the repeated substitution of "universal net" for "ultrafilter base". 5) A proof using nets but not universal nets was given in 1992 by Paul Chernoff. All of the above proofs use the axiom of choice (AC) in some way. For instance, the third proof uses that every filter is contained in an ultrafilter (i.e., a maximal filter), and this is seen by invoking Zorn's lemma . Zorn's lemma is also used to prove Kelley's theorem, that every net has a universal subnet. In fact these uses of AC are essential: in 1950 Kelley proved that Tychonoff's theorem implies the axiom of choice in ZF . Note that one formulation of AC is that the Cartesian product of a family of nonempty sets is nonempty; but since the empty set is most certainly compact, the proof cannot proceed along such straightforward lines. Thus Tychonoff's theorem joins several other basic theorems (e.g. that every vector space has a basis) in being equivalent to AC. On the other hand, the statement that every filter is contained in an ultrafilter does not imply AC. 
Indeed, it is not hard to see that it is equivalent to the Boolean prime ideal theorem (BPI), a well-known intermediate point between the axioms of Zermelo-Fraenkel set theory (ZF) and the ZF theory augmented by the axiom of choice (ZFC). A first glance at the second proof of Tychnoff may suggest that the proof uses no more than (BPI), in contradiction to the above. However, the spaces in which every convergent filter has a unique limit are precisely the Hausdorff spaces. In general we must select, for each element of the index set, an element of the nonempty set of limits of the projected ultrafilter base, and of course this uses AC. However, it also shows that the compactness of the product of compact Hausdorff spaces can be proved using (BPI), and in fact the converse also holds. Studying the strength of Tychonoff's theorem for various restricted classes of spaces is an active area in set-theoretic topology . The analogue of Tychonoff's theorem in pointless topology does not require any form of the axiom of choice. To prove that Tychonoff's theorem in its general version implies the axiom of choice, we establish that every infinite cartesian product of non-empty sets is nonempty. The trickiest part of the proof is introducing the right topology. The right topology, as it turns out, is the cofinite topology with a small twist. It turns out that every set given this topology automatically becomes a compact space. Once we have this fact, Tychonoff's theorem can be applied; we then use the finite intersection property (FIP) definition of compactness. The proof itself (due to J. L. Kelley ) follows: Let { A i } be an indexed family of nonempty sets, for i ranging in I (where I is an arbitrary indexing set). We wish to show that the cartesian product of these sets is nonempty. Now, for each i , take X i to be A i with the index i itself tacked on (renaming the indices using the disjoint union if necessary, we may assume that i is not a member of A i , so simply take X i = A i ∪ { i }). Now define the cartesian product X = ∏ i ∈ I X i {\displaystyle X=\prod _{i\in I}X_{i}} along with the natural projection maps π i which take a member of X to its i th term. We give each X j the topology whose open sets are: the empty set, the singleton { i }, the set X i . This makes X i compact, and by Tychonoff's theorem, X is also compact (in the product topology). The projection maps are continuous; all the A i ' s are closed, being complements of the singleton open set { i } in X i . So the inverse images π i −1 ( A i ) are closed subsets of X . We note that ∏ i ∈ I A i = ⋂ i ∈ I π i − 1 ( A i ) {\displaystyle \prod _{i\in I}A_{i}=\bigcap _{i\in I}\pi _{i}^{-1}(A_{i})} and prove that these inverse images have the FIP. Let i 1 , ..., i N be a finite collection of indices in I . Then the finite product A i 1 × ... × A i N is non-empty (only finitely many choices here, so AC is not needed); it merely consists of N -tuples. Let a = ( a 1 , ..., a N ) be such an N -tuple. We extend a to the whole index set: take a to the function f defined by f ( j ) = a k if j = i k , and f ( j ) = j otherwise. This step is where the addition of the extra point to each space is crucial , for it allows us to define f for everything outside of the N -tuple in a precise way without choices (we can already choose, by construction, j from X j ). π i k ( f ) = a k is obviously an element of each A i k so that f is in each inverse image; thus we have ⋂ k = 1 N π i k − 1 ( A i k ) ≠ ∅ . 
{\displaystyle \bigcap _{k=1}^{N}\pi _{i_{k}}^{-1}(A_{i_{k}})\neq \varnothing .} By the FIP definition of compactness, the entire intersection over I must be nonempty, and the proof is complete.
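For reference, the filter-theoretic argument sketched in point 3 above can be compressed into two lines; this is a standard textbook rendering added here for clarity, not text from the article. Assuming the ultrafilter lemma,

X \text{ is compact} \iff \text{every ultrafilter on } X \text{ converges},

\mathcal{U} \text{ an ultrafilter on } \textstyle\prod_{i\in I} X_i \;\Longrightarrow\; (\pi_i)_*\mathcal{U} \to x_i \text{ in the compact } X_i \;\Longrightarrow\; \mathcal{U} \to (x_i)_{i\in I},

since the product topology is the initial topology of the projections. Choosing one limit x_i for every index i uses the axiom of choice when the factors are not Hausdorff, which is exactly the point made in the discussion above.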
https://en.wikipedia.org/wiki/Tychonoff's_theorem
In topology and related branches of mathematics , Tychonoff spaces and completely regular spaces are kinds of topological spaces . These conditions are examples of separation axioms . A Tychonoff space is any completely regular space that is also a Hausdorff space ; there exist completely regular spaces that are not Tychonoff (i.e. not Hausdorff). Paul Urysohn had used the notion of completely regular space in a 1925 paper [ 1 ] without giving it a name. But it was Andrey Tychonoff who introduced the terminology completely regular in 1930. [ 2 ] A topological space X {\displaystyle X} is called completely regular if points can be separated from closed sets via (bounded) continuous real-valued functions. In technical terms this means: for any closed set A ⊆ X {\displaystyle A\subseteq X} and any point x ∈ X ∖ A , {\displaystyle x\in X\setminus A,} there exists a real-valued continuous function f : X → R {\displaystyle f:X\to \mathbb {R} } such that f ( x ) = 1 {\displaystyle f(x)=1} and f | A = 0. {\displaystyle f\vert _{A}=0.} (Equivalently one can choose any two values instead of 0 {\displaystyle 0} and 1 {\displaystyle 1} and even require that f {\displaystyle f} be a bounded function.) A topological space is called a Tychonoff space (alternatively: T 3½ space , or T π space , or completely T 3 space ) if it is a completely regular Hausdorff space . Remark. Completely regular spaces and Tychonoff spaces are related through the notion of Kolmogorov equivalence . A topological space is Tychonoff if and only if it's both completely regular and T 0 . On the other hand, a space is completely regular if and only if its Kolmogorov quotient is Tychonoff. Across mathematical literature different conventions are applied when it comes to the term "completely regular" and the "T"-Axioms. The definitions in this section are in typical modern usage. Some authors, however, switch the meanings of the two kinds of terms, or use all terms interchangeably. In Wikipedia, the terms "completely regular" and "Tychonoff" are used freely and the "T"-notation is generally avoided. In standard literature, caution is thus advised, to find out which definitions the author is using. For more on this issue, see History of the separation axioms . Almost every topological space studied in mathematical analysis is Tychonoff, or at least completely regular. For example, the real line is Tychonoff under the standard Euclidean topology . Other examples include: There are regular Hausdorff spaces that are not completely regular, but such examples are complicated to construct. One of them is the so-called Tychonoff corkscrew , [ 3 ] [ 4 ] which contains two points such that any continuous real-valued function on the space has the same value at these two points. An even more complicated construction starts with the Tychonoff corkscrew and builds a regular Hausdorff space called Hewitt's condensed corkscrew , [ 5 ] [ 6 ] which is not completely regular in a stronger way, namely, every continuous real-valued function on the space is constant. Complete regularity and the Tychonoff property are well-behaved with respect to initial topologies . Specifically, complete regularity is preserved by taking arbitrary initial topologies and the Tychonoff property is preserved by taking point-separating initial topologies. It follows that: Like all separation axioms, complete regularity is not preserved by taking final topologies . In particular, quotients of completely regular spaces need not be regular . 
Quotients of Tychonoff spaces need not even be Hausdorff , with one elementary counterexample being the line with two origins . There are closed quotients of the Moore plane that provide counterexamples. For any topological space X , {\displaystyle X,} let C ( X ) {\displaystyle C(X)} denote the family of real-valued continuous functions on X {\displaystyle X} and let C b ( X ) {\displaystyle C_{b}(X)} be the subset of bounded real-valued continuous functions. Completely regular spaces can be characterized by the fact that their topology is completely determined by C ( X ) {\displaystyle C(X)} or C b ( X ) . {\displaystyle C_{b}(X).} In particular: Given an arbitrary topological space ( X , τ ) {\displaystyle (X,\tau )} there is a universal way of associating a completely regular space with ( X , τ ) . {\displaystyle (X,\tau ).} Let ρ be the initial topology on X {\displaystyle X} induced by C τ ( X ) {\displaystyle C_{\tau }(X)} or, equivalently, the topology generated by the basis of cozero sets in ( X , τ ) . {\displaystyle (X,\tau ).} Then ρ will be the finest completely regular topology on X {\displaystyle X} that is coarser than τ . {\displaystyle \tau .} This construction is universal in the sense that any continuous function f : ( X , τ ) → Y {\displaystyle f:(X,\tau )\to Y} to a completely regular space Y {\displaystyle Y} will be continuous on ( X , ρ ) . {\displaystyle (X,\rho ).} In the language of category theory , the functor that sends ( X , τ ) {\displaystyle (X,\tau )} to ( X , ρ ) {\displaystyle (X,\rho )} is left adjoint to the inclusion functor CReg → Top . Thus the category of completely regular spaces CReg is a reflective subcategory of Top , the category of topological spaces . By taking Kolmogorov quotients , one sees that the subcategory of Tychonoff spaces is also reflective. One can show that C τ ( X ) = C ρ ( X ) {\displaystyle C_{\tau }(X)=C_{\rho }(X)} in the above construction so that the rings C ( X ) {\displaystyle C(X)} and C b ( X ) {\displaystyle C_{b}(X)} are typically only studied for completely regular spaces X . {\displaystyle X.} The category of realcompact Tychonoff spaces is anti-equivalent to the category of the rings C ( X ) {\displaystyle C(X)} (where X {\displaystyle X} is realcompact) together with ring homomorphisms as maps. For example one can reconstruct X {\displaystyle X} from C ( X ) {\displaystyle C(X)} when X {\displaystyle X} is (real) compact. The algebraic theory of these rings is therefore subject of intensive studies. A vast generalization of this class of rings that still resembles many properties of Tychonoff spaces, but is also applicable in real algebraic geometry , is the class of real closed rings . Tychonoff spaces are precisely those spaces that can be embedded in compact Hausdorff spaces . More precisely, for every Tychonoff space X , {\displaystyle X,} there exists a compact Hausdorff space K {\displaystyle K} such that X {\displaystyle X} is homeomorphic to a subspace of K . {\displaystyle K.} In fact, one can always choose K {\displaystyle K} to be a Tychonoff cube (i.e. a possibly infinite product of unit intervals ). Every Tychonoff cube is compact Hausdorff as a consequence of Tychonoff's theorem . Since every subspace of a compact Hausdorff space is Tychonoff one has: Of particular interest are those embeddings where the image of X {\displaystyle X} is dense in K ; {\displaystyle K;} these are called Hausdorff compactifications of X . 
{\displaystyle X.} Given any embedding of a Tychonoff space X {\displaystyle X} in a compact Hausdorff space K , {\displaystyle K,} the closure of the image of X {\displaystyle X} in K {\displaystyle K} is a compactification of X . {\displaystyle X.} In the same 1930 article [ 2 ] where Tychonoff defined completely regular spaces, he also proved that every Tychonoff space has a Hausdorff compactification. Among those Hausdorff compactifications, there is a unique "most general" one, the Stone–Čech compactification β X . {\displaystyle \beta X.} It is characterized by the universal property that, given a continuous map f {\displaystyle f} from X {\displaystyle X} to any other compact Hausdorff space Y , {\displaystyle Y,} there is a unique continuous map g : β X → Y {\displaystyle g:\beta X\to Y} that extends f {\displaystyle f} in the sense that f {\displaystyle f} is the composition of g {\displaystyle g} and the canonical embedding j : X → β X . {\displaystyle j:X\to \beta X.} Complete regularity is exactly the condition necessary for the existence of uniform structures on a topological space. In other words, every uniform space has a completely regular topology and every completely regular space X {\displaystyle X} is uniformizable . A topological space admits a separated uniform structure if and only if it is Tychonoff. Given a completely regular space X {\displaystyle X} there is usually more than one uniformity on X {\displaystyle X} that is compatible with the topology of X . {\displaystyle X.} However, there will always be a finest compatible uniformity, called the fine uniformity on X . {\displaystyle X.} If X {\displaystyle X} is Tychonoff, then the uniform structure can be chosen so that β X {\displaystyle \beta X} becomes the completion of the uniform space X . {\displaystyle X.}
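To see the definition of complete regularity given earlier in action, here is the standard textbook witness (an illustrative example, not from the article): in a metric space ( X , d ), a point x outside a closed set A is separated from A by the bounded continuous function

f(y) = \frac{d(y,A)}{d(y,A) + d(y,x)},

which is well defined because the denominator can vanish only if y lies in the closed set A and simultaneously equals x; it satisfies f ( x ) = 1, f vanishes on A, and 0 ≤ f ≤ 1. Hence every metric space is completely regular and, being Hausdorff, Tychonoff.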
https://en.wikipedia.org/wiki/Tychonoff_space
Tygon® is a brand name for a family of flexible polymer tubing consisting of a variety of materials to be used "across a range of specialized fluid transfer requirements". [ 1 ] The specific composition of each type is a trade secret. Some variants have multiple layers of different materials. Tygon is a registered trademark of Saint-Gobain Corporation. It is an invented word, owned and used by Saint-Gobain and originated in the late 1930s. Tygon products are produced in three countries, but sold throughout the world. Tygon tubing is used in many markets, including food and beverage, chemical processing, industrial, laboratory, medical, pharmaceutical, and semiconductor processing. There are many formulations of clear, flexible, Tygon tubing. The chemical resistance and physical properties vary among the different formulations, but the tubing generally is intended to be "so resistant to chemical attack that it will handle practically any chemical", whether liquid, gas, or slurry. [ 2 ] While largely non-reactive, Tygon has been reported to liberate carbon monoxide and is listed among carbon monoxide-releasing molecules . Tygon B-44-3 , Tygon B-44-4X , Tygon B-44-4X I.B., and Tygon Silver (antimicrobial) were widely used in the food and beverage industry, in particular in: beverage dispensing, dairy processing, soft-serve dispensing, vitamin and flavor concentrate systems, cosmetic production, and water purification systems. These formulations each meet U.S. Food and Drug Administration 3-A and NSF International 51 criteria but they do not comply with European Directives (European Directive 2002/72/EC of 6 August 2002 relating to plastic materials and articles intended to come into contact with foodstuffs as modified in particular by Directive 2007/19/EC of 2 April 2007). Several formulations of Tygon are USP class VI approved and can be used in either surgical procedures or pharmaceutical processing. Tygon Medical/Surgical Tubing S-50-HL — Characterized to the latest ISO 10993 standards and U.S. Food and Drug Administration (FDA) guidelines for biocompatibility . This material is non-toxic, non- hemolytic , and non- pyrogenic . This formulation is used in minimally invasive devices, dialysis equipment, for bypass procedures , and chemotherapy drug delivery. Tygon Medical Tubing S-54-HL was introduced in 1964 for use in medical applications. This material can be used in catheters , for intravenous or intra-arterial infusion and other surgical uses. Tygon S-54-HL can also be fabricated into cannulae or protective sheath products using thermoforming and flaring techniques. Tygon LFL (Long Flex Life) pump tubing is non-toxic clear tubing with broad chemical resistance. [ citation needed ] It is often used in product filtration and fermentation and surfactant delivery. Tygon 2275 High Purity Tubing is a plasticizer-free material that is often used in sterile filling and dispensing systems and diagnostic equipment. This formulation is also considered to have low absorption / adsorption properties, which minimizes the risk of fluid alteration. Tygon 2275 I.B. High-Purity Pressure Tubing is plasticizer-free and is reinforced with a braid for use with elevated working pressures. Many formulations of Tygon can be used in peristaltic pumps , including the following: Tygon R-3603 Laboratory Tubing is commonly used in chemical laboratories. It is often used in incubators and as a replacement for rubber tubing for Bunsen burners . 
This material is produced in vacuum sizes and can withstand a full vacuum at room temperature. It is a thermoplastic PVC-based material with plasticizer. [ 3 ] Tygon R-1000 Ultra-Soft Tubing is used in general laboratory applications. It is the softest of the Tygon formulations with a durometer hardness of Shore A 40 (ASTM Method D2240-02). Because of the low durometer of this material it is often used in low- torque peristaltic pumps. Tygon LFL (Long Flex Life) Pump Tubing, Tygon 3350 , Tygon S-50-HL Medical/Surgical Tubing, Tygon 2275 High Purity Tubing , and Tygon 2001 Tubing are also used in peristaltic pump applications. Tygon tubing is available in Plasticizer-free/non- DEHP (non- Phthalate )-formulations. These formulations have a high degree of chemical resistance and do not release any hazardous material when properly incinerated. Tygon 2275 High Purity tubing , Tygon Ultra Chemical Resistant Tubing 2075 , and Tygon Plasticizer Free Tubing 2001 are all plasticizer -free. "ND-100 series" products are non-DEHP and use a non-Phthalate plasticizer. Tygon Silver Tubing has a plasticizer-free inner bore and a silver -based compound on the inner surface to decrease bacterial growth and protect against microbes . There are several formulations of Tygon that are used in industrial applications.
https://en.wikipedia.org/wiki/Tygon_tubing
The Tyler poison gas plot was an American domestic terrorism plan in Tyler, Texas , thwarted in April 2003 with the arrest of three individuals and the seizure of a cyanide gas bomb along with a large arsenal. [ 1 ] [ 2 ] Authorities had been investigating the white supremacist conspirators for several years and the case received little media coverage and limited attention in public from the government. The three individuals were linked to white supremacist and anti-government groups. They were: [ 1 ] [ 3 ] Krar was alleged to have made his living travelling across the country selling bomb components and other weapons to violent underground anti-government groups. [ 1 ] [ 3 ] After leaving community college, he moved to New Hampshire and first opened a restaurant and then in 1984 began selling weapons without a license under the name International Development Corporation (IDC). His father was a gunsmith. [ 3 ] He was convicted and fined for impersonating a police officer in 1985. [ 3 ] He worked for a building supply company and often traveled to Central America - though not on company business - until the company closed in 1988, when he stopped filing tax returns for IDC. [ 3 ] He met Bruey in 1989. Federal authorities had been observing Krar since at least 1995, when ATF agents investigated a possible plot to bomb government buildings, but Krar was not charged. [ 2 ] In June 2001, police investigating a fire at Krar's Goffstown storage facility found guns and ammunition, but were persuaded this was legitimate as part of his business. [ 3 ] "Hope this package gets to you O.K. We would hate to have this fall into the wrong hands." Since the September 11 attacks , their attention was focused on Middle Eastern terrorist activities. They were only alerted to Krar's recent activities by accident when he mailed Feltus a package of counterfeit birth certificates from North Dakota, Vermont, and West Virginia, and United Nations Multinational Force and Observers and Defense Intelligence Agency IDs in January 2002. [ 2 ] [ 5 ] The package was mistakenly delivered to a Staten Island man who alerted police. [ 1 ] In August 2002, FBI investigators spoke to Feltus, who admitted to being in a militia and to be storing weapons. [ 3 ] In January 2003, a Nashville state trooper stopped Krar in a routine traffic stop and found drugs, chemicals, false IDs and weapons in the car. Krar was arrested and the FBI were alerted. Krar was bailed and one month later an FBI lab reported that white powder found in the car was sodium cyanide ; an arrest warrant was issued for Krar. [ 3 ] In April 2003, investigators found weapons, pure sodium cyanide and white supremacist material in a storage facility in Noonday, Texas rented by Krar and Bruey. [ 4 ] More weapons were found at their Tyler, Texas home. [ 6 ] The weapons included at least 100 other conventional bombs (including briefcase bombs and pipe bombs), machine guns , an assault rifle, an unregistered silencer, and 500,000 rounds of ammunition. The chemical stockpile seized included sodium cyanide , hydrochloric acid , nitric acid and acetic acid . [ 1 ] [ 3 ] The cyanide was in a device with acid that would trigger its release as a gas bomb. [ 6 ] On May 4, 2004, Krar was sentenced to 135 months in prison after he pleaded guilty to building and possessing chemical weapons. Bruey was sentenced to 57 months after pleading to "conspiracy to possess illegal weapons." 
[ 7 ] As per a lookup at the Bureau of Prisons prisoner database [ 8 ] on September 18, 2012, Krar (09751-078) died in prison on May 7, 2009, [ 9 ] Bruey (10601-078) was released on May 30, 2008, and no information is available for Edward Feltus. Paul Krugman writing in the New York Times noted how John Ashcroft and the US Justice Department gave no comment or press release about the case, in contrast to other foiled plots of international terrorism. Krugman's piece was noted in Congress by John Conyers . [ 10 ] The Christian Science Monitor noted in December 2003 "there have been two government press releases and a handful of local stories, but no press conference and no coverage in the national newspapers." [ 11 ]
https://en.wikipedia.org/wiki/Tyler_poison_gas_plot
Tyndallization is a nineteenth-century process for sterilizing substances, usually food, that can be used to kill heat-resistant endospores ; it is named after its inventor, John Tyndall . Although now considered dated, it is still occasionally used. [ citation needed ] A simple and effective sterilizing method commonly used today is autoclaving : heating the substance being sterilized to 121 °C (250 °F) for 15 minutes in a pressurized system. [ citation needed ] If autoclaving is not possible because of lack of equipment, or the need to sterilize something that will not withstand the higher temperature, unpressurized heating for a prolonged period at a temperature of up to 100 °C (212 °F), the boiling point of water, may be used. The heat will kill any bacterial cells ; however, bacterial spores capable of later germinating into bacterial cells may survive. Tyndallization can be used to destroy the spores. [ 1 ] Tyndallization essentially consists of heating the substance to boiling point (or just a little below boiling point) and holding it there for 15 minutes on each of three successive days. After each heating, the resting period allows spores that have survived to germinate into bacterial cells; these cells are then killed by the next day's heating. During the resting periods the substance being sterilized is kept in a moist environment at a warm room temperature, conducive to germination of the spores. An environment favourable to bacteria promotes the germination of cells from spores, and spores do not form from cells under these conditions [ citation needed ] (see bacterial spores ). The Tyndallization process is usually effective in practice, but it is not considered completely reliable: some spores may survive and later germinate and multiply. It is not often used today, but is used for sterilizing items that cannot withstand pressurized heating, such as plant seeds. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Tyndallization
Type-1.5 superconductors are multicomponent superconductors characterized by two or more coherence lengths , at least one of which is shorter than the magnetic field penetration length λ {\displaystyle \lambda } , and at least one of which is longer. This is in contrast to single-component superconductors, where there is only one coherence length ξ {\displaystyle \xi } and the superconductor is necessarily either type 1 ( ξ > λ {\displaystyle \xi >\lambda } ) or type 2 ( ξ < λ {\displaystyle \xi <\lambda } ) (often a coherence length is defined with extra 2 1 / 2 {\displaystyle 2^{1/2}} factor, with such a definition the corresponding inequalities are ξ > 2 λ {\displaystyle \xi >{\sqrt {2}}\lambda } and ξ < 2 λ {\displaystyle \xi <{\sqrt {2}}\lambda } ). When placed in magnetic field, type-1.5 superconductors should form quantum vortices : magnetic-flux-carrying excitations. They allow magnetic field to pass through superconductors due to a vortex-like circulation of superconducting particles (electronic pairs). In type-1.5 superconductors these vortices have long-range attractive, short-range repulsive interaction. As a consequence a type-1.5 superconductor in a magnetic field can form a phase separation into domains with expelled magnetic field and clusters of quantum vortices which are bound together by attractive intervortex forces. The domains of the Meissner state retain the two-component superconductivity, while in the vortex clusters one of the superconducting components is suppressed. Thus such materials should allow coexistence of various properties of type-I and type-II superconductors. Type-I superconductors completely expel external magnetic fields if the strength of the applied field is sufficiently low. Also the supercurrent can flow only on the surface of such a superconductor but not in its interior. This state is called the Meissner state . However at elevated magnetic field, when the magnetic field energy becomes comparable with the superconducting condensation energy, the superconductivity is destroyed by the formation of macroscopically large inclusions of non-superconducting phase. Type-II superconductors , besides the Meissner state , possess another state: a sufficiently strong applied magnetic field can produce currents in the interior of superconductor due to formation of quantum vortices . The vortices also carry magnetic flux through the interior of the superconductor. These quantum vortices repel each other and thus tend to form uniform vortex lattices or liquids. [ 1 ] Formally, vortex solutions exist also in models of type-I superconductivity, but the interaction between vortices is purely attractive, so a system of many vortices is unstable against a collapse onto a state of a single giant normal domain with supercurrent flowing on its surface. More importantly, the vortices in type-I superconductor are energetically unfavorable. To produce them would require the application of a magnetic field stronger than what a superconducting condensate can sustain. Thus a type-I superconductor goes to non-superconducting states rather than forming vortices. In the usual Ginzburg–Landau theory , only the quantum vortices with purely repulsive interaction are energetically cheap enough to be induced by applied magnetic field. It was proposed [ 2 ] that the type-I/type-II dichotomy could be broken in a multi-component superconductors, which possess multiple coherence lengths. 
Examples of multi-component superconductivity are the multi-band superconductors magnesium diboride and the oxypnictides , and exotic superconductors with nontrivial Cooper-pairing. There, one can distinguish two or more superconducting components associated, for example, with electrons belonging to different bands of the band structure . A different example of a two-component system is the projected superconducting state of liquid metallic hydrogen or deuterium , where mixtures of superconducting electrons and superconducting protons or deuterons were theoretically predicted. It was also pointed out that systems which have phase transitions between different superconducting states, such as between s {\displaystyle s} and s + i s {\displaystyle s+is} or between U ( 1 ) {\displaystyle U(1)} and U ( 1 ) × U ( 1 ) {\displaystyle U(1)\times U(1)} , should rather generically fall into the type-1.5 state near that transition due to the divergence of one of the coherence lengths. For multicomponent superconductors with so-called U(1)×U(1) symmetry the Ginzburg–Landau model is a sum of two single-component Ginzburg–Landau models coupled by a vector potential A {\displaystyle A} : F = ∑ i = 1 , 2 1 2 m | ( ∇ − i e A ) ψ i | 2 + α i | ψ i | 2 + β i | ψ i | 4 + 1 2 ( ∇ × A ) 2 {\displaystyle F=\sum _{i=1,2}{\frac {1}{2m}}|(\nabla -ieA)\psi _{i}|^{2}+\alpha _{i}|\psi _{i}|^{2}+\beta _{i}|\psi _{i}|^{4}+{\frac {1}{2}}(\nabla \times A)^{2}} where ψ i = | ψ i | e i ϕ i , i = 1 , 2 {\displaystyle \psi _{i}=|\psi _{i}|e^{i\phi _{i}},i=1,2} are two superconducting condensates. If the condensates are coupled only electromagnetically, i.e. by A {\displaystyle A} , the model has three length scales: the London penetration length λ = 1 e | ψ 1 | 2 + | ψ 2 | 2 {\displaystyle \lambda ={\frac {1}{e{\sqrt {|\psi _{1}|^{2}+|\psi _{2}|^{2}}}}}} and two coherence lengths ξ 1 = 1 2 α 1 , ξ 2 = 1 2 α 2 {\displaystyle \xi _{1}={\frac {1}{\sqrt {2\alpha _{1}}}},\xi _{2}={\frac {1}{\sqrt {2\alpha _{2}}}}} . The vortex excitations in that case have cores in both components which are co-centered because of the electromagnetic coupling mediated by the field A {\displaystyle A} . The necessary but not sufficient condition for the occurrence of the type-1.5 regime is ξ 1 > λ > ξ 2 {\displaystyle \xi _{1}>\lambda >\xi _{2}} . [ 2 ] An additional condition of thermodynamic stability is satisfied for a range of parameters. These vortices have a nonmonotonic interaction: they attract each other at large distances and repel each other at short distances. [ 2 ] [ 3 ] [ 4 ] It was shown that there is a range of parameters where these vortices are energetically favorable enough to be excitable by an external field, attractive interaction notwithstanding. This results in the formation of a special superconducting phase in low magnetic fields dubbed the "Semi-Meissner" state. [ 2 ] The vortices, whose density is controlled by the applied magnetic flux density, do not form a regular structure. Instead, they should have a tendency to form vortex "droplets" because of the long-range attractive interaction caused by the condensate density suppression in the area around a vortex. Such vortex clusters should coexist with areas of vortex-less two-component Meissner domains. Inside such a vortex cluster the component with the larger coherence length is suppressed, so that component has appreciable current only at the boundary of the cluster.
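As a rough numerical illustration of the length-scale criterion just stated, the following is a minimal sketch in the dimensionless units of the model above; the parameter values are invented purely for illustration and do not describe any real material.

import math

def length_scales(e, psi1, psi2, alpha1, alpha2):
    """Length scales of the electromagnetically coupled two-component GL model above.
    alpha1, alpha2 are the magnitudes entering xi_i = 1/sqrt(2*alpha_i);
    psi1, psi2 are the ground-state condensate amplitudes |psi_i|."""
    lam = 1.0 / (e * math.sqrt(psi1**2 + psi2**2))   # London penetration length
    xi1 = 1.0 / math.sqrt(2.0 * alpha1)              # coherence length of component 1
    xi2 = 1.0 / math.sqrt(2.0 * alpha2)              # coherence length of component 2
    return lam, xi1, xi2

# Made-up parameters chosen so that xi1 > lambda > xi2,
# the necessary (but not sufficient) condition for the type-1.5 regime.
lam, xi1, xi2 = length_scales(e=1.0, psi1=0.7, psi2=0.7, alpha1=0.1, alpha2=2.0)
print(lam, xi1, xi2, xi1 > lam > xi2)

With these values the check prints True (xi1 ≈ 2.24, lambda ≈ 1.01, xi2 = 0.5); whether a real material realizes the regime additionally depends on the thermodynamic-stability condition mentioned above.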
In a two-band superconductor the electrons in different bands are not independently conserved thus the definition of two superconducting components is different. A two-band superconductor is described by the following Ginzburg-Landau model. [ 5 ] F = ∑ i , j = 1 , 2 1 2 m | ( ∇ − i e A ) ψ i | 2 + α i | ψ i | 2 + β i | ψ i | 4 − η ( ψ 1 ψ 2 ∗ + ψ 1 ∗ ψ 2 ) + γ [ ( ∇ − i e A ) ψ 1 ⋅ ( ∇ + i e A ) ψ 2 ∗ + ( ∇ + i e A ) ψ 1 ∗ ⋅ ( ∇ − i e A ) ψ 2 ] + ν | ψ 1 | 2 | ψ 2 | 2 + 1 2 ( ∇ × A ) 2 {\displaystyle F=\sum _{i,j=1,2}{\frac {1}{2m}}|(\nabla -ieA)\psi _{i}|^{2}+\alpha _{i}|\psi _{i}|^{2}+\beta _{i}|\psi _{i}|^{4}-\eta (\psi _{1}\psi _{2}^{*}+\psi _{1}^{*}\psi _{2})+\gamma [(\nabla -ieA)\psi _{1}\cdot (\nabla +ieA)\psi _{2}^{*}+(\nabla +ieA)\psi _{1}^{*}\cdot (\nabla -ieA)\psi _{2}]+\nu |\psi _{1}|^{2}|\psi _{2}|^{2}+{\frac {1}{2}}(\nabla \times A)^{2}} where again ψ i = | ψ i | e i ϕ i , i = 1 , 2 {\displaystyle \psi _{i}=|\psi _{i}|e^{i\phi _{i}},i=1,2} are two superconducting condensates. In multiband superconductors quite generically η ≠ 0 , γ ≠ 0 {\displaystyle \eta \neq 0,\gamma \neq 0} . When η ≠ 0 , γ ≠ 0 , ν ≠ 0 {\displaystyle \eta \neq 0,\gamma \neq 0,\nu \neq 0} three length scales of the problem are again the London penetration length and two coherence lengths. However, in this case the coherence lengths ξ ~ 1 ( α 1 , β 1 , α 2 , β 2 , η , γ , ν ) , ξ ~ 2 ( α 1 , β 1 , α 2 , β 2 , η , γ , ν ) {\displaystyle {\tilde {\xi }}_{1}(\alpha _{1},\beta _{1},\alpha _{2},\beta _{2},\eta ,\gamma ,\nu ),{\tilde {\xi }}_{2}(\alpha _{1},\beta _{1},\alpha _{2},\beta _{2},\eta ,\gamma ,\nu )} are associated with "mixed" combinations of density fields. [ 3 ] [ 4 ] [ 6 ] A microscopic theory of type-1.5 superconductivity has been reported. [ 4 ] In 2009, experimental results have been reported [ 7 ] [ 8 ] [ 9 ] claiming that magnesium diboride may fall into this new class of superconductivity. The term type-1.5 superconductor was coined for this state. Further experimental data backing this conclusion was reported in. [ 10 ] More recent theoretical works show that the type-1.5 may be more general phenomenon because it does not require a material with two truly superconducting bands, but can also happen as a result of even very small interband proximity effect [ 6 ] and is robust in the presence of various inter-band couplings such as interband Josephson coupling. [ 3 ] [ 11 ] In 2014, experimental study suggested that Sr2RuO4 is type-1.5 superconductor. [ 12 ] Type-I and type-II superconductors feature dramatically different charge flow patterns. Type-I superconductors have two state-defining properties: The lack of electric resistance and the fact that they do not allow an external magnetic field to pass through them. When a magnetic field is applied to these materials, superconducting electrons produce a strong current on the surface, which in turn produces a magnetic field in the opposite direction to cancel the interior magnetic field, similar to how typical conductors cancel interior electric fields with surface charge distributions. An externally applied magnetic field of sufficiently low strength is cancelled in the interior of a type-I superconductor by the field produced by the surface current. In type-II superconducting materials, however, a complicated flow of superconducting electrons can form deep in the interior of the material. In a type-II material, magnetic fields can penetrate into the interior, carried inside by vortices that form an Abrikosov vortex lattice. 
In type-1.5 superconductors, there are at least two superconducting components. In such materials, the external magnetic field can produce clusters of tightly packed vortex droplets, because vortices in these materials should attract each other at large distances and repel at short distances. Since the attraction originates in the overlap of vortex cores in one of the superconducting components, this component will be depleted in the vortex cluster. Thus a vortex cluster will represent two competing types of superflow. One component will form vortices bunched together, while the second component will produce supercurrent flowing on the surface of the vortex clusters, in a way similar to how electrons flow on the exterior of type-I superconductors. These vortex clusters are separated by "voids," with no vortices, no currents and no magnetic field. [ 13 ] Movies from numerical simulations show the Semi-Meissner state, in which Meissner domains coexist with clusters where vortex droplets form in one superconducting component and macroscopic normal domains form in the other. [ 14 ] Animations from numerical calculations of vortex cluster formation are available at " Numerical simulations of vortex clusters formation in type-1.5 superconductors. "
https://en.wikipedia.org/wiki/Type-1.5_superconductor
In superconductivity , a type-II superconductor is a superconductor that exhibits an intermediate phase of mixed ordinary and superconducting properties at intermediate temperature and fields above the superconducting phases. It also features the formation of magnetic field vortices with an applied external magnetic field . This occurs above a certain critical field strength H c1 . The vortex density increases with increasing field strength. At a higher critical field H c2 , superconductivity is destroyed. Type-II superconductors do not exhibit a complete Meissner effect . [ 2 ] In 1935, J.N. Rjabinin and Lev Shubnikov [ 3 ] [ 4 ] experimentally discovered the type-II superconductors. In 1950, the theory of the two types of superconductors was further developed by Lev Landau and Vitaly Ginzburg in their paper on Ginzburg–Landau theory . [ 5 ] In their argument, a type-I superconductor had positive free energy of the superconductor-normal metal boundary. Ginzburg and Landau pointed out the possibility of type-II superconductors that should form inhomogeneous state in strong magnetic fields. However, at that time, all known superconductors were type-I, and they commented that there was no experimental motivation to consider precise structure of type-II superconducting state. The theory for the behavior of the type-II superconducting state in magnetic field was greatly improved by Alexei Alexeyevich Abrikosov , [ 6 ] who was elaborating on the ideas by Lars Onsager and Richard Feynman of quantum vortices in superfluids . Quantum vortex solution in a superconductor is also very closely related to Fritz London 's work on magnetic flux quantization in superconductors. The Nobel Prize in Physics was awarded for the theory of type-II superconductivity in 2003. [ 7 ] Ginzburg–Landau theory introduced the superconducting coherence length ξ in addition to London magnetic field penetration depth λ . According to Ginzburg–Landau theory, in a type-II superconductor λ / ξ > 1 / 2 {\displaystyle \lambda /\xi >1/{\sqrt {2}}} . Ginzburg and Landau showed that this leads to negative energy of the interface between superconducting and normal phases. The existence of the negative interface energy was also known since the mid-1930s from the early works by the London brothers. A negative interface energy suggests that the system should be unstable against maximizing the number of such interfaces. This instability was not observed until the experiments of Shubnikov in 1936 where two critical fields were found. In 1952 an observation of type-II superconductivity was also reported by Zavaritskii. Fritz London demonstrated [ 8 ] [ 9 ] that a magnetic flux can penetrate a superconductor via a topological defect that has integer phase winding and carries quantized magnetic flux. Onsager and Feynman demonstrated that quantum vortices should form in superfluids. [ 10 ] [ 11 ] A 1957 paper by A. A. Abrikosov [ 12 ] generalizes these ideas. In the limit of very short coherence length the vortex solution is identical to London's fluxoid, [ 9 ] where the vortex core is approximated by a sharp cutoff rather than a gradual vanishing of superconducting condensate near the vortex center. Abrikosov found that the vortices arrange themselves into a regular array known as a vortex lattice . [ 7 ] Near a so-called upper critical magnetic field, the problem of a superconductor in an external field is equivalent to the problem of vortex state in a rotating superfluid, discussed by Lars Onsager and Richard Feynman . 
In the vortex state, a phenomenon known as flux pinning becomes possible. This is not possible with type-I superconductors , since they cannot be penetrated by magnetic fields. [ 13 ] If a superconductor is cooled in a field, the field can be trapped, which can allow the superconductor to be suspended over a magnet, with the potential for a frictionless joint or bearing. The value of flux pinning is seen in many applications, such as lifts, frictionless joints, and transportation. The thinner the superconducting layer, the stronger the pinning that occurs when it is exposed to magnetic fields. Type-II superconductors are usually made of metal alloys or complex oxide ceramics . All high-temperature superconductors are type-II superconductors. While most elemental superconductors are type-I, niobium , vanadium , and technetium are elemental type-II superconductors. Boron -doped diamond and silicon are also type-II superconductors. Metal alloy superconductors can also exhibit type-II behavior (e.g., niobium–titanium , one of the most common superconductors in applied superconductivity), as can intermetallic compounds like niobium–tin . Other type-II examples are the cuprate - perovskite ceramic materials, which have achieved the highest superconducting critical temperatures. These include La 1.85 Ba 0.15 CuO 4 , BSCCO , and YBCO ( Yttrium - Barium - Copper - Oxide ), which is famous as the first material to achieve superconductivity above the boiling point of liquid nitrogen (77 K). Due to strong vortex pinning , the cuprates are close to ideally hard superconductors . Strong superconducting electromagnets (used in MRI scanners, NMR machines, and particle accelerators ) often use coils wound of niobium–titanium wire or, for higher fields, niobium–tin wire. These materials are type-II superconductors with a substantial upper critical field H c2 , and in contrast to, for example, the cuprate superconductors with even higher H c2 , they can easily be machined into wires. More recently, however, second-generation superconducting tapes have begun to replace the cheaper niobium-based wires; although far more expensive, the tapes remain superconducting at much higher temperatures and magnetic fields.
https://en.wikipedia.org/wiki/Type-II_superconductor
The interior of a bulk superconductor cannot be penetrated by a weak magnetic field , a phenomenon known as the Meissner effect . When the applied magnetic field becomes too large, superconductivity breaks down. Superconductors can be divided into two types according to how this breakdown occurs. In type-I superconductors , superconductivity is abruptly destroyed via a first order phase transition when the strength of the applied field rises above a critical value H c . This type of superconductivity is normally exhibited by pure metals, e.g. aluminium, lead, and mercury. The only alloys known so far to exhibit type-I superconductivity are tantalum silicide (TaSi 2 ) [ 1 ] and BeAu. [ 2 ] The covalent superconductor SiC:B, silicon carbide heavily doped with boron, is also type-I. [ 3 ] Depending on the demagnetization factor, one may obtain an intermediate state. This state, first described by Lev Landau , is a phase separation into macroscopic non-superconducting and superconducting domains. [ 4 ] This behavior is different from that of type-II superconductors , which exhibit two critical magnetic fields. The first, lower critical field is reached when magnetic flux vortices begin to penetrate the material, while the material remains superconducting outside these microscopic vortices. When the vortex density becomes too large, the entire material becomes non-superconducting; this corresponds to the second, higher critical field. The ratio of the London penetration depth λ to the superconducting coherence length ξ determines whether a superconductor is type-I or type-II. Type-I superconductors are those with 0 < λ ξ < 1 2 {\displaystyle 0<{\tfrac {\lambda }{\xi }}<{\tfrac {1}{\sqrt {2}}}} , and type-II superconductors are those with λ ξ > 1 2 {\displaystyle {\tfrac {\lambda }{\xi }}>{\tfrac {1}{\sqrt {2}}}} . [ 5 ]
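To make the Ginzburg–Landau criterion above concrete, here is a minimal sketch (not from the article) that classifies a material by the ratio κ = λ/ξ. The niobium and aluminium numbers used below are approximate, commonly quoted textbook values included purely as examples.

import math

def classify(penetration_depth_nm, coherence_length_nm):
    """Classify a superconductor by the GL parameter kappa = lambda / xi."""
    kappa = penetration_depth_nm / coherence_length_nm
    return "type-II" if kappa > 1 / math.sqrt(2) else "type-I"

# Approximate textbook values in nanometres, for illustration only.
print(classify(16, 1600))   # aluminium: kappa ~ 0.01  -> type-I
print(classify(39, 38))     # niobium:   kappa ~ 1.03  -> type-II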
https://en.wikipedia.org/wiki/Type-I_superconductor
In mathematical logic , System U and System U − are pure type systems , i.e. special forms of a typed lambda calculus with an arbitrary number of sorts , axioms and rules (or dependencies between the sorts). System U was proved inconsistent by Jean-Yves Girard in 1972 [ 1 ] (and the question of consistency of System U − was formulated). This result led to the realization that Martin-Löf 's original 1971 type theory was inconsistent, as it allowed the same "Type in Type" behaviour that Girard's paradox exploits. System U is defined [ 2 ] : 352 as a pure type system with System U − is defined the same with the exception of the ( △ , ∗ ) {\displaystyle (\triangle ,\ast )} rule. The sorts ∗ {\displaystyle \ast } and ◻ {\displaystyle \square } are conventionally called “Type” and “ Kind ”, respectively; the sort △ {\displaystyle \triangle } doesn't have a specific name. The two axioms describe the containment of types in kinds ( ∗ : ◻ {\displaystyle \ast :\square } ) and kinds in △ {\displaystyle \triangle } ( ◻ : △ {\displaystyle \square :\triangle } ). Intuitively, the sorts describe a hierarchy in the nature of the terms. The rules govern the dependencies between the sorts: ( ∗ , ∗ ) {\displaystyle (\ast ,\ast )} says that values may depend on values ( functions ), ( ◻ , ∗ ) {\displaystyle (\square ,\ast )} allows values to depend on types ( polymorphism ), ( ◻ , ◻ ) {\displaystyle (\square ,\square )} allows types to depend on types ( type operators ), and so on. The definitions of System U and U − allow the assignment of polymorphic kinds to generic constructors in analogy to polymorphic types of terms in classical polymorphic lambda calculi, such as System F . An example of such a generic constructor might be [ 2 ] : 353 (where k denotes a kind variable) This mechanism is sufficient to construct a term with the type ( ∀ p : ∗ , p ) {\displaystyle (\forall p:\ast ,p)} (equivalent to the type ⊥ {\displaystyle \bot } ), which implies that every type is inhabited . By the Curry–Howard correspondence , this is equivalent to all logical propositions being provable, which makes the system inconsistent. Girard's paradox is the type-theoretic analogue of Russell's paradox in set theory .
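For reference, the specification of the two systems can be written out as pure-type-system data. The sketch below follows the standard presentation: the sorts and axioms are those named in the article, while the full rule set is supplied here as an assumption of that standard form, with System U − obtained by dropping the ( △ , ∗ ) rule.

# Pure type system specifications: sorts, axioms (s1 : s2), and rules (s1, s2).
# Sorts follow the article's notation: "*" (Type), "□" (Kind), "△" (the unnamed top sort).
SYSTEM_U = {
    "sorts": {"*", "□", "△"},
    "axioms": {("*", "□"), ("□", "△")},   # * : □  and  □ : △
    "rules": {
        ("*", "*"),    # terms depending on terms (ordinary functions)
        ("□", "*"),    # terms depending on types (polymorphism)
        ("□", "□"),    # types depending on types (type operators)
        ("△", "□"),    # kinds depending on △ (polymorphic kinds)
        ("△", "*"),    # types/terms depending on △
    },
}

# System U- has the same sorts and axioms but omits the (△, *) rule.
SYSTEM_U_MINUS = {
    "sorts": SYSTEM_U["sorts"],
    "axioms": SYSTEM_U["axioms"],
    "rules": SYSTEM_U["rules"] - {("△", "*")},
}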
https://en.wikipedia.org/wiki/Type-in-type
A type-in program or type-in listing was computer source code printed in a home computer magazine or book. It was meant to be entered via the keyboard by the reader and then saved to cassette tape or floppy disk . The result was a usable game, utility, or application program. Type-in programs were common in the home computer era from the late 1970s through the early 1990s, when the RAM of 8-bit systems was measured in kilobytes and most computer owners did not have access to networks such as bulletin board systems . Magazines such as Softalk , Compute! , ANALOG Computing , and Ahoy! dedicated much of each issue to type-in programs. The magazines could contain multiple games or other programs for a fraction of the cost of purchasing commercial software on removable media , but the user had to spend up to several hours typing each one in. Most listings were either in a system-specific BASIC dialect or machine code . Machine code programs were long lists of decimal or hexadecimal numbers, often in the form of DATA statements in BASIC. [ 1 ] Most magazines had error checking software to make sure a program was typed correctly. Type-in programs did not carry over to 16-bit computers such as the Amiga and Atari ST in a significant way, as both programs and data (such as graphics) became much larger. It became common to include a covermount 3 1 ⁄ 2 -inch floppy disk or CD-ROM with each issue of a magazine. A reader would take a printed copy of the program listing, such as from a magazine or book, sit down at a computer, and manually enter the lines of code. Computers of this era automatically booted into a programming environment – even the commands to load and run a prepackaged program were really programming commands executed in direct mode . After typing the program in, the user would be able to run it and also to save it to disk or a cassette for future use. Users were often cautioned to save the program before running it, as errors could result in a crash requiring a reboot, which would render the program irretrievable unless it had been saved. While some type-in programs were short, simple utility or demonstration programs, many type-ins were fully functional games or application software, sometimes rivaling commercial packages. Type-ins were usually written in BASIC or a combination of a BASIC loader and machine code . In the latter case, the opcodes and operands of the machine code part were often simply given as DATA statements within the BASIC program, and were loaded using a POKE loop, since few users had access to an assembler . [ a ] In some cases, a special program for entering machine code numerically was provided. Programs with a machine code component sometimes included assembly language listings for users who had assemblers and who were interested in the internal workings of the program. The downside of type-ins was labor. The work required to enter a medium-sized type-in was on the order of hours. If the resulting program turned out not to be to the user's taste, it was quite possible that the user spent more time keying in the program than using it. Additionally, type-ins were error-prone, both for users and for the magazines. This was especially true of the machine code parts of BASIC programs, which were nothing but line after line of data, e.g. DATA statements in the BASIC language. 
In some cases where the version of ASCII used on the type of computer the program was published for included printable characters for each value from 0–255, the code could have been printed using strings that contained the glyphs that the values mapped to, or a mnemonic such as [SHIFT-R] instructing the user which keys to press. While a BASIC program would often stop with an error at an incorrect statement, the machine code parts of a program could fail in untraceable ways. This made the correct entry of programs difficult. [ b ] Other solutions existed for the tedium of typing in seemingly-endless lines of code. Freelance authors wrote most magazine type-in programs and, in the accompanying article, often provided readers a mailing address to send a small sum ( US$ 3 was typical) to buy the program on disk or tape. By the mid-1980s, recognising this demand from readers, many US-published magazines offered all of each issue's type-ins on an optional disk, often with a bonus program or two. Some of these disks became electronic publications in their own right, outlasting their parent magazine as happened with Loadstar . Some UK magazines occasionally offered a free flexi disc that played on a turntable connected to the microcomputer's cassette input. Other input methods, such as the Cauzin Softstrip , were tried, without much success. Not all type-ins were long. Run magazine's "Magic" column specialized in one-liner programs for the Commodore 64. [ 2 ] These programs were often graphic demos or meant to illustrate a technical quirk of the computer's architecture; the text accompanying the graphics demo programs would avoid explicitly describing the resultant image, enticing the reader to type it in. [ 3 ] Type-in programs preceded the home computer era. As David H. Ahl wrote in 1983: In 1971, while education product line manager at Digital Equipment Corp. , I put out a call for games to educational institutions throughout North America. I was overwhelmed with the response. I selected the best games and put them together in a book, 101 Basic Computer Games . After putting the book together on my own time, I convinced reluctant managers at DEC to publish it. They were convinced it wouldn't sell. It, plus its sequel, More Basic Computer Games have sold over half a million copies proving that people are intrigued by computer games. [ 4 ] Upon Ahl's departure from DEC in July 1974, he initiated a bimonthly magazine titled Creative Computing while serving as an educational marketing manager at AT&T. The inaugural issue was released in October of that year, and by the fourth year, a team of eight individuals were working on it. The magazine featured computer games and its debut coincided with the introduction of the Altair 8800 - the first widely accessible computer kit - which was announced in January 1975, according to Ahl. [ 5 ] Most early computer magazines published type-in programs. The professional and business-oriented journals such as Byte and Popular Computing printed them less frequently, often as a test program to illustrate a technical topic covered in the magazine rather than an application for general use. [ 6 ] Consumer-oriented publications such as Compute! and Family Computing ran several each issue. The programs were sometimes specific to a given home computer and sometimes compatible with several computers. 
Platform-specific magazines such as Compute!'s Gazette ( VIC-20 and Commodore 64 ) and Antic ( Atari 8-bit computers ), since they only had to print one version of each program, were able to print more, longer listings. Although type in programs were usually copyrighted, like the many games in BASIC Computer Games , authors often encouraged users to modify them, adding capabilities or otherwise changing them to suit their needs. Many authors used the article accompanying the type-ins to suggest modifications for the reader and programmer to perform. Users would sometimes send their changes back into the magazine for later publication. [ 7 ] This could be considered a predecessor to open source software , but today most open source licenses specify that code be available in a machine-readable format. Antic stated in 1985 that its staff "spends a good portion of our time diligently combing the incoming submissions for practical application programs. We receive a lot of disk directory programs, recipe file storers, mini word processors, and other rehashed versions of old ideas". [ 8 ] While most type-ins were simple games or utilities and likely only to hold a user's interest for a short time, some were very ambitious, rivaling commercial software. Perhaps the most famous example is the type-in word processor SpeedScript , published by Compute!'s Gazette and Compute! for several 8-bit computers starting in 1984. Compute! also published SpeedScript , along with some accessory programs, in book form. It retained a following into the next decade as users refined and added capabilities to it. Compute! discontinued type-in programs in May 1988, stating "As computers and software have grown more powerful, we've realized it's not possible to offer top quality type-in programs for all machines. And we also realize that you're less inclined to type in those programs". [ 9 ] As the cost of cassette tapes and floppy disks declined, and as the sophistication of commercial programs and the technical capabilities of the computers they ran on steadily increased, the importance of the type-in declined. In Europe , magazine covermount disks became common, and type-ins became virtually non-existent. To prevent errors when typing in listings, most publications provided short programs to verify that code was entered correctly. These were specific to a magazine or family of magazines, and different validation programs were usually used for BASIC source and binary data. Compute! and Compute!'s Gazette printed a short listing in each issue for The Automatic Proofreader to check BASIC programs, while ANALOG Computing used D:CHECK (for disk) and C:CHECK (for cassette tape). For binary listings, Compute! and Ahoy! provided MLX and Flankspeed respectively, which were both interactive programs for entering data. The MIKBUG machine code monitor for the Motorola 6800 of the late 1970s incorporated a checksum into its hexadecimal program listings. [ 10 ] ANALOG Computing presented machine code programs as BASIC DATA statements, then prepended a short program to compute checksums. Running the program output a list of values to be checked against those printed in the magazine. Upon successful validation, the program was saved as a binary file and the BASIC code no longer needed.
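The checksum idea behind validators of this kind can be sketched as follows. The snippet is written in Python for readability and uses a deliberately simplified per-line checksum; it is not the actual algorithm of The Automatic Proofreader, MLX, D:CHECK, or any other specific program.

```python
# Simplified illustration of a magazine-style listing validator (not the actual
# algorithm used by any particular magazine's proofreader program).

def line_checksum(line: str) -> int:
    """Sum the character codes of a line, ignoring spaces, modulo 256."""
    return sum(ord(ch) for ch in line if ch != " ") % 256

def verify_listing(lines, published):
    """Return (line number, computed, published) for every mismatched line."""
    return [
        (number, line_checksum(line), want)
        for number, (line, want) in enumerate(zip(lines, published), start=1)
        if line_checksum(line) != want
    ]

if __name__ == "__main__":
    listing = [
        '10 PRINT "HELLO"',
        '20 DATA 169,1,141,32,208,96',
        '30 GOTO 10',
    ]
    published = [line_checksum(line) for line in listing]  # values as printed
    listing[1] = '20 DATA 169,1,141,32,208,97'             # simulate a typing error
    print(verify_listing(listing, published))               # reports line 2
    # Note: a plain sum misses transposed digits, one reason real validators
    # moved to position-dependent checksum schemes.
```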
https://en.wikipedia.org/wiki/Type-in_program
In biology , a type is a particular specimen (or in some cases a group of specimens) of an organism to which the scientific name of that organism is formally associated. In other words, a type is an example that serves to anchor or centralizes the defining features of that particular taxon . In older usage (pre-1900 in botany), a type was a taxon rather than a specimen. [ 1 ] A taxon is a scientifically named grouping of organisms with other like organisms, a set that includes some organisms and excludes others, based on a detailed published description (for example a species description ) and on the provision of type material, which is usually available to scientists for examination in a major museum research collection, or similar institution. [ 1 ] [ 2 ] According to a precise set of rules laid down in the International Code of Zoological Nomenclature (ICZN) and the International Code of Nomenclature for algae, fungi, and plants (ICN), the scientific name of every taxon is almost always based on one particular specimen , or in some cases specimens. Types are of great significance to biologists, especially to taxonomists . Types are usually physical specimens that are kept in a museum or herbarium research collection, but failing that, an image of an individual of that taxon has sometimes been designated as a type. [ 3 ] Describing species and appointing type specimens is part of scientific nomenclature and alpha taxonomy . When identifying material, a scientist attempts to apply a taxon name to a specimen or group of specimens based on their understanding of the relevant taxa, [ clarification needed ] [ citation needed ] based on (at least) having read the type description(s), [ citation needed ] preferably also based on an examination of all the type material of all of the relevant taxa. If there is more than one named type that all appear to be the same taxon, then the oldest name takes precedence and is considered to be the correct name of the material in hand. If on the other hand, the taxon appears never to have been named at all, then the scientist or another qualified expert picks a type specimen and publishes a new name and an official description. [ citation needed ] Depending on the nomenclature code applied to the organism in question, a type can be a specimen, a culture, an illustration , or (under the bacteriological code) a description. Some codes consider a subordinate taxon to be the type, but under the botanical code, the type is always a specimen or illustration. [ citation needed ] For example, in the research collection of the Natural History Museum in London, there is a bird specimen numbered 1886.6.24.20. This is a specimen of a kind of bird commonly known as the spotted harrier , which currently bears the scientific name Circus assimilis . This particular specimen is the holotype for that species; the name Circus assimilis refers, by definition, to the species of that particular specimen. That species was named and described by Jardine and Selby in 1828, and the holotype was placed in the museum collection so that other scientists might refer to it as necessary. [ citation needed ] At least for type specimens there is no requirement for a "typical" individual to be used. Genera and families , particularly those established by early taxonomists, tend to be named after species that are more "typical" for them, but here too this is not always the case and due to changes in systematics cannot be. 
Hence, the term name-bearing type or onomatophore is sometimes used, to denote the fact that biological types do not define "typical" individuals or taxa , but rather fix a scientific name to a specific operational taxonomic unit . Type specimens are theoretically even allowed to be aberrant or deformed individuals or color variations, though this is rarely chosen to be the case, as it makes it hard to determine to which population the individual belonged. [ 1 ] [ 2 ] [ 4 ] The usage of the term type is somewhat complicated by slightly different uses in botany and zoology . In the PhyloCode , type-based definitions are replaced by phylogenetic definitions . [ citation needed ] In some older taxonomic works the word "type" has sometimes been used differently. The meaning was similar in the first Laws of Botanical Nomenclature , [ 5 ] [ 6 ] but has a meaning closer to the term taxon in some other works: [ 7 ] Ce seul caractère permet de distinguer ce type de toutes les autres espèces de la section. ... Après avoir étudié ces diverses formes, j'en arrivai à les considérer comme appartenant à un seul et même type spécifique. Translation: This single character permits [one to] distinguish this type from all other species of the section ... After studying the diverse forms, I came to consider them as belonging to the one and the same specific type. In botanical nomenclature , a type ( typus , nomenclatural type ), "is that element to which the name of a taxon is permanently attached." (article 7.2) [ 8 ] In botany, a type is either a specimen or an illustration. A specimen is a real plant (or one or more parts of a plant or a lot of small plants), dead and kept safe, "curated", in a herbarium (or the equivalent for fungi). Examples of where an illustration may serve as a type include: A type does not determine the circumscription of the taxon. For example, the common dandelion is a controversial taxon: some botanists consider it to consist of over a hundred species, and others regard it as a single species. The type of the name Taraxacum officinale is the same whether the circumscription of the species includes all those small species ( Taraxacum officinale is a "big" species) or whether the circumscription is limited to only one small species among the other hundred ( Taraxacum officinale is a "small" species). The name Taraxacum officinale is the same and the type of the name is the same, but the extent to which the name actually applies varies greatly. Setting the circumscription of a taxon is done by a taxonomist in a publication. Miscellaneous notes: The ICN provides a listing of the various kinds of types (article 9 and the Glossary), [ 8 ] the most important of which is the holotype. These are The word "type" appears in botanical literature as a part of some older terms that have no status under the ICN : for example a clonotype. In zoological nomenclature , the type of a species or subspecies is a specimen or series of specimens. The type of a genus or subgenus is a species. The type of a suprageneric taxon (e.g., family, etc.) is a genus. Names higher than superfamily rank do not have types. A "name-bearing type" is a specimen or image that "provides the objective standard of reference whereby the application of the name of a nominal taxon can be determined." 
[ citation needed ] Although in reality biologists may examine many specimens (when available) of a new taxon before writing an official published species description, nonetheless, under the formal rules for naming species (the International Code of Zoological Nomenclature), a single type must be designated, as part of the published description. [ citation needed ] A type description must include a diagnosis (typically, a discussion of similarities to and differences from closely related species), and an indication of where the type specimen or specimens are deposited for examination. The geographical location where a type specimen was originally found is known as its type locality . In the case of parasites, the term type host (or symbiotype) is used to indicate the host organism from which the type specimen was obtained. [ 9 ] Zoological collections are maintained by universities and museums. Ensuring that types are kept in good condition and made available for examination by taxonomists are two important functions of such collections. And, while there is only one holotype designated, there can be other "type" specimens, the following of which are formally defined: When a single specimen is clearly designated in the original description, this specimen is known as the holotype of that species. [ 10 ] The holotype is typically placed in a major museum, or similar well-known public collection, so that it is freely available for later examination by other biologists. When the original description designated a holotype, there may be additional specimens that the author designates as additional representatives of the same species, termed paratypes. These are not name-bearing types . [ citation needed ] An allotype is a specimen of the opposite sex to the holotype, designated from among paratypes. The word was also formerly used for a specimen that shows features not seen in the holotype of a fossil. [ 11 ] The term is not regulated by the ICZN . [ citation needed ] A neotype is a specimen later selected to serve as the single type specimen when an original holotype has been lost or destroyed or where the original author never cited a specimen. A syntype is any one of two or more specimens that is listed in a species description where no holotype was designated; historically, syntypes were often explicitly designated as such, and under the present ICZN this is a requirement, but modern attempts to publish species description based on syntypes are generally frowned upon by practicing taxonomists, and most are gradually being replaced by lectotypes. Those that still exist are still considered name-bearing types. [ citation needed ] A lectotype is a specimen later selected to serve as the single type specimen for species originally described from a set of syntypes . In zoology, a lectotype is a kind of name-bearing type . When a species was originally described on the basis of a name-bearing type consisting of multiple specimens, one of those may be designated as the lectotype. Having a single name-bearing type reduces the potential for confusion, especially considering that it is not uncommon for a series of syntypes to contain specimens of more than one species. Formally, Carl Linnaeus is the lectotype for Homo sapiens , designated in 1959. [ 12 ] [ 13 ] He published the first book considered to be part of taxonomical nomenclature, the 10th edition of Systema Naturae, which included the first description of Homo sapiens and determined all valid syntypes for the species. 
[ 12 ] Crucially, in 1959, Professor William Stearne wrote in a passing remark on Linnaeus's contributions, "Linnaeus himself, must stand as the type of his Homo sapiens. " [ 12 ] [ 14 ] He justified his choice by noting that the specimen that Linnaeus, who wrote his own autobiography five times, had most studied was probably himself. [ 15 ] This sufficiently and correctly designated Linnaeus to be the lectotype for Homo sapiens . [ 12 ] It has also been suggested that Edward Cope is the lectotype for Homo sapiens , based on the 1994 reporting by Louie Psihoyos of an unpublished proposal by Bob Bakker to do so. [ 12 ] However, this designation is invalid both because Edward Cope was not one of the specimens described in Systema Naturae 10th Ed., and therefore not being a syntype is not eligible, and because Stearne's designation in 1959 has seniority and invalidates future designations. [ 12 ] A paralectotype is any additional specimen from among a set of syntypes after a lectotype has been designated from among them. These are not name-bearing types. [ 16 ] A special case in protists where the type consists of two or more specimens of "directly related individuals" within a preparation medium such as a blood smear. The terms parahapantotype and lectohapantotype refer to type preparations additional to the hapantotype and designated by the describing author. [ 17 ] As with other type designations the use of the prefix "Neo-", such as Neohapantotype , is employed when a replacement for the original hapantotype is designated, or when an original description did not include a designated type specimen. [ 18 ] An illustration on which a new species or subspecies was based. For instance, the Burmese python, Python bivittatus , is one of many species that are based on illustrations by Albertus Seba (1734). [ 19 ] [ 20 ] An ergatotype is a specimen selected to represent a worker member in hymenopterans which have polymorphic castes. [ 11 ] A hypotype is a specimen whose details have previously been published that is used in a supplementary figure or description of the species. [ 21 ] The term " kleptotype " informally refers to a type specimen or a part of it that has been stolen, or improperly relocated. [ 22 ] [ 23 ] [ 24 ] [ 25 ] Type illustrations have also been used by zoologists, as in the case of the Réunion parakeet , which is known only from historical illustrations and descriptions. [ 26 ] : 24 Recently, some species have been described where the type specimen was released alive back into the wild, such as the Bulo Burti boubou (a bushshrike ), described as Laniarius liberatus , in which the species description included DNA sequences from blood and feather samples. Assuming there is no future question as to the status of such a species, the absence of a type specimen does not invalidate the name, but it may be necessary for the future to designate a neotype for such a taxon, should any questions arise. However, in the case of the bushshrike, ornithologists have argued that the specimen was a rare and hitherto unknown color morph of a long-known species, using only the available blood and feather samples. While there is still some debate on the need to deposit actual killed individuals as type specimens, it can be observed that given proper vouchering and storage, tissue samples can be just as valuable should dispute about the validity of a species arise. 
[ citation needed ] The various types listed above are necessary [ citation needed ] because many species were described one or two centuries ago, when a single type specimen, a holotype, was often not designated. Also, types were not always carefully preserved, and intervening events such as wars and fires have resulted in the destruction of the original type material. The validity of a species name often rests upon the availability of original type specimens; or, if the type cannot be found, or one has never existed, upon the clarity of the description. The ICZN has existed only since 1961 when the first edition of the Code was published. The ICZN does not always demand a type specimen for the historical validity of a species, and many "type-less" species do exist. The current edition of the Code, Article 75.3, prohibits the designation of a neotype unless there is "an exceptional need" for "clarifying the taxonomic status" of a species (Article 75.2). There are many other permutations and variations on terms using the suffix "-type" (e.g., allotype , cotype, topotype , generitype , isotype , isoneotype, isolectotype, etc.) but these are not formally regulated by the Code, and a great many are obsolete and/or idiosyncratic. However, some of these categories can potentially apply to genuine type specimens, such as a neotype; e.g., isotypic/topotypic specimens are preferred to other specimens, when they are available at the time a neotype is chosen (because they are from the same time and/or place as the original type). [ citation needed ] A topotype is a specimen that was obtained from the same location that the original type specimen came from. [ 27 ] The term fixation is used by the Code for the declaration of a name-bearing type, whether by original or subsequent designation. [ citation needed ] Each genus must have a designated type species (the term "genotype" was once used for this but has been abandoned because the word has become much better known as the term for a different concept in genetics ). The description of a genus is usually based primarily on its type species, modified and expanded by the features of other included species. The generic name is permanently associated with the name-bearing type of its type species. [ citation needed ] Ideally, a type species best exemplifies the essential characteristics of the genus to which it belongs, but this is subjective and, ultimately, technically irrelevant, as it is not a requirement of the Code. If the type species proves, upon closer examination, to belong to a pre-existing genus (a common occurrence), then all of the constituent species must be either moved into the pre-existing genus or disassociated from the original type species and given a new generic name; the old generic name passes into synonymy and is abandoned unless there is a pressing need to make an exception (decided case-by-case, via petition to the International Commission on Zoological Nomenclature). [ citation needed ] A type genus is a genus from which the name of a family or subfamily is formed. As with type species, the type genus is not necessarily the most representative but is usually the earliest described, largest or best-known genus. It is not uncommon for the name of a family to be based upon the name of a type genus that has passed into synonymy; the family name does not need to be changed in such a situation. [ citation needed ]
https://en.wikipedia.org/wiki/Type_(biology)
In model theory and related areas of mathematics , a type is an object that describes how a (real or possible) element or finite collection of elements in a mathematical structure might behave. More precisely, it is a set of first-order formulas in a language L with free variables x₁, x₂, ..., xₙ that are true of a set of n -tuples of an L -structure ℳ. Depending on the context, types can be complete or partial and they may use a fixed set of constants, A , from the structure ℳ. The question of which types represent actual elements of ℳ leads to the ideas of saturated models and omitting types . Consider a structure ℳ for a language L . Let M be the universe of the structure. For every A ⊆ M , let L ( A ) be the language obtained from L by adding a constant cₐ for every a ∈ A ; in other words, L ( A ) = L ∪ { cₐ : a ∈ A }. A 1-type (of ℳ) over A is a set p ( x ) of formulas in L ( A ) with at most one free variable x (therefore 1-type) such that for every finite subset p₀ ( x ) ⊆ p ( x ) there is some b ∈ M , depending on p₀ ( x ), with ℳ ⊨ p₀ ( b ) (i.e. all formulas in p₀ ( x ) are true in ℳ when x is replaced by b ). Similarly an n -type (of ℳ) over A is defined to be a set p ( x₁, ..., xₙ ) = p ( x ) of formulas in L ( A ), each having its free variables occurring only among the given n free variables x₁, ..., xₙ, such that for every finite subset p₀ ( x ) ⊆ p ( x ) there are some elements b₁, ..., bₙ ∈ M with ℳ ⊨ p₀ ( b₁, ..., bₙ ). A complete type of ℳ over A is one that is maximal with respect to inclusion . Equivalently, for every φ ( x ) ∈ L ( A , x ) either φ ( x ) ∈ p ( x ) or ¬φ ( x ) ∈ p ( x ). Any non-complete type is called a partial type . So, the word type in general refers to any n -type, partial or complete, over any chosen set of parameters (possibly the empty set). An n -type p ( x ) is said to be realized in ℳ if there is an element b ∈ Mⁿ such that ℳ ⊨ p ( b ). The existence of such a realization is guaranteed for any type by the compactness theorem , although the realization might take place in some elementary extension of ℳ, rather than in ℳ itself. If a complete type is realized by b in ℳ, then the type is typically denoted tp_n^ℳ ( b / A ) and referred to as the complete type of b over A . A type p ( x ) is said to be isolated by φ, for φ ∈ p ( x ), if for all ψ ( x ) ∈ p ( x ) we have Th(ℳ) ⊨ φ ( x ) → ψ ( x ).
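These definitions can be gathered into one display, restating them in the notation already used above (this is only a summary of the preceding paragraph, not additional material):

```latex
% Summary of the definitions stated in the surrounding text.
\[
\begin{aligned}
p(x_1,\dots,x_n) \text{ is an } n\text{-type over } A
  \iff{}& \text{for every finite } p_0 \subseteq p \text{ there is } \boldsymbol{b} \in M^n
          \text{ with } \mathcal{M} \models p_0(\boldsymbol{b}) \\
p \text{ is complete}
  \iff{}& \text{for every } \varphi(\boldsymbol{x}) \in L(A,\boldsymbol{x}),\
          \varphi \in p \ \text{or}\ \lnot\varphi \in p \\
p \text{ is realized by } \boldsymbol{b} \in M^n
  \iff{}& \mathcal{M} \models \varphi(\boldsymbol{b}) \text{ for every } \varphi \in p \\
p \text{ is isolated by } \varphi \in p
  \iff{}& \operatorname{Th}(\mathcal{M}) \models \varphi(\boldsymbol{x}) \rightarrow \psi(\boldsymbol{x})
          \text{ for every } \psi \in p
\end{aligned}
\]
```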
Since finite subsets of a type are always realized in ℳ, there is always an element b ∈ Mⁿ such that φ ( b ) is true in ℳ; i.e. ℳ ⊨ φ ( b ), thus b realizes the entire isolated type. So isolated types will be realized in every elementary substructure or extension. Because of this, isolated types can never be omitted (see below). A model that realizes the maximum possible variety of types is called a saturated model , and the ultrapower construction provides one way of producing saturated models. Consider the language L with one binary relation symbol , which we denote as ∈. Let ℳ be the structure ⟨ ω , ∈_ω ⟩ for this language, which is the ordinal ω with its standard well-ordering . Let 𝒯 denote the first-order theory of ℳ. Consider the set of L (ω)-formulas p ( x ) := { n ∈_ω x ∣ n ∈ ω }. First, we claim this is a type. Let p₀ ( x ) ⊆ p ( x ) be a finite subset of p ( x ). We need to find a b ∈ ω that satisfies all the formulas in p₀. Well, we can just take the successor of the largest ordinal mentioned in the set of formulas p₀ ( x ). Then this will clearly contain all the ordinals mentioned in p₀ ( x ). Thus we have that p ( x ) is a type. Next, note that p ( x ) is not realized in ℳ. For, if it were, there would be some n ∈ ω that contains every element of ω. If we wanted to realize the type, we might be tempted to consider the structure ⟨ ω + 1 , ∈_{ω+1} ⟩, which is indeed an extension of ℳ that realizes the type. Unfortunately, this extension is not elementary, for example, it does not satisfy 𝒯. In particular, the sentence ∃ x ∀ y ( y ∈ x ∨ y = x ) is satisfied by this structure and not by ℳ. So, we wish to realize the type in an elementary extension. We can do this by defining a new L -structure, which we will denote ℳ′. The domain of the structure will be ω ∪ ℤ′ where ℤ′ is the set of integers adorned in such a way that ℤ′ ∩ ω = ∅. Let < denote the usual order of ℤ′. We interpret the symbol ∈ in our new structure by ∈_{ℳ′} = ∈_ω ∪ < ∪ ( ω × ℤ′ ). The idea being that we are adding a " ℤ -chain", or copy of the integers, above all the finite ordinals. Clearly any element of ℤ′ realizes the type p ( x ). Moreover, one can verify that this extension is elementary.
Another example: the complete type of the number 2 over the empty set, considered as a member of the natural numbers, would be the set of all first-order statements (in the language of Peano arithmetic ), describing a variable x , that are true when x = 2. This set would include formulas such as x ≠ 1 + 1 + 1, x ≤ 1 + 1 + 1 + 1 + 1, and ∃ y ( y < x ). This is an example of an isolated type, since, working over the theory of the naturals, the formula x = 1 + 1 implies all other formulas that are true about the number 2. As a further example, the statements x ⋅ x = 1 + 1 and x > 0, describing the square root of 2, are consistent with the axioms of ordered fields , and can be extended to a complete type. This type is not realized in the ordered field of rational numbers, but is realized in the ordered field of reals. Similarly, the infinite set of formulas (over the empty set) { x > 1, x > 1 + 1, x > 1 + 1 + 1, ... } is not realized in the ordered field of real numbers, but is realized in the ordered field of hyperreals . Similarly, we can specify a type { 0 < x < 1/ n ∣ n ∈ ℕ } that is realized by an infinitesimal hyperreal that violates the Archimedean property . The reason it is useful to restrict the parameters to a certain subset of the model is that it helps to distinguish the types that can be satisfied from those that cannot. For example, using the entire set of real numbers as parameters one could generate an uncountably infinite set of formulas like x ≠ 1, x ≠ π , ... that would explicitly rule out every possible real value for x , and therefore could never be realized within the real numbers. It is useful to consider the set of complete n -types over A as a topological space . Consider the following equivalence relation on formulas in the free variables x₁, ..., xₙ with parameters in A : two formulas φ and ψ are equivalent, written φ ≡ ψ, when they define the same subset of Mⁿ (that is, when ℳ ⊨ ∀ x₁ ... ∀ xₙ ( φ ↔ ψ )). One can show that ψ ≡ φ if and only if they are contained in exactly the same complete types. The set of formulas in free variables x₁, ..., xₙ over A up to this equivalence relation is a Boolean algebra (and is canonically isomorphic to the set of A -definable subsets of Mⁿ ). The complete n -types correspond to ultrafilters of this Boolean algebra. The set of complete n -types can be made into a topological space by taking the sets of types containing a given formula as a basis of open sets . This constructs the Stone space associated to the Boolean algebra, which is a compact , Hausdorff , and totally disconnected space. Example . The complete theory of algebraically closed fields of characteristic 0 has quantifier elimination , which allows one to show that the possible complete 1-types (over the empty set) correspond exactly to the prime ideals of the polynomial ring Q [ x ] over the rationals Q : if r is an element of the model of type p , then the ideal corresponding to p is the set of polynomials with r as a root (which is only the zero polynomial if r is transcendental). More generally, the complete n -types correspond to the prime ideals of the polynomial ring Q [ x₁, ..., xₙ ], in other words to the points of the prime spectrum of this ring. (The Stone space topology can in fact be viewed as the Zariski topology of a Boolean ring induced in a natural way from the Boolean algebra.
While the Zariski topology is not in general Hausdorff, it is in the case of Boolean rings.) For example, if q ( x , y ) is an irreducible polynomial in two variables, there is a 2-type whose realizations are (informally) pairs ( x , y ) of elements with q ( x , y )=0. Given a complete n -type p one can ask if there is a model of the theory that omits p , in other words there is no n -tuple in the model that realizes p . If p is an isolated point in the Stone space, i.e. if { p } is an open set, it is easy to see that every model realizes p (at least if the theory is complete). The omitting types theorem says that conversely if p is not isolated then there is a countable model omitting p (provided that the language is countable). Example : In the theory of algebraically closed fields of characteristic 0, there is a 1-type represented by elements that are transcendental over the prime field Q . This is a non-isolated point of the Stone space (in fact, the only non-isolated point). The field of algebraic numbers is a model omitting this type, and the algebraic closure of any transcendental extension of the rationals is a model realizing this type. All the other types are "algebraic numbers" (more precisely, they are the sets of first-order statements satisfied by some given algebraic number), and all such types are realized in all algebraically closed fields of characteristic 0.
https://en.wikipedia.org/wiki/Type_(model_theory)
Type 1 regulatory cells or Tr1 ( T R 1 ) cells are a class of regulatory T cells participating in peripheral immunity as a subset of CD4+ T cells . Tr1 cells regulate tolerance towards antigens of any origin. Tr1 cells are self or non-self antigen specific and their key role is to induce and maintain peripheral tolerance [ 1 ] and suppress tissue inflammation in autoimmunity and graft vs. host disease. [ 2 ] The specific cell-surface markers for Tr1 cells in humans and mice are CD4 + CD49b + LAG-3 + CD226 + , of which LAG-3 + and CD49b + are indispensable. [ 3 ] LAG-3 is a membrane protein on Tr1 cells that negatively regulates TCR -mediated signal transduction in cells. LAG-3 activates dendritic cells (DCs) and enhances the antigen -specific T-cell response, which is necessary for the antigen specificity of Tr1 cells. [ 3 ] [ 4 ] [ 5 ] CD49b belongs to the integrin family and is a receptor for many (extracellular) matrix and non-matrix molecules. CD49b contributes only little to the differentiation and function of Tr1 cells. [ 3 ] They characteristically produce high levels of IL-10, IFN-γ, IL-5 and also TGF- β but neither IL-4 nor IL-2. [ 6 ] Production of IL-10 is also much more rapid than its production by other T-helper cell types. [ 6 ] Tr1 cells do not constitutively express FOXP3 [ 7 ] but only transiently, upon their activation and in smaller amounts than CD25 + FOXP3 + regulatory cells. [ 8 ] FOXP3 is not required for Tr1 induction, nor for its function. [ 1 ] They also express repressor of GATA-3 (ROG), while CD25 + FOXP3 + regulatory cells do not. [ 9 ] ROG then downregulates GATA-3, a characteristic transcription factor for Th2 cells . Tr1 cells express high levels of regulatory factors, such as glucocorticoid-induced tumor necrosis factor receptor ( GITR ), OX40 ( CD134 ), and tumor-necrosis factor receptor ( TNFRSF9 ). [ 8 ] Resting human Tr1 cells express the Th1-associated chemokine receptors CXCR3 and CCR5 , and the Th2-associated CCR3, CCR4 and CCR8 . [ 8 ] Upon activation, Tr1 cells migrate preferentially in response to I-309, a ligand for CCR8. [ 8 ] The suppressing and tolerance-inducing effect of Tr1 cells is mediated mainly by cytokines. Other mechanisms, such as cell-to-cell contact, modulation of dendritic cells, metabolic disruption and cytolysis, are however also available to them. [ 1 ] In vivo, Tr1 cells need to be activated to be able to exert their regulatory effects. [ 6 ] Tr1 cells secrete large amounts of the suppressing cytokines IL-10 and TGF-β. [ 7 ] IL-10 directly inhibits T cells by blocking their production of IL-2, IFN-γ and GM-CSF, has a tolerogenic effect on B cells , and supports the differentiation of other regulatory T cells . [ 10 ] IL-10 indirectly downregulates MHC II molecules and co-stimulatory molecules on antigen-presenting cells (APC) and forces them to upregulate tolerogenic molecules such as ILT-3, ILT-4 and HLA-G. [ 11 ] Type 1 regulatory T cells possess the inhibitory receptor CTLA-4 , through which they exert a suppressor function. [ 12 ] Tr1 cells can express the ectoenzymes CD39 and CD73 and are suspected of generating adenosine , which suppresses effector T cell proliferation and their cytokine production in vitro. [ 13 ] Tr1 cells can express both granzyme A and granzyme B. It was shown recently that Tr1 cells, in vitro and also ex vivo, specifically lyse cells of myeloid origin, but not other APC or T or B lymphocytes. [ 14 ] Cytolysis indirectly suppresses the immune response by reducing the numbers of myeloid-origin antigen-presenting cells.
Tr1 cells are inducible, arising from naive precursor T cells. They can be differentiated ex vivo and in vivo. [ 15 ] The ways of inducing Tr1 cells in vivo, ex vivo and in vitro differ and encompass many different approaches, but the molecular mechanism appears to be conserved. IL-27, together with TGF-β, induces IL-10–producing regulatory T cells with Tr1-like properties. [ 16 ] [ 17 ] IL-27 alone can induce IL-10-producing Tr1 cells, but in the absence of TGF-β, the cells produce large quantities of both IFN-γ and IL-10. [ 18 ] IL-6 and IL-21 also play a role in differentiation, as they regulate the expression of transcription factors necessary for IL-10 production, which is believed to initiate the differentiation itself later on. Several transcription factors have been proposed as biomarkers of type 1 regulatory cell differentiation. [ 18 ] Expression of these transcription factors is driven by IL-6 in an IL-21- and IL-2-dependent manner. Tr1 cells possess great clinical potential as a means to prevent, block and even cure several T cell-mediated diseases, including GvHD , allograft rejection, autoimmunity and chronic inflammatory diseases. The first successful tests were performed on mouse models [ 19 ] [ 20 ] and on humans as well. [ 20 ] [ 21 ] Transplantation research has shown that donor Tr1 activity in response to recipient alloantigens correlates with the absence of GvHD after bone marrow transplantation, while decreased numbers of Tr1 cells are markedly associated with severe GvHD. [ 21 ] Decreased levels of IL-10-producing CD4+ cells were also observed in the inflamed synovium and peripheral blood of patients with rheumatoid arthritis. [ 7 ] Phase I/II clinical trials of Tr1 cell treatment for Crohn's disease have been successful; the treatment appears to be safe and does not lead to general immune suppression. [ 20 ] [ 21 ]
https://en.wikipedia.org/wiki/Type_1_regulatory_T_cell
Type 2 inflammation is a pattern of immune response . Its physiological function is to defend the body against helminths , but a dysregulation of the type 2 inflammatory response has been implicated in the pathophysiology of several diseases. [ 1 ] [ 2 ] IL-25 , IL-33 , and TSLP are alarmins released from damaged epithelial cells. These cytokines mediate the activation of type 2 T helper cells (T h 2 cells), type 2 innate lymphoid cells (ILC2 cells), and dendritic cells . T h 2 cells and ILC2 cells secrete IL-4 , IL-5 and IL-13 . [ 1 ] [ 3 ] IL-4 further drives CD4+ T cell differentiation towards the T h 2 subtype and induces isotype switching to IgE in B cells. IL-4 and IL-13 stimulate trafficking of eosinophils to the site of inflammation, while IL-5 promotes both eosinophil trafficking and production. [ 2 ] Type 2 inflammation has been implicated in several chronic diseases : Persons with one type 2 inflammatory disease are more likely to have other type 2 inflammatory diseases. [ 9 ] Several medicines have been developed that target mediators of type 2 inflammation: [ 2 ]
https://en.wikipedia.org/wiki/Type_2_inflammation
The Type 94 disinfecting vehicle and Type 94 gas scattering vehicle were variants of the Type 94 tankette adapted to chemical warfare by the Imperial Japanese Army . The Type 94 disinfecting vehicle and Type 94 gas scattering vehicle were configured as either an independent mobile liquid dissemination chemical vehicle or a respective mobile disinfecting anti-chemical agents vehicle to support the Japanese chemical infantry units in combat. [ 1 ] These special vehicles for chemical warfare were developed in 1933–1934. The Type 94 tankette was modified and used as a "tractor"; closed for protection against these agents. It pulled either a configured independent tracked mobile liquid dissemination chemical vehicle or a respective tracked mobile disinfecting anti-chemical agents vehicle. [ 1 ] The gas scattering vehicle version could scatter a mustard gas chemical agent within an 8 m width and the disinfecting vehicle version scattered " bleaching powder to counteract the poison gas" or pathogenic agents. [ 1 ] [ 2 ] In a similar way, the Soviet Red Army developed chemical and biological warfare special protection armored vehicles , including using medium or light tanks with modified turrets with dispersers or gas scatterers, liquid or powder dissemination systems and special armor protection against agents for their respective chemical and biological units in the years prior to and during World War II . [ a ] One example of this was the chemical and flame tank versions of the T-26 . [ 3 ] Other major powers also had their own versions of vehicles designed to deliver chemical and biological weapons on the battlefield, usually using light or infantry tanks as their basis. Also produced for the Imperial Japanese Army were the Type 97 disinfecting vehicle and Type 97 gas scattering vehicle. They were based on the Type 97 Te-Ke tankette chassis, but had Type 94 turrets. They operated in the same way as the Type 94 tankette-based versions. They used the same type of towed liquid dissemination chemical vehicle trailer or tracked mobile disinfecting anti-chemical agents vehicle trailer as the Type 94 versions. [ 4 ]
https://en.wikipedia.org/wiki/Type_94_disinfecting_vehicle_and_Type_94_gas_scattering_vehicle
The Type Es 3750 or simply the Es 3750 is a series of bucket chain excavators built by TAKRAF and used in Germany . According to TAKRAF , the Type Es 3750 is the largest bucket chain excavator in the world. [ 3 ] [ 4 ] Type Es 3750s are notable for always being used in conjunction with the Overburden Conveyor Bridge F60 , another exceptionally large land vehicle. [ 5 ] The Type Es 3750 is an immense digging machine, [ 6 ] the largest of its kind according to TAKRAF . Each Type Es 3750 has a total length of 137 m (449 ft), a height of 40.5 m (133 ft) and a weight of 5,118 t (11,300,000 lb). [ 1 ] The cutting height of the BCE's chain boom is 34 m (112 ft) to 35.5 m (116 ft), whilst its cutting depth is 31 m (102 ft) to 31.2 m (102 ft). [ 3 ] In total, the chain boom is capable of excavating at a maximum capacity of 14,500 m³/h. [ 3 ] The buckets themselves are reinforced with 5 to 10 mm steel plates to prevent deformation and wear-and-tear. [ 6 ] A unique design choice for the Type Es 3750 is the presence of two control cockpits, one projecting outwards on each side of the machine. Given that it predominantly moves side-to-side with the F60, this is to be expected. [ 2 ] Likewise, it carries a small crew of around 2–5. [ 2 ] Another unique feature is that the Type Es 3750 runs on rails. Since it works closely alongside the F60, the Type Es 3750 shares the F60's gauge of 56 in (1,435 mm). [ 5 ] Likewise, the vehicles share their power source with the F60 and the nearby external coal power plant, and therefore move at the same speed as the F60. [ 2 ] The Type Es 3750 was built at almost exactly the same time as the F60s, during 1978 in East Germany . [ 1 ] Each F60 was expected to be accompanied by two Type Es 3750s to assist the machine in transferring overburden and lignite coal. One Type Es 3750 is used to excavate the topside whilst the other excavates the depths. [ 2 ] Excavated materials would be transported on side conveyors towards the F60 for proper redistribution. [ 2 ] As there were originally five F60s, a total of 10 Type Es 3750 BCEs were built, but with the retirement of one of the F60s in Lichterfeld-Schacksdorf , only 8 remain in service. [ 5 ]
https://en.wikipedia.org/wiki/Type_Es_3750_bucket_chain_excavator
The bacterial type IV secretion system , also known as the type IV secretion system or the T4SS , is a secretion protein complex found in gram negative bacteria , gram positive bacteria , and archaea . It is able to transport proteins and DNA across the cell membrane . [ 1 ] The type IV secretion system is just one of many bacterial secretion systems . Type IV secretion systems are related to conjugation machinery which generally involve a single-step secretion system and the use of a pilus . [ 2 ] Type IV secretion systems are used for conjugation, DNA exchange with the extracellular space , and for delivering proteins to target cells . The type IV secretion system is divided into type IVA and type IVB based on genetic ancestry. Notable instances of the type IV secretion system include the plasmid insertion into plants of Agrobacterium tumefaciens , the toxin delivery methods of Bordetella pertussis ( whooping cough ) and Legionella pneumophila ( Legionnaires' disease ), the translocation of effector proteins into host cells by bacteria from the Brucella genus ( Brucellosis ), and the F sex pilus . The type IV secretion system is a protein complex found in prokaryotes used to transport DNA , proteins, or effector molecules from the cytoplasm to the extracellular space beyond the cell. [ 1 ] The type IV secretion system is related to prokaryotic conjugation machinery. [ 2 ] Type IV secretion systems are a highly versatile group, present in Gram positive bacteria , Gram negative bacteria , and archaea . They usually involve a single step which utilizes a pilus, though exceptions exist. [ 3 ] Type IV secretion systems are highly diverse, with a variety of functions and types due to different evolutionary paths. Primarily, type IV secretion systems are grouped based on structural and genetic similarity and are only distantly related to each other. Type IVA systems are similar to the VirB/D4 system of Agrobacterium tumefaciens . Type IVB systems are similar to the Dot/Icm systems found in intracellular pathogens such as Legionella pneumophila . The “other” type systems resemble neither IVA or IVB. [ 3 ] Types are genetically distinct and use separate sets of proteins, however, proteins between the sets have strong homologies to each other, which leads them to function similarly. [ 1 ] Type IV secretion systems are also classified by function into three main types. Conjugative systems: used for DNA transfer via cell to cell contact (a process called conjugation ); DNA release and uptake systems: used to exchange DNA with the extracellular environment (a process called transformation ); and effector systems: used to transfer proteins to target cells. [ 4 ] Conjugative as well as DNA release and uptake systems play an important role in horizontal gene transfer , which allows prokaryotes to adapt to their environment, such as, developing antibiotic resistance . [ 5 ] Effector systems allow for the interaction between microbes and larger organisms. The effector systems are used as a toxin delivery method by many human pathogens such as, Helicobacter pylori (stomach ulcers), whooping cough , and Legionnaires' disease . [ 1 ] Currently, only the structure of type IVA secretion systems, which occur in gram-negative bacteria, is well described. It is composed of 12 protein subunits, VirB1 - VirB11 and VirD4, analogies of which exist in all type IVA systems. [ 1 ] The Type IV secretion system’s components can be separated into 3 groups: the translocation channel scaffold, the ATPases, and the pilus. 
The translocation channel scaffold is the portion of the machinery that creates the channel between extracellular space and the cytoplasm through the inner and outer membranes , and contains VirB6 - VirB10. The core complex of the scaffold is composed of 14 copies of VirB7, VirB9, and VirB10 which form a cylindrical channel that spans both membranes and connects the cytoplasm to the extracellular space. [ 6 ] A single protein, VirB10 is integral in both the inner and outer membranes. It inserts into the outer membrane using an α-helical barrel structure which helps form a channel between the two membranes. [ 7 ] There is an opening on the cytoplasmic end of the channel which is followed by a large chamber and a second opening. The second opening requires a conformational change to allow substrate passage from the cytoplasm into the channel. [ 1 ] Either VirB6 or VirB8 is believed to form the inner membrane pore, as they are integral proteins on the inner membrane and have direct contact with the substrate . [ 8 ] The ATPases consist of VirB4, VirB11, and VirD4, which drive the substrate motion through the channel and provide the system with energy. VirB11 belongs to a class of transmembrane transporters called “traffic ATPases”. VirB4 is not well characterized. [ 9 ] [ 1 ] The pilus is composed of VirB2 and VirB5, with VirB2 being the major component. [ 1 ] In A. tumefaciens , the pilus is 8-12 nm in diameter, and less than one μm in length. F pili , another commonly examined type of pilus, are much longer with a length of 2-20 μm. [ 2 ] Due to the wide variety of type IV secretion systems in both origin and function, it is difficult to state much mechanistically about the group as a whole. In general, after DNA is packaged in a conjugative system it is recruited by ATPase analogues to the VirD4 coupling protein, then translocated through the pilus. [ 3 ] In A. tumefaciens specifically, the DNA passes through a characterized chain of enzymes before reaching the pilus. The DNA is recruited by VirD4, then VirB11, then to the intermembrane proteins (VirB6, and VirB8), moved to VirB9, and finally sent to the pilus (VirB2). [ 10 ] [ 1 ]
https://en.wikipedia.org/wiki/Type_IV_secretion_system
The Type IX secretion system is a specialized protein secretion system found in the Fibrobacteres-Chlorobi-Bacteroidetes superphylum . It plays a crucial role in various cellular processes, including gliding motility [ 1 ] and the secretion of virulence factors in Porphyromonas gingivalis . [ 2 ] To date, at least nineteen components of the T9SS have been identified, though their precise architecture and mechanistic functions remain incompletely understood. Secretion systems come in several different varieties. These are intricate complexes of proteins that are incorporated within the membranes of many different species of bacteria. These proteins are used by the bacteria to expel and transport intracellular enzymes, proteins, and molecules across the cytoplasmic membrane into a host cell or into the surrounding extracellular space. The type of secretion system used is dependent upon the function required and the type of cell that is utilizing it. A gram-negative, pathogenic diderm might employ a secretion system's membrane-bound proteins to inject toxins into the host cell, while a Type IX Secretion System (T9SS) may only be used to secrete proteins into the extracellular space. [ 4 ] Various components of this system had been discovered as early as 2005, [ 5 ] namely PorT and its ability to transport gingipains in the then-novel organisms Flavobacterium johnsoniae and Porphyromonas gingivalis . These components eventually led to the differentiation of the T9SS, setting it apart from the other secretion systems. By bringing together further research done on these two organisms, specific proteins were identified as being used in similar patterns for similar functions across Bacteroidetes. The GldK, GldL, GldM, and GldN proteins were observed in F. johnsoniae to be necessary for the cells to have motility and the ability to use chitin. Protease transport was only enabled in P. gingivalis specimens if the PorT, PorL, PorM, PorN, PorK, and SprA/Sov proteins were present and functional within the cell. A later discovery that PorT was also necessary for the membrane facilitation of chitinase in F. johnsoniae led to the subsequent observation that the aforementioned list of proteins made up an entirely unique secretion system. [ 3 ] Formerly known as Porphyromonas secretion systems (PorSS), due to their discovery in Porphyromonas gingivalis , Type IX secretion systems were officially recognized and renamed in 2010 as the ninth secretion system by research groups headed by M.J. McBride and K. Nakayama. [ 6 ] These research groups found that Type IX secretion systems are exclusive to the phylum Bacteroidetes and that they are present within a majority of species within that phylum. [ 6 ] Further research carried out by S.S. Abby [ 7 ] found that about 62% of members of the phylum Bacteroidetes contain the T9SS. [ 7 ] The only phylum of bacteria to house a T9SS, Bacteroidetes are largely found throughout the gastrointestinal tracts of mammals . While their presence is stronger within fecal material, as much as 20% of all bacteria present within the oral cavities of mammals can belong to the phylum Bacteroidetes. [ 8 ] Though mammals house a strong presence of these T9SS bacteria, Bacteroidetes can also be found within echinoderm , arthropod , and avian species. [ 8 ] This illustrates that the presence of the T9SS is relatively widespread.
The Type IX Secretion System likely evolved from ancient protein transport systems adapted to gliding motility and environmental interactions in Bacteroidetes . Genomic studies suggest that components of T9SS may have evolved in parallel with those of the Type VI Secretion System (T6SS), sharing structural and energy-transducing similarities. Unlike injectisome-type systems, T9SS developed primarily for secretion into the extracellular environment rather than into host cells, supporting its unique ecological roles. [ 4 ] The Type IX bacterial secretion system contains 18 genes that are needed for proper function. [ 9 ] There are many genes in the P. gingivalis genome that code for specific parts of the secretion system that are found in various areas, while the genes PorK-PorL-PorM-PorN-PorP are transcribed together. Other subunits include GldO, GldJ, β-barrel , and plug proteins. The PorLM/GldLM motor uses the proton motive force (PMF) across the inner membrane to power movement. GldL and GldM form a proton channel. As protons flow through, this generates torque that moves proteins like SprA through the outer membrane. This process supports secretion of enzymes like chitinases and proteases and helps build biofilms, especially in bacteria like Flavobacterium johnsoniae and Porphyromonas gingivalis . Advances in cryo-electron microscopy have resolved the ring-like architecture of PorK and PorN complexes, revealing a periplasmic channel that aligns with the outer membrane translocon . These structures highlight the modularity and coordination of energy use and substrate specificity across the system. [ 11 ] Rotation of the T9SS can be used to enable motility for the cell in the form of gliding motility. [ 11 ] It can also be used to secrete a variety of proteins into the extracellular environment. [ 12 ] These secreted proteins include: virulence factors , adhesins, protective surface proteins, cargo proteins, and enzymes such as hydrolytic enzymes , cellulases , chitinases , and proteases , each of which vary in utility for the cell. [ 9 ] Secreted virulence factors are used as a coating for the cell and the cargo vesicles that it releases. This coating allows these packaged vesicles to enter into a host cell and impair immune response in the host. Virulence factors on vesicles contribute to immune evasion but may also trigger inflammatory responses in host tissues. [ 13 ] Adhesins act to fasten the cell to other cells and to ensure that it can dock and lock onto other surfaces that would be more beneficial for the cell's survival. These secreted adhesins help to establish biofilms around the cells which contribute to resisting external distress and an increase in cellular resilience to the environment. [ 14 ] The enzymes that can be secreted are used for the breakdown of extracellular molecules for the acquisition of nutrients from the environment, or for protection by cleaving complement plasma proteins or peptides found in the environment. [ 9 ] T9SS also helps non-pathogenic bacteria survive in nature. In marine species, it supports the breakdown of seaweed and contributes to nutrient recycling and carbon cycling. T9SS is not just important for infections. It plays a key role in the environment, especially in marine bacteria that use it to break down complex carbohydrates like chitin and cellulose . This helps recycle nutrients in aquatic ecosystems. 
[ 13 ] Researchers are also studying the T9SS for industrial uses, including enzyme production, wastewater treatment, and converting plant material into energy. Its unique secretion mechanism may be useful for future biotechnological applications. The T9SS is used by bacteria to release a diverse array of proteins, including virulence factors that add to bacterial pathogenicity. Some of the main secreted virulence factors, referred to as gingipains, are Kgp, RgpA, and RgpB. Gingipains are responsible for around 85% of the proteolytic (protein-degrading) activity, greatly contributing to inflammatory conditions and the destruction of periodontal tissue. [ 9 ] The T9SS can also be a major contributor to motility in various bacterial species. [ 2 ] The proteins that make up this externally assembled rotary system can be recognized, much like other pathogen-associated molecular patterns (PAMPs), by a host's innate immune system, triggering complement cascades. The complement plasma proteins then encounter the enzymatic T9SS cargo proteins and become susceptible to degradation. [ 9 ] Porphyromonas peptidyl arginine deiminase (PPAD) is an enzyme, additionally discovered to be secreted exclusively via the T9SS in P. gingivalis, that alters protein structure by converting arginine residues within proteins into neutrally charged citrulline. Secretion of PPAD can contribute to various deregulatory and inflammatory diseases. Periodontitis and rheumatoid arthritis (RA) are among the more common diseases to which PPAD can contribute; others include psoriasis, multiple sclerosis (MS), Alzheimer's disease, and even some forms of cancer. [ 9 ] Therapeutic treatments for bacteria that rely on T9SS-mediated release include the administration of appropriate antibiotics, which can also target the proteolytic enzymes that T9SSs secrete. Cranberry and rice extracts have also shown a degree of success in inhibiting the activity of gingipains and in preventing pathogenic biofilm formation and growth. [ 16 ]
https://en.wikipedia.org/wiki/Type_IX_secretion_system
A Type VIII secretion system is a type of secretion system found within the inner and outer membranes of gram-negative bacteria. This system is also referred to as the curli biogenesis pathway or the extracellular nucleation-precipitation pathway. It is associated with the formation of biofilms and with infecting hosts. [ 1 ] Curli formation is especially efficient at evading the host's immune system because the subunits assemble quickly in a single process, without intermediates. This system is associated with curli-specific genes and utilizes multiple proteins to form curli fibers. These proteins include CsgA, CsgB, CsgC, CsgD, CsgE, CsgF, and CsgG. [ 1 ] The Type VIII secretion system facilitates the assembly and translocation of curli fibers. Curli fibers are made through the curli biogenesis system, also known as the type VIII secretion system, and are essentially long, linear structures made from proteins that are secreted to the outside of the cell into its surrounding environment. They are made mostly by gram-negative bacteria and, upon secretion, they form compact clusters around the outside of the cell. [ 2 ] The main function of the curli fibers involves their interactions with biofilms. In pathogenic bacteria, curli can contribute to virulence by aiding cell invasion and activating the innate immune response. Knowing how curli fibers are made, and how the type VIII secretion system works, can help in developing inhibitors that stop or reduce the production of these curli fibers and thereby reduce the virulence of the bacteria that produce them. [ 2 ] Understanding these mechanisms can also play a big role in creating treatments for infections that are associated with biofilms. [ 1 ] Curli biogenesis is an adaptable process that uses a direct route and can transform from an intrinsically disordered complex system to a simple amyloid state. [ 2 ] The proteins in this system are encoded by two separate operons. One operon codes for CsgA, CsgB, and CsgC, whereas the other codes for CsgD, CsgE, CsgF, and CsgG. [ 3 ] The two major subunits involved in this process are CsgA and CsgB, with CsgA being the most important to the system. [ 2 ] CsgA and CsgB are responsible for the system's control and extension of fibers. CsgA can transition from a disordered to an ordered amyloid state, while CsgB functions as a nucleator to help promote the polymerization of CsgA. CsgC is then introduced as a chaperone and works to keep CsgA from reaching the amyloid state prematurely. The process by which CsgC prevents this is still not well understood, but a positively charged beta-strand is the most commonly theorized mechanism. [ 2 ] CsgG is part of a secretion channel that facilitates the translocation of CsgA to the periplasm. CsgE functions as a specificity binder that helps guide CsgA to the CsgG secretion channel so that CsgA will be in the correct conformation for polymerization. Throughout this process, CsgF interacts with CsgA and CsgB to help enhance the assembly of CsgA and coordinate the nucleating activity of CsgB. CsgD functions as a transcriptional regulator that influences the expression of CsgA and CsgB in response to environmental factors. The resulting structure is made up of alternating CsgA and CsgB subunits with a CsgF unit at the base, and the entire structure lies on the outside of the bacterial cell. The secretion of the assembled units requires energy. Energy within a bacterial cell is typically supplied by ATP or GTP, the proton motive force, or other membrane potentials.
However, with type VIII secretion systems, it is unlikely that energy is derived from one of these typical sources, given the system's location in the outer membrane of gram-negative bacteria. [ 2 ] The CsgG protein complex is the channel that allows the assembled CsgA, CsgB, and CsgF subunits to move through the membrane to the outside of the cell, where they remain in close proximity to the CsgG protein. It is thought that the energy released by the subunits folding and unfolding, as well as the potential associated with the movement of the subunits across the membrane, provides the necessary energy for secretion. [ 2 ] While the type VIII secretion pathway is the most desirable, some bacterial species may use the functional amyloid pathway, or Fap, to form a biofilm so that they can attach to surfaces. [ 2 ]
https://en.wikipedia.org/wiki/Type_VIII_secretion_system
In biological taxonomy , the type genus is the genus which defines a biological family and the root of the family name. According to the International Code of Zoological Nomenclature , "The name-bearing type of a nominal family-group taxon is a nominal genus called the 'type genus'; the family-group name is based upon that of the type genus." [ 1 ] Any family-group name must have a type genus (and any genus-group name must have a type species , but any species-group name may, but need not, have one or more type specimens). The type genus for a family-group name is also the genus that provided the stem to which was added the ending -idae (for families). In botanical nomenclature , the phrase "type genus" is used, unofficially, as a term of convenience. In the ICN this phrase has no status. The code uses type specimens for ranks up to family, and types are optional for higher ranks. [ 2 ] The Code does not refer to the genus containing that type as a "type genus". The 2008 Revision of the Bacteriological Code states, "The nomenclatural type […] of a taxon above genus, up to and including order, is the legitimate name of the included genus on whose name the name of the relevant taxon is based. One taxon of each category must include the type genus. The names of the taxa which include the type genus must be formed by the addition of the appropriate suffix to the stem of the name of the type genus[…]." [ 3 ] In 2019, it was proposed that all ranks above genus should use the genus category as the nomenclatural type. [ 4 ] This proposal was subsequently adopted for the rank of phylum. [ 5 ]
https://en.wikipedia.org/wiki/Type_genus
In mathematical logic, System U and System U − are pure type systems, i.e. special forms of a typed lambda calculus with an arbitrary number of sorts, axioms and rules (or dependencies between the sorts). System U was proved inconsistent by Jean-Yves Girard in 1972 [ 1 ] (and the question of consistency of System U − was formulated). This result led to the realization that Martin-Löf's original 1971 type theory was inconsistent, as it allowed the same "Type in Type" behaviour that Girard's paradox exploits. System U is defined [ 2 ] : 352 as a pure type system with three sorts, two axioms, and five rules (a standard presentation is sketched below); System U − is defined in the same way, with the exception of the ( △ , ∗ ) {\displaystyle (\triangle ,\ast )} rule. The sorts ∗ {\displaystyle \ast } and ◻ {\displaystyle \square } are conventionally called "Type" and "Kind", respectively; the sort △ {\displaystyle \triangle } doesn't have a specific name. The two axioms describe the containment of types in kinds ( ∗ : ◻ {\displaystyle \ast :\square } ) and kinds in △ {\displaystyle \triangle } ( ◻ : △ {\displaystyle \square :\triangle } ). Intuitively, the sorts describe a hierarchy in the nature of the terms. The rules govern the dependencies between the sorts: ( ∗ , ∗ ) {\displaystyle (\ast ,\ast )} says that values may depend on values (functions), ( ◻ , ∗ ) {\displaystyle (\square ,\ast )} allows values to depend on types (polymorphism), ( ◻ , ◻ ) {\displaystyle (\square ,\square )} allows types to depend on types (type operators), and so on. The definitions of System U and U − allow the assignment of polymorphic kinds to generic constructors in analogy to polymorphic types of terms in classical polymorphic lambda calculi, such as System F. An example of such a generic constructor, in which k denotes a kind variable, is given in the cited presentation. [ 2 ] : 353 This mechanism is sufficient to construct a term with the type ( ∀ p : ∗ , p ) {\displaystyle (\forall p:\ast ,p)} (equivalent to the type ⊥ {\displaystyle \bot } ), which implies that every type is inhabited. By the Curry–Howard correspondence, this is equivalent to all logical propositions being provable, which makes the system inconsistent. Girard's paradox is the type-theoretic analogue of Russell's paradox in set theory.
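For reference, the three sorts, two axioms, and five rules just described can be collected into the usual pure-type-system specification format. The following LaTeX sketch is a reconstruction based on the prose above and on the standard presentation of System U; it is included for orientation rather than as the authoritative definition, which is in the cited literature.

```latex
% Pure type system specification of System U, reconstructed from the
% description in the text. System U^- is obtained by omitting the rule
% (\triangle, \ast).
\[
\begin{aligned}
  \text{sorts:}  \quad & \mathcal{S} = \{\ast,\ \square,\ \triangle\} \\
  \text{axioms:} \quad & \mathcal{A} = \{\ast : \square,\ \ \square : \triangle\} \\
  \text{rules:}  \quad & \mathcal{R} = \{(\ast,\ast),\ (\square,\ast),\ (\square,\square),\
                          (\triangle,\ast),\ (\triangle,\square)\}
\end{aligned}
\]
```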
https://en.wikipedia.org/wiki/Type_in_type
In printing, type metal refers to the metal alloys used in traditional typefounding and hot metal typesetting. Historically, type metal was an alloy of lead, tin and antimony in different proportions depending on the application, be it individual character mechanical casting for hand setting, mechanical line casting or individual character mechanical typesetting and stereo plate casting. The proportions used are in the range: lead 50‒86%, antimony 11‒30% and tin 3‒20%. Antimony and tin are added to lead for durability while reducing the difference between the coefficients of expansion of the matrix and the alloy. Apart from durability, the general requirements for type metal are that it should produce a true and sharp cast, and retain correct dimensions and form after cooling down. It should also be easy to cast at a reasonably low melting temperature, iron should not dissolve in the molten metal, and moulds and nozzles should stay clean and easy to maintain. Today, Monotype machines can utilize a wide range of different alloys. Mechanical linecasting equipment uses alloys that are close to eutectic. Although the knowledge of casting soft metals in moulds was well established before Johannes Gutenberg's time, his discovery of an alloy that was hard, durable, and would take a clear impression from the mould represents a fundamental aspect of his solution to the problem of printing with movable type. This alloy did not shrink as much as lead alone when cooled. Gutenberg's other contributions were the creation of inks that would adhere to metal type and a method of softening handmade printing paper so that it would take the impression well. Cheap, plentifully available as galena and easily workable, lead has many of the ideal characteristics, but on its own it lacks the necessary hardness and does not make castings with sharp details, because molten lead shrinks and sags when it cools to a solid. After much experimentation it was found that adding pewterer's tin, obtained from cassiterite, improved the ability of the cast type to withstand the wear and tear of the printing process, making it tougher but not more brittle. Despite patient experimentation with different proportions of both metals, the second part of the type metal problem proved very difficult to solve without the addition of yet a third metal, antimony. Alchemists had shown that when stibnite, an antimony sulfide ore, was heated with scrap iron, metallic antimony was produced. The typefounder would typically introduce powdered stibnite and horseshoe nails into his crucible to melt lead, tin and antimony into type metal. Both the iron and the sulfides would be rejected in the process. The addition of antimony conferred the much needed improvements in hardness, wear resistance and, especially, sharpness of reproduction of the type design, given that it has the curious property of diminishing the shrinkage of the alloy upon solidification. The basic characteristics of the three constituent metals are as follows. Lead (Pb) is the base of the alloy. Pure lead is a relatively cheap metal, is soft and thus easy to work, and is easy to cast since it melts at 327 °C (621 °F).
However, it shrinks when it solidifies, making letters that are not sharp enough for printing. In addition, pure lead letters will quickly deform during use, a direct result of the easy workability of lead. Lead is exceptionally soft, malleable, and ductile but has little tensile strength. Lead oxide is a poison that primarily damages brain function. Metallic lead is more stable and less toxic than its oxidized form. Metallic lead cannot be absorbed through contact with skin, so it may be handled, carefully, with far less risk than lead oxide. Tin (Sn) promotes the fluidity of the molten alloy and makes the type tough, giving the alloy resistance to wear. It is harder, stiffer and tougher than lead. Antimony (Sb) is a metalloid element, which melts at 630 °C (1,166 °F). Antimony has a crystalline appearance while being both brittle and fusible. [ 1 ] When alloyed with lead to produce type metal, antimony gives it the hardness it needs to resist deformation during printing, and gives it sharper castings from the mould to produce clear, easily read printed text on the page. The actual compositions differed over time, and different machines were adjusted to different alloys depending on the intended uses of the type. Printers sometimes had their own preferences about the quality of particular alloys. The Lanston Monotype Corporation in the United Kingdom had a whole range of alloys listed in their manuals. Most mechanical typesetting is divided basically into two different competing technologies: line casting (Linotype and Intertype) and single character casting (Monotype). The manuals for the Monotype composition caster (1952 and later editions) mention at least five different alloys to be used for casting, depending on the purpose of the type and the work to be done with it. Although in general Monotype cast type characters can be visually identified as having a square nick (as opposed to the round nicks used on foundry type), there is no easy way to identify the alloy aside from an expensive chemical assay in a laboratory. Apart from this, the two Monotype companies in the United States and the UK also made moulds with 'round' nicks. Typefounders and printers could and did order specially designed moulds to their own specifications: height, size, kind of nick, even the number of nicks could be changed. Type produced with these special moulds can only be identified if the foundry or printer is known. In Switzerland, the company Metallum Pratteln AG in Basel had yet another list of type-metal alloys. If needed, any alloy according to customer specifications could be produced. Regeneration-metal [ clarification needed ] was melted into the crucible to replace tin and antimony lost through the dross. [ citation needed ] Every time type metal is remelted, tin and antimony oxidise. These oxides form on the surface of the crucible and must be removed. After the molten metal is stirred, a grey powder, the dross, forms on the surface and needs to be skimmed off. Dross contains recoverable amounts of tin and antimony. Dross must be processed at specialized companies in order to extract the pure metals under conditions that prevent environmental pollution and remain economically feasible. A pure metal melts and solidifies in a simple manner at a single specific temperature. This is not the case with alloys, which pass through a range of temperatures in which different events occur. The melting temperature of such mixtures is considerably lower than that of the pure components.
The addition of a small amount of antimony (5% to 6%) to lead will significantly alter the alloy's behavior compared to pure lead: although the melting point of pure antimony is 630 °C, this mixture will be completely molten and a homogeneous fluid even at temperatures as low as 371 °C. As this mixture cools, the alloy remains liquid even below 327 °C, the melting point of pure lead. Once the temperature reaches 291 °C, lead crystals start to form, increasing the cohesion of the liquid alloy. At 252 °C, the mixture starts to solidify fully, during which time the temperature remains constant. Only when the mixture has fully solidified does the temperature start to decrease again. Using a 10% antimony, 90% lead mixture delays lead crystal formation until approximately 260 °C. Using a 12% antimony, 88% lead mixture prevents crystal formation entirely, becoming a eutectic. This alloy has a clear melting point, at 252 °C. Increasing the antimony content beyond 12% will lead to predominantly antimony crystallizing out. Adding tin to this binary system complicates the behaviour even further. Some tin enters into the eutectic. A mixture of 4% tin, 12% antimony, and 84% lead solidifies at 240 °C. Depending on which metals are in excess relative to the eutectic, crystals form and deplete the liquid until the eutectic 4/12 mixture is reached once more. The 12/20 alloy contains many mixed crystals of tin and antimony; these crystals provide the hardness of the alloy and its resistance to wear. Raising the antimony content cannot be done without adding some tin as well, because otherwise the fluidity of the mixture will diminish dramatically when the temperature drops somewhere in the channels of the machine. Nozzles can be blocked by antimony crystals. Eutectic alloys are used on Linotype machines and Ludlow casters to prevent blockage of the mould and to ensure continuous trouble-free casting. Alloys used on Monotype machines tend to contain a higher content of tin, to obtain tougher characters. All characters should be able to resist the pressure during printing. This meant an extra investment, but Monotype was an expensive system all the way. The fierce competition between the different mechanical typecasting systems, such as Linotype and Monotype, has given rise to some lasting myths about type metal. Linotype users looked down on Monotype and vice versa. Monotype machines, however, can utilize a wide range of different alloys; maintaining constant, high production meant strict standardization of the type metal within a company, so as to minimize any interruption of production. Repeated assays were done at regular intervals to monitor the alloy used, since every time the metal is recycled, roughly half a per cent of tin content is lost through oxidation. These oxides are removed with the dross while cleaning the surface of the molten metal. Nowadays this "battle" has lost its importance, at least for Monotype; the quality of the produced type is far more important. Alloys with a high content of antimony, and consequently a high content of tin, can be cast at a higher temperature, at a lower speed, and with more cooling on a Monotype composition caster or supercaster. Although care was taken to avoid mixing different types of type metal in shops with different type casting systems, in actual practice this often occurred.
Since a Monotype composition caster can cope with a variety of different metal alloys, occasional mixing of Linotype alloy with discarded typefounders alloy has proven its usefulness. Copper has been used for hardening type metal; this metal easily forms mixed crystals with tin when the alloy cools down. These crystals will grow just below the exit opening of the nozzle in Monotype machines, resulting in a total blockage after some time. These nozzles are very difficult to clean, because the hard crystals will resist drilling. Brass spaces contain zinc , which is extremely counterproductive in type metal. Even a tiny amount — less than 1% — will form a dusty surface on the molten metal surface that is difficult to remove. Characters cast from contaminated type metal such as this are of inferior quality, the solution being to discard and replace with fresh alloy. Brass and zinc should therefore be removed before remelting. The same applies to aluminium , although this metal will float on top of the melt, and will be easily discovered and removed, before it is dissolved into the lead. Magnesium plates are very dangerous in molten lead, because this metal can easily burn and will ignite in the molten lead. Iron is hardly dissolved into type metal, although the molten metal is always in contact with the cast iron surface of the melting pot. Joseph Moxon , in his Mechanick Exercises , mentions a mix of equal amounts of "antimony" and iron nails . [ 3 ] Paragraph 2. Of making Mettal. The Metal Founders make Printing Letters of, is Lead hardned with Iron : Thus they chuse stub-Nails for the best Iron to Melt, as well because they are asured stub-Nails are made of good soft and tough Iron , as because (they being in small pieces of Iron ) will Melt the sooner. To make the Iron Run , they mingle an equal weight of Antimony (beaten in an Iron-Morter into small pieces) and stub-Nails together. And preparing so many Earthen forty or fifty pounds Melting-pots (made for that purpose to endure the Fire ) as they intend to use: They Charge these Pots with the mingeld Iron and Antimony as full as they will hold. Every time they melt Mettal , they built a new Furnace to melt it in: This Furnace is called an Open Furnace ; because the air blows in through all its sides to fan the Fire . They make it of bricks in an open place, as well because the air may have free access to all its sides, as that the vapours of the Antimony (which are obnoxious) may the less offend those that officiate at the Making the Mettal : And also because the violent fire made in the Furnace should not endanger the Firing any adjacent Houses. The "antimony" here was in fact stibnite , antimony-sulfide (Sb 2 S 3 ). The iron was burned away in this process, reducing the antimony and at the same time removing the unwanted sulfur . In this way ferro-sulfide was formed, that would evaporate with all the fumes. The mixture of stibnite and nails was heated red hot in an open-air furnace , until all is molten and finished. The resulting metal can contain up to 9% of iron. Further purification can be done by mixing the hot melt with kitchen-salt, NaCl. After this red hot lead from another melting pot is added and stirred thoroughly. [ 4 ] Some tin was added to the alloy for casting small characters and narrow spaces, to better fill narrow areas of the mould. The good properties of tin were well known. The use of tin was sometime minimized to save expenses. 
Much of this toxic work was done by child labour. [ 5 ] Hitherto a Man (nay, a Boy) might officiate all this work. As a supposed antidote to the inhaled toxic metal fumes, the workers were given a mixture of sack (a wine) and salad oil: [ 6 ] Now (according to Custom) is Half a Pint of Sack mingled with Sallad Oyl, provided for each Workman to drink; intended for an Antidote against the Poysonous Fumes of the Antimony, and to restore the Spirits that so Violent a Fire and Hard Labour may have exhausted.
https://en.wikipedia.org/wiki/Type_metal
In zoological nomenclature , a type species ( species typica ) is the species name with which the name of a genus or subgenus is considered to be permanently taxonomically associated, i.e., the species that contains the biological type specimen (or specimens). [ 1 ] A similar concept is used for suprageneric groups and called a type genus . [ 2 ] In botanical nomenclature , these terms have no formal standing under the code of nomenclature , but are sometimes borrowed from zoological nomenclature. In botany, the type of a genus name is a specimen (or, rarely, an illustration) which is also the type of a species name. The species name with that type can also be referred to as the type of the genus name. Names of genus and family ranks, the various subdivisions of those ranks, and some higher-rank names based on genus names, have such types. [ 3 ] In bacteriology , a type species is assigned for each genus. [ 4 ] Whether or not currently recognized as valid , every named genus or subgenus in zoology is theoretically associated with a type species. In practice, however, there is a backlog of untypified names defined in older publications when it was not required to specify a type. A type species is both a concept and a practical system that is used in the classification and nomenclature (naming) of animals. The "type species" represents the reference species and thus "definition" for a particular genus name. Whenever a taxon containing multiple species must be divided into more than one genus, the type species automatically assigns the name of the original taxon to one of the resulting new taxa, the one that includes the type species. The term "type species" is regulated in zoological nomenclature by article 42.3 of the International Code of Zoological Nomenclature , which defines a type species as the name-bearing type of the name of a genus or subgenus (a " genus-group name "). In the Glossary, type species is defined as The nominal species that is the name-bearing type of a nominal genus or subgenus. [ 5 ] The type species permanently attaches a formal name (the generic name) to a genus by providing just one species within that genus to which the genus name is permanently linked (i.e. the genus must include that species if it is to bear the name). The species name in turn is fixed, in theory, to a type specimen. For example, the type species for the land snail genus Monacha is Helix cartusiana , the name under which the species was first described, known as Monacha cartusiana when placed in the genus Monacha . That genus is currently placed within the family Hygromiidae . The type genus for that family is the genus Hygromia . The concept of the type species in zoology was introduced by Pierre André Latreille . [ 6 ] The International Code of Zoological Nomenclature states that the original name (binomen) of the type species should always be cited. It gives an example in Article 67.1. Astacus marinus Fabricius, 1775 was later designated as the type species of the genus Homarus , thus giving it the name Homarus marinus (Fabricius, 1775) . However, the type species of Homarus should always be cited using its original name, i.e. Astacus marinus Fabricius, 1775 , even though that is a junior synonym of Cancer grammarius Linnaeus, 1758 . 
[ 1 ] Although the International Code of Nomenclature for algae, fungi, and plants does not contain the same explicit statement, examples make it clear that the original name is used, so that the "type species" of a genus name need not have a name within that genus. Thus in Article 10, Ex. 3, the type of the genus name Elodes is quoted as the type of the species name Hypericum aegypticum , not as the type of the species name Elodes aegyptica . [ 3 ] ( Elodes is not now considered distinct from Hypericum .)
https://en.wikipedia.org/wiki/Type_species
In mathematics and theoretical computer science , a type theory is the formal presentation of a specific type system . [ a ] Type theory is the academic study of type systems. Some type theories serve as alternatives to set theory as a foundation of mathematics . Two influential type theories that have been proposed as foundations are: Most computerized proof-writing systems use a type theory for their foundation . A common one is Thierry Coquand 's Calculus of Inductive Constructions . Type theory was created to avoid paradoxes in naive set theory and formal logic [ b ] , such as Russell's paradox which demonstrates that, without proper axioms, it is possible to define the set of all sets that are not members of themselves; this set both contains itself and does not contain itself. Between 1902 and 1908, Bertrand Russell proposed various solutions to this problem. By 1908, Russell arrived at a ramified theory of types together with an axiom of reducibility , both of which appeared in Whitehead and Russell 's Principia Mathematica published in 1910, 1912, and 1913. This system avoided contradictions suggested in Russell's paradox by creating a hierarchy of types and then assigning each concrete mathematical entity to a specific type. Entities of a given type were built exclusively of subtypes of that type, [ c ] thus preventing an entity from being defined using itself. This resolution of Russell's paradox is similar to approaches taken in other formal systems, such as Zermelo-Fraenkel set theory . [ 4 ] Type theory is particularly popular in conjunction with Alonzo Church 's lambda calculus . One notable early example of type theory is Church's simply typed lambda calculus . Church's theory of types [ 5 ] helped the formal system avoid the Kleene–Rosser paradox that afflicted the original untyped lambda calculus. Church demonstrated [ d ] that it could serve as a foundation of mathematics and it was referred to as a higher-order logic . In the modern literature, "type theory" refers to a typed system based around lambda calculus. One influential system is Per Martin-Löf 's intuitionistic type theory , which was proposed as a foundation for constructive mathematics . Another is Thierry Coquand 's calculus of constructions , which is used as the foundation by Rocq (previously known as Coq ), Lean , and other computer proof assistants . Type theory is an active area of research, one direction being the development of homotopy type theory . The first computer proof assistant, called Automath , used type theory to encode mathematics on a computer. Martin-Löf specifically developed intuitionistic type theory to encode all mathematics to serve as a new foundation for mathematics. There is ongoing research into mathematical foundations using homotopy type theory . Mathematicians working in category theory already had difficulty working with the widely accepted foundation of Zermelo–Fraenkel set theory . This led to proposals such as Lawvere's Elementary Theory of the Category of Sets (ETCS). [ 7 ] Homotopy type theory continues in this line using type theory. Researchers are exploring connections between dependent types (especially the identity type) and algebraic topology (specifically homotopy ). Much of the current research into type theory is driven by proof checkers , interactive proof assistants , and automated theorem provers . 
Most of these systems use a type theory as the mathematical foundation for encoding proofs, which is not surprising, given the close connection between type theory and programming languages: Many type theories are supported by LEGO and Isabelle . Isabelle also supports foundations besides type theories, such as ZFC . Mizar is an example of a proof system that only supports set theory. Any static program analysis , such as the type checking algorithms in the semantic analysis phase of compiler , has a connection to type theory. A prime example is Agda , a programming language which uses UTT (Luo's Unified Theory of dependent Types) for its type system. The programming language ML was developed for manipulating type theories (see LCF ) and its own type system was heavily influenced by them. Type theory is also widely used in formal theories of semantics of natural languages , [ 8 ] [ 9 ] especially Montague grammar [ 10 ] and its descendants. In particular, categorial grammars and pregroup grammars extensively use type constructors to define the types ( noun , verb , etc.) of words. The most common construction takes the basic types e {\displaystyle e} and t {\displaystyle t} for individuals and truth-values , respectively, and defines the set of types recursively as follows: A complex type ⟨ a , b ⟩ {\displaystyle \langle a,b\rangle } is the type of functions from entities of type a {\displaystyle a} to entities of type b {\displaystyle b} . Thus one has types like ⟨ e , t ⟩ {\displaystyle \langle e,t\rangle } which are interpreted as elements of the set of functions from entities to truth-values, i.e. indicator functions of sets of entities. An expression of type ⟨ ⟨ e , t ⟩ , t ⟩ {\displaystyle \langle \langle e,t\rangle ,t\rangle } is a function from sets of entities to truth-values, i.e. a (indicator function of a) set of sets. This latter type is standardly taken to be the type of natural language quantifiers , like everybody or nobody ( Montague 1973, Barwise and Cooper 1981). [ 11 ] Type theory with records is a formal semantics representation framework, using records to express type theory types . It has been used in natural language processing , principally computational semantics and dialogue systems . [ 12 ] [ 13 ] Gregory Bateson introduced a theory of logical types into the social sciences; his notions of double bind and logical levels are based on Russell's theory of types. A type theory is a mathematical logic , which is to say it is a collection of rules of inference that result in judgments . Most logics have judgments asserting "The proposition φ {\displaystyle \varphi } is true", or "The formula φ {\displaystyle \varphi } is a well-formed formula ". [ 14 ] A type theory has judgments that define types and assign them to a collection of formal objects, known as terms. A term and its type are often written together as t e r m : t y p e {\displaystyle \mathrm {term} :{\mathsf {type}}} . A term in logic is recursively defined as a constant symbol , variable , or a function application , where a term is applied to another term. Constant symbols could include the natural number 0 {\displaystyle 0} , the Boolean value t r u e {\displaystyle \mathrm {true} } , and functions such as the successor function S {\displaystyle \mathrm {S} } and conditional operator i f {\displaystyle \mathrm {if} } . 
Thus some terms could be 0 {\displaystyle 0} , ( S 0 ) {\displaystyle (\mathrm {S} \,0)} , ( S ( S 0 ) ) {\displaystyle (\mathrm {S} \,(\mathrm {S} \,0))} , and ( i f t r u e 0 ( S 0 ) ) {\displaystyle (\mathrm {if} \,\mathrm {true} \,0\,(\mathrm {S} \,0))} . Most type theories have four judgments: that something is a type, that a term has a given type, that two types are equal, and that two terms of the same type are judgmentally equal. Judgments may follow from assumptions. For example, one might say "assuming x {\displaystyle x} is a term of type b o o l {\displaystyle {\mathsf {bool}}} and y {\displaystyle y} is a term of type n a t {\displaystyle {\mathsf {nat}}} , it follows that ( i f x y y ) {\displaystyle (\mathrm {if} \,x\,y\,y)} is a term of type n a t {\displaystyle {\mathsf {nat}}} ". Such judgments are formally written with the turnstile symbol ⊢ {\displaystyle \vdash } . x : b o o l , y : n a t ⊢ ( if x y y ) : n a t {\displaystyle x:{\mathsf {bool}},y:{\mathsf {nat}}\vdash ({\textrm {if}}\,x\,y\,y):{\mathsf {nat}}} If there are no assumptions, there will be nothing to the left of the turnstile. ⊢ S : n a t → n a t {\displaystyle \vdash \mathrm {S} :{\mathsf {nat}}\to {\mathsf {nat}}} The list of assumptions on the left is the context of the judgment. Capital Greek letters, such as Γ {\displaystyle \Gamma } and Δ {\displaystyle \Delta } , are common choices to represent some or all of the assumptions. The four different judgments are thus usually written as Γ ⊢ T type (T is a type), Γ ⊢ t : T (t is a term of type T), Γ ⊢ T 1 = T 2 (the types T 1 and T 2 are equal), and Γ ⊢ t 1 = t 2 : T (the terms t 1 and t 2 of type T are equal). Some textbooks use a triple equal sign ≡ {\displaystyle \equiv } to stress that this is judgmental equality and thus an extrinsic notion of equality. [ 15 ] The judgments enforce that every term has a type. The type will restrict which rules can be applied to a term. A type theory's inference rules say what judgments can be made, based on the existence of other judgments. Rules are expressed as a Gentzen -style deduction using a horizontal line, with the required input judgments above the line and the resulting judgment below the line. [ 16 ] For example, the following inference rule states a substitution rule for judgmental equality. Γ ⊢ t : T 1 Δ ⊢ T 1 = T 2 Γ , Δ ⊢ t : T 2 {\displaystyle {\begin{array}{c}\Gamma \vdash t:T_{1}\qquad \Delta \vdash T_{1}=T_{2}\\\hline \Gamma ,\Delta \vdash t:T_{2}\end{array}}} The rules are syntactic and work by rewriting . The metavariables Γ {\displaystyle \Gamma } , Δ {\displaystyle \Delta } , t {\displaystyle t} , T 1 {\displaystyle T_{1}} , and T 2 {\displaystyle T_{2}} may actually consist of complex terms and types that contain many function applications, not just single symbols. To generate a particular judgment in type theory, there must be a rule to generate it, as well as rules to generate all of that rule's required inputs, and so on. The applied rules form a proof tree , where the top-most rules need no assumptions. One example of a rule that does not require any inputs is one that states the type of a constant term. For example, to assert that there is a term 0 {\displaystyle 0} of type n a t {\displaystyle {\mathsf {nat}}} , one would write the following. ⊢ 0 : n a t {\displaystyle {\begin{array}{c}\hline \vdash 0:nat\\\end{array}}} Generally, the desired conclusion of a proof in type theory is one of type inhabitation . [ 17 ] The decision problem of type inhabitation (abbreviated by ∃ t . Γ ⊢ t : τ ? {\displaystyle \exists t.\Gamma \vdash t:\tau ?} ) is: given a context Γ and a type τ, does there exist a term t such that Γ ⊢ t : τ holds? Girard's paradox shows that type inhabitation is strongly related to the consistency of a type system with Curry–Howard correspondence. To be sound, such a system must have uninhabited types.
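As a small worked example of the proof trees just described, the following derivation (a sketch only; it assumes constant rules giving the types of S and 0, and uses the function application rule discussed later in this article) concludes that S 0 is a term of type nat.

```latex
% Sketch: two axiom-like constant rules combined by function application.
\[
\dfrac{\ \dfrac{}{\ \vdash \mathrm{S} : \mathsf{nat} \to \mathsf{nat}\ }\qquad
        \dfrac{}{\ \vdash 0 : \mathsf{nat}\ }\ }
      {\ \vdash (\mathrm{S}\,0) : \mathsf{nat}\ }
\]
```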
A type theory usually has several rules, including ones for forming a context, adding assumptions to a context, introducing types and terms, and reasoning about judgmental equality. Also, for each type defined "by rule", there are four different kinds of rules: type formation, term introduction, term elimination, and computation rules. For examples of rules, an interested reader may follow Appendix A.2 of the Homotopy Type Theory book, [ 15 ] or read Martin-Löf's Intuitionistic Type Theory. [ 18 ] The logical framework of a type theory bears a resemblance to intuitionistic , or constructive, logic. Formally, type theory is often cited as an implementation of the Brouwer–Heyting–Kolmogorov interpretation of intuitionistic logic. [ 18 ] Additionally, connections can be made to category theory and computer programs . When used as a foundation, certain types are interpreted to be propositions (statements that can be proven), and terms inhabiting the type are interpreted to be proofs of that proposition. When some types are interpreted as propositions, there is a set of common types that can be used to connect them to make a Boolean algebra out of types. However, the logic is not classical logic but intuitionistic logic , which is to say it has neither the law of excluded middle nor double negation elimination. Under this intuitionistic interpretation, there are common types that act as the logical operators: the function type plays the role of implication, the product type of conjunction, the sum type of disjunction, the empty type ⊥ of False, and the unit type ⊤ of True, with the negation of A expressed as A → ⊥. Because the law of excluded middle does not hold, there is no term of type Π A . A + ( A → ⊥ ) {\displaystyle \Pi A.A+(A\to \bot )} . Likewise, double negation does not hold, so there is no term of type Π A . ( ( A → ⊥ ) → ⊥ ) → A {\displaystyle \Pi A.((A\to \bot )\to \bot )\to A} . It is possible to include the law of excluded middle and double negation into a type theory, by rule or assumption. However, terms may not compute down to canonical terms, and this will interfere with the ability to determine whether two terms are judgementally equal to each other. [ citation needed ] Per Martin-Löf proposed his intuitionistic type theory as a foundation for constructive mathematics . [ 14 ] Constructive mathematics requires that, when proving "there exists an x {\displaystyle x} with property P ( x ) {\displaystyle P(x)} ", one must construct a particular x {\displaystyle x} and a proof that it has property P {\displaystyle P} . In type theory, existence is accomplished using the dependent sum type, and its proof requires a term of that type. An example of a non-constructive proof is proof by contradiction . The first step is assuming that x {\displaystyle x} does not exist and refuting it by contradiction. The conclusion from that step is "it is not the case that x {\displaystyle x} does not exist". The last step is, by double negation, concluding that x {\displaystyle x} exists. Constructive mathematics does not allow the last step of removing the double negation to conclude that x {\displaystyle x} exists. [ 19 ] Most of the type theories proposed as foundations are constructive, and this includes most of the ones used by proof assistants. [ citation needed ] It is possible to add non-constructive features to a type theory, by rule or assumption. These include operators on continuations such as call with current continuation . However, these operators tend to break desirable properties such as canonicity and parametricity . The Curry–Howard correspondence is the observed similarity between logics and programming languages. The implication in logic, "A → {\displaystyle \to } B", resembles a function from type "A" to type "B". For a variety of logics, the rules are similar to expressions in a programming language's types.
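The intuitionistic reading of types as propositions can be illustrated in an ordinary programming language with an empty type. The Haskell sketch below is only an illustration of the idea; the alias Not and the function name doubleNegIntro are introduced here for the example, with Data.Void supplying the empty type.

```haskell
import Data.Void (Void)

-- Negation of a proposition a is a function from a into the empty type.
type Not a = a -> Void

-- Double-negation introduction: from a proof of a we can refute "not a".
-- This is a constructively valid principle, so a total term exists.
doubleNegIntro :: a -> Not (Not a)
doubleNegIntro a notA = notA a

-- By contrast, no total terms exist for double-negation elimination
-- (Not (Not a) -> a) or for the excluded middle (Either a (Not a))
-- without extra assumptions, mirroring the text's intuitionistic logic.
```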
The similarity goes further, as applications of the rules resemble programs in the programming languages. Thus, the correspondence is often summarized as "proofs as programs". The opposition of terms and types can also be viewed as one of implementation and specification . By program synthesis , (the computational counterpart of) type inhabitation can be used to construct (all or parts of) programs from the specification given in the form of type information. [ 20 ] Many programs that work with type theory (e.g., interactive theorem provers) also do type inferencing. This lets them select the rules that the user intends, with fewer actions by the user. Although the initial motivation for category theory was far removed from foundationalism, the two fields turned out to have deep connections. As John Lane Bell writes: "In fact categories can themselves be viewed as type theories of a certain kind; this fact alone indicates that type theory is much more closely related to category theory than it is to set theory." In brief, a category can be viewed as a type theory by regarding its objects as types (or sorts [ 21 ] ), i.e. "Roughly speaking, a category may be thought of as a type theory shorn of its syntax." A number of significant results follow in this way. [ 22 ] The interplay, known as categorical logic , has been a subject of active research since then; see the monograph of Jacobs (1999) for instance. Homotopy type theory attempts to combine type theory and category theory. It focuses on equalities, especially equalities between types. Homotopy type theory differs from intuitionistic type theory mostly by its handling of the equality type. In 2016, cubical type theory was proposed, which is a homotopy type theory with normalization. [ 23 ] [ 24 ] The most basic types are called atoms, and a term whose type is an atom is known as an atomic term. Common atomic terms included in type theories are natural numbers , often notated with the type n a t {\displaystyle {\mathsf {nat}}} , Boolean logic values ( t r u e {\displaystyle \mathrm {true} } / f a l s e {\displaystyle \mathrm {false} } ), notated with the type b o o l {\displaystyle {\mathsf {bool}}} , and formal variables , whose type may vary. [ 17 ] For example, 0 : nat , true : bool , and a variable x : nat may all be atomic terms. In addition to atomic terms, most modern type theories also allow for functions . Function types introduce an arrow symbol, and are defined inductively : If σ {\displaystyle \sigma } and τ {\displaystyle \tau } are types, then the notation σ → τ {\displaystyle \sigma \to \tau } is the type of a function which takes a parameter of type σ {\displaystyle \sigma } and returns a term of type τ {\displaystyle \tau } . Types of this form are known as simple types . [ 17 ] Some terms may be declared directly as having a simple type, such as the following term, a d d {\displaystyle \mathrm {add} } , which takes in two natural numbers in sequence and returns one natural number. a d d : n a t → ( n a t → n a t ) {\displaystyle \mathrm {add} :{\mathsf {nat}}\to ({\mathsf {nat}}\to {\mathsf {nat}})} Strictly speaking, a simple type only allows for one input and one output, so a more faithful reading of the above type is that a d d {\displaystyle \mathrm {add} } is a function which takes in a natural number and returns a function of the form n a t → n a t {\displaystyle {\mathsf {nat}}\to {\mathsf {nat}}} .
The parentheses clarify that a d d {\displaystyle \mathrm {add} } does not have the type ( n a t → n a t ) → n a t {\displaystyle ({\mathsf {nat}}\to {\mathsf {nat}})\to {\mathsf {nat}}} , which would be a function which takes in a function of natural numbers and returns a natural number. The convention is that the arrow is right associative , so the parentheses may be dropped from a d d {\displaystyle \mathrm {add} } 's type. [ 17 ] New function terms may be constructed using lambda expressions , and are called lambda terms. These terms are also defined inductively: a lambda term has the form ( λ v . t ) {\displaystyle (\lambda v.t)} , where v {\displaystyle v} is a formal variable and t {\displaystyle t} is a term, and its type is notated σ → τ {\displaystyle \sigma \to \tau } , where σ {\displaystyle \sigma } is the type of v {\displaystyle v} , and τ {\displaystyle \tau } is the type of t {\displaystyle t} . [ 17 ] The following lambda term represents a function which doubles an input natural number. ( λ x . a d d x x ) : n a t → n a t {\displaystyle (\lambda x.\mathrm {add} \,x\,x):{\mathsf {nat}}\to {\mathsf {nat}}} The variable is x {\displaystyle x} and (implicit from the lambda term's type) must have type n a t {\displaystyle {\mathsf {nat}}} . The term a d d x x {\displaystyle \mathrm {add} \,x\,x} has type n a t {\displaystyle {\mathsf {nat}}} , which is seen by applying the function application inference rule twice. Thus, the lambda term has type n a t → n a t {\displaystyle {\mathsf {nat}}\to {\mathsf {nat}}} , which means it is a function taking a natural number as an argument and returning a natural number. A lambda term is often referred to [ e ] as an anonymous function because it lacks a name. The concept of anonymous functions appears in many programming languages. The power of type theories is in specifying how terms may be combined by way of inference rules . [ 5 ] Type theories which have functions also have the inference rule of function application : if t {\displaystyle t} is a term of type σ → τ {\displaystyle \sigma \to \tau } , and s {\displaystyle s} is a term of type σ {\displaystyle \sigma } , then the application of t {\displaystyle t} to s {\displaystyle s} , often written ( t s ) {\displaystyle (t\,s)} , has type τ {\displaystyle \tau } . For example, if one knows the type notations 0 : nat {\displaystyle 0:{\textsf {nat}}} , 1 : nat {\displaystyle 1:{\textsf {nat}}} , and 2 : nat {\displaystyle 2:{\textsf {nat}}} , then type notations such as ( a d d 1 ) : nat → nat , ( ( a d d 2 ) 0 ) : nat , and ( ( a d d 1 ) ( ( a d d 2 ) 0 ) ) : nat can be deduced from function application. [ 17 ] Parentheses indicate the order of operations ; however, by convention, function application is left associative , so parentheses can be dropped where appropriate. [ 17 ] In the case of the three examples above, all parentheses could be omitted from the first two, and the third may be simplified to a d d 1 ( a d d 2 0 ) : nat {\displaystyle \mathrm {add} \,1\,(\mathrm {add} \,2\,0):{\textsf {nat}}} . Type theories that allow for lambda terms also include inference rules known as β {\displaystyle \beta } -reduction and η {\displaystyle \eta } -reduction. They generalize the notion of function application to lambda terms. Symbolically, they are written ( λ v . t ) s → t [ v := s ] for β-reduction and ( λ v . t v ) → t for η-reduction, the latter provided that v does not occur free in t . The first reduction describes how to evaluate a lambda term: if a lambda expression ( λ v . t ) {\displaystyle (\lambda v.t)} is applied to a term s {\displaystyle s} , one replaces every occurrence of v {\displaystyle v} in t {\displaystyle t} with s {\displaystyle s} .
The second reduction makes explicit the relationship between lambda expressions and function types: if ( λ v . t v ) {\displaystyle (\lambda v.t\,v)} is a lambda term, then it must be that t {\displaystyle t} is a function term because it is being applied to v {\displaystyle v} . Therefore, the lambda expression is equivalent to just t {\displaystyle t} , as both take in one argument and apply t {\displaystyle t} to it. [ 5 ] For example, the following term may be β {\displaystyle \beta } -reduced. ( λ x . a d d x x ) 2 → a d d 2 2 {\displaystyle (\lambda x.\mathrm {add} \,x\,x)\,2\rightarrow \mathrm {add} \,2\,2} In type theories that also establish notions of equality for types and terms, there are corresponding inference rules of β {\displaystyle \beta } -equality and η {\displaystyle \eta } -equality. [ 17 ] The empty type has no terms. The type is usually written ⊥ {\displaystyle \bot } or 0 {\displaystyle \mathbb {0} } . One use for the empty type is proofs of type inhabitation . If for a type a {\displaystyle a} , it is consistent to derive a function of type a → ⊥ {\displaystyle a\to \bot } , then a {\displaystyle a} is uninhabited , which is to say it has no terms. The unit type has exactly 1 canonical term. The type is written ⊤ {\displaystyle \top } or 1 {\displaystyle \mathbb {1} } and the single canonical term is written ∗ {\displaystyle \ast } . The unit type is also used in proofs of type inhabitation. If for a type a {\displaystyle a} , it is consistent to derive a function of type ⊤ → a {\displaystyle \top \to a} , then a {\displaystyle a} is inhabited , which is to say it must have one or more terms. The Boolean type has exactly 2 canonical terms. The type is usually written bool {\displaystyle {\textsf {bool}}} or B {\displaystyle \mathbb {B} } or 2 {\displaystyle \mathbb {2} } . The canonical terms are usually t r u e {\displaystyle \mathrm {true} } and f a l s e {\displaystyle \mathrm {false} } . Natural numbers are usually implemented in the style of Peano Arithmetic . There is a canonical term 0 : n a t {\displaystyle 0:{\mathsf {nat}}} for zero. Canonical values larger than zero use iterated applications of a successor function S : n a t → n a t {\displaystyle \mathrm {S} :{\mathsf {nat}}\to {\mathsf {nat}}} . Some type theories allow for types of complex terms, such as functions or lists, to depend on the types of its arguments; these are called type constructors . For example, a type theory could have the dependent type l i s t a {\displaystyle {\mathsf {list}}\,a} , which should correspond to lists of terms, where each term must have type a {\displaystyle a} . In this case, l i s t {\displaystyle {\mathsf {list}}} has the kind U → U {\displaystyle U\to U} , where U {\displaystyle U} denotes the universe of all types in the theory. The product type, × {\displaystyle \times } , depends on two types, and its terms are commonly written as ordered pairs ( s , t ) {\displaystyle (s,t)} . The pair ( s , t ) {\displaystyle (s,t)} has the product type σ × τ {\displaystyle \sigma \times \tau } , where σ {\displaystyle \sigma } is the type of s {\displaystyle s} and τ {\displaystyle \tau } is the type of t {\displaystyle t} . Each product type is then usually defined with eliminator functions f i r s t : σ × τ → σ {\displaystyle \mathrm {first} :\sigma \times \tau \to \sigma } and s e c o n d : σ × τ → τ {\displaystyle \mathrm {second} :\sigma \times \tau \to \tau } . Besides ordered pairs, this type is used for the concepts of logical conjunction and intersection . 
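Most of the types discussed so far (atomic natural numbers and Booleans, the curried add function, lambda terms, and the empty, unit, and product types with their eliminators) have near-direct counterparts in an ordinary functional language. The Haskell sketch below is illustrative only: the names Empty, Unit, Boolean, Nat, add, double, and Pair are local stand-ins defined for the example rather than anything from the text's formal system, and the standard library already offers equivalents such as Data.Void, (), Bool, and tuples.

```haskell
-- Atomic-style types: natural numbers in the Peano style and Booleans.
data Nat     = Zero | Succ Nat deriving Show
data Boolean = T | F           deriving Show

-- A curried function of simple type nat -> (nat -> nat); since the arrow
-- is right associative, the parentheses in the type may be dropped.
add :: Nat -> (Nat -> Nat)
add Zero     n = n
add (Succ m) n = Succ (add m n)

-- A lambda term corresponding to (\x. add x x) : nat -> nat; applying it
-- to a numeral beta-reduces to add applied to that numeral twice.
double :: Nat -> Nat
double = \x -> add x x

-- The empty type (no terms) and the unit type (one canonical term).
data Empty
data Unit = Star

-- A product type whose terms are ordered pairs, with its two eliminators.
data Pair a b = MkPair a b

first :: Pair a b -> a
first (MkPair x _) = x

second :: Pair a b -> b
second (MkPair _ y) = y
```

For instance, double (Succ (Succ Zero)) evaluates to Succ (Succ (Succ (Succ Zero))), and first (MkPair Zero T) evaluates to Zero.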
The sum type is written as either + {\displaystyle +} or ⊔ {\displaystyle \sqcup } . In programming languages, sum types may be referred to as tagged unions . Each type σ ⊔ τ {\displaystyle \sigma \sqcup \tau } is usually defined with constructors l e f t : σ → ( σ ⊔ τ ) {\displaystyle \mathrm {left} :\sigma \to (\sigma \sqcup \tau )} and r i g h t : τ → ( σ ⊔ τ ) {\displaystyle \mathrm {right} :\tau \to (\sigma \sqcup \tau )} , which are injective , and an eliminator function m a t c h : ( σ → ρ ) → ( τ → ρ ) → ( σ ⊔ τ ) → ρ {\displaystyle \mathrm {match} :(\sigma \to \rho )\to (\tau \to \rho )\to (\sigma \sqcup \tau )\to \rho } such that m a t c h f g ( l e f t x ) = f x and m a t c h f g ( r i g h t y ) = g y . The sum type is used for the concepts of logical disjunction and union . Some theories also allow terms to have their definitions depend on types. For instance, an identity function of any type could be written as λ x . x : ∀ α . α → α {\displaystyle \lambda x.x:\forall \alpha .\alpha \to \alpha } . The function is said to be polymorphic in α {\displaystyle \alpha } , or generic in x {\displaystyle x} . As another example, consider a function a p p e n d {\displaystyle \mathrm {append} } , which takes in a l i s t a {\displaystyle {\mathsf {list}}\,a} and a term of type a {\displaystyle a} , and returns the list with the element at the end. The type annotation of such a function would be a p p e n d : ∀ a . l i s t a → a → l i s t a {\displaystyle \mathrm {append} :\forall \,a.{\mathsf {list}}\,a\to a\to {\mathsf {list}}\,a} , which can be read as "for any type a {\displaystyle a} , pass in a l i s t a {\displaystyle {\mathsf {list}}\,a} and an a {\displaystyle a} , and return a l i s t a {\displaystyle {\mathsf {list}}\,a} ". Here a p p e n d {\displaystyle \mathrm {append} } is polymorphic in a {\displaystyle a} . With polymorphism, the eliminator functions can be defined generically for all product types as f i r s t : ∀ σ τ . σ × τ → σ {\displaystyle \mathrm {first} :\forall \,\sigma \,\tau .\sigma \times \tau \to \sigma } and s e c o n d : ∀ σ τ . σ × τ → τ {\displaystyle \mathrm {second} :\forall \,\sigma \,\tau .\sigma \times \tau \to \tau } . Likewise, the sum type constructors can be defined for all valid types of sum members as l e f t : ∀ σ τ . σ → ( σ ⊔ τ ) {\displaystyle \mathrm {left} :\forall \,\sigma \,\tau .\sigma \to (\sigma \sqcup \tau )} and r i g h t : ∀ σ τ . τ → ( σ ⊔ τ ) {\displaystyle \mathrm {right} :\forall \,\sigma \,\tau .\tau \to (\sigma \sqcup \tau )} , which are injective , and the eliminator function can be given as m a t c h : ∀ σ τ ρ . ( σ → ρ ) → ( τ → ρ ) → ( σ ⊔ τ ) → ρ {\displaystyle \mathrm {match} :\forall \,\sigma \,\tau \,\rho .(\sigma \to \rho )\to (\tau \to \rho )\to (\sigma \sqcup \tau )\to \rho } such that the same computation equations hold. Some theories also permit types to be dependent on terms instead of types. For example, a theory could have the type v e c t o r n {\displaystyle {\mathsf {vector}}\,n} , where n {\displaystyle n} is a term of type n a t {\displaystyle {\mathsf {nat}}} encoding the length of the vector . This allows for greater specificity and type safety : functions with vector length restrictions or length matching requirements, such as the dot product , can encode this requirement as part of the type. [ 26 ] There are foundational issues that can arise from dependent types if a theory is not careful about what dependencies are allowed, such as Girard's Paradox . The logician Henk Barendregt introduced the lambda cube as a framework for studying various restrictions and levels of dependent typing.
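The sum type and the polymorphic append function described above translate naturally into Haskell. The sketch below uses the invented names Sum, InLeft, InRight, match, List, and append purely for illustration; the standard library's Either type, either function, and list type play the same roles.

```haskell
-- A sum type with two injective constructors.
data Sum a b = InLeft a | InRight b

-- The eliminator: given a handler for each side, consume a Sum value.
-- It satisfies match f g (InLeft x) = f x and match f g (InRight y) = g y.
match :: (a -> r) -> (b -> r) -> Sum a b -> r
match f _ (InLeft  x) = f x
match _ g (InRight y) = g y

-- A polymorphic list type and an append function that places one new
-- element at the end of the list, for any element type a.
data List a = Nil | Cons a (List a)

append :: List a -> a -> List a
append Nil         y = Cons y Nil
append (Cons x xs) y = Cons x (append xs y)
```

Here the type variables a, b, and r play the role of the universally quantified σ, τ, and ρ in the text; Haskell simply leaves the outer quantifiers implicit.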
[ 27 ] Two common type dependencies , dependent product and dependent sum types, allow for the theory to encode BHK intuitionistic logic by acting as equivalents to universal and existential quantification ; this is formalized by Curry–Howard Correspondence . [ 26 ] As they also connect to products and sums in set theory , they are often written with the symbols Π {\displaystyle \Pi } and Σ {\displaystyle \Sigma } , respectively. Sum types are seen in dependent pairs , where the second type depends on the value of the first term. This arises naturally in computer science where functions may return different types of outputs based on the input. For example, the Boolean type is usually defined with an eliminator function i f {\displaystyle \mathrm {if} } , which takes three arguments and behaves as follows: i f t r u e x y computes to x , and i f f a l s e x y computes to y . Ordinary definitions of i f {\displaystyle \mathrm {if} } require x {\displaystyle x} and y {\displaystyle y} to have the same type. If the type theory allows for dependent types, then it is possible to define a dependent type x : b o o l ⊢ T F x : U → U → U {\displaystyle x:{\mathsf {bool}}\,\vdash \,\mathrm {TF} \,x:U\to U\to U} such that T F t r u e σ τ = σ and T F f a l s e σ τ = τ (a rough Haskell approximation of this idea is sketched at the end of this article). The type of i f {\displaystyle \mathrm {if} } may then be written as ∀ σ τ . Π x : b o o l . σ → τ → T F x σ τ {\displaystyle \forall \,\sigma \,\tau .\Pi _{x:{\mathsf {bool}}}.\sigma \to \tau \to \mathrm {TF} \,x\,\sigma \,\tau } . Following the notion of Curry-Howard Correspondence, the identity type is a type introduced to mirror propositional equivalence , as opposed to the judgmental (syntactic) equivalence that type theory already provides. An identity type requires two terms of the same type and is written with the symbol = {\displaystyle =} . For example, if x + 1 {\displaystyle x+1} and 1 + x {\displaystyle 1+x} are terms, then x + 1 = 1 + x {\displaystyle x+1=1+x} is a possible type. Canonical terms are created with a reflexivity function, r e f l {\displaystyle \mathrm {refl} } . For a term t {\displaystyle t} , the call r e f l t {\displaystyle \mathrm {refl} \,t} returns the canonical term inhabiting the type t = t {\displaystyle t=t} . The complexities of equality in type theory make it an active research topic; homotopy type theory is a notable area of research that mainly deals with equality in type theory. Inductive types are a general template for creating a large variety of types. In fact, all the types described above and more can be defined using the rules of inductive types. Two methods of generating inductive types are induction-recursion and induction-induction . A method that only uses lambda terms is Scott encoding . Some proof assistants , such as Rocq (previously known as Coq ) and Lean , are based on the calculus of inductive constructions, which is a calculus of constructions with inductive types. The most commonly accepted foundation for mathematics is first-order logic with the language and axioms of Zermelo–Fraenkel set theory with the axiom of choice , abbreviated ZFC. Type theories having sufficient expressibility may also act as a foundation of mathematics. There are a number of differences between these two approaches. Proponents of type theory will also point out its connection to constructive mathematics through the BHK interpretation , its connection to logic by the Curry–Howard isomorphism , and its connections to Category theory . Terms usually belong to a single type. However, there are type theories that define "subtyping". Computation takes place by repeated application of rules.
Many type theories are strongly normalizing , which means that any order of applying the rules will always end in the same result. However, some are not. In a normalizing type theory, the one-directional computation rules are called "reduction rules", and applying the rules "reduces" the term. If a rule is not one-directional, it is called a "conversion rule". Some combinations of types are equivalent to other combinations of types. When functions are considered "exponentiation", the combinations of types can be written similarly to algebraic identities. [ 28 ] Thus, 0 + A ≅ A {\displaystyle {\mathbb {0} }+A\cong A} , 1 × A ≅ A {\displaystyle {\mathbb {1} }\times A\cong A} , 1 + 1 ≅ 2 {\displaystyle {\mathbb {1} }+{\mathbb {1} }\cong {\mathbb {2} }} , A B + C ≅ A B × A C {\displaystyle A^{B+C}\cong A^{B}\times A^{C}} , A B × C ≅ ( A B ) C {\displaystyle A^{B\times C}\cong (A^{B})^{C}} . Most type theories do not have axioms . This is because a type theory is defined by its rules of inference. This is a source of confusion for people familiar with set theory, where a theory is defined by both the rules of inference for a logic (such as first-order logic ) and axioms about sets. Sometimes, a type theory will add a few axioms. An axiom is a judgment that is accepted without a derivation using the rules of inference. They are often added to ensure properties that cannot be added cleanly through the rules. Axioms can cause problems if they introduce terms without a way to compute on those terms. That is, axioms can interfere with the normalizing property of the type theory. [ 29 ] Some commonly encountered axioms are: The Axiom of Choice does not need to be added to type theory, because in most type theories it can be derived from the rules of inference. This is because of the constructive nature of type theory, where proving that a value exists requires a method to compute the value. The Axiom of Choice is less powerful in type theory than most set theories, because type theory's functions must be computable and, being syntax-driven, the number of terms in a type must be countable. (See Axiom of choice § In constructive mathematics .)
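The identities above have direct programming counterparts: a function out of a sum is the same data as a pair of functions, and a two-argument function can be curried. The Python sketch below is only an illustration of the isomorphisms A^(B+C) ≅ A^B × A^C and A^(B×C) ≅ (A^B)^C; the helper names split, join, curry and uncurry are assumptions of this example, and the is_left predicate stands in for the tag that a real sum type would carry.

```python
from typing import Callable, Tuple, TypeVar, Union

A = TypeVar("A"); B = TypeVar("B"); C = TypeVar("C")

# A^(B+C) ≅ A^B × A^C : a function out of a (tagged) union is the same data
# as a pair of functions, one per summand.
def split(f: Callable[[Union[B, C]], A]) -> Tuple[Callable[[B], A], Callable[[C], A]]:
    return (lambda b: f(b)), (lambda c: f(c))

def join(fb: Callable[[B], A], fc: Callable[[C], A],
         is_left: Callable[[Union[B, C]], bool]) -> Callable[[Union[B, C]], A]:
    # `is_left` plays the role of the tag carried by a genuine sum type.
    return lambda x: fb(x) if is_left(x) else fc(x)

# A^(B×C) ≅ (A^B)^C : currying and uncurrying.
def curry(f: Callable[[B, C], A]) -> Callable[[C], Callable[[B], A]]:
    return lambda c: lambda b: f(b, c)

def uncurry(g: Callable[[C], Callable[[B], A]]) -> Callable[[B, C], A]:
    return lambda b, c: g(c)(b)

area = lambda w, h: w * h
print(curry(area)(4)(3), uncurry(curry(area))(3, 4))  # 12 12
```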
https://en.wikipedia.org/wiki/Type_theory
In computer science , a typed assembly language ( TAL ) is an assembly language that is extended to include a method of annotating the datatype of each value that is manipulated by the code. These annotations can then be used by a program (type checker) that processes the assembly language code in order to analyse how it will behave when it is executed. Specifically, such a type checker can be used to prove the type safety of code that meets the criteria of some appropriate type system . Typed assembly languages usually include a high-level memory management system based on garbage collection . A typed assembly language with a suitably expressive type system can be used to enable the safe execution of untrusted code without using an intermediate representation like bytecode , allowing features similar to those currently provided by virtual machine environments like Java and .NET . This computer science article is a stub . You can help Wikipedia by expanding it . This programming-language -related article is a stub . You can help Wikipedia by expanding it .
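As an informal illustration of the idea, a checker can walk a list of annotated instructions, track a typing for the registers, and reject programs that would, for example, add a pointer to an integer. The instruction names and type vocabulary in the Python sketch below are invented for this example and are not the syntax of any actual typed assembly language.

```python
# Toy illustration of typed-assembly checking; the instruction set and the
# type names ("int", "ptr") are hypothetical, chosen only for this sketch.
def check(program):
    reg_types = {}                      # register name -> "int" or "ptr"
    for op, *args in program:
        if op == "movi":                # movi r, n : load an integer constant
            dst, _ = args
            reg_types[dst] = "int"
        elif op == "lea":               # lea r, label : load an address
            dst, _ = args
            reg_types[dst] = "ptr"
        elif op == "add":               # add r1, r2 : both operands must be ints
            a, b = args
            if reg_types.get(a) != "int" or reg_types.get(b) != "int":
                raise TypeError(f"add applied to non-int registers {a}, {b}")
        elif op == "load":              # load r1, r2 : r2 must hold a pointer
            dst, addr = args
            if reg_types.get(addr) != "ptr":
                raise TypeError(f"load through non-pointer register {addr}")
            reg_types[dst] = "int"
        else:
            raise ValueError(f"unknown instruction {op}")
    return reg_types

ok = [("movi", "r1", 5), ("movi", "r2", 7), ("add", "r1", "r2")]
bad = [("lea", "r1", "buf"), ("movi", "r2", 1), ("add", "r1", "r2")]
print(check(ok))            # {'r1': 'int', 'r2': 'int'}
try:
    check(bad)
except TypeError as err:
    print("rejected:", err)  # the ill-typed program is refused before it runs
```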
https://en.wikipedia.org/wiki/Typed_assembly_language
Typeeto is a software application that allows users to use a Bluetooth -compatible Macintosh keyboard with a range of different devices, including iOS and Android smartphones and tablets, Apple TV, game consoles, Windows PCs, iPad, iPhone, iPod Touch, and MacBooks. The tool allows the keyboard to connect to multiple devices simultaneously, and users can switch between them using a designated hotkey. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] Typeeto received recognition for its versatility and functionality when it was featured on Product Hunt, where it drew generally positive responses from the community. This software article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Typeeto
Since Dimitri Mendeleev formulated the periodic law in 1871, and published an associated periodic table of chemical elements , authors have experimented with varying types of periodic tables including for teaching, aesthetic or philosophical purposes. Earlier, in 1869, Mendeleev had mentioned different layouts including short, medium, and even cubic forms. It appeared to him that the latter (three-dimensional) form would be the most natural approach but that "attempts at such a construction have not led to any real results". [ 2 ] [ n 1 ] On spiral periodic tables, "Mendeleev...steadfastly refused to depict the system as [such]...His objection was that he could not express this function mathematically." [ 4 ] In 1934, George Quam, a chemistry professor at Long Island University, New York, and Mary Quam, a librarian at the New York Public Library compiled and published a bibliography of 133 periodic tables using a five-fold typology: I. short; II. long (including triangular); III. spiral; IV. helical, and V. miscellaneous. In 1952, Moeller expressed disdain as to the many types of periodic table: The literature is replete with suggested (and discarded) modifications of the M periodic table. In fact so many modifications have appeared that one is tempted to conclude that practically every author has his [sic] own concept of what a workable arrangement must be. Unfortunately, the majority of the tabulations proposed are either unwieldy or utterly worthless, and only a few valuable suggestions have been made. Geometry does not permit of an arrangement which is sufficiently ideal to serve all the required purposes equally well. Thus the many three-dimensional models, embracing globes, helices, cones, prisms, castles, etc., are interesting but lacking in utility. To a lesser extent, the more involved two-dimensional arrangements do little toward solving the difficulty, and essentially the only suggestions as to modifications which are truly constructive are those centering in reflection of electronic configurations. Certainly the most useful of these modifications, and at the same time one of the earliest to be proposed, is the so-called long or [18-column]...table. [ 5 ] In 1954, Tomkeieff referred to the three principal types of periodic table as helical, rectilinear, and spiral. He added that, "unfortunately there also a number of freaks". [ 6 ] In 1974 Edward Mazurs , a professor of chemistry, published a survey and analysis of about seven hundred periodic tables that had been published in the preceding one hundred years; he recognized short, medium, long, helical, spiral, series tables, and tables not classified. In 1999 Mark Leach, a chemist, inaugurated the INTERNET database of Periodic Tables. It has over 1200 entries as of May 2023. [ n 2 ] While the database is a chronological compilation, specific types of periodic tables that can be searched for are spiral and helical; 3-dimensional; and miscellaneous. For convenience, periodic tables may be typified as either: 1. short; 2. triangular; 3. medium; 4. long; 5. continuous (circular, spiral, lemniscate, or helical); 6. folding; or 7. spatial. Tables that defy easy classification are counted as type 8. unclassified. Short tables have around eight columns. This form became popular following the publication of Mendeleev's eight-column periodic table in 1871. Also shown in this section is a modernized version of the same table. 
Mendeleev and others who discovered chemical periodicity in the 1860s had noticed that when the elements were arranged in order of their atomic weights there was an approximate repetition of physicochemical properties after every eight elements. Consequently, Mendeleev organized the elements known at that time into a table with eight columns. He used the table to predict the properties of then-unknown elements. While his hit rate was less than 50%, it was his successes that propelled the widespread acceptance of the idea of a periodic table of the chemical elements. [ 8 ] The eight-column style remains popular to this day, most notably in Russia, Mendeleev's country of birth. An earlier attempt by John Newlands , an English chemist, to present the nub of the same idea to the London Chemical Society , in 1866, was unsuccessful; [ 9 ] members were less than receptive to theoretical ideas, as was the British tendency at the time. [ 10 ] He referred to his idea as the Law of Octaves , at one point drawing an analogy with an eight-key musical scale. John Gladstone , a fellow chemist, objected on the basis that Newlands's table presumed no elements remained to be discovered. "The last few years had brought forth thallium, indium, caesium, and rubidium, and now the finding of one more would throw out the whole system." [ 9 ] He believed there was as close an analogy between the metals named in the last vertical column as in any of the elements standing on the same horizontal line. Fellow English chemist Carey Foster humorously inquired of Newlands whether he had ever examined the elements according to the order of their initial letters. Foster believed that any arrangement would present occasional coincidences, but he condemned one which placed so far apart manganese and chromium, or iron from nickel and cobalt. The advantages of the short form of periodic table are its compact size and that it shows the relationships between main group elements and transition metal groups. Its disadvantages are that it appears to group dissimilar elements, such as chlorine and manganese, together; the separation of metals and nonmetals is hard to discern; there are "inconsistencies in the grouping together of elements giving colorless, diamagnetic ions with elements giving colored, paramagnetic ions; and [a] lack of reasonable positions for hydrogen, the lanthanide elements, and the actinide elements." [ 11 ] Some other notable short periodic tables include: Triangular tables have column widths of 2-8-18-32 or thereabouts. An early example, appearing in 1882, was provided by Bayley. [ 27 ] Through the use of connecting lines, such tables make it easier to indicate analogous properties among the elements. In some ways they represent a form intermediate between the short and medium tables, since the average width of the fully mature version (with widths of 2+8+18+32 = 60) is 15 columns. An early drawback of this form was that it encouraged predictions for missing elements based on considerations of symmetry. For example, Bayley considered the rare earth metals to be indirect analogues of other elements such as, for example, zirconium and niobium, a presumption which turned out to be largely unfounded. [ 28 ] Advantages of this form are its aesthetic appeal and relatively compact size; disadvantages are its width, the fact that it is harder to draw, and that interpreting certain periodic trends or relationships may be more challenging than in the traditional rectangular format.
Some other notable triangular periodic tables include: Medium tables have around 18 columns. The popularity of this form is thought to be a result of its having a good balance of features in terms of ease of construction and size, and its depiction of atomic order and periodic trends. [ 43 ] Deming's version of a medium table, which appeared in the first edition of his 1923 textbook "General Chemistry: An Elementary Survey Emphasizing Industrial Applications of Fundamental Principles", has been credited with popularizing the 18-column form. [ 44 ] [ n 6 ] LeRoy [ 45 ] referred to Deming's table, "this...being better known as the 'eighteen columns'-form" as representing "a very marked improvement over the original Mendeleef type as far as presentation to beginning classes is concerned." Merck and Company prepared a handout form of Deming's table, in 1928, which was widely circulated in American schools. By the 1930s his table was appearing in handbooks and encyclopedias of chemistry. It was also distributed for many years by the Sargent-Welch Scientific Company. [ 46 ] [ 47 ] [ 48 ] The advantages of the medium form are that it correlates the positions of the elements with their electronic structures, accommodates the vertical, horizontal and diagonal trends that characterise the elements, and separates the metals and nonmetals; its disadvantage is that it obscures the relationships between main group elements and transition metals. Some other notable medium tables include: Long tables have around 32 columns. Early examples are given by Bassett (1892), [ 58 ] with 37 columns, albeit arranged vertically rather than horizontally; Gooch & Walker (1905), [ 59 ] with 25 columns; and by Werner (1905), [ 60 ] with 33 columns. In the first image in this section, of a so-called left step table, the elements remain positioned in order of atomic number ( Z ). The left step table was developed by Charles Janet , in 1928, originally for aesthetic purposes. That being said, it shows a reasonable correspondence with the Madelung energy ordering rule, this being a notional sequence in which the electron shells of the neutral atoms in their ground states are filled. A more conventional long form of periodic table is included for comparison. The advantage of the long form is that it shows where the lanthanides and actinides fit into the periodic table; its disadvantage is its width. Some other notable long tables include: Continuous tables encompass circular, spiral , lemniscate , or helical forms. Crookes's lemniscate periodic table, shown in this section, has the following elements falling under one another: The collocation of manganese with iron, nickel and cobalt is later seen in the modernised version of von Bichowsky's table of 1918, in the unclassified section of this article. The French geologist Alexandre-Émile Béguyer de Chancourtois was the first person to make use of atomic weights to produce a classification of periodicity. He drew the elements as a continuous spiral around a metal cylinder divided into 16 parts. [ 73 ] The atomic weight of oxygen was taken as 16 and was used as the standard against which all the other elements were compared. Tellurium was situated at the centre, prompting the name vis tellurique , or telluric screw . The advantage of this form is that it emphasizes, to a greater or lesser degree, that the elements form a continuous sequence; that said, continuous tables are harder to construct, read and memorize than the traditional rectangular form of periodic table.
Some other notable forms of continuous periodic tables include: Such tables, which incorporate a folding mechanism, are relatively uncommon: The advantages of such tables are their novelty and that they can depict relationships that ordinarily require spatial periodic tables, yet retain the portability and convenience of two-dimensional tables. A disadvantage is that they require marginally more effort to construct. Spatial tables pass through three or more dimensions (helical tables are instead classed as continuous tables). Such tables are relatively niche and not as commonly used as traditional tables. While they offer unique advantages, their complexity and customization requirements make them more suitable for specialized research, advanced education, or specific areas of study where a deeper understanding of multidimensional relationships is desired. Advantages of periodic tables of three or more dimensions include: Disadvantages are: Some other notable spatial periodic tables include: Unclassified periodic tables defy easy classification:
https://en.wikipedia.org/wiki/Types_of_periodic_tables
Typewise is a Swiss deep tech company that builds text prediction AI. [ 1 ] In January 2022, the company filed a patent for its technology, which it claims outperforms those of Google and Apple. [ 2 ] The company's first product was a virtual keyboard for Android and iOS devices. The keyboard features a self-developed hexagonal layout, a predictive typing engine that suggests the next word depending on context, and multilingual language support. It includes a dark color theme as well as other designs. The keyboard supports more than 40 languages. By December 2021, the Typewise keyboard had been installed over 1.4 million times [ 3 ] and in January 2022, the keyboard won a Consumer Electronics Show (CES) Innovation Award for the second year running. [ 4 ] [ 5 ] The company is now developing an AI writing assistant aimed at business users. [ 6 ] Typewise was founded in May 2019 by David Eberle and Janis Berneker. [ 7 ] Its head office is in Zürich , Switzerland . In 2015, Eberle and Berneker initiated a Kickstarter crowdfunding campaign in which they raised approximately US$17,000. With those funds, the founders launched an app prototype in 2016 under the name “WRIO Keyboard” on the App Store and Play Store. In 2019, Typewise launched the app to the public. In July 2020, Typewise sought and received funding from angel investors as well as a Swiss research grant, totaling US$1.04 million, allowing for the continued development of the keyboard's artificial intelligence. At that point, the app had amassed roughly 250,000 downloads and had approximately 65,000 active users. [ 8 ] By November 2020, Typewise expanded its total financing to US$1.52 million, including the research grant. [ 9 ] In October 2021, Typewise raised another $2m via a crowdfunding campaign on the platform Seedrs . [ 10 ] Typewise has three products: a smartphone keyboard app, an AI writing assistant, and an API. Typewise writing assistant is a browser-based predictive text tool designed to increase the speed and quality of written communication, specifically for customer support and sales teams. The company claims it can increase productivity by two to three times. [ 11 ] Typewise keyboard is a mobile application for iOS and Android smartphones that provides features for typing on a smartphone. The app offers two keyboard layouts, the traditional QWERTY keyboard and the self-invented hexagonal layout (“honeycomb layout”), which was developed especially for typing with two thumbs. [ 12 ] [ 13 ] The keyboard employs swipe gestures that replace keys like ⇧ Shift and ← Backspace to edit text. Deleting text is done with a swipe to the left. Deleted text can be restored with a swipe back to the right. Letters can be capitalized or lowercased by swiping up or down on their respective keys, and diacritics can be added to letters by pressing and holding the corresponding key. [ 14 ] Typewise keyboard supports over 40 languages and allows typing in multiple languages simultaneously [ 15 ] by means of algorithmic language recognition. Typewise’s core technology draws on text prediction, which consists of auto-corrections and word-completion. The company collaborates with ETH Zurich 's Data Analytics Lab, supported by Innosuisse (Switzerland's Innovation Agency), to further develop the technology. Typewise has also released an API that enables developers to use Typewise's AI on third party platforms.
[ 16 ] [ 17 ] Typewise uses a hexagonal keyboard layout that is designed to introduce fewer typos into text typed with the keyboard than a QWERTY keyboard on a mobile device. [ 18 ] [ 19 ] While the arrangement of the letters on the keyboard is influenced by the QWERTY layout, the hexagonal shape allows for larger keys than a rectangular layout. [ 20 ] Besides the shape and size of the keys, the Typewise layout features a number of differences to QWERTY and other rectangular layouts. [ 21 ] [ 22 ] Instead of a singular Space bar at the bottom of the keyboard, there are two smaller space bars in the middle. A lot of keys are replaced by swipe gestures. To power text prediction for the keyboard, Typewise developed an artificial intelligence with the Swiss science institute ETH Zurich . [ 23 ] Typewise's artificial intelligence is designed to run entirely on the user's device in light of privacy concerns related to transmitting potentially sensitive user typing data over the internet. [ 24 ] [ 25 ] The keyboard’s text prediction technology does not send any typing data to a cloud as it runs offline. [ 26 ]
https://en.wikipedia.org/wiki/Typewise
The type–token distinction is the difference between a type of objects (analogous to a class ) and the individual tokens of that type (analogous to instances ). Since each type may be instantiated by multiple tokens, there are generally more tokens than types of an object. For example, the sentence "A Rose is a rose is a rose" contains three word types: three word tokens of the type a , two word tokens of the type is, and three word tokens of the type rose . The distinction is important in disciplines such as logic , linguistics , metalogic , typography , and computer programming . The type–token distinction separates types (abstract descriptive concepts) from tokens (objects that instantiate concepts). For example, in the sentence " the bicycle is becoming more popular " the word bicycle represents the abstract concept of bicycles and this abstract concept is a type, whereas in the sentence " the bicycle is in the garage ", it represents a particular object and this particular object is a token. Similarly, the word type 'letter' uses only four letter types: L , E , T and R . Nevertheless, it uses both E and T twice. One can say that the word type 'letter' has six letter tokens, with two tokens each of the letter types E and T . Whenever a word type is inscribed, the number of letter tokens created equals the number of letter occurrences in the word type. Some logicians consider a word type to be the class of its tokens. Other logicians counter that the word type has a permanence and constancy not found in the class of its tokens. The type remains the same while the class of its tokens is continually gaining new members and losing old members. [ citation needed ] In typography , the type–token distinction is used to determine the presence of a text printed by movable type : [ 1 ] The defining criteria which a typographic print has to fulfill is that of the type identity of the various letter forms which make up the printed text. In other words: each letter form which appears in the text has to be shown as a particular instance ("token") of one and the same type which contains a reverse image of the printed letter . The distinctions between using words as types or tokens were first made by American logician and philosopher Charles Sanders Peirce in 1906 using terminology that he established. [ 2 ] Peirce's type–token distinction applies to words, sentences, paragraphs and so on: to anything in a universe of discourse of character-string theory, or concatenation theory . Peirce's original words are the following: A common mode of estimating the amount of matter in a ... printed book is to count the number of words. There will ordinarily be about twenty 'thes' on a page, and, of course, they count as twenty words. In another sense of the word 'word,' however, there is but one word 'the' in the English language; and it is impossible that this word should lie visibly on a page, or be heard in any voice .... Such a ... Form, I propose to term a Type. A Single ... Object ... such as this or that word on a single line of a single page of a single copy of a book, I will venture to call a Token. .... In order that a Type may be used, it has to be embodied in a Token which shall be a sign of the Type, and thereby of the object the Type signifies.
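In computer programming the distinction surfaces whenever occurrences are counted. The short Python sketch below makes the article's opening example concrete by counting word tokens and word types in "A rose is a rose is a rose" (case-folded, as in the example).

```python
from collections import Counter

sentence = "A rose is a rose is a rose"
tokens = sentence.lower().split()   # every occurrence is a token
types = Counter(tokens)             # each distinct word is a type

print(len(tokens))   # 8 word tokens
print(len(types))    # 3 word types: 'a', 'rose', 'is'
print(types)         # Counter({'a': 3, 'rose': 3, 'is': 2})
```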
https://en.wikipedia.org/wiki/Type–token_distinction
Typhoid adware is a type of computer security threat that uses a Man-in-the-middle attack to inject advertising into web pages a user visits when using a public network, like a Wi-Fi hotspot . Researchers from the University of Calgary identified the issue, which does not require the affected computer to have adware installed in order to display advertisements on this computer. The researchers said that the threat was not yet observed, but described its mechanism and potential countermeasures . [ 1 ] [ 2 ] The environment for the threat to work is an area of non-encrypted wireless connection , such as a wireless internet cafe or other Wi-Fi hotspots . Typhoid adware would trick a laptop to recognize it as the wireless provider and inserts itself into the route of the wireless connection between the computer and the actual provider. After that the adware may insert various advertisements into the data stream to appear on the computer during the browsing session. In this way even a video stream, e.g., from YouTube may be modified. What is more, the adware may run from an infested computer whose owner would not see any manifestations, yet will affect neighboring ones. For the latter peculiarity it was named in an analogy with Typhoid Mary (Mary Mallon), the first identified person who never experienced any symptoms yet spread infection. [ 1 ] [ 3 ] At the same time running antivirus software on the affected computer is useless, since it has no adware installed. The implemented proof of concept was described in an article written in March 2010, by Daniel Medeiros Nunes de Castro, Eric Lin, John Aycock, and Mea Wang. [ 3 ] While typhoid adware is a variant of the well-known man-in-the-middle attack , the researchers point out a number of new important issues, such as protection of video content and growing availability of public wireless internet access which are not well-monitored. [ 3 ] [ 4 ] Researchers say that annoying advertisements are only one threat of many. A serious danger may come from, e.g., promotions of rogue antivirus software seemingly coming from a trusted source. [ 1 ] Suggested countermeasures include: All these approaches have been investigated earlier in other contexts. [ 3 ]
https://en.wikipedia.org/wiki/Typhoid_adware
In information theory , the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property (AEP) which is a kind of law of large numbers . The notion of typicality is only concerned with the probability of a sequence and not the actual sequence itself. This has great use in compression theory as it provides a theoretical means for compressing data, allowing us to represent any sequence X n using nH ( X ) bits on average, and, hence, justifying the use of entropy as a measure of information from a source. The AEP can also be proven for a large class of stationary ergodic processes , allowing typical set to be defined in more general cases. Additionally, the typical set concept is foundational in understanding the limits of data transmission and error correction in communication systems. By leveraging the properties of typical sequences, efficient coding schemes like Shannon's source coding theorem and channel coding theorem are developed, enabling near-optimal data compression and reliable transmission over noisy channels. If a sequence x 1 , ..., x n is drawn from an independent identically-distributed random variable (IID) X defined over a finite alphabet X {\displaystyle {\mathcal {X}}} , then the typical set, A ε ( n ) ∈ X {\displaystyle \in {\mathcal {X}}} ( n ) is defined as those sequences which satisfy: where is the information entropy of X . The probability above need only be within a factor of 2 n ε . Taking the logarithm on all sides and dividing by -n , this definition can be equivalently stated as For i.i.d sequence, since we further have By the law of large numbers, for sufficiently large n An essential characteristic of the typical set is that, if one draws a large number n of independent random samples from the distribution X , the resulting sequence ( x 1 , x 2 , ..., x n ) is very likely to be a member of the typical set, even though the typical set comprises only a small fraction of all the possible sequences. Formally, given any ε > 0 {\displaystyle \varepsilon >0} , one can choose n such that: For a general stochastic process { X ( t )} with AEP, the (weakly) typical set can be defined similarly with p ( x 1 , x 2 , ..., x n ) replaced by p ( x 0 τ ) (i.e. the probability of the sample limited to the time interval [0, τ ]), n being the degree of freedom of the process in the time interval and H ( X ) being the entropy rate . If the process is continuous valued, differential entropy is used instead. Counter-intuitively, the most likely sequence is often not a member of the typical set. For example, suppose that X is an i.i.d Bernoulli random variable with p (0)=0.1 and p (1)=0.9. In n independent trials, since p (1)> p (0), the most likely sequence of outcome is the sequence of all 1's, (1,1,...,1). Here the entropy of X is H ( X )=0.469, while So this sequence is not in the typical set because its average logarithmic probability cannot come arbitrarily close to the entropy of the random variable X no matter how large we take the value of n . For Bernoulli random variables, the typical set consists of sequences with average numbers of 0s and 1s in n independent trials. This is easily demonstrated: If p(1) = p and p(0) = 1-p , then for n trials with m 1's, we have The average number of 1's in a sequence of Bernoulli trials is m = np . 
Thus, we have For this example, if n =10, then the typical set consist of all sequences that have a single 0 in the entire sequence. In case p (0)= p (1)=0.5, then every possible binary sequences belong to the typical set. If a sequence x 1 , ..., x n is drawn from some specified joint distribution defined over a finite or an infinite alphabet X {\displaystyle {\mathcal {X}}} , then the strongly typical set, A ε,strong ( n ) ∈ X {\displaystyle \in {\mathcal {X}}} is defined as the set of sequences which satisfy where N ( x i ) {\displaystyle {N(x_{i})}} is the number of occurrences of a specific symbol in the sequence. It can be shown that strongly typical sequences are also weakly typical (with a different constant ε), and hence the name. The two forms, however, are not equivalent. Strong typicality is often easier to work with in proving theorems for memoryless channels. However, as is apparent from the definition, this form of typicality is only defined for random variables having finite support. Two sequences x n {\displaystyle x^{n}} and y n {\displaystyle y^{n}} are jointly ε-typical if the pair ( x n , y n ) {\displaystyle (x^{n},y^{n})} is ε-typical with respect to the joint distribution p ( x n , y n ) = ∏ i = 1 n p ( x i , y i ) {\displaystyle p(x^{n},y^{n})=\prod _{i=1}^{n}p(x_{i},y_{i})} and both x n {\displaystyle x^{n}} and y n {\displaystyle y^{n}} are ε-typical with respect to their marginal distributions p ( x n ) {\displaystyle p(x^{n})} and p ( y n ) {\displaystyle p(y^{n})} . The set of all such pairs of sequences ( x n , y n ) {\displaystyle (x^{n},y^{n})} is denoted by A ε n ( X , Y ) {\displaystyle A_{\varepsilon }^{n}(X,Y)} . Jointly ε-typical n -tuple sequences are defined similarly. Let X ~ n {\displaystyle {\tilde {X}}^{n}} and Y ~ n {\displaystyle {\tilde {Y}}^{n}} be two independent sequences of random variables with the same marginal distributions p ( x n ) {\displaystyle p(x^{n})} and p ( y n ) {\displaystyle p(y^{n})} . Then for any ε>0, for sufficiently large n , jointly typical sequences satisfy the following properties: In information theory , typical set encoding encodes only the sequences in the typical set of a stochastic source with fixed length block codes. Since the size of the typical set is about 2 nH(X) , only nH(X) bits are required for the coding, while at the same time ensuring that the chances of encoding error is limited to ε. Asymptotically, it is, by the AEP, lossless and achieves the minimum rate equal to the entropy rate of the source. In information theory , typical set decoding is used in conjunction with random coding to estimate the transmitted message as the one with a codeword that is jointly ε-typical with the observation. i.e. where w ^ , x 1 n ( w ) , y 1 n {\displaystyle {\hat {w}},x_{1}^{n}(w),y_{1}^{n}} are the message estimate, codeword of message w {\displaystyle w} and the observation respectively. A ε n ( X , Y ) {\displaystyle A_{\varepsilon }^{n}(X,Y)} is defined with respect to the joint distribution p ( x 1 n ) p ( y 1 n | x 1 n ) {\displaystyle p(x_{1}^{n})p(y_{1}^{n}|x_{1}^{n})} where p ( y 1 n | x 1 n ) {\displaystyle p(y_{1}^{n}|x_{1}^{n})} is the transition probability that characterizes the channel statistics, and p ( x 1 n ) {\displaystyle p(x_{1}^{n})} is some input distribution used to generate the codewords in the random codebook.
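The Bernoulli example above is easy to check numerically. The Python sketch below is illustrative only (the choices of ε and n are arbitrary): it computes the sample entropy −(1/n) log₂ p(x₁,…,xₙ) of a sequence, compares it with H(X) ≈ 0.469 for p(1) = 0.9, and confirms that the all-ones sequence, though individually the most probable, is not ε-typical. At such a small n the typical set also carries only part of the total probability, since the AEP guarantee is asymptotic.

```python
import itertools
import math

p1 = 0.9                      # p(1); p(0) = 0.1
H = -(p1 * math.log2(p1) + (1 - p1) * math.log2(1 - p1))   # ≈ 0.469 bits

def sample_entropy(seq):
    """-(1/n) log2 p(x1..xn) for an i.i.d. Bernoulli(p1) sequence of 0s and 1s."""
    logp = sum(math.log2(p1) if x == 1 else math.log2(1 - p1) for x in seq)
    return -logp / len(seq)

def is_weakly_typical(seq, eps):
    return abs(sample_entropy(seq) - H) <= eps

n, eps = 10, 0.2
all_ones = (1,) * n
print(round(H, 3), round(sample_entropy(all_ones), 3), is_weakly_typical(all_ones, eps))
# H ≈ 0.469, sample entropy ≈ 0.152 -> the most probable sequence is not typical

# Total probability carried by the typical set at this small n (only the
# sequences with exactly one 0 qualify, as noted in the example above):
mass = sum(
    p1 ** sum(s) * (1 - p1) ** (n - sum(s))
    for s in itertools.product((0, 1), repeat=n)
    if is_weakly_typical(s, eps)
)
print(round(mass, 3))
```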
https://en.wikipedia.org/wiki/Typical_set
In quantum information theory , the idea of a typical subspace plays an important role in the proofs of many coding theorems (the most prominent example being Schumacher compression ). Its role is analogous to that of the typical set in classical information theory . Consider a density operator ρ {\displaystyle \rho } with the following spectral decomposition : The weakly typical subspace is defined as the span of all vectors such that the sample entropy H ¯ ( x n ) {\displaystyle {\overline {H}}(x^{n})} of their classical label is close to the true entropy H ( X ) {\displaystyle H(X)} of the distribution p X ( x ) {\displaystyle p_{X}(x)} : where The projector Π ρ , δ n {\displaystyle \Pi _{\rho ,\delta }^{n}} onto the typical subspace of ρ {\displaystyle \rho } is defined as where we have "overloaded" the symbol T δ X n {\displaystyle T_{\delta }^{X^{n}}} to refer also to the set of δ {\displaystyle \delta } -typical sequences: The three important properties of the typical projector are as follows: where the first property holds for arbitrary ϵ , δ > 0 {\displaystyle \epsilon ,\delta >0} and sufficiently large n {\displaystyle n} . Consider an ensemble { p X ( x ) , ρ x } x ∈ X {\displaystyle \left\{p_{X}(x),\rho _{x}\right\}_{x\in {\mathcal {X}}}} of states. Suppose that each state ρ x {\displaystyle \rho _{x}} has the following spectral decomposition : Consider a density operator ρ x n {\displaystyle \rho _{x^{n}}} which is conditional on a classical sequence x n ≡ x 1 ⋯ x n {\displaystyle x^{n}\equiv x_{1}\cdots x_{n}} : We define the weak conditionally typical subspace as the span of vectors (conditional on the sequence x n {\displaystyle x^{n}} ) such that the sample conditional entropy H ¯ ( y n | x n ) {\displaystyle {\overline {H}}(y^{n}|x^{n})} of their classical labels is close to the true conditional entropy H ( Y | X ) {\displaystyle H(Y|X)} of the distribution p Y | X ( y | x ) p X ( x ) {\displaystyle p_{Y|X}(y|x)p_{X}(x)} : where The projector Π ρ x n , δ {\displaystyle \Pi _{\rho _{x^{n}},\delta }} onto the weak conditionally typical subspace of ρ x n {\displaystyle \rho _{x^{n}}} is as follows: where we have again overloaded the symbol T δ Y n | x n {\displaystyle T_{\delta }^{Y^{n}|x^{n}}} to refer to the set of weak conditionally typical sequences: The three important properties of the weak conditionally typical projector are as follows: where the first property holds for arbitrary ϵ , δ > 0 {\displaystyle \epsilon ,\delta >0} and sufficiently large n {\displaystyle n} , and the expectation is with respect to the distribution p X n ( x n ) {\displaystyle p_{X^{n}}(x^{n})} .
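For a single qubit the weakly typical projector can be constructed by brute force. The numpy sketch below is illustrative only (the eigenvalue p, the block length n and the width δ are arbitrary choices): it takes ρ to be diagonal in its eigenbasis with eigenvalues p and 1 − p, enumerates all length-n eigenvalue strings, and keeps those whose sample entropy lies within δ of H.

```python
import itertools
import numpy as np

def typical_projector(p, n, delta):
    """Projector onto the weakly typical subspace of rho^{(x)n},
    where rho = diag(p, 1 - p) in its eigenbasis."""
    probs = np.array([p, 1.0 - p])
    H = -(probs * np.log2(probs)).sum()
    dim = 2 ** n
    Pi = np.zeros((dim, dim))
    for labels in itertools.product((0, 1), repeat=n):
        sample_H = -np.mean([np.log2(probs[x]) for x in labels])
        if abs(sample_H - H) <= delta:
            idx = int("".join(map(str, labels)), 2)   # basis index of |x1...xn>
            Pi[idx, idx] = 1.0
    return Pi

p, n, delta = 0.75, 8, 0.15
Pi = typical_projector(p, n, delta)

rho_n = np.ones(1)
for _ in range(n):                         # rho^{(x)n} is diagonal as well
    rho_n = np.kron(rho_n, np.array([p, 1.0 - p]))

print(int(Pi.trace()))                     # dimension of the typical subspace
print(float((np.diag(Pi) * rho_n).sum()))  # Tr(Pi rho^{(x)n}): probability captured
```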
https://en.wikipedia.org/wiki/Typical_subspace
Typographical Number Theory ( TNT ) is a formal axiomatic system describing the natural numbers that appears in Douglas Hofstadter 's book Gödel, Escher, Bach . It is an implementation of Peano arithmetic that Hofstadter uses to help explain Gödel's incompleteness theorems . Like any system implementing the Peano axioms, TNT is capable of referring to itself (it is self-referential ). TNT does not use a distinct symbol for each natural number . Instead it makes use of a simple, uniform way of giving a compound symbol to each natural number: zero is written 0 , and each subsequent number is obtained by prefixing another S , so that one is S0 , two is SS0 , three is SSS0 , and so on. The symbol S can be interpreted as "the successor of" , or "the number after". Since this is, however, a number theory, such interpretations are useful, but not strict. It cannot be said that because four is the successor of three that four is SSSS0 , but rather that since three is the successor of two, which is the successor of one, which is the successor of zero, which has been described as 0 , four can be "proved" to be SSSS0 . TNT is designed such that everything must be proven before it can be said to be true. In order to refer to unspecified terms, TNT makes use of five variables : a , b , c , d and e . More variables can be constructed by adding the prime symbol after them; for example, a ′, b ′ and c ″ are also variables. In the more rigid version of TNT, known as "austere" TNT, only the variable a is used, with further variables obtained by appending primes ( a ′, a ″, and so on). In Typographical Number Theory, the usual symbols of "+" for additions, and "·" for multiplications are used. Thus to write "b plus c" is to write ( b + c ) and "a times d" is written as ( a · d ). The parentheses are required. Any laxness would violate TNT's formation system (although it is trivially proved this formalism is unnecessary for operations which are both commutative and associative). Also only two terms can be operated on at once. Therefore, to write "a plus b plus c" is to write either ( a +( b + c )) or (( a + b )+ c ). The "Equals" operator is used to denote equivalence. It is defined by the symbol "=", and takes roughly the same meaning as it usually does in mathematics. For instance, ( SSS0 + SSS0 )= SSSSSS0 is a theorem statement in TNT, with the interpretation "3 plus 3 equals 6". In Typographical Number Theory, negation , i.e. the turning of a statement to its opposite, is denoted by the "~" or negation operator. For instance, ~( SSS0 + SSS0 )= SSSSSSS0 is a theorem in TNT, interpreted as "3 plus 3 is not equal to 7". By negation, this means negation in Boolean logic ( logical negation ), rather than simply being the opposite. For example, the negation of "I am eating a grapefruit" is "I am not eating a grapefruit", rather than "I am eating something other than a grapefruit". Similarly, "The television is on" is negated to "The television is not on", rather than "The television is off", because, for example, it might be broken. This is a subtle difference, but an important one. If x and y are well-formed formulas, and provided that no variable which is free in one is quantified in the other, then the following are all well-formed formulas: < x ∧ y >, < x ∨ y > and < x ⊃ y >. The quantification status of a variable doesn't change here. There are two quantifiers used: ∀ and ∃ . Note that unlike most other logical systems where quantifiers over sets require a mention of the element's existence in the set, this is not required in TNT because all numbers and terms are strictly natural numbers or logical boolean statements.
It is therefore equivalent to say ∀a:(a∈N):∀b:(b∈N):(a+b)=(b+a) and ∀ a :∀ b :( a + b )=( b + a ). For example: ∀ a :∀ b :( a + b )=( b + a ) ("For every number a and every number b, a plus b equals b plus a", or more figuratively, "Addition is commutative.") ~∃ c :( c + S0 )= 0 ("There does not exist a number c such that c plus one equals zero", or more figuratively, "Zero is not the successor of any (natural) number.") All the symbols of propositional calculus apart from the atom symbols are used in Typographical Number Theory, and they retain their interpretations. Atoms are here defined as strings which amount to statements of equality, such as 2 plus 3 equals five: ( SS0 + SSS0 )= SSSSS0 , or 2 plus 2 is equal to 4: ( SS0 + SS0 )= SSSS0 .
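Because TNT numerals are just strings of S's in front of 0, generating them and assembling simple formulas is mechanical. The Python sketch below is an informal aid for reading the notation, not part of Hofstadter's formal system; the helper names numeral, value and plus are invented for this example.

```python
def numeral(n: int) -> str:
    """The TNT numeral for the natural number n: n copies of 'S' before '0'."""
    return "S" * n + "0"

def value(t: str) -> int:
    """Decode a numeral such as 'SSSS0' back to an int."""
    assert t.endswith("0") and set(t[:-1]) <= {"S"}
    return len(t) - 1

def plus(a: str, b: str) -> str:
    """The TNT term for 'a plus b'; the parentheses are mandatory in TNT."""
    return f"({a}+{b})"

print(numeral(4))                                        # SSSS0
print(f"{plus(numeral(2), numeral(3))}={numeral(5)}")    # (SS0+SSS0)=SSSSS0
print(f"{plus(numeral(2), numeral(2))}={numeral(4)}")    # (SS0+SS0)=SSSS0
print(value("SSSS0"))                                    # 4
```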
https://en.wikipedia.org/wiki/Typographical_Number_Theory
The tyranny of numbers was a problem faced in the 1960s by computer engineers . Engineers were unable to increase the performance of their designs due to the huge number of components involved. In theory, every component needed to be wired to every other component (or at least to many other components), and the connections were typically strung and soldered by hand. In order to improve performance, more components would be needed, and it seemed that future designs would consist almost entirely of wiring. The first known recorded use of the term in this context was made by the Vice President of Bell Labs in an article celebrating the 10th anniversary of the invention of the transistor , for the "Proceedings of the IRE" (Institute of Radio Engineers), June 1958 [1] . Referring to the problems many designers were having, he wrote: For some time now, electronic man has known how 'in principle' to extend greatly his visual, tactile, and mental abilities through the digital transmission and processing of all kinds of information. However, all these functions suffer from what has been called 'the tyranny of numbers.' Such systems, because of their complex digital nature, require hundreds, thousands, and sometimes tens of thousands of electron devices. At the time, computers were typically built up from a series of "modules", each module containing the electronics needed to perform a single function. A complex circuit like an adder would generally require several modules working in concert. The modules were typically built on printed circuit boards of a standardized size, with a connector on one edge that allowed them to be plugged into the power and signaling lines of the machine, and were then wired to other modules using twisted pair or coaxial cable . Since each module was relatively custom, modules were assembled and soldered by hand or with limited automation. As a result, they suffered major reliability problems. Even a single bad component or solder joint could render the entire module inoperative. Even with properly working modules, the mass of wiring connecting them together was another source of construction and reliability problems. As computers grew in complexity, and the number of modules increased, the task of making a machine actually work grew more and more difficult. This was the "tyranny of numbers". It was precisely this problem that Jack Kilby was thinking about while working at Texas Instruments . Theorizing that germanium could be used to make all common electronic components ( transistors , resistors , capacitors , etc.), he set about building a single-slab component that combined the functionality of an entire module. Although successful in this goal, it was Robert Noyce 's silicon version and the associated fabrication techniques that made the integrated circuit (IC) truly practical. [ 1 ] Unlike modules, ICs were built using photoetching techniques on an assembly line , greatly reducing their cost. Although any given IC might have the same chance of working or not working as a module, they cost so little that a chip that did not work could simply be thrown away and another tried. In fact, early IC assembly lines had failure rates around 90% or greater, which kept their prices high. The U.S. Air Force and NASA were major purchasers of early ICs, where their small size and light weight overcame any cost issues. They demanded high reliability, and the industry's response not only provided the desired reliability but also increased yields, which had the effect of driving down prices.
ICs from the early 1960s were not complex enough for general computer use, but as the complexity increased through the 1960s, practically all computers switched to IC-based designs. The result was what are today referred to as the third-generation computers , which became commonplace during the early 1970s. The progeny of the integrated circuit, the microprocessor , eventually superseded the use of individual ICs as well, placing the entire collection of modules onto one chip. Seymour Cray was particularly well known for making complex designs work in spite of the tyranny of numbers. His attention to detail and ability to fund several attempts at a working design meant that pure engineering effort could overcome the problems they faced. Yet even Cray eventually succumbed to the problem during the CDC 8600 project, which eventually led to him leaving Control Data .
https://en.wikipedia.org/wiki/Tyranny_of_numbers
Non-receptor tyrosine-protein kinase TYK2 is an enzyme that in humans is encoded by the TYK2 gene . [ 5 ] [ 6 ] TYK2 was the first member of the JAK family that was described (the other members are JAK1 , JAK2 , and JAK3 ). [ 7 ] It has been implicated in IFN-α , IL-6 , IL-10 and IL-12 signaling. This gene encodes a member of the tyrosine kinase family and, more specifically, of the Janus kinase (JAK) protein family. This protein associates with the cytoplasmic domain of type I and type II cytokine receptors and propagates cytokine signals by phosphorylating receptor subunits. It is also a component of both the type I and type III interferon signaling pathways. As such, it may play a role in anti-viral immunity. [ 6 ] Cytokines play pivotal roles in immunity and inflammation by regulating the survival, proliferation, differentiation, and function of immune cells, as well as cells from other organ systems. [ 8 ] Hence, targeting cytokines and their receptors is an effective means of treating disorders of immunity and inflammation. Type I and II cytokine receptors associate with Janus family kinases (JAKs) to affect intracellular signaling. Cytokines including interleukins, interferons and hemopoietins activate the Janus kinases, which associate with their cognate receptors. [ 9 ] The mammalian JAK family has four members: JAK1, JAK2, JAK3 and tyrosine kinase 2 (TYK2). [ 7 ] The connection between JAKs and cytokine signaling was first revealed when a screen for genes involved in interferon type I (IFN-1) signaling identified TYK2 as an essential element, which is activated by an array of cytokine receptors . [ 10 ] TYK2 has broader and more profound functions in humans than was previously appreciated on the basis of murine models, which indicated that TYK2 functions primarily in IL-12 and type I IFN signaling. TYK2 deficiency has more dramatic effects in human cells than in mouse cells. However, in addition to IFN-α and -β and IL-12 signaling, TYK2 has major effects on the transduction of IL-23 , IL-10 , and IL-6 signals. Since IL-6 signals through the gp-130 receptor chain that is common to a large family of cytokines, including IL-6, IL-11 , IL-27 , IL-31 , oncostatin M (OSM), ciliary neurotrophic factor , cardiotrophin 1 , cardiotrophin-like cytokine , and LIF , TYK2 might also affect signaling through these cytokines. Recently, it has been recognized that IL-12 and IL-23 share ligand and receptor subunits that activate TYK2. IL-10 is a critical anti-inflammatory cytokine, and IL-10 −/− mice suffer from fatal, systemic autoimmune disease. TYK2 is activated by IL-10 , and its deficiency affects the ability to generate and respond to IL-10. [ 11 ] Under physiological conditions, immune cells are, in general, regulated by the action of many cytokines, and it has become clear that cross-talk between different cytokine-signalling pathways is involved in the regulation of the JAK–STAT pathway. [ 12 ] It is now widely accepted that atherosclerosis is a result of cellular and molecular events characteristic of inflammation.
[ 13 ] Vascular inflammation can be caused by upregulation of Ang-II , which is produced locally by inflamed vessels and induces synthesis and secretion of IL-6 , a cytokine responsible for induction of angiotensinogen synthesis in the liver through the JAK/ STAT3 pathway. This pathway is activated through high-affinity membrane protein receptors on target cells, termed the IL-6R α-chain, which recruits gp-130 in association with tyrosine kinases (JAK1, JAK2 and TYK2). [ 14 ] The cytokines IL-4 and IL-13 are elevated in the lungs of chronic asthmatics. Signalling through IL-4/IL-13 complexes is thought to occur through the IL-4Rα chain, which is responsible for activation of the JAK-1 and TYK2 kinases. [ 15 ] A role for TYK2 in rheumatoid arthritis is directly observed in TYK2-deficient mice, which were resistant to experimental arthritis. [ 16 ] TYK2 −/− mice displayed a lack of responsiveness to small amounts of IFN-α , but responded normally to high concentrations of IFN-α/β. [ 12 ] [ 17 ] In addition, these mice respond normally to IL-6 and IL-10, suggesting that TYK2 is dispensable for mediating IL-6 and IL-10 signaling and does not play a major role in IFN-α signaling. Although TYK2 −/− mice are phenotypically normal, a variety of cells isolated from them exhibit abnormal responses to inflammatory challenges. [ 18 ] The most remarkable phenotype observed in TYK2-deficient macrophages was a lack of nitric oxide production upon stimulation with LPS . Further elucidation of the molecular mechanisms of LPS signaling showed that TYK2 and IFN-β deficiency leads to resistance to LPS-induced endotoxin shock, whereas STAT1 -deficient mice are susceptible. [ 19 ] Development of a TYK2 inhibitor appears to be a rational approach in drug discovery. [ 20 ] A mutation in this gene has been associated with hyperimmunoglobulin E syndrome (HIES), a primary immunodeficiency characterized by elevated serum immunoglobulin E . [ 21 ] [ 22 ] [ 23 ] TYK2 appears to play a central role in the inflammatory cascade responses in the pathogenesis of immune-mediated inflammatory diseases such as psoriasis . [ 24 ] The drug deucravacitinib (marketed as Sotyktu), a small-molecule TYK2 inhibitor, was approved for moderate-to-severe plaque psoriasis in 2022. The P1104A allele of TYK2 has been shown to increase risk of tuberculosis when carried as a homozygote; population genetic analyses suggest that the arrival of tuberculosis in Europe drove the frequency of that allele down three-fold about 2,000 years before present. [ 25 ] Tyrosine kinase 2 has been shown to interact with FYN , [ 26 ] PTPN6 , [ 27 ] IFNAR1 , [ 28 ] [ 29 ] Ku80 [ 30 ] and GNB2L1 . [ 31 ] This article incorporates text from the United States National Library of Medicine , which is in the public domain .
https://en.wikipedia.org/wiki/Tyrosine_kinase_2
The Tyson Medal is a prize awarded for the best performance in subjects relating to astronomy at the University of Cambridge , England . [ 1 ] It is awarded annually for achievement in the examinations for Part III of the Mathematical Tripos when there is a candidate deserving of the prize. In his will , Henry Tyson made the following bequest: That the sum of three hundred pounds be paid to the Cambridge University the interest annually to be for a gold medal for the best proficient in Mathematics and Astronomy in the same way as Dr Smith's and to bear the donor's name. [ 2 ] The value of the fund was £65,095 in 2008. [ 3 ] Most of this list is from The Times newspaper archive. [ 4 ] The winners of the prize are published in The Cambridge University Reporter . [ citation needed ]
https://en.wikipedia.org/wiki/Tyson_Medal
Tyzzeria allenae Tyzzeria anseris Tyzzeria boae Tyzzeria chalcides Tyzzeria chenicusae Tyzzeria galli Tyzzeria natrix Tyzzeria parvula Tyzzeria pellerdyi Tyzzeria peomysci Tyzzeria perdyi Tyzzeria perniciosa Tyzzeria typhlopis Tyzzeria is a genus of parasitic alveolates that, with one exception ( Tyzzeria boae ), infect the cells of the small intestine . [ 1 ] The genus Tyzzeria was first described by Allen in 1936. [ 2 ] As is all too common in protozoal taxonomy, the validity of several of the species is controversial. Species occurring in Anseriforme birds appear to be valid, whereas the other species may be misidentifications. It is to be hoped that the application of DNA-based methods will resolve these matters. The oocysts lack sporocysts: each oocyst possesses eight sporozoites . Tyzzeria allenae - common goldeneye ( Chenicus coromandelianus ) Tyzzeria boae - red-tailed boa ( Boa constrictor constrictor ) Tyzzeria chalcides - ocellated skink ( Chalcides ocellatus ) Tyzzeria chenicusae - common goldeneye ( Chenicus coromandelianus ) Tyzzeria galli - Ceylon jungle fowl ( Gallus lafayettei ) Tyzzeria natrix - yamakagashi ( Rhabdophis tigrinus ) Tyzzeria parvula - greater white-fronted goose ( Anser albifrons ), greylag goose ( Anser anser ), snow goose ( Anser caerulescens ), Ross' goose ( Anser rossii ), brent goose ( Branta bernicla ), Canada goose ( Branta canadensis ), tundra swan ( Cygnus columbianus ) Tyzzeria pellerdyi - northern pintail ( Anas acuta ), American wigeon ( Anas americana ), northern shoveler ( Anas clypeata ), common teal ( Anas crecca ), blue-winged teal ( Anas discors ), mallard ( Anas platyrhynchos ), gadwall ( Anas strepera ), ferruginous duck ( Aythya nyroca ) Tyzzeria perniciosa - northern pintail ( Anas acutus ), mallard ( Anas platyrhynchos ), lesser scaup ( Anas affinis ), lesser white-fronted goose ( Aythya erythropus ), tufted duck ( Aythya fuligule ), white-headed duck ( Oxyura leucocephala ), common shelduck ( Tadorna tadora ) Tyzzeria peomysci - white-footed mouse ( Peromyscus leucopus ), deer mouse ( Peromyscus maniculatus ) Tyzzeria perniciosa - lesser scaup ( Aythya affinis ) Tyzzeria typhlopis - European blind snake ( Typhlops vermicularis ) This Apicomplexa -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Tyzzeria
In dermatopathology , the Tzanck test , also Tzanck smear , is scraping of an ulcer base to look for Tzanck cells . It is sometimes also called the chickenpox skin test and the herpes skin test . It is a simple, low-cost, and rapid office based test. [ 1 ] Tzanck cells (acantholytic cells) are found in: Arnault Tzanck did the first cytological examinations in order to diagnose skin diseases. [ 3 ] To diagnose pemphigus , he identified acantholytic cells, and to diagnose of herpetic infections he identified multinucleated giant cells and acantholytic cells. He extended his cytologic findings to certain skin tumors as well. Even though cytological examination can provide rapid and reliable diagnosis for many skin diseases, its use is limited to a few diseases. In endemic regions, Tzanck test is used to diagnose leishmaniasis and leprosy. For other regions, Tzanck test is mainly used to diagnose pemphigus and herpetic infections. Some clinics use biopsies even for herpetic infections. [ 4 ] This is because the advantages of this test are not well known, and the main textbooks of dermatopathology do not include dedicated sections for cytology or Tzanck smear. [ 5 ] A deep learning model called TzanckNet has been developed to lower the experience barrier needed to use this test. [ 6 ] A modified test can be performed using proprietary agents which requires fewer steps and allows the sample to be fixed quicker. For microscopic evaluation, samples are first scanned with low magnification objectives (X4 and X10) and then examined in detail with the high magnification objective (X100). The X4 objectives are used to select the areas to investigate in detail and to detect some ectoparasites, but the basis of the cytological diagnostic process is the X10 objective. With X10 magnification, the individual characteristics of the cells, the relationship of the cells to each other and the presence of some infection and infestation agents are evaluated. For this reason, most of the cytological examination is spent at this magnification, and most samples are diagnosed at this magnification. The key cytological findings that are observed at low magnification or, in other words, should be investigated according to the clinical characteristics of the patient are as follows: acantholytic cells, tadpole cells, granulomatous inflammation, infectious agents and increases in specific cells.
https://en.wikipedia.org/wiki/Tzanck_test
The tzolkinex is an eclipse cycle equal to a period of two saros (13,170.636 days) minus one inex (10,571.946 days). As consecutive eclipses in an inex series belong to the next consecutive saros series, each consecutive tzolkinex belongs to the previous saros series. The tzolkinex is equal to 2598.691 days (about 7 years, 1 month and 12 days). It is related to the tritos in that a period of one tritos plus one tzolkinex is exactly equal to one saros. It is also related to the inex in that a period of one inex plus one tzolkinex is exactly equal to two saros. It corresponds to: Because of the non-integer number of anomalistic months, each eclipse varies in type, i.e. total vs. annular, and greatly varies in length. Because the remainder of 0.31081 is near 1 ⁄ 3 , every third tzolkinex comes close to a whole number of anomalistic months, but occurs during a different season and in the opposite hemisphere; thus such eclipses may be of the same type (annular vs. total) but otherwise do not have a similar character. It was first studied by George van den Bergh (1951). The name Tzolkinex was suggested by Felix Verbelen (2001) as its length is nearly 10 Tzolk'ins (260-day periods). [ 1 ] It alternates hemispheres with each cycle, occurring at alternating nodes, and each successive occurrence is one saros less than the last.
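The defining relations are simple arithmetic on the cycle lengths quoted in this article, which the short Python check below makes explicit (small rounding differences are expected).

```python
two_saros = 13170.636   # days, as quoted above
inex = 10571.946        # days
saros = two_saros / 2   # ≈ 6585.318 days

tzolkinex = two_saros - inex
tritos = saros - tzolkinex        # one tritos plus one tzolkinex equals one saros

print(round(tzolkinex, 3))        # ≈ 2598.690 days (about 7 years, 1 month, 12 days)
print(round(tritos + tzolkinex, 3))  # ≈ 6585.318 days, i.e. one saros
print(round(tzolkinex / 260, 2))  # ≈ 10 -- nearly ten 260-day Tzolk'ins
```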
https://en.wikipedia.org/wiki/Tzolkinex
Tŷ Nant is a mineral water brand bottled at source in Bethania, Ceredigion , Wales . Tŷ Nant is Welsh for "House by the stream". The Tŷ Nant water source was discovered in 1976 by a water diviner . A borehole was sunk through one hundred feet of rock and clay to an underground aquifer which was found to be drinkable. In December 1996, Tŷ Nant invested in a new 45,000 sq ft bottling plant which increased production capacity fivefold. Built with assistance from the Development Board for Rural Wales, it is sited directly above the original borehole where the first waters were drawn in 1976. [ 1 ] Tŷ Nant's cobalt blue glass bottle range was launched in 1989, and won the British "First Glass" Award for Design Excellence. [ citation needed ] A crimson red bottle, called "Tŷ Nant Too", was produced in 1999 to mark the company's 10th anniversary. In 2001 Tŷ Nant entered the PET bottle market. [ 2 ] A new product, TAU spring water, was launched in 2003 with an award-winning design. [ 1 ] [ 3 ] The company sources its spring water from within a 300-acre (120 ha) site, managed sustainably, and situated on the edge of the Cambrian Mountains of Wales. [ 4 ] As it is surrounded by rocks of very low transmissivity, the spring consists of recharge directly through the surface of the ground above, whilst lateral flows into the aquifer from the surrounding, relatively low permeability rock are insignificant. [ 5 ] Tŷ Nant was bought out in 1992 by its Italian distributor, Biscaldi Luigi Import Export Srl.
https://en.wikipedia.org/wiki/Tŷ_Nant
Uranium borohydride is the inorganic compound with the empirical formula U(BH 4 ) 4 . Two polymeric forms are known, as well as a monomeric derivative that exists in the gas phase. Because the polymers convert to the gaseous form at mild temperatures, uranium borohydride once attracted much attention. It is a green solid. [ 1 ] It is a homoleptic coordination complex with borohydride (also called tetrahydroborate). These anions can serve as bidentate (κ 2 -BH 4 − ) bridges between two uranium atoms or as tridentate ligands (κ 3 -BH 4 − ) on single uranium atoms. In the solid state, a polymeric form exists that has a 14-coordinate structure with two tridentate terminal groups and four bidentate bridging groups. [ 2 ] Gaseous U(BH 4 ) 4 features a monomeric 12-coordinate uranium, with four κ 3 -BH 4 − ligands, which envelop the metal and confer volatility. [ 3 ] This compound was first prepared by treating uranium tetrafluoride with aluminium borohydride. [ 1 ] It may also be prepared by the solid-state reaction of uranium tetrachloride with lithium borohydride. [ 1 ] Although solid U(BH 4 ) 4 is a polymer, it undergoes cracking, converting to the monomer. The related methylborohydride complex U(BH 3 CH 3 ) 4 is monomeric as a solid and hence more volatile. During the Manhattan Project , the need arose to find volatile compounds of uranium suitable for use in the diffusion separation of uranium isotopes. Uranium borohydride is, after uranium hexafluoride , the most volatile uranium compound known, with a vapor pressure of 4 mmHg (530 Pa) at 60 °C. Uranium borohydride was discovered by Hermann Irving Schlesinger and Herbert C. Brown , who also discovered sodium borohydride. [ 1 ] Uranium hexafluoride is corrosive , which led to serious consideration of the borohydride. However, by the time the synthesis method was finalized, the problems related to uranium hexafluoride had been solved. Borohydrides are nonideal ligands for isotope separation, since two isotopes of boron occur naturally in high abundance, 10 B (20%) and 11 B (80%), while fluorine-19 is the only isotope of fluorine that occurs in nature in more than trace quantities.
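The two preparations described above can be written as plausible balanced equations. The aluminium-containing by-product in the first equation is an assumption about how the fluoride and borohydride groups redistribute; it is not stated in the text.

```latex
\mathrm{UF_4} + 2\,\mathrm{Al(BH_4)_3} \longrightarrow \mathrm{U(BH_4)_4} + 2\,\mathrm{Al(BH_4)F_2}
\qquad\qquad
\mathrm{UCl_4} + 4\,\mathrm{LiBH_4} \longrightarrow \mathrm{U(BH_4)_4} + 4\,\mathrm{LiCl}
```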
https://en.wikipedia.org/wiki/U(BH4)4
Uranium(IV) sulfate (U(SO 4 ) 2 ) is a water-soluble salt of uranium . It is a very toxic compound . Uranium sulfate minerals commonly are widespread around uranium bearing mine sites, where they usually form during the evaporation of acid sulfate-rich mine tailings which have been leached by oxygen-bearing waters. Uranium sulfate is a transitional compound in the production of uranium hexafluoride . It was also used to fuel aqueous homogeneous reactors . Uranyl sulfate in solution is readily photochemically reduced to uranium(IV) sulfate. The photoreduction is carried out in the sun and requires the addition of ethanol as a reducing agent. Uranium(IV) crystallizes or is precipitated by ethanol in excess. It can be obtained with different degrees of hydration. U(SO 4 ) 2 can also be prepared through electrochemical reduction of U(VI) and the addition of sulfates. Reduction of U(VI) to U(IV) occurs naturally through a variety of means, including through the actions of microorganisms. Formation of U(SO 4 ) 2 is an entropically and thermodynamically favorable reaction. In-situ leaching (ISL), a widespread technique used to mine uranium, is implicated in the artificial increase of uranium sulfate compounds. ISL was the most widely used method to mine uranium in the United States during the 1990s. The method involves pumping an extraction liquid (either sulfuric acid or an alkaline carbonate solution) into an ore deposit, where it complexes with the uranium, removing the liquid and purifying the uranium. This synthetic addition of sulfuric acid unnaturally raises the abundance of uranium sulfate complexes at the site. The lower pH caused by the introduction of acid increases the solubility of U(IV), which is typically relatively insoluble and precipitates out of solution at neutral pH. Oxidation states for uranium range from U 3+ to U 6+ , U(III) and U(V) are rarely found, while U(VI) and U(IV) predominate. U(VI) forms stable aqueous complexes and is thus fairly mobile. Preventing the spread of toxic uranium compounds from mining sites often involves reduction of U(VI) to the far less soluble U(IV). The presence of sulfuric acid and sulfates prevents this sequestration, however, by both lowering the pH and through the formation of uranium salts. U(SO 4 ) 2 is soluble in water, and thus far more mobile. Uranium sulfate complexes also form quite readily. U(IV) is much less soluble, and thus less environmentally mobile, than U(VI), which also forms sulfate compounds such as UO 2 (SO 4 ) . Bacteria which are able to reduce uranium have been proposed as a means of eliminating U(VI) from contaminated areas, such as mine tailings and nuclear weapons manufacture sites. Contamination of groundwater by uranium is considered a serious health risk, and can be damaging to the environment as well. Several species of sulfate reducing bacteria also have the ability to reduce uranium. The ability to clear the environment of both sulfate (which solubilizes reduced uranium) and mobile U(VI) makes bioremediation of ISL mining sites a possibility. U(SO 4 ) 2 is a semi-soluble compound and exists in a variety of hydration states , with up to nine coordinating waters. U(IV) can have up to five coordinating sulfates, although nothing above U(SO 4 ) 2 has been significantly described. Kinetics data for U(SO 4 ) 2+ and U(SO 4 ) 2 reveal that the bidentate complex is strongly favored thermodynamically, with a reported K 0 of 10.51, as compared to K 0 =6.58 for the monodentate complex. 
U(IV) is much more stable as a sulfate compound, particularly as U(SO 4 ) 2 . Běhounekite is a recently (2011) described U(IV) mineral with the chemical composition U(SO 4 ) 2 (H 2 O) 4 . The uranium center has eight oxygen ligands, four provided by the sulfate groups and four from the water ligands. U(SO 4 ) 2 (H 2 O) 4 forms short, green crystals. Běhounekite is the first naturally occurring U(IV) sulfate to be described.
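The stability constants quoted above are read here as base-10 logarithms of the cumulative formation constants, which is how such values are usually reported; that reading is an interpretation, not something stated in the text. On that assumption, the two complexation equilibria are:

```latex
\mathrm{U^{4+}} + \mathrm{SO_4^{2-}} \rightleftharpoons \mathrm{U(SO_4)^{2+}}, \qquad \log K^{0} \approx 6.58
```
```latex
\mathrm{U^{4+}} + 2\,\mathrm{SO_4^{2-}} \rightleftharpoons \mathrm{U(SO_4)_2}, \qquad \log K^{0} \approx 10.51
```

On this reading, the bis-sulfate complex dominates once sulfate activity is high enough, consistent with the statement that it is strongly favored thermodynamically.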
https://en.wikipedia.org/wiki/U(SO4)2
U-Report is a social messaging tool and data collection system developed by UNICEF to improve citizen engagement, inform leaders, and foster positive change. [ 1 ] [ 2 ] The program sends SMS polls and alerts to its participants, collecting real-time responses, and subsequently publishes gathered data. Issues polled include health, education, gender, climate change, [ 3 ] water, sanitation and hygiene, youth unemployment, HIV/AIDS, and disease outbreaks. [ 4 ] The program currently has 28 million u-reporters in 95 countries. [ 5 ] In 2007, UNICEF Innovation used RapidSMS to develop U-Report, a platform that would allow anyone to publish real-time information and data analytics in SMS format without the need of a programmer. [ 6 ] [ 7 ] In May 2011, Uganda became the first country in which UNICEF launched the U-Report mobile initiative, [ 8 ] due to its population being, on average, one of the youngest in the world. Another reason UNICEF cited for introducing the program in Uganda was the nation's high cellphone use compared to other developing nations, with 48% of the nation's citizens owning a cellphone. [ 9 ] Due to U-Report's success in Uganda, UNICEF expanded the program to Zambia in December 2012 [ 10 ] and to Nigeria in June 2014. [ 11 ] In Zambia, U-report was used to prevent HIV among adolescents and young people, with voluntary HIV testing in the country rising from 24% of the population to 40%. In Nigeria, U-Report primarily conducts surveys on social and medical issues. In July 2015, U-Report reached a total of one million reporters in fifteen countries. [ 12 ] In October 2015, Ukraine became the first country in Europe to join the U-Report program, [ 13 ] growing to 68,273 participants by September 30, 2018. [ 14 ]
https://en.wikipedia.org/wiki/U-Report
In quantum mechanics , the u-bit or ubit is a proposed theoretical entity which arises in attempts to reformulate wave functions using only real numbers instead of the complex numbers conventionally used. [ 1 ] In order to discover the real probability of a given quantum event occurring, the conventional calculation carries out an operation, analogous to squaring, on an associated set of complex numbers. A complex number involves the use of the square root of minus one , a number which is described as " imaginary " in contrast to the familiar " real" numbers used for counting and describing real physical objects. Because the computed result is required to be a real number, information is lost in the computation. [ 2 ] This situation is regarded as unsatisfactory by some researchers, who seek an alternative formulation which does not involve the square root of minus one. Bill Wootters , of Williams College , Williamstown, Massachusetts , and colleagues have derived such a model. This model requires the presence of a universal entity which is quantum-entangled with every quantum wave and which he calls the u-bit. [ 3 ] Mathematically the u-bit may be represented as a vector rotating in a real two-dimensional plane. It has no known physical representation in the real world. [ 3 ]
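As a concrete illustration of the "operation analogous to squaring" mentioned above (a standard textbook fact, not something specific to the u-bit proposal), a complex amplitude c = a + bi yields the probability

```latex
P = |c|^{2} = c^{*}c = a^{2} + b^{2},
```

so the phase of c drops out of P; this is the sense in which information present in the complex description does not survive the computation. In a real-number reformulation, the same amplitude is carried by the two real components (a, b), a vector in a real two-dimensional plane, which is the kind of object the u-bit itself is modelled as.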
https://en.wikipedia.org/wiki/U-bit
u-blox is a Swiss company that creates wireless semiconductors and modules for consumer, automotive and industrial markets. It operates as a fabless IC design house. The company is listed on the Swiss Stock Exchange (SIX:UBXN) and has offices in the US, Singapore, China, Taiwan (China), Korea, Japan, India, Pakistan, Australia, Ireland, the UK, Belgium, Germany, Sweden, Finland, Italy and Greece. u-blox is a spin-off of the Swiss Federal Institute of Technology in Zurich (ETH), [ 3 ] [ 4 ] was founded in 1997, and is headquartered in Thalwil , Switzerland. Thomas Seiler served as chief executive officer of u-blox AG from 2002 until his retirement on December 31, 2022. Stephan Zizala, who joined the company in 2022, succeeded Seiler. [ 5 ] In 2016, u-blox opened a new office in Taipei , Taiwan. [ 6 ] u-blox provides starter kits which allow quick prototyping of a variety of applications for the Internet of Things. [ 7 ] It develops and sells chips and modules that support global navigation satellite systems ( GNSS ), including receivers for GPS , GLONASS , Galileo , BeiDou and QZSS . [ 8 ] The wireless range consists of GSM , UMTS , CDMA2000 and LTE modules, as well as Bluetooth and Wi-Fi modules. All these products enable the delivery of complete systems for location-based services and M2M (machine-to-machine) applications in the Internet of Things that rely on the convergence of 2G/3G/4G, Bluetooth and Wi-Fi technology and satellite navigation. [ 9 ] A collaboration to create GNSS receivers that work globally was started between u-blox, SoftBank and ALES in 2021. [ 10 ] One year later, in 2022, u-blox released the LARA-L6, at the time the smallest LTE Cat 4 module. [ 11 ] The company launched a dual-band GNSS module in 2023 that uses the L1 as well as the L5 GPS frequency bands . [ 12 ] In 2024 u-blox released the LEXI-R10, which was, according to the company, the smallest LTE Cat 1bis module at the time of launch. [ 13 ] [ 14 ] The company has acquired a dozen companies since its IPO in 2007: after acquiring connectblue [ 15 ] in 2014 and Lesswire in 2015, [ 16 ] it acquired Rigado's module business in 2019. [ 17 ] In 2020, u-blox acquired Thingstream. [ 18 ] In 2021, u-blox AG acquired Sapcorda Services GmbH, a provider of high-precision GNSS (global navigation satellite system) services, [ 19 ] and Naventik GmbH, a German company specializing in the development of safe positioning solutions for autonomous driving. [ 20 ]
https://en.wikipedia.org/wiki/U-blox
In model theory, a branch of mathematical logic, U-rank is one measure of the complexity of a (complete) type, in the context of stable theories. As usual, a higher U-rank indicates less restriction, and the existence of a U-rank for all types over all sets is equivalent to an important model-theoretic condition: in this case, superstability. U-rank is defined inductively for any (complete) n-type p over any set A; the standard inductive clauses are sketched below. We say that U(p) = α when U(p) ≥ α but not U(p) ≥ α + 1. If U(p) ≥ α for all ordinals α, we say the U-rank is unbounded, or U(p) = ∞. Note: U-rank is formally denoted U n (p), where p is really p(x) and x is a tuple of variables of length n. This subscript is typically omitted when no confusion can result. U-rank is monotone in its domain. That is, suppose p is a complete type over A and B is a subset of A. Then for q the restriction of p to B, U(q) ≥ U(p). If we take B (above) to be empty, we get the following: if there is an n-type p, over some set of parameters, with rank at least α, then there is a type over the empty set of rank at least α. Thus, we can define, for a complete (stable) theory T, U n (T) = sup { U n (p) : p ∈ S(T) }. We then get a concise characterization of superstability: a stable theory T is superstable if and only if U n (T) < ∞ for every n. Pillay, Anand (2008) [1983]. An Introduction to Stability Theory. Dover. p. 57. ISBN 978-0-486-46896-9.
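For reference, the inductive clauses of the standard definition (the Lascar U-rank, as given in standard treatments such as Pillay's) can be sketched as follows:

```latex
\begin{align*}
&U(p) \ge 0 \ \text{for every complete type } p;\\
&U(p) \ge \alpha + 1 \iff \text{there is a forking extension } q \supseteq p \ \text{(over some } B \supseteq A\text{) with } U(q) \ge \alpha;\\
&U(p) \ge \delta \ \text{for a limit ordinal } \delta \iff U(p) \ge \alpha \ \text{for all } \alpha < \delta.
\end{align*}
```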
https://en.wikipedia.org/wiki/U-rank
The U.S. Army Corps of Engineers Bay Model is a working hydraulic scale model of the San Francisco Bay and Sacramento-San Joaquin River Delta System. While the Bay Model is still operational, it is no longer used for scientific research but is instead open to the public alongside educational exhibits about Bay hydrology. The model is located in the Bay Model Visitor Center at 2100 Bridgeway Blvd. in Sausalito, California . In the late 1940s, John Reber proposed to build two large dams in the San Francisco Bay as a way to provide a more reliable freshwater supply to residents and farms and to connect local communities. In 1953, the U.S. Army Corps of Engineers proposed a detailed study of the so-called Reber Plan . Cornelius Biemond proposed a similar plan which would dam the Sacramento River in the delta region to feed aqueducts with freshwater. Authorized by Section 110 of the Rivers and Harbors Act of 1950 , construction of the Bay Model was completed in 1957 to study the plans. [ 1 ] [ 2 ] The tests proved that the plan was not viable, and the Reber Plan was scuttled. [ 3 ] The Sacramento-San Joaquin River Delta portion was added to the model in 1966-1969 to provide information for studies concerning impacts of the deepening of navigation channels, realignment of Delta channels (via the Peripheral Canal ), and various flow arrangements on water quality. When completed, the expanded model covered 2 acres (0.81 ha) of land. [ 4 ] The model is approximately 320 feet long in the north-south direction and about 400 feet long in the east-west direction. It is constructed out of 286 five-ton concrete slabs joined together like a jigsaw puzzle . Features that affect the water flow of the San Francisco Bay and Sacramento-San Joaquin Delta are reproduced, including ship channels , rivers , creeks , sloughs, the canals in the Delta, fills , major wharfs , piers , slips , dikes , bridges , and breakwaters . [ 5 ] The limits of the model encompass the Pacific Ocean extending 17 miles beyond the Golden Gate , San Francisco Bay, San Pablo Bay , Suisun Bay and all of the Sacramento-San Joaquin River Delta to Verona , 17 miles north of Sacramento on the north, and to Vernalis , 32 miles south of Stockton on the San Joaquin River on the south. [ 5 ] The scale of the model is 1:1000 on the horizontal axis and 1:100 on the vertical axis. The model operates at a time scale of 1:100. [ 6 ] The model is distorted by a factor of ten between the horizontal and vertical scales. The distortion is designed into the model to ensure a proper hydraulic flow over the tidal flats and shallows. The distortion does increase the hydraulic efficiency of the flows. These increased efficiencies are corrected by the use of copper strips throughout the model. The exact number of copper strips is adjusted during the calibration of the model. [ 6 ]
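A minimal sketch of what the distorted scaling means in practice, using the ratios stated above (1:1000 horizontal, 1:100 vertical, 1:100 time); the sample inputs are illustrative values, not measurements from the article.

```python
# Convert prototype (real-world) quantities to Bay Model quantities.
H_SCALE = 1000.0   # horizontal: 1000 real feet -> 1 model foot
V_SCALE = 100.0    # vertical:    100 real feet -> 1 model foot
T_SCALE = 100.0    # time:        100 real minutes -> 1 model minute

def model_horizontal(feet):
    return feet / H_SCALE

def model_vertical(feet):
    return feet / V_SCALE

def model_minutes(real_hours):
    return real_hours * 60.0 / T_SCALE

# Illustrative (hypothetical) inputs:
print(model_horizontal(2 * 5280))  # a 2-mile-wide reach spans ~10.6 model feet
print(model_vertical(30))          # a 30-foot channel depth becomes ~0.3 model feet
print(model_minutes(12.4))         # a 12.4-hour tidal cycle runs in ~7.4 model minutes
```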
https://en.wikipedia.org/wiki/U.S._Army_Corps_of_Engineers_Bay_Model
The U.S. Chemical Safety and Hazard Investigation Board ( USCSB ), generally referred to [ 1 ] as the Chemical Safety Board ( CSB ), is an independent U.S. federal agency charged with investigating industrial chemical accidents . Headquartered in Washington, D.C. , the agency's board members are appointed by the president and confirmed by the United States Senate . The CSB conducts root cause investigations of chemical accidents at fixed industrial facilities. [ 2 ] The U.S. Chemical Safety Board is authorized by the Clean Air Act Amendments of 1990 and became operational in January 1998. The Senate legislative history states: "The principal role of the new chemical safety board is to investigate accidents to determine the conditions and circumstances which led up to the event and to identify the cause or causes so that similar events might be prevented." Congress gave the CSB a unique statutory mission and provided in law that no other agency or executive branch official may direct the activities of the board. Following the successful model of the National Transportation Safety Board and the Department of Transportation , Congress directed that the CSB's investigative function be completely independent of the rulemaking, inspection, and enforcement authorities of the Environmental Protection Agency and Occupational Safety and Health Administration . Congress recognized that board investigations would identify chemical hazards that were not addressed by those agencies. [ 3 ] Also similarly to the NTSB, the CSB performs "investigations [that] identify the root causes of chemical incidents and share these findings broadly across industries to prevent future incidents." [ 4 ] Following criticism from lawmakers and allegations of mismanagement, the former chairman of the CSB, Rafael Moure-Eraso , resigned in March 2015. [ 5 ] [ 6 ] [ 7 ] He was replaced by Vanessa Allen Sutherland in August 2015. [ 8 ] Sutherland resigned with two years left in her five-year term after the Trump administration proposed shutting down the CSB as part of the 2019 United States federal budget, a proposal that ultimately was not enacted. [ 9 ] The board consists of five members who are appointed by the president with the advice and consent of the Senate. The terms of office are five years. The president designates one of the members as chairperson, again with the advice and consent of the Senate. The chairperson is the chief executive officer of the board and exercises the executive and administrative functions of the board. [ 10 ] The current board members as of December 22, 2024: [ 11 ] The USCSB has investigated many of the most devastating industrial chemical accidents in the U.S. since its inception in 1998. It is known for its highly detailed and technically oriented post-mortem analyses of individual incidents, as well as its transparent public relations practices. The latter include detailed reconstructions of an incident, alongside the root cause analysis and subsequent recommendations the board has made; unusually for a governmental agency, these are often accompanied by a safety report in video form, with careful narration and high-quality computer graphics. The videos are narrated by Sheldon Smith. [ 15 ] [ 16 ] The agency publishes its videos on a public YouTube channel, which as of August 2024 has over 341,000 subscribers. [ 17 ] The CSB's videos have been lauded for their quality, with experts encouraging their use in teaching process safety fundamentals.
[ 18 ] In the mid-to-late 2000s, many of the USCSB's videos centered on explosive dust hazards and on OSHA's response to the USCSB's recommendations on the issue. Of the eight investigations concerning explosions and fires caused by combustible dust that the USCSB had conducted as of December 2021, five had their final reports released between 2004 and 2009. [ 19 ] The USCSB's notable investigations include:
https://en.wikipedia.org/wiki/U.S._Chemical_Safety_Board
The Chemical Corps is the branch of the United States Army tasked with defending against and using chemical , biological , radiological , and nuclear ( CBRN ) weapons . The Chemical Warfare Service was established on 28 June 1918, combining activities that until then had been dispersed among five separate agencies of the United States federal government . It was made a permanent branch of the Regular Army by the National Defense Act of 1920 . In 1945, it was redesignated the Chemical Corps. Discussion of the topic dates back to the American Civil War . A letter to the War Department dated 5 April 1862 from New York City resident John Doughty proposed the use of chlorine shells to drive the Confederate Army from its positions. Doughty included a detailed drawing of the shell with his letter. It is unknown how the military reacted to Doughty's proposal but the letter was unnoticed in a pile of old official documents until modern times. Another American, Forrest Shepherd , also proposed a chemical weapon attack against the Confederates . Shepherd's proposal involved hydrogen chloride , an attack that would have likely been non-lethal but may have succeeded in driving enemy soldiers from their positions. Shepherd was a well-known geologist at the time and his proposal was in the form of a letter directly to the White House . [ 1 ] The earliest predecessors to the United States Army Chemical Corps owe their existence to changes of military technology early in World War I. By 1915, the combatants were using poison gases and chemical irritants on the battlefield. In that year, the United States War Department first became interested in providing individual soldiers with personal protection against chemical warfare and they tasked the Medical Department with developing the technology. Nevertheless, troops were neither supplied with masks nor trained for offensive gas warfare until the U.S. became involved in World War I in 1917. [ 2 ] By 1917, the use of chemical weapons by both the Allied and Central Powers had become commonplace along the Western , Eastern and Italian Fronts , occurring daily in some regions. [ 3 ] In 1917, Secretary of the Interior Franklin K. Lane , directed the Bureau of Mines to assist the Army and Navy in creating a gas war program. [ 2 ] Researchers at the Bureau of Mines had experience in developing gas masks for miners , drawing poisonous air through an activated carbon filter. [ 4 ] After the Director of the Bureau of Mines, Van H. Manning , formally offered the bureau's service to the Military Committee of the National Research Council , the council appointed a Subcommittee on Noxious Gases. [ 2 ] [ 4 ] Manning recruited chemists from industry, universities, and government to help study mustard-gas poisoning, investigate and mass-produce new toxic chemicals, and develop gas-masks and other treatments. [ 4 ] A center for chemical weapons research was established at American University in Washington, D.C. to house researchers. The U.S. military paid to convert classrooms into laboratories. Within a year of setting up the center, the number of scientists and technicians employed there would increase from 272 to over 1,000. Industrial plants were established in nearby cities to synthesize toxic chemicals for use in research and armaments. Shells were filled with toxic gas in Edgewood, Maryland . Women were employed to produce gas masks in Long Island City . [ 4 ] On 5 July 1917 General John J. 
Pershing oversaw the creation of a new military unit dealing with gas, the Gas Service Section. [ 5 ] [ 6 ] The government recruited soldiers for it to be based at Camp American University , Washington, D.C. [ 4 ] [ 7 ] The predecessor to the 1st Gas Regiment was the 30th Engineer Regiment (Gas and Flame). The 30th was activated on 15 August 1917 at Camp American University [ 8 ] A 17 October 1917 memorandum from the Adjutant General to the Chief of Engineers directed that the Gas Service Section consist of four majors, six captains, 10 first lieutenants and 15 second lieutenants. [ 6 ] Additional War Department orders established a Chemical Service Section that included 47 commissioned officers and 95 enlisted personnel. [ 6 ] Before deploying to France in 1917 many of the soldiers in the 30th Engineer Regiment (Gas and Flame) spent their time stateside in training that did not emphasize any chemical warfare skills; [ 9 ] instead the training focused on drill, marching, guard duty, and inspections. [ 9 ] [ 10 ] Despite the conventional training, the public perceived the 30th as dealing mainly with "poisonous gas and hell fire". [ 10 ] By the time those in the 30th Engineers arrived in France most of them knew nothing of chemical warfare and had no specialized equipment. [ 9 ] In 1918, the 30th Engineer Regiment (Gas and Flame) was redesignated the First Gas Regiment and deployed to assist and support Army gas operations, both offensive and defensive. [ citation needed ] On 28 June 1918, the Chemical Warfare Service (CWS) was officially formed and encompassed the "Gas Service" and "Chemical Service" Sections. [ 5 ] [ 6 ] By 1 November 1918 the CWS included 1,654 commissioned officers and 18,027 enlisted personnel. [ 11 ] Major General William L. Sibert served as the first director of the CWS on the day it was created, [ 12 ] and he resigned in April 1920. [ 13 ] In the interwar period , the Chemical Warfare Service maintained its arsenal despite public pressure and presidential wishes in favor of disarmament. Major General Amos Fries , the CWS chief from 1920–29, viewed chemical disarmament as a Communist plot. [ 3 ] Through his instigation and lobbying, the CWS and its various Congressional, chemist, and chemical company allies were able to halt the U.S. Senate's ratification of the 1925 Geneva Protocol which forbade "first use" of chemical weapons. [ 3 ] Even countries who had signed the Geneva Protocol still produced and stockpiled chemical weapons, since the Protocol did not prohibit retaliation in kind. In 1937, President Roosevelt opposed changing the name of the Service to Corps, stating: [ 14 ] It is my thought that the major functions of the Chemical Warfare Service are those of a "Service" rather than a "Corps." It is desirable to designate as a Corps only those supply branches of the Army which are included in the line of the Army. To have changed the name to the "Chemical Service" would have been more in keeping with its functions than to designate it as the "Chemical Corps." I have a far more important objection to this change of name. It has been and is the policy of this Government to do everything in its power to outlaw the use of chemicals in warfare. Such use is inhuman and contrary to what modern civilization should stand for. I am doing everything in my power to discourage the use of gases and other chemicals in any war between nations. 
While, unfortunately, the defensive necessities of the United States call for study of the use of chemicals in warfare, I do not want the Government of the United States to do anything to aggrandize or make permanent any special bureau of the Army or the Navy engaged in these studies. I hope the time will come when the Chemical Warfare Service can be entirely abolished. To dignify this Service by calling it the "Chemical Corps" is, in my judgment, contrary to a sound public policy. The Chemical Warfare Service deployed and prepared gas weapons for use throughout the world during World War II . However, these weapons were never used in combat . [ 16 ] Despite the lack of chemical warfare during the conflict, the CWS saw its funding and personnel increase substantially due to concerns that the Germans and Japanese had a formidable chemical weapons capability. By 1942 the CWS employed 60,000 soldiers and civilians and was appropriated $1 billion. [ 16 ] The CWS completed a variety of tasks and missions not related to chemical warfare during the war, including producing incendiaries for flame throwers , flame tanks and other weapons. Chemical soldiers were also involved in smoke generation missions . Chemical mortar battalions used the 4.2-inch chemical mortar to support armor and infantry units. [ 17 ] Throughout the war, use of chemical and biological weapons was extremely limited on both sides. Italy used mustard gas and phosgene during the short Second Italo-Abyssinian War , Germany employed chemical agents such as Zyklon B against Jews, political prisoners and other victims in extermination camps during the Holocaust , and Japan employed chemical and biological weapons in China. [ 18 ] In 1943 a U.S. ship carrying a secret Chemical Warfare Service cargo of mustard gas, intended as a precautionary retaliatory measure, was sunk in an air raid in Italy , causing 83 deaths and about 600 hospitalized military victims plus a larger number of civilian casualties. In the event, neither chemical nor biological weapons were used on the battlefield by any combatant during World War II. Though the political leadership of the United States remained decidedly against the use of chemical weapons, there were those within the military command structure who advocated the use of such weapons. Following the Battle of Tarawa , during which the U.S. forces suffered more than 3,400 casualties in three days, CWS chief Major General William N. Porter pushed superiors to approve the use of poison gas against Japan. "We have an overwhelming advantage in the use of gas. Properly used gas could shorten the war in the Pacific and prevent loss of many American lives," Porter said. Popular support was not completely lacking. Some newspaper editorials supported the use of chemical weapons in the Pacific theater. The New York Daily News proclaimed in 1943, "We Should Gas Japan", and in 1944 the Washington Times Herald argued that the U.S. "Should Have Used Gas at Tarawa" because "You Can Cook 'Em Better with Gas". [ 18 ] [ 19 ] Despite rising between 1944 and 1945, public opinion never rose above 40 percent in favor of the use of gas weapons. [ 19 ] Within the military, support for chemical warfare was found in V Amphibious Corps and X U.S. Army . Colonel George F. Unmacht (US Army) became commander of the Army's Chemical Warfare Service, Pacific Ocean Area in 1943.
[ 20 ] Along with that he was the Hawaii Territorial Coordinator for Civilian Gas Defense and Joint service Pacific theater chief chemical warfare officer under Adm. Nimitz . Under his leadership the research, development, and production of flamethrowing tanks and napalm took place at Schofield Barracks . His crews of Seabees produced more flamethrowing tanks than commercial production in the States. The Army and Marine Corps felt the tanks saved many American troops on Iwo Jima and Okinawa . [ 20 ] [ 21 ] The Marines felt they were the best weapon they had in the taking of Iwo Jima. In 1946, the Chemical Warfare Service was re-designated as the "U.S. Army Chemical Corps", a name the branch still uses. [ 17 ] With the change came the added mission of defending against nuclear warfare , in addition, the corps continued to refine its offensive and defensive chemical capabilities. [ 17 ] Immediately following World War II, production of U.S. biological warfare (BW) agents went from "factory-level to laboratory-level". [ 22 ] Meanwhile, work on BW delivery systems increased. [ 22 ] Live testing in Panama was carried out during the San Jose Project . From the end of World War II through the Korean War , the U.S. Army, the Chemical Corps and the U.S. Air Force made great strides in their biological warfare programs, especially concerning delivery systems. [ 22 ] During the Korean War (1950–53) chemical soldiers had to again man the 4.2 inch chemical mortar for smoke and high explosive munitions delivery. [ 17 ] During the war, the Pine Bluff Arsenal was opened and used for BW production, and research facilities were expanded at Fort Detrick . [ 22 ] North Korea, the Soviet Union and China leveled accusations at the United States claiming the U.S. used biological agents during the Korean War ; an assertion the U.S. government denied. [ 22 ] After the end of the Korean War, the Army decided to strip the Chemical Corps of the 4.2 inch mortar system and made that an infantry weapon, given its utility against Chinese mortars. [ citation needed ] From 1952 until 1999 the Chemical Corps School was located at Fort McClellan . The Chemical Corps Intelligence Agency (CCIA) was founded in 1955 [ 23 ] within a facility at Arlington Hall Station , Virginia . [ 24 ] which also housed the Army Security Agency , the National Security Agency (NSA) and the Defense Intelligence Agency 's National Intelligence University . [ 25 ] The CCIA accomplished the intelligence function of the U.S. Army Chemical Corps . Its mission was to support the national intelligence effort with particular emphasis on the military aspects of Chemical, Biological and Radiological (CBR) intelligence information. [ 24 ] U.S. Army Chemical Corps Information and Liaison Office, Europe (CCILO–E) was established and located in Frankfurt, Germany. [ 24 ] During November and December 1961 two CCIA officers visited the Far East on an intelligence collection trip. [ 24 ] This visit led to a recommendation by CCIA to establish an information and liaison office in Tokyo patterned on the Frankfurt agency. [ 24 ] The Chief Chemical Officer and Assistant Chief of Staff for Intelligence (ACS/I) approved the recommendation and foresaw an activation date in financial year 1964. Two CCIA staff members again toured selected U.S. intelligence agencies in Japan , Korea , Okinawa , Taiwan , the Philippines , and Hong Kong in the third quarter of financial year 1962. 
[ 24 ] The purposes were to establish liaison with the Chemical Corps personnel, to reemphasize the importance of CBR intelligence, and to provide on the spot guidance and discuss the establishment of a U.S. Army Chemical Corps Information and Liaison Office in Tokyo. [ 24 ] Beginning in 1962 during the Vietnam War , the Chemical Corps operated a program that would become known as Operation Ranch Hand . Ranch Hand was a herbicidal warfare program which used herbicides and defoliants such as Agent Orange . [ 26 ] The chemicals were color-coded based on what compound they contained. The U.S. and its allies officially argued that herbicides and defoliants fell outside the definition of "chemical weapons", since these substances were not designed to asphyxiate or poison humans, but to destroy plants which provided cover or concealment to the enemy. The Chemical Corps continued to support U.S. forces through the use of incendiary weapons , such as napalm , and riot control measures, among other missions. As the war progressed into the late 1960s, public sentiment against the Chemical Corps increased because of the Army's continued use of herbicides, criticized in the press as being against the Geneva Protocol; napalm; and riot control agents. [ 27 ] Besides supplying flame weapons, and preparing for any eventuality of weapons of mass destruction, the Vietnam-era Chemical Corps also developed " people sniffers ", a type of personnel detector. Major Herb Thornton led chemical soldiers, who became known as tunnel rats and developed techniques for clearing enemy tunnels in Vietnam. [ 28 ] In March 1968, the Dugway sheep incident was one of several key events which increased the growing public furor against the corps. An open air spraying of VX was blamed for killing over 4,000 sheep near the US Dugway Proving Ground . [ 27 ] The Army eventually settled the case and paid the ranchers. Meanwhile, another incident involving Operation CHASE (Cut Holes and Sink 'Em) was exposed, which sought to dump chemical weapons 250 miles (400 km) off of the Florida coast, spurring concerns over the damage to the ocean environment and risk of chemical munitions washing up on shore. [ 27 ] Beginning in the late 1960s, the chemical warfare capabilities of the United States began to decline due to, in part, a decline in public opinion concerning the corps. [ 27 ] The corps continued to be plagued with bad press and mishaps. A 1969 incident , in which 23 soldiers and one Japanese civilian were exposed to sarin on the island of Okinawa , while cleaning sarin-filled bombs, created international concern while revealing the presence of chemical munitions in Southeast Asia. [ 29 ] Also in 1969, President Richard Nixon reaffirmed a no first-use policy on chemical weapons as well as renouncing the use of biological weapons (BW). [ 29 ] When the U.S. BW program ended in 1969, it had developed seven standardized biological weapons in the form of agents that cause anthrax , tularemia , brucellosis , Q-fever , VEE , and botulism . [ 22 ] In addition, Staphylococcal Enterotoxin B was produced as an incapacitating agent. [ 22 ] During the summer of 1972, Nixon nominated General Creighton Abrams for the post of Army Chief of Staff . Upon assuming that position, Abrams and others began to address the reformation of the Army in the wake of Vietnam. [ 27 ] Abrams investigated the possibility of merging Chemical Corps into other Army branches. 
An ad hoc committee, designed to study possibilities, recommended that the Chemical Corps' smoke and flame mission be integrated into the Engineer Corps and the chemical operations be integrated into the Ordnance Corps . The groups recommendations were accepted in December 1972 and the United States Army Chemical Corps was officially disbanded, but not formally disestablished, by the Army on 11 January 1973. [ 27 ] To formally disestablish the corps, the U.S. Congress had to approve the move, because it had officially established the Chemical Corps in 1946. Congress chose to table action on the fate of the Chemical Corps, leaving it in limbo for several years. [ 27 ] Recruitment and career advancement was halted and the Chemical School at Fort McClellan was shut down and moved to Aberdeen Proving Grounds . [ 30 ] By the mid–1970s the chemical warfare and defense capability of the United States had degraded and by 1978 the Chairman of the Joint Chiefs of Staff characterized U.S. ability to conduct operations in a chemical environment as "not prepared." [ 31 ] Secretary of the Army Martin R. Hoffmann rescinded the 1972 recommendations, and in 1976 Army Chief of Staff General Bernard W. Rogers ordered the resumption of Chemical Corps officer commissioning. However, the U.S. Army Chemical School at Fort McClellan, Anniston, Alabama did not reopen until 1980. [ 27 ] By 1982 the Chemical Corps was running smoothly once again. [ 30 ] In an effort to hasten chemical defense capabilities the corps restructured its doctrine, modernized its equipment, and altered its force structure . This shift led to every unit in the army having chemical specialists in-house by the mid-1980s. Between 1979 and 1989 the Army established 28 active duty chemical defense companies. [ 30 ] After Iraq invaded Kuwait in 1990 and much of the world responded by amassing military assets in the region, the United States Army faced the possibility of experiencing chemical or biological (CB) attack. [ 32 ] The possibility of CB attack forced the army to respond with NBC defense crash courses in theater. [ 32 ] Troops deployed to the Gulf with protective masks at the ready, protective clothing was made available to those troops whose vicinity to the enemy or mission required it. [ 33 ] Large scale drills were conducted in the desert to better acclimatize troops to wearing the bulky protective clothing (called MOPP gear) in hot weather conditions. [ 33 ] Though Saddam Hussein had renounced the use of chemical weapons in 1989, many did not believe he would really honor that during a conflict with the United States and the broader coalition forces. [ 33 ] As American troops headed to the desert, analysts speculated about their vulnerability to CB attack. Although the location of Hussein's chemical munitions was unknown, their existence was never doubted. [ 33 ] Gulf War I was fought without the Iraqi Army unleashing chemical or biological munitions; [ 33 ] Eric R. Taylor, of the CATO Institute , maintained that the effective, U.S. threat of nuclear retaliation halted Hussein from employing his chemical weapons. [ 32 ] The locations of many of Iraq's chemical stockpiles were never uncovered and there is widespread speculation that U.S. troops were exposed to chemical munitions while destroying weapons caches, [ 33 ] particularly near the Khamisiyah storage site. [ 34 ] After the war, analysis suggested the chemical defense capabilities of U.S. forces were woefully inadequate during and after the conflict. 
[ 32 ] In addition, some experts, such as Jonathan B. Tucker , suggest that the Iraqis did indeed employ chemical weapons during the war. [ 35 ] As a result of the 1995 sarin gas attack on a Tokyo subway and the growing concern about a terrorist chemical attack, the U.S. Congress passed laws to implement a program to train civilian, law enforcement, and fire agencies on responding to incidents involving chemical agents. Further, United States Army Reserve chemical units began fielding equipment and training Soldiers to perform mass casualty decontamination operations. [ 36 ] A 1996 United States Government Accountability Office report concluded that U.S. troops remained highly vulnerable to attack from both chemical and biological agents. The report blamed the U.S. Department of Defense for failure to address shortcomings identified five years earlier during combat in the Persian Gulf War. These shortcomings included inadequate training, a lack of decontamination kits and other equipment, and vaccine shortages. [ 37 ] From 1952 until 1999 the Chemical Corps School was located at Fort McClellan . Since its closure under Base Realignment and Closure in 1999, the Army's Chemical Corps and the United States Army Chemical, Biological, Radiological and Nuclear (CBRN) School have been located at Fort Leonard Wood , Missouri. There are approximately 22,000 members of the Chemical Corps in the U.S. Army, spread among the Active component, the Army Reserve, and the Army National Guard. [ citation needed ] The school trains officers and enlisted personnel in CBRN warfare and defense, with a mission "To protect the force and allow the Army to fight and win against a CBRN threat. Develop doctrine, equipment and training for CBRN defense which serve as a deterrent to any adversary possessing weapons of mass destruction . Provide the Army with the combat multipliers of smoke, obscurant, and flame capabilities." [ citation needed ] The Chemical Corps, like all branches of the U.S. Army, uses specific insignia to indicate a soldier's affiliation with the corps. The Chemical Corps branch insignia consists of a cobalt-blue enamel benzene ring superimposed over two crossed gold retorts . The branch insignia, which was adopted in 1918 by the fledgling Chemical Service, measures 0.5 inches in height by 1.81 inches in width. A design of crossed shells with a dragon head was also commonly used in France for the chemical service. The Chemical Warfare Service approved the insignia in 1921, and in 1924 the ring adopted the cobalt-blue enamel. When the Chemical Warfare Service changed designation to the Chemical Corps in 1946, the symbol was retained. [ 38 ] The regimental motto, Elementis Regamus Proelium , translates to "We rule the battle by means of the elements." The Chemical Corps regimental insignia was approved on 2 May 1986. The insignia consists of a 1.2-inch shield of gold and blue emblazoned with a dragon and a tree. The shield is enclosed on three sides by a blue ribbon with Elementis regamus proelium written around it in gold lettering. The phrase translates to "We rule the battle through the elements." The regimental insignia incorporates specific symbolism in its design. The colors, gold and blue, are the colors of the Chemical Corps, while the tree's trunk is battle-scarred, a reference to the historical beginnings of U.S. chemical warfare: battered tree trunks were often the only reference points that chemical mortar teams had across no man's land during World War I.
[ 39 ] The tree design was taken from the coat of arms of the First Gas Regiment . The mythical chlorine-breathing green dragon symbolizes the first use of chemical weapons in warfare (chlorine). Individual Chemical Corps soldiers are often referred to as "Dragon Soldiers." [ 40 ] The Chemical Corps Regimental Association operates the "Chemical Corps Hall of Fame". The list includes soldiers from many different eras of the Chemical Corps' history, including Amos Fries, Earl J. Atkisson, and William L. Sibert. [ 41 ] The organization conducts annual inductions, and the honor is considered the highest offered by the corps. [ 42 ] Baseball Hall of Fame player, manager, and executive Branch Rickey served in the 1st Gas Regiment during World War I. Rickey spent over four months as a member of the CWS. [ 43 ] Other Hall of Famers also served in the CWS during World War I, among them Ty Cobb and Christy Mathewson ; Mathewson suffered lung damage after inhaling gas in a training accident, which contributed to his later death from tuberculosis. [ 44 ] [ 45 ] Robert S. Mulliken served in the CWS making poison gas during World War I, and he later earned the Nobel Prize in 1966 for his work on the electronic structure of molecules.
https://en.wikipedia.org/wiki/U.S._Chemical_Warfare_Service
The U.S. National Vegetation Classification ( NVC or USNVC ) is a scheme for classifying the natural and cultural vegetation communities of the United States . The purpose of this standardized vegetation classification system is to facilitate communication between land managers, scientists, and the public when managing, researching, and protecting plant communities. The non-profit group NatureServe maintains the NVC for the U.S. government.
https://en.wikipedia.org/wiki/U.S._National_Vegetation_Classification
U12 Intron Database ( U12DB ) is a biological database containing the sequences of eukaryotic introns that are spliced out by a specialised minor spliceosome , which contains U12 minor spliceosomal RNA in place of U2 spliceosomal RNA. [ 1 ] These U12-dependent introns are under-represented in genome annotations because they often have non-canonical splice sites. Release 1 of the database contains 6,397 known and predicted U12-dependent introns across 20 species.
https://en.wikipedia.org/wiki/U12_intron_database
Uranium carbide , a carbide of uranium , is a hard refractory ceramic material. It comes in several stoichiometries ( x differs in UC x ), such as uranium methanide ( UC , CAS number 12070-09-6), uranium sesquicarbide ( U 2 C 3 , CAS number 12076-62-9), [ 2 ] and uranium acetylide ( UC 2 , CAS number 12071-33-9). [ 3 ] Like uranium dioxide and some other uranium compounds, uranium carbide can be used as a nuclear fuel for nuclear reactors , usually in the form of pellets or tablets. Uranium carbide fuel was used in late designs of nuclear thermal rockets . Uranium carbide pellets are used as fuel kernels for the US version of pebble bed reactors ; the German version uses uranium dioxide instead. As nuclear fuel, uranium carbide can be used either on its own, or mixed with plutonium carbide (PuC and Pu 2 C 3 ). The mixture is also labeled as uranium-plutonium carbide ( (U,Pu)C ). Uranium carbide is also a popular target material for particle accelerators . [ citation needed ] Ammonia synthesis from nitrogen and hydrogen is sometimes accomplished in the presence of uranium carbide acting as a catalyst. [ 4 ]
https://en.wikipedia.org/wiki/U2C3
Uranium nitrides is any of a family of several ceramic materials: uranium mononitride (UN), uranium sesquinitride (U 2 N 3 ) and uranium dinitride (UN 2 ). The word nitride refers to the −3 oxidation state of the nitrogen bound to the uranium . Uranium nitride has been considered as a potential nuclear fuel and will be used as such in the BREST-300 nuclear reactor currently under construction in Russia. It is said to be safer, stronger, denser, more thermally conductive and having a higher temperature tolerance. Challenges to implementation of the fuel include a complex conversion route from enriched UF 6 , the need to prevent oxidation during manufacturing and the need to define and license a final disposal route. The necessity to use expensive, highly isotopically enriched 15 N is a significant factor to overcome. This is necessary due to the (relatively) high neutron capture cross-section of the far-more-common 14 N, which affects the neutron economy of a reactor. [ 2 ] The common technique for generating UN is carbothermic reduction of uranium oxide (UO 2 ) in a 2 step method illustrated below. [ 3 ] [ 4 ] Sol-gel methods and arc melting of pure uranium under nitrogen atmosphere can also be used. [ 5 ] Another common technique for generating UN 2 is the ammonolysis of uranium tetrafluoride . Uranium tetrafluoride is exposed to ammonia gas under high pressure and temperature, which replaces the fluorine with nitrogen and generates hydrogen fluoride . [ 6 ] Hydrogen fluoride is a colourless gas at this temperature and mixes with the ammonia gas. An additional method of UN synthesis employs fabrication directly from metallic uranium. By exposing metallic uranium to hydrogen gas at temperatures in excess of 280 °C, UH 3 can be formed. [ 7 ] Furthermore, since UH 3 has a higher specific volume than the metallic phase, hydridation can be used to physically decompose otherwise solid uranium. Following hydridation, UH 3 can be exposed to a nitrogen atmosphere at temperatures around 500 °C, thereby forming U 2 N 3 . By additional heating to temperatures above 1150 °C, the sesquinitride can then be decomposed to UN. Use of the isotope 15 N (which constitutes around 0.37% of natural nitrogen) is preferable because the predominant isotope, 14 N, has a significant neutron absorption cross section which affects neutron economy and, in particular, it undergoes an (n,p) reaction which produces significant amounts of radioactive 14 C which would need to be carefully contained and sequestered during reprocessing or permanent storage. [ 8 ] Each uranium dinitride complex is considered to have three distinct compounds present simultaneously because of decomposing of uranium dinitride (UN 2 ) into uranium sesquinitride (U 2 N 3 ), and then uranium mononitride (UN). Uranium dinitrides decompose to uranium mononitride by the following sequence of reactions: [ 9 ] Decomposition of UN 2 is the most common method for isolating uranium sesquinitride (U 2 N 3 ). Uranium mononitride is being considered as a potential fuel for generation IV reactors such as the Hyperion Power Module reactor created by Hyperion Power Generation . [ 10 ] It has also been proposed as nuclear fuel in some fast neutron nuclear test reactors. 
UN is considered superior because of its higher fissionable density, thermal conductivity , and melting temperature than the most common nuclear fuel, uranium oxide (UO 2 ), while also demonstrating lower release of fission product gases and swelling, and decreased chemical reactivity with cladding materials. [ 11 ] It also has a superior mechanical, thermal, and radiation stability compared to standard metallic uranium fuel. [ 9 ] [ 12 ] The thermal conductivity is on the order of 4–8 times higher than that of uranium dioxide, the most commonly used nuclear fuel, at typical operating temperatures. Increased thermal conductivity results in a smaller thermal gradient between inner and outer sections of the fuel, [ 8 ] potentially allowing for higher operating temperatures and reducing macroscopic restructuring of the fuel, which limits fuel lifetime. [ 4 ] The uranium dinitride (UN 2 ) compound has a face-centered cubic crystal structure of the calcium fluoride (CaF 2 ) type with a space group of Fm 3 m. [ 13 ] Nitrogen forms triple bonds on each side of uranium forming a linear structure . [ 14 ] [ 15 ] α-(U 2 N 3 ) has a body-centered cubic crystal structure of the (Mn 2 O 3 ) type with a space group of Ia 3 . [ 13 ] UN has a face-centered cubic crystal structure of the NaCl type. [ 14 ] [ 16 ] The metal component of the bond uses the 5 f orbital of the uranium but forms a relatively weak interaction but is important for the crystal structure . The covalent portion of the bonds forms from the overlap between the 6 d orbital and 7 s orbital on the uranium and the 2 p orbitals on the nitrogen. [ 14 ] [ 17 ] N forms a triple bond with uranium creating a linear structure. [ 15 ] Recently, there have been many developments in the synthesis of complexes with terminal uranium nitride (–U≡N) bonds. In addition to radioactive concerns common to all uranium chemistry, production of uranium nitrido complexes has been slowed by harsh reaction conditions and solubility challenges. Nonetheless, syntheses of such complexes have been reported in the past few years, for example the three shown below among others. [ 18 ] [ 19 ] Other U≡N compounds have also been synthesized or observed with various structural features, such as bridging nitride ligands in di-/polynuclear species, and various oxidation states. [ 20 ] [ 21 ]
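The carbothermic route and the decomposition sequence mentioned in this entry can be written as balanced equations. The overall carbothermic reaction (omitting the intermediate of the two-step process) and one plausible balancing of the UN 2 → U 2 N 3 → UN sequence are sketched below; the exact intermediates and the stoichiometry of nitrogen release are assumptions, since they are not given in the text.

```latex
2\,\mathrm{UO_2} + 4\,\mathrm{C} + \mathrm{N_2} \longrightarrow 2\,\mathrm{UN} + 4\,\mathrm{CO}
```
```latex
2\,\mathrm{UN_2} \longrightarrow \mathrm{U_2N_3} + \tfrac{1}{2}\,\mathrm{N_2}, \qquad
2\,\mathrm{U_2N_3} \longrightarrow 4\,\mathrm{UN} + \mathrm{N_2}
```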
https://en.wikipedia.org/wiki/U2N3
Amorphous uranium(VI) oxide ( am -U 2 O 7 ) is an orange diuranyl compound, most commonly obtained from the thermal decomposition of uranyl peroxide tetrahydrate at temperatures between 150 and 500 °C (300 and 930 °F). It exists at room temperature as a powder. Am -U 2 O 7 does not comprise a regular, long-range atomic structure, as demonstrated by its characteristic diffuse scattering pattern obtained by X-ray diffraction . As a result, the molecular structure of this material is little understood, although experimental and computational attempts to elucidate a local atomic environment have yielded some success. [ 2 ] [ 3 ] Am -U 2 O 7 is produced by the thermal decomposition of uranyl peroxide tetrahydrate at temperatures between 150 and 500 °C (300 and 930 °F), in either an air or nitrogen atmosphere. The resultant powder is tan orange in color. Further heating results in the formation of alpha uranium trioxide (α-UO 3 ). Because of the amorphous nature of a m -U 2 O 7 , the long-range atomic structure of this compound has not been determined. However, recent computational investigations, chiefly accomplished using density functional theory (DFT), have helped to predict a local structure. [ 2 ] [ 4 ] Resembling a regular uranate compound, two uranyl ( UO 2+ 2 ) groups are bridged by a μ 2 -O atom, where both uranium atoms are bonded to an O-O peroxo unit. In this case, a tetrameric ring would be the most stable conformation of the compound. The presence of a peroxide bond in species obtained in this temperature range is unusual; uranyl peroxide has previously been considered to be the only peroxide bearing uranium compound. [ 5 ] Developments on this structure propose a two-site metastudtite and UO 3 -like bonding environment, including the bond types already mentioned. [ 4 ] Few other suggestions for the local atomic structure of a m -U 2 O 7 have been made. However, a crystalline form of U 2 O 7 , calculated as a two-site 6 and 8-coordinate structure, has been reported. [ 3 ] In the same study, it was again found that the U 2 O 7 species contained peroxide bonding. Am -U 2 O 7 is known to undergo hydrolysis in the presence of water, to produce a crystalline metaschoepite powder. In addition to a change in crystallinity, this reaction involves a change in color from orange to bright yellow.
https://en.wikipedia.org/wiki/U2O7
U2opia Mobile is a Singaporean mobile technology company. The company’s main product Fonetwish enables customers to receive real time updates from social networking sites such as Facebook , [ 2 ] Twitter and Google on any handset without access to the internet. [ 3 ] It also develops several social applications. U2opia Mobile was founded by Sumesh Menon [ 4 ] and Ankit Nautiyal in 2010. By 2014, the company had a customer base of 15 million in over 33 countries. [ 5 ] In October 2017, U2opia Mobile launched Reycreo, a platform geared to help game discovery and adoption in frontier markets. [ 6 ] Fonetwish, the company’s main product, is a mobile application platform that works on the USSD protocol. It enables customers to access their Facebook or Twitter accounts from any location without having a 3G, EDGE or any other internet connection. [ 7 ] Users can access their accounts by navigating through a textual, session-based interface. The service is popular in several countries in Asia and Africa. The service is also available in Central and South American countries like Haiti , El Salvador and Bolivia . The versions of Facebook and Twitter are text only and do not have any photos or videos. [ citation needed ] The company also develops several social applications. [ citation needed ] U2opia Mobile has offices in Dubai , Gurgaon , and San Francisco along with their headquarters in Singapore. It has over 150 employees and was backed by the private equity investment firm Matrix Partners in 2011. It has partnered with telecom companies such as Bharti Airtel , Facebook and Twitter. [ citation needed ]
https://en.wikipedia.org/wiki/U2opia_Mobile
Triuranium octoxide (U 3 O 8 ) [ 4 ] is a compound of uranium . It is present as an olive green to black, odorless solid. It is one of the more popular forms of yellowcake and is shipped between mills and refineries in this form. U 3 O 8 has potential long-term stability in a geologic environment . [ 5 ] In the presence of oxygen (O 2 ), uranium dioxide (UO 2 ) is oxidized to U 3 O 8 , whereas uranium trioxide (UO 3 ) loses oxygen at temperatures above 500 °C and is reduced to U 3 O 8 . [ 6 ] [ 7 ] [ 8 ] The compound can be produced by the calcination of ammonium diuranate or ammonium uranyl carbonate . [ 9 ] Due to its high stability, it can be used for the disposal of depleted uranium . [ 10 ] Its particle density is 8.38 g cm −3 . Triuranium octoxide is converted to uranium hexafluoride for the purpose of uranium enrichment . Triuranium octoxide can be formed by the multi-step oxidation of uranium dioxide by oxygen gas at around 250 °C, with the overall reaction 3 UO 2 + O 2 → U 3 O 8 . [ 7 ] It can also be formed from the reduction of compounds like ammonium uranyl carbonate, ammonium diuranate, and uranium trioxide through calcination at high temperatures (~600 °C for (NH 4 ) 2 U 2 O 7 , 700 °C for UO 3 ). [ 8 ] [ 9 ] [ 11 ] [ 12 ] Calcination of ammonium uranyl carbonate and ammonium diuranate is the main method for the production of U 3 O 8 . [ 9 ] Uranium trioxide can also be reduced by other methods, such as reaction with reducing agents like hydrogen gas at around 500–700 °C. [ 11 ] [ 12 ] This process can produce other uranium oxides, such as U 4 O 9 and UO 2 . [ 12 ] While many studies have shown contradicting results on the oxidation state of uranium in U 3 O 8 , a study on its absorption spectrum determined that each formula unit of U 3 O 8 contains 2 U V atoms and 1 U VI atom, without any atoms of U IV (consistent with charge balance: 2 × (+5) + (+6) = +16, offsetting the −16 charge of the eight O 2− ions). The study used the compounds uranium dioxide and uranyl acetylacetonate as references for the spectra of U IV and U VI , respectively. [ 13 ] The analysis that U 3 O 8 contains 2 U V and 1 U VI is supported by other studies. [ 14 ] Triuranium octoxide can be reduced to uranium dioxide with hydrogen: U 3 O 8 + 2 H 2 → 3 UO 2 + 2 H 2 O. [ 11 ] [ 12 ] Triuranium octoxide also loses oxygen to form a non-stoichiometric compound (U 3 O 8- z ) at high temperatures (>800 °C), but recovers it when returned to normal temperatures. [ 15 ] Triuranium octoxide is slowly oxidized to uranium trioxide under high pressures of oxygen: 2 U 3 O 8 + O 2 → 6 UO 3 . [ 15 ] Triuranium octoxide is attacked by hydrofluoric acid at 250 °C to form uranyl fluoride . [ 16 ] Triuranium octoxide can also be attacked by a solution of hydrochloric acid and hydrogen peroxide to form uranyl chloride . [ 17 ] Triuranium octoxide has multiple polymorphs , including α -U 3 O 8 , β -U 3 O 8 , γ -U 3 O 8 , and a non-stoichiometric high-pressure phase with the fluorite structure . [ 6 ] [ 15 ] [ 18 ] α -U 3 O 8 is the most commonly encountered polymorph of triuranium octoxide, being the most stable under standard conditions. At room temperature, it has an orthorhombic pseudo- hexagonal structure , with lattice constants a = 6.72 Å, b = 11.97 Å, c = 4.15 Å and space group Amm2 . At higher temperatures (~350 °C), it transitions into a true hexagonal structure, with space group P 6 2m . [ 6 ] [ 15 ] [ 18 ] α -U 3 O 8 is made up of layers of uranium and oxygen atoms. Each layer has the same U–O structure, and oxygen bridges connect corresponding uranium atoms in different layers. Within each layer, the U sites are surrounded by five oxygen atoms.
This means that each U atom is bonded to seven oxygen atoms total, giving U a molecular geometry of pentagonal bipyramidal . [ 6 ] β -U 3 O 8 can be formed by heating α -U 3 O 8 to 1350 °C and slowly cooling. The structure of β -U 3 O 8 is similar to that of α -U 3 O 8 , having a similar sheet-like arrangement and similar lattice constants ( a =7.07Å, b =11.45Å, c =8.30Å [ c/2 =4.15Å]). It also has an orthorhombic cell, with space group Cmcm . [ 6 ] Like α -U 3 O 8 , β -U 3 O 8 has a layered structure containing uranium and oxygen atoms, but unlike α -U 3 O 8 , adjacent layers have a different structure- instead, every other layer has the same arrangement of U and O atoms. It also features oxygen bridges between U and O atoms in adjacent layers, though instead of all U atoms having a geometry of pentagonal bipyramidal, 2 U atoms per formula unit have distinct pentagonal bipyramidal molecular geometries, and the other U atom has a molecular geometry of tetragonal bipyramidal . [ 6 ] γ -U 3 O 8 is formed at around 200-300 °C and at 16,000 atmospheres of pressure. [ 15 ] Very little information on it is available. A high-pressure phase of U 3 O 8 with a hyperstoichiometric fluorite-type structure is formed at pressures greater than 8.1 GPa. During the phase transition, the volume of the solid decreases by more than 20%. The high-pressure phase is stable under ambient conditions, in which it is 28% denser than α -U 3 O 8 . [ 18 ] This phase has a cubic structure with a high amount of defects . Its formula is UO 2+ x , where x ≈ 0.8. [ 18 ] Triuranium octoxide can be found in small quantities (~0.01-0.05%) in the mineral pitchblende . [ 19 ] Triuranium octoxide can be used to produce uranium hexafluoride , which is used for the enrichment of uranium in the nuclear fuel cycle . In the so-called 'dry' process, common in the United States, triuranium octoxide is purified through calcination, then crushed. Another process, called the 'wet' process, common outside the U.S., involves dissolving U 3 O 8 in nitric acid to form uranyl nitrate , followed by calcining to uranium trioxide in a fluidized bed reactor . [ 20 ] [ 21 ] No matter which method is used, the uranium oxide is then reduced using hydrogen gas to form uranium dioxide, which is then reacted with hydrofluoric acid to form uranium tetrafluoride and then with fluorine gas to produce uranium hexafluoride. This can then be separated into uranium-235 and uranium-238 hexafluoride. [ 20 ] [ 21 ] Triuranium octoxide is a certified reference material and can be used to determine the impurity of a sample of uranium. [ 2 ] [ 22 ] Triuranium octoxide is a carcinogen and is toxic by inhalation and ingestion with repeated exposure. If consumed, it targets the kidney, liver, lungs, and brain, and causes irritation upon contact with the skin and eyes. It should only be handled with adequate ventilation. In addition, it is also radioactive , being an alpha emitter. [ 2 ]
https://en.wikipedia.org/wiki/U3O8
The UAProf (User Agent Profile) specification is concerned with capturing capability and preference information for wireless devices. This information can be used by content providers to produce content in an appropriate format for the specific device. UAProf is related to the Composite Capability/Preference Profiles Specification created by the World Wide Web Consortium . UAProf is based on RDF . UAProf files typically have the file extensions rdf or xml , and are usually served with the MIME type application/xml. They are an XML -based file format. The RDF format means that the document schema is extensible. A UAProf file describes the capabilities of a mobile handset, including Vendor, Model, Screensize, Multimedia Capabilities, Character Set support, and more. Recent UAProfiles have also begun to include data conforming to MMS, PSS5 and PSS6 schemas, which include much more detailed data about video, multimedia, streaming and MMS capabilities. A mobile handset sends a header within an HTTP request, containing the URL to its UAProf. The HTTP header is usually X-WAP-Profile, but it may also appear as 19-Profile, WAP-Profile, or a number of other similar headers. UAProf production for a device is voluntary: for GSM devices, the UAProf is normally produced by the vendor of the device (e.g. Nokia , Samsung , LG ), whereas for CDMA / BREW devices it is more common for the UAProf to be produced by the telecommunications company. A content delivery system (such as a WAP site) can use UAProf to adapt content for display, or to decide what items to offer for download. However, there are drawbacks to relying solely on UAProf (see also [ 1 ] ). UAProf device profiles are one of the sources of device capability information for WURFL , which maps the UAProfile schema to its own with many other items and boolean fields relating to device markup, multimedia capabilities and more. This XML data is keyed on the User-Agent header in a web request. Another approach to the problem is to combine real-time derived information, component analysis, manual data and UAProfiles to deal with the actual device itself rather than the idealised representation of "offline" approaches such as UAProf or WURFL. This approach allows detection of devices modified by the user, Windows Mobile devices, legacy devices, spiders and bots , and is evidenced in at least one commercially available system. The W3C MWI (Mobile Web Initiative) and the associated DDWG (Device Description Working Group), recognising the difficulty in collecting and keeping track of UAProfs and device handset information, and the practical shortcomings in the implementation of UAProf across the industry, have outlined specifications for a Device Description Repository , in the expectation that an ecosystem of such repositories will eventually eliminate the need for local device repositories in favour of a web service ecosystem.
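As a rough illustration of the flow just described, the sketch below reads the X-WAP-Profile (or similar) profile header from an incoming request, fetches the referenced RDF document, and pulls out a single capability such as ScreenSize. The header names follow this article; the example URL, the lookup of a property by its local tag name, and the helper function names are illustrative assumptions rather than a normative UAProf or CC/PP implementation.

```python
# Illustrative sketch only: resolving a handset's UAProf document from the
# profile header and reading one capability. Not a complete UAProf/CC-PP parser.
import requests
import xml.etree.ElementTree as ET

# Header names mentioned above; real traffic may use other variants.
PROFILE_HEADERS = ("X-WAP-Profile", "WAP-Profile", "19-Profile")

def uaprof_url(request_headers):
    """Return the first UAProf URL advertised in the request headers, if any."""
    for name in PROFILE_HEADERS:
        value = request_headers.get(name)
        if value:
            return value.strip().strip('"')
    return None

def screen_size(profile_url):
    """Fetch the RDF profile and return a ScreenSize value, if present."""
    rdf = requests.get(profile_url, timeout=10).text
    root = ET.fromstring(rdf)
    # UAProf properties are namespaced; match on the local tag name only.
    for elem in root.iter():
        if elem.tag.rsplit("}", 1)[-1] == "ScreenSize":
            return (elem.text or "").strip()
    return None

# Hypothetical request header carrying a made-up profile URL:
headers = {"X-WAP-Profile": '"http://example.com/profiles/phone.rdf"'}
print(uaprof_url(headers))
```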
https://en.wikipedia.org/wiki/UAProf
uBiome, Inc. was a biotechnology company based in San Francisco that developed technology to sequence the human microbiome . Founded in 2012, the company filed for bankruptcy in 2019 following an FBI raid in an investigation over possible insurance fraud [ 1 ] [ 2 ] involving the US health insurance program Medicare . [ 3 ] In 2021, the Securities and Exchange Commission charged two of the cofounders (Jessica Richman and Zachary Apte) with defrauding investors. [ 4 ] [ 5 ] The couple were also charged with federal crimes including conspiracy to commit fraud and money laundering . [ 6 ] Since 2021, the FBI has considered the founders to be fugitives. [ 7 ] [ 8 ] The company was founded by Jessica Richman, Zachary Apte, and Will Ludington who were scientists in the California Institute for Quantitative Biosciences . [ 9 ] In November 2012, uBiome generated $350,000 through a crowdfunding campaign. [ 10 ] The founders received mentoring and funding from Y Combinator and further funding from Andreessen Horowitz and 8VC. [ 11 ] [ 12 ] [ 13 ] Jessica Richman was also found to have lied about her age to get on tech lists such as Forbes 30 under 30. She was in her forties at the time. In April 2019, FBI agents raided the uBiome office in an investigation over possible insurance fraud [ 1 ] [ 2 ] involving the US health insurance program Medicare . [ 3 ] According to company insiders, the company often repeatedly billed patients without their consent and pressured doctors to approve tests. [ 14 ] Cofounders Apte and Richman were put on administrative leave pending an investigation by the company's board. [ 15 ] The company filed for Chapter 11 bankruptcy (reorganization of debt) in September 2019, amidst the investigation, [ 16 ] and less than a month later it filed for a Chapter 7 bankruptcy ( liquidation ) and shut down. [ 3 ] In 2021, the Securities and Exchange Commission charged two of the cofounders (Richman and Apte) with defrauding investors. [ 4 ] [ 5 ] The couple were also charged with federal crimes including conspiracy to commit fraud and money laundering . [ 6 ] Richman and Apte married in 2019 and relocated to Germany in June 2020. Since 2021, the FBI has considered them to be fugitives. [ 7 ] Customers purchase kits to sample one or more parts of their body, including the gut, genitals, mouth, nose, or skin. After swabbing, a participant takes a survey which is used to make correlations with microbiome data. The participant sends the kit to the company in the mail and receives data in a few weeks; they can compare their data with that of uBiome's data set. [ 17 ] [ 18 ] In 2015 uBiome received Clinical Laboratory Improvement Amendments (CLIA) certification from the State of California. [ 19 ] In 2016, uBiome received accreditation from the College of American Pathologists . [ 20 ] As of 2015 [update] , the company first amplified a portion of the bacterial gene that encodes 16S ribosomal RNA using PCR , then sequenced the amplified 16S ribosomal RNA gene, in order to categorize the bacteria at the genus level. [ 21 ] The company had proprietary machine learning algorithms that analyzed the sequence data and compared it with the company's proprietary database of microbiomes, built from the samples that partners [ clarification needed ] and single [ clarification needed ] customers sent to them, and web-based software that allowed individuals to view their microbiome and make certain comparisons. [ 22 ] [ 23 ] A 2014 report in Xconomy said the company outsourced the sequencing. 
[ 23 ] The sequencing was done on the Illumina NextSeq500 sequencer. [ 24 ] [ 25 ] In October 2015 the company introduced an app on iOS using ResearchKit that allowed customers to view their results on mobile devices. [ 26 ] uBiome has been compared with Theranos and 23andMe , each of which was also a biotechnology company influenced by Silicon Valley . [ 27 ] [ 28 ] Amy Dockser Marcus noted in a 2014 essay in The Wall Street Journal that when uBiome raised its initial round of crowdfunding in early 2013, many questions were raised by bioethicists about the company's citizen science business model — namely whether it had actually obtained informed consent from its customers, and whether direct to consumer genetic testing initiatives could be ethically conducted at all, and its lack of institutional review board (IRB) approval. [ 29 ] [ 30 ] [ 31 ] The Wall Street Journal essay also noted that questions were raised about the quality of data obtained in citizen science initiatives, with regard to self-selection and other issues. [ 29 ] [ 32 ] The company obtained an institutional review board approval in July 2013. [ 29 ] [ 33 ] [ 34 ] In 2014, people experienced in biotechnology entrepreneurship also raised questions about the ethics of crowdfunding a biotech company, as the risks of such ventures are high even for people with scientific and business sophistication. [ 29 ] [ 35 ] As of 2015, uBiome offered a $1 million grant program to researchers and citizen scientists for microbiome sampling and related analysis. [ 36 ] One winner of the first round of such grants was the Centers for Disease Control and Prevention . [ 27 ] In March 2018, uBiome made Fast Company 's list for The World's Most Innovative Companies in Data Science, acknowledging uBiome's work collecting data to develop tests for HPV and STIs. [ 37 ]
https://en.wikipedia.org/wiki/UBiome
The UCL Department of Science and Technology Studies (STS) is an academic department in University College London , London , England. It is part of UCL's Faculty of Mathematics and Physical Sciences. The department offers academic training at both undergraduate and graduate (MSc and MPhil/PhD) levels. The department received its current name in 1995. It had been the "Department of History and Philosophy of Science" from 1938 to 1995, and the "Department of History and Method of Science" from 1921 to 1938. [ 2 ] University College London was the first UK university to offer single honours undergraduate degrees in this interdisciplinary subject, launching its BSc in history and philosophy of science in 1993. Two related BSc degrees followed shortly thereafter. At UCL, science and technology studies (abbreviated "STS") includes three specialist research clusters: "history of science," "philosophy of science," and "science, culture, and democracy". In 2022 STS accepted its first cohort for an MSc in Science Communication. [ 3 ] The department offices are located on UCL's campus in Gordon Square , Bloomsbury , London.
https://en.wikipedia.org/wiki/UCL_Department_of_Science_and_Technology_Studies
The UCSC Genome Browser is an online and downloadable genome browser hosted by the University of California, Santa Cruz (UCSC). [ 2 ] [ 3 ] [ 4 ] It is an interactive website offering access to genome sequence data from a variety of vertebrate and invertebrate species and major model organisms , integrated with a large collection of aligned annotations. The Browser is a graphical viewer optimized to support fast interactive performance and is an open-source, web-based tool suite built on top of a MySQL database for rapid visualization, examination, and querying of the data at many levels. The Genome Browser Database, browsing tools, downloadable data files, and documentation can all be found on the UCSC Genome Bioinformatics website. The UCSC Genome Browser was developed in 2000 by graduate student Jim Kent and Professor David Haussler at the University of California, Santa Cruz (UCSC), to provide public access to the draft human genome sequence produced by the Human Genome Project . [ 5 ] On July 7, 2000, UCSC released the first working draft of the human genome online, accompanied by an initial version of the Genome Browser. [ 5 ] This release enabled researchers worldwide to access and explore the genome data interactively. The project received early funding from the Howard Hughes Medical Institute (HHMI) and the National Human Genome Research Institute (NHGRI). [ 6 ] In 2002, the team published a detailed description of the Genome Browser in Genome Research , outlining its MySQL -based database and web interface. [ 7 ] The browser featured various aligned annotation tracks, including gene predictions, mRNA / EST alignments, and SNP markers, all presented in a scrollable view. [ 7 ] Users could also add custom tracks to visualize their data alongside official annotations. In that same year, the browser expanded to include the mouse genome, facilitating comparative genomics studies. Tools like BLAT (BLAST-like alignment tool) and LiftOver were introduced to enhance sequence alignment and coordinate conversion between different genome assemblies. [ 8 ] Between 2004 and 2010, the UCSC Genome Browser incorporated numerous additional genomes, including those of rat, chicken, dog, and chimpanzee, among others. [ 9 ] The development of chain and net alignment algorithms allowed for whole-genome alignments between species, and the Conservation track visualized evolutionary conserved elements. [ 10 ] To accommodate the influx of data from new genomic technologies, UCSC introduced Genome Graphs in 2007–2008, enabling users to plot genome-wide datasets, such as association study p-values , across entire genomes. [ 11 ] The browser also implemented the BigBed and BigWig binary data formats in 2010, facilitating efficient visualization of large-scale sequencing datasets. [ 12 ] In 2011, UCSC launched Track Data Hubs, allowing external researchers to integrate their annotation tracks into the Genome Browser via remote URLs. [ 13 ] UCSC played a pivotal role in the ENCODE (Encyclopedia of DNA Elements) project since its launch in 2003. This new feature significantly enhanced how researchers could interact with and visualize large-scale genomic datasets. The browser hosted a vast array of functional genomics data generated by ENCODE, including ChIP-seq , RNA-seq , and DNase hypersensitivity assays. [ 14 ] The browser also integrated data from the 1000 Genomes Project , providing comprehensive access to human genetic variation data. 
[ 15 ] In 2013, UCSC partnered with the GENCODE project to adopt its high-quality gene annotations. In 2015, the GENCODE gene set (GRCh38/hg38 assembly) replaced UCSC's in-house track as the default gene set of the human genome browser. [ 16 ] Beginning in 2016, the UCSC Genome Browser expanded its capabilities by integrating clinical and variant datasets, including those from ClinVar and various cancer genomics resources. [ 17 ] In 2017, UCSC launched the UCSC Cell Browser, a companion platform designed to handle single-cell sequencing datasets and spatial transcriptomics . [ 18 ] The browser has also integrated data from the Genotype-Tissue Expression (GTEx) project, providing visualization resources for gene expression across various human tissues. [ 19 ] The browser now hosts over 180 genome assemblies from more than 100 species, including the fully telomere-to-telomere human genome assembly (T2T-CHM13) released by the T2T Consortium in 2022. [ 20 ] Funding for the UCSC Genome Browser has transitioned to rely exclusively on NIH grants, with continued support from the NHGRI. In 2022, the browser was recognized as one of the inaugural Global Core Biodata Resources , highlighting its critical role in life science research and ensuring prioritized long-term funding. [ 5 ] As of 2025, the UCSC Genome Browser continues to serve as an essential, freely accessible tool for researchers worldwide, accommodating daily usage by tens of thousands of users and regularly updating with new genomic data and functionality. [ 5 ] In the years since its inception, the UCSC Browser has expanded to accommodate genome sequences of all vertebrate species and selected invertebrates for which high-coverage genomic sequence is available, [ 21 ] now including 108 species . High coverage is necessary to allow overlap to guide the construction of larger contiguous regions. Genomic sequences with less coverage are included in multiple-alignment tracks on some browsers, but the fragmented nature of these assemblies makes them unsuitable for building full-featured browsers (more on multiple-alignment tracks below). The species hosted with full-featured genome browsers are shown in the table. [ 22 ] The addition of new species depends on new genome releases from sequencing centers, which explains why there was a two-year gap between the last two genome additions. Apart from these 108 species and their assemblies, the UCSC Genome Browser also offers Assembly Hubs , web-accessible directories of genomic data that can be viewed on the browser and include assemblies that are not hosted natively on it. There, users can load and annotate unique assemblies for which UCSC does not provide an annotation database. A full list of species and their assemblies can be viewed in the GenArk Portal , including 2,589 assemblies hosted by both the UCSC Genome Browser database and Assembly Hubs. An example can be seen in the Vertebrate Genomes Project assembly hub. The UCSC Genome Browser is a useful tool for analyzing genomic sequences and data, but it has limitations, one of which is its legacy website interface, which many users find dated and harder to navigate than more modern web applications.
Another limitation of the UCSC Genome Browser is that it is primarily a visualization tool; to carry out further analysis of the displayed sequences, such as multiple sequence alignment, users must turn to external tools such as MAFFT, T-Coffee, or MUSCLE. The large amount of data about biological systems that is accumulating in the literature makes it necessary to collect and digest information using the tools of bioinformatics . The UCSC Genome Browser presents a diverse collection of annotation datasets (known as "tracks" and presented graphically), including mRNA alignments, mappings of DNA repeat elements, gene predictions, gene-expression data, disease-association data (representing the relationships of genes to diseases), and mappings of commercially available gene chips (e.g., Illumina and Agilent ). The basic paradigm of display is to show the genome sequence in the horizontal dimension, and to show graphical representations of the locations of the mRNAs, gene predictions, etc. Blocks of color along the coordinate axis show the locations of the alignments of the various data types. The ability to show this large variety of data types on a single coordinate axis makes the browser a handy tool for the vertical integration of the data. [ 23 ] To find a specific gene or genomic region, the user may type in the gene name, a DNA sequence, an accession number for an RNA, the name of a genomic cytological band (e.g., 20p13 for band 13 on the short arm of chr20) or a chromosomal position (chr17:38,450,000-38,531,000 for the region around the gene BRCA1 ). Presenting the data graphically allows the browser to provide links to detailed information about any of the annotations. The gene details page of the UCSC Genes track provides a large number of links to more specific information about the gene at many other data resources, such as Online Mendelian Inheritance in Man ( OMIM ) and SwissProt . Designed for the presentation of complex and voluminous data, the UCSC Browser is optimized for speed. By pre-aligning millions of RNA sequences from GenBank to each of the 244 genome assemblies (many of the 108 species have more than one assembly), the browser allows instant access to the alignments of any RNA to any of the hosted species. The juxtaposition of the many types of data allows researchers to display exactly the combination of data that will answer specific questions. A PDF/PostScript output function allows export of a camera-ready image for publication in academic journals. One unique and useful feature that distinguishes the UCSC Browser from other genome browsers is the continuously variable nature of the display. Sequences of any size can be displayed, from a single DNA base up to an entire chromosome (human chr1 = 245 million bases, or 245 Mb) with full annotation tracks. Researchers can display a single gene, a single exon, or an entire chromosome band, showing dozens or hundreds of genes and any combination of the many annotations. A convenient drag-and-zoom feature allows the user to choose any region in the genome image and expand it to occupy the full screen. Researchers may also use the browser to display their own data via the Custom Tracks tool. This feature allows users to upload a file of their own data and view the data in the context of the reference genome assembly.
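As a concrete illustration of the Custom Tracks feature, the following minimal sketch writes a small custom track file: a "track" line carrying display attributes followed by plain BED records. The track name, description, color, and intervals are invented for the example (the coordinates echo the BRCA1 region cited above); the resulting file would then be pasted or uploaded through the browser's Custom Tracks page.

```python
# Minimal sketch (illustrative only): build a Custom Track file consisting of a
# "track" line with display attributes followed by simple BED records.
records = [
    ("chr17", 38450000, 38460000, "regionA"),
    ("chr17", 38500000, 38531000, "regionB"),
]

with open("my_track.bed", "w") as out:
    # The track line controls how the browser labels and draws the track.
    out.write('track name="myRegions" description="Example regions" '
              'visibility=2 color=0,0,255\n')
    # BED records are tab-separated: chrom, chromStart, chromEnd, name.
    for chrom, start, end, label in records:
        out.write(f"{chrom}\t{start}\t{end}\t{label}\n")
```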
Users may also use the data hosted by UCSC, creating subsets of the data of their choosing with the Table Browser tool (such as only the SNPs that change the amino acid sequence of a protein) and displaying this specific subset of the data in the browser as a Custom Track. [ 24 ] Any browser view created by a user, including those containing Custom Tracks, may be shared with other users via the Saved Sessions feature. Custom Tracks support multiple file formats, including BED, WIG, GFF, GTF, PSL, and big* formats such as bigBed and bigWig. Users may input data via direct paste, file upload, or by referencing a URL pointing to the remote data. Tracks are temporary, and those not associated with a saved session are removed after 48 hours. [ 24 ] Users can configure tracks with track lines to specify attributes such as name, description, visibility, color, and link targets. Optional browser lines may be included to define initial display coordinates and browser settings. Uploaded tracks can be managed, updated, or deleted through the “Manage Custom Tracks” interface. [ 24 ] For larger or more persistent data hosting, users may use Track Hubs , which provide a scalable system for remote data integration and advanced configuration. Below the displayed images of the UCSC Genome Browser are eleven categories of additional tracks that can be selected and displayed alongside the original data. Researchers can select the tracks which best represent their query, so that the most relevant data are displayed for the type and depth of research being done. The UCSC site hosts a set of genome analysis tools. Each tool allows users to create, find, and modify sequences to find similar sequences or patterns. These tools are generally free to use for academic purposes, nonprofit organizations and individuals with a personal interest in genomics. Tools developed by UCSC include: Genome Browser, BLAT, In-Silico PCR, Table Browser, LiftOver, REST API, Variant Annotation Integrator, Gene Sorter, Genome Graph, Data Integrator, UShER, Gene Interactions, VisiGene, DNA Duster, Protein Duster, and Phylogenetic Tree PNG Maker. Source code for BLAT, LiftOver and the Genome Browser is available for download on the UCSC website . Other useful tools that work with UCSC file formats include: BEDOPS, bedtools, bwtool, CrossMap, CruzDb, G-OnRamp, libBigWig, MakeHub, RTrackLayer, trackhub, twobitreader, ucsc-genomes-download, and Wiggle Tools. BLAT [ 27 ] is a FASTA -format sequence alignment tool that is useful for finding sequences in the massive sequence (human genome = 3.23 billion bases [Gb]) of any of the featured genomes. Users are able to paste a sequence into the text box or upload a file containing the sequence. The tool offers several options: users may choose the genome and assembly the sequence belongs to, the query type, the sort order of the output, and the output type. Using BLAT on DNA finds sequences with ≥ 95% base similarity over lengths of ≥ 25 bases. It indexes the genome in memory using all overlapping 11-mers stepping by 5, except where there are repeats. Using BLAT on protein finds sequences with ≥ 80% amino-acid similarity over lengths of ≥ 20 amino acids. It indexes the genome in memory using all overlapping 4-mers stepping by 5, except where there are repeats. BLAT was written by Jim Kent; more information about the software can be found on his website . The Genome Graphs tool allows users to view all chromosomes at once and display the results of genome-wide association studies (GWAS).
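To make the stepped k-mer indexing scheme used by BLAT (described above) more concrete, here is a toy sketch in Python. It is not BLAT itself: it ignores repeat filtering, near-matches, and the clumping and extension stages, and only shows how overlapping 11-mers sampled every 5 bases can be collected into an in-memory lookup table and probed with a query.

```python
# Toy illustration of a stepped k-mer index (not BLAT itself).
def build_index(genome, k=11, step=5):
    """Record every k-mer sampled each `step` bases and where it occurs."""
    index = {}
    for pos in range(0, len(genome) - k + 1, step):
        kmer = genome[pos:pos + k]
        if "N" in kmer:            # skip ambiguous bases for simplicity
            continue
        index.setdefault(kmer, []).append(pos)
    return index

def candidate_hits(index, query, k=11):
    """Return candidate genome offsets where query k-mers match indexed k-mers."""
    hits = set()
    for qpos in range(len(query) - k + 1):        # query k-mers overlap fully
        for gpos in index.get(query[qpos:qpos + k], []):
            hits.add(gpos - qpos)                 # implied alignment start
    return hits
```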
For the Genome Graphs tool, users can customize the clade an organism is in, the genome and assembly type, graph colors, and the significance threshold. Users can also upload their own data, import database assemblies, or configure the layout of the graph, the graph style, and the chromosome layout. A more detailed instruction guide for users who want to use all features to their fullest potential is available in the Genome Graphs User's Guide . The LiftOver [ 28 ] tool uses whole-genome alignments to allow conversion of sequences from one assembly to another or between species. A user can enter the genome coordinates and annotations into the text box or upload the file to the system. The original genome and assembly are selected first, as well as the new genome and assembly that the data are going to be converted into. The input can be customized in two categories: regions defined by chrom:start-end (BED 4 to BED 6) and regions with an exon-intron structure (usually transcripts, BED 12). Regions defined by chrom:start-end can be customized to allow multiple output regions, to set the minimum hit size in the query, and to set the minimum chain size in the target. Regions with an exon-intron structure can be customized to set the minimum ratio of alignment blocks or exons that must map, and to specify that, if an exon is not mapped, the closest mapped base is used. The UCSC Genome Browser provides Python-compatible interfaces that allow researchers to programmatically access genomic data and annotations. These APIs support automation, integration into computational workflows, and large-scale analysis tasks, enhancing accessibility beyond the graphical browser. The UCSC REST API is the primary method for programmatic interaction. It allows users to send HTTP requests to retrieve genomic sequences, annotation tracks, and gene-related information. While the API itself is language-agnostic, Python developers can easily integrate it using libraries such as requests . Community-developed wrappers and tools further simplify API usage in Python-based bioinformatics environments. Common uses of the UCSC REST API in Python include retrieving the sequence of a genomic region, querying annotation tracks, and listing the available genomes and tracks. These capabilities make the API useful for custom dashboards, automated annotation pipelines, and downstream analysis in tools like Jupyter Notebooks or Snakemake. As an example, a few lines of Python (see the sketch at the end of this article) can request the sequence from position 1,000,000 to 1,000,100 on the hg38 human genome assembly and return the raw DNA bases. This illustrates how researchers can access genome content without downloading entire datasets, and this flexibility makes the REST API ideal for rapid, scriptable access to UCSC’s genomic resources. While the UCSC REST API is highly accessible, it is less suited to very large queries; for large datasets or bulk analysis, users may still prefer downloading entire tracks or working with the UCSC Genome Browser database locally. The UCSC Browser code base is open-source for non-commercial use, and is mirrored locally by many research groups, allowing private display of data in the context of the public data. The UCSC Browser is mirrored at several locations worldwide, as shown in the table. The Browser code is also used in separate installations by the UCSC Malaria Genome Browser and the Archaea Browser.
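The following minimal sketch shows the kind of REST request described above, fetching 100 bases of hg38 through the public API. The endpoint at api.genome.ucsc.edu and the "dna" field of the JSON response are taken from the public API documentation, but treat them as assumptions to verify against the current documentation rather than a guaranteed interface.

```python
# Minimal sketch: fetch a 100-base slice of the hg38 assembly through the
# UCSC REST API. Endpoint and the "dna" response field are assumptions based
# on the public API documentation.
import requests

def fetch_sequence(genome="hg38", chrom="chr1", start=1_000_000, end=1_000_100):
    """Return the DNA bases for the half-open interval [start, end)."""
    url = "https://api.genome.ucsc.edu/getData/sequence"
    params = {"genome": genome, "chrom": chrom, "start": start, "end": end}
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["dna"]

if __name__ == "__main__":
    print(fetch_sequence())
```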
https://en.wikipedia.org/wiki/UCSC_Genome_Browser
The UCR Herbarium is a clearinghouse for information regarding plant species distribution in the Western Hemisphere . The collection houses over 110,000 dried specimens , approximately 80,000 of which are from the United States , and 32,000 from Mexico . The collection is especially strong in the flora of Southern California and the Baja California peninsula . [ 1 ] The Herbarium maintains an online-accessible FileMaker database of every specimen in the stacks, which is constantly updated. The Herbarium's staff makes between 5,000 and 10,000 identifications a year for visitors who bring in plant samples, approximately a quarter of which have been made into new specimens. In addition, the Herbarium's active collection program generates thousands of additional specimens a year. Current field projects include the flora of the San Bernardino Mountains and western Riverside County , as well as an investigation of the Curú Biological Reserve on the Nicoya Peninsula in Costa Rica . The Herbarium is an active correspondent to the Inventory of Rare and Endangered Vascular Plants of California, a list of endangered plants published by the California Native Plant Society (CNPS). The Herbarium's records allow the CNPS to provide scientific evidence to support biological conservation , leading to the nomination of certain species for federal listing under the Endangered Species Act . [ 2 ]
https://en.wikipedia.org/wiki/UC_Riverside_Herbarium
The UC Santa Barbara Bren School of Environmental Science and Management is the graduate environmental studies school of the University of California, Santa Barbara . The mission of the Bren School is to play a leading role in researching environmental issues, identifying and solving environmental problems, and training research scientists and environmental-management professionals. [ 1 ] In 1991, recognizing the need for a graduate school dedicated to the study of the environment, the Regents of the University of California established the School of Environmental Science & Management at UC Santa Barbara. [ 2 ] In 1994, Jeff Dozier became the school's first dean. In 1995, the first faculty were appointed, and in 1996 the first master's students were admitted, receiving their degrees in 1998. In 2000, the first PhD students were admitted. The first PhD degrees were awarded in 2002. [ 3 ] In 1997, the school received a major gift from the Donald Bren Foundation to provide funding for endowed faculty chairs, faculty scholars, visiting lecturers, conferences, and student support. In recognition of the gift, the school was renamed the Bren School of Environmental Science & Management. [ 4 ] In 2002, Bren Hall was completed, providing the school with the facilities and the physical focus it retains to this day. In 2020, the Bren School launched the Master of Environmental Data Science program and admitted the first cohort of MEDS students. The Bren School offers Master of Environmental Data Science (MEDS), Master of Environmental Science & Management (MESM), and Doctor of Philosophy (PhD) degrees. [ 5 ] The Bren School's Master of Environmental Data Science (MEDS) degree is an 11-month program focused on using data science to advance solutions to environmental problems. Students complete a capstone project during the last two quarters of the program, collaborating in teams of 3–4 with real-world clients, which may be internal (Bren School faculty or other UCSB researchers) or external clients from industry, government, or non-governmental organizations, to build practical experience and skills in solving environmental problems using data science. The Bren School's Master of Environmental Science and Management (MESM) degree is a two-year program comparable to the Master of Environmental Management (MEM) degree offered by other schools. The Bren School chose to include "Science" in the name of its degree to emphasize its focus on environmental science. In addition to the broad training they receive in the core courses, [ 6 ] MESM students select one or two of the seven specializations to pursue in greater depth, and may also select one of two optional foci. One notable aspect of the master's program at the Bren School is the group project. The master's thesis for MESM candidates consists of a project, usually proposed by an agency or company having a local, statewide, nationwide, or international presence. Group projects begin during the spring quarter of the first year of study, and culminate in a final presentation to the Bren community, project stakeholders, and local professionals. [ 7 ] The Bren School's PhD program is designed to prepare leaders in environmental science and management. The program is notable for its focus on a wide variety of environmental issues. The school's classrooms, laboratories, and other facilities are located in Bren Hall .
Since opening in 2002, Bren Hall has been recognized as an exemplar of sustainable building practices. The building was the first laboratory building in the United States to receive a LEED Platinum certification from the U.S. Green Building Council (USGBC) in 2002. [ 8 ] In 2009, the building became the first structure in the nation to receive a second LEED Platinum certification—the LEED for Existing Buildings: Operations & Maintenance . [ 9 ] In 2017, Bren Hall received its third LEED Platinum certification. The building was the highest scoring LEED project in the country. [ 10 ]
https://en.wikipedia.org/wiki/UC_Santa_Barbara_Bren_School_of_Environmental_Science_and_Management
UCbase is a database of ultraconserved sequences (UCRs or UCEs), which were first described by Bejerano, G. et al. [ 2 ] in 2004. They are highly conserved genome regions that share 100% identity among human, mouse and rat. There are 481 UCRs, each longer than 200 bases. They are frequently located at genomic regions involved in cancer, are differentially expressed in human leukemias and carcinomas, and in some instances are regulated by microRNAs. [ 3 ] The first release of UCbase was published by Taccioli, C. et al. in 2009. [ 4 ] Recent updates include new annotation based on the hg19 human genome assembly, information about disorders related to the chromosome coordinates using the SNOMED CT classification, a query tool to search for SNPs, and a new text box to directly interrogate the database using a MySQL interface. Moreover, a sequence comparison tool allows researchers to match selected sequences against ultraconserved elements located in genomic regions involved in specific disorders. To facilitate the interactive, visual interpretation of UCR chromosomal coordinates, the authors have implemented the graph visualization feature of UCbase, creating a link to the UCSC Genome Browser . UCbase 2.0 no longer provides microRNA (miRNA) information, focusing only on UCRs. The official release of UCbase 2.0 was published in 2014. [ 1 ]
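Purely as an illustration of the kind of query such a MySQL interface could run, the sketch below finds ultraconserved regions overlapping a coordinate window. The table and column names (ucr, chrom, chrom_start, chrom_end) and the rows are invented and do not reflect the actual UCbase schema; sqlite3 is used only so the example runs standalone, while the SQL itself is generic.

```python
# Illustrative only: interval-overlap query of the style a UCbase-like MySQL
# interface could run. Schema and data below are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ucr (name TEXT, chrom TEXT, chrom_start INT, chrom_end INT)")
conn.executemany(
    "INSERT INTO ucr VALUES (?, ?, ?, ?)",
    [("uc.1", "chr7", 120000, 120450), ("uc.2", "chr7", 350000, 350300)],
)

# Standard overlap condition: start < window_end AND end > window_start.
rows = conn.execute(
    "SELECT name, chrom_start, chrom_end FROM ucr "
    "WHERE chrom = ? AND chrom_start < ? AND chrom_end > ? "
    "ORDER BY chrom_start",
    ("chr7", 200000, 100000),
).fetchall()
print(rows)  # -> [('uc.1', 120000, 120450)]
```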
https://en.wikipedia.org/wiki/UCbase
Uranium(III) chloride , UCl 3 , is a water-soluble salt of uranium. UCl 3 is used mostly to reprocess spent nuclear fuel. Uranium(III) chloride is synthesized in various ways from uranium(IV) chloride ; however, UCl 3 is less stable than UCl 4 . There are two common ways to synthesize uranium(III) chloride: (1) reacting uranium tetrachloride with uranium metal in a molten NaCl–KCl mixture at 670–710 °C, or (2) heating uranium(IV) chloride in hydrogen gas. In solid uranium(III) chloride each uranium atom has nine chlorine atoms as near neighbours, at approximately the same distance, in a tricapped trigonal prismatic configuration. [ 3 ] Uranium(III) chloride is a green crystalline solid at room temperature. UCl 3 melts at 837 °C and boils at 1657 °C. Uranium(III) chloride has a density of 5500 kg/m 3 (5.500 g/cm 3 ). Its composition by weight is approximately 69% uranium and 31% chlorine, and its formal oxidation states are U(III) and Cl(−I). This salt is very soluble in water and is also very hygroscopic . UCl 3 is more stable in a solution of hydrochloric acid . [ 4 ] Uranium(III) chloride is used in reactions with tetrahydrofuran (THF) and sodium methylcyclopentadiene to prepare various uranium metallocene complexes. [ 5 ] Uranium(III) chloride is used as a catalyst during reactions between lithium aluminium hydride (LiAlH 4 ) and olefins to produce alkyl aluminate compounds. [ 6 ] Molten uranium(III) chloride is an important component of liquid nuclear fuel used in molten-salt reactors . Neutron scattering and computational studies point to an unusual, heterogeneous bonding environment around U(III) at high temperatures, with distinct inner- and outer-coordination subshells. [ 7 ] The molten form of uranium(III) chloride is also a typical compound in pyrochemical processes, as it is important in the reprocessing of spent nuclear fuels . [ 8 ] UCl 3 is usually the form that uranium takes as spent fuel in electrorefining processes. [ 8 ] [ 9 ] There are three known hydrates of uranium(III) chloride. Each is synthesized by the reduction of uranium(IV) chloride in methyl cyanide ( acetonitrile ), with specific amounts of water and propionic acid . [ 10 ] While there are no long-term data on the toxic effects of UCl 3 , it is important to minimize exposure to this compound when possible. Similar to other uranium compounds that are soluble in water, UCl 3 is likely absorbed into the blood through the alveolar pockets of the lungs within days of exposure. Exposure to uranium(III) chloride leads to toxicity of the renal system . [ 11 ]
https://en.wikipedia.org/wiki/UCl3
Uranium tetrachloride is an inorganic compound , a salt of uranium and chlorine, with the formula UCl 4 . It is a hygroscopic olive-green solid. It was used in the electromagnetic isotope separation (EMIS) process of uranium enrichment . It is one of the main starting materials for organouranium chemistry . Uranium tetrachloride is generally synthesised by the reaction of uranium trioxide (UO 3 ) and hexachloropropene . Solvent UCl 4 adducts can be formed by a simpler reaction of UI 4 with hydrogen chloride in organic solvents. Uranium tetrachloride also forms the nonahydrate, which can be produced by evaporating a mildly acidic solution of UCl 4 . [ 1 ] According to X-ray crystallography , the uranium centers are eight-coordinate, being surrounded by eight chlorine atoms, four at 264 pm and the other four at 287 pm. [ 2 ] Dissolution in protic solvents is more complicated. When UCl 4 is added to water, the uranium aqua ion is formed. The aqua ion [U(H 2 O) x ] 4+ (x is 8 or 9 [ 3 ] ) is strongly hydrolyzed. The pK a for this reaction is ca. 1.6, [ 4 ] so hydrolysis is absent only in solutions of acid strength 1 mol dm −3 or stronger (pH < 0). Further hydrolysis occurs at pH > 3. Weak chloro complexes of the aqua ion may be formed. Published estimates of the log K value for the formation of [UCl] 3+ (aq) vary from −0.5 to +3 because of the difficulty in dealing with simultaneous hydrolysis. [ 4 ] With alcohols, partial solvolysis may occur. Uranium tetrachloride dissolves in non-protic solvents such as tetrahydrofuran , acetonitrile , and dimethylformamide , which can act as Lewis bases . Solvates of formula UCl 4 L x are formed, which may be isolated. The solvent must be completely free of dissolved water, or hydrolysis will occur, with the solvent, S, picking up the released proton. The solvent molecules may be replaced by other ligands in ligand-exchange reactions; in writing such reactions the solvent is not shown, just as when complexes of other metal ions are formed in aqueous solution. Solutions of UCl 4 are susceptible to oxidation by air, resulting in the production of complexes of the uranyl ion. Uranium tetrachloride is produced commercially by the reaction of carbon tetrachloride with pure uranium dioxide (UO 2 ) at 370 °C. It has been used as feed in the electromagnetic isotope separation (EMIS) process of uranium enrichment . Beginning in 1944, the Oak Ridge Y-12 Plant converted UO 3 to UCl 4 feed for Ernest O. Lawrence 's calutrons . Its major benefit was that the uranium tetrachloride used in the calutrons is not as corrosive as the uranium hexafluoride used in most other enrichment technologies. This process was abandoned in the 1950s. In the 1980s Iraq unexpectedly revived this option as part of its nuclear weapons program. In the enrichment process, uranium tetrachloride is ionized into a uranium plasma . The uranium ions are then accelerated and passed through a strong magnetic field . After traveling along half of a circle, the beam is split into a region nearer the outside wall, which is depleted , and a region nearer the inside wall, which is enriched in 235 U . The large amounts of energy required to maintain the strong magnetic fields, the low recovery rates of the uranium feed material, and the slower, less convenient facility operation make this an unlikely choice for large-scale enrichment plants. Work is being done on the use of molten uranium chloride–alkali chloride mixtures as reactor fuels in molten salt reactors .
Uranium tetrachloride melts dissolved in a lithium chloride – potassium chloride eutectic have also been explored as a means to recover actinides from irradiated nuclear fuels through pyrochemical nuclear reprocessing . [ 5 ] Like all water soluble uranium salts, uranium tetrachloride is nephrotoxic (poisonous to the kidney) and can cause severe renal damage and acute renal failure if ingested.
https://en.wikipedia.org/wiki/UCl4
Uranium pentachloride is an inorganic chemical compound composed of uranium in the +5 oxidation state and five chlorine atoms. Uranium pentachloride can be prepared from the reaction of uranium trioxide with carbon tetrachloride , with a previously prepared amount of the compound serving as a catalyst . [ 1 ] It can also be prepared from the reaction between uranium tetrachloride and chlorine in a fluidized bed reactor at 550 °C. [ 1 ] Uranium pentachloride is available as red-brown microcrystalline powders or black-red crystals with metallic sheen. Unlike the tetrachloride, it is soluble in liquid chlorine. It is very hygroscopic and decomposes into uranium hexachloride and uranium tetrachloride when in water or heated. Additionally, it reacts with some organic solvents such as alcohols , acetone , diethyl ether , or dioxane , but does form stable solutions in carbon tetrachloride, carbon disulfide , and thionyl chloride . There are two crystalline forms, each of which has the uranium atom in an octahedral geometry among six chlorine atoms. Usually, it is in the α-form, which has a monoclinic crystal structure with space group P 2 1 / n . There is also a triclinic β-form, which has space group P 1 [ 2 ] and consists of U 2 Cl 10 dimers like in uranium pentabromide . [ 3 ] The gaseous form has C 4v symmetry due to strong f -orbital contribution, and has an electron affinity of 4.76 ± 0.03 eV . [ 4 ]
https://en.wikipedia.org/wiki/UCl5
The Platform Initialization Specification ( PI Specification ) is a specification published by the Unified EFI Forum that describes the internal interfaces between different parts of computer platform firmware . [ 1 ] This allows for greater interoperability between firmware components from different sources. The specification is normally, but not by requirement, used in conjunction with the UEFI specification. Platform Initialization Specification 1.7 was released in January 2019. As of version 1.3, the PI specification is organized into five volumes. This computing article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/UEFI_Platform_Initialization