The transient hot wire method (THW) is a very popular, accurate and precise technique for measuring the thermal conductivity of gases, liquids, [ 1 ] solids, [ 2 ] nanofluids [ 3 ] and refrigerants [ 4 ] over a wide temperature and pressure range. The technique is based on recording the transient temperature rise of a thin vertical metal wire of effectively infinite length when a step voltage is applied to it. The wire is immersed in a fluid and acts both as an electrical heating element and as a resistance thermometer . The transient hot wire method has an advantage over other thermal conductivity methods: it rests on a fully developed theory and requires either no calibration or only a single-point calibration. Furthermore, because the measuring time is very short (about 1 s), convection does not develop during the measurement, so the thermal conductivity of the fluid alone is measured with very high accuracy.
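In the ideal line-source model, the wire's temperature rise grows linearly with the logarithm of time, and the thermal conductivity follows from the slope of that line: λ = q / (4π · slope), where q is the heating power per unit length of wire. The sketch below illustrates this working equation with a least-squares fit; the numbers are synthetic and purely illustrative, not measurements.

```python
import math

def thw_conductivity(q_per_length, times, delta_T):
    """Estimate thermal conductivity from the slope of dT vs ln(t).

    Ideal line-source model: dT = (q / (4*pi*lambda)) * ln(t) + const,
    so lambda = q / (4*pi*slope).  Least-squares fit of dT against ln(t).
    """
    x = [math.log(t) for t in times]
    n = len(x)
    mx = sum(x) / n
    my = sum(delta_T) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, delta_T)) / \
            sum((xi - mx) ** 2 for xi in x)
    return q_per_length / (4 * math.pi * slope)

# Synthetic data for a water-like fluid (lambda ~ 0.6 W/m/K), q = 1 W/m
lam_true = 0.6
q = 1.0
times = [0.1, 0.2, 0.4, 0.8]                      # seconds, within the 1 s window
dT = [q / (4 * math.pi * lam_true) * math.log(t) + 2.0 for t in times]
print(round(thw_conductivity(q, times, dT), 3))   # recovers 0.6
```

In a real instrument the early and late parts of the record are discarded (wire heat capacity and onset of convection) before fitting the logarithmic region.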
Most transient hot wire sensors used in academia consist of two identical very thin wires that differ only in length. [ 1 ] Sensors using a single wire [ 5 ] [ 6 ] are used both in academia and in industry; their advantage over two-wire sensors lies in the easier handling of the sensor and replacement of the wire.
An ASTM standard has been published for the measurement of engine coolants using a single-wire transient hot wire method. [ 7 ]
Some 200 years ago, scientists used a crude version of this method to make the first thermal conductivity measurements on gases. | https://en.wikipedia.org/wiki/Transient_hot_wire_method |
Transient kinetic isotope effects (or fractionation ) occur when the reaction leading to isotope fractionation does not follow pure first-order kinetics (FOK) and therefore isotopic effects cannot be described with the classical equilibrium fractionation equations or with steady-state kinetic fractionation equations (also known as the Rayleigh equation). [ 1 ] In these instances, the general equations for biochemical isotope kinetics ( GEBIK ) and the general equations for biochemical isotope fractionation ( GEBIF ) can be used.
The GEBIK and GEBIF equations are the most generalized approach to describing isotopic effects in any chemical, catalytic or biochemical reaction, because they can describe isotopic effects in equilibrium reactions, kinetic chemical reactions and kinetic biochemical reactions. [ 2 ] In the latter two cases, they can describe both stationary and non-stationary fractionation (i.e., variable and inverse fractionation). In general, isotopic effects depend on the number of reactants and on the number of combinations resulting from the number of substitutions in all reactants and products. Accurately describing isotopic effects, however, also depends on the specific rate law used to describe the chemical or biochemical reaction that produces them. Normally, regardless of whether a reaction is purely chemical or whether it involves some enzyme of biological nature, the equations used to describe isotopic effects are based on FOK. This approach systematically leads to isotopic effects that can be described by means of the Rayleigh equation. In this case, isotopic effects will always be expressed as a constant, and hence cannot describe isotopic effects in reactions where fractionation and enrichment are variable or inverse during the course of a reaction. Most chemical reactions do not follow FOK, and biochemical reactions cannot normally be described with FOK either. To properly describe isotopic effects in chemical or biochemical reactions, different approaches must be employed, such as the use of Michaelis–Menten reaction order (for chemical reactions) or coupled Michaelis–Menten and Monod reaction orders (for biochemical reactions). However, in contrast to classical Michaelis–Menten kinetics, the GEBIK and GEBIF equations are solved under the hypothesis of non-steady state. This characteristic allows GEBIK and GEBIF to capture transient isotopic effects.
The GEBIK and GEBIF equations are introduced below.
The GEBIK and GEBIF equations describe the dynamics of the following state variables
Both S and P contain at least one isotopic expression of a tracer atom. For instance, if the carbon element is used as a tracer, both S and P contain at least one C atom, which may appear as $^{12}\mathrm{C}$ and $^{13}\mathrm{C}$. The isotopic expression within a molecule is written as $_{a}^{b}\mathrm{S}$,
where the subscript $a$ is the number of tracer atoms within S, while the superscript $b$ is the number of isotopic substitutions in the same molecule. The condition $0\leq b\leq a$ must be satisfied. For example, the $\mathrm{N_2}$ product in which 1 isotopic substitution occurs (e.g., $^{15}\mathrm{N}^{14}\mathrm{N}$) will be described by $_{2}^{1}\mathrm{P}$.
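Counting distinguishable positions, the number of ways to place $b$ substitutions among $a$ tracer atoms is the binomial coefficient $\binom{a}{b}$; molecular symmetry then determines which placements collapse into a single isotopologue or remain distinct isotopomers. A minimal sketch of this bookkeeping:

```python
from math import comb

# For a molecule with a tracer atoms, the number of ways to place b
# substitutions (distinguishing positions) is C(a, b).  Symmetry may merge
# some placements; those that remain distinct are isotopomers of one
# isotopologue (e.g. the two b = 1 placements in the asymmetric N2O).
a = 2  # e.g. the two N atoms in N2O
for b in range(a + 1):
    print(b, comb(a, b))   # b = 1 gives 2 placements: 15N14NO and 14N15NO
```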
Substrates and products appear in a chemical reaction with specific stoichiometric coefficients. When chemical reactions comprise combinations of reactants and products with various isotopic expressions, the stoichiometric coefficients are functions of the isotope substitution number. If $x_b$ and $y_d$ are the stoichiometric coefficients for the $_{a}^{b}\mathrm{S}$ substrate and the $_{c}^{d}\mathrm{P}$ product, a reaction takes the form
For example, in the reaction $^{14}\mathrm{NO_3^-} + {}^{15}\mathrm{NO_3^-} \rightarrow {}^{14}\mathrm{N}^{15}\mathrm{NO}$, the notation is $_{1}^{0}\mathrm{S} + {}_{1}^{1}\mathrm{S} \rightarrow {}_{2}^{1}\mathrm{P}$ with $x_0 = x_1 = 1$ for both isotopologue reactants of the same substrate, with substitution numbers $b=0$ and $b=1$, and with $y_1 = 1$ for $_{2}^{1}\mathrm{P}$ and $y_0 = y_2 = 0$ because the reaction does not comprise production of $_{2}^{0}\mathrm{P} = {}^{14}\mathrm{N_2O}$ and $_{2}^{2}\mathrm{P} = {}^{15}\mathrm{N_2O}$.
For isotopomers, the substitution location is taken into account as $_{a}^{b}\mathrm{S}^{\beta}$ and $_{a}^{b}\mathrm{S}^{\gamma}$, where $\beta$ and $\gamma$ indicate different expressions of the same isotopologue $_{a}^{b}\mathrm{S}$. Isotopomers only exist when $1\leq b<a$ and $a\geq 2$. The substitution location has to be specifically defined depending on the number of tracer atoms $a$, the number of substitutions $b$, and the molecule structure. For multiatomic molecules that are symmetric with respect to tracer position, there is no need to specify the substitution position when $b=1$. For example, one substitution of deuterium ($\mathrm{D} = {}^{2}\mathrm{H}$) [ a ] in the symmetric methane molecule, giving $\mathrm{CDH_3}$, does not require the use of the right superscript. In the case that $b=2$, the substitution location has to be specified, while for $\mathrm{CHD_3}$ and $\mathrm{CD_4}$ it is not required. For example, two $^{2}\mathrm{H}$ substitutions in $\mathrm{CD_2H_2}$ can occur in adjacent or non-adjacent locations. Using this notation, the reaction $\mathrm{CD_2H_2} + 2\,\mathrm{O_2} \rightarrow \mathrm{H_2O} + \mathrm{D_2O} + \mathrm{CO_2}$ can be written as
where $\beta$ in $_{4}^{2}\mathrm{S}^{\beta}$ defines only one of the two methane forms (either with adjacent or non-adjacent D atoms). The location of D in the two isotopologue water molecules produced on the right-hand side of the reaction has not been indicated, because D is present in only one water molecule at saturation and because the water molecule is symmetric. For asymmetric and multiatomic molecules with $1\leq b<a$ and $a\geq 2$, definition of the substitution location is always required. For instance, the isotopomers of the (asymmetric) nitrous oxide molecule $\mathrm{N_2O}$ are $_{2}^{1}\mathrm{S}^{\beta} = {}^{15}\mathrm{N}^{14}\mathrm{NO}$ and $_{2}^{1}\mathrm{S}^{\gamma} = {}^{14}\mathrm{N}^{15}\mathrm{NO}$.
Reactions of asymmetric isotopomers can be written using the partitioning coefficient u as
where $u_{\beta} + u_{\gamma} = 1$. For example, using N isotope tracers, the isotopomer reactions
can be written as one reaction in which each isotopomer product is multiplied by its partition coefficient as
with $u_{\gamma} = 1 - u_{\beta}$. More generally, the tracer element does not necessarily occur in only one substrate and one product. If $n_{\mathrm{S}}$ substrates react releasing $n_{\mathrm{P}}$ products, each having an isotopic expression of the tracer element, then the generalized reaction notation is
For instance, consider the $^{16}\mathrm{O}$ and $^{18}\mathrm{O}$ tracers in the reaction
In this case the reaction can be written as
with two substrates and two products without indication of the substitution location because all molecules are symmetric.
Biochemical kinetic reactions of type ( 1 ) are often catalytic reactions in which one or more substrates, $\mathrm{S}_j$, bind to an enzyme, E, to form a reversible activated complex, C, which releases one or more products, $\mathrm{P}_h$, and free, unchanged enzyme. These reactions belong to the type of reactions that can be described by Michaelis–Menten kinetics . Using this approach for substrate and product isotopologue and isotopomer expressions, and under the prescribed stoichiometric relationships among them, leads to the general reactions of the Michaelis–Menten type
with the index $i = 1, \ldots, m$, where $m$ depends on the number of possible atomic combinations among all isotopologues and isotopomers. Here, $k_{1(i)}$, $k_{2(i)}$, and $k_{3(i)}$ are the rate constants indexed for each of the $m$ reactions.
The reactions
can be written as
The following isotope mass balances must hold
To solve for the concentration of all components appearing in any general biochemical reaction as in ( 2 ), the Michaelis–Menten kinetics for an enzymatic reaction are coupled with the Monod kinetics for biomass dynamics. The most general case is to assume that the enzyme concentration is proportional to the biomass concentration and that the reaction is not in quasi-steady state. These hypotheses lead to the following system of equations
with $i = 1, \ldots, m$, and where $\overline{S}_i$ is the concentration of the most limiting substrate in each reaction $i$, $z$ is the enzyme yield coefficient, $Y$ is the yield coefficient expressing the biomass gain per unit of released product, and $\mu$ is the biomass mortality rate. [ 4 ]
The isotopic composition of the components in a biochemical system can be defined in different ways depending on the definition of isotopic ratio. Three definitions are described here:
Isotopic ratio relative to each component in the system, each with its isotopic expression, with respect to the concentration of its most abundant isotopologue
Isotopic ratio relative to the mass of the tracer element in each component;
where $^{b_j}M_{S_j}$ and $^{d_h}M_{P_h}$ are the molecular weights of each isotopic expression of the substrate and product.
Isotopic ratio relative to the mass of the tracer element in the accumulated substrates and products
Regardless of the definition of the isotopic ratio, the isotopic composition of substrate and product are expressed as
where $R_{std}$ is a standard isotopic ratio. Here, definition 3 of the isotopic ratio has been used; however, any of the three definitions can equally be used.
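The delta notation referenced to a standard ratio can be computed directly. A small sketch, using a commonly cited value of the VPDB $^{13}\mathrm{C}/^{12}\mathrm{C}$ standard ratio for illustration:

```python
R_STD_VPDB = 0.011180  # 13C/12C of the VPDB standard (commonly cited value)

def delta_permil(R_sample, R_std=R_STD_VPDB):
    """Convert an isotopic ratio R into delta notation (per mil):
    delta = (R / R_std - 1) * 1000."""
    return (R_sample / R_std - 1.0) * 1000.0

# A sample with ratio 0.99 * R_std is depleted in the heavy isotope:
print(round(delta_permil(0.0110682), 1))  # → -10.0
```

Negative values indicate depletion of the heavy isotope relative to the standard, positive values enrichment.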
The isotopic ratio of the product can be used to define the instantaneous isotopic ratio
and the time-dependent fractionation factor
The time-dependent isotopic enrichment is simply defined as
Under specific assumptions, the GEBIK and GEBIF equations become equivalent to the equation for steady-state kinetic isotope fractionation in both chemical and biochemical reactions. Here, two mathematical treatments are proposed: (i) under the biomass-free and enzyme-invariant (BFEI) hypothesis and (ii) under the quasi-steady-state (QSS) hypothesis.
In instances where the biomass and enzyme concentrations are not appreciably changing in time, we can assume that biomass dynamics is negligible and that the total enzyme concentration is constant, and the GEBIK equations become
Eqs. ( 4 ) for isotopic compositions, Eq. ( 6 ) for the fractionation factor and Eq. ( 7 ) for the enrichment factor apply equally to the GEBIK equations under the BFEI hypothesis.
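The transient behaviour such equations capture can be illustrated with a minimal numerical sketch (not the full GEBIK system): two isotopologue substrates compete for a fixed enzyme pool through reversible complexes, integrated without a steady-state assumption by explicit Euler stepping. All rate constants and concentrations below are illustrative, not taken from the source.

```python
# Two isotopologue substrates (b = 0 light, b = 1 heavy) compete for a fixed
# enzyme pool E0, each through a reversible complex C_i releasing product P_i:
#   S_i + E <=> C_i -> P_i + E     (enzyme-invariant, biomass-free sketch)

def step(state, E0, k1, k2, k3, dt):
    S, C, P = state          # lists indexed by isotopologue i
    E = E0 - sum(C)          # free enzyme (mass balance on enzyme)
    new_S, new_C, new_P = [], [], []
    for i in range(len(S)):
        dC = k1[i] * S[i] * E - (k2[i] + k3[i]) * C[i]
        new_S.append(S[i] + dt * (-k1[i] * S[i] * E + k2[i] * C[i]))
        new_C.append(C[i] + dt * dC)
        new_P.append(P[i] + dt * k3[i] * C[i])
    return new_S, new_C, new_P

state = ([1.0, 0.011], [0.0, 0.0], [0.0, 0.0])   # S, C, P; rare isotopologue ~1.1%
k1, k2, k3 = [10.0, 9.8], [1.0, 1.0], [2.0, 2.0]  # light substrate reacts faster
for _ in range(2000):
    state = step(state, 0.01, k1, k2, k3, dt=0.01)

S, C, P = state
R_S = S[1] / S[0]   # isotope ratio of residual substrate
R_P = P[1] / P[0]   # isotope ratio of accumulated product
print(R_P < R_S)    # product depleted in the heavy isotope → True
```

Because the light isotopologue binds faster here, the residual substrate becomes progressively enriched in the heavy isotope while the accumulated product stays depleted, and the instantaneous ratios evolve in time rather than obeying a constant Rayleigh factor.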
If the quasi-steady-state hypothesis is assumed in addition to the BFEI hypothesis, then the complex concentration can be assumed to be in a stationary (steady) state according to the Briggs–Haldane hypothesis, and the GEBIK equations become
which are written in a form similar to the classical Michaelis–Menten equations for any substrate and product. Here, the equations also show that the various isotopologue and isotopomer substrates appear as competing species. Eqs. ( 4 ) for isotopic compositions, Eq. ( 6 ) for the fractionation factor and Eq. ( 7 ) for the enrichment factor apply equally to the GEBIK equations under the BFEI and QSS hypotheses.
An example is shown where the GEBIK and GEBIF equations are used to describe the isotopic reactions of $\mathrm{N_2O}$ consumption into $\mathrm{N_2}$ according to the simultaneous set of reactions
These can be rewritten using the notation introduced before as
The substrate $_{2}^{2}\mathrm{S} = {}^{15}\mathrm{N_2O}$ has not been included due to its scarcity. In addition, we have not specified the isotopic substitution in the $\mathrm{N_2}$ product of the second and third reactions because $\mathrm{N_2}$ is symmetric. Assuming that the second and third reactions have identical reaction rates ($k_{1(3)} \equiv k_{1(2)}$, $k_{2(3)} \equiv k_{2(2)}$, and $k_{3(3)} \equiv k_{3(2)}$), the full GEBIK and GEBIF equations are
The same reaction can be described with the GEBIK and GEBIF equations under the BFEI and QSS approximations as
where $K_3$ has been substituted with $K_2$ because the rate constants in the third reaction have been assumed to equal those of the second reaction. | https://en.wikipedia.org/wiki/Transient_kinetic_isotope_fractionation |
Transient liquid phase diffusion bonding (TLPDB) is a joining process that has been applied for bonding many metallic and ceramic systems which cannot be bonded by conventional fusion welding techniques. The bonding process produces joints with a uniform composition profile, tolerant of surface oxides and geometrical defects. The bonding technique has been exploited in a wide range of applications, from the production and repair of turbine engines in the aerospace industry, [ 1 ] [ 2 ] [ 3 ] to nuclear power plants, [ 4 ] [ 5 ] and in making connections to integrated circuit dies as a part of the microelectronics industry. [ 6 ] [ 7 ]
Transient liquid phase diffusion bonding is a process that differs from diffusion bonding . In transient liquid phase diffusion bonding, an element or alloy with a lower melting point in an interlayer diffuses into the lattice and grain boundaries of the substrates at the bonding temperature. Solid state diffusional processes lead to a change of composition at the bond interface and the dissimilar interlayer melts at a lower temperature than the parent materials. Thus, a thin layer of liquid spreads along the interface to form a joint at a lower temperature than the melting point of either of the parent materials. This method differs from brazing in that it is "isothermally solidifying". While holding the temperature above the filler metal melting point, interdiffusion shifts the composition away from eutectic, so solidification occurs at the process temperature. If sufficient interdiffusion occurs, the joint will remain solid and strong well above the original melt process temperature. This is why it is termed "transient liquid phase." The liquid solidifies before cooling.
In this technique it is necessary to select a suitable interlayer by considering its wettability , its flow characteristics, its stability against reactions with the base materials, and its ability to form a composition with a remelt temperature higher than the bonding temperature. The joining technique dates back to ancient times. [ 8 ] [ 9 ] [ 10 ] For example, copper oxide painted on as an interlayer and covered with tallow or glue to hold gold balls onto a gold article was heated in a reducing flame to form a eutectic alloy at the bond area.
There are many theories on the kinetics of the bonding process but the most common theory divides the process into four main stages. [ 11 ] [ 12 ] The stages are: | https://en.wikipedia.org/wiki/Transient_liquid_phase_diffusion_bonding |
Transient modelling (also called time‑dependent modelling or unsteady simulation ) is the practice of analysing physical, biological or socio‑economic processes whose state variables vary continuously with time. Unlike steady state (equilibrium) analysis—where only the initial and final conditions are considered—transient modelling follows the complete evolution of a system from one state to another, capturing the rates, lags and feedbacks that occur along the way. [ 1 ]
Transient techniques are used in any discipline where the governing equations (e.g. the Navier–Stokes equations , the heat equation, mass‑balance or cash‑flow equations) contain an explicit time derivative. Common fields include
Mathematically, transient problems are described by ordinary or partial differential equations of the general form $d\mathbf{u}/dt = \mathcal{F}(\mathbf{u}, t)$,
where $\mathbf{u}(t)$ is the state vector and $\mathcal{F}$ represents the physical laws and boundary conditions. Analytical solutions exist only for a limited class of simple geometries and linear systems (e.g. one‑dimensional heat conduction). Most practical applications therefore rely on numerical time‑integration schemes such as the explicit or implicit Euler method, Runge–Kutta methods or, for stiff systems, Crank–Nicolson and higher‑order multi‑step solvers. [ 5 ]
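The difference between explicit and implicit time stepping can be seen on the scalar test problem $du/dt = -ku$: explicit Euler is only stable for $dt < 2/k$, while implicit Euler is unconditionally stable. A minimal sketch with illustrative values:

```python
# Explicit vs. implicit Euler on the stiff test problem du/dt = -k*u.
# Explicit Euler is stable only for dt < 2/k; implicit Euler is stable
# for any dt (at the cost of solving an equation per step).

def explicit_euler(u, k, dt, steps):
    for _ in range(steps):
        u = u + dt * (-k * u)
    return u

def implicit_euler(u, k, dt, steps):
    for _ in range(steps):
        u = u / (1 + k * dt)   # solves u_new = u + dt*(-k*u_new)
    return u

k, dt = 100.0, 0.05   # dt > 2/k, so the explicit scheme blows up
print(abs(explicit_euler(1.0, k, dt, 100)) > 1e10)    # → True (diverges)
print(0.0 <= implicit_euler(1.0, k, dt, 100) <= 1.0)  # → True (decays)
```

This stiffness issue is why implicit or multi-step solvers are preferred when a transient model couples fast and slow processes.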
A simple example is a garden water tank . It is topped up by rainfall from the roof, but when the tank is full, the excess water goes to the drain. When the gardener draws water off, the level falls. If the garden is large and the summer is hot, a steady state will occur in summer in which the tank is nearly always empty. If the season is wet, the garden is getting water from the sky and the tank is not being emptied sufficiently, so in steady state it will be observed to be always full. If the gardener has a way of observing the level of water in the tank, a record of daily rainfall and temperatures, and precise metering of the amount of water drawn off every day, the numbers and dates can be recorded in a spreadsheet at daily intervals. After enough samples are taken, a chart can be developed to model the rise-and-fall pattern over a year, or over two years. With a better understanding of the process, it might emerge that a 200-litre water tank would run out 20–25 days a year, a 400-litre tank would never run out, and a 300-litre tank would run out only 1–2 days a year; the last would therefore be an acceptable risk and the most economical solution.
One of the best examples of transient modelling is transient climate simulation , such as the analysis of ice cores in glaciers to understand climate change . Ice cores have thousands of layers, each of which represents a winter season of snowfall; trapped in these are bubbles of air, particles of space dust and pollen which reveal climatic data of the time. By mapping these to a time scale, scientists can analyse the fluctuations over time and make predictions for the future.
Transient modelling is the basis of weather forecasting , of managing ecosystems , rail timetabling, managing the electricity grid, setting the national budget, floating currency , understanding traffic flows on a freeway , solar gains on glass fronted buildings, or even of checking the day-to-day transactions of one's monthly bank statement .
With the transient modelling approach, the whole process is better understood when the inputs and outputs are graphed against time. | https://en.wikipedia.org/wiki/Transient_modelling |
A transient recovery voltage ( TRV ) for high-voltage circuit breakers is the voltage that appears across the terminals after current interruption. It is a critical parameter for fault interruption by a high-voltage circuit breaker; its characteristics (amplitude, rate of rise) can lead either to a successful current interruption or to a failure (called reignition or restrike).
The TRV is dependent on the characteristics of the system connected on both terminals of the circuit-breaker, and on the type of fault that this circuit breaker has to interrupt (single, double or three-phase faults, grounded or ungrounded fault...).
Characteristics of the system include:
The most severe TRV is applied on the first pole of a circuit-breaker that interrupts current (called the first-pole-to-clear in a three-phase system). The parameters of TRVs are defined in international standards such as IEC and IEEE (or ANSI ).
Typical cases of capacitive loads are unloaded lines and capacitor banks.
A terminal fault is a fault that occurs at the circuit breaker terminals. The circuit breaker interrupts a short-circuit at current zero; at this instant the supply voltage is at its maximum, and the recovery voltage tends to reach the supply voltage through a high-frequency transient. The normalized value of the overshoot, or amplitude factor, is 1.4.
A short-line-fault is a fault that occurs on a line a few hundred meters to several kilometers down the line from the circuit breaker terminal. As shown on Figure 5, the TRV is characterized, in its initial part, by a steep rate-of-rise due to a high-frequency oscillation produced by travelling waves that travel on the line with positive and negative reflections at the circuit breaker terminal and at the fault point, respectively. [ 1 ] The superposition of these travelling waves gives the voltage profiles on the line shown on Figures 6 to 14 with, on the horizontal axis, the circuit breaker terminal position on the left and the short-circuit point on the right.
The voltage profile is given at different instants after current interruption, where T L is time needed for a wave to travel from the circuit breaker down the line and back to the circuit breaker terminal.
Figure 15 shows, as a function of time, the variation of voltage on the line-side terminal of the circuit breaker. The voltage variation is two times the initial voltage if losses are neglected; in reality it is approximately 1.6 times the initial voltage. The triangular waveshape of voltage on the line-side terminal, combined with a supply-side voltage variation at a lower frequency, produces the sawtooth variation of TRV shown on Figure 5.
A short-line-fault TRV is characterized by a rate-of-rise that is proportional to the slope of the current at the time of interruption, and therefore to the amplitude of the short-circuit current: $\frac{du}{dt} = Z\,\frac{di}{dt}$, where $Z$ is the surge impedance of the line.
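The rate-of-rise relation can be evaluated numerically, taking the surge impedance as $Z=\sqrt{l/c}$ with $l$ and $c$ the line inductance and capacitance per unit length. The line constants and fault current below are illustrative order-of-magnitude values, not taken from any particular standard.

```python
import math

# Short-line-fault TRV rate of rise: du/dt = Z * di/dt, with Z = sqrt(l/c)
# the surge impedance of the line.  Illustrative overhead-line constants:
l = 1.0e-6   # self-inductance per metre, H/m
c = 5.0e-12  # capacitance per metre, F/m
Z = math.sqrt(l / c)
print(round(Z))  # ≈ 447 ohm, close to the standardized 450 ohm

# For a 50 Hz, 40 kA (rms) short-circuit interrupted at current zero, the
# current slope at interruption is di/dt = omega * I_peak:
di_dt = 2 * math.pi * 50 * 40e3 * math.sqrt(2)   # A/s
print(round(Z * di_dt * 1e-9, 2), "kV/us")        # TRV rate of rise ≈ 7.95
```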
The standardized value of $Z$ is 450 Ω; it is equal to $\sqrt{l/c}$, where $l$ and $c$ are the line self-inductance and capacitance per unit length. | https://en.wikipedia.org/wiki/Transient_recovery_voltage |
In electrical engineering and mechanical engineering , a transient response is the response of a system to a change from an equilibrium or a steady state . The transient response is not necessarily tied to abrupt events but to any event that affects the equilibrium of the system. The impulse response and step response are transient responses to a specific input (an impulse and a step, respectively).
In electrical engineering specifically, the transient response is the circuit’s temporary response that will die out with time. [ 1 ] It is followed by the steady state response, which is the behavior of the circuit a long time after an external excitation is applied. [ 1 ]
The response can be classified as one of three types of damping that describes the output in relation to the steady-state response .
Transient response can be quantified with the following properties.
Oscillation is an effect caused by a transient stimulus to an underdamped circuit or system. It is a transient event preceding the final steady state following a sudden change of a circuit [ 5 ] or start-up. Mathematically, it can be modeled as a damped harmonic oscillator .
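For a second-order system with damping ratio $\zeta$, the three damping classes and the overshoot of an underdamped step response follow from standard formulas. A small sketch (the formula $\exp(-\pi\zeta/\sqrt{1-\zeta^2})$ is the classical peak-overshoot expression for an underdamped second-order step response):

```python
import math

# Step-response character of a second-order system with damping ratio zeta:
# underdamped (zeta < 1) responses overshoot and ring; critically damped
# (zeta = 1) and overdamped (zeta > 1) responses do not.

def classify(zeta):
    if zeta < 1.0:
        return "underdamped"
    if zeta == 1.0:
        return "critically damped"
    return "overdamped"

def overshoot(zeta):
    """Peak overshoot (as a fraction of the final value) for an
    underdamped (0 < zeta < 1) second-order step response."""
    return math.exp(-math.pi * zeta / math.sqrt(1 - zeta * zeta))

print(classify(0.2), round(overshoot(0.2), 2))  # → underdamped 0.53
```

A lightly damped system (small $\zeta$) rings for many cycles; as $\zeta$ approaches 1 the overshoot vanishes.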
Inductor volt-second balance and capacitor ampere-second balance are disturbed by transients. These balances encapsulate the circuit analysis simplifications used for steady-state AC circuits. [ 6 ]
An example of transient oscillation can be found in digital (pulse) signals in computer networks. [ 7 ] Each pulse produces two transients, an oscillation resulting from the sudden rise in voltage and another oscillation from the sudden drop in voltage. This is generally considered an undesirable effect as it introduces variations in the high and low voltages of a signal, causing instability.
Electromagnetic pulses (EMP) occur internally as the result of the operation of switching devices. Engineers use voltage regulators and surge protectors to prevent transients in electricity from affecting delicate equipment. External sources include lightning , electrostatic discharge and nuclear electromagnetic pulse .
Within Electromagnetic compatibility testing, transients are deliberately administered to electronic equipment to test their performance and resilience to transient interference. Many such tests administer the induced fast transient oscillation directly, in the form of a damped sine wave , rather than attempting to reproduce the original source. International standards define the magnitude and methods used to apply them.
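A generic damped sine of the kind applied in such immunity tests can be generated as $v(t) = V_0\,e^{-t/\tau}\sin(2\pi f t)$. The frequency, amplitude and decay constant below are illustrative and not taken from any particular standard.

```python
import math

# Generic damped-sine transient used to emulate induced fast oscillations:
#   v(t) = V0 * exp(-t/tau) * sin(2*pi*f*t)
# Parameters are illustrative, not from any specific test standard.

def damped_sine(V0, f, tau, t):
    return V0 * math.exp(-t / tau) * math.sin(2 * math.pi * f * t)

f, tau = 1e6, 1e-6   # 1 MHz ring with a 1 microsecond decay constant
samples = [damped_sine(1000.0, f, tau, n * 1e-8) for n in range(500)]
peak = max(abs(v) for v in samples)
print(peak <= 1000.0, abs(samples[-1]) < 10.0)  # bounded, decays toward zero
```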
The European standard for Electrical Fast Transient (EFT) testing is EN 61000-4-4 . The U.S. equivalent is IEEE C37.90. The two standards are similar, and the one chosen is based on the intended market. | https://en.wikipedia.org/wiki/Transient_response |
In systems theory , a system is said to be transient or in a transient state when a process variable or variables have been changed and the system has not yet reached a steady state . In electrical engineering , the time taken for an electronic circuit to change from one steady state to another steady state is called the transient time.
When a chemical reactor is being brought into operation, the concentrations, temperatures, species compositions, and reaction rates are changing with time until operation reaches its nominal process variables.
When a switch is closed in an electrical circuit containing a capacitor or inductor , the component opposes the resulting change in voltage or current, causing the system to take a substantial amount of time to reach a new steady state . This period of time is known as the transient state.
A capacitor acts as a short circuit immediately after the switch is closed, increasing its impedance during the transient state until it acts as an open circuit in its steady state.
An inductor is the opposite, behaving as an open circuit until reaching a short circuit steady state.
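The capacitor's transient described above follows the classical RC charging law $v_C(t) = V\bigl(1 - e^{-t/RC}\bigr)$: immediately after switching, $v_C \approx 0$ (short circuit), and after a few time constants it approaches the source voltage (open circuit). A minimal sketch with illustrative component values:

```python
import math

# RC charging transient after a switch closes at t = 0:
#   v_C(t) = V * (1 - exp(-t / (R*C)))
# After ~5 time constants the capacitor is within 1% of steady state.

def v_capacitor(V, R, C, t):
    return V * (1.0 - math.exp(-t / (R * C)))

V, R, C = 5.0, 1e3, 1e-6   # 5 V source, 1 kOhm, 1 uF -> tau = 1 ms
tau = R * C
print(round(v_capacitor(V, R, C, tau) / V, 3))   # → 0.632 (one time constant)
print(v_capacitor(V, R, C, 5 * tau) / V > 0.99)  # → True (effectively steady)
```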
| https://en.wikipedia.org/wiki/Transient_state |
Transillumination is the technique of sample illumination by transmission of light through the sample. Transillumination is used in a variety of methods of imaging.
In microscopy transillumination refers to the illumination of a sample by transmitted light. In its most basic form it generates a bright field image, and is commonly used with transillumination techniques such as phase contrast and differential interference contrast microscopy.
In medicine transillumination generally refers to the transmission of light through tissues of the body. A common example is the transmission of light through fingers, producing a red glow due to red blood cells absorbing other wavelengths of light. Organs analysed include the sinuses, the breasts and the testes. It is widely used by pediatricians to shine light in bodies of infants and observe the amount of scattered light. Since their skeleton is not fully calcified, light can easily penetrate tissues. Common examples of diagnostic applications are:
Failed obliteration of the processus vaginalis allows serous fluid to collect around the testes via a communicating connection between the tunica vaginalis and the peritoneum. The resulting hydrocele presents as painless enlargement of the scrotum, similar to what may be encountered with testicular neoplasms. A convenient method to differentiate the conditions is to transilluminate the scrotum, as the hydrocele will appear a soft red while a solid tumor will not transmit light. Any uncertainty should be followed up with an ultrasound. [ 1 ]
Hydranencephaly is a condition in which the brain's cerebral hemispheres are absent to a great degree and the remaining cranial cavity is filled with cerebrospinal fluid. Transillumination can be used to diagnose hydranencephaly. The device used for this purpose is a Chun gun, which uses a 150-watt projection bulb as a light source.
Bright light penetrates the thin front chest wall and reflects off the back chest wall to indicate the degree of pneumothorax. To treat it, a physician inserts a needle attached to a syringe into the area of collapse to remove the air between lungs and chest wall, causing the lung to reinflate.
There is brilliant transillumination in case of meningocele due to presence of CSF inside the cyst.
Meningomyelocele, on the other hand, is partially transilluminant as it contains nerve root fibres along with the CSF.
Bright transilluminated light can highlight dental caries and sign of dental trauma such as enamel infractions .
Light bright enough to penetrate the shell can be used to verify egg yolks are intact, as the yolk is opaque while the albumin is transparent. | https://en.wikipedia.org/wiki/Transillumination |
Transistors are simple devices with complicated behavior. To ensure the reliable operation of circuits employing transistors, it is necessary to model the physical phenomena observed in their operation using transistor models . There exists a variety of models that range in complexity and in purpose. Transistor models divide into two major groups: models for device design and models for circuit design.
The modern transistor has an internal structure that exploits complex physical mechanisms. Device design requires a detailed understanding of how device manufacturing processes such as ion implantation , impurity diffusion , oxide growth , annealing , and etching affect device behavior. Process models simulate the manufacturing steps and provide a microscopic description of device "geometry" to the device simulator . "Geometry" does not mean readily identified geometrical features such as a planar or wrap-around gate structure, or raised or recessed forms of source and drain (see Figure 1 for a memory device with some unusual modeling challenges related to charging the floating gate by an avalanche process). It also refers to details inside the structure, such as the doping profiles after completion of device processing.
With this information about what the device looks like, the device simulator models the physical processes taking place in the device to determine its electrical behavior in a variety of circumstances: DC current–voltage behavior, transient behavior (both large-signal and small-signal), dependence on device layout (long and narrow versus short and wide, or interdigitated versus rectangular, or isolated versus proximate to other devices). These simulations tell the device designer whether the device process will produce devices with the electrical behavior needed by the circuit designer, and are used to inform the process designer about any necessary process improvements. Once the process gets close to manufacture, the predicted device characteristics are compared with measurements on test devices to check that the process and device models are working adequately.
Although long ago the device behavior modeled in this way was very simple – mainly drift plus diffusion in simple geometries – today many more processes must be modeled at a microscopic level; for example, leakage currents [ 1 ] in junctions and oxides, complex transport of carriers including velocity saturation and ballistic transport, quantum mechanical effects, use of multiple materials (for example, Si-SiGe devices, and stacks of different dielectrics ) and even the statistical effects due to the probabilistic nature of ion placement and carrier transport inside the device. Several times a year the technology changes and simulations have to be repeated. The models may require change to reflect new physical effects, or to provide greater accuracy. The maintenance and improvement of these models is a business in itself.
These models are very computer intensive, involving detailed spatial and temporal solutions of coupled partial differential equations on three-dimensional grids inside the device. [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ] Such models are slow to run and provide detail not needed for circuit design. Therefore, faster transistor models oriented toward circuit parameters are used for circuit design.
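The flavor of these grid-based computations can be suggested with a much-reduced sketch. The snippet below is an illustration only, not a real device simulator: real tools couple Poisson's equation with carrier drift and diffusion on 3-D grids, while here only 1-D diffusion of excess minority carriers is integrated with an explicit finite-difference scheme, and all parameter values are assumed for the example.

```python
import numpy as np

# Toy 1-D diffusion of excess minority carriers:
#   d(dn)/dt = D * d^2(dn)/dx^2
# solved with an explicit finite-difference scheme. Parameters are
# illustrative, not taken from a real device.
D = 36.0                # cm^2/s, electron diffusivity (order of magnitude for Si)
L = 1e-4                # cm, a 1-micron region
N = 101                 # grid points
dx = L / (N - 1)
dt = 0.4 * dx**2 / D    # below the explicit-scheme stability limit dx^2/(2D)

n = np.zeros(N)
n[N // 2] = 1.0         # initial sheet of excess carriers at the midpoint

for _ in range(200):    # time-march; end points held at zero (ideal contacts)
    n[1:-1] += dt * D * (n[2:] - 2 * n[1:-1] + n[:-2]) / dx**2

# The profile spreads into a near-Gaussian while total charge is nearly
# conserved, since little has yet reached the contacts.
print(f"peak = {n.max():.4f}, total = {n.sum():.4f}")
```

Production simulators solve a far stiffer coupled system implicitly, which is one reason they are so computer intensive.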
Transistor models are used for almost all modern electronic design work. Analog circuit simulators such as SPICE use models to predict the behavior of a design. Most design work is related to integrated circuit designs which have a very large tooling cost, primarily for the photomasks used to create the devices, and there is a large economic incentive to get the design working without any iterations. Complete and accurate models allow a large percentage of designs to work the first time.
Modern circuits are usually very complex. The performance of such circuits is difficult to predict without accurate computer models, including but not limited to models of the devices used. The device models include effects of transistor layout: width, length, interdigitation, proximity to other devices; transient and DC current–voltage characteristics ; parasitic device capacitance, resistance, and inductance; time delays; and temperature effects; to name a few items. [ 7 ]
Nonlinear, or large-signal, transistor models fall into three main types: [ 8 ] [ 9 ]
Small-signal or linear models are used to evaluate stability , gain , noise and bandwidth , both in the conceptual stages of circuit design (to decide between alternative design ideas before computer simulation is warranted) and using computers. A small-signal model is generated by taking derivatives of the current–voltage curves about a bias point or Q-point . As long as the signal is small relative to the nonlinearity of the device, the derivatives do not vary significantly, and can be treated as standard linear circuit elements.
An advantage of small signal models is they can be solved directly, while large signal nonlinear models are generally solved iteratively, with possible convergence or stability issues. By simplification to a linear model, the whole apparatus for solving linear equations becomes available, for example, simultaneous equations , determinants , and matrix theory (often studied as part of linear algebra ), especially Cramer's rule . Another advantage is that a linear model is easier to think about, and helps to organize thought.
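The linearization step above can be made concrete with a short sketch. The device here is a hypothetical square-law MOSFET in saturation, I_D = (k/2)(V_GS − V_th)², with made-up parameter values; the point is only how a small-signal parameter (the transconductance g_m) falls out of differentiating the I–V curve at the Q-point.

```python
import numpy as np

# Hypothetical large-signal model: square-law MOSFET in saturation.
K = 2e-3      # A/V^2, assumed transconductance parameter
VTH = 0.7     # V, assumed threshold voltage

def i_d(v_gs):
    """Large-signal (nonlinear) drain current."""
    return 0.5 * K * np.maximum(v_gs - VTH, 0.0) ** 2

VGS_Q = 1.5                   # chosen bias (Q) point, volts
ID_Q = i_d(VGS_Q)             # quiescent drain current

# Transconductance g_m = dI_D/dV_GS at the Q-point (central difference).
dv = 1e-6
gm = (i_d(VGS_Q + dv) - i_d(VGS_Q - dv)) / (2 * dv)   # analytically K*(VGS_Q - VTH)

# For a signal small relative to the device nonlinearity, the linear model
# I = ID_Q + gm*v tracks the full nonlinear model closely.
v_sig = 0.01                          # 10 mV excursion
i_linear = ID_Q + gm * v_sig          # small-signal prediction
i_exact = i_d(VGS_Q + v_sig)          # large-signal value
print(f"gm = {gm*1e3:.3f} mS, relative error = {abs(i_linear - i_exact)/i_exact:.2%}")
```

For this 10 mV signal the linear model agrees with the full model to well under 0.1%, which is why the derivative can be treated as a fixed linear circuit element near the bias point.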
A transistor's parameters represent its electrical properties. Engineers employ transistor parameters in production-line testing and in circuit design. A group of a transistor's parameters sufficient to predict circuit gain , input impedance , and output impedance are components in its small-signal model .
A number of different two-port network parameter sets may be used to model a transistor. These include:
Scattering parameters, or S parameters, can be measured for a transistor at a given bias point with a vector network analyzer . S parameters can be converted to another parameter set using standard matrix algebra operations. | https://en.wikipedia.org/wiki/Transistor_model |
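One such conversion can be sketched directly. The relation below is the standard identity for converting S-parameters to impedance (Z) parameters when every port uses the same real reference impedance Z0; the numeric S-matrix is illustrative, not from a real device.

```python
import numpy as np

def s_to_z(s, z0=50.0):
    """Convert an N-port S-parameter matrix to Z-parameters,
    assuming the same real reference impedance z0 at every port:
        Z = z0 * (I + S) @ inv(I - S)
    """
    s = np.asarray(s, dtype=complex)
    identity = np.eye(s.shape[0])
    return z0 * (identity + s) @ np.linalg.inv(identity - s)

# Example: hypothetical two-port S-parameters at one bias point and frequency.
s = np.array([[0.3 + 0.1j, 0.02],
              [2.5 - 0.5j, 0.4 - 0.2j]])
print(np.round(s_to_z(s), 1))
```

A quick sanity check: a one-port with S11 = 1/3 in a 50 Ω system corresponds to Z = 100 Ω, and an all-zero S-matrix maps back to Z0 on the diagonal.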
A transistor radio is a small portable radio receiver that uses transistor -based circuitry. Previous portable radios used vacuum tubes , which were bulky, fragile, had a limited lifetime, consumed excessive power and required large heavy batteries. Following the invention of the transistor in 1947—a semiconductor device that amplifies and acts as an electronic switch, which revolutionized the field of consumer electronics by introducing small but powerful, convenient hand-held devices—the Regency TR-1 was released in 1954 becoming the first commercial transistor radio. The mass-market success of the smaller and cheaper Sony TR-63, released in 1957, led to the transistor radio becoming the most popular electronic communication device of the 1960s and 1970s. Billions had been manufactured by about 2012. [ 1 ]
The pocket size of transistor radios sparked a change in popular music listening habits, allowing people to listen to music and other broadcasts on the radio anywhere they went. Beginning around 1980, however, cheap AM transistor radios were superseded initially by the boombox and the Sony Walkman , and later on by digitally-based devices with higher audio quality such as portable CD players , personal audio players , MP3 players and smartphones , many of which contain FM radios. [ 2 ] [ 3 ] Transistor radios continue to be built and sold for portable and in-car use, but the term "transistor" is no longer used in marketing, as virtually all modern technology makes use of transistors.
Before the transistor was invented, radios used vacuum tubes . Although portable vacuum tube radios were produced, they were typically bulky and heavy. The need for a low voltage high current source to power the filaments of the tubes and high voltage for the anode potential typically required two batteries. Vacuum tubes were also inefficient and fragile compared to transistors and had a limited lifetime.
Bell Laboratories demonstrated the first transistor on December 23, 1947. [ 4 ] The scientific team at Bell Laboratories responsible for the solid-state amplifier included William Shockley , Walter Houser Brattain , and John Bardeen . [ 5 ] After obtaining patent protection, the company held a news conference on June 30, 1948, at which a prototype transistor radio was demonstrated. [ 6 ]
There are many claimants to the title of the first company to produce practical transistor radios, often incorrectly attributed to Sony (originally Tokyo Telecommunications Engineering Corporation). Texas Instruments had demonstrated all-transistor AM (amplitude modulation) radios as early as May 25, 1954, [ 7 ] [ 8 ] but their performance was well below that of equivalent vacuum tube models. A workable all-transistor radio was demonstrated in August 1953 at the Düsseldorf Radio Fair by the German firm Intermetall. [ 9 ] It was built with four of Intermetall 's hand-made transistors, based upon the 1948 invention of the "Transistron", a germanium point-contact transistor developed by Herbert Mataré and Heinrich Welker . However, as with the early Texas Instruments units (and others), only prototypes were ever built; it was never put into commercial production. RCA had demonstrated a prototype transistor radio as early as 1952, and it is likely that they and the other radio makers were planning transistor radios of their own, but Texas Instruments and the Regency Division of I.D.E.A. were the first to offer a production model starting in October 1954. [ 10 ]
The use of transistors instead of vacuum tubes as the amplifier elements meant that the device was much smaller, required far less power to operate than a tube radio, and was more resistant to physical shock. Since the transistor's base element draws current, its input impedance is low in contrast to the high input impedance of the vacuum tubes. [ 11 ] It also allowed "instant-on" operation, since there were no filaments to heat up. The typical portable tube radio of the fifties was about the size and weight of a lunchbox and contained several heavy, non-rechargeable batteries—one or more so-called "A" batteries to heat the tube filaments and a large 45- to 90-volt "B" battery to power the signal circuits. By comparison, the transistor radio could fit in a pocket and weighed half a pound or less, and was powered by standard flashlight batteries or a single compact battery. The 9-volt battery was introduced for powering transistor radios. [ citation needed ]
Two companies working together, Texas Instruments of Dallas and Industrial Development Engineering Associates (I.D.E.A.) of Indianapolis, Indiana, were behind the unveiling of the Regency TR-1 , the world's first commercially produced transistor radio. Previously, Texas Instruments had produced instrumentation for the oil industry and locating devices for the U.S. Navy, while I.D.E.A. built home television antenna boosters. The two companies worked together on the TR-1, looking to grow revenues for their respective companies by breaking into this new product area. [ 6 ] [ 12 ] In May 1954, Texas Instruments had designed and built a prototype and was looking for an established radio manufacturer to develop and market a radio using its transistors. The Chief Project Engineer for the radio design at Texas Instruments' headquarters in Dallas, Texas, was Paul D. Davis Jr., who had a degree in electrical engineering from Southern Methodist University. He was assigned the project due to his experience with radio engineering in World War II. None of the major radio makers, including RCA, GE, Philco, and Emerson, were interested. The president of I.D.E.A. at the time, Ed Tudor, jumped at the opportunity to manufacture the TR-1, predicting sales of "20 million radios in three years". [ 13 ] The Regency TR-1 was announced on October 18, 1954, by the Regency Division of I.D.E.A., went on sale in November 1954, and was the first practical transistor radio made in any significant numbers. [ 14 ] Billboard reported in 1954 that "the radio has only four transistors. One acts as a combination mixer-oscillator, one as an audio amplifier, and two as intermediate-frequency amplifiers." [ 15 ] One year after the release of the TR-1, sales approached the 100,000 mark. The look and size of the TR-1 were well received, but with only four transistors the sound quality was poor, and reviews of the TR-1's performance were typically adverse. [ 13 ] [ 12 ] The Regency TR-1 was patented [ 16 ] by Richard C. Koch, former Project Engineer of I.D.E.A.
In February 1955, the second transistor radio, the 8-TP-1, was introduced by Raytheon . It was larger than the TR-1, including a four-inch speaker and eight transistors, four more than the TR-1, so the sound quality was much better. An additional benefit of the 8-TP-1 was its efficient battery consumption; the 8-TP-1 cost 1/6 cent per hour to operate, while the TR-1 cost 40 times as much. While the Raytheon radio cost $30 more than the RCA 6-BX-63 tube radio, the latter used $38 of batteries over the same time that the 8-TP-1 used 60 cents. In July 1955 the first positive review of a transistor radio appeared in Consumer Reports . Noting the 8-TP-1's high sound quality and very low battery cost, the magazine stated that "The transistors in this set have not been used in an effort to build the smallest radio on the market, and good performance has not been sacrificed". [ 13 ]
Following the success of the 8-TP-1, Zenith, RCA, DeWald, Westinghouse, and Crosley produced many additional transistor radio models. The TR-1 remained the only shirt pocket-sized radio; rivals made "coat-pocket radios" that Consumer Reports also reviewed as not performing well. [ 13 ]
Chrysler and Philco announced that they had developed and produced the world's first all-transistor car radio in the April 28, 1955, edition of the Wall Street Journal . [ 17 ] Chrysler made the all-transistor car radio, Mopar model 914HR, available as an "option" in fall 1955 for its new line of 1956 Chrysler and Imperial cars, which hit the showroom floor on October 21, 1955. The all-transistor car radio was a $150 option (equivalent to $1,760 in 2024). [ 18 ] [ 19 ] [ 20 ] [ 21 ]
While on a trip to the United States in 1952, Masaru Ibuka , founder of Tokyo Telecommunications Engineering Corporation (now Sony ), discovered that AT&T was about to make licensing available for the transistor. Ibuka and his partner, physicist Akio Morita , convinced the Japanese Ministry of International Trade and Industry (MITI) to finance the $25,000 licensing fee (equivalent to $296,021 today). [ 22 ] For several months Ibuka traveled around the United States borrowing ideas from the American transistor manufacturers. Improving upon the ideas, Tokyo Telecommunications Engineering Corporation made its first functional transistor radio in 1954. [ 13 ] Within five years, Tokyo Telecommunications Engineering Corporation grew from seven employees to approximately five hundred. [ citation needed ]
Other Japanese companies soon followed their entry into the American market and the grand total of electronic products exported from Japan in 1958 increased 2.5 times in comparison to 1957. [ 23 ]
In August 1955, while still a small company, Tokyo Telecommunications Engineering Corporation introduced their TR-55 five-transistor radio under the new brand name Sony . [ 24 ] [ 25 ] [ 12 ] With this radio, Sony became the first company to manufacture the transistors and other components they used to construct the radio. The TR-55 was also the first transistor radio to utilize all miniature components. It is estimated that only 5,000 to 10,000 units were produced. [ citation needed ]
The TR-63 was introduced by Sony to the United States in December 1957. The TR-63 was 6 mm ( 1 ⁄ 4 in) narrower and 13 mm ( 1 ⁄ 2 in) shorter than the original Regency TR-1. Like the TR-1 it was offered in four colors: lemon, green, red, and black. In addition to its smaller size, the TR-63 had a small tuning capacitor and required a new battery design to produce the proper voltage. It used the nine-volt battery , which would become the standard for transistor radios. Approximately 100,000 units of the TR-63 were imported in 1957. [ 13 ] This "pocketable" model proved highly successful, although the term was a matter of some interpretation: Sony allegedly had special shirts with oversized pockets made for its salesmen, a claim that should be treated with caution, since a restored TR-63 readily fits a common shirt pocket. [ 26 ]
The TR-63 was the first transistor radio to sell in the millions, leading to the mass-market penetration of transistor radios. [ 27 ] The TR-63 went on to sell seven million units worldwide by the mid-1960s. [ 28 ] With the visible success of the TR-63, Japanese competitors such as Toshiba and Sharp Corporation joined the market. By 1959, in the United States market, there were more than six million transistor radio sets produced by Japanese companies that represented $62 million in revenue. [ 13 ]
The success of transistor radios led to transistors replacing vacuum tubes as the dominant electronic technology in the late 1950s. [ 29 ] The transistor radio went on to become the most popular electronic communication device of the 1960s and 1970s. Billions of transistor radios are estimated to have been sold worldwide between the 1950s and 2012. [ 27 ]
Prior to the Regency TR-1, transistors were difficult to produce. Only one in five transistors produced worked as expected (a 20% yield), and as a result the price remained extremely high. When it was released in 1954, the Regency TR-1 cost $49.95 (equivalent to $585 today) and sold about 150,000 units. Raytheon and Zenith Electronics transistor radios soon followed and were priced even higher. In 1955, Raytheon's 8-TP-1 was priced at $80 (equivalent to $939 today). [ 13 ] By November 1956 a transistor radio small enough to wear on the wrist, with a claimed battery life of 100 hours, cost $29.95. [ 30 ]
Sony's TR-63, released in December 1957, cost $39.95 (equivalent to $448 today). Following the success of the TR-63 Sony continued to make their transistor radios smaller. Because of the extremely low labor costs in Japan, Japanese transistor radios began selling for as low as $25. By 1962, the TR-63 cost as low as $15 (equivalent to $156 today), [ 27 ] which led to American manufacturers dropping prices of transistor radios down to $15 as well. [ 13 ]
Rock 'n' roll music became popular at the same time as transistor radios. Parents found that purchasing a small transistor radio was a way for children to listen to their music without using the family tube radio. Sony and other Japanese companies were much faster than American firms to focus on stylish, pocket-sized radios for the youth market, helping them to dominate the radio market. American companies began using lower-cost Japanese components, but their radios were less attractive and less sophisticated. By 1964 no transistor radio made with only US components was available; by the mid-1960s Japanese radio components had in turn been supplanted by even less expensive manufacturing in Korea, Taiwan, and Hong Kong. The Zenith Trans-Oceanic 7000 was, until 1970, the last transistor radio manufactured in the US. [ 13 ]
Transistor radios were extremely successful because of three social forces—a large number of young people due to the post–World War II baby boom , a public with disposable income amidst a period of prosperity, and the growing popularity of rock 'n' roll music. The influence of the transistor radio during this period is shown by its appearance in popular films, songs, and books of the time, such as the movie Lolita .
Inexpensive transistor radios running on batteries enabled many people in impoverished rural areas to become regular radio listeners for the first time. Music broadcast from New Orleans and received in Jamaica through transistor radios inspired the development of ska and, less directly, reggae music.
In the late 1950s, transistor radios took on more elaborate designs as a result of heated competition. Eventually, transistor radios doubled as novelty items. The small components of transistor radios that became smaller over time were used to make anything from " Jimmy Carter Peanut-shaped" radios to "Gun-shaped" radios to " Mork from Ork Eggship-shaped" radios. Corporations used transistor radios to advertise their business. " Charlie the Tuna -shaped" radios could be purchased from Star-Kist for an insignificant amount of money giving their company visibility amongst the public. These novelty radios are now bought and sold as collectors' items amongst modern-day collectors. [ 31 ] [ 32 ] [ 33 ]
Since the 1980s, the popularity of radio-only portable devices declined with the rise of portable audio players which allowed users to carry and listen to tape-recorded music. This began in the late 1970s with boom boxes and portable cassette players such as the Sony Walkman , followed by portable CD players , digital audio players , and smartphones . | https://en.wikipedia.org/wiki/Transistor_radio |
Transit Wireless is an American telecommunication company founded in 2005, based in New York City . It was formed as a consortium of several entities, including Dianet Communications. [ 1 ] It specializes in building wireless communication infrastructure using distributed antenna system networks to provide Wi-Fi and cellular phone coverage in places unreachable by traditional cellular service, such as the underground portions of the New York City Subway . [ 2 ] [ 3 ] In 2010, the company received financial backing from infrastructure company BAI Communications for its first project with the New York City Transit Authority , which consisted of adding wireless access to subway stations . [ 4 ] The architectural framework for the system and the wireless engineering effort were led by RF designer Mark Parr. [ 5 ] The resulting wireless network grew to cover hundreds of stations and serve well over a billion riders annually. [ 6 ] The company is now a subsidiary of BAI Communications. [ 7 ]
| https://en.wikipedia.org/wiki/Transit_Wireless |
Transit of Venus is a play by Canadian playwright Maureen Hunter . It was first produced at the Manitoba Theatre Centre in November 1992 . [ 1 ]
The play is based on the life of Guillaume Le Gentil (1725–1792), a gentleman astronomer .
In the play, he is obsessed with observing the transit of Venus . He leaves Celeste, the girl who loves him, to embark on an expedition to observe it. He returns after six years, having failed to observe the transit. He immediately makes preparations for a new expedition to observe the next transit.
Some artistic license was taken: the real-life Guillaume Le Gentil did not return until after the second transit, remaining overseas during the eight-year interim.
The play was subsequently performed across Canada, as well as by the Royal Shakespeare Company and the BBC .
The play was transformed into an opera of the same name with libretto by Maureen Hunter and music by Victor Davies. This was presented by Manitoba Opera on November 24, 2007.
| https://en.wikipedia.org/wiki/Transit_of_Venus_(play) |
Transiting Exoplanet Survey Satellite ( TESS ) is a space telescope for NASA 's Explorer program , designed to search for exoplanets using the transit method in an area 400 times larger than that covered by the Kepler mission. [ 6 ] It was launched on 18 April 2018, atop a Falcon 9 launch vehicle and was placed into a highly elliptical 13.70-day orbit around the Earth . [ 6 ] [ 2 ] [ 7 ] [ 8 ] [ 9 ] The first light image from TESS was taken on 7 August 2018, and released publicly on 17 September 2018. [ 1 ] [ 10 ] [ 11 ]
In the two-year primary mission, TESS was expected to detect about 1,250 transiting exoplanets orbiting the targeted stars , and an additional 13,000 orbiting stars not targeted but observed. [ 12 ] After the end of the primary mission around 4 July 2020, scientists continued to search its data for more planets, while the extended missions acquire additional data. As of 16 May 2025, TESS had identified 7,643 candidate exoplanets, of which 627 had been confirmed. [ 13 ]
The primary mission objective for TESS was to survey the brightest stars near the Earth for transiting exoplanets over a two-year period. The TESS satellite uses an array of wide-field cameras to perform a survey of 85% of the sky. With TESS, it is possible to study the mass, size, density and orbit of a large cohort of small planets, including a sample of rocky planets in the habitable zones of their host stars. TESS provides prime targets for further characterization by the James Webb Space Telescope (JWST), as well as other large ground-based and space-based telescopes of the future. While previous sky surveys with ground-based telescopes have mainly detected giant exoplanets and the Kepler space telescope has mostly found planets around distant stars that are too faint for characterization, TESS finds many small planets around the nearest stars in the sky. TESS records the nearest and brightest main sequence stars hosting transiting exoplanets, which are the most favorable targets for detailed investigations. [ 14 ] Detailed information about such planetary systems with hot Jupiters makes it possible to better understand the architecture of such systems. [ 15 ] [ 16 ]
Led by the Massachusetts Institute of Technology (MIT) with seed funding from Google , [ 17 ] on 5 April 2013, it was announced that TESS, along with the Neutron Star Interior Composition Explorer (NICER), had been selected by NASA for launch. [ 18 ] [ 19 ] On 18 July 2019, after the first year of operation, the southern portion of the survey was completed, and the northern survey was started. The primary mission ended with the completion of the northern survey on 4 July 2020, which was followed by the first extended mission. The first extended mission concluded in September 2022 and the spacecraft entered its second extended mission [ 20 ] which should last for another three years.
The concept of TESS was first discussed in 2005 by the Massachusetts Institute of Technology (MIT) and the Smithsonian Astrophysical Observatory (SAO). [ 21 ] Development began in 2006, when a design was produced with private funding from individuals, Google, and The Kavli Foundation . [ 22 ] In 2008, MIT proposed that TESS become a full NASA mission and submitted it for the Small Explorer program at Goddard Space Flight Center , [ 22 ] but it was not selected. [ 23 ] It was resubmitted in 2010 as an Explorer program mission, and was approved in April 2013 as a Medium Explorer mission. [ 24 ] [ 22 ] [ 25 ] TESS passed its critical design review (CDR) in 2015, allowing production of the satellite to begin. [ 22 ] While Kepler had cost US$640 million at launch, TESS cost only US$200 million (plus US$87 million for launch). [ 26 ] [ 27 ] The mission was designed to find exoplanets that periodically block part of the light from their host stars, events called transits, by surveying 200,000 of the brightest stars near the Sun. TESS was launched on 18 April 2018, aboard a SpaceX Falcon 9 launch vehicle.
In July 2019, an Extended Mission 2020 to 2022 was approved. [ 28 ] On 3 January 2020, NASA reported that TESS had discovered its first potentially habitable Earth-sized planet, TOI-700 d . [ 29 ]
TESS is designed to carry out the first spaceborne all-sky transiting exoplanet survey. [ 18 ] [ 30 ] It is equipped with four wide-angle telescopes and associated charge-coupled device (CCD) detectors. Science data are transmitted to Earth every two weeks. Full-frame images with an effective exposure time of two hours are transmitted as well, enabling scientists to search for unexpected transient phenomena, such as the optical counterparts to gamma-ray bursts . TESS also hosts a Guest Investigator program, allowing scientists from other organizations to use TESS for their own research. The resources allocated to Guest programs allow an additional 20,000 celestial bodies to be observed. [ 31 ]
TESS uses a novel highly elliptical orbit around the Earth with an apogee approximately at the distance of the Moon and a perigee of 108,000 km (67,000 mi). TESS orbits Earth twice during the time the Moon orbits once, a 2:1 resonance with the Moon. [ 32 ] The orbit is expected to remain stable for a minimum of ten years.
In order to obtain unobstructed imagery of both the northern and southern hemispheres of the sky, TESS utilizes a 2:1 lunar resonant orbit called P/2, an orbit that has never been used before (although Interstellar Boundary Explorer (IBEX) uses a similar P/3 orbit). The highly elliptical orbit has a 375,000 km (233,000 mi) apogee, timed to be positioned approximately 90° away from the position of the Moon to minimize its destabilizing effect . This orbit should remain stable for decades and will keep TESS's cameras in a stable temperature range. The orbit is entirely outside the Van Allen belts to avoid radiation damage to TESS, and most of the orbit is spent far outside the belts. Every 13.70 days at its perigee of 108,000 km (67,000 mi), TESS downlinks to Earth over a period of approximately 3 hours the data it has collected during the just finished orbit. [ 33 ]
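The quoted figures are mutually consistent, which can be checked with Kepler's third law: for a two-body Earth orbit, the period fixes the semi-major axis, and hence the apogee once the perigee is known. (The constants below are standard; the perigee value is taken from this article.)

```python
import math

MU_EARTH = 398_600.4418          # km^3/s^2, Earth's gravitational parameter
T = 13.70 * 86400                # orbital period in seconds (13.70 days)
perigee = 108_000                # km, geocentric perigee distance

# Kepler's third law: T = 2*pi*sqrt(a^3/mu)  =>  a = (mu*T^2/(4*pi^2))^(1/3)
a = (MU_EARTH * T**2 / (4 * math.pi**2)) ** (1 / 3)
apogee = 2 * a - perigee         # since r_apogee + r_perigee = 2a

print(f"semi-major axis ≈ {a:,.0f} km, apogee ≈ {apogee:,.0f} km")
```

The apogee comes out near 375,000 km, roughly the Moon's distance, consistent with the 2:1 lunar resonance described above.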
TESS's two-year all-sky survey focused on nearby G-, K-, and M-type stars with apparent magnitudes brighter than 12. [ 34 ] Approximately 500,000 stars were to be studied, including the 1,000 closest red dwarfs across the whole sky, [ 35 ] [ 36 ] an area 400 times larger than that covered by the Kepler mission. TESS was expected to find more than 3,000 transiting exoplanet candidates, including 500 Earth-sized planets and super-Earths . [ 35 ] Of those discoveries, an estimated 20 were expected to be super-Earths located in the habitable zone around a star. [ 37 ] The stated goal of the mission was to determine the masses of at least 50 Earth-sized planets (at most 4 times the Earth's radius). Most detected exoplanets were expected to be between 30 and 300 light-years away.
The survey was broken up into 26 observation sectors, each sector being 24° × 96°, with an overlap of sectors at the ecliptic poles to allow additional sensitivity toward smaller and longer-period exoplanets in that region of the celestial sphere. The spacecraft spends two 13.70-day orbits observing each sector, mapping the southern hemisphere of the sky in its first year of operation and the northern hemisphere in its second year. [ 38 ] The cameras actually take images every 2 seconds, but the raw images would represent far more data volume than can be stored or downlinked. To deal with this, cutouts around 15,000 selected stars (per orbit) are coadded over a 2-minute period and saved on board for downlink, while full-frame images are coadded over a 30-minute period and saved for downlink. The actual data downlinks occur every 13.70 days near perigee. [ 39 ] This means that over the two years, TESS continuously surveys 85% of the sky in 27-day stretches, with certain parts covered by multiple runs. The survey methodology was designed so that the area observed essentially continuously for an entire year (351 observation days), which makes up about 5% of the entire sky, encompasses the regions of sky near the ecliptic poles that are observable at any time of year with the James Webb Space Telescope (JWST). [ 40 ]
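The coadding arithmetic can be checked directly from the numbers in this article (this is back-of-the-envelope accounting, not the actual flight pipeline):

```python
# Raw and saved cadences, in seconds, as described in the survey section.
frame = 2              # raw CCD image cadence
postage_stamp = 120    # cadence of saved target cutouts (2 minutes)
ffi = 1800             # cadence of saved full-frame images (30 minutes)

frames_per_stamp = postage_stamp // frame   # raw frames coadded per 2-min stack
frames_per_ffi = ffi // frame               # raw frames coadded per 30-min FFI

sector_days = 2 * 13.70                     # two 13.70-day orbits per sector
ffis_per_sector = sector_days * 86400 / ffi # full-frame images per sector

print(frames_per_stamp, frames_per_ffi, round(ffis_per_sector))
```

Each saved 2-minute cutout thus stacks 60 raw frames, each 30-minute full-frame image stacks 900, and a sector yields on the order of 1,300 full-frame images.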
In October 2019, Breakthrough Listen started a collaboration with scientists from the TESS team to look for signs of advanced extraterrestrial life. Thousands of new planets found by TESS will be scanned for "technosignatures" by Breakthrough Listen partner facilities across the globe. Data from TESS monitoring of stars will also be searched for anomalies. [ 41 ]
The TESS team also plans to use a 30-minute observation cadence for full-frame images, which has been noted for imposing a hard Nyquist limit that can be problematic for asteroseismology of stars. [ 42 ] Asteroseismology is the science that studies the internal structure of stars by the interpretation of their frequency spectra. Different oscillation modes penetrate to different depths inside the star. The Kepler and PLATO observatories are also intended for asteroseismology. [ 43 ]
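The limit in question follows from the sampling theorem: a cadence of Δt lets oscillation frequencies be recovered unambiguously only up to ν_Nyq = 1/(2Δt). A quick calculation shows why a 30-minute cadence is restrictive (the solar p-mode figure in the comment is an added contextual assumption, not from this article):

```python
# Nyquist frequency, in microhertz, implied by a sampling cadence in seconds.
def nyquist_uhz(cadence_s):
    return 1e6 / (2 * cadence_s)

for cadence in (120, 600, 1800):    # TESS's 2-min, 10-min, and 30-min cadences
    print(f"{cadence:>5d} s -> {nyquist_uhz(cadence):7.1f} µHz")
# 1800 s gives about 278 µHz, far below the ~3000 µHz region where
# solar-like p-mode oscillations peak.
```

The shorter cadences of the extended missions push this limit upward, which is part of their value for asteroseismology.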
During the 27 month First Extended Mission, data collection was slightly changed: [ 44 ]
During the second extended mission, [ 45 ] the full-frame image cadence was further shortened from every 10 minutes to every 200 seconds, the number of 2-minute cadence targets was reduced to ~8,000 per sector, and the number of 20-second cadence targets was increased to ~2,000 per sector. [ 46 ]
In December 2014, SpaceX was awarded the contract to launch TESS in August 2017, [ 47 ] for a total contract value of US$87 million. [ 48 ] The 362 kg (798 lb) spacecraft was originally scheduled to launch on 20 March 2018, but this was pushed back by SpaceX to allow additional time to prepare the launch vehicle and meet NASA launch service requirements. [ 49 ] A static fire of the Falcon 9 rocket was completed on 11 April 2018, at approximately 18:30 UTC. [ 50 ] The launch was postponed again from 16 April 2018, [ 7 ] and TESS was eventually launched on a SpaceX Falcon 9 launch vehicle from the SLC-40 launch site at Cape Canaveral Air Force Station (CCAFS) on 18 April 2018. [ 8 ] [ 9 ]
The Falcon 9 launch sequence included a 149-second burn by the first stage, followed by a 6-minute second stage burn. Meanwhile, the first-stage booster performed controlled-reentry maneuvers and successfully landed on the autonomous drone ship Of Course I Still Love You . An experimental water landing was performed for the fairing, [ 51 ] as part of SpaceX's attempt to develop fairing reusability .
After coasting for 35 minutes, the second stage performed a final 54-second burn that placed TESS into a supersynchronous transfer orbit of 200 × 270,000 km (120 × 167,770 mi) at an inclination of 28.50°. [ 51 ] [ 52 ] The second stage released the payload, after which the stage itself was placed in a heliocentric orbit .
In 2013, Orbital Sciences Corporation received a four-year, US$75 million contract to build TESS for NASA. [ 53 ] TESS uses an Orbital Sciences LEOStar-2 satellite bus , capable of three-axis stabilization using four hydrazine thrusters plus four reaction wheels providing better than three arcsecond fine spacecraft pointing control. Power is provided by two single-axis solar arrays generating 400 watts . A Ka-band dish antenna provides a 100 Mbit/s science downlink. [ 35 ] [ 54 ]
Once injected into the initial orbit by the Falcon 9 second stage , the spacecraft performed four additional independent burns that placed it into a lunar flyby orbit. [ 55 ] On 17 May 2018, the spacecraft underwent a gravity assist by the Moon at 8,253.5 km (5,128.5 mi) above the surface, [ 56 ] and performed the final period adjustment burn on 30 May 2018. [ 57 ] It achieved an orbital period of 13.65 days in the desired 2:1 resonance with the Moon, at 90° phase offset to the Moon at apogee, which is expected to be a stable orbit for at least 20 years, thus requiring very little fuel to maintain. [ 8 ] The entire maneuvering phase was expected to take a total of two months, and put the craft in an eccentric orbit (17–75 R 🜨 ) at a 37° inclination. The total delta-v budget for orbit maneuvers was 215 m/s (710 ft/s), which is 80% of the mission's total available reserves. If TESS receives an on-target or slightly above nominal orbit insertion by the Falcon 9, a theoretical mission duration in excess of 15 years would be possible from a consumables standpoint. [ 52 ]
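As a rough consistency check, Kepler's third law relates the 13.65-day period quoted above to the orbit's semi-major axis; the sketch below uses textbook values for Earth's gravitational parameter and mean radius (assumptions not taken from the text):

```python
import math

# Kepler's third law: a^3 = GM * T^2 / (4 pi^2)
GM_EARTH = 3.986004418e14      # m^3/s^2, standard gravitational parameter
R_EARTH = 6.371e6              # m, mean Earth radius
T = 13.65 * 86400              # orbital period in seconds

a = (GM_EARTH * T**2 / (4 * math.pi**2)) ** (1 / 3)
a_re = a / R_EARTH             # semi-major axis in Earth radii

# A semi-major axis of roughly 38 Earth radii is consistent with a highly
# eccentric orbit spanning the perigee/apogee range quoted in the text.
print(round(a_re, 1))
```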
The first light image was made on 7 August 2018, and released publicly on 17 September 2018. [ 1 ] [ 10 ] [ 11 ] [ 58 ]
TESS completed its commissioning phase at the end of July and the science phase officially started on 25 July 2018. [ 59 ]
For the first two years of operation TESS monitored both the southern (year 1) and northern (year 2) celestial hemispheres . During its nominal mission TESS tiles the sky in 26 separate segments, with a 27.4-day observing period per segment. [ 38 ] The first southern survey was completed in July 2019. The first northern survey finished in July 2020.
A 27-month First Extended Mission ran until September 2022. A second extended mission will run for approximately three additional years.
The sole instrument on TESS is a package of four wide-field-of-view charge-coupled device (CCD) cameras. Each camera features four low-noise, low-power 4-megapixel CCDs created by MIT Lincoln Laboratory. The four CCDs are arranged in a 2 × 2 detector array for a total of 16 megapixels per camera and 16 CCDs for the entire instrument. Each camera has a 24° × 24° field of view, a 100 mm (3.9 in) effective pupil diameter, a lens assembly with seven optical elements, and a bandpass range of 600 to 1000 nm. [ 35 ] [ 3 ] The TESS lenses have a combined field of view of 24° × 96° (2,300 deg², around 5% of the entire sky) and a focal ratio of f/1.4. The ensquared energy, the fraction of the total energy of the point-spread function that is within a square of the given dimensions centered on the peak, is 50% within 15 × 15 μm and 90% within 60 × 60 μm. [ 3 ] For comparison, Kepler's primary mission only covered an area of the sky measuring 105 deg², though the K2 extension has covered many such areas for shorter times.
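The quoted sky fraction can be reproduced from the field-of-view figures; a minimal sketch treating the 24° × 96° strip as a flat area (an approximation):

```python
import math

# Combined TESS field of view as a fraction of the whole sky.
# Treating the 24 deg x 96 deg strip as a flat area is an approximation;
# it reproduces the ~2,300 deg^2 figure quoted in the text.
fov_deg2 = 24 * 96                                   # 2304 deg^2
full_sky_deg2 = 4 * math.pi * (180 / math.pi) ** 2   # ~41,253 deg^2

fraction = fov_deg2 / full_sky_deg2                  # ~5.6% of the sky
print(fov_deg2, round(100 * fraction, 1))
```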
The four telescopes in the assembly each have a 10.5-cm diameter lens entrance aperture and an f/1.4 focal ratio, with a total of seven lenses in the optical train. [ 60 ]
The TESS ground system is divided between eight sites around the United States. These include the Space Network and the Jet Propulsion Laboratory's NASA Deep Space Network for command and telemetry, Orbital ATK's Mission Operations Center, the Massachusetts Institute of Technology's Payload Operations Center, the Ames Research Center's Science Processing Operations Center, the Goddard Space Flight Center's Flight Dynamics Facility, the Smithsonian Astrophysical Observatory's TESS Science Office, and the Mikulski Archive for Space Telescopes (MAST). [ 61 ]
One of the issues facing the development of this type of instrument is having an ultra-stable light source to test on. In 2015, a group at the University of Geneva made a breakthrough in the development of a stable light source. While this instrument was created to support ESA's CHEOPS exoplanet observatory, one was also ordered by the TESS program. [ 62 ] Although both observatories plan to look at bright nearby stars using the transit method, CHEOPS is focused on collecting more data on known exoplanets, including those found by TESS and other survey missions. [ 63 ]
Current mission results as of 16 May 2025: 627 confirmed exoplanets discovered by TESS, with 7,643 candidate planets still awaiting confirmation or rejection as false positives by the scientific community. [ 64 ] TESS team partners include the Massachusetts Institute of Technology, the Kavli Institute for Astrophysics and Space Research, NASA's Goddard Space Flight Center, MIT's Lincoln Laboratory, Orbital ATK, NASA's Ames Research Center, the Harvard-Smithsonian Center for Astrophysics, and the Space Telescope Science Institute.
TESS started science operations on 25 July 2018. [ 65 ] The first announced finding from the mission was the observation of comet C/2018 N1 . [ 65 ]
The first exoplanet detection announcement came on 18 September 2018: the discovery of a super-Earth in the Pi Mensae system orbiting the star every 6 days, in addition to a known super-Jupiter orbiting the same star every 5.9 years. [ 66 ]
On 20 September 2018, the discovery of an ultra-short-period planet, slightly larger than Earth and orbiting the red dwarf LHS 3844, was announced. With an orbital period of 11 hours, LHS 3844 b has one of the shortest periods known. It orbits its star at a distance of 932,000 km (579,000 mi). LHS 3844 b is also one of the closest known exoplanets to Earth, at a distance of 14.9 parsecs. [ 67 ]
TESS's third discovered exoplanet is HD 202772 Ab , a hot Jupiter orbiting the brighter component of the visual binary star HD 202772 , located in the constellation Capricornus at a distance of about 480 light-years from Earth. The discovery was announced on 5 October 2018. HD 202772 Ab orbits its host star once every 3.3 days. It is an inflated hot Jupiter, and a rare example of hot Jupiters around evolved stars. It is also one of the most strongly irradiated planets known, with an equilibrium temperature of 2,100 K (1,830 °C; 3,320 °F). [ 68 ]
On 15 April 2019, TESS's first discovery of an Earth-sized planet was reported. HD 21749 c is a planet described as "likely rocky", with about 89% of Earth's diameter, and orbits the K-type main-sequence star HD 21749 in about 8 days. The planet's surface temperature is estimated to be as high as 427 °C. Both known planets in the system, HD 21749 b and HD 21749 c, were discovered by TESS. HD 21749 c represents the 10th confirmed planet discovery by TESS. [ 69 ]
Data on exoplanet candidates continue to be made available at MAST. [ 70 ] As of 20 April 2019, the total number of candidates on the list was up to 335. Besides candidates identified as previously discovered exoplanets, this list also includes ten newly discovered exoplanets, including the five mentioned above. Forty-four of the candidates from Sector 1 in this list were selected for follow-up observations by the TESS Follow-Up Program (TFOP), which aims to aid the discovery of 50 planets with a planetary radius R < 4 R🜨 through repeated observations. [ 71 ] The list of candidate exoplanets continues to grow as additional results are published on the same MAST page.
On 18 July 2019, after the southern portion of the survey was completed in the first year of operation, TESS turned its cameras to the northern sky. By this time it had discovered 21 planets and over 850 candidate exoplanets. [ 72 ]
On 23 July 2019, the discovery of the young exoplanet DS Tucanae Ab (HD 222259 Ab) in the ~45-Myr-old Tucana-Horologium young moving group was published. TESS first observed the planet in November 2018 and it was confirmed in March 2019. The young planet is larger than Neptune, but smaller than Saturn. The system is bright enough to follow up with radial velocity and transmission spectroscopy. [ 73 ] [ 74 ] ESA's CHEOPS mission will observe the transits of the young exoplanet DS Tuc Ab. A team of scientists was granted 23.4 orbits in the first Announcement of Opportunity (AO-1) for the CHEOPS Guest Observers (GO) Programme to characterize the planet. [ 75 ]
On 31 July 2019, the discovery of exoplanets around the M-type dwarf star Gliese 357, at a distance of 31 light-years from Earth, was announced. [ 76 ] TESS directly observed the transit of GJ 357 b, a "hot Earth" with an equilibrium temperature of around 250 °C. Follow-up ground observations and analyses of historic data led to the discovery of GJ 357 c and GJ 357 d. While GJ 357 b and GJ 357 c are too close to the star to be habitable, GJ 357 d resides at the outer edge of the star's habitable zone and may possess habitable conditions if it has an atmosphere. With a mass of at least 6.1 M🜨, it is classified as a super-Earth. [ 76 ]
As of September 2019, over 1,000 TESS Objects of Interest (TOI) have been listed in the public database, [ 77 ] at least 29 of which are confirmed planets; about 20 of these fall within the mission's stated goal of Earth-sized planets (<4 Earth radii). [ 78 ]
On 26 September 2019, it was announced that TESS observed its first tidal disruption event (TDE), called ASASSN-19bt . The TESS data revealed that ASASSN-19bt began to brighten on 21 January 2019, ~8.3 days before the discovery by ASAS-SN . [ 79 ] [ 80 ]
On 6 January 2020, NASA reported the discovery of TOI-700 d , the first Earth-sized exoplanet in the habitable zone discovered by the TESS. The exoplanet orbits the star TOI-700 100 light-years away in the Dorado constellation . [ 29 ] The TOI-700 system contains two other planets: TOI-700 b, another Earth-sized planet, and TOI-700 c, a super-Earth. This system is unique in that the larger planet is found between the two smaller planets. It is currently unknown how this arrangement of planets came to be, whether these planets formed in this order or if the larger planet migrated to its current orbit. [ 81 ] On the same day, NASA announced that astronomers used TESS data to show that Alpha Draconis is an eclipsing binary star . [ 82 ]
NASA also announced the discovery of TOI-1338 b, the first circumbinary planet discovered by TESS. TOI-1338 b is around 6.9 times larger than Earth, or between the sizes of Neptune and Saturn. It lies in a system 1,300 light-years away in the constellation Pictor. The stars in the system form an eclipsing binary, which occurs when the stellar companions circle each other in our plane of view. One is about 10% more massive than the Sun, while the other is cooler, dimmer and only one-third the Sun's mass. TOI-1338 b's transits are irregular, occurring every 93 to 95 days, and vary in depth and duration thanks to the orbital motion of its stars. TESS only sees the transits crossing the larger star; the transits of the smaller star are too faint to detect. Although the planet transits irregularly, its orbit is stable for at least the next 10 million years. The orbit's angle to us, however, changes enough that the planet's transits will cease after November 2023 and resume eight years later. [ 83 ]
On 25 January 2021, a team led by astrochemist Tansu Daylan, with the help of two high-school interns as part of the Science Research Mentoring Program at Harvard & MIT, discovered and validated four extrasolar planets, composed of one super-Earth and three sub-Neptunes, hosted by the bright, nearby, Sun-like star HD 108236. The two high schoolers, 18-year-old Jasmine Wright of Bedford High School in Bedford, Massachusetts, and 16-year-old Kartik Pinglé of Cambridge Rindge and Latin School in Cambridge, Massachusetts, are reported to be the youngest individuals in history to discover a planet, let alone four. [ 84 ] [ 85 ]
On 27 January 2021, several news agencies reported that a team using TESS had determined that TIC 168789840, a stellar system with six stars in three binary pairs, is oriented so that astronomers can observe the eclipses of all the stars. [ 86 ] [ 87 ] [ 88 ] [ 89 ] [ 90 ] It is the first six-star system of its kind.
In March 2021, NASA announced that TESS found 2200 exoplanet candidates. [ 91 ] By the end of 2021, TESS had discovered over 5000 candidates. [ 92 ]
On 17 May 2021, an international team of scientists, including researchers from NASA's Jet Propulsion Laboratory and the University of New Mexico, reported the space telescope's first discovery of a Neptune-sized exoplanet inside a habitable zone, TOI-1231 b, confirmed with a ground-based telescope. The planet orbits a nearby red dwarf star 90 light-years away in the Vela constellation. [ 93 ]
The TESS Objects of Interest (TOI) are assigned by the TESS team [ 94 ] and the Community TOIs (CTOI) are assigned by independent researchers. [ 95 ] The primary mission of TESS produced 2241 TOIs. [ 94 ] Other small and large collaborations of researchers try to confirm the TOIs and CTOIs, or try to find new CTOIs.
Some of the named collaborations that search exclusively for TESS planets are:
Collaborations that currently have a smaller number of discovery papers:
The TESS community is also producing software and programs to help validate the planet candidates, such as TRICERATOPS, [ 102 ] DAVE, [ 103 ] Lightkurve, [ 104 ] Eleanor [ 105 ] and Planet Patrol . [ 106 ]
TESS is featured accurately in the 2018 film Clara . | https://en.wikipedia.org/wiki/Transiting_Exoplanet_Survey_Satellite |
Transition-metal allyl complexes are coordination complexes with allyl and its derivatives as ligands. Allyl is the radical with the connectivity CH₂CHCH₂, although as a ligand it is usually viewed as the allyl anion CH₂=CH−CH₂⁻, which is described by two equivalent resonance structures.
The allyl ligand is common in organometallic chemistry. Usually, allyl ligands bind to metals via all three carbon atoms, the η³-binding mode. The η³-allyl group is classified as an LX-type ligand in the Green LXZ ligand classification scheme, serving as a 3 e⁻ donor under neutral electron counting and a 4 e⁻ donor under ionic electron counting.
Commonly, allyl ligands occur in mixed-ligand complexes. Examples include (η³-allyl)Mn(CO)₄ and CpPd(allyl).
Substituents on the allyl group are also common, e.g. 2-methallyl. [ 1 ]
1,3- Dienes such as butadiene and isoprene dimerize in the coordination spheres of some metals, giving chelating bis(allyl) complexes. Such complexes also arise from ring-opening of divinylcyclobutane. Chelating bis(allyl) complexes are intermediates in the metal-catalyzed dimerization of butadiene to give vinylcyclohexene and cycloocta-1,5-diene . [ 4 ]
Complexes with η¹-allyl ligands (classified as X-type ligands) are also known. One example is CpFe(CO)₂(η¹-C₃H₅), in which only the methylene group is attached to the Fe centre (i.e., it has the connectivity [Fe]–CH₂–CH=CH₂). As is the case for many other η¹-allyl complexes, the monohapticity of the allyl ligand in this species is enforced by the 18-electron rule, since CpFe(CO)₂(η¹-C₃H₅) is already an 18-electron complex, while an η³-allyl ligand would result in an electron count of 20 and violate the 18-electron rule. Such complexes can convert to the η³-allyl derivatives by dissociation of a neutral (two-electron) ligand L. For CpFe(CO)₂(η¹-C₃H₅), dissociation of L = CO occurs under photochemical conditions: [ 5 ]
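The 18- versus 20-electron comparison above can be tallied explicitly; a minimal sketch using the ionic counting convention (the per-ligand electron donations below are standard textbook values):

```python
# Ionic-convention electron count for CpFe(CO)2(eta1-C3H5).
# Fe(II) is d6; Cp- donates 6 e-, each CO donates 2 e-,
# an eta1-allyl anion donates 2 e-, an eta3-allyl anion 4 e-.
def electron_count(metal_d, ligand_donations):
    return metal_d + sum(ligand_donations)

eta1_count = electron_count(6, [6, 2, 2, 2])   # Cp, CO, CO, eta1-allyl
eta3_count = electron_count(6, [6, 2, 2, 4])   # same, but eta3-allyl

# 18 vs 20: the eta3 mode would violate the 18-electron rule.
print(eta1_count, eta3_count)  # 18 20
```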
Allyl complexes are often generated by oxidative addition of allylic halides to low-valent metal complexes. This route is used to prepare (allyl)₂Ni₂Cl₂: [ 1 ] [ 6 ]
A similar oxidative addition involves the reaction of allyl bromide with diiron nonacarbonyl. [ 7 ] The oxidative addition route has also been used to prepare Mo(II) allyl complexes: [ 8 ]
Other methods of synthesis involve addition of nucleophiles to η⁴-diene complexes and hydride abstraction from alkene complexes. [ 3 ] For example, palladium(II) chloride attacks alkenes to give first an alkene complex, but then abstracts hydrogen to give a dichlorohydridopalladium alkene complex, and then eliminates hydrogen chloride: [ 9 ]
One allyl complex can transfer an allyl ligand to another complex. [ 10 ] An anionic metal complex can displace a halide to give an allyl complex. However, if the metal center is coordinated to 6 or more other ligands, the allyl may end up "trapped" as a σ (η¹-) ligand. In such circumstances, heating or irradiation can dislodge another ligand to free up a coordination site for the alkene–metal bond. [ 11 ]
In principle, salt metathesis reactions can install an allyl ligand from allylmagnesium bromide or a related allyllithium reagent. [ 3 ] However, the carbanion salt precursors require careful synthesis, as allyl halides readily undergo Wurtz coupling. Mercury and tin allyl halides appear to avoid this side reaction. [ 12 ]
Benzyl and allyl ligands often exhibit similar chemical properties. Benzyl ligands commonly adopt either η¹ or η³ bonding modes. The interconversion reactions parallel those of η¹- or η³-allyl ligands:
In all bonding modes, the benzylic carbon atom is more strongly attached to the metal, as indicated by M–C bond distances, which differ by ca. 0.2 Å in η³-bonded complexes. [ 14 ] X-ray crystallography demonstrates that the benzyl ligands in tetrabenzylzirconium are highly flexible. One polymorph features four η²-benzyl ligands, whereas another polymorph has two η¹- and two η²-benzyl ligands. [ 13 ]
Allyl complexes are often discussed in academic research, [ 15 ] [ 16 ] [ 17 ] [ 18 ] but few have commercial applications. A popular allyl complex is allyl palladium chloride . [ 19 ]
The reactivity of allyl ligands depends on the overall complex, although the influence of the metal center can be roughly summarized as [ 20 ]
Such complexes are usually electrophilic (i.e., react with nucleophiles), but nickel allyl complexes are usually nucleophilic (i.e., react with electrophiles). [ 21 ] In the former case, the addition may occur at unusual locations, and can be useful in organic synthesis. [ 22 ] | https://en.wikipedia.org/wiki/Transition-metal_allyl_complex |
Transition refers to a computer-science paradigm, in the context of communication systems, that describes the change of communication mechanisms, i.e., functions of a communication system, in particular service and protocol components. In a transition, communication mechanisms within a system are replaced by functionally comparable mechanisms with the aim of ensuring the highest possible quality, e.g., as captured by the quality of service .
Transitions enable communication systems to adapt to changing conditions during runtime. This change in conditions can, for example, be a rapid increase in the load on a certain service that may be caused, e.g., by large gatherings of people with mobile devices. A transition often impacts multiple mechanisms at different communication layers of a layered architecture .
Mechanisms are given as conceptual elements of a networked communication system and are linked to specific functional units, for example, as a service or protocol component. In some cases, a mechanism can also comprise an entire protocol. For example, on the transmission layer, LTE can be regarded as such a mechanism. Following this definition, there exist numerous communication mechanisms that are partly equivalent in their basic functionality, such as Wi-Fi , Bluetooth and Zigbee for local wireless networks and UMTS and LTE for broadband wireless connections. For example, LTE and Wi-Fi have equivalent basic functionality, but they are technologically significantly different in their design and operation. Mechanisms affected by transitions are often components of a protocol or service. For example, in the case of video streaming, different video encodings can be used depending on the available data transmission rate. These changes are controlled and implemented by transitions; a research example is a context-aware video adaptation service to support mobile video applications. [ 1 ] Through analyzing the current processes in a communication system, it is possible to determine which transitions need to be executed at which communication layer in order to meet the quality requirements. In order for communication systems to adapt to the prevailing conditions, architectural approaches of self-organizing, adaptive systems can be used, such as the MAPE cycle [ 2 ] (Monitor-Analyze-Plan-Execute). This central concept of Autonomic Computing can be used to determine the state of the communication system, to analyze the monitoring data and to plan and execute the necessary transition(s). A central goal is that users do not consciously perceive a transition while running applications and that the functionality of the used services is perceived as smooth and fluid.
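As an illustration only, the MAPE cycle described above might be sketched as a simple control loop; the mechanism names, load values, and threshold below are hypothetical and not taken from the text:

```python
# Minimal MAPE (Monitor-Analyze-Plan-Execute) loop sketching a transition
# between two functionally equivalent mechanisms. All names, values, and
# thresholds are invented for illustration.
def monitor(load_samples):
    return sum(load_samples) / len(load_samples)      # observed mean load

def analyze(mean_load, threshold=0.8):
    return mean_load > threshold                      # is the system overloaded?

def plan(overloaded, current):
    alternatives = {"wifi": "lte", "lte": "wifi"}     # functionally comparable
    return alternatives[current] if overloaded else current

def execute(current, target):
    if target != current:
        print(f"transition: {current} -> {target}")   # swap the mechanism
    return target

mechanism = "wifi"
mean = monitor([0.7, 0.9, 0.95])                      # monitoring data
mechanism = execute(mechanism, plan(analyze(mean), mechanism))
```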
The study of new and fundamental design methods, models and techniques that enable automated, coordinated and cross-layer transitions between functionally similar mechanisms within a communication system is the main goal of a collaborative research center funded by the German research foundation (DFG). The DFG collaborative research center 1053 MAKI - Multi-mechanism Adaptation for the future Internet - focuses on research questions in the following areas: (i) Fundamental research on transition methods, (ii) Techniques for adapting transition-capable communication systems on the basis of achieved and targeted quality, and (iii) specific and exemplary transitions in communication systems as regarded from different technical perspectives.
A formalization of the concept of transitions that captures the features and relations within a communication system, to express and optimize the decision-making process associated with such a system, is given in [ 3 ]. The associated building blocks comprise (i) Dynamic Software Product Lines , (ii) Markov Decision Processes and (iii) Utility Design. While Dynamic Software Product Lines provide a method to concisely capture a large configuration space and to specify run-time variability of adaptive systems, Markov Decision Processes provide a mathematical tool to define and plan transitions between available communication mechanisms. Finally, utility functions quantify the performance of individual configurations of the transition-based communication system and provide the means to optimize the performance of such a system.
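A toy illustration of the Markov-decision-process building block mentioned above; the states, utilities, and transition probabilities here are invented for the sketch and carry no empirical meaning:

```python
# Toy MDP for transition planning: two load states, two mechanisms.
# Value iteration finds the best mechanism per state. All numbers are
# hypothetical illustration values.
states = ["low", "high"]
actions = ["wifi", "lte"]

# P[s][a] = {next_state: probability}; U[s][a] = immediate utility.
P = {
    "low":  {"wifi": {"low": 0.9, "high": 0.1}, "lte": {"low": 0.9, "high": 0.1}},
    "high": {"wifi": {"low": 0.2, "high": 0.8}, "lte": {"low": 0.4, "high": 0.6}},
}
U = {
    "low":  {"wifi": 1.0, "lte": 0.6},   # under low load, Wi-Fi is cheaper
    "high": {"wifi": 0.2, "lte": 0.8},   # under high load, LTE performs better
}

gamma = 0.9
V = {s: 0.0 for s in states}
for _ in range(200):  # value iteration to (near) convergence
    V = {s: max(U[s][a] + gamma * sum(p * V[t] for t, p in P[s][a].items())
                for a in actions)
         for s in states}

# Greedy policy: which mechanism to run in each load state.
policy = {s: max(actions,
                 key=lambda a: U[s][a]
                 + gamma * sum(p * V[t] for t, p in P[s][a].items()))
          for s in states}
print(policy)  # {'low': 'wifi', 'high': 'lte'}
```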
Applications of the idea of transitions have found their way to wireless sensor networks [ 4 ] and mobile networks, [ 5 ] distributed reactive programming, [ 6 ] [ 7 ] WiFi firmware modification, [ 8 ] planning of autonomic computing systems, [ 9 ] analysis of CDNs , [ 10 ] flexible extensions of the ISO OSI stack, [ 11 ] 5G mmWave vehicular communications, [ 12 ] [ 13 ] the analysis of MapReduce -like parallel systems, [ 14 ] scheduling of Multipath TCP , [ 15 ] adaptivity for beam training in 802.11ad , [ 16 ] operator placement in dynamic user environments, [ 17 ] DASH video player analysis, [ 18 ] adaptive bitrate streaming [ 19 ] and complex event processing on mobile devices. [ 20 ] | https://en.wikipedia.org/wiki/Transition_(computer_science) |
The transition dipole moment or transition moment, usually denoted $\mathbf{d}_{nm}$ for a transition between an initial state, $m$, and a final state, $n$, is the electric dipole moment associated with the transition between the two states. In general the transition dipole moment is a complex vector quantity that includes the phase factors associated with the two states. Its direction gives the polarization of the transition, which determines how the system will interact with an electromagnetic wave of a given polarization, while the square of the magnitude gives the strength of the interaction due to the distribution of charge within the system. The SI unit of the transition dipole moment is the coulomb-meter (C·m); a more conveniently sized unit is the debye (D).
For a transition where a single charged particle changes state from $|\psi_a\rangle$ to $|\psi_b\rangle$, the transition dipole moment (t.d.m.) is
$$(\text{t.d.m. } a \rightarrow b) = \langle \psi_b | (q\mathbf{r}) | \psi_a \rangle = q \int \psi_b^*(\mathbf{r})\, \mathbf{r}\, \psi_a(\mathbf{r})\, d^3\mathbf{r}$$
where $q$ is the particle's charge, $\mathbf{r}$ is its position, and the integral is over all space ($\int d^3\mathbf{r}$ is shorthand for $\iiint dx\, dy\, dz$). The transition dipole moment is a vector; for example, its $x$-component is
$$(x\text{-component of t.d.m. } a \rightarrow b) = \langle \psi_b | (qx) | \psi_a \rangle = q \int \psi_b^*(\mathbf{r})\, x\, \psi_a(\mathbf{r})\, d^3\mathbf{r}$$
In other words, the transition dipole moment can be viewed as an off-diagonal matrix element of the position operator, multiplied by the particle's charge.
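As a worked example (a standard textbook system, not from the text): for a particle in a one-dimensional infinite square well of width L, with wavefunctions psi_n(x) = sqrt(2/L) sin(n pi x / L), the matrix element ⟨ψ₂|x|ψ₁⟩ has the analytic value −16L/(9π²), which direct numerical integration reproduces:

```python
import math

# <psi_2| x |psi_1> for a particle in a 1D infinite square well of width L,
# psi_n(x) = sqrt(2/L) sin(n pi x / L). Analytic value: -16 L / (9 pi^2).
# Numerical check with the trapezoid rule.
L, N = 1.0, 100_000
dx = L / N
xs = [i * dx for i in range(N + 1)]

def psi(n, x):
    return math.sqrt(2 / L) * math.sin(n * math.pi * x / L)

integrand = [psi(2, x) * x * psi(1, x) for x in xs]
numeric = dx * (sum(integrand) - 0.5 * (integrand[0] + integrand[-1]))

analytic = -16 * L / (9 * math.pi ** 2)
print(round(numeric, 6), round(analytic, 6))  # both ~ -0.180127
```

Multiplying by the particle's charge q then gives the transition dipole moment itself.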
When the transition involves more than one charged particle, the transition dipole moment is defined in an analogous way to an electric dipole moment: the sum of the positions, weighted by charge. If the $i$th particle has charge $q_i$ and position operator $\mathbf{r}_i$, then the transition dipole moment is
$$(\text{t.d.m. } a \rightarrow b) = \langle \psi_b | (q_1\mathbf{r}_1 + q_2\mathbf{r}_2 + \cdots) | \psi_a \rangle = \int \psi_b^*(\mathbf{r}_1, \mathbf{r}_2, \ldots)\, (q_1\mathbf{r}_1 + q_2\mathbf{r}_2 + \cdots)\, \psi_a(\mathbf{r}_1, \mathbf{r}_2, \ldots)\, d^3\mathbf{r}_1\, d^3\mathbf{r}_2 \cdots$$
For a single, nonrelativistic particle of mass $m$, in zero magnetic field, the transition dipole moment between two energy eigenstates $\psi_a$ and $\psi_b$ can alternatively be written in terms of the momentum operator, using the relationship [ 1 ]
$$\langle \psi_a | \mathbf{r} | \psi_b \rangle = \frac{i\hbar}{(E_b - E_a)m} \langle \psi_a | \mathbf{p} | \psi_b \rangle$$
This relationship can be proven starting from the commutation relation between position $x$ and the Hamiltonian $H$:
$$[H, x] = \left[\frac{p^2}{2m} + V(x, y, z),\, x\right] = \left[\frac{p^2}{2m},\, x\right] = \frac{1}{2m}\left(p_x[p_x, x] + [p_x, x]p_x\right) = \frac{-i\hbar p_x}{m}$$
Then
$$\langle \psi_a | (Hx - xH) | \psi_b \rangle = \frac{-i\hbar}{m} \langle \psi_a | p_x | \psi_b \rangle$$
However, assuming that $\psi_a$ and $\psi_b$ are energy eigenstates with energies $E_a$ and $E_b$, we can also write
$$\langle \psi_a | (Hx - xH) | \psi_b \rangle = (\langle \psi_a | H) x | \psi_b \rangle - \langle \psi_a | x (H | \psi_b \rangle) = (E_a - E_b)\langle \psi_a | x | \psi_b \rangle$$
Similar relations hold for $y$ and $z$, which together give the relationship above.
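The position–momentum relation above can also be checked numerically; the sketch below uses a 1D infinite square well in natural units (hbar = m = L = 1), an illustrative textbook choice not taken from the text:

```python
import math

# Check  <psi_1| x |psi_2> = i hbar / ((E_2 - E_1) m) <psi_1| p |psi_2>
# for a 1D infinite square well with hbar = m = L = 1:
# psi_n(x) = sqrt(2) sin(n pi x), E_n = n^2 pi^2 / 2.
N = 100_000
dx = 1.0 / N
xs = [i * dx for i in range(N + 1)]

def psi(n, x):
    return math.sqrt(2) * math.sin(n * math.pi * x)

def dpsi(n, x):
    return math.sqrt(2) * n * math.pi * math.cos(n * math.pi * x)

def trapz(vals):
    return dx * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

x_elem = trapz([psi(1, x) * x * psi(2, x) for x in xs])       # <1| x |2>
p_elem = -1j * trapz([psi(1, x) * dpsi(2, x) for x in xs])    # <1| p |2>, p = -i d/dx

E1, E2 = math.pi ** 2 / 2, 4 * math.pi ** 2 / 2
rhs = 1j / (E2 - E1) * p_elem                                 # i hbar/((E2-E1)m) <1|p|2>

print(round(x_elem, 6), round(rhs.real, 6))  # both ~ -0.180127
```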
A basic, phenomenological understanding of the transition dipole moment can be obtained by analogy with a classical dipole. While the comparison can be very useful, care must be taken to ensure that one does not fall into the trap of assuming they are the same.
In the case of two classical point charges, $+q$ and $-q$, with a displacement vector, $\mathbf{r}$, pointing from the negative charge to the positive charge, the electric dipole moment is given by
$$\mathbf{p} = q\mathbf{r}.$$
In the presence of an electric field , such as that due to an electromagnetic wave, the two charges will experience a force in opposite directions, leading to a net torque on the dipole. The magnitude of the torque is proportional to both the magnitude of the charges and the separation between them, and varies with the relative angles of the field and the dipole:
$$|\boldsymbol{\tau}| = |q\mathbf{r}|\,|\mathbf{E}|\sin\theta.$$
Similarly, the coupling between an electromagnetic wave and an atomic transition with transition dipole moment $\mathbf{d}_{nm}$ depends on the charge distribution within the atom, the strength of the electric field, and the relative polarizations of the field and the transition. In addition, the transition dipole moment depends on the geometries and relative phases of the initial and final states.
When an atom or molecule interacts with an electromagnetic wave of frequency $\omega$, it can undergo a transition from an initial to a final state of energy difference $\hbar\omega$ through the coupling of the electromagnetic field to the transition dipole moment. When this transition is from a lower energy state to a higher energy state, this results in the absorption of a photon . A transition from a higher energy state to a lower energy state results in the emission of a photon. If the charge, $e$, is omitted from the electric dipole operator during this calculation, one obtains $\mathbf{R}_{\alpha}$ as used in oscillator strength .
The transition dipole moment is useful for determining if transitions are allowed under the electric dipole interaction. For example, the transition from a bonding $\pi$ orbital to an antibonding $\pi^*$ orbital is allowed because the integral defining the transition dipole moment is nonzero. Such a transition occurs between an even and an odd orbital; the dipole operator, $\vec{\mu}$, is an odd function of $\mathbf{r}$, hence the integrand is an even function. The integral of an odd function over symmetric limits returns a value of zero, while for an even function this is not necessarily the case. This result is reflected in the parity selection rule for electric dipole transitions . The transition moment integral
$$\int \psi_1^*\, \vec{\mu}\, \psi_2\, d\tau,$$
of an electronic transition within similar atomic orbitals, such as s-s or p-p, is forbidden due to the triple integral returning an ungerade (odd) product. Such transitions only redistribute electrons within the same orbital and will return a zero product. If the triple integral returns a gerade (even) product, the transition is allowed.
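The parity argument can be verified numerically; the sketch below uses the first two one-dimensional harmonic-oscillator eigenstates (an illustrative system in natural units, hbar = m = omega = 1, not from the text): the even-to-even integral vanishes, while the even-to-odd integral does not:

```python
import math

# Parity check of the transition moment integral with 1D harmonic-oscillator
# states (hbar = m = omega = 1): psi_0 is even, psi_1 is odd, and the dipole
# operator x is odd.
def psi0(x):
    return math.pi ** -0.25 * math.exp(-x * x / 2)                      # even

def psi1(x):
    return math.pi ** -0.25 * math.sqrt(2) * x * math.exp(-x * x / 2)   # odd

N, X = 200_000, 10.0          # integrate on the symmetric interval [-10, 10]
dx = 2 * X / N
xs = [-X + i * dx for i in range(N + 1)]

def trapz(vals):
    return dx * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

forbidden = trapz([psi0(x) * x * psi0(x) for x in xs])   # even*odd*even -> 0
allowed = trapz([psi0(x) * x * psi1(x) for x in xs])     # even*odd*odd  -> 1/sqrt(2)

# forbidden ~ 0, allowed ~ 0.707107 (= 1/sqrt(2) in these units)
print(forbidden, allowed)
```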
"IUPAC Compendium of Chemical Terminology" . IUPAC. 1997. doi : 10.1351/goldbook.T06460 . Retrieved 2007-01-15. | https://en.wikipedia.org/wiki/Transition_dipole_moment
Transition engineering is the professional-engineering discipline that deals with the application of the principles of science to the design, innovation and adaptation of engineered systems that meet the needs of today without compromising the ecological , societal and economic systems on which future generations will depend to meet their own needs. Today safety is an expected consideration in design, operation and end use. Transition Engineering aims for a similar consideration of sustainability . Transition engineering is a trans-disciplinary field that addresses wicked problems while creating opportunities to increase resilience and adaptation through change projects. [ 1 ]
Engineering professions emerge when new technologies, new problems or new opportunities arise. This was the case when safety engineering grew in the early 1900s to combat the high workplace injury and fatality rates. In the 1960s, Environmental engineering emerged as a discipline to reduce industrial pollution and mitigate impacts on environmental health and water quality . Quality engineering came about with the increase in mass production techniques during WWII and the need to confirm the quality of the products. When engineered systems must change, either due to failure risks, obsolescence or modernisation, change management is a well-known process. Transition Engineering is focused on identifying the unsustainable aspects of currently operational engineered systems, innovating the projects that down-shift the unsustainable energy, material, environmental and social aspects, and then carrying out an inclusive change management process.
There are two serious problems driving the emergence of Transition Engineering: the exponential growth in the concentration of carbon dioxide in Earth’s atmosphere, and the lack of growth and imminent decline of conventional oil production, sometimes characterised as peak oil . The concentration of carbon dioxide in the atmosphere ran past the "climate safe" 350 ppm range in the 1990s, and has now exceeded 420 ppm, a level that Earth has not known for 800,000 years. [ 2 ] Transition engineering aims to take advantage of the current access to the remaining lower cost and higher EROI energy resources to re-develop all aspects of urban and industrial engineered systems to adapt as fossil fuel use is dramatically reduced. [ 3 ]
Recognition of the field of Transition Engineering and Energy Transition Engineering started in 2010 when the Institution of Engineering and Technology (IET) Prestige Lecture in NZ was given by Associate Professor Susan Krumdieck , University of Canterbury . [ 4 ]
In 2014 the engineering text book, "Principles of Sustainable Energy Systems" by Professor Frank Kreith, featured Chapter 13 on Transition Engineering. [ 5 ] In 2017, Transition Engineering was invited for Chapter 32 the book "Energy Solutions to Combat Global Warming". [ 6 ] In 2019, a full text on the methodologies and principles of transition engineering was published, "Transition Engineering, Building a Sustainable Future". [ 7 ]
Since 2015, Transition Engineering has been taught at the university level in a range of full courses, workshops, guest lectures and seminars. Courses have been held at Grenoble INP, France; Munich University of Applied Sciences, Germany; University of Duisburg-Essen, Germany; Bristol University, UK; University of Canterbury, New Zealand; and Heriot-Watt University, Scotland. In 2020 the first on-line courses were offered as continuing professional development for engineers and other professionals.
The idea behind transition engineering originated from many different roots, both technical and non-technical. The concept of sustainable development has been around since 1987, and the problem of sustainability was a driving force in the development of transition engineering. The Transition Town movement provided further inspiration as it showed that there were many groups of people around the world motivated to prepare for peak oil and climate change . Transition towns and ecovillages demonstrate the need for engineers to build systems that manage unsustainable risks and provide people with sustainable options. Engineers are ethically required to "hold paramount the safety, health and welfare of the public" and answer society's need for sustainable development. [ 8 ]
The origins of safety engineering provided much of the inspiration for transition engineering. At the beginning of the 1900s, business owners viewed workplace safety as a wasted investment and politicians were slow to change. After the Triangle Shirtwaist Factory fire in New York City killed 146 trapped workers, 62 engineers came together to investigate how to make the workplace a safer place to be. This eventually led to the formation of the American Society of Safety Engineers . [ 9 ]
As safety engineering manages the risks of unsafe conditions, transition engineering manages the risks of unsustainable conditions. To give engineers a better grasp of sustainability, transition engineering defines the problem as UN-sustainability. This is similar to the problem of unsafe conditions that is the purpose of safety engineering. We do not necessarily know what a perfectly safe system looks like, but we do know what unsafe systems look like and how to improve them; the same applies to unsustainability of systems. By reducing unsustainability issues we take steps in the right direction. [ 10 ]
The Transition Engineering method involves seven steps to help engineers develop projects to deal with changing unsustainable activities. As a discipline, Transition Engineering recognizes that "Business as Usual" projections of future scenarios from past trends are not valid because the underlying conditions have changed sufficiently from the conditions of the past. For example, the projection of future oil supply in 2050 from data prior to 2005 would give an expectation of a 50% increase in demand over that time-frame. However, the actual production rate of conventional oil has not increased since 2005 and is projected to decline by more than 50% by 2050. [ 11 ]
GATE opened the first group in the UK in Feb 2014. GATE is a Professional Engineering Institution; a membership association and learned society, and comprises an emerging network of engineers and non-engineers that share the idea that engineers are responsible for changing engineered systems in order to adapt to reducing fossil fuel and other unsustainable resources. Transition Engineering is a change management discipline. Like Safety Engineering, Transition Engineering uses an audit and stock-take of current system design and operation to quantify the risks to essential activities and resources over a time-frame of study. The time-frame of study should be commensurate with the lifetime of the assets involved in the activity. An activity is anything that the engineered system supports, for example manufacturing, sewage treatment, mobility, or food preservation . Transition Engineering recognizes that the analytical methods of strategic analysis over a life-cycle time-frame are at odds with most economic analyses that discount values with time. The strategic analysis carried out by Transition Engineers seeks to avoid stranded investment by recognizing resource risks. A classic example of stranded investments is the North Atlantic Cod Fishery – where the largest number of bottom trawling ships (i.e. those ships responsible for destroying the cod spawning beds) were manufactured in the year that the fish stocks collapsed. [ 17 ] [ 18 ] The Global Association for Transition Engineering is registered charity number 1166048, registered with the UK Charity Commission on 14 March 2016. It is a "Charitable Incorporated Organisation" or CIO.
Published in Nov 2019 by CRC Press, Taylor & Francis
The textbook sets out the premise, processes, methods and tools of Transition Engineering. The book includes the perspective stories that Professor Susan Krumdieck has used for sensemaking around wicked problems of change to downshift fossil fuels. Professor Krumdieck was appointed to the New Zealand Order of Merit in the 2021 Queen's New Year Honours for her research, teaching and publication of the book. The book is also popular with non-technical readers. [ 19 ]
Transition Engineering, Building a Sustainable Future , Susan Krumdieck (2019) CRC Press, Taylor & Francis, Boca Raton | https://en.wikipedia.org/wiki/Transition_engineering |
In coordination chemistry , a transition metal NHC complex is a metal complex containing one or more N-heterocyclic carbene ligands. Such compounds are the subject of much research, in part because of prospective applications in homogeneous catalysis . One such success is the second generation Grubbs catalyst . [ 1 ]
Historically, N -heterocyclic carbenes were thought to mimic the properties of tertiary phosphines . Many steric and electronic differences exist between the two ligands. [ 2 ] Compared to phosphine ligands , the steric profile of NHC ligands is more complex than a simple cone angle. The imidazole ring of the NHC ligand is angled away from the metal center, yet the substituents at the 1,3 positions of the imidazole ring are angled towards it. The steric presence of the ligand inside the metal coordination sphere affects the metal's reactivity. In terms of electronic effects, NHCs are often stronger sigma donors . [ 2 ] [ 3 ]
The popularization of NHC ligands can be traced to Arduengo , [ 4 ] who reported the deprotonation of dimesityl imidazolium cation to give IMes. [ 5 ] IMes is a free NHC that can be used as a ligand. Other NHCs have been isolated as the free ligands. [ 6 ] Aside from IMes, another important NHC ligand is IPr, which features diisopropylphenyl groups in place of the mesityl groups. [ 1 ] [ 7 ] NHCs with saturated backbones include SIMes and SIPr.
Usually, however, transition metal NHC complexes are prepared by routes that avoid handling the free carbene. A popular method entails transmetallation of silver-NHC complexes. Such reagents are generated by the reaction of silver(I) oxide with the imidazolium salt. [ 8 ]
A third method involves decarboxylation of NHC-carboxylates. In this approach, N-methylimidazoles react with methyl formate to give zwitterionic N,N'-dimethylimidazolium-2-carboxylate. This zwitterion decarboxylates in the presence of metal ions to give N,N'dimethylimidazolidene-based NHC complexes. [ 9 ] | https://en.wikipedia.org/wiki/Transition_metal_NHC_complex |
Transition metal acyl complexes describes organometallic complexes containing one or more acyl (RCO) ligands . Such compounds occur as transient intermediates in many industrially useful reactions, especially carbonylations . [ 2 ] A special case are the transition metal formyl complexes .
Acyl complexes are usually low-spin, i.e. spin-paired.
Monometallic acyl complexes adopt one of two related structures, C-bonded and η 2 -C-O-bonded. These forms sometimes interconvert. For the purpose of electron-counting, C-bonded acyl ligands count as 1-electron ligands, akin to pseudohalides. η 2 -Acyl ligands count as 3-electron "L-X" ligands.
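These counting rules can be tallied in a short sketch. The helper function and lookup tables below are illustrative assumptions of mine (not from the article), applied to acetylmanganese pentacarbonyl, CH 3 C(O)Mn(CO) 5 , a classic complex whose C-bonded acyl ligand completes an 18-electron neutral (covalent) count:

```python
# Hedged illustration: a minimal neutral-count (covalent) electron tally.
# The helper and the choice of example complex are assumptions for this sketch.
LIGAND_ELECTRONS = {
    "CO": 2,         # L-type carbonyl, 2-electron donor
    "acyl_C": 1,     # C-bonded acyl, X-type 1-electron ligand (pseudohalide-like)
    "acyl_eta2": 3,  # eta2-(C,O) acyl, LX-type 3-electron ligand
}
METAL_ELECTRONS = {"Mn": 7, "Fe": 8}  # neutral-atom valence electron counts

def electron_count(metal, ligands):
    """Sum the metal's valence electrons and each ligand's donor count."""
    return METAL_ELECTRONS[metal] + sum(LIGAND_ELECTRONS[lig] for lig in ligands)

# CH3C(O)Mn(CO)5: 7 (Mn) + 5*2 (CO) + 1 (C-bonded acyl) = 18
print(electron_count("Mn", ["CO"] * 5 + ["acyl_C"]))  # 18
```

Swapping `"acyl_C"` for `"acyl_eta2"` raises the count by two, which is why C-bonded/η²-acyl interconversion can buffer coordinative unsaturation at the metal.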
Bridging acyl ligands are also well known, in which the carbon bonds to one metal and the oxygen bonds to a second metal. One example is the bis(μ-acetyl) complex [(CO) 3 Fe(C(O)CH 3 ) 2 Fe(CO) 3 ] 2- . [ 4 ]
Metal acyls are often generated by the reaction of low-valent metal centers with acyl chlorides . Illustrative is the oxidative addition of acetyl chloride to Vaska's complex , converting square planar Ir(I) to octahedral Ir(III): [ 5 ] [ 6 ]
Some acyl complexes can be produced from aldehydes by C-H oxidative addition. This reaction underpins hydroacylation .
In a related reaction, metal carbonyl anions are acylated by acyl chlorides:
Another important route to metal acyls entails insertion of CO into a metal alkyl bond. In this pathway, the alkyl ligand migrates to an adjacent CO ligand. This reaction is a step in the hydroformylation process.
Coordinatively saturated metal carbonyls react with organolithium reagents to give acyls. This reaction proceeds by attack of the alkyl nucleophile on the electrophilic CO ligand.
In a practical sense, the most important reaction of metal acyls is their detachment by reductive elimination of aldehydes from acyl metal hydrides:
This reaction is the final step of hydroformylation .
Another important reaction is decarbonylation. This reaction requires that the acyl complex be coordinatively unsaturated:
The oxygen center of acyl ligands is basic. This aspect is manifested in O-alkylations, which converts acyl complexes to alkoxycarbene complexes :
Metal acyl complexes participate in several commercial processes, including:
A reaction involving metal acyl complexes of occasional value in organic synthesis is the Tsuji–Wilkinson decarbonylation reaction of aldehydes. | https://en.wikipedia.org/wiki/Transition_metal_acyl_complexes |
In organometallic chemistry , a transition metal alkene complex is a coordination compound containing one or more alkene ligands . The inventory is large. [ 1 ] Such compounds are intermediates in many catalytic reactions that convert alkenes to other organic products. [ 2 ]
Complexes of ethylene are particularly common. Examples include Zeise's salt (see figure), Rh 2 Cl 2 (C 2 H 4 ) 4 , Cp* 2 Ti(C 2 H 4 ), and the homoleptic Ni(C 2 H 4 ) 3 . Substituted monoalkenes include the cyclic cyclooctene , as found in chlorobis(cyclooctene)rhodium dimer . Alkenes with electron-withdrawing groups commonly bind strongly to low-valent metals. Examples of such ligands are TCNE , tetrafluoroethylene , maleic anhydride , and esters of fumaric acid . These acceptors form adducts with many zero-valent metals. [ 1 ]
Butadiene , cyclooctadiene , and norbornadiene are well-studied chelating agents. Trienes and even some tetraenes can bind to metals through several adjacent carbon centers. Common examples of such ligands are cycloheptatriene and cyclooctatetraene . The bonding is often denoted using the hapticity formalism. Keto-alkenes are tetrahapto ligands that stabilize highly unsaturated low valent metals as found in (benzylideneacetone)iron tricarbonyl and tris(dibenzylideneacetone)dipalladium(0) .
The bonding between alkenes and transition metals is described by the Dewar–Chatt–Duncanson model , which involves donation of electrons in the pi-orbital on the alkene to empty orbitals on the metal. This interaction is reinforced by back bonding that entails sharing of electrons in other metal orbitals into the empty pi-antibonding level on the alkene. Early metals of low oxidation state (Ti(II), Zr(II), Nb(III) etc.) are strong pi donors, and their alkene complexes are often described as metallacyclopropanes. Treatment of such species with acids gives the alkanes. Late metals (Ir(I), Pt(II)), which are poorer pi-donors, tend to engage the alkene as a Lewis acid – Lewis base interaction. Similarly, C 2 F 4 is a stronger pi-acceptor than C 2 H 4 , as reflected in metal-carbon bond distances. [ 3 ]
The barrier for the rotation of the alkene about the M-centroid vector is a measure of the strength of the M-alkene pi-bond. Low symmetry complexes are suitable for analysis of these rotational barriers associated with the metal-ethylene bond. In Cp Rh(C 2 H 4 )(C 2 F 4 ), the ethylene ligand is observed to rotate with a barrier near 12 kcal/mol, but no rotation is observed about the Rh-C 2 F 4 bond. [ 4 ]
Alkene ligands lose much of their unsaturated character upon complexation. Most famously, the alkene ligand undergoes migratory insertion , wherein it is attacked intramolecularly by alkyl and hydride ligands to form new alkyl complexes. Cationic alkene complexes are susceptible to attack by nucleophiles. [ 1 ]
Metal alkene complexes are intermediates in many or most transition metal catalyzed reactions of alkenes: polymerization , hydrogenation , hydroformylation , and many other reactions. [ 5 ]
Since alkenes are mainly produced as mixtures with alkanes, the separation of alkanes and alkenes is of commercial interest. Separation technologies often rely on facilitated transport membranes containing Ag + or Cu + salts that reversibly bind alkenes. [ 6 ]
In argentation chromatography , stationary phases that contain silver salts are used to analyze organic compounds on the basis of the number and type of alkene (olefin) groups. This methodology is commonly employed for the analysis of the unsaturated content in fats and fatty acids . [ 7 ]
Metal-alkene complexes are uncommon in nature, with one exception. Ethylene affects the ripening of fruit and flowers by complexation to a Cu(I) center in a transcription factor . [ 8 ] | https://en.wikipedia.org/wiki/Transition_metal_alkene_complex |
A transition metal alkoxide complex is a kind of coordination complex containing one or more alkoxide ligands, written as RO − , where R is the organic substituent . [ citation needed ] Metal alkoxides are used for coatings and as catalysts . [ 1 ] [ 2 ]
Many alkoxides are prepared by salt-forming reactions from metal chlorides and sodium alkoxides :
Such reactions are favored by the lattice energy of the NaCl, and purification of the product alkoxide is simplified by the fact that NaCl is insoluble in common organic solvents.
For electrophilic metal halides, conversion to the alkoxide requires no or mild base. Titanium tetrachloride reacts with alcohols to give the corresponding tetraalkoxides, concomitant with the evolution of hydrogen chloride : [ 3 ]
The reaction can be accelerated by the addition of a base, such as a tertiary amine . Other electrophilic metal halides can be used instead of titanium, for example NbCl 5 . [ citation needed ]
Many alkoxides can be prepared by anodic dissolution of the corresponding metals in water-free alcohols in the presence of electroconductive additive. The metals may be Co , Ga , Ge , Hf , Fe , Ni , Nb , Mo , La , Re , Sc , Si , Ti , Ta , W , Y , Zr , etc. The conductive additive may be lithium chloride , quaternary ammonium halide, or other. Some examples of metal alkoxides obtained by this technique: Ti(OCH(CH 3 ) 2 ) 4 , Nb 2 (OCH 3 ) 10 , Ta 2 (OCH 3 ) 10 , [MoO(OCH 3 ) 4 ] 2 , Re 2 O 3 (OCH 3 ) 6 , Re 4 O 6 (OCH 3 ) 12 , and Re 4 O 6 (OCH(CH 3 ) 2 ) 10 .
Aliphatic metal alkoxides decompose in water : [ 4 ]
L n M−OR + H 2 O → L n M−OH + ROH
where R is an organic substituent and L is an unspecified ligand (often an alkoxide). A well-studied case is the irreversible hydrolysis of titanium isopropoxide:
Ti(OCH(CH 3 ) 2 ) 4 + 2 H 2 O → TiO 2 + 4 HOCH(CH 3 ) 2
By controlling the stoichiometry and steric properties of the alkoxide, such reactions can be arrested leading to metal-oxy-alkoxides, which usually are oligonuclear. Other alcohols can be employed in place of water. In this way one alkoxide can be converted to another, and the process is properly referred to as alcoholysis (although there is an issue of terminology confusion with transesterification, a different process - see below). The position of the equilibrium can be controlled by the acidity of the alcohol; for example phenols typically react with alkoxides to release alcohols, giving the corresponding phenoxide. [ citation needed ] More simply, the alcoholysis can be controlled by selectively evaporating the more volatile component. In this way, ethoxides can be converted to butoxides, since ethanol (b.p. 78 °C) is more volatile than butanol (b.p. 118 °C).
Many metal alkoxide compounds also feature oxo- ligands . Oxo-ligands typically arise via the hydrolysis, often accidentally, and via ether elimination: [ 5 ]
Additionally, low valent metal alkoxides are susceptible to oxidation by air. [ citation needed ]
Characteristically, transition metal alkoxides are polynuclear, that is they contain more than one metal. Alkoxides are sterically undemanding and highly basic ligands that tend to bridge metals . [ citation needed ]
Upon the isomorphic substitution of metal atoms close in properties crystalline complexes of variable composition are formed. The metal ratio in such compounds can vary over a broad range. For instance, the substitution of molybdenum and tungsten for rhenium in the complexes Re 4 O 6− y (OCH 3 ) 12+ y allowed one to obtain complexes Re 4− x Mo x O 6− y (OCH 3 ) 12+ y in the range 0 ≤ x ≤ 2.82 and Re 4− x W x O 6− y (OCH 3 ) 12+ y in the range 0 ≤ x ≤ 2 .
Alkoxide ligands are often nucleophilic. For example, molybdenum alkoxides undergo insertion reactions with unsaturated substrates such as carbon dioxide and isocyanates : [ 6 ]
The metal-alkoxide bond is susceptible to hydrogenolysis , especially for platinum metal derivatives: [ 7 ] | https://en.wikipedia.org/wiki/Transition_metal_alkoxide_complex |
Transition metal alkyl complexes are coordination complexes that contain a bond between a transition metal and an alkyl ligand . Such complexes are not only pervasive but are of practical and theoretical interest. [ 1 ] [ 2 ]
Most metal alkyl complexes contain other, non-alkyl ligands. Great interest, mainly theoretical, has focused on the homoleptic complexes. Indeed, the first reported example of a complex containing a metal-sp 3 carbon bond was the homoleptic complex diethylzinc . Other examples include hexamethyltungsten , tetramethyltitanium, and tetranorbornylcobalt. [ 3 ]
Mixed ligand, or heteroleptic, complexes containing alkyls are numerous. In nature, vitamin B12 and its many derivatives contain reactive Co-alkyl bonds.
Metal alkyl complexes are prepared generally by two pathways, use of alkyl nucleophiles and use of alkyl electrophiles. Nucleophilic sources of alkyl ligands include Grignard reagents and organolithium compounds . Since many strong nucleophiles are also potent reductants, mildly nucleophilic alkylating agents are sometimes employed to avoid redox reactions. Organozinc compounds and organoaluminium compounds are such milder reagents.
Electrophilic alkylation commonly starts with low valence metal complexes. Typical electrophilic reagents are alkyl halides . Illustrative is the preparation of the methyl derivative of cyclopentadienyliron dicarbonyl anion : [ 16 ]
Many metal alkyls are prepared by oxidative addition : [ 2 ]
An example is the reaction of Vaska's complex with methyl iodide .
Some metal alkyls feature agostic interactions between a C-H bond on the alkyl group and the metal. Such interactions are especially common for complexes of early transition metals in their highest oxidation states. [ 18 ]
One determinant of the kinetic stability of metal-alkyl complexes is the presence of hydrogen at the position beta to the metal. If such hydrogens are present and if the metal center is coordinatively unsaturated , then the complex can undergo beta-hydride elimination to form a metal-alkene complex:
These conversions are assumed to proceed via the intermediacy of agostic interactions.
Many homogeneous catalysts operate via the intermediacy of metal alkyls. These reactions include hydrogenation , hydroformylation , alkene isomerization, and olefin polymerization . It is assumed that the corresponding heterogeneous reactions also involve metal-alkyl bonds. [ 19 ] | https://en.wikipedia.org/wiki/Transition_metal_alkyl_complexes |
In organometallic chemistry , a transition metal alkyne complex is a coordination compound containing one or more alkyne ligands . Such compounds are intermediates in many catalytic reactions that convert alkynes to other organic products, e.g. hydrogenation and trimerization . [ 1 ]
Transition metal alkyne complexes are often formed by the displacement of labile ligands by the alkyne. For example, a variety of cobalt-alkyne complexes arise by the reaction of alkynes with dicobalt octacarbonyl. [ 2 ]
Many alkyne complexes are produced by reduction of metal halides: [ 3 ]
The coordination of alkynes to transition metals is similar to that of alkenes. The bonding is described by the Dewar–Chatt–Duncanson model . Upon complexation the C-C bond elongates and the substituents on the alkynyl carbons bend away from 180º. For example, in the phenylpropyne complex Pt(PPh 3 ) 2 (MeC 2 Ph), the C-C distance is 1.277(25) Å vs 1.20 Å for a typical alkyne. The C-C-C angle distorts 40° from linearity upon complexation. [ 4 ] Because of the bending induced by complexation, strained alkynes such as cycloheptyne and cyclooctyne are stabilized by complexation. [ 5 ]
The C≡C vibration of alkynes occurs near 2300 cm −1 in the IR spectrum. This mode shifts upon complexation to around 1800 cm −1 , indicating a weakening of the C-C bond.
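A rough sense of this weakening follows from the harmonic-oscillator relation, under the simplifying assumption (mine, not the article's) that the C≡C unit behaves as an isolated diatomic with an unchanged reduced mass, so the stretching wavenumber scales as the square root of the force constant:

```python
# Hedged estimate: treat the C≡C stretch as a diatomic harmonic oscillator
# with constant reduced mass, so wavenumber ~ sqrt(force constant).
nu_free = 2300.0   # cm^-1, free alkyne C≡C stretch (value from the text)
nu_bound = 1800.0  # cm^-1, typical value after complexation (from the text)

# k_bound / k_free = (nu_bound / nu_free)^2 when the reduced mass is unchanged
k_ratio = (nu_bound / nu_free) ** 2
print(f"fraction of force constant retained: {k_ratio:.2f}")  # ~0.61
```

On this crude model the coordinated triple bond retains only about 60% of its force constant, consistent with the bond elongation and bending described above.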
When bonded side-on to a single metal atom, an alkyne serves as a dihapto ligand, usually a two-electron donor. For early metal complexes, e.g., Cp 2 Ti(C 2 R 2 ), strong π-backbonding into one of the π* antibonding orbitals of the alkyne is indicated. This complex is described as a metallacyclopropene derivative of Ti(IV). For late transition metal complexes, e.g., Pt(PPh 3 ) 2 (MeC 2 Ph), the π-backbonding is less prominent, and the complex is assigned oxidation state 0. [ 6 ] [ 7 ]
In some complexes, the alkyne is classified as a four-electron donor. In these cases, both pairs of pi-electrons donate to the metal. This kind of bonding was first implicated in complexes of the type W(CO)(R 2 C 2 ) 3 . [ 8 ]
Because alkynes have two π bonds, they can form stable complexes in which they bridge two metal centers. The alkyne donates a total of four electrons, with two electrons donated to each of the metals. An example of a complex with this bonding scheme is η 2 -diphenylacetylene-(hexacarbonyl)dicobalt(0). [ 7 ]
Transition metal benzyne complexes represent a special case of alkyne complexes since the free benzynes are not stable in the absence of the metal. [ 9 ]
Metal alkyne complexes are intermediates in the semihydrogenation of alkynes to alkenes:
This transformation is conducted on a large scale in refineries, which unintentionally produce acetylene during the production of ethylene. It is also useful in the preparation of fine chemicals. Semihydrogenation affords cis alkenes. [ 10 ]
Metal-alkyne complexes are also intermediates in metal-catalyzed trimerizations and tetramerizations. Cyclooctatetraene is produced from acetylene via the intermediacy of metal alkyne complexes. Variants of this reaction are exploited for some syntheses of substituted pyridines .
The Pauson–Khand reaction provides a route to cyclopentenones via the intermediacy of cobalt-alkyne complexes.
Acrylic acid was once prepared by the hydrocarboxylation of acetylene: [ 11 ]
With the shift away from coal-based (acetylene) to petroleum-based feedstocks (olefins), catalytic reactions with alkynes are not widely practiced industrially.
Polyacetylene has been produced using metal catalysis involving alkyne complexes.
Cuprous chloride also catalyzes the dimerization of acetylene to vinylacetylene , once used as a precursor to various polymers such as neoprene . Mechanistic studies suggest that this reaction proceeds by insertion of acetylene into a copper(I) acetylide complex. [ 12 ] | https://en.wikipedia.org/wiki/Transition_metal_alkyne_complex
Transition metal amino acid complexes are a large family of coordination complexes containing the conjugate bases of the amino acids , the 2-aminocarboxylates. Amino acids are prevalent in nature, and all of them function as ligands toward the transition metals. [ 1 ] Not included in this article are complexes of the amides (including peptide) and ester derivatives of amino acids. Also excluded are the polyamino acids including the chelating agents EDTA and NTA .
Most commonly, amino acids coordinate to metal ions as N,O bidentate ligands, utilizing the amino group and the carboxylate. A five-membered chelate ring (MNC 2 O) is formed. The chelate ring is only slightly ruffled at the sp 3 -hybridized carbon and nitrogen centers.
N,O bidentate amino carboxylates are "L-X" ligands in the Covalent bond classification method . With respect to HSAB theory , N,O bidentate amino carboxylate is a pair of hard ligands.
For those amino acids containing coordinating substituents, the resulting complexes are more structurally diverse since these substituents can coordinate. Histidine , aspartic acid , and methionine sometimes function as tridentate N,N,O, N,O,O, and S,N,O, ligands, respectively. Doubly deprotonated cysteine is often an N,S-bidentate ligand.
Using kinetically inert metal ions, complexes containing monodentate amino acids have been characterized. These complexes exist in either the N or the O linkage isomers. It can be assumed that such monodentate complexes exist transiently for many kinetically labile metal ions (e.g. Zn 2+ ).
Mixing simple metal salts with solutions of amino acids near neutral or elevated pH often affords bis- or tris complexes. For metal ions that prefer octahedral coordination, these complexes often adopt the stoichiometry M(aa) 3 (aa = amino carboxylate, such as glycinate, H 2 NCH 2 CO 2 − ).
Complexes of the 3:1 stoichiometry have the formula [M(O 2 CC(R)HNH 2 ) 3 ] z . Such complexes adopt octahedral coordination geometry . These complexes can exist in facial and meridional isomers, both of which are chiral. The stereochemical possibilities increase when the amino acid ligands are not homochiral . Both the violet meridional and red-pink facial isomers of tris(glycinato)cobalt(III) have been characterized. [ 6 ] With L- alanine , L- leucine , and other amino acids, one obtains four stereoisomers. [ 7 ] With cysteine, the amino acid binds through N and thiolate. [ 8 ]
Complexes with the 2:1 stoichiometry are illustrated by copper(II) glycinate [Cu(O 2 CC(R)HNH 2 ) 2 ], which exists both in anhydrous and pentacoordinate geometries. When the metal is square planar, these complexes can exist as cis and trans isomers. The stereochemical possibilities increase when the amino acid ligands are not homochiral . Homoleptic complexes are also known where the amino carboxylate is tridentate amino acids. One such complex is Ni(κ 3 -histidinate) 2 .
In addition to the amino acids, peptides and proteins bind metal cofactors through their side chains. For the most part, the α-amino and carboxylate groups are unavailable for binding as they are otherwise engaged in the peptide bond. The situation differs for the N-terminal and C-terminal residues, where the α-amino and carboxylate groups, respectively, remain available. Especially important in this regard are the side chains of histidine ( imidazole ), cysteine ( thiolate ), and methionine ( thioether ).
Mixed ligand complexes are common for amino acids. Well-known examples include [Co(en) 2 (glycinate)] 2+ , where en ( ethylenediamine ) is a spectator ligand. In the area of organometallic complexes, one example is Cp*Ir(κ 3 -methionine).
A well studied complex is tris(glycinato)cobalt(III) . It is produced by the reaction of glycine with sodium tris(carbonato)cobalt(III) . [ 6 ] Similar synthetic methods apply to the preparation of tris(chelates) of other amino acids . [ 10 ]
Commonly, amino acid complexes are prepared by ligand displacement reactions of metal aquo complexes and the conjugate bases of amino acids: [ 11 ] [ 12 ]
Relevant to bioinorganic chemistry , amino acid complexes can be generated by the hydrolysis of amino acid esters and amides (en = ethylenediamine ):
Because their 5-membered MNC 2 O chelate ring is rather stable, amino acid complexes represent protecting groups for amino acids, allowing diverse reactions of the side chains. [ 13 ]
Organic compounds featuring two or more 2- and 3-aminocarboxylate groups are ligands of extensive use in nature, industry, and research. Famous examples include EDTA and NTA . | https://en.wikipedia.org/wiki/Transition_metal_amino_acid_complexes |
Metal arene complexes are organometallic compounds of the formula (C 6 R 6 ) x ML y . Common classes are of the type (C 6 R 6 )ML 3 and (C 6 R 6 ) 2 M. These compounds are reagents in inorganic and organic synthesis . The principles that describe arene complexes extend to related organic ligands such as many heterocycles (e.g. thiophene ) and polycyclic aromatic compounds (e.g. naphthalene ). [ 1 ]
Also known as reductive Friedel–Crafts reaction, the Fischer–Hafner synthesis entails treatment of metal chlorides with arenes in the presence of aluminium trichloride and aluminium metal. The method was demonstrated in the 1950s with the synthesis of bis(benzene)chromium by Walter Hafner and his advisor E. O. Fischer . [ 3 ] The method has been extended to other metals, e.g. [Ru(C 6 Me 6 ) 2 ] 2+ . In this reaction, the AlCl 3 serves to remove chloride from the metal precursor, and the Al metal functions as the reductant. [ 1 ] The Fischer-Hafner synthesis is limited to arenes lacking sensitive functional groups.
Although many metal-arene complexes are robust, few are prepared by the direct reaction of arenes with metal salts. The main example is provided by silver perchlorate (and related salts), which dissolve in liquid arenes and crystallize with arene ligands. The strength of the metal-arene interaction is weak as indicated by the long Ag-C bond lengths and the nearly unperturbed nature of the arene. [ 4 ]
By metal vapor synthesis , metal atoms co-condensed with arenes react to give complexes of the type M(arene) 2 . Cr(C 6 H 6 ) 2 can be produced by this method. [ 1 ]
Cr(CO) 6 reacts directly with benzene and other arenes to give the piano stool complexes Cr(C 6 R 6 )(CO) 3 . [ 5 ] The carbonyls of Mo and W behave comparably. The method works particularly well with electron-rich arenes (e.g., anisole , mesitylene ). The reaction has been extended to the synthesis of [Mn(C 6 R 6 )(CO) 3 ] + : [ 6 ]
Few Ru(II) and Os(II) complexes react directly with arenes. Instead, arene complexes of these metals are typically prepared by treatment of M(III) precursors with cyclohexadienes . For example, heating alcohol solutions of 1,3- or 1,4-cyclohexadiene and ruthenium trichloride gives (benzene)ruthenium dichloride dimer . The conversion entails dehydrogenation of an intermediate diene complex.
Metal complexes are known to catalyze alkyne trimerization to give arenes. These reactions have been used to prepare arene complexes. Illustrative is the reaction of [Co(mesitylene) 2 ] + with 2-butyne to give [Co(C 6 Me 6 ) 2 ] + . [ 1 ]
In most of its complexes, arenes bind in an η 6 mode , with six nearly equidistant M-C bonds. The C-C-C angles are unperturbed vs the parent arene, but the C-C bonds are elongated by ca. 0.02 Å. In the fullerene complex Ru 3 (CO) 9 (C 60 ), the fullerene binds to the triangular face of the cluster. [ 7 ]
In some complexes, the arene binds through only two or four carbons, η 2 and η 4 bonding, respectively. In these cases, the arene is no longer planar. Because the arene is dearomatized, the uncoordinated carbon centers display enhanced reactivity. A well studied example is [Ru(η 6 -C 6 Me 6 )(η 4 -C 6 Me 6 )] 0 , formed by the reduction of [Ru(η 6 -C 6 Me 6 ) 2 ] 2+ . An example of η 2 -bonding is [Os(η 2 -C 6 H 6 )(NH 3 ) 5 ] 2+ . [ 8 ]
When bound in the η 6 manner, arenes often function as unreactive spectator ligands , as illustrated by several homogeneous catalysts used for transfer hydrogenation , such as (η 6 -C 6 R 6 )Ru(TsDPEN). In cationic arene complexes or those supported by several CO ligands, the arene is susceptible to attack by nucleophiles to give cyclohexadienyl derivatives.
Particularly from the perspective of organic synthesis , the decomplexation of arenes is of interest. Decomplexation can often be induced by treatment with an excess of ligand (MeCN, CO, etc.). [ 5 ] | https://en.wikipedia.org/wiki/Transition_metal_arene_complex |
Transition metal benzyne complexes are organometallic complexes that contain benzyne ligands (C 6 H 4 ). Unlike benzyne itself, these complexes are less reactive although they undergo a number of insertion reactions . [ 2 ]
The study of metal-benzyne complexes began with the preparation of a zirconocene benzyne complex by the reaction of diphenylzirconocene with trimethylphosphine . [ 3 ]
The preparation of Ta(η 5 -C 5 Me 5 )(C 6 H 4 )Me 2 proceeds similarly, requiring the phenyl complex Ta(η 5 -C 5 Me 5 )(C 6 H 5 )Me 3 . This complex is prepared by treatment of Ta(η 5 -C 5 Me 5 )Me 3 Cl with phenyllithium . [ 4 ] Upon heating, this complex eliminates methane, leaving the benzyne complex:
The second example of a benzyne complex is Ni(η 2 -C 6 H 4 )(dcpe) (dcpe = Cy 2 PCH 2 CH 2 PCy 2 ). It is produced by dehalogenation of the bromophenyl complex NiCl(C 6 H 4 Br-2)(dcpe) with sodium amalgam . Its coordination geometry is close to trigonal planar.
Benzyne complexes react with a variety of electrophiles, resulting in insertion into one M-C bond. [ 5 ] With trifluoroacetic acid, benzene is lost to give the trifluoroacetate Ni(O 2 CCF 3 ) 2 (dcpe). [ 5 ]
Several benzyne complexes have been examined by X-ray crystallography . | https://en.wikipedia.org/wiki/Transition_metal_benzyne_complex |
In chemistry , a transition metal boryl complex is a molecular species with a formally anionic boron center coordinated to a transition metal . [ 1 ] They have the formula L n M-BR 2 or L n M-(BR 2 LB) (L = ligand, R = H, organic substituent, LB = Lewis base ). One example is (C 5 Me 5 )Mn(CO) 2 (BH 2 PMe 3 ) (Me = methyl ). [ 2 ] Such compounds, especially those derived from catecholborane and the related pinacolborane , are intermediates in transition metal-catalyzed borylation reactions.
Oxidative addition is the main route to metal boryl complexes. Both B-H and B-B bonds add to low-valent metal complexes. For example, catecholborane oxidatively adds to Pt(0) to give the boryl hydride. [ 4 ]
Addition of diboron tetrafluoride to Vaska's complex gives the triboryl iridium(III) derivative: | https://en.wikipedia.org/wiki/Transition_metal_boryl_complex |
A transition metal carbene complex is an organometallic compound featuring a divalent carbon ligand , itself also called a carbene . [ 1 ] Carbene complexes have been synthesized from most transition metals and f-block metals , [ 2 ] using many different synthetic routes such as nucleophilic addition and alpha-hydrogen abstraction. [ 1 ] The term carbene ligand is a formalism since many are not directly derived from carbenes and most are much less reactive than lone carbenes. [ 2 ] Described often as =CR 2 , carbene ligands are intermediate between alkyls (−CR 3 ) and carbynes (≡CR) . Many different carbene-based reagents such as Tebbe's reagent are used in synthesis. They also feature in catalytic reactions, especially alkene metathesis , and are of value in both industrial heterogeneous and in homogeneous catalysis for laboratory- and industrial-scale preparation of fine chemicals. [ 1 ] [ 3 ] [ 4 ]
Metal carbene complexes are often classified into two types. The Fischer carbenes, named after Ernst Otto Fischer , feature strong π-acceptors at the metal and are electrophilic at the carbene carbon atom. Schrock carbenes , named after Richard R. Schrock , are characterized by more nucleophilic carbene carbon centers; these species typically feature higher oxidation state (valency) metals. N -Heterocyclic carbenes (NHCs) were popularized following Arduengo's isolation of a stable free carbene in 1991. [ 5 ] Reflecting the growth of the area, carbene complexes are now known with a broad range of different reactivities and diverse substituents. Often it is not possible to classify a carbene complex solely with regards to its electrophilicity or nucleophilicity. [ 1 ]
The common features of Fischer carbenes are: [ 6 ]
Examples include (CO) 5 W=COMePh and (OC) 5 Cr=C(NR 2 )Ph .
Fischer carbene complexes are related to the singlet form of carbenes, where both electrons occupy the same sp 2 orbital at the carbon. This lone pair donates to a metal-based empty d orbital, forming a σ bond. π-backbonding from a filled metal d orbital to the empty p orbital of the carbon atom is possible. However this interaction is generally weak since the alpha donor atoms also donate to this orbital. As such, Fischer carbenes are characterized as having partial double bond character. The major resonance structures of Fischer carbenes put the negative charge on the metal centre and the positive charge on the carbon atom, making it electrophilic. [ 6 ]
Fischer carbenes can be likened to ketones, with the carbene carbon atom being electrophilic, like the carbonyl carbon atom of a ketone. This can be seen from the resonance structures , where there is a significant contribution from the structure bearing a positive carbon centre. [ 6 ] Like ketones, Fischer carbene species can undergo aldol -like reactions. The hydrogen atoms attached to the carbon atom α to the carbene carbon atom are acidic, and can be deprotonated by a base such as n -butyllithium , to give a nucleophile, which can undergo further reaction. [ 7 ]
Schrock carbenes do not have π-accepting ligands on the metal centre. They are often called alkylidene complexes . Typically this subset of carbene complexes are found with: [ 6 ]
Examples include ((CH 3 ) 3 CCH 2 )Ta=CHC(CH 3 ) 3 [ 9 ] and Os(PPh 3 ) 2 (NO)Cl(=CH 2 ) . [ 10 ]
Bonding in such complexes can be viewed as the coupling of a triplet state metal and triplet carbene, forming a true double bond. Both the metal and carbon atom donate 2 electrons, one to each bond. Since there is no donation to the carbene atom from adjacent groups, the extent of pi backbonding is much greater, giving a strong double bond. These bonds are weakly polarized towards carbon and therefore the carbene atom is a nucleophile. Furthermore, the major resonance structures of Schrock carbene put the negative charge on the carbon atom, making it nucleophilic. [ 6 ] Complexes with the methylidene ligand ( =CH 2 ) are the simplest Schrock-type carbenes.
N -Heterocyclic carbenes (NHCs) are particularly common carbene ligands. [ 11 ] They are popular because they are more readily prepared than Schrock and Fischer carbenes. In fact, many NHCs are isolated as the free ligand, since they are persistent carbenes . [ 12 ] [ 13 ] Being strongly stabilized by π-donating substituents, NHCs are powerful σ-donors but π-bonding with the metal is weak. [ 14 ] For this reason, the bond between the carbon and the metal center is often represented by a single dative bond, whereas Fischer and Schrock carbenes are usually depicted with double bonds to metal. Continuing with this analogy, NHCs are often compared with trialkyl phosphine ligands. Like phosphines, NHCs serve as spectator ligands that influence catalysis through a combination of electronic and steric effects, but they do not directly bind substrates. [ 15 ] [ 16 ] Examples of NHC complexes of transition metals include coinage metal NHC complexes , and cyclic iron tetra N-heterocyclic carbenes .
An early example of this bonding mode was provided by [C 5 Me 5 Mn(CO) 2 ] 2 (μ−CO) prepared from diazomethane :
Another example of this family of compounds is Tebbe's reagent . It features a methylene bridge joining titanium and aluminum . [ 17 ]
Metal carbene complexes have applications in heterogeneous and homogeneous catalysis, and as reagents for organic reactions.
The dominant application of metal carbenes involves none of the above classes of compounds, but rather heterogeneous catalysts used for alkene metathesis for the synthesis of higher alkenes. A variety of related reactions are used to interconvert light alkenes, e.g. butenes, propylene, and ethylene. [ 18 ] Carbene complexes are invoked as intermediates in the Fischer–Tropsch route to hydrocarbons. [ 3 ]
A variety of homogeneous carbene catalysts, especially the Grubbs' ruthenium and Schrock molybdenum-imido catalysts have been used for olefin metathesis in laboratory-scale synthesis of natural products and materials science . [ 4 ]
Homogeneous Schrock-type carbene complexes such as Tebbe's reagent can be used for the olefination of carbonyls, replacing the oxygen atom with a methylidene group. The nucleophilic carbon atom behaves similarly to the carbon atom of the phosphorus ylide in the Wittig reaction , attacking the electrophilic carbonyl atom of a ketone, followed by elimination of a metal oxide. [ 1 ]
In the nucleophilic abstraction reaction, a methyl group can be abstracted from the donating group of a Fischer carbene, making it a strong nucleophile for further reaction. [ 6 ]
Diazo compounds like methyl phenyldiazoacetate can be used for cyclopropanation or to insert into C-H bonds of organic substrates. These reactions are catalyzed by dirhodium tetraacetate or related chiral derivatives. Such catalysis is assumed to proceed via the intermediacy of carbene complexes. [ 19 ]
Fischer carbenes are used with alkynes as the starting reagents for the Wulff–Dötz reaction , forming phenols. [ 20 ]
The first metal carbene complex to have been reported was Chugaev's red salt , first synthesized as early as 1925, although it was not identified as a carbene complex at the time. [ 21 ] The characterization of (CO) 5 W=C(OCH 3 )Ph in the 1960s is often cited as the starting point of the area, and Ernst Otto Fischer , for this and other achievements in organometallic chemistry, was awarded the 1973 Nobel Prize in Chemistry . [ 22 ] In 1968, Hans-Werner Wanzlick and Karl Öfele separately reported metal-bonded N-heterocyclic carbenes. [ 6 ] [ 23 ] [ 24 ] The synthesis and characterization of ((CH 3 ) 3 CCH 2 )Ta=CHC(CH 3 ) 3 by Richard R. Schrock in 1974 marked the first metal alkylidene complex. [ 9 ] In 1991, Anthony J. Arduengo synthesized and crystallized the first persistent carbene , an NHC with large adamantyl groups, accelerating the field of N-heterocyclic carbene ligands to its current use. [ 5 ] [ 6 ] | https://en.wikipedia.org/wiki/Transition_metal_carbene_complex |
Transition metal carbonate and bicarbonate complexes are coordination compounds containing carbonate (CO 3 2- ) and bicarbonate (HCO 3 − ) as ligands . The inventory of complexes is large, enhanced by the fact that the carbonate ligand can bind metal ions in a variety of bonding modes. [ 1 ] [ 2 ] They illustrate the fate of low valent complexes when exposed to air.
Carbonate is a pseudohalide ligand. With a saturated pi-system, it has no pi-acceptor properties. With multiple electronegative elements, it is not strongly basic. The latter is consistent with the pK a ’s of carbonic acid: pK 1 = 6.77 and pK 2 = 9.93.
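The two pK a values quoted above fix the pH-dependent distribution of carbonic acid, bicarbonate, and carbonate, and thus which ligand is actually available to a metal ion at a given pH. A minimal sketch of that arithmetic (the function name is illustrative; the pK values are those quoted in the text):

```python
# Mole fractions of H2CO3, HCO3-, and CO3(2-) as a function of pH,
# computed from the stepwise acid dissociation constants.
# pK values as quoted in the text: pK1 = 6.77, pK2 = 9.93.

def carbonate_speciation(pH, pK1=6.77, pK2=9.93):
    h = 10.0 ** (-pH)          # [H+]
    K1 = 10.0 ** (-pK1)
    K2 = 10.0 ** (-pK2)
    # Denominator common to the three alpha (mole-fraction) expressions
    denom = h * h + K1 * h + K1 * K2
    a_h2co3 = h * h / denom    # fraction present as H2CO3
    a_hco3 = K1 * h / denom    # fraction present as HCO3-
    a_co3 = K1 * K2 / denom    # fraction present as CO3(2-)
    return a_h2co3, a_hco3, a_co3

# Near neutral pH bicarbonate dominates and carbonate is scarce,
# consistent with carbonato complexes often forming via bicarbonate
# intermediates, as noted later in the article.
a_h2co3, a_hco3, a_co3 = carbonate_speciation(7.0)
```

At pH 7 the bicarbonate fraction is the largest of the three; only well above pK 2 does carbonate dominate.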
To a single metal ion, carbonate is observed to bind in both unidentate (κ 1 -) and bidentate (κ 2 -) fashions. [ 5 ] In the covalent bond classification method , κ 1 -carbonate is an X ligand and κ 2 -carbonate is an X 2 ligand. With two metals, the number of bonding modes increases because carbonate often serves as a bridging ligand . It can span metal-metal bonds as in [Ru 2 (CO 3 ) 4 Cl 2 ] 5- , where again it functions as an (X) 2 ligand. More commonly all three oxygen centers bind, as illustrated by [(C 5 H 5 ) 2 Ti] 2 CO 3 . In such cases, carbonate is an LX ligand, providing 3e − to each metal. More complicated motifs have been characterized by X-ray crystallography including {(VO) 6 (μ-OH) 9 (CO 3 ) 4 } 5- .
The bonding modes of bicarbonate are more limited than those for carbonate, in part because it is less basic and in part because the proton occupies a metal-binding site. Typically bicarbonate is assumed to bind as an unidentate X ligand. Structural studies on such complexes are, however, rare. [ 6 ]
Carbonato complexes are prepared by salt metathesis reactions using alkali metal carbonate salts as precursors. In some cases, bicarbonate intermediates are implicated since carbonate does not exist in appreciable concentrations near neutral pH. The other chief route to metal carbonato complexes involves addition of CO 2 to metal oxides. Such reactions may be catalyzed by water since the carbonation of metal hydroxides is particularly well established. Isotope labeling studies show that these reactions can proceed (and perhaps usually proceed) without scission of the M-OH bond (L = generic ligand):
Many esoteric routes have been demonstrated. For example, the deoxygenation of peroxycarbonate by tertiary phosphines:
Carbon dioxide undergoes disproportionation upon reaction with low-valence metals. [ 7 ]
The most fundamental reactivity of bicarbonate/carbonato complexes is their interconversion. This acid-base reaction has been examined mainly for unimolecular complexes. Such reactions are molecular versions of the familiar reaction of acids with carbonate minerals.
Protonation of carbonato complexes gives the corresponding bicarbonate. The structure of a bicarbonate complex indicates that protonation occurs at the coordinated oxygen. [ 8 ] Protonation of bicarbonate ligands results in loss of carbon dioxide and formation of the metal hydroxide; this process is the microscopic reverse of the first step in the carbonation of metal hydroxides. Particularly well studied are the reactions of [Co(NH 3 ) 4 (CO 3 )] + and its ethylenediamine analogue carbonatobis(ethylenediamine)cobalt(III) . [ 1 ]
Few homoleptic carbonato complexes have been characterized. One is [Zr(CO 3 ) 4 ] 4- , featuring 8-coordinate Zr(IV). [ 9 ] Tris(carbonato)cobalt(III) ([Co(CO 3 ) 3 ] 3- ) is another example.
Metal carbonato and bicarbonate complexes are of no direct commercial importance. Several minerals are metal carbonates, and a few feature molecular carbonate complexes, e.g. hellyerite ([Ni 2 (CO 3 ) 2 (H 2 O) 8 ]·H 2 O). [ 10 ]
In the biological sphere, zinc bicarbonate complexes are central intermediates in the action of the carbonic anhydrase : [ 11 ] | https://en.wikipedia.org/wiki/Transition_metal_carbonate_and_bicarbonate_complexes |
Transition metal carboxamide complexes are coordination complexes containing one or more amide ligands (RC(O)NH 2 being the simplest members) bound to a transition metal . [ 1 ] Many amides are known, proteins for example. Amides are generally at least weakly basic, so the inventory of their coordination complexes is large. Amide complexation is an important structural motif in bioinorganic chemistry . This binding is also relevant to catalysis, since metal-amide complexes are intermediates in the metal-catalyzed hydrolysis of amides to carboxylic acids . [ 2 ]
Several principles and trends are illustrated by the case of complexes of dimethylformamide (DMF), a very common amide ligand. Amides bind to metals through oxygen, which is the basic site of amides. Amides are thus L ligands according to the covalent bond classification method , i.e. charge-neutral 2e donors. With respect to HSAB theory , amides are classified as hard ligands.
The M-O=C(NH 2 )H entity is planar in complexes of formamide . Similarly, the M-O=C(N(CH 3 ) 2 )H entity is planar in complexes of DMF. Two geometrically distinct bonding modes are possible depending on the relative positions of the metal ion and the N-substituent on the amide. For simple unidentate amides, like DMF, the M and N are transoid.
The term amide can refer either to carboxamides or to the anions R 2 N − and their derivatives. Tetrakis(dimethylamido)titanium ( Ti(N(CH 3 ) 2 ) 4 ) illustrates the ambiguity of the terminology.
Being a compact ligand, DMF forms homoleptic complexes with several metal cations. Some of those characterized by X-ray crystallography are listed below.
By contrast with DMF, homoleptic complexes with formamide and methylformamide are rare.
Diacetamide ( HN(C(O)CH 3 ) 2 ) and glycinamide ( H 2 NC(O)CH 2 NH 2 ) are two of many examples of chelating amide ligands. They respectively form the complexes [Co(HN(C(O)CH 3 ) 2 ) 2 (SCN) 2 ] [ 10 ] and [Co(H 2 NC(O)CH 2 NH 2 )(H 2 NCH 2 CH 2 NH 2 ) 2 ] 3+ .
Some prominent examples of transition metal complexes of carboxamido (deprotonated carboxamide) ligands: bleomycin (Fe), Nickel superoxide dismutase (Ni), and nitrile hydratase (Co). [ 11 ]
The amide ligand in cationic complexes is prone toward hydrolysis: [ 12 ]
The N-H bonds in amide ligands are acidified relative to the free ligand. Consequently, amide complexes are susceptible to deprotonation. This conversion is often accompanied by isomerization to the N-bonded form. This form of linkage isomerism is manifested in glycinamide complexes. [ 13 ]
Urea (O=C(NH 2 ) 2 ) is more basic at oxygen than simple amides owing to the combined pi-donation from the two amino groups. One consequence is that the inventory of urea complexes is large, including many homoleptic derivatives. Urea forms a broader range of complexes, reflected by the existence of [M(urea) 6 ](ClO 4 ) 3 (M = Ti, Mn). [ 14 ] [ 15 ] As for other complexes of carboxamide ligands, the MOC(NH 2 ) 2 core of urea is planar with a bent M-O-C angle.
Biuret (H 2 NC(O)N(H)C(O)NH 2 ) is a derivative of urea but with two amido groups. Biuret forms a variety of metal complexes, e.g. [Cu(NH 2 CONHCONH 2 ) 2 ] 2+ . [ 16 ] In addition to the parent urea and biuret, many derivatives are known where N-H is replaced by alkyl or aryl. | https://en.wikipedia.org/wiki/Transition_metal_carboxamide_complex |
Transition metal carboxylate complexes are coordination complexes with carboxylate (RCO 2 − ) ligands . Reflecting the diversity of carboxylic acids, the inventory of metal carboxylates is large. Many are useful commercially, and many have attracted intense scholarly scrutiny. Carboxylates exhibit a variety of coordination modes, most common are κ 1 - (O-monodentate), κ 2 (O,O-bidentate), and bridging.
Carboxylates bind to single metals by one or both oxygen atoms, the respective notation being κ 1 - and κ 2 -. In terms of electron counting , κ 1 -carboxylates are "X"-type ligands, i.e., pseudohalide-like. κ 2 -carboxylates are "L-X ligands", i.e. resembling the combination of a Lewis base (L) and a pseudohalide (X). Carboxylates are classified as hard ligands, in HSAB theory.
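The X versus L-X classification above feeds directly into the usual electron-counting bookkeeping: in the neutral (covalent) convention, an L ligand donates 2 e−, an X ligand 1 e−, and an LX ligand 3 e−. A minimal tally sketch (the function name and the Mn example are illustrative assumptions, not taken from the article):

```python
# Neutral (covalent) electron counting: each covalent-bond-classification
# ligand class contributes a fixed number of electrons to the count.
NEUTRAL_ELECTRONS = {"L": 2, "X": 1, "LX": 3, "X2": 2}

def electron_count(metal_group, ligands):
    """metal_group: periodic-table group number of the metal (e.g. Mn = 7).
    ligands: list of CBC ligand classes, e.g. ["L"] * 5 + ["X"]."""
    return metal_group + sum(NEUTRAL_ELECTRONS[cls] for cls in ligands)

# Illustrative (hypothetical) example: an Mn(CO)5 fragment bearing a
# kappa-1 carboxylate -> Mn (7 e-) + 5 CO (L, 2 e- each) + RCO2 (X, 1 e-)
count_k1 = electron_count(7, ["L"] * 5 + ["X"])       # 18 e-
# Swapping to a kappa-2 carboxylate (LX, 3 e-) displaces one CO:
count_k2 = electron_count(7, ["L"] * 4 + ["LX"])      # also 18 e-
```

The comparison shows why κ 1 /κ 2 interconversion of a carboxylate can open or close a coordination site without violating the 18-electron rule.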
For simple carboxylates, the acetate complexes are illustrative. Most transition metal acetates are mixed ligand complexes. One common example is hydrated nickel acetate , Ni(O 2 CCH 3 ) 2 (H 2 O) 4 , which features intramolecular hydrogen-bonding between the uncoordinated oxygens and the protons of aquo ligands . Stoichiometrically simple complexes are often multimetallic. One family are the basic metal acetates , of the stoichiometry [M 3 O(OAc) 6 (H 2 O) 3 ] n+ . [ 2 ]
Homoleptic carboxylate complexes are usually coordination polymers . But exceptions exist.
Many methods allow the synthesis of metal carboxylates. From preformed carboxylic acid, the following routes have been demonstrated: [ 5 ]
From preformed carboxylate, salt metathesis reactions are common:
Metal carboxylates can be prepared by carbonation of highly basic metal alkyls:
A common reaction of metal carboxylates is their displacement by more basic ligands. Acetate is a common leaving group . They are especially prone to protonolysis, which is widely used to introduce ligands, displacing the carboxylic acid. In this way octachlorodimolybdate is produced from dimolybdenum tetraacetate :
Acetates of electrophilic metals are proposed to function as bases in concerted metalation deprotonation reactions. [ 6 ]
Attempts to prepare some carboxylate complexes, especially for electrophilic metals, often gives oxo derivatives. Examples include the oxo-acetates of Fe(III), Mn(III), and Cr(III). Pyrolysis of metal carboxylates affords acid anhydrides and the metal oxide. This reaction explains the formation of basic zinc acetate from anhydrous zinc diacetate .
In some cases, monodentate carboxylates undergo O-alkylation to give esters. Strong alkylating agents are required.
Many carboxylates form complexes with transition metals. Alkyl and simple aryl carboxylates behave similarly to the acetates. Trifluoroacetate differs in that it is usually monodentate in mononuclear complexes, e.g. [Zn(κ 2 -O 2 CCH 3 ) 2 (OH 2 ) 2 ] vs [Zn(κ 1 -O 2 CCF 3 ) 2 (OH 2 ) 4 ]. [ 7 ]
Naphthenic acids , mixtures of long chain and cyclic carboxylic acids extracted from petroleum, form lipophilic complexes (often called salts) with transition metals. These metal naphthenates , with the formula M(naphthenate) 2 or M 3 O(naphthenate) 6 , have diverse applications [ 8 ] [ 9 ] including synthetic detergents , lubricants , corrosion inhibitors, fuel and lubricating oil additives, wood preservatives , insecticides , fungicides , acaricides , wetting agents , thickening agents , and oil drying agents . Industrially useful naphthenates include those of aluminium, magnesium, calcium, barium, cobalt, copper, lead, manganese, nickel, vanadium, and zinc. [ 9 ] Illustrative is the use of cobalt naphthenate for the oxidation of tetrahydronaphthalene to the hydroperoxide. [ 10 ]
Like naphthenic acid, 2-ethylhexanoic acid forms lipophilic complexes that are used in organic and industrial chemical synthesis . They function as catalysts in polymerizations as well as for oxidation reactions as oil drying agents . [ 11 ] Metal ethylhexanoates are referred to as metallic soaps. [ 12 ]
A commercially important family of metal carboxylates are derived from aminopolycarboxylates , e.g., EDTA 4- . Related to these synthetic chelating agents are the amino acids , which form large families of amino acid complexes . Two amino acids, glutamate and aspartate, have carboxylate side chains, which function as ligands for iron in nonheme iron proteins, such as hemerythrin . [ 13 ]
Metal organic frameworks , porous, three-dimensional coordination polymers, are often derived from metal carboxylate clusters. These clusters, called secondary building units (SBUs), are often linked by the conjugate bases of benzenedi- and tri- carboxylic acids. [ 14 ]
It has been claimed that "cobalt carboxylates are the most widely used homogeneous catalysts in industry" as they are used in the oxidation of p-xylene to terephthalic acid . [ 15 ]
Palladium(II) acetate has been described as being "among the most extensively used transition metal complexes in metal-mediated organic synthesis". Many coupling reactions utilize this reagent, which is soluble in organic solvents and which contains a built-in Bronsted base (acetate). [ 16 ]
Dirhodium tetrakis(trifluoroacetate) is a widely used catalyst for reactions involving diazo compounds. [ 17 ] | https://en.wikipedia.org/wiki/Transition_metal_carboxylate_complex |
Transition metal carbyne complexes are organometallic compounds with a triple bond between carbon and the transition metal . [ 1 ] This triple bond consists of a σ-bond and two π-bonds . [ 2 ] The HOMO of the carbyne ligand interacts with the LUMO of the metal to create the σ-bond. The two π-bonds are formed when the two HOMO orbitals of the metal back-donate to the LUMO of the carbyne. They are also called metal alkylidynes—the carbon is a carbyne ligand. Such compounds are useful in organic synthesis of alkynes and nitriles . They have been the focus of much fundamental research. [ 3 ]
Transition metal carbyne complexes are most common for the early transition metals, especially niobium , tantalum , molybdenum , tungsten , and rhenium . Both low-valent and high-valent carbyne complexes are known.
The first Fischer carbyne complex was reported in 1973. [ 4 ] Two years later in 1975, the first "Schrock carbyne" was reported. [ 5 ]
Many high-valent carbyne complexes have since been prepared, often by dehydrohalogenation of carbene complexes. Alternatively, amino-substituted carbyne ligands sometimes form upon protonation of electron-rich isonitrile complexes. Similarly, O -protonation of μ 3 -CO ligands in clusters gives hydroxycarbyne complexes. Vinyl ligands have been shown to rearrange into carbyne ligands. Addition of electrophiles to vinylidene ligands also affords carbyne complexes. [ 3 ]
Some metal carbynes dimerize to give dimetallacyclobutadienes. In these complexes, the carbyne ligand serves as a bridging ligand .
Several cluster-bound carbyne complexes are known, typically with CO ligands . These compounds do not feature MC triple bonds; instead the carbyne carbon is tetrahedral. Tricobalt derivatives are prepared by treating cobalt carbonyl with haloforms : [ 6 ]
Monomeric metal carbyne complexes exhibit fairly linear M–C–R linkages according to X-ray crystallography . The M–C distances are typically shorter than the M–C bonds found in metal carbenes. The bond angle is generally between 170° and 180°. [ 8 ] Analogous to Fischer and Schrock carbenes , Fischer and Schrock carbynes are also known. Fischer carbynes usually have lower oxidation state metals and π-accepting/electron-withdrawing ligands. Schrock carbynes, on the other hand, typically have higher oxidation state metals and electron-donating/anionic ligands. In a Fischer carbyne the carbyne carbon exhibits electrophilic behavior, while in a Schrock carbyne it displays nucleophilic reactivity. [ 9 ] In 2025, radical reactivity at the carbyne carbon was also reported. [ 10 ] Carbyne complexes have also been characterized by many methods including infrared spectroscopy , Raman spectroscopy, and single-crystal X-ray diffraction . [ 11 ] Bond lengths, bond angles and structures can be inferred from these and other analytical techniques.
Metal carbyne complexes also exhibit a large trans effect , where the ligand opposite the carbyne is typically labile.
Hexa(tert-butoxy)ditungsten(III) is a catalyst for alkyne metathesis . [ 12 ] The catalytic cycle involves a carbyne intermediate. [ 13 ]
Some carbyne complexes react with electrophiles at C-carbyne followed by association of the anion. The net reaction gives a transition metal carbene complex :
These complexes can also undergo photochemical reactions .
In some carbyne complexes, coupling of the carbyne ligand to a carbonyl is observed. Protonation of the carbyne carbon can also convert the carbyne ligand into a π- allyl ligand. [ 14 ]
A sulfur-based main group analog of a carbyne complex has been prepared by Seppelt and coworkers. [ 15 ] The compound, trifluoro(2,2,2-trifluoroethylidyne)-λ 6 -sulfurane, F 3 C–C≡SF 3 , prepared by dehydrofluorination of F 3 C–CH=SF 4 or F 3 C–CH 2 –SF 5 , is an unstable gas that readily dimerizes to trans -(CF 3 )(SF 3 )C=C(CF 3 )(SF 3 ) above –50 °C. | https://en.wikipedia.org/wiki/Transition_metal_carbyne_complex |
In chemistry , a transition metal chloride complex is a coordination complex that consists of a transition metal coordinated to one or more chloride ligands . The class of complexes is extensive. [ 1 ]
Halides are X-type ligands in coordination chemistry . They are both σ- and π-donors. Chloride is commonly found as both a terminal ligand and a bridging ligand . The halide ligands are weak field ligands . Due to a smaller crystal field splitting energy, the homoleptic halide complexes of the first transition series are all high spin. Only [CrCl 6 ] 3− is exchange inert.
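The high-spin statement above follows from a simple occupancy rule: in a weak octahedral field the five d orbitals are singly occupied before any pairing, whereas in a strong field the t 2g set fills completely first. A small sketch of that bookkeeping (octahedral geometry only; the function name is illustrative):

```python
# Count unpaired electrons for a d^n octahedral complex by distributing
# n electrons over the t2g (3 orbitals) and eg (2 orbitals) sets.
def unpaired_electrons(n_d, high_spin=True):
    if high_spin:
        # Weak-field (e.g. halide) ligands: Hund's rule across all five
        # orbitals -- singly occupy each before pairing begins.
        singly = min(n_d, 5)
        paired = n_d - singly
        return singly - paired
    # Strong-field ligands: fill the t2g set (up to 6 e-) before eg.
    t2g = min(n_d, 6)
    eg = n_d - t2g
    up_t2g = min(t2g, 3) - max(t2g - 3, 0)
    up_eg = min(eg, 2) - max(eg - 2, 0)
    return up_t2g + up_eg

# e.g. a d5 ion such as Fe(III): 5 unpaired electrons when high spin,
# only 1 when low spin.
hs_d5 = unpaired_electrons(5, high_spin=True)
ls_d5 = unpaired_electrons(5, high_spin=False)
```

For halide complexes of the first transition series the weak-field branch applies throughout, which is why their homoleptic chloride complexes are uniformly high spin.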
Homoleptic metal halide complexes are known with several stoichiometries, but the main ones are the hexahalometallates and the tetrahalometallates. The hexahalides adopt octahedral coordination geometry , whereas the tetrahalides are usually tetrahedral. Square planar tetrahalides are known for Pd(II), Pt(II), and Au(III). Examples with 2- and 3-coordination are common for Au(I), Cu(I), and Ag(I).
Due to the presence of filled p π orbitals, halide ligands on transition metals are able to reinforce π-backbonding onto a π-acid. They are also known to labilize cis -ligands. [ 2 ] [ 3 ]
Homoleptic complexes (complexes with only chloride ligands) are often common reagents. Almost all examples are anions .
Some homoleptic complexes of the second row transition metals feature metal-metal bonds.
Heteroleptic complexes containing chloride are numerous. Most hydrated metal halides are members of this class. Hexamminecobalt(III) chloride and Cisplatin ( cis -Pt(NH 3 ) 2 Cl 2 ) are prominent examples of metal-ammine-chlorides.
As indicated in the table below, many hydrates of metal chlorides are molecular complexes. [ 78 ] [ 79 ] These compounds are often important commercial sources of transition metal chlorides. Several hydrated metal chlorides are not molecular and thus are not included in this tabulation. For example the dihydrates of manganese(II) chloride , nickel(II) chloride , copper(II) chloride , iron(II) chloride , and cobalt(II) chloride are coordination polymers .
Metal chlorides form adducts with ethers to give transition metal ether complexes . | https://en.wikipedia.org/wiki/Transition_metal_chloride_complex |
Transition metal complexes of aldehydes and ketones describes coordination complexes with aldehyde (RCHO) and ketone (R 2 CO) ligands . Because aldehydes and ketones are common, the area is of fundamental interest. Some reactions that are useful in organic chemistry involve such complexes.
In monometallic complexes, aldehydes and ketones can bind to metals in either of two modes, η 1 -O-bonded and η 2 -C,O-bonded. These bonding modes are sometimes referred to as sigma- and pi-bonded. These forms may sometimes interconvert.
The sigma bonding mode is more common for higher valence, Lewis-acidic metal centers (e.g., Zn 2+ ). [ 1 ] The pi-bonded mode is observed for low valence, electron-rich metal centers (e.g., Fe(0) and Os(0)). [ 2 ]
For the purpose of electron-counting, O-bonded ligands count as 2-electron "L ligands" : they are Lewis bases. η 2 -C,O ligands are described as analogues of alkene ligands , i.e. the Dewar-Chatt-Duncanson model . [ 3 ]
η 2 -C,O ketones and aldehydes can function as bridging ligands, utilizing a lone pair of electrons on oxygen. One such complex is [(C 5 H 5 ) 2 Zr(CH 2 O)] 3 , which features a Zr 3 O 3 ring. [ 4 ]
Related to η 1 -O-bonded complexes of aldehydes and ketones are metal acetylacetonates and related species, which can be viewed as a combination of ketone and enolate ligands.
Compounds in which metals replace the aldehydic hydrogen, instead of enolizing the carbonyl, are transition metal acyl complexes .
Some η 2 -aldehyde complexes insert alkenes to give five-membered metallacycles . [ 5 ]
η 1 -Complexes of α,β-unsaturated carbonyls exhibit enhanced reactivity toward dienes . This interaction is the basis of Lewis-acid catalyzed Diels-Alder reactions .
Transition metal complexes of phosphine oxides are coordination complexes containing one or more phosphine oxide ligands. Many phosphine oxides exist and most behave as hard Lewis bases . Almost invariably, phosphine oxides bind metals by formation of M-O bonds. [ 1 ]
The structure of the phosphine oxide is not strongly perturbed by coordination. The geometry at phosphorus remains tetrahedral. The P-O distance elongates by ca. 2%. In triphenylphosphine oxide , the P-O distance is 1.48 Å. [ 3 ] In NiCl 2 [OP(C 6 H 5 ) 3 ] 2 , the distance is 1.51 Å (see figure). A similar elongation of the P-O bond is seen in cis -WCl 4 (OPPh 3 ) 2 . [ 4 ] The trend is consistent with the stabilization of the ionic resonance structure upon complexation.
Typically, complexes are derived from hard metal centers. Examples include cis -WCl 4 (OPPh 3 ) 2 [ 4 ] and NbOCl 3 (OPPh 3 ) 2 . [ 5 ] Trialkylphosphine oxides are more basic (better ligands) than triarylphosphine oxides. One such complex is FeCl 2 (OPMe 3 ) 2 (Me = CH 3 ). [ 6 ]
Most complexes of phosphine oxides are prepared by treatment of a labile metal complex with preformed phosphine oxide. In some cases, the phosphine oxide is unintentionally generated by air-oxidation of the parent phosphine ligand.
Since phosphine oxides are weak Lewis bases, they are readily displaced from their metal complexes. This behavior has led to investigation of mixed phosphine-phosphine oxide ligands, which exhibit hemilability . Typical phosphine-phosphine oxide ligands are Ph 2 P(CH 2 ) n P(O)Ph 2 (Ph = C 6 H 5 ) derived from bis(diphenylphosphino)ethane (n = 2) and bis(diphenylphosphino)methane (n = 1). [ 1 ]
In one case, coordination of the oxide of dppe to W(0) results in deoxygenation, giving an oxotungsten complex of dppe. [ 7 ]
Secondary phosphine oxides have the formula R 2 P(O)H. [ 8 ] They tautomerize to small amounts of the hydroxy tautomer R 2 P-OH. Regardless, the hydroxy tautomer forms a wide variety of complexes with transition metals. In contrast to O-bonded phosphine oxide ligands, the P-bonded phosphine oxides are strong field ligands. These ligands tend to engage in intramolecular hydrogen bonds. Illustrative is the complex derived from dimethylphosphine oxide , PtH(PMe 2 OH) 2 (PMe 2 O) (Me = CH 3 ). [ 9 ]
The pattern also applies to several phosphorus compounds including phosphorous acid , which forms complexes as P(OH) 3 . The complex platinum pop is one example.
The Kläui ligand is the anion {(C 5 H 5 )Co[(CH 3 O) 2 PO] 3 } − . It is derived from the trimethylphosphite ligand by dealkylation. In this case the "ligand" is a complex of cobalt that also binds to other metals in a tridentate manner . [ 10 ] | https://en.wikipedia.org/wiki/Transition_metal_complexes_of_phosphine_oxides |
Transition metal complexes of thiocyanate describes coordination complexes containing one or more thiocyanate (SCN − ) ligands . The topic also includes transition metal complexes of isothiocyanate . These complexes have few applications but played a significant role in the development of coordination chemistry. [ 1 ]
Hard metal cations , as classified by HSAB theory , tend to form N -bonded complexes (isothiocyanates), whereas class B or soft metal cations tend to form S -bonded thiocyanate complexes. For the isothiocyanates, the M-N-C angle is usually close to 180°. For the thiocyanates, the M-S-C angle is usually close to 100°.
Most homoleptic complexes of NCS − feature isothiocyanate ligands (N-bonded). All first-row metals bind thiocyanate in this way. [ 4 ] Octahedral complexes [M(NCS) 6 ] z- include M = Ti(III), Cr(III), Mn(II), Fe(III), Ni(II), Mo(III), Tc(IV), and Ru(III). [ 5 ] Four-coordinate tetrakis(isothiocyanate) complexes are tetrahedral, since isothiocyanate is a weak-field ligand. Two examples are the deep blue [Co(NCS) 4 ] 2- and the green [Ni(NCS) 4 ] 2- . [ 6 ]
Few homoleptic complexes of NCS − feature thiocyanate ligands (S-bonded). Octahedral complexes include [M(SCN) 6 ] 3- (M = Rh [ 7 ] and Ir [ 8 ] ) and [Pt(SCN) 6 ] 2- . Square planar complexes include [M(SCN) 4 ] z- (M = Pd(II), Pt(II), [ 9 ] and Au(III)). Colorless [Hg(SCN) 4 ] 2- is tetrahedral.
Some octahedral isothiocyanate complexes undergo redox reactions reversibly. Orange [Os(NCS) 6 ] 3- can be oxidized to violet [Os(NCS) 6 ] 2- . The Os-N distances in both derivatives are almost identical at 200 picometers . [ 10 ]
Thiocyanate shares its negative charge approximately equally between sulfur and nitrogen . [ 11 ] Thiocyanate can bind metals at either sulfur or nitrogen — it is an ambidentate ligand . Other factors, e.g. kinetics and solubility, sometimes influence the observed isomer. For example, [Co(NH 3 ) 5 (NCS)] 2+ is the thermodynamic isomer, but [Co(NH 3 ) 5 (SCN)] 2+ forms as the kinetic product of the reaction of thiocyanate salts with [Co(NH 3 ) 5 (H 2 O)] 3+ . [ 12 ]
Some complexes of SCN − feature both thiocyanate and isothiocyanate ligands. Examples are found for heavy metals in the middle of the d-period: Ir(III), [ 13 ] and Re(IV). [ 3 ]
As a ligand, [SCN] − can also bridge two (M−SCN−M) or even three metals (>SCN− or −SCN<). One example of an SCN-bridged complex is [Ni 2 (SCN) 8 ] 4- . [ 6 ]
This article focuses on homoleptic complexes, which are simpler to describe and analyze. Most complexes of SCN − , however, are mixed-ligand species. Mentioned above is one example, [Co(NH 3 ) 5 (NCS)] 2+ . Another example is [OsCl 2 (SCN) 2 (NCS) 2 ] 2- . [ 14 ] Reinecke's salt , a precipitating agent, is a derivative of [Cr(NCS) 4 (NH 3 ) 2 ] − .
Thiocyanate complexes are not widely used commercially. Possibly the oldest application of thiocyanate complexes was the use of thiocyanate as a test for ferric ions in aqueous solution: addition of a thiocyanate salt to a solution containing ferric ions gives a deep red color, although the identity of the chromophore remains unknown. [ 15 ] The reverse was also used: testing for the presence of thiocyanate by the addition of ferric salts. The 1:1 complex of thiocyanate and iron is deeply red; the effect was first reported in 1826. [ 16 ] The structure of this species has never been confirmed by X-ray crystallography . The test is largely archaic.
Copper(I) thiocyanate is a reagent for the conversion of aryl diazonium salts to arylthiocyanates, a version of the Sandmeyer reaction .
Since thiocyanate occurs naturally, it is to be expected that it serves as a substrate for enzymes. Two metalloenzymes , thiocyanate hydrolases , catalyze the hydrolysis of thiocyanate. A cobalt-containing hydrolase catalyzes its conversion to carbonyl sulfide : [ 17 ]
A copper-containing thiocyanate hydrolase catalyzes its conversion to cyanate : [ 18 ]
In both cases, metal-SCN complexes are invoked as intermediates.
Almost all thiocyanate complexes are prepared from thiocyanate salts using ligand substitution reactions . [ 12 ] [ 19 ] [ 20 ] Typical thiocyanate sources include ammonium thiocyanate and potassium thiocyanate .
An unusual route to thiocyanate complexes involves oxidative addition of thiocyanogen to low valent metal complexes: [ 21 ]
Even though the reaction involves cleavage of the S-S bond in thiocyanogen, the product is the Ru-NCS linkage isomer.
In another unusual method, thiocyanate functions as both a ligand and as a reductant in its reaction with dichromate to give [Cr(NCS) 4 (NH 3 ) 2 ] − . In this conversion, Cr(VI) converts to Cr(III). [ 22 ] | https://en.wikipedia.org/wiki/Transition_metal_complexes_of_thiocyanate |
Dioxygen complexes are coordination compounds that contain O 2 as a ligand . [ 1 ] [ 2 ] The study of these compounds is inspired by oxygen-carrying proteins such as myoglobin , hemoglobin , hemerythrin , and hemocyanin . [ 3 ] Several transition metals form complexes with O 2 , and many of these complexes form reversibly. [ 4 ] The binding of O 2 is the first step in many important phenomena, such as cellular respiration , corrosion , and industrial chemistry. The first synthetic oxygen complex was demonstrated in 1938 with a cobalt(II) complex that reversibly bound O 2 . [ 5 ]
O 2 binds to a single metal center either "end-on" ( η 1 - ) or "side-on" ( η 2 -). The bonding and structures of these compounds are usually evaluated by single-crystal X-ray crystallography , focusing both on the overall geometry as well as the O–O distances, which reveal the bond order of the O 2 ligand.
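The O–O distance criterion can be made concrete. As a rough illustration (the reference distances below are approximate values for the free species, quoted from general chemistry knowledge rather than from this article), a short sketch assigns a formal description to a measured O–O distance by nearest-neighbor comparison:

```python
# Approximate O-O distances of the free species (assumed reference values):
#   dioxygen O2      ~1.21 Å
#   superoxide O2^-  ~1.33 Å
#   peroxide O2^2-   ~1.49 Å
REFERENCES = {
    "dioxygen (O2)": 1.21,
    "superoxide (O2-)": 1.33,
    "peroxide (O2 2-)": 1.49,
}

def classify_oo(distance_angstrom: float) -> str:
    """Return the reference species whose O-O distance is closest."""
    return min(REFERENCES, key=lambda name: abs(REFERENCES[name] - distance_angstrom))

# An O-O distance of ~1.47 Å, as in many eta-2 adducts, points to peroxide
print(classify_oo(1.47))  # peroxide (O2 2-)
```

In practice such assignments are corroborated by vibrational spectroscopy and magnetism rather than distance alone.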
O 2 adducts derived from cobalt (II) and iron (II) complexes of porphyrin (and related anionic macrocyclic ligands) exhibit this bonding mode. Myoglobin and hemoglobin are famous examples, and many synthetic analogues have been described that behave similarly. Binding of O 2 is usually described as proceeding by electron transfer from the metal(II) center to give superoxide ( O − 2 ) complexes of metal(III) centers. As shown by the mechanisms of cytochrome P450 and alpha-ketoglutarate-dependent hydroxylase , Fe- η 1 -O 2 bonding is conducive to formation of Fe(IV) oxo centers. O 2 can bind to one metal of a bimetallic unit via the same modes discussed above for mononuclear complexes. A well-known example is the active site of the protein hemerythrin , which features a diiron carboxylate that binds O 2 at one Fe center. Dinuclear complexes can also cooperate in the binding, although the initial attack of O 2 probably occurs at a single metal.
η 2 -bonding is the most common motif seen in coordination chemistry of dioxygen. Such complexes can be generated by treating low-valent metal complexes with oxygen. For example, Vaska's complex reversibly binds O 2 (Ph = C 6 H 5 ):
The conversion is described as a 2 e − redox process: Ir(I) converts to Ir(III) as dioxygen converts to peroxide . Since O 2 has a triplet ground state and Vaska's complex is a singlet, the reaction is slower than when singlet oxygen is used. [ 7 ] The magnetic properties of some η 2 -O 2 complexes show that the ligand, in fact, is superoxide, not peroxide. [ 8 ]
Most complexes of η 2 -O 2 are generated using hydrogen peroxide , not from O 2 . Chromate ( [CrO 4 ] 2− ) can, for example, be converted to the tetraperoxide [Cr(O 2 ) 4 ] 2− . The reaction of hydrogen peroxide with aqueous titanium(IV) gives a brightly colored peroxy complex that is a useful test for titanium as well as hydrogen peroxide. [ 9 ]
These binding modes include μ 2 - η 2 , η 2 -, μ 2 - η 1 , η 1 -, and μ 2 - η 1 , η 2 -. Depending on the degree of electron-transfer from the dimetal unit, these O 2 ligands can again be described as peroxo or superoxo. Hemocyanin is an O 2 -carrier that utilizes a bridging O2 binding motif. It features a pair of copper centers. [ 10 ]
Salcomine , the cobalt(II) complex of the salen ligand, was the first synthetic O 2 carrier. [ 12 ] Solvated derivatives of the solid complex bind 0.5 equivalent of O 2 :
Reversible electron transfer reactions are observed in some dinuclear O 2 complexes. [ 13 ]
Dioxygen complexes are the precursors to other families of oxygenic ligands. Metal oxo compounds arise from the cleavage of the O–O bond after complexation. Hydroperoxo complexes are generated in the course of the reduction of dioxygen by metals. The reduction of O 2 by metal catalysts is a key half-reaction in fuel cells .
Metal-catalyzed oxidations with O 2 proceed via the intermediacy of dioxygen complexes, although the actual oxidants are often oxo derivatives. The reversible binding of O 2 to metal complexes has been used as a means to purify oxygen from air, but cryogenic distillation of liquid air remains the dominant technology. | https://en.wikipedia.org/wiki/Transition_metal_dioxygen_complex |
In organometallic chemistry , a transition metal formyl complex is a metal complex containing one or more formyl (CHO) ligands, usually just one. A subset of transition metal acyl complexes , formyl complexes can be viewed as metalla-aldehydes. A representative example is (CO) 5 ReCHO. The formyl is viewed as an X (pseudohalide) ligand. Metal formyls are proposed as intermediates in the hydrogenation of carbon monoxide , as occurs in the Fischer-Tropsch process . [ 2 ]
The MCHO group is planar. A C=O double bond is indicated by X-ray crystallography . A second resonance structure has a M=C double bond, with negative charge on oxygen.
Metal formyl complexes are often prepared by the reaction of metal carbonyls with hydride reagents: [ 3 ]
The CO ligand is the electrophile and the hydride (provided typically from a borohydride ) is the nucleophile.
Some metal formyls are produced by reaction of metal carbonyl anions with reagents that donate the equivalent of a formyl cation, such as mixed formate anhydrides. [ 4 ]
Metal formyls participate in many reactions, many of which are motivated by interest in Fischer-Tropsch chemistry. O-alkylation gives carbenoid complexes. The formyl ligand also functions as a base, allowing the formation of M-CH=O-M' linkages. [ 5 ] Decarbonylation leads to de-insertion of the carbonyl, yielding hydride complexes. [ 2 ] | https://en.wikipedia.org/wiki/Transition_metal_formyl_complex |
In coordination chemistry and organometallic chemistry , a transition metal imido complex is a coordination compound containing an imido ligand . Imido ligands can be terminal or bridging ligands . The parent imido ligand has the formula NH, but most imido ligands have alkyl or aryl groups in place of H. The imido ligand is generally viewed as a dianion, akin to oxide.
In some terminal imido complexes, the M=N−C angle is 180° but often the angle is decidedly bent. Complexes of the type M=NH are assumed to be intermediates in nitrogen fixation by synthetic catalysts. [ 3 ]
Imido ligands are observed as doubly and, less often, triply bridging ligands.
Commonly, metal imido complexes are generated from metal oxo complexes . They arise by condensation of amines with metal oxides and metal halides:
This approach is illustrated by the conversion of MoO 2 Cl 2 to the diimido derivative MoCl 2 (NAr) 2 ( dimethoxyethane ), a precursor to Schrock carbenes of the type Mo(OR) 2 (NAr)(CH-t-Bu). [ 4 ]
Aryl isocyanates react with metal oxides concomitant with decarboxylation:
Some are generated from the reaction of low-valence metal complexes with azides:
A few imido complexes have been generated by the alkylation of metal nitride complexes :
Metal imido complexes are mainly of academic interest. They are however assumed to be intermediates in ammoxidation catalysis, in the Sharpless oxyamination , and in nitrogen fixation .
A molybdenum imido complex appears in a common nitrogen fixation cycle:
with the oxidation state of molybdenum varying to accommodate the number of bonds from nitrogen. [ 6 ]
In organometallic chemistry , a transition metal indenyl complex is a coordination compound that contains one or more indenyl ligands. The indenyl ligand is formally the anion derived from deprotonation of indene . The η 5 -indenyl ligand is related to the η 5 cyclopentadienyl anion (Cp), thus indenyl analogues of many cyclopentadienyl complexes are known. Indenyl ligands lack the 5-fold symmetry of Cp, so they exhibit more complicated geometries. Furthermore, some indenyl complexes exist with only an η 3 -bonding mode. The η 5 - and η 3 -bonding modes sometimes interconvert. [ 1 ]
Indene is deprotonated by butyl lithium and related reagents to give the equivalent of the indenyl anion: [ 2 ]
The resulting lithium indenide can be used to prepare indenyl complexes by salt metathesis reactions of metal halides. [ 3 ] When the metal halide is easily reduced, the trimethylstannylindenyl can be used as a source of indenyl anion:
The M-C distances in indenyl complexes are comparable to those in cyclopentadienyl complexes. For the metallocenes M(Ind) 2 , ring slipping is evident for the case of M = Co and especially Ni, but not for M = Fe. [ 4 ] A number of chelating or ansa -bis(indenyl) complexes are known, such as those derived from 2,2'-bis(2-indenyl)biphenyl.
The indenyl effect refers to an explanation for the enhanced rates of substitution exhibited by η 5 - indenyl complexes vs the related η 5 - cyclopentadienyl complexes. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
Associative substitution occurs by the addition of a ligand to a metal complex followed by dissociation of an original ligand. Associative pathways are not typically seen in 18-electron complexes due to the requisite intermediates having more than 18 electrons associated with the metal atom. 18-electron indenyl complexes, however, have been shown to undergo substitution via associative pathways quite readily. This is attributed to the relative ease of η 5 to η 3 rearrangement due to stabilization by the arene . This stabilization is responsible for substitution rate enhancements of about 10 8 for the substitution of indenyl complexes compared to the corresponding cyclopentadienyl complex.
Kinetic data support two proposed mechanisms for associative ligand substitution. The first mechanism, proposed by Hart-Davis and Mawby, is a concerted attack by the nucleophile and η 5 to η 3 transition followed by loss of a ligand and a η 3 to η 5 transition.
In a mechanism proposed by Basolo , η 5 and η 3 isomers exist in rapid chemical equilibrium . The rate-limiting step occurs with the attack of the nucleophile on a η 3 isomer. The nature of the substituents of the allyl group can strongly affect the kinetics and regiochemistry of the nucleophilic attack. [ 12 ]
Indenyl-like effects are also observed in a number of non-indenyl metal complexes. In fluorenyl complexes, associative substitution is enhanced even further than in indenyl compounds. The substitution rate of Mn(η 5 -C 13 H 9 )(CO) 3 is about 60 times faster than that of Mn(η 5 -C 9 H 7 )(CO) 3 .
Veiros conducted a study comparing the rate of substitution on [(η 5 -X)Mn(CO) 3 ] where X is cyclopentadienyl , indenyl, fluorenyl, cyclohexadienyl, and 1-hydronaphthalene. Unsurprisingly, it was found that the ease of η 5 to η 3 haptotropic shift correlated to the strength of the Mn-X bond. [ 10 ]
The indenyl analogues of ferrocene , which is orange, and cobaltocenium cation were first reported by Pauson and Wilkinson . [ 3 ] The cobalt derivative is a poorer reductant than cobaltocene.
The indenyl effect was discovered by Hart-Davis and Mawby in 1969 through studies on the conversion of (η 5 -C 9 H 7 )Mo(CO) 3 CH 3 to the phosphine-substituted acetyl complex, which follows bimolecular kinetics. This rate law was attributed to the haptotropic rearrangement of the indenyl ligand from η 5 to η 3 . The corresponding reaction of tributylphosphine with (η 5 -C 5 H 5 )Mo(CO) 3 CH 3 was about 10 times slower. [ 13 ] The term indenyl effect was coined by Fred Basolo .
Subsequent work by Hart-Davis, Mawby, and White compared CO substitution by phosphines in Mo(η 5 -C 9 H 7 )(CO) 3 X and Mo(η 5 -C 5 H 5 )(CO) 3 X (X = Cl, Br, I) and found the cyclopentadienyl compounds to substitute by an S N 1 pathway and the indenyl compounds to substitute by both S N 1 and S N 2 pathways. Mawby and Jones later studied the rate of CO substitution with P(OEt) 3 with Fe(η 5 -C 9 H 7 )(CO) 2 I and Fe(η 5 -C 5 H 5 )(CO) 2 I and found that both occur by an S N 1 pathway with the indenyl substitution occurring about 575 times faster. Hydrogenation of the arene ring in the indenyl ligand resulted in CO substitution at about half the rate of the cyclopentadienyl compound.
Work in the early 1980s by Basolo found the S N 2 replacement of CO in Rh(η 5 -C 9 H 7 )(CO) 2 to be 10 8 times faster than in Rh(η 5 -C 5 H 5 )(CO) 2 . Shortly afterwards, Basolo tested the effect of the indenyl ligand on Mn(η 5 -C 9 H 7 )(CO) 3 , the cyclopentadienyl analogue of which had been shown to be inert to CO substitution. Mn(η 5 -C 9 H 7 )(CO) 3 did undergo CO loss and was found to substitute via an S N 2 mechanism.
In organometallic chemistry , transition metal complexes of nitrite describes families of coordination complexes containing one or more nitrite ( −NO 2 ) ligands . [ 2 ] Although the synthetic derivatives are only of scholarly interest, metal-nitrite complexes occur in several enzymes that participate in the nitrogen cycle . [ 3 ]
Three linkage isomers are common for nitrite ligands, O-bonded, N-bonded, and bidentate O,O-bonded. The former two isomers have been characterized for the pentamminecobalt(III) system, i.e. [(NH 3 ) 5 Co−NO 2 ] 2+ and [(NH 3 ) 5 Co−ONO] 2+ , referred to as N-nitrito and O-nitrito, respectively. These two forms are sometimes called nitro and nitrito. These isomers can be interconverted in some complexes. [ 4 ]
An example of chelating nitrite is [Cu(bipy) 2 (O 2 N)]NO 3 , where "bipy" is the bidentate ligand 2,2′-bipyridyl . This bonding mode is sometimes described as κ 2 - O,O -NO 2 .
The kinetically-favored O-bonded isomer [(NH 3 ) 5 Co−ONO] 2+ converts to [(NH 3 ) 5 Co−NO 2 ] 2+ . In its reaction with ferric porphyrin complexes, nitrite gives the O-bonded isomer, Fe(porph)ONO . Addition of donor ligands to this complex induces the conversion to the octahedral low-spin isomer, which now is a soft Lewis acid. The nitrite isomerizes to the N-bonded isomer, Fe(porph)NO 2 (L) . [ 5 ]
The isomerization of [(NH 3 ) 5 Co−ONO] 2+ to [(NH 3 ) 5 Co−NO 2 ] 2+ proceeds in an intramolecular manner. [ 6 ]
N- and O-bonded NO 2 − are classified as X ligands in the Covalent bond classification method . In the usual electron counting method , they are two-electron ligands.
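As a minimal illustration of the two-electron count, consider the homoleptic anion [Co(NO 2 ) 6 ] 3− (the anion of sodium cobaltinitrite): Co(III) is d 6 , and six two-electron nitrite ligands bring the total to 18. The arithmetic of the ionic counting method can be sketched as follows (the helper function is hypothetical, written only for this example):

```python
def electron_count(d_electrons: int, two_electron_ligands: int) -> int:
    """Ionic-method electron count: metal d-electrons plus 2 per donor ligand."""
    return d_electrons + 2 * two_electron_ligands

# [Co(NO2)6]3-: Co(III) is d6; six nitrite ligands each donate 2 electrons
print(electron_count(6, 6))  # 18, the closed-shell count
```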
With respect to HSAB theory , the N-bonded ligand is softer than the isomeric O-bonded form.
Several homoleptic (complexes with only one kind of ligand) complexes have been characterized by X-ray crystallography . The inventory includes octahedral complexes [M(NO 2 ) 6 ] 3− , where M = Co ( Sodium cobaltinitrite ) [ 7 ] [ 8 ] and Rh. Square-planar homoleptic complexes are also known for Pt(II) and Pd(II). The potassium salts of [M(NO 2 ) 4 ] 2− (M = Zn, Cd) feature homoleptic complexes with four O,O-bidentate nitrite ligands. [ 9 ]
Traditionally, metal nitrito complexes are prepared by salt metathesis or ligand substitution reactions using alkali metal nitrite salts, such as sodium nitrite . At neutral pH, nitrite exists predominantly as the anion, not nitrous acid. [ 10 ]
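The predominance of the anion at neutral pH follows from the Henderson–Hasselbalch equation. Taking pK a (HNO 2 ) ≈ 3.3 (an assumed textbook value, not stated in this article), the fraction present as NO 2 − at pH 7 can be estimated:

```python
def fraction_deprotonated(ph: float, pka: float) -> float:
    """Henderson-Hasselbalch: fraction of an acid present as its conjugate base."""
    ratio = 10 ** (ph - pka)  # [A-]/[HA]
    return ratio / (1 + ratio)

# pKa of nitrous acid is roughly 3.3 (assumed value, for illustration only)
f = fraction_deprotonated(7.0, 3.3)
print(f"{f:.4f}")  # ~0.9998: essentially all nitrite is anionic at pH 7
```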
Metal nitrosyl complexes undergo base hydrolysis, yielding nitrite complexes. This pattern is manifested in the behavior of nitroprusside :
Some anionic nitrito complexes undergo acid-induced deoxygenation to give the nitrosyl complex . [ 11 ]
The reaction is reversible in some cases. Thus, one can generate nitrito complexes by base-hydrolysis of electrophilic metal nitrosyls.
Nitro complexes also catalyze the oxidation of alkenes. [ 12 ]
Metal nitrito complexes figure prominently in the nitrogen cycle , which describes the relationships and interconversions of ammonia up to nitrate . Because nitrogen is often a limiting nutrient, this cycle is important. Nitrite itself does not readily undergo redox reactions, but its metal complexes do. [ 13 ]
The molybdenum-containing enzyme nitrite oxidoreductase catalyzes the oxidation of nitrite to nitrate:
The heme -based enzyme nitrite reductase catalyzes the conversion of nitrite to ammonia. The cycle begins with reduction of an iron-nitrite complex to a metal nitrosyl complex . [ 3 ]
The copper-containing enzyme nitrite reductase (CuNIR) catalyzes the 1-electron reduction of nitrite to nitric oxide. The proposed mechanism entails the protonation of a κ 2O,O -NO 2 -Cu(I) complex. This protonation induces cleavage of an N–O bond, giving a HO–Cu–ON center, which features a nitric oxide ligand O-bonded to Cu(II) (an isonitrosyl ). | https://en.wikipedia.org/wiki/Transition_metal_nitrite_complex |
Transition metal nitroso complexes are coordination complexes containing one or more organo nitroso ligands (RNO). [ 1 ]
Organic nitroso compounds bind to metals in several ways, but most commonly as monodentate N-bonded ligands. O-bonded and η 2 -N,O-bonded modes are also known. Dimers of organic nitroso compounds also bind in a κ 2 - O,O bidentate manner. Illustrative are Ru(acac) 2 (C 6 H 5 NO) 2 , where a pair of nitrosobenzenes are monodentate, and [Ru(acac) 2 (μ−C 6 H 5 NO)] 2 , where two nitrosobenzenes bridge . [ 2 ]
Arylnitroso compounds with a flanking hydroxy group are well developed, e.g. 1-nitroso-2-naphthol . They are precursors to anionic N,O chelating ligands. Chelating dinitrosoarenes are uncommon but have been investigated. [ 3 ]
Organic nitroso complexes can be prepared from preformed organic nitroso precursors. These precursors usually exist as N-N bonded dimers, but the dimer dissociates readily. This direct method is used to give W(CO) 5 (tert-BuNO) (where tert-Bu is (CH 3 ) 3 C ). [ 4 ] The Fe-porphyrin complex depicted below is prepared by this route. More complicated but more biorelevant routes involve degradation of precursors such as nitrobenzene and phenylhydroxylamine . [ 5 ]
The coupling of organic ligands and nitric oxide is yet another route. [ 1 ]
Methemoglobinemia is a disorder where a large fraction of hemoglobin in one's blood has converted to inactive forms, generically called methemoglobin . Since methemoglobin is not an oxygen-carrier, methemoglobinemia is a serious disorder, sometimes fatal. Exposure to nitrobenzene , aniline , and their derivatives causes this disorder, which is attributed to their conversion to nitrosobenzene (and derivatives), which inactivate hemoglobin by forming a complex with the Fe center, precluding binding of O 2 . [ 6 ]
As indicated by the applications in dyeing, chelating aryl nitroso compounds often form deeply colored complexes.
Transition metal perchlorate complexes are coordination complexes with one or more perchlorate ligands . Perchlorate can bind to metals through one, two, three, or all four oxygen atoms. Usually however, perchlorate is a counterion, not a ligand.
Homoleptic complexes, i.e. complexes where all the ligands are the same (in this case perchlorate), are of fundamental interest because of their simple stoichiometries.
Several anhydrous metal diperchlorate complexes are known but most are not molecular (and hence, not complexes). For example, many compounds with the formula M(ClO 4 ) 2 are coordination polymers (M = Mn, Fe, Co, Ni, Cu). An exception to this pattern is palladium(II) perchlorate Pd(ClO 4 ) 2 , which is a square planar complex consisting of a pair of bidentate perchlorate ligands. Furthermore, anhydrous Cu(ClO 4 ) 2 is sublimable, which implies the existence of molecular Cu(ClO 4 ) 2 . [ 1 ]
Titanium(IV) perchlorate and zirconium(IV) perchlorate are molecular, featuring four bidentate perchlorate ligands. They are volatile.
More common than homoleptic complexes are those with two or more types of ligands. A classic case is the dicationic complex pentamminecobalt(III) perchlorate, which had resisted formation by conventional substitution reactions. [ 2 ] It was prepared by oxidation of the azide complex : [ 3 ]
Another mixed ligand complex is the perchlorate complex of the ferric derivative of octaethylporphyrin . [ 4 ]
Being the conjugate base of the strongly acidic perchloric acid , perchlorate is very weakly basic. It is more commonly encountered as a counterion in coordination chemistry. Illustrative of its low basicity is the ability of water to outcompete perchlorate as a ligand for metal ions, as indicated by the multitude of aquo complexes with noncoordinated perchlorate. Ferrous perchlorate , cobalt(II) perchlorate , chromium(III) perchlorate , manganese(II) perchlorate , nickel(II) perchlorate , and copper(II) perchlorate are commonly encountered as their hexaaquo complexes. [ 5 ]
The preparation of perchlorate complexes can be challenging because perchlorate is a weakly coordinating anion .
Chlorine trioxide is an important precursor to anhydrous perchlorate complexes. It serves as a source of ClO + 2 and ClO − 4 . It reacts with vanadium pentoxide ( V 2 O 5 ) to give VO(ClO 4 ) 3 and VO 2 (ClO 4 ) . Hydrated mercury and cadmium perchlorates can be dehydrated with Cl 2 O 6 , affording anhydrous compounds. [ 6 ]
In some cases, chlorine trioxide serves both as an oxidant and a dehydrating agent:
Silver perchlorate , which has some solubility in noncoordinating solvents, reacts with some metal chlorides to give the corresponding perchlorate complex. [ 4 ]
Anhydrous perchlorate complexes are susceptible to hydrolysis:
Upon heating, perchlorate complexes yield oxides, evolving chlorine oxides in the process. For example, thermolysis of titanium perchlorate gives TiO 2 , ClO 2 , and O 2 . The titanyl species TiO(ClO 4 ) 2 is an intermediate in this decomposition. [ 7 ]
Perchlorate complexes and the reagents used to prepare them are often dangerously explosive, both intrinsically and especially in contact with organic compounds. [ 6 ]
Transition metal phosphate complexes are coordination complexes with one or more phosphate ligands . Phosphate binds to metals through one, two, three, or all four oxygen atoms. The bidentate coordination mode is common. The second and third pK a 's of phosphoric acid , pK a2 and pK a3 , are 7.2 and 12.37, respectively. It follows that HPO 2− 4 and PO 3− 4 are sufficiently basic to serve as ligands. The examples below confirm this expectation. The behavior of metal phosphate complexes is related to the mechanisms of metal-catalyzed reactions of phosphate esters and pyrophosphates. [ 1 ]
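The pK a values quoted above determine how phosphate is distributed among its protonation states. A sketch using the Henderson–Hasselbalch relation and the pK a2 /pK a3 values from the text (H 3 PO 4 is neglected, since pK a1 lies well below neutral pH; the function name is illustrative):

```python
# Speciation of phosphate among H2PO4-, HPO4 2-, and PO4 3-, using
# pKa2 = 7.2 and pKa3 = 12.37 as quoted in the text.
def phosphate_fractions(ph: float, pka2: float = 7.2, pka3: float = 12.37):
    r2 = 10 ** (ph - pka2)   # [HPO4 2-]/[H2PO4 -]
    r3 = 10 ** (ph - pka3)   # [PO4 3-]/[HPO4 2-]
    h2po4, hpo4, po4 = 1.0, r2, r2 * r3
    total = h2po4 + hpo4 + po4
    return h2po4 / total, hpo4 / total, po4 / total

f_h2po4, f_hpo4, f_po4 = phosphate_fractions(7.4)
# Near neutral pH, H2PO4- and HPO4 2- dominate; fully deprotonated PO4 3- is tiny,
# consistent with HPO4 2- being the more available basic ligand under mild conditions.
print(f"{f_h2po4:.3f} {f_hpo4:.3f} {f_po4:.2e}")
```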
Aside from molecular metal phosphate complexes, the topic of this article, many or most transition metal phosphates are nonmolecular, being coordination polymers or dense ternary or quaternary phases. Iron(III) phosphate , contemplated as a cathode material for batteries , is one example. Vanadyl phosphate ( VOPO 4 (H 2 O) ) is a commercial catalyst for oxidation reactions. Many metal phosphates occur as minerals.
Phosphates exist in many condensed oligomeric forms. Many of these derivatives function as ligands for metal ions. Pyrophosphate ( P 2 O 4− 7 ) [ 8 ] and trimetaphosphate ( [P 3 O 9 ] 3− ) have been particularly studied. They typically function as bi- and tridentate ligands. | https://en.wikipedia.org/wiki/Transition_metal_phosphate_complex |
In chemistry , a transition metal pincer complex is a type of coordination complex with a pincer ligand. Pincer ligands are chelating agents that bind tightly to three adjacent coplanar sites in a meridional configuration. [ 1 ] [ 2 ] The inflexibility of the pincer-metal interaction confers high thermal stability to the resulting complexes. This stability is in part ascribed to the constrained geometry of the pincer, which inhibits cyclometallation of the organic substituents on the donor sites at each end. In the absence of this effect, cyclometallation is often a significant deactivation process for complexes, in particular limiting their ability to effect C-H bond activation . The organic substituents also define a hydrophobic pocket around the reactive coordination site. Stoichiometric and catalytic applications of pincer complexes have been studied at an accelerating pace since the mid-1970s. Most pincer ligands contain phosphines . [ 3 ] Reactions of metal-pincer complexes are localized at three sites perpendicular to the plane of the pincer ligand, although in some cases one arm is hemi-labile and an additional coordination site is generated transiently. Early examples of pincer ligands (not called such originally) were anionic with a carbanion as the central donor site and flanking phosphine donors; these compounds are referred to as PCP pincers.
Although the most common class of pincer ligands features PCP donor sets, variations have been developed where the phosphines are replaced by thioethers and tertiary amines. Many pincer ligands also feature nitrogenous donors at the central coordinating group position (see figure), such as pyridines . [ 4 ]
An easily prepared pincer ligand is POCOP . Many tridentate ligand types occupy three contiguous, coplanar coordination sites. The most famous such ligand is terpyridine (“terpy”). Terpy and its relatives lack the steric bulk of the two terminal donor sites found in traditional pincer ligands.
Metal pincer complexes are often prepared through C-H bond activation . [ 5 ] [ 6 ]
Ni(II) N,N,N pincer complexes are active in Kumada , Sonogashira , and Suzuki-Miyaura coupling reactions with unactivated alkyl halides. [ 7 ] [ 8 ]
The pincer ligand is most often an anionic, two-electron donor to the metal centre. It consists of a rigid, planar backbone, usually built on an aryl framework, with two neutral, two-electron donor groups at the meta-positions. The general formula for pincer ligands is 2,6-(ER 2 ) 2 C 6 H 3 – abbreviated ECE – where E is the two-electron donor and C is the ipso-carbon of the aromatic backbone (e.g. PCP – two phosphine donors). [ 9 ] The firm tridentate coordination mode gives the metal complexes high thermal stability as well as air stability. [ 5 ] It also means that fewer coordination sites are available for reactivity, which often limits the formation of undesirable byproducts via ligand exchange, since that process is suppressed.
There are various types of pincer ligands that are used in transition metal catalysis . Often, they have the same two-electron donor flanking the metal centre, but this is not a requirement.
The most common pincer ligand designs are PCP, NCN, PCN, SCS, and PNO. Other elements that have been employed at different positions in the ligand are boron , arsenic , silicon , and even selenium .
By altering the properties of the pincer ligands, it is possible to significantly alter the chemistry at the metal centre. Changing the hardness/softness of the donor, using electron-withdrawing groups (EWGs) in the backbone, and altering the steric constraints of the ligands are all methods used to tune the reactivity at the metal centre.
The synthesis of the ligands often involves the reaction of 1,3-dibromoethylbenzene with a secondary phosphine, followed by deprotonation of the quaternary phosphonium intermediates to generate the ligand. [ 10 ]
To generate the metal complex, two common routes are employed. One is simple oxidative addition of the ipso C-X bond (X = Br, I) to a metal centre, often an M(0) species (M = Pd, Mo, Fe, Ru, Ni, Pt), though metal complexes in higher oxidation states can also be used (e.g. [Rh(COD)Cl] 2 ). [ 11 ] [ 12 ]
The other significant method of metal introduction is through C-H bond activation . [ 5 ] The major difference is that the metal used in this method is already in a higher oxidation state (e.g. PdCl 2 – a Pd(II) species). However, these reactions have been found to proceed much more efficiently by employing metal complexes with weakly bound ligands (e.g. Pd(BF 4 ) 2 (CH 3 CN) 2 or Pd(OTf) 2 (CH 3 CN) 2 where OTf = CF 3 SO 3 − ). [ 6 ]
The potential value of pincer ligands in catalysis has been investigated, although no process has been commercialized. Aspirational applications are motivated by the high thermal stability and rigidity. Disadvantages include the cost of the ligands.
Pincer complexes have been shown to catalyse Suzuki-Miyaura coupling reactions, a versatile carbon-carbon bond forming reaction.
Typical Suzuki couplings employ Pd(0) catalysts with monodentate tertiary phosphine ligands (e.g. Pd(PPh 3 ) 4 ). The method is very selective for coupling aryl substituents together, but requires elevated temperatures. [ 13 ]
Using PCP pincer-palladium catalysts, aryl-aryl couplings can be achieved with turnover numbers (TONs) upwards of 900,000 and high yields. [ 5 ] Additionally, other groups have found that very low catalyst loadings can be achieved with asymmetric palladium pincer complexes. Catalyst loadings of 0.0001 mol % have been found to have TONs upwards of 190,000 and upper limit TONs can reach 1,100,000.
Sonogashira coupling has found widespread use in coupling aryl halides with alkynes. TONs upwards of 2,000,000 and low catalyst loadings of 0.005 mol % can be achieved with PNP-based catalysts. [ 14 ]
Alkanes undergo dehydrogenation at high temperatures. Typically this conversion is promoted heterogeneously, because homogeneous catalysts typically do not survive the required temperatures (~200 °C). The corresponding conversion can be catalyzed homogeneously by pincer catalysts, which are sufficiently thermally robust. Proof of concept was established in 1996 by Jensen and co-workers. They reported that iridium and rhodium pincer complexes catalyze the dehydrogenation of cyclooctane with a turnover frequency of 12 min −1 at 200 °C. They found that the dehydrogenation was performed at a rate two orders of magnitude greater than those previously reported. [ 15 ] The iridium pincer complex was also found to exhibit higher activity than the rhodium complex. This rate difference may be due to the availability of the Ir(V) oxidation state, which allows stronger Ir-C and Ir-H bonds. [ 15 ]
The homogeneously catalyzed process can be coupled to other reactions such as alkene metathesis. Such tandem reactions have not been demonstrated with heterogeneous catalysts. [ 16 ] [ 17 ]
The original work on PCP ligands arose from studies of the Pt(II) complexes derived from long-chain ditertiary phosphines, species of the type R 2 P(CH 2 ) n PR 2 where n >4 and R = tert-butyl . Platinum metalates one methylene group with release of HCl, giving species such as PtCl(R 2 P(CH 2 ) 2 CH(CH 2 ) 2 PR 2 ). [ 3 ]
Pincer complexes catalyze the dehydrogenation of alkanes. Early reports described the dehydrogenation of cyclooctane by an Ir pincer complex with a turnover frequency of 12 min −1 at 200 °C. The complexes are thermally stable at such temperatures for days. [ 15 ] | https://en.wikipedia.org/wiki/Transition_metal_pincer_complex |
Transition metal porphyrin complexes are a family of coordination complexes of the conjugate base of porphyrins . Iron porphyrin complexes occur widely in nature, which has stimulated extensive studies on related synthetic complexes. The metal-porphyrin interaction is a strong one such that metalloporphyrins are thermally robust. [ 2 ] [ 3 ] They are catalysts and exhibit rich optical properties, although these complexes remain mainly of academic interest.
Porphyrin complexes consist of a square planar MN 4 core. The periphery of the porphyrins, consisting of sp 2 -hybridized carbons, generally display only small deviations from planarity. [ 6 ] Additionally, the metal is often not centered in the N 4 plane. [ 7 ]
Large metals such as zirconium, tantalum, and molybdenum tend to bind two porphyrin ligands. Some [M(OEP)] 2 feature multiple bonds between the metals. [ 8 ]
Metal porphyrin complexes are almost always prepared by direct reaction of a metal halide with the free porphyrin, abbreviated here as H 2 P:

MCl 2 + H 2 P → M(P) + 2 HCl
Two pyrrole protons are lost. The porphyrin dianion is an L 2 X 2 ligand.
These syntheses require somewhat forcing conditions, [ 9 ] consistent with the tight fit of the metal in the N 4 2- "pocket." In nature, the insertion is mediated by chelatase enzymes. The insertion of a metal in synthetic porphyrins proceeds by the intermediacy of a "sitting atop complex" (SAC), whereby the entering metal interacts with only one or two of the pyrrolic nitrogen centers. [ 10 ] [ 11 ]
In contrast to natural porphyrins, synthetic porphyrin ligands (and their dianionic conjugate bases) are typically symmetrical. Two major varieties are well studied. The first carries substituents at the meso positions, the premier example being tetraphenylporphyrin . These ligands are easy to prepare in one-pot procedures, and a large number of aryl groups aside from phenyl can be deployed.
A second class of synthetic porphyrins have hydrogen at the meso positions. Octaethylporphyrin (H 2 OEP) is the subject of many such studies. It is more expensive than tetraphenylporphyrin.
Protoporphyrin IX , which occurs naturally, can be modified by removal of the vinyl groups and esterification of the carboxylic acid groups to give deuteroporphyrin IX dimethyl ester. [ 12 ]
Iron porphyrin complexes ("hemes") are the dominant metalloporphyrin complexes in nature. Consequently, synthetic iron porphyrin complexes are well investigated. Common derivatives are those of Fe(III) and Fe(II). Complexes of the type Fe(P)Cl are square-pyramidal and high spin with idealized C 4v symmetry. Base hydrolysis affords the "mu-oxo dimers" with the formula [Fe(P)] 2 O. These complexes have been widely investigated as oxidation catalysts. [ 13 ] Typical stoichiometries of ferrous porphyrins are Fe(P)L 2 where L is a neutral ligand such as pyridine and imidazole . Cobalt(II) porphyrins behave similarly to the ferrous derivatives. They bind O 2 to form dioxygen complexes .
Catalysts based on synthetic metalloporphyrins have been extensively investigated, although few or no applications exist. Due to their distinctive redox properties, Co(II)–porphyrin-based systems are radical initiators. [ 14 ] [ 15 ] Some complexes emulate the action of various heme enzymes such as cytochrome P450 and lignin peroxidase . [ 16 ] [ 17 ] Metalloporphyrins are also studied as catalysts for water splitting, with the purpose of generating molecular hydrogen and oxygen for fuel cells. [ 18 ]
In addition, porous organic polymers based on porphyrins, along with metal oxide nanoparticles, have been investigated as catalysts. [ 19 ]
Porphyrins are often used to construct structures in supramolecular chemistry . [ 22 ] These systems take advantage of the Lewis acidity of the metal, typically zinc. One example is a host–guest complex constructed from a macrocycle composed of four porphyrins, [ 21 ] in which a free-base porphyrin guest is bound at the center by coordination through its four pyridine substituents. | https://en.wikipedia.org/wiki/Transition_metal_porphyrin_complexes
Transition metal sulfate complexes or sulfato complexes are coordination complexes with one or more sulfate ligands . Being the conjugate base of a strong acid ( sulfuric acid ), sulfate is only weakly basic. It is more commonly encountered as a counterion in coordination chemistry than as a ligand.
Sulfate binds to metals through one, two, three, or all four oxygen atoms. [ 1 ]
Among the handful of complexes containing sulfate (or sulfato) ligands, most examples feature unidentate or chelating bidentate sulfate. Well-characterized examples are found with cobalt(III) ammines since these complexes are exchange-inert. Monodentate sulfate is found in [Co(tren)(NH 3 )(SO 4 )] + (tren = tris(2-aminoethyl)amine ). [ 2 ] Although [Co(en) 2 O 2 SO 2 ] + is unknown, [(en) 2 Co(OS(O) 2 O) 2 Co(en) 2 ] 2+ forms instead (en = ethylenediamine ). Bidentate sulfate is observed crystallographically in [Co(tetraamine)O 2 SO 2 ] + . [ 3 ]
Sulfate functions as a tridentate bridging ligand in [Re 3 Cl 9 (SO 4 )] 2− . [ 4 ]
All four oxygen atoms of sulfate bond to metals in some Dawson-type polyoxometalates , e.g. [S 2 Mo 18 O 62 ] 4- . [ 5 ]
Tutton's salts , with the formula M' 2 M(SO 4 ) 2 (H 2 O) 6 (M' = K + , etc.; M = Fe 2+ , etc.), illustrate the ability of water to outcompete sulfate as a ligand for M 2+ . Similarly, alums , such as chrome alum ([K(H 2 O) 6 ][Cr(H 2 O) 6 ][SO 4 ] 2 ), feature [Cr(H 2 O) 6 ] 3+ with noncoordinated sulfate. In a related vein, some sulfato complexes, confirmed by X-ray crystallography , convert to simple aquo complexes when dissolved in water. Copper(II) sulfate exemplifies this behavior: sulfate is bonded to copper in the crystal but dissociates upon dissolution.
Sulfato complexes are commonly produced by reaction of metal sulfates with other ligands.
In some cases, sulfato complexes are produced from sulfur dioxide : [ 7 ]
Sulfato complexes also arise by air-oxidation of metal sulfides. [ 8 ]
A dominant reaction of sulfato complexes is solvolysis, i.e. displacement of sulfate from the first coordination sphere by polar solvents such as water.
Sulfato complexes are susceptible to protonation of uncoordinated oxygen atoms. [ 9 ] | https://en.wikipedia.org/wiki/Transition_metal_sulfate_complex |
Transition metal thiolate complexes are metal complexes containing thiolate ligands . Thiolates are ligands that can be classified as soft Lewis bases. Therefore, thiolate ligands coordinate most strongly to metals that behave as soft Lewis acids as opposed to those that behave as hard Lewis acids. Most complexes contain other ligands in addition to thiolate, but many homoleptic complexes are known with only thiolate ligands. The amino acid cysteine has a thiol functional group; consequently, many proteins and enzymes feature cysteinate–metal cofactors. [ 2 ]
Thiolate is classified as an X ligand in the Covalent bond classification method . In the usual electron counting method , it is a one-electron ligand when terminal and a three-electron ligand when doubly bridging. From the electronic structure perspective, thiolate is a pi-donor ligand, akin to alkoxide . One consequence is that few polythiolate complexes are low spin. Another consequence is that electron-precise thiolate complexes tend to be rather nucleophilic. From the perspective of HSAB theory , thiolates are soft. Late metal thiolates are water-stable, but early metal thiolates tend to hydrolyze. For example,
Mo( t -BuS) 4 , a dark red diamagnetic solid, is sensitive to oxygen and moisture. [ 3 ]
Monothiolate ligands range from simple, nonbulky methanethiolate ( CH 3 S − ) to very bulky arylthiolates. Many dithiolate ligands are known, starting with the conjugate bases of 1,2-ethanedithiol and 1,3-propanedithiol . Unsaturated versions of 1,2-dithiolates usually take the form of dithiolene ligands , the parent member being (HC) 2 S 2 2− . Bulky arylthiolate ligands enable complexes with unusually low coordination numbers. [ 1 ]
Metal thiolate complexes are commonly prepared by reactions of metal complexes with thiols (RSH), thiolates (RS − ), and disulfides (R 2 S 2 ). The salt metathesis reaction route is common. In this method, an alkali metal thiolate is treated with a transition metal halide to produce an alkali metal halide and the metal thiolate complex. For example, lithium tert-butylthiolate reacts with MoCl 4 to give the tetrathiolate complex: [ 3 ]

MoCl 4 + 4 LiS- t -Bu → Mo( t -BuS) 4 + 4 LiCl
Nickelocene and ethanethiol react to give a dimeric thiolate, one cyclopentadienyl ligand serving as a base:

2 Ni(C 5 H 5 ) 2 + 2 C 2 H 5 SH → [(C 5 H 5 )Ni(SC 2 H 5 )] 2 + 2 C 5 H 6
Regarding their mechanism of formation from thiols, metal thiolate complexes can arise via deprotonation of thiol complexes. [ 4 ] [ 5 ]
Many thiolate complexes are prepared by redox reactions. Organic disulfides oxidize low-valence metals, as illustrated by the oxidation of titanocene dicarbonyl :

(C 5 H 5 ) 2 Ti(CO) 2 + RSSR → (C 5 H 5 ) 2 Ti(SR) 2 + 2 CO
Some metal centers are oxidized by thiols, the coproduct being hydrogen gas:
These reactions may proceed by the oxidative addition of the thiol to Fe(0).
Thiols and especially thiolate salts are reducing agents . Consequently, they induce redox reactions with certain transition metals. This phenomenon is illustrated by the synthesis of cuprous thiolates from cupric precursors:

2 Cu 2+ + 4 RS − → 2 CuSR + RSSR
Thiolate clusters of the type [Fe 4 S 4 (SR) 4 ] 2− occur in iron–sulfur proteins . Synthetic analogues can be prepared by combined redox and salt metathesis reactions: [ 6 ]
Thiolates are relatively basic ligands, being derived from conjugate acids with pK a 's of 6.5 ( thiophenol ) to 10.5 ( butanethiol ). Consequently, thiolate ligands often bridge pairs of metals. One example is Fe 2 (SCH 3 ) 2 (CO) 6 . Thiolate ligands, especially when nonbridging, are susceptible to attack by electrophiles including acids, alkylating agents , and oxidants.
Metal thiolate functionality, almost always provided by the cysteine residue, is pervasive in metalloenzymes . Iron-sulfur proteins , blue copper proteins , and the zinc-containing enzyme liver alcohol dehydrogenase feature thiolate ligands. The pi-donor property of thiolate ligands stabilizes Fe(IV) states in the enzyme cytochrome P450 . All molybdoproteins feature thiolates in the form of cysteinyl and/or molybdopterin . [ 8 ] Metallothioneins are cysteine-rich proteins that bind heavy metals. | https://en.wikipedia.org/wiki/Transition_metal_thiolate_complex
A transition metal thiosulfate complex is a coordination complex containing one or more thiosulfate ligands. Thiosulfate occurs in nature and is used industrially, so its interactions with metal ions are of some practical interest. [ 1 ]
Thiosulfate is a potent ligand for soft metal ions. A typical complex is [Pd(S 2 O 3 ) 2 ( ethylenediamine )] 2− , which features a pair of S-bonded thiosulfate ligands. Simple aquo and ammine complexes are also known. [ 2 ] Three binding modes are common: monodentate (κ 1 -), O , S -bidentate (κ 2 -), and bridging (μ-).
Linkage isomerism (O vs S) has been observed in [Co(NH 3 ) 5 (S 2 O 3 )] + . [ 3 ]
Typically, thiosulfate complexes are prepared from thiosulfate salts by displacement of aquo or chloro ligands. [ 2 ] In some cases, they arise by oxidation of polysulfido complexes, or by binding of sulfur trioxide to sulfido ligands. [ 4 ] [ 5 ]
Silver-thiosulfate complexes are produced by common photographic fixers . By dissolving silver halides , the fixer stabilises the image. Fixation entails the formation of 1:2 and 1:3 complexes (X = halide , typically Br − ): [ 6 ] [ 7 ]

AgX + 2 S 2 O 3 2− → [Ag(S 2 O 3 ) 2 ] 3− + X −

[Ag(S 2 O 3 ) 2 ] 3− + S 2 O 3 2− → [Ag(S 2 O 3 ) 3 ] 5−
Sodium thiosulfate and ammonium thiosulfate have been proposed as alternative lixiviants to cyanide for extraction of gold from ores [ 8 ] and printed circuit boards. [ 9 ] The complex [Au(S 2 O 3 ) 2 ] 3- is assumed to be the principal product in such extractions. Presently cyanide salts are used on a large scale for that purpose with obvious risks. [ 8 ] The advantages of this approach are that (i) thiosulfate is far less toxic than cyanide and (ii) that ore types that are refractory to gold cyanidation (e.g. carbonaceous or Carlin-type ores ) can be leached by thiosulfate. One problem with this alternative process is the high consumption of thiosulfate, which is more expensive than cyanide. Another issue is the lack of a suitable recovery technique since [Au(S 2 O 3 ) 2 ] 3− does not adsorb to activated carbon , which is the standard technique used in gold cyanidation to separate the gold complex from the ore slurry. [ 8 ]
In the IUPAC Red Book the following terms may be used for thiosulfate as a ligand: trioxido-1κ 3 O -disulfato( S — S )(2−); trioxidosulfidosulfato(2−); thiosulfato; sulfurothioato. In the naming for thiosulfate salts, the final "o" is replaced by "e". [ 10 ] Thus, sodium aurothiosulfate could be called trisodium di(thiosulfato)aurate(I). | https://en.wikipedia.org/wiki/Transition_metal_thiosulfate_complex |
Transition path sampling (TPS) is a rare-event sampling method used in computer simulations of rare events: physical or chemical transitions of a system from one stable state to another that occur too rarely to be observed on a computer timescale. Examples include protein folding , chemical reactions and nucleation . Standard simulation tools such as molecular dynamics can generate the dynamical trajectories of all the atoms in the system. However, because of the gap in accessible time-scales between simulation and reality, even present supercomputers might require years of simulations to show an event that occurs once per millisecond without some kind of acceleration.
TPS focuses on the most interesting part of the simulation, the transition . For example, an initially unfolded protein will vibrate for a long time in an open-string configuration before undergoing a transition and fold on itself. The aim of the method is to reproduce precisely those folding moments.
Consider in general a system with two stable states A and B. The system will spend a long time in those states and occasionally jump from one to the other. There are many ways in which the transition can take place. Once a probability is assigned to each of the many pathways, one can construct a Monte Carlo random walk in the path space of the transition trajectories, and thus generate the ensemble of all transition paths. All the relevant information can then be extracted from the ensemble, such as the reaction mechanism, the transition states, and the rate constants .
Given an initial path, TPS provides some algorithms to perturb that path and create a new one. As in all Monte Carlo walks, the new path will then be accepted or rejected in order to have the correct path probability. The procedure is iterated and the ensemble is gradually sampled.
A powerful and efficient algorithm is the so-called shooting move . [ 1 ] Consider the case of a classical many-body system described by coordinates r and momenta p . Molecular dynamics generates a path as a set of ( r t , p t ) at discrete times t in [0, T ], where T is the length of the path. For a transition from A to B, ( r 0 , p 0 ) is in A, and ( r T , p T ) is in B. One of the path times is chosen at random, and the momenta p are modified slightly into p + δp , where δp is a random perturbation consistent with system constraints, e.g. conservation of energy and of linear and angular momentum. A new trajectory is then simulated from this point, both backward and forward in time, until one of the states is reached. Because the shooting point lies in the transition region, this will not take long. If the new path still connects A to B it is accepted; otherwise it is rejected and the procedure starts again.
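The shooting move can be sketched for a toy one-dimensional double-well system. The potential, basin definitions, time step, and kick size below are all illustrative assumptions, not taken from the references; a production implementation would use the full many-body dynamics described above.

```python
import random

def force(x):
    """Force from the double-well potential V(x) = (x**2 - 1)**2."""
    return -4.0 * x * (x * x - 1.0)

def in_A(x):          # left basin (illustrative definition)
    return x < -0.9

def in_B(x):          # right basin (illustrative definition)
    return x > 0.9

def integrate(x, p, dt, sign):
    """Velocity-Verlet trajectory from (x, p), run forward (sign=+1) or
    backward (sign=-1) in time until it enters basin A or B."""
    h = sign * dt
    path = [(x, p)]
    for _ in range(1_000_000):
        p_half = p + 0.5 * h * force(x)
        x = x + h * p_half
        p = p_half + 0.5 * h * force(x)
        path.append((x, p))
        if in_A(x) or in_B(x):
            return path
    raise RuntimeError("trajectory failed to reach a stable state")

def shooting_move(old_path, dt=0.01, dp=0.05):
    """One TPS shooting move: pick a random time slice, kick its momentum,
    reshoot backward and forward, and accept only paths connecting A to B."""
    x, p = random.choice(old_path)
    p += random.gauss(0.0, dp)                  # small momentum perturbation
    backward = integrate(x, p, dt, sign=-1)
    forward = integrate(x, p, dt, sign=+1)
    new_path = backward[::-1] + forward[1:]     # reorder into forward time
    if in_A(new_path[0][0]) and in_B(new_path[-1][0]):
        return new_path, True                   # reactive path: accept
    return old_path, False                      # reject: keep the old path

# An initial reactive path, obtained by shooting from near the barrier top
initial = (integrate(0.0, 0.02, 0.01, sign=-1)[::-1]
           + integrate(0.0, 0.02, 0.01, sign=+1)[1:])
```

Repeated calls to `shooting_move` starting from `initial` then perform the Monte Carlo walk through path space: every retained path connects A to B, and the collection of accepted paths samples the transition path ensemble.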
In the Bennett–Chandler procedure, [ 2 ] [ 3 ] the rate constant k AB for the transition from A to B is derived from the correlation function

{\displaystyle C(t)={\frac {\langle h_{A}(0)\,h_{B}(t)\rangle }{\langle h_{A}\rangle }},}
where h X is the characteristic function of state X , and h X ( t ) is either 1 if the system at time t is in state X or 0 if not. The time-derivative C'( t ) starts at time 0 at the transition state theory (TST) value k AB TST and reaches a plateau k AB ≤ k AB TST for times of the order of the transition time. Hence once the function is known up to these times, the rate constant is also available.
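The plateau reading of C′(t) can be illustrated with an idealized two-state model obeying phenomenological rate equations; the rate constants below are invented for illustration. In this idealized model there is no molecular transient, so C′(t) starts at k AB and decays only on the slow timescale 1/(k AB + k BA).

```python
import math

def C(t, k_AB, k_BA):
    """Side-side correlation function <h_A(0) h_B(t)> / <h_A> for an
    ideal two-state system obeying phenomenological rate equations."""
    k = k_AB + k_BA
    return (k_AB / k) * (1.0 - math.exp(-k * t))

k_AB, k_BA = 1.0e-4, 2.0e-4   # illustrative rate constants
t, dt = 50.0, 1.0             # plateau regime: t << 1/(k_AB + k_BA)

# numerical derivative C'(t); in the plateau it approximates k_AB
slope = (C(t + dt, k_AB, k_BA) - C(t, k_AB, k_BA)) / dt
print(slope)                  # close to k_AB = 1e-4
```

In an actual simulation C(t) is only known over a finite window, which is why the existence of the plateau, rather than the long-time limit, is what makes the rate accessible.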
In the TPS framework C ( t ) can be rewritten as an average in the path ensemble
where the subscript AB denotes an average in the ensemble of paths that start in A and visit B at least once. Time t' is an arbitrary time in the plateau region of C ( t ). The factor C ( t ') at this specific time can be computed with a combination of path sampling and umbrella sampling .
The TPS rate constant calculation can be improved in a variation of the method called Transition interface sampling (TIS). [ 4 ] In this method the transition region is divided in subregions using interfaces. The first interface defines state A and the last state B. The interfaces are not physical interfaces but hypersurfaces in the phase space .
The rate constant can be viewed as a flux through these interfaces. The rate k AB is the flux of trajectories starting before the first interface and going through the last interface. Being a rare event, the flux is very small and practically impossible to compute with a direct simulation. However, using the other interfaces between the states, one can rewrite the flux in terms of transition probabilities between interfaces
{\displaystyle k_{AB}=\Phi _{1,0}\prod _{i=1}^{n-1}P_{A}(i+1|i)} ,
where P A ( i + 1| i ) is the probability for trajectories, coming from state A and crossing interface i, to reach interface i + 1. Here interface 0 defines state A and interface n defines state B. The factor Φ 1,0 is the flux through the interface closest to A . By making this interface close enough, the quantity can be computed with a standard simulation, as the crossing event through this interface is not a rare event any more.
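Numerically, the factorization turns an unmeasurably small rate into a product of moderately small, separately computable quantities. The flux and crossing probabilities below are made-up illustrative values, not results for any real system:

```python
# Illustrative values only: flux through the interface nearest A, and the
# history-dependent crossing probabilities P_A(i+1 | i) for each interface.
flux_first = 2.0                        # Phi_{1,0}, crossings per unit time
crossing_probs = [0.3, 0.1, 0.05, 0.02]

k_AB = flux_first
for p in crossing_probs:
    k_AB *= p
print(k_AB)   # ~ 6e-05: far too small to estimate by brute-force simulation
```

Each factor is of order 0.01–0.3 and can be converged with a modest number of sampled paths, whereas estimating the product directly would require observing the full rare event.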
Remarkably, in the formula above there is no Markov assumption of independent transition probabilities. The quantities P A ( i + 1|i) carry a subscript A to indicate that the probabilities are all dependent on the history of the path, all the way from when it left A . These probabilities can be computed with a path sampling simulation using the TPS shooting move. A path crossing interface i is perturbed and a new path is shot . If the new path still starts from A and crosses interface i , it is accepted. The probability P A ( i + 1| i ) follows from the ratio of the number of paths that reach interface i + 1 to the total number of paths in the ensemble.
Theoretical considerations show that TIS computations are at least twice as fast as TPS, and computer experiments have shown that the TIS rate constant can converge up to 10 times faster. One reason is that TIS uses paths of adjustable length, which are on average shorter than TPS paths. Also, TPS relies on the correlation function C ( t ), computed by summation of positive and negative terms due to recrossings. TIS instead computes the rate as an effective positive flux: the quantity k AB is directly computed as an average of only positive terms contributing to the interface transition probabilities.
TPS/TIS as normally implemented can be acceptable for non-equilibrium calculations provided that the interfacial fluxes are time-independent ( stationary ). To treat non-stationary systems in which there is time dependence in the dynamics, due either to variation of an external parameter or to evolution of the system itself, then other rare-event methods may be needed, such as stochastic-process rare-event sampling . [ 5 ]
| https://en.wikipedia.org/wiki/Transition_path_sampling
Transition radiation ( TR ) is a form of electromagnetic radiation emitted when a charged particle passes through inhomogeneous media, such as a boundary between two different media. This is in contrast to Cherenkov radiation , which occurs when a charged particle passes through a homogeneous dielectric medium at a speed greater than the phase velocity of electromagnetic waves in that medium.
Transition radiation was demonstrated theoretically by Ginzburg and Frank in 1945. [ 1 ] They showed the existence of Transition radiation when a charged particle perpendicularly passed through a boundary between two different homogeneous media. The frequency of radiation emitted in the backwards direction relative to the particle was mainly in the range of visible light . The intensity of radiation was logarithmically proportional to the Lorentz factor of the particle. After the first observation of the transition radiation in the optical region, [ 2 ] many early studies indicated that the application of the optical transition radiation for the detection and identification of individual particles seemed to be severely limited due to the inherent low intensity of the radiation.
Interest in transition radiation was renewed when Garibian showed that the radiation should also appear in the x-ray region for ultrarelativistic particles. His theory predicted some remarkable features for transition radiation in the x-ray region. [ 3 ] In 1959 Garibian showed theoretically that energy losses of an ultrarelativistic particle, when emitting TR while passing the boundary between media and vacuum , were directly proportional to the Lorentz factor of the particle. [ 4 ] Theoretical discovery of x-ray transition radiation, which was directly proportional to the Lorentz factor, made possible further use of TR in high-energy physics . [ 5 ]
Thus, from 1959 intensive theoretical and experimental research of TR, and x-ray TR in particular began. [ 6 ] [ 7 ]
Transition radiation in the x-ray region ( TR ) is produced by relativistic charged particles when they cross the interface of two media of different dielectric constants . The emitted radiation is the homogeneous difference between the two inhomogeneous solutions of Maxwell's equations of the electric and magnetic fields of the moving particle in each medium separately. In other words, since the electric field of the particle is different in each medium, the particle has to "shake off" the difference when it crosses the boundary. The total energy loss of a charged particle on the transition depends on its Lorentz factor γ = E / mc 2 and is mostly directed forward, peaking at an angle of the order of 1/ γ relative to the particle's path. The intensity of the emitted radiation is roughly proportional to the particle's energy E .
Optical transition radiation is emitted both in the forward direction and reflected by the interface surface. If the foil is oriented at 45 degrees with respect to the particle beam , the shape of the beam can be viewed at an angle of 90 degrees. More elaborate analysis of the emitted visible radiation may allow for the determination of γ and emittance .
In the approximation of relativistic motion ( {\displaystyle \gamma \gg 1} ), small angles ( {\displaystyle \theta \ll 1} ) and high frequencies ( {\displaystyle \omega \gg \omega _{p}} ), the energy spectrum emitted per interface can be expressed as: [ 8 ]

{\displaystyle {\frac {\partial ^{2}W}{\partial \omega \,\partial \theta }}\approx {\frac {2z^{2}e^{2}}{\pi c}}\,\theta ^{3}\left({\frac {1}{\gamma ^{-2}+\theta ^{2}+\omega _{p}^{2}/\omega ^{2}}}-{\frac {1}{\gamma ^{-2}+\theta ^{2}}}\right)^{2}}
Here {\displaystyle z} is the atomic charge, {\displaystyle e} is the charge of an electron, {\displaystyle c} is the speed of light, {\displaystyle \gamma } is the Lorentz factor , and {\displaystyle \omega _{p}} is the plasma frequency . This expression diverges at low frequencies, where the approximations fail. The total energy emitted per interface is:

{\displaystyle W={\frac {z^{2}e^{2}\gamma \omega _{p}}{3c}}}
The characteristics of this electromagnetic radiation make it suitable for particle discrimination, particularly of electrons and hadrons in the momentum range between 1 GeV/c and 100 GeV/c .
The transition radiation photons produced by electrons have wavelengths in the x-ray range, with energies typically in the range from 5 to 15 keV . However, the number of produced photons per interface crossing is very small: for particles with γ = 2×10 3 , about 0.8 x-ray photons are detected. Usually several layers of alternating materials or composites are used to collect enough transition radiation photons for an adequate measurement—for example, one layer of inert material followed by one layer of detector (e.g. microstrip gas chamber), and so on.
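These numbers can be checked with a back-of-the-envelope estimate combining the Ginzburg–Frank total radiated energy per interface, W = αz²γℏω p /3, with an assumed plasma energy ℏω p ≈ 20 eV (typical of plastic foils) and an assumed foil count; the material parameters below are illustrative assumptions, not measured values:

```python
ALPHA = 1.0 / 137.036       # fine-structure constant
gamma = 2.0e3               # Lorentz factor of the particle
z = 1                       # charge number (electron)
plasma_energy_eV = 20.0     # assumed hbar*omega_p of the foil material

# Ginzburg-Frank total transition-radiation energy per interface crossing
W_eV = ALPHA * z**2 * gamma * plasma_energy_eV / 3.0   # ~ 97 eV

# Assuming detected photons of ~10 keV, one interface yields only ~0.01
# photons, which is why radiators stack many foils (two interfaces each).
photons_per_interface = W_eV / 10.0e3
n_foils = 100                                          # assumed stack size
total_photons = photons_per_interface * 2 * n_foils
print(W_eV, total_photons)
```

The estimate gives roughly two photons produced per particle for this stack; absorption in the foils and detection efficiency reduce the detected yield, broadly consistent with the ~0.8 photons quoted above.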
By placing interfaces (foils) of very precise thickness and foil separation, coherence effects will modify the transition radiation's spectral and angular characteristics. This allows a much higher number of photons to be obtained in a smaller angular "volume". Applications of this x-ray source are limited by the fact that the radiation is emitted in a cone, with a minimum intensity at the center. X-ray focusing devices (crystals/mirrors) are not easy to build for such radiation patterns.
A special type of transition radiation is diffusive radiation. It is emitted provided that a charged particle crosses a medium with randomly inhomogeneous dielectric permittivity. [ 9 ] [ 10 ] [ 11 ]
9. ^ S. R. Atayan and Zh. S. Gevorkian, "Pseudophoton diffusion and radiation of a charged particle in a randomly inhomogeneous medium", Sov. Phys. JETP 71 (5), 862 (1990).
10. ^ Zh. S. Gevorkian, "Radiation of a relativistic charged particle in a system with one-dimensional randomness", Phys. Rev. E 57 , 2338 (1998).
11. ^ Zh. S. Gevorkian, C. P. Chen and Chin-Kun Hu, "New mechanism of X-ray radiation from a relativistic charged particle in a dielectric random medium", Phys. Rev. Lett. 86 , 3324 (2001).
In chemistry , the transition state of a chemical reaction is a particular configuration along the reaction coordinate . It is defined as the state corresponding to the highest potential energy along this reaction coordinate. [ 1 ] It is often marked with the double dagger (‡) symbol.
As an example, the transition state shown below occurs during the S N 2 reaction of bromoethane with a hydroxide anion:
The activated complex of a reaction can refer to either the transition state or to other states along the reaction coordinate between reactants and products , especially those close to the transition state. [ 3 ]
According to the transition state theory , once the reactants have passed through the transition state configuration, they always continue to form products. [ 3 ]
The concept of a transition state has been important in many theories of the rates at which chemical reactions occur. This started with the transition state theory (also referred to as the activated complex theory), developed independently in 1935 by Eyring , Evans and Polanyi , and introduced basic concepts in chemical kinetics that are still used today. [ 4 ]
A collision between reactant molecules may or may not result in a successful reaction . The outcome depends on factors such as the relative kinetic energy , relative orientation and internal energy of the molecules. Even if the collision partners form an activated complex they are not bound to go on and form products , and instead the complex may fall apart back to the reactants. [ citation needed ]
Because the structure of the transition state is a first-order saddle point along a potential energy surface , the population of species in a reaction that are at the transition state is negligible. Being at a saddle point along the potential energy surface means that a force acts along the bonds of the molecule, so there is always a lower-energy structure that the transition state can decompose into. This is sometimes expressed by stating that the transition state has a fleeting existence , with species maintaining the transition state structure only for the time-scale of vibrations of chemical bonds (femtoseconds). However, suitably designed spectroscopic techniques can probe structures as close to the transition point as the timescale of the technique allows; femtochemical IR spectroscopy was developed for exactly this reason, and makes it possible to probe molecular structure extremely close to the transition point. Often, reactive intermediates lie along the reaction coordinate at energies not much lower than that of a transition state, making the two difficult to distinguish.
Transition state structures can be determined by searching for first-order saddle points on the potential energy surface (PES) of the chemical species of interest. [ 5 ] A first-order saddle point is a critical point of index one, that is, a position on the PES corresponding to a minimum in all directions except one. This is further described in the article geometry optimization .
The Hammond–Leffler postulate states that the structure of the transition state more closely resembles either the products or the starting material, depending on which is higher in enthalpy . A transition state that resembles the reactants more than the products is said to be early , while a transition state that resembles the products more than the reactants is said to be late . Thus, the Hammond–Leffler Postulate predicts a late transition state for an endothermic reaction and an early transition state for an exothermic reaction .
A dimensionless reaction coordinate that quantifies the lateness of a transition state can be used to test the validity of the Hammond–Leffler postulate for a particular reaction. [ 6 ]
The structure–correlation principle states that structural changes that occur along the reaction coordinate can reveal themselves in the ground state as deviations of bond distances and angles from normal values along the reaction coordinate . [ 7 ] According to this theory if one particular bond length on reaching the transition state increases then this bond is already longer in its ground state compared to a compound not sharing this transition state. One demonstration of this principle is found in the two bicyclic compounds depicted below. [ 8 ] The one on the left is a bicyclo[2.2.2]octene, which, at 200 °C, extrudes ethylene in a retro-Diels–Alder reaction .
Compared to the compound on the right (which, lacking an alkene group, is unable to give this reaction) the bridgehead carbon-carbon bond length is expected to be shorter if the theory holds, because on approaching the transition state this bond gains double bond character. For these two compounds the prediction holds up based on X-ray crystallography .
One way that enzymatic catalysis proceeds is by stabilizing the transition state through electrostatics . By lowering the energy of the transition state, it allows a greater population of the starting material to attain the energy needed to overcome the transition energy and proceed to product. | https://en.wikipedia.org/wiki/Transition_state |
Transition state analogs ( transition state analogues ) are chemical compounds with a chemical structure that resembles the transition state of a substrate molecule in an enzyme-catalyzed chemical reaction . Enzymes interact with a substrate by means of strain or distortions, moving the substrate towards the transition state. [ 1 ] Transition state analogs can be used as inhibitors in enzyme-catalyzed reactions by blocking the active site of the enzyme. Theory suggests that enzyme inhibitors that resemble the transition state structure bind more tightly to the enzyme than the actual substrate does. [ 2 ] Examples of drugs that are transition state analog inhibitors include flu medications such as the neuraminidase inhibitor oseltamivir and the HIV protease inhibitor saquinavir used in the treatment of AIDS.
The transition state is best described in terms of statistical mechanics: the bonds that are breaking and forming have an equal probability of moving from the transition state backwards to the reactants or forwards to the products. In enzyme-catalyzed reactions, the overall activation energy of the reaction is lowered when an enzyme stabilizes a high-energy transition state intermediate. Transition state analogs mimic this high-energy intermediate but do not undergo the catalyzed chemical reaction, and can therefore bind much more strongly to an enzyme than simple substrate or product analogs do.
To design a transition state analogue, the pivotal step is determining the transition state structure of the substrate on the specific enzyme of interest by an experimental method, for example the kinetic isotope effect . In addition, the transition state structure can be predicted with computational approaches, complementary to KIE measurements. These two methods are explained briefly below.
Kinetic isotope effect (KIE) is a measurement of the reaction rate of isotope -labeled reactants against the more common natural substrate. Kinetic isotope effect values are a ratio of the turnover number and include all steps of the reaction. [ 3 ] Intrinsic kinetic isotope values stem from the difference in the bond vibrational environment of an atom in the reactants at ground state to the environment of the atom's transition state. [ 3 ] Through the kinetic isotope effect much insight can be gained as to what the transition state looks like of an enzyme-catalyzed reaction and guide the development of transition state analogs.
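As a sketch of the underlying idea (not the full transition-state analysis described above), a common semiclassical estimate attributes a primary KIE entirely to the loss of one stretching mode's zero-point energy at the transition state, giving k_H/k_D ≈ exp(ΔZPE/k_BT). The wavenumbers used below are typical assumed values for C–H and C–D stretches, not data for a specific enzyme:

```python
import math

def semiclassical_kie(nu_h_cm, nu_d_cm, temp_k=298.0):
    """Estimate k_H/k_D assuming the isotope effect comes only from losing
    the zero-point energy of one stretching mode at the transition state.
    Wavenumbers in cm^-1; hc/k_B = 1.4388 cm*K."""
    hc_over_kb = 1.4388                  # cm*K
    dzpe_cm = 0.5 * (nu_h_cm - nu_d_cm)  # zero-point energy difference, cm^-1
    return math.exp(hc_over_kb * dzpe_cm / temp_k)

# Typical C-H stretch ~2900 cm^-1; C-D ~ 2900/sqrt(2) ~ 2050 cm^-1
print(round(semiclassical_kie(2900.0, 2050.0), 1))  # the classic "order of 7" primary KIE
```

Deviations of a measured KIE from this simple estimate (e.g. from tunneling or from a partially broken bond at the transition state) are exactly what carries the structural information exploited in transition state analysis.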
Computational approaches have been regarded as a useful tool to elucidate the mechanism of action of enzymes. [ 4 ] Molecular mechanics by itself cannot describe the electron transfer that is fundamental to organic reactions, but molecular dynamics simulations provide sufficient information about the flexibility of the protein during the catalytic reaction. The complementary method is the combined quantum mechanics/molecular mechanics ( QM/MM ) simulation. [ 5 ] With this approach, only the atoms responsible for the enzymatic reaction in the catalytic region are treated with quantum mechanics , while the rest of the atoms are treated with molecular mechanics . [ 6 ]
After determining the transition state structures using either KIE or computational simulations, the inhibitor can be designed according to the determined transition state structures or intermediates. The following three examples illustrate how inhibitors mimic the transition state structure by changing functional groups to correspond to the geometry and electrostatic distribution of the transition state structures.
Methylthioadenosine nucleosidases are enzymes that catalyse the hydrolytic deadenylation reaction of 5'-methylthioadenosine and S-adenosylhomocysteine. The enzyme is also regarded as an important target for antibacterial drug discovery because it is important in the metabolic system of bacteria and is produced only by bacteria. [ 7 ] Given the different distance between the nitrogen atom of adenine and the ribose anomeric carbon (see the diagram in this section), the transition state structure can be assigned to an early or late dissociation stage. Based on the finding of different transition state structures, Schramm and coworkers designed two transition state analogues mimicking the early and late dissociative transition states. The early and late transition state analogues showed binding affinities ( K d ) of 360 and 140 pM, respectively. [ 8 ]
Thermolysin is an enzyme produced by Bacillus thermoproteolyticus that catalyses the hydrolysis of peptides containing hydrophobic amino acids. [ 9 ] Therefore, it is also a target for antibacterial agents. The enzymatic reaction mechanism starts with the small peptide molecule displacing the zinc-bound water molecule towards Glu143 of thermolysin. The water molecule is then activated by both the zinc ion and the Glu143 residue and attacks the carbonyl carbon to form a tetrahedral transition state (see figure). Holden and coworkers then mimicked this tetrahedral transition state to design a series of phosphonamidate peptide analogues. Among the synthesized analogues, the R = L -Leu analogue possesses the most potent inhibitory activity ( K i = 9.1 nM). [ 10 ]
Arginase is a binuclear manganese metalloprotein that catalyses the hydrolysis of L- arginine to L- ornithine and urea . It is also regarded as a drug target for the treatment of asthma . [ 11 ] The hydrolysis of L-arginine proceeds via nucleophilic attack on the guanidino group by water, forming a tetrahedral intermediate. Studies have shown that a boronic acid moiety adopts a tetrahedral configuration and serves as an inhibitor. In addition, the sulfonamide functional group can also mimic the transition state structure. [ 12 ] Evidence for boronic acid mimics as transition state analogue inhibitors of human arginase I was provided by x-ray crystal structures. [ 13 ]
In chemistry , transition state theory ( TST ) explains the reaction rates of elementary chemical reactions . The theory assumes a special type of chemical equilibrium (quasi-equilibrium) between reactants and activated transition state complexes. [ 1 ]
TST is used primarily to understand qualitatively how chemical reactions take place. TST has been less successful in its original goal of calculating absolute reaction rate constants because the calculation of absolute reaction rates requires precise knowledge of potential energy surfaces , [ 2 ] but it has been successful in calculating the standard enthalpy of activation (Δ H ‡ , also written Δ ‡ H ɵ ), the standard entropy of activation (Δ S ‡ or Δ ‡ S ɵ ), and the standard Gibbs energy of activation (Δ G ‡ or Δ ‡ G ɵ ) for a particular reaction if its rate constant has been experimentally determined (the ‡ notation refers to the value of interest at the transition state ; Δ H ‡ is the difference between the enthalpy of the transition state and that of the reactants).
This theory was developed simultaneously in 1935 by Henry Eyring , then at Princeton University , and by Meredith Gwynne Evans and Michael Polanyi of the University of Manchester . [ 3 ] [ 4 ] [ 5 ] TST is also referred to as "activated-complex theory", "absolute-rate theory", and "theory of absolute reaction rates". [ 6 ]
Before the development of TST, the Arrhenius rate law was widely used to determine energies for the reaction barrier. The Arrhenius equation derives from empirical observations and ignores any mechanistic considerations, such as whether one or more reactive intermediates are involved in the conversion of a reactant to a product. [ 7 ] Therefore, further development was necessary to understand the two parameters associated with this law, the pre-exponential factor ( A ) and the activation energy ( E a ). TST, which led to the Eyring equation , successfully addresses these two issues; however, 46 years elapsed between the publication of the Arrhenius rate law , in 1889, and the Eyring equation derived from TST, in 1935. During that period, many scientists and researchers contributed significantly to the development of the theory.
The basic ideas behind transition state theory are as follows:
In the development of TST, three approaches were taken as summarized below.
In 1884, Jacobus van 't Hoff proposed the Van 't Hoff equation describing the temperature dependence of the equilibrium constant for a reversible reaction:

d(ln K )/d T = Δ U / RT ²
where Δ U is the change in internal energy, K is the equilibrium constant of the reaction, R is the universal gas constant , and T is thermodynamic temperature . Based on experimental work, in 1889, Svante Arrhenius proposed a similar expression for the rate constant of a reaction, given as follows:

d(ln k )/d T = E a / RT ²
Integration of this expression leads to the Arrhenius equation

k = A exp(− E a / RT )
where k is the rate constant. A was referred to as the frequency factor (now called the pre-exponential coefficient), and E a is regarded as the activation energy. By the early 20th century many had accepted the Arrhenius equation, but the physical interpretation of A and E a remained vague. This led many researchers in chemical kinetics to offer different theories of how chemical reactions occurred in an attempt to relate A and E a to the molecular dynamics directly responsible for chemical reactions. [ citation needed ]
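Because ln k is linear in 1/ T , the two Arrhenius parameters can be extracted from rate constants measured at two temperatures. The sketch below illustrates this; the rate constants are invented illustrative numbers, not data for a real reaction:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_fit(t1, k1, t2, k2):
    """Return (Ea in J/mol, A) from two (T, k) measurements,
    using ln(k2/k1) = (Ea/R)*(1/t1 - 1/t2) and A = k1*exp(Ea/(R*t1))."""
    ea = R * math.log(k2 / k1) / (1.0 / t1 - 1.0 / t2)
    a = k1 * math.exp(ea / (R * t1))
    return ea, a

ea, a = arrhenius_fit(300.0, 1.0e-4, 320.0, 8.0e-4)
print(f"Ea = {ea/1000:.1f} kJ/mol, A = {a:.2e} s^-1")
```

Note that this treatment yields only the two empirical parameters; as the text explains, it says nothing about the mechanism or the number of intermediates involved, which is what motivated TST.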
In 1910, French chemist René Marcelin introduced the concept of standard Gibbs energy of activation. His relation can be written as

k ∝ exp(−Δ G ‡ / RT )
At about the same time as Marcelin was working on his formulation, Dutch chemists Philip Abraham Kohnstamm, Frans Eppo Cornelis Scheffer, and Wiedold Frans Brandsma introduced standard entropy of activation and the standard enthalpy of activation. They proposed the following rate constant equation
However, the nature of the constant was still unclear.
In the early 1900s, Max Trautz and William Lewis studied the rate of the reaction using collision theory , based on the kinetic theory of gases . Collision theory treats reacting molecules as hard spheres colliding with one another; this theory neglects entropy changes, since it assumes that collisions between molecules are completely elastic.
Lewis applied his treatment to the reaction 2HI → H 2 + I 2 and obtained good agreement with experimental result.
However, later when the same treatment was applied to other reactions, there were large discrepancies between theoretical and experimental results.
Statistical mechanics played a significant role in the development of TST. However, the application of statistical mechanics to TST developed very slowly, considering that in the mid-19th century James Clerk Maxwell , Ludwig Boltzmann , and Leopold Pfaundler had already published several papers discussing reaction equilibrium and rates in terms of molecular motions and the statistical distribution of molecular speeds.
It was not until 1912 that the French chemist A. Berthoud used the Maxwell–Boltzmann distribution law to obtain an expression for the rate constant.
where a and b are constants related to energy terms.
Two years later, René Marcelin made an essential contribution by treating the progress of a chemical reaction as a motion of a point in phase space . He then applied Gibbs' statistical-mechanical procedures and obtained an expression similar to the one he had obtained earlier from thermodynamic consideration.
In 1915, another important contribution came from British physicist James Rice. Based on his statistical analysis, he concluded that the rate constant is proportional to the "critical increment". His ideas were further developed by Richard Chace Tolman . In 1919, Austrian physicist Karl Ferdinand Herzfeld applied statistical mechanics to the equilibrium constant and kinetic theory to the rate constant of the reverse reaction, k −1 , for the reversible dissociation of a diatomic molecule. [ 8 ]
He obtained the following equation for the rate constant of the forward reaction [ 9 ]

k 1 = ( k B T / h )(1 − exp(− hν / k B T )) exp(− E ⊖ / RT )
where E ⊖ {\displaystyle \textstyle E^{\ominus }} is the dissociation energy at absolute zero, k B is the Boltzmann constant , h is the Planck constant , T is thermodynamic temperature, and ν {\displaystyle \nu } is the vibrational frequency of the bond.
This expression is very important since it is the first time that the factor k B T / h , which is a critical component of TST, has appeared in a rate equation.
In 1920, the American chemist Richard Chace Tolman further developed Rice's idea of the critical increment. He concluded that critical increment (now referred to as activation energy) of a reaction is equal to the average energy of all molecules undergoing reaction minus the average energy of all reactant molecules.
The concept of potential energy surface was very important in the development of TST. The foundation of this concept was laid by René Marcelin in 1913. He theorized that the progress of a chemical reaction could be described as a point in a potential energy surface with coordinates in atomic momenta and distances.
In 1931, Henry Eyring and Michael Polanyi constructed a potential energy surface for the H + H 2 reaction. This surface is a three-dimensional diagram based on quantum-mechanical principles as well as experimental data on vibrational frequencies and energies of dissociation.
A year after the Eyring and Polanyi construction, Hans Pelzer and Eugene Wigner made an important contribution by following the progress of a reaction on a potential energy surface. The importance of this work was that it was the first time that the concept of col or saddle point in the potential energy surface was discussed. They concluded that the rate of a reaction is determined by the motion of the system through that col.
By modeling reactions as Langevin motion along a one-dimensional reaction coordinate, Hendrik Kramers was able to derive a relationship between the shape of the potential energy surface along the reaction coordinate and the transition rates of the system. The formulation relies on approximating the potential energy landscape as a series of harmonic wells. In a two-state system, there will be three wells: a well for state A, an upside-down well representing the potential energy barrier, and a well for state B.
In the overdamped (or "diffusive") regime, the transition rate from state A to B is related to the resonant frequency of the wells via

k A→B = ( ω a ω H / 2π γ ) exp(−( E H − E A )/ k B T )
where ω a {\displaystyle \omega _{a}} is the frequency of the well for state A, ω H {\displaystyle \omega _{H}} is the frequency of the barrier well, γ {\displaystyle \gamma } is the viscous damping, E H {\displaystyle E_{H}} is the energy of the top of the barrier, E A {\displaystyle E_{A}} is the energy of bottom of the well for state A, and k B T {\displaystyle k_{\text{B}}T} is the temperature of the system times the Boltzmann constant. [ 10 ]
For general damping (overdamped or underdamped), there is a similar formula. [ 11 ]
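The overdamped case can be sketched in code. The formula used below, k = (ω_a ω_H / 2πγ)·exp(−(E_H − E_A)/k_BT), is the standard high-friction Kramers result described above; the frequencies, damping, and barrier height are arbitrary illustrative values in reduced units:

```python
import math

def kramers_overdamped(omega_a, omega_h, gamma, barrier, kbt):
    """Escape rate A -> B in the diffusive (high-friction) Kramers regime.
    `barrier` is E_H - E_A, in the same energy units as kbt."""
    return (omega_a * omega_h) / (2 * math.pi * gamma) * math.exp(-barrier / kbt)

# Arbitrary reduced-unit example: unit well/barrier frequencies, strong damping,
# a barrier of 5 k_B T.
rate = kramers_overdamped(omega_a=1.0, omega_h=1.0, gamma=10.0, barrier=5.0, kbt=1.0)
print(f"{rate:.3e}")
```

Two features of the formula are worth noting: the rate falls off as 1/γ (stronger friction slows barrier crossing), and it retains the Arrhenius-like exponential dependence on the barrier height.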
One of the most important features introduced by Eyring , Polanyi and Evans was the notion that activated complexes are in quasi-equilibrium with the reactants. The rate is then directly proportional to the concentration of these complexes multiplied by the frequency ( k B T / h ) with which they are converted into products. Below, a non-rigorous plausibility argument is given for the functional form of the Eyring equation. However, the key statistical mechanical factor k B T / h will not be justified, and the argument presented below does not constitute a true "derivation" of the Eyring equation. [ 12 ]
Quasi-equilibrium is different from classical chemical equilibrium, but can be described using a similar thermodynamic treatment. [ 6 ] [ 13 ] Consider the reaction below

A + B ⇌ [AB] ‡ → P
where complete equilibrium is achieved between all the species in the system including activated complexes, [AB] ‡ . Using statistical mechanics, concentration of [AB] ‡ can be calculated in terms of the concentration of A and B.
TST assumes that even when the reactants and products are not in equilibrium with each other, the activated complexes are in quasi-equilibrium with the reactants. As illustrated in Figure 2, at any instant of time, there are a few activated complexes, and some were reactant molecules in the immediate past, which are designated [AB l ] ‡ (since they are moving from left to right). The remainder of them were product molecules in the immediate past ([AB r ] ‡ ).
In TST, it is assumed that the flux of activated complexes in the two directions are independent of each other. That is, if all the product molecules were suddenly removed from the reaction system, the flow of [AB r ] ‡ stops, but there is still a flow from left to right. Hence, to be technically correct, the reactants are in equilibrium only with [AB l ] ‡ , the activated complexes that were reactants in the immediate past.
The activated complexes do not follow a Boltzmann distribution of energies, but an "equilibrium constant" can still be derived from the distribution they do follow. The equilibrium constant K ‡ for the quasi-equilibrium can be written as

K ‡ = [AB] ‡ / ([A] [B])
So, the chemical activity of the transition state AB ‡ is

[AB] ‡ = K ‡ [A] [B]
Therefore, the rate equation for the production of product is

d[P]/d t = k ‡ [AB] ‡ = k ‡ K ‡ [A] [B] = k [A] [B]
where the rate constant k is given by

k = k ‡ K ‡
Here, k ‡ is directly proportional to the frequency of the vibrational mode responsible for converting the activated complex to the product; the frequency of this vibrational mode is ν {\displaystyle \nu } . Not every vibration necessarily leads to the formation of product, so a proportionality constant κ {\displaystyle \kappa } , referred to as the transmission coefficient, is introduced to account for this effect. So k ‡ can be rewritten as
For the equilibrium constant K ‡ , statistical mechanics leads to a temperature dependent expression given as

K ‡ = ( k B T / hν ) K ‡ ′

where K ‡ ′ is the equilibrium constant with the partition function of the reactive vibrational mode factored out.
Combining the new expressions for k ‡ and K ‡ , a new rate constant expression can be written, which is given as

k = κ ( k B T / h ) K ‡ ′ = κ ( k B T / h ) exp(−Δ G ‡ / RT )
Since, by definition, Δ G ‡ = Δ H ‡ – T Δ S ‡ , the rate constant expression can be expanded, to give an alternative form of the Eyring equation:

k = κ ( k B T / h ) exp(Δ S ‡ / R ) exp(−Δ H ‡ / RT )
For correct dimensionality, the equation needs to have an extra factor of ( c ⊖ ) 1– m for reactions that are not unimolecular:

k = κ ( k B T / h ) ( c ⊖ ) 1– m exp(Δ S ‡ / R ) exp(−Δ H ‡ / RT )
where c ⊖ is the standard concentration 1 mol⋅L −1 and m is the molecularity. [ 14 ]
The rate constant expression from transition state theory can be used to calculate the Δ G ‡ , Δ H ‡ , Δ S ‡ , and even Δ V ‡ (the volume of activation) using experimental rate data. These so-called activation parameters give insight into the nature of a transition state , including energy content and degree of order, compared to the starting materials and has become a standard tool for elucidation of reaction mechanisms in physical organic chemistry . The free energy of activation, Δ G ‡ , is defined in transition state theory to be the energy such that Δ G ‡ = − R T ln K ‡ ′ {\displaystyle \Delta G^{\ddagger }=-RT\ln K^{\ddagger '}} holds. The parameters Δ H ‡ and Δ S ‡ can then be inferred by determining Δ G ‡ = Δ H ‡ – T Δ S ‡ at different temperatures.
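The procedure just described can be sketched numerically: inverting the Eyring equation (with the transmission coefficient κ taken as 1) gives ΔG‡ from a measured first-order rate constant, and measurements at two temperatures then yield ΔH‡ and ΔS‡ from ΔG‡ = ΔH‡ − TΔS‡. The rate constants below are illustrative, not data for a real reaction:

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314            # gas constant, J/(mol*K)

def dg_activation(k, temp):
    """Free energy of activation (J/mol) from k = (kB*T/h)*exp(-dG/RT), kappa = 1."""
    return -R * temp * math.log(k * H / (KB * temp))

def activation_params(t1, k1, t2, k2):
    """Infer dH and dS from dG = dH - T*dS evaluated at two temperatures."""
    g1, g2 = dg_activation(k1, t1), dg_activation(k2, t2)
    ds = (g1 - g2) / (t2 - t1)
    dh = g1 + t1 * ds
    return dh, ds

dh, ds = activation_params(298.0, 1.0e-4, 318.0, 1.0e-3)
print(f"dH = {dh/1000:.1f} kJ/mol, dS = {ds:.1f} J/(mol*K)")
```

In practice ΔH‡ and ΔS‡ are obtained from a linear Eyring plot of ln( k / T ) against 1/ T over many temperatures; the two-point version above is the minimal case of the same idea.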
Because the functional form of the Eyring and Arrhenius equations are similar, it is tempting to relate the activation parameters with the activation energy and pre-exponential factors of the Arrhenius treatment. However, the Arrhenius equation was derived from experimental data and models the macroscopic rate using only two parameters, irrespective of the number of transition states in a mechanism. In contrast, activation parameters can be found for every transition state of a multistep mechanism, at least in principle. Thus, although the enthalpy of activation, Δ H ‡ , is often equated with Arrhenius's activation energy E a , they are not equivalent. For a condensed-phase (e.g., solution-phase) or unimolecular gas-phase reaction step, E a = Δ H ‡ + RT . For other gas-phase reactions, E a = Δ H ‡ + (1 − Δ n ‡ ) RT , where Δ n ‡ is the change in the number of molecules on forming the transition state. [ 15 ] (Thus, for a bimolecular gas-phase process, E a = Δ H ‡ + 2 RT. )
The entropy of activation, Δ S ‡ , gives the extent to which the transition state (including any solvent molecules involved in or perturbed by the reaction) is more disordered compared to the starting materials. It offers a concrete interpretation of the pre-exponential factor A in the Arrhenius equation; for a unimolecular, single-step process, the rough equivalence A = ( k B T / h ) exp(1 + Δ S ‡ / R ) (or A = ( k B T / h ) exp(2 + Δ S ‡ / R ) for bimolecular gas-phase reactions) holds. For a unimolecular process, a negative value indicates a more ordered, rigid transition state than the ground state, while a positive value reflects a transition state with looser bonds and/or greater conformational freedom. It is important to note that, for reasons of dimensionality, reactions that are bimolecular or higher have Δ S ‡ values that depend on the standard state chosen (standard concentration, in particular). For most recent publications, 1 mol L −1 or 1 molar is chosen. Since this choice is a human construct, based on our definitions of units for molar quantity and volume, the magnitude and sign of Δ S ‡ for a single reaction is meaningless by itself; only comparisons of the value with that of a reference reaction of "known" (or assumed) mechanism, made at the same standard state, are valid. [ 16 ]
The volume of activation is found by taking the partial derivative of Δ G ‡ with respect to pressure (holding temperature constant): Δ V ‡ := ( ∂ Δ G ‡ / ∂ P ) T {\displaystyle \Delta V^{\ddagger }:=(\partial \Delta G^{\ddagger }/\partial P)_{T}} . It gives information regarding the size, and hence, degree of bonding at the transition state. An associative mechanism will likely have a negative volume of activation, while a dissociative mechanism will likely have a positive value.
Given the relationship between equilibrium constant and the forward and reverse rate constants, K = k 1 / k − 1 {\displaystyle K=k_{1}/k_{-1}} , the Eyring equation implies that
Another implication of TST is the Curtin–Hammett principle : the product ratio of a kinetically-controlled reaction from R to two products A and B will reflect the difference in the energies of the respective transition states leading to product, assuming there is a single transition state to each one:
(In the expression for ΔΔ G ‡ above, there is an extra Δ G ∘ = G S A ∘ − G S B ∘ {\displaystyle \Delta G^{\circ }=G_{\mathrm {S} _{\mathrm {A} }}^{\circ }-G_{\mathrm {S} _{\mathrm {B} }}^{\circ }} term if A and B are formed from two different species S A and S B in equilibrium.)
For a thermodynamically-controlled reaction , every difference of RT ln 10 ≈ (1.987 × 10 −3 kcal/mol K)(298 K)(2.303) ≈ 1.36 kcal/mol in the free energies of products A and B results in a factor of 10 in selectivity at room temperature (298 K), a principle known as the "1.36 rule":
Analogously, every 1.36 kcal/mol difference in the free energy of activation results in a factor of 10 in selectivity for a kinetically-controlled process at room temperature: [ 17 ]

[A]/[B] = 10^(ΔΔ G ‡ / 1.36), with ΔΔ G ‡ in kcal/mol at 298 K
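The "1.36 rule" can be checked numerically. A minimal sketch, using the Boltzmann factor between the two competing transition states:

```python
import math

R_KCAL = 1.987e-3  # gas constant, kcal/(mol*K)

def selectivity(ddg_kcal, temp=298.0):
    """Product ratio [A]/[B] for a kinetically controlled branching,
    given the difference in free energies of activation (kcal/mol)."""
    return math.exp(ddg_kcal / (R_KCAL * temp))

print(round(selectivity(1.36)))  # ~10:1 at room temperature
```

Doubling ΔΔ G ‡ to 2.72 kcal/mol raises the selectivity to roughly 100:1, illustrating the exponential leverage that small transition-state energy differences exert on product ratios.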
Using the Eyring equation, there is a straightforward relationship between Δ G ‡ , first-order rate constants, and reaction half-life at a given temperature. At 298 K, a reaction with Δ G ‡ = 23 kcal/mol has a rate constant of k ≈ 8.4 × 10 −5 s −1 and a half life of t 1/2 ≈ 2.3 hours, figures that are often rounded to k ~ 10 −4 s −1 and t 1/2 ~ 2 h. Thus, a free energy of activation of this magnitude corresponds to a typical reaction that proceeds to completion overnight at room temperature. For comparison, the cyclohexane chair flip has a Δ G ‡ of about 11 kcal/mol with k ~ 10 5 s −1 , making it a dynamic process that takes place rapidly (faster than the NMR timescale) at room temperature. At the other end of the scale, the cis/trans isomerization of 2-butene has a Δ G ‡ of about 60 kcal/mol, corresponding to k ~ 10 −31 s −1 at 298 K. This is a negligible rate: the half-life is 12 orders of magnitude longer than the age of the universe . [ 18 ]
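The figures quoted above follow directly from the Eyring equation with κ = 1, as this short check shows:

```python
import math

KB = 1.380649e-23     # Boltzmann constant, J/K
H = 6.62607015e-34    # Planck constant, J*s
R_KCAL = 1.987e-3     # gas constant, kcal/(mol*K)

def eyring_rate(dg_kcal, temp=298.0):
    """First-order rate constant (s^-1) from dG (kcal/mol), kappa = 1."""
    return (KB * temp / H) * math.exp(-dg_kcal / (R_KCAL * temp))

k = eyring_rate(23.0)                 # the "overnight reaction" barrier
t_half_h = math.log(2) / k / 3600     # first-order half-life, hours
print(f"k = {k:.1e} s^-1, t1/2 = {t_half_h:.1f} h")
```

Running the same function with ΔG‡ = 11 kcal/mol (the cyclohexane chair flip) or 60 kcal/mol (2-butene isomerization) reproduces the fast-exchange and geologically-slow extremes described in the text.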
In general, TST has provided researchers with a conceptual foundation for understanding how chemical reactions take place. Even though the theory is widely applicable, it does have limitations. For example, when applied to each elementary step of a multi-step reaction, the theory assumes that each intermediate is long-lived enough to reach a Boltzmann distribution of energies before continuing to the next step. When the intermediates are very short-lived, TST fails. [ 19 ] In such cases, the momentum of the reaction trajectory from the reactants to the intermediate can carry forward to affect product selectivity. An example of such a reaction is the ring closure of cyclopentane biradicals generated from the gas-phase thermal decomposition of 2,3-diazabicyclo[2.2.1]hept-2-ene. [ 20 ] [ 21 ]
Transition state theory is also based on the assumption that atomic nuclei behave according to classical mechanics . [ 22 ] It is assumed that unless atoms or molecules collide with enough energy to form the transition structure, then the reaction does not occur. However, according to quantum mechanics, for any barrier with a finite amount of energy, there is a possibility that particles can still tunnel across the barrier. With respect to chemical reactions this means that there is a chance that molecules will react, even if they do not collide with enough energy to overcome the energy barrier. [ 23 ] While this effect is negligible for reactions with large activation energies, it becomes an important phenomenon for reactions with relatively low energy barriers, since the tunneling probability increases with decreasing barrier height.
Transition state theory fails for some reactions at high temperature. The theory assumes the reaction system will pass over the lowest energy saddle point on the potential energy surface. While this description is consistent for reactions occurring at relatively low temperatures, at high temperatures, molecules populate higher energy vibrational modes; their motion becomes more complex and collisions may lead to transition states far away from the lowest energy saddle point. This deviation from transition state theory is observed even in the simple exchange reaction between diatomic hydrogen and a hydrogen radical. [ 24 ]
Given these limitations, several alternatives to transition state theory have been proposed. A brief discussion of these theories follows.
Any form of TST, such as microcanonical variational TST, canonical variational TST , and improved canonical variational TST, in which the transition state is not necessarily located at the saddle point, is referred to as generalized transition state theory.
A fundamental flaw of transition state theory is that it counts any crossing of the transition state as a reaction from reactants to products or vice versa. In reality, a molecule may cross this "dividing surface" and turn around, or cross multiple times and only truly react once. As such, unadjusted TST is said to provide an upper bound for the rate coefficients. To correct for this, variational transition state theory varies the location of the dividing surface that defines a successful reaction in order to minimize the rate for each fixed energy. [ 25 ] The rate expressions obtained in this microcanonical treatment can be integrated over the energy, taking into account the statistical distribution over energy states, so as to give the canonical, or thermal rates.
Canonical variational transition state theory is a development of transition state theory in which the position of the dividing surface is varied so as to minimize the rate constant at a given temperature.
Improved canonical variational transition state theory is a modification of canonical variational transition state theory in which, for energies below the threshold energy, the position of the dividing surface is taken to be that of the microcanonical threshold energy. This forces the contributions to rate constants to be zero if they are below the threshold energy. A compromise dividing surface is then chosen so as to minimize the contributions to the rate constant made by reactants having higher energies.
An expansion of TST to reactions in which two spin states are involved simultaneously is called nonadiabatic transition state theory (NA-TST).
Using vibrational perturbation theory, effects such as tunnelling and variational effects can be accounted for within the semiclassical transition state theory (SCTST) formalism.
Enzymes catalyze chemical reactions at rates that are astounding relative to uncatalyzed chemistry at the same reaction conditions. Each catalytic event requires a minimum of three or often more steps, all of which occur within the few milliseconds that characterize typical enzymatic reactions. According to transition state theory, the smallest fraction of the catalytic cycle is spent in the most important step, that of the transition state. The original proposals of absolute reaction rate theory for chemical reactions defined the transition state as a distinct species in the reaction coordinate that determined the absolute reaction rate. Soon thereafter, Linus Pauling proposed that the powerful catalytic action of enzymes could be explained by specific tight binding to the transition state species. [ 26 ] Because reaction rate is proportional to the fraction of the reactant in the transition state complex, the enzyme was proposed to increase the concentration of the reactive species.
This proposal was formalized by Wolfenden and coworkers at University of North Carolina at Chapel Hill , who hypothesized that the rate increase imposed by enzymes is proportional to the affinity of the enzyme for the transition state structure relative to the Michaelis complex. [ 27 ] Because enzymes typically increase the non-catalyzed reaction rate by factors of 10^6–10^26, and Michaelis complexes often have dissociation constants in the range of 10^−3–10^−6 M, it is proposed that transition state complexes are bound with dissociation constants in the range of 10^−14–10^−23 M. As substrate progresses from the Michaelis complex to product, chemistry occurs by enzyme-induced changes in electron distribution in the substrate. Enzymes alter the electronic structure by protonation, proton abstraction, electron transfer, geometric distortion, hydrophobic partitioning, and interaction with Lewis acids and bases. Analogs that resemble the transition state structures should therefore provide the most powerful noncovalent inhibitors known.
All chemical transformations pass through an unstable structure called the transition state, which is poised between the chemical structures of the substrates and products. The transition states for chemical reactions are proposed to have lifetimes near 10^−13 seconds, on the order of the time of a single bond vibration. No physical or spectroscopic method is available to directly observe the structure of the transition state for enzymatic reactions, yet transition state structure is central to understanding enzyme catalysis since enzymes work by lowering the activation energy of a chemical transformation.
It is now accepted that enzymes function to stabilize transition states lying between reactants and products, and that they would therefore be expected to bind strongly any inhibitor that closely resembles such a transition state. Substrates and products often participate in several enzyme catalyzed reactions, whereas the transition state tends to be characteristic of one particular enzyme, so that such an inhibitor tends to be specific for that particular enzyme. The identification of numerous transition state inhibitors supports the transition state stabilization hypothesis for enzymatic catalysis.
Currently there is a large number of enzymes known to interact with transition state analogs, most of which have been designed with the intention of inhibiting the target enzyme. Examples include HIV-1 protease, racemases, β-lactamases, metalloproteinases, cyclooxygenases and many others.
Desorption as well as reactions on surfaces are straightforward to describe with transition state theory. Analysis of adsorption to a surface from a liquid phase can present a challenge because the concentration of the solute near the surface cannot be readily assessed. When full details are not available, it has been proposed that reacting species' concentrations should be normalized to the concentration of active surface sites, an approximation called the surface reactant equi-density approximation (SREA). [ 28 ] | https://en.wikipedia.org/wiki/Transition_state_theory |
In crystallography , the transition temperature is the temperature at which a material changes from one crystal state ( allotrope ) to another. [ 1 ] More formally, it is the temperature at which two crystalline forms of a substance can co-exist in equilibrium . For example, when rhombic sulfur is heated above 95.6 °C, it changes form into monoclinic sulfur; when cooled
below 95.6 °C, it reverts to rhombic sulfur. At 95.6 °C the two forms can co-exist. Another example is tin , which transitions from a cubic crystal below 13.2 °C to a tetragonal crystal above that temperature.
In the case of ferroelectric or ferromagnetic crystals, a transition temperature may be known as the Curie temperature .
| https://en.wikipedia.org/wiki/Transition_temperature |
Transitional B cells are B cells at an intermediate stage in their development between bone marrow immature cells and mature B cells in the spleen . Primary B cell development takes place in the bone marrow , where immature B cells must generate a functional B cell receptor (BCR) and overcome negative selection induced by reactivity with autoantigens . [ 1 ] Transitional cells can be found in the bone marrow, peripheral blood , and spleen, and only a fraction of the immature B cells that survive after the transitional stage become mature B cells in secondary lymphoid organs such as the spleen.
The term "transitional B cell" was first used in 1995 for cells that are developmentally intermediate between immature bone marrow B lineage cells and fully mature naïve B cells in the peripheral blood and secondary lymphoid tissues, found in mice. In humans, it is postulated that the transitional cells, after leaving the bone marrow, are subjected to peripheral checks to prevent the production of autoantibodies . [ 2 ] Transitional B cells that survive selection against autoreactivity develop eventually into naive B cells. [ 3 ] Given the fact that only a small fraction of immature B cells survive the transition to the mature naive stage, the transitional B cell compartment is widely believed to represent a key negative selection checkpoint for autoreactive B cells. [ 4 ] [ 5 ] All transitional B cells are high in heat-stable antigen (HSA, CD24) relative to their mature counterparts and express the phenotypic surface markers AA4. [ 6 ]
There are two transitional stages for the B cells in mouse, T1 and T2, with the T1 stage spanning the cell's migration from the bone marrow to its entry into the spleen, and the T2 stage occurring within the spleen, where the cells develop into mature B cells. [ 7 ] As in the mouse, human transitional cells can be found in the bone marrow, peripheral blood, and spleen. However, in contrast to the nuanced models proposed in the mouse, thus far human studies have, by and large, described a rather homogeneous population of transitional B cells (T1/T2) defined by the expression of high levels of CD24 , CD38 and CD10 . [ 1 ] [ 8 ]
Overall there is general agreement on the markers used to separate the subpopulations, although some differences exist in the number of subgroups and in the functional characteristics of the T2 population. T1 B cells are distinguished from the other subsets by the following surface marker characteristics: they are IgM^hi IgD^− CD21^− CD23^−, whereas T2 B cells retain high levels of surface IgM but are also IgD^+ CD21^+ and CD23^+. [ 8 ] Differences in functional characteristics of the T2 subpopulation reported by different laboratories are unexplained, although they might be due to differences in isolation strategies. In any case, there is consensus that T2 B cells clearly differ functionally from T1 B cells. [ 9 ] | https://en.wikipedia.org/wiki/Transitional_B_cell |
In interior design and furniture design , Transitional Style refers to a contemporary style mixing traditional and modern styles. It emerged in the mid-20th century, combining elements from both traditional and contemporary approaches. Distinguished by its balanced use of clean lines and comfortable furnishings, the style represents a deliberate fusion of historical and modern aesthetics.
The approach typically features neutral color schemes built around whites, creams, and grays, with visual interest created through varied textures rather than bold patterns or ornate details. While retaining some classical elements like crown molding and traditional furniture forms, transitional design simplifies these features to create spaces that feel both refined and welcoming.
Unlike contemporary design, which evolves with current trends, transitional style maintains consistent principles focused on merging formal architectural elements with casual comfort. This synthesis has made it a significant influence in residential and commercial design since its development. [ 1 ]
The style combines curves with straight lines to create a design that balances masculine and feminine attributes, aiming to create a comfortable and relaxing style. A lack of ornamentation and decoration with minimal accessories keeps the focus on the simplicity and sophistication of the design. [ 2 ] Color palettes are typically neutral and subtle and may be monochromatic, with color in art and accents rather than upholstery and floors. [ 3 ]
Transitional style focuses on comfort and practicality, valuing function over form. [ 4 ] Soft textures are often used in transitional furniture. [ 5 ]
21st century transitional style furniture designers include Nina Petronzio and Thomas Pheasant . [ 6 ] [ 7 ] | https://en.wikipedia.org/wiki/Transitional_Style |
In the mathematical field of graph theory , a transitive reduction of a directed graph D is another directed graph with the same vertices and as few edges as possible, such that for all pairs of vertices v , w a (directed) path from v to w in D exists if and only if such a path exists in the reduction. Transitive reductions were introduced by Aho, Garey & Ullman (1972) , who provided tight bounds on the computational complexity of constructing them.
More technically, the reduction is a directed graph that has the same reachability relation as D . Equivalently, D and its transitive reduction should have the same transitive closure as each other, and the transitive reduction of D should have as few edges as possible among all graphs with that property.
The transitive reduction of a finite directed acyclic graph (a directed graph without directed cycles ) is unique and is a subgraph of the given graph. However, uniqueness fails for graphs with (directed) cycles, and for infinite graphs not even existence is guaranteed.
The closely related concept of a minimum equivalent graph is a subgraph of D that has the same reachability relation and as few edges as possible. [ 1 ] The difference is that a transitive reduction does not have to be a subgraph of D . For finite directed acyclic graphs , the minimum equivalent graph is the same as the transitive reduction. However, for graphs that may contain cycles, minimum equivalent graphs are NP-hard to construct, while transitive reductions can be constructed in polynomial time .
Transitive reduction can be defined for an abstract binary relation on a set , by interpreting the pairs of the relation as arcs in a directed graph.
The transitive reduction of a finite directed graph G is a graph with the fewest possible edges that has the same reachability relation as the original graph. That is, if there is a path from a vertex x to a vertex y in graph G , there must also be a path from x to y in the transitive reduction of G , and vice versa. Reachability is transitive in the same sense that an order relation is (if x < y and y < z , then x < z ): whenever G contains a path from x to y and a path from y to z , it also contains a path from x to z passing through y . A direct edge from x to z is therefore redundant, and such edges are excluded by the transitive reduction. The following image displays drawings of graphs corresponding to a non-transitive binary relation (on the left) and its transitive reduction (on the right).
The transitive reduction of a finite directed acyclic graph G is unique, and consists of the edges of G that form the only path between their endpoints. In particular, it is always a spanning subgraph of the given graph. For this reason, the transitive reduction coincides with the minimum equivalent graph in this case.
In the mathematical theory of binary relations , any relation R on a set X may be thought of as a directed graph that has the set X as its vertex set and that has an arc xy for every ordered pair of elements that are related in R . In particular, this method lets partially ordered sets be reinterpreted as directed acyclic graphs, in which there is an arc xy in the graph whenever there is an order relation x < y between the given pair of elements of the partial order. When the transitive reduction operation is applied to a directed acyclic graph that has been constructed in this way, it generates the covering relation of the partial order, which is frequently given visual expression by means of a Hasse diagram .
Transitive reduction has been used on networks which can be represented as directed acyclic graphs (e.g. citation graphs or citation networks ) to reveal structural differences between networks. [ 2 ]
In a finite graph that has cycles, the transitive reduction may not be unique: there may be more than one graph on the same vertex set that has a minimum number of edges and has the same reachability relation as the given graph. Additionally, it may be the case that none of these minimum graphs is a subgraph of the given graph. Nevertheless, it is straightforward to characterize the minimum graphs with the same reachability relation as the given graph G . [ 3 ] If G is an arbitrary directed graph, and H is a graph with the minimum possible number of edges having the same reachability relation as G , then H consists of a directed cycle through the vertices of each strongly connected component of G that has more than one vertex, together with edges between components that form a transitive reduction of the condensation of G .
The total number of edges in this type of transitive reduction is then equal to the number of edges in the transitive reduction of the condensation, plus the number of vertices in nontrivial strongly connected components (components with more than one vertex).
The edges of the transitive reduction that correspond to condensation edges can always be chosen to be a subgraph of the given graph G . However, the cycle within each strongly connected component can only be chosen to be a subgraph of G if that component has a Hamiltonian cycle , something that is not always true and is difficult to check. Because of this difficulty, it is NP-hard to find the smallest subgraph of a given graph G with the same reachability (its minimum equivalent graph). [ 3 ]
Aho et al. provide the following example to show that in infinite graphs , even when the graph is acyclic, a transitive reduction may not exist. Form a graph with a vertex for each real number , with an edge x → y whenever x < y as real numbers. Then this graph is infinite, acyclic, and transitively closed. However, in any subgraph that has the same transitive closure, each remaining edge x → y can be removed without changing the transitive closure, because there still must remain a path from x to y through any vertex between them. Therefore, among the subgraphs with the same transitive closure, none of these subgraphs is minimal: there is no transitive reduction. [ 3 ]
As Aho et al. show, [ 3 ] when the time complexity of graph algorithms is measured only as a function of the number n of vertices in the graph, and not as a function of the number of edges, transitive closure and transitive reduction of directed acyclic graphs have the same complexity. It had already been shown that transitive closure and multiplication of Boolean matrices of size n × n had the same complexity as each other, [ 4 ] so this result put transitive reduction into the same class. The best exact algorithms for matrix multiplication , as of 2023, take time O( n^2.371552 ), [ 5 ] and this gives the fastest known worst-case time bound for transitive reduction in dense graphs, by applying it to matrices over the integers and looking at the nonzero entries in the result.
To prove that transitive reduction is as easy as transitive closure, Aho et al. rely on the already-known equivalence with Boolean matrix multiplication. They let A be the adjacency matrix of the given directed acyclic graph, and B be the adjacency matrix of its transitive closure (computed using any standard transitive closure algorithm). Then an edge uv belongs to the transitive reduction if and only if there is a nonzero entry in row u and column v of matrix A , and there is a zero entry in the same position of the matrix product AB . In this construction, the nonzero elements of the matrix AB represent pairs of vertices connected by paths of length two or more. [ 3 ]
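This matrix criterion is easy to sketch with NumPy. The following is an illustrative sketch only (the function names are ours, the input is assumed acyclic, and the closure is computed with Warshall's algorithm rather than the fast matrix-multiplication methods the complexity bounds refer to):

```python
import numpy as np

def transitive_closure(adj):
    """Boolean transitive closure (paths of length >= 1) via Warshall's algorithm."""
    closure = adj.copy()
    for k in range(len(adj)):
        # a path i -> j exists whenever i reaches k and k reaches j
        closure |= np.outer(closure[:, k], closure[k, :])
    return closure

def transitive_reduction(adj):
    """Transitive reduction of a DAG given as a boolean adjacency matrix.

    An edge u -> v is kept exactly when it is present in adj and the
    (u, v) entry of adj @ closure is zero, i.e. no path of length >= 2
    also connects u to v."""
    closure = transitive_closure(adj)
    longer_paths = (adj.astype(int) @ closure.astype(int)) > 0
    return adj & ~longer_paths

# Example DAG: 0 -> 1 -> 2 plus the redundant shortcut 0 -> 2.
A = np.zeros((3, 3), dtype=bool)
A[0, 1] = A[1, 2] = A[0, 2] = True
R = transitive_reduction(A)  # keeps 0 -> 1 and 1 -> 2, drops 0 -> 2
```

For acyclic inputs this reproduces the unique reduction described above; for graphs with cycles the matrix test alone is not sufficient.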
To prove that transitive reduction is as hard as transitive closure, Aho et al. construct from a given directed acyclic graph G another graph H , in which each vertex of G is replaced by a path of three vertices, and each edge of G corresponds to an edge in H connecting the corresponding middle vertices of these paths. In addition, in the graph H , Aho et al. add an edge from every path start to every path end. In the transitive reduction of H , there is an edge from the path start for u to the path end for v , if and only if edge uv does not belong to the transitive closure of G . Therefore, if the transitive reduction of H can be computed efficiently, the transitive closure of G can be read off directly from it. [ 3 ]
When measured both in terms of the number n of vertices and the number m of edges in a directed acyclic graph, transitive reductions can also be found in time O( nm ), a bound that may be faster than the matrix multiplication methods for sparse graphs . To do so, apply a linear time longest path algorithm in the given directed acyclic graph, for each possible choice of starting vertex. From the computed longest paths, keep only those of length one (single edge); in other words, keep those edges ( u , v ) for which there exists no other path from u to v . This O( nm ) time bound matches the complexity of constructing transitive closures by using depth-first search or breadth first search to find the vertices reachable from every choice of starting vertex, so again with these assumptions transitive closures and transitive reductions can be found in the same amount of time.
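The criterion "keep the edge ( u , v ) only if no other path leads from u to v " can also be checked directly with one depth-first search per edge. This is a simple sketch of that criterion (names are ours), not the linear-time longest-path formulation described above:

```python
from collections import defaultdict

def reaches(graph, src, dst, skip_edge):
    """Depth-first search from src to dst that never traverses skip_edge."""
    stack, seen = [src], {src}
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        for v in graph[u]:
            if (u, v) != skip_edge and v not in seen:
                seen.add(v)
                stack.append(v)
    return False

def transitive_reduction_edges(edges):
    """Keep exactly the edges (u, v) with no alternative path from u to v."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
    return [(u, v) for u, v in edges if not reaches(graph, u, v, (u, v))]

# DAG with a redundant shortcut a -> c.
reduced = transitive_reduction_edges([("a", "b"), ("b", "c"), ("a", "c")])
# reduced == [("a", "b"), ("b", "c")]
```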
For a graph with n vertices, m edges, and r edges in the transitive reduction, it is possible to find the transitive reduction using an output-sensitive algorithm in an amount of time that depends on r in place of m . The algorithm processes the vertices in reverse topological order, maintaining for each vertex the set of vertices reachable from it: an edge ( u , v ) belongs to the transitive reduction exactly when v is not already in the reachable set accumulated for u , in which case the reachable set of v is merged into that of u . [ 6 ]
The ordering of the edges in the inner loop can be obtained by using two passes of counting sort or another stable sorting algorithm to sort the edges, first by the topological numbering of their end vertex, and secondly by their starting vertex. If the sets are represented as bit arrays , each set union operation can be performed in time O ( n ), or faster using bitwise operations . The number of these set operations is proportional to the number of output edges, leading to the overall time bound of O ( nr ). The reachable sets obtained during the algorithm describe the transitive closure of the input. [ 6 ]
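One way this output-sensitive scheme might look in Python, with plain sets standing in for the bit arrays mentioned above (a sketch under the stated assumptions, not the authors' published pseudocode; names are ours):

```python
def transitive_reduction_dag(order, succ):
    """Output-sensitive transitive reduction of a DAG.

    order: the vertices in topological order; succ[v]: out-neighbours of v.
    Plain Python sets stand in for bit arrays."""
    pos = {v: i for i, v in enumerate(order)}   # topological numbering
    reach = {v: {v} for v in order}             # reachable sets, built bottom-up
    reduction = []
    for v in reversed(order):
        # scan v's edges by increasing topological number of the endpoint
        for w in sorted(succ.get(v, []), key=pos.get):
            if w not in reach[v]:               # no path to w found so far,
                reduction.append((v, w))        # so the edge is irredundant
                reach[v] |= reach[w]            # set-union bookkeeping
    return reduction

# Diamond 0 -> {1, 2} -> 3 with the redundant shortcut 0 -> 3.
edges = transitive_reduction_dag([0, 1, 2, 3], {0: [1, 2, 3], 1: [3], 2: [3]})
# The shortcut 0 -> 3 is not emitted.
```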
If the graph is given together with a partition of its vertices into k chains (pairwise-reachable subsets), this time can be further reduced to O ( kr ), by representing each reachable set concisely as a union of suffixes of chains. [ 7 ] | https://en.wikipedia.org/wiki/Transitive_reduction |
In mathematics , a binary relation R on a set X is transitive if, for all elements a , b , c in X , whenever R relates a to b and b to c , then R also relates a to c .
Every partial order and every equivalence relation is transitive. For example, less than and equality among real numbers are both transitive: If a < b and b < c then a < c ; and if x = y and y = z then x = z .
A homogeneous relation R on the set X is a transitive relation if, [ 1 ] for all a , b , c in X , whenever a R b and b R c , then a R c .
Or in terms of first-order logic : ∀ a , b , c ∈ X : ( a R b ∧ b R c ) ⇒ a R c ,
where a R b is the infix notation for ( a , b ) ∈ R .
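Translated directly into code, the definition is a brute-force check over all pairs of related pairs (a hypothetical helper, not part of the article):

```python
from itertools import product

def is_transitive(relation):
    """True iff aRb and bRc always imply aRc, checking all pairs of pairs."""
    return all(
        (a, c) in relation
        for (a, b), (b2, c) in product(relation, repeat=2)
        if b == b2
    )

less_than = {(1, 2), (1, 3), (2, 3)}   # "<" on {1, 2, 3}: transitive
successor = {(2, 1), (3, 2)}           # "is the successor of": not transitive
```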
As a non-mathematical example, the relation "is an ancestor of" is transitive. For example, if Amy is an ancestor of Becky, and Becky is an ancestor of Carrie, then Amy is also an ancestor of Carrie.
On the other hand, "is the birth mother of" is not a transitive relation, because if Alice is the birth mother of Brenda, and Brenda is the birth mother of Claire, then it does not follow that Alice is the birth mother of Claire. In fact, this relation is antitransitive : Alice can never be the birth mother of Claire.
Non-transitive, non-antitransitive relations include sports fixtures (playoff schedules), 'knows' and 'talks to'.
The examples "is greater than", "is at least as great as", and "is equal to" ( equality ) are transitive relations on various sets, for instance the set of real numbers or the set of natural numbers.
More examples of transitive relations include "is a subset of" ( set inclusion ), "divides" ( divisibility ), and "implies" ( implication ).
Examples of non-transitive relations include "is the successor of", "is a member of" ( set membership ), and "is perpendicular to".
The empty relation on any set X is transitive [ 3 ] because there are no elements a , b , c ∈ X such that a R b and b R c , and hence the transitivity condition is vacuously true . A relation R containing only one ordered pair is also transitive: if the ordered pair is of the form ( x , x ) for some x ∈ X , the only such elements a , b , c ∈ X are a = b = c = x , and indeed in this case a R c ; while if the ordered pair is not of the form ( x , x ) then there are no such elements a , b , c ∈ X and hence R is vacuously transitive.
A transitive relation is asymmetric if and only if it is irreflexive . [ 6 ]
A transitive relation need not be reflexive . When it is, it is called a preorder . For example, on set X = {1,2,3}:
As a counterexample, the relation < on the real numbers is transitive, but not reflexive.
Let R be a binary relation on set X . The transitive extension of R , denoted R 1 , is the smallest binary relation on X such that R 1 contains R , and if ( a , b ) ∈ R and ( b , c ) ∈ R then ( a , c ) ∈ R 1 . [ 7 ] For example, suppose X is a set of towns, some of which are connected by roads. Let R be the relation on towns where ( A , B ) ∈ R if there is a road directly linking town A and town B . This relation need not be transitive. The transitive extension of this relation can be defined by ( A , C ) ∈ R 1 if you can travel between towns A and C by using at most two roads.
If a relation is transitive then its transitive extension is itself, that is, if R is a transitive relation then R 1 = R .
The transitive extension of R 1 would be denoted by R 2 , and continuing in this way, in general, the transitive extension of R i would be R i + 1 . The transitive closure of R , denoted by R * or R ∞ is the set union of R , R 1 , R 2 , ... . [ 8 ]
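A minimal sketch of this construction: compute R 1 from R , then iterate until nothing new is added, at which point the relation reached is the transitive closure R *. The function names are ours, and the example mirrors the towns-and-roads illustration above:

```python
def transitive_extension(r):
    """R1: add (a, c) whenever (a, b) and (b, c) are both in R."""
    return r | {(a, c) for (a, b) in r for (b2, c) in r if b == b2}

def transitive_closure(r):
    """Union of R, R1, R2, ...: iterate the extension until a fixed point."""
    while True:
        extended = transitive_extension(r)
        if extended == r:
            return r
        r = extended

# Towns linked by direct roads, as in the example above.
roads = {("A", "B"), ("B", "C"), ("C", "D")}
travel = transitive_closure(roads)
# travel also contains ("A", "C"), ("B", "D") and ("A", "D")
```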
The transitive closure of a relation is a transitive relation. [ 8 ]
The relation "is the birth parent of" on a set of people is not a transitive relation. However, in biology the need often arises to consider birth parenthood over an arbitrary number of generations: the relation "is a birth ancestor of" is a transitive relation and it is the transitive closure of the relation "is the birth parent of".
For the example of towns and roads above, ( A , C ) ∈ R * provided you can travel between towns A and C using any number of roads.
No general formula that counts the number of transitive relations on a finite set (sequence A006905 in the OEIS ) is known. [ 9 ] However, there is a formula for finding the number of relations that are simultaneously reflexive, symmetric, and transitive – in other words, equivalence relations – (sequence A000110 in the OEIS ), those that are symmetric and transitive, those that are symmetric, transitive, and antisymmetric, and those that are total, transitive, and antisymmetric. Pfeiffer [ 10 ] has made some progress in this direction, expressing relations with combinations of these properties in terms of each other, but still calculating any one is difficult. See also Brinkmann and McKay (2005). [ 11 ]
Since the reflexivization of any transitive relation is a preorder , the number of transitive relations on an n -element set is at most 2^n times the number of preorders; thus it is asymptotically 2^((1/4 + o(1)) n^2) by results of Kleitman and Rothschild. [ 12 ]
A relation R is called intransitive if it is not transitive, that is, if xRy and yRz , but not xRz , for some x , y , z .
In contrast, a relation R is called antitransitive if xRy and yRz always implies that xRz does not hold.
For example, the relation defined by xRy if xy is an even number is intransitive, [ 13 ] but not antitransitive. [ 14 ] The relation defined by xRy if x is even and y is odd is both transitive and antitransitive. [ 15 ] The relation defined by xRy if x is the successor number of y is both intransitive [ 16 ] and antitransitive. [ 17 ] Unexpected examples of intransitivity arise in situations such as political questions or group preferences. [ 18 ]
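The even-product and successor relations from this paragraph can be verified mechanically on a small finite set; the two checker functions below are our own illustrative sketch:

```python
def is_intransitive(r):
    """Not transitive: some aRb and bRc exist with (a, c) missing."""
    return any((a, c) not in r
               for (a, b) in r for (b2, c) in r if b == b2)

def is_antitransitive(r):
    """aRb and bRc always rule out aRc."""
    return all((a, c) not in r
               for (a, b) in r for (b2, c) in r if b == b2)

nums = range(1, 7)
# xRy when x*y is even: intransitive, but not antitransitive.
even_product = {(x, y) for x in nums for y in nums if (x * y) % 2 == 0}
# xRy when x is the successor of y: both intransitive and antitransitive.
successor = {(x, y) for x in nums for y in nums if x == y + 1}
```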
Generalized to stochastic versions ( stochastic transitivity ), the study of transitivity finds applications in decision theory , psychometrics and utility models . [ 19 ]
A quasitransitive relation is another generalization; [ 5 ] it is required to be transitive only on its non-symmetric part. Such relations are used in social choice theory or microeconomics . [ 20 ]
Proposition: If R is univalent , then R ; R^T is transitive.
Corollary : If R is univalent, then R ; R^T is an equivalence relation on the domain of R . | https://en.wikipedia.org/wiki/Transitive_relation |
Transketolase (abbreviated as TK ) is an enzyme that, in humans, is encoded by the TKT gene . [ 1 ] It participates in both the pentose phosphate pathway in all organisms and the Calvin cycle of photosynthesis . Transketolase catalyzes two important reactions, which operate in opposite directions in these two pathways. In the first reaction of the non-oxidative pentose phosphate pathway, the cofactor thiamine diphosphate accepts a 2-carbon fragment from a 5-carbon ketose ( D-xylulose-5-P ), then transfers this fragment to a 5-carbon aldose ( D-ribose-5-P ) to form a 7-carbon ketose ( sedoheptulose-7-P ). The abstraction of two carbons from D-xylulose-5-P yields the 3-carbon aldose glyceraldehyde-3-P . In the Calvin cycle, transketolase catalyzes the reverse reaction, the conversion of sedoheptulose-7-P and glyceraldehyde-3-P to pentoses, the aldose D-ribose-5-P and the ketose D-xylulose-5-P.
The second reaction catalyzed by transketolase in the pentose phosphate pathway involves the same thiamine diphosphate-mediated transfer of a 2-carbon fragment from D-xylulose-5-P to the aldose erythrose-4-phosphate , affording fructose 6-phosphate and glyceraldehyde-3-P. Again, the same reaction occurs in the Calvin cycle but in the opposite direction. Moreover, in the Calvin cycle, this is the first reaction catalyzed by transketolase rather than the second.
Transketolase connects the pentose phosphate pathway to glycolysis , feeding excess sugar phosphates into the main carbohydrate metabolic pathways in mammals. Its presence is necessary for the production of NADPH , especially in tissues actively engaged in biosyntheses, such as fatty acid synthesis by the liver and mammary glands , and for steroid synthesis by the liver and adrenal glands . Thiamine diphosphate is an essential cofactor, along with calcium .
Transketolase is abundantly expressed in the mammalian cornea by the stromal keratocytes and epithelial cells and is reputed to be one of the corneal crystallins . [ 2 ]
Transketolase is widely expressed in many organisms, including bacteria, plants, and mammals. The following human genes encode proteins with transketolase activity: TKT , TKTL1 , and TKTL2 .
The entrance to the active site for this enzyme is made up mainly of several arginine , histidine , serine , and aspartate side-chains, with a glutamate side-chain playing a secondary role. These side-chains, specifically Arg359, Arg528, His469, and Ser386, are conserved within each transketolase enzyme and interact with the phosphate group of the donor and acceptor substrates . Because the substrate channel is so narrow, the donor and acceptor substrates cannot be bound simultaneously. Also, the substrates conform into a slightly extended form upon binding in the active site to accommodate this narrow channel.
Although this enzyme can bind numerous types of substrates, such as phosphorylated and nonphosphorylated monosaccharides including the keto and aldosugars fructose , ribose , etc., it has a high specificity for the stereoconfiguration of the hydroxyl groups of the sugars. These hydroxyl groups at C-3 and C-4 of the ketose donor must be in the D- threo configuration to correctly correspond to the C-1 and C-2 positions on the aldose acceptor. [ 3 ] Also, they stabilize the substrate in the active site by interacting with the Asp477, His30, and His263 residues. Disruption of this configuration, both the placement of hydroxyl groups or their stereochemistry, would consequently alter the H-bonding between the residues and substrates thus causing a lower affinity for the substrates.
In the first half of this pathway, His263 is used to effectively abstract the C3 hydroxyl proton , which thus allows a 2-carbon segment to be cleaved from fructose 6-phosphate . [ 4 ] The cofactor necessary for this step to occur is thiamin pyrophosphate (TPP). The binding of TPP to the enzyme incurs no major conformational change to the enzyme; instead, the enzyme has two flexible loops at the active site that make TPP accessible and binding possible. [ 3 ] Thus, this allows the active site to have a "closed" conformation rather than a large conformational change. Later in the pathway, His263 is used as a proton donor for the substrate acceptor-TPP complex, which can then generate erythrose-4-phosphate . [ citation needed ]
The histidine and aspartate side-chains are used to effectively stabilize the substrate within the active site and participate in deprotonation of the substrate. To be specific, the His 263 and His30 side-chains form hydrogen bonds to the aldehyde end of the substrate, which is deepest into the substrate channel, and Asp477 forms hydrogen bonds with the alpha hydroxyl group on the substrate, where it works to effectively bind the substrate and check for proper stereochemistry. It is also thought that Asp477 could have important catalytic effects because of its orientation in the middle of the active site and its interactions with the alpha hydroxyl group of the substrate. Glu418, located in the deepest region of the active site, plays a critical role in stabilizing the TPP cofactor. Specifically, it is involved in the cofactor-assisted proton abstraction from the substrate molecule. [ 3 ]
The phosphate group of the substrate also plays an important role in stabilizing the substrate upon its entrance into the active site. The tight ionic and polar interactions between this phosphate group and the residues Arg359, Arg528, His469, and Ser386 collectively work to stabilize the substrate by forming H-bonds to the oxygen atoms of the phosphate. [ 3 ] The ionic nature is found in the salt bridge formed from Arg359 to the phosphate group. [ citation needed ]
The catalysis of this mechanism is initiated by the deprotonation of TPP at the thiazolium ring. This carbanion then binds to the carbonyl of the donor substrate, thus cleaving the bond between C-2 and C-3. This keto fragment remains covalently bound to the C-2 carbon of TPP. The donor substrate is then released, and the acceptor substrate enters the active site where the fragment, bound to the intermediate α-β-dihydroxyethyl thiamin diphosphate, is transferred to the acceptor. [ 3 ]
Experiments have also been conducted that test the effect of replacing alanine for the amino acids at the entrance to the active site, Arg359, Arg528, and His469, which interact with the phosphate group of the substrate. This replacement creates a mutant enzyme with impaired catalytic activity. [ 3 ]
Transketolase activity decreases due to thiamine deficiency, generally due to malnutrition . Several diseases are associated with thiamine deficiency, including beriberi , biotin-thiamine-responsive basal ganglia disease (BTBGD) , [ 5 ] Wernicke–Korsakoff syndrome , and others (see thiamine for a comprehensive listing).
In Wernicke–Korsakoff syndrome, while no mutations could be demonstrated, [ 6 ] there is an indication that thiamine deficiency leads to Wernicke–Korsakoff syndrome only in those whose transketolase has a reduced affinity for thiamine. [ 7 ] In this way, the activity of transketolase is greatly hindered, and, as a consequence, the entire pentose phosphate pathway is inhibited. [ 8 ]
Transketolase deficiency, also known as SDDHD (Short Stature, Developmental Delay, and congenital Heart Defects), is caused by an inherited autosomal recessive mutation in the TKT gene. It is a rare disorder of pentose phosphate metabolism with symptoms apparent in infancy, including developmental delay and intellectual disability, delayed or absent speech, short stature, and congenital heart defects. Additional reported features include hypotonia, hyperactivity, stereotypic behavior, ophthalmologic abnormalities, hearing impairment, and variable facial dysmorphism, among others. Laboratory analysis shows elevated plasma and urinary polyols (erythritol, arabitol, and ribitol) and urinary sugar-phosphates (ribose-5-phosphate and xylulose/ribulose-5-phosphate). [ 9 ] "Cell extracts from all 5 patients showed absent or low residual TKT activity. Boyle et al. (2016) suggested that the low TKT activity in some tissues, possibly from another protein with the same function, might explain why TKT deficiency is compatible with life even though TKT is an essential enzyme." [ 10 ]
Red blood cell transketolase activity is reduced in deficiency of thiamine (vitamin B 1 ), and may be used in the diagnosis of Wernicke encephalopathy and other B 1 -deficiency syndromes if the diagnosis is in doubt. [ 11 ] Apart from the baseline enzyme activity (which may be normal even in deficiency states), acceleration of enzyme activity after the addition of thiamine pyrophosphate may be diagnostic of thiamine deficiency (0-15% normal, 15-25% deficiency, >25% severe deficiency). [ 12 ] | https://en.wikipedia.org/wiki/Transketolase |
In biology , translation is the process in living cells in which proteins are produced using RNA molecules as templates. The generated protein is a sequence of amino acids . This sequence is determined by the sequence of nucleotides in the RNA. The nucleotides are considered three at a time. Each such triple results in the addition of one specific amino acid to the protein being generated. The matching from nucleotide triple to amino acid is called the genetic code . The translation is performed by a large complex of functional RNA and proteins called ribosomes . The entire process is called gene expression .
In translation, messenger RNA (mRNA) is decoded in a ribosome, outside the nucleus, to produce a specific amino acid chain, or polypeptide . The polypeptide later folds into an active protein and performs its functions in the cell. The polypeptide can also start folding during protein synthesis. [ 1 ] The ribosome facilitates decoding by inducing the binding of complementary transfer RNA (tRNA) anticodon sequences to mRNA codons . The tRNAs carry specific amino acids that are chained together into a polypeptide as the mRNA passes through and is "read" by the ribosome.
Translation proceeds in three phases: initiation, elongation, and termination.
In prokaryotes (bacteria and archaea), translation occurs in the cytosol, where the large and small subunits of the ribosome bind to the mRNA. In eukaryotes , translation occurs in the cytoplasm or across the membrane of the endoplasmic reticulum through a process called co-translational translocation . In co-translational translocation, the entire ribosome–mRNA complex binds to the outer membrane of the rough endoplasmic reticulum (ER), and the new protein is synthesized and released into the ER; the newly created polypeptide can be immediately secreted or stored inside the ER for future vesicle transport and secretion outside the cell.
Many types of transcribed RNA, such as tRNA, ribosomal RNA, and small nuclear RNA, do not undergo a translation into proteins.
Several antibiotics act by inhibiting translation. These include anisomycin , cycloheximide , chloramphenicol , tetracycline , streptomycin , erythromycin , and puromycin . Prokaryotic ribosomes have a different structure from that of eukaryotic ribosomes, and thus antibiotics can specifically target bacterial infections without harming a eukaryotic host 's cells.
The basic process of protein production is the addition of one amino acid at a time to the end of a protein. This operation is performed by a ribosome . [ 2 ] A ribosome is made up of two subunits, a small subunit and a large subunit. These subunits come together before the translation of mRNA into a protein to provide a location for translation to be carried out and a polypeptide to be produced. [ 3 ] The type of amino acid to add is determined by a messenger RNA (mRNA) molecule. Each amino acid added is matched to a three-nucleotide subsequence of the mRNA; each possible triplet corresponds to a specific amino acid. The successive amino acids added to the chain are matched to successive nucleotide triplets in the mRNA. In this way, the sequence of nucleotides in the template mRNA chain determines the sequence of amino acids in the generated amino acid chain. [ 4 ] The addition of an amino acid occurs at the C-terminus of the peptide; thus, translation is said to be amine-to-carboxyl directed. [ 5 ]
The mRNA carries genetic information encoded as a ribonucleotide sequence from the chromosomes to the ribosomes. The ribonucleotides are "read" by translational machinery in a sequence of nucleotide triplets called codons. Each of those triplets codes for a specific amino acid . [ citation needed ]
The ribosome molecules translate this code to a specific sequence of amino acids. The ribosome is a multisubunit structure containing ribosomal RNA (rRNA) and proteins. It is the "factory" where amino acids are assembled into proteins.
Transfer RNAs (tRNAs) are small noncoding RNA chains (74–93 nucleotides) that transport amino acids to the ribosome. The repertoire of tRNA genes varies widely between species, with some bacteria having between 20 and 30 genes while complex eukaryotes could have thousands. [ 6 ] tRNAs have a site for amino acid attachment, and a site called an anticodon. The anticodon is an RNA triplet complementary to the mRNA triplet that codes for their cargo amino acid .
Aminoacyl tRNA synthetases ( enzymes ) catalyze the bonding between specific tRNAs and the amino acids that their anticodon sequences call for. The product of this reaction is an aminoacyl-tRNA . The amino acid is joined by its carboxyl group to the 3' OH of the tRNA by an ester bond . When the tRNA has an amino acid linked to it, the tRNA is termed "charged". In bacteria, this aminoacyl-tRNA is carried to the ribosome by EF-Tu , where mRNA codons are matched through complementary base pairing to specific tRNA anticodons. Aminoacyl-tRNA synthetases that mispair tRNAs with the wrong amino acids can produce mischarged aminoacyl-tRNAs, which can result in inappropriate amino acids at the respective position in the protein. This "mistranslation" [ 7 ] of the genetic code naturally occurs at low levels in most organisms, but certain cellular environments cause an increase in permissive mRNA decoding, sometimes to the benefit of the cell.
The ribosome has two binding sites for tRNA. They are the aminoacyl site (abbreviated A), and the peptidyl site/ exit site (abbreviated P/E). Concerning the mRNA, the three sites are oriented 5' to 3' E-P-A, because ribosomes move toward the 3' end of mRNA. The A-site binds the incoming tRNA with the complementary codon on the mRNA. The P/E-site holds the tRNA with the growing polypeptide chain. When an aminoacyl-tRNA initially binds to its corresponding codon on the mRNA, it is in the A site. Then, a peptide bond forms between the amino acid of the tRNA in the A site and the amino acid of the charged tRNA in the P/E site. The growing polypeptide chain is transferred to the tRNA in the A site. Translocation occurs, moving the tRNA to the P/E site, now without an amino acid; the tRNA that was in the A site, now charged with the polypeptide chain, is moved to the P/E site and the uncharged tRNA leaves, and another aminoacyl-tRNA enters the A site to repeat the process. [ 8 ]
After the new amino acid is added to the chain, and after the tRNA is released out of the ribosome and into the cytosol, the energy provided by the hydrolysis of a GTP bound to the translocase EF-G (in bacteria ) and a/eEF-2 (in eukaryotes and archaea ) moves the ribosome down one codon towards the 3' end . The energy required for translation of proteins is significant. For a protein containing n amino acids, the number of high-energy phosphate bonds required to translate it is 4 n − 1. [ 9 ] The rate of translation varies; it is significantly higher in prokaryotic cells (up to 17–21 amino acid residues per second) than in eukaryotic cells (up to 6–9 amino acid residues per second). [ 10 ]
Initiation involves the small subunit of the ribosome binding to the 5' end of mRNA with the help of initiation factors (IF). In bacteria and a minority of archaea, initiation of protein synthesis involves the recognition of a purine-rich initiation sequence on the mRNA called the Shine–Dalgarno sequence . The Shine–Dalgarno sequence binds to a complementary pyrimidine-rich sequence on the 3' end of the 16S rRNA part of the 30S ribosomal subunit. The binding of these complementary sequences ensures that the 30S ribosomal subunit is bound to the mRNA and is aligned such that the initiation codon is placed in the 30S portion of the P-site. Once the mRNA and 30S subunit are properly bound, an initiation factor brings the initiator tRNA–amino acid complex, f-Met -tRNA, to the 30S P site. The initiation phase is completed once a 50S subunit joins the 30S subunit, forming an active 70S ribosome. [ 11 ] Termination of the polypeptide occurs when the A site of the ribosome is occupied by a stop codon (UAA, UAG, or UGA) on the mRNA, creating the primary structure of a protein. tRNA usually cannot recognize or bind to stop codons. Instead, the stop codon induces the binding of a release factor protein [ 12 ] (RF1 & RF2) that prompts the disassembly of the entire ribosome/mRNA complex by the hydrolysis of the polypeptide chain from the peptidyl transferase center [ 2 ] of the ribosome. [ 13 ] Drugs or special sequence motifs on the mRNA can change the ribosomal structure so that near-cognate tRNAs are bound to the stop codon instead of the release factors. In such cases of 'translational readthrough', translation continues until the ribosome encounters the next stop codon. [ 14 ]
Even though the ribosomes are usually considered accurate and processive machines, the translation process is subject to errors that can lead either to the synthesis of erroneous proteins or to the premature abandonment of translation, either because a tRNA couples to a wrong codon or because a tRNA is coupled to the wrong amino acid. [ 15 ] The rate of error in synthesizing proteins has been estimated to be between 1 in 10 5 and 1 in 10 3 misincorporated amino acids, depending on the experimental conditions. [ 16 ] The rate of premature translation abandonment, instead, has been estimated to be of the order of magnitude of 10 −4 events per translated codon. [ 17 ] [ 18 ]
The process of translation is highly regulated in both eukaryotic and prokaryotic organisms. Regulation of translation can impact the global rate of protein synthesis which is closely coupled to the metabolic and proliferative state of a cell.
To study this process, scientists have used a wide variety of methods such as structural biology, analytical chemistry (mass-spectrometry based), imaging of reporter mRNA translation (in which the translation of an mRNA is linked to an output, such as luminescence or fluorescence), and next-generation sequencing based methods. [ 19 ] Other methods, such as the toeprinting assay, can also be used to determine the location of ribosomes on a particular mRNA in vitro, as well as the footprints of other proteins regulating translation.
In particular, ribosome profiling, which is a powerful method, [ 20 ] enables researchers to take a snapshot of all the proteins being translated at a given time, showing which parts of the mRNA are being translated into proteins by ribosomes at a given time. This method is useful because it looks at all the mRNAs instead of using reporters that would typically look at one specific mRNA at a time. Ribosome profiling provides valuable insights into translation dynamics, revealing the complex interplay between gene sequence, mRNA structure, and translation regulation. For example, research utilizing this method has revealed that genetic differences and their subsequent expression as mRNAs can also impact translation rate in an RNA-specific manner. [ 21 ]
Expanding on this concept, a more recent development is single-cell ribosome profiling, a technique that allows us to study the translation process at the resolution of individual cells. [ 22 ] This is particularly significant as cells, even those of the same type, can exhibit considerable variability in their protein synthesis. Single-cell ribosome profiling has the potential to shed light on the heterogeneous nature of cells, leading to a more nuanced understanding of how translation regulation can impact cell behavior, metabolic state, and responsiveness to various stimuli or conditions.
Translational control is critical for the development and survival of cancer . Cancer cells must frequently regulate the translation phase of gene expression, though it is not fully understood why translation is targeted over other steps such as transcription. While cancer cells often have genetically altered translation factors, it is much more common for cancer cells to modify the levels of existing translation factors. [ 23 ] Several major oncogenic signaling pathways, including the RAS–MAPK , PI3K/AKT/mTOR , MYC, and WNT–β-catenin pathways, ultimately reprogram the genome via translation. [ 24 ] Cancer cells also control translation to adapt to cellular stress. During stress, the cell translates mRNAs that can mitigate the stress and promote survival. An example of this is the expression of AMPK in various cancers; its activation triggers a cascade that can ultimately allow the cancer to escape apoptosis (programmed cell death) triggered by nutrition deprivation. Future cancer therapies may involve disrupting the translation machinery of the cell to counter the downstream effects of cancer. [ 23 ]
The transcription-translation process description, mentioning only the most basic "elementary" processes, consists of:
The stepwise building of a protein from amino acids during translation has long been the subject of physical models, starting from the first detailed kinetic models such as [ 26 ] and others taking into account stochastic aspects of translation and using computer simulations. Many chemical kinetics-based models of protein synthesis have been developed and analyzed in the last four decades. [ 27 ] [ 28 ] Beyond chemical kinetics, various modeling formalisms such as the Totally Asymmetric Simple Exclusion Process , [ 28 ] Probabilistic Boolean Networks , Petri Nets and max-plus algebra have been applied to model the detailed kinetics of protein synthesis or some of its stages. A basic model of protein synthesis that takes into account all eight 'elementary' processes has been developed, [ 25 ] following the paradigm that "useful models are simple and extendable". [ 29 ] The simplest model M0 is represented by the reaction kinetic mechanism (Figure M0). It was generalised to include 40S, 60S and initiation factor (IF) binding (Figure M1'). It was extended further to include the effect of microRNA on protein synthesis. [ 30 ] Most of the models in this hierarchy can be solved analytically. These solutions were used to extract 'kinetic signatures' of different specific mechanisms of synthesis regulation.
It is also possible to translate either by hand (for short sequences) or by computer (after first programming one appropriately, see section below); this allows biologists and chemists to draw out the chemical structure of the encoded protein on paper.
First, convert each template DNA base to its RNA complement (note that the complement of A is now U), as shown below. Note that the template strand of the DNA is the one the RNA is polymerized against; the other DNA strand would be the same as the RNA, but with thymine instead of uracil.
Then split the RNA into triplets (groups of three bases). Note that there are 3 translation "windows", or reading frames , depending on where you start reading the code.
Finally, use the table at Genetic code to translate the above into a structural formula as used in chemistry.
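The three steps above can be sketched in code; the codon table here is a small illustrative subset of the full 64-codon genetic code, and the example sequence is made up for demonstration:

```python
# Hand-translation steps: template DNA -> RNA complement -> codons -> amino acids.
# Only a toy subset of the genetic code table is included.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "UUC": "Phe", "GGC": "Gly",
    "GCU": "Ala", "UAA": "Stop", "UAG": "Stop", "UGA": "Stop",
}
RNA_COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(template_dna: str) -> str:
    """Step 1: convert each template DNA base to its RNA complement (A -> U)."""
    return "".join(RNA_COMPLEMENT[base] for base in template_dna)

def codons(rna: str, frame: int = 0) -> list:
    """Step 2: split the RNA into triplets for one of the three reading frames (0, 1, 2)."""
    return [rna[i:i + 3] for i in range(frame, len(rna) - 2, 3)]

def translate(rna: str, frame: int = 0) -> list:
    """Step 3: look up each triplet in the code table, stopping at a stop codon."""
    peptide = []
    for codon in codons(rna, frame):
        aa = CODON_TABLE.get(codon, "???")  # "???" marks codons missing from the toy table
        if aa == "Stop":
            break
        peptide.append(aa)
    return peptide

rna = transcribe("TACAAACCG")   # -> "AUGUUUGGC"
peptide = translate(rna)        # -> ["Met", "Phe", "Gly"]
```

Changing the `frame` argument shows how the same RNA yields different triplets in each of the three reading frames.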
This will give the primary structure of the protein. However, proteins tend to fold , depending in part on hydrophilic and hydrophobic segments along the chain. Secondary structure can often still be guessed at, but the proper tertiary structure is often very hard to determine.
Whereas other aspects such as the 3D structure, called tertiary structure , of protein can only be predicted using sophisticated algorithms , the amino acid sequence, called primary structure, can be determined solely from the nucleic acid sequence with the aid of a translation table .
This approach may not give the correct amino acid composition of the protein, in particular if unconventional amino acids such as selenocysteine are incorporated into the protein, which is coded for by a conventional stop codon in combination with a downstream hairpin (SElenoCysteine Insertion Sequence, or SECIS).
There are many computer programs capable of translating a DNA/RNA sequence into a protein sequence. Normally this is performed using the Standard Genetic Code; however, few programs can handle all the "special" cases, such as the use of alternative initiation codons, which are biologically significant. For instance, the rare alternative start codon CTG codes for methionine when used as a start codon, and for leucine in all other positions.
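That special case can be sketched as a small variation on a translation routine; the table below is again only an illustrative subset (in RNA form, the start codon CTG appears as CUG):

```python
# Alternative start codons: CUG codes for Leu internally, but any codon in the
# start set is interpreted as Met when it initiates translation.
CODON_TABLE = {"AUG": "Met", "CUG": "Leu", "UUG": "Leu", "GCU": "Ala", "UAA": "Stop"}
START_CODONS = {"AUG", "CUG", "UUG"}

def translate_from_start(rna: str) -> list:
    peptide = []
    for i in range(0, len(rna) - 2, 3):
        aa = CODON_TABLE[rna[i:i + 3]]
        if aa == "Stop":
            break
        if i == 0 and rna[:3] in START_CODONS:
            aa = "Met"  # the initiating codon is read as methionine
        peptide.append(aa)
    return peptide

translate_from_start("CUGGCUCUGUAA")  # -> ["Met", "Ala", "Leu"]
```

Note that the internal CUG at position 3 is still translated as leucine; only the initiating occurrence becomes methionine.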
Example: Condensed translation table for the Standard Genetic Code (from the NCBI Taxonomy webpage). [ 31 ]
The "Starts" row indicate three start codons, UUG, CUG, and the very common AUG. It also indicates the first amino acid residue when interpreted as a start: in this case it is all methionine.
Even when working with ordinary eukaryotic sequences such as the Yeast genome, it is often desired to be able to use alternative translation tables—namely for translation of the mitochondrial genes. Currently the following translation tables are defined by the NCBI Taxonomy Group for the translation of the sequences in GenBank : [ 31 ] | https://en.wikipedia.org/wiki/Translation_(biology) |
In Euclidean geometry , a translation is a geometric transformation that moves every point of a figure, shape or space by the same distance in a given direction . A translation can also be interpreted as the addition of a constant vector to every point, or as shifting the origin of the coordinate system . In a Euclidean space , any translation is an isometry .
If v {\displaystyle \mathbf {v} } is a fixed vector, known as the translation vector , and p {\displaystyle \mathbf {p} } is the initial position of some object, then the translation function T v {\displaystyle T_{\mathbf {v} }} will work as T v ( p ) = p + v {\displaystyle T_{\mathbf {v} }(\mathbf {p} )=\mathbf {p} +\mathbf {v} } .
If T {\displaystyle T} is a translation, then the image of a subset A {\displaystyle A} under the function T {\displaystyle T} is the translate of A {\displaystyle A} by T {\displaystyle T} . The translate of A {\displaystyle A} by T v {\displaystyle T_{\mathbf {v} }} is often written as A + v {\displaystyle A+\mathbf {v} } .
In classical physics , translational motion is movement that changes the position of an object, as opposed to rotation . For example, according to Whittaker: [ 1 ]
If a body is moved from one position to another, and if the lines joining the initial and final points of each of the points of the body are a set of parallel straight lines of length ℓ , so that the orientation of the body in space is unaltered, the displacement is called a translation parallel to the direction of the lines, through a distance ℓ .
A translation is the operation changing the positions of all points ( x , y , z ) {\displaystyle (x,y,z)} of an object according to the formula ( x , y , z ) → ( x + Δ x , y + Δ y , z + Δ z ) , {\displaystyle (x,y,z)\to (x+\Delta x,\ y+\Delta y,\ z+\Delta z),}
where ( Δ x , Δ y , Δ z ) {\displaystyle (\Delta x,\ \Delta y,\ \Delta z)} is the same vector for each point of the object. The translation vector ( Δ x , Δ y , Δ z ) {\displaystyle (\Delta x,\ \Delta y,\ \Delta z)} common to all points of the object describes a particular type of displacement of the object, usually called a linear displacement to distinguish it from displacements involving rotation, called angular displacements.
When considering spacetime , a change of time coordinate is considered to be a translation.
The translation operator turns a function of the original position, f ( v ) {\displaystyle f(\mathbf {v} )} , into a function of the final position, f ( v + δ ) {\displaystyle f(\mathbf {v} +\mathbf {\delta } )} . In other words, T δ {\displaystyle T_{\mathbf {\delta } }} is defined such that T δ f ( v ) = f ( v + δ ) . {\displaystyle T_{\mathbf {\delta } }f(\mathbf {v} )=f(\mathbf {v} +\mathbf {\delta } ).} This operator is more abstract than a function, since T δ {\displaystyle T_{\mathbf {\delta } }} defines a relationship between two functions, rather than the underlying vectors themselves. The translation operator can act on many kinds of functions, such as when the translation operator acts on a wavefunction , which is studied in the field of quantum mechanics.
The set of all translations forms the translation group T {\displaystyle \mathbb {T} } , which is isomorphic to the space itself, and a normal subgroup of Euclidean group E ( n ) {\displaystyle E(n)} . The quotient group of E ( n ) {\displaystyle E(n)} by T {\displaystyle \mathbb {T} } is isomorphic to the group of rigid motions which fix a particular origin point, the orthogonal group O ( n ) {\displaystyle O(n)} : E ( n ) / T ≅ O ( n ) . {\displaystyle E(n)/\mathbb {T} \cong O(n).}
Because translation is commutative , the translation group is abelian . There are an infinite number of possible translations, so the translation group is an infinite group .
In the theory of relativity , due to the treatment of space and time as a single spacetime , translations can also refer to changes in the time coordinate . For example, the Galilean group and the Poincaré group include translations with respect to time.
One kind of subgroup of the three-dimensional translation group are the lattice groups , which are infinite groups , but unlike the translation groups, are finitely generated . That is, a finite generating set generates the entire group.
A translation is an affine transformation with no fixed points . Matrix multiplications always have the origin as a fixed point. Nevertheless, there is a common workaround using homogeneous coordinates to represent a translation of a vector space with matrix multiplication : Write the 3-dimensional vector v = ( v x , v y , v z ) {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z})} using 4 homogeneous coordinates as v = ( v x , v y , v z , 1 ) {\displaystyle \mathbf {v} =(v_{x},v_{y},v_{z},1)} . [ 2 ]
To translate an object by a vector v {\displaystyle \mathbf {v} } , each homogeneous vector p {\displaystyle \mathbf {p} } (written in homogeneous coordinates) can be multiplied by this translation matrix : T v = [ 1 0 0 v x 0 1 0 v y 0 0 1 v z 0 0 0 1 ] {\displaystyle T_{\mathbf {v} }={\begin{bmatrix}1&0&0&v_{x}\\0&1&0&v_{y}\\0&0&1&v_{z}\\0&0&0&1\end{bmatrix}}}
The multiplication then gives the expected result: T v p = [ 1 0 0 v x 0 1 0 v y 0 0 1 v z 0 0 0 1 ] [ p x p y p z 1 ] = [ p x + v x p y + v y p z + v z 1 ] = p + v {\displaystyle T_{\mathbf {v} }\mathbf {p} ={\begin{bmatrix}1&0&0&v_{x}\\0&1&0&v_{y}\\0&0&1&v_{z}\\0&0&0&1\end{bmatrix}}{\begin{bmatrix}p_{x}\\p_{y}\\p_{z}\\1\end{bmatrix}}={\begin{bmatrix}p_{x}+v_{x}\\p_{y}+v_{y}\\p_{z}+v_{z}\\1\end{bmatrix}}=\mathbf {p} +\mathbf {v} }
The inverse of a translation matrix can be obtained by reversing the direction of the vector: T v − 1 = T − v . {\displaystyle T_{\mathbf {v} }^{-1}=T_{-\mathbf {v} }.}
Similarly, the product of translation matrices is given by adding the vectors: T u T v = T u + v . {\displaystyle T_{\mathbf {u} }T_{\mathbf {v} }=T_{\mathbf {u} +\mathbf {v} }.}
Because addition of vectors is commutative , multiplication of translation matrices is therefore also commutative (unlike multiplication of arbitrary matrices).
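These identities can be checked numerically; the following is a sketch using NumPy, with illustrative variable names:

```python
import numpy as np

def translation_matrix(v):
    """Build the 4x4 homogeneous translation matrix for a 3-vector v."""
    T = np.eye(4)
    T[:3, 3] = v  # place the translation vector in the last column
    return T

p = np.array([2.0, -1.0, 3.0, 1.0])  # a point in homogeneous coordinates
u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 4.0, -1.0])
Tu, Tv = translation_matrix(u), translation_matrix(v)

assert np.allclose(Tv @ p, [2.0, 3.0, 2.0, 1.0])              # T_v p = p + v
assert np.allclose(np.linalg.inv(Tv), translation_matrix(-v))  # inverse reverses the vector
assert np.allclose(Tu @ Tv, translation_matrix(u + v))         # product adds the vectors
assert np.allclose(Tu @ Tv, Tv @ Tu)                           # hence multiplication commutes
```

The last assertion illustrates the point of this section: translation matrices commute precisely because vector addition does.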
While geometric translation is often viewed as an active transformation that changes the position of a geometric object, a similar result can be achieved by a passive transformation that moves the coordinate system itself but leaves the object fixed. The passive version of an active geometric translation is known as a translation of axes .
An object that looks the same before and after translation is said to have translational symmetry . A common example is a periodic function , which is an eigenfunction of a translation operator.
The graph of a real function f , the set of points ( x , f ( x ) ) {\displaystyle (x,f(x))} , is often pictured in the real coordinate plane with x as the horizontal coordinate and y = f ( x ) {\displaystyle y=f(x)} as the vertical coordinate.
Starting from the graph of f , a horizontal translation means composing f with a function x ↦ x − a {\displaystyle x\mapsto x-a} , for some constant number a , resulting in a graph consisting of points ( x , f ( x − a ) ) {\displaystyle (x,f(x-a))} . Each point ( x , y ) {\displaystyle (x,y)} of the original graph corresponds to the point ( x + a , y ) {\displaystyle (x+a,y)} in the new graph, which pictorially results in a horizontal shift.
A vertical translation means composing the function y ↦ y + b {\displaystyle y\mapsto y+b} with f , for some constant b , resulting in a graph consisting of the points ( x , f ( x ) + b ) {\displaystyle {\bigl (}x,f(x)+b{\bigr )}} . Each point ( x , y ) {\displaystyle (x,y)} of the original graph corresponds to the point ( x , y + b ) {\displaystyle (x,y+b)} in the new graph, which pictorially results in a vertical shift. [ 3 ]
For example, taking the quadratic function y = x 2 {\displaystyle y=x^{2}} , whose graph is a parabola with vertex at ( 0 , 0 ) {\displaystyle (0,0)} , a horizontal translation 5 units to the right would be the new function y = ( x − 5 ) 2 = x 2 − 10 x + 25 {\displaystyle y=(x-5)^{2}=x^{2}-10x+25} whose vertex has coordinates ( 5 , 0 ) {\displaystyle (5,0)} . A vertical translation 3 units upward would be the new function y = x 2 + 3 {\displaystyle y=x^{2}+3} whose vertex has coordinates ( 0 , 3 ) {\displaystyle (0,3)} .
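A quick numeric check of these shifts for the parabola f(x) = x²:

```python
# Horizontal and vertical translations of the graph of f(x) = x**2.
f = lambda x: x ** 2
g = lambda x: f(x - 5)   # horizontal translation 5 units to the right
h = lambda x: f(x) + 3   # vertical translation 3 units upward

assert g(5) == 0 and h(0) == 3                       # vertices move to (5, 0) and (0, 3)
assert all(g(x + 5) == f(x) for x in range(-3, 4))   # (x, y) on f maps to (x + 5, y) on g
assert all(h(x) == f(x) + 3 for x in range(-3, 4))   # (x, y) on f maps to (x, y + 3) on h
```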
The antiderivatives of a function all differ from each other by a constant of integration and are therefore vertical translates of each other. [ 4 ]
For describing vehicle dynamics (or movement of any rigid body ), including ship dynamics and aircraft dynamics , it is common to use a mechanical model consisting of six degrees of freedom , which includes translations along three reference axes (as well as rotations about those three axes). These translations are often called surge , sway , and heave . | https://en.wikipedia.org/wiki/Translation_(geometry) |
In mathematics , a translation of axes in two dimensions is a mapping from an xy - Cartesian coordinate system to an x'y' -Cartesian coordinate system in which the x' axis is parallel to the x axis and k units away, and the y' axis is parallel to the y axis and h units away. This means that the origin O' of the new coordinate system has coordinates ( h , k ) in the original system. The positive x' and y' directions are taken to be the same as the positive x and y directions. A point P has coordinates ( x , y ) with respect to the original system and coordinates ( x' , y' ) with respect to the new system, where x ′ = x − h , y ′ = y − k {\displaystyle x'=x-h,\quad y'=y-k} ( 1 )
or equivalently x = x ′ + h , y = y ′ + k . {\displaystyle x=x'+h,\quad y=y'+k.} ( 2 )
In the new coordinate system, the point P will appear to have been translated in the opposite direction. For example, if the xy -system is translated a distance h to the right and a distance k upward, then P will appear to have been translated a distance h to the left and a distance k downward in the x'y' -system . A translation of axes in more than two dimensions is defined similarly. [ 3 ] A translation of axes is a rigid transformation , but not a linear map . (See Affine transformation .)
Coordinate systems are essential for studying the equations of curves using the methods of analytic geometry . To use the method of coordinate geometry, the axes are placed at a convenient position with respect to the curve under consideration. For example, to study the equations of ellipses and hyperbolas , the foci are usually located on one of the axes and are situated symmetrically with respect to the origin. If the curve (hyperbola, parabola , ellipse, etc.) is not situated conveniently with respect to the axes, the coordinate system should be changed to place the curve at a convenient and familiar location and orientation. The process of making this change is called a transformation of coordinates . [ 4 ]
The solutions to many problems can be simplified by translating the coordinate axes to obtain new axes parallel to the original ones. [ 5 ]
Through a change of coordinates, the equation of a conic section can be put into a standard form , which is usually easier to work with. For the most general equation of the second degree, which takes the form
A x 2 + B x y + C y 2 + D x + E y + F = 0 , {\displaystyle Ax^{2}+Bxy+Cy^{2}+Dx+Ey+F=0,}
it is always possible to perform a rotation of axes in such a way that in the new system the equation takes the form
A x 2 + C y 2 + D x + E y + F = 0 , {\displaystyle Ax^{2}+Cy^{2}+Dx+Ey+F=0,} ( 3 )
that is, eliminating the xy term. [ 6 ] Next, a translation of axes can reduce an equation of the form ( 3 ) to an equation of the same form but with new variables ( x' , y' ) as coordinates, and with D and E both equal to zero (with certain exceptions—for example, parabolas). The principal tool in this process is "completing the square." [ 7 ] In the examples that follow, it is assumed that a rotation of axes has already been performed.
Given the equation
9 x 2 + 25 y 2 + 18 x − 100 y − 116 = 0 , {\displaystyle 9x^{2}+25y^{2}+18x-100y-116=0,} ( 4 )
by using a translation of axes, determine whether the locus of the equation is a parabola, ellipse, or hyperbola. Determine foci (or focus), vertices (or vertex), and eccentricity .
Solution: To complete the square in x and y , write the equation in the form
9 ( x 2 + 2 x ) + 25 ( y 2 − 4 y ) = 116. {\displaystyle 9(x^{2}+2x)+25(y^{2}-4y)=116.}
Complete the squares and obtain
9 ( x + 1 ) 2 + 25 ( y − 2 ) 2 = 225. {\displaystyle 9(x+1)^{2}+25(y-2)^{2}=225.}
Define
x ′ = x + 1 and y ′ = y − 2. {\displaystyle x'=x+1\quad {\text{and}}\quad y'=y-2.}
That is, the translation in equations ( 2 ) is made with h = − 1 , k = 2. {\displaystyle h=-1,k=2.} The equation in the new coordinate system is
9 x ′ 2 + 25 y ′ 2 = 225. {\displaystyle 9x'^{2}+25y'^{2}=225.} ( 5 )
Divide equation ( 5 ) by 225 to obtain
x ′ 2 25 + y ′ 2 9 = 1 , {\displaystyle {\frac {x'^{2}}{25}}+{\frac {y'^{2}}{9}}=1,}
which is recognizable as an ellipse with a = 5 , b = 3 , c 2 = a 2 − b 2 = 16 , c = 4 , e = 4 5 . {\displaystyle a=5,b=3,c^{2}=a^{2}-b^{2}=16,c=4,e={\tfrac {4}{5}}.} In the x'y' -system, we have: center ( 0 , 0 ) {\displaystyle (0,0)} ; vertices ( ± 5 , 0 ) {\displaystyle (\pm 5,0)} ; foci ( ± 4 , 0 ) . {\displaystyle (\pm 4,0).}
In the xy -system, use the relations x = x ′ − 1 , y = y ′ + 2 {\displaystyle x=x'-1,y=y'+2} to obtain: center ( − 1 , 2 ) {\displaystyle (-1,2)} ; vertices ( 4 , 2 ) , ( − 6 , 2 ) {\displaystyle (4,2),(-6,2)} ; foci ( 3 , 2 ) , ( − 5 , 2 ) {\displaystyle (3,2),(-5,2)} ; eccentricity 4 5 . {\displaystyle {\tfrac {4}{5}}.} [ 8 ]
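The worked example can be checked numerically. Assuming the equation of the example is 9x² + 25y² + 18x − 100y − 116 = 0 (the equation consistent with the stated translation h = −1, k = 2 and the standard form x′²/25 + y′²/9 = 1), a short Python check confirms that the vertices lie on the curve and satisfy the two-focus property of an ellipse (distances to the foci summing to 2a = 10):

```python
from math import dist  # Euclidean distance (Python 3.8+)

def f(x, y):
    # left-hand side of the (assumed) original equation
    return 9*x**2 + 25*y**2 + 18*x - 100*y - 116

foci = [(3, 2), (-5, 2)]
vertices = [(4, 2), (-6, 2)]

for v in vertices:
    assert f(*v) == 0                                      # vertex lies on the curve
    assert abs(dist(v, foci[0]) + dist(v, foci[1]) - 10) < 1e-12   # 2a = 10
```

The same check applies to any other point of the locus; only the vertices are tested here because their coordinates are exact integers.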
For an xyz -Cartesian coordinate system in three dimensions, suppose that a second Cartesian coordinate system is introduced, with axes x' , y' and z' so located that the x' axis is parallel to the x axis and h units from it, the y' axis is parallel to the y axis and k units from it, and the z' axis is parallel to the z axis and l units from it. A point P in space will have coordinates in both systems. If its coordinates are ( x , y , z ) in the original system and ( x' , y' , z' ) in the second system, the equations
x = x ′ + h , y = y ′ + k , z = z ′ + l {\displaystyle x=x'+h,\quad y=y'+k,\quad z=z'+l} ( 6 )
hold. [ 9 ] Equations ( 6 ) define a translation of axes in three dimensions where ( h , k , l ) are the xyz -coordinates of the new origin. [ 10 ] A translation of axes in any finite number of dimensions is defined similarly.
In three-space, the most general equation of the second degree in x , y and z has the form
A x 2 + B y 2 + C z 2 + D x y + E x z + F y z + G x + H y + I z + J = 0 , {\displaystyle Ax^{2}+By^{2}+Cz^{2}+Dxy+Exz+Fyz+Gx+Hy+Iz+J=0,}
where the quantities A , B , C , … , J {\displaystyle A,B,C,\ldots ,J} are positive or negative numbers or zero. The points in space satisfying such an equation all lie on a surface . Any second-degree equation which does not reduce to a cylinder, plane, line, or point corresponds to a surface which is called a quadric. [ 11 ]
As in the case of plane analytic geometry, the method of translation of axes may be used to simplify second-degree equations, thereby making evident the nature of certain quadric surfaces. The principal tool in this process is "completing the square." [ 12 ]
Use a translation of coordinates to identify the quadric surface
Solution: Write the equation in the form
Complete the square to obtain
Introduce the translation of coordinates
The equation of the surface takes the form
which is recognizable as the equation of an ellipsoid . [ 13 ] | https://en.wikipedia.org/wiki/Translation_of_axes |
Translation regulation by 5′ transcript leader cis-elements is a process in cellular translation .
Gene expression is tightly controlled at many different stages. Alterations in the translation of mRNA into proteins rapidly modulate the proteome without changing upstream steps such as transcription, pre-mRNA splicing, and nuclear export. [ 1 ] The strict regulation of translation in both space and time is in part governed by cis-regulatory elements located in 5′ mRNA transcript leaders (TLs) and 3′ untranslated regions (UTRs).
Due to their role in translation initiation, mRNA 5′ transcript leaders (TLs) strongly influence protein expression. [ 2 ] [ 3 ] [ 4 ] Eukaryotic translation consists of three stages: initiation, elongation, and termination. Translation is primarily regulated at the initiation stage, where the small ribosomal subunit and initiation factors are recruited to the mRNA and directionally scan along the 5′ TL to select the first “best” start codon at which to begin protein synthesis. [ 5 ] Cap-dependent ribosomal scanning accounts for 95–97% of all translation in eukaryotes under normal conditions. [ 6 ] Therefore, the cis -regulatory elements in TLs greatly influence translation initiation and ultimately protein expression.
The first step in initiation is formation of the 48S pre-initiation complex (PIC). The small ribosomal subunit and various eukaryotic initiation factors are recruited to the mRNA 5′ TL to form the 48S PIC, which scans 5′ to 3′ along the mRNA transcript, inspecting each successive triplet for a functional start codon. [ 7 ] [ 8 ] Translation initiation is most successful at an AUG codon surrounded upstream and downstream by a favorable sequence known as the “ Kozak consensus sequence ” or “Kozak context”. [ 9 ] (See A ) A weak or absent Kozak context surrounding the AUG leads to “leaky” scanning where the start codon is skipped, whereas a strong Kozak context leads to start codon recognition by the 48S PIC and binding of Met-tRNAi in the “closed” state. Recent studies suggest that initiation occurs surprisingly often in eukaryotes at near-cognate codons (NCCs), which differ from AUG by one nucleotide. [ 10 ] [ 11 ] Eukaryotic initiation factors then rearrange the 48S PIC and permit the large subunit to join, forming the complete, translation-competent 80S ribosome. [ 12 ]
Upstream open reading frames (uORFs) in 5′ TLs typically inhibit translation of the downstream main protein-coding region (CDS). [ 13 ] [ 14 ] (See B) Translation suppression of the CDS is attributable to the 5′ to 3′ directional nature of 48S PIC scanning. After successfully translating the uORF, the ribosome dissociates from the mRNA as part of termination before it can reach and translate the CDS. This destabilization of the translational machinery can trigger nonsense-mediated decay of the mRNA transcript. However, in some cases uORFs actually enhance the translation of the downstream CDS. For example, in S. cerevisiae the gene GCN4 has a 5′ TL with multiple uORFs. The uORFs closest to the 5′ cap protect the CDS from the inhibitory activities of the downstream uORFs located closer to the CDS. [ 15 ] In summary, uORFs generally decrease translation of the main ORF, but they are also capable of increasing protein synthesis under certain circumstances.
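The uORF geometry described above can be sketched as a toy scanner: walk the leader 5′→3′, and for each upstream AUG look for an in-frame stop codon that lies before the main start codon. This is a deliberately simplified model (real initiation also depends on Kozak context, structure, and initiation factors), and the sequence and indices below are invented for illustration:

```python
STOPS = {"UAA", "UAG", "UGA"}

def find_uorfs(mrna, cds_start):
    """Return (start, end) pairs for upstream ORFs: an AUG in the
    5' transcript leader followed, in frame, by a stop codon that
    ends at or before cds_start (0-based index of the main AUG)."""
    uorfs = []
    for i in range(cds_start):
        if mrna[i:i+3] != "AUG":
            continue
        for j in range(i + 3, len(mrna) - 2, 3):  # scan in frame for a stop
            if mrna[j:j+3] in STOPS:
                if j + 3 <= cds_start:
                    uorfs.append((i, j + 3))
                break
    return uorfs

# invented leader: one uORF (AUG...UAA) upstream of the main AUG at index 17
leader = "GCAUGGCAUAAGCCACCAUGGGG"
print(find_uorfs(leader, 17))  # → [(2, 11)]
```

A uORF whose stop codon falls downstream of `cds_start` (an overlapping uORF) is deliberately not reported here, though such ORFs also regulate initiation in vivo.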
The three-dimensional structure of the 5′ TL may also impact translation. (See C ) Stem-loops have been demonstrated to both inhibit and enhance translation. Stem-loops can prevent cap binding and efficient 48S PIC scanning. Conversely, downstream stem-loops may increase the probability of translation initiation at start codons with a weak Kozak context, possibly by blocking scanning. [ 16 ] [ 17 ] [ 18 ] Besides stem-loops, other higher-order structures such as G-quadruplexes and pseudoknots also impede eukaryotic translation. [ 19 ] To overcome translation suppression by structures, DEAD-box RNA helicases unwind RNA structures, promoting scanning through the 5′ TL. [ 20 ]
Multiple transcription start sites may be used for the same gene, generating alternative 5′ TLs with varied length and regulatory features. (See D ) This is especially common in organisms with relatively compact genomes such as yeasts. In S. cerevisiae , alternative transcription start sites generate long alternative mRNA TLs with substantially lower translation efficiencies. [ 21 ] Counterintuitively, upstream transcriptional induction of these genes actually silences their expression during meiosis by blocking translation. [ 22 ] [ 23 ] Furthermore, alternative transcription initiation within the CDS may generate protein isoforms with varied functions in S. cerevisiae . [ 24 ] These examples from the model organism S. cerevisiae suggest that mRNA transcripts with alternative 5′ TLs may have a regulatory function in eukaryotes, especially during events requiring proteome remodeling such as meiosis and stress responses.
In mathematics, a translation surface is a surface obtained from identifying the sides of a polygon in the Euclidean plane by translations. An equivalent definition is a Riemann surface together with a holomorphic 1-form .
These surfaces arise in dynamical systems where they can be used to model billiards , and in Teichmüller theory . A particularly interesting subclass is that of Veech surfaces (named after William A. Veech ) which are the most symmetric ones.
A translation surface is the space obtained by identifying pairwise by translations the sides of a collection of plane polygons.
Here is a more formal definition. Let P 1 , … , P m {\displaystyle P_{1},\ldots ,P_{m}} be a collection of (not necessarily convex) polygons in the Euclidean plane and suppose that for every side s i {\displaystyle s_{i}} of any P k {\displaystyle P_{k}} there is a side s j {\displaystyle s_{j}} of some P l {\displaystyle P_{l}} with j ≠ i {\displaystyle j\not =i} and s j = s i + v → i {\displaystyle s_{j}=s_{i}+{\vec {v}}_{i}} for some nonzero vector v → i {\displaystyle {\vec {v}}_{i}} (and so that v → j = − v → i {\displaystyle {\vec {v}}_{j}=-{\vec {v}}_{i}} ). Consider the space obtained by identifying all s i {\displaystyle s_{i}} with their corresponding s j {\displaystyle s_{j}} through the map x ↦ x + v → i {\displaystyle x\mapsto x+{\vec {v}}_{i}} .
The canonical way to construct such a surface is as follows: start with vectors w → 1 , … , w → n {\displaystyle {\vec {w}}_{1},\ldots ,{\vec {w}}_{n}} and a permutation σ {\displaystyle \sigma } on { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} , and form the broken lines L = x , x + w → 1 , … , x + w → 1 + ⋯ + w → n {\displaystyle L=x,x+{\vec {w}}_{1},\ldots ,x+{\vec {w}}_{1}+\cdots +{\vec {w}}_{n}} and L ′ = x , x + w → σ ( 1 ) , … , x + w → σ ( 1 ) + ⋯ + w → σ ( n ) {\displaystyle L'=x,x+{\vec {w}}_{\sigma (1)},\ldots ,x+{\vec {w}}_{\sigma (1)}+\cdots +{\vec {w}}_{\sigma (n)}} starting at an arbitrarily chosen point. In the case where these two lines form a polygon (i.e. they do not intersect outside of their endpoints) there is a natural side-pairing.
The quotient space is a closed surface. It has a flat metric outside the set Σ {\displaystyle \Sigma } of images of the vertices. At a point in Σ {\displaystyle \Sigma } the sum of the angles of the polygons around the vertices which map to it is a positive multiple of 2 π {\displaystyle 2\pi } , and the metric is singular unless the angle is exactly 2 π {\displaystyle 2\pi } .
Let S {\displaystyle S} be a translation surface as defined above and Σ {\displaystyle \Sigma } the set of singular points. Identifying the Euclidean plane with the complex plane one gets coordinate charts on S ∖ Σ {\displaystyle S\setminus \Sigma } with values in C {\displaystyle \mathbb {C} } . Moreover, the changes of charts are holomorphic maps, more precisely maps of the form z ↦ z + w {\displaystyle z\mapsto z+w} for some w ∈ C {\displaystyle w\in \mathbb {C} } . This gives S ∖ Σ {\displaystyle S\setminus \Sigma } the structure of a Riemann surface, which extends to the entire surface S {\displaystyle S} by Riemann's theorem on removable singularities . In addition, the differential d z {\displaystyle dz} , where z : U → C {\displaystyle z:U\to \mathbb {C} } is any chart as above, does not depend on the chart. Thus these differentials defined on chart domains glue together to give a well-defined holomorphic 1-form ω {\displaystyle \omega } on S {\displaystyle S} . The vertices of the polygon where the cone angles are not equal to 2 π {\displaystyle 2\pi } are zeroes of ω {\displaystyle \omega } (a cone angle of 2 k π {\displaystyle 2k\pi } corresponds to a zero of order ( k − 1 ) {\displaystyle (k-1)} ).
In the other direction, given a pair ( X , ω ) {\displaystyle (X,\omega )} where X {\displaystyle X} is a compact Riemann surface and ω {\displaystyle \omega } a holomorphic 1-form one can construct a polygon by using the complex numbers ∫ γ j ω {\textstyle \int _{\gamma _{j}}\omega } where γ j {\displaystyle \gamma _{j}} are disjoint paths between the zeroes of ω {\displaystyle \omega } which form an integral basis for the relative cohomology.
The simplest example of a translation surface is obtained by gluing the opposite sides of a parallelogram. It is a flat torus with no singularities.
If P {\displaystyle P} is a regular 4 g {\displaystyle 4g} -gon then the translation surface obtained by gluing opposite sides is of genus g {\displaystyle g} with a single singular point, with angle ( 2 g − 1 ) 2 π {\displaystyle (2g-1)2\pi } .
If P {\displaystyle P} is obtained by putting side to side a collection of copies of the unit square then any translation surface obtained from P {\displaystyle P} is called a square-tiled surface . The map from the surface to the flat torus obtained by identifying all squares is a branched covering whose branch points are the singularities (the cone angle at a singularity is proportional to the degree of branching).
Suppose that the surface X {\displaystyle X} is a closed Riemann surface of genus g {\displaystyle g} and that ω {\displaystyle \omega } is a nonzero holomorphic 1-form on X {\displaystyle X} , with zeroes of order d 1 , … , d m {\displaystyle d_{1},\ldots ,d_{m}} . Then the Riemann–Roch theorem implies that
d 1 + ⋯ + d m = 2 g − 2. {\displaystyle d_{1}+\cdots +d_{m}=2g-2.}
If the translation surface ( X , ω ) {\displaystyle (X,\omega )} is represented by a polygon P {\displaystyle P} then triangulating it and summing angles over all vertices allows one to recover the formula above (using the relation between cone angles and order of zeroes), in the same manner as in the proof of the Gauss–Bonnet formula for hyperbolic surfaces or the proof of Euler's formula from Girard's theorem .
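The bookkeeping between cone angles and zero orders is easy to automate. The sketch below (plain Python, names illustrative) recovers the genus from the cone angles, using that a cone angle 2kπ corresponds to a zero of order k − 1 and that the orders sum to 2g − 2; it reproduces the 4g-gon example above, whose single cone point has angle (2g − 1)·2π:

```python
def genus_from_cone_angles(multiples):
    """multiples: cone angles of the singular points, each given as the
    integer k with angle = 2*pi*k.  The orders of the zeroes are k - 1,
    and for a translation surface they sum to 2g - 2."""
    total = sum(k - 1 for k in multiples)
    assert total % 2 == 0, "zero orders must sum to an even number"
    return total // 2 + 1

# surface from a regular 4g-gon: single cone point of angle (2g - 1)*2*pi
for g in range(2, 6):
    assert genus_from_cone_angles([2*g - 1]) == g

# flat torus: no singular points, genus 1
assert genus_from_cone_angles([]) == 1
```

For instance, two cone points of angle 4π each (two simple zeroes) also give genus 2, corresponding to the stratum H(1, 1).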
If ( X , ω ) {\displaystyle (X,\omega )} is a translation surface there is a natural measured foliation on X {\displaystyle X} . If it is obtained from a polygon it is just the image of vertical lines, and the measure of an arc is just the Euclidean length of the horizontal segment homotopic to the arc. The foliation is also obtained by the level lines of the imaginary part of a (local) primitive for ω {\displaystyle \omega } and the measure is obtained by integrating the real part.
Let H {\displaystyle {\mathcal {H}}} be the set of translation surfaces of genus g {\displaystyle g} (where two such ( X , ω ) , ( X ′ , ω ′ ) {\displaystyle (X,\omega ),(X',\omega ')} are considered the same if there exists a holomorphic diffeomorphism ϕ : X → X ′ {\displaystyle \phi :X\to X'} such that ϕ ∗ ω ′ = ω {\displaystyle \phi ^{*}\omega '=\omega } ). Let M g {\displaystyle {\mathcal {M}}_{g}} be the moduli space of Riemann surfaces of genus g {\displaystyle g} ; there is a natural map H → M g {\displaystyle {\mathcal {H}}\to {\mathcal {M}}_{g}} mapping a translation surface to the underlying Riemann surface. This turns H {\displaystyle {\mathcal {H}}} into a locally trivial fiber bundle over the moduli space.
To a compact translation surface ( X , ω ) {\displaystyle (X,\omega )} there is associated the data ( k 1 , … , k m ) {\displaystyle (k_{1},\ldots ,k_{m})} where k 1 ≤ k 2 ≤ ⋯ {\displaystyle k_{1}\leq k_{2}\leq \cdots } are the orders of the zeroes of ω {\displaystyle \omega } . If α = ( k 1 , … , k m ) {\displaystyle \alpha =(k_{1},\ldots ,k_{m})} is any integer partition of 2 g − 2 {\displaystyle 2g-2} then the stratum H ( α ) {\displaystyle {\mathcal {H}}(\alpha )} is the subset of H {\displaystyle {\mathcal {H}}} of translation surfaces which have a holomorphic form whose zeroes match the partition.
The stratum H ( α ) {\displaystyle {\mathcal {H}}(\alpha )} is naturally a complex orbifold of complex dimension 2 g + m − 1 {\displaystyle 2g+m-1} (note that H ( 0 ) {\displaystyle {\mathcal {H}}(0)} is the moduli space of tori, which is well-known to be an orbifold; in higher genus, the failure to be a manifold is even more dramatic). Local coordinates are given by the period map
( X , ω ) ↦ ( ∫ γ 1 ω , … , ∫ γ n ω ) ∈ C n {\displaystyle (X,\omega )\mapsto \left(\int _{\gamma _{1}}\omega ,\ldots ,\int _{\gamma _{n}}\omega \right)\in \mathbb {C} ^{n}}
where n = dim ( H 1 ( S , { x 1 , … , x m } ) ) = 2 g + m − 1 {\displaystyle n=\dim(H_{1}(S,\{x_{1},\ldots ,x_{m}\}))=2g+m-1} and γ 1 , … , γ n {\displaystyle \gamma _{1},\ldots ,\gamma _{n}} is a basis of this relative homology space.
The stratum H ( α ) {\displaystyle {\mathcal {H}}(\alpha )} admits a C ∗ {\displaystyle {\mathbb {C} }^{*}} -action and thus a real and complex projectivization H ( α ) → H 1 ( α ) → H 2 ( α ) {\displaystyle {{\mathcal {H}}(\alpha )}\to {\mathcal {H}}_{1}(\alpha )\to {\mathcal {H}}_{2}(\alpha )} . The real projectivization admits a natural section H 1 ( α ) → H ( α ) {\displaystyle {\mathcal {H}}_{1}(\alpha )\to {\mathcal {H}}(\alpha )} if we define it as the space of translation surfaces of area 1.
The existence of the above period coordinates allows one to endow the stratum H ( α ) {\displaystyle {\mathcal {H}}(\alpha )} with an integral affine structure and thus a natural volume form ν {\displaystyle \nu } . We also get a volume form ν 1 ( α ) {\displaystyle \nu _{1}(\alpha )} on H 1 ( α ) {\displaystyle {\mathcal {H}}_{1}(\alpha )} by disintegration of ν {\displaystyle \nu } . The Masur-Veech volume V o l ( α ) {\displaystyle Vol(\alpha )} is the total volume of H 1 ( α ) {\displaystyle {\mathcal {H}}_{1}(\alpha )} for ν 1 ( α ) {\displaystyle \nu _{1}(\alpha )} . This volume was proved to be finite independently by William A. Veech [ 1 ] and Howard Masur . [ 2 ]
In the 1990s Maxim Kontsevich and Anton Zorich evaluated these volumes numerically by counting the lattice points of H ( α ) {\displaystyle {\mathcal {H}}(\alpha )} . They observed that V o l ( α ) {\displaystyle Vol(\alpha )} should be of the form π 2 g {\displaystyle \pi ^{2g}} times a rational number. From this observation they expected the existence of a formula expressing the volumes in terms of intersection numbers on moduli spaces of curves.
Alex Eskin and Andrei Okounkov gave the first algorithm to compute these volumes. They showed that the generating series of these numbers are q-expansions of computable quasi-modular forms. Using this algorithm they could confirm the numerical observation of Kontsevich and Zorich. [ 3 ]
More recently Chen, Möller, Sauvaget, and Don Zagier showed that the volumes can be computed as intersection numbers on an algebraic compactification of H 2 ( α ) {\displaystyle {\mathcal {H}}_{2}(\alpha )} . Extending this formula to strata of half-translation surfaces remains an open problem. [ 4 ]
If ( X , ω ) {\displaystyle (X,\omega )} is a translation surface obtained by identifying the faces of a polygon P {\displaystyle P} and g ∈ S L 2 ( R ) {\displaystyle g\in \mathrm {SL} _{2}(\mathbb {R} )} then the translation surface g ⋅ ( X , ω ) {\displaystyle g\cdot (X,\omega )} is that associated to the polygon g ( P ) {\displaystyle g(P)} . This defines a continuous action of S L 2 ( R ) {\displaystyle \mathrm {SL} _{2}(\mathbb {R} )} on the moduli space H {\displaystyle {\mathcal {H}}} which preserves the strata H ( α ) {\displaystyle {\mathcal {H}}(\alpha )} . This action descends to an action on H 1 ( α ) {\displaystyle {\mathcal {H}}_{1}(\alpha )} that is ergodic with respect to ν 1 {\displaystyle \nu _{1}} .
A half-translation surface is defined similarly to a translation surface but allowing the gluing maps to have a nontrivial linear part which is a half turn. Formally, a half-translation surface is defined geometrically by taking a collection of polygons in the Euclidean plane and identifying faces by maps of the form z ↦ ± z + w {\displaystyle z\mapsto \pm z+w} (a "half-translation"). Note that a face can be identified with itself. The geometric structure obtained in this way is a flat metric outside of a finite number of singular points with cone angles positive multiples of π {\displaystyle \pi } .
As in the case of translation surfaces there is an analytic interpretation: a half-translation surface can be interpreted as a pair ( X , ϕ ) {\displaystyle (X,\phi )} where X {\displaystyle X} is a Riemann surface and ϕ {\displaystyle \phi } a quadratic differential on X {\displaystyle X} . To pass from the geometric picture to the analytic picture one simply takes the quadratic differential defined locally by ( d z ) 2 {\displaystyle (dz)^{2}} (which is invariant under half-translations), and for the other direction one takes the Riemannian metric induced by ϕ {\displaystyle \phi } , which is smooth and flat outside of the zeros of ϕ {\displaystyle \phi } .
If X {\displaystyle X} is a Riemann surface then the vector space of quadratic differentials on X {\displaystyle X} is naturally identified with the tangent space to Teichmüller space at any point above X {\displaystyle X} . This can be proven by analytic means using the Bers embedding . Half-translation surfaces can be used to give a more geometric interpretation of this: if ( X , g ) , ( Y , h ) {\displaystyle (X,g),(Y,h)} are two points in Teichmüller space then by Teichmüller's mapping theorem there exist two polygons P , Q {\displaystyle P,Q} whose faces can be identified by half-translations to give flat surfaces with underlying Riemann surfaces isomorphic to X , Y {\displaystyle X,Y} respectively, and an affine map f {\displaystyle f} of the plane sending P {\displaystyle P} to Q {\displaystyle Q} which has the smallest distortion among the quasiconformal mappings in its isotopy class, and which is isotopic to h ∘ g − 1 {\displaystyle h\circ g^{-1}} .
Everything is determined uniquely up to scaling if we ask that f {\displaystyle f} be of the form f s {\displaystyle f_{s}} , where f t : ( x , y ) ↦ ( e t x , e − t y ) {\displaystyle f_{t}:(x,y)\mapsto (e^{t}x,e^{-t}y)} , for some s > 0 {\displaystyle s>0} ; we denote by X t {\displaystyle X_{t}} the Riemann surface obtained from the polygon f t ( P ) {\displaystyle f_{t}(P)} . Now the path t ↦ ( X t , f t ∘ g ) {\displaystyle t\mapsto (X_{t},f_{t}\circ g)} in Teichmüller space joins ( X , g ) {\displaystyle (X,g)} to ( Y , h ) {\displaystyle (Y,h)} , and differentiating it at t = 0 {\displaystyle t=0} gives a vector in the tangent space; since ( Y , h ) {\displaystyle (Y,h)} was arbitrary we obtain a bijection.
In fact, the paths used in this construction are Teichmüller geodesics. An interesting fact is that while the geodesic ray associated to a flat surface corresponds to a measured foliation, and thus the directions in tangent space are identified with the Thurston boundary , the Teichmüller geodesic ray associated to a flat surface does not always converge to the corresponding point on the boundary, [ 5 ] though almost all such rays do so. [ 6 ]
If ( X , ω ) {\displaystyle (X,\omega )} is a translation surface its Veech group is the Fuchsian group which is the image in P S L 2 ( R ) {\displaystyle \mathrm {PSL} _{2}(\mathbb {R} )} of the subgroup S L ( X , ω ) ⊂ S L 2 ( R ) {\displaystyle \mathrm {SL} (X,\omega )\subset \mathrm {SL} _{2}(\mathbb {R} )} of transformations g {\displaystyle g} such that g ⋅ ( X , ω ) {\displaystyle g\cdot (X,\omega )} is isomorphic (as a translation surface) to ( X , ω ) {\displaystyle (X,\omega )} . Equivalently, S L ( X , ω ) {\displaystyle \mathrm {SL} (X,\omega )} is the group of derivatives of affine diffeomorphisms ( X , ω ) → ( X , ω ) {\displaystyle (X,\omega )\to (X,\omega )} (where affine is defined locally outside the singularities, with respect to the affine structure induced by the translation structure). Veech groups have the following properties: [ 7 ]
Veech groups can be either finitely generated or not. [ 8 ]
A Veech surface is by definition a translation surface whose Veech group is a lattice in P S L 2 ( R ) {\displaystyle \mathrm {PSL} _{2}(\mathbb {R} )} , equivalently its action on the hyperbolic plane admits a fundamental domain of finite volume. Since it is not cocompact it must then contain parabolic elements.
Examples of Veech surfaces are the square-tiled surfaces, whose Veech groups are commensurable to the modular group P S L 2 ( Z ) {\displaystyle \mathrm {PSL} _{2}(\mathbb {Z} )} . [ 9 ] [ 10 ] The square can be replaced by any parallelogram (the translation surfaces obtained are exactly those obtained as ramified covers of a flat torus). In fact the Veech group is arithmetic (which amounts to it being commensurable to the modular group) if and only if the surface is tiled by parallelograms. [ 10 ]
There exist Veech surfaces whose Veech group is not arithmetic, for example the surface obtained from two regular pentagons glued along an edge: in this case the Veech group is a non-arithmetic Hecke triangle group. [ 9 ] On the other hand, there are still some arithmetic constraints on the Veech group of a Veech surface: for example its trace field is a number field [ 10 ] that is totally real . [ 11 ]
A geodesic in a translation surface (or a half-translation surface) is a parametrised curve which is, outside of the singular points, locally the image of a straight line in Euclidean space parametrised by arclength. If a geodesic arrives at a singularity it is required to stop there. Thus a maximal geodesic is a curve defined on a closed interval, which is the whole real line if it does not meet any singular point. A geodesic is closed or periodic if its image is compact, in which case it is either a circle if it does not meet any singularity, or an arc between two (possibly equal) singularities. In the latter case the geodesic is called a saddle connection .
If ( X , ω ) {\displaystyle (X,\omega )} is a translation surface and θ ∈ R / 2 π Z {\displaystyle \theta \in \mathbb {R} /2\pi \mathbb {Z} } (or θ ∈ R / π Z {\displaystyle \theta \in \mathbb {R} /\pi \mathbb {Z} } in the case of a half-translation surface) is a direction, then the geodesics with direction θ {\displaystyle \theta } are well-defined on X {\displaystyle X} : they are those curves c {\displaystyle c} which satisfy ω ( c ⋅ ) = e i θ {\displaystyle \omega ({\overset {\cdot }{c}})=e^{i\theta }} (or ϕ ( c ⋅ ) = e i θ {\displaystyle \phi ({\overset {\cdot }{c}})=e^{i\theta }} in the case of a half-translation surface ( X , ϕ ) {\displaystyle (X,\phi )} ). The geodesic flow on ( X , ω ) {\displaystyle (X,\omega )} with direction θ {\displaystyle \theta } is the flow ϕ t {\displaystyle \phi _{t}} on X {\displaystyle X} where t ↦ ϕ t ( p ) {\displaystyle t\mapsto \phi _{t}(p)} is the geodesic starting at p {\displaystyle p} with direction θ {\displaystyle \theta } if p {\displaystyle p} is not singular.
On a flat torus the geodesic flow in a given direction has the property that it is either periodic or ergodic . In general this is not true: there may be directions in which the flow is minimal (meaning every orbit is dense in the surface) but not ergodic. [ 12 ] On the other hand, a compact translation surface does retain from the simplest case of the flat torus the property that the flow is ergodic in almost every direction. [ 13 ]
Another natural question is to establish asymptotic estimates for the number of closed geodesics or saddle connections of a given length. On a flat torus T {\displaystyle T} there are no saddle connections and the number of closed geodesics of length ≤ L {\displaystyle \leq L} is equivalent to L 2 / volume ( T ) {\displaystyle L^{2}/\operatorname {volume} (T)} . In general one can only obtain bounds: if ( X , ω ) {\displaystyle (X,\omega )} is a compact translation surface of genus g {\displaystyle g} then there exist constants c 1 , c 2 {\displaystyle c_{1},c_{2}} (depending only on the genus) such that both the number N c g ( L ) {\displaystyle N_{\mathrm {cg} }(L)} of closed geodesics and the number N s c ( L ) {\displaystyle N_{\mathrm {sc} }(L)} of saddle connections of length ≤ L {\displaystyle \leq L} satisfy
c 1 L 2 ≤ N ( L ) ≤ c 2 L 2 . {\displaystyle c_{1}L^{2}\leq N(L)\leq c_{2}L^{2}.}
Restricting to almost every surface, better estimates are possible: given a genus g {\displaystyle g} , a partition α {\displaystyle \alpha } of 2 g − 2 {\displaystyle 2g-2} and a connected component C {\displaystyle {\mathcal {C}}} of the stratum H ( α ) {\displaystyle {\mathcal {H}}(\alpha )} , there exist constants c c g , c s c {\displaystyle c_{\mathrm {cg} },c_{\mathrm {sc} }} such that for almost every ( X , ω ) ∈ C {\displaystyle (X,\omega )\in {\mathcal {C}}} the asymptotic equivalences hold: [ 13 ]
N c g ( L ) ∼ c c g L 2 , N s c ( L ) ∼ c s c L 2 . {\displaystyle N_{\mathrm {cg} }(L)\sim c_{\mathrm {cg} }L^{2},\quad N_{\mathrm {sc} }(L)\sim c_{\mathrm {sc} }L^{2}.}
The constants c c g , c s c {\displaystyle c_{\mathrm {cg} },c_{\mathrm {sc} }} are called Siegel–Veech constants . Using the ergodicity of the S L 2 ( R ) {\displaystyle \mathrm {SL} _{2}(\mathbb {R} )} -action on H ( α ) {\displaystyle {\mathcal {H}}(\alpha )} , it was shown that these constants can explicitly be computed as ratios of certain Masur-Veech volumes. [ 14 ]
The geodesic flow on a Veech surface is much better behaved than in general. This is expressed via the following result, called the Veech dichotomy : [ 15 ] for every direction θ {\displaystyle \theta } , either the geodesic flow in direction θ {\displaystyle \theta } is uniquely ergodic , or every geodesic in direction θ {\displaystyle \theta } is either closed or a saddle connection (in which case the surface decomposes in that direction into finitely many cylinders of periodic geodesics).
If P 0 {\displaystyle P_{0}} is a polygon in the Euclidean plane and θ ∈ R / 2 π Z {\displaystyle \theta \in \mathbb {R} /2\pi \mathbb {Z} } a direction there is a continuous dynamical system called a billiard . The trajectory of a point inside the polygon is defined as follows: as long as it does not touch the boundary it proceeds in a straight line at unit speed; when it touches the interior of an edge it bounces back (i.e. its direction changes with an orthogonal reflection in the perpendicular of the edge), and when it touches a vertex it stops.
This dynamical system is equivalent to the geodesic flow on a flat surface: just double the polygon along the edges and put a flat metric everywhere but at the vertices, which become singular points with cone angle twice the angle of the polygon at the corresponding vertex. This surface is not a translation surface or a half-translation surface, but in some cases it is related to one. Namely, if all angles of the polygon P 0 {\displaystyle P_{0}} are rational multiples of π {\displaystyle \pi } there is a ramified cover of this surface which is a translation surface, which can be constructed from a union of copies of P 0 {\displaystyle P_{0}} . The dynamics of the billiard flow can then be studied through the geodesic flow on the translation surface.
For example, the billiard in a square is related in this way to the billiard on the flat torus constructed from four copies of the square; the billiard in an equilateral triangle gives rise to the flat torus constructed from a hexagon. The billiard in a "L" shape constructed from squares is related to the geodesic flow on a square-tiled surface; the billiard in the triangle with angles π / 5 , π / 5 , 3 π / 5 {\displaystyle \pi /5,\pi /5,3\pi /5} is related to the Veech surface constructed from the two regular pentagons described above.
Let ( X , ω ) {\displaystyle (X,\omega )} be a translation surface and θ {\displaystyle \theta } a direction, and let ϕ t {\displaystyle \phi _{t}} be the geodesic flow on ( X , ω ) {\displaystyle (X,\omega )} with direction θ {\displaystyle \theta } . Let I {\displaystyle I} be a geodesic segment in the direction orthogonal to θ {\displaystyle \theta } , and define the first-recurrence, or Poincaré, map σ : I → I {\displaystyle \sigma :I\to I} as follows: σ ( p ) {\displaystyle \sigma (p)} is equal to ϕ t ( p ) {\displaystyle \phi _{t}(p)} for the smallest t > 0 {\displaystyle t>0} with ϕ t ( p ) ∈ I {\displaystyle \phi _{t}(p)\in I} (so that ϕ s ( p ) ∉ I {\displaystyle \phi _{s}(p)\not \in I} for 0 < s < t {\displaystyle 0<s<t} ). Then this map is an interval exchange transformation and it can be used to study the dynamics of the geodesic flow. [ 16 ]
Translational Behavioral Medicine is a quarterly peer-reviewed medical journal covering behavioral medicine . It is one of two official journals of the Society of Behavioral Medicine . The journal was launched in 2011 by founding editor-in-chief Bonnie Spring ( Northwestern University Feinberg School of Medicine ) with a team of field editors (Sherry Pagoto, Rodger Kessler, Brian Oldenburg, and Frank Keefe). By 2016, Translational Behavioral Medicine had been indexed in PubMed, MEDLINE, and Thomson Reuters, and earned its first impact factor. The journal was originally published by Springer Science+Business Media until January 1, 2018. Since then, it has been published by Oxford University Press . [ 1 ] The current editor-in-chief is Cheryl L. Knott ( University of Maryland ). According to the Journal Citation Reports , the journal has a 2021 impact factor of 3.626. [ 2 ]
| https://en.wikipedia.org/wiki/Translational_Behavioral_Medicine |
The Translational Centre for Regenerative Medicine ( TRM ) was a central scientific institution of the University of Leipzig . It focussed on the development of diagnostic and therapeutic concepts in the field of regenerative medicine and their implementation into a clinical setting.
The TRM Leipzig was established in October 2006 with funds from the Federal Ministry of Education and Research , the Free State of Saxony and the University of Leipzig . It was part of the Life Sciences Network Leipzig and one of the initiators of the Regenerative Medicine Initiative Germany (RMIG).
There is no information available for the TRM after 2020.
The TRM Leipzig aimed to accelerate the translation of laboratory research in therapeutics and diagnostics into the clinic. The centre introduced an organisational process to ensure the effective implementation of therapy-oriented "gateway" research. The foundation of this concept was an award system built around three main gates; passing the gates preceded the conceptual, preclinical, and clinical working phases that each new diagnostic or therapeutic concept had to go through. This enabled the TRM Leipzig to ensure effective translation together with comprehensive support and coordination of research projects.
Professor Frank Emmrich led the institute from 2006 to 2015. [ 1 ] The scientific work of the TRM Leipzig was guided by two boards: the Executive Board provided the strategic direction of research, while the Internal Advisory Board, which included senior scientists, researchers, and entrepreneurs, provided expertise and scientific support. The TRM Leipzig promoted application-oriented and interdisciplinary research projects in four areas:
The research areas of the TRM Leipzig were supported by three core units:
The TRM Leipzig selected and funded investigator-initiated translational awards that supported young researchers pursuing their own therapy-oriented concepts and extending their innovation potential. Awards could be requested by individual researchers, scientific groups, or tandem research teams consisting of clinicians and researchers, and applications could be submitted at any time. | https://en.wikipedia.org/wiki/Translational_Centre_for_Regenerative_Medicine |
Translational Health Science and Technology Institute (THSTI) is an institute of the Biotechnology Research and Innovation Council (BRIC), Department of Biotechnology , Ministry of Science and Technology , Government of India . It was set up in 2009 and is located in NCR Biotech Science Cluster, Faridabad . Envisioned by former secretary of DBT, Dr. M. K. Bhan, the institute was created to enable a faster transition of lab research to market. Prof. Ganesan Karthikeyan is the Executive Director of THSTI.
THSTI has developed capabilities in indigenous vaccines, monoclonal antibodies, in vitro diagnostic kits, biotherapeutics, and drug discovery, and provides an environment for clinical research that supports advances in healthcare.
THSTI fosters a collaborative research environment, bringing together physicians, biologists, chemists, mathematicians, and other scientists to translate innovative concepts into tangible healthcare products.
THSTI operates a network of specialized research centres addressing various healthcare areas. These centers include:
• Centre for Maternal and Child Health
• Centre for Virus research, Therapeutics and Vaccines
• Centre for Tuberculosis Research
• Centre for Microbial Research
• Centre for Immunobiology and Immunotherapy
• Computational and Mathematical Biology Centre
• Centre for Bio-design and Diagnostics
• Centre for Drug Discovery
• Clinical Development Services Agency
Augmenting these centres are the facilities of THSTI: a Bioassay Laboratory, Biorepository, Biosafety Level-3 Laboratory, Data Management Centre, Immunology Core Laboratory, Multi-omics Facility, Experimental Animal Facility, Vaccine Design and Development Facility, and School of Innovation in Biodesign. These serve the research programs of THSTI, the National Capital Region Biotech Science Cluster, and other academic and industrial partners.
Committed to capacity building in the healthcare research sector, THSTI offers educational programs like the Master of Science in Clinical Research and PhD programs.
THSTI has more than 700 scientific publications in reputed scientific journals. It has also filed more than 100 patent applications in India and abroad.
| https://en.wikipedia.org/wiki/Translational_Health_Science_and_Technology_Institute |
Translational Research: The Journal of Laboratory and Clinical Medicine is a monthly peer-reviewed medical journal covering translational research . It was established in 1915 as The Journal of Laboratory and Clinical Medicine and obtained its current title in 2006. Jeffrey Laurence ( Weill Cornell Medical College ) has been editor-in-chief since 2006. [ 1 ] He was preceded by Dale Hammerschmidt. [ 1 ] It is the official journal of the Central Society for Clinical and Translational Research . It is published by Mosby .
The journal is abstracted and indexed in:
According to the Journal Citation Reports , the journal has a 2014 impact factor of 5.03, ranking it second out of 30 journals in the category "Medical Laboratory Technology", [ 6 ] 17th out of 153 journals in the category "Medicine, General & Internal", [ 7 ] and 17th out of 123 journals in the category "Medicine, Research & Experimental". [ 8 ] | https://en.wikipedia.org/wiki/Translational_Research_(journal) |
Translational drift , also known as melty brain or tornado drive , is a form of locomotion notably found in certain combat robots . [ 1 ]
The principle applies to spinning robots, in which the driving wheels are normally powered for the whole revolution: rotational energy builds up and is stored for destructive effect but, given perfect symmetry, produces no net translational acceleration.
The drive works by modulating the power to the wheel or wheels that spin the robot. A net application of force in one direction results in acceleration in the plane – it cannot really be characterised as "forward", "backward" and so forth, as the whole robot is spinning. In a standard configuration, an accelerometer is used to determine the speed of rotation, and a light-emitting diode is turned on once per revolution to give the operator a nominal forward-direction indicator. The internal controls implement the commands received from the remote control by modulating the drive to the wheels, typically turning it off for part of each revolution to move in a specific direction.
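The control scheme just described can be sketched in a few lines of Python. The following is a minimal illustration, not the firmware of any real robot: it estimates spin rate from a single accelerometer mounted at a known radius (a = ω²r), integrates a heading angle, flashes a heading LED once per revolution, and throttles the motor on the half-revolution opposite the commanded direction. All class, method, and parameter names are hypothetical; a real controller would also need sensor filtering and motor-driver details.

```python
import math

class MeltyBrainController:
    """Illustrative translational-drift ("melty brain") control loop."""

    def __init__(self, accel_radius_m):
        self.r = accel_radius_m   # accelerometer distance from the spin axis
        self.heading = 0.0        # estimated heading angle, radians

    def spin_rate(self, centripetal_accel):
        # a = omega^2 * r  =>  omega = sqrt(a / r)
        return math.sqrt(centripetal_accel / self.r)

    def update(self, centripetal_accel, dt, command_angle, command_strength):
        """One control tick: integrate heading, set LED and motor power."""
        omega = self.spin_rate(centripetal_accel)
        self.heading = (self.heading + omega * dt) % (2 * math.pi)

        # Flash the LED on a narrow arc once per revolution so the
        # operator sees a nominal "forward" direction.
        led_on = self.heading < math.radians(20)

        # Reduce motor power on the half-revolution opposite the commanded
        # direction; the resulting asymmetric thrust translates the robot.
        offset = (self.heading - command_angle) % (2 * math.pi)
        if math.pi / 2 < offset < 3 * math.pi / 2:
            power = 1.0 - command_strength   # back half: throttle down
        else:
            power = 1.0                      # front half: full power
        return led_on, power
```

With no translation commanded (command_strength of zero) the motor stays at full power all revolution and the robot simply stores rotational energy, matching the behaviour described above.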
The benefits of using translational drift include less weight needing to be allocated to a weapon due to it being part of the drive system. Disadvantages include the complexity of design, cost, and reliance on the drive system.
In the past, such a machine would be classified as a "sit-and-spin" robot, and would depend on the opponent engaging it to cause damage. As this was deemed less aggressive (aggression being a common judging criterion) than what could be done by robots armed with a spinning shell mounted on top of their drive, it waned in popularity in most competitions.
The first robot to attempt this technology was Blade Runner, a middleweight robot built by Ilya Polyakov for the first five seasons of Comedy Central's Battlebots, although the technology never worked as planned. Herr Gepoünden, a lightweight two-wheel-drive hammer robot, implemented the design in its final season of Battlebots. The first symmetrical robot to use the technology successfully – similar in design to a contemporary full-body spinner – was CycloneBot, which competed at Steel Conflict 4. [ 2 ] The most successful heavyweight competitor, Nuts, relied entirely on translational drift for its weaponry en route to its third-place finish in the 10th series of Robot Wars . [ 3 ]
Open Melt is an open source implementation of melty brain, the code being licensed under Creative Commons Attribution-Noncommercial-Share Alike licence. [ 4 ]
Different rules exist for each competition, some of which allow robots that use translational drift to compete. In Battlebots , the use of translational drift does not count towards the active weapon requirement for the primary weapon, as translational drift relies on the entire robot's movement. [ 5 ] Conversely, in Robot Wars there is no such prohibition against using translational drift as a primary, active weapon. [ 6 ]
| https://en.wikipedia.org/wiki/Translational_drift |
In cell biology , translational efficiency or translation efficiency is the rate of mRNA translation into proteins within cells .
It has been measured in protein per mRNA per hour. [ 1 ] Several RNA elements within mRNAs have been shown to affect the rate, including miRNA and protein binding sites. RNA structure may also affect translational efficiency by altering protein or microRNA binding. [ 2 ]
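To make the units above concrete, here is a minimal sketch of the calculation; the function name and the numbers are made up for illustration and do not come from any measurement cited here.

```python
def translational_efficiency(proteins_made, mrna_count, hours):
    """Translational efficiency in protein per mRNA per hour.

    proteins_made: protein molecules synthesized over the interval
    mrna_count:    average number of mRNA copies present
    hours:         length of the measurement interval
    """
    return proteins_made / (mrna_count * hours)

# Hypothetical example: 6,000 protein molecules from 10 mRNAs in 2 hours.
rate = translational_efficiency(6000, 10, 2)  # 300 protein/mRNA/hour
```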
| https://en.wikipedia.org/wiki/Translational_efficiency |
Translational glycobiology or applied glycobiology is the branch of glycobiology and glycochemistry that focuses on developing new pharmaceuticals through glycomics and glycoengineering . [ 1 ] Although research in this field presents many difficulties, translational glycobiology presents applications with therapeutic glycoconjugates , with treating various bone diseases , and developing therapeutic cancer vaccines and other targeted therapies . [ 2 ] [ 3 ] Some mechanisms of action include using the glycan for drug targeting, engineering protein glycosylation for better efficacy, and glycans as drugs themselves.
Glycans , or polysaccharides , are instrumental in many facets of biology, from decorations on cell membranes involved in cell signaling and interaction to post-translational modifications that confer function on proteins. [ 4 ] Yet even though sugars are the most abundant class of organic molecules found on earth, their structure and function are less well understood than those of other biological molecules such as proteins and ribonucleic acids . This is partly because glycans have no direct biosynthetic template in the genome, as opposed to proteins, and thus have not been as effectively elucidated by the age of genomics . [ 5 ] Furthermore, the polymeric nature of glycans presents a challenge to study, as there is a plethora of possible linkages (unlike in DNA and protein) and many different types of monosaccharides and isomers . [ 5 ]
Seeing as glycans play a key role in the biology of organisms, translational glycobiology thus aims to utilize them both as targets for drugs or as drugs themselves. New or improved glycan products arise as more is learned about the complex biological and chemical roles glycans play, paralleled by advancements in the carbohydrate synthesis toolbox.
Since glycans play an important role in intercellular interactions and in protein function, they serve as viable targets for therapeutic intervention. Multiple current therapeutics take advantage of their role in signaling pathways, targeting their biosynthesis or engineering related glycoproteins. These interactions can be controlled by encouraging or inhibiting the presence of the glycans that mediate signaling, which is the mechanism of action for a number of extant drugs, including heparin , erythropoietin , the antivirals oseltamivir and zanamivir , and the Hib vaccine . [ 6 ] Furthermore, glycans themselves can serve as drugs, and there is ongoing research and development to engineer more effective ones.
The surfaces of cancer cells often exhibit aberrant glycosylation , which serves to mediate cell proliferation , metastasis , and tumor progression . However, because these glycans often differ from those present on healthy cells, they also serve as candidates to act as cancer biomarkers for use in diagnostics and in developing targeted therapies that discriminate between cancerous cells and normal host tissue. One such therapy involves the use of enzyme inhibitors that target those enzymes involved in the biosynthesis of cancer-associated glycans. [ 7 ] Another treatment is cancer immunotherapy , which directs the immune system to attack tumor cells expressing the targeted altered glycoconjugates . [ 8 ]
For example, modifying CD44 antigens using glycosyltransferase-programmed stereosubstitution (GPS), the HCELL expression on the surfaces of human mesenchymal stem cells and hematopoietic stem cells can be enforced, effectively homing those cells to the bone marrow of their host. [ 9 ] Once mesenchymal stem cells transmigrate through the bone marrow endothelium , they differentiate into osteoblasts and begin contributing to bone formation . This technique has been proposed as a potential treatment for numerous bone diseases, including osteogenesis imperfecta . [ 10 ]
Other therapeutic measures involving glycans include epitope recognition for both vaccine and antibody production. This has been an area of interest especially in the field of HIV vaccines, as the immense genetic diversity of strains and high degree of glycosylation leads to much difficulty in developing antibodies that bind to viral particles. [ 11 ] The heavy glycosylation of these proteins can mask peptide epitopes, making designing antibodies targeted to certain proteins sections all the more difficult. Therefore, some have turned to translational glycobiology to develop antibodies using semi-synthetic and fully synthetic oligosaccharides as antigens . Many of these discoveries have focused on the GP120 surface glycoprotein , which is naturally heavily glycosylated with high mannose glycans. [ 11 ]
Many proteins are glycosylated on certain residues , which can affect the proteome . [ 12 ]
Glycans can interact with receptors , which in turn affect their cellular and subcellular localization . For example, cytokines and the subgroup chemokines are small signaling proteins involved in the immune response . [ 13 ] Many of the N-linked glycans on these cytokines play an important role in metabolic turnover, and by engineering the glycoform and its branching, advantageous physicochemical effects on the immune response can be achieved.
Furthermore, glycosylated proteins, or glycoproteins, can have increased resistance to degradation by proteases , which increases their half-life. For example, interferon beta has been shown to be important in the treatment of multiple sclerosis . Recombinant versions of interferon beta have been produced in Escherichia coli ; the glycosylated form is more stable and resistant to protease degradation, while the non-glycosylated form is degraded much more quickly. [ 14 ] Engineered glycoproteins have also been instrumental in enzyme replacement therapy (ERT) , which has been of particular interest in the development of therapeutics for lysosomal storage disease . Proper delivery of these enzymes is highly dependent on the mannose 6-phosphate (M6P) tagging on N-glycans. [ 15 ] Thus, engineering these N-glycans – for example by modifying branching patterns, sialic acid capping, M6P tagging, monosaccharide constituents, and glycosidic bond linkage – can increase the efficacy of lysosomal targeting and improve delivery to the central nervous system through the blood brain barrier . [ 15 ]
Additionally, glycoengineering has been used with neural stem cell cultures to increase adhesion to the extracellular matrix through treatment with an N-acetylmannosamine analog.
Glycans and glycan-based molecules have been used as drugs themselves. The two main functions of these drugs are to bind proteins or to inhibit glycan-degrading enzymes. [ 16 ] For example, engineered glycans such as zanamivir and oseltamivir have been designed to bind viral sialidases , enzymes that play key roles in viral replication cycles, such as for influenza . With these sialidases inhibited, viral budding and entry into host cells are blocked. Other drugs, such as miglitol and acarbose , serve as therapeutics for people with type 2 diabetes : these engineered glycan derivatives bind glucosidases and amylases to help control patients' blood sugar levels . [ 16 ] | https://en.wikipedia.org/wiki/Translational_glycobiology |
Translational medicine (often called translational science , of which it is a form) develops the clinical practice applications of the basic science aspects of the biomedical sciences ; that is, it translates basic science to applied science in medical practice . It is defined by the European Society for Translational Medicine as "an interdisciplinary branch of the biomedical field supported by three main pillars: benchside, bedside, and community". [ 1 ] The goal of translational medicine is to combine disciplines, resources, expertise, and techniques within these pillars to promote enhancements in prevention, diagnosis, and therapies. Accordingly, translational medicine is a highly interdisciplinary field, the primary goal of which is to coalesce assets of various natures within the individual pillars in order to improve the global healthcare system significantly. [ 1 ]
Translational medicine is a rapidly growing discipline in biomedical research and aims to expedite the discovery of new diagnostic tools and treatments by using a multi-disciplinary, highly collaborative, "bench-to-bedside" approach. [ 2 ] Within public health, translational medicine is focused on ensuring that proven strategies for disease treatment and prevention are actually implemented within the community. One prevalent description of translational medicine, first introduced by the Institute of Medicine's Clinical Research Roundtable, highlights two roadblocks (i.e., distinct areas in need of improvement): the first translational block (T1) prevents basic research findings from being tested in a clinical setting; the second translational block (T2) prevents proven interventions from becoming standard practice. [ 2 ]
The National Institutes of Health has made a major push to fund translational medicine, especially within biomedical research, with a focus on cross-functional collaborations (e.g., between researchers and clinicians), leveraging new technology and data analysis tools, and increasing the speed at which new treatments reach patients. In December 2011, the National Center for Advancing Translational Sciences was established within the National Institutes of Health to "transform the translational science process so that new treatments and cures for disease can be delivered to patients faster." [ 3 ] The Clinical and Translational Science Awards program, established in 2006, supports 60 centers across the country that provide "academic homes for translational sciences and supporting research resources needed by local and national research communities." [ 4 ] According to a 2007 article in Science Careers magazine, the European Commission targeted a majority of its €6 billion health-research budget for 2007–2013 to translational medicine. [ 5 ]
In recent years, a number of educational programs have emerged to provide professional training in the skills necessary for successfully translating research into improved clinical outcomes. These programs go by various names (including Master of Translational Medicine and Master of Science in Bioinnovation). Many such programs emerge from bioengineering departments, often in collaboration with clinical departments.
The University of Edinburgh has been running an MSc in Translational Medicine program since 2007. It is a 3-year online distance learning programme aimed at the working health professional. [ 6 ]
Aalborg University Denmark has been running a master's degree in translational medicine since 2009.
A master's degree program in translational medicine was started at the University of Helsinki in 2010.
In 2010, UC Berkeley and UC San Francisco used a founding grant from Andy Grove [ 7 ] to launch a joint program that became the Master of Translational Medicine (MTM). [ 8 ] The program links the Bioengineering department at Berkeley with the Bioengineering and Therapeutic Science department at UCSF to give students a one-year experience in fostering medical innovation.
Cedars-Sinai Medical Center in Los Angeles, California was accredited in 2012 for a doctoral program in Biomedical Science and Translational Medicine. The PhD program focuses on biomedical and clinical research that relate directly to developing new therapies for patients. [ 9 ]
Since 2013, the Official Master in Translational Medicine-MSc [ 10 ] from the University of Barcelona [ 11 ] offers the opportunity to gain an excellent training through theoretical and practical courses. [ 12 ] Furthermore, this master is linked to the doctoral programme “Medicine and Translational Research”, with quality mention from the National Agency for the Quality Evaluation and Accreditation ( ANECA ). [ 13 ]
In Fall 2015, the City College of New York [ 14 ] established a master in translational medicine program. [ 15 ] [ 14 ] A partnership between The Grove School of Engineering and the Sophie Davis School for Biomedical Education/CUNY School of Medicine, this program provides scientists, engineers, and pre-med students with training in product design, intellectual property, regulatory affairs, and medical ethics over 3 semesters.
The University of Toronto 's Master of Health Science (MHSc) in Translational Research in Health Sciences is a two-year, course-based program designed for interprofessional students from diverse backgrounds (such as medicine, life sciences, social sciences, engineering, design, and communications) who want to learn creative problem-solving skills, strategies, and competencies to translate scientific knowledge into innovations that improve medicine, health, and care. The program has international speakers and contributors, including Dr Iseult Roche from the UK.
Translational medicine is regarded as key to the future of international health, facilitating public health and the active implementation of health policy into care.
University of Liverpool , King's College London , Imperial College London , University College London , St George's, University of London , and the Universities of Oxford and Cambridge also run post-graduate courses in Translational Medicine.
The George Washington University's School of Medicine & Health Sciences offers a PhD program in Translational Health Sciences. [ 16 ]
The University of Manchester, Newcastle University and Queen's University Belfast also offer research-focussed Masters of Research (MRes) courses in Translational Medicine. [ 17 ]
Queen's University 's School of Graduate Studies offers both an MSc and PhD program in Translational Medicine. [ 18 ]
University of Windsor 's Faculty of Graduate Studies offers a one year MSc program in Translational Health Science. [ 19 ]
Tulane University has a PhD program in Bio-Innovation [ 20 ] to foster design and implementation of innovative biomedical technologies.
Temple University 's College of Public Health offers a Master of Science in Clinical Research and Translational Medicine. [ 21 ] The program is jointly offered with the Lewis Katz School of Medicine.
Mahidol University 's Faculty of Medicine Ramathibodi Hospital has offered Master's and PhD programmes in Translational Medicine since 2012, the first such programmes in Thailand and in Southeast Asia. Most of the programme's lecturers are physicians and clinicians working in fields such as cancer , allergy and immunology , haematology , paediatrics , and rheumatology . Students are directly exposed to patients, exploring the ground between basic science and clinical application.
University of Würzburg started a Master programme in Translational Medicine in 2018. [ 22 ] It is aimed at medical students in the third or fourth year pursuing a career as a Clinician Scientist.
St George's, University of London has offered a one-year Translational Medicine master's programme since 2018, including pathways leading to a Master of Research (MRes), Master of Science (MSc), Postgraduate Certificate (PGCert) or Postgraduate Diploma (PGDip). Both master's degrees include a research component that makes up 33% of the MSc pathway and 58% of the MRes pathway. Taught modules are designed to cover major areas of modern translational science, including drug development , genomics , and the development of skills related to research and data analysis . [ 23 ]
Perdana University 's Graduate School of Medicine in Kuala Lumpur, Malaysia offers a Master of Science (MSc) in Translational Medicine. [ 24 ]
Taipei Medical University 's College of Medical Science and Technology in Taipei, Taiwan offers an MSc and PhD program in Translational Science. [1]
Academy of Translational Medicine Professionals offers a regular professional certification course "Understanding Translational Medicine Tools and Techniques". [ 25 ]
James Lind Institute has been conducting a Postgraduate Diploma in Translational Medicine since early 2013. The program has been supported by the Universiti Sains Malaysia . [ 26 ]
The University of Southern California School of Pharmacy offers a course in translational medicine.
The European Society for Translational Medicine is a global non-profit and neutral healthcare organization whose principal objective is to enhance world-wide healthcare by using translational medicine approaches, resources and expertise. [ 1 ]
The society facilitates cooperation and interaction among clinicians, scientists, academia, industry, governments, funding and regulatory agencies, investors, and policy makers in order to develop and deliver high-quality translational medicine programs and initiatives, with the overall aim of enhancing the healthcare of the global population. The society's goal is to advance research and development of novel and affordable diagnostic tools and treatments for clinical disorders affecting the global population. [ 1 ]
The society provides an annual platform in the form of global congresses where global key opinion leaders, scientists from bench side, public health professionals, clinicians from bedside and industry professionals gather and take part in the panel discussions and scientific sessions on latest updates and developments in translational medicine field including biomarkers, omics sciences, cellular and molecular biology, data mining & management, precision medicine & companion diagnostics, disease modelling, vaccines and community healthcare. [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ]
In response to the COVID-19 coronavirus pandemic, the European Society for Translational Medicine announced a global virtual congress on COVID-19 (EUSTM-2020). The virtual congress focused on the key principles of translational medicine – benchside, bedside, and public health – and aimed to address current challenges, highlight novel solutions, and identify critical hurdles as they apply to COVID-19. [ 33 ]
The Academy of Translational Medicine Professionals works to advance the knowledge and skills of clinicians and scientific professionals at all levels. Its training and educational programs aim to ensure that clinical and scientific professionals achieve excellence in their respective fields. Programs are accredited by the European Society for Translational Medicine. [ 1 ]
The Academy of Translational Medicine Professionals offers a fellowship program open to highly experienced professionals who have a record of significant achievements in benchside, bedside, or community health fields. [ 34 ] | https://en.wikipedia.org/wiki/Translational_medicine |
Translational regulation refers to the control of the levels of protein synthesized from mRNA . This regulation is critically important to the cellular response to stressors, growth cues, and differentiation . Compared with transcriptional regulation , it allows much more immediate cellular adjustment through direct regulation of protein concentration. The corresponding mechanisms primarily target the control of ribosome recruitment at the initiation codon , but can also involve modulation of peptide elongation, termination of protein synthesis , or ribosome biogenesis . While these general concepts are widely conserved, some of the finer details of this regulation differ between prokaryotic and eukaryotic organisms.
Initiation of translation is regulated by the accessibility of ribosomes to the Shine-Dalgarno sequence . This stretch of four to nine purine residues is located upstream of the initiation codon and hybridizes to a pyrimidine-rich sequence near the 3' end of the 16S RNA within the 30S bacterial ribosomal subunit . [ 1 ] Polymorphism in this sequence has both positive and negative effects on the efficiency of base-pairing and subsequent protein expression. [ 2 ] Initiation is also regulated by proteins known as initiation factors, which provide kinetic assistance to the binding between the initiation codon and tRNA fMet , which supplies the 3'-UAC-5' anticodon. IF1 binds the 30S subunit first, instigating a conformational change [ 3 ] that allows the additional binding of IF2 and IF3. [ 4 ] IF2 ensures that tRNA fMet remains in the correct position, while IF3 proofreads initiation-codon base-pairing to prevent non-canonical initiation at codons such as AUU and AUC. [ 5 ] Generally, these initiation factors are expressed in equal proportion to ribosomes; however, experiments under cold-shock conditions have shown stoichiometric imbalances in this translational machinery. In this case, two- to three-fold changes in the expression of initiation factors coincide with increased favorability towards translation of specific cold-shock mRNAs. [ 6 ]
Because translation elongation is an irreversible process, there are few known mechanisms of its regulation. However, translational efficiency has been shown to be reduced via diminished tRNA pools, which are required for the elongation of polypeptides. Indeed, the richness of these tRNA pools is susceptible to change with cellular oxygen supply. [ 7 ]
The termination of translation requires coordination between release factor proteins, the mRNA sequence, and ribosomes. Once a termination codon is read, the release factors RF-1, RF-2, and RF-3 contribute to the hydrolysis of the growing polypeptide, which terminates the chain. Bases downstream of the stop codon affect the activity of these release factors; some bases proximal to the stop codon suppress the efficiency of translation termination by reducing the enzymatic activity of the release factors. For instance, the termination efficiency of a UAAU stop signal is near 80%, while the efficiency of UGAC as a termination signal is only 7%. [ 8 ]
When comparing initiation in eukaryotes to prokaryotes, perhaps the first noticeable difference is the use of a larger 80S ribosome. Regulation of this process begins with the supply of methionine by a tRNA whose anticodon base-pairs with AUG. This base-pairing comes about through the scanning mechanism that ensues once the small 40S ribosomal subunit binds the 5' untranslated region (UTR) of the mRNA. An advantage of this scanning mechanism, as opposed to the Shine-Dalgarno sequence referenced above for prokaryotes, is the ability to regulate translation through upstream RNA secondary structures . This inhibition of initiation by complex RNA structures can be circumvented in some cases by internal ribosomal entry sites (IRESs) that localize pre-initiation complexes (PICs) to the start site. [ 9 ] In addition, the guidance of the PIC to the 5' UTR is coordinated by subunits of the PIC known as eukaryotic initiation factors (eIFs). When some of these proteins are down-regulated by stress, translation initiation is reduced through inhibition of cap-dependent initiation , the activation of translation by binding of eIF4E to the 5' 7-methylguanylate cap . eIF2 is responsible for coordinating the interaction between the Met-tRNA i Met and the P-site of the ribosome. Phosphorylation of eIF2 is largely associated with the termination of translation initiation. [ 10 ] The serine kinases GCN2 , PERK, PKR, and HRI are examples of detection mechanisms for differing cellular stresses that respond by slowing translation through eIF2 phosphorylation.
The hallmark difference of elongation in eukaryotes compared to prokaryotes is its separation from transcription. While prokaryotes are able to undergo both cellular processes simultaneously, the spatial separation provided by the nuclear membrane prevents this coupling in eukaryotes. Eukaryotic elongation factor 2 (eEF2) is a regulatable GTP -dependent translocase that moves nascent polypeptide chains from the A-site to the P-site of the ribosome. Phosphorylation of threonine 56 inhibits the binding of eEF2 to the ribosome, [ 11 ] and cellular stressors such as anoxia have been shown to induce translational inhibition through this biochemical interaction. [ 12 ]
Mechanistically, eukaryotic translation termination matches its prokaryotic counterpart. In this case, termination of the polypeptide chain is achieved through the hydrolytic action of a heterodimer consisting of the release factors eRF1 and eRF3 . Translation termination is said to be leaky in some cases, as near-cognate tRNAs may compete with release factors to bind stop codons. This is possible because tRNAs matching two out of three bases within the stop codon may occasionally outcompete release factor base pairing. [ 13 ] An example of regulation at the level of termination is functional translational readthrough of the lactate dehydrogenase gene LDHB. This readthrough provides a peroxisomal targeting signal that localizes the distinct LDHBx to the peroxisome. [ 14 ]
Translation in plants is tightly regulated, as in animals; however, it is not as well understood as transcriptional regulation. There are several levels of regulation, including translation initiation, mRNA turnover and ribosome loading. Recent studies have shown that translation is also under the control of the circadian clock . Like transcription, the translation state of numerous mRNAs changes over the diel cycle (day-night period). [ 15 ] | https://en.wikipedia.org/wiki/Translational_regulation
Translational research (also called translation research , translational science , or, when the context is clear, simply translation ) [ 1 ] [ 2 ] is research aimed at translating (converting) the results of basic research into results that directly benefit humans. The term is used in science and technology , especially in biology and medical science . As such, translational research forms a subset of applied research .
The term has been used most commonly in life sciences and biotechnology, but applies across the spectrum of science and humanities. In the context of biomedicine, translational research is also known as bench to bedside . [ 3 ] In the field of education, it is defined as research which translates concepts to classroom practice.
Critics of translational medical research (to the exclusion of more basic research) point to examples of important drugs that arose from fortuitous discoveries in the course of basic research such as penicillin and benzodiazepines . Other problems have stemmed from the widespread irreproducibility thought to exist in translational research literature.
Although translational research is relatively new, there are now several major research centers focused on it. In the U.S., the National Institutes of Health has implemented a major national initiative to leverage existing academic health center infrastructure through the Clinical and Translational Science Awards. Furthermore, some universities acknowledge translational research as its own field in which to study for a PhD or graduate certificate.
Translational research is aimed at solving particular problems; the term has been used most commonly in life sciences and biotechnology, but applies across the spectrum of science and humanities. [ citation needed ]
In the field of education, it is defined for school-based education by the Education Futures Collaboration (www.meshguides.org) as research which translates concepts to classroom practice. [ 4 ] Examples of translational research are commonly found in education subject association journals and in the MESHGuides which have been designed for this purpose. [ 5 ]
In bioscience, translational research is a term often used interchangeably with translational medicine or translational science or bench to bedside. The adjective "translational" refers to the " translation " (the term derives from the Latin for "carrying over") of basic scientific findings in a laboratory setting into potential treatments for disease. [ 6 ] [ 7 ] [ 8 ] [ 9 ]
Biomedical translational research applies scientific investigation to problems encountered in medical and health practice: [ 10 ] it aims to "translate" findings in fundamental research into practice. In the field of biomedicine, it is often called "translational medicine", defined by the European Society for Translational Medicine (EUSTM) as "an interdisciplinary branch of the biomedical field supported by three main pillars: benchside, bedside and community", [ 11 ] from laboratory experiments through clinical trials, to therapies , [ 12 ] to point-of-care patient applications. [ 13 ] The end point of translational research in medicine is the production of a promising new treatment that can be used clinically. [ 6 ] Translational research arose in response to the long time often needed to turn a medical discovery into practical use within a health system. [ citation needed ] It is for these reasons that translational research is more effective in dedicated university science departments or isolated, dedicated research centers. [ 14 ] Since 2009, the field has had specialized journals, the American Journal of Translational Research and Translational Research , dedicated to translational research and its findings.
Translational research in biomedicine is broken down into different stages. In a two-stage model, T1 research refers to the "bench-to-bedside" enterprise of translating knowledge from the basic sciences into the development of new treatments, and T2 research refers to translating the findings from clinical trials into everyday practice, although this model is actually referring to the two "roadblocks" T1 and T2. [ 6 ] Waldman et al. [ 15 ] propose a scheme going from T0 to T5. T0 is laboratory (before human) research. In T1-translation, new laboratory discoveries are first translated to human application, which includes phase I & II clinical trials. In T2-translation, candidate health applications progress through clinical development to engender the evidence base for integration into clinical practice guidelines. This includes phase III clinical trials. In T3-translation, dissemination into community practices happens. T4-translation seeks to (1) advance scientific knowledge to paradigms of disease prevention, and (2) move health practices established in T3 into population health impact. Finally, T5-translation focuses on improving the wellness of populations by reforming suboptimal social structures.
Basic research is the systematic study directed toward greater knowledge or understanding of the fundamental aspects of phenomena and is performed without thought of practical ends. It results in general knowledge and understanding of nature and its laws. [ 16 ] For instance, basic biomedical research focuses on studies of disease processes using, for example, cell cultures or animal models without consideration of the potential utility of that information. [ 11 ]
Applied research is a form of systematic inquiry involving the practical application of science . It accesses and uses the research communities' accumulated theories, knowledge, methods, and techniques for a specific, often stated, business- or client-driven purpose. [ 17 ] Translational research forms a subset of applied research. In the life sciences, this was evidenced by a citation pattern between the applied and basic sides in cancer research that appeared around 2000. [ 18 ] In fields such as psychology, translational research is seen as a bridge between applied research and basic research. The field of psychology defines translational research as the use of basic research to develop and test applications, such as treatments.
Critics of translational medical research (to the exclusion of more basic research) point to examples of important drugs that arose from fortuitous discoveries in the course of basic research, such as penicillin and benzodiazepines , [ 19 ] and to the importance of basic research in improving our understanding of basic biological facts (e.g. the function and structure of DNA ) that go on to transform applied medical research. [ 20 ] Examples of failed translational research in the pharmaceutical industry include the failure of anti-Aβ therapeutics in Alzheimer's disease. [ 21 ] Other problems have stemmed from the widespread irreproducibility thought to exist in translational research literature. [ 22 ]
In the U.S., the National Institutes of Health has implemented a major national initiative to leverage existing academic health center infrastructure through the Clinical and Translational Science Awards.
The National Center for Advancing Translational Sciences (NCATS) was established on December 23, 2011. [ 23 ]
Although translational research is relatively new, it is being recognized and embraced globally. Some major centers for translational research include:
Additionally, translational research is now acknowledged by some universities as a dedicated field in which to study for a PhD or graduate certificate in a medical context. These institutions currently include Monash University in Victoria, Australia , [ 33 ] the University of Queensland Diamantina Institute in Brisbane, Australia , [ 34 ] Duke University in Durham, North Carolina , [ 35 ] Creighton University in Omaha, Nebraska , [ 36 ] Emory University in Atlanta, Georgia , [ 37 ] and The George Washington University in Washington, D.C. [ 38 ] Industry and academic interactions to promote translational science initiatives have been carried out by various global centers such as the European Commission , GlaxoSmithKline and the Novartis Institute for Biomedical Research . [ 39 ] | https://en.wikipedia.org/wiki/Translational_research
Translational research informatics ( TRI ) is a sister domain to or a sub-domain of biomedical informatics or medical informatics concerned with the application of informatics theory and methods to translational research . There is some overlap with the related domain of clinical research informatics , but TRI is more concerned with enabling multi-disciplinary research to accelerate clinical outcomes, with clinical trials often being the natural step beyond translational research.
Translational research as defined by the National Institutes of Health includes two areas of translation. One is the process of applying discoveries generated during research in the laboratory, and in preclinical studies, to the development of trials and studies in humans. The second area of translation concerns research aimed at enhancing the adoption of best practices in the community. Cost-effectiveness of prevention and treatment strategies is also an important part of translational research.
Translational research informatics can be described as "an integrated software solution to manage the: (i) logistics, (ii) data integration, and (iii) collaboration, required by translational investigators and their supporting institutions". It is the class of informatics systems that sits between and often interoperates with: (i) health information technology / electronic medical record systems, (ii) CTMS / clinical research informatics , and (iii) statistical analysis and data mining .
Translational research informatics is relatively new, with most CTSA awardee academic medical centers actively acquiring and integrating systems to enable the end-to-end TRI requirements. One advanced TRI system is being implemented at the Windber Research Institute in collaboration with GenoLogics and InforSense . Translational Research Informatics systems are expected to rapidly develop and evolve over the next couple of years.
Further discussion of this domain can be found at the Clinical Research Informatics Wiki (CRI Wiki), a wiki dedicated to issues in clinical and translational research informatics. | https://en.wikipedia.org/wiki/Translational_research_informatics
In physics and mathematics , continuous translational symmetry is the invariance of a system of equations under any translation (without rotation ). Discrete translational symmetry is invariance under discrete translations.
Analogously, an operator A on functions is said to be translationally invariant with respect to a translation operator T δ {\displaystyle T_{\delta }} if the result after applying A doesn't change if the argument function is translated.
More precisely it must hold that ∀ δ A f = A ( T δ f ) . {\displaystyle \forall \delta \ Af=A(T_{\delta }f).}
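This definition can be checked numerically. The sketch below is an illustration, not from the source: a circular shift of sample values stands in for T_δ, a discrete "integral" (the sum) stands in for a translation-invariant functional A, and evaluation at a fixed point is a functional that fails the test because it singles out one location in space.

```python
import numpy as np

# Discrete analogue of the translation operator T_delta: a circular shift.
def T(f, delta):
    return np.roll(f, delta)

# A translation-invariant functional A: the discrete "integral" (sum) of f.
def A(f):
    return f.sum()

# A functional that is NOT translation invariant: evaluation at the origin.
def B(f):
    return f[0]

f = np.sin(np.linspace(0, 2 * np.pi, 100, endpoint=False)) + 1.0

# A f == A(T_delta f) holds for every shift delta.
assert all(np.isclose(A(f), A(T(f, d))) for d in range(100))

# B picks out a particular point in "space", so translation changes its value.
assert not np.isclose(B(f), B(T(f, 17)))
```

The same check applies to any operator claimed to be translation invariant; only the choice of discrete stand-ins here is an assumption.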
Laws of physics are translationally invariant under a spatial translation if they do not distinguish different points in space. According to Noether's theorem , space translational symmetry of a physical system is equivalent to the momentum conservation law .
Translational symmetry of an object means that a particular translation does not change the object. For a given object, the translations for which this applies form a group, the symmetry group of the object, or, if the object has more kinds of symmetry, a subgroup of the symmetry group.
Translational invariance implies that, at least in one direction, the object is infinite: for any given point p , the set of points with the same properties due to the translational symmetry forms the infinite discrete set { p + n a | n ∈ Z } = p + Z a . Fundamental domains include H + [0, 1] a for any hyperplane H for which a has an independent direction. This is in 1D a line segment , in 2D an infinite strip, and in 3D a slab, such that the vector starting at one side ends at the other side. Note that the strip and slab need not be perpendicular to the vector, hence can be narrower or thinner than the length of the vector.
In spaces with dimension higher than 1, there may be multiple translational symmetries. For each set of k independent translation vectors, the symmetry group is isomorphic with Z k .
In particular, the multiplicity may be equal to the dimension. This implies that the object is infinite in all directions. In this case, the set of all translations forms a lattice . Different bases of translation vectors generate the same lattice if and only if one is transformed into the other by a matrix of integer coefficients of which the absolute value of the determinant is 1. The absolute value of the determinant of the matrix formed by a set of translation vectors is the hypervolume of the n -dimensional parallelepiped the set subtends (also called the covolume of the lattice). This parallelepiped is a fundamental region of the symmetry: any pattern on or in it is possible, and this defines the whole object.
See also lattice (group) .
E.g. in 2D, instead of a and b we can also take a and a − b , etc. In general in 2D, we can take p a + q b and r a + s b for integers p , q , r , and s such that ps − qr is 1 or −1. This ensures that a and b themselves are integer linear combinations of the other two vectors. If not, not all translations are possible with the other pair. Each pair a , b defines a parallelogram, all with the same area, the magnitude of the cross product . One parallelogram fully defines the whole object. Without further symmetry, this parallelogram is a fundamental domain. The vectors a and b can be represented by complex numbers. For two given lattice points, equivalence of choices of a third point to generate a lattice shape is represented by the modular group , see lattice (group) .
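The determinant condition above is easy to verify numerically. The sketch below (an illustration with arbitrarily chosen vectors) tests whether two bases generate the same lattice by checking that the change-of-basis matrix has integer entries and determinant ±1:

```python
import numpy as np

def same_lattice(B1, B2, tol=1e-9):
    """True if the columns of B1 and B2 generate the same lattice:
    the change-of-basis matrix M = B1^{-1} B2 must have integer entries
    and determinant +1 or -1 (a unimodular matrix)."""
    M = np.linalg.solve(B1, B2)
    is_integer = np.allclose(M, np.round(M), atol=tol)
    return is_integer and np.isclose(abs(np.linalg.det(np.round(M))), 1.0)

a, b = np.array([2.0, 0.0]), np.array([0.5, 1.0])

B1 = np.column_stack([a, b])
B2 = np.column_stack([a, a - b])   # a and a - b: same lattice (det = -1)
B3 = np.column_stack([2 * a, b])   # 2a and b: only a strict sublattice

print(same_lattice(B1, B2))  # True
print(same_lattice(B1, B3))  # False
```

For B2 the change-of-basis matrix is [[1, 1], [0, -1]] with determinant -1, matching the ps − qr = ±1 condition in the text; for B3 the determinant is 2, so the pair 2a, b cannot reproduce all translations.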
Alternatively, e.g. a rectangle may define the whole object, even if the translation vectors are not perpendicular, if it has two sides parallel to one translation vector, while the other translation vector starting at one side of the rectangle ends at the opposite side.
For example, consider a tiling with equal rectangular tiles with an asymmetric pattern on them, all oriented the same, in rows, with each row shifted by a fraction (not one half) of a tile, always the same; then we have only translational symmetry, wallpaper group p 1 (the same applies without shift). With rotational symmetry of order two of the pattern on the tile, we have p 2 (more symmetry of the pattern on the tile does not change that, because of the arrangement of the tiles). The rectangle is a more convenient unit to consider as fundamental domain (or set of two of them) than the parallelogram consisting of part of one tile and part of another.
In 2D there may be translational symmetry in one direction for vectors of any length. One line, not in the same direction, fully defines the whole object. Similarly, in 3D there may be translational symmetry in one or two directions for vectors of any length. One plane ( cross-section ) or line, respectively, fully defines the whole object. | https://en.wikipedia.org/wiki/Translational_symmetry |
Translatomics is the study of all open reading frames (ORFs) that are being actively translated in a cell or organism . This collection of ORFs is called the translatome . Characterizing a cell's translatome can give insight into the array of biological pathways that are active in the cell. According to the central dogma of molecular biology , the DNA in a cell is transcribed to produce RNA , which is then translated to produce a protein . Thousands of proteins are encoded in an organism's genome , and the proteins present in a cell cooperatively carry out many functions to support the life of the cell. Under various conditions, such as during stress or specific timepoints in development, the cell may require different biological pathways to be active, and therefore require a different collection of proteins. Depending on intrinsic and environmental conditions, the collection of proteins being made at one time varies. Translatomic techniques can be used to take a "snapshot" of this collection of actively translating ORFs, which can give information about which biological pathways the cell is activating under the present conditions. [ 1 ]
Usually, the ribosome profiling technique is used to acquire the translatome information. [ 2 ] Recent advancements, including single-cell ribosome profiling, have significantly improved the resolution of these studies, allowing researchers to gain insights into translation at the level of individual cells. [ 3 ] This is particularly important for heterogeneous cell populations, where overall bulk measurements may mask important cell-to-cell differences in protein synthesis. Other methods are polysome profiling , full-length translating mRNA profiling ( RNC-seq ), and translating ribosome affinity purification (TRAP-seq). [ 4 ] Unlike the transcriptome , the translatome is a more accurate approximation for estimating the expression level of some genes , since the correlation between the proteome and translatome is higher than the correlation between the transcriptome and proteome. [ 5 ]
As the Human Genome Project neared completion, the field of genetics was shifting its focus toward determining the functions of genes. This involved cataloguing other collections of biological materials, such as the RNA and proteins in cells. These collections were called "-omes", evoking the widespread excitement surrounding the sequencing of the human genome. The term translatome was first proposed in 2001 by Greenbaum et al. [ 6 ] The translatome was originally intended to describe the relative quantities of proteins in a proteome ; the term now generally refers to the collection of proteins actively being created in a cell. Translatomics, in combination with degradomics , aims to describe the net change to the proteome under different conditions.
The aim of genomics is to study the genome, the collection of genetic material in an organism. Genomics subfields, or other -omics such as transcriptomics and proteomics , aim to characterize genome function by quantifying products of the genome (such as RNA and proteins) under different conditions. In doing so, omics fields gain insight into different levels of regulation of gene expression and therefore genome function. However, these fields characterize biomolecules that have already been formed. In some cases, RNA or protein abundance does not reflect function, because these biomolecules may be degraded rapidly, or they may remain in a cell long after they are initially synthesized. When using proteomics techniques to study the proteome , regulation of protein abundance at the level of post-translational modification and protein degradation may obscure earlier regulatory processes. Because cellular functions are often regulated at the level of translation, meaning the transcriptome does not always reflect genome function, [ 1 ] using translatomics techniques to study the translatome may allow one to observe regulation of genome function that would be obscured in transcriptomics or proteomics studies.
Most translatomics techniques focus on characterizing the mRNAs that are complexed with ribosomes and therefore being translated.
Polysome profiling is a technique used to characterize the degree of translation of one or more mRNAs. A highly translated mRNA exists as a polysome , meaning it is complexed with multiple ribosomes; mRNAs translated at lower levels are complexed with fewer ribosomes. In polysome profiling, a sucrose gradient is used to separate molecular complexes in a cell lysate based on size. [ 7 ] The fractions from the gradient are analyzed by sequencing or other methods, and the translation rate of an mRNA is determined from its detection and abundance in the fractions of lower and higher molecular weight.
The full-length translating mRNA method (RNC-seq) involves centrifugation of a lysed sample on a sucrose cushion. This allows separation of ribosome-nascent chain complexes (RNCs) from free mRNAs and other cell components. The RNCs form a pellet in the centrifugation that is collected for further analysis. The mRNAs being translated in these RNCs can be sequenced, allowing identification and quantification of the mRNAs being translated at the time. [ 1 ] [ 8 ] However, RNC-mRNA complexes are fragile, which can lead ribosomes to dissociate from the mRNAs and the mRNAs to degrade, potentially biasing the collected results. [ 1 ]
In Ribosome profiling , cellular mRNA including polysomes is subjected to ribonucleases, enzymes that cleave RNA. Those positions in the RNA molecules that are bound by ribosomes are protected against digestion. After cessation of ribonuclease activity, these protected sites can be recovered and sequenced. The sequences obtained from ribo-seq therefore represent fragments of mRNAs that were being actively translated. [ 9 ]
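A common downstream computation on ribosome-profiling data, when paired with RNA-seq, is a translation-efficiency (TE) estimate: ribosome-footprint density divided by mRNA abundance. The sketch below is illustrative only — the gene names and counts are invented, and real pipelines use more careful normalization than this simple RPKM-style calculation:

```python
# Illustrative translation-efficiency (TE) estimate:
# TE = normalized footprint density / normalized mRNA abundance.
# Counts and gene lengths here are invented for illustration.

ribo_counts = {"geneA": 900, "geneB": 120, "geneC": 40}   # ribosome footprints
rna_counts  = {"geneA": 300, "geneB": 400, "geneC": 35}   # RNA-seq reads
lengths_bp  = {"geneA": 1500, "geneB": 2000, "geneC": 700}

def rpkm(counts, lengths):
    """Reads per kilobase of transcript per million mapped reads."""
    total = sum(counts.values())
    return {g: counts[g] / (lengths[g] / 1e3) / (total / 1e6) for g in counts}

ribo = rpkm(ribo_counts, lengths_bp)
rna = rpkm(rna_counts, lengths_bp)

# TE well above 1 suggests the mRNA is translated more efficiently than its
# abundance alone predicts; TE well below 1 suggests translational repression.
te = {g: ribo[g] / rna[g] for g in ribo}
for gene, value in sorted(te.items(), key=lambda kv: -kv[1]):
    print(f"{gene}: TE = {value:.2f}")
# prints:
# geneA: TE = 2.08
# geneC: TE = 0.79
# geneB: TE = 0.21
```

Note that gene length cancels out of the ratio; it is kept here only so each intermediate quantity is a recognizable RPKM.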
TRAP-seq is used to identify mRNAs being actively translated in a specific cell type within a tissue or other assortment of cells. The cell type of interest is engineered to express a ribosomal subunit fused to an epitope tag such as green fluorescent protein . [ 10 ] After cell lysis, antibodies targeting the epitope are used to isolate mRNAs that are bound to the ribosomes containing the fusion proteins. This RNA is then converted to cDNA and sequenced. This technique specifically identifies mRNAs that were being translated in the cell type of interest.
Mass spectrometry methods are unable to determine folding of nascent polypeptides . No current methods examine the folding states of nascent polypeptides globally in the cell. There are methods for examining the folding state of individual nascent polypeptides. [ 1 ]
One method uses a nonspecific protease to cleave the nascent peptide at a low temperature. The protease can cleave the unfolded, flexible regions, but cannot cut tightly folded regions. The products of the cleavage can then be separated and studied to determine the folded regions of the nascent peptide. [ 1 ] [ 11 ]
To analyze the full structure of a nascent polypeptide, nuclear magnetic resonance (NMR) is used. NMR allows a dynamic view of molecules in solution and so can be used on ribosome-nascent chain complexes (RNC) . Labelling of ribosomes allows the NMR data to be filtered for suspected ribosome signal and identify the signal of the nascent polypeptide. [ 12 ]
While the structure of nascent polypeptides cannot currently be examined on a global scale, their identification and quantification can be.
One method uses a variant of stable isotope labeling by amino acids in cell culture (SILAC). SILAC labels proteins with stable isotopes, and quantification compares labeled and unlabeled peptides. In pulse SILAC (pSILAC), only peptides created during the pulse are labeled; in theory, this allows capture of only nascent peptides for quantification. SILAC, however, requires similar levels of labeled and unlabeled proteins for accurate quantification. As such, pSILAC pulses have to run much longer than the translation process, making quantification of nascent peptides inaccurate. [ 1 ] [ 13 ]
Bio-Orthogonal/Quantitative Non-Canonical Amino acid Tagging (BONCAT/QuaNCAT) uses azidohomoalanine (AHA) to tag proteins. This allows isolation of newly created proteins for MS. [ 14 ] [ 15 ] However, using AHA requires predepletion of intracellular methionine and introduction of AHA, stressing the cell and potentially altering translation dynamics within. Similar to pSILAC, AHA methods require longer pulses, thus limiting its efficacy in quantifying nascent peptides. [ 1 ] [ 13 ]
Puromycin-associated nascent chain proteomics (PUNCH-P) uses a puromycin-biotin label to capture nascent polypeptides for MS. Though it does not disturb the cell process, it is less sensitive than other methods in detecting nascent peptides. [ 1 ] [ 13 ] In addition to these quantification methods, other MS-based methods are being developed.
Degradation of mRNA also plays an important part in regulating the translation process.
To explore mechanisms of decay, genome-wide mapping of uncapped and cleaved transcripts (GMUCT), parallel analysis of RNA ends (PARE), and degradome sequencing use T4 ligase on the Illumina sequencing platform to sequence decapped mRNAs. T4 ligase ligates onto RNA with a free 5' monophosphate. As mature mRNAs have a 5' cap, they are not bound as substrates, leaving decapped and degrading mRNAs to be ligated. [ 16 ]
5′-monophosphorylated ends sequencing (5Pseq) captures both capped and decapped sequences to allow sequencing of both mature mRNA and degraded products. This helps identify mRNA degradation products, and has uses in studying ribosome stalling. [ 1 ] [ 17 ]
These methods study 5’ to 3’ degradation, miRNA -mediated cleavage, and nonsense mediated mRNA decay , but cannot measure 3’ to 5’ degradation and other degradation mechanisms.
The techniques above require lysis of cells and thus cannot be performed in living cells. Single molecule fluorescence resonance energy transfer (smFRET) and Nascent chain tracking (NCT) use fluorescence to track translational activity. Both methods track elongation rates of the polypeptides on single mRNAs. [ 18 ] [ 19 ] Neither technique, however, is capable of high throughput. [ 1 ]
Transfer RNA (tRNA) is an important part of translation : tRNAs read the mRNA, bringing the amino acids that the ribosome assembles into a polypeptide. As such, the abundances and types of tRNAs have a large effect on the speed of protein synthesis. [ 20 ] tRNAs can be very similar to one another, with some tRNA species differing by only a single nucleotide. This, coupled with similar secondary and tertiary structures, makes separating different tRNA species difficult. [ 1 ]
2D gel electrophoresis is a classic method for separating tRNAs. Initially, the tRNAs are denatured in 7 M urea and separated in the first gel dimension; 4 M urea allows partial refolding for additional separation in the second gel dimension. This method has allowed separation into 48 sets in E. coli and 30 in B. subtilis , but has limited resolution: large numbers of different tRNA species cannot be fully separated, with only 62 spots found for the 269 rat tRNAs. [ 21 ]
High performance liquid chromatography can be used to separate tRNAs based on aminoacylated tRNA isoacceptors. This method cannot fully separate the tRNA species and cannot distinguish between codons, though it still can find quantitative differences between different cell lines. [ 21 ]
Mass spectrometry (MS) can be used to separate tRNAs based on unique endonuclease digestion products. [ 22 ] This, however, has limited resolution for mixtures of 30 tRNA species, and larger groups of tRNAs need fractionation prior to MS. It also cannot be used to identify tRNA species de novo, as it requires prior knowledge of the digestion patterns of tRNA species. [ 21 ]
Hybridization-based microarrays use the conserved 3' CCA sequence in tRNAs to attach a fluorescent probe. Probes 70-80 nucleotides long, covering the length of the tRNA, are then used to bind tRNAs. tRNAs with at least 8 base differences can be distinguished with microarrays, but tRNAs with smaller differences bind to the same probe. [ 21 ]
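The resolution limit just described can be framed as a simple sequence-distance check: two species are separable only if their aligned sequences differ at enough positions. In the sketch below, the 8-difference threshold follows the text, but the short "tRNA" fragments and the helper names are invented for illustration (real tRNAs are roughly 76 nucleotides and carry many modified bases):

```python
def hamming(seq1, seq2):
    """Number of differing positions between two equal-length sequences."""
    if len(seq1) != len(seq2):
        raise ValueError("sequences must be aligned to equal length")
    return sum(a != b for a, b in zip(seq1, seq2))

# Invented toy "tRNA" fragments, for illustration only.
trna_x = "GGGCCAGUAGCUCAGU"
trna_y = "GGGCCAGUAGCUCAGC"   # differs from trna_x at a single position
trna_z = "CCCGGUCAUCGAGUCA"   # differs from trna_x at many positions

MIN_DIFFERENCES = 8  # microarray resolution threshold from the text

def distinguishable(a, b):
    return hamming(a, b) >= MIN_DIFFERENCES

print(distinguishable(trna_x, trna_y))  # False: near-identical species
print(distinguishable(trna_x, trna_z))  # True
```

Species pairs like trna_x and trna_y, differing at one position, fall below the threshold and would hybridize to the same probe.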
Quantitative reverse transcription real-time PCR (qRT-PCR) is another way of identifying and quantifying the tRNAome. A primer for the conserved 3' CCA sequence is used to prime reverse transcription of all tRNA species. Heavy modification of the tRNA nucleotides and the stability of the tRNA molecule can be an issue, so high temperatures and demethylation have been used to counter these. Demethylation has also been used to counteract the errors that heavy methylation induces in sequencing. [ 1 ]
Ribo-tRNA-seq has been developed to observe the role of tRNA more closely in translation. Similar to Ribo-seq, Ribo-tRNA-seq captures tRNA molecules in ribosomes for library preparation before sequencing. [ 23 ] | https://en.wikipedia.org/wiki/Translatomics
Translocated promoter region is a component of the tpr-met fusion protein . | https://en.wikipedia.org/wiki/Translocated_promoter_region
TMEM255A gene identifiers: Entrez Gene 55026 (human) and 245386 (mouse); Ensembl ENSG00000125355 (human) and ENSMUSG00000036502 (mouse); UniProt Q5JRV8 (human) and Q8BHW5 (mouse); RefSeq mRNA NM_001104544, NM_001104545, NM_017938 (human) and NM_001289727, NM_001289728, NM_172930 (mouse); RefSeq protein NP_001098014, NP_001098015, NP_060408 (human) and NP_001276656, NP_001276657, NP_766518 (mouse).
Transmembrane protein 255A [ 5 ] is a protein that is encoded by the TMEM255A gene . [ 6 ] TMEM255A is often referred to as family with sequence similarity 70, member A ( FAM70A ). [ 7 ] The TMEM255A protein is transmembrane and is predicted to be located in the nuclear envelope of eukaryotic organisms. [ 8 ]
The TMEM255A gene (often referred to as family with sequence similarity 70 member A; FAM70A) is located on Xq24 , spanning 60,555 base pairs . [ 10 ] TMEM255A is flanked by the genes ATPase Na+/K+ transporting family member beta 4 ( ATP1B4 ) and NF K B activating protein pseudogene 1 ( NKAPP1 ). [ 11 ]
There are three transcript variants, of which variant 1 is the longest. The 5'- and 3'- UTRs of the mRNA span 227 and 2207 base pairs, respectively, and are predicted to contain several stem-loops . [ 12 ] The mRNA is 3512 base pairs long and the gene consists of 9 exons. [ 13 ]
The longest protein encoded is isoform 1, which spans 349 amino acids and is predicted to have a molecular weight of 38 kDa and an isoelectric point at pH 7.89. [ 15 ] [ 16 ] [ 17 ] Compared to the average vertebrate protein, TMEM255A is rich in aspartic acid, isoleucine, proline and tyrosine, and relatively poor in glutamic acid and lysine. [ 18 ] No charge clusters have been found in this protein.
The protein is predicted to be post-translationally modified by phosphorylation and glycosylation . [ 19 ] The protein is predicted to have four transmembrane domains in the nuclear membrane. The structure of the protein is predicted to be helical in the transmembrane domains. [ 20 ] [ 21 ] [ 22 ] Disulfide bonds are predicted to be found in the region in between transmembrane domains 3 and 4, which indicates that this particular region is located in the nucleoplasm. [ 23 ] [ 24 ] [ 25 ] [ 26 ]
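Transmembrane-helix predictions like those cited above ultimately trace back to hydropathy analysis. Modern predictors use hidden Markov models or deep learning, but the classic Kyte-Doolittle sliding window conveys the idea. In this sketch the scale values are the published Kyte-Doolittle hydropathies, while the peptide is an invented example, not the TMEM255A sequence:

```python
# Kyte-Doolittle hydropathy scale (published per-residue values).
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
      "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def hydropathy_windows(seq, window=19):
    """Mean hydropathy over a sliding window; windows scoring above ~1.6
    are the classic heuristic for candidate transmembrane helices."""
    return [sum(KD[aa] for aa in seq[i:i + window]) / window
            for i in range(len(seq) - window + 1)]

# Invented peptide: hydrophilic flanks around a 19-residue hydrophobic core.
peptide = "DEKRSTNQ" + "LIVFLLAVILLIVAFLLIV" + "QNRKDEST"

scores = hydropathy_windows(peptide)
best = max(range(len(scores)), key=scores.__getitem__)
print(f"best window starts at residue {best}, mean hydropathy {scores[best]:.2f}")
# prints: best window starts at residue 8, mean hydropathy 3.72
```

The highest-scoring window is exactly the hydrophobic core, the behavior a hydropathy plot relies on to flag the four membrane-spanning segments reported for TMEM255A.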
TMEM255A is predicted to be most abundantly expressed in nerve, brain, testis, ovary, thymus and kidney. The protein is expressed in a variety of tissues, but at relatively moderate levels. [ 27 ] [ 28 ] [ 29 ]
Both the 5' and 3' untranslated regions (UTRs) are predicted to consist of several stem-loops. [ 30 ] The 3' UTR also contains a conserved miRNA target site (amino acids 22-29). [ 31 ] Phosphorylation and glycosylation sites have also been predicted in TMEM255A. [ 32 ] [ 33 ]
Affinity capture mass spectrometry experiments suggest that TMEM255A interacts with ten different proteins: Ankyrin repeat domain 13D ( ANKRD13D ), Collagen beta (1-O) galactosyltransferase 2 (COLGALT2), Grancalcin ( GCA ), Itchy E3 ubiquitin protein ligase (ITCH), Potassium channel tetramerization domain containing 2 (KCTD2), Neural precursor cell expressed developmentally down-regulated 4 ( NEDD4 ), SEC24 family member B ( SEC24D ), Ubiquitin associated and SH3 domain containing B (UBASH3D), and WW domain containing E3 ubiquitin protein ligase 1 and 2 ( WWP1 , WWP2). Most of these are involved in ubiquitination, transcription regulation and protein degradation. [ 34 ]
TMEM255A is predicted to be highly expressed in peroxisome proliferator-activated receptor γ coactivator 1α-upregulated glioblastoma multiforme cells, although the specific function of the gene has not yet been established. [ 35 ] Ongoing research is investigating whether TMEM255A could be used in personalized immunotherapy . [ 36 ]
There is one known paralog of TMEM255A, called TMEM255B, which is found on chromosome 13 (position 13q34). [ 37 ] TMEM255A is found only in the kingdom Animalia , and its most distant homologs are found in invertebrates (e.g. Saccoglossus kowalevskii ). | https://en.wikipedia.org/wiki/Transmembrane_protein_255A
Transmetalation (alt. spelling: transmetallation) is a type of organometallic reaction that involves the transfer of ligands from one metal to another. It has the general form: M 1 –R + M 2 –R′ → M 1 –R′ + M 2 –R
where R and R′ can be, but are not limited to, an alkyl , aryl , alkynyl , allyl , halogen , or pseudohalogen group. The reaction is usually an irreversible process due to thermodynamic and kinetic reasons. Thermodynamics will favor the reaction based on the electronegativities of the metals and kinetics will favor the reaction if there are empty orbitals on both metals. [ 1 ] There are different types of transmetalation including redox-transmetalation and redox-transmetalation/ligand exchange. During transmetalation the metal-carbon bond is activated, leading to the formation of new metal-carbon bonds. [ 2 ] Transmetalation is commonly used in catalysis , synthesis of main group complexes, and synthesis of transition metal complexes.
There are two main types of transmetalation, redox-transmetalation (RT) and redox-transmetalation/ligand-exchange (RTLE). Below, M 1 is usually a 4d or 5d transition metal and M 2 is usually a main group or 3d transition metal. By looking at the electronegativities of the metals and ligands, one can predict whether the RT or RTLE reaction will proceed and what products the reaction will yield. For example, one can predict that the addition of 3 HgPh 2 to 2 Al will yield 3 Hg and 2 AlPh 3 because Hg is a more electronegative element than Al.
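As a toy illustration of this electronegativity argument (not part of the original article), one can tabulate Pauling electronegativities and predict the direction of ligand transfer; the helper function below is purely illustrative.

```python
# Toy sketch: predicting the direction of redox-transmetalation from
# Pauling electronegativities (standard tabulated values).
PAULING = {"Hg": 2.00, "Al": 1.61, "Sn": 1.96, "Bi": 2.02}

def rt_favored(donor_metal, acceptor_metal):
    """Ligand transfer from the more electronegative metal to the less
    electronegative one is thermodynamically favored."""
    return PAULING[donor_metal] > PAULING[acceptor_metal]

# 3 HgPh2 + 2 Al -> 3 Hg + 2 AlPh3: phenyl groups move from Hg to Al
print(rt_favored("Hg", "Al"))  # True
```

The same comparison rationalizes why mercury and bismuth reagents behave similarly in these reactions: their Pauling electronegativities are nearly equal.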
In redox -transmetalation a ligand is transferred from one metal to the other through an intermolecular mechanism. During the reaction one of the metal centers is oxidized and the other is reduced. The electronegativities of the metals and ligands are what drive the reaction forward. If M 1 is more electronegative than M 2 , it is thermodynamically favorable for the R group to coordinate to the less electronegative M 2 .
In redox-transmetalation/ligand exchange the ligands of two metal complexes switch places with each other, bonding with the other metal center. The R ligand can be an alkyl, aryl, alkynyl, or allyl group and the X ligand can be a halogen, pseudo-halogen, alkyl, or aryl group. The reaction can proceed by two possible intermediate steps. The first is an associative intermediate, where the R and X ligands bridge the two metals, stabilizing the transition state . The second and less common intermediate is the formation of a cation where R is bridging the two metals and X is anionic. The RTLE reaction proceeds in a concerted manner. Like in RT reactions, the reaction is driven by electronegativity values. The X ligand is attracted to highly electropositive metals. If M 1 is a more electropositive metal than M 2 , it is thermodynamically favorable for the exchange of the R and X ligands to occur.
Transmetalation is often used as a step in the catalytic cycles of cross-coupling reactions. Some of the cross-coupling reactions that include a transmetalation step are Stille cross-coupling , Suzuki cross-coupling , Sonogashira cross-coupling , and Negishi cross-coupling . The most useful cross-coupling catalysts tend to be ones that contain palladium. Cross-coupling reactions have the general form of R′–X + M–R → R′–R + M–X and are used to form C–C bonds. R and R′ can be any carbon fragment. The identity of the metal, M, depends on which cross-coupling reaction is being used. Stille reactions use tin, Suzuki reactions use boron, Sonogashira reactions use copper, and Negishi reactions use zinc. The transmetalation step in palladium catalyzed reactions involve the addition of an R–M compound to produce an R′–Pd–R compound. Cross-coupling reactions have a wide range of applications in synthetic chemistry including the area of medicinal chemistry . The Stille reaction has been used to make an antitumor agent, (±)- epi -jatrophone; [ 3 ] the Suzuki reaction has been used to make an antitumor agent, oximidine II ; [ 4 ] the Sonogashira reaction has been used to make an anticancer drug, eniluracil; [ 5 ] and the Negishi reaction has been used to make the carotenoid β-carotene via a transmetalation cascade. [ 6 ]
Lanthanide organometallic complexes have been synthesized by RT and RTLE. Lanthanides are very electropositive elements.
Organomercurials, such as HgPh 2 , are common kinetically inert RT and RTLE reagents that, unlike organolithiums and Grignard reagents , allow functionalized derivatives to be synthesized. [ 7 ] Diarylmercurials are often used to synthesize lanthanide organometallic complexes. Hg(C 6 F 5 ) 2 is a better RT reagent to use with lanthanides than HgPh 2 because it does not require a step to activate the metal. [ 8 ] However, phenyl-substituted lanthanide complexes are more thermally stable than the pentafluorophenyl complexes. The use of HgPh 2 led to the synthesis of a ytterbium complex with different oxidation states on the two Yb atoms: [ 9 ]
In the Ln(C 6 F 5 ) 2 complexes, where Ln = Yb, Eu, or Sm, the Ln–C bonds are very reactive, making them useful in RTLE reactions. Protic substrates have been used as reactants with the Ln(C 6 F 5 ) 2 complex as shown: Ln(C 6 F 5 ) 2 + 2LH → Ln(L) 2 + 2C 6 F 5 H. It is possible to avoid the challenges of working with the unstable Ln(C 6 F 5 ) 2 complex by forming it in situ by the following reaction:
Organotins are also kinetically inert RT and RTLE reagents that have been used in a variety of organometallic reactions. They have applications to the synthesis of lanthanide complexes, such as in the following reaction: [ 10 ]
RT can be used to synthesize actinide complexes. RT has been used to synthesize uranium halides using uranium metal and mercury halides as shown:
This actinide RT reaction can be done with multiple mercury compounds to coordinate ligands other than halogens to the metal:
Alkaline earth metal complexes have been synthesized by RTLE, employing the same methodology used in synthesizing lanthanide complexes. The use of diphenylmercury in alkaline-earth metal reactions leads to the production of elemental mercury. The handling and disposal of elemental mercury is challenging due to its toxicity to humans and the environment. This led to the desire for an alternative RTLE reagent that would be less toxic and still very effective. Triphenylbismuth, BiPh 3 , was discovered to be a suitable alternative. [ 12 ] Mercury and bismuth have similar electronegativity values and behave similarly in RTLE reactions. BiPh 3 has been used to synthesize alkaline-earth metal amides and alkaline-earth metal cyclopentadienides . The difference between HgPh 2 and BiPh 3 in these syntheses was that the reaction time was longer when using BiPh 3 . | https://en.wikipedia.org/wiki/Transmetalation |
Transmethylation is a biologically important organic chemical reaction in which a methyl group is transferred from one compound to another.
An example of transmethylation is the recovery of methionine from homocysteine . In order to sustain sufficient reaction rates during metabolic stress, this reaction requires adequate levels of vitamin B 12 and folate . Methyl tetrahydrofolate delivers methyl groups to form the active methyl form of vitamin B 12 that is required for methylation of homocysteine. Deficiencies of vitamin B 12 or folate cause increased levels of circulating homocysteine. Elevated homocysteine is a risk factor for cardiovascular disease and is linked to the metabolic syndrome (insulin insensitivity). [ 1 ]
Transmethylation is decreased sometimes in parents of children with autism . [ 2 ]
| https://en.wikipedia.org/wiki/Transmethylation
Transmissibility is the ratio of output to input .
It is defined as the ratio of the force transmitted to the force applied. Transmitted force implies the one which is being transmitted to the foundation or to the body of a particular system. Applied force is the external agent that cause the force to be generated in the first place and be transmitted.
Transmissibility: T = output input {\displaystyle T={\frac {\text{output}}{\text{input}}}}
T > 1 {\displaystyle T>1} means amplification and maximum amplification occurs when forcing frequency ( f f {\displaystyle f_{f}} ) and natural frequency ( f n {\displaystyle f_{n}} ) of the system coincide.
There is no unit designation for transmissibility, although it may sometimes be referred to as the Q factor .
The transmissibility is used in the calculation of passive isolation efficiency.
The lower the transmissibility, the better the damping or isolation system.
T < 1 {\displaystyle T<1} is desirable, T = 1 {\displaystyle T=1} means the system acts as a rigid body, and T > 1 {\displaystyle T>1} is undesirable.
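The article does not state a formula for transmissibility. For the standard textbook case of a damped single-degree-of-freedom system under harmonic forcing (an assumption, not given above), it depends on the frequency ratio r = f f / f n and the damping ratio ζ; a minimal sketch:

```python
import math

def transmissibility(r, zeta):
    """Standard SDOF force transmissibility (assumed model, not from the
    article): r = f_f / f_n is the frequency ratio, zeta the damping ratio."""
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return math.sqrt(num / den)

print(transmissibility(1.0, 0.1))  # near resonance: amplification, T > 1
print(transmissibility(3.0, 0.1))  # well above resonance: isolation, T < 1
```

In this model the crossover T = 1 occurs at r = √2 for any damping ratio, which is why isolation mounts are designed so that the forcing frequency lies well above the natural frequency.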
| https://en.wikipedia.org/wiki/Transmissibility_(vibration)
Transmission Kikuchi Diffraction ( TKD ), also sometimes called transmission electron backscatter diffraction ( t-EBSD ), is a method for orientation mapping at the nanoscale. It is used for analysing the microstructures of thin transmission electron microscopy (TEM) specimens in the scanning electron microscope (SEM). This technique has been widely utilised in the characterization of nano-crystalline materials, including oxides, superconductors, and metallic alloys.
TKD offers improved spatial resolution, enabling effective characterization of nanocrystalline materials and heavily deformed samples where high dislocation densities can prevent successful characterization using conventional Electron backscatter diffraction . Many studies have reported sub-10 nm resolution using TKD.
The main difference between diffraction spots and Kikuchi bands is that in TEM, discrete diffraction spots arise from coherent scattering of the incident beam, while the formation of Kikuchi bands is described as a two-step process consisting of incoherent scattering of the primary beam followed by coherent scattering of these forward-scattered electrons. TKD has also been applied to analyse fine-grained ultramylonite peridotite samples in a scanning electron microscope. TKD samples can be prepared with the standard methods used for transmission electron microscopy (TEM). [ 1 ]
Transmission Kikuchi diffraction (TKD or t-EBSD [ 2 ] ) is an Electron backscatter diffraction (EBSD) technique used to analyse the crystallographic orientation and microstructure of materials at high spatial resolution. [ 3 ] It is a variation of convergent-beam electron diffraction , which was introduced around the 1970s, and has since become increasingly popular in materials science research, especially for studying materials at the nanoscale. [ 4 ]
In TKD, a thin foil sample is prepared and placed perpendicular to the electron beam of a scanning electron microscope . The electron beam is then focused on a small spot on the sample, and the crystal lattice of the sample diffracts the transmitted electrons . The diffraction pattern is then collected by a detector and analysed to determine the crystallographic orientation and microstructure of the sample. [ 5 ]
One of the key advantages of TKD is its high spatial resolution that can reach a few nanometres. This is achieved by using a small electron beam spot size, typically less than 10 nanometres in diameter, and by collecting the transmitted electrons with a small-angle annular dark-field detector (STEM-ADF) in a scanning transmission electron microscope (STEM). Another advantage of TKD is its high sensitivity to local variations in crystallographic orientation. This is because the transmitted electrons in TKD are diffracted at very small angles, which makes the diffraction pattern highly sensitive to local variations in the crystal lattice. [ 4 ]
TKD can also be used to study nano-sized materials, such as nanoparticles and thin films. [ 6 ] Thin foil samples can be prepared for TKD using a Focused ion beam (FIB) or ion milling machine . However, such machines are expensive and their operation requires particular skills and training. Additionally, the diffraction patterns obtained from TKD can be more complex to interpret than those obtained from conventional EBSD techniques due to the complex geometry of the diffracted electrons. [ 5 ] [ 7 ]
On-axis and off-axis TKD methods differ in the sample's orientation with respect to the electron beam. [ 5 ] In on-axis TKD, the sample is oriented so that the incident electron beam is nearly perpendicular to the sample surface. This results in a diffraction pattern that is nearly centred around the transmitted beam direction. [ 9 ] On-axis TKD is typically used for analysing samples with low lattice strain and high crystallographic symmetry, such as single crystals or large grains. [ 7 ] [ 5 ]
In off-axis TKD, the sample is tilted with respect to the incident electron beam, typically at an angle of several degrees. This results in a diffraction pattern that is shifted away from the transmitted beam direction. Off-axis TKD is typically used for analysing samples with high lattice strain and/or low crystallographic symmetry, such as nano-crystalline materials or materials with defects. Off-axis TKD is often preferred for materials science research because it provides more information about the crystallographic orientation and microstructure of the sample, especially in samples with a high density of defects [ 10 ] or a high degree of lattice strain. [ 11 ] [ 12 ] However, on-axis TKD can still be useful for studying samples with high crystallographic symmetry or for verifying the crystallographic orientation of a sample before performing off-axis TKD. [ 5 ] The on-axis technique can speed up acquisition by more than 20 times, and a low scattering angle setup also gives rise to higher quality patterns. [ 13 ]
EBSD resolution is influenced by multiple factors including the beam size, electron accelerating voltage, the material's atomic mass and the specimen's thickness. Out of these variables, sample thickness has the greatest effect on the pattern quality and resolution of the image. An increase in the sample thickness broadens the beam, thus reducing the lateral spatial resolution. [ 9 ] [ 6 ] [ 14 ] | https://en.wikipedia.org/wiki/Transmission_Kikuchi_diffraction |
The transmission coefficient is used in physics and electrical engineering when wave propagation in a medium containing discontinuities is considered. A transmission coefficient describes the amplitude, intensity, or total power of a transmitted wave relative to an incident wave.
Different fields of application have different definitions for the term. All the meanings are very similar in concept: In chemistry , the transmission coefficient refers to a chemical reaction overcoming a potential barrier; in optics and telecommunications it is the amplitude of a wave transmitted through a medium or conductor to that of the incident wave; in quantum mechanics it is used to describe the behavior of waves incident on a barrier, in a way similar to optics and telecommunications .
Although conceptually the same, the details in each field differ, and in some cases the terms are not an exact analogy.
In chemistry , in particular in transition state theory , there appears a certain "transmission coefficient" for overcoming a potential barrier. It is (often) taken to be unity for monomolecular reactions. It appears in the Eyring equation .
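The role of the transmission coefficient κ in the Eyring equation can be illustrated numerically; the sketch below uses an arbitrary example barrier height rather than data from any particular reaction.

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
H = 6.62607015e-34   # Planck constant, J*s
R = 8.314462618      # gas constant, J/(mol*K)

def eyring_rate(delta_g, temp, kappa=1.0):
    """Eyring equation: k = kappa * (k_B T / h) * exp(-dG / (R T)).
    kappa is the transmission coefficient, often taken as unity."""
    return kappa * (K_B * temp / H) * math.exp(-delta_g / (R * temp))

k = eyring_rate(delta_g=80e3, temp=298.15)  # 80 kJ/mol barrier (example value)
print(k)  # first-order rate constant in s^-1
```

With κ = 1 this reduces to the usual Eyring rate; any κ < 1 simply scales the rate down linearly.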
In optics , transmission is the property of a substance to permit the passage of light, with some or none of the incident light being absorbed in the process. If some light is absorbed by the substance, then the transmitted light will be a combination of the wavelengths of the light that was transmitted and not absorbed. For example, a blue light filter appears blue because it absorbs red and green wavelengths. If white light is shone through the filter, the light transmitted also appears blue because of the absorption of the red and green wavelengths.
The transmission coefficient is a measure of how much of an electromagnetic wave ( light ) passes through a surface or an optical element. Transmission coefficients can be calculated for either the amplitude or the intensity of the wave. Either is calculated by taking the ratio of the value after the surface or element to the value before. The transmission coefficient for total power is generally the same as the coefficient for intensity.
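As a concrete sketch (using the standard Fresnel normal-incidence formulas, which the article does not spell out), the amplitude and intensity coefficients at a boundary between refractive indices n 1 and n 2 can be computed as follows:

```python
def normal_incidence(n1, n2):
    """Standard Fresnel coefficients at normal incidence (textbook result)."""
    t = 2 * n1 / (n1 + n2)      # amplitude transmission coefficient
    r = (n1 - n2) / (n1 + n2)   # amplitude reflection coefficient
    T = (n2 / n1) * t ** 2      # intensity (power) transmission
    R = r ** 2                  # intensity (power) reflection
    return t, T, R

t, T, R = normal_incidence(1.0, 1.5)  # air -> glass
print(T, R)  # ~0.96 transmitted, ~0.04 reflected; T + R = 1
```

Note that the intensity transmission is not simply t², because the wave speeds differ in the two media; the n 2 /n 1 factor restores energy conservation.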
In telecommunication , the transmission coefficient is the ratio of the amplitude of the complex transmitted wave to that of the incident wave at a discontinuity in the transmission line . [ 1 ]
Consider a wave travelling through a transmission line with a step in impedance from Z A {\displaystyle Z_{\mathrm {A} }} to Z B {\displaystyle Z_{\mathrm {B} }} . When the wave transitions through the impedance step, a portion of the wave Γ {\displaystyle \Gamma } will be reflected back to the source. Because the voltage on a transmission line is always the sum of the forward and reflected waves at that point, if the incident wave amplitude is 1, and the reflected wave is Γ {\displaystyle \Gamma } , then the amplitude of the forward wave must be sum of the two waves or ( 1 + Γ ) {\displaystyle (1+\Gamma )} .
The value for Γ {\displaystyle \Gamma } is uniquely determined from first principles by noting that the incident power on the discontinuity must equal the sum of the power in the reflected and transmitted waves: 1 / Z A = Γ 2 / Z A + ( 1 + Γ ) 2 / Z B {\displaystyle {\frac {1}{Z_{\mathrm {A} }}}={\frac {\Gamma ^{2}}{Z_{\mathrm {A} }}}+{\frac {(1+\Gamma )^{2}}{Z_{\mathrm {B} }}}}
Solving the quadratic for Γ {\displaystyle \Gamma } leads both to the reflection coefficient : Γ = ( Z B − Z A ) / ( Z B + Z A ) {\displaystyle \Gamma ={\frac {Z_{\mathrm {B} }-Z_{\mathrm {A} }}{Z_{\mathrm {B} }+Z_{\mathrm {A} }}}}
and to the transmission coefficient : T = 1 + Γ = 2 Z B / ( Z B + Z A ) {\displaystyle T=1+\Gamma ={\frac {2Z_{\mathrm {B} }}{Z_{\mathrm {B} }+Z_{\mathrm {A} }}}}
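A quick numerical check of this power balance (a sketch; Γ = (Z B − Z A )/(Z B + Z A ) follows from the power-conservation argument above, and the 50 Ω and 75 Ω values are arbitrary examples):

```python
def step_coefficients(z_a, z_b):
    """Reflection coefficient and transmitted-wave amplitude at an
    impedance step, from the power-conservation argument in the text."""
    gamma = (z_b - z_a) / (z_b + z_a)  # reflection coefficient
    tau = 1 + gamma                    # transmitted-wave amplitude (1 + gamma)
    return gamma, tau

gamma, tau = step_coefficients(50.0, 75.0)
p_refl = gamma ** 2                # fraction of incident power reflected
p_trans = tau ** 2 * 50.0 / 75.0   # fraction transmitted into the Z_B line
print(gamma, tau, p_refl + p_trans)  # power fractions sum to 1
```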
The probability that a portion of a communications system , such as a line, circuit , channel or trunk , will meet specified performance criteria is also sometimes called the "transmission coefficient" of that portion of the system. [ 1 ] The value of the transmission coefficient is inversely related to the quality of the line, circuit, channel or trunk.
In non-relativistic quantum mechanics , the transmission coefficient and related reflection coefficient are used to describe the behavior of waves incident on a barrier. [ 2 ] The transmission coefficient represents the probability flux of the transmitted wave relative to that of the incident wave. This coefficient is often used to describe the probability of a particle tunneling through a barrier.
The transmission coefficient is defined in terms of the incident and transmitted probability current density J according to: T = | J → t r a n s ⋅ n ^ | / | J → i n c ⋅ n ^ | {\displaystyle T={\frac {|{\vec {J}}_{\mathrm {trans} }\cdot {\hat {n}}|}{|{\vec {J}}_{\mathrm {inc} }\cdot {\hat {n}}|}}}
where J → i n c {\displaystyle {\vec {J}}_{\mathrm {inc} }} is the probability current in the wave incident upon the barrier with normal unit vector n ^ {\displaystyle {\hat {n}}} and J → t r a n s {\displaystyle {\vec {J}}_{\mathrm {trans} }} is the probability current in the wave moving away from the barrier on the other side.
The reflection coefficient R is defined analogously, using the probability current J → r e f l {\displaystyle {\vec {J}}_{\mathrm {refl} }} in the reflected wave: R = | J → r e f l ⋅ n ^ | / | J → i n c ⋅ n ^ | {\displaystyle R={\frac {|{\vec {J}}_{\mathrm {refl} }\cdot {\hat {n}}|}{|{\vec {J}}_{\mathrm {inc} }\cdot {\hat {n}}|}}}
The law of total probability requires that T + R = 1 {\displaystyle T+R=1} , which in one dimension reduces to the fact that the sum of the transmitted and reflected currents is equal in magnitude to the incident current.
For sample calculations, see rectangular potential barrier .
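For a rectangular barrier of height V 0 and width L with E < V 0 , the well-known closed form (a standard textbook result quoted here for illustration, not derived in the article) is T = [1 + V 0 ² sinh²(κL) / (4E(V 0 − E))]⁻¹ with κ = √(2m(V 0 − E))/ℏ. A sketch in natural units:

```python
import math

def rect_barrier_T(E, V0, L, m=1.0, hbar=1.0):
    """Exact transmission coefficient for a rectangular barrier, E < V0.
    Uses natural units (hbar = m = 1) by default."""
    kappa = math.sqrt(2 * m * (V0 - E)) / hbar
    return 1.0 / (1.0 + (V0 ** 2 * math.sinh(kappa * L) ** 2)
                  / (4 * E * (V0 - E)))

print(rect_barrier_T(E=0.5, V0=1.0, L=1.0))  # small but nonzero: tunnelling
print(rect_barrier_T(E=0.5, V0=1.0, L=5.0))  # thicker barrier, far smaller T
```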
Using the WKB approximation, one can obtain a tunnelling coefficient that looks like T ≈ exp ( − 2 ∫ x 1 x 2 2 m ( V ( x ) − E ) / ℏ 2 d x ) {\displaystyle T\approx \exp \left(-2\int _{x_{1}}^{x_{2}}{\sqrt {\frac {2m\left(V(x)-E\right)}{\hbar ^{2}}}}\,dx\right)}
where x 1 , x 2 {\displaystyle x_{1},\,x_{2}} are the two classical turning points for the potential barrier. [ 2 ] [ failed verification ] In the classical limit of all other physical parameters much larger than the reduced Planck constant , denoted ℏ → 0 {\displaystyle \hbar \rightarrow 0} , the transmission coefficient goes to zero. This classical limit would have failed in the situation of a square potential .
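The WKB exponent can be evaluated numerically between the turning points; the sketch below uses an inverted-parabola barrier (an arbitrary example) with simple trapezoidal integration in natural units (ℏ = m = 1):

```python
import math

def wkb_T(E, V, x1, x2, n=10_000, m=1.0, hbar=1.0):
    """Leading-order WKB tunnelling coefficient T ~ exp(-2 * integral),
    integrating sqrt(2m(V(x) - E)) / hbar between the turning points x1, x2
    by the trapezoidal rule."""
    h = (x2 - x1) / n
    total = 0.0
    for k in range(n + 1):
        x = x1 + k * h
        integrand = math.sqrt(max(V(x) - E, 0.0) * 2 * m) / hbar
        total += integrand * (0.5 if k in (0, n) else 1.0)
    return math.exp(-2 * total * h)

# Inverted parabola V(x) = V0 - x^2 with V0 = 2, E = 1: turning points at +/-1
V0, E = 2.0, 1.0
T = wkb_T(E, lambda x: V0 - x * x, -1.0, 1.0)
print(T)  # between 0 and 1; taller/wider barriers give smaller T
```

For this barrier the integral has the closed form √2·π/2 per unit, so T = exp(−√2 π), which the numerical result reproduces closely.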
If the transmission coefficient is much less than 1, it can be approximated with the following formula:
where L = x 2 − x 1 {\displaystyle L=x_{2}-x_{1}} is the length of the barrier potential. | https://en.wikipedia.org/wiki/Transmission_coefficient |
In power engineering , transmission congestion occurs when overloaded transmission lines in an electrical grid are unable to carry additional electricity flow due to the risk of overheating. During grid congestion, the transmission system operator (TSO) has to direct the providers to adjust their dispatch levels to accommodate the constraint. [ 1 ] In an electricity market a power plant may be able to produce electricity at a competitive price but cannot transmit the power to a willing buyer. [ 2 ] Congestion increases the electricity prices for some customers. [ 3 ]
There is no universally accepted definition of the transmission congestion. [ 2 ] Congestion is not an event, so it is frequently not possible to pinpoint its place and time (in this respect it is similar to traffic congestion [ 4 ] ). Regulators define congestion as a condition that prevents market transactions from being completed, [ 2 ] while a transmission system operator sees it as inability to maintain the security of the power system operation with the power flow scheduled for the grid. [ 3 ]
A congestion is a symptom of a constraint or a combination of constraints in a transmission system, [ 3 ] usually the limits on physical electricity flow are used to prevent the overheating, unacceptable voltage levels , and loss of system stability. Congestion can be permanent, an effect of the system configuration, or temporary, due to a fault in the transmission equipment. [ 5 ]
Avoiding the congestion is essential for a competitive electricity market and is "one of the toughest problems" of its design. The goal is to ensure that a power flow as defined by the wholesale market result does not violate the constraints during the normal operation of the grid and in the case of failure of any one particular component (so called n-1 criterion ). [ 6 ]
The existing markets use a range of approaches to solve the problem. On one end of this range is "uniform pricing" that ignores the transmission constraints altogether and lets the market find a single price for all the locations ("nodes"). On the other end " locational marginal pricing " accommodates all the constraints by defining a separate pricing for each node (thus another name, "nodal pricing"). [ 6 ]
The uniform pricing has an advantage of transparent market design and quick clearing, so auctions can happen frequently, typically they start a day ahead of the delivery ("day-ahead" auction) and continue until the delivery (so called "intra-day" auctions). [ citation needed ] However, the market result might violate the congestion constraints and thus cannot be implemented at the time of delivery (in "real-time"). If this is the case, the TSO intervenes and uses so called system redispatch by changing the schedules of the generators in a way that the load can be served. [ 6 ] Redispatch payments are usually negotiated in advance and providers are paid as they bid in a "command and control" fashion, without creating a market. [ 7 ]
With nodal pricing all grid constraints are accounted for during the clearing and different prices are set for different nodes, this typically requires the independent system operator (ISO) to manage the market clearing. [ 8 ] The drawback of the nodal pricing is that the local markets might not have enough participants to efficiently function. In particular, in the load pockets (areas of the grid with concentrated load and lack of tie lines to the rest of the system) a large generator might exhibit significant market power , forcing the price for this node to be directly regulated on a cost basis. [ citation needed ]
The zonal pricing represents a compromise where the grid is split into relatively large zones, electricity price within each zone is uniform (and thus intra-zone congestion need to be resolved with a redispatch), but the inter-zone constraints are accounted for during the market clearing via different prices for different zones. [ 9 ]
Under "discriminatory pricing", providers whose bids are accepted by the system operator are paid the amount of their bid ("pay-as-offered", [ 10 ] "pay-as-bid"). [ 11 ] Discriminatory pricing is also used in a market-based redispatch scenario ( counter-trading ). [ 7 ]
To avoid congestion, it might be necessary to deny some transmission transactions. One way to do it is through the transmission rights . The owner of a transmission right is entitled to transport a predefined amount of electric power from a source location on the network to the destination. There are two types of transmission rights: [ 12 ] physical transmission rights (PTRs), which entitle the holder to actually transmit the power over the constrained link, and financial transmission rights (FTRs), which entitle the holder to the congestion rent that arises when the prices at the two locations diverge.
In a simple example of FTR operation, [ 13 ] locations A and B are connected with a 1000 MW line. Location A has a load of 200 MW and two generation companies: GA1, with a marginal cost of $10/MW, and GA2, with a marginal cost of $15/MW.
Location B has a load of 2500 MW and a single generator GB with 2000 MW capacity and marginal cost of $30/MW.
The electricity market with locational pricing will fully engage the 1000 MW line and settle on a price of $15/MW at A (set by GA2) and $30/MW at B (set by GB).
GA1, standing to gain most if the links between A and B are improved, decides to build another 1000-MW transmission line. Now there is no congestion, and the market will settle at the same price in both A and B ($30, since GA1 and GA2 cannot satisfy all demand, and the price will be determined by the cost of GB). GA1 will hold the FTR for 1000 MW, but will not collect anything from this right, instead pocketing the difference between its $10 cost and $30 price.
A new plant, GA3, is constructed in A with capacity of 1000 MW and marginal cost of $9/MW. Now the pricing in A is $15 again (determined by GA2), pricing at B is still $30. Although the line built by GA1 might now be effectively used by GA3, GA1 as a holder of FTR receives the congestion rent for electricity transmitted over the line that GA1 had invested into. The arrangement works as if GA1 had leased the line to GA3 for the full value of the line, so FTRs are similar to tradable securities , but with automated trading. [ 13 ]
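The arithmetic behind the example can be sketched numerically (the $15 and $30 nodal prices are those implied by the marginal generators described above):

```python
# Congestion rent collected by the FTR holder in the worked example:
# nodal prices diverge only while the line is congested.
flow_mw = 1000   # line capacity covered by the FTR
price_a = 15.0   # $/MW at node A (set by GA2)
price_b = 30.0   # $/MW at node B (set by GB)

congestion_rent = flow_mw * (price_b - price_a)
print(congestion_rent)  # 15000.0, in $ per hour while congestion persists

# After the second line is built there is no congestion, prices converge,
# and the FTR pays out nothing:
print(flow_mw * (30.0 - 30.0))  # 0.0
```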
Some transmission system operators offer the transmission rights owner to collect the transmission fees for them. For example, in California the California Independent System Operator (CAISO) offers the PTR owners a scheme where the owners hand over the operational control of their infrastructure to CAISO in exchange for the "Transmission Revenue Requirement" (TRR) that recoups the owner's costs. CAISO in turn collects the Transmission Access Charge (TAC) from the utilities based on the gross load , [ 14 ] and utilities bill TAC to their customers. | https://en.wikipedia.org/wiki/Transmission_congestion |
In a network based on packet switching , transmission delay (or store-and-forward delay , also known as packetization delay or serialization delay ) is the amount of time required to push all the packet's bits into the wire. In other words, this is the delay caused by the data-rate of the link.
Transmission delay is a function of the packet's length and has nothing to do with the distance between the two nodes. This delay is proportional to the packet's length in bits. It is given by the following formula: d t = N / R {\displaystyle d_{t}=N/R} where d t {\displaystyle d_{t}} is the transmission delay in seconds, N is the number of bits, and R is the rate of transmission in bits per second.
Most packet switched networks use store-and-forward transmission at the input of the link. A switch using store-and-forward transmission will receive (save) the entire packet to the buffer and check it for CRC errors or other problems before sending the first bit of the packet into the outbound link. Thus, store-and-forward packet switches introduce a store-and-forward delay at the input to each link along the packet's route.
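A minimal sketch of the formula and of how store-and-forward delay accumulates over several hops (the packet size and link speed are arbitrary example values):

```python
def transmission_delay(packet_bits, rate_bps):
    """d_t = N / R: time to push all N bits onto the wire at R bit/s."""
    return packet_bits / rate_bps

# 1500-byte packet on a 100 Mbit/s link
d = transmission_delay(1500 * 8, 100e6)
print(d)  # 0.00012 s, i.e. 0.12 ms per link

# Store-and-forward: the delay is paid once per link along the route
hops = 3
print(hops * d)  # total serialization delay over 3 store-and-forward links
```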
| https://en.wikipedia.org/wiki/Transmission_delay
In genetics , the transmission disequilibrium test ( TDT ) was proposed by Spielman, McGinnis and Ewens (1993) [ 1 ] as a family-based association test for the presence of genetic linkage between a genetic marker and a trait. It is an application of McNemar's test .
A distinctive feature of the TDT is that it will detect genetic linkage only in the presence of genetic association .
While genetic association can be caused by population structure, genetic linkage will not be affected, which makes the TDT robust to the presence of population structure .
We first describe the TDT in the case where families consist of trios (two parents and one affected child). Our description follows the notations used in Spielman, McGinnis & Ewens (1993). [ 1 ]
The TDT measures the over-transmission of an allele from heterozygous parents to affected offspring.
The n affected offspring have 2 n parents. Each parental genotype can be represented by the transmitted and the non-transmitted allele, M 1 or M 2 , at some genetic locus. Summarizing the data in a 2 by 2 table gives four counts: a (M 1 transmitted, M 1 non-transmitted), b (M 1 transmitted, M 2 non-transmitted), c (M 2 transmitted, M 1 non-transmitted) and d (M 2 transmitted, M 2 non-transmitted).
The derivation of the TDT shows that one should only use the heterozygous parents (total number b + c ).
The TDT tests whether the proportions b /( b + c ) and c /( b + c ) are compatible with probabilities (0.5, 0.5) .
This hypothesis can be tested using a binomial (asymptotically chi-square) test with one degree of freedom:
χ 2 = [ b − ( b + c ) / 2 ] 2 ( b + c ) / 2 + [ c − ( b + c ) / 2 ] 2 ( b + c ) / 2 = ( b − c ) 2 b + c {\displaystyle \chi ^{2}={\frac {[b-(b+c)/2]^{2}}{(b+c)/2}}+{\frac {[c-(b+c)/2]^{2}}{(b+c)/2}}={\frac {(b-c)^{2}}{b+c}}}
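The statistic can be computed directly; the transmission counts below are made-up illustrative numbers, and the p-value uses the one-degree-of-freedom chi-square tail expressed via the complementary error function.

```python
import math

def tdt(b, c):
    """TDT statistic from heterozygous-parent transmission counts:
    b = (M1 transmitted, M2 not), c = (M2 transmitted, M1 not)."""
    chi2 = (b - c) ** 2 / (b + c)
    # chi-square with 1 df: p = P(X > chi2) = erfc(sqrt(chi2 / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

chi2, p = tdt(b=60, c=40)  # illustrative counts
print(chi2, p)             # chi2 = 4.0, p ~ 0.0455
```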
A derivation of the test consists of using a population genetics model to obtain the expected proportions for the quantities a , b , c , d in the table above. In particular, one can show that under nearly all disease models the expected proportion of b and c are identical. This result motivates the use of a binomial (asymptotically χ 2 ) test to test whether these proportions are equal.
On the other hand, one can also show that under such models the proportions a , b , c , d are not equal to the product of the marginals probabilities a + b 2 n , c + d 2 n {\displaystyle {\tfrac {a+b}{2n}},{\tfrac {c+d}{2n}}} and a + c 2 n , b + d 2 n . {\displaystyle {\tfrac {a+c}{2n}},{\tfrac {b+d}{2n}}.} A rewording of this statement would be that the type of the transmitted allele is not, in general, independent of the type of the non-transmitted allele. A consequence is that a χ 2 test for homogeneity/independence does not test the appropriate hypothesis, and thus, only heterozygous parents are included.
The TDT can be readily extended beyond the case of trios, for example to families with two affected children. We keep following the notations of Spielman, McGinnis & Ewens (1993). [ 1 ] Consider a total of h heterozygous parents. We use the fact that the transmissions to different children are independent. The information can then be summarized in three categories: i parents transmit M 1 to both affected children, j parents transmit M 2 to both children, and the remaining h − i − j parents transmit M 1 to one child and M 2 to the other.
Using the notations of the previous paragraph we have:
b = 2 i + ( h − i − j ) = h + i − j c = 2 j + ( h − i − j ) = h − i + j {\displaystyle {\begin{aligned}b&=2i+(h-i-j)=h+i-j\\c&=2j+(h-i-j)=h-i+j\end{aligned}}}
leading to the chi-squared test statistic:
χ t d t 2 = ( b − c ) 2 b + c = 2 ( i − j ) 2 h . {\displaystyle \chi _{tdt}^{2}={\frac {(b-c)^{2}}{b+c}}={\frac {2(i-j)^{2}}{h}}.}
The comparison with the more traditional (at least at the time when the TDT was proposed) linkage test proposed by Blackwelder and Elston 1985 [ 2 ] is informative.
The Blackwelder and Elston approach uses the total number of haplotypes identical by descent (mean haplotype sharing). This measure ignores the allelic state of a marker and simply compares the number of times a parent transmits the same allele to both affected children with the number of times a different allele is transmitted.
The test statistic is:
χ h s 2 = ( 2 i + 2 j − h ) 2 h . {\displaystyle \chi _{hs}^{2}={\frac {(2i+2j-h)^{2}}{h}}.}
Under the null hypothesis of no linkage the expected proportions of ( i , h − i − j , j ) are (0.25, 0.5, 0.25) . One can derive a simple chi-square statistic with 2 degrees of freedom:
χ t o t a l 2 = ( i − h / 4 ) 2 h / 4 + ( h − i − j − h / 2 ) 2 h / 2 + ( j − h / 4 ) 2 h / 4 = χ t d t 2 + χ h s 2 . {\displaystyle \chi _{total}^{2}={\frac {(i-h/4)^{2}}{h/4}}+{\frac {(h-i-j-h/2)^{2}}{h/2}}+{\frac {(j-h/4)^{2}}{h/4}}=\chi _{tdt}^{2}+\chi _{hs}^{2}.}
It clearly appears that the total statistic (with two degrees of freedom) is the sum of two independent components: one is the traditional linkage measure and the other is the TDT statistic.
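The decomposition can be checked numerically. A small sketch, using only the counts h, i, j (note that with b = h + i − j and c = h − i + j, the TDT component (b − c)²/(b + c) simplifies to 2(i − j)²/h; the example counts are made up):

```python
def tdt_component(h, i, j):
    # (b - c)**2 / (b + c) with b = h + i - j and c = h - i + j
    # simplifies to 2*(i - j)**2 / h, since b - c = 2(i - j)
    # and b + c = 2h.
    return 2 * (i - j) ** 2 / h

def hs_component(h, i, j):
    # Blackwelder-Elston haplotype-sharing component
    return (2 * i + 2 * j - h) ** 2 / h

def total_chi2(h, i, j):
    # 2-df goodness of fit of (i, h-i-j, j) against (h/4, h/2, h/4)
    e_outer, e_mid = h / 4, h / 2
    return ((i - e_outer) ** 2 / e_outer
            + (h - i - j - e_mid) ** 2 / e_mid
            + (j - e_outer) ** 2 / e_outer)

# e.g. h = 100 heterozygous parents, i = 40 transmit the allele to
# both affected children, j = 30 transmit it to neither:
parts = tdt_component(100, 40, 30) + hs_component(100, 40, 30)
# parts equals total_chi2(100, 40, 30)
```

For these counts the TDT component is 2.0, the sharing component is 16.0, and their sum matches the 2-df total of 18.0.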
More recently, Wittkowski KM, Liu X. (2002/2004) [ 3 ] proposed a modification to the TDT that can be more powerful under some alternatives, although the asymptotic properties under the null hypothesis are equivalent.
The motivating idea for this modification is the fact that, while the transmissions of the two alleles from parents to a child are independent, the effects of other filial genetic or environmental covariates on penetrance are the same for both alleles transmitted to the same child. This situation can be important if, for example, the genetic marker is linked to a disease locus with a strong selection against heterozygous individuals. This observation suggests shifting the statistical model from a set of independent transmissions to a set of independent children (see Sasieni (1997) [ 4 ] for the corresponding problem in case-control association tests). While this observation does not affect the distribution under the null hypothesis of no linkage, it makes it possible, for some disease models, to design a more powerful test.
In this modified TDT test the children are stratified by parental type and the modified test statistic becomes:
χ 2 = ( [ n P Q − n Q Q ] P Q ∼ Q Q + 2 × [ n P P − n Q Q ] P Q ∼ P Q + [ n P P − n P Q ] P P ∼ P Q ) 2 [ n P Q + n Q Q ] P Q ∼ Q Q + 4 × [ n P P + n Q Q ] P Q ∼ P Q + [ n P Q + n P P ] P P ∼ P Q {\displaystyle \chi ^{2}={\frac {{\bigl (}[n_{\rm {PQ}}-n_{\rm {QQ}}]_{\rm {PQ\sim QQ}}+2\times [n_{\rm {PP}}-n_{\rm {QQ}}]_{\rm {PQ\sim PQ}}+[n_{\rm {PP}}-n_{\rm {PQ}}]_{\rm {PP\sim PQ}}{\bigr )}^{2}}{[n_{\rm {PQ}}+n_{\rm {QQ}}]_{\rm {PQ\sim QQ}}+4\times [n_{\rm {PP}}+n_{\rm {QQ}}]_{\rm {PQ\sim PQ}}+[n_{\rm {PQ}}+n_{\rm {PP}}]_{\rm {PP\sim PQ}}}}}
where [ n P Q ] P Q ∼ Q Q {\displaystyle [n_{\rm {PQ}}]_{\rm {PQ\sim QQ}}} is the number of PQ children from parents with the PQ and QQ types.
Transmission electron microscopy DNA sequencing is a single-molecule sequencing technology that uses transmission electron microscopy techniques. The method was conceived and developed in the 1960s and 70s, [ 1 ] but lost favor when the extent of damage to the sample was recognized. [ 2 ]
In order for DNA to be clearly visualized under an electron microscope , it must be labeled with heavy atoms. In addition, specialized imaging techniques and aberration corrected optics are beneficial for obtaining the resolution required to image the labeled DNA molecule. In theory, transmission electron microscopy DNA sequencing could provide extremely long read lengths, but the issue of electron beam damage may still remain and the technology has not yet been commercially developed.
Only a few years after James Watson and Francis Crick deduced the structure of DNA , and nearly two decades before Frederick Sanger published the first method for rapid DNA sequencing , Richard Feynman , an American physicist, envisioned the electron microscope as the tool that would one day allow biologists to "see the order of bases in the DNA chain". [ 3 ] Feynman believed that if the electron microscope could be made powerful enough, then it would become possible to visualize the atomic structure of any and all chemical compounds, including DNA.
In 1970, Albert Crewe developed the high-angle annular dark-field (HAADF) imaging technique in a scanning transmission electron microscope . Using this technique, he visualized individual heavy atoms on thin amorphous carbon films. [ 4 ] In 2010 Krivanek and colleagues reported several technical improvements to the HAADF method, including a combination of aberration corrected electron optics and low accelerating voltage. The latter is crucial for imaging biological objects, as it reduces beam damage and increases the image contrast for light atoms. As a result, single atom substitutions in a boron nitride monolayer could be imaged. [ 5 ]
Despite the invention of a multitude of chemical and fluorescent sequencing technologies, electron microscopy is still being explored as a means of performing single-molecule DNA sequencing. For example, in 2012 a collaboration between scientists at Harvard University , the University of New Hampshire and ZS Genetics demonstrated the ability to read long sequences of DNA using the technique; [ 6 ] however, transmission electron microscopy DNA sequencing technology is still far from being commercially available. [ 7 ]
The electron microscope has the capacity to obtain a resolution of up to 100 pm, whereby microscopic biomolecules and structures such as viruses, ribosomes, proteins, lipids, small molecules and even single atoms can be observed. [ 8 ]
Although DNA is visible when observed with the electron microscope, the resolution of the image obtained is not high enough to allow for deciphering the sequence of the individual bases , i.e. , DNA sequencing . However, upon differential labeling of the DNA bases with heavy atoms or metals, it is possible to both visualize and distinguish between the individual bases. Therefore, electron microscopy in conjunction with differential heavy atom DNA labeling could be used to directly image the DNA in order to determine its sequence. [ 7 ] [ 9 ] [ 10 ] [ 11 ]
As in a standard polymerase chain reaction (PCR) , the double stranded DNA molecules to be sequenced must be denatured before the second strand can be synthesized with labeled nucleotides.
The elements that make up biological molecules ( C , H , N , O , P , S ) are too light (low atomic number, Z ) to be clearly visualized as individual atoms by transmission electron microscopy . To circumvent this problem, the DNA bases can be labeled with heavier atoms (higher Z). Each nucleotide is tagged with a characteristic heavy label, so that they can be distinguished in the transmission electron micrograph.
The DNA molecules must be stretched out on a thin, solid substrate so that the order of the labeled bases will be clearly visible on the electron micrograph. Molecular combing is a technique that utilizes the force of a receding air-water interface to extend DNA molecules, leaving them irreversibly bound to a silane layer once dry. [ 13 ] [ 14 ] This is one means by which alignment of the DNA on a solid substrate may be achieved.
Transmission electron microscopy (TEM) produces high magnification, high resolution images by passing a beam of electrons through a very thin sample. Whereas atomic resolution has been demonstrated with conventional TEM, further improvement in spatial resolution requires correcting the spherical and chromatic aberrations of the microscope lenses . This has only been possible in scanning transmission electron microscopy where the image is obtained by scanning the object with a finely focused electron beam, in a way similar to a cathode ray tube . However, the achieved improvement in resolution comes together with irradiation of the studied object by much higher beam intensities, the concomitant sample damage and the associated imaging artefacts. [ 15 ] Different imaging techniques are applied depending on whether the sample contains heavy or light atoms:
Dark and bright spots on the electron micrograph, corresponding to the differentially labeled DNA bases, are analyzed by computer software.
Transmission electron microscopy DNA sequencing is not yet commercially available, but the long read lengths that this technology may one day provide will make it useful in a variety of contexts.
When sequencing a genome, it must be broken down into pieces that are short enough to be sequenced in a single read. These reads must then be put back together like a jigsaw puzzle by aligning the regions that overlap between reads; this process is called de novo genome assembly . The longer the read length that a sequencing platform provides, the longer the overlapping regions, and the easier it is to assemble the genome. From a computational perspective, microfluidic Sanger sequencing is still the most effective way to sequence and assemble genomes for which no reference genome sequence exists. The relatively long read lengths provide substantial overlap between individual sequencing reads, which allows for greater statistical confidence in the assembly. In addition, long Sanger reads are able to span most regions of repetitive DNA sequence which otherwise confound sequence assembly by causing false alignments. However, de novo genome assembly by Sanger sequencing is extremely expensive and time-consuming. Second generation sequencing technologies , [ 19 ] while less expensive, are generally unfit for de novo genome assembly due to short read lengths. In general, third generation sequencing technologies, [ 11 ] including transmission electron microscopy DNA sequencing, aim to improve read length while maintaining low sequencing cost. Thus, as third generation sequencing technologies improve, rapid and inexpensive de novo genome assembly will become a reality.
A haplotype is a series of linked alleles that are inherited together on a single chromosome. DNA sequencing can be used to genotype all of the single nucleotide polymorphisms (SNPs) that constitute a haplotype. However, short DNA sequencing reads often cannot be phased; that is, heterozygous variants cannot be confidently assigned to the correct haplotype. In fact, haplotyping with short read DNA sequencing data requires very high coverage (average >50x coverage of each DNA base) to accurately identify SNPs, as well as additional sequence data from the parents so that Mendelian transmission can be used to estimate the haplotypes. [ 20 ] Sequencing technologies that generate long reads, including transmission electron microscopy DNA sequencing, can capture entire haploblocks in a single read. That is, haplotypes are not broken up among multiple reads, and the genetically linked alleles remain together in the sequencing data. Therefore, long reads make haplotyping easier and more accurate, which is beneficial to the field of population genetics .
Genes are normally present in two copies in the diploid human genome; genes that deviate from this standard copy number are referred to as copy number variants (CNVs) . Copy number variation can be benign (these are usually common variants, called copy number polymorphisms) or pathogenic. [ 21 ] CNVs are detected by fluorescence in situ hybridization (FISH) or comparative genomic hybridization (CGH) . To detect the specific breakpoints at which a deletion occurs, or to detect genomic lesions introduced by a duplication or amplification event, CGH can be performed using a tiling array ( array CGH ), or the variant region can be sequenced. Long sequencing reads are especially useful for analyzing duplications or amplifications, as it is possible to analyze the orientation of the amplified segments if they are captured in a single sequencing read.
Cancer genomics, or oncogenomics , is an emerging field in which high-throughput, second generation DNA sequencing technology is being applied to sequence entire cancer genomes. Analyzing this short read sequencing data encompasses all of the problems associated with de novo genome assembly using short read data. [ 22 ] Furthermore, cancer genomes are often aneuploid . [ 23 ] These aberrations, which are essentially large scale copy number variants, can be analyzed by second-generation sequencing technologies using read frequency to estimate the copy number. [ 22 ] Longer reads would, however, provide a more accurate picture of copy number, orientation of amplified regions, and SNPs present in cancer genomes.
The microbiome refers to the total collection of microbes present in a microenvironment and their respective genomes. For example, an estimated 100 trillion microbial cells colonize the human body at any given time. [ 24 ] The human microbiome is of particular interest, as these commensal bacteria are important for human health and immunity. Most of the Earth's bacterial genomes have not yet been sequenced; undertaking a microbiome sequencing project would require extensive de novo genome assembly, a prospect which is daunting with short read DNA sequencing technologies. [ 25 ] Longer reads would greatly facilitate the assembly of new microbial genomes.
Compared to other second- and third-generation DNA sequencing technologies, transmission electron microscopy DNA sequencing has a number of potential key strengths and weaknesses, which will ultimately determine its usefulness and prominence as a future DNA sequencing technology.
Many non-Sanger second- and third-generation DNA sequencing technologies have been or are currently being developed with the common aim of increasing throughput and decreasing cost such that personalized genetic medicine can be fully realized.
Both the US$10 million Archon X Prize for Genomics supported by the X Prize Foundation (Santa Monica, CA, USA) and the US$70 million in grant awards supported by the National Human Genome Research Institute of the National Institutes of Health (NIH-NHGRI) are fueling the rapid burst of research activity in the development of new DNA sequencing technologies. [ 7 ]
Since different approaches, techniques, and strategies are what define each DNA sequencing technology, each has its own strengths and weaknesses. Comparison of important parameters between various second- and third-generation DNA sequencing technologies are presented in Table 1. | https://en.wikipedia.org/wiki/Transmission_electron_microscopy_DNA_sequencing |
In telecommunications , a transmission level point ( TLP ) is a test point in an electronic circuit that is typically a transmission channel . At the TLP, a test signal may be introduced or measured. [ 1 ] Various parameters, such as the power of the signal, noise, voltage levels, wave forms, may be measured at the TLP. [ 2 ]
The nominal transmission level at a TLP is a function of system design and is an expression of the design gain or attenuation (loss).
Voice-channel transmission levels at test points are measured in decibel-milliwatts (dBm) at a frequency of ~1000 hertz .
The dBm is an absolute reference level measurement (see Decibel § Suffixes and reference values ) with respect to 1 mW power. When the nominal signal power is 0 dBm at the TLP, the test point is called a zero transmission level point , or zero-dBm TLP . The abbreviation dBm0 stands for the power in dBm measured at a zero transmission level point. The TLP is thus characterized by the relation: [ 1 ]
level (dBm) = level (dBm0) + TLP (dB)
The term TLP is commonly used as if it were a unit, [ 2 ] preceded by the nominal level for the test point. For example, the expression 0 TLP refers to a 0 dBm TLP . If, for instance, a signal is specified as -13 dBm0 at a particular point and -6 dBm is measured at that point, the point is a +7 TLP. [ 3 ]
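The bookkeeping is plain addition and subtraction of decibel quantities; a small illustrative sketch using the example figures from the text:

```python
def to_dbm0(reading_dbm, tlp_dbm):
    # Refer a level reading at a test point back to the
    # zero transmission level point: dBm0 = dBm - TLP
    return reading_dbm - tlp_dbm

def implied_tlp(reading_dbm, level_dbm0):
    # TLP implied by a reading and the specified dBm0 level
    return reading_dbm - level_dbm0

# The example from the text: a signal specified as -13 dBm0
# is measured as -6 dBm, implying a +7 TLP.
tlp = implied_tlp(-6, -13)  # 7
```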
The level at a TLP where an end instrument , such as a telephone set, is connected is usually specified as 0 dBm . | https://en.wikipedia.org/wiki/Transmission_level_point |
In electrical engineering , a transmission line is a specialized cable or other structure designed to conduct electromagnetic waves in a contained manner. The term applies when the conductors are long enough that the wave nature of the transmission must be taken into account. This applies especially to radio-frequency engineering because the short wavelengths mean that wave phenomena arise over very short distances (this can be as short as millimetres depending on frequency). However, the theory of transmission lines was historically developed to explain phenomena on very long telegraph lines, especially submarine telegraph cables .
Transmission lines are used for purposes such as connecting radio transmitters and receivers with their antennas (they are then called feed lines or feeders), distributing cable television signals, trunklines routing calls between telephone switching centres, computer network connections and high speed computer data buses . RF engineers commonly use short pieces of transmission line, usually in the form of printed planar transmission lines , arranged in certain patterns to build circuits such as filters . These circuits, known as distributed-element circuits , are an alternative to traditional circuits using discrete capacitors and inductors .
Ordinary electrical cables suffice to carry low frequency alternating current (AC), such as mains power , which reverses direction 100 to 120 times per second, and audio signals . However, they are not generally used to carry currents in the radio frequency range, [ 1 ] above about 30 kHz, because the energy tends to radiate off the cable as radio waves , causing power losses. Radio frequency currents also tend to reflect from discontinuities in the cable such as connectors and joints, and travel back down the cable toward the source. [ 1 ] [ 2 ] These reflections act as bottlenecks, preventing the signal power from reaching the destination. Transmission lines use specialized construction, and impedance matching , to carry electromagnetic signals with minimal reflections and power losses. The distinguishing feature of most transmission lines is that they have uniform cross sectional dimensions along their length, giving them a uniform impedance , called the characteristic impedance , [ 2 ] [ 3 ] [ 4 ] to prevent reflections. Types of transmission line include parallel line ( ladder line , twisted pair ), coaxial cable , and planar transmission lines such as stripline and microstrip . [ 5 ] [ 6 ] The higher the frequency of electromagnetic waves moving through a given cable or medium, the shorter the wavelength of the waves. Transmission lines become necessary when the transmitted frequency's wavelength is sufficiently short that the length of the cable becomes a significant part of a wavelength.
At frequencies of microwave and higher, power losses in transmission lines become excessive, and waveguides are used instead, [ 1 ] which function as "pipes" to confine and guide the electromagnetic waves. [ 6 ] Some sources define waveguides as a type of transmission line; [ 6 ] however, this article will not include them.
Mathematical analysis of the behaviour of electrical transmission lines grew out of the work of James Clerk Maxwell , Lord Kelvin , and Oliver Heaviside . In 1855, Lord Kelvin formulated a diffusion model of the current in a submarine cable. The model correctly predicted the poor performance of the 1858 trans-Atlantic submarine telegraph cable . In 1885, Heaviside published the first papers that described his analysis of propagation in cables and the modern form of the telegrapher's equations . [ 7 ]
For the purposes of analysis, an electrical transmission line can be modelled as a two-port network (also called a quadripole), as follows:
In the simplest case, the network is assumed to be linear (i.e. the complex voltage across either port is proportional to the complex current flowing into it when there are no reflections), and the two ports are assumed to be interchangeable. If the transmission line is uniform along its length, then its behaviour is largely described by two parameters called characteristic impedance , symbol Z 0 and propagation delay , symbol τ p {\displaystyle \tau _{p}} . Z 0 is the ratio of the complex voltage of a given wave to the complex current of the same wave at any point on the line. Typical values of Z 0 are 50 or 75 ohms for a coaxial cable , about 100 ohms for a twisted pair of wires, and about 300 ohms for a common type of untwisted pair used in radio transmission. Propagation delay is proportional to the length of the transmission line and is never less than the length divided by the speed of light . Typical delays for modern communication transmission lines vary from 3.33 ns/m to 5 ns/m .
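As a quick illustration of the quoted delay range, the per-metre delay follows from the velocity factor (the ratio of propagation speed to the speed of light); a velocity factor of 1 gives the 3.33 ns/m physical minimum, while the 0.66 value assumed below is typical of solid-dielectric coaxial cable:

```python
C_LIGHT = 299_792_458.0  # speed of light in vacuum, m/s

def delay_ns_per_m(velocity_factor):
    # Propagation delay per metre of line for a given
    # velocity factor (assumed, cable-dependent).
    return 1e9 / (velocity_factor * C_LIGHT)

fastest = delay_ns_per_m(1.0)    # ≈ 3.34 ns/m, the physical minimum
typical = delay_ns_per_m(0.66)   # ≈ 5.05 ns/m, e.g. solid-PE coax
```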
When sending power down a transmission line, it is usually desirable that as much power as possible will be absorbed by the load and as little as possible will be reflected back to the source. This can be ensured by making the load impedance equal to Z 0 , in which case the transmission line is said to be matched .
Some of the power that is fed into a transmission line is lost because of its resistance. This effect is called ohmic or resistive loss (see ohmic heating ). At high frequencies, another effect called dielectric loss becomes significant, adding to the losses caused by resistance. Dielectric loss is caused when the insulating material inside the transmission line absorbs energy from the alternating electric field and converts it to heat (see dielectric heating ). The transmission line is modelled with a resistance (R) and inductance (L) in series with a capacitance (C) and conductance (G) in parallel. The resistance and conductance contribute to the loss in a transmission line.
The total loss of power in a transmission line is often specified in decibels per metre (dB/m), and usually depends on the frequency of the signal. The manufacturer often supplies a chart showing the loss in dB/m at a range of frequencies. A loss of 3 dB corresponds approximately to a halving of the power.
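A short sketch of the corresponding arithmetic (the cable figures are illustrative, not from any datasheet); 30 m of 0.1 dB/m cable accumulates 3 dB of loss, which is approximately a halving of the power:

```python
def power_out(p_in_w, loss_db_per_m, length_m):
    # dB losses add along the line; power scales by 10**(-dB/10)
    total_loss_db = loss_db_per_m * length_m
    return p_in_w * 10 ** (-total_loss_db / 10)

half = power_out(1.0, 0.1, 30)  # 3 dB total loss: ≈ 0.50 W remains
```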
Propagation delay is often specified in units of nanoseconds per metre. While propagation delay usually depends on the frequency of the signal, transmission lines are typically operated over frequency ranges where the propagation delay is approximately constant.
The telegrapher's equations (or just telegraph equations ) are a pair of linear differential equations which describe the voltage ( V {\displaystyle V} ) and current ( I {\displaystyle I} ) on an electrical transmission line with distance and time. They were developed by Oliver Heaviside who created the transmission line model , and are based on Maxwell's equations .
The transmission line model is an example of the distributed-element model . It represents the transmission line as an infinite series of two-port elementary components, each representing an infinitesimally short segment of the transmission line:
The model consists of an infinite series of the elements shown in the figure, and the values of the components are specified per unit length so the picture of the component can be misleading. R {\displaystyle R} , L {\displaystyle L} , C {\displaystyle C} , and G {\displaystyle G} may also be functions of frequency. An alternative notation is to use R ′ {\displaystyle R'} , L ′ {\displaystyle L'} , C ′ {\displaystyle C'} and G ′ {\displaystyle G'} to emphasize that the values are derivatives with respect to length. These quantities can also be known as the primary line constants to distinguish from the secondary line constants derived from them, these being the propagation constant , attenuation constant and phase constant .
The line voltage V ( x ) {\displaystyle V(x)} and the current I ( x ) {\displaystyle I(x)} can be expressed in the frequency domain as
∂ V ( x ) ∂ x = − ( R + j ω L ) I ( x ) {\displaystyle {\frac {\partial V(x)}{\partial x}}=-(R+j\omega L)\,I(x)}
∂ I ( x ) ∂ x = − ( G + j ω C ) V ( x ) {\displaystyle {\frac {\partial I(x)}{\partial x}}=-(G+j\omega C)\,V(x)}
When the elements R {\displaystyle R} and G {\displaystyle G} are negligibly small the transmission line is considered as a lossless structure. In this hypothetical case, the model depends only on the L {\displaystyle L} and C {\displaystyle C} elements which greatly simplifies the analysis. For a lossless transmission line, the second order steady-state Telegrapher's equations are:
∂ 2 V ∂ x 2 + ω 2 L C V = 0 {\displaystyle {\frac {\partial ^{2}V}{\partial x^{2}}}+\omega ^{2}LC\,V=0}
∂ 2 I ∂ x 2 + ω 2 L C I = 0 {\displaystyle {\frac {\partial ^{2}I}{\partial x^{2}}}+\omega ^{2}LC\,I=0}
These are wave equations which have plane waves with equal propagation speed in the forward and reverse directions as solutions. The physical significance of this is that electromagnetic waves propagate down transmission lines and in general, there is a reflected component that interferes with the original signal. These equations are fundamental to transmission line theory.
In the general case the loss terms, R {\displaystyle R} and G {\displaystyle G} , are both included, and the full form of the Telegrapher's equations become:
∂ 2 V ∂ x 2 = γ 2 V {\displaystyle {\frac {\partial ^{2}V}{\partial x^{2}}}=\gamma ^{2}V}
∂ 2 I ∂ x 2 = γ 2 I {\displaystyle {\frac {\partial ^{2}I}{\partial x^{2}}}=\gamma ^{2}I}
where γ {\displaystyle \gamma } is the ( complex ) propagation constant . These equations are fundamental to transmission line theory. They are also wave equations , and have solutions similar to the special case, but which are a mixture of sines and cosines with exponential decay factors. Solving for the propagation constant γ {\displaystyle \gamma } in terms of the primary parameters R {\displaystyle R} , L {\displaystyle L} , G {\displaystyle G} , and C {\displaystyle C} gives:
γ = ( R + j ω L ) ( G + j ω C ) {\displaystyle \gamma ={\sqrt {(R+j\omega L)(G+j\omega C)}}}
and the characteristic impedance can be expressed as
Z 0 = R + j ω L G + j ω C {\displaystyle Z_{0}={\sqrt {\frac {R+j\omega L}{G+j\omega C}}}}
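Both secondary constants follow mechanically from the primary constants γ = √((R + jωL)(G + jωC)) and Z₀ = √((R + jωL)/(G + jωC)); a sketch in Python, with rough coax-like per-metre values chosen purely for illustration:

```python
import cmath

def secondary_constants(R, L, G, C, f):
    # gamma = sqrt((R + jwL)(G + jwC)); Z0 = sqrt((R + jwL)/(G + jwC))
    w = 2 * cmath.pi * f
    series = R + 1j * w * L   # series impedance per metre
    shunt = G + 1j * w * C    # shunt admittance per metre
    return cmath.sqrt(series * shunt), cmath.sqrt(series / shunt)

# Illustrative (assumed) per-metre values at 100 MHz
gamma, z0 = secondary_constants(R=0.1, L=250e-9, G=1e-6, C=100e-12, f=100e6)
# z0 is close to the lossless value sqrt(L/C) = 50 ohms;
# Re(gamma) is the attenuation in Np/m, Im(gamma) the phase
# constant in rad/m.
```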
The solutions for V ( x ) {\displaystyle V(x)} and I ( x ) {\displaystyle I(x)} are:
The constants V ( ± ) {\displaystyle V_{(\pm )}} must be determined from boundary conditions. For a voltage pulse V i n ( t ) {\displaystyle V_{\mathrm {in} }(t)\,} , starting at x = 0 {\displaystyle x=0} and moving in the positive x {\displaystyle x} direction, then the transmitted pulse V o u t ( x , t ) {\displaystyle V_{\mathrm {out} }(x,t)\,} at position x {\displaystyle x} can be obtained by computing the Fourier Transform, V ~ ( ω ) {\displaystyle {\tilde {V}}(\omega )} , of V i n ( t ) {\displaystyle V_{\mathrm {in} }(t)\,} , attenuating each frequency component by e − Re ( γ ) x {\displaystyle e^{-\operatorname {Re} (\gamma )\,x}\,} , advancing its phase by − Im ( γ ) x {\displaystyle -\operatorname {Im} (\gamma )\,x\,} , and taking the inverse Fourier Transform . The real and imaginary parts of γ {\displaystyle \gamma } can be computed as
with
the right-hand expressions holding when neither L {\displaystyle L} , nor C {\displaystyle C} , nor ω {\displaystyle \omega } is zero, and with
where atan2 is the everywhere-defined form of two-parameter arctangent function, with arbitrary value zero when both arguments are zero.
Alternatively, the complex square root can be evaluated algebraically, to yield:
and
with the plus or minus signs chosen opposite to the direction of the wave's motion through the conducting medium. ( a is usually negative, since G {\displaystyle G} and R {\displaystyle R} are typically much smaller than ω C {\displaystyle \omega C} and ω L {\displaystyle \omega L} , respectively, so −a is usually positive. b is always positive.)
For small losses and high frequencies, the general equations can be simplified: If R ω L ≪ 1 {\displaystyle {\tfrac {R}{\omega \,L}}\ll 1} and G ω C ≪ 1 {\displaystyle {\tfrac {G}{\omega \,C}}\ll 1} then
Since an advance in phase by − ω δ {\displaystyle -\omega \,\delta } is equivalent to a time delay by δ {\displaystyle \delta } , V o u t ( t ) {\displaystyle V_{out}(t)} can be simply computed as
The Heaviside condition is G C = R L {\displaystyle {\frac {G}{C}}={\frac {R}{L}}} .
If R, G, L, and C are constants that are not frequency dependent and the Heaviside condition is met,
then waves travel down the transmission line without dispersion distortion.
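The distortionless property can be checked numerically. In the sketch below (per-metre values assumed for illustration), choosing G = RC/L forces the attenuation Re(γ) to equal √(RG) at every frequency:

```python
import cmath

def attenuation_np_per_m(R, L, G, C, f):
    # Real part of gamma = sqrt((R + jwL)(G + jwC)), in Np/m
    w = 2 * cmath.pi * f
    return cmath.sqrt((R + 1j * w * L) * (G + 1j * w * C)).real

# Assumed per-metre constants chosen to satisfy G/C = R/L
R, L, C = 0.1, 250e-9, 100e-12
G = R * C / L  # 4e-5 S/m
# On a distortionless line the attenuation equals sqrt(R*G)
# (= 2e-3 Np/m here) regardless of frequency.
alphas = [attenuation_np_per_m(R, L, G, C, f) for f in (1e5, 1e7, 1e9)]
```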
The characteristic impedance Z 0 {\displaystyle Z_{0}} of a transmission line is the ratio of the amplitude of a single voltage wave to its current wave. Since most transmission lines also have a reflected wave, the characteristic impedance is generally not the impedance that is measured on the line.
The impedance measured at a given distance ℓ {\displaystyle \ell } from the load impedance Z L {\displaystyle Z_{\mathrm {L} }} may be expressed as
Z i n ( ℓ ) = Z 0 1 + Γ L e − 2 γ ℓ 1 − Γ L e − 2 γ ℓ {\displaystyle Z_{\mathrm {in} }(\ell )=Z_{0}\,{\frac {1+{\mathit {\Gamma }}_{\mathrm {L} }e^{-2\gamma \ell }}{1-{\mathit {\Gamma }}_{\mathrm {L} }e^{-2\gamma \ell }}}}
where γ {\displaystyle \gamma } is the propagation constant and Γ L = Z L − Z 0 Z L + Z 0 {\displaystyle {\mathit {\Gamma }}_{\mathrm {L} }={\frac {\,Z_{\mathrm {L} }-Z_{0}\,}{Z_{\mathrm {L} }+Z_{0}}}} is the voltage reflection coefficient measured at the load end of the transmission line. Alternatively, the above formula can be rearranged to express the input impedance in terms of the load impedance rather than the load voltage reflection coefficient:
Z i n ( ℓ ) = Z 0 Z L + Z 0 tanh ( γ ℓ ) Z 0 + Z L tanh ( γ ℓ ) {\displaystyle Z_{\mathrm {in} }(\ell )=Z_{0}\,{\frac {Z_{\mathrm {L} }+Z_{0}\tanh(\gamma \ell )}{Z_{0}+Z_{\mathrm {L} }\tanh(\gamma \ell )}}}
For a lossless transmission line, the propagation constant is purely imaginary, γ = j β {\displaystyle \gamma =j\,\beta } , so the above formulas can be rewritten as
Z i n ( ℓ ) = Z 0 Z L + j Z 0 tan ( β ℓ ) Z 0 + j Z L tan ( β ℓ ) {\displaystyle Z_{\mathrm {in} }(\ell )=Z_{0}\,{\frac {Z_{\mathrm {L} }+j\,Z_{0}\tan(\beta \ell )}{Z_{0}+j\,Z_{\mathrm {L} }\tan(\beta \ell )}}}
where β = 2 π λ {\displaystyle \beta ={\frac {\,2\pi \,}{\lambda }}} is the wavenumber .
In calculating β , {\displaystyle \beta ,} the wavelength is generally different inside the transmission line to what it would be in free-space. Consequently, the velocity factor of the material the transmission line is made of needs to be taken into account when doing such a calculation.
For the special case where β ℓ = n π {\displaystyle \beta \,\ell =n\,\pi } where n is an integer (meaning that the length of the line is a multiple of half a wavelength), the expression reduces to the load impedance so that
Z i n = Z L {\displaystyle Z_{\mathrm {in} }=Z_{\mathrm {L} }}
for all n . {\displaystyle n\,.} This includes the case when n = 0 {\displaystyle n=0} , meaning that the length of the transmission line is negligibly small compared to the wavelength. The physical significance of this is that the transmission line can be ignored (i.e. treated as a wire) in either case.
For the case where the length of the line is one quarter wavelength long, or an odd multiple of a quarter wavelength long, the input impedance becomes
Z i n = Z 0 2 Z L . {\displaystyle Z_{\mathrm {in} }={\frac {Z_{0}^{2}}{Z_{\mathrm {L} }}}.}
Another special case is when the load impedance is equal to the characteristic impedance of the line (i.e. the line is matched ), in which case the impedance reduces to the characteristic impedance of the line so that
Z i n = Z 0 {\displaystyle Z_{\mathrm {in} }=Z_{0}}
for all ℓ {\displaystyle \ell } and all λ {\displaystyle \lambda } .
For the case of a shorted load (i.e. Z L = 0 {\displaystyle Z_{\mathrm {L} }=0} ), the input impedance is purely imaginary and a periodic function of position and wavelength (frequency)
Z i n ( ℓ ) = j Z 0 tan ( β ℓ ) . {\displaystyle Z_{\mathrm {in} }(\ell )=j\,Z_{0}\tan(\beta \ell ).}
For the case of an open load (i.e. Z L = ∞ {\displaystyle Z_{\mathrm {L} }=\infty } ), the input impedance is once again imaginary and periodic
Z i n ( ℓ ) = − j Z 0 cot ( β ℓ ) . {\displaystyle Z_{\mathrm {in} }(\ell )=-j\,Z_{0}\cot(\beta \ell ).}
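The lossless input-impedance formula Zin = Z0 (ZL + jZ0 tan βℓ)/(Z0 + jZL tan βℓ) and its special cases can be exercised numerically; a sketch (the impedance values are illustrative):

```python
import cmath

def z_in(z0, zl, beta_l):
    # Input impedance of a lossless line of electrical length
    # beta*l (radians) terminated in load impedance zl.
    t = cmath.tan(beta_l)
    return z0 * (zl + 1j * z0 * t) / (z0 + 1j * zl * t)

# Matched line: input impedance equals Z0 at any length
matched = z_in(50, 50, 0.7)              # 50 ohms
# Quarter-wave line (beta*l = pi/2): Zin -> Z0**2 / ZL
quarter = z_in(50, 100, cmath.pi / 2)    # ≈ 25 ohms
# Shorted stub: purely imaginary, j * Z0 * tan(beta*l)
stub = z_in(50, 0, 0.5)
```

The quarter-wave case illustrates the familiar impedance-transformer behaviour: a 100 Ω load seen through a quarter wavelength of 50 Ω line looks like 25 Ω.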
The simulation of transmission lines embedded into larger systems generally utilize admittance parameters (Y matrix), impedance parameters (Z matrix), and/or scattering parameters (S matrix) that embodies the full transmission line model needed to support the simulation.
Admittance (Y) parameters may be defined by applying a fixed voltage to one port (V1) of a transmission line with the other end shorted to ground, measuring the resulting current running into each port (I1, I2), [ 8 ] [ 9 ] and computing the admittance on each port as the ratio I/V. The admittance parameter Y11 is I1/V1, and the admittance parameter Y12 is I2/V1. Since transmission lines are electrically passive and symmetric devices, Y12 = Y21, and Y11 = Y22.
For lossless and lossy transmission lines respectively, the Y parameter matrix is as follows: [ 10 ] [ 11 ]
Y Lossless = [ − j c o t ( β l ) Z o j c s c ( β l ) Z o j c s c ( β l ) Z o − j c o t ( β l ) Z o ] Y Lossy = [ c o t h ( γ l ) Z o − c s c h ( γ l ) Z o − c s c h ( γ l ) Z o c o t h ( γ l ) Z o ] {\displaystyle Y_{\text{Lossless}}={\begin{bmatrix}{\frac {-jcot(\beta l)}{Z_{o}}}&{\frac {jcsc(\beta l)}{Z_{o}}}\\{\frac {jcsc(\beta l)}{Z_{o}}}&{\frac {-jcot(\beta l)}{Z_{o}}}\end{bmatrix}}{\text{ }}Y_{\text{Lossy}}={\begin{bmatrix}{\frac {coth(\gamma l)}{Z_{o}}}&{\frac {-csch(\gamma l)}{Z_{o}}}\\{\frac {-csch(\gamma l)}{Z_{o}}}&{\frac {coth(\gamma l)}{Z_{o}}}\end{bmatrix}}}
Impedance (Z) parameters may be defined by applying a fixed current into one port (I1) of a transmission line with the other port open, measuring the resulting voltage on each port (V1, V2), [ 8 ] [ 9 ] and computing the impedance on each port as a ratio of V/I. The impedance parameter Z11 is V1/I1, and the impedance parameter Z12 is V2/I1. Since transmission lines are electrically passive and symmetric devices, Z12 = Z21, and Z11 = Z22.
In the Y and Z matrix definitions, Y = Z − 1 {\displaystyle Y=Z^{-1}} and Z = Y − 1 {\displaystyle Z=Y^{-1}} . [ 12 ] Unlike ideal lumped 2 port elements ( resistors , capacitors , inductors , etc.) which do not have defined Z parameters, transmission lines have an internal path to ground, which permits the definition of Z parameters.
For lossless and lossy transmission lines respectively, the Z parameter matrix is as follows: [ 10 ] [ 11 ]
Z Lossless = [ − j Z o cot ( β l ) − j Z o csc ( β l ) − j Z o csc ( β l ) − j Z o cot ( β l ) ] Z Lossy = [ Z o coth ( γ l ) Z o csch ( γ l ) Z o csch ( γ l ) Z o coth ( γ l ) ] {\displaystyle Z_{\text{Lossless}}={\begin{bmatrix}-jZ_{o}\cot(\beta l)&-jZ_{o}\csc(\beta l)\\-jZ_{o}\csc(\beta l)&-jZ_{o}\cot(\beta l)\end{bmatrix}}\quad Z_{\text{Lossy}}={\begin{bmatrix}Z_{o}\coth(\gamma l)&Z_{o}\operatorname {csch}(\gamma l)\\Z_{o}\operatorname {csch}(\gamma l)&Z_{o}\coth(\gamma l)\end{bmatrix}}}
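As a sanity check on the relation Z = Y⁻¹, a small sketch (illustrative 50 Ω line, assumed electrical length) multiplying the lossless Z and Y matrices; the product should be the identity, since cot² + 1 = csc²:

```python
import math

def z_lossless(z0, beta, length):
    """2-port impedance (Z) matrix of a lossless line, per the formulas above."""
    bl = beta * length
    z11 = -1j * z0 / math.tan(bl)       # -j*Z0*cot(beta*l)
    z12 = -1j * z0 / math.sin(bl)       # -j*Z0*csc(beta*l)
    return [[z11, z12], [z12, z11]]

def y_lossless(z0, beta, length):
    """Matching 2-port admittance (Y) matrix of the same lossless line."""
    bl = beta * length
    y11 = -1j / (math.tan(bl) * z0)
    y12 = 1j / (math.sin(bl) * z0)
    return [[y11, y12], [y12, y11]]

def matmul2(a, b):
    """2x2 complex matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Z = z_lossless(50.0, 2 * math.pi, 0.15)
Y = y_lossless(50.0, 2 * math.pi, 0.15)
P = matmul2(Z, Y)
# Z*Y should be the 2x2 identity matrix.
for i in range(2):
    for j in range(2):
        assert abs(P[i][j] - (1 if i == j else 0)) < 1e-9
```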
Scattering (S) matrix parameters model the electrical behavior of the transmission line with matched loads at each termination . [ 10 ]
For lossless and lossy transmission lines respectively, the S parameter matrix is as follows, using the standard hyperbolic-to-circular complex translations: [ 13 ] [ 14 ]
S Lossless = [ ( Z o 2 − Z p 2 ) sin ( β l ) ( Z o 2 + Z p 2 ) sin ( β l ) − j 2 Z o Z p cos ( β l ) 2 Z o Z p j ( Z o 2 + Z p 2 ) sin ( β l ) + 2 Z o Z p cos ( β l ) 2 Z o Z p j ( Z o 2 + Z p 2 ) sin ( β l ) + 2 Z o Z p cos ( β l ) ( Z o 2 − Z p 2 ) sin ( β l ) ( Z o 2 + Z p 2 ) sin ( β l ) − j 2 Z o Z p cos ( β l ) ] S Lossy = [ ( Z o 2 − Z p 2 ) sinh ( γ l ) ( Z o 2 + Z p 2 ) sinh ( γ l ) + 2 Z o Z p cosh ( γ l ) 2 Z o Z p ( Z o 2 + Z p 2 ) sinh ( γ l ) + 2 Z o Z p cosh ( γ l ) 2 Z o Z p ( Z o 2 + Z p 2 ) sinh ( γ l ) + 2 Z o Z p cosh ( γ l ) ( Z o 2 − Z p 2 ) sinh ( γ l ) ( Z o 2 + Z p 2 ) sinh ( γ l ) + 2 Z o Z p cosh ( γ l ) ] {\displaystyle S_{\text{Lossless}}={\begin{bmatrix}{\frac {(Z_{o}^{2}-Z_{p}^{2})\sin(\beta l)}{(Z_{o}^{2}+Z_{p}^{2})\sin(\beta l)-j2Z_{o}Z_{p}\cos(\beta l)}}&{\frac {2Z_{o}Z_{p}}{j(Z_{o}^{2}+Z_{p}^{2})\sin(\beta l)+2Z_{o}Z_{p}\cos(\beta l)}}\\{\frac {2Z_{o}Z_{p}}{j(Z_{o}^{2}+Z_{p}^{2})\sin(\beta l)+2Z_{o}Z_{p}\cos(\beta l)}}&{\frac {(Z_{o}^{2}-Z_{p}^{2})\sin(\beta l)}{(Z_{o}^{2}+Z_{p}^{2})\sin(\beta l)-j2Z_{o}Z_{p}\cos(\beta l)}}\end{bmatrix}}\quad S_{\text{Lossy}}={\begin{bmatrix}{\frac {(Z_{o}^{2}-Z_{p}^{2})\sinh(\gamma l)}{(Z_{o}^{2}+Z_{p}^{2})\sinh(\gamma l)+2Z_{o}Z_{p}\cosh(\gamma l)}}&{\frac {2Z_{o}Z_{p}}{(Z_{o}^{2}+Z_{p}^{2})\sinh(\gamma l)+2Z_{o}Z_{p}\cosh(\gamma l)}}\\{\frac {2Z_{o}Z_{p}}{(Z_{o}^{2}+Z_{p}^{2})\sinh(\gamma l)+2Z_{o}Z_{p}\cosh(\gamma l)}}&{\frac {(Z_{o}^{2}-Z_{p}^{2})\sinh(\gamma l)}{(Z_{o}^{2}+Z_{p}^{2})\sinh(\gamma l)+2Z_{o}Z_{p}\cosh(\gamma l)}}\end{bmatrix}}}
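For a matched reference impedance (Zp = Zo), the lossless S matrix should show zero reflection and unit-magnitude transmission with phase delay −βl. A sketch with illustrative values (the S11 numerator is written with the equivalent factor j pulled out so that both entries share one denominator):

```python
import cmath
import math

def s_lossless(z0, zp, beta, length):
    """S11 and S12 of a lossless line referenced to port impedance zp."""
    s, c = math.sin(beta * length), math.cos(beta * length)
    d = 1j * (z0**2 + zp**2) * s + 2 * z0 * zp * c
    s11 = 1j * (z0**2 - zp**2) * s / d
    s12 = 2 * z0 * zp / d
    return s11, s12

# Matched reference (Zp = Z0): no reflection, |S12| = 1, phase -beta*l.
s11, s12 = s_lossless(50.0, 50.0, 2 * math.pi, 0.2)
assert abs(s11) < 1e-12
assert abs(abs(s12) - 1.0) < 1e-12
assert abs(s12 - cmath.exp(-1j * 2 * math.pi * 0.2)) < 1e-12
```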
In all matrix parameters above, the following variable definitions apply:
Z o {\displaystyle Z_{o}} = characteristic impedance
Zp = port impedance, or termination impedance
γ = α + j β {\displaystyle \gamma =\alpha +j\beta } = the propagation constant per unit length
α {\displaystyle \alpha } = attenuation constant in nepers per unit length
β = 2 π λ = ω V {\displaystyle \beta ={\frac {2\pi }{\lambda }}={\frac {\omega }{V}}} = wave number (phase constant), in radians per unit length
ω {\displaystyle \omega } = angular frequency, in radians per second
V = 1 L C = V C E r e {\displaystyle V={\frac {1}{\sqrt {LC}}}={\frac {V_{C}}{\sqrt {E_{re}}}}} = Speed of propagation
λ {\displaystyle \lambda } = wavelength, in units of length
L = inductance per unit length
C = capacitance per unit length
E r e {\displaystyle E_{re}} = effective dielectric constant
V C {\displaystyle V_{C}} = 299,792,458 meters / second = Speed of light in a vacuum
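The definitions above can be exercised with illustrative per-unit-length values; the numbers below are chosen to give a round 50 Ω, 2×10⁸ m/s line and are not measured data for any particular cable:

```python
import math

# Illustrative per-unit-length line constants (assumed, not measured).
L = 250e-9    # inductance, H/m
C = 100e-12   # capacitance, F/m
f = 100e6     # operating frequency, Hz
c0 = 299_792_458.0  # speed of light in a vacuum, m/s

v = 1 / math.sqrt(L * C)          # speed of propagation
z0 = math.sqrt(L / C)             # lossless characteristic impedance
lam = v / f                       # wavelength on the line
beta = 2 * math.pi / lam          # phase constant, rad per metre
e_re = (c0 / v) ** 2              # effective dielectric constant

assert abs(v - 2.0e8) < 1.0       # 2e8 m/s
assert abs(z0 - 50.0) < 1e-9      # 50 ohms
assert abs(lam - 2.0) < 1e-12     # 2 m at 100 MHz
assert abs(beta - math.pi) < 1e-12
```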
Transmission lines may be placed in proximity to each other such that they electrically interact, such as two microstrip lines in close proximity. Such transmission lines are said to be coupled transmission lines. Coupled transmission lines are characterized by an even and odd mode analysis. The even mode is characterized by excitation of the two conductors with a signal of equal amplitude and phase. The odd mode is characterized by excitation with signals of equal and opposite magnitude. The even and odd modes each have their own characteristic impedances (Zoe, Zoo) and phase constants ( β e , β o {\displaystyle \beta _{e}{\text{, }}\beta _{o}} ). Lossy coupled transmission lines have their own even and odd mode attenuation constants ( α e , α o {\displaystyle \alpha _{e}{\text{, }}\alpha _{o}} ), which in turn leads to even and odd mode propagation constants ( γ e , γ o {\displaystyle \gamma _{e}{\text{, }}\gamma _{o}} ). [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ]
Coupled transmission lines may be modeled using even and odd mode transmission line parameters defined in the prior paragraph as shown with ports 1 and 2 on the input and ports 3 and 4 on the output, [ 21 ]
{\displaystyle {\begin{aligned}Y&={\begin{bmatrix}y_{11}&y_{12}&y_{13}&y_{14}\\y_{21}&y_{22}&y_{23}&y_{24}\\y_{31}&y_{32}&y_{33}&y_{34}\\y_{41}&y_{42}&y_{43}&y_{44}\end{bmatrix}}\\Z&=[Y]^{-1}\\{\text{For lossless coupled lines:}}&\\y_{11}&=y_{22}=y_{33}=y_{44}={\frac {-j}{2}}{\bigg (}{\frac {\cot(\beta _{e}l)}{Z_{oe}}}+{\frac {\cot(\beta _{o}l)}{Z_{oo}}}{\bigg )}\\y_{12}&=y_{21}=y_{34}=y_{43}={\frac {-j}{2}}{\bigg (}{\frac {\cot(\beta _{e}l)}{Z_{oe}}}-{\frac {\cot(\beta _{o}l)}{Z_{oo}}}{\bigg )}\\y_{13}&=y_{31}=y_{24}=y_{42}={\frac {j}{2}}{\bigg (}{\frac {\csc(\beta _{e}l)}{Z_{oe}}}+{\frac {\csc(\beta _{o}l)}{Z_{oo}}}{\bigg )}\\y_{14}&=y_{41}=y_{23}=y_{32}={\frac {j}{2}}{\bigg (}{\frac {\csc(\beta _{e}l)}{Z_{oe}}}-{\frac {\csc(\beta _{o}l)}{Z_{oo}}}{\bigg )}\\{\text{For lossy coupled lines:}}&\\y_{11}&=y_{22}=y_{33}=y_{44}={\frac {1}{2}}{\bigg (}{\frac {\coth(\gamma _{e}l)}{Z_{oe}}}+{\frac {\coth(\gamma _{o}l)}{Z_{oo}}}{\bigg )}\\y_{12}&=y_{21}=y_{34}=y_{43}={\frac {1}{2}}{\bigg (}{\frac {\coth(\gamma _{e}l)}{Z_{oe}}}-{\frac {\coth(\gamma _{o}l)}{Z_{oo}}}{\bigg )}\\y_{13}&=y_{31}=y_{24}=y_{42}={\frac {-1}{2}}{\bigg (}{\frac {\operatorname {csch}(\gamma _{e}l)}{Z_{oe}}}+{\frac {\operatorname {csch}(\gamma _{o}l)}{Z_{oo}}}{\bigg )}\\y_{14}&=y_{41}=y_{23}=y_{32}={\frac {-1}{2}}{\bigg (}{\frac {\operatorname {csch}(\gamma _{e}l)}{Z_{oe}}}-{\frac {\operatorname {csch}(\gamma _{o}l)}{Z_{oo}}}{\bigg )}\end{aligned}}}
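A sketch of the lossless coupled-line Y matrix built from the even/odd-mode parameters. In the degenerate case of equal even and odd modes (no coupling) the off-line entries y12 and y14 should vanish, and reciprocity requires the matrix to be symmetric; the numeric values are illustrative:

```python
import math

def y_coupled_lossless(zoe, zoo, be, bo, length):
    """4-port Y matrix of a lossless coupled-line pair (even/odd-mode form)."""
    ce, co = 1 / math.tan(be * length), 1 / math.tan(bo * length)  # cot terms
    se, so = 1 / math.sin(be * length), 1 / math.sin(bo * length)  # csc terms
    y11 = -0.5j * (ce / zoe + co / zoo)
    y12 = -0.5j * (ce / zoe - co / zoo)
    y13 = 0.5j * (se / zoe + so / zoo)
    y14 = 0.5j * (se / zoe - so / zoo)
    return [[y11, y12, y13, y14],
            [y12, y11, y14, y13],
            [y13, y14, y11, y12],
            [y14, y13, y12, y11]]

# Degenerate (uncoupled) case: equal even and odd modes -> the coupling
# entries y12 and y14 vanish and each line behaves as an isolated 2-port.
Y = y_coupled_lossless(50.0, 50.0, math.pi, math.pi, 0.3)
assert abs(Y[0][1]) < 1e-12 and abs(Y[0][3]) < 1e-12

# Reciprocity: the full 4x4 matrix is symmetric.
for i in range(4):
    for j in range(4):
        assert Y[i][j] == Y[j][i]
```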
Coaxial lines confine virtually all of the electromagnetic wave to the area inside the cable. Coaxial lines can therefore be bent and twisted (subject to limits) without negative effects, and they can be strapped to conductive supports without inducing unwanted currents in them.
In radio-frequency applications up to a few gigahertz, the wave propagates in the transverse electric and magnetic mode (TEM) only, which means that the electric and magnetic fields are both perpendicular to the direction of propagation (the electric field is radial, and the magnetic field is circumferential). However, at frequencies for which the wavelength (in the dielectric) is significantly shorter than the circumference of the cable other transverse modes can propagate. These modes are classified into two groups, transverse electric (TE) and transverse magnetic (TM) waveguide modes. When more than one mode can exist, bends and other irregularities in the cable geometry can cause power to be transferred from one mode to another.
The most common use for coaxial cables is for television and other signals with bandwidths of multiple megahertz. In the mid-20th century they carried long-distance telephone connections.
Planar transmission lines are transmission lines with conductors , or in some cases dielectric strips, that are flat, ribbon-shaped lines. They are used to interconnect components on printed circuits and integrated circuits working at microwave frequencies because the planar type fits in well with the manufacturing methods for these components. Several forms of planar transmission lines exist.
A microstrip circuit uses a thin flat conductor which is parallel to a ground plane . Microstrip can be made by having a strip of copper on one side of a printed circuit board (PCB) or ceramic substrate while the other side is a continuous ground plane. The width of the strip, the thickness of the insulating layer (PCB or ceramic) and the dielectric constant of the insulating layer determine the characteristic impedance. Microstrip is an open structure whereas coaxial cable is a closed structure.
A stripline circuit uses a flat strip of metal which is sandwiched between two parallel ground planes. The insulating material of the substrate forms a dielectric. The width of the strip, the thickness of the substrate and the relative permittivity of the substrate determine the characteristic impedance of the strip which is a transmission line.
A coplanar waveguide consists of a center strip and two adjacent outer conductors, all three of them flat structures that are deposited onto the same insulating substrate and thus are located in the same plane ("coplanar"). The width of the center conductor, the distance between inner and outer conductors, and the relative permittivity of the substrate determine the characteristic impedance of the coplanar transmission line.
A balanced line is a transmission line consisting of two conductors of the same type, and equal impedance to ground and other circuits. There are many formats of balanced lines, amongst the most common are twisted pair, star quad and twin-lead.
Twisted pairs are commonly used for terrestrial telephone communications. In such cables, many pairs are grouped together in a single cable, from two to several thousand. [ 22 ] The format is also used for data network distribution inside buildings, but the cable is more expensive because the transmission line parameters are tightly controlled.
Star quad is a four-conductor cable in which all four conductors are twisted together around the cable axis. It is sometimes used for two circuits, such as 4-wire telephony and other telecommunications applications. In this configuration each pair uses two non-adjacent conductors. Other times it is used for a single, balanced line , such as audio applications and 2-wire telephony. In this configuration two non-adjacent conductors are terminated together at both ends of the cable, and the other two conductors are also terminated together.
When used for two circuits, crosstalk is reduced relative to cables with two separate twisted pairs.
When used for a single, balanced line , magnetic interference picked up by the cable arrives as a virtually perfect common mode signal, which is easily removed by coupling transformers.
The combined benefits of twisting, balanced signalling, and quadrupole pattern give outstanding noise immunity, especially advantageous for low signal level applications such as microphone cables, even when installed very close to a power cable. [ 23 ] [ 24 ] The disadvantage is that star quad, in combining two conductors, typically has double the capacitance of similar two-conductor twisted and shielded audio cable. High capacitance causes increasing distortion and greater loss of high frequencies as distance increases. [ 25 ] [ 26 ]
Twin-lead consists of a pair of conductors held apart by a continuous insulator. By holding the conductors a known distance apart, the geometry is fixed and the line characteristics are reliably consistent. It is lower loss than coaxial cable because the characteristic impedance of twin-lead is generally higher than coaxial cable, leading to lower resistive losses due to the reduced current. However, it is more susceptible to interference.
Lecher lines are a form of parallel conductor that can be used at UHF for creating resonant circuits. They are a convenient practical format that fills the gap between lumped components (used at HF / VHF ) and resonant cavities (used at UHF / SHF ).
Unbalanced lines were formerly much used for telegraph transmission, but this form of communication has now fallen into disuse. Cables are similar to twisted pair in that many cores are bundled into the same cable but only one conductor is provided per circuit and there is no twisting. All the circuits on the same route use a common path for the return current (earth return). There is a power transmission version of single-wire earth return in use in many locations.
Electrical transmission lines are very widely used to transmit high frequency signals over long or short distances with minimum power loss. One familiar example is the down lead from a TV or radio aerial to the receiver.
A large variety of circuits can also be constructed with transmission lines including impedance matching circuits, filters , power dividers and directional couplers .
A stepped transmission line is used for broad-range impedance matching. It can be considered as multiple transmission line segments connected in series, with the characteristic impedance of each individual element being Z 0 , i {\displaystyle Z_{\mathrm {0,i} }} . [ 27 ] The input impedance can be obtained from the successive application of the chain relation
Z i + 1 = Z 0 , i Z i + j Z 0 , i tan ⁡ ( β i ℓ i ) Z 0 , i + j Z i tan ⁡ ( β i ℓ i ) {\displaystyle Z_{\mathrm {i+1} }=Z_{\mathrm {0,i} }{\frac {Z_{\mathrm {i} }+jZ_{\mathrm {0,i} }\tan(\beta _{\mathrm {i} }\ell _{\mathrm {i} })}{Z_{\mathrm {0,i} }+jZ_{\mathrm {i} }\tan(\beta _{\mathrm {i} }\ell _{\mathrm {i} })}}}
where β i {\displaystyle \beta _{\mathrm {i} }} is the wave number of the i {\displaystyle \mathrm {i} } -th transmission line segment, ℓ i {\displaystyle \ell _{\mathrm {i} }} is the length of this segment, and Z i {\displaystyle Z_{\mathrm {i} }} is the front-end impedance that loads the i {\displaystyle \mathrm {i} } -th segment.
Because the characteristic impedance of each transmission line segment Z 0 , i {\displaystyle Z_{\mathrm {0,i} }} is often different from the impedance Z 0 {\displaystyle Z_{0}} of the fourth, input cable (only shown as an arrow marked Z 0 {\displaystyle Z_{0}} on the left side of the diagram above), the impedance transformation circle is off-centred along the x {\displaystyle x} axis of the Smith Chart whose impedance representation is usually normalized against Z 0 {\displaystyle Z_{0}} .
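The chain relation can be applied iteratively, starting from the load and working back segment by segment. The sketch below reproduces the classic quarter-wave transformer as a one-segment special case; the impedance values are illustrative:

```python
import math

def step_chain(z_load, segments, beta):
    """Successively apply the chain relation over stepped line segments.

    segments: list of (z0_i, length_i) tuples, ordered from the load end
    back toward the input.
    """
    z = z_load
    for z0i, li in segments:
        t = math.tan(beta * li)
        z = z0i * (z + 1j * z0i * t) / (z0i + 1j * z * t)
    return z

# Classic quarter-wave transformer: a single lambda/4 segment of
# sqrt(50*100) ohms matches a 100-ohm load into a 50-ohm system.
lam = 1.0
beta = 2 * math.pi / lam
z_in = step_chain(100.0, [(math.sqrt(50 * 100), lam / 4)], beta)
assert abs(z_in - 50.0) < 1e-6
```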
At higher frequencies, the reactive parasitic effects of real world lumped elements , including inductors and capacitors , limits their usefulness. [ 28 ] Therefore, it is sometimes useful to approximate the electrical characteristics of inductors and capacitors with transmission lines at the higher frequencies using Richards' Transformations and then substitute the transmission lines for the lumped elements. [ 29 ] [ 30 ]
More accurate forms of multimode high frequency inductor modeling with transmission lines exist for advanced designers. [ 31 ]
If a short-circuited or open-circuited transmission line is wired in parallel with a line used to transfer signals from point A to point B, then it will function as a filter. The method for making stubs is similar to the method for using Lecher lines for crude frequency measurement, but it is 'working backwards'. One method recommended in the RSGB 's radiocommunication handbook is to take an open-circuited length of transmission line wired in parallel with the feeder delivering signals from an aerial. By cutting the free end of the transmission line, a minimum in the strength of the signal observed at a receiver can be found. At this stage the stub filter will reject this frequency and the odd harmonics, but if the free end of the stub is shorted then the stub will become a filter rejecting the even harmonics.
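The stub behaviour described above follows from the open- and short-circuit input impedances: a shunt stub rejects a frequency when it presents (nearly) zero impedance across the feeder. A sketch with an assumed propagation velocity and stub length:

```python
import math

def open_stub_z(z0, length, freq, v):
    """Input impedance of an open-circuited stub: -j*Z0*cot(beta*l)."""
    beta = 2 * math.pi * freq / v
    return -1j * z0 / math.tan(beta * length)

def short_stub_z(z0, length, freq, v):
    """Input impedance of a short-circuited stub: j*Z0*tan(beta*l)."""
    beta = 2 * math.pi * freq / v
    return 1j * z0 * math.tan(beta * length)

v = 2.0e8            # assumed propagation velocity on the line, m/s
stub = 0.5           # assumed stub length, m -> quarter wave at 100 MHz
f0 = v / (4 * stub)  # fundamental rejection frequency, 100 MHz here

# An open stub looks like a short (rejects) at f0 and its odd multiples...
for n in (1, 3, 5):
    assert abs(open_stub_z(50.0, stub, n * f0, v)) < 1e-6
# ...while shorting the stub's far end moves the nulls to even multiples.
for n in (2, 4):
    assert abs(short_stub_z(50.0, stub, n * f0, v)) < 1e-6
```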
Wideband filters can be achieved using multiple stubs. However, this is a somewhat dated technique. Much more compact filters can be made with other methods such as parallel-line resonators.
Transmission lines are used as pulse generators. By charging the transmission line and then discharging it into a resistive load, a rectangular pulse equal in length to twice the electrical length of the line can be obtained, although with half the voltage. A Blumlein transmission line is a related pulse forming device that overcomes this limitation. These are sometimes used as the pulsed power sources for radar transmitters and other devices.
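The pulse-width and half-voltage relations can be sketched with illustrative numbers; the 2×10⁸ m/s velocity, 10 m length, and 1 kV charge voltage below are assumptions, not values from the text:

```python
# Charged-line pulse generator: discharging a line of one-way delay T
# into a matched resistive load yields a rectangular pulse of width 2*T
# at half the charge voltage.
v = 2.0e8          # assumed propagation velocity, m/s
length = 10.0      # assumed line length, m
v_charge = 1000.0  # assumed charge voltage, V

one_way_delay = length / v               # 50 ns
pulse_width = 2 * one_way_delay          # 100 ns
pulse_voltage = v_charge / 2             # matched load divides the voltage

assert abs(pulse_width - 100e-9) < 1e-15
assert pulse_voltage == 500.0
```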
The theory of sound wave propagation is very similar mathematically to that of electromagnetic waves, so techniques from transmission line theory are also used to build structures to conduct acoustic waves; and these are called acoustic transmission lines .
Part of this article was derived from Federal Standard 1037C . | https://en.wikipedia.org/wiki/Transmission_line |
A transmission line loudspeaker is a loudspeaker enclosure design which uses the topology of an acoustic transmission line within the cabinet, compared to the simpler enclosures used by sealed (closed) or ported (bass reflex) designs. Instead of reverberating in a fairly simple damped enclosure, sound from the back of the bass speaker is directed into a long (generally folded) damped pathway within the speaker enclosure, which allows far greater control and use of speaker energy and the resulting sound.
Inside a transmission line (TL) loudspeaker is a (usually folded) pathway into which the sound is directed. The pathway is often covered with varying types and depths of absorbent material, and it may vary in size or taper, and may be open or closed at its far end. Used correctly, such a design ensures that undesired resonances and energies, which would otherwise cause undesirable auditory effects, are instead selectively absorbed or reduced ("damped") due to the effects of the duct, or alternatively only emerge from the open end in phase with the sound radiated from the front of the driver, enhancing the output level ("sensitivity") at low frequencies. The transmission line acts as an acoustic waveguide , and the padding both reduces reflection and resonance, and also slows the speed of sound within the cabinet to allow for better tuning.
Transmission line loudspeaker designs are more complex to implement, making mass production difficult, but their advantages have led to commercial success for a number of manufacturers such as IMF, TDL, and PMC . As a rule, transmission line speakers tend to have exceptionally high-fidelity low-frequency response, reaching far below that of a typical speaker or subwoofer and into the infrasonic range (British company TDL's studio monitor range from the 1990s quoted frequency responses starting from as low as 17 Hz, depending upon model, with a sensitivity of 87 dB for 1 W @ 1 meter), without the need for a separate enclosure or driver. [ 1 ] [ 2 ] Acoustically, TL speakers roll off more slowly (less steeply) at low frequencies, and they are thought to provide better driver control than standard vented-box cabinet designs, [ 3 ] are less sensitive to positioning, and tend to create a very spacious soundstage . Modern TL speakers were described in a 2000 review as "match[ing] reflex cabinet designs in every respect, but with an extra octave of bass, lower LF distortion and a frequency balance which is more independent of listening level". [ 4 ]
Although more complex to design and tune, and not as easy to analyze and calculate as other designs, the transmission line design is valued by several smaller manufacturers, as it avoids many of the major disadvantages of other loudspeaker designs. In particular, while the basic parameters and equations describing sealed and reflex designs are fairly well understood, the range of options involved in a transmission line design means that the general design can only partly be calculated; final transmission line tuning requires considerable attention and is less easy to automate.
Low frequencies, which remain in phase, emerge from the vent which essentially acts as a second driver. The advantage of this approach is that the air pressure loading the main driver is maintained which controls the driver over a wide frequency range and reduces distortion. [The TL design] also produces higher SPL [sensitivity or loudness] and lower bass extension than ported or sealed box of similar size.
I have an intuitive abhorrence of resonance enhancement to give a loudspeaker more "kick" or apparent bass as they can sound "single-noted". Yes you can pick out the bass rhythm but what about the melody. What a transmission line gives in my experience is a much smoother and more realistic bass quality.
A transmission line is used in loudspeaker design to reduce time, phase, and resonance related distortions, and in many designs to gain exceptional bass extension to the lower end of human hearing, and in some cases the near- infrasonic (below 20 Hz). TDL's 1980s reference speaker range (now discontinued) contained models with frequency ranges of 20 Hz upwards, down to 17 Hz upwards, without needing a separate subwoofer . [ 2 ] Irving M. Fried , an advocate of TL design, stated that:
Some proponents of TL loudspeakers consider that using a TL is the theoretical ideal manner in which to load a moving-coil drive unit. [ citation needed ] However, it is also one of the more complex constructions. The most common and practical implementation is to fit a drive unit to the end of a long duct that is usually open at the far end. In practice, the duct is folded inside a conventionally shaped cabinet, so that the open end of the duct appears as a vent on the speaker cabinet. There are many ways in which the duct can be folded, and the line is often tapered in cross section to avoid parallel internal surfaces that encourage standing waves. Some speaker designs also use a spiral or elliptic-spiral shaped duct, usually with one speaker element in the front or two speaker elements arranged one on each side of the cabinet. Depending upon the drive unit and the quantity and physical properties of the absorbent material, the amount of taper is adjusted during the design process to tune the duct and remove irregularities in its response. The internal partitioning provides substantial bracing for the entire structure, reducing cabinet flexing and colouration. The inside faces of the duct, or line, are treated with an absorbent material to provide the correct termination with frequency to load the drive unit as a TL. The enclosure behaves like an infinite baffle , potentially absorbing most or all of the speaker unit's rear energies. [ 8 ] A theoretically perfect TL would absorb all frequencies entering the line from the rear of the drive unit, but this remains theoretical, as the line would have to be infinitely long. The physical constraints of the real world demand that the line be less than about 4 meters long before the cabinet becomes too large for practical applications, so not all the rear energy can be absorbed by the line. In a realized TL, only the upper bass is TL loaded in the true sense of the term (i.e. fully absorbed); the low bass is allowed to radiate freely from the vent in the cabinet. The line therefore effectively works as a low-pass filter, another crossover point in fact, achieved acoustically by the line and its absorbent filling. Below this “crossover point” the low bass is loaded by the column of air formed by the length of the line. The length of the line is specified so as to reverse the phase of the rear output of the drive unit as it exits the vent. This acoustic energy combines with the output of the bass unit, extending its response and effectively creating a second driver.
Essentially, the goal of the transmission line is to minimize acoustical or mechanical impedance at frequencies corresponding to the fundamental free-air resonance of the bass driver. This simultaneously reduces stored energy in the driver's motion, reduces distortion, and critically damps the driver by maximizing acoustic output (maximal acoustical loading or coupling) at the terminus. This also minimizes the negative effects of acoustic energy that would otherwise (as with a sealed enclosure) be reflected back to the driver in a sealed cavity. [ 9 ]
Transmission line loudspeakers employ this tube-like resonant cavity, with the length set between 1/6 and 1/2 the wavelength of the fundamental resonant frequency of the loudspeaker driver being used. The cross-sectional area of the tube is typically comparable to the cross-sectional area of the driver's radiating surface area. This cross section is typically tapered down to approximately 1/4 of the starting area at the terminus or open end of the line. While not all lines use a taper, the standard classical transmission line employs a taper from 1/3 to 1/4 area (ratio of terminus area to starting area directly behind driver). This taper serves to dampen the buildup of standing waves within the line, which can create sharp nulls in response at the terminus output at even multiples of the driver's Fs.
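A back-of-envelope sizing sketch for the classical quarter-wave line and 4:1 taper described above; the driver values are illustrative, not a specific product, and the undamped speed of sound is used (stuffing slows propagation in a real line):

```python
# Back-of-envelope sizing of a classical transmission-line enclosure:
# line length a quarter wavelength at the driver's free-air resonance Fs,
# terminus tapered to ~1/4 of the throat area.
c = 343.0        # speed of sound in air, m/s (undamped; stuffing lowers this)
fs = 30.0        # assumed driver free-air resonance, Hz
sd = 0.033       # assumed driver radiating area, m^2

quarter_wave_length = c / (4 * fs)      # ~2.86 m, hence the folded duct
throat_area = sd                        # line cross-section ~ driver area
terminus_area = throat_area / 4         # classical 4:1 taper

assert abs(quarter_wave_length - 343.0 / 120.0) < 1e-12
assert abs(terminus_area - 0.00825) < 1e-12
```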
In a transmission line speaker, the transmission line itself can be open ("vented") or closed at the far end. Closed designs typically have negligible acoustic output from the enclosure except from the driver, while open ended designs exploit the low-pass filter effect of the line, and the resultant low bass energy emerges to reinforce the output from the driver at low frequencies. Well designed transmission line enclosures have smooth impedance curves , possibly from a lack of frequency-specific resonances, but can also have low efficiency if poorly designed.
One key advantage of transmission lines is their ability to conduct the back wave behind the transducer more effectively away from it, reducing the chance of reflected energy permeating back through the diaphragm out of phase with the primary signal. Not all transmission line designs do this effectively. Most offset transmission line speakers place a reflective wall fairly close behind the transducer within the enclosure, posing a problem for internal reflections emanating back through the transducer diaphragm. Older descriptions explained the design in terms of "impedance mismatch", or pressure waves "reflected" back into the enclosure; these descriptions are now considered outdated and inaccurate, as technically the transmission line works through selective production of standing waves and constructive and destructive interference (see below).
A second benefit is that the resulting music is time coherent (i.e., in phase ). In 2002 Fried cited a listening test, performed and reported (as he believed) in the December 2000 issue of Hi-Fi News, in which a high-quality recording was obtained using reputable but non-time-coherent loudspeakers and this recording was then time and phase corrected; an expert listening panel "voted unanimously for the superior realism and accuracy of the time corrected output" for high quality sound reproduction. [ 7 ]
One of the significant and common problems with a transmission line loudspeaker system is the unwanted phase-cancellation effects of higher line harmonics bleeding from the transmission line and adversely affecting the overall sound field. For example, in the PMC PMC6 mid-sized transmission line monitoring loudspeaker, there is a dip around 300 Hz that is caused by the fifth harmonic of the transmission line’s resonant frequency. [ 10 ] This type of problem is quite common, and it was readily apparent in other transmission line loudspeakers. For example, the large IMF TLS80 MkII from 1977 also had an anomaly, but this time at the lower frequency of about 140 Hz, consisting of an almost one-octave-wide deleterious 2-dB dip in the on-axis response. [ 11 ] Another problem is that the sound radiation from the exit of the line is spread over a quite broad frequency range caused by the hump of the quarter-wave transmission line resonance, whereas the high-Q port resonance of a vented-box loudspeaker rolls off much more quickly and extends over a much narrower frequency band. [ 12 ] These sorts of issues with transmission line loudspeakers can lead to tonal accuracy problems that cannot be resolved.
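Since a quarter-wave line resonates at odd multiples of its fundamental, the reported ~300 Hz dip is consistent with a line fundamental near 60 Hz. A sketch of the arithmetic; the 60 Hz figure is inferred from the fifth-harmonic description, not taken from the cited measurement:

```python
# A quarter-wave line supports resonances at odd multiples of its
# fundamental; higher harmonics leaking from the vent can notch the
# response. With an (inferred) 60 Hz line fundamental, the fifth
# harmonic falls at 300 Hz, matching the dip described above.
fundamental = 60.0
harmonics = [(2 * n - 1) * fundamental for n in range(1, 6)]  # odd series

assert harmonics == [60.0, 180.0, 300.0, 420.0, 540.0]
assert harmonics[2] == 5 * fundamental  # the "fifth harmonic" dip at 300 Hz
```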
A transmission line speaker employs, essentially, two distinct forms of bass loading, which historically and confusingly have been amalgamated in the TL description. Separating the upper and lower bass analysis reveals why such designs have so many potential advantages and disadvantages over reflex and infinite baffle designs. Measurements indicate that the upper bass is only partially absorbed by the line, making a clean and neutral response somewhat difficult if not impossible to achieve. The lower bass is extended and distortion is lowered by the line's control over the drive unit's excursion. One of the exclusive benefits of a TL design is its ability to produce very low frequencies even at low monitoring levels – TL speakers can routinely produce full range sound usually requiring a subwoofer , and do so to very high levels of low-frequency accuracy. The main disadvantage of the design is that it is more labor-intensive to create and tune a high quality and consistent transmission line, compared to building a simple vented-box or closed-box enclosure. One PMC employee was quoted as saying that optimising a transmission line loudspeaker is "like juggling water". [ 12 ] A 2010 Hifi Avenue TL speaker review commented that "One thing I have noticed about transmission line designs is that they create a rather big soundstage and seem to handle crescendoes with ease". [ 5 ]
The concept was innovated within acoustic enclosure design, and originally termed an "acoustical labyrinth", by acoustic engineer and later Director of Research, Benjamin Olney, who developed the concept at the Stromberg-Carlson Telephone Co. in the early 1930s while studying the effect of enclosure shape and size on speaker output, including the effect of "extreme length in a box baffle". [ 13 ] A patent was filed in 1934. [ 14 ] The design was used in their console radios beginning in 1936. [ 15 ] A loudspeaker enclosure based on the concept was proposed in October 1965 by Dr A.R. Bailey in Wireless World magazine, referencing a production version of an acoustic-line enclosure design from Radford Electronics Ltd. [ 16 ] The article postulated that energy from the rear of a driver unit could be essentially absorbed, without damping the cone's motion or superimposing internal reflections and resonance, so Bailey and Radford reasoned that the rear wave could be channelled down a long pipe. If the acoustic energy was absorbed, it would not be available to excite resonances. A pipe of sufficient length could be tapered, and stuffed so that the energy loss was almost complete, minimizing output from the open end. No broad consensus on the ideal taper (expanding, uniform cross-section, or contracting) has been established.
The birth of the modern transmission line speaker design came about in 1965 with the publication of A.R. Bailey's article in Wireless World, “A Non-resonant Loudspeaker Enclosure Design”, [ 16 ] detailing a working Transmission Line. Bailey followed up his first article with a second one in 1972. [ 18 ] Radford Electronics Ltd took up this innovative design and briefly manufactured the first commercial Transmission Line loudspeaker. Although acknowledged as the father of the Transmission Line, Bailey's work drew on the work on labyrinth design, dating back as early as the 1930s. His design, however, differed significantly in the way in which he filled the cabinet with absorbent materials. Bailey hit upon the idea of absorbing all the energy generated by the bass unit inside the cabinet, providing an inert platform for the drive unit to work from; unchecked, this energy produces spurious resonances in the cabinet and its structure, adding distortion to the original signal.
Shortly thereafter the design entered mainstream Hi-Fi , through the work of Irving M. "Bud" Fried in the United States, and a British trio: John Hayes, John Wright, and David Brown. Dave D'Lugos describes the period that followed (approximately 35 years, until the start of the 21st century) as the period when the "classical designs" were created.
Fried was first exposed to high-fidelity audio reproduction during his time at Harvard University, and later became an importer of audiophile items. Under the trademark "IMF" (his initials), from 1961, he eventually became involved with many advancements in audiophile equipment: cartridges (IMF – London, IMF – Goldring), tonearms (SME, Gould, Audio and Design), amplifiers (Quad, Custom Series), and loudspeakers (Lowther, Quad, Celestion, Bowers and Wilkins, Barker, etc.). [ 19 ] In 1968 he met John Hayes and John Wright, who had already designed an award-winning tonearm in the UK and had brought along a transmission line speaker designed by Wright, described by Hayes as "fanatical regarding quality", [ 7 ] in order to promote and demonstrate the tonearm at a New York hi-fi show. Fried unexpectedly received a number of orders for the unnamed speaker, which he dubbed the "IMF". [ 7 ] The British pair, along with Hayes' colleague David Brown, agreed to form a UK company to design and manufacture speakers which would be sold by Fried in the United States.
The relationship broke down acrimoniously when Fried began to make his own, poorer-quality speakers, also marketed as "IMF", and refused to cease until a court agreed that the UK business had the right to the trademark IMF for loudspeakers. [ 7 ] Following the split, Fried in the USA (under the brand name "Fried") and the three founders of IMF Electronics in the UK (via a joint venture with driver manufacturer Elac under the name TDL) became well known in audiophile circles for many years as major advocates of transmission line speaker design. [ 7 ] TDL closed after John Wright's gradual failing health and death from cancer in 1999. [ 7 ] He was described in his 1999 obituary as "one of the most important figures on the British hi-fi scene since the mid-1960s... best remembered for his transmission-line loudspeaker designs". [ 20 ] The brand was acquired by Audio Partnerships (part of retailer group Richer Sounds ). Fried died six years later, in 2005. [ 21 ]
In the early 21st century, mathematical models that seemed to approximate the behavior of real-world TL speakers and cabinets began to emerge. [ 22 ] According to the website t-linespeakers.org, this led to an understanding that the "classical" speakers, designed largely by "trial and error", were a "good job" and the best that was reasonably possible at the time, but that better designs were now achievable based on modeled responses. [ 23 ]
Phase inversion is achieved by selecting a length of line that is equal to the quarter wavelength of the target lowest frequency. The effect is illustrated in Fig. 1, which shows a hard boundary at one end (the speaker) and the open-ended line vent at the other. The phase relationship between the bass driver and vent is in phase in the pass band until the frequency approaches the quarter wavelength, when the relationship reaches 90 degrees as shown. However, by this time the vent is producing most of the output (Fig. 2). Because the line is operating over several octaves with the drive unit, cone excursion is reduced, providing higher SPLs and lower distortion levels, as compared with bass reflex and infinite baffle loudspeaker enclosure designs.
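The quarter-wave relationship described above can be illustrated with a short calculation; the 35 Hz target frequency and the free-air sound speed below are illustrative assumptions, not values from the text:

```python
C_AIR = 343.0  # approximate speed of sound in air at 20 degrees C, m/s

def quarter_wave_length(f_lowest, c=C_AIR):
    """Line length (m) whose quarter-wave resonance falls at f_lowest (Hz)."""
    # phase inversion at the vent requires line length = wavelength / 4
    return c / (4.0 * f_lowest)

line_length = quarter_wave_length(35.0)  # a 35 Hz target needs ~2.45 m of line
```

This also shows why transmission line cabinets are physically large: extending the target frequency by an octave doubles the required line length, which is why real designs fold the line inside the cabinet.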
The complex loading of the bass drive unit demands specific Thiele–Small driver parameters to realise the full benefits of a TL design. Most drive units in the marketplace are developed for the more common reflex and infinite-baffle designs and are usually not suitable for TL loading. High-efficiency bass drivers with extended low-frequency ability are usually designed to be extremely light and flexible, having very compliant suspensions. Whilst performing well in a reflex design, these characteristics do not match the demands of a TL design. The drive unit is effectively coupled to a long column of air, which has mass. This lowers the resonant frequency of the drive unit, negating the need for a highly compliant device. Furthermore, the column of air provides greater force on the driver itself than a driver opening onto a large volume of air (in simple terms, it provides more resistance to the driver's attempt to move it), so controlling the movement of air requires an extremely rigid cone to avoid deformation and consequent distortion.
The introduction of absorption materials reduces the velocity of sound through the line, as discovered by Bailey in his original work. Bradbury published his extensive tests to determine this effect in an AES journal article in 1976, [ 24 ] and his results showed that heavily damped lines could reduce the velocity of sound by as much as 50%, although 35% is typical in medium-damped lines. The behaviour of various damping materials has also been studied by Lusztak and Bujacz. [ 25 ] Bradbury's tests were carried out using fibrous materials, typically long-haired wool and glass fibre. However, these kinds of materials produce highly variable effects that are not consistently repeatable for production purposes. They are also liable to produce inconsistencies due to movement, climatic factors and effects over time. High-specification acoustic foams, developed by manufacturers such as PMC, with similar characteristics to long-haired wool, provide repeatable results for consistent production. The density of the polymer, the diameter of the pores and the sculptured profiling are all specified to provide the correct absorption for each speaker model. The quantity and position of the foam are critical to engineer a low-pass acoustic filter that provides adequate attenuation of the upper bass frequencies, whilst allowing an unimpeded path for the low bass frequencies. Although the result may require a lot of modeling and testing, the starting point is usually based on one of three basic principles. Filling the entire tube treats the TL as a damper, aiming at completely eliminating the rear wave. Filling half the cross-section throughout the line's entire length treats the TL as an infinite baffle, basically damping high frequencies and wall-to-wall resonances. Filling the tube from the driver to half the tube's length aims at a quarter-wave resonator, leaving the fundamental tone, with its velocity maximum at the open end of the tube, intact, while damping all the overtones.
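The velocity reduction reported by Bradbury changes the effective tuning of a stuffed line, which the following sketch makes concrete; the 2 m line length is an assumed example, while the 35% figure is the "typical" medium-damping value quoted above:

```python
C_AIR = 343.0  # free-air speed of sound, m/s

def stuffed_line_tuning(length_m, velocity_reduction):
    """Quarter-wave tuning frequency (Hz) of a line whose internal sound
    velocity is reduced by the given fraction (0.35 = Bradbury's typical
    medium-damping figure; up to 0.50 for heavy damping)."""
    c_eff = C_AIR * (1.0 - velocity_reduction)
    return c_eff / (4.0 * length_m)

f_bare = stuffed_line_tuning(2.0, 0.0)      # ~42.9 Hz for the undamped line
f_stuffed = stuffed_line_tuning(2.0, 0.35)  # ~27.9 Hz: damping tunes it lower
```

In other words, the stuffing makes the line behave as if it were acoustically longer, so a damped line of a given physical length tunes noticeably lower than the free-air quarter-wave formula predicts.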
For most of the 20th century, transmission line design remained more of an art than a science, requiring much trial and error . In an article on classic transmission line design, Jon Risch states that the hard part was finding the best stuffing density along the line's length, because "the line stuffing affects both the total apparent line length AND the total apparent box volume simultaneously". [ 26 ]
Dave D'Lugos, founder of fan site t-linespeakers.org, comments that this reflects the "classical" designs from the 1960s until Risch's writing, during which period "TL design was seat of the pants". [ 23 ]
However, from the start of the 21st century, Martin King and George Augspurger (both separately and referencing each other's works) produced models which show these to be "generally less than optimal" designs which "did a good job of approaching what was possible in their day". Audio engineer Augspurger had modeled TLs using an electrical analogy, [ 22 ] and found it to agree closely with King's existing work, based on a mechanical analogy. [ 23 ] D'Lugos concluded in his overview of TL modeling and design theory: "I think that using modern drivers and tools such as King's software you can build a better TL easier today". [ 23 ]
More recently, Andrea Rubino has developed a sophisticated simulation model based on electrical circuit theory and published a series of articles in the Italian electroacoustic journal AUDIOreview . Many resources are available on his website: transmissionlinespeakers.com
In addition to these more sophisticated models, a number of approximation algorithms exist. One is to design a closed-box loudspeaker enclosure, then build a transmission line of the same volume tuned to the closed-box loudspeaker's resonance frequency. Another is to design a bass reflex loudspeaker, and again build a transmission line of the same volume, tuned to the frequency of the Helmholtz resonator.
Transmission loss (TL) in duct acoustics describes the acoustic performance of a muffler -like system. It is frequently used in industry, for example by muffler manufacturers and the NVH (noise, vibration and harshness) departments of automobile manufacturers, as well as in academic studies. Generally, the higher the transmission loss of a system, the better it will perform in terms of noise cancellation.
Transmission loss (TL) in duct acoustics is defined as the difference between the power incident on a duct acoustic device ( muffler ) and that transmitted downstream into an anechoic termination. Transmission loss is independent of the source, [ 1 ] [ 2 ] if only plane waves are incident at the inlet of the device. [ 2 ] Transmission loss does not involve the radiation impedance, inasmuch as it represents the difference between incident acoustic energy and that transmitted into an anechoic environment. Being independent of the terminations, TL finds favor with researchers who are sometimes interested in finding the acoustic transmission behavior of an element, or a set of elements, in isolation from the terminations. But measurement of the incident wave in a standing-wave acoustic field requires impedance-tube techniques and may be quite laborious, unless one makes use of the two-microphone method with modern instrumentation. [ 1 ] [ 2 ]
By definition, the plane-wave TL of an acoustic component with negligible mean flow may be described as: [ 1 ] T L = 10 log 10 ⁡ ( S i | p i + | 2 S o | p o + | 2 ) {\displaystyle TL=10\log _{10}\left({\frac {S_{i}\left|p_{i+}\right|^{2}}{S_{o}\left|p_{o+}\right|^{2}}}\right)}
where p i + {\displaystyle p_{i+}} is the amplitude of the incident pressure wave at the inlet, p o + {\displaystyle p_{o+}} that of the transmitted wave at the outlet, and S i {\displaystyle S_{i}} and S o {\displaystyle S_{o}} the inlet and outlet cross-sectional areas.
Note that p i + {\displaystyle p_{i+}} cannot be measured directly in isolation from the reflected wave pressure p i − {\displaystyle p_{i-}} (in the inlet, away from the muffler). One has to resort to impedance-tube techniques or the two-microphone method with modern instrumentation. [ 1 ] However, at the downstream side of the muffler, p o = p o + {\displaystyle p_{o}=p_{o+}} in view of the anechoic termination, which ensures p o − = 0 {\displaystyle p_{o-}=0} .
In most muffler applications, S i and S o , the cross-sectional areas of the exhaust pipe and tail pipe, are generally made equal, so this reduces to: T L = 20 log 10 ⁡ | p i + p o + | {\displaystyle TL=20\log _{10}\left|{\frac {p_{i+}}{p_{o+}}}\right|}
Thus, TL equals 20 times the logarithm (to the base 10) of the ratio of the acoustic pressure associated with the incident wave (in the exhaust pipe) to that of the transmitted wave (in the tail pipe), with the two pipes having the same cross-sectional area and the tail pipe terminating anechoically. [ 1 ] However, this anechoic condition is normally difficult to meet in a practical industrial environment, so it is usually more convenient for muffler manufacturers to measure insertion loss during their muffler performance tests under working conditions (mounted on an engine). There is, however, no simple general relationship between the insertion loss and the transmission loss of a muffler.
Also, since the transmitted sound power cannot possibly exceed the incident sound power (or | p i + | {\displaystyle \left\vert p_{i+}\right\vert } is always larger than | p o | {\displaystyle \left\vert p_{o}\right\vert } ), it is known that TL will never be less than 0 dB.
If the system contains non-negligible mean flow, or has duct sizes which support wave modes of orders higher than the plane-wave mode at the frequencies of interest, the transmission loss calculations are modified accordingly. [ 2 ]
Transmission matrix description
The low-frequency approximation implies that each subsystem is an acoustic two-port (or four-pole system) with two (and only two) unknown parameters, the complex amplitudes of two interfering waves travelling in opposite directions. Such a system can be described by its transmission matrix (or four-pole matrix), as follows: [ 3 ] ( p ^ i q ^ i ) = ( A B C D ) ( p ^ o q ^ o ) {\displaystyle {\begin{pmatrix}{\hat {p}}_{i}\\{\hat {q}}_{i}\end{pmatrix}}={\begin{pmatrix}A&B\\C&D\end{pmatrix}}{\begin{pmatrix}{\hat {p}}_{o}\\{\hat {q}}_{o}\end{pmatrix}}}
where p ^ i {\displaystyle {\hat {p}}_{i}} , p ^ o {\displaystyle {\hat {p}}_{o}} , q ^ i {\displaystyle {\hat {q}}_{i}} and q ^ o {\displaystyle {\hat {q}}_{o}} are the sound pressures and volume velocities at the input and at the output. A, B, C and D are complex numbers. With this representation it can be proved that the transmission loss (TL) of the subsystem can be calculated as T L = 20 log 10 ⁡ ( | A + B / Z 0 + C Z 0 + D | 2 ) {\displaystyle TL=20\log _{10}\left({\frac {\left|A+B/Z_{0}+CZ_{0}+D\right|}{2}}\right)}
where Z 0 = ρ c / S {\displaystyle Z_{0}=\rho c/S} is the characteristic impedance of the inlet and outlet ducts, assumed here to have equal cross-sectional area S {\displaystyle S} .
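As a sketch of how the four-pole description is used numerically, the following assumes the standard plane-wave result for equal inlet and outlet areas, TL = 20·log10(|A + B/Z0 + C·Z0 + D| / 2) with Z0 = ρc/S; the air properties and duct area are illustrative values:

```python
import numpy as np

RHO, C = 1.2, 343.0  # assumed air density (kg/m^3) and sound speed (m/s)

def tube_matrix(k, length, area):
    """Four-pole (transmission) matrix of a uniform tube, in the
    pressure / volume-velocity convention; Z = rho*c/S is its
    characteristic impedance."""
    Z = RHO * C / area
    kl = k * length
    return np.array([[np.cos(kl), 1j * Z * np.sin(kl)],
                     [1j * np.sin(kl) / Z, np.cos(kl)]])

def transmission_loss(T, area):
    """Plane-wave TL (dB) of a four-pole T between inlet and outlet
    ducts of equal cross-sectional area."""
    Z0 = RHO * C / area
    A, B, Cc, D = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    return 20 * np.log10(abs(A + B / Z0 + Cc * Z0 + D) / 2)

# Sanity check: a straight tube of the same area as the ducts is
# acoustically transparent (TL = 0 dB at every frequency).
k = 2 * np.pi * 100.0 / C  # wavenumber at 100 Hz
S = 0.01                   # 100 cm^2 duct, illustrative
tl_straight = transmission_loss(tube_matrix(k, 0.5, S), S)
```

Because four-pole matrices of cascaded elements simply multiply, the same `transmission_loss` function applies unchanged to a chain of pipes and chambers once their matrices have been multiplied together.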
Consider the simplest reactive silencer, with a single expansion chamber of length l and cross-sectional area S 2 , with inlet and outlet both having cross-sectional area S 1 . The transmission matrix of a uniform tube (in this case, the expansion chamber) is [ 3 ] ( cos ⁡ k l j Z 2 sin ⁡ k l j sin ⁡ k l / Z 2 cos ⁡ k l ) {\displaystyle {\begin{pmatrix}\cos kl&jZ_{2}\sin kl\\j\sin kl/Z_{2}&\cos kl\end{pmatrix}}} where Z 2 = ρ c / S 2 {\displaystyle Z_{2}=\rho c/S_{2}} is the characteristic impedance of the chamber.
Substituting into the equation for TL above, the TL of this simple reactive silencer is found to be T L = 10 log 10 ⁡ ( 1 + 1 4 ( h − 1 h ) 2 sin 2 ⁡ k l ) {\displaystyle TL=10\log _{10}\left(1+{\frac {1}{4}}\left(h-{\frac {1}{h}}\right)^{2}\sin ^{2}kl\right)}
where h {\displaystyle h} is the ratio of the cross-sectional areas and l {\displaystyle l} is the length of the chamber. k = ω / c {\displaystyle k=\omega /c} is the wave number while c {\displaystyle c} is the sound speed. Note that the transmission loss is zero when l {\displaystyle l} is a multiple of half a wavelength.
As a simple example, consider a one-chamber silencer with h = S1 / S2 = 1/3; at around 400 °C the sound speed is about 520 m/s, and with l = 0.5 m one easily calculates the TL result shown on the plot on the right. Note that the TL equals zero when the frequency is a multiple of c 2 l {\displaystyle c \over {2l}} and that TL peaks when the frequency is c 4 l + n c 2 l {\displaystyle {c \over {4l}}+{n{c \over {2l}}}} .
Also note that the above calculation is only valid in the low-frequency range, where the sound wave can be treated as a plane wave. The TL calculation starts losing accuracy when the frequency goes above the cutoff frequency , which can be calculated as f c = 1.84 c π D {\displaystyle f_{c}=1.84{c \over {\pi D}}} , [ 1 ] where D is the diameter of the largest pipe in the structure. In the case above, if for example the muffler body has a diameter of 300 mm, then the cut-off frequency is 1.84 × 520 / (π × 0.3) ≈ 1015 Hz.
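The worked example above can be checked numerically. This sketch uses the closed-form single-expansion-chamber TL expression quoted earlier, with the article's values (h = 1/3, c ≈ 520 m/s at about 400 °C, l = 0.5 m, body diameter 300 mm):

```python
import math

def chamber_tl(f, l=0.5, h=1/3, c=520.0):
    """TL (dB) of a single expansion chamber:
    10*log10(1 + 0.25*(1/h - h)^2 * sin^2(k*l)), with k = 2*pi*f/c."""
    k = 2 * math.pi * f / c
    return 10 * math.log10(1 + 0.25 * (1 / h - h) ** 2 * math.sin(k * l) ** 2)

f_zero = 520.0 / (2 * 0.5)                  # c/(2l): first TL zero, 520 Hz
f_peak = 520.0 / (4 * 0.5)                  # c/(4l): first TL peak, 260 Hz
f_cutoff = 1.84 * 520.0 / (math.pi * 0.3)   # ~1015 Hz for a 300 mm body

print(chamber_tl(f_peak))   # peak TL of roughly 4.4 dB
print(chamber_tl(f_zero))   # 0 dB at multiples of c/(2l)
```

The modest ~4.4 dB peak shows why practical silencers cascade several chambers or use larger area ratios, and the cutoff value confirms the plane-wave model is only trustworthy up to about 1 kHz for this geometry.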
In telecommunications , a transmission system is a communication system that transmits a signal from one place to another. The signal can be an electrical , optical or radio signal . The goal of a transmission system is to transmit data accurately and efficiently from point A to point B over a distance, using a variety of technologies such as copper cable and fiber-optic cables , satellite links , and wireless communication technologies .
The International Telecommunication Union (ITU) and the European Telecommunications Standards Institute (ETSI) define a transmission system as the interface and medium through which peer physical layer entities transfer bits . It encompasses all the components and technologies involved in transmitting digital data from one location to another, including modems , cables, and other networking equipment. [ 1 ] [ 2 ] Some transmission systems contain amplifiers, which boost a signal prior to re-transmission , or regenerators , which attempt to reconstruct and re-shape the coded message before re-transmission.
One of the most widely used transmission system technologies in the Internet and the public switched telephone network (PSTN) is synchronous optical networking (SONET).
More loosely, a transmission system is the medium through which data is transmitted from one point to another. Examples of transmission systems in everyday use include the Internet, mobile networks , and cordless telephones.
The ITU defines a digital transmission system as a system that uses digital signals to transmit information. In a digital transmission system, the data is first converted into a digital format and then transmitted over a communication channel. The digital format provides a number of benefits over analog transmission systems, including improved signal quality, reduced noise and interference , and increased data accuracy.
The ITU defines a digital transmission system (DTS) as follows:
A specific means of providing a digital section. [ 3 ] [ 4 ]
The ITU sets global standards for digital transmission systems, including the encoding and decoding methods used, the data rates and transmission speeds , and the types of communication channels used. These standards ensure that digital transmission systems are compatible and interoperable with each other, regardless of the type of data being transmitted or the geographical location of the sender and receiver.
These techniques are used to improve signal-to-noise ratio , which helps to maintain the integrity of the signal during transmission. | https://en.wikipedia.org/wiki/Transmission_system |
A transmissometer or transmissiometer is an instrument for measuring the extinction coefficient of the atmosphere and sea water , and for the determination of visual range . It operates by sending a narrow, collimated beam of energy (usually a laser ) through the propagation medium. A narrow field of view receiver at the designated measurement distance determines how much energy is arriving at the detector , and determines the path transmission and/or extinction coefficient. [ 1 ] In a transmissometer the extinction coefficient is determined by measuring direct light transmissivity, and the extinction coefficient is then used to calculate visibility range. [ 2 ]
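A minimal sketch of the calculation chain, assuming Beer–Lambert attenuation over the instrument baseline and the common Koschmieder relation (2% contrast threshold) for visual range; the 75 m baseline and 85% transmittance below are illustrative values, not from the text:

```python
import math

def extinction_coefficient(transmittance, baseline_m):
    """Invert Beer-Lambert T = exp(-sigma * L) for the extinction
    coefficient sigma (1/m) over a baseline of length L metres."""
    return -math.log(transmittance) / baseline_m

def koschmieder_range(sigma):
    """Meteorological visual range V = -ln(0.02)/sigma ~= 3.912/sigma (m)."""
    return 3.912 / sigma

sigma = extinction_coefficient(0.85, 75.0)  # ~0.00217 per metre
visibility = koschmieder_range(sigma)       # ~1.8 km
```

Note that the choice of contrast threshold is a convention: aviation practice sometimes uses a 5% threshold (giving V ≈ 3.0/σ) instead of the 2% Koschmieder value assumed here.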
Atmospheric extinction is a wavelength dependent phenomenon, but the most common wavelength in use for transmissometers is 550 nm , which is in the middle of the visible waveband, and allows a good approximation of visual range. [ citation needed ]
Transmissometers are also referred to as telephotometers, transmittance meters, or hazemeters.
Transmissometers are also used by oceanographers and limnologists to measure the optical properties of natural water. [ 2 ] In this context, a transmissometer measures the transmittance or attenuation of incident radiation from a light source with a wavelength of around 660 nm, generally through a shorter distance than in air, as water has a smaller maximum visibility distance. [ citation needed ]
The latest generation of transmissometer technology makes use of a co-located forward-scatter visibility sensor on the transmitter unit to allow for higher accuracy over an Extended Meteorological Optical Range (EMOR). Beyond 10,000 metres the accuracy of transmissometer technology diminishes, and at higher visibilities forward-scatter visibility sensor technology is more accurate. The co-location of the two sensors allows the more accurate technology to be used when reporting current visibility. The forward-scatter sensor also enables auto-alignment and auto-calibration of the transmissometer device. Hence it is very useful for oceanography and water-optics studies. [ citation needed ] [ clarification needed ]
In telecommunications , transmit-after-receive time delay is the time interval from removal of RF energy at the local receiver input until the local transmitter is automatically keyed on and the transmitted RF signal amplitude has increased to 90% of its steady-state value. An exception: high-frequency (HF) transceiver equipment is normally not designed with an interlock between the receiver squelch and the transmitter on-off key. The transmitter can be keyed on at any time, independent of whether or not a signal is being received at the receiver input.
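The definition above uses the 90% point of the transmitted amplitude's rise. Purely as an illustration (the definition itself assumes no particular envelope shape), if the RF envelope rose as a first-order exponential with time constant τ, the 90% point would occur at t = τ·ln 10:

```python
import math

def t90_first_order(tau):
    # Solve 1 - exp(-t/tau) = 0.9  =>  t = tau * ln(10) ~= 2.303 * tau
    return tau * math.log(10.0)

# e.g. a hypothetical 2 ms envelope time constant gives ~4.6 ms to the 90% point
t90 = t90_first_order(2e-3)
```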
In quantum computing , and more specifically in superconducting quantum computing , a transmon is a type of superconducting charge qubit designed to have reduced sensitivity to charge noise. The transmon was developed by Jens Koch, Terri M. Yu, Jay Gambetta , Andrew Houck , David Schuster, Johannes Majer, Alexandre Blais, Michel Devoret , Steven M. Girvin , and Robert J. Schoelkopf at Yale University and Université de Sherbrooke in 2007. [ 1 ] [ 2 ] Its name is an abbreviation of the term transmission line shunted plasma oscillation qubit ; one which consists of a Cooper-pair box "where the two superconductors are also [capacitively] shunted in order to decrease the sensitivity to charge noise, while maintaining a sufficient anharmonicity for selective qubit control". [ 3 ]
The transmon achieves its reduced sensitivity to charge noise by significantly increasing the ratio of the Josephson energy to the charging energy. This is accomplished through the use of a large shunting capacitor. The result is energy level spacings that are approximately independent of offset charge. Planar on-chip transmon qubits have T 1 coherence times of approximately 30 μs to 40 μs. [ 5 ] Recent work has shown significantly improved T 1 times, as long as 95 μs, by replacing the superconducting transmission line cavity with a three-dimensional superconducting cavity, [ 6 ] [ 7 ] and by replacing niobium with tantalum in the transmon device T 1 was further improved, up to 0.3 ms. [ 8 ] These results demonstrate that previous T 1 times were not limited by Josephson junction losses. Understanding the fundamental limits on the coherence time in superconducting qubits such as the transmon is an active area of research.
The transmon design is similar to the first design of the charge qubit [ 9 ] known as a "Cooper-pair box"; both are described by the same Hamiltonian, with the only difference being the E J / E C {\displaystyle E_{\rm {J}}/E_{\rm {C}}} ratio. Here E J {\displaystyle E_{\rm {J}}} is the Josephson energy of the junction, and E C {\displaystyle E_{\rm {C}}} is the charging energy inversely proportional to the total capacitance of the qubit circuit. Transmons typically have E J / E C ≫ 1 {\displaystyle E_{\mathrm {J} }/E_{\mathrm {C} }\gg 1} (while E J / E C ≲ 1 {\displaystyle E_{\mathrm {J} }/E_{\mathrm {C} }\lesssim 1} for typical Cooper-pair-box qubits), which is achieved by shunting the Josephson junction with an additional large capacitor .
The benefit of increasing the E J / E C {\displaystyle E_{\rm {J}}/E_{\rm {C}}} ratio is the insensitivity to charge noise—the energy levels become independent of the offset charge n g {\displaystyle n_{g}} across the junction; thus the dephasing time of the qubit is prolonged. The disadvantage is the reduced anharmonicity α = ( E 21 − E 10 ) / E 10 {\displaystyle \alpha =(E_{21}-E_{10})/E_{10}} , where E i j {\displaystyle E_{ij}} is the energy difference between eigenstates | i ⟩ {\displaystyle |i\rangle } and | j ⟩ {\displaystyle |j\rangle } . Reduced anharmonicity complicates the device's operation as a two-level system; e.g., exciting the device from the ground state to the first excited state with a resonant pulse also populates the higher excited states. This complication is overcome by complex microwave pulse design that takes the higher energy levels into account and prohibits their excitation by destructive interference. Also, while the variation of E 10 {\displaystyle E_{10}} with respect to n g {\displaystyle n_{g}} tends to decrease exponentially with E J / E C {\displaystyle E_{\mathrm {J} }/E_{\mathrm {C} }} , the anharmonicity only has a weaker, algebraic dependence on E J / E C {\displaystyle E_{\mathrm {J} }/E_{\mathrm {C} }} , as ∼ ( E J / E C ) − 1 / 2 {\displaystyle \sim (E_{\mathrm {J} }/E_{\mathrm {C} })^{-1/2}} . The significant gain in coherence time outweighs the decrease in anharmonicity, allowing the states to be controlled with high fidelity.
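The trade-off described above can be reproduced numerically by diagonalizing the Cooper-pair-box Hamiltonian H = 4·E_C·(n − n_g)² − (E_J/2)·Σ(|n⟩⟨n+1| + h.c.) in the charge basis; the E_J and E_C values below are illustrative of a typical transmon (E_J/E_C = 50), not taken from any particular device:

```python
import numpy as np

def cpb_levels(EJ, EC, ng, ncut=30):
    """Eigenenergies of the Cooper-pair box / transmon Hamiltonian,
    truncated to charge states n = -ncut ... +ncut."""
    n = np.arange(-ncut, ncut + 1)
    H = np.diag(4.0 * EC * (n - ng) ** 2)   # charging term
    off = -EJ / 2.0 * np.ones(2 * ncut)     # Josephson tunnelling term
    H += np.diag(off, 1) + np.diag(off, -1)
    return np.linalg.eigvalsh(H)

EC, EJ = 0.3, 15.0  # GHz; EJ/EC = 50, deep transmon regime (illustrative)
E0 = cpb_levels(EJ, EC, ng=0.0)
E_half = cpb_levels(EJ, EC, ng=0.5)

f01 = E0[1] - E0[0]                    # qubit transition, ~ sqrt(8*EJ*EC) - EC
anharmonicity = (E0[2] - E0[1]) - f01  # ~ -EC: small but usable
charge_dispersion = abs((E_half[1] - E_half[0]) - f01)  # exponentially small
```

Sweeping E_J/E_C in this sketch reproduces the scaling stated above: the charge dispersion of f01 collapses exponentially, while the anharmonicity shrinks only algebraically and stays of order −E_C.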
Measurement, control and coupling of transmons is performed by means of microwave resonators with techniques from circuit quantum electrodynamics also applicable to other superconducting qubits . Coupling to the resonators is done by placing a capacitor between the qubit and the resonator, at a point where the resonator electromagnetic field is greatest. For example, in IBM Quantum Experience devices, the resonators are implemented with " quarter wave " coplanar waveguides with maximal field at the signal-ground short at the waveguide end; thus every IBM transmon qubit has a long resonator "tail". The initial proposal included similar transmission line resonators coupled to every transmon, becoming a part of the name. However, charge qubits operated at a similar E J / E C {\displaystyle E_{\rm {J}}/E_{\rm {C}}} regime, coupled to different kinds of microwave cavities are referred to as transmons as well.
Transmons have been explored for use as d -dimensional qudits via the additional energy levels that naturally occur above the qubit subspace (the lowest two states). For example, the lowest three levels can be used to make a transmon qutrit ; in the early 2020s, researchers have reported realizations of single-qutrit quantum gates on transmons [ 10 ] [ 11 ] as well as two-qutrit entangling gates. [ 12 ] Entangling gates on transmons have also been explored theoretically and in simulations for the general case of qudits of arbitrary d . [ 13 ] | https://en.wikipedia.org/wiki/Transmon |
In telecommunications , transparency can refer to:
Some communication systems are not transparent.
Non-transparent communication systems have one or both of the following problems: | https://en.wikipedia.org/wiki/Transparency_(telecommunication) |
In the field of optics , transparency (also called pellucidity or diaphaneity ) is the physical property of allowing light to pass through the material without appreciable scattering of light . On a macroscopic scale (one in which the dimensions are much larger than the wavelengths of the photons in question), the photons can be said to follow Snell's law . Translucency (also called translucence or translucidity ) is the physical property of allowing light to pass through the material (with or without scattering of light). It allows light to pass through but the light does not necessarily follow Snell's law on the macroscopic scale; the photons may be scattered at either of the two interfaces, or internally, where there is a change in the index of refraction . In other words, a translucent material is made up of components with different indices of refraction. A transparent material is made up of components with a uniform index of refraction. [ 1 ] Transparent materials appear clear, with the overall appearance of one color, or any combination leading up to a brilliant spectrum of every color. The opposite property of translucency is opacity . Other categories of visual appearance, related to the perception of regular or diffuse reflection and transmission of light, have been organized under the concept of cesia in an order system with three variables, including transparency, translucency and opacity among the involved aspects.
When light encounters a material, it can interact with it in several different ways. These interactions depend on the wavelength of the light and the nature of the material. Photons interact with an object by some combination of reflection, absorption and transmission.
Some materials, such as plate glass and clean water , transmit much of the light that falls on them and reflect little of it; such materials are called optically transparent. Many liquids and aqueous solutions are highly transparent. Absence of structural defects (voids, cracks, etc.) and molecular structure of most liquids are mostly responsible for excellent optical transmission.
Materials that do not transmit light are called opaque . Many such substances have a chemical composition which includes what are referred to as absorption centers. Many substances are selective in their absorption of white light frequencies . They absorb certain portions of the visible spectrum while reflecting others. The frequencies of the spectrum which are not absorbed are either reflected or transmitted for our physical observation. This is what gives rise to color . The attenuation of light of all frequencies and wavelengths is due to the combined mechanisms of absorption and scattering . [ 2 ]
Transparency can provide almost perfect camouflage for animals able to achieve it. This is easier in dimly-lit or turbid seawater than in good illumination. Many marine animals such as jellyfish are highly transparent.
With regard to the absorption of light, primary material considerations include:
With regard to the scattering of light , the most critical factor is the length scale of any or all of these structural features relative to the wavelength of the light being scattered. Primary material considerations include:
Diffuse reflection - Generally, when light strikes the surface of a (non-metallic and non-glassy) solid material, it bounces off in all directions due to multiple reflections by the microscopic irregularities inside the material (e.g., the grain boundaries of a polycrystalline material or the cell or fiber boundaries of an organic material), and by its surface, if it is rough. Diffuse reflection is typically characterized by omni-directional reflection angles. Most of the objects visible to the naked eye are identified via diffuse reflection. Another term commonly used for this type of reflection is "light scattering". Light scattering from the surfaces of objects is our primary mechanism of physical observation. [ 3 ] [ 4 ]
Light scattering in liquids and solids depends on the wavelength of the light being scattered. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension (or spatial scale) of the scattering center. Visible light has a wavelength scale on the order of 0.5 μm . Scattering centers (or particles) as small as 1 μm have been observed directly in the light microscope (e.g., Brownian motion ). [ 5 ] [ 6 ]
Optical transparency in polycrystalline materials is limited by the amount of light scattered by their microstructural features. Light scattering depends on the wavelength of the light. Limits to spatial scales of visibility (using white light) therefore arise, depending on the frequency of the light wave and the physical dimension of the scattering center. For example, since visible light has a wavelength scale on the order of a micrometre, scattering centers will have dimensions on a similar spatial scale. Primary scattering centers in polycrystalline materials include microstructural defects such as pores and grain boundaries. In addition to pores, most of the interfaces in a typical metal or ceramic object are in the form of grain boundaries , which separate tiny regions of crystalline order. When the size of the scattering center (or grain boundary) is reduced below the size of the wavelength of the light being scattered, the scattering no longer occurs to any significant extent.
In the formation of polycrystalline materials (metals and ceramics) the size of the crystalline grains is determined largely by the size of the crystalline particles present in the raw material during formation (or pressing) of the object. Moreover, the size of the grain boundaries scales directly with particle size. Thus, a reduction of the original particle size well below the wavelength of visible light (about 1/15 of the light wavelength, or roughly 600 nm / 15 = 40 nm ) eliminates much of the light scattering, resulting in a translucent or even transparent material.
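The 1/15-of-wavelength rule of thumb quoted above gives a target particle size across the visible band; the specific wavelengths sampled here are illustrative:

```python
def scattering_free_grain_size(wavelength_nm, ratio=15):
    """Approximate upper bound on grain/particle size (nm) below which
    scattering at the given wavelength becomes negligible, using the
    ~1/15-of-wavelength rule of thumb from the text."""
    return wavelength_nm / ratio

# 600 nm light: grains below ~40 nm, as in the text; the blue end of the
# visible band sets the tightest requirement.
target_600 = scattering_free_grain_size(600)
target_450 = scattering_free_grain_size(450)
```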
Computer modeling of light transmission through translucent ceramic alumina has shown that microscopic pores trapped near grain boundaries act as primary scattering centers. The volume fraction of porosity had to be reduced below 1% for high-quality optical transmission (99.99 percent of theoretical density). This goal has been readily accomplished and amply demonstrated in laboratories and research facilities worldwide using the emerging chemical processing methods encompassed by the methods of sol-gel chemistry and nanotechnology . [ 7 ]
Transparent ceramics have created interest in their applications for high energy lasers, transparent armor windows, nose cones for heat seeking missiles, radiation detectors for non-destructive testing, high energy physics, space exploration, security and medical imaging applications. Large laser elements made from transparent ceramics can be produced at a relatively low cost. These components are free of internal stress or intrinsic birefringence , and allow relatively large doping levels or optimized custom-designed doping profiles. This makes ceramic laser elements particularly important for high-energy lasers.
The development of transparent panel products will open further advanced applications, including high-strength, impact-resistant materials for domestic windows and skylights. Perhaps more important is that walls and other applications will have improved overall strength, especially for the high-shear conditions found in high seismic and wind exposures. If the expected improvements in mechanical properties bear out, the traditional limits on glazing areas in today's building codes could quickly become outdated, since the window area would actually contribute to the shear resistance of the wall.
Currently available infrared transparent materials typically exhibit a trade-off between optical performance, mechanical strength and price. For example, sapphire (crystalline alumina ) is very strong, but it is expensive and lacks full transparency throughout the 3–5 μm mid-infrared range. Yttria is fully transparent from 3–5 μm, but lacks sufficient strength, hardness, and thermal shock resistance for high-performance aerospace applications. A combination of these two materials in the form of the yttrium aluminium garnet (YAG) is one of the top performers in the field. [ citation needed ]
When light strikes an object, it usually has not just a single frequency (or wavelength) but many. Objects have a tendency to selectively absorb, reflect, or transmit light of certain frequencies. That is, one object might reflect green light while absorbing all other frequencies of visible light. Another object might selectively transmit blue light while absorbing all other frequencies of visible light. The manner in which visible light interacts with an object is dependent upon the frequency of the light, the nature of the atoms in the object, and often, the nature of the electrons in the atoms of the object.
Some materials allow much of the light that falls on them to be transmitted through the material without being reflected. Materials that allow the transmission of light waves through them are called optically transparent. Chemically pure (undoped) window glass and clean river or spring water are prime examples of this.
Materials that do not allow the transmission of any light wave frequencies are called opaque . Such substances may have a chemical composition which includes what are referred to as absorption centers. Most materials are composed of substances that are selective in their absorption of light frequencies; they absorb only certain portions of the visible spectrum. The frequencies of the spectrum which are not absorbed are either reflected back or transmitted for our physical observation. In the visible portion of the spectrum, this is what gives rise to color. [ 8 ] [ 9 ]
Absorption centers are largely responsible for the appearance of specific wavelengths of visible light all around us. Moving from longer (0.7 μm) to shorter (0.4 μm) wavelengths: Red, orange, yellow, green, and blue (ROYGB) can all be identified by our senses in the appearance of color by the selective absorption of specific light wave frequencies (or wavelengths). Mechanisms of selective light wave absorption include:
In electronic absorption, the frequency of the incoming light wave is at or near the energy levels of the electrons within the atoms that compose the substance. In this case, the electrons will absorb the energy of the light wave and increase their energy state, often moving outward from the nucleus of the atom into an outer shell or orbital .
The atoms that bind together to make the molecules of any particular substance contain a number of electrons (given by the atomic number Z in the periodic table ). Recall that all light waves are electromagnetic in origin. Thus they are affected strongly when coming into contact with negatively charged electrons in matter. When photons (individual packets of light energy) come in contact with the valence electrons of an atom, one of several things can occur: the photon may be absorbed and re-emitted (scattering or reflection), absorbed and dissipated as heat, or transmitted when its energy does not match an available electronic transition.
Most of the time, it is a combination of the above that happens to the light that hits an object. The states in different materials vary in the range of energy that they can absorb. Most glasses, for example, block ultraviolet (UV) light. What happens is the electrons in the glass absorb the energy of the photons in the UV range while ignoring the weaker energy of photons in the visible light spectrum. But there are also existing special glass types, like special types of borosilicate glass or quartz that are UV-permeable and thus allow a high transmission of ultraviolet light.
Thus, when a material is illuminated, individual photons of light can make the valence electrons of an atom transition to a higher electronic energy level . The photon is destroyed in the process and the absorbed radiant energy is transformed to electric potential energy. Several things can happen, then, to the absorbed energy: It may be re-emitted by the electron as radiant energy (in this case, the overall effect is in fact a scattering of light), dissipated to the rest of the material (i.e., transformed into heat ), or the electron can be freed from the atom (as in the photoelectric effects and Compton effects ).
The primary physical mechanism for storing mechanical energy of motion in condensed matter is through heat , or thermal energy . Thermal energy manifests itself as energy of motion. Thus, heat is motion at the atomic and molecular levels. The primary mode of motion in crystalline substances is vibration . Any given atom will vibrate around some mean or average position within a crystalline structure, surrounded by its nearest neighbors. This vibration in two dimensions is equivalent to the oscillation of a clock's pendulum. It swings back and forth symmetrically about some mean or average (vertical) position. Atomic and molecular vibrational frequencies may average on the order of 10¹² cycles per second ( Terahertz radiation ).
When a light wave of a given frequency strikes a material with particles having the same or (resonant) vibrational frequencies, those particles will absorb the energy of the light wave and transform it into thermal energy of vibrational motion. Since different atoms and molecules have different natural frequencies of vibration, they will selectively absorb different frequencies (or portions of the spectrum) of infrared light. Reflection and transmission of light waves occur because the frequencies of the light waves do not match the natural resonant frequencies of vibration of the objects. When infrared light of these frequencies strikes an object, the energy is reflected or transmitted.
If the object is transparent, then the light waves are passed on to neighboring atoms through the bulk of the material and re-emitted on the opposite side of the object. Such frequencies of light waves are said to be transmitted. [ 10 ] [ 11 ]
An object may be not transparent either because it reflects the incoming light or because it absorbs the incoming light. Almost all solids reflect a part and absorb a part of the incoming light.
When light falls onto a block of metal , it encounters atoms that are tightly packed in a regular lattice and a " sea of electrons " moving randomly between the atoms. [ 12 ] In metals, most of these are non-bonding electrons (or free electrons) as opposed to the bonding electrons typically found in covalently bonded or ionically bonded non-metallic (insulating) solids. In a metallic bond, any potential bonding electrons can easily be lost by the atoms in a crystalline structure. The effect of this delocalization is simply to exaggerate the effect of the "sea of electrons". As a result of these electrons, most of the incoming light in metals is reflected back, which is why we see a shiny metal surface.
Most insulators (or dielectric materials) are held together by ionic bonds . Thus, these materials do not have free conduction electrons , and the bonding electrons reflect only a small fraction of the incident wave. The remaining frequencies (or wavelengths) are free to propagate (or be transmitted). This class of materials includes all ceramics and glasses .
If a dielectric material does not include light-absorbent additive molecules (pigments, dyes, colorants), it is usually transparent to the spectrum of visible light. Color centers (or dye molecules, or " dopants ") in a dielectric absorb a portion of the incoming light. The remaining frequencies (or wavelengths) are free to be reflected or transmitted. This is how colored glass is produced.
Most liquids and aqueous solutions are highly transparent: water, cooking oil, and rubbing alcohol, for example, are all clear, as are gases such as air and natural gas. The absence of structural defects (voids, cracks, etc.) and the molecular structure of most liquids are chiefly responsible for their excellent optical transmission. The ability of liquids to "heal" internal defects via viscous flow is one of the reasons why some fibrous materials (e.g., paper or fabric) increase their apparent transparency when wetted. The liquid fills up numerous voids making the material more structurally homogeneous. [ citation needed ]
Light scattering in an ideal defect-free crystalline (non-metallic) solid that provides no scattering centers for incoming light will be due primarily to any effects of anharmonicity within the ordered lattice. Light transmission will be highly directional due to the typical anisotropy of crystalline substances, which includes their symmetry group and Bravais lattice . For example, the seven different crystalline forms of quartz silica ( silicon dioxide , SiO₂) are all clear, transparent materials . [ 13 ]
Optically transparent materials focus on the response of a material to incoming light waves of a range of wavelengths. Guided light wave transmission via frequency selective waveguides involves the emerging field of fiber optics and the ability of certain glassy compositions to act as a transmission medium for a range of frequencies simultaneously ( multi-mode optical fiber ) with little or no interference between competing wavelengths or frequencies. This resonant mode of energy and data transmission via electromagnetic (light) wave propagation is relatively lossless. [ citation needed ]
An optical fiber is a cylindrical dielectric waveguide that transmits light along its axis by the process of total internal reflection . The fiber consists of a core surrounded by a cladding layer. To confine the optical signal in the core, the refractive index of the core must be greater than that of the cladding. The refractive index is the parameter reflecting the speed of light in a material. (Refractive index is the ratio of the speed of light in vacuum to the speed of light in a given medium. The refractive index of vacuum is therefore 1.) The larger the refractive index, the more slowly light travels in that medium. Typical values for core and cladding of an optical fiber are 1.48 and 1.46, respectively. [ citation needed ]
When light traveling in a dense medium hits a boundary at a steep angle, the light will be completely reflected. This effect, called total internal reflection , is used in optical fibers to confine light in the core. Light travels along the fiber bouncing back and forth off of the boundary. Because the light must strike the boundary with an angle greater than the critical angle , only light that enters the fiber within a certain range of angles will be propagated. This range of angles is called the acceptance cone of the fiber. The size of this acceptance cone is a function of the refractive index difference between the fiber's core and cladding. Optical waveguides are used as components in integrated optical circuits (e.g., combined with lasers or light-emitting diodes , LEDs) or as the transmission medium in local and long-haul optical communication systems. [ citation needed ]
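The geometry described above can be made concrete with the example indices given in the text (core 1.48, cladding 1.46). The following sketch computes the critical angle for total internal reflection and the acceptance cone half-angle; the standard numerical-aperture formula is used here as an illustration, not taken from the article:

```python
import math

# Example refractive indices from the text: core 1.48, cladding 1.46.
n_core, n_clad = 1.48, 1.46

# Total internal reflection occurs for incidence angles (measured from the
# boundary normal) beyond the critical angle theta_c = arcsin(n_clad / n_core).
theta_c = math.degrees(math.asin(n_clad / n_core))

# The acceptance cone half-angle in air (n = 1) follows from the
# numerical aperture NA = sqrt(n_core^2 - n_clad^2).
na = math.sqrt(n_core**2 - n_clad**2)
acceptance_half_angle = math.degrees(math.asin(na))

print(f"critical angle ≈ {theta_c:.1f}°")                       # ≈ 80.6°
print(f"numerical aperture ≈ {na:.3f}")                          # ≈ 0.242
print(f"acceptance half-angle ≈ {acceptance_half_angle:.1f}°")   # ≈ 14.0°
```

The small index difference between core and cladding thus yields an acceptance cone of only about 14° from the fiber axis, which is why launch alignment matters in fiber systems.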
Attenuation in fiber optics , also known as transmission loss , is the reduction in intensity of the light beam (or signal) with respect to distance traveled through a transmission medium. It is an important factor limiting the transmission of a signal across large distances. Attenuation coefficients in fiber optics usually use units of dB/km through the medium due to the very high quality of transparency of modern optical transmission media. The medium is usually a fiber of silica glass that confines the incident light beam to the inside.
In optical fibers, the main source of attenuation is scattering from molecular level irregularities, called Rayleigh scattering , [ 15 ] due to structural disorder and compositional fluctuations of the glass structure . This same phenomenon is seen as one of the limiting factors in the transparency of infrared missile domes. [ 16 ] Further attenuation is caused by light absorbed by residual materials, such as metals or water ions, within the fiber core and inner cladding. Light leakage due to bending, splices, connectors, or other outside forces is another factor resulting in attenuation. At high optical powers, scattering can also be caused by nonlinear optical processes in the fiber. [ 17 ]
Many marine animals that float near the surface are highly transparent, giving them almost perfect camouflage . [ 18 ] However, transparency is difficult for bodies made of materials that have different refractive indices from seawater. Some marine animals such as jellyfish have gelatinous bodies, composed mainly of water; their thick mesogloea is acellular and highly transparent. This conveniently makes them buoyant , but it also makes them large for their muscle mass, so they cannot swim fast, making this form of camouflage a costly trade-off with mobility. [ 18 ] Gelatinous planktonic animals are between 50 and 90 percent transparent. A transparency of 50 percent is enough to make an animal invisible to a predator such as cod at a depth of 650 metres (2,130 ft); better transparency is required for invisibility in shallower water, where the light is brighter and predators can see better. For example, a cod can see prey that are 98 percent transparent in optimal lighting in shallow water. Therefore, sufficient transparency for camouflage is more easily achieved in deeper waters. [ 18 ] For the same reason, transparency in air is even harder to achieve, but a partial example is found in the glass frogs of the South American rain forest, which have translucent skin and pale greenish limbs. [ 19 ] Several Central American species of clearwing ( ithomiine ) butterflies and many dragonflies and allied insects also have wings which are mostly transparent, a form of crypsis that provides some protection from predators. [ citation needed ]
A transparency meter , also called a clarity meter , is an instrument used to measure the transparency of an object. Transparency refers to the optical distinctness with which an object can be seen when viewed through plastic film/sheet, glass , etc. In the manufacture of sheeting/film, or glass the quantitative assessment of transparency is just as important as that of haze. [ 1 ]
Transparency depends on the linearity of the passage of light rays through the material. Small deflections of the light, caused by scattering centers of the material, bring about a deterioration of the image. These deflections are much smaller than those registered in haze measurements. While haze measurements depend upon wide-angle scattering, clarity is determined by small-angle scattering. Wide and small angle scattering are not directly related to each other. By this, we mean that haze measurements cannot provide information about the clarity of the specimen and vice versa.
An object's transparency is measured by its total transmittance . [ 2 ] Total transmittance is the ratio of transmitted light to the incident light. There are two influencing factors; reflection and absorption. For example:
Total transmittance = Incident light − (Absorption + Reflection) = 100% − (1% + 5%) = 94%
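The worked example can be expressed as a trivial calculation. The function below simply encodes the relation stated above, with the article's example values as defaults:

```python
# Sketch of the worked example in the text: total transmittance is the
# incident light minus the absorbed and reflected fractions (all in percent).
def total_transmittance(incident=100.0, absorption=1.0, reflection=5.0):
    """Return the transmitted percentage of incident light."""
    return incident - (absorption + reflection)

print(total_transmittance())  # -> 94.0
```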
ASTM International (formerly known as the American Society for Testing and Materials) is the main body that develops standards for tests and instruments used in the industry. It specifies that an industry-standard clarity meter includes the following:
1. Reference beam, self-diagnosis, and enclosed optics
2. Built-in statistics with average, standard deviation, coefficient of variance, and min/max
3. Large storage capacity and data transfer to a PC
These standards are described in ASTM standard D 1746. [ 3 ]